
Plugin Information

View S3 publisher on the plugin site for more information.

Older versions of this plugin may not be safe to use.

Upload build artifacts to Amazon S3

Making artifacts public

If you'd like to have all of your artifacts be publicly downloadable, see http://ariejan.net/2010/12/24/public-readable-amazon-s3-bucket-policy/.

Usage with IAM

If you used IAM to create a separate pair of access credentials for this plugin, you can lock down its AWS access to listing all buckets plus full access to a single, specific bucket. Add the following custom policy to the user in the IAM console, replacing occurrences of "my-artifact-bucket" with your bucket name, which you'll have to create first:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::my-artifact-bucket", "arn:aws:s3:::my-artifact-bucket/*"]
    }
  ]
}

Version History

Version 0.10.11 (Dec 31, 2016) - do not update - backward compatibility for pipeline scripts is broken

  • Support storage backends compatible with the Amazon S3 API (OpenStack Swift...) (JENKINS-40654, PR-100)
  • Add Standard - Infrequent Access storage class (PR-98)
  • Constrain build result severity (JENKINS-27284, PR-95)
  • Add job setting to suppress console logging (PR-94)

Version 0.10.10 (Oct 10, 2016)

  • Add method for changing S3Profile via Groovy

Version 0.10.9 (June 27, 2016)

  • Added option to open content directly in browser (JENKINS-37346)
  • Fixed IE and Chrome download issue when the file path is Windows-style (PR-93: https://github.com/jenkinsci/s3-plugin/pull/93)

Version 0.10.8 (Aug 31, 2016)

  • Doesn't exist (broken release because of changes in Jenkins plugin repository)

Version 0.10.7 (July 21, 2016)

  • Handle InterruptedExceptions raised when no files are found (PR-92)

Version 0.10.6 (July 1, 2016)

  • Don't upload on aborted build (JENKINS-25509, PR-90)

Version 0.10.5.1 (June 27, 2016)

  • Plugin missing transitive dependencies (JENKINS-36096)

Version 0.10.5 (June 17, 2016)

  • Failed to reset the request input stream (JENKINS-34216, PR-90)

Version 0.10.4 (June 10, 2016)

  • Restore support for MatrixPlugin (JENKINS-35123)
  • Add a new profile-level parameter controlling whether to keep the folder structure. By default, the plugin doesn't keep the folder structure, and the option to keep it will be removed in a future release (JENKINS-34780)

Version 0.10.3 (May 25, 2016)

  • Add option to keep artifacts forever
  • S3 Plugin switches credential profiles on-the-fly (JENKINS-14470)

Version 0.10.2 (May 11, 2016)

  • Add usages to README file (PR-87)
  • Add option to set content-type on files (PR-86)
  • S3 artifacts are visible from API

Version 0.10.1 (Apr 25, 2016)

  • Parallel uploading
  • Support uploading for unfinished builds

Version 0.9.4 (Apr 23, 2016)

  • Update AWS SDK to latest version
  • Fix credential issue

Version 0.9.2 (Apr 06, 2016)

  • Update AWS SDK to latest version
  • Fix credential issue

Version 0.9.1 (Apr 05, 2016)

  • Updated the aws-java-sdk dependency to support java region uploads
  • Uploading and downloading of files larger than 5 GB using TransferManager
  • Flatten directories
  • Excludes for downloading and uploading
  • Several profiles
  • Retries for downloading
  • Workflow plugin support
  • Using default Jenkins proxy
  • Artifacts now use the full job name instead of only the project name

Version 0.5 (Aug 09, 2013)

  • Added Regions Support (JENKINS-18839)
  • Update AWS SDK to latest version

Version 0.4 (Jul 12, 2013)

  • Added storage class support
  • Added arbitrary metadata support
  • Fixed the problem where the plugin messes up credential profiles upon concurrent use (JENKINS-14470)
  • Plugin no longer stores the S3 password in cleartext (JENKINS-14395)

Version 0.3.1 (Sept. 20th, 2012)

  • Prevent OOME when uploading large files.
  • Update Amazon SDK

Version 0.3.0 (May 29th, 2012)

  • Use AWS MimeType library to determine the Content-Type of the uploaded file.

36 Comments

  1. FYI, it says the required Jenkins core version is 1.434, but I've built it fine with 1.424 and have it running successfully on Jenkins 1.424.1 LTS.

  2. Is it possible to publish an entire folder recursively?

    1. You can create an archive with a command-line tool like `zip -r` or `tar cvzf`, then publish that. Alternatively, use the Ant wildcard **.
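
      For example, a minimal shell sketch of the archive approach (assuming the build output lives in a hypothetical dist/ directory in the workspace):

      # Bundle the whole directory tree into one artifact before publishing.
      tar czf dist.tar.gz dist/
      # Or the zip equivalent:
      zip -r dist.zip dist/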

  3. M S

    How can I get the build artifacts to go inside the S3 bucket within a subfolder named with the build number or a date-time stamp? Currently the artifacts just go into the root of the S3 bucket and overwrite the previous builds.

    Also, I noticed that when you enter the bucket name on the Job Configuration page, the help text says that the bucket will be created if it does not exist. However, this caused a build failure as an exception was raised due to the bucket not existing. I had to manually create this bucket before the build succeeded.

    1. I'm publishing stamped builds by creating the local file with the desired name before uploading. For example:

      tar cvzf "regress_install-$BUILD_TAG.tar.gz" regress_install

      You can also use the output of the `date` command, other Jenkins-set env vars, etc.
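
      For instance, a sketch combining `date` output with the standard BUILD_NUMBER environment variable:

      # Stamp the archive with today's date and the build number.
      tar czf "regress_install-$(date +%Y%m%d)-$BUILD_NUMBER.tar.gz" regress_install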

      It'd be nice if the S3 plugin allowed you to specify a destination file name or name prefix, but this works well enough.

  4. Hi,

    Lately I've had a problem with this plugin: it happens when the upload task is started by a timer and the artifact is bigger than 124 MB.

    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation.  All rights reserved.
    
    C:\Users\mako>echo %JAVA_OPTS%
    -Djava.awt.headless=true -Xms300m -Xmx600m -XX:PermSize=256m -XX:MaxPermSize=512m -XX:+DisableExplicitGC
    FATAL: Java heap space
    java.lang.OutOfMemoryError: Java heap space
    	at org.apache.http.util.ByteArrayBuffer.expand(ByteArrayBuffer.java:62)
    	at org.apache.http.util.ByteArrayBuffer.append(ByteArrayBuffer.java:92)
    	at org.apache.http.util.EntityUtils.toByteArray(EntityUtils.java:102)
    	at org.apache.http.entity.BufferedHttpEntity.<init>(BufferedHttpEntity.java:62)
    	at com.amazonaws.http.HttpRequestFactory.newBufferedHttpEntity(HttpRequestFactory.java:246)
    	at com.amazonaws.http.HttpRequestFactory.createHttpRequest(HttpRequestFactory.java:122)
    	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:224)
    	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:166)
    	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2198)
    	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:958)
    	at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:843)
    	at hudson.plugins.s3.S3Profile.upload(S3Profile.java:75)
    	at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:119)
    	at hudson.tasks.BuildStepMonitor$2.perform(BuildStepMonitor.java:27)
    	at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:717)
    	at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:692)
    	at hudson.model.Build$BuildExecution.post2(Build.java:183)
    	at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:639)
    	at hudson.model.Run.execute(Run.java:1509)
    	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
    	at hudson.model.ResourceController.execute(ResourceController.java:88)
    	at hudson.model.Executor.run(Executor.java:236)
    1. Looks like the S3 plugin isn't bright enough to stream the artifact progressively; it must be loading the whole lot into RAM at once. Consider patching it to use a read/write loop or stream copying and submit the patch to Jenkins' JIRA.

  5. Unlike most of the other artifact publishers, the S3 publisher doesn't seem to attach a link to the artifact to the build record for easy retrieval.

    I'm considering implementing that, but I'd like to know if anyone else has done it first, or if there's some reason I'm not aware of that'd make it harder than would be expected. I'm quite new to Jenkins and very new to plugin development in Jenkins, so I'd love some pointers.

    What I'm thinking of doing if it's possible is storing the S3 bucket and object ID in the build record, and generating a signed URL to the object on demand whenever the build page is displayed. If I can't generate the URL on the fly when the build page is displayed I'd instead generate and store a long-expiry signed URL when the artifact is uploaded.

  6. I've installed the S3 plugin in Hudson to copy WAR files that will be used for deployments to S3 buckets.

    I've set up 2 different S3 profiles in Hudson, one for production and one for test (2 different AWS accounts).

    My instance of hudson is running on an EC2 instance inside the Test AWS account.

    Inside the build for the project I've indicated to use my production profile.

    The copy from Hudson to S3 fails with "access denied" unless I give bucket permission to the Test AWS account. But then the object in the bucket does not have the correct permissions for the Production account to get the object out of the bucket to use.

    I thought that Hudson would use the keys provided in the S3 profile for authorization for the copy to the bucket but it doesn't appear that way.

    I know I could just use the Test account keys in the Production environment to get the object, but I was hoping to keep the keys contained to just that single environment and not have to do any cross authorization or usage.

    Any thoughts anyone? 

  7. So if I understand this correctly, there is no way that this plugin can upload a folder and all its sub-directories to S3 while preserving the directory structure?

    In other words, if I have this:
    $WORKSPACE/foo/bar/index.html

    and I want to copy "foo" and all its sub-directories to S3, so that it looks exactly like it does in my workspace, this plugin can NOT do this?

    Thanks!

    FYI, it looks like the answer is "No", according to this StackOverflow question: http://stackoverflow.com/questions/5407742/how-can-i-publish-static-web-resources-to-amazon-s3-using-hudson-jenkins-and-mav

  8. Unknown User (dave.johnston@me.com)

    Looks like the latest version of the plugin (0.5) requires Jenkins core version 1.526.

    Has anyone built or tested this with the LTS version of Jenkins? 1.509.3?

    Cheers

  9. There is a problem, I believe, with how this plugin interacts with the Promoted Builds plugin. I'm not sure which plugin is the cause of this issue. I reported this on the Google group as well, but here it is:

    I'm finding that if you add a "publish to s3" step to a build promotion process - and you have multiple build promotion processes defined (say, one for Dev, Test, and Production) - you get a very strange interaction.

    I wanted to publish to a different S3 bucket for each of Dev, Test, and Production - and wanted to wire that into the three different build promotion definitions.

    However, upon Save, the configuration got very strange:  Every build promotion process I had defined now had every S3 step from all promotions.

    In other words I had defined:
    Dev Promo
        Publish to s3 dev bucket
    Test Promo
        Publish to s3 test bucket
    Prod Promo
        Publish to s3 prod bucket

    but upon Save it became:

    Dev Promo
        Publish to s3 dev bucket
        Publish to s3 test bucket
        Publish to s3 prod bucket
    Test Promo
        Publish to s3 dev bucket
        Publish to s3 test bucket
        Publish to s3 prod bucket
    Prod Promo
        Publish to s3 dev bucket
        Publish to s3 test bucket
        Publish to s3 prod bucket

    and every subsequent save actually multiplied the S3 configs.

    Happy to provide more information if I can.
    Scott

  10. I've been using the plugin successfully to upload to an S3 bucket in the US_WEST_1 region. I tried to use the plugin with another project to upload to a bucket in the US_WEST_2 region, and I'm getting the exception copied below. Wonder if it is related to this issue: https://issues.jenkins-ci.org/browse/JENKINS-18839. We're using v0.6 of the plugin and v1.562 of Jenkins. Anyone else have any experience uploading to US_WEST_2?

    Publish artifacts to S3 Bucket Using S3 profile: API
    Publish artifacts to S3 Bucket bucket=deployment-artifacts, file=ROOT.war region=US_WEST_2, upload from slave=false managed=false
    ERROR: Failed to upload files
    java.io.IOException: put Destination bucketName=deployment-artifacts, objectName=ROOT.war: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: 7703E745610E74FA), S3 Extended Request ID: pI1hxDpmS6Hp8H3kErwjiIp6rUEHQS0R01V+URHqXLlkT3k3QmcZJV+I8yYFvzsisDVICVLaA68=
    at hudson.plugins.s3.S3Profile.upload(S3Profile.java:140)
    at hudson.plugins.s3.S3BucketPublisher.perform(S3BucketPublisher.java:174)
    at hudson.tasks.BuildStepMonitor$2.perform(BuildStepMonitor.java:32)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:745)
    at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:709)
    at hudson.model.Build$BuildExecution.post2(Build.java:182)
    at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:658)
    at hudson.model.Run.execute(Run.java:1729)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:88)
    at hudson.model.Executor.run(Executor.java:231)
    Build step 'Publish artifacts to S3 Bucket' changed build result to UNSTABLE

    Thanks,

    Neil

  11. It seems that the plugin isn't saving the Proxy Host and Port configuration in the 'Amazon S3 profile'.

    Anyone else had this problem, or have a workaround in place?

    Thanks!

    1. Any luck on the issue of the proxy host and port configuration being wiped out?

      Thanks!

  12. Hello All

    When I use "S3 Copy Artifact" and give the correct project name, I get the error below:
    Build #459459 doesn't have any S3 artifacts uploaded
    Build step 'S3 Copy Artifact' marked build as failure
    (I have verified the project name to be correct, and it works in another Jenkins job where I use 'Copy artifacts from another project'.)

    Please note that when I use "Publish artifacts to S3 Bucket", the push to S3 works fine, but I do not want to use it since it uses Source and I want to use the project name.

    Any idea?

  13. Hello everybody,

    I'm using this revision https://github.com/jenkinsci/s3-plugin/compare/s3-0.8...master

    Everything works fine, but when I tried to upgrade to a 0.10.x version, the file upload works differently.

    OLD one: when I set the source path to Project\Output**

    it uploads the whole structure of folders, subfolders, and files from the source path directly to the destination path bucket/JOB_NAME/VERSION.

    NEW one: it creates the full path in the bucket: bucket/JOB_NAME/VERSION/Project/Output/...

    How can I skip "Project/Output" in the new version of the plugin? The "Flatten directories" option removes the whole folder and subfolder structure, so it doesn't actually help.

    Please advise.

    Thank you.

    1. Hello,

      I have also just encountered this issue.

      I am having the exact same problem: the plugin is now copying the folder structure before the ** when it should just be copying the content inside dist/**.

      Thanks.

      1. Hi

        I added an option at the global config level to fix this problem in 0.10.4 (June 10, 2016).

  14. Hello.

    2 questions

    1. Is there a way to upload a file to S3 with the header Content-Type: application/json instead of Content-Type: application/octet-stream?

    2. Is there a way to upload folders recursively, so that the folder structure in S3 is the same as what I sent?

    1. Hi

      You need to enable the "option to open content directly in browser", which was added in 0.10.9 (June 27, 2016).

      1. Having this same issue with 0.11.0. I have "Show content directly in browser" enabled, but JSON files are still uploaded with content type "application/octet-stream".

  15. I would like to upload file to the 'root' of my S3 bucket.

    source file : /var/lib/jenkins/workspace/job/build/foo.war

    desired destination : s3bucket/foo.war

    actual destination: s3bucket/job/build/foo.war

    How do I get the desired output?

    1. Figured it out after reviewing the source code. I need to set managedArtifacts to false.

  16. Hi all

    I'm a newbie with Jenkins and I'm trying to upload a lot of image files to S3 with this plugin.

    I've created an .xml file that contains the paths to the images, following the fileset syntax, but the only thing I manage to upload is the .xml file itself.

    I don't want to upload an artifact (which I suspect is some kind of Jenkins job file); I need to upload a lot of images.

    Is this possible?

    Thanks!

  17. Is there any way to get this as a BUILD STEP in addition to the POST BUILD? In my case, we use S3 for hosting some sites. I need to run an INVALIDATE CACHE for CloudFront via the AWS CLI after the push of the files. The AWS CLI is only available in BUILD STEPS as an Execute Shell, so I cannot run it AFTER the S3 upload unless I create a new job. The only S3 BUILD plugin I have found doesn't allow me to specify a bucket name.

  18. I implemented a separate downstream invalidate-cache job that is called if the job that publishes to S3 succeeds. You can alternatively call the AWS CLI through Groovy Postbuild from the same job.
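
    For example, a minimal sketch of that CLI call (the distribution ID below is a placeholder for your own):

    # Invalidate all cached paths on the CloudFront distribution after the S3 push.
    aws cloudfront create-invalidation --distribution-id E1234EXAMPLE --paths "/*"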

  19. When I specify the "Destination bucket" I'm observing that the actual location the source is uploaded to is <whatever I put>/jobs/${JOB_NAME}/${BUILD_NUMBER}; is there a way to not append that? When I look at the console output, there's no indication that this should happen, and if I explicitly append that in my configuration, then the behavior is a double-append.

  20. I am not seeing that append on the latest plugin 0.10.12. Double check your source field, or perhaps there is another plugin you are using that is causing this odd behavior?

    1. I'm using the latest version (0.10.12) and I've verified the configuration and the console results - https://imgur.com/a/69EGh

      I'm by no means a Jenkins expert but I have no idea how another plugin would cause the behavior I'm seeing. Unfortunately I'm working on an existing project with a great deal of plugins. Is there anything you could suggest that would help me sift through which one(s) could be contributing to this behavior?

    2. Solution: there is a "Manage artifacts" checkbox which adds this behavior. Unchecking that box resolved my issue. My apologies for the silly question (and not providing enough information in the previous screenshot; it would have been more obvious to someone).

  21. Could someone give me an example of how to configure a pipeline for the S3 plugin?

    Thanks

  22. Is there any way to configure endpoints.json to point to a custom S3-compatible server? And to use "bucket in path" style access, not "bucket in host", which breaks the HTTPS certificate?

  23. I've been trying to work out a way to configure the S3 plugin and save new bucket profiles globally with a Groovy script. So far I have something like this:

    import hudson.plugins.s3.S3Profile
    import hudson.plugins.s3.S3BucketPublisher

    // 'xx' and the numeric strings are placeholder values for the real profile settings.
    s3new = new S3Profile('xx', '123', '123', false, 1, '1', '2', '3', '4', false)

    This creates the object, but I have no idea how to save it to Jenkins, as there is no save() method. Any help would be amazing!

    1. After WAYYYY too much time, I figured it out myself. If anyone is interested:

      import jenkins.model.Jenkins
      import hudson.plugins.s3.*
      
      instance          = Jenkins.instance
      new_name          = "S5 profile random name"
      S3Profile profile = new S3Profile(new_name, null, null, true, 0, "0", "0", "0", "0", true)
      
      // A throwaway DescriptorImpl exposes the profiles currently saved;
      // the descriptor registered with Jenkins is the one updated below.
      des      = new S3BucketPublisher.DescriptorImpl()
      s3Plugin = instance.getDescriptor(S3BucketPublisher.class)
      
      // Rebuild the profile list without any old profile of the same name,
      // append the new profile, and write the result back to the plugin config.
      new_profiles = []
      des.getProfiles().each { if (it.name != new_name) { new_profiles.push(it) } }
      new_profiles.push(profile)
      
      s3Plugin.replaceProfiles(new_profiles)