
Plugin Information

View Logstash on the plugin site for more information.

This plugin pushes logs and build data to a Logstash indexer such as Redis, RabbitMQ, Elasticsearch, Logstash, or Valo.

Migration from v1.x

With version 2.0 the global configuration has moved from Global Tool Configuration to the regular Jenkins configuration page (Jenkins → Manage Jenkins → Configure System). There was also a major change in how the plugin works: it is no longer a BuildWrapper but a ConsoleLogFilter, and you enable it via a JobProperty. This is necessary to reliably ensure that passwords are masked when the MaskPasswords plugin is installed, and it allows log forwarding to be enabled globally.

An existing global configuration will be migrated, and FreeStyle jobs that use the BuildWrapper will be converted to use the JobProperty after updating the plugin and restarting Jenkins.

The migration from v0.8.0 directly to v2.0 is not tested. You will definitely need to configure the indexer in the global configuration.

Migration from v0.8.0 to v1.x

Beginning with version 1.0.0, connection information for the Logstash indexer is stored in a global config (version 0.8.0 and older stored this information in the project settings). Upon upgrading you will need to go to Jenkins → Manage Jenkins → Global Tool Configuration to re-enter the connection information.

You should also refresh the configuration of every job that uses this plugin to eliminate the obsolete fields and prevent warnings from occurring in the Jenkins server logs. To do this, either edit the jobs individually in the UI and click the "Save" button, or go to Jenkins → Manage Jenkins → Manage Old Data and click "Discard Unreadable Data".

Figure 1: Global configuration settings

Figure 2: Obsolete configuration data found in jobs using v0.8.0 or older.


Indexers Currently Supported

The following data stores are currently supported for pushing logs and build data to:

  • Redis
  • RabbitMQ (vhosts are supported)
  • Syslog
  • Elasticsearch (you have to configure the URL including the index and a type, e.g. http://elasticsearch:9200/logstash/jenkins; specifying just the index is not sufficient)
  • Logstash
    1. When configured with a tcp input, choose logstash mode.
    2. When configured with a http input, choose elasticsearch mode. In that case index and type are not required.
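For reference, a minimal sketch of the two Logstash input styles referenced above, as they might look on the Logstash side (the port numbers are hypothetical; with the http input, codec => json keeps the JSON payload from being treated as plain text, as discussed in the comments below):

```conf
# tcp input -- choose "logstash" mode in the plugin
input {
  tcp {
    port  => 5000
    codec => json
  }
}

# http input -- choose "elasticsearch" mode in the plugin (no index/type needed)
input {
  http {
    port  => 8080
    codec => json
  }
}
```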


Enable Globally

It is now possible to enable the log forwarding globally.

Note: Enabling globally doesn't currently work for pipeline jobs, as the workflow API doesn't support this yet (see JENKINS-45693).


Build Wrapper

This component streams individual log lines to the indexer for post-processing, along with any build data that is available at the start (some information, such as the build status, is unavailable or incomplete at that point).

Post-Build Publisher

This component pushes the tail of the job's log to the indexer for post-processing, along with all build data available at the time the post-build action starts (any post-build actions scheduled after this plugin will not be recorded).



Logstash plugin can be used as a publisher in pipeline jobs to send the tail of the log as a single document.

Example for publisher in pipeline
node('master') {
    echo 'Hello, world!'
    logstashSend failBuild: true, maxLines: 1000
}
Note: Due to the way logging currently works in pipeline, the logstashSend step might not transfer the lines logged directly before the step is called. Adding a sleep of 1 second might help here.
Note: In order to get the result set in pipeline, it must be set before the logstashSend step.
Note: The logstashSend step requires a node to run.
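Putting the three notes above together, a sketch of a publisher pipeline (the sh step is a hypothetical build step that may fail):

```groovy
node('master') {
    try {
        sh 'make test'                  // hypothetical step that may fail
        currentBuild.result = 'SUCCESS' // set the result before logstashSend
    } catch (err) {
        currentBuild.result = 'FAILURE'
    }
    sleep 1                             // helps capture the lines logged just above
    logstashSend failBuild: true, maxLines: 1000
}
```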

Step with Block

It can be used as a wrapper step to send each log line separately.

Once the result is set, it will appear in the data sent to the indexer.

Note: when you combine this with the timestamps step, you should make timestamps the outermost block. Otherwise the timestamps become part of the log lines, basically duplicating the timestamp information.

Example for pipeline step
timestamps {
  logstash {
    node('somelabel') {
      echo 'Hello, World!'
      try {
        // do something that fails
        sh "exit 1"
        currentBuild.result = 'SUCCESS'
      } catch (Exception err) {
        currentBuild.result = 'FAILURE'
      }
    }
  }
}

Note: Information on which agent the steps are executed is not available at the moment.


JSON Payload Format

JSON payload Example
{
  "data": {
    "fullDisplayName": "My Example Job #1",
    ...
  },
  "message": [
    "Started by user anonymous",
    "Building in workspace /var/lib/jenkins/jobs/my_example_job/workspace",
    "Hello, World!"
  ],
  ...
}

Example payload sent to the indexer (e.g. RabbitMQ) using the post-build action component. Note that when the build wrapper is used, some information such as the build result will be missing or incomplete, and the "message" array will contain a single log line.

Note that data.testResults will only be present if a publisher records your test results in the build, for example by using the JUnit Plugin.


See the Changelog on GitHub.


To report a bug or request an enhancement to this plugin please create a ticket in JIRA.


  1. I would like assistance with maintaining this plugin. Read this doc if you are interested:

  2. As of version 1.20 of this plugin, its configuration may be found in Jenkins -> Manage Jenkins -> Global Tool Configuration (not Configure System as shown above).

  3. The plugin sends the details to the Elasticsearch server but the console log isn't indexed.

    How can I add index parts to the console output?

    I would add, apache.maven.shared.jar, apache.maven....


    1. Your query isn't clear. The configuration has to be done on the 'Global Tool Configuration' page, where the user has to define the /index/type values in the key field (for the indexer type Elasticsearch). The console log gets pushed to Elasticsearch under the index defined in the key field.

      1. The Logstash Jenkins plugin cannot index the console output if I set the Indexer Type to ELASTICSEARCH.

        My question is, how can I add more patterns?

  4. I have installed the logstash plugin and, as mentioned in the documentation, I'm trying to push data to RabbitMQ.
    I have created a job for it, and when I check the option "Send console log to Logstash" in the job, it crashes and fails to save the job.


    Also, one of my colleagues is trying this and the Jenkins console output vanishes.

    1. If you need support please use the email group.

      If you think you found a bug please file an issue in JIRA

    2. You can use the 'post build action' in jenkins job configuration page and select "send console log to Logstash" option.

      I think you are getting the error, when selecting the "Send console log to Logstash" under the Build Environment options. 


      JENKINS-47817 has been filed for the exception.

    1. vaibhav gulati The error you encountered while enabling 'send console log to logstash' in the Build Environment options will go away if you install the Mask Passwords plugin.

      Mask Passwords Plugin

      1. Can you raise a ticket for it to support other protocols apart from udp?

  5. Thanks Naga Pavan Kumar T,

    I will try this as well.

    There is another issue I encountered: "Failed to send log data to SYSLOG" (message too long).

    1. Most likely you are hitting the syslog message size limit:

      1. I agree on this.

        There are two Syslog formats available in Jenkins with this plugin, RFC 5424 and RFC 3164, with UDP as the only supported protocol.

        Now how to solve this issue?

        1. I suggest using a different protocol. We use logstash indexer

          1. And how to use that with this plugin?

            I only see udp as an option.

            1. No, the solution is not to use syslog at all.

  6. Hello there,

    Please feel free to redirect me to appropriate discussion channel if this is not the place. 

    This is about a bug in Logstash which prevents me from sending jenkins build logs to Logstash over HTTP (logstash-http-plugin) using ElasticSearch indexer.
    Suggestion to use codec=>json is not an option for me because I want to conditionally apply json codec/parsing. Users with similar requirement would potentially face this issue.

    The plugin uses ContentType.APPLICATION_JSON, which sets the header value to "application/json; charset=UTF-8"; Logstash does not recognize that and treats the JSON string as plain text. Hence, I would like to propose a trivial change to ElasticSearchDao.getHttpPost() to manually add the request header. The proposed change looks something like this:

    HttpPost getHttpPost(String data) {
        HttpPost postRequest = new HttpPost(uri);
        postRequest.setHeader(HttpHeaders.CONTENT_TYPE, "application/json");
        StringEntity input = new StringEntity(data, StandardCharsets.UTF_8);
        postRequest.setEntity(input);
        if (auth != null) {
            postRequest.addHeader("Authorization", "Basic " + auth);
        }
        return postRequest;
    }

    This change sets the request header to application/json instead of application/json; charset=UTF-8, which makes Logstash happy and everything works great. IMHO this is a very trivial change and should not have any impact on existing behavior.

    Please let me know your comments/suggestions or if you have an alternative way to solve this. I can raise a PR if required.

    1. Vaman Kulkarni please file a PR and continue there.

      I'd rather solve this by exposing a new configuration field that would allow overriding the content type with any value.

      1. @Jakub Bochenski Thanks for your suggestion. I shall work on that and raise a PR. Do you foresee any use case where one would want to set a different content type? The reason I am asking is that the logstash-plugin generates a JSON string, and I feel that no one would want the JSON string to be interpreted as any other content type (smile)

        1. Vaman Kulkarni I have those motivations:

          1. We need this change to be opt-in for the sake of existing setups.
          2. This is a workaround for an obvious defect in the logstash plugin. I want to cover other potential issues like this; I can imagine some backend or middleware requiring the legacy text/json MIME type.
          3. One could change to a more specific content type (application/vnd....+json) to do some REST content-negotiation tricks.
          1. @Jakub Bochenski: I raised a PR for this a couple of days ago. I could not find any option to add reviewers, perhaps due to lack of permissions? Could you please help with adding reviewers to the PR? Thanks!

  7. Hi,

    We are having an issue related to the transport message size. This issue was already mentioned in a previous comment, but no conclusion was reached.

    In some jobs we are getting this message in logs:  

    logstash-plugin: Failed to send log data to SYSLOG <Server>
    logstash-plugin: No Further logs will be sent to <Server> The message is larger than the maximum supported by the underlying transport: Datagram send failed

    Does anyone have any suggestion of how we can overcome this issue?

    Thank you in advance.

    Alberto Monteiro


  8. How can I add more key value pairs to json?

  9. Hello, Is there a way to globally enable this setting for our pipeline multi-branch builds or an option to enable it in the jenkinsfile using options vs wrapping the whole pipeline with the build wrapper?

    1. There is no way to enable it globally for pipeline jobs. A change was just merged yesterday in the master branch (not released yet) that will allow enabling it for classic (Freestyle) jobs.

      Due to the nature of the pipeline jobs it is not possible to enable it globally, I think. The best you can currently get is the logstash step (also in the master branch but not released), which allows you to do:


      logstash {
        node('label') {
          // do something
        }
      }

      1. When JENKINS-45693 is fixed, enabling globally will also work for pipeline jobs.

        1. That is great news! I will keep watch on this JIRA. Thanks!

  10. Can you add the Jenkins job status in the output? We want to trend completed, unstable, and failed jobs but there is no status value.

    1. 2.1.0 contains a fix so that when you set the build result explicitly in the pipeline, the result will be included from that point on when using the logstash step.

  11. I have tried this plugin in the past. We use pipelines and this seems to have a limitation: it only outputs the build result when the build has failed, as this result is caught in the pipeline. If the build succeeds, the result is only set at the end of the pipeline, so logstashSend does not get it.
    Because of this I need to use the /var/log/jenkins folder to get build logs, send them to Logstash, and grok out the results, etc.
    This has a limitation too: I do not get the total build times, and therefore cannot map build times over weeks / months.
    Does anyone have a way to fix this?

    1. Look at to see how to set the build result in a pipeline. Of course this only works when you have control over the pipeline and logstashSend requires a node.

  12. How does one send the jenkins build logs to the logstash-plugin  (v 1.3)?

    Does one select 'ELASTICSEARCH' as the 'INDEXER_TYPE', and what value goes in the "KEY"?

    We're seeing this exception, when trying to send logs over

    [logstash-plugin]: Failed to send log data to ELASTICSEARCH:http://HOSTNAME:12100.
    [logstash-plugin]: No Further logs will be sent to http://HOSTNAME:12100.
    org.apache.http.NoHttpResponseException: HOSTNAME:12100 failed to respond


    Logstash has been configured for http input, and can get data when we're sending json data via curl.

    curl -H 'content-type:application/json; charset=UTF-8' -XPUT https://HOSTNAME:12100 -d '{"id":"test","time":"now"}'


    Any pointers?

    1. Key is not required when sending to logstash.

      The plugin is sending POST requests not PUT.

      See the post from Vaman Kulkarni above about the bug in Logstash (not this plugin) with the content type. You must set codec => json in your Logstash input configuration.

      The next release (probably 2.1.0) will contain a fix that allows explicitly setting the content type, and an option to send to Logstash via TCP.


  13. I'm attempting to configure this plugin programmatically as part of Jenkins initialization a la this doc. I almost had it with the following script, but unfortunately the setActiveIndexer(LogstashIndexer<?> activeIndexer) method is not accessible outside the class. The only other way I see to set the indexer would be to call configure(StaplerRequest staplerRequest, JSONObject json), but I'm not sure what I would call it with. 

    Has anyone successfully configured this plugin through automation? If so, how? If not, is there any reason the setActiveIndexer() method couldn't be exposed for this purpose?


    import jenkins.plugins.logstash.LogstashConfiguration
    import jenkins.plugins.logstash.configuration.Redis
    Redis indexer = new Redis()
    LogstashConfiguration config = LogstashConfiguration.getInstance()
    1. You should definitely call config.setLogstashIndexer(indexer) so that what you configured is visible in the UI.

      This will not get it working for your case so a fix is needed here.

      Can you open a JIRA ticket please.

      I don't want to expose setActiveIndexer, this is meant for testing only.

      1. Thanks for the response! I've updated my script to use `setLogstashIndexer` as you suggested. The desired configuration now appears in the UI, though it is nonfunctional (getting error "[logstash-plugin]: Unable to instantiate LogstashIndexerDao with current configuration."). Based off your reply, I take it that this behavior is expected.

        I have created a ticket for this enhancement: JENKINS-52643

  14. I have configured the Jenkins logstash plugin for freestyle jobs and I was able to fetch the logs, but when I created the index in Kibana and chose Discover, it shows the same logs multiple times. I did a Google search but didn't find any solution.


    August 1st 2018, 15:35:07.286 data.projectName: data.fullProjectName: data.displayName:#68 data.fullDisplayName: #68 data.url:job/***/68/ data.buildHost:Jenkins data.buildLabel:master data.buildNum:68 data.buildDuration:4,953 data.rootProjectName: data.rootFullProjectName: data.rootProjectDisplayName:#68 data.rootBuildNum:68 data.buildVariables.BUILD_DISPLAY_NAME:#68 data.buildVariables.BUILD_ID:


    The same logs are displayed around 99 times. Can someone help with this?


    1. Happened for me as well. Not 99 times, but 4-5 times for each job when I enable it globally.

      1. Sorry, no idea what that might be.

        There is one related known problem, but it wouldn't generate 99 duplicates.

        Maybe you can check the http input entering your ES instance to see if the payload is sent multiple times or if the message gets duplicated later

    2. Enabling globally currently means an event is sent for each log line; it is not the behavior of the notifier. What you show here is just the metadata, which is sent with each event. Most likely you have additional data for each event, namely the log content itself, which is different for each event.

      There is room for improvement here: the metadata, which is static, could be sent only once per build instead of with each log line.

  15. I already have a pipeline job. I tried to get the complete log with the command "logstashSend failBuild: true, maxLines: 1000", but found that only the log from before the command was executed was retrieved; whether the job executed successfully was not captured. The documentation suggests adding a "sleep" to solve the problem; how should this be done? Thank you!


    def label = "kube-app-slave"
    podTemplate(label: label, cloud: 'kubernetes') {
        node(label) {
            container('kube-app-slave') {
                echo "hello world!"
                logstashSend failBuild: true, maxLines: 1000
            }
        }
    }

    1. This should get you everything that is in the log up to this point (or the last 1000 lines if it is more than 1000):

      sleep 1
      logstashSend failBuild: true, maxLines: 1000

      Anything that comes afterwards is not included.

      The very last log line in the full log is not possible to get in a pipeline script currently.

      The best you can do is to capture any exception and then explicitly set the result. Then you get the build result in the data section of the event (one of my comments contains a link on how to set the result in a pipeline script).

      1. Thanks for the response! It's working!


        echo "result1 ${currentBuild.currentResult}"

  16. Hello,

    I downloaded the code of the logstash-plugin to my computer. When I tried to build it, I got the following error (in 5 files: ElasticSearch, LogstashIndexer, RabbitMq, Redis and LogstashSendStep):

    cannot find symbol
    symbol: class Messages
    location: package jenkins.plugins.logstash

    This is the line where I got that error:

    import jenkins.plugins.logstash.Messages;

    Waiting for your help (smile)

    1. This import is a generated class. Build your project with Maven and it will be there. In your IDE you will have to add the path where generated sources are located to the classpath (if not done automatically).

  17. I'm trying to build Jenkins with Configuration as Code and set up logstash.
    This is my jenkins.yaml:

        enableGlobally: true
        enabled: true
                mimeType: "application/json"
                scheme: "http"
                host: "localhost"
                port: 9200
                path: "logstash-jenkins/jenkins"
        milliSecondTimestamps: true

    But when I try to build Jenkins, I have received an error.
    Caused by: java.lang.IllegalArgumentException: single entry map expected to configure a jenkins.plugins.logstash.configuration.LogstashIndexer

    PS: If I remove the uri tag and its dependent fields (scheme, host, port and path), it works.

    Can someone help with this?

      1. Yes, I did.

        It requires an object; it does not accept a string.

        1. Using JCasC plugin version 1.5 it tells me

          Configuration-as-Code can't handle type class

          There is an issue open JENKINS-52697 to make logstash plugin compatible to JCasC

          1. OK, thanks for the help.

            I have resolved it with a Groovy file (set-elk.groovy):

            #!/usr/bin/env groovy

            import jenkins.plugins.logstash.*
            import jenkins.plugins.logstash.configuration.*
            import jenkins.model.GlobalConfiguration

            config = GlobalConfiguration.all().get(LogstashConfiguration.class)
            config.enableGlobally = true
            config.enabled = true
            config.milliSecondTimestamps = true

            indexer = new ElasticSearch()
            indexer.uri = new URI("http://elk:9200/logstash-jenkins/jenkins")
            indexer.mimeType = "application/json"
            indexer.username = ""
            indexer.password = ""

            config.logstashIndexer = indexer
            config.save()

            It runs after the Jenkins installation by adding the following line to the Dockerfile:

            COPY set-elk.groovy /usr/share/jenkins/ref/init.groovy.d/set-elk.groovy

            I think it could help others.


  18. Is there any reason to keep maxLines at 1000? I'm trying to find ways to create visualizations from console logs, so is there any reason why I wouldn't want to set this to 10,000 for example? I'm assuming it's just a storage preference, but just making sure. 

    Also - logstashSend failBuild: true, maxLines: 1000

    What does failBuild do here? Does it just send a failure status to Elasticsearch or will it actually fail the build in CJE if it fails to send the logs? If it fails the job in Jenkins, can you provide an example to not fail the build. I don't think I'd want to fail the entire job if it doesn't successfully send the logs.

  19. I get the error at the bottom of my console log: 

    "error":"Incorrect HTTP method for uri [/gpd-jenkins] and method [POST], allowed: [GET, HEAD, DELETE, PUT]"

    There's a comment that said version 2.1.0 would have a fix that would allow us to choose method, and there isn't.  Can you tell me how I can do that with an out-of-the-box installation of the plugin?  I'm using the ElasticSearch option, have the authentication info filled out, the server line is "https://my-es-cluster:9200/gpd-jenkins" . 

    Like I said, everything is "working" fine, except being able to send as a PUT instead of a POST. I can mimic the issue in POSTMAN.  I don't know how to set the request type to PUT inside of JENKINS though.  Thanks in advance.

    1. I think you are referring to Re: Logstash Plugin
      Please re-read that comment, as it didn't promise a PUT method, but I think it described other ways to solve the issue.
      Said changes have since been integrated.

    2. I think your URL is not complete. Elasticsearch requires an index and a type. So the index is gpd-jenkins but you're missing the type.

      Read the Elasticsearch documentation about autogenerating IDs.