
Plugin Information

View Logstash on the plugin site for more information.

This plugin pushes logs and build data to a Logstash indexer such as Redis, RabbitMQ, Elasticsearch, Logstash, or Valo.

Migration from v1.x

With version 2.0 the global configuration has moved from Global Tool Configuration to the regular Jenkins configuration page (Jenkins → Manage Jenkins → Configure System). There was also a major change in the way the plugin works: it is no longer a BuildWrapper but a ConsoleLogFilter, and you enable it via a JobProperty. This is necessary to reliably ensure that passwords are masked when the Mask Passwords plugin is installed, and it allows log forwarding to be enabled globally.

An existing global configuration will be migrated, and freestyle jobs that use the BuildWrapper will be converted to use the JobProperty after updating the plugin and restarting Jenkins.

The migration from v0.8.0 directly to 2.0 has not been tested. You will definitely need to configure the indexer in the global configuration.

Migration from v0.8.0 to v1.x

Beginning with version 1.0.0, connection information for the Logstash indexer is stored in a global config (version 0.8.0 and older stored this information in the project settings). Upon upgrading you will need to go to Jenkins → Manage Jenkins → Global Tool Configuration to re-enter the connection information.

You should also refresh the configuration of every job that uses this plugin to eliminate the obsolete fields and prevent warnings from occurring in the Jenkins server logs. To do this, either edit the jobs individually in the UI and click the "Save" button, or go to Jenkins → Manage Jenkins → Manage Old Data and click "Discard Unreadable Data".

Figure 1: Global configuration settings


Figure 2: Obsolete configuration data found in jobs using v0.8.0 or older.

Features

Indexers Currently Supported

The following data stores are currently supported as destinations for logs and build data:

  • Redis
  • RabbitMQ (vhosts are supported)
  • Syslog
  • Elasticsearch (you have to configure the URL including the index and a type, e.g. http://elasticsearch:9200/logstash/jenkins. Specifying just the index is not sufficient; a configuration sketch follows this list.)
  • Logstash
    1. When Logstash is configured with a TCP input, choose logstash mode.
    2. When Logstash is configured with an HTTP input, choose elasticsearch mode; in that case the index and type are not required.
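
For reference, a minimal Script Console sketch of such an Elasticsearch indexer configuration. This is only an assumption-laden sketch: the ElasticSearch configuration class and its setUri setter are assumed by analogy with the Redis script shown in the comments at the bottom of this page, and the URL is the placeholder from above.

import jenkins.plugins.logstash.LogstashConfiguration
import jenkins.plugins.logstash.configuration.ElasticSearch  // assumed class name

// Assumed setter; note the URI includes both the index and the type, not just the index.
ElasticSearch indexer = new ElasticSearch()
indexer.setUri(new URI('http://elasticsearch:9200/logstash/jenkins'))

LogstashConfiguration config = LogstashConfiguration.getInstance()
config.setEnabled(true)
config.setLogstashIndexer(indexer)
config.save()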

 

Enable Globally

It is now possible to enable log forwarding globally.

Note: Enabling globally doesn't currently work for pipeline jobs, as the Workflow API doesn't support this yet (see JENKINS-45693).

JobProperty

This component streams individual log lines to the indexer for post-processing, along with any build data that is available at the start (some information, such as the build status, is unavailable or incomplete at that point).

Post-Build Publisher

This component pushes the tail of the job's log to the indexer for post-processing, along with all build data available at the time the post-build action starts (any post-build actions scheduled after this plugin will not be recorded).

Pipeline

Publisher

The Logstash plugin can be used as a publisher in pipeline jobs to send the tail of the log as a single document.

Example for publisher in pipeline
node('master') {
  sh '''
    echo 'Hello, world!'
  '''
  logstashSend failBuild: true, maxLines: 1000
}
 
Note: Due to the way logging currently works in pipeline, the logstashSend step might not transfer the lines logged directly before the step is called. Adding a sleep of 1 second might help here.
Note: In order for the build result to be included in the data sent by logstashSend, it must be set before the logstashSend step.
Note: The logstashSend step requires a node to run.
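
Putting these notes together, a minimal sketch of a pipeline that sets the result and pauses briefly before sending; the explicit result and the sleep are just the workarounds described in the notes above, and the parameter values are carried over from the example:

node {
  sh 'echo "Hello, world!"'
  // Set the result explicitly so it is included in the data sent by logstashSend.
  currentBuild.result = 'SUCCESS'
  // Give the log a moment to catch up (see the note about missing trailing lines).
  sleep 1
  logstashSend failBuild: true, maxLines: 1000
}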

Step with Block

The logstash step can also be used as a wrapper (block) step to send each log line separately.

Once the result is set, it will appear in the data sent to the indexer.

Note: When you combine this with the timestamps step, make timestamps the outermost block. Otherwise the timestamps become part of the log lines, effectively duplicating the timestamp information.

Example for pipeline step
timestamps {
  logstash {
    node('somelabel') {
      sh '''
        echo 'Hello, World!'
      '''
      try {
        // do something that fails
        sh "exit 1"
        currentBuild.result = 'SUCCESS'
      } catch (Exception err) {
        currentBuild.result = 'FAILURE'
      }
    }
  }
}

 

Note: Information about which agent the steps are executed on is not available at the moment.

 

JSON Payload Format

JSON payload Example
{
   "data":{
      "id":"2014-10-13_19-51-29",
      "result":"SUCCESS",
      "projectName":"my_example_job",
      "fullProjectName":"folder/my_example_job",
      "displayName":"#1",
      "fullDisplayName":"My Example Job #1",
      "url":"job/my_example_job/1/",
      "buildHost":"Jenkins",
      "buildLabel":"",
      "buildNum":1,
      "buildDuration":0,
      "rootProjectName":"my_example_job",
      "rootFullProjectName":"folder/my_example_job",
	  "rootProjectDisplayName":"#1",
      "rootBuildNum":1,
      "buildVariables":{
         "PARAM1":"VALUE1",
         "PARAM2":"VALUE2"
      },
      "testResults":{
         "totalCount":45,
         "skipCount":0,
         "failCount":0,
         "failedTests":[]
      }
   },
   "message":[
      "Started by user anonymous",
      "Building in workspace /var/lib/jenkins/jobs/my_example_job/workspace",
      "Hello, World!"
   ],
   "source":"jenkins",
   "source_host":"http://localhost:8080/jenkins/",
   "@timestamp":"2014-10-13T19:51:29-0700",
   "@version":1
}

Example payload sent to the indexer (e.g. RabbitMQ) using the post-build action component. Note that when the build wrapper / job property (per-line streaming) is used, some information such as the build result will be missing or incomplete, and the "message" array will contain a single log line.

Note that data.testResults will only be present if a publisher records your test results in the build, for example by using the JUnit Plugin.
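
For example, in a pipeline job the testResults section shows up once a step records test results, such as the JUnit plugin's junit step; the report path below is only a placeholder:

node {
  // ... build and run tests here ...
  junit 'target/surefire-reports/*.xml'   // records results so data.testResults is populated
  logstashSend failBuild: true, maxLines: 1000
}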

Changelog

See Changelog on github

Issues

To report a bug or request an enhancement to this plugin please create a ticket in JIRA.


42 Comments

  1. I would like assistance with maintaining this plugin. Read this doc if you are interested: https://wiki.jenkins-ci.org/display/JENKINS/Adopt+a+Plugin

  2. As of version 1.20 of this plugin, its configuration may be found in Jenkins -> Manage Jenkins -> Global Tool Configuration (not Configure System as shown above).

  3. The plugin sends the details to the Elasticsearch server, but the console log isn't indexed.

    How can I add index parts to the console output?

    I would add apache.maven.org, apache.maven.shared.jar, apache.maven....

     

    1. Your query isn't clear. The configuration has to be done on the 'Global Tool Configuration' page, where the user has to define the /index/type values in the key field (for the indexer type Elasticsearch). The console log gets pushed to Elasticsearch under the index defined in the key field.

      1. With the Logstash Jenkins plugin, if I set the Indexer Type to ELASTICSEARCH, it cannot index the console output.

        My question is, how can I add more patterns?

  4. I have installed the Logstash plugin and, as mentioned in the documentation, I'm trying to push data to RabbitMQ.
    I have created a job for it, and when I check the option "Send console log to Logstash" in the job, it crashes and fails to save the job.

     

    Also, one of my colleagues is trying this and the Jenkins console output vanishes.

    1. If you need support please use the email group.

      If you think you found a bug please file an issue in JIRA

    2. You can use the 'post-build action' on the Jenkins job configuration page and select the "Send console log to Logstash" option.

      I think you are getting the error when selecting "Send console log to Logstash" under the Build Environment options.

       

      JENKINS-47817 has been filed for the exception.

    1. vaibhav gulati The error you encountered while enabling 'Send console log to Logstash' in the Build Environment options will go away if you install the Mask Passwords plugin.

      Mask Passwords Plugin

      1. Can you raise a ticket for it to support other protocols apart from udp?

  5. Thanks Naga Pavan Kumar T,

    I will try this as well.

    There is another issue I encountered: "Failed to send log data to SYSLOG", message too long.

    1. Most likely you are hitting the syslog message size limit: https://stackoverflow.com/a/2012139/1237617

      1. I agree on this.

        There are 2 syslog formats available in Jenkins with this plugin, RFC 5424 & RFC 3164, and the only supported protocol is UDP.

        Now how to solve this issue?

        1. I suggest using a different protocol. We use logstash indexer

          1. And how to use that with this plugin?

            I only see udp as an option.

            1. No, the solution is not to use syslog at all.

  6. Hello there,

    Please feel free to redirect me to appropriate discussion channel if this is not the place. 

    This is about a bug in Logstash which prevents me from sending Jenkins build logs to Logstash over HTTP (logstash-http-plugin) using the Elasticsearch indexer.
    The suggestion to use codec=>json is not an option for me because I want to apply the JSON codec/parsing conditionally. Users with a similar requirement would potentially face this issue.

    The plugin uses ContentType.APPLICATION_JSON, which sets the header value to "application/json; charset=UTF-8"; Logstash does not recognize that and treats the JSON string as plain text. Hence, I would like to propose a trivial change to ElasticSearchDao.getHttpPost() to manually set the request header. The proposed change looks something like this:

    HttpPost getHttpPost(String data) {
        HttpPost postRequest;
        postRequest = new HttpPost(uri);
        postRequest.setHeader(HttpHeaders.CONTENT_TYPE, "application/json");
        StringEntity input = new StringEntity(data, StandardCharsets.UTF_8);
        postRequest.setEntity(input);
        if (auth != null) {
          postRequest.addHeader("Authorization", "Basic " + auth);
        }
        return postRequest;
    }

    This change sets the request header to application/json instead of application/json; charset=UTF-8, which makes Logstash happy and everything works great. IMHO, this is a very trivial change and should not have any impact on existing behavior.

    Please let me know your comments/suggestions or if you have an alternative way to solve this. I can raise a PR if required.

    1. Vaman Kulkarni please file a PR and continue there.

      I'd rather solve this by exposing a new configuration field that would allow overriding the content type with any value.

      1. @Jakub Bochenski Thanks for your suggestion. I shall work on that and raise a PR. Do you foresee any use case where one would want to set a different content type? The reason I am asking is that the logstash-plugin as such generates a JSON string, and I feel that no one would want the JSON string to be interpreted as any other content type (smile)

        1. Vaman Kulkarni I have those motivations:

          1. We need this change to be opt-in for the sake of existing setups.
          2. This is a workaround for an obvious defect in Logstash. I want to cover other potential issues like this; I can imagine some backend or middleware requiring the legacy text/json MIME type.
          3. One could change to a more specific content type (application/vnd....+json) to do some REST content negotiation tricks.
          1. @Jakub Bochenski: I raised a PR (https://github.com/jenkinsci/logstash-plugin/pull/41) for this a couple of days ago. I could not find any option to add reviewers, perhaps lack of permissions? Could you please help with adding reviewers to the PR? Thanks!

  7. Hi,

    We are having an issue related to the transport message size. This issue was already mentioned in an earlier comment, but we did not reach any conclusions about it.

    In some jobs we are getting this message in logs:  

    logstash-plugin: Failed to send log data to SYSLOG <Server>
    logstash-plugin: No Further logs will be sent to <Server>
    java.net.SocketException: The message is larger than the maximum supported by the underlying transport: Datagram send failed


    Does anyone have any suggestion of how we can overcome this issue?

    Thank you in advance.

    Regards,
    Alberto Monteiro

     

  8. How can I add more key-value pairs to the JSON?

  9. Hello, is there a way to globally enable this setting for our pipeline multibranch builds, or an option to enable it in the Jenkinsfile using options, instead of wrapping the whole pipeline with the build wrapper?

    1. There is no way to enable it globally for pipeline jobs. A change was just merged yesterday in the master branch (not released yet) that will allow enabling it for classic (freestyle) jobs.

      Due to the nature of pipeline jobs it is not possible to enable it globally, I think. The best you can currently get is the logstash step (also in the master branch but not released), which allows you to do:

       

      logstash {
        node('label') {
          // do something
          // ....
        }
      }



      1. When JENKINS-45693 is fixed, enabling globally will also work for pipeline jobs.

        1. That is great news! I will keep watch on this JIRA. Thanks!

  10. Can you add the Jenkins job status to the output? We want to trend completed, unstable, and failed jobs, but there is no status value.

    1. 2.1.0 contains a fix so that when you set the build result in the pipeline explicitly, the result will be included from that point on when using the logstash step

  11. I have tried this plugin in the past. We use pipelines and this seems to have a limitation: it only outputs the build result when the build has failed, because that result is caught in the pipeline. If the build succeeds, the result is only set at the end of the pipeline, so logstashSend does not get it.
    Because of this I need to use the /var/log/jenkins folder to get build logs, send them to Logstash, and grok out the results, etc.
    This has a limitation too, though: I do not get the total build times and therefore cannot map build times over weeks / months.
    Does anyone have a way to fix this?

    1. Look at https://support.cloudbees.com/hc/en-us/articles/218554077-How-to-set-current-build-result-in-Pipeline- to see how to set the build result in a pipeline. Of course this only works when you have control over the pipeline and logstashSend requires a node.

  12. How does one send the Jenkins build logs to the logstash-plugin (v1.3)?

    Does one select 'ELASTICSEARCH' as the 'INDEXER_TYPE', and what value goes in the "KEY"?

    We're seeing this exception when trying to send logs over:

    [logstash-plugin]: Failed to send log data to ELASTICSEARCH:http://HOSTNAME:12100.
    [logstash-plugin]: No Further logs will be sent to http://HOSTNAME:12100.
    org.apache.http.NoHttpResponseException: HOSTNAME:12100 failed to respond

     

    Logstash has been configured for HTTP input, and can receive data when we send JSON via curl:

    curl -H 'content-type:application/json; charset=UTF-8' -XPUT https://HOSTNAME:12100 -d '{"id":"test","time":"now"}'

     

    Any pointers?

    1. Key is not required when sending to Logstash.

      The plugin sends POST requests, not PUT.

      See the post from Vaman Kulkarni above about the bug in Logstash (not this plugin) with the content type. You must set codec => json in your Logstash input configuration.

      The next release (probably 2.1.0) will contain a fix that will allow explicitly setting the content type, and an option to send to Logstash via TCP.

       

  13. I'm attempting to configure this plugin programmatically as part of Jenkins initialization a la this doc. I almost had it with the following script, but unfortunately the setActiveIndexer(LogstashIndexer<?> activeIndexer) method is not accessible outside the class. The only other way I see to set the indexer would be to call configure(StaplerRequest staplerRequest, JSONObject json), but I'm not sure what I would call it with. 

    Has anyone successfully configured this plugin through automation? If so, how? If not, is there any reason the setActiveIndexer() method couldn't be exposed for this purpose?

     

    #!groovy
    
    import jenkins.plugins.logstash.LogstashConfiguration
    import jenkins.plugins.logstash.configuration.Redis
    
    Redis indexer = new Redis()
    indexer.setHost('logstash.example.com')
    indexer.setPort(6379)
    indexer.setKey('logstash')
    
    LogstashConfiguration config = LogstashConfiguration.getInstance()
    config.setEnabled(true)
    config.setEnableGlobally(false)
    config.setActiveIndexer(indexer)
    config.save()
    1. You should definitely call config.setLogstashIndexer(indexer) so that what you configured is visible in the UI.

      This will not get it working for your case so a fix is needed here.

      Can you open a JIRA ticket please.

      I don't want to expose setActiveIndexer, this is meant for testing only.
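
      For reference, the script from the question with that single change applied; as the follow-up below notes, on its own this still fails to instantiate the indexer until JENKINS-52643 is addressed.

      import jenkins.plugins.logstash.LogstashConfiguration
      import jenkins.plugins.logstash.configuration.Redis

      Redis indexer = new Redis()
      indexer.setHost('logstash.example.com')
      indexer.setPort(6379)
      indexer.setKey('logstash')

      LogstashConfiguration config = LogstashConfiguration.getInstance()
      config.setEnabled(true)
      config.setEnableGlobally(false)
      config.setLogstashIndexer(indexer)   // instead of setActiveIndexer(indexer)
      config.save()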

      1. Thanks for the response! I've updated my script to use `setLogstashIndexer` as you suggested. The desired configuration now appears in the UI, though it is nonfunctional (I'm getting the error "[logstash-plugin]: Unable to instantiate LogstashIndexerDao with current configuration."). Based on your reply, I take it that this behavior is expected.

        I have created a ticket for this enhancement: JENKINS-52643.

  14. I have configured the Jenkins Logstash plugin for freestyle jobs and I was able to fetch the logs, but when I created the index in Kibana and chose Discover, it shows the same logs multiple times. I did a Google search but didn't find any solution.

    ===============

    August 1st 2018, 15:35:07.286 data.id:68 data.projectName: data.fullProjectName: data.displayName:#68 data.fullDisplayName: #68 data.url:job/***/68/ data.buildHost:Jenkins data.buildLabel:master data.buildNum:68 data.buildDuration:4,953 data.rootProjectName: data.rootFullProjectName: data.rootProjectDisplayName:#68 data.rootBuildNum:68 data.buildVariables.BUILD_DISPLAY_NAME:#68 data.buildVariables.BUILD_ID:

    =================

    The same log entry is displayed around 99 times. Can someone help with this?

    Regards,

    1. Happened for me as well. Not 99 times, but 4-5 times for each job when I enable it globally.

      1. Sorry, no idea what that might be.

        The only related known problem is https://github.com/jenkinsci/logstash-plugin/pull/66, but this wouldn't generate 99 duplicates.

        Maybe you can check the HTTP input entering your ES instance to see if the payload is sent multiple times or if the message gets duplicated later.

    2. Enabling globally currently means an event is sent for each log line; it does not behave like the notifier. What you show here is just the metadata, which is sent with each event. Most likely you have additional data for each event, namely the log content itself, which is different for each event.

      There is room for improvement here: the static metadata could be sent only once per build rather than with each log line.
