
Plugin Information

View Splunk on the plugin site for more information.


The Splunk plugin for Jenkins provides deep insights into your Jenkins master and slave infrastructure, job and build details such as console logs, status, and artifacts, as well as an efficient way to analyze test results.


The plugin is used together with the Splunk App for Jenkins, which provides out-of-the-box dashboards and search capabilities to enable organizations to run a high-performing Jenkins cluster and bring operational intelligence into the software development life cycle.

Splunk Plugin for Jenkins

1. Configure Splunk Server

  • Go to https://<jenkins-url>/configure
  • Enter Hostname, Port, and Token
    • For Splunk Cloud users, the hostname is something like http-inputs-xx.splunkcloud.com or http-inputs-xx.cloud.splunk.com, and the port is 443
    • For Splunk Enterprise users, the hostname is the indexer hostname, and the port is 8088 by default
  • Check "Raw Events Supported" if you are using Splunk version 6.3.1511 or later
  • SSL is enabled by default in Splunk; it protects the data transferred over the network
  • Click "Test Connection" to verify the setup
  • Check "Enable" and Save
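In addition to the "Test Connection" button, you can verify the HEC endpoint from the command line. This is a minimal sketch with placeholder values: HEC_HOST, HEC_PORT, and HEC_TOKEN are assumptions to be replaced with your own hostname, port, and token.

```
# Placeholders: replace HEC_HOST, HEC_PORT, and HEC_TOKEN with your own values.
# Posts a minimal test event to the HTTP Event Collector endpoint.
curl -k "https://HEC_HOST:HEC_PORT/services/collector" \
     -H "Authorization: Splunk HEC_TOKEN" \
     -d '{"event": "jenkins plugin connectivity test"}'
```

A successful response is a small JSON body such as {"text":"Success","code":0}; an invalid token returns an error body instead.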

         

2. Configure Metadata

Specify the index, host, and sourcetype for the various events. Metadata can be configured so that as little or as much Jenkins information as you need is collected and sent to Splunk for analysis.

  • index is a data repository in Splunk; you can set a different data retention policy and access privileges for each index. You need to create the index in Splunk manually, as the plugin will not create any index.
  • sourcetype is used by Splunk to determine how incoming data is formatted; see also Why sourcetype matters
  • host is used to identify which Jenkins master is the source of the data
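Once events start flowing, a quick Splunk search can confirm that the configured metadata is being applied. This sketch assumes the default index name jenkins_statistics; swap in your own index name if you changed it.

```
index=jenkins_statistics | stats count by host, sourcetype
```

If the count is zero, first check that the index name in Splunk matches the one in the plugin's metadata configuration.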

There are seven types of events that can be customized:

Build Report

Junit or other Reports sent by calling "send(message)" DSL script

Build Event

Sent when jobs are started and completed

Queue Information

Basic queue information and Jenkins health metrics, sent when the queue is updated and every 45 seconds by default.
You can add "-Dcom.splunk.splunkjenkins.queueMonitorSeconds=N" to the Jenkins start parameters to adjust the timing

Console Log

Build console log, e.g. job/abc/123/console, slave log and Jenkins master log (jenkins.log)

Log File

Artifact contents, sent by calling the "archive(includes)" DSL script

Slave Information

Slave (agent) health metrics, sent every 8 minutes by default. You can add "-Dcom.splunk.splunkjenkins.slaveMonitorMinutes=N" to the Jenkins start parameters to adjust the timing

Jenkins Config

The contents of Jenkins configuration items, e.g. config.xml, sent when the config file is updated.

You can customize the index and sourcetype in the "Custom Metadata" section.

2.1. Metadata configuration for Splunk App for Jenkins

  • For Splunk version 6.5 or later, it is recommended to use the plugin's default config
  • For Splunk 6.3.x or 6.4.x, please adjust the default sourcetype to json:jenkins:old (please remove this override if Splunk is upgraded to the latest version, otherwise data will be extracted twice)

3. Customize The Job Data Sent to Splunk

While the default settings should suffice for most Jenkins users, the Advanced configuration section allows you to use a Groovy DSL to customize the data sent to Splunk.

The Groovy script can use the variable splunkins, which provides access to the following objects and methods:

  • send(Object message) sends the information to Splunk
  • getBuildEvent() returns metadata about the build, such as build result, build URL, and the user who triggered the build
  • getJunitReport(int pageSize) returns a list of test results, which contains total, passes, failures, skips, time and testcase of type List<hudson.tasks.junit.CaseResult>
  • getJunitReport(int pageSize, List<String> ignoredTestResultActions) returns a list of test results, excluding the test formats specified in ignoredTestResultActions
  • sendCoverageReport(pageSize) sends coverage data; each event contains at most pageSize metrics
  • getJunitReport() is an alias of getJunitReport(Integer.MAX_VALUE)
  • archive(String includes, String excludes, boolean uploadFromSlave, String fileSizeLimit) sends log files to Splunk
  • archive(String includes) is an alias of archive(includes, null, false, "")
  • getAction(Class type) is an alias of build.getAction(type)
  • getActionByClassName(String className) is the same as getAction(Class type) but with no need to import the class before use
  • hasPublisherName(String className) checks whether the publisher is configured for the build (applies to AbstractBuild only)
  • Here are the default settings for post-job data processing (since v1.5.0):
//send job metadata and junit reports with page size set to 50 (each event contains max 50 test cases)
splunkins.sendTestReport(50)
//send coverage, each event contains max 50 class metrics
splunkins.sendCoverageReport(50)
//send all logs from workspace to splunk, with each file size limits to 10MB
splunkins.archive("**/*.log", null, false, "10MB")
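As a sketch of how these methods compose, the script below pages JUnit results at 20 test cases per event and attaches one extra field before sending. The field name build_team is a hypothetical example, not something defined by the plugin.

```groovy
// Hypothetical customization of the post-job data processing script.
// splunkins is the variable provided by the plugin at runtime.
splunkins.getJunitReport(20).eachWithIndex { suite, idx ->
    splunkins.send([
            "event_tag" : "tests",
            "page_num"  : idx + 1,
            "testsuite" : suite,
            "build_team": "platform"   // example field, replace or remove
    ])
}
```

This mirrors the paging pattern used by the default sendTestReport(50) call, just with a smaller page size and an extra field.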

4. Customize log files at job level (optional)

Jenkins builds can produce many artifacts which can contain useful build information. The plugin can be configured globally (step #3) to collect all artifacts using the archive command. You can also specify which artifacts to send to Splunk at the job level by adding Splunk's post-build action "Send data to Splunk".

  • Add a "post-build action" called "Send data to Splunk"
  • Enter an Ant-style pattern matching string for your JUnit XML collection
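For example, assuming a Maven project whose Surefire reports land in the usual location (an assumption about your project layout), the pattern might look like:

```
**/target/surefire-reports/*.xml
```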

5. System properties (optional)

System properties are defined by passing -Dproperty=value to the java command line to start Jenkins. Make sure to pass all of these arguments before the -jar argument, otherwise they will be ignored. Example: java -Dsplunkins.buffer=4096 -jar jenkins.war

  • splunkins.buffer (default: 4096): console log buffer size
  • com.splunk.splunkjenkins.JdkSplunkLogHandler.level (default: INFO): log messages below this level will not be sent to Splunk
  • splunkins.debugLogBatchSize (default: 128): batch size for sending verbose-level (FINE, FINER, FINEST) log records
  • splunkins.consoleLogFilterPattern (default: empty): regular expression for "interesting" builds; if set, console logs are sent only for jobs whose build URL matches the pattern
  • splunkins.ignoreConfigChangePattern (default: (queue|nodeMonitors|UpdateCenter|global-build-stats|fingerprint|build)(.*?xml)): regular expression for ignoring config file changes
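Before restarting Jenkins, you can sanity-check a candidate splunkins.consoleLogFilterPattern value against sample build URLs with grep (an approximation of the Java regex matching the plugin performs). The pattern and URLs below are hypothetical examples.

```shell
# Hypothetical filter: only send console logs for jobs whose build URL
# starts with "job/deploy-". Only the first sample URL should match.
pattern='job/deploy-.*'
printf '%s\n' 'job/deploy-prod/42/' 'job/unit-tests/7/' | grep -E "$pattern"
# prints: job/deploy-prod/42/
```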



Splunk Dashboards for Jenkins

You can download and install the Splunk App for Jenkins from https://splunkbase.splunk.com/app/3332

Once installed, the Splunk App will use the data being sent by the Splunk plugin for Jenkins and show various dashboards and search capabilities. Here are some of the key features:

Overview - Visualize multiple masters and associated slaves in a single page. View build status trends and drill down to get detailed information about any build.

Build Analysis - Easily find any Jenkins build using a variety of easy-to-use filters. View a build summary or drill down to see build status trends, build time and queue time analysis, test pass/fail trends, test runtime distribution, and console logs coupled with Splunk's powerful search interface.

Test Results - If you are a test engineer and spend countless hours looking at test results in Jenkins, you will love this feature. Test Results shows all the failing tests with stack traces, flags regression failures, groups test failures by errors, captures Jenkins environment variables, and provides nifty filters to find tests with long run times, particular errors, test suites, etc.

Jenkins Health - The Splunk App for Jenkins captures Jenkins internal JVM information as well as key metrics like queue size, executor and slave stats, and Jenkins master logs. All this information is captured in real time, allowing you to quickly discover hard-to-find issues and fix them before they become a bottleneck for development teams. No more ssh-ing into Jenkins systems to find issues.

Jenkins Slaves - Analyze all activity on a particular slave. View builds executed on a slave, view real-time slave logs, build activity across all slaves, and check connection history to find out unstable Jenkins slaves. This feature is extremely helpful in identifying problematic components in a Jenkins cluster and optimizing your team's throughput.

Audit Trail - The audit trail feature allows you to see who has logged into your Jenkins system and performed activities like starting/aborting/changing jobs. You can also see which configs have been changed by a user and view the config XML directly in Splunk. This feature is particularly useful for organizations with security and compliance use cases.

To Contribute

  • clone the repo and update the code
  • start Splunk; you can get a free trial version from Splunk
  • run $ mvn clean verify -Dsplunk-host=localhost -Dsplunk-username=admin -Dsplunk-passwd=changeme to run tests using a local Splunk instance
  • send pull requests

Docker Demo Image

curl -s -L https://raw.githubusercontent.com/fengxx/docker-splunk-app-jenkins/master/demo.sh | sh

FAQ

How can I send historical data from before the plugin was installed?

Access <jenkins_url>/script and execute the following Groovy code (update the time range if needed).

def dateParser=new java.text.SimpleDateFormat("yyyy-MMM-dd HH:mm")
def startTime=0
def endTime=dateParser.parse("2016-Oct-26 13:24").getTime()
def archiver=new com.splunk.splunkjenkins.utils.BuildInfoArchiver()
archiver.run(startTime, endTime)

I got "Server is busy, maybe caused by blocked queue", what can I do?

Please try the options below:

a) Adjust the Splunk queue size to a larger value, such as 5MB. Edit SPLUNK_HOME/etc/system/local/server.conf and add:

[queue]
maxSize = 5MB

b) Adjust the console text/log file buffer size, for example to 10KB. Add the line below to the Jenkins startup script:

-Dsplunkins.buffer=10240

I am using upstream/downstream jobs, how can I consolidate the test results to root trigger job?

You can use "Customize Event Processing Script" 

/**
 * Transform job metadata before sending to splunk
 * This script is configured in Jenkins->Configure System->Splunk for Jenkins Configuration->Advanced
 *  ->Customize Event Processing Command
 */


import groovy.json.JsonSlurperClassic
import hudson.model.*
import com.splunk.splunkjenkins.model.CoverageMetricsAdapter
import com.splunk.splunkjenkins.utils.LogEventHelper
import org.apache.commons.codec.digest.DigestUtils

/**
 * @param build Jenkins job Run
 * @return the upstream job url and build number
 */
def getUpStreamBuild(Run build) {
    for (CauseAction action : build.getActions(CauseAction.class)) {
        Cause.UpstreamCause upstreamCause = action.findCause(Cause.UpstreamCause.class)
        if (upstreamCause != null) {
            return [upstreamCause.upstreamUrl, upstreamCause.upstreamBuild, upstreamCause.upstreamProject]
        }
    }
    return [build.parent.url, build.number, build.parent.fullName]
}

def isRebuild(String cause) {
    return cause?.contains("Rebuilds build")
}


def sendReport() {
    //junit report with page size set to 50, each page has maximum 50 test cases.
    //need ignore AggregatedTestResultAction since we already send downstream results
    def junitResults = splunkins.getJunitReport(50, ["hudson.tasks.test.AggregatedTestResultAction"])
    if (!junitResults) {
        return
    }

    def build = splunkins.build
    def metadata = LogEventHelper.getBuildVariables(build)
    def upStream = getUpStreamBuild(build)
    def buildEnv = LogEventHelper.getEnvironment(build)

    metadata["root_trigger"] = upStream[0]
    metadata["root_trigger_build_no"] = upStream[1]
    def causes = LogEventHelper.getBuildCauses(build)
    def rebuildFlag = isRebuild(causes)
    //end of metadata
    def event = [
            "job_url"      : upStream[0],
            "event_tag"    : "tests",
            "metadata"     : metadata,
            "build_number" : upStream[1],
            "user"         : LogEventHelper.getTriggerUserName(build),
            "job_name"     : upStream[2],
            "original_link": build.url,
            "rebuild"      : rebuildFlag,
            "trigger_by"   : causes
    ]

    if (junitResults && junitResults[0]["total"] > 0) {
        junitResults.eachWithIndex { junitResult, idx ->
            Map pagedEvent = event + ["testsuite": junitResult, "page_num": idx + 1]
            splunkins.send(pagedEvent)
        }
    } else {
        //test result not found
        def noResultEvent = event + ["event_tag": "no-tests", "page_num": 1]
        splunkins.send(noResultEvent)
    }
    def coverageList = CoverageMetricsAdapter.getReport(build, 200);
    //send code coverage
    event["event_tag"] = "coverage"
    coverageList.eachWithIndex { coverage, idx ->
        Map pagedEvent = event + ["coverage": coverage, "page_num": idx + 1]
        splunkins.send(pagedEvent)
    }
}

sendReport()



 
Changelog

1.7.2 (May 20, 2019) 

  • JENKINS-57410 connection leak after clicking 'Test Connection' button
  • respect TestNG is-config settings (beforeClass, beforeMethod) for counting test methods
  • add splunkins.allowConsoleLogPattern and splunkins.ignoreConfigChangePattern

1.7.1 (Dec 7, 2018)  

  • truncate single-line text at 100000 characters (a sign of garbage data) to align with the Splunk sourcetype text:jenkins; this can be adjusted via the splunkins.lineTruncate system property

  • add user authenticated log information

1.7.0 (Aug 20, 2018)  

  • support multiple HTTP Event Collector (HEC) hosts, separated by commas
  • optimize event congestion handling
  • allow the garbage collector to release unsent logs under memory demand, to prevent OOM
  • allow users to adjust the log queue size via -Dcom.splunk.splunkjenkins.utils.SplunkLogService.queueSize=x
  • add LogConsumer thread alive check
  • prefer TLS 1.2

1.6.4 (Jan 5, 2018)  

  • fix performance issue on Jenkins v2.89.2 caused by JDK-8184907

1.6.3 (Dec 1, 2017)  

  • fix configuration migration issue for versions prior to 1.5.0

1.6.2 (Nov 28, 2017)  

  • defer LogHandler hook registration
  • add covered number and total number in addition to percentage for code coverage (index=jenkins event_tag=coverage)

1.6.1 (Oct 15, 2017)  

  • remove restricted computer.getDisplayExecutors api call
  • add splunkins.buffer property, which can be added to the Jenkins startup parameters (such as -Dsplunkins.buffer=4096) to adjust the console log buffer

1.6.0 (August 15, 2017)  

  • add splunkins.getJunitReport(int pageSize, List<String> ignoredTestResultActions = null), which allows users to ignore specific test result formats

  • unify junit test results with xunit and cucumber test results

  • defer updateCache operation to JOB_LOADED phase

  • send JVM memory pool usage,  can be searched via

    index="jenkins_statistics" event_tag=jvm_memory

1.5.3 (July 25, 2017)  

  • fix SECURITY-479 (Arbitrary code execution vulnerability in rare circumstances)

1.5.2 (May 22, 2017)  

  • convert Float.NaN or Double.NaN to null
  • make sure workspace exists before sending files, thanks to ctran
  • fix Log type and allow verbose logging

1.5.1 (April 24, 2017)  

  • Fix log congestion issue when slave launcher generated verbose logs during Jenkins restart phase

1.5.0 (April 16, 2017)  

  • Use SecureGroovyScript to address security issues mentioned on https://jenkins.io/security/advisory/2017-04-10/ . If you hit errors like 
    org.jenkinsci.plugins.scriptsecurity.scripts.UnapprovedUsageException: script not yet approved for use
    

    you need to go to the "Manage Jenkins -> In-process Script Approval" (JENKINS_URL/scriptApproval) page to review the script and approve it.
  • Add support for jacoco-plugin

1.4.3 (Mar 3, 2017)

  • Do not extract SCM info for the job start event, since the info may be obtained from the last build, not the current build
  • Add null check for Node
  • Use job's full name instead of url to get compliance with env.JOB_NAME

1.4.2 (Jan 4, 2017)

  • Improve retry handling when Splunk is busy

1.4.1 (Dec 19, 2016)

  • Send a separate event for running jobs, used for long-running job alerts

1.4 (Dec 19, 2016)

  • Support Coverage Report generated by Clover plugin and Cobertura plugin
  • Rewrite the metadata configuration page to improve readability
  • Shaded the org.apache.http package to avoid conflicts with other plugins using an older version
  • Improve HTTP posting performance by using gzip

1.3.1 (Oct 27, 2016)

  • Masked Password parameters, sending *** instead
  • Do not send the whole environment variable list, only build parameters
  • Added BuildInfoArchiver to send historical data

1.3 (Oct 19, 2016)

1.2 (Oct 16, 2016)

1.1 (Oct 14, 2016)

  • Simplify metadata configuration
  • Fixed No signature of method: static com.splunk.splunkjenkins.utils.LogEventHelper.sendFiles() is applicable for argument types: (org.jenkinsci.plugins.workflow.job.WorkflowRun ...

1.0 (Oct 8, 2016)

  • Initial release



18 Comments

  1. would it be possible to elaborate the section :MetaData under
     Manage Jenkins -> Configure System -> Splunk for Jenkins Configuration ->
     
     I do not see anything in splunk even after "Splunk Connection Verified"
     
     Could not get data in splunk !

  2. Does this plugin support Jenkins MultiBranch Pipelines?

    1. It can send test reports and coverage reports for pipelines, but not console logs; for those you need the Splunk Plugin for Pipeline Job Support

  3. After I installed and configured the plugin exactly as shown, nothing appears in the Jenkins app for Splunk.

    the "Test Connection" button shows the "Splunk connection verified" message, but still nothing appears, not even the Jenkins server.

    Do I have to add a "Custom Metadata" for the index and another one for source type?

    Splunk is installed on one server but the Jenkins app for Splunk is installed on another one (one server receives all the information and the other one shows it)

    What am I missing?

  4.  Configure Splunk Server

     

    Instead of just one host, can this be used to send to multiple Splunk hosts at the same time? I'm trying to get logs into a general IT Splunk and one restricted for our DevOps/CICD teams.

     

    Thanks

    1. I think you may use a heavy forwarder with outputs.conf

      [tcpout]
      defaultGroup = sandbox1,sandbox2
      
      [tcpout:sandbox1]
      server = y:9997
      
      [tcpout:sandbox2]
      server = x:9997
      1. Thanks Ted,

        I don't actually have access to the jenkins box/shell so I'll have to hand config the outputs.conf blind.

        Do you know the path/files to monitor to get the different logs/sourcetypes?

         

        Mike

        1. > can this be used to send to multiple Splunk hosts at the same time

          I thought you wanted to replicate the data to different splunk servers. The outputs.conf can be put on the splunk host with the HTTP event collector enabled (a heavy forwarder, https://docs.splunk.com/Splexicon:Heavyforwarder ); it can be placed in $SPLUNK_HOME/etc/system/local/

          1. Ted,

            I see what you mean, sorry for not being clearer. I cannot bifurcate the logs at the HEC. The org that I'm trying to support has very strict guidance around their separate Splunk environments and I have to go to the source of the logs, the Jenkins server itself.

             

            Pardon the ASCII!

             

            What you're proposing, which would work pretty much anywhere else:

            /Jenkins/ ---- /HEC With HvyFwd/ ---------  /Splunk 1/

                                                 |------------------ /Splunk 2/

             

            Which the way this org is, is already part of the Splunk 1 environment.

             

            What I need is

             

            /Jenkins/ -------  /HEC 1/ -------- /Splunk 1/

                 |-----------/HEC 2/ ---------/Splunk 2/

            1. sorry, this is not supported. 

              1. Figured,

                If I run a UF on the box itself, do you know what files I can monitor to simulate this?

                Again, thanks for the help. This is something that if I had access to the box I could probably find or figure out if they are there.

                 

                 

  5. Having an issue with the plugin … I get this as a result …

     

     

    When the query runs …
    index=jenkins_statistics (host="<my hostname>" )  event_tag=job_event (type=started OR type=completed) | dedup host build_url sortby -_time  | eval job_result=if(type="started", "INPROGRESS", job_result) | timechart count by job_result - I get 0 Results

     

    If I add the field directly from search like this … index=jenkins_statistics (host="<my hostname>" )| spath event_tag | search event_tag=job_event  - I see all of the events.

    I am on Splunk version 7.0.3, and the plugin is installed on all the search heads and all of the index nodes. Does this need to be installed on the forwarder as well to transform the event_tag so it's searchable inside of the plugin? Am I on a newer version of Splunk that is not supported by this plugin? That has to be the issue: the index is there but the data is not searchable in Splunk … any thoughts?

    1. If you are using a forwarder for HEC, you need to install the app (props.conf from the app's default folder defines field extraction).

  6. Mike

    What Ted said: if you're using the app in Jenkins, which uses the HEC settings, you need the app from splunkbase.com installed. If you're seeing that the solution is to extract fields using an spath call, then some props are not being applied.

     

     

  7. H G

    I am using the 'Splunk Plugin' and 'Splunk Plugin extension' version 1.7.0. The pipeline information which is sent to Splunk does not contain the node name on which the stage actually ran. In Splunk it shows that the node is 'master' for all the stages. Is it possible to have stages/steps send the node name (something like stages{}.nodename) too?

  8. Added in 1.7.1 of the Splunk Plugin for Pipeline Job Support; it can be searched via stages{}.children{}.exec_node

  9. Hello,

    I am trying to configure the plugin to use the App but I cannot figure out the index / sourcetype mapping (i.e. among the indexes / sourcetypes suggested for the App, what is the index / sourcetype needed for "Build Report" and so on).

    I have tried to ask the question here: https://answers.splunk.com/answers/716760/jenkins-data-sourcetype-mapping-1.html

    Thanks in advance for any clarification,

  10. Hello,

    First of all, thank you for the wonderful app ! 


    Question: how can I change index=jenkins_statistics in the search query for all the dashboards in the app? Do I need to rebuild the app?

    The reason is because, in my company's Splunk instance, we have standard naming convention for all the indexes and sourcetypes.

    We need to change the index and sourcetype to match the naming convention. 

    I can change it in the GitHub source, but where do we change the dashboard queries for the Splunk App?