Configure the following required inputs to run the plug-in. The sample configuration file is in the <Downloaded_Plugin_Package>\properties directory.

Configure APM REST API

Set the value of the introscope.public.restapi.enabled property to true in the IntroscopeEnterpriseManager.properties file, which is located in the <EM_HOME>\config folder.
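
For example, the relevant line in the IntroscopeEnterpriseManager.properties file looks like this:

#enable the public REST API so that the plug-in can query metrics
introscope.public.restapi.enabled=true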

Configure APM Connection

The APM connection properties let the plug-in fetch performance metrics from CA APM. Modify the CA APM connection configuration section in the performance-comparator.properties file.

############################
#em configuration
############################
#URL of the Enterprise Manager, e.g., http://dafth03-i19470.ca.com:8081/
em.url=
#security token generated through the security token feature of the EM, e.g., 386deb23-404f-4e75-807d-06e2195310c0
em.authtoken=
#time zone of the Enterprise Manager, e.g., UTC
em.timezone=
#WebView port of the EM, e.g., 8080
em.webview.port=


Depending on the environment (on-premises or SaaS) in which your Enterprise Manager is hosted, the port number can differ. The authtoken is the system token (not a public token) that the APM administrator generates.
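
For reference, a completed section that reuses the example values from the comments above would look like this:

em.url=http://dafth03-i19470.ca.com:8081/
em.authtoken=386deb23-404f-4e75-807d-06e2195310c0
em.timezone=UTC
em.webview.port=8080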

Configure LoadGenerator When Using CA BlazeMeter

The plug-in retrieves test metadata from CA BlazeMeter through its REST API. Use the CA BlazeMeter configuration UI to obtain an API key ID, API key secret, and test ID. The following code shows a sample configuration that uses CA BlazeMeter:

#name of the load generator for BlazeMeter
loadgenerator.name=blazemeter
#BlazeMeter REST URL used to pull the master summary data of the tests, e.g., for CA BlazeMeter https://a.blazemeter.com:443/api/v4
blazemeter.resturl=
#API key ID generated while creating the test in BlazeMeter, used to connect to the test configured below, e.g., 4f45829b8e6f984758c094c6
blazemeter.apikeyid=
#API key secret generated while creating the test in BlazeMeter, used to connect to the test configured below, e.g., 5acfaf1d4bd1197ca08c16c97d8fead65e7de6197784c45ec3587bde30b5d6f90095e1a1
blazemeter.apikeysecret=
#test ID generated while creating the test in BlazeMeter, e.g., 6448793
blazemeter.testid=
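
For reference, a completed BlazeMeter section that reuses the example values from the comments above would look like this:

loadgenerator.name=blazemeter
blazemeter.resturl=https://a.blazemeter.com:443/api/v4
blazemeter.apikeyid=4f45829b8e6f984758c094c6
blazemeter.apikeysecret=5acfaf1d4bd1197ca08c16c97d8fead65e7de6197784c45ec3587bde30b5d6f90095e1a1
blazemeter.testid=6448793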

Configure LoadGenerator When Using JMeter

The plug-in retrieves the test metadata, including the load test start and end times, from the JMeter output file. Specify the type of output file (CSV or XML) that JMeter produces.

The following code shows a sample JMeter configuration:

#name of the load generator for JMeter
loadgenerator.name=jmeter
#output file type of JMeter; it can be csv or xml
jmeter.filetype=
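
For reference, a completed JMeter section would look like this (csv is one of the two supported values):

loadgenerator.name=jmeter
jmeter.filetype=csv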

Configure LoadGenerator Manually

Define the period during which you ran the load test for each build (both current and benchmark). The benchmark build start and end times must be stable. You can use a script to update the current start and end times, or you can create a Java extension of the ManualMetadataRetriever class that consumes the properties you set in the load-generator section of the properties file to retrieve the start and end times of your load test.

The following code shows a sample manual configuration:

#manual configuration of the start and end times of the completed tests
loadgenerator.name=manual
#end time of the current build; currentrunloadendtime cannot be earlier than currentrunloadstarttime or benchmarkrunloadstarttime
manual.currentrunloadendtime=2018-10-25 13:50:01
#start time of the current build
manual.currentrunloadstarttime=2018-10-25 13:40:02
#end time of the benchmark build; benchmarkrunloadendtime cannot be earlier than benchmarkrunloadstarttime or later than currentrunloadendtime
manual.benchmarkrunloadendtime=2018-10-25 13:40:00
#start time of the benchmark build
manual.benchmarkrunloadstarttime=2018-10-25 13:30:00
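
To illustrate how these properties can be consumed, here is a minimal, self-contained Java sketch that parses the current run's load window from the properties file and applies the ordering constraint noted in the comments above. The file path and all class and variable names are illustrative, not part of the plug-in:

import java.io.FileInputStream;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Properties;

public class LoadWindow {
    public static void main(String[] args) throws Exception {
        // Load the plug-in configuration (the path is an assumption for this example)
        Properties p = new Properties();
        try (FileInputStream in = new FileInputStream("performance-comparator.properties")) {
            p.load(in);
        }
        // Timestamps use the "yyyy-MM-dd HH:mm:ss" pattern shown in the sample above
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date start = f.parse(p.getProperty("manual.currentrunloadstarttime"));
        Date end = f.parse(p.getProperty("manual.currentrunloadendtime"));
        // Enforce the constraint from the comments: the end time cannot precede the start time
        if (end.before(start)) {
            throw new IllegalStateException("currentrunloadendtime precedes currentrunloadstarttime");
        }
        System.out.println("Current load window: " + start + " -> " + end);
    }
}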


Configure Strategies Information

Configuring the strategies lets you do the following tasks:

  • List the strategies with which the two builds are compared
  • Map the properties that are associated with each strategy with respective output handlers
  • List the output handlers
  • Configure Email

The following code shows a sample strategies section of the configuration file:

########################################################
#default metric comparison strategies:
#    MeanLatency: compares metric values of the current build with the corresponding values of the benchmark build
#    StaticThreshold: compares metric values of the current build with the value configured for the threshold property
########################################################
#list of metrics, e.g., cpu,concurrentinvocations,errorperinterval,gcheap
metric.list=cpu,concurrentinvocations

#MeanLatency comparison strategy
#threshold value, e.g., 2. The build fails if the build.fail property is set to true and the difference between the metric's
#average values for the current and benchmark builds crosses this value
cpu.threshold=1
#agent name of the application, e.g., .* means any agent
cpu.agentspecifier=.*
#metric path; specific to the agent and application, e.g., .*CPU.*Processor 0:Utilization % \\(aggregate\\)
cpu.metricspecifier=.*CPU.*Processor 0:Utilization % \\(aggregate\\)
#comparator class name excluding the "ComparisonStrategy" suffix, e.g., MeanLatency for MeanLatencyComparisonStrategy
cpu.comparator=MeanLatency
#list of output handlers for this strategy. Available output handlers: plaintextemail,jsonfilestore,chartoutputhtml,histogramoutputhtml
cpu.outputhandlers=plaintextemail,jsonfilestore,chartoutputhtml,histogramoutputhtml

#StaticThreshold comparison strategy
#threshold value, e.g., 1. It is compared with the average value of the metric for the current build
concurrentinvocations.threshold=1
#agent name of the application, e.g., .*
concurrentinvocations.agentspecifier=.*
#metric path; specific to the agent and application, e.g., .*Business Segment.*Health:Concurrent Invocations
concurrentinvocations.metricspecifier=.*Business Segment.*Health:Concurrent Invocations
#comparator class name excluding the "ComparisonStrategy" suffix, e.g., StaticThreshold for StaticThresholdComparisonStrategy
concurrentinvocations.comparator=StaticThreshold
#list of output handlers for this strategy. Available output handlers: plaintextemail,jsonfilestore,chartoutputhtml,histogramoutputhtml
concurrentinvocations.outputhandlers=plaintextemail,jsonfilestore,chartoutputhtml,histogramoutputhtml

###############################################
#default list of output handlers
#      plaintextemail: mail is generated with the report of each metric
#      jsonfilestore: metric values in JSON format
#      chartoutputhtml: graph representation
#      histogramoutputhtml: build-to-build comparison
#output files for all output handlers except plaintextemail are written to the current build directory, inside the Jenkins workspace folder
###############################################
outputhandlers.list=plaintextemail,jsonfilestore,chartoutputhtml,histogramoutputhtml
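
#A customized strategy packaged in the extensions directory (see Common Configuration) follows the same naming convention.
#Hypothetical example (the metric key and class name below are illustrative, not part of the plug-in):
#a class named ResponseTimeComparisonStrategy for a metric key "responsetime" would be wired as
#    responsetime.comparator=ResponseTime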

#Email configuration for Outlook
#SMTP host for Outlook
email.smtp.host=mail
#flag that controls whether to authenticate
email.smtp.auth=true
#email ID of the sender, e.g., noreply_apm_jenkins@def.com
email.sender.id=
#email password
email.password=
#To list of email recipients; allows multiple email IDs separated by commas (,), e.g., xyz@ca.com
email.recepients.to=
#Cc list of email recipients; allows multiple email IDs separated by commas (,)
email.recepients.cc=
#Bcc list of email recipients; allows multiple email IDs separated by commas (,)
email.recepients.bcc=


Common Configuration

Set the logging level and specify the directory that contains the extension JARs. The output files that the plug-in generates are written to the current build directory, inside the Jenkins workspace folder.

#name of the application for which you run this plug-in, e.g., Inventory
application.name=

#########################
#Optional properties
#########################
#benchmark build number for comparison; if blank, the previous successful build number is used
build.benchmarkbuildnumber=
#number of builds for the build-to-build chart; the maximum value is 10
histogram.builds=
#flag that makes the build fail or pass if the difference between the metric average values of the current and benchmark builds
#crosses the configured threshold value, e.g., true/false; the default is true
build.fail=
#flag that publishes the build results to the Enterprise Manager, e.g., true/false; the default is false
build.result.publishtoem=

#Log level: SEVERE > WARNING > INFO > CONFIG > FINE > FINER > FINEST > OFF
#Logs are written to the current build directory, inside the Jenkins workspace folder; the default is INFO
logging.level=

#path of the JAR files for customized strategies; if this folder is empty, any extended strategies defined in the properties file cannot execute
#e.g., C:\\APM\\AutomicJenkins\\Jenkins\\Jenkins Server\\extensions\\
extensions.directory=
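
Putting it together, a completed common configuration that reuses the examples and defaults from the comments above might look like this (the application name is the document's own example; adjust the values for your environment):

application.name=Inventory
build.benchmarkbuildnumber=
histogram.builds=10
build.fail=true
build.result.publishtoem=false
logging.level=INFO
extensions.directory=C:\\APM\\AutomicJenkins\\Jenkins\\Jenkins Server\\extensions\\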


