Child pages
  • Static Code Analysis Plug-ins

View Static Analysis Utilities on the plugin site for more information.

Older versions of this plug-in may not be safe to use. Please review the published security warnings before using an older version.


 This plug-in provides utilities for the static code analysis plug-ins. Jenkins understands the result files of several static code analysis tools; for each result file, a different plug-in is used for configuration and parsing. Since these results are visualized by the same back-end, the description of this back-end is combined in this section. The following plug-ins use the same visualization:

Additionally, the add-on plug-in Static Analysis Collector is available; it combines the individual results of these plug-ins into a single trend graph and view.

The following features are provided by these plug-ins:

  • View column that shows the total number of warnings in a job
  • Build summary showing the new and fixed warnings of a build
  • Several trend reports showing the number of warnings per build
  • Several portlets for the Jenkins dashboard view
  • Overview of the found warnings per module, package, author, category, or type
  • Detail reports of the found warnings, optionally filtered by severity (or new and fixed)
  • Colored HTML display of the corresponding source file and warning lines
  • Several failure thresholds to mark a build as unstable or failed
  • Configurable project health support
  • High-score computation for builds without warnings and successful builds
  • Works with both the freestyle and the native m2 (Maven) job types of Jenkins
  • Email support to show aggregated warning results of a project
  • Remote API to export the build quality and the found warnings
  • Several tokens to simplify post-processing of the analysis results

View column

The total number of warnings in a job can be visualized in every view by adding a new column "Number of * warnings".

Trend Graphs

There are several trend graphs available for the plug-ins. Currently, you can select one of the following trend graphs for a job:

  • Total warnings per build, including the distribution of the priorities low, normal, and high in different colors.
  • Total warnings per build, showing how many warnings are below (blue), in between (yellow), or above (red) the build health thresholds.
  • New and fixed warnings per build, fixed in blue and new in red.
  • Difference between new and fixed warnings per build (cumulative).
  • Total number of warnings (with auto-scaled range).
  • Number of warnings per "author".

You can adjust the size of the graphs and the number of builds to include. These graphs can be configured globally (in the plug-in section of the job configuration, open "Advanced" and follow the link at the bottom), and each user can override the configuration.

Portlets for the dashboard view

The following portlets for the dashboard view are available:

  • The number of warnings per project (total, priority high, priority normal, priority low)
  • Trend graph with number of warnings in the selected projects (with priority distribution)
  • Trend graph with number of new and fixed warnings in the selected projects

Build Summary

The results for each build are summarized on the build view. Here you see how many warnings or open tasks have been found for the selected build. Moreover, the summary shows the number of new and fixed warnings as well as the number of scanned or parsed files. The details views for each plug-in are accessible via hyperlinks. You can also navigate directly to a plug-in's results by clicking into the trend image (see image above).

Result Overview

Each plug-in presents the results of a build in several overview tabs: here you see the number of warnings or tasks per item as well as the severity distribution. The severity graphs provide a tooltip showing the actual number of warnings or tasks for each severity. By following the link in the first column of the overview table you will be directed to the filtered details of the selection. The overview table is sortable, so you can easily find the modules or packages with the most warnings by clicking on the table header.

  • The modules tab shows the number of warnings or open tasks per module. The module name is extracted from the pom.xml (Maven) or build.xml (Ant) build configuration files. If you are using another build tool then the path segment above the scanned analysis report file is used as module name.
  • The packages tab shows the number of warnings or open tasks per package or namespace. Currently only Java and C# files are supported.
  • The files tab shows the number of warnings or open tasks per file.
  • The categories tab shows the number of warnings per category. The available set of categories is obtained from the underlying static code analysis tool.
  • The type tab shows the number of warnings per type. The type depends on the static code analysis tool but typically is a 1:1 mapping to the actual rule that produced the warning.

The overview tabs for packages, files, categories, and types are equivalent; click on the thumbnails below to view a screenshot of these tabs.

Package Overview

Files Overview

Category Overview

Types Overview

Result Details

The details of the individual warnings are shown in the remaining tabs. In the Details tab you will see all warnings of the current selection (e.g., a given package) printed one after another. For each warning you will see the warning message and a detailed description (with example) of the static analysis tool. If you are viewing the results of the current build then the file names are hyperlinks: clicking on the file name will open the actual source code with the selected warning highlighted.


The detail tabs in the other plug-ins are equivalent; click on the thumbnails below to view a screenshot of these tabs.





Besides this Details tab, there are additional tabs that show the details for a filtered subset of the warnings or tasks: the tabs high, normal, and low show the details for the selected severity, while the tabs new and fixed show the warnings that are new or fixed in the current build, respectively.

Since release 1.88 of analysis-core, two new tabs show the origin of the warnings. In the people tab, all warnings are mapped to "authors" from the SCM (currently only Git is supported; PRs for other SCMs are welcome). For each author, the number of created warnings is shown. In the origin tab, all warnings are shown in a table that lists the warning details as well as the author, the commit, and the build number where a warning occurred for the first time. Note that the computation of author, commit, and build is not always correct, since a refactoring might falsely mark a warning as new.

Finally, the Warnings tab shows a sortable table of all warnings. Here you can sort the warnings by any available attribute to decide which warnings should be looked at in more detail. The warning message and description are shown when hovering over the cell content.

Source Code Visualization

The actual warning is visualized in the source code view (with syntax highlighting). Some warnings have several source code markers attached; in this case, the primary range of the warning is colored orange and the remaining ranges are colored yellow. When hovering over a colored warning annotation, the warning message and detailed description are shown in a tooltip.

Email Support

The warning results can be shown in build notifications, too. In order to get an aggregation report in build emails you can use the static-analysis.jelly template for the Email-Ext Plug-in.
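As a sketch of how the template is typically hooked up (this assumes the static-analysis.jelly template is available to Email-Ext; check the Email-Ext documentation for the exact setup on your version), the default content of the notification would use the Jelly script token:

```
${JELLY_SCRIPT, template="static-analysis"}
```

Since Jelly templates produce HTML, the content type of the email should be set to HTML for the report to render correctly.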

In case you want to send notification emails to users introducing new warnings or violations but without failing a build you can use this groovy trigger script for the Email-Ext Plug-in.

Remote API

All plug-ins also provide a remote API to obtain information on the quality of the current build. You can use the following URLs, where the variable [Plugin-URL] needs to be replaced with the URL of the plug-in, e.g., checkstyle, findbugs, tasks, etc.:

  • ...job/[Job-Name]/[Build-Number]/[Plugin-URL]Result/api/xml?depth=0 returns only the build results.
  • ...job/[Job-Name]/[Build-Number]/[Plugin-URL]Result/api/xml?depth=1 additionally returns the current (and new) warnings, e.g.:
    <message>The String literal "</li>" appears 5 times in this
      file; the first occurrence is on line 62.</message>
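As a sketch of how these URLs can be assembled and queried from a script (the host name, job name, build number, and plug-in below are placeholders; authentication options are omitted):

```shell
#!/bin/sh
# Hypothetical values -- replace with your own Jenkins host, job, and plug-in.
JENKINS_URL="http://localhost:8080"
JOB="MyJob"
BUILD=42
PLUGIN="checkstyle"     # or findbugs, pmd, tasks, ...

# depth=0 returns only the build result summary.
SUMMARY_URL="$JENKINS_URL/job/$JOB/$BUILD/${PLUGIN}Result/api/xml?depth=0"

# depth=1 additionally returns the individual (and new) warnings.
DETAILS_URL="$JENKINS_URL/job/$JOB/$BUILD/${PLUGIN}Result/api/xml?depth=1"

echo "$SUMMARY_URL"
echo "$DETAILS_URL"
# Fetch with, for example: curl -s "$SUMMARY_URL"
```

The same URLs also work with the json or python API flavors instead of xml.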

Build Tokens

All plug-ins provide several tokens that are available during post-build processing. In order to use these tokens you need to install the latest release of the Token Macro Plug-in. The following tokens are currently available (for the plug-in names CHECKSTYLE, DRY, FINDBUGS, PMD, TASKS, WARNINGS and ANALYSIS):

  • [plug-in name]_RESULT: Expands to the build result of the plug-in
  • [plug-in name]_COUNT: Expands to the total number of warnings in a build
  • [plug-in name]_NEW: Expands to the total number of new warnings in a build
  • [plug-in name]_FIXED: Expands to the total number of fixed warnings in a build
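As an illustration (a sketch; the exact set of available tokens depends on the plug-ins you have installed), such tokens could be combined in an Email-Ext subject line like this:

```
Build ${BUILD_STATUS}: ${CHECKSTYLE_COUNT} Checkstyle warnings (${CHECKSTYLE_NEW} new, ${CHECKSTYLE_FIXED} fixed)
```

The Token Macro plug-in expands each ${...} reference when the notification is generated.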

Maven Notes

These plug-ins normally run in the Maven site phase, not in the 'normal' package phase. The configuration help for each plug-in (a bit further up on the same page) specifies which goal you have to add to your Maven build options.


  1. Unknown User (pgelinas)

    I've just started using these plug-ins for our build process and I've been wondering if there is a way to aggregate the reports in an upstream project, just like the "Aggregate test report" does.

    I have a build configuration like this one: Project A is upstream and Project B and C are downstream of A. Project A is like the real project where B and C are sub-modules. Project A doesn't really have any source code or tests, it just polls the SCM and launch the build for B and C, then aggregate their test result. I'd like to do the same for the reports generated by the static analyzer but I haven't found a way to do it yet. Any ideas?

    1. This option is only available for multi-module m2 projects.

      1. Unknown User (michelnolard)

        Shouldn't it be quite straightforward to use the downstream project list as if it were a maven2 multi-module list? Maybe I am wrong in thinking it is simple _and_ easy, so I'm waiting for your advice.

        Wouldn't it be wonderful to offer that feature to people out there who are not using maven? Some developers cannot use maven simply because:

         - they are working on huge "legacy" systems,

         - they do not have enough time/money/people/knowledge/self-confidence to actually do the migration,

         - they work in companies whose standards do not include maven,

         - they already use another similar tool like Apache Ivy,


        Maybe _you_ are lucky, but it is not everybody's case.

        Thank you for reconsidering your point of view for one second at least.

        1. Yes, it shouldn't be too complicated to aggregate the results. I just meant that the support is automatically available for maven projects. For freestyle projects the support needs to be implemented. So the best thing would be to open a feature request in our issue tracker.

  2. Unknown User


    I'm working on an extension of another parser for the warnings plug-in. My parser can provide detailed information about multiple source code locations in a single warning.

    But how can I hyperlink and highlight all of these source code lines in the details tab?

    The class Warning can only be initialized with a single file name and line number, and its hyperlink looks like ....../107/warningsResult/source.43/#425

    How can I hyperlink and highlight multiple source code lines or multiple files, like the example picture on this page?


    1. Unknown User

      Furthermore, how do I format and add paragraphs to text in the details tab?

  3. Unknown User (stripathy)

    I am seeing an issue: when I look at a build it says something like 41 new warnings and 38 fixed warnings, but if I click on it, it takes me to a page which actually contains 0 issues (total 0, and 0 for high, low, and all other categories and fixes). In effect the whole build might have only a couple of extra issues, but seeing so many fixed and new warnings confuses the developers, and they are not sure what's wrong and which warnings are actually new.

    Is anybody else experiencing this problem? I am getting the same unusual numbers for the Duplicate Code checker, and Checkstyle always says all warnings are new.

    We are on Hudson 1.344,

    Static Analysis Utilities 1.3

    Checkstyle Plug-in 3.2

    Duplicate Code Scanner Plug-in 2.2

    Findbugs Plug-in 4.3

    We are using ant to do the build, and publish the findings to xml files.

    1. Can you please file an issue in Jira?

  4. Unknown User

    We're doing incremental Maven multi-module builds in Hudson, and I've noticed the following behavior: When a particular module in the multi-module build doesn't run (because it hasn't changed), the static analysis plugins for the top-level build report that all of that module's warnings, FindBugs issues, etc. have been resolved.  I understand why this is happening, but it's a bit frustrating -- the only way we can get accurate trending for static analysis reports is by forcing a full (and lengthy) rebuild whenever any module is changed. 

    Theoretically, couldn't the analysis utilities recognize builds that didn't execute (as opposed to builds that failed) and use the results from the most recent executed build?  It seems (although I have no idea what the code looks like) as if those results should be available in some form, since they're necessary to generate the trend graphs.

    1. Which version are you using? I improved the detection of new warnings in the latest release. At least for freestyle projects this should work now. Are you using the freestyle or m2 job type? BTW: please create an issue in Jira because sometimes the Confluence notifications don't work...

      1. Unknown User

        Hi Ulli,

        We're on Hudson 1.352, with Static Analysis Utilities 1.4 and Static Analysis Collector Plug-in 1.2. And we're using the m2 job type. I've added a Jira ticket. Thanks!

  5. Unknown User


    How can I make the warnings plugin display a portlet in the dashboard view? I did not find a configuration option for it.


  6. Part of the FindBugs report page has disappeared.

    The page ends with
      Packages Files Categories
      < then nothing >

    It was working until:
    o I upgraded Hudson to 1.357 (probably from 1.356)
    o I changed the hostname

    I do not see anything unusual in the log or in stdout.

    Hudson 1.357
    FindBugs Plug-in 4.8
    Static Analysis Utilities 1.8
    Static Analysis Collector Plug-in 1.5
    Dashboard View 1.5

  7. Unknown User

    Hi Ulli,

    Could you provide the change log from 1.6 to 1.8?

    And why did it jump two releases?


    1. The analysis-core plug-in has no separate changelog; please use the changelogs of the individual plug-ins. (Versions are sometimes skipped due to networking/locking errors in the release process.)

  8. Is there a way to "reset" the statistics?  Some initial issues with the configuration of the reporting caused a high number of false positives to be reported with the first couple of builds.  I'd like to clear any past measurements and go forward to make the graphs more realistic. 

    1. You need to delete the corresponding builds.

  9. Unknown User (quipo)

    What font is it using to generate the graphs?
    We have a weird font in our setup (latest version of hudson/plugins, maven2 project, centos 5.5):

    -Edit: actually, nevermind, fixed by installing msttcorefonts.

  10. I want everyone who looks at my Jenkins reports to see the full history of the builds in the graphs.  It keeps defaulting to the last month of builds only.  I want the full history and would prefer to disable the feature where users can change what the view contains.  Is there a way to do this?

    1. No, that is not possible.

  11. Excellent plugin - I'm encountering one hiccup however - when a build fails my graphs nosedive to 0 - even though my checkstyle xml output contains all the necessary data.

    This is a bit misleading and leaves an awkward drop in the charts due to a failed build.

    I'm using the latest Jenkins, and analysis plugins; is this intended behavior?

    1. There is a checkbox where you can configure that.

      1. Found the checkboxes - they're not under the static analysis plugin settings for the project, but under each different tool's section.

        e.g.: PMD or Checkstyle - look for "Run always" checkbox by clicking on the "advanced" button for those tools in the project's configuration.

  12. I have a plugin which uses the static analysis core plugin. I would like builds to be failed/unstable if there are any new warnings. But only the build which introduced the warning should fail.

    I set the plugin to mark builds as unstable if there are more than 0 new warnings. But once there is a new warning, then it keeps using the last "stable" build as a comparison, so all subsequent builds are also marked as unstable.

    Am I misunderstanding how this works or is there a way to have only the build which introduces a new warning to be marked as unstable or failed?

    1. No, this is not possible anymore. (This was the behaviour before I introduced the reference builds.)

  13. Hi, here is a newbie question about how the "detect modules" feature is supposed to work.

    I have one PHP project and I started using Jenkins and all the code-analysis plugins recently. Since the code-analysis plugins do not offer introspection for PHP files (looking for @package/@subpackage PHPDocs or namespaces), I decided to try the "detect modules" route.

    So, in my project, before running all the cpd/pmd/test/coverage... utilities, I've tried to spread "fake" build.xml (ant) files over some of the directories, hoping that it would provide me some sort of organization by modules across all the code analysis results.

    Once all those files are in place, I run the (free-style) project (usually one shell script) against the whole codebase, generating 1 file per code-analysis tool (cpd_results.xml, pmd_results.xml and so on).

    But it seems that the code-analysis plugin is not really searching recursively for all the build.xml files in order to build the modules tab. Only the main "fake" build.xml file is detected and its "name" considered. And, since it is the only module detected, I don't get the nice modules tab at all.

    And here is the question. Not sure why, but I assumed that the search for ant/maven "module names" was recursive and that the code-analysis was later able to match the "files" against all those module names. Is that assumption correct?

    Or do I have to execute as many cpd/pmd/... runs as there are build.xml files, and only then will the code-analysis aggregator show the information grouped by modules?

    Note that the 2nd approach is fine for some utilities, but the cpd (dry) one, for example, loses a lot of accuracy detecting dupes if it's not executed against the whole codebase.

    And that's the newbie question, I've tried hard to look over all the plugins wikis, the net, and also trying to understand your code @ github, lol. But at this point, I think it's better to ask directly.

    TIA and ciao :-)

    1. Oops, sorry I missed your comment.

      It shouldn't be complicated to detect the packages of PHP Modules. The Java and C# detectors are just a couple of lines... Can you please post a follow-up on the mailing lists or create an issue? I'm not sure if the event notification always works here in Confluence...

  14. I want to use the Checkstyle, PMD and FindBugs plug-ins,

    but when I try to publish the results, my project gets an HTTP ERROR 500:

    Problem accessing /job/Config%20Manager/. Reason: jar:file:/C:/Hudson_Data/war/webapp/WEB-INF/lib/hudson-core-3.0.0-M1.jar!/lib/hudson/project/projectActionFloatingBox.jelly:30:74: <st:include> org/jfree/chart/renderer/xy/XYItemRenderer

    Caused by:org.apache.commons.jelly.JellyTagException: jar:file:/C:/Hudson_Data/war/webapp/WEB-INF/lib/hudson-core-3.0.0-M1.jar!/lib/hudson/project/projectActionFloatingBox.jelly:30:74: <st:include> org/jfree/chart/renderer/xy/XYItemRenderer

    at org.apache.commons.jelly.impl.TagScript.handleException(
    at org.apache.commons.jelly.TagSupport.invokeBody(
    at org.apache.commons.jelly.tags.core.ForEachTag.doTag(
    at org.kohsuke.stapler.jelly.ReallyStaticTagLibrary$
    at org.apache.commons.jelly.tags.core.CoreTagLibrary$
    at org.apache.commons.jelly.tags.core.CoreTagLibrary$
    at org.kohsuke.stapler.jelly.IncludeTag.doTag(


    Caused by: java.lang.NoClassDefFoundError: org/jfree/chart/renderer/xy/XYItemRenderer
    at hudson.plugins.analysis.core.AbstractProjectAction.getAvailableGraphs(
    at hudson.plugins.analysis.core.AbstractProjectAction.createConfiguration(
    at hudson.plugins.analysis.core.AbstractProjectAction.createUserConfiguration(
    at hudson.plugins.analysis.core.AbstractProjectAction.isTrendVisible(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

    Caused by: java.lang.ClassNotFoundException: org.jfree.chart.renderer.xy.XYItemRenderer
    at org.aspectj.weaver.bcel.ExtensibleURLClassLoader.findClass(
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 118 more

    Caused by:
    java.lang.NoClassDefFoundError: org/jfree/chart/renderer/xy/XYItemRenderer
    at hudson.plugins.analysis.core.AbstractProjectAction.getAvailableGraphs(
    at hudson.plugins.analysis.core.AbstractProjectAction.createConfiguration(
    at hudson.plugins.analysis.core.AbstractProjectAction.createUserConfiguration(
    at hudson.plugins.analysis.core.AbstractProjectAction.isTrendVisible(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)


    Caused by: java.lang.ClassNotFoundException: org.jfree.chart.renderer.xy.XYItemRenderer
    at org.aspectj.weaver.bcel.ExtensibleURLClassLoader.findClass(
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    ... 118 more

    Caused by:
    java.lang.ClassNotFoundException: org.jfree.chart.renderer.xy.XYItemRenderer
    at org.aspectj.weaver.bcel.ExtensibleURLClassLoader.findClass(
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at hudson.plugins.analysis.core.AbstractProjectAction.getAvailableGraphs(
    at hudson.plugins.analysis.core.AbstractProjectAction.createConfiguration(
    at hudson.plugins.analysis.core.AbstractProjectAction.createUserConfiguration(
    at hudson.plugins.analysis.core.AbstractProjectAction.isTrendVisible(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)


    Can you please help me?

    1. It seems that the Hudson team decided to remove the JFreeChart classes from Hudson. These classes are required by my plug-ins. My plug-ins work quite well in Jenkins, so you can either migrate to Jenkins or try to use an older Hudson version.

  15. We (at our company) are using Jenkins core version 1.410 and the "Static Code Analysis Plug-ins" plugin installed is v1.38. Somehow the result overview shows wrong numbers. E.g.: total warnings are 52, however the details show only 40. Surprisingly, the difference of 12 warnings went into fixed (even without any change of code). Appreciate any insights on this.

    thank you

    1. Please create a bug report in our issue tracker.

  16. I'm finding that the dashboard portlet views are not particularly useful.

    I have a number of projects, but the frequency at which they are built differs wildly -- I have some projects that are built several times a day, and others that are built only once a month or less. For all of our projects, I have configured them to retain details of the last ten builds.

    On the individual projects, the trend graphs work very well. But on the portlet views, the variable build frequency seems to be throwing the graphs out quite badly.

    The graphs are obviously built using the available data from all the builds, but obviously in our case some projects only provide data for a few days whereas others provide data going back months. This means that the dashboard view graphs show high figures for the last week or so, crammed up against the right hand side, and then the graphs drop off a cliff and we get a long tail of several months' worth of data that only includes a few projects and doesn't accurately represent what happened over that time.

    I guess it is working accurately according to the data available, but the net effect is that the graphs are all but useless.

    I don't know how you'd solve it. The only things I can think of to suggest are either to limit the graph to the time frame of the project with the shortest gap between its oldest build and now, or to retain the data yourself independently of the projects, so you're not relying on the projects retaining stats for a length of time. I see complications with both of these approaches, so I won't try to tell you what to do.

    1. I would recommend retaining the build data using the '#days' field. E.g., I'm using 90 days in our projects, which works quite nicely. You can configure all your jobs to store the build information for 90 days and discard the build artifacts, e.g., after 10 builds. Wouldn't that work for you, too?

      1. Thanks for the reply. Yes, that probably would work. I avoided over-configuring that sort of thing in the projects because I was concerned about disk space usage (I've got a tight resource limit), but that probably would do the trick without weighing it down too much. Thanks; I'll give it a try.

        1. One thought, though -- yes, this would reduce the *impact* of this problem, but it wouldn't remove it entirely.

          Given a project that is only built every 30 days and a setting to keep builds for 90 days: if I understand things correctly, this 90-day limit is respected by Jenkins, but the out-of-date builds are only deleted when a new build is run, meaning that the day before a build, that project will actually have 119 days' worth of data. This will be reflected on the portlet graphs. But other projects that are built more often will be dropping old build data much closer to the 90-day limit.

          Therefore the portlet graphs will show a trend that is accurate for 90 days, but with an inaccurate tail, the length of which will vary but could be quite long, depending on the frequency of the builds of the less-used projects.

          So yes, this is better than what I had before -- at least this way, I get accurate figures for the last 90 days -- but I still get an inaccurate tail.

          But the solution is easier than my previous suggestions: the portlet graphs just need a config parameter to tell them to cut off at 90 days, regardless of whether there happens to be older data hanging around.

          1. Isn't that parameter available in the portlet?

            1. eeep! yes it is.... hah, I didn't even see that.

              wow, I feel very silly now.  :-)

              thank you!

  17. Great plugins! We've been using them for ages.

    However, we recently moved to using large maven module builds. When a developer commits a bit of code, only that module (and any modules that depend on it) gets rebuilt (i.e. "Incremental build - only build changed modules" option). This means that the plugins only run for those modules for that build. See the screenshot on how this affects the collection of data. This also means that we have a hard time using the thresholds to make the builds unstable. Is there any way for us to solve this problem? perhaps some configuration I missed?

  18. Hi Ulli,

    I'm getting warnings in Checkstyle like the one below for every class in my project. I upgraded the version as recommended, but that did not resolve the problem. I suspect I have a classpath problem, but don't know how to easily debug this. Do you have any suggestions? Thanks.

    *, TreeWalker, Priority: High*

    Got an exception - java.lang.ClassCastException: antlr.CommonToken cannot be cast to antlr.Token

    1. This seems to be an error during your build when Checkstyle is invoked. Can you please check whether the checkstyle.xml that is produced by your build also contains this exception in some warnings?

      1. Yes, the exception is in the checkstyle.xml. The interesting thing is that this is the only reported problem (no whitespace, etc. issues are reported even though I know I have some), and this exception is logged in the checkstyle.xml file for every single Java class in my project that is checked by Checkstyle.

        1. I think there is a library conflict in your build system that is the reason for that exception. What build system are you using? Maybe you will get some better feedback on the maven or ant mailing list?

  19. Hello everyone, I need some help with PMD plugin development.
    I want to clone the PMD plugin so that it can show two trend reports for two different PMD
    results in one job. I have now cloned one with the artifact ID 'cloned-pmd';
    I changed the source package from 'com.hudson.pmd.*' to 'com.hudson.clonedpmd.*',
    I changed the project action name from 'PMD Warnings' to 'PMD Warnings II',
    and the report xml file name was changed too.
    After that I built the cloned plugin successfully and finished the installation in the Jenkins plugin management page.

    Then I found that the cloned PMD plugin could be added on the job config page successfully;
    however, it only displays the original PMD plugin menu with the name 'PMD Warnings' in the left-side tab,
    and my cloned PMD plugin does not show. I have checked that no errors or warnings were printed
    on the console.
    Did I miss any places to modify in the source code? I would appreciate any reply, thanks!

  20. Hi - We have a maven build job that started failing randomly with a null pointer exception. The first build was successful; the second build and onward failed with no changes to source code. If we copy the bad job to a new job name and build it, it succeeds. In other words, once it fails for this reason, it always fails no matter what. We were at 1.491 when the failure happened, then we upgraded to 1.509.1.1 of Jenkins. The job still fails after the upgrade of Jenkins and the plugins. Can you give us some insight as to where we should look?

    Snippet from console:

    CHECKSTYLE Computing warning deltas based on reference build #10
    INFO ------------------------------------------------------------------------
    INFO ------------------------------------------------------------------------
    INFO Total time: 3:13.021s
    INFO Finished at: Mon Jun 03 09:27:50 EDT 2013
    INFO Final Memory: 74M/1092M
    INFO ------------------------------------------------------------------------
    JENKINS Archiving disabled
    Waiting for Jenkins to finish collecting data
    mavenExecutionResult exceptions not empty
    message : Internal error: java.lang.NullPointerException
    cause : null
    Stack trace :
    org.apache.maven.InternalErrorException: Internal error: java.lang.NullPointerException
    at org.apache.maven.lifecycle.internal.BuilderCommon.handleBuildError(
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(
    at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(
    at org.apache.maven.DefaultMaven.doExecute(
    at org.apache.maven.DefaultMaven.execute(
    at org.jvnet.hudson.maven3.launcher.Maven3Launcher.main(
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(

  21. When I'm using PMD, on the "status" page I can't see the "X warnings in XX PMD files" message; only the "X warnings in one analysis" message is shown. I need to know how many files PMD has scanned. How can I make it show the "X warnings in XX PMD files" message?

         Thx a lot!

    1. "One analysis" means that exactly 1 file has been scanned. What pattern did you specify?
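For context, the "XX PMD files" part of that message simply reflects how many report files the configured file pattern matched in the workspace. Here is a minimal sketch of that counting logic; the paths and the `pmd.xml` pattern are illustrative only (the plug-in itself uses Ant-style patterns in Java, with pathlib's glob standing in here):

```python
import tempfile
from pathlib import Path

def count_report_files(workspace, pattern="**/pmd.xml"):
    """Count the report files a glob pattern matches below the workspace."""
    return sum(1 for _ in Path(workspace).glob(pattern))

# Example: two modules, each producing one pmd.xml report.
with tempfile.TemporaryDirectory() as ws:
    for module in ("module-a", "module-b"):
        target = Path(ws) / module / "target"
        target.mkdir(parents=True)
        (target / "pmd.xml").write_text("<pmd/>")
    print(count_report_files(ws))  # 2 -> "N warnings in 2 PMD files"
```

So if the status page says "one analysis", check whether the pattern you configured actually matches more than one report file.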

  22. Can anyone provide more detail about how to accomplish this:

    In case you want to send notification emails to users introducing new warnings or violations but without failing a build you can use this groovy trigger script for the Email-Ext Plug-in.

    I've got the Email-Ext plug-in installed and I can see where to add a post-build e-mail to my job and how to add a trigger, but I don't see where I'm supposed to specify the groovy trigger script.  It seems like I've only got a specific set of pre-configured e-mail triggers to work with.

    Also, right now I'm using the static analysis core plugin with checkstyle, pmd, and findbugs, but not the analysis collector plugin.


    1. You had better ask this question on the mailing list (or in the comments section of the Email-Ext plugin).

    2. Here's how to do it:

      • go to your job configuration page and click "add post-build action"
      • select "editable e-mail notification" (this is from the Email-Ext plugin)
      • then "advanced settings"
      • then click "add trigger" and select "script"
      • then click "advanced..." on the trigger
      • add this in the "trigger script" field: new URL('').getText()

      There are all kinds of other options you can configure, but this is the basic hook needed to get started.
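If you would rather make the "send or don't send" decision outside Jenkins (for example in a notification script of your own), the same information is available through the remote API mentioned on this page. The sketch below is a Python illustration, not the Groovy trigger script itself; the host/job in the URL and the `numberOfNewWarnings` field name are assumptions that depend on your analysis plug-in and version:

```python
import json
from urllib.request import urlopen

# Hypothetical result URL -- adjust the host, job name, and plug-in suffix
# (checkstyleResult, pmdResult, ...) for your installation.
RESULT_API = "http://jenkins.example.com/job/my-job/lastBuild/checkstyleResult/api/json"

def has_new_warnings(result):
    """Return True if the parsed remote-API result reports new warnings.

    The 'numberOfNewWarnings' field name is an assumption about the
    remote API of the analysis plug-ins; verify it against your version.
    """
    return result.get("numberOfNewWarnings", 0) > 0

def fetch_result(url=RESULT_API):
    """Fetch and parse the analysis result from the Jenkins remote API."""
    with urlopen(url) as response:
        return json.loads(response.read())

# Canned payload instead of a live Jenkins instance:
sample = {"numberOfWarnings": 100, "numberOfNewWarnings": 3,
          "numberOfFixedWarnings": 1}
print(has_new_warnings(sample))  # True -> send the notification e-mail
```

The build itself stays green either way; only the notification logic reacts to the new warnings.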

  23. Hi,

    I've got two questions regarding the usability of the plugin.

    We make heavy use of parameterized builds. Is it possible to show the trend graph depending on a build parameter?

    We are testing many different products which are quite stable but the trend graph shows many ups and downs because it doesn't filter the build parameters.

    Another question is regarding incremental builds. As we don't want to do full builds all the time, the warnings trend doesn't have much value. Is it somehow possible to use the changeset to check only for "real" differences from the last build?
    I hope you can answer my questions.

    Best regards


    1. Currently it is not possible to show the graph depending on a build parameter. Actually, support for parameterized builds is quite limited, since I don't use such projects at all and therefore have no real experience of which use cases are important here. Someone opened an issue quite some time ago, but so far I have received only limited feedback on what is required:

      There is another trend graph (configure link) that shows the delta between two builds, is this what you are looking for?

      1. Hi Ulli,

        I have 15 different configurations handled through a build parameter that always execute exactly the same steps. For these it doesn't make sense to create 15 different projects. So it would help to show the trend graph depending on a certain parameter. It could somehow be compared to the Simple parameterized build report (

        The trend graph delta doesn't really help. If I do a full build and have, let's say, 10 files with 100 warnings, they are shown in the graph. Then one file changes, and the incremental build shows only the 10 warnings of the file that was compiled, with 90 warnings reported as fixed even though they haven't been resolved. So it would be helpful if the plug-in could compare only the changed files and keep the old results for the rest.

        1. I think the way the plug-in currently handles multi-configuration builds needs to be changed. The parameter is not stored with the warnings, so in the end all warnings are aggregated into a single result. Either create a new issue or comment on the existing JENKINS-11225: I think it would be very useful to describe some typical use cases and define how to handle them in the plug-in.

  24. Hi,

    in the trend graph configuration, the "Aggregate per day" option doesn't work for me.

    I assume the following exception, which I found in jenkins.err.log, is connected to that.

            at sun.reflect.GeneratedMethodAccessor396.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.kohsuke.stapler.Function$InstanceFunction.invoke(
            at javax.servlet.http.HttpServlet.service(
    Caused by: java.lang.NoClassDefFoundError: org/joda/time/LocalDate
            at hudson.plugins.analysis.graph.CategoryBuildResultGraph.createMultiSeriesPerDay(
            at hudson.plugins.analysis.graph.CategoryBuildResultGraph.averageByDate(
            at hudson.plugins.analysis.graph.CategoryBuildResultGraph.createChart(
            at hudson.plugins.analysis.graph.CategoryBuildResultGraph.create(
            at hudson.plugins.analysis.graph.BuildResultGraph$1.createGraph(
            at hudson.util.Graph.render(
            at hudson.util.Graph.doPng(
            ... 76 more

    So this plug-in probably requires Joda-Time, which is not present in my installation but apparently is in yours :) because by chance you have some other plugin installed that includes joda-time.

    As a quick workaround (before this is fixed by adding Joda-Time to this plug-in, or by using Java 8 time), could you give me a tip which other plugin contains joda-time, so that I can install it to get "Aggregate per day" working? :)



    1. It seems that this dependency comes from the dashboard-view plug-in. Please use the issue tracker for such bugs, so that it is easier to track them...

      1. Thanks, installing the dashboard-view plugin helped.

        There seems to be a JIRA issue about this already:

        1. Thanks for the pointer. Seems that I did not get a notification about the issue...

  25. In my opinion, the higher-severity warnings should be at the bottom of the stacked graph. The bottom band is the easiest to gauge accurately, and those are the warnings that matter the most.

    Here's a poorly photoshopped example to show how this would look in comparison to what we have now. Could an option to reverse the order of errors/warnings/messages be added for this graph style?

    1. Can you please create a feature request in Jira?

  26. "These graphs can be configured globally for a job" - I remember this option does exist, but I can't find it.

    How do I access this global configuration? Thanks.

    1. Open the job configuration and go to the section of the plug-in you want to configure: click the advanced button; a link should then be visible that redirects to the configuration page.

      Or use the direct link job-name/descriptorByName/WarningsPublisher/configureDefaults/

      1. Thanks, got it (the link is labelled "You can define the default values for the trend graph in a separate view.")

        As it seems to be common in Jenkins (and generally in UIs) to have buttons rather than links that lead to places where you can configure things: You wouldn't mind submitting a patch that turns this link into a button?


        1. Actually, in Jenkins all configurations that are presented on a new page are represented with a link (as far as I know). Would it be more obvious if the link did not span the whole line?

          For me it is OK to replace the link with a button if that would make the configuration option more obvious. I think it would make sense to define such behavior consistently for all configuration pages (currently the configuration UI is very hard to understand, and to implement (wink): so many different buttons and options...). Maybe we should have a new tag that is used for such advanced->advanced options...

  27. Could you add release notes, please?

  28. Release notes are typically provided by the individual plugins that use analysis-core.

  29. Great plugin!

    One question: is there any way we can include the name of the last person who committed the code in the per-file warnings report?

    1. This is great, exactly what we need. Does it work with SVN as well?

      1. No, not yet implemented.

  30. The latest release version needs to be updated in the {jenkins-plugin-info:pluginId=analysis-core} section.
