
Deprecated: Users should migrate to

Plugin Information

Distribution of this plugin has been suspended due to unresolved security vulnerabilities, see below.

The current version of this plugin may not be safe to use. Please review the following warnings before use:

This plugin lets you manage Jenkins job orchestration using a dedicated DSL, extracting the flow logic from the jobs themselves.


This plugin is designed to handle complex build workflows (aka build pipelines) as a dedicated entity in Jenkins. Without such a plugin, managing job orchestration means combining the parameterized-build, join, downstream-ext and a few more plugins, polluting the configuration of every job. The build process ends up scattered across all those jobs and becomes very hard to maintain. Build Flow lets you define an upper-level Flow item that manages job orchestration and link-up rules using a dedicated DSL. This DSL makes the flow definition concise and readable.


After installing the plugin, you'll get a new entry in the job creation wizard to create a Flow. Use the DSL editor to define the flow.


The DSL defines the sequence of jobs to be built:

build( "job1" )
build( "job2" )
build( "job3" )

You can pass parameters to jobs, and get the resulting AbstractBuild when required:

b = build( "job1", param1: "foo", param2: "bar" )
build( "job2", param1: b.build.number )

Environment variables from a triggered job can be obtained as follows, which is especially useful for getting things like the checkout revision used by the SCM plugin ('P4_CHANGELIST', 'GIT_REVISION', etc.):

def revision = b.environment.get( "GIT_REVISION" )
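The returned build reference can be used to feed such values into later jobs. A minimal sketch, where the job names "unit-tests" and "deploy" are hypothetical:

```groovy
// Sketch only: "unit-tests" and "deploy" are hypothetical job names.
// Pass the revision that was actually tested on to the deploy job.
b = build( "unit-tests" )
def revision = b.environment.get( "GIT_REVISION" )
build( "deploy", GIT_REVISION: revision )
```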

You can also access some predefined variables in the DSL:

  • "build" the current flow execution
  • "out" the flow build console
  • "env" the flow environment, as a Map
  • "params" the triggered parameters
  • "upstream" the upstream job, assuming the flow has been triggered as a downstream job of another job.

For example:

// output values
out.println 'Triggered Parameters Map:'
out.println params
out.println 'Build Object Properties:'
build.properties.each { out.println "$it.key -> $it.value" }

// use it in the flow
build("job1", parent_param1: params["param1"])
build("job2", parent_workspace:build.workspace)

Guard / Rescue

You may need to run a cleanup job after a job (or set of jobs) whether they succeeded or not. The guard/rescue structure is designed for this use case. It works much like a try/finally block in Java:

guard {
    build( "this_job_may_fail" )
} rescue {
    build( "cleanup" )
}

The flow result will then be the worst of the guarded job(s) result and the rescue one(s).


You may also want to simply ignore the result of some jobs that are optional for your build flow. You can use an ignore block for this purpose:

ignore(FAILURE) {
    build( "send_twitter_notification" )
}

The flow will disregard the triggered build's status as long as it is not worse than the configured result. The result ordering is UNSTABLE < FAILURE < ABORTED.
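For instance, because results are ordered, ignoring only UNSTABLE lets real failures still break the flow. A sketch with a hypothetical job name:

```groovy
// Sketch only: "flaky_integration_tests" is a hypothetical job name.
// UNSTABLE results are tolerated, but FAILURE and ABORTED still
// affect the flow result.
ignore(UNSTABLE) {
    build( "flaky_integration_tests" )
}
```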


You can ask the flow to retry a job a few times until it succeeds. This is equivalent to the retry-failed-job plugin:

retry ( 3 ) {
    build( "this_job_may_fail" )
}


The flow is strictly sequential, but lets you run a set of jobs in parallel and wait for their completion. This is equivalent to the join plugin:

parallel (
    // job 1, 2 and 3 will be scheduled in parallel.
    { build("job1") },
    { build("job2") },
    { build("job3") }
)
// job4 will be triggered after jobs 1, 2 and 3 complete
build("job4")

Compared to the join plugin, parallel can be used for more complex workflows where the parallel branches can sequentially chain multiple jobs:

parallel (
    {
        build("job1A")
        build("job1B")
    },
    {
        build("job2A")
        build("job2B")
    }
)

You can also "name" parallel executions, so you can later use the reference to extract parameters / status:

join = parallel ([
        first:  { build("job1") },
        second: { build("job2") },
        third:  { build("job3") }
])

// now, use results from parallel execution
build("job4", param1: join.first.result.name)

And this can be combined with the other orchestration keywords:

parallel (
    {
        guard {
            build("job1A")
        } rescue {
            build("job1B")
        }
    },
    {
        retry 3, {
            build("job2")
        }
    }
)

Extension Point

Other plugins that expose themselves to the build flow can be accessed with extension.'plugin-name'

So a plugin registered as my-plugin-name might be accessed like:

def x = extension.'my-plugin-name'

Plugins implementing extension points

(searching github for "BuildFlowDSLExtension")

Implementing Extension

Write the extension in your plugin

@Extension(optional = true)
public class MyBuildFlowDslExtension extends BuildFlowDSLExtension {

    /**
     * The extensionName to use for the extension.
     */
    public static final String EXTENSION_NAME = "my-plugin-name";

    @Override
    public Object createExtension(String extensionName, FlowDelegate dsl) {
        if (EXTENSION_NAME.equals(extensionName)) {
            return new MyBuildFlowDsl(dsl);
        }
        return null;
    }
}

Write the actual extension

import;

public class MyBuildFlowDsl {
    private FlowDelegate dsl;

    /**
     * Standard constructor.
     * @param dsl the delegate.
     */
    public MyBuildFlowDsl(FlowDelegate dsl) {
        this.dsl = dsl;
    }

    /**
     * Hello World.
     */
    public void hello() {
        ((PrintStream) dsl.getOut()).println("Hello World");
    }
}
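With such an extension installed, a flow DSL script could then call it. A minimal sketch based on the EXTENSION_NAME and hello() method above:

```groovy
// Uses the MyBuildFlowDsl extension registered above under "my-plugin-name".
def myDsl = extension.'my-plugin-name'
myDsl.hello()   // writes "Hello World" to the flow console
```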


And more ...

Future releases may introduce support for more features and DSL syntax for advanced job orchestration.


Like any job, the Flow is executed by a trigger, and the Cause is exposed to the flow DSL. If you want to implement a build pipeline that runs after a commit to your SCM, you can configure the flow to be triggered when the first scm-polling job runs, but you can just as well use any other trigger (manual trigger, XTrigger plugin, ...) to integrate your flow into a larger process.

Need help?

Join the jenkins-user mailing list and explain your use case there.


Work in Progress

0.20 (released Aug 04, 2016)

0.19 (released May 09, 2016)

0.14 (released Sep. 09, 2014)

  • enable test-jar for plugins leveraging the extension point.
  • use build.displayName in JobInvocation.toString.

0.13 (released Sep. 09, 2014)

  • read DSL from a file
  • fix buildgraph when using 2nd level flows.
  • swap dependency with buildgraph-view.

0.12 (release May 14, 2014)

  • wait for build to be finalized
  • fixed-width font in DSL box
  • print stack traces when parallel builds fail
  • restore ability to use a workspace, as a checkbox in flow configuration (useful for SCM using workspace)


  • no changes (added the compatibility warning to update center)

0.11 (released Apr. 8, 2014)

  • plugin re-licensed to MIT
  • build flow no longer has a workspace
  • Validation of the DSL when configuring the flow
  • If a build could not be scheduled show the underlying cause
  • extensions can contribute to dsl help
  • aborting a flow causes all jobs scheduled and started by the flow to be aborted
  • retry is configurable
  • misc tweaks to UI and small fixes


  • add support for SimpleParameters (parameter that can be set from a String)
  • mechanism to define DSL extensions
  • visualization moved to build-graph-view plugin
  • minor fixes

0.8 (released Feb. 11, 2013)

  • Fix folder support
  • Basic flow visualization support (thanks to gregory144)
  • Alternative map-style way to define parallel executions (thanks to Jeremy Voorhis)

0.7 (released Jan. 11, 2013)

  • Add support for ignore(Result)

0.6 (released November 24, 2012)

  • Enable "read job" permissions for Anonymous (JENKINS-14027)
  • Print errors as .. errors
  • Better failed test reporting
  • Use transient ref to Job/Build …
  • Fix a NullPointer to render FlowCause after jenkins restart
  • Use futures for synchronization plus publisher support plus console println cleanup (Pull request #14 from coreyoconnor)
  • Call to Parametrized jobs correctly use their default values (Pull request #16 from dbaeli)

0.5 (released September 03, 2012)

  • fixed support for publishers (JENKINS-14411)
  • improved job configuration UI (dedicated section, help, prepare code mirror integration)
  • improved error message

0.4 (released June 28, 2012)

  • fixed cast error parsing DSL ('Collections$UnmodifiableRandomAccessList' to 'long') on some versions of Jenkins
  • add groovy bindings for current build as "build", console output as "out", environment variables Map as "env", and triggered parameters of current build as "params"
  • fixed bug when many "Parameters" links were shown for each triggered parameter on build page

0.3 (released April 12, 2012)

  • add support for hierarchical project structure (typically : cloudbees folders plugin)

0.2 (released April 9, 2012)

  • changed parallel syntax to support nested flows concurrent execution
  • fixed serialization issues

0.1 (released April 3, 2012)

  • initial release with DSL-based job orchestration


  1. Excellent! But where can I find the download link? I also cannot find this plugin in the "Manage Plugins" page.

  2. In my opinion, the best solution is to provide a solution on top of the XTrigger plugin based on input/output of an environment infrastructure.
    The job to be scheduled depends on previous job outputs.

    1. Both make sense for distinct use cases. This plugin doesn't try to remove job dependencies, but to make job orchestration trivial without a bunch of plugins to configure in each job.

      1. Yes, it depends on the context.
        However, you don't provide tips about job granularity.
        Unfortunately, I'm afraid this approach encourages users to increase the dependencies between jobs. It will therefore increase the number of issues around this subject, such as synchronization points.

        In the context of a CI process with Jenkins, the process has to be implemented by a single job (in most cases, anyway).
        Using XTrigger lets you delegate synchronization points to an external resource: your infrastructure environment, such as a simple file or your binary repository.

        1. I know your point of view about job coupling, but I don't worry about dependencies between jobs; that's something I think is useful (contributing a DependencyGraph from the DSL is on my roadmap). The goal of this plugin is to remove flow configuration from jobs and to give a single place to look at it.

          XTrigger makes sense for a large set of use cases, that's not exclusive

          I don't think a CI job should be implemented as a single job. Splitting it into simple jobs and orchestrating them allows parallelizing and distributing the build steps on the infrastructure for better efficiency

  3. This sounds great!

    Will there be a way to specify a non-blocking job? Meaning that I want to kick off a job, but not wait for it to finish, and its build result does not matter?

    Also, a way to specify when main build can start again while other downstream jobs continue? 

    1. I figured out a way to do both of these, but it still would be nice to have DSL syntax for it.

      1. Would you mind sharing how you did this? Either with the Build Flow Plugin or without

        1. This is my DSL:

           parallel (

          For the non-blocking job, I have the regress-1 use "Trigger/call builds on other projects" Build step which allows you to block or not-block and ignore build result.

          Then to allow the DSL job to start before the entire process is done, the regress-done triggers the final release jobs.

          1. Thanks for the reply.

            I ended up figuring this out. What I am trying to figure out now is how to nest parallel execution. I was hoping the following would work but it doesn't.

            parallel (
                { build("TryBuildFlowPackaging")
                        parrallel (
                                       {build("TryBuildFlowPostBuild") }
               { build("TryBuildFlowDeploy") }
  4. This plugin doesn't seem to work with the git plugin, or am I doing something wrong?

    1. This plugin doesn't relate to SCM.

      If you want to trigger a build from an SCM change, define a job to poll your SCM and make the flow its first downstream job

  5. I got the following error with the simple DSL script build ("job"):
    Building on master in workspace /var/lib/jenkins/jobs/Felix/workspace
    FATAL: Cannot cast object 'hudson.model.Cause$UserIdCause@c6388a81' with class 'java.util.Collections$UnmodifiableRandomAccessList' to class 'long'
    org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'hudson.model.Cause$UserIdCause@c6388a81' with class 'java.util.Collections$UnmodifiableRandomAccessList' to class 'long'

    1. I get the same error.

    2. This issue was due to a conflict between Groovy runtime used by the plugin and the one packaged into jenkins-core

      It has been fixed on master

  6. I tried using parallel several times, adding and removing brackets and whitespaces, but I couldn't get it to work with more than one target.

    Whenever I had more than one build inside the parallel I would get the following output:

    Started by user XXXXX
    Building on master in workspace XXXXXXXX
    parallel {
    Notifying upstream projects of job completion
    Finished: SUCCESS
    1. I have the same issue.

      I can't get it to build jobs in parallel. Would it have anything to do with the characters in the job name?

      1. Please open a Jira issue with detailed configuration (OS, JDK, Jenkins version, etc.)

        1. Created - hopefully created correctly. If you need anymore information let me know.

  7. I have this DSL:

    parallel (
    parallel (
      { build("test-buildflow-release-1") },
      { build("test-buildflow-release-2") }

    I purposely made test-buildflow-regress-3 fail, but it still ran release-1 and release-2.  Is that expected?  I thought it would stop on a failure.  This log shows it still kicks off the 2 release jobs?

    Started by user XXXX
    Building on master in workspace XXXX
    parallel {
    Trigger job test-buildflow-regress-1
    Trigger job test-buildflow-regress-2
    Completed test-buildflow-regress-2 #4
    Trigger job test-buildflow-regress-3
    Completed test-buildflow-regress-1 #4
    Completed test-buildflow-regress-3 #3
    parallel {
    Trigger job test-buildflow-release-1
    Trigger job test-buildflow-release-2
    Completed test-buildflow-release-2 #1
    Completed test-buildflow-release-1 #1
    Notifying upstream projects of job completion
    Finished: FAILURE
    1. This is clearly unexpected.

      I've added a test to check this scenario, and the test passes :-/

      Please open a Jira issue and report the Jenkins version you use for further investigation

      1. Using version 0.5, I still get this problem. Using the following flow:

        guard {
           build ("test-a")
           parallel (
              { build("test-b") },
              { build("test-c") }
           )
           parallel (
              { build("test-d") },
              { build("test-e") }
           )
        } rescue {
           build ("test-f")
        }

        Whenever "test-b" job fails (for example), "test-d" and "test-e" will be executed still. Is there something wrong with the syntax?

        I have created the JENKINS-15900 issue to follow this problem.


        1. The syntax is ok; I just have no idea why this doesn't behave as expected, and I can't reproduce it

  8. It would be nice if there was an option to read the DSL from a file, like the envinject plugin allows.

  9. How can I run job with current build parameters?

    build("job", foo: $foo) does not work

    1. This is not implemented yet; it will probably expose a "params" variable as a Map. Would you like to contribute this? ;)

        1. Can you add a sample to the doc? It's not clear how to use it (if I correctly guess that it's included in 0.4).


        2. The usage for params is 

          build("paramJob1", PARAM_1:params["BUILD_PARAM"])
  10. I tried to run a job with several parameters.

    build("PMD", "root.cvs":"cvsroot", "package.cvs":"WEB/APP")

    However, it seems that the plugin only recognizes the first parameter.

    $ cmd.exe /C '"ant.bat -Droot.cvs=cvsroot tools.pmd.continuousIntegration ....

  11. Hello Everyone,

    Is there a facility to comment within the DSL block, so that at times one of the jobs can be commented out?

    for instance :  I have 3 jobs in the sequence of BuildFlow as follows :

    among the above 3 jobs, if I just want to run jobA and jobC,
    I currently have to delete the entry for jobB and then add it again when needed.

    It will be good to have an option like below :
    // build("jobB")

    Here after jobA directly jobC will run !!!

    Hoping for a reply, suggestions always welcome !!!

  12. Issue Tracking links to wrong component ('build-flow-plugin') or it was changed. Correct name is just 'build-flow'.

    1. don't know how to change this on Jira :'(

      1. I guess this block is automatically embedded here by Confluence... BTW, I see other plugins with the same problem, so I guess the component names were bulk-changed on JIRA... (sad)

        Maybe KK could fix this, but I guess he has higher-priority things to do, so what about putting a tip at the top of the page, like:

        [Here is our issues|'build-flow']
  13. Why is version 0.4 tagged in git but not available for download in Jenkins (even 1.473)?

    Is it a mistake, or is the version being verified somehow?

    Thank you, this plugin looks really great.

  14. Is it possible to use this plugin in Hudson version 2.2.1?

    I need a similar flow control and we are using hudson as a build tool.

    1. Don't know, and honestly don't care. Give it a try

  15. Thanks for this plugin. It's great. I'm wondering if there'll ever be support for a more dynamic setup. Specifically, I'd like to spawn off a dynamic number of parallel builds with dynamic parameters. Something like:


    for each item in items

    build ("job", param: "item")


    We can do some of these things using the Jenkins API, but I would like to find a way to manage this within Jenkins with less code.


    1. It took some digging, but I got an answer from the source code.

      Try the script below. It works!

       // allows syntax like : parallel(["Kohsuke","Nicolas"].collect { name -> return { build("job1", param1:name) } })
  16. How can I access parameters passed to a build in another build? I've tried pulling them out of the AbstractBuild but that doesn't seem to work.

    For instance I want to do :

    core = build("TryBuildFlowCore", PRIMARYWORKSPACE: "foobar") { out.println "$it.key -> $it.value" }
    out.println 'Triggered Parameters Map:'
    out.println params
    out.println 'Build Object Properties:' { out.println "$it.key -> $it.value" }
    parallel (

    This will lead to a Null Reference exception.

    Edited to put code in code block.

  17. Hi, apologies for asking this on the plugin page (I know you requested not to). I can't join the jenkins-users list at the moment (Can't access from work!)

    Is there any way to trigger a manual step in build-flow? For example, perform a  build, trigger a deployment, run integration tests against the deployed app, with the deployment needing to wait on manual approval.

    For example:

    Example build flow
    parallel {
    manual {
       build("deploy-all", deploy-ip: "")
    //Or manual("deploy-all", deploy-ip: "")???
    build("run-integration-tests", it-env-ip: "")

    I have also asked this on StackOverflow.


  18. Is it possible to pass git commit hash to next step? My flow is "Run unit tests" > "Deploy" > "Run smoke tests". I need to deploy the correct commit. What can happen is that I'll push two commits really fast and before the unit tests finish, so the "Deploy" job will try to deploy last commit and not the commit which was tested.

    1. You could do something like:

      Pass GIT_COMMIT sha to other jobs
      git_commit =["environment"]["GIT_COMMIT"]
      build( "unit-tests", GIT_COMMIT: git_commit )
      build( "deploy", GIT_COMMIT: git_commit )
      build( "smoke-tests", GIT_COMMIT: git_commit )

      Then use the Git plugin and have the pipeline job connected to git, then GIT_COMMIT will be in the pipeline env. Then in the other jobs use GIT_COMMIT instead of the head of the branch.

  19. Suppose I am running four parallel jobs and one job fails while the others are still running. Is there any way Jenkins can stop all the other running jobs if one job fails?

    1. This is technically possible, as Jenkins offers a job cancellation feature. It could maybe be an option for the parallel() keyword

  20. You can use an environment variable from one build in another with:

    b = build( "test-bf" )
    // use b.environment to get all variables
    a = b.environment["SVN_URL"]
    out.println a

    I guess this should be helpful, but I don't know where to put it...

  21. I needed to use SCM polling to trigger a Build Flow; you can type a Cron expression, but currently you can't select your SCM source and its parameters.

    so I've added the next tag to the file $JENKINS_HOME/plugins/build-flow-plugin/WEB-INF/classes/com/cloudbees/plugins/flow/BuildFlow/configure-entries.jelly  


    And it works fine! You can launch a new build through the SCM trigger

    I think this could be a great idea to include it in the new version.

    1. For this use case you should rather have a job triggered by the SCM and declare the flow as its downstream job.

      Having the flow declare an SCM doesn't make much sense, as the flow doesn't access this SCM to orchestrate the execution.

  22. Awesome plugin.

    A very nice enhancement would be the ability to block the build if some job is running, kind of like this plugin does:

  23. Upgrading from 0.4 to 0.5 causes all flow jobs to disappear. Why, and how can we prevent this?

    We have like 20 flows and it would be really bad if we had to recreate them all...

    1. Due to an inheritance change, Jenkins can't load the job because a collection is not initialized in the parent class during deserialization

      You can fix this by adding to your job config.xml :


      Please don't use this wiki as a bug tracker; either open a ticket, or (better) subscribe to the user list and ask there

      1. Fair enough, sorry for reporting the bug this way.

        I'll subscribe to the user list as well :-)

        Thanks for replying though! I'll pick up the manual actions ASAP. 

  24. I've opened a ticket in JIRA about Publishers not running after the flow jobs complete.  There was a previous ticket which addressed a similar issue -- namely, publishers not being saved from the configure page.  Well, they are being saved now -- but never run.  

    I've been digging for a bit and -- if I understand how the Publishers are invoked -- I believe that things should work with the current version of the plugin, but they aren't.  Any insight here would be useful.

     If we figure it out, we'll submit a pull request.  However, someone more familiar with this could probably fix it much quicker.

    1. And to follow up on this -- I can't see from the ticket history or the code that the plugin ever successfully supported publishers.  I'd think this is a pretty major oversight, as there aren't many viable workarounds.

  25. Hi, any indication about the roadmap and the future releases?



    1. We just released one version, what would you like to have next ?

      1. Roadmap is mostly about volunteers to contribute. 

        My personal focus will be on making the DSL safe (using groovy-sandbox) and make visualization work

  26. Ignore does not work. Can you please help me?

    ignore(FAILURE) {
        build( "send_twitter_notification" )
    }


    ERROR: Failed to run DSL Script
    groovy.lang.MissingMethodException: No signature of method: com.cloudbees.plugins.flow.FlowDelegate.ignore() is \
    applicable for argument types: (java.lang.String, Script1$_run_closure1_closure4) values: [FAILURE, Script1$_run_closure1_closure4@3a549607]
    Possible solutions: grep()
    	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  27. Hi,

    is there any way to use a for loop in a parallel( ... ) statement?

    We want to start many instances of a job with a counter parameter to run in parallel.



    1. use a loop to create a closure list or map, and pass it to parallel()

  28. Hi,

    It seems ignore(FAILURE) doesn't work when using a variable. Here is my code:

    I use a text parameter called tests

    def tests= params["tests"].split();
    ignore(FAILURE) {

    If I remove the ignore(FAILURE) {} part, the script works, but unfortunately the test suite stops at the first failure and the other tests won't be executed.

    Could you please help me figure this out?

  29. (Windows) I downloaded the source code and imported it as an Eclipse project.

    Then I ran hpi:run and opened http://localhost:8080/ in a web browser.

    But the job configuration page is displayed like this (I cannot attach a screenshot due to security), and when I run a build, nothing is executed.

    build trigger


    - xxxx

    - Build periodically

    - Poll SCM

     Define build flow using flow DSL (textarea)

    Post-build action


    Add post-build action

    As compared to normal situation there is no Flow Definition section.

  30. I'm still seeing this issue, even though it was supposed to be fixed back in March - - All build flow graphs disappear after a few hours (whether Jenkins restarts or not). Does anyone know if this is a known issue or an incompatibility with other plugins? I can't seem to find much evidence of anyone else having the same problems anymore.

    • Jenkins 1.537 (Windows 2012)
    • Build Flow 0.10
  31. Hi,

    I found some issues for build flow plug-in.

    1. For the latest version 0.10, which depends on the "buildgraph-view" plug-in: I installed buildgraph-view and then triggered the jobs. The jobs run fine, except the graph does not show.
    2. I downgraded to the previous version 0.8 and everything works fine, including the graph while jobs are running.

      When I come back to previously finished jobs, the graph has disappeared. In that case I'm not sure how long the build flow plug-in keeps the graph.

    Any help would be appreciated.

    1. This wiki is not an issue tracker, please use jira and join jenkins-user mailing list

  32. Is there a way to specify where a particular build should run, by providing a label or a node name?

  33. Hi,

    I have a few questions and a big problem.

    First my questions:

    1. Is it possible to start a sub-flow in the main flow, like this in the main flow DSL (named main_flow): build('sub_flow')

    2. Can I start another job after completing the flow, using a post-build action like 'trigger parameterized build on other projects'?

    And after all the problem.


    parallel (
        {retry ( 2 ){  build ("Job_Deploy_Batch_ReferenzBatch_Flow_Snap")  }},
        {retry ( 2 ){  build ("Job_Deploy_Batch_Referenz_Flow_Snap")  }},
        {retry ( 2 ){  build ("Job_Deploy_EJB_Referenz_Flow_Snap")  }},
        {retry ( 2 ){  build ("Job_Deploy_Service_GenesisWlM_Flow_Snap")  }},
        {retry ( 2 ){  build ("Job_Deploy_Service_Referenz_Flow_Snap")  }},
        {retry ( 2 ){  build ("Job_Deploy_Web_GenesisBatchAdmin_Flow_Snap")  }}
    )

    Everything is all right, but at the end the job failed.

    parallel {
    retry (attempt 1) {
    retry (attempt 1) {
    Schedule job Job_Deploy_Batch_ReferenzBatch_Flow_Snap
    retry (attempt 1) {
    Schedule job Job_Deploy_Batch_Referenz_Flow_Snap
    retry (attempt 1) {
    Schedule job Job_Deploy_Service_GenesisWlM_Flow_Snap
    retry (attempt 1) {
    Schedule job Job_Deploy_Web_GenesisBatchAdmin_Flow_Snap
    Schedule job Job_Deploy_EJB_Referenz_Flow_Snap
    retry (attempt 1) {
    Build Job_Deploy_Batch_Referenz_Flow_Snap #8 started
    Build Job_Deploy_Batch_ReferenzBatch_Flow_Snap #8 started
    Build Job_Deploy_Service_GenesisWlM_Flow_Snap #8 started
    Build Job_Deploy_Web_GenesisBatchAdmin_Flow_Snap #8 started
    Build Job_Deploy_EJB_Referenz_Flow_Snap #8 started
    Job_Deploy_Batch_ReferenzBatch_Flow_Snap #8 completed
    }
    Job_Deploy_EJB_Referenz_Flow_Snap #8 completed
    Job_Deploy_Web_GenesisBatchAdmin_Flow_Snap #8 completed
    Job_Deploy_Service_GenesisWlM_Flow_Snap #8 completed
    Job_Deploy_Batch_Referenz_Flow_Snap #8 completed
    Notifying upstream projects of job completion
    Finished: FAILURE

    Normally these steps should follow.

    build ("Server_Restart_Portal_HEAD")
    out.println 'Deploy skins and themes'
    retry ( 2 ){ skins_deploy = build ("Job_Deploy_SkinsAndThemes61_Flow_Snap") }
    out.println skins_deploy

    I have no idea why and how this happens.

    Any suggestions?

    I use jenkins 1.532.1 and build flow plugin 1.480.

    Thx Tommy

    1. if you have questions, JOIN THE USER-LIST

      this wiki isn't a forum

      1. Joined.

        Oh, not joined. Subscribing doesn't work; mail routing error:
        550-5.1.1 The email account that you tried to reach does not exist. Please try
        What about the problem? Do you have any ideas or suggestions relating to my problem?

  34. Hi,

    This plugin is just great. Thanks a lot.

    Is there a way to use conditionals in the DSL? I want to use a Boolean parameter that'll indicate whether a certain job should or shouldn't run.



    1. How do I join the users list?



  35. "build flow no longer has a workspace"

    But I need workspace. I want to create file on workspace to pass results to upstream job.

    Please return workspace back.

  36. Is there a work around to using custom workspace, as it doesn't appear to be available? as i'd like to archive some files for the build flow job i have created, linking them to each run of this job. (all my sub jobs use the same workspace).


  38. Hi 

     I am using this plugin for my project, and in some jobs I execute a batch script using Groovy's .execute() function; all of these jobs execute on a slave.

    Problem: I have a Build Flow job and have written " println "cmd /c ipconfig/all".execute().text " in the flow DSL.

    If I run it normally, it executes on the master and displays the master PC's system details.

    But if I run this job on a slave, it runs on the slave yet still shows the master PC's system details, when it should show the slave PC's system details.

    And I have already configured Groovy on the slave as well.

    I am facing this problem; if you have any solution, please let me know.

    1. I'm having the same issue, but you won't get your answer here as the plugin developers refuse to use this platform for plugin support.

      I have tried to click on the users-list link above but got to a dead end.

      1. Thanks Gil Shinar for your opinion.

         I understand that using this plugin we can control the job flow in an efficient way, so in my project I am using it for exactly that purpose.

        To get around the current problem I went for different logic (using a properties file).

  39. Hi,

    I am using "Read DSL from file" to read the DSL from a file in local storage, but it can't seem to resolve environment variables, and just treats them as normal strings.

    Any workaround? Thanks.

    build-flow reading DSL from file '/user/${PLATFORM_NAME}/test.groovy'
    FATAL: /user/${PLATFORM_NAME}/test.groovy (No such file or directory)
    1. I have worked on it, but it can't resolve environment variables in the path.

      It always considers the path a plain string; that's why I always give a hard-coded path.

  40. I wasted several days trying to find documentation on the DSL you are using. Why doesn't the main documentation mention anywhere that this is Groovy? FYI: if you want to do more than the very limited scope of these examples, refer to Groovy scripting. One simple example that goes beyond the scope of this documentation is running jobs via build parameters.

  41. In the Basics section, it refers to "triggered parameters" (params). Are these the same parameters that can be configured from a job's configure page (under "This build is parametrized")?

    1. For the curious: Yes, they are the same parameters that are passed to the build and can be accessed in the Groovy DSL Script as follows:

      // Will print value of myParamName
      println params['myParamName']

      Params is basically a dictionary or associative array.

  42. Is there a way to pass in the delay for the retry?

    retry ( 3, X) {
        build( "this_job_may_fail" )
    }

    where X is the delay in seconds?

  43. Hi,

    I'm using the Jenkins build flow plugin and wondering about a re-usability use case.
    Imagine you want to orchestrate multiple components the same way (following some blueprint; just some steps and variables may differ).
    It's not possible to write this directly in the DSL, but you can use Groovy for that and have something like the snippet below (in the real situation it's much more complicated):

    public class CommonOrchestrator {
        def delegate
        def currentBuild
        def params
        public CommonOrchestrator(delegate, currentBuild, params) {
            this.delegate = delegate
            this.currentBuild = currentBuild
            this.params = params
        def pipe() {
        protected void step1() {}
        protected void step2() {
    public class SpecificOrchestrator extends CommonOrchestrator {
        public SpecificOrchestrator(delegate, currentBuild, params) {
            super(delegate, currentBuild, params)
        public void step1() {
    new SpecificOrchestrator(this, build, params).pipe()

    Can you please tell me if you have ever considered something like this, or do you have a recommendation for how to implement it using the build flow plugin?
    I have tried various approaches to using multiple files/dependencies in the DSL or Groovy file, but with no result.
    I would be really glad for any help; thank you in advance!

  44. Build Pipeline Plugin and Delivery Pipeline Plugin don't display any upstream or downstream jobs of a build flow.

    Is there a way to get this plugin to work together with these views? Is there a way to get a better visualization than with buildgraph-view?

  45. What's the status of this plugin?

    Is it fair to say that at this point it is mostly legacy and new implementations should check instead?

    Is there any new development / bug fixing happening?