
This plugin allows Jenkins agents to be dynamically provisioned on a Kubernetes cluster.

Plugin Information

View Kubernetes on the plugin site for more information.

Older versions of this plugin may not be safe to use. Please review the security warnings listed on the plugin site before using an older version.

Background

The aim of the Kubernetes plugin is to use a Kubernetes cluster to dynamically provision a Jenkins agent (using Kubernetes scheduling mechanisms to optimize the load), run a single build, then tear down that agent.
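That provision/build/tear-down cycle can be sketched in a minimal scripted pipeline. The label, container name, and image below are illustrative, not prescribed by the plugin:

```groovy
// Sketch: the plugin schedules a pod for this build, runs the steps in it,
// and deletes the pod when the node block exits.
podTemplate(label: 'mypod', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-alpine', ttyEnabled: true, command: 'cat')
]) {
    node('mypod') {                 // provisions a pod-backed agent for this build
        stage('Build') {
            container('maven') {    // run the step inside the 'maven' container
                sh 'mvn -version'
            }
        }
    }                               // the agent pod is torn down here
}
```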

Setup

A quick setup is:

- get a Kubernetes cluster running

- use a docker image for the agents, or create your own

Kubernetes Environment

Follow the getting started guide on Kubernetes or use Google Kubernetes Engine.

Docker image for Agents

You can find ready-made Docker images for Jenkins agents using JNLP at jenkins/jenkins-slave.

The images can be customized to fit your needs.

Configuration

Refer to the README in the plugin repository

Releases

Refer to the CHANGELOG in the plugin repository

Issue Tracker

Can be found HERE

37 Comments

  1. One of the things that I am confused about (and I suspect others might be as well):

    What are the requirements for the Jenkins master to manage Kubernetes? I assume it needs the kubectl command installed?

    If Jenkins is itself running on K8S, is there a Jenkins image that has the requirements?

    1. No need for kubectl; the jenkins/jenkins image will do.

  2. Having followed the setup instructions precisely, Kubernetes is spinning up slave containers that die with "connect timed out" errors when they try to JNLP connect back to my Jenkins master. Is there some additional network configuration necessary to get the slave containers to talk to the master?

    1. Brett,

      I presume you have the JNLP port disabled. Go to "Manage Jenkins > Configure Global Security" and specify either a "Fixed" port or a "Random" port (in case it is currently "Disabled").
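      The same setting can also be changed from the Jenkins Script Console. A hedged sketch (50000 is just an example value; this may throw if the port is enforced by a system property):

```groovy
// Script Console sketch: pin the inbound (JNLP) agent port to a fixed value.
import jenkins.model.Jenkins

def jenkins = Jenkins.instance
jenkins.setSlaveAgentPort(50000)   // -1 = disabled, 0 = random, >0 = fixed
jenkins.save()
println "Inbound agent port: ${jenkins.slaveAgentPort}"
```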

  3. Can different Jenkins masters use the same Kubernetes cluster?

    1. Yes. Kubernetes is inherently multi-tenant.

      To facilitate ease of management, consider running your various Jenkins masters (and their associated spawned agent pods) in dedicated namespaces within the shared cluster.
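      A pod template can pin its agent pods to such a namespace via the `namespace` parameter. A hedged sketch, assuming a namespace `jenkins-team-a` already exists in the cluster:

```groovy
// Sketch: keep one master's agent pods in a dedicated namespace so
// multiple masters can share the cluster cleanly.
podTemplate(label: 'team-a-agent', namespace: 'jenkins-team-a', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-alpine', ttyEnabled: true, command: 'cat')
]) {
    node('team-a-agent') {
        container('maven') {
            sh 'echo building in a per-master namespace'
        }
    }
}
```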

  4. j u

    What's the best practice for how one would build docker images from within these jenkins slaves?

    1. Depends on what you are looking for regarding "best", but "Docker outside of Docker" pattern will work:
      https://github.com/jenkinsci/kubernetes-plugin/blob/master/examples/docker.groovy
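      Along the lines of that linked docker.groovy example, the pattern mounts the node's Docker socket into the agent pod so `docker build` talks to the host daemon. A sketch (label, container name, and image tag are illustrative):

```groovy
// "Docker outside of Docker": the agent container has only the docker CLI;
// the mounted socket points it at the Kubernetes node's Docker daemon.
podTemplate(label: 'docker-agent',
    containers: [
        containerTemplate(name: 'docker', image: 'docker', ttyEnabled: true, command: 'cat')
    ],
    volumes: [
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')
    ]) {
    node('docker-agent') {
        container('docker') {
            sh 'docker version'
        }
    }
}
```

Note that this exposes the node's Docker daemon to builds, so it trades isolation for convenience.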

  5. I'm not sure I follow the nesting pod templates section of the user guide. What docker image is actually used when you nest the pod templates? In the examples, each pod template specifies a docker image to use (one pod uses a maven-enabled image, the other uses a docker-enabled image) - in what container are the ```sh``` steps executed, and how are the dual requirements (ability to execute docker and maven commands) met in a single container?
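    As I understand the nesting behavior (hedged sketch, illustrative names): the inner template inherits the outer one's containers, so a single pod is created with both containers, and each `sh` step runs in whichever container the enclosing `container(...)` step names (or the default jnlp container if none is named):

```groovy
// Nested pod templates: the resulting pod carries both the maven and the
// docker container; container(...) selects where each sh step executes.
podTemplate(label: 'outer', containers: [
    containerTemplate(name: 'maven', image: 'maven:3-alpine', ttyEnabled: true, command: 'cat')
]) {
    podTemplate(label: 'inner', containers: [
        containerTemplate(name: 'docker', image: 'docker', ttyEnabled: true, command: 'cat')
    ]) {
        node('inner') {
            container('maven')  { sh 'mvn -version' }     // runs in the maven container
            container('docker') { sh 'docker version' }   // runs in the docker container
        }
    }
}
```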

  6. Hai jenkinsci/jnlp-slave

    1. Then you will need to build the JNLP agent into your docker image.

      1. I built my image using jenkinsci/jnlp-slave as the base image,

        i.e. in my Dockerfile: FROM jenkinsci/jnlp-slave

  7. Hi. The config option "Max number of instances" is not working as I expect. I have two pod templates: one is configured with 2 max instances and the other with 6. The "Container Cap" is set to 8, but for example the pod template configured with 2 max instances is creating 3 or 4 instances. Am I doing something wrong?

    1. I'm seeing the same issue. I set this number to 10, but 20+ containers are actually launched. This setting worked before; is this a regression?

  8. I found one thing that is really bad: when a pod is created, it sets these labels on all slaves that are created:

    Labels: jenkins=slave
    jenkins/autoscale=true

    What I would like is for it to set a unique value, as happens when deploying something normally on Kubernetes:

    Labels: pod-template-hash=380424406
    run=net-tools

     

    Not having a unique label will cause problems if you need to expose services, e.g. if you have functional tests and hardware running outside the cluster. If multiple pods have the SAME labels and you run the kubectl expose command, it will use the labels as selectors, resulting in multiple endpoints, which is not what you want.

    It would be nice to have something like this instead:

     

    Labels: jenkins-slave-hash=<unique/random hash>

    jenkins=slave


  9. Hello.

     

    I am a k8s newbie. Is there an example use case? Or can you guide me on how to use this?

    I am setting up a Kubernetes cluster (3 servers, for example) using kubeadm.

    But I have no idea about the next steps...

     

    In my case, I have one dedicated Jenkins master server, and I want to use Kubernetes for the agents. But I don't know how.

    Thank you.

  10. Hello,

    When I execute the pipeline which is provided at GitHub Repo, it always fails to run. 

    It throws the following exception, how to resolve this? Any help would be highly appreciated. 

    org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
    WorkflowScript: 3: Expected to find someKey "someValue" @ line 3, column 16.
           kubernetes {
                      ^
    WorkflowScript: 3: Missing required parameter for agent type "kubernetes": containerTemplate @ line 3, column 5.
           kubernetes {
           ^
    
    WorkflowScript: 6: Invalid config option "yaml" for agent type "kubernetes". Valid config options are [label, containerTemplate, activeDeadlineSeconds, cloud, inheritFrom, instanceCap, nodeSelector, serviceAccount, workingDir] @ line 6, column 7.
             yaml """
             ^
    
    3 errors
    
    	at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
    	at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1085)
    	at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:603)
    	at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:581)
    	at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:558)
    	at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
    	at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
    	at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
    	at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
    	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.doParse(CpsGroovyShell.java:133)
    	at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:127)
    	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:559)
    	at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:520)
    	at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:323)
    	at hudson.model.ResourceController.execute(ResourceController.java:97)
    	at hudson.model.Executor.run(Executor.java:429)
    Finished: FAILURE

    1. Sorry, the link was wrong. I am trying to execute the declarative pipeline with the container template defined as YAML.

      1. You need the latest 1.6.0 released yesterday.

        Questions are better addressed on the jenkins-user mailing list.

  11. When I used Kubernetes plugin 1.1.2, everything was working fine. I just upgraded the plugin to 1.6.0 and now it has issues when creating Kubernetes credentials using the "OpenShift OAuth token" kind in Credentials. The only field visible to me is Scope; nothing after that.

    I am expecting to see 3 more fields: Token, ID, Description. Thanks!

    1. Hi,

      I stumbled across this error, too.

      Fixed it by noticing that using the "OpenShift OAuth token" credentials is deprecated.

      You are supposed to use the "StringCredentials", i.e. plain text instead.

  12. After upgrading from v1.3.2 to 1.6 I am getting a permission error (AccessDenied) when trying to access an EmptyDir volume I have created for the pod template used by my job. Is the usage of a ServiceAccount in the pod now enforced in v1.6? At present I am not using one.

    I have jenkins deployed in the cluster using the Jenkins Helm chart and the master pod does use a Service Account.

  13. How do I configure the Kube/Jenkins plugin to use the specific Service Account that's been setup for access?


    > kubectl describe serviceaccounts/cd-jenkins
    Name: cd-jenkins
    Namespace: default
    Labels: app=cd-jenkins
    chart=jenkins-0.16.1
    heritage=Tiller
    release=cd
    Annotations: <none>
    Image pull secrets: <none>
    Mountable secrets: cd-jenkins-token-wc2sj
    Tokens: cd-jenkins-token-wc2sj
    Events: <none>

     

    # from inside the jenkins slave pod....
    root@default-dzspv:~# kubectl auth can-i get pods
    no - Unknown user "system:serviceaccount:default:default"

  14. hello,

     

    I'm trying to understand the approach/mindset I need to take with Jenkins and the Kubernetes plugin when it comes to making my Maven settings.xml file (I've created a global one with the Config File plugin), which has my Artifactory username and password, available to the agent that Kubernetes spins up.

    I'm currently using a pipeline (jenkins file) job.  

    But also need to support non pipeline jobs.  

    My job is currently failing to resolve my maven dependencies due to authentication issues.

    Thanks

    Jay

     

  15. Does usage of this plugin require privileged mode? Also, can we specify the user to be used when starting up slaves?

  16. Hi,

    I want to suggest adding a function to retain the slave after a job, so it can be reused by the next job without being created again.

    We could add a checkbox to indicate whether the user wants to keep the slave after a job; also, in case a slave goes offline, it could try to reconnect to the master periodically until it is connected.

    Thanks.

    1. That already exists: the slave retention timeout.
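      In pipeline pod templates this maps to the `idleMinutes` parameter. A hedged sketch (the label and the 30-minute value are illustrative):

```groovy
// Sketch: keep the agent pod alive while idle so the next build can reuse it
// instead of provisioning a fresh pod.
podTemplate(label: 'reusable', idleMinutes: 30, containers: [
    containerTemplate(name: 'maven', image: 'maven:3-alpine', ttyEnabled: true, command: 'cat')
]) {
    node('reusable') {
        container('maven') {
            sh 'echo this pod is retained for up to 30 idle minutes'
        }
    }
}
```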

  17.  

    How can I use a specific container for Execute Shell? I specify the label jenkins-node, which is the label I use for the Kubernetes container in the plugin, but it's not using that container; it's using busybox.

    Details: 

    Kubernetes 1.9.6, kubernetes plugin: 1.10.1, Jenkins 2.121.1

    I set up a container Template in K8s plugin.  Name: jenkins-node, Docker image: darenjacobs/custom-slave:3.20.1

    When I create a pipeline job, everything works fine. It says "Running on jenkins-node-xxxxx" and I have all the applications installed in that image, specifically packer, ansible, maven, etc. I am able to get to the container I want by specifying the label that correlates to the image. Again, this works just fine.

     

    When I run a Freestyle job, it will create a container "jenkins-node-8hql3".

    I have the job configured to restrict it to Label Expression: "jenkins node", which again has the Docker image I want in the Kubernetes plugin. The job runs, and I see that it claims to grab the correct Docker image:

    <grabbing the correct info>

    Agent jenkins-node-8hql3 is provisioned from template Kubernetes Pod Template
    Agent specification [Kubernetes Pod Template] (foo-bar bar-foo jenkins-slave jenkins-node jenkins-pod): 
    * [jenkins-node] darenjacobs/custom-slave:3.20.1(resourceRequestCpu: , resourceRequestMemory: , resourceLimitCpu: , resourceLimitMemory: )
    
    Building remotely on jenkins-node-8hql3 (jenkins-pod bar-foo foo-bar jenkins-node jenkins-slave) in workspace /home/jenkins/workspace/Freestyle-jenkins_node
    [Freestyle-jenkins_node] $ /bin/sh -xe /tmp/jenkins7849169988982767396.sh

    </grabbing the correct info>

    The build command is as follows: 

    echo "Hello World!"
    sleep 10
    packer --version

    The job fails: 

    + echo Hello World!
    Hello World!
    + sleep 10
    + packer --version
    /tmp/jenkins4809177356905328279.sh: line 1: packer: not found
    Build step 'Execute shell' marked build as failure
    Finished: FAILURE

     

    It's not running the correct image.  

     

    If I go to the console and connect to the container I can run packer just fine.

    > kubectl exec -it jenkins-node-8hql3 /bin/bash
    Defaulting container name to jenkins-node.
    Use 'kubectl describe pod/jenkins-node-8hql3 -n default' to see all of the containers in this pod.
    jenkins@jenkins-node-8hql3:~$ packer --version
    1.2.4

     

    If I'm not being clear please let me know.  I just want the Execute commands to run packer, ansible, maven, etc, which are installed on the image, but not accessible from the Freestyle job.  

  18. Need some help.

     

    Does anyone know how I can overwrite the "tools→home path" when the container spins up with a pipeline PodTemplate?

  19. Need help regarding the configuration of Kubernetes credentials in the cloud section of Jenkins.

    I provided a valid Kubernetes URL, but when trying to provide credentials, they are not accepted or even shown in the drop-down list.

    Credentials -> Add -> Jenkins -> Kind -> Kubernetes configuration (Kubeconfig)

    I provided the following details 

     

    • Scope: Global(Jenkins, nodes, items,all child items, etc)
    • ID: Test_Kube_cred
    • kubeconfig: selected "Enter Directly" and pasted the contents of the .kube/config file from master node of K8S.

    Then I selected the "Add" button.

    But the credentials with ID Test_Kube_cred are not shown in the drop down for "Credential" section.

     

    I tried all available options of "Kubeconfig":

    • Enter directly
    • From a file on the Jenkins master
    • From a file on the Kubernetes master node

    but none of the credentials are displayed in the drop-down to select. I am not able to configure this plugin.

    Is this a bug, or am I missing something? Can anyone help me, please?

    Regards,

    Raja.Kavuri 


  20. Need help.

    What does this mean in the Jenkins log?

    "Terminating Kubernetes instance for agent jenkins-slave-h34rl-8p0cz"

    My Jenkinsfile is:

    pipeline{
        agent{
            label "docker"
        }
        stages{
            stage("Clone"){
                steps{
                    git "https://github.com/chengjingtao/alauda-ci.git"
                }
            }
            stage("Build"){
                steps{
                    script{
                        stash "code"
                        def label = "ci-${UUID.randomUUID().toString()}" 
                      podTemplate(
                              label: label,
                              containers:[
                                      containerTemplate(name: 'ci-container', image: "gobuild:1.8-alpine", ttyEnabled: true,
                                          envVars: [envVar(key: "LANG", value: "C.UTF-8")])
                              ]
                      ){
                        node(label) {
                              container('ci-container') {
                                  unstash "code"
                                  sh "make build"
                                  stash "dest"
                              }
                        } 
                      }
                      
                      dir("__dest__"){
                        unstash "dest"  
                      }
                    }                
                }
    
            }
            stage("Else"){
                steps{
                    echo "some thing else"
                }
            }
        }
    }

     

    It works well, but I found some exceptions in the Jenkins log:

     

    Accepted JNLP4-connect connection #24 from 10.1.3.72/10.1.3.72:55392
    
    Aug 14, 2018 6:10:49 PM INFO okhttp3.internal.platform.Platform log
    ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
    
    Aug 14, 2018 6:10:49 PM INFO okhttp3.internal.platform.Platform log
    ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1 doLaunch
    Created process inside pod: [jenkins-slave-h34rl-8p0cz], container: [ci-container] with pid:[-1]
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
    Terminating Kubernetes instance for agent jenkins-slave-h34rl-8p0cz
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.pipeline.PodTemplateStepExecution$PodTemplateCallback finished
    Removing pod template jenkins-slave-h34rl from cloud kubernetes
    
    Aug 14, 2018 6:10:53 PM INFO okhttp3.internal.platform.Platform log
    ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
    
    Aug 14, 2018 6:10:53 PM WARNING jenkins.slaves.DefaultJnlpSlaveReceiver channelClosed
    Computer.threadPoolForRemoting [#115] for jenkins-slave-h34rl-8p0cz terminated
    java.nio.channels.ClosedChannelException
    	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
    	at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
    	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
    	at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
    	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
    	at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
    	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:314)
    	at hudson.remoting.Channel.close(Channel.java:1450)
    	at hudson.remoting.Channel.close(Channel.java:1403)
    	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:799)
    	at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:103)
    	at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:715)
    	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    
    
    Aug 14, 2018 6:10:53 PM WARNING hudson.remoting.Request$2 run
    Failed to send back a reply to the request hudson.remoting.Request$2@6289fb03
    java.nio.channels.ClosedChannelException
    	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
    	at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
    	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
    	at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
    	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
    	at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
    	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:314)
    	at hudson.remoting.Channel.close(Channel.java:1450)
    	at hudson.remoting.Channel.close(Channel.java:1403)
    	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:799)
    	at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:103)
    	at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:715)
    	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    Caused: hudson.remoting.ChannelClosedException: Channel "hudson.remoting.Channel@64aebe73:JNLP4-connect connection from 10.1.3.72/10.1.3.72:55392": channel is already closed
    	at hudson.remoting.Channel.send(Channel.java:717)
    	at hudson.remoting.Request$2.run(Request.java:382)
    	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
    	at org.jenkinsci.remoting.CallableDecorator.call(CallableDecorator.java:19)
    	at hudson.remoting.CallableDecoratorList$1.call(CallableDecoratorList.java:21)
    	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
    	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave deleteSlavePod
    Terminated Kubernetes instance for agent e2equota/jenkins-slave-h34rl-8p0cz
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
    Disconnected computer jenkins-slave-h34rl-8p0cz
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
    Terminating Kubernetes instance for agent docker-55tnl
    
    Aug 14, 2018 6:10:53 PM INFO org.jenkinsci.plugins.workflow.job.WorkflowRun finish
    demo #4 completed: SUCCESS
    
    Aug 14, 2018 6:10:53 PM INFO okhttp3.internal.platform.Platform log
    ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
    
    Aug 14, 2018 6:10:53 PM WARNING jenkins.slaves.DefaultJnlpSlaveReceiver channelClosed
    Computer.threadPoolForRemoting [#115] for docker-55tnl terminated
    java.nio.channels.ClosedChannelException
    	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
    	at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
    	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:832)
    	at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:181)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.switchToNoSecure(SSLEngineFilterLayer.java:283)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processWrite(SSLEngineFilterLayer.java:503)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processQueuedWrites(SSLEngineFilterLayer.java:248)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doSend(SSLEngineFilterLayer.java:200)
    	at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.doCloseSend(SSLEngineFilterLayer.java:213)
    	at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.doCloseSend(ProtocolStack.java:800)
    	at org.jenkinsci.remoting.protocol.ApplicationLayer.doCloseWrite(ApplicationLayer.java:173)
    	at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer$ByteBufferCommandTransport.closeWrite(ChannelApplicationLayer.java:314)
    	at hudson.remoting.Channel.close(Channel.java:1450)
    	at hudson.remoting.Channel.close(Channel.java:1403)
    	at hudson.slaves.SlaveComputer.closeChannel(SlaveComputer.java:799)
    	at hudson.slaves.SlaveComputer.access$800(SlaveComputer.java:103)
    	at hudson.slaves.SlaveComputer$3.run(SlaveComputer.java:715)
    	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    
    
    Aug 14, 2018 6:10:53 PM WARNING org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
    Slave pod docker-55tnl was not deleted due to retention policy Always.
    
    Aug 14, 2018 6:10:53 PM INFO org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
    Disconnected computer docker-55tnl

     

    Did I do something wrong?

    Thanks for your help.


  21. I have the same issue as above, in the Jenkins log and the agent log.

    But my pipeline does not work, returning:

     

    java.lang.InterruptedException: sleep interrupted
    	at java.lang.Thread.sleep(Native Method)
    	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.waitUntilExists(BaseOperation.java:959)
    	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.waitUntilReady(HasMetadataOperation.java:219)
    	at io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.waitUntilReady(HasMetadataOperation.java:37)
    	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.waitUntilContainerIsReady(ContainerExecDecorator.java:417)
    	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:255)
    	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:236)
    	at hudson.Launcher$ProcStarter.start(Launcher.java:449)
    	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:186)
    	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:86)
    	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:182)
    	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
    	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
    	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
    	at sun.reflect.GeneratedMethodAccessor425.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
    	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
    	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
    	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
    	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
    	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
    	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
    	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
    	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
    	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
    	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
    Caused: java.io.IOException: Failed to execute shell script inside container [build] of pod [localhost]. Timed out waiting for container to become ready!
    	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.waitUntilContainerIsReady(ContainerExecDecorator.java:438)
    	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.doLaunch(ContainerExecDecorator.java:255)
    	at org.csanchez.jenkins.plugins.kubernetes.pipeline.ContainerExecDecorator$1.launch(ContainerExecDecorator.java:236)
    	at hudson.Launcher$ProcStarter.start(Launcher.java:449)
    	at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:186)
    	at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:86)
    	at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:182)
    	at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:229)
    	at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:153)
    	at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
    	at sun.reflect.GeneratedMethodAccessor425.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
    	at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
    	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
    	at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
    	at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
    	at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
    	at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
    	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
    	at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
    	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:133)
    	at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
    	at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
    	at WorkflowScript.run(WorkflowScript:25)
    	at ___cps.transform___(Native Method)
    	at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
    	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
    	at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
    	at sun.reflect.GeneratedMethodAccessor250.invoke(Unknown Source)
    	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.lang.reflect.Method.invoke(Method.java:498)
    	at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
    	at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
    	at com.cloudbees.groovy.cps.Next.step(Next.java:83)
    	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
    	at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
    	at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:122)
    	at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:261)
    	at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
    	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34)
    	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59)
    	at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:108)
    	at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58)
    	at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:174)
    	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332)
    	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83)
    	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244)
    	at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232)
    	at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
    	at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    	at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    	at java.lang.Thread.run(Thread.java:748)
    Finished: FAILURE

    This happens after updating my IKS Kubernetes cluster to 1.11.

    Can somebody help?

  22. Hi, I could be approaching this completely wrong, so let me start by describing my organization's infrastructure. We have an official in-house cloud that provides access to many powerful nodes. However, that cluster is plagued by frequent downtime and network issues. My group has been hurt so badly by this that we decided to set up our own cluster. Since we operate on a much smaller compute budget, we do not have the scale of the company's cluster.

    We would like our Jenkins masters to live in our own cluster and to spawn pods in the company's cluster when it is operational. During downtime, we would flip the settings in Jenkins and have our own cluster provide the service; degraded service is better than no service. I am not a k8s expert, but from what I understand there are network routing challenges to solve so that pods in a different cluster can communicate back to the Jenkins master. Are there any tips on what needs to be set up to accomplish this? Any help is much appreciated!
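    One hedged sketch of what this can look like: the provisioned agents connect back to the master over JNLP, so the Kubernetes cloud must be configured with a Jenkins URL (and optionally a JNLP tunnel address) that is reachable from inside the remote cluster, not just from inside the master's own cluster. Assuming the Configuration as Code plugin is used to define the cloud, the definition might look roughly like the fragment below — every URL, credential ID, and namespace here is a placeholder for illustration, not a value from this thread:

    ```yaml
    jenkins:
      clouds:
        - kubernetes:
            name: "company-cluster"
            # API endpoint of the remote cluster; must be reachable from the master
            serverUrl: "https://company-k8s.example.com"
            credentialsId: "company-k8s-token"   # hypothetical credential ID
            namespace: "jenkins-agents"
            # URL the spawned agent pods use to reach the master; must be routable
            # from inside the remote cluster (e.g. an externally exposed ingress)
            jenkinsUrl: "https://jenkins.example.com"
            # Optional host:port for the agent (JNLP) port if it is exposed separately
            jenkinsTunnel: "jenkins-agent.example.com:50000"
    ```

    Under this setup, failing over between clusters amounts to pointing `serverUrl` and `credentialsId` at the other cluster; the `jenkinsUrl`/`jenkinsTunnel` endpoints just have to stay reachable from whichever cluster is active.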

  23. Hi everyone, I would like to know if there is any community channel, such as Gitter or Slack, where we can discuss this plugin. Please let me know.

  24. Vinicius Xavier, I can only confirm your issue. I'm just testing the new IKS 1.11, as we need to switch to it soon, and I ran into the same issue with IKS 1.11.

    This is tracked as JENKINS-53297; please follow that issue.

    Sample Jenkinsfile:
    podTemplate(label: 'mypod', containers: [
        containerTemplate(name: 'java', image: 'openjdk:8-jdk', ttyEnabled: true, command: 'cat')
    ]) {

        node('mypod') {
            stage('Checkout') {
                echo "test"
            }
            container('java') {
                stage('Static Code Analysis') {
                    sh "java -version"
                }
            }
        }
    }



    Regards, Leon