I'm trying to add these FileBot commands to my Windows context menu, but my script doesn't seem to be working: I can add the entries to the menu, but when I click one, nothing happens.
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Prefs]

[HKEY_CLASSES_ROOT\*\shell]

[HKEY_CLASSES_ROOT\*\shell\FBRenameM]
@="Rename Movie"
"NoWorkingDirectory"=""

[HKEY_CLASSES_ROOT\*\shell\FBRenameM\command]
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n} [{y}] {vf} {af}\""
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n} [{y}] {vf} {af}\""

[HKEY_CLASSES_ROOT\*\shell\Filebot]
@="Rename Movies To H:\\Movies"
"NoWorkingDirectory"=""

[HKEY_CLASSES_ROOT\*\shell\Filebot\command]
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"H:/Movies/{n} [{y}]/{n} [{y}] {vf} {af}\""
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"H:/Movies/{n} [{y}]/{n} [{y}] {vf} {af}\""

[HKEY_CLASSES_ROOT\*\shell\FilebotTV]
"NoWorkingDirectory"=""
@="Rename Tv To G:\\TV"

[HKEY_CLASSES_ROOT\*\shell\FilebotTV\command]
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"G:/TV/{n}/{n} {'Season'+s.pad(2)}/{n}-{s00e00}-{t}\""
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"G:/TV/{n}/{n} {'Season'+s.pad(2)}/{n}-{s00e00}-{t}\""

[HKEY_CLASSES_ROOT\*\shell\FilebotTVR]
"NoWorkingDirectory"=""
@="Rename Tv"

[HKEY_CLASSES_ROOT\*\shell\FilebotTVR\command]
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n}-{s00e00}-{t}\""
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n}-{s00e00}-{t}\""

[HKEY_CLASSES_ROOT\Directory\shell\FBRenameM]
@="Rename Movie"
"NoWorkingDirectory"=""

[HKEY_CLASSES_ROOT\Directory\shell\FBRenameM\command]
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n} [{y}] {vf} {af}\""
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n} [{y}] {vf} {af}\""

[HKEY_CLASSES_ROOT\Directory\shell\filebot]
@="Rename Movie To G:\\Movies"
"NoWorkingDirectory"=""

[HKEY_CLASSES_ROOT\Directory\shell\filebot\command]
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"H:/Movies/{n} [{y}]/{n} [{y}] {vf} {af}\""
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"H:/Movies/{n} [{y}]/{n} [{y}] {vf} {af}\""

[HKEY_CLASSES_ROOT\Directory\shell\FilebotTV]
"NoWorkingDirectory"=""
@="Rename Tv To G:\\TV"

[HKEY_CLASSES_ROOT\Directory\shell\FilebotTV\command]
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"G:/TV/{n}/{n} {'Season'+s.pad(2)}/{n}-{s00e00}-{t}\""
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --action move --conflict skip -non-strict --format \"G:/TV/{n}/{n} {'Season'+s.pad(2)}/{n}-{s00e00}-{t}\""

[HKEY_CLASSES_ROOT\Directory\shell\FilebotTVR]
"NoWorkingDirectory"=""
@="Rename Tv"

[HKEY_CLASSES_ROOT\Directory\shell\FilebotTVR\command]
"IsolatedCommand"="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n}-{s00e00}-{t}\""
@="C:\\Program Files\\FileBot\\filebot.exe -rename \"%1\" --conflict skip -non-strict --format \"{n}-{s00e00}-{t}\""
I have a pretty standard Dockerfile:
FROM python:3.6.5
WORKDIR /src
COPY . /src
... install and configure things ...
and a pretty standard Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Test') {
            agent { dockerfile true }
            steps {
                dir('/src') {
                    sh 'pwd' // any command
                }
            }
        }
    }
}
However, when this is run in Jenkins (I should mention that I'm currently testing with a local Jenkins instance running in a Docker container), the Dockerfile builds successfully, but when Jenkins attempts to run the step I get the following output:
[Pipeline] withDockerContainer
Jenkins seems to be running inside container aaee62f2a28e29b94c13fcdc08c1a82ef7baed48beabe54579db07b2fbd26b23
$ docker run -t -d -u 1000:1000 -w "/var/jenkins_home/workspace/My Project" --volumes-from aaee62f2a28e29b94c13fcdc08c1a82ef7baed48beabe54579db07b2fbd26b23 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** 12724611f0bf2363c9eee7288654e43eca2aabaf cat
$ docker top 4e29bc102d8f4e6b4ffc142fc06eb706e95b00fa6190b2927f4f79f0cfa53af5 -eo pid,comm
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] dir
Running in /src
[Pipeline] {
[Pipeline] sh
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
$ docker stop --time=1 4e29bc102d8f4e6b4ffc142fc06eb706e95b00fa6190b2927f4f79f0cfa53af5
$ docker rm -f 4e29bc102d8f4e6b4ffc142fc06eb706e95b00fa6190b2927f4f79f0cfa53af5
which is not what I expected, and when the step "finishes" there is the following failure message:
java.nio.file.AccessDeniedException: /src
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at hudson.FilePath.mkdirs(FilePath.java:3273)
at hudson.FilePath.access$1300(FilePath.java:213)
at hudson.FilePath$Mkdirs.invoke(FilePath.java:1254)
at hudson.FilePath$Mkdirs.invoke(FilePath.java:1250)
at hudson.FilePath.act(FilePath.java:1078)
at hudson.FilePath.act(FilePath.java:1061)
at hudson.FilePath.mkdirs(FilePath.java:1246)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask$FileMonitoringController.<init>(FileMonitoringTask.java:181)
at org.jenkinsci.plugins.durabletask.BourneShellScript$ShellController.<init>(BourneShellScript.java:221)
at org.jenkinsci.plugins.durabletask.BourneShellScript$ShellController.<init>(BourneShellScript.java:210)
at org.jenkinsci.plugins.durabletask.BourneShellScript.launchWithCookie(BourneShellScript.java:131)
at org.jenkinsci.plugins.durabletask.FileMonitoringTask.launch(FileMonitoringTask.java:99)
at org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep$Execution.start(DurableTaskStep.java:305)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:268)
at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:176)
at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1022)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:42)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:157)
at org.kohsuke.groovy.sandbox.GroovyInterceptor.onMethodCall(GroovyInterceptor.java:23)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxInterceptor.onMethodCall(SandboxInterceptor.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker$1.call(Checker.java:155)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:159)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
at org.kohsuke.groovy.sandbox.impl.Checker.checkedCall(Checker.java:129)
at com.cloudbees.groovy.cps.sandbox.SandboxInvoker.methodCall(SandboxInvoker.java:17)
at WorkflowScript.run(WorkflowScript:23)
at ___cps.transform___(Native Method)
at com.cloudbees.groovy.cps.impl.ContinuationGroup.methodCall(ContinuationGroup.java:57)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.dispatchOrArg(FunctionCallBlock.java:109)
at com.cloudbees.groovy.cps.impl.FunctionCallBlock$ContinuationImpl.fixArg(FunctionCallBlock.java:82)
at sun.reflect.GeneratedMethodAccessor143.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.cloudbees.groovy.cps.impl.ContinuationPtr$ContinuationImpl.receive(ContinuationPtr.java:72)
at com.cloudbees.groovy.cps.impl.ConstantBlock.eval(ConstantBlock.java:21)
at com.cloudbees.groovy.cps.Next.step(Next.java:83)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:174)
at com.cloudbees.groovy.cps.Continuable$1.call(Continuable.java:163)
at org.codehaus.groovy.runtime.GroovyCategorySupport$ThreadCategoryInfo.use(GroovyCategorySupport.java:129)
at org.codehaus.groovy.runtime.GroovyCategorySupport.use(GroovyCategorySupport.java:268)
at com.cloudbees.groovy.cps.Continuable.run0(Continuable.java:163)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.access$101(SandboxContinuable.java:34)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.lambda$run0$0(SandboxContinuable.java:59)
at org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.GroovySandbox.runInSandbox(GroovySandbox.java:136)
at org.jenkinsci.plugins.workflow.cps.SandboxContinuable.run0(SandboxContinuable.java:58)
at org.jenkinsci.plugins.workflow.cps.CpsThread.runNextChunk(CpsThread.java:182)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:332)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$200(CpsThreadGroup.java:83)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:244)
at org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:232)
at org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:64)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:131)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Finished: FAILURE
The problem appears to be with this part of the command: docker run -t -d -u 1000:1000. There is no user with UID 1000 in the container built by the Dockerfile; commands are run by root, so after the directory change to /src the "user" that Jenkins specifies has no access to the files.
How do you prevent Jenkins from using -u 1000:1000 by default? Or, failing that, what steps should I take to resolve this issue? I could in theory add a chmod 777 to the Dockerfile, but that seems like an ugly workaround.
Update:
I added RUN chmod -R 777 /src to my Dockerfile, but the issue remains. If I run the container manually from inside the dockerised Jenkins container:
docker run -it -u 1000:1000 -w "/var/jenkins_home/workspace/My Project" --volumes-from aaee62f2a28e29b94c13fcdc08c1a82ef7baed48beabe54579db07b2fbd26b23 12724611f0bf2363c9eee7288654e43eca2aabaf /bin/bash
and then cd /src, I have access to the files and can, e.g., touch newfile without issues. The java.nio.file.AccessDeniedException: /src still appears in the logs, though.
It seems that you can override/pass Docker runtime arguments according to the Pipeline syntax documentation, so you might try the following in order to override the default UID/GID:
args: A string. Runtime arguments to pass to docker run.
This option is valid for docker and dockerfile.
agent {
    dockerfile {
        args '-u 0:0' // or try '-u root'
    }
}
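For context, here is a minimal sketch of how that override would look in the questioner's pipeline (assuming the same stage layout as in the question; -u 0:0 simply runs the build steps as root inside the container):
pipeline {
    agent none
    stages {
        stage('Test') {
            agent {
                dockerfile {
                    args '-u 0:0' // overrides the default -u 1000:1000
                }
            }
            steps {
                sh 'id && pwd' // should now report uid=0 (root)
            }
        }
    }
}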
Following on from the issue I had in this question: Why when using Jenkins dockerfile agent does it run container with invalid user?
I successfully managed to run the container as the root user; however, the actual Java stack trace failure I received remained the same, implying it is not a permissions error with the user in the container.
The actual issue appears to be with the dir() {} step within my stage. To reiterate, when I have the following:
steps {
    dir('/src') {
        sh 'pwd' // any command
    }
}
the container exits without running the command:
[Pipeline] withDockerContainer
Jenkins seems to be running inside container aaee62f2a28e29b94c13fcdc08c1a82ef7baed48beabe54579db07b2fbd26b23
$ docker run -t -d -u 1000:1000 -w "/var/jenkins_home/workspace/My Project" --volumes-from aaee62f2a28e29b94c13fcdc08c1a82ef7baed48beabe54579db07b2fbd26b23 -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** 12724611f0bf2363c9eee7288654e43eca2aabaf cat
$ docker top 4e29bc102d8f4e6b4ffc142fc06eb706e95b00fa6190b2927f4f79f0cfa53af5 -eo pid,comm
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] dir
Running in /src
[Pipeline] {
[Pipeline] sh
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
$ docker stop --time=1 4e29bc102d8f4e6b4ffc142fc06eb706e95b00fa6190b2927f4f79f0cfa53af5
$ docker rm -f 4e29bc102d8f4e6b4ffc142fc06eb706e95b00fa6190b2927f4f79f0cfa53af5
and the failure stack trace at the end of the build is:
java.nio.file.AccessDeniedException: /src
... (the rest of the stack trace is identical to the one shown in the previous question) ...
Finished: FAILURE
However if I use the following:
steps {
    sh 'cd /src && pwd'
}
The command runs as expected. Is this a bug within Jenkins or am I misunderstanding what the dir() directive is used for?
Generally speaking, the dir() directive's purpose is to change your current working directory to another directory on the Jenkins agent where the build itself is running.
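As a minimal sketch of that supported usage (the subdirectory name here is hypothetical), dir() with a workspace-relative path creates and enters a directory under the current workspace:
steps {
    dir('reports') { // hypothetical subdirectory, resolved relative to the workspace
        sh 'pwd' // prints <workspace>/reports
    }
}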
In your case specifically there is a limitation from the docker plugin side as explained in the following issue's comment:
dir with an absolute path is not supported inside a Docker container. Simply start your sh script with cd. Or avoid using the withDockerContainer step altogether - if it works perfectly for you out of the box, great, otherwise forget about it.
So your current workaround, sh 'cd /src && ...', if we consider it a workaround, is the recommended way to do it.
I agree that this is a bug, and I reported the issue here: https://issues.jenkins-ci.org/browse/JENKINS-46055
The core maintainers seemed to feel that there isn't actually a problem, because there is a workaround (the one you described: cd <dir> && <cmd>).
My opinion is that you're using dir() correctly; it's just broken.
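If the cd prefix becomes repetitive, here is a small sketch of one way to tidy it up (the helper name inSrc is hypothetical, and this assumes a Jenkinsfile where functions can be declared alongside the pipeline block):
// Hypothetical helper that prefixes each command with the cd that
// dir() cannot perform inside the container.
def inSrc(String cmd) {
    sh "cd /src && ${cmd}"
}

pipeline {
    agent none
    stages {
        stage('Test') {
            agent { dockerfile true }
            steps {
                inSrc('pwd') // any command, run from /src
            }
        }
    }
}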
I have dissected Google's distroless images to see how they're built, as I'd like to use them, but my company wants to have images we have built ourselves.
It looks like Google builds glibc, libssl, and openssl in their base images, so I did the same. I added a statically linked busybox, curl, and sudo for testing purposes. However, when I log in as my user, sudo tells me that it is unable to load libraries that do exist on the system. At first I thought it had to do with root's environment, because sudo is setuid, but passwd is also setuid and it works.
Here is some output:
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ldd /usr/bin/sudo
linux-vdso.so.1 (0x00007fff5e728000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007f4bf61a1000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f4bf5f6a000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4bf5d4c000)
libc.so.6 => /lib64/libc.so.6 (0x00007f4bf59b3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4bf6621000)
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libutil.so.1
lrwxrwxrwx 1 root root 15 Oct 2 17:45 /lib64/libutil.so.1 -> libutil-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libcrypt.so.1
lrwxrwxrwx 1 root root 16 Oct 2 17:45 /lib64/libcrypt.so.1 -> libcrypt-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libpthread.so.0
lrwxrwxrwx 1 root root 18 Oct 2 17:45 /lib64/libpthread.so.0 -> libpthread-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ ls -l /lib64/libc.so.6
lrwxrwxrwx 1 root root 12 Oct 2 17:45 /lib64/libc.so.6 -> libc-2.25.so
Distroless 🐳 [gdanko@5bd77574a894]:~ $ sudo
sudo: error while loading shared libraries: libutil.so.1: cannot open shared object file: No such file or directory
If I su - root and execute sudo, it works:
Distroless 🐳 [gdanko@5bd77574a894]:~ $ su - root
Password:
Distroless 🐳 [root@5bd77574a894]:~ # sudo
usage: sudo -h | -K | -k | -V
usage: sudo -v [-AknS] [-g group] [-h host] [-p prompt] [-u user]
usage: sudo -l [-AknS] [-g group] [-h host] [-p prompt] [-U user] [-u user] [command]
usage: sudo [-AbEHknPS] [-C num] [-g group] [-h host] [-p prompt] [-T timeout] [-u user] [VAR=value] [-i|-s] [<command>]
usage: sudo -e [-AknS] [-C num] [-g group] [-h host] [-p prompt] [-T timeout] [-u user] file ...
I was wondering if it was something in how Google builds glibc, but I could not find anything glaring.
The other thing is: if I use Google's distroless base for my image and copy my sudo over, busybox works in the container. I have tried a number of different things to troubleshoot this, but I am at a loss. Can anyone see something I may be missing?
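One hedged debugging sketch that might narrow this down (it assumes ldconfig is present in the image): because sudo is setuid, the dynamic loader runs it in secure-execution mode when invoked by a non-root user, and in that mode LD_LIBRARY_PATH is ignored, so the loader falls back to /etc/ld.so.cache and the compiled-in default paths, while root's invocation can still use the environment:
# Does the loader cache know about the failing library? An absent or
# stale /etc/ld.so.cache would explain the lookup failure.
ldconfig -p | grep libutil

# Is LD_LIBRARY_PATH doing the work in root's shell?
echo "$LD_LIBRARY_PATH"

# Rebuild /etc/ld.so.cache so lookups succeed without environment help.
ldconfig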
I am working with the following docker files: https://github.com/zanata/zanata-docker-files
After I ran ./zanata-server/runapp.sh, it started two Docker containers:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
654580794e7c zanata/server:latest "/opt/jboss/wildfl..." 18 seconds ago Up 17 seconds 0.0.0.0:8080->8080/tcp zanata
311f3379635e mariadb:10.1 "docker-entrypoint..." 2 weeks ago Up 2 weeks 3306/tcp zanatadb
After a blackout, the Zanata server container broke: it left some lock files around, and I cannot start it again:
org.zanata.exception.ZanataInitializationException: Lucene lock files found. Check if Zanata is already running. Otherwise, Zanata was not shut down cleanly: delete the lock files: [/var/lib/zanata/indexes/org.zanata.model.HTextFlowTarget/write.lock, /var/lib/zanata/indexes/org.zanata.model.HProjectIteration/write.lock, /var/lib/zanata/indexes/org.zanata.model.HProject/write.lock]
How can I delete the lock files?
Okay, I thought I needed to delete the files while the container was offline, but in fact I needed the container running; then I could connect to it and run commands on it as if I were on a normal server.
The main solution:
sudo docker exec -it 654580794e7c bash
This allows me to execute commands on the container:
[jboss@654580794e7c ~]$ ls
wildfly
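For a non-interactive variant (a sketch, assuming the index paths from the error message), the lock files can also be deleted in one command without opening a shell:
sudo docker exec 654580794e7c find /var/lib/zanata/indexes -name write.lock -delete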
Here is the whole process, if you would like to see it:
zanata@zanata:~/docker/zanata-docker-files-platform-4.1.1/zanata-server$ sudo docker ps
[sudo] password for zanata:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
654580794e7c zanata/server:latest "/opt/jboss/wildfl..." 17 minutes ago Up 17 minutes 0.0.0.0:8080->8080/tcp zanata
311f3379635e mariadb:10.1 "docker-entrypoint..." 2 weeks ago Up 2 weeks 3306/tcp zanatadb
zanata@zanata:~/docker/zanata-docker-files-platform-4.1.1/zanata-server$ sudo docker exec -it 654580794e7c bash
[jboss@654580794e7c ~]$ ls
wildfly
[jboss@654580794e7c ~]$ cd /var/lib
[jboss@654580794e7c lib]$ ls
alternatives games machines rpm systemd zanata
dbus initramfs misc rpm-state yum
[jboss@654580794e7c lib]$ cd zanata/indexes
[jboss@654580794e7c indexes]$ ls -lh
total 28K
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.HAccount
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.HGlossaryEntry
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.HGlossaryTerm
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:30 org.zanata.model.HProject
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:30 org.zanata.model.HProjectIteration
drwxr-xr-x 2 jboss jboss 4.0K Mar 3 07:23 org.zanata.model.HTextFlowTarget
drwxr-xr-x 2 jboss jboss 4.0K Mar 2 13:14 org.zanata.model.tm.TransMemoryUnit
[jboss@654580794e7c indexes]$ cd org.zanata.model.HTextFlowTarget/
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ ls
_0.cfe _0.cfs _0.si segments_2 write.lock
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ rm write.lock
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ ls
_0.cfe _0.cfs _0.si segments_2
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ cd .
[jboss@654580794e7c org.zanata.model.HTextFlowTarget]$ cd ..
[jboss@654580794e7c indexes]$ cd org.zanata.model.HProject
[jboss@654580794e7c org.zanata.model.HProject]$ ls
_0.cfe _0.cfs _0.si segments_2 write.lock
[jboss@654580794e7c org.zanata.model.HProject]$ rm write.lock
[jboss@654580794e7c org.zanata.model.HProject]$ cd ..
[jboss@654580794e7c indexes]$ cd org.zanata.model.HProjectIteration/
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ ls
_0.cfe _0.cfs _0.si segments_2 write.lock
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ rm write.lock
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ ^C
[jboss@654580794e7c org.zanata.model.HProjectIteration]$ exit
zanata@zanata:~/docker/zanata-docker-files-platform-4.1.1/zanata-server$
As shown below, I have a CentOS 7 container with a cron job configured, but the job does not seem to be executing. What am I missing?
Host: a centos:7 Docker container running on a Mac.
[root@a2118127510b /]# cat /etc/*-release
CentOS Linux release 7.2.1511 (Core)
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
BUG_REPORT_URL="https://bugs.centos.org/"
CentOS Linux release 7.2.1511 (Core)
CentOS Linux release 7.2.1511 (Core)
[root@a2118127510b /]#
[root@a2118127510b /]# date
Fri Sep 16 06:27:49 UTC 2016
[root@a2118127510b /]# crontab -l
no crontab for root
[root@a2118127510b /]# cat mycron
* * * * * echo "hello" >> /var/log/cron1.log 2>&1
[root@a2118127510b /]# touch /var/log/cron1.log
[root@a2118127510b /]# crontab -u root mycron
[root@a2118127510b /]# crontab -l
* * * * * echo "hello" >> /var/log/cron1.log 2>&1
[root@a2118127510b /]# date
Fri Sep 16 06:27:55 UTC 2016
[root@a2118127510b /]# cat /var/log/cron1.log
[root@a2118127510b /]# date
Fri Sep 16 06:32:03 UTC 2016
[root@a2118127510b /]# cat /var/log/cron1.log
[root@a2118127510b /]#
Docker containers (generally) only run one process, whereas in a typical VM/OS setup there are multiple services running in the background performing things like Cron execution and the like. It's likely that the Cron service isn't running in your container and therefore isn't triggering any Cron jobs.
You can check whether the Cron service is running using ps or a similar command, as in the sketch below. http://www.cyberciti.biz/faq/howto-linux-unix-start-restart-cron/ also gives information about starting and stopping the Cron service.
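For example (assuming ps from procps is available in the image; the bracket trick stops grep from matching itself):
[root@a2118127510b /]# ps -ef | grep [c]rond   # no output means crond is not running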
The most 'Docker-like' approach would be to have a container that just runs the Cron process as its single job, in foreground mode, displaying the output from the process instead of writing to a log file. https://github.com/aptible/docker-cron-example does something similar to this, although it runs Cron in the background and then tails the log in the foreground.
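A minimal sketch of that approach for the CentOS 7 case above (assuming the cronie package, which provides crond; its -n flag keeps the daemon in the foreground as the container's only process):
FROM centos:7

# cronie provides the crond daemon on CentOS 7
RUN yum -y install cronie && yum clean all

# register the cron job from the question
RUN echo '* * * * * echo "hello" >> /var/log/cron1.log 2>&1' | crontab -

# run crond in the foreground as PID 1
CMD ["crond", "-n"]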