How to escape CMD in Dockerfile - docker

I've tried to start a server inside docker via the following syntax permutations:
CMD [ "forever", "start", "server/server.js" ]
CMD [ "forever", "start", "server\/server.js" ]
CMD forever start server/server.js
But each of them has failed.
The first two ran as "forever start server" ... notice the missing /server.js piece.
The last one ran as "/bin/sh -c 'forever "
So what is the correct syntax to place forever start server/server.js inside a Dockerfile to run it as a detached container?

I've just run into the same issue when starting a Java application inside a docker container.
From the docker reference you have three options:
CMD ["executable","param1","param2"]
CMD ["param1","param2"]
CMD command param1 param2
Have a look here: Docker CMD
I'm not familiar with JavaScript, but assuming that the application you want to start is a Java application:
CMD ["/some path/jre64/bin/java", "server.jar", "start", "forever", ...]
And as others in your comments say, you could also add the script to the image via ADD or COPY in your Dockerfile and start it with RUN.
Yet another solution would be to run the docker container and mount a directory with the desired script via docker run .. -v HOSTDIR:CONTAINERDIR, then trigger that script inside the container with docker exec.
Have a read here: Docker Filemounting + Docker Exec
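For illustration, a minimal sketch of that mount-and-exec approach; the image name, directory, and script name are placeholders, not taken from the question:
docker run -d --name app -v "$(pwd)/scripts":/scripts myimage tail -f /dev/null   # keep the container running so we can exec into it
docker exec app /bin/sh /scripts/start.sh   # trigger the mounted script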

Just run it via sh -c as suggested in the comments.
The syntax is:
CMD ["/bin/sh", "-c", "forever start server/server.js"]
In case your tool requires a login shell to run, maybe try this one too:
CMD ["/bin/bash", "-lc", "forever start server/server.js"]
This should work fine and has the same effect as running the command in a standard sh shell on a single line.
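For context, a minimal Dockerfile sketch with the corrected CMD; the base image, paths, and the global forever install are assumptions, not taken from the question:
FROM node:8
WORKDIR /app
COPY . /app
RUN npm install -g forever
# exec form: no shell escaping is needed, the path can be written as-is
CMD ["forever", "start", "server/server.js"]
Note that forever start daemonizes and returns, so once it does the container's main process exits; running forever server/server.js in the foreground (without start) is usually the more container-friendly variant.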

Related

How should I use environment variables at the Entrypoint in Docker?

ENV ADDRESS=http://peer1:8761/eureka/,http://peer2:8762/eureka/
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar","/app.jar", "--eureka.client.serviceUrl.defaultZone=$ADDRESS"]
I want to pass the eureka address to the entrypoint via an environment variable, but it still shows up literally as $ADDRESS when I use docker run. If I use the shell form, ENTRYPOINT java -jar xxx.jar, the variable is replaced correctly, but then runtime parameters such as docker run image_name --spring.profiles.active=peer1 no longer take effect. What should I do to use both environment variables and parameters in the ENTRYPOINT?
I just tried resolving PATH in a Dockerfile and it worked for one scenario (I used CMD though); not sure if that helps. If this does not work, let me know with more details.
There are two forms for specifying a command (exec form and shell form), and two instructions for specifying the default command (ENTRYPOINT and CMD) in a Dockerfile.
Exec form:
FROM ubuntu
MAINTAINER xyz <xyz@gmail.com>
ENTRYPOINT ["ping"]
The command specified (ping) will run as PID 1. That is why, if we press CTRL + C, the signal (SIGINT) is passed to the PID 1 process and the container is shut down.
Shell form:
FROM ubuntu
MAINTAINER xyz <xyz@gmail.com>
CMD echo $PATH
PATH gets resolved to the shell environment's PATH when run (output verified).
Shell form specifies the command as tokens separated by spaces.
Shell form executes the command by calling /bin/sh -c; this means PID 1 is actually the shell and the command specified in the Dockerfile is just another process, so pressing CTRL + C will not kill the container.
The advantage of the shell form is that, since you are executing through /bin/sh -c, environment variables in the command get resolved (in the above example, PATH).
Hope this helps
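To answer the original question about $ADDRESS, one common workaround (a sketch, not taken from the answer above; the base image and jar path are assumptions) is to keep the exec form but route the command through a shell so the variable is expanded, while forwarding any extra docker run arguments via "$@":
FROM openjdk:8-jre
ENV ADDRESS=http://peer1:8761/eureka/,http://peer2:8762/eureka/
COPY app.jar /app.jar
# sh -c expands $ADDRESS at container start; the trailing "--" becomes $0,
# and anything appended by `docker run <image> ...` is forwarded as "$@"
ENTRYPOINT ["/bin/sh", "-c", "exec java -Djava.security.egd=file:/dev/./urandom -jar /app.jar --eureka.client.serviceUrl.defaultZone=$ADDRESS \"$@\"", "--"]
With this, docker run <image> --spring.profiles.active=peer1 should apply both the resolved $ADDRESS and the extra flag.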

I would like to populate an entry in a config file at run time via docker-compose

I have Tomcat installed in a container, and inside it there is an application configuration file. I would like to populate a value in it at run time (I don't know the value beforehand, so I can't populate it when building the image).
I am invoking the service with docker-compose up, and I would like the value in the configuration file to be replaced by a value I provide to docker-compose as a parameter,
something like docker-compose up -e "value at run time via docker compose"
URL for server
SERVERADD=https://{{value at run time via docker compose}}/{{index}}
Can I accomplish this with an environment variable, or in any other way? Kindly suggest!
This is normally done in an ENTRYPOINT or CMD script that is built into the image.
The script checks for the environment variable, does the replacements or other work required, then continues on to run the command as before.
#!/bin/sh
# If SOME_ENV is set, rewrite the param= line in the config before starting
if [ -n "$SOME_ENV" ]; then
    sed -i -e 's/^param=.*/param='"$SOME_ENV"'/' /etc/file.conf
fi
# Hand off to the CMD (or whatever command was passed to the entrypoint)
exec "$@"
The script needs to be added to the image; the Dockerfile could be:
FROM whatever
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "run_server", "-o", "option" ]

Run command in Docker Container only on the first start

I have a Docker image which uses a script (/bin/bash /init.sh) as entrypoint. I would like to execute this script only on the first start of a container. It should be skipped when the container is restarted or started again after a crash of the docker daemon.
Is there any way to do this with docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple workaround to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/sh
# Path of the marker file that records that the container has already started once
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e "$CONTAINER_ALREADY_STARTED" ]; then
    touch "$CONTAINER_ALREADY_STARTED"
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
else
    echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to be executed only the first time the container is started
Step 3:
Set the script as the entrypoint of your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: it checks whether a specific file is present in the filesystem; if not, it creates it and executes your just-once code. The next time you start your container the file is already in the filesystem, so the code is not executed.
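If the container also has a main process to run, a common variation (a sketch, assuming the main command is supplied via CMD) is to end the startup script with exec "$@", so the just-once logic runs first and then hands over to the real command:
# at the end of myStartupScript.sh
exec "$@"
# in the Dockerfile
ENTRYPOINT ["/myStartupScript.sh"]
CMD ["your-main-command"]   # placeholder for the process the container should actually run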
The entry point for a docker container tells the docker daemon what to run when you want to "run" that specific container. Let's ask the questions "what should the container run when it's started the second time?" or "what should the container run after being rebooted?"
Probably, what you are doing is following the same approach you would with "old-school" provisioning mechanisms: your script is "installing" the needed scripts and you will run your app as a systemd/upstart service, right? If you are doing that, you should change it into a more "dockerized" definition.
The entry point for that container should be a script that actually launches your app instead of setting things up. Let's say that you need java installed to be able to run your app. So in the Dockerfile you set up the base image to install everything you need, like:
FROM alpine:edge
# bash is added because the CMD below uses /bin/bash, which alpine does not include by default
RUN apk --update upgrade && apk add openjdk8-jre-base bash
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, our containers fetch their configs from consul in the init.sh script before running the actual app (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script looks something like:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my admittedly short experience working with containers) is to treat your containers as stateless once they are provisioned (i.e. after all the commands you run before the entry point).
I had to do this, and I ended up doing a docker run -d, which just created a detached container and started bash (in the background), followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved using Task Scheduler on Windows (the Linux equivalent of Task Scheduler is cron, which you could use in your case). To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask, runs it ONSTART, and the task itself executes a PowerShell script placed at "C:\app\myTask.ps1".
This myTask.ps1 script will do whatever initialization you need on container startup. Make sure you delete the task once it has executed successfully, or else it will run at every startup. To delete it, you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F

return from docker-compose up in jenkins

I have a base image with JBoss copied onto it. JBoss is started with a script and takes around 2 minutes.
In my Dockerfile I have created a command:
CMD start_deploy.sh && tail -F server.log
I do a tail to keep the container alive; otherwise "docker-compose up" exits when the script finishes and the container stops.
The problem is that when I do "docker-compose up" through Jenkins, the build doesn't finish because of the tail, and I can't start the next build.
If I do "docker-compose up -d" then the next build starts too early and runs tests against a container which hasn't started yet.
Is there a way to return from docker-compose up when the server has started completely?
Whenever you have chained commands or piped commands (|), it is easier to either:
wrap them in a script and use that script in your CMD directive:
CMD myscript
or wrap them in an sh -c command:
sh -c 'start_deploy.sh && tail -F server.log'
(but that last one depends on the ENTRYPOINT of the image;
a default ENTRYPOINT should allow this CMD to work)
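For illustration, a sketch of the first option; the wrapper script name is an assumption based on the question's command:
#!/bin/sh
# start_and_tail.sh: run the deploy script, then follow the log to keep the container alive
start_deploy.sh && exec tail -F server.log
And in the Dockerfile:
COPY start_and_tail.sh /usr/local/bin/start_and_tail.sh
CMD ["/bin/sh", "/usr/local/bin/start_and_tail.sh"]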

Does 'docker start' execute the CMD command?

Let's say a docker container has been run with 'docker run' and then stopped with 'docker stop'. Will the 'CMD' command be executed after a 'docker start'?
I believe #jripoll is incorrect; it appears to run the command that was first run with docker run on docker start too.
Here's a simple example to test:
First create a shell script to run called tmp.sh:
echo "hello yo!"
Then run:
docker run --name yo -v "$(pwd)":/usr/src/myapp -w /usr/src/myapp ubuntu sh tmp.sh
That will print hello yo!.
Now start it again:
docker start -ia yo
It will print it again every time you run that.
Same thing with Dockerfile
Save this to Dockerfile:
FROM alpine
CMD ["echo", "hello yo!"]
Then build it and run it:
docker build -t hi .
docker run -i --name hi hi
You'll see "hello yo!" output. Start it again:
docker start -i hi
And you'll see the same output.
When you do a docker start, you call api/client/start.go, which calls:
cli.client.ContainerStart(containerID)
That calls engine-api/client/container_start.go:
cli.post("/containers/"+containerID+"/start", nil, nil, nil)
The docker daemon processes that API call in daemon/start.go:
container.StartMonitor(daemon, container.HostConfig.RestartPolicy)
The container monitor does run the container in container/monitor.go:
m.supervisor.Run(m.container, pipes, m.callback)
By default, the docker daemon is the supervisor here, in daemon/daemon.go:
daemon.execDriver.Run(c.Command, pipes, hooks)
And the execDriver creates the command line in daemon/execdriver/windows/exec.go:
createProcessParms.CommandLine, err = createCommandLine(processConfig, false)
That uses the processConfig.Entrypoint and processConfig.Arguments in daemon/execdriver/windows/commandlinebuilder.go:
// Build the command line of the process
commandLine = processConfig.Entrypoint
logrus.Debugf("Entrypoint: %s", processConfig.Entrypoint)
for _, arg := range processConfig.Arguments {
    logrus.Debugf("appending %s", arg)
    if !alreadyEscaped {
        arg = syscall.EscapeArg(arg)
    }
    commandLine += " " + arg
}
Those ProcessConfig.Arguments are populated in daemon/container_operations_windows.go:
processConfig := execdriver.ProcessConfig{
    CommonProcessConfig: execdriver.CommonProcessConfig{
        Entrypoint: c.Path,
        Arguments:  c.Args,
        Tty:        c.Config.Tty,
    },
with c.Args being the arguments of the container (runtime parameters or CMD).
So yes, the 'CMD' commands are executed after a 'docker start'.
If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD.
Note: don’t confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.
https://docs.docker.com/engine/reference/builder/
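For illustration, a small sketch (not from the linked docs) of ENTRYPOINT combined with CMD, where the executable stays fixed and CMD only supplies overridable default arguments:
FROM alpine
ENTRYPOINT ["echo"]
CMD ["hello yo!"]
# docker run <image>       -> prints "hello yo!"
# docker run <image> bye   -> prints "bye" (CMD is overridden, ENTRYPOINT is not)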
No, the CMD command is only executed when you execute 'docker run' to run a container based on an image.
In the documentation:
When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
https://docs.docker.com/reference/builder/#cmd

Resources