Docker container exited with code 0

I'm trying to launch a container using docker-compose services, but unfortunately the container exits with code 0.
The container is built from an image that was imported from a .tar.gz archive. The archive is a CentOS VM.
I want to create 6 containers from the same archive.
Instead of typing the docker command 6 times, I would like to create a docker-compose.yml file where I can summarize each container's command and tag.
I have started to write a docker-compose.yml file just to create one container.
Here is my docker-compose.yml :
version: '2'
services:
  dvpt:
    image: compose:test.1
    container_name: cubop1
    command: mkdir /root/essai/
    tty: true
Do not pay attention to the command itself; I just had to specify one.
So my question is: why is the container exiting? And is there another way to build these containers at the same time?
Thanks for your responses.

The answer is actually the first comment. I'll explain Miguel's comment a bit.
First, we need to understand that a Docker container runs a single command. The container keeps running as long as the process that the command started is still running. Once the process completes and exits, the container stops.
With that understanding, we can make an assumption of what is happening in your case. When you start your dvpt service it runs the command mkdir /root/essai/. That command creates the folder and then exits. At this point, the Docker container is stopped because the process exited (with status 0, indicating that mkdir completed with no error).
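You can see this behavior for yourself with a quick experiment (a sketch using a throwaway alpine container; the exact status text may differ):
docker run --name demo alpine mkdir /tmp/essai
docker ps -a --filter name=demo --format '{{.Status}}'
# prints something like: Exited (0) 5 seconds ago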

Run your container in the background with -d:
$ docker-compose up -d
and in docker-compose.yml add:
mydocker:
  tty: true

You can end with a command like tail -f /dev/null.
It often works in my docker-compose.yml with:
command: tail -f /dev/null
Then it is easy to check that the container keeps running with:
docker ps

We had a problem where two of the client services (Vite.js) exited with code 0. I added tty: true and they started to work.
dashboard:
  tty: true
  container_name: dashboard
  expose:
    - 8001
  image: tilt.dev/dashboard
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.dashboard.tls=true"
    - "traefik.http.routers.dashboard.entrypoints=web"
    - "traefik.http.routers.dashboard-wss.tls=true"
    - "traefik.http.routers.dashboard-wss.entrypoints=wss"

One solution is to create a process that doesn't end: an infinite loop, or something that can run continuously in the background. This will keep the container open because the process won't exit.
This is very much a hack, though. I'm still looking for a better solution.
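A minimal sketch of such a never-ending process, assuming a POSIX shell is available in the image:
#!/bin/sh
# loop forever so the container's main process never exits
while true; do
  sleep 3600
done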
The Zend Server image does something like this. In their .sh script they have a final command:
exec /usr/local/bin/nothing
which executes a file that runs continuously in the background. I tried to copy the file contents here, but it must be a binary.
EDIT:
You can also end your script with /bin/bash, which starts a new shell process in the container and keeps it from closing.

It can be the case that the program (from ENTRYPOINT/CMD) ran successfully and exited (without daemonizing itself). So check the ENTRYPOINT/CMD in your Dockerfile.
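A classic illustration (nginx is only an example here): nginx daemonizes by default, so a bare CMD exits at once, while the foreground flag keeps the container alive:
# exits immediately: nginx forks into the background and PID 1 returns
# CMD ["nginx"]
# stays up: run the daemon in the foreground
CMD ["nginx", "-g", "daemon off;"]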

Create a Dockerfile and add the line below to execute shell scripts or commands without the exit code 0 problem. In your case it should be:
RUN mkdir /root/essai/
However, to execute a shell script, use a line like:
RUN /<absolute_path_of_container>/demo.sh

I know I am late with this answer, but a few days ago I ran into the same problem, and everything mentioned above was not working for me. The real problem is, as mentioned in the answers above, that Docker stops the container after the command exits.
So I did a hack for this.
Note: I have used a Dockerfile for creating the image; you can do it your own way, below is just an example.
I used Supervisor for monitoring the process. As long as Supervisor is monitoring the process, the container will not exit.
For those who ran into the same problem, do the following to solve the issue:
#1 Install supervisor in the Dockerfile
RUN apt-get install -y supervisor
#2 Create a config file (named supervisord.conf) for supervisor like this:
[include]
files = /etc/supervisor/conf.d/*.conf

[program:app]
command=bash
; directory is the folder where you want supervisor to cd before executing
directory=/project
autostart=true
autorestart=true
startretries=3
; user can be anyone you want, but make sure that user has enough privileges
user=root

[supervisord]
nodaemon=true

[supervisorctl]
#3 Copy the supervisor conf file into the image
COPY supervisord.conf /etc/supervisord.conf
#4 Define an entrypoint
ENTRYPOINT ["supervisord","-c","/etc/supervisord.conf"]
That's it. Now just build the image and run the container; it will keep the container running.
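For example (the image and container names here are arbitrary):
docker build -t supervised-app .
docker run -d --name supervised-app supervised-app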
Hope it helps you to solve the problem.
And Happy coding :-)

Related

Docker container staying up and running

I am failing to understand how a Docker container stays up and running. From what I know, if the container doesn't have any active processes it will shut down automatically, independent of the commands given.
That's why I have instructed my docker-compose.yml to use the following command, which keeps it alive:
gateway:
  build: .
  image: me/gateway
  container_name: gateway
  command: tail -F /dev/null   # <------
  # restart: always
  ports:
    - "10091:10091"
  volumes:
    - ./logs:/root/logs
    - vendor:/root/vendor
    - .:/root
  env_file:
    - .env
While my Dockerfile does the following:
FROM php:7-fpm-alpine
EXPOSE 10091
WORKDIR /root
COPY . .
COPY src/scripts/generateConfig.sh /usr/local/bin/generateConfig
RUN ["chmod", "+x", "/usr/local/bin/generateConfig"]
In this scenario the container stays up, and it's all fine. However I would like to run a script once the container starts, so I have added the additional line to the end of my Dockerfile:
ENTRYPOINT ["generateConfig"]
After the command runs, the container automatically stops. There are no errors when I inspect the log, and the script does its job as it is supposed to. The script is responsible for running a Ratchet web socket process indefinitely.
How can I make the script run and simultaneously keep my container running?
When you start a container it runs the command you defined in ENTRYPOINT or in CMD.
This command usually starts a process, and as long as this process runs in the foreground, the container keeps running.
If your process runs in the background, the container will stop.
Thus, if you are running a script, you need to make sure it never ends.
You can achieve that by adding a line like this to the end of your script:
tail -f <some log file> # keep listening on your application log file
Alternatively, you can add a while true loop:
while true; do sleep 2; done
Hope it'll help
I have found the issue. The script used in command started the web socket process with this line:
nohup php ${serviceProcess} >> /dev/null 2>&1 &
The trailing & sent the process to the background, so the container's main process exited immediately. The solution was to run it in the foreground (and not suppress the output), using only:
nohup php ${serviceProcess}
It works well now.
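As a general pattern, a start script should end by exec-ing the long-lived process in the foreground so it becomes the container's main process; a sketch (the paths here are hypothetical):
#!/bin/sh
# one-time setup
/usr/local/bin/generateConfig
# replace the shell with the long-running process, which stays as PID 1
exec php /root/src/server.php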

Docker compose - run shell and application inside shell

I'm using Docker Compose to run my application in a dev environment.
version: '3.4'
services:
  web:
    build:
      context: .
      target: base
    ports:
      - "5000:5000"
    stdin_open: true
    tty: true
    volumes:
      - ./src:/src
    command: node src/main/server/index.js
Compose starts the container and I can see the log output from the node application. When I press CTRL-C, the container is stopped and my application is stopped as well.
I would like only my application to be stopped when I press CTRL-C, not the whole container.
This is the same behavior as running an app in Windows CMD or a Linux shell. For example, to restart an app: press CTRL-C, repeat the startup command (node src/main/server/index.js, recalled with the up arrow key), and press Enter.
I was thinking I could use something like this, but it does not work.
command: bash -c "node src/main/server/index.js
I know I can use the commands below to achieve the expected behavior:
docker-compose up -d (to start in detached mode)
docker-compose exec web bash (run interactive shell)
node src/main/server/index.js (start node manually)
But maybe there is a way to start an interactive bash session and run the application inside it using the single command docker-compose up?
Docker runs a main process in its containers; as such, stopping the main process will also stop the container.
I will attempt to answer your question, but I don't think you should work like that in a dev environment.
Answering your question: you can "trap" the container in a main process, then just bash into the container and start the app.
In order to trap the container, just change the docker-compose command to:
command: bash -c "while true; do sleep 1; done"
(the loop has to run inside a shell, hence the bash -c wrapper).
To get into an interactive bash in the container:
docker exec -it <CONTAINER-ID> bash
And then you can start or stop the node app.
It seems that the problem you are facing is the container taking a long time to start; you should probably reorder your Dockerfile to prevent it from redownloading all dependencies (or repeating other long steps) every time a file changes.
You should place your COPY command after all commands whose results should persist across builds, and take advantage of Docker's image layer caching, as sketched below.
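A sketch of that ordering for a Node image (file names assumed; adjust to your project):
FROM node:18
WORKDIR /src
# copy only the dependency manifests first; this layer stays cached
# until package.json or package-lock.json change
COPY package.json package-lock.json ./
RUN npm ci
# copy the source last, so code edits don't invalidate the npm layer
COPY . .
CMD ["node", "src/main/server/index.js"]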
If you need a "hot reload" feature, you can research Webpack hot reloading.
You would need to bind your host volume to the container's work directory in order to let webpack properly watch the files and reload the app.

Docker Compose keep container running

I want to start a service with docker-compose and keep the container running so I can get its IP-address via 'docker inspect'. However, the container always exits right after starting up.
I tried to add command: ["sleep", "60"] and other things to the docker-compose.yml, but whenever I add the command: ... line I can't call docker-compose up, as I get the message "Cannot start container ..... System error: invalid character 'k' looking for beginning of value".
I also tried adding CMD sleep 60 and the like to the Dockerfile itself, but these commands do not seem to be executed.
Is there an easy way to keep the container alive or to fix one of my problems?
EDIT:
Here is the Compose file I want to run:
version: '2'
services:
  my-test:
    image: ubuntu
    command: bash -c "while true; do echo hello; sleep 2; done"
It works fine if I start this with docker-compose under OS X, but if I try the same under Ubuntu 16.04 it gives me the above error message.
If I try the approach with the Dockerfile, the Dockerfile looks like this:
FROM ubuntu:latest
CMD ["sleep", "60"]
which does not seem to do anything.
EDIT 2:
I have to correct myself; it turned out to be the same problem with the Dockerfile and the docker-compose.yml:
each time I add either CMD ... to the Dockerfile OR command: ... to the compose file, I get the above error with the invalid character. If I remove both commands, it works flawlessly.
To keep a container running when you start it with docker-compose, use the following command:
command: tail -F anything
In the above command, the last part, anything, should be included literally, and the assumption is that such a file is not present in the container. With the -F option (capital -F, not to be confused with -f, which in contrast terminates immediately if the file is not found), the tail command will wait forever for the file anything to appear. A forever-waiting process is basically what we need.
So your docker-compose.yml becomes
version: '2'
services:
  my-test:
    image: ubuntu
    command: tail -F anything
and you can run a shell to get into the container using the following command
docker exec -i -t composename_my-test_1 bash
where composename is the name that docker-compose prepends to your containers.
You can use the tty configuration option.
version: '3'
services:
  app:
    image: node:8
    tty: true  # <-- This option
Note: if you build the image from a Dockerfile that sets a CMD, this option alone won't work; however, you can use the entrypoint option in the compose file, which clears the CMD from the Dockerfile.
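For example, a sketch of such an override (setting entrypoint in the compose file replaces the image's ENTRYPOINT and discards its CMD):
version: '3'
services:
  app:
    image: node:8
    tty: true
    entrypoint: /bin/sh  # overrides the image's ENTRYPOINT and clears its CMD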
Based on the comment of #aanand on GitHub Aug 26, 2015, one could use tail -f /dev/null in docker-compose to keep the container running.
docker-compose.yml example
version: '3'
services:
  some-app:
    command: tail -f /dev/null
Why this command?
The only reason for choosing this option was that it received a lot of thumbs up on GitHub, but the highest-voted answer does not mean that it is the best answer. The second reason was a pragmatic one: issues had to be solved as soon as possible due to deadlines.
Create a file called docker-compose.yml
Add the following to the file
version: "3"
services:
ubuntu:
image: ubuntu:latest
tty: true
Staying in the same directory, run docker-compose up -d from the terminal
Run docker ps to get the container id or name
You can run docker inspect $container_id
You can enter the container and get a bash shell by running docker-compose exec ubuntu /bin/bash or docker-compose exec ubuntu /bin/sh
When done, make sure you are outside the container and run docker-compose down
Here's a small bash script (my-docker-shell.sh) to create the docker compose file, run the container, log in to the container, and then finally clean up the container and the compose file when you log out.
#!/bin/bash
cat << 'EOF' > ./docker-compose.yml
---
version: "3"
services:
  ubuntu:
    image: ubuntu:latest
    command: /bin/bash
    # tty: true
...
EOF
printf "Now entering the container...\n"
docker-compose run ubuntu bash
docker-compose down
rm -v ./docker-compose.yml
In the Dockerfile you can use the command:
CMD sleep infinity
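A minimal Dockerfile using it would look like this (ubuntu:latest is just an example base image):
FROM ubuntu:latest
CMD ["sleep", "infinity"]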
Some people here write about overriding the entrypoint so that the command can also have its effect, but no one gives an example. So here is mine:
docker-compose.yml:
version: '3'
services:
  etfwebapp:
    # For messed up volumes and `sudo docker cp`:
    command: "-f /dev/null"
    entrypoint: /usr/bin/tail
    tty: true
    # ...
I am not sure if tty is needed at this point; is it better to keep the container alive twice over? In my case it did no harm and worked perfectly. Without entrypoint it didn't work for me, because then command had no effect. So I guess for this solution tty is optional.
To understand which command is executed at start-up, simply read the entrypoint before the command (concatenated with a space): /usr/bin/tail -f /dev/null.
I'm late to the party, but you can simply use: stdin_open: true
version: '2'
services:
  my-test:
    image: ubuntu
    stdin_open: true
A blocking command is all you need.
I had been struggling with this problem for half a day. There are many answers here, but not clear enough, and nobody says why.
In short, there are two methods, but it could also be said that there is only one: running a blocking process as the container's main process.
The first one is using command:
version: '3'
services:
  some-app:
    command: ["some block command"]
Put some blocking command there, like sleep infinity, tail -f /dev/null, watch anything, while true, and so on.
Here I recommend sleep infinity.
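A concrete version of the snippet above (the image is just an example; any image whose sleep supports infinity works):
version: '3'
services:
  some-app:
    image: ubuntu:latest
    command: sleep infinity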
The second is to enable tty: true, then open a shell in command, like /bin/bash.
services:
  ubuntu:
    image: ubuntu:latest
    tty: true
    command: "/bin/bash"
Since tty is enabled, bash will keep running; you can put some other blocking commands before it if you want.
Be careful: you must execute the shell command at the end, like
command: /bin/bash -c "/root/.init-service && /bin/bash"
As you can see, all you need is a blocking command.
Just a quick note:
I have tested a single image based on golang, and when I call docker-compose down here is what I get:
version: "3.1"
...
command: tail -f /dev/null # stopping container takes about 10 sec.
tty: true # stopping container takes about 2 sec.
My system info:
Ubuntu 18.04.4 LTS (64-bit)
Docker version 19.03.6, build 369ce74a3c
docker-compose version 1.26.0, build d4451659
As the commenter stated, we'd have to see the Dockerfile in question to give you a complete answer, but this is a very common mistake. I can pretty much guarantee that the command you're trying to run is starting a background process. This might be the command you'd run in non-Docker situations, but it's the wrong thing to do in a Dockerfile. For instance, if what you're running is typically defined as a system service, you might use something like "systemctl start". That would start the process in the background, which will not work. You have to run the process in the foreground, so the entire process will block.
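For instance, with Apache (illustrative only; the exact binary name depends on your base image):
# wrong: the service starts in the background, the command returns,
# and the container exits
# CMD service apache2 start
# right: run the daemon in the foreground so it blocks as PID 1
CMD ["apache2ctl", "-D", "FOREGROUND"]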
Okay, I found my mistake. In the Dockerfile for the image used by compose, I specified that the base image should be ubuntu:latest, but I had previously created an image called ubuntu myself, and that image did not work. So I was not using the original ubuntu image but rather a corrupt version of my own image that was also called ubuntu.

Linode/lamp + docker-compose

I want to use the linode/lamp container to work on a WordPress project locally without messing up my machine with all the LAMP dependencies.
I followed this tutorial which worked great (it's actually super simple).
Now I'd like to use docker-compose, because I find it more convenient to simply type docker-compose up and be good to go.
Here is what I have done:
Dockerfile:
FROM linode/lamp
RUN service apache2 start
RUN service mysql start
docker-compose.yml:
web:
  build: .
  ports:
    - "80:80"
  volumes:
    - .:/var/www/example.com/public_html/
When I do docker-compose up, I get:
▶ docker-compose up
Recreating gitewordpress_web_1...
Attaching to gitewordpress_web_1
gitewordpress_web_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
I'm guessing I need a command argument in my docker-compose.yml but I have no idea what I should set.
Any idea what I am doing wrong?
You cannot start those two processes in the Dockerfile.
The Dockerfile determines what commands are run when building the image.
In fact, many base images, like the Debian ones, are specifically designed to not allow starting any services during the build.
What you can do is create a file called run.sh in the same folder that contains your Dockerfile.
Put this inside:
#!/usr/bin/env bash
service apache2 start
service mysql start
tail -f /dev/null
This script just starts both services and forces the console to stay open.
You need to put it inside your container, though; this you do via two lines in the Dockerfile. Overall I'd use this Dockerfile then:
FROM linode/lamp
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/bash", "-lc", "/run.sh"]
This ensures that the file is properly run when firing up the container, so that it stays running, and also that those services actually get started.
You should also make sure that port 80 is actually available on your host machine. If anything is already bound to it, this compose file will not work.
Should this be the case for you (or if you're not sure), try changing the port line to something like 81:80 and try again.
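That is, in the compose file:
ports:
  - "81:80"  # host port 81 -> container port 80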
I would like to point you to another resource where a LAMP server is already configured for you; you might find it handy for your local development environment.
You can find it mentioned below:
https://github.com/sprintcube/docker-compose-lamp

Docker Compose and execute command on starting container

I am trying to get my head around the COMMAND option in docker compose. In my current docker-compose.yml I start the prosody docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a user can be made using the environment options LOCAL, DOMAIN, and PASSWORD, but this creates a single user. I need a list of users.
Reading around the internet, it seemed that using the command option I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this does not seem to work; I checked the running container using docker exec -it <containerid> bash, but the user was not created.
Is it possible to execute a command on a started container using docker-compose, or are there other options?
The COMMAND instruction is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
The command is interpreted as arguments to the ENTRYPOINT of the image, which in debian's case is /bin/bash. In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would have expected any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"] which is presumably a long-running process which starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.
CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how this affects the other. If you look at the Dockerfile, you will see that the default command is to start prosody through CMD ["prosodyctl", "start"]. entrypoint.sh just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
instead. More elegant and somehow what the creator seems to have intended (judging from the entrypoint.sh script) would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).
