Docker keeps restarting without error in logs

I'm trying to create a simple instance using docker-compose with this simple YAML (it's a Laravel project, but right now I'm not initializing anything):
version: '3'
services:
  api:
    image: amazonlinux
    container_name: test-backend
    restart: unless-stopped
    working_dir: /var/www
    ports:
      - 8000:80
    volumes:
      - .:/var/www
      - ~/.ssh:/root/.ssh
Here I'm just trying to create an Amazon Linux instance to test some libraries I'm installing so the backend can run, but for some reason, when I run docker-compose up, the instance keeps restarting. I tried checking the logs, but they are empty; there is no error message, warning, or anything that tells me what is happening.
I tried running the instance manually with docker run -dit amazonlinux:latest and that works: it creates an Amazon Linux container that doesn't restart. But the compose one keeps doing it. I have also tried wiping everything with
- docker rm $(docker ps -aq) -f
- docker volume rm $(docker volume ls -q) -f
- docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
Yet it keeps happening. I restarted Docker; it still happens. Other projects' containers don't have any problem and can be launched with docker-compose up; it's just this one causing trouble. Does someone know what I might be doing wrong? As an additional detail, I believe this started happening after I hit an accidental Ctrl+C in the middle of a docker-compose up, but I'm not sure if that might be the cause.

You're not giving your container anything to do.
If you docker image inspect amazonlinux, you can see that the default behavior of the image is to start an interactive bash shell:
[...]
"Cmd": [
    "/bin/sh",
    "-c",
    "#(nop) ",
    "CMD [\"/bin/bash\"]"
],
[...]
When you start a container via docker-compose up, this is similar to running docker run <containername>: by default, the container is not attached to stdin and does not allocate a tty. When bash is run in this environment, its default behavior is to exit immediately. You can simulate this yourself by running:
bash < /dev/null
This is why your container keeps restarting: it starts up, bash attempts to run an interactive shell but it can't, so it exits.
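To see the exit status as well, try the same thing on any host with bash installed; with stdin redirected from /dev/null, bash reads end-of-file immediately and exits cleanly:

```shell
# Non-interactive bash with empty stdin: reads EOF, exits with status 0
bash < /dev/null
echo "bash exited with status $?"
```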
The solution here is to run something besides bash. What are you trying to do with this container? You would typically run some sort of service (a web server, a database server, a message broker, etc.). Set an appropriate command key for the service in your docker-compose.yml.
If you're really just playing around, a good option is sleep inf, which means "do nothing, forever". This will allow the container to start and keep running, and you can then use docker-compose exec to run commands inside the container:
version: '3'
services:
  api:
    image: amazonlinux
    container_name: test-backend
    restart: unless-stopped
    working_dir: /var/www
    ports:
      - 8000:80
    volumes:
      - .:/var/www
      - ~/.ssh:/root/.ssh
    command:
      - sleep
      - inf

Related

Docker container immediately exits after loading from compose file (exit code 0)

Perhaps I'm missing something obvious, but I just tried to add the firebase-tools docker image to my docker-compose file:
version: '3.6'
services:
  firebase-tools-test:
    tty: true
    image: andreysenov/firebase-tools
    ports:
      - 9099:9099
      - 4000:4000
      - 5000:5000
      - 5001:5001
      - 9199:9199
      - 9005:9005
      - 9000:9000
      - 8085:8085
      - 8080:8080
However, when I run it, it immediately exits with exit code 0. The logs don't show anything at all, and I wanted to know if this was a simple misconfiguration and how I could get more verbose logging to see why it's exiting.
Docker does not keep your container running by default. If there is nothing left to do, it will exit. To keep it waiting for input, create a TTY by using docker run -it for interactive or docker run -dt for detached mode. For compose it would be tty: true; alternatively, you can overwrite the given CMD with something like
entrypoint: ["tail"]
command: ["-f", "/dev/null"]
or command: tail -F anything, or another trick that keeps a process running forever.
Remark: This works because the container is just running sh anyway. If there is something different in CMD, you have to call that and chain the above, e.g. command: <command to start container logic> && tail -F anything, or something like that.

error while removing network: <network> id has active endpoints

I am trying to run docker-compose down from a Jenkins job:
sudo docker-compose down --remove-orphans
I have used the --remove-orphans flag with docker-compose down.
Still, it gives the error below:
Removing network abc
error while removing network: network id ************ has active endpoints
Failed command with status 1: sudo docker-compose down --remove-orphans
Below is my docker compose:
version: "3.9"
services:
  abc:
    image: <img>
    container_name: 'abc'
    hostname: abc
    ports:
      - "5****:5****"
      - "1****:1***"
    volumes:
      - ~/.docker-conf/<volume>
    networks:
      - <network>
  container-app-1:
    image: <img2>
    container_name: 'container-app-1'
    hostname: 'container-app-1'
    depends_on:
      - abc
    ports:
      - "8085:8085"
    env_file: ./.env
    networks:
      - <network>
networks:
  <network>:
    driver: bridge
    name: <network>
To list your networks, run docker network ls. You should see your <network> there. Then get the containers still attached to that network with (replacing your network name at the end of the command):
docker network inspect \
  --format '{{range $cid,$v := .Containers}}{{printf "%s: %s\n" $cid $v.Name}}{{end}}' \
  "<network>"
For the various returned container id's, you can check why they haven't stopped (inspecting the logs, making sure they are part of the compose project, etc), or manually stop them if they aren't needed anymore with (replacing the <cid> with your container id):
docker container stop "<cid>"
Then you should be able to stop the compose project.
There is also a situation where there are no containers at all, but the error still occurs. In that case, systemctl restart docker helped me.
This can also happen when you have a db instance running in a separate container and using the same network. In this case, removing the db instance using the command
docker container stop "<cid>"
will stop the container. We can find the container id that is using the network with the command provided by @BMitch:
docker network inspect \
  --format '{{range $cid,$v := .Containers}}{{printf "%s: %s\n" $cid $v.Name}}{{end}}' \
  "<network>"
But in my case, when I did that, it also made that postgres instance "orphaned". Then I did
docker-compose up -d --remove-orphans
After that, I booted up a new db instance (postgres) using docker compose file and mapped the volume of data directory of that to the data directory of the previous db instance.
volumes:
  - './.docker/postgres/:/docker-entrypoint-initdb.d/'
  - ~/backup/postgress:/var/lib/postgresql/data
My problem was solved only by restarting Docker and then deleting the container manually from Docker Desktop.

image runs properly using docker run -dit, but exits using docker stack deploy -c

I've been porting a web service to Docker recently. As mentioned in the title, I'm encountering a weird scenario wherein, when I run it using docker run -dit, the service runs in the background, but when I use a docker-compose.yml, the service exits.
To be clearer, I have this entrypoint in my Dockerfile:
ENTRYPOINT ["/data/start-service.sh"]
this is the code of start-service.sh:
#!/bin/bash
/usr/local/bin/uwsgi --emperor=/data/vassals/ --daemonize=/var/log/uwsgi/emperor.log
/etc/init.d/nginx start
exec "$@"
As you can see, I'm just starting uwsgi and nginx in this shell script. The last line (exec) just makes the script run whatever parameter is passed to it and keeps it running. Then I run this using:
docker run -dit -p 8080:8080 --name=web_server webserver /bin/bash
As mentioned, the service runs OK and I can access the webservice.
Now, I tried to deploy this using a docker-compose.yml, but the service keeps exiting/shutting down. I attempted to retrieve the logs, but had no success. All I can see from docker ps -a is that it runs for a second or two (or three), and then exits.
Here's my docker-compose.yml:
version: "3"
services:
  web_server:
    image: webserver
    entrypoint:
      - /data/start-service.sh
      - /bin/bash
    ports:
      - "8089:8080"
    deploy:
      resources:
        limits:
          cpus: "0.1"
          memory: 2048M
      restart_policy:
        condition: on-failure
    networks:
      - webnet
networks:
  - webnet
The entrypoint entry in the yml file is just to make sure that the start-service.sh script is run with /bin/bash as its parameter, to keep the service running. But again, the service shuts down.
bash will exit without a proper tty. Since you execute bash via exec, it becomes PID 1. Whenever PID 1 exits, the container is stopped.
To prevent this, add tty: true to the service's description in your compose file. This is basically the same as using -t with the docker run command.
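Applied to the compose file from the question, that's one extra line (a sketch trimmed to the relevant keys):

```yaml
services:
  web_server:
    image: webserver
    tty: true          # allocate a tty so bash (PID 1) keeps running
    entrypoint:
      - /data/start-service.sh
      - /bin/bash
```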

Docker container always restarting

I'm trying to start a Docker container from the debian image, with a docker-compose file.
But when I do docker ps -a, the container is always restarting, and I don't know why.
Here is my Dockerfile:
FROM debian:jessie
ENV DEBIAN_FRONTEND noninteractive
RUN mkdir /home/server
RUN cd /home/server
VOLUME /home/server
CMD /bin/bash
Here is my docker-compose file:
version: '2'
services:
  server:
    build: .
    restart: always
    container_name: server
    volumes:
      - "/home/binaries:/home/server"
When docker-compose runs your "server" container, it will immediately terminate. A Docker container needs at least one running process; otherwise, the container will exit. In your example, you are not starting a process that stays alive.
As you have configured restart: always, docker-compose will endlessly start new containers for "server". That should explain the behavior you describe.
I have seen docker-compose files where data containers were defined which only mounted volumes (in combination with volumes_from). They deliberately used /bin/true as a command, which also led to permanent but harmless restarts. For example:
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
If restarts are not what you want, you could start a process in the container that does something useful, like running a web server or a database. But a bash alone is not something that will keep a container alive. A bash running in non-interactive mode will exit immediately.
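If you only need a long-lived container to exec into, one common alternative (a sketch based on the question's compose file) is to replace the bash CMD with a no-op foreground process; sleep infinity is available in GNU coreutils, as shipped in the debian image:

```yaml
services:
  server:
    build: .
    restart: always
    container_name: server
    command: ["sleep", "infinity"]   # foreground no-op keeps the container alive
    volumes:
      - "/home/binaries:/home/server"
```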

Why does docker compose exit right after starting?

I'm trying to configure docker-compose to use GreenPlum db in Ubuntu 16.04. Here is my docker-compose.yml:
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:
The issue is that when I run it with sudo docker-compose up, the GreenPlum db is shut down immediately after starting. It looks like this:
greenplum_1 | 20170602:09:01:01:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Starting Master instance 72ba20be3774 directory /gpdata/master/gpseg-1
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Command pg_ctl reports Master 72ba20be3774 instance active
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-No standby master configured. skipping...
greenplum_1 | 20170602:09:01:02:000050 gpstart:e1ae49da386c:gpadmin-[INFO]:-Database successfully started
greenplum_1 | ALTER ROLE
dockergreenplumn_greenplum_1 exited with code 0 <<----- Here
Actually, when I start it with just sudo docker run pivotaldata/gpdb-base it's ok.
What's wrong with the docker compose?
First of all, be cautious running this image: it looks to be badly maintained, and the information on Docker Hub indicates it's neither "official" nor "supported" in any way:
2017-01-09: Toolsmiths reviewed this image; it is not one we create. We make no promises about whether this is up to date or if it works. Feel free to email pa-toolsmiths@pivotal.io if you are the owner and are interested in collaborating with us on this image.
When using images from Docker Hub, it's recommended to either use official images or, when those are not available, prefer automated builds (in which case the source code of the image can be verified to see what's used to build the image).
I think the image is built from this GitHub repository, which means it has not been updated for over a year and uses an outdated (CentOS 6.7) base image that has a huge number of critical vulnerabilities.
Back to your question;
I tried starting the image, both with docker-compose and docker run, and both resulted in the same for me.
Looking at that image, it is designed to be run interactively, or to be used as a base image (and overriding the command).
I inspected the image to find out what the container's command is;
docker inspect --format='{{json .Config.Cmd}}' pivotaldata/gpdb-base
["/bin/sh","-c","echo \"127.0.0.1 $(cat ~/orig_hostname)\" >> /etc/hosts && service sshd start && su gpadmin -l -c \"/usr/local/bin/run.sh\" && /bin/bash"]
So, this is what's executed when the container is started;
echo "127.0.0.1 $(cat ~/orig_hostname)" >> /etc/hosts \
  && service sshd start \
  && su gpadmin -l -c "/usr/local/bin/run.sh" \
  && /bin/bash
Based on the above, there is no "foreground" process in the container, so the moment /usr/local/bin/run.sh finishes, a bash shell is started. A bash shell without a tty attached exits immediately, at which point the container exits.
To run this image
(Again; be cautious running this image)
Either run the image interactively, by passing it stdin and a tty (-i -t, or -it as a shorthand);
docker run -it pivotaldata/gpdb-base
Or you can run it "detached", as long as a tty is passed as well (add the -d and -t flags, or -dt as a shorthand); doing so keeps the container running in the background;
docker run -dit pivotaldata/gpdb-base
To do the same in docker-compose, add a tty to your service;
tty: true
Your compose file will then look like this;
version: '2'
services:
  greenplum:
    image: "pivotaldata/gpdb-base"
    ports:
      - "5432:5432"
    tty: true
    volumes:
      - gp_data:/tmp/gp
volumes:
  gp_data:
