Today I tried to run my containers in detached mode and ran into an issue.
When I ran docker container run -d nginx, the nginx image was pulled and the container's output was not shown, as expected in detached mode.
Then I ran docker container ls, which shows only running containers, and it listed my nginx container as running.
Then I tried the same thing with the ubuntu image, i.e.
docker container run -d ubuntu, but when I ran docker container ls the ubuntu container was not listed; only the nginx container was running.
Why is it so?
You don't see a running container with the ubuntu image because the container stops immediately after being started. While the nginx image starts an nginx server that keeps the container running, the ubuntu image's default command is bash, and bash exits right away when no terminal is attached to it. You will be able to see your stopped ubuntu container with docker ps -a.
If you want to keep the ubuntu container running, you need to pass it a command that starts a long-running process, e.g. docker run -d ubuntu tail -f /dev/null
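A quick way to see the difference yourself (container names and IDs will be whatever Docker assigns):

docker run -d ubuntu                      # bash exits immediately, so the container stops
docker ps -a                              # the ubuntu container shows up with an Exited status
docker run -d ubuntu tail -f /dev/null    # tail never returns, so the container stays up
docker ps                                 # now the ubuntu container is listed as running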
Related
I built an image on a private server with docker build -t image-name . and then ran it with docker run -it image-name. But when I check the container list with docker ps, it doesn't show up.
Probably the container is failing to start, so it is not listed among the running containers on your server.
Check the status of all your containers, including stopped ones, with docker ps -a.
Then see the logs of the container using docker logs.
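For example (my-container is just a placeholder for whatever name or ID docker ps -a reports):

docker ps -a                 # find the container's name/ID and its exit status
docker logs my-container     # print everything the container wrote to stdout/stderr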
If I run this command: docker-compose up --detach:
It just returns the default help text for the command:
Builds, (re)creates, starts, and attaches to containers for a service.
Unless they are already running, this command also starts any linked services.
The `docker-compose up` command aggregates the output of each container. When
the command exits, all containers are stopped. Running `docker-compose up -d`
starts the containers in the background and leaves them running.
If there are existing containers for a service, and the service's configuration
or image was changed after the container's creation, `docker-compose up` picks
up the changes by stopping and recreating the containers (preserving mounted
volumes). To prevent Compose from picking up changes, use the `--no-recreate`
flag.
How can I get it to run?
I've tried docker-compose up -d, which returns:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
Are you sure the Docker daemon is running? Try running this:
sudo systemctl start docker
Or on older systems:
sudo service docker start
You can also load docker-machine's environment variables to point your client at the right daemon and debug the problem:
eval "$(docker-machine env default)"
then
docker-compose --verbose up -d
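Putting it together for a docker-machine setup (assuming the machine is named default, as the error message suggests):

docker-machine start default            # start the VM that hosts the Docker daemon
eval "$(docker-machine env default)"    # point docker/docker-compose at that daemon
docker-compose up -d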
It seems the Docker daemon is not running. Start it with this command (this lasts only until the next reboot):
systemctl start docker
If on older OSes:
service docker start
Then run your command, docker-compose up -d. If it worked, you should now enable Docker so that the daemon starts automatically when the OS reboots:
systemctl enable docker
If on older OSes (the service command cannot enable a unit at boot; use the init system's own tool instead):
chkconfig docker on            # RHEL/CentOS-style systems
update-rc.d docker defaults    # Debian/Ubuntu-style systems
I am running a single Docker container on two different ports using the command below:
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} -p ${EXTERNAL_PORT_NUMBER_SECOND}:${INTERNAL_PORT_NUMBER_SECOND} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -m 1024M --memory-swap -1 -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I can see the container is running fine.
My question is: how can I see the logs of this Docker container?
Every time I run sudo docker logs database-service -f I only see the logs of the service listening on port 9003.
How can I view the logs of the service listening on port 9113?
You are getting all the logs that were written to stdout or stderr in the container.
This has nothing to do with which ports the processes are exposed on.
If two processes are running inside the container and both write their logs to the console, you will get both of them in the docker logs output for that container.
If the services write to separate log files instead, you can try the multitail utility via docker exec to tail more than one log file at a time.
For that you have to install it inside the container.
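A rough sketch of that approach (the apt-based install and the log file paths are assumptions about what is inside your image; adjust them to match):

docker exec -it database-service bash -c "apt-get update && apt-get install -y multitail"    # assumes a Debian/Ubuntu base image
docker exec -it database-service multitail /var/log/service-9003.log /var/log/service-9113.log    # hypothetical log paths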
You can also bind-mount the container's service log paths to files on the host and read the logs there:
docker run -v 'path_to_your_host_logs':'container_service_log_path'

For example:

docker run -v '/home/user/app/apache_access.log':'/var/log/apache_access.log'
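Once the container's log path is bound to a host file, you can read it with ordinary tools on the host, e.g. using the example path above:

tail -f /home/user/app/apache_access.log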
I am using CoreOS as the host system and there are many Docker containers running on it. How can I get the docker launch command for these containers? I have found some approaches that reverse-engineer it by installing a third-party library, but that doesn't work on CoreOS since I can only install Docker containers there.
The reason I want to know the launch command is that I have a running container (it was launched by some other scripts). I attach to this container and fork a process. The container exits with code 137 if I kill that process. It works fine if I launch the container with this command: docker run -it -d $NAME bash. I am not sure why this happens. There must be something different about the launch command.
I am using docker-machine with Google Compute Engine (GCE) to run a Docker Swarm cluster. I created a swarm successfully with 2 nodes (swnd-01 & swnd-02) in the cluster. I created a daemon container like this in the swarm-manager environment:
docker run -d ubuntu /bin/bash
docker ps shows the container running on swnd-01. When I tried executing a command in the container using docker exec, I got an error that the container is not running, while docker ps shows otherwise. I ssh'ed into swnd-01 via docker-machine and found that the container had exited as soon as it was created. I tried the docker run command inside swnd-01 but it still exits. I don't understand this behavior.
Any suggestions will be thankfully received.
The reason it exits is that the /bin/bash command completes immediately, and a Docker container only runs as long as its main process does (if you run such a container with the -it flags, the process keeps running while the terminal is attached).
As to why the swarm manager thought the container was still running, I'm not sure. I guess there is a short delay while Swarm updates the status of everything.
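If you just want the container to stay up so you can docker exec into it later, either keep stdin attached or give it a long-running command (the same tail trick mentioned above works here):

docker run -dit ubuntu /bin/bash         # -i keeps stdin open, so bash does not exit
docker run -d ubuntu tail -f /dev/null   # or run a command that never returns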