Containers exit when I run them separately from docker-compose - docker

I am running one of the open source microservices from here. When I run docker ps, all the containers' status is Up, meaning they keep running. My issue is that when I run a container separately, it does not keep running and exits. Below is one of the services defined in the docker-compose file.
social-graph-service:
  image: yg397/social-network-microservices
  hostname: social-graph-service
  restart: always
  entrypoint: SocialGraphService
When I run it using the command
sudo docker run -d --restart always --entrypoint SocialGraphService --hostname social-graph-service yg397/social-network-microservices
then its status is not Up; it exits shortly after starting. Why do all the containers run continuously when I start them with sudo docker-compose up, but exit when I run them individually?

It looks like the graph service depends on MongoDB in order to run. My guess is it crashes when you run it individually because the mongo instance doesn't exist and it fails to connect.
The author of the repo wrote the docker-compose file to hide away some of the complexity from you, but that's a substantial tree of relationships between microservices, and most of them seem to depend on others existing in order to boot up.
-- Update --
The real issue is in the comments below. OP was already running the docker-compose stack while attempting to start another container, but had not connected the new container to the Docker network generated by docker-compose.
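As a sketch of that fix: docker-compose creates a network named after the project (typically `<directory>_default`; the name `socialnetwork_default` below is an assumption, so check `docker network ls` for the real one). Attaching the standalone container to it lets the service resolve and reach MongoDB and the other services:

```shell
sudo docker run -d --restart always \
  --entrypoint SocialGraphService \
  --hostname social-graph-service \
  --network socialnetwork_default \
  yg397/social-network-microservices
```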

Related

The container exited with status code (1)

I encountered an issue when running the docker container.
An error log was generated as below:
[Error] mysqld : unknown variable “wait_timeout = 288000”.
I wanted to test some docker container features.
So, I opened a bash shell in the container and edited the file /etc/mysql/my.cnf.
And I added the variable “wait_timeout = 288000” below [mysqld] option.
However, after rebooting, when I ran the container, it exited immediately with status code (1).
I knew that the error was caused by the variable I just added.
So, I wanted to delete the variable, but now the docker container bash won’t open.
Is there any way that I can delete the variable “wait_timeout” in this case?
If there isn’t, could you recommend other methods for troubleshooting?
Thanks for checking the issue.
Delete and recreate the container, and it will start fresh from a clean container filesystem.
That is probably also a better way to modify the database configuration (if you do, in fact, need a custom my.cnf). You can bind-mount a directory of configuration files into the container at startup time:
docker run -d -p 3306:3306 --name mysql \
  -v $PWD/mysql-conf:/etc/mysql/conf.d \
  mysql:8
Then when the configuration changes, you can delete and recreate this container:
docker stop mysql
docker rm mysql
docker run -d -p 3306:3306 ... mysql:8 # as above
(See "Using a custom MySQL configuration file" in the Docker Hub mysql image page for more information.)
Deleting and recreating Docker containers is very routine, and one of the benefits is that a new container always starts with a "clean" filesystem. This particular setup also keeps the modified configuration file outside the container, so if you are forced to recreate the container (to upgrade MySQL to get a critical security fix, for example), it's something you're used to doing and you won't lose data or settings.
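For reference, the mounted mysql-conf directory might contain a file like the following (the filename wait-timeout.cnf is hypothetical; MySQL reads any .cnf file in /etc/mysql/conf.d):

```ini
# mysql-conf/wait-timeout.cnf
[mysqld]
wait_timeout = 288000
```

Because the file lives on the host, a mistake like the one in the question can be fixed with an editor on the host and a container restart, instead of being trapped inside a container that won't boot.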

Ignore container exit when using docker-compose

I am setting up a test infrastructure using docker-compose. I want to use the docker-compose option --exit-code-from to return the exit code from the container that is running tests. However, I also have a container that runs migrations on my database container using the sequelize cli. This migrations container exits with code 0 when migrations are complete and then my tests run. This causes an issue with both the --exit-code-from and --abort-on-container-exit options. Is there a way to ignore when the migration container exits?
Don't use docker-compose up for running one-off tasks. Use docker-compose run instead, as the documentation suggests:
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
Source: https://docs.docker.com/compose/faq/
For example:
docker-compose build my_app
docker-compose run db_migrations # this starts the services it depends on, such as the db
docker-compose run my_app_tests
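As a sketch, a compose file supporting that workflow might look like the following (the service names and commands are placeholders; the question mentions sequelize, so the migration command assumes sequelize-cli):

```yaml
services:
  db:
    image: postgres:15
  db_migrations:
    build: .
    command: npx sequelize-cli db:migrate
    depends_on:
      - db
  my_app_tests:
    build: .
    command: npm test
    depends_on:
      - db
```

With this layout, `docker-compose run db_migrations` starts only db and the migrations, exits cleanly, and then `docker-compose run my_app_tests` returns the test suite's exit status directly.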
--exit-code-from implies --abort-on-container-exit, which according to documentation
--abort-on-container-exit Stops all containers if any container was stopped.
But you could try:
docker inspect <container ID> --format='{{.State.ExitCode}}'
You can get a list of all (including stopped) containers with
docker container ls -a
Here's a nice example: Checking the Exit Code of Stopped Containers
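Putting those pieces together, a small shell sketch (the container name tests is a placeholder) that waits for the test container and propagates its exit code might look like:

```shell
docker wait tests   # blocks until the container stops, then prints its exit code
status=$(docker inspect tests --format='{{.State.ExitCode}}')
if [ "$status" -ne 0 ]; then
  echo "tests failed with exit code $status"
  exit "$status"
fi
```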

What's the difference between docker-compose up -d and docker-compose up --build?

I'm wondering what is the difference between those two commands?
When I'm doing docker-compose up --build I got a message:
php-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs).
I read that it's because it runs as a foreground process and I need to use -d instead.
After running docker-compose up -d I don't get that message.
And the main question: do these two commands produce different results?
From the docs
docker-compose up builds, (re)creates, starts, and attaches to containers for a service.
docker-compose up -d starts the containers in the background and leaves them running. (this means that if you want to see the logs of the containers you will have to use docker-compose logs -f)
docker-compose up --build builds images before starting containers
This similar question: docker-compose up vs docker-compose up --build vs docker-compose build --no-cache
mentions that:
if you add the --build option, it is forced to build the images even when not needed.

Difference between docker-compose run, start, up

I'm new to Docker.
What is the difference between these?
docker run 'an image'
docker-compose run 'something'
docker-compose start 'docker-compose.yml'
docker-compose up 'docker-compose.yml'
Thanks in advance.
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
Also: https://docs.docker.com/compose/reference/
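The difference can be seen in a short session (the service name web is a placeholder):

```shell
docker-compose up -d        # creates and starts all services in the background
docker-compose stop         # stops the containers but keeps them around
docker-compose start        # restarts those existing containers; never creates new ones
docker-compose run web sh   # spins up a one-off, interactive container for one service
```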

How to stop all containers when one container stops with docker-compose?

Up until recently, when one ran docker-compose up for a bunch of containers and one of the started containers stopped, all of the containers were stopped. This is no longer the case since https://github.com/docker/compose/issues/741, and this is really annoying for us: we use docker-compose to run Selenium tests, which means starting the application server, starting the Selenium hub + nodes, starting the test driver, then exiting when the test driver stops.
Is there a way to get back old behaviour?
You can use:
docker-compose up --abort-on-container-exit
Which will stop all containers if one of your containers stops
In your docker compose file, setup your test driver container to depend on other containers (with depends_on parameter). Your docker compose file should look like this:
services:
  application_server:
    ...
  selenium:
    ...
  test_driver:
    entrypoint: YOUR_TEST_COMMAND
    depends_on:
      - application_server
      - selenium
With dependencies expressed this way, run:
docker-compose run test_driver
and all the other containers will shut down when the test_driver container is finished.
This solution is an alternative to the docker-compose up --abort-on-container-exit answer. The latter will also shut down all other containers if any of them exits (not only the test driver). It depends on your use case which one is more adequate.
Did you try the work around suggested on the link you provided?
Assuming your test script looked similar to this:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build
When the application tests ended, Compose would exit and the test run would finish.
In this case, with the new docker-compose version, change your test container to have a default no-op command (something like echo, or true), and change your test script as follows:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build -d
$ docker-compose run tests test_command...
$ docker-compose stop
Using run allows you to get the exit status from the test run, and you only see the output of the tests (not all the dependencies).
Reference
If this is not acceptable, you could refer to Docker Remote API and watch for the stop event for the containers and act on it.
An example usage is this docker-gen tool written in golang which watches for container start events, to automatically regenerate configuration files.
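The same events are also exposed on the CLI; a minimal sketch (the container name test_driver matches the compose example above) that stops the stack when that container dies:

```shell
docker events --filter 'event=die' --filter 'container=test_driver' \
  --format '{{.Actor.Attributes.name}} exited' |
while read -r line; do
  docker-compose stop
  break
done
```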
I'm not sure this is the perfect answer to your problem, but maestro for Docker lets you manage multiple Docker containers as a single unit.
It should feel familiar as you group them using a YAML file.