I am setting up a simple GitLab CI pipeline for a Rails app with build, test and release stages:
build:
  stage: build
  script:
    - docker build --pull -t $TEST_IMAGE .
    - docker push $TEST_IMAGE

test:
  stage: test
  services:
    - docker:dind
  script:
    - docker pull $TEST_IMAGE
    - docker run -d --name mysql -e MYSQL_ROOT_PASSWORD=mysql_strong_password mysql:5.7
    - docker run -e RAILS_ENV=test --link mysql:db $TEST_IMAGE bundle exec rake db:setup
The build stage succeeds in building the Docker image and pushing it to the registry.
The test stage launches another MySQL container, which I use as my DB host, but it fails when establishing the connection to MySQL:
Couldn't create database for {"host"=>"db", "adapter"=>"mysql2", "pool"=>5, "username"=>"root", "encoding"=>"utf8", "timeout"=>5000, "password"=>"mysql_strong_password", "database"=>"my_tests"}, {:charset=>"utf8"}
(If you set the charset manually, make sure you have a matching collation)
rails aborted!
Mysql2::Error: Can't connect to MySQL server on 'db' (111 "Connection refused")
I also tried creating a separate Docker network using --network instead of the --link approach, but that did not help.
This happens only on the GitLab runner instance. When I perform those steps on my local machine it works fine.
After much reading I'm starting to think it is a bug in the Docker executor. Am I missing something?
Connection refused indicates that the containers know how to reach each other, but the target container has nothing accepting connections on the selected port. This most likely means you are starting your application before the database has finished initializing. My recommendation is to update your application, or create an entrypoint in your application container, to poll the database until it is up and running, and to fail after a few minutes if it never becomes available. I'd also recommend using networks rather than links, since links are deprecated and do not gracefully handle containers being recreated.
The behavior you're seeing is documented in the mysql image:
No connections until MySQL init completes
If there is no database initialized when the container starts, then a default database will be created. While this is the expected behavior, this means that it will not accept incoming connections until such initialization completes. This may cause issues when using automation tools, such as docker-compose, which start several containers simultaneously.
If the application you're trying to connect to MySQL does not handle MySQL downtime or waiting for MySQL to start gracefully, then putting a connect-retry loop before the service starts might be necessary. For an example of such an implementation in the official images, see WordPress or Bonita.
From the linked wordpress example, you can see their retry code:
$maxTries = 10;
do {
    $mysql = new mysqli($host, $user, $pass, '', $port, $socket);
    if ($mysql->connect_error) {
        fwrite($stderr, "\n" . 'MySQL Connection Error: (' . $mysql->connect_errno . ') ' . $mysql->connect_error . "\n");
        --$maxTries;
        if ($maxTries <= 0) {
            exit(1);
        }
        sleep(3);
    }
} while ($mysql->connect_error);
A sample entrypoint script to wait for mysql without changing your application itself could look like:
#!/bin/sh
# wait up to 300 seconds for mysql:3306 to accept TCP connections, then run the passed command
wait-for-it.sh mysql:3306 -t 300
exec "$@"
The wait-for-it.sh script comes from vishnubob/wait-for-it, and the exec "$@" at the end replaces PID 1 with the command you passed (e.g. bundle exec rake db:setup). The downside of this approach is that the database could be listening on a port before it is really ready to accept connections, so I still recommend doing a full login with your application in a retry loop.
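A minimal sketch of such a retry loop as an entrypoint, doing a real login before handing off to the job's command, could look like the following. It assumes the mysql client binary is available in the image and reuses the host and credentials from the job above:
#!/bin/sh
# retry a real MySQL login until it succeeds, then hand off to the passed command
i=0
until mysql -h db -u root --password=mysql_strong_password -e 'SELECT 1' >/dev/null 2>&1; do
  i=$((i + 1))
  if [ "$i" -ge 60 ]; then
    echo "MySQL did not become ready in time" >&2
    exit 1
  fi
  sleep 5
done
exec "$@"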
Given the following Docker Compose file....
version: '3.8'
services:
  producer:
    image: producer
    container_name: producer
    depends_on: [db]
    build:
      context: ./producer
      dockerfile: ./Dockerfile
  db:
    image: some-db-image
    container_name: db
When I do docker-compose up producer obviously the db service gets started too. When I CTRL+C both services are stopped. This is expected and fine.
But sometimes the db service has already been started earlier, from a different shell, so docker-compose up producer sees that db is running and only starts producer. But when I hit CTRL+C, both producer and db are stopped, even though db was not started as part of this docker-compose up command.
Is there a way to avoid having the dependency services stopped when stopping their "parent"?
When running just docker-compose up, the CTRL+C command always stops all running services in the current compose scope. It doesn't care about depends_on.
You would need to spin it up with detach option -d, like
docker-compose up -d producer
Then you can do
docker stop producer
And db service should still be running.
As I understand your question: You want to stop a container A which depends on another container B. But when stopping A, you don't want docker-compose to stop B.
Docker-compose also stops the containers that 'A' depends on ('B' in this case) when 'A' is stopped.
How I would approach this:
Split up the docker-compose files into A and B
In the docker-compose file for A, create a health check that tests (and waits) for container B to be alive (see the sketch after this list).
Since this is a database, you could do this with a dummy query.
Then you still have the dependency, but not the docker-compose behaviour of stopping dependent containers together.
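A rough sketch of what such a check could look like on the database side, assuming a MySQL-compatible image (the image name and values here are placeholders), so that A can retry its dummy query until the database reports healthy:
db:
  image: some-db-image
  healthcheck:
    test: ["CMD-SHELL", "mysqladmin ping -h localhost || exit 1"]
    interval: 5s
    timeout: 3s
    retries: 10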
You can't simply do that with CTRL+C.
Your docker-compose file and the services defined in it are treated as a project. You may notice that all containers, networks and volumes are prefixed with the name of the directory where the docker-compose file is located by default. This is the project name. It can be changed via an environment variable or the -p flag of the docker-compose command.
What docker-compose does is it keeps track of all the resources for a given project.
In your case there are two services: db and producer. Whenever you run docker-compose up, both of them start up. They both end up being part of the same project. The same applies when you only start one of the services (e.g. with docker-compose up db). You can later start the other service and it will still be part of the same project.
One more thing to note here: Whenever you run docker-compose without the -d (detached) flag, you get attached to the whole project, meaning whenever you hit CTRL+C, you'll stop all services. It does not matter if the last compose command started only one of the services or if they depend on each other. Attaching to the project and hitting CTRL+C will stop them.
A possible solution to your problem would be the following:
Start up your services via docker-compose up -d (both db and producer will get created). They are now in detached mode. If you still want to check the logs in real time (kinda like attaching), use docker-compose logs -f. Now, however, if you want to stop only one of the services you can simply do docker-compose stop $SVC_NAME (where $SVC_NAME is either db or producer) and this will keep the other one running. This way, whatever happens to your terminal session, your services won't stop, unless you explicitly tell them to.
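For example, with the compose file from the question:
docker-compose up -d            # start db and producer in the background
docker-compose logs -f          # optionally follow the logs; CTRL+C here only stops the log stream
docker-compose stop producer    # stop only producer; db keeps running
docker-compose ps               # confirm that db is still up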
Is there a way to avoid having the dependency services stopped when stopping their "parent"?
Yes.
Using the new docker compose command instead of docker-compose might solve your problem (Reference).
Simple example
Assuming you are now using the new version, your process could be something like this.
docker-compose.yml
version: "3.8"
services:
db:
build: .
producer:
build: .
depends_on: [db]
extra:
build: .
Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
ENTRYPOINT [ "/bin/sh", "script.sh" ]
script.sh
while :; do sleep 1; done
Suppose db has already been started with
$ docker compose up -d db
Then later,
$ docker compose up -d producer
Now you can stop only producer with
$ docker compose stop producer
You can check that db is still running with
$ docker compose ps
Notice the use of the -d flag for detached mode, as pointed out in another answer, so you don't need to kill the process with CTRL+C. Also, using the detached flag allows you to check the running services with docker compose ps.
An issue similar to yours was reported and fixed a while ago, as you can see here.
I was not able to reproduce the behavior you observe with a complete minimal example. Namely, when running docker compose stop producer, the underlying db is not stopped AFAICT.
Anyway, you may be interested in an alternative command that is a bit more flexible than docker compose up, regarding how to run "one-off commands": docker compose run.
The typical use cases are as follows:
docker compose run db bash → run the db service, replacing the default CMD with bash
docker compose run -d db → run the db service in the background (detach mode)
docker compose run --service-ports producer → run the service producer and its dependencies (unless they were run with docker compose up), enabling the ports mapping.
So for your specific use case, you could run:
docker compose up -d db
docker compose run --service-ports producer
I have my own CI server with GitLab and I'm trying to run the Docker runner (version 10.6) with this configuration:
image: php:7.1

services:
  - mysql:latest
  - redis:latest
  - elasticsearch:latest

before_script:
  - bash ci/install.sh > /dev/null
  - php composer install -a

stages:
  - test

test:
  stage: test
  variables:
    API_ENVIRONMENT: 'test'
  script:
    - echo "Running tests"
    - php composer app:tests
But every time the Elasticsearch service container is pulled and started, I get this error message:
*** WARNING: Service runner-1de473ae-project-225-concurrent-0-elasticsearch-2 probably didn't start properly.
Error response from daemon: Conflict. The container name "/runner-1de473ae-project-225-concurrent-0-elasticsearch-2-wait-for-service" is already in use by container "f26f56b2905e8c3da1977bc7c48e7eba00e943532146b7a8711f91fe67b67c3b". You have to remove (or rename) that container to be able to reuse that name.
*********
I also tried to log into this server and list all containers, but there is only the Redis one:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5cec961e03b2 811c03fb36bc "gitlab-runner-ser..." 39 hours ago Up 39 hours runner-1de473ae-project-247-concurrent-1-redis-1-wait-for-service
After googling this problem I found this issue: https://gitlab.com/gitlab-org/gitlab-runner/issues/2667, so I updated the runner to 10.6, but the problem persists.
In the end, there is no Elasticsearch running on my server, so my tests fail with:
FAILED: Battle/BattleDataElasticProviderTest.php method=testGetLocalBattles
Exited with error code 255 (expected 0)
Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster
Is there any way to start ES, or at least to put ES into a more verbose mode?
Thanks!
When a container is stopped, it still exists even though it is now in an exited state. Using the command docker ps -a shows you both running and exited containers.
To start a new container with an already existing name, you first need to manually remove the old container occupying that name with docker rm.
A convenient alternative is to pass the --rm flag when starting a container; the container will then be removed automatically once it stops.
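For example (the container and image names here are placeholders):
docker ps -a                                 # list running and exited containers
docker rm old-container-name                 # free up the name (add -f if it is still running)
docker run --rm --name my-service my-image   # this container removes itself once it stops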
I have two containers that are spun up using docker-compose:
web:
  image: personal/webserver
  depends_on:
    - database
  entrypoint: /usr/bin/runmytests.sh
database:
  image: personal/database
In this example, runmytests.sh is a script that runs for a few seconds, then returns with either a zero or non-zero exit code.
When I run this setup with docker-compose, web_1 runs the script and exits. database_1 keeps running, because the database process is still running.
I'd like to trigger a graceful exit on database_1 when web_1's tasks have been completed.
You can pass the --abort-on-container-exit flag to docker-compose up to have the other containers stop when one exits.
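For example (web here stands for whichever service runs the tests):
docker-compose up --abort-on-container-exit
docker-compose up --exit-code-from web   # implies --abort-on-container-exit and also returns web's exit code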
What you're describing is called a Pod in Kubernetes or a Task in AWS. It's a grouping of containers that form a unit. Docker doesn't have that notion currently (Swarm mode has "tasks" which come close but they only support one container per task at this point).
There is a hacky workaround besides scripting it as @BMitch described. You could mount the Docker daemon socket from the host. E.g.:
web:
  image: personal/webserver
  depends_on:
    - database
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: /usr/bin/runmytests.sh
and add the Docker client to your personal/webserver image. That would allow your runmytests.sh script to use the Docker CLI to shut down the database once the tests complete, e.g. docker kill database.
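A rough sketch of what the end of runmytests.sh could look like with this approach (the test command is a placeholder; it assumes the Docker CLI is installed in the image and the socket is mounted as above):
#!/bin/sh
/usr/bin/run-the-actual-tests   # placeholder for whatever runs your tests
rc=$?
docker kill database            # stop the database container via the mounted socket
exit $rc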
Edit:
Third option: if you want to stop all containers when one fails, you can use the --abort-on-container-exit option of docker-compose up, as @dnephin mentions in another answer.
I don't believe docker-compose supports this use case. However, making a simple shell script would easily resolve this:
#!/bin/sh
docker run -d --name=database personal/database
docker run --rm -it --entrypoint=/usr/bin/runmytests.sh personal/webserver
docker stop database
docker rm database
I'm developing a server and its client simultaneously and I'm designing them in Docker containers. I'm using Docker Compose to link them up and it works just fine for production but I can't figure out how to make it work with a development workflow in which I've got a shell running for each one.
My docker-compose-devel.yml:
server:
  image: node:0.10
client:
  image: node:0.10
  links:
    - server
I can do docker-compose up client or even docker-compose run client but what I want is a shell running for both server and client so I can make rapid changes to both as I develop iteratively.
I want to be able to do docker-compose run server bash in one window and docker-compose run --no-deps client bash in another window. The problem with this is that no address for the server is added to /etc/hosts on the client because I'm using docker-compose run instead of up.
The only solution I can figure out is to use docker run and give up on Docker Compose for development. Is there a better way?
Here's a solution I came up with that's hackish; please let me know if you can do better.
docker-compose-devel.yml:
server:
  image: node:0.10
  command: sleep infinity
client:
  image: node:0.10
  links:
    - server
In window 1:
docker-compose --file docker-compose-devel.yml up -d server
docker exec --interactive --tty $(docker-compose --file docker-compose-devel.yml ps -q server) bash
In window 2:
docker-compose --file docker-compose-devel.yml run client bash
I guess your main problem is about restarting the application when there are changes in the code.
Personally, I launch my applications in development containers using forever.
forever -w -o log/out.log -e log/err.log app.js
The -w option restarts the server when there is a change in the code.
I use a .foreverignore file to exclude changes to some files:
**/.tmp/**
**/views/**
**/assets/**
**/log/**
If needed, I can also launch a shell in a running container:
docker exec -it my-container-name bash
This way, your two applications could restart independently without the need to launch the commands yourself. And you have the possibility to open a shell to do whatever you want.
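Tying that into Compose, a rough sketch of what a dev service definition could look like (paths are placeholders, and it assumes forever is installed in the image, e.g. a custom image built on node:0.10):
server:
  image: node:0.10
  command: forever -w -o log/out.log -e log/err.log app.js   # assumes forever is available in the image
  working_dir: /src
  volumes:
    - ./src:/src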
Edit: a new proposal, considering that you need two interactive shells and not simply the ability to relaunch the apps on code changes.
Having two distinct applications, you could have a docker-compose configuration for each one.
The docker-compose.yml from the "server" app could contain this kind of information (I added different kind of configurations for the example):
server:
  image: node:0.10
  links:
    - db
  ports:
    - "8080:80"
  volumes:
    - ./src:/src
db:
  image: postgres
  environment:
    POSTGRES_USER: dev
    POSTGRES_PASSWORD: dev
The docker-compose.yml from the "client" app could use external_links to be able to connect to the server.
client:
  image: node:0.10
  external_links:
    - project_server_1:server # Use "docker ps" to know the name of the server's container
  ports:
    - "80:80"
  volumes:
    - ./src:/src
Then, use docker-compose run --service-ports service-name bash to launch each configuration with an interactive shell.
Alternatively, the extra_hosts key may also do the trick by reaching the server app through a port exposed on the host machine.
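A rough sketch of that variant (the IP address is a placeholder for the host machine's address, and it assumes the server's port 8080 is published on that host):
client:
  image: node:0.10
  extra_hosts:
    - "server:192.168.1.100"   # placeholder: the host machine's IP where the server's port is published
  ports:
    - "80:80"
  volumes:
    - ./src:/src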
With this solution, each docker-compose.yml file could be committed in the repository of the related app.
First thing to mention: for a development environment you want to use volumes in docker-compose to mount your app into the container when it is started (at runtime). Sorry if you're already doing this, but it's not clear from your docker-compose.yml.
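For example, a minimal sketch of what that could look like (the source paths are placeholders):
server:
  image: node:0.10
  volumes:
    - ./server:/src   # mount the server code into the container at runtime
client:
  image: node:0.10
  links:
    - server
  volumes:
    - ./client:/src   # same for the client code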
To answer your specific question: start your containers normally, then when running docker-compose ps you'll see the names of your containers, for example 'web_server' and 'web_client' (where web is the directory containing your docker-compose.yml file, or the project name).
Once you have the name of the container you want to connect to, you can run this command to get a bash shell inside the container that's running your server:
docker exec -it web_server bash.
If you want to learn more about setting up a development environment for a reasonably complex app, check out this article on development with docker-compose.
I am trying to get my head around the COMMAND option in Docker Compose. In my current docker-compose.yml I start the prosody Docker image (https://github.com/prosody/prosody-docker) and I want to create a list of users when the container is actually started.
The documentation of the container states that a user can be made using environment options LOCAL, DOMAIN, and PASSWORD, but this is a single user. I need a list of users.
From reading around the internet it seemed that, using the command option, I should be able to execute commands in a starting or running container.
xmpp:
  image: prosody/prosody
  command: prosodyctl register testuser localhost testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
But this does not seem to work; I checked the running container using docker exec -it <imageid> bash, but the user is not created.
Is it possible to execute a command on a started container using docker-compose or are there other options?
The COMMAND instruction is exactly the same as what is passed at the end of a docker run command, for example echo "hello world" in:
docker run debian echo "hello world"
The command replaces the image's default CMD and, if the image defines an ENTRYPOINT, is passed to it as arguments (the debian image has no ENTRYPOINT, so echo "hello world" simply runs as the container's command). In the case of your image, it gets passed to this script. Looking at that script, your command will just get passed to the shell. I would have expected any command you pass to run successfully, but the container will exit once your command completes. Note that the default command is set in the Dockerfile to CMD ["prosodyctl", "start"], which is presumably a long-running process that starts the server.
I'm not sure how Prosody works (or even what it is), but I think you probably want to either map in a config file which holds your users, or set up a data container to persist your configuration. The first solution would mean adding something like:
volumes:
  - ./my_prosody_config:/etc/prosody
To the docker-compose file, where my_prosody_config is a directory holding the config files.
The second solution could involve first creating a data container like:
docker run -v /etc/prosody -v /var/log/prosody --name prosody-data prosody-docker echo "Prosody Data Container"
(The echo should complete, leaving you with a stopped container which has volumes set up for the config and logs. Just make sure you don't docker rm this container by accident!)
Then in the docker-compose file add:
volumes_from:
  - prosody-data
Hopefully you can then add users by running docker exec as you did before, then running prosodyctl register at the command line. But this is dependent on how prosody and the image behave.
CMD is directly related to ENTRYPOINT in Docker (see this question for an explanation). So when changing one of them, you also have to check how this affects the other. If you look at the Dockerfile, you will see that the default command is to start prosody through CMD ["prosodyctl", "start"]. The entrypoint.sh script just passes this command through, as Adrian mentioned. However, your command overrides the default command, so your prosody daemon is never started. Maybe you want to try something like
xmpp:
  image: prosody/prosody
  command: sh -c "prosodyctl register testuser localhost testpassword && prosodyctl start"
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
instead. More elegant and somehow what the creator seems to have intended (judging from the entrypoint.sh script) would be something like
xmpp:
  image: prosody/prosody
  environment:
    - LOCAL=testuser
    - DOMAIN=localhost
    - PASSWORD=testpassword
  ports:
    - "5222:5222"
    - "127.0.0.1:5347:5347"
To answer your final question: no, it is not possible (as of now) to execute commands on a running container via docker-compose. However, you can easily do this with docker:
docker exec -i prosody_container_name prosodyctl register testuser localhost testpassword
where prosody_container_name is the name of your running container (use docker ps to list running containers).
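If you really need a whole list of users rather than a single one, a small shell loop around that same command is one way to do it; a rough sketch (the user names, password and container name are placeholders):
# register several users in the already-running prosody container
for u in alice bob carol; do
  docker exec -i prosody_container_name prosodyctl register "$u" localhost "some-password"
done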