Starting a MariaDB service in GitLab CI fails - docker

I tried to add a MariaDB service to GitLab CI for running tests.
I defined the database variables in the global variables section and added a mariadb service to the test job:
variables:
  MYSQL_DATABASE: backend
  MYSQL_USER: admin
  MYSQL_PASSWORD: admin

test:
  stage: test
  image: maven:3.6.3-openjdk-16
  services:
    - name: mariadb
      alias: db
      command: [ "--character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci" ]
  ...
When the code was pushed to GitLab.com, I saw the following logs:
Starting service mariadb:latest ...
Pulling docker image mariadb:latest ...
Using docker image sha256:e76a4b2ed1b4014a9d638e15cd852544d8171c64ed78096fbe6e5a108fbf20b0 for mariadb:latest with digest mariadb@sha256:9c681cefe72e257c6d58f839bb504f50bf259a0221c883fcc220f0755563fa46 ...
Waiting for services to be up and running...
*** WARNING: Service runner-fa6cab46-project-18612327-concurrent-0-0fddafc5b30beaaa-mariadb-0 probably didn't start properly.
Health check error:
start service container: Error response from daemon: Cannot link to a non running container: /runner-fa6cab46-project-18612327-concurrent-0-0fddafc5b30beaaa-mariadb-0 AS /runner-fa6cab46-project-18612327-concurrent-0-0fddafc5b30beaaa-mariadb-0-wait-for-service/service (docker.go:1156:0s)
Service container logs:
2021-04-13T08:30:50.821859467Z 2021-04-13 08:30:50+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 1:10.5.9+maria~focal started.
2021-04-13T08:30:50.920686916Z 2021-04-13 08:30:50+00:00 [ERROR] [Entrypoint]: mysqld failed while attempting to check config
2021-04-13T08:30:50.920714063Z command was: mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --verbose --help --log-bin-index=/tmp/tmp.Kzx9BNn0Bl --encrypt-tmp-files=0
2021-04-13T08:30:50.920720617Z mysqld: Character set 'utf8mb4 --collation-server=utf8mb4_unicode_ci' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file
2021-04-13T08:30:50.920875405Z mysqld: Character set 'utf8mb4 --collation-server=utf8mb4_unicode_ci' is not a compiled character set and is not specified in the '/usr/share/mysql/charsets/Index.xml' file
But when I run a MariaDB instance locally in Docker, it works well and I don't see these errors.
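The service logs hint at the cause: mysqld complains about the character set 'utf8mb4 --collation-server=utf8mb4_unicode_ci', i.e. the whole quoted string was passed as the value of a single option. In a GitLab CI service definition, each option should be its own array element. A sketch of the corrected service block (not verified against this pipeline):

```yaml
services:
  - name: mariadb
    alias: db
    # each mysqld flag as a separate list item, so they are passed as two arguments
    command: ["--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
```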

Related

Connecting the Cassandra container to the web application container fails - Error connecting to Node

So, I created two Docker images and I want to connect one to the other with Docker Compose. The first image is Cassandra 3.11.11 (from the official Docker Hub) and the other I created myself, with Tomcat 9.0.54 and my Spring Boot application.
I ran the docker-compose.yml below to connect the two containers, where cassandra:latest is the Cassandra image and centos7-tomcat9-myapp is my web app's image:
version: '3'
services:
  casandra:
    image: cassandra:latest
  myapp:
    image: centos7-tomcat9-myapp
    depends_on:
      - casandra
    environment:
      - CASSANDRA_HOST=cassandra
I ran this command to start the web app's image: docker run -it --rm --name fe3c2f120e01 -p 8888:8080 centos7-tomcat9-app
In the console log, Spring Boot showed me the error below. It happened because myapp's container could not connect to the Cassandra container.
2021-10-15 15:12:14.240 WARN 1 --- [ s0-admin-1]
c.d.o.d.i.c.control.ControlConnection : [s0] Error connecting to
Node(endPoint=127.0.0.1:9042, hostId=null, hashCode=47889c49), trying
next node (ConnectionInitException: [s0|control|connecting...]
Protocol initialization request, step 1 (OPTIONS): failed to send
request (io.netty.channel.StacklessClosedChannelException))
What am I doing wrong?
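One thing worth checking first (an observation about the excerpt above, not a confirmed fix): the service is declared as casandra (one "s") while CASSANDRA_HOST points at cassandra, so the hostname the app resolves may simply not exist on the Compose network; also, a container started separately with docker run is not on that network at all. A sketch with the spelling aligned:

```yaml
version: '3'
services:
  cassandra:            # name matches the hostname the app will look up
    image: cassandra:latest
  myapp:
    image: centos7-tomcat9-myapp
    depends_on:
      - cassandra
    environment:
      - CASSANDRA_HOST=cassandra
```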
EDIT
This is the nodetool status output for the Cassandra container:
[root@GDBDEV04 cassandradb]# docker exec 552d359d177e nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns (effective)  Host ID                               Rack
UN  172.18.0.3  84.76 KiB  16      100.0%            685b6e0a-13c2-4d41-ba99-f3b0fa94477c  rack1
EDIT 2
I need to connect the Cassandra DB image with the web application image; this is different from connecting microservices. I tried changing 127.0.0.1 (inside cassandra.yaml) to 0.0.0.0 (only to test) and the error persists. I think something is missing in my docker-compose.yml for sure, but I don't know what.
Finally I found the error. In my case, I needed to fix the docker-compose.yml file by adding the Cassandra and Tomcat ports. And in my application.properties (the Spring Boot config file), I changed the cluster's name.
docker-compose.yml:
version: '3'
services:
  cassandra:
    image: cassandra:latest
    ports:
      - "9044:9042"
  myapp:
    image: centos7-tomcat9-myapp
    ports:
      - "8086:8080"
    depends_on:
      - cassandra
    environment:
      - CASSANDRA_HOST=cassandra
application.properties:
# CASSANDRA (CassandraProperties)
cassandra.cluster = Test Cluster
cassandra.contactpoints=${CASSANDRA_HOST}
This question helped me to resolve my problem: Accessing docker container mysql databases

Service "postgis" fails to start in GitLab CI

I am trying to use the Docker image "postgis/postgis:latest" as a service in GitLab CI but the service fails to start.
This is the start of the CI log, the last line is most important:
Running with gitlab-runner 12.9.0 (4c96e5ad)
on xxxxxxx xxxxxxxx
Preparing the "docker" executor
Using Docker executor with image node:lts-stretch ...
Starting service redis:latest ...
Pulling docker image redis:latest ...
Using docker image sha256:4cdbec704e477aab9d249262e60b9a8a25cbef48f0ff23ac5eae879a98a7ebd0 for redis:latest ...
Starting service postgis/postgis:latest ...
Pulling docker image postgis/postgis:latest ...
Using docker image sha256:a412dcb70af7acfbe875faea4467a1594e7cba3dfca19e5e1c6bcf35286380df for postgis/postgis:latest ...
Waiting for services to be up and running...
*** WARNING: Service runner-xxxxxxxx-project-1-concurrent-0-postgis__postgis-1 probably didn't start properly.
Health check error:
service "runner-xxxxxxxx-project-1-concurrent-0-postgis__postgis-1-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-04-06T11:58:09.487216183Z The files belonging to this database system will be owned by user "postgres".
2020-04-06T11:58:09.487254326Z This user must also own the server process.
2020-04-06T11:58:09.487260023Z
2020-04-06T11:58:09.488674041Z The database cluster will be initialized with locale "en_US.utf8".
2020-04-06T11:58:09.488696993Z The default database encoding has accordingly been set to "UTF8".
2020-04-06T11:58:09.488704024Z The default text search configuration will be set to "english".
2020-04-06T11:58:09.488710330Z
2020-04-06T11:58:09.488716134Z Data page checksums are disabled.
2020-04-06T11:58:09.488721778Z
2020-04-06T11:58:09.490435786Z fixing permissions on existing directory /var/lib/postgresql/data ... ok
2020-04-06T11:58:09.490649106Z creating subdirectories ... ok
2020-04-06T11:58:09.490656485Z selecting dynamic shared memory implementation ... posix
2020-04-06T11:58:09.525841255Z selecting default max_connections ... 100
2020-04-06T11:58:09.562735034Z selecting default shared_buffers ... 128MB
2020-04-06T11:58:09.614695491Z selecting default time zone ... Etc/UTC
2020-04-06T11:58:09.616784837Z creating configuration files ... ok
2020-04-06T11:58:09.917724902Z running bootstrap script ... ok
2020-04-06T11:58:10.767115421Z performing post-bootstrap initialization ... ok
2020-04-06T11:58:10.924542026Z syncing data to disk ... ok
2020-04-06T11:58:10.924613120Z
2020-04-06T11:58:10.924659485Z initdb: warning: enabling "trust" authentication for local connections
2020-04-06T11:58:10.924720453Z You can change this by editing pg_hba.conf or using the option -A, or
2020-04-06T11:58:10.924753751Z --auth-local and --auth-host, the next time you run initdb.
2020-04-06T11:58:10.925150488Z
2020-04-06T11:58:10.925175359Z Success. You can now start the database server using:
2020-04-06T11:58:10.925182577Z
2020-04-06T11:58:10.925188661Z pg_ctl -D /var/lib/postgresql/data -l logfile start
2020-04-06T11:58:10.925195041Z
2020-04-06T11:58:10.974712774Z waiting for server to start....2020-04-06 11:58:10.974 UTC [47] LOG: starting PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2020-04-06T11:58:10.976267115Z 2020-04-06 11:58:10.976 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-04-06T11:58:11.003287980Z 2020-04-06 11:58:11.002 UTC [48] LOG: database system was shut down at 2020-04-06 11:58:10 UTC
2020-04-06T11:58:11.011056242Z 2020-04-06 11:58:11.010 UTC [47] LOG: database system is ready to accept connections
2020-04-06T11:58:11.051536096Z done
2020-04-06T11:58:11.051578164Z server started
2020-04-06T11:58:11.051855017Z
2020-04-06T11:58:11.052088262Z /usr/local/bin/docker-entrypoint.sh: sourcing /docker-entrypoint-initdb.d/10_postgis.sh
2020-04-06T11:58:11.218053189Z psql: error: could not connect to server: could not translate host name "postgres" to address: Name or service not known
could not translate host name "postgres" to address: Name or service not known
It seems to me that the host "postgres" is wrong. But the GitLab documentation says that the hostname will be the alias: https://docs.gitlab.com/ce/ci/docker/using_docker_images.html#accessing-the-services
Excerpt of my .gitlab-ci.yml:
image: node:lts-stretch
services:
  - name: redis:latest
  - name: postgis/postgis:latest
    alias: postgres
variables:
  NODE_ENV: production
  REDIS_HOST: redis
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGHOST: postgres
  PGUSER: postgres
  PGPASSWORD: postgres
I have also tried removing the alias and using "postgis-postgis" or "postgis__postgis" as the hostname as per the documentation, but I get the same error every time. I also tried the docker image "mdillon/postgis" because I saw it often, but got the same error.
I tried plugging in your .gitlab-ci.yml excerpt and got an error:
This GitLab CI configuration is invalid: jobs config should contain at least one visible job
Please provide a minimal reproducible example next time. ;)
I was able to reproduce and fix the issue. The fix was to remove the PGHOST setting. (You had its value set to postgres. Your main container can get to the postgis container using the alias postgres but the postgis container itself doesn't need a hostname to get to the PostgreSQL service because that service is listening on a local socket.)
PGHOST is used by psql in the "postgis" container (launched by the services directive), in the script https://github.com/postgis/docker-postgis/blob/master/initdb-postgis.sh (which ends up in /docker-entrypoint-initdb.d/10_postgis.sh -- see https://github.com/postgis/docker-postgis/blob/master/Dockerfile.template#L16)
The following .gitlab-ci.yml works:
image: node:lts-stretch

variables:
  POSTGRES_DB: postgres
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  PGUSER: postgres
  PGPASSWORD: postgres

services:
  - name: postgis/postgis:latest
    alias: postgres

job1:
  script: ping -c 3 postgres
Here is the job log:
Running with gitlab-runner 12.9.0 (4c96e5ad)
on docker-auto-scale 0277ea0f
Preparing the "docker+machine" executor
Using Docker executor with image node:lts-stretch ...
Starting service postgis/postgis:latest ...
Pulling docker image postgis/postgis:latest ...
Using docker image sha256:a412dcb70af7acfbe875faea4467a1594e7cba3dfca19e5e1c6bcf35286380df for postgis/postgis:latest ...
Waiting for services to be up and running...
Pulling docker image node:lts-stretch ...
Using docker image sha256:88c089733a3b980b3517e8e2e8afa46b338f69d7562550cb3c2e9fd852a2fbac for node:lts-stretch ...
Preparing environment
00:05
Running on runner-0277ea0f-project-17971942-concurrent-0 via runner-0277ea0f-srm-1586221223-45d7ab06...
Getting source from Git repository
00:01
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/atsaloli/service-postgis/.git/
Created fresh repository.
From https://gitlab.com/atsaloli/service-postgis
* [new ref] refs/pipelines/133464596 -> refs/pipelines/133464596
* [new branch] master -> origin/master
Checking out d20469e6 as master...
Skipping Git submodules setup
Restoring cache
00:02
Downloading artifacts
00:01
Running before_script and script
00:04
$ ping -c 3 postgres
PING postgres (172.17.0.3) 56(84) bytes of data.
64 bytes from postgis-postgis (172.17.0.3): icmp_seq=1 ttl=64 time=0.077 ms
64 bytes from postgis-postgis (172.17.0.3): icmp_seq=2 ttl=64 time=0.064 ms
64 bytes from postgis-postgis (172.17.0.3): icmp_seq=3 ttl=64 time=0.060 ms
--- postgres ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2062ms
rtt min/avg/max/mdev = 0.060/0.067/0.077/0.007 ms
Running after_script
00:01
Saving cache
00:02
Uploading artifacts for successful job
00:01
Job succeeded
As you can see in the ping command above, the container created from the image node:lts-stretch is able to access the postgis container using the postgres alias.
Does that unblock you?
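If ping isn't conclusive enough, a similar job can probe the database service itself. A sketch (assuming a psql client is installed in the job; node:lts-stretch does not ship one by default, and PGUSER/PGPASSWORD are picked up from the variables above):

```yaml
job1:
  script:
    - apt-get update && apt-get install -y postgresql-client
    # connects to the service via its alias and confirms PostGIS is loaded
    - psql -h postgres -c 'SELECT PostGIS_Version();'
```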

Elastic search TestContainers Timed out waiting for URL to be accessible in Docker

Local env:
MacOS 10.14.6
Docker Desktop 2.0.1.2
Docker Engine 19.03.2
Compose Engine 1.24.1
Test containers 1.12.1
I'm using Elastic search in an app, and I want to be able to use TestContainers in my integration tests. Sample code in a Play Framework app that uses ElasticSearch testcontainer:
private static final ElasticsearchContainer ES = new ElasticsearchContainer();

@BeforeAll
public static void setup() {
    ES.start();
}
This works when testing locally, but I want to be able to run this inside a Docker container to run on my CI server. I'm getting this exception when running the tests inside the Docker container:
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: alpine:3.5, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: quay.io/testcontainers/ryuk:0.2.3, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
?? Checking the system...
? Docker version should be at least 1.6.0
? Docker environment should have more than 2GB free disk space
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: docker.elastic.co/elasticsearch/elasticsearch:7.1.1, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
[error] d.e.c.1.1] - Could not start container
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://172.17.0.1:32911/ should return HTTP [200])
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:197)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:675)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:332)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:285)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:283)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:272)
at controllers.HomeControllerTest.setup(HomeControllerTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I've read the instructions here: https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
So my docker-compose.yml looks like the following (note: I've been testing with another ES container, seen commented out below, but I have not been using it with this test; $INSTANCE is a random 16-character string for a particular test run):
version: '3'
services:
#  elasticsearch:
#    container_name: elasticsearch_${INSTANCE}
#    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
#    ports:
#      - 9200:9200
#      - 9300:9300
#    command: elasticsearch -E transport.host=0.0.0.0
#    logging:
#      driver: 'none'
#    environment:
#      ES_JAVA_OPTS: "-Xms750m -Xmx750m"
  mainapp:
    container_name: mainapp_${INSTANCE}
    image: test_image:${INSTANCE}
    stop_signal: SIGKILL
    stdin_open: true
    tty: true
    working_dir: $PWD
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD:$PWD
    environment:
      ES_JAVA_OPTS: "-Xms1G -Xmx1G"
    command: /bin/bash /projectfolder/build/tests/wrapper.sh
I've also tried running my tests with this command but received the same error:
docker run -it --rm -v $PWD:$PWD -w $PWD -v /var/run/docker.sock:/var/run/docker.sock test_image:68F75D8FD4C7003772C7E52B87B774F5 /bin/bash /testproject/build/tests/wrapper.sh
I tried creating a postgres container the same way inside my testing container and had no issues. I've also tried making a GenericContainer with the Elasticsearch image, with no luck.
I don't think this is a connection issue, because if I run curl 172.17.0.1:{port printed to test console} from inside my test container, I do get a valid Elasticsearch response with status code 200. So it almost seems like it's timing out trying to connect even though the connection is there.
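One avenue worth trying (an assumption based on Testcontainers' docker-wormhole pattern, not something verified in this thread): when tests run inside a container that shares the host's Docker socket, Testcontainers may mis-detect the address to use for its wait-strategy checks, and the documented TESTCONTAINERS_HOST_OVERRIDE environment variable exists to pin it explicitly. For example, in the mainapp service:

```yaml
mainapp:
  environment:
    ES_JAVA_OPTS: "-Xms1G -Xmx1G"
    # hypothetical value: the docker0 gateway address seen in the error above
    TESTCONTAINERS_HOST_OVERRIDE: "172.17.0.1"
```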
Thanks.

Filebeat not running using docker-compose: setting 'filebeat.prospectors' has been removed

I'm trying to launch Filebeat using docker-compose (I intend to add other services later on), but every time I execute the docker-compose.yml file, the filebeat service ends up with the following error:
filebeat_1 | 2019-08-01T14:01:02.750Z ERROR instance/beat.go:877 Exiting: 1 error: setting 'filebeat.prospectors' has been removed
filebeat_1 | Exiting: 1 error: setting 'filebeat.prospectors' has been removed
I discovered the error by accessing the docker-compose logs.
My docker-compose file is as simple as it can be at the moment. It simply calls a filebeat Dockerfile and launches the service immediately after.
Next to my Dockerfile for filebeat I have a simple config file (filebeat.yml), which is copied to the container, replacing the default filebeat.yml.
If I execute the Dockerfile using the docker command, the filebeat instance works just fine: it uses my config file and identifies the "output.json" file as well.
I'm currently using Filebeat 7.2 and I know that "filebeat.prospectors" isn't used in it. I also know for sure that this specific setting isn't coming from my filebeat.yml file (you'll find it below).
It seems that, when using docker-compose, the container is accessing a different configuration file instead of the one copied into the container by the Dockerfile, but so far I haven't been able to figure out how, why, and how I can fix it.
Here's my docker-compose.yml file:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
The filebeat.yml file:
filebeat.inputs:
  - paths:
      - '/usr/share/filebeat/*.json'
    fields_under_root: true
    fields:
      tags: ['json']

output:
  logstash:
    hosts: ['localhost:5044']
The Dockerfile file:
FROM docker.elastic.co/beats/filebeat:7.2.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
COPY output.json /usr/share/filebeat/output.json
USER root
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
RUN mkdir /usr/share/filebeat/dockerlogs
USER filebeat
The output I'm expecting should be similar to the following, which comes from successful executions when running it as a single container. The ERROR is expected because I don't have Logstash configured at the moment.
INFO crawler/crawler.go:72 Loading Inputs: 1
INFO log/input.go:148 Configured paths: [/usr/share/filebeat/*.json]
INFO input/input.go:114 Starting input of type: log; ID: 2772412032856660548
INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
INFO log/harvester.go:253 Harvester started for file: /usr/share/filebeat/output.json
INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 1 reconnect attempt(s)
ERROR pipeline/output.go:100 Failed to connect to backoff(async(tcp://localhost:5044)): dial tcp [::1]:5044: connect: cannot assign requested address
INFO pipeline/output.go:93 Attempting to reconnect to backoff(async(tcp://localhost:5044)) with 2 reconnect attempt(s)
I managed to figure out what the problem was.
I needed to map the location of the config file and logs directory in the docker-compose file, using the volumes tag:
version: "3.7"
services:
  filebeat:
    build: "./filebeat"
    command: filebeat -e -strict.perms=false
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/logs:/usr/share/filebeat/dockerlogs
Finally, I just had to execute the docker-compose command and everything started working properly:
docker-compose -f docker-compose.yml up -d

Docker-compose does not start containers

I'm new to docker-compose. I have a problem when I use the command "docker-compose up -d" to start a multi-container application: it should start the containers with the status "Up", but every time I execute the command the status is "Exit". I'm not sure if I'm doing something wrong. This is my docker-compose.yml file:
version: '3'
services:
  catalog:
    image: ciscatalog
    hostname: catalogHost
    command: hostname
    volumes:
      - /home/docker:/opt/host
  container:
    image: dis/ciscontainer
    hostname: containerHost
    command: hostname
    volumes:
      - /home/docker:/opt/host
  inbound:
    image: dsi/cisinbound
    hostname: inboundHost
    depends_on:
      - catalog
    links:
      - catalog
    command: hostname
    volumes:
      - /home/docker:/opt/host
  outbound:
    image: dsi/cisoutbound
    hostname: outboundHost
    depends_on:
      - catalog
    links:
      - catalog
    command: hostname
    volumes:
      - /home/docker:/opt/host
example run:
root@docker1:/home/docker/DSI# docker-compose scale catalog=3 container=4 inbound=1 outbound=1
Creating and starting dsi_catalog_1 ... done
Creating and starting dsi_catalog_2 ... done
Creating and starting dsi_catalog_3 ... done
Creating and starting dsi_container_1 ... done
Creating and starting dsi_container_2 ... done
Creating and starting dsi_container_3 ... done
Creating and starting dsi_container_4 ... done
Creating and starting dsi_inbound_1 ... done
Creating and starting dsi_outbound_1 ... done
root@docker1:/home/docker/DSI# docker-compose up -d
Starting dsi_container_4
Starting dsi_catalog_3
Starting dsi_catalog_1
Starting dsi_container_3
Starting dsi_catalog_2
Starting dsi_container_1
Starting dsi_outbound_1
Starting dsi_inbound_1
Starting dsi_container_2
root@docker1:/home/docker/DSI# docker-compose ps
Name              Command    State    Ports
-------------------------------------------
dsi_catalog_1     hostname   Exit 0
dsi_catalog_2     hostname   Exit 0
dsi_catalog_3     hostname   Exit 0
dsi_container_1   hostname   Exit 0
dsi_container_2   hostname   Exit 0
dsi_container_3   hostname   Exit 0
dsi_container_4   hostname   Exit 0
dsi_inbound_1     hostname   Exit 0
dsi_outbound_1    hostname   Exit 0
Please, can anybody help me? I'm using docker-compose version 1.13.
I think I got it: you are overriding the command given in the Dockerfile, because you have this line in each of the services:
command: hostname
So the only command you run is "hostname", which is what actually gets executed. If you run the image with plain docker, you are probably running a completely different command!
If this is a Linux-based image, 'hostname' will just print the hostname and then exit. The command then stops, which logically results in a stopped container (Exit 0).
Remove the command override so the containers actually run their respective commands.
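If the images don't define a long-running process of their own, one generic way to keep such containers alive for inspection (a sketch, not part of the original setup) is to override with a command that never exits:

```yaml
services:
  catalog:
    image: ciscatalog
    # placeholder foreground process; the container stays Up until stopped
    command: ["tail", "-f", "/dev/null"]
```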
