Run multi-container application tests in the CI process - docker

I have a Laravel application with some integration tests. The project has been Dockerized using Docker Compose and consists of 5 containers: php-fpm, mysql, redis, nginx, and a workspace container which has php-cli and Composer installed in it (just like Laradock). I want to run the tests while the test stage is running in my CI process. I should mention that my CI server is GitLab CI.
Basically, I run the tests on my local system by running the following commands in my terminal:
$ docker-compose up -d
Creating network "docker_backend" with driver "bridge"
Creating network "docker_frontend" with driver "bridge"
Creating redis ... done
Creating workspace ... done
Creating mysql ... done
Creating php-fpm ... done
Creating nginx ... done
$ docker-compose exec workspace bash
// now I am logged in to the workspace container
$ cd /var/www/app
$ phpunit
PHPUnit 6.5.13 by Sebastian Bergmann and contributors.
........ 8 / 8 (100%)
Time: 38.1 seconds, Memory: 28.00MB
OK (8 tests, 56 assertions)
Here is my question: how can I run these tests in the test stage when there is no container running? What are the best practices in this case?
I also followed this documentation from GitLab, but it seems that it is not OK to use Docker-in-Docker or Docker socket binding.

First, it is absolutely OK to run Docker-in-Docker with GitLab CI. This is a great approach if you don't want or don't need to dive into Kubernetes. Sharing the Docker socket does lower the isolation level somewhat, but as long as you mostly run your jobs on your own VPS, I personally don't find this issue critical.
I've answered a similar question some time ago in this post.
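For what it's worth, a minimal sketch of such a test stage with Docker-in-Docker could look like the following; the image choice, the TLS variable and the workspace/phpunit paths mirror the question's setup but are assumptions on my part, so treat it as a starting point rather than a known-good config:
# .gitlab-ci.yml -- sketch only
test:
  stage: test
  image:
    name: docker/compose:latest   # any image that provides docker-compose
    entrypoint: [""]              # neutralize the image's docker-compose entrypoint
  services:
    - docker:dind                 # the Docker daemon the job talks to
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""        # disable TLS for the dind service on newer images
  script:
    - docker-compose up -d
    # -T because there is no TTY in CI, unlike the interactive exec shown in the question
    - docker-compose exec -T workspace bash -c "cd /var/www/app && phpunit"
  after_script:
    - docker-compose down -v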

Related

Running cypress tests in parallel on a single machine gives error

TL;DR: issues running parallel Cypress tests in Docker containers on the same machine in Jenkins.
I'm trying to run two Docker instances of Cypress on a single AWS machine, to run different suites in parallel at the same time. I've encountered issues: it seems that there's a collision on ports, even though I've configured and exposed two unique, different ports in the docker-compose.yml and cypress.json files. The first container works, but the second one crashes with the error below:
✖ Verifying Cypress can run /home/my-user/.cache/Cypress/4.1.0/Cypress
→ Cypress Version: 4.1.0
Xvfb exited with a non zero exit code.
There was a problem spawning Xvfb.
This is likely a problem with your system, permissions, or installation of Xvfb.
----------
Error: _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed
_XSERVTransMakeAllCOTSServerListeners: server already running
(EE)
Fatal server error:
(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE)
----------
Platform: linux (Ubuntu Linux - 18.04)
Cypress Version: 4.1.0
Important note: I want to implement the parallelization on my own and not use Cypress's --parallel feature; I need to implement it in-house, on the same machine only, in an encapsulated environment.
Any suggestions?
If I understood correctly, all you need to do is start Cypress (in the containers) with xvfb-run -a. E.g. xvfb-run -a npx cypress run --browser Chrome
The -a option assigns the next available display (server) number, which means you can run multiple Cypress containers in parallel. Check http://elementalselenium.com/tips/38-headless
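To illustrate, a compose file along those lines might look like this; the cypress/included image, the spec paths and the assumption that xvfb-run is present inside the image are mine, so adapt it to your own images and suites:
# docker-compose.yml -- sketch of two independent Cypress containers on one host
version: "3"
services:
  cypress-suite-a:
    image: cypress/included:4.1.0
    working_dir: /e2e
    volumes:
      - ./:/e2e
    # xvfb-run -a picks the next free X display, so each container starts its own Xvfb
    entrypoint: ["xvfb-run", "-a", "cypress", "run", "--spec", "cypress/integration/suite-a/**"]
  cypress-suite-b:
    image: cypress/included:4.1.0
    working_dir: /e2e
    volumes:
      - ./:/e2e
    entrypoint: ["xvfb-run", "-a", "cypress", "run", "--spec", "cypress/integration/suite-b/**"]
With this, docker-compose up runs both suites side by side without the two Xvfb instances fighting over the same display socket.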

How to collect all logs from a number of servers in a Docker swarm?

I have a number of Linux servers with Docker installed on them; all of the servers are in a Docker swarm, and on each server I have a custom application. I also have an ELK setup in AWS.
I want to collect all the logs from my custom app into the ELK stack on AWS. I have successfully done that on one server with Filebeat by running the following commands:
1. docker pull docker.elastic.co/beats/filebeat-oss:7.3.0
2. created a file in /etc/filebeat/filebeat.yml with the content:
filebeat.inputs:
- type: container
  paths:
    - '/usr/share/continer_logs/*/*.log'
  containers.ids:
    - '111111111111111111111111111111111111111111111111111111111111111111'
  processors:
    - add_docker_metadata: ~

output.elasticsearch:
  hosts: ["XX.YY.ZZ.TT"]
3. chown root:root filebeat.yml
4. sudo docker run -u root -v /etc/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/lib/docker/containers:/usr/share/continer_logs -v /var/run/docker.sock:/var/run/docker.sock docker.elastic.co/beats/filebeat-oss:7.3.0
Now I want to do the same on all of my Docker hosts (and there are a lot of them) in the swarm.
I encounter a number of problems:
1. How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
2. How do I update the "containers.ids" on every server? And how do I update it when I upgrade the Docker image?
How do I copy "filebeat.yml" to /etc/filebeat/filebeat.yml on every server?
You need a configuration management tool for this. I prefer Ansible; you might want to take a look at others.
How do I update the "containers.ids" on every server?
You don't have to. Docker manages it by itself IF you use swarm mode. You're using docker run, which is meant for the development phase and for deploying applications on a single machine. You need to look at Docker Stack to deploy an application across multiple servers.
How to update it when I upgrade the docker image?
docker stack deploy does both, deploy and update, services.
NOTE: Your image should be present on each node of the swarm in order to get its container deployed on that node.
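To make that concrete, here is a rough stack-file sketch of the idea, using a swarm config to distribute filebeat.yml instead of copying it around by hand; the file and stack names are placeholders, and the mounts mirror the docker run command above:
# filebeat-stack.yml -- sketch only, Compose file format 3.7+
version: "3.7"
configs:
  filebeat_config:
    file: ./filebeat.yml                # the file from step 2, kept next to this stack file
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat-oss:7.3.0
    user: root
    configs:
      - source: filebeat_config
        target: /usr/share/filebeat/filebeat.yml
    volumes:
      - /var/lib/docker/containers:/usr/share/continer_logs:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: global                      # one Filebeat task on every node of the swarm
Deployed with docker stack deploy -c filebeat-stack.yml logging, the swarm starts Filebeat on every node and ships the config to it, so nothing has to be copied per host. Whether you can also drop the hard-coded containers.ids (for example by relying on the wildcard path plus add_docker_metadata) is a change to the question's config that you would need to verify.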

Julia cluster using docker

I am trying to connect to docker containers using the default SSHManager.
These containers only have a running sshd, with public key authentication, and julia installed.
Here is my Dockerfile:
FROM rastasheep/ubuntu-sshd
RUN apt-get update && apt-get install -y julia
RUN mkdir -p /root/.ssh
ADD id_rsa.pub /root/.ssh/authorized_keys
I am running the container using:
sudo docker run -d -p 3333:22 -it --name julia-sshd julia-sshd
Then, on the host machine, using the Julia REPL, I get the following error:
julia> import Base:SSHManager
julia> addprocs(["root@localhost:3333"])
stdin: is not a tty
Worker 2 terminated.
ERROR (unhandled task failure): EOFError: read end of file
Master process (id 1) could not connect within 60.0 seconds.
exiting.
I have tested that I can connect to the container via SSH without a password.
I have also tested that in the Julia REPL I can add a regular machine with Julia installed to the cluster, and it works fine.
But I cannot get these two things working together. Any help or suggestions will be appreciated.
I recommend deploying the Master in a Docker container as well. It makes your environment easily and fully reproducible.
I'm working on a way of deploying Workers in Docker containers on demand, i.e. the Master, itself deployed in a container, can deploy further DockerizedJuliaWorkers. It is similar to https://github.com/gsd-ufal/Infra.jl, but assumes that the Master and Workers run on the same host, which makes things less hard.
It is ongoing work and I plan to finish it in the coming weeks. In a nutshell:
1) You'll need a simple DockerBackend and a wrapper to transparently run containers, set up SSH, and call addprocs with all the low-level parameters (i.e., the DockerizedJuliaWorker.jl file):
https://github.com/NaelsonDouglas/DistributedMachineLearningThesis/tree/master/src/docker
2) Read here how to build the Docker image (Dockerfile is included):
https://github.com/NaelsonDouglas/DistributedMachineLearningThesis
Please tell me if you have any suggestion on how to improve it.
Best,
André Lage.
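For the simpler same-host case, a compose sketch of that layout could look roughly like this; the julia-sshd image is the one built from the question's Dockerfile, while the master service, its command and the key mount are illustrative assumptions:
# docker-compose.yml -- sketch of a master plus two SSH-only workers on one host
version: "3"
services:
  julia-master:
    image: julia-sshd                   # reuse the same image, but keep it idle for an interactive session
    command: sleep infinity
    volumes:
      - ./id_rsa:/root/.ssh/id_rsa:ro   # private key matching the workers' authorized_keys
  julia-worker-1:
    image: julia-sshd                   # runs sshd with public-key auth, as in the question
  julia-worker-2:
    image: julia-sshd
From a Julia REPL started inside julia-master, addprocs(["root@julia-worker-1", "root@julia-worker-2"]) can then reach the workers by service name on the default SSH port over the compose network, instead of going through the 3333 host port mapping.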

How to stop all containers when one container stops with docker-compose?

Up until recently, when one did docker-compose up for a bunch of containers and one of the started containers stopped, all of the containers were stopped. This is not the case anymore since https://github.com/docker/compose/issues/741, and this is really annoying for us: we use docker-compose to run Selenium tests, which means starting the application server, starting the Selenium hub + nodes, starting the tests driver, then exiting when the tests driver stops.
Is there a way to get back old behaviour?
You can use:
docker-compose up --abort-on-container-exit
which will stop all containers if one of your containers stops.
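Depending on your Compose version, a closely related option is --exit-code-from, which implies --abort-on-container-exit and additionally propagates one service's exit status to the shell, which is handy in CI (tests here is a placeholder service name):
docker-compose up --exit-code-from tests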
In your Docker Compose file, set up your test driver container to depend on the other containers (with the depends_on parameter). Your Compose file should look like this:
services:
  application_server:
    ...
  selenium:
    ...
  test_driver:
    entrypoint: YOUR_TEST_COMMAND
    depends_on:
      - application_server
      - selenium
With dependencies expressed this way, run:
docker-compose run test_driver
and all the other containers will shut down when the test_driver container is finished.
This solution is an alternative to the docker-compose up --abort-on-container-exit answer. The latter will also shut down all other containers if any of them exits (not only the test driver). It depends on your use case which one is more adequate.
Did you try the workaround suggested in the link you provided?
Assuming your test script looked similar to this:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build
When the application tests end, compose would exit and the tests finish.
In this case, with the new docker-compose version, change your test container to have a default no-op command (something like echo, or true), and change your test script as follows:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build -d
$ docker-compose run tests test_command...
$ docker-compose stop
Using run allows you to get the exit status from the test run, and you only see the output of the tests (not all the dependencies).
Reference
If this is not acceptable, you could refer to Docker Remote API and watch for the stop event for the containers and act on it.
An example usage is this docker-gen tool written in golang which watches for container start events, to automatically regenerate configuration files.
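As a rough, untested sketch of that event-watching idea with the plain Docker CLI (the container name tests is illustrative):
# Block until the first "die" event from the test container, then stop the rest
docker events --filter 'event=die' --filter 'container=tests' | head -n 1
docker-compose stop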
I'm not sure this is the perfect answer to your problem, but maestro for Docker lets you manage multiple Docker containers as a single unit.
It should feel familiar as you group them using a YAML file.

Installing GitLab CI using Docker for the CI and the runners, and making it persistent after reboot

I have a server running Gitlab. Let's say that the address is https://gitlab.mydomain.com.
Now what I want to achieve is to install a continuous integration system. Since I am using GitLab, I opt for GitLab CI, as it feels like the more natural way to go. So I go to Docker Hub and I find this image.
So I run the image to create a container with the following command:
docker run --restart=always -d -p 9000:9000 -e GITLAB_URLS="https://gitlab.mydomain.com" anapsix/gitlab-ci
I give it a minute to boot up and I can now access the CI through the URL http://gitlab.mydomain.com:9000. So far so good.
I log in to the CI and am greeted by this message:
Now you need Runners to process your builds.
So I come back to Docker Hub and find this other image. Apparently, to boot up this image I have to do it interactively. I follow the instructions and it creates the configuration files:
mkdir -p /opt/gitlab-ci-runner
docker run --name gitlab-ci-runner -it --rm -v /opt/gitlab-ci-runner:/home/gitlab_ci_runner/data sameersbn/gitlab-ci-runner:5.0.0-1 app:setup
The interactive setup will ask me for the proper data that it needs:
Please enter the gitlab-ci coordinator URL (e.g. http://gitlab-ci.org:3000/ )
http://gitlab.mydomain.com:9000/
Please enter the gitlab-ci token for this runner:
12345678901234567890
Registering runner with registration token: 12345678901234567890, url: http://gitlab.mydomain.com:9000/.
Runner token: aaaaaabbbbbbcccccccdddddd
Runner registered successfully. Feel free to start it!
I go to http://gitlab.mydomain.com:9000/admin/runners, and hooray, the runner appears in the list.
Everything seems to work great, but here comes the problem:
If I restart the machine, due to an update or whatever other reason, the runner is not there anymore. I could maybe add --restart=always to the command when I run the image of the runner, but this would be problematic because:
The setup is interactive, so the token to register runners has to be entered manually.
Every time the container with GitLab CI is re-run, the token to register new runners is different.
How could I solve this problem?
I have a way of pointing you in the right direction, but I'm still trying to make it work myself; hopefully we both manage to get it up. Here's my situation:
I'm using CoreOS + Docker, trying to do exactly what you're trying to do, and in CoreOS you can set up a service that starts the CI every time you restart the machine (as well as GitLab and the others). My problem is trying to make that same installation automatic.
After some digging I found this: https://registry.hub.docker.com/u/ubergarm/gitlab-ci-runner/
In its documentation they state that it can be done in two ways:
1. Mount a .dockercfg file containing credentials into the /root directory
2. Start your container with this info:
-e CI_SERVER_URL=https://my.ciserver.com \
-e REGISTRATION_TOKEN=12345678901234567890 \
Meaning you can set it up to auto-start the CI with your configuration. I've been trying this for 2 days; if you manage to do it, tell me how =(
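Combining the two points above with the --restart flag from the question, a sketch of a non-interactive, reboot-safe runner could look like this, assuming the ubergarm/gitlab-ci-runner image really registers itself from these environment variables as its docs claim (URL and token are the question's placeholder values):
docker run -d --restart=always \
  --name gitlab-ci-runner \
  -e CI_SERVER_URL=http://gitlab.mydomain.com:9000/ \
  -e REGISTRATION_TOKEN=12345678901234567890 \
  ubergarm/gitlab-ci-runner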
