Elixir Testing with Docker-Compose UP - docker

I've created a simple Sonatype API client in Elixir that returns the repositories and the components of the repositories.
I now need to create tests in Elixir so that I can verify the repo. I am using docker-compose to start the Sonatype container. I need the tests to start with a fresh Docker (Sonatype) repository, via docker-compose up, then verify that it doesn't have any images in it. From there, add one or more images, then validate that the images I added are present. As cleanup, I could delete those images. It must be an automated set of tests that can run in CI or that a user can run on their local machine.
My question is: how could I do this with either a .exs test file or a bash script?

You can build a docker-compose.yml file with something similar to this:
version: "2.2"
services:
my_app:
build:
context: .
ports:
- 4000:4000
command: >
bash -c 'wait-for-it -t 60 sonatype:1234
&& _build/prod/rel/my_app/bin/my_app start'
tests:
extends:
service: my_app
environment:
MIX_ENV: test
LOG_LEVEL: "warn"
working_dir: /my_app
depends_on:
- sonatype
command:
bash -c 'mix test'
sonatype:
image: sonatype/nexus3:3.19.1
ports:
- "1234:1234"
Then you have a bash script like test.sh:
docker-compose build tests
docker-compose run tests
EXIT=$?
docker-compose down --volumes
exit $EXIT
I'm not familiar with Sonatype, so this might not make sense, and you need to adapt.
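For the Elixir side, here is a minimal ExUnit sketch of the flow you describe (the repository name, the MyApp.SonatypeClient module and its list_components/1 function are placeholders for your actual client, and pushing the image itself is assumed to happen outside Elixir):

# test/sonatype_integration_test.exs
defmodule MyApp.SonatypeIntegrationTest do
  use ExUnit.Case, async: false

  # Placeholder repository name; use whatever docker-hosted repo you configure in Nexus.
  @repo "docker-hosted"

  test "a fresh repository has no components" do
    assert {:ok, []} = MyApp.SonatypeClient.list_components(@repo)
  end

  test "a pushed image shows up as a component" do
    # The image is pushed outside Elixir (e.g. `docker push` in test.sh);
    # this only checks that the client can see it afterwards.
    assert {:ok, components} = MyApp.SonatypeClient.list_components(@repo)
    assert Enum.any?(components, &(&1["name"] == "alpine"))
  end
end

Run it with mix test inside the tests service above; the docker-compose down --volumes in test.sh then gives every run a fresh Nexus instance, which covers the cleanup requirement.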

Related

Docker-compose how to update celery without rebuild?

I am working on my django + celery + docker-compose project.
Problem
I changed the Django code
The update only takes effect after docker-compose up --build
How can I pick up code changes without a rebuild?
I found this answer, Developing with celery and docker, but didn't understand how to apply it.
docker-compose.yml
version: '3.9'
services:
  django:
    build: ./project # path to Dockerfile
    command: sh -c "
      gunicorn --bind 0.0.0.0:8000 core_app.wsgi"
    volumes:
      - ./project:/project
      - ./project/static:/project/static
      - media-volume:/project/media
    expose:
      - 8000
  celery:
    build: ./project
    command: celery -A documents_app worker --loglevel=info
    volumes:
      - ./project:/usr/src/app
      - media-volume:/project/media
    depends_on:
      - django
      - redis
  .........
volumes:
  pg_data:
  static:
  media-volume:
Code updates without a rebuild are achievable and are best practice when working with containers; otherwise you spend too much time and effort building a new image every time the code changes.
The most popular way of doing this is to mount your code directory into the container, using one of the two methods below.
In your docker-compose.yml
services:
  web:
    volumes:
      - ./codedir:/app/codedir # where 'codedir' is your code directory
In CLI starting a new container
$ docker run -it --mount "type=bind,source=$(pwd)/codedir,target=/app/codedir" celery bash
So you're effectively mounting the directory your code lives in on your computer into the Celery container (at /app/codedir in the examples above). Now you can change your code and...
the local directory overwrites the one from the image when the container is started. You only need to build the image once and use it until the installed dependencies or OS-level package versions need to be changed. Not every time your code is modified. - Quoted from this awesome article
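Applied to the compose file above, a minimal sketch could be a docker-compose.override.yml that mounts the source into both services (the target paths below are assumptions and must match the directory the image actually runs the code from, i.e. the WORKDIR in ./project/Dockerfile):

# docker-compose.override.yml (picked up automatically by `docker-compose up`)
services:
  django:
    volumes:
      - ./project:/project
  celery:
    volumes:
      - ./project:/project

Note that Celery does not watch files by itself, so after editing a task you still restart the worker process rather than rebuilding the image: docker-compose restart celery. Tools like watchdog's watchmedo can automate that restart if you want it.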

integration tests with docker

I have a rest api. I want to have a docker-compose setup that:
starts the api server
"waits" until it's up and running
runs some api tests against the endpoints
stops everything once the test job has finished.
Now,
The first part I can do.
As for waiting for the backend to be up and running, as I understand it, depends_on does not quite cut it. The REST API does have a /ping endpoint, though, in case we need it.
I'm struggling to find a minimal example online that:
uses volumes and does not explicitly copy the test files over.
runs the tests through a command in the compose file (as opposed to in the Dockerfile).
Again, I'm not sure if there is an idiomatic way of stopping everything after the tests are done, but I did come across a somewhat related solution that suggests using docker-compose up --abort-on-container-exit. Is that the best way of achieving this?
currently my docker-compose file looks like this:
docker-compose.yml
version: '3.8'
networks:
  development:
    driver: bridge
services:
  app:
    build:
      context: ../
      dockerfile: ../Dockerfile
    command: sbt run
    image: sbt
    ports:
      - "8080:8080"
    volumes:
      - "../:/root/build"
    networks:
      - development
  tests:
    build:
      dockerfile: ./Dockerfile
    command: npm run test
    volumes:
      - .:/usr/tests/
      - /usr/tests/node_modules
    networks:
      - development
    depends_on:
      - app
and the node Dockerfile looking like this:
FROM node:16
ADD package*.json /usr/tests/
ADD test.js /usr/tests/
WORKDIR /usr/tests/
RUN npm install
Full repo is here: https://github.com/ShahOdin/dockerise-everything/pull/1
You can wait for another service to become available with the docker-compose-wait project.
Add the docker-compose-wait binary to the test container and run it before the API tests, inside the container's entrypoint or command.
You can also configure how long it waits before and after checking whether the service is ready.
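As a sketch (the release version pinned below is an assumption; check the project's releases page), you can bake the binary into the test image and gate the tests on it:

# test Dockerfile
FROM node:16
ADD package*.json /usr/tests/
ADD test.js /usr/tests/
WORKDIR /usr/tests/
RUN npm install
# docker-compose-wait binary from https://github.com/ufoscout/docker-compose-wait
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait

and in docker-compose.yml:

  tests:
    build:
      dockerfile: ./Dockerfile
    command: sh -c "/wait && npm run test"
    environment:
      WAIT_HOSTS: app:8080    # block until the app answers on its port
      WAIT_TIMEOUT: "120"     # give up after two minutes
    depends_on:
      - app

For the teardown part of the question, docker-compose up --build --abort-on-container-exit --exit-code-from tests stops everything when the tests container exits and propagates its exit code, which is handy in CI.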

Docker Compose Detach Mode Parameter Error

Here is my docker-compose file, mysql.yml:
# Use root/example as user/password credentials
version: '3'
services:
  db:
    image: mysql
    tty: true
    stdin_open: true
    command: --default-authentication-plugin=mysql_native_password
    container_name: db
    restart: always
    networks:
      - db
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: example1
    command: bash -c "apt update"
  adminer:
    image: adminer
    restart: always
    container_name: web
    networks:
      - db
    ports:
      - 8080:8080
    volumes:
      - ./data/db:/var/lib/mysql
networks:
  db:
    external: true
When I run this file with "docker-compose -f mysql.yml up -d" it starts working, but after 5 or 10 seconds the db container dies with exit code 0. Then it restarts because of the "restart: always" parameter.
I searched the internet for my problem and found some solutions:
First one, the
tty: true
stdin_open: true
parameters, but they do not work; the container dies anyway.
Second one,
entrypoint:
  - bash
  - -c
command:
  - |
    tail -f /dev/null
This solution works, but it overrides the default entrypoint, so my MySQL service does not actually run.
Yes, I could chain entrypoints or create a Dockerfile (I actually want to keep all of this in a single file), but I don't think that's the right way, and I need some advice.
Thanks in advance!
When your Compose setup says:
command: bash -c "apt update"
This is the only thing the container does; this runs instead of the normal container process. Once that command completes (successfully) the container will exit (with status code 0).
In normal operation you shouldn't need to specify the command: for a container; the Dockerfile will have a CMD line that provides a useful default. (The notable exception is a setup where you have both a Web server and a background worker sharing substantial code, so you can set CMD to run, say, the Flask application but override command: to run a Celery worker.)
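A minimal sketch of that exception (the image name and module names here are illustrative, not from your setup):

services:
  web:
    image: myapp            # Dockerfile CMD already starts the Flask app
    ports:
      - "8000:8000"
  worker:
    image: myapp            # same image and code
    command: celery -A myapp worker --loglevel=info   # override only the process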
Many of the other options you include in the docker-compose.yml file are unnecessary. You can safely remove tty:, stdin_open:, container_name:, and networks: with no ill effects. (You can configure the Compose-provided default network if you specifically need containers running on a pre-created network.)
The comments hint at trying to run package updates at container startup time. I'd echo @xdhmoore's comment here: you should only run APT or similar package managers during an image build, never on a running container. (You don't want your application startup to fail because a Debian mirror is down, or because an incompatible update has gotten deployed.)
For the standard Docker Hub images, in general they update somewhat frequently, especially if you're not pinning to a specific patch release. If you run
docker-compose pull
docker-compose up
it will ask Docker Hub for a newer version of the image, and recreate the container on it if needed.
The standard Docker Hub packages also frequently download and install the thing they're packaging outside their distribution's package manager system, so running an upgrade isn't necessarily useful.
If you must, though, the best way to do this is to write a minimal Dockerfile
FROM mysql
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get upgrade --assume-yes
and reference it in the docker-compose.yml file
services:
  db:
    build: .
    # replacing the image: line
    # do NOT leave `image: mysql` behind

Execute multiple `docker-compose run` that are encapsulated from each other

I have a docker-compose file that looks like the following:
mongo:
  image: mongo:3.6.12
  container_name: mongo-${BUILD_ID}
app:
  image: repo/my-image:latest
  container_name: app-${BUILD_ID}
  working_dir: /src
  depends_on:
    - mongo
I'm running this in a CI/CD pipeline to execute tests in parallel via docker-compose run app run-test-1.sh, but have noticed that only one mongo container is created. This seems to result in the tests interfering with each other. Is it possible to docker-compose run such that it will create both the app service and the mongo service together, encapsulated from other docker-compose run containers, so that they each have their own mongo instance?
I have tried the following to no avail:
Adding a container_name: mongo-${BUILD_ID} property in the docker-compose.yml
Adding the --name flag when executing the command. i.e. docker-compose run --name id1 app run-test-1.sh
Managed to figure this out. docker-compose has a flag --project-name which it will use instead of the default value (folder name).
Thus my docker-compose.yml looks like:
mongo:
  image: mongo:3.6.12
app:
  image: repo/my-image:latest
  working_dir: /src
  depends_on:
    - mongo
and I can execute the following commands and each will be namespaced within their respective project names:
docker-compose --project-name project1 run app ./run-test1.sh
docker-compose --project-name project2 run app ./run-test2.sh
docker-compose --project-name project3 run app ./run-test3.sh
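In a CI pipeline you can derive the project name from the build number so parallel jobs never share containers, and then tear down only your own stack afterwards (the BUILD_ID variable and script name are illustrative):

#!/bin/bash
PROJECT="tests-${BUILD_ID}"

docker-compose --project-name "$PROJECT" run app ./run-test1.sh
STATUS=$?

# removes only this run's containers, network and volumes
docker-compose --project-name "$PROJECT" down --volumes
exit $STATUS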

Docker Compose can't access using hostname

I am quite new to docker but am trying to use docker compose to run automation tests against my application.
I have managed to get docker compose to run my application and run my automation tests, however, at the moment my application is running on localhost when I need it to run against a specific domain example.com.
From research into docker it seems you should be able to hit the application on the hostname by setting it within links, but I still don't seem to be able to.
Below is the code for my docker compose files...
docker-compose.yml
abc:
  build: ./
  command: run container-dev
  ports:
    - "443:443"
  expose:
    - "443"
docker-compose.automation.yml
tests:
  build: test/integration/
  dockerfile: DockerfileUIAuto
  command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && DISPLAY=:1.0 && ENVIRONMENT=qa BASE_URL=https://example.com npm run automation"
  links:
    - abc:example.com
  volumes:
    - /tmp:/tmp/
and am using the following command to run...
docker-compose -p tests -f docker-compose.yml -f docker-compose.automation.yml up --build
Is there something I'm missing to map example.com to localhost?
If the two containers are on the same Docker internal network, Docker will provide a DNS service where one can talk to the other by just its container name. As you show this with two separate docker-compose.yml files it's a little tricky, because Docker Compose wants to isolate each file into its own separate mini-Docker world.
The first step is to explicitly declare a network in the "first" docker-compose.yml file. By default Docker Compose will automatically create a network for you, but you need to control its name so that you can refer to it from elsewhere. This means you need a top-level networks: block, and also to attach the container to the network.
version: '3'
networks:
  abc:
    name: abc
services:
  abc:
    build: ./
    command: run container-dev
    ports:
      - "443:443"
    networks:
      abc:
        aliases:
          - example.com
Then in your test file, you can import that as an external network.
version: '3'
networks:
  abc:
    external: true
    name: abc
services:
  tests:
    build: test/integration/
    dockerfile: DockerfileUIAuto
    command: sh -c "Xvfb :1 -screen 0 1024x768x16 &>xvfb.log && sleep 20 && npm run automation"
    environment:
      DISPLAY: "1.0"
      ENVIRONMENT: qa
      BASE_URL: "https://example.com"
    networks:
      - abc
Given the complexity of what you're showing for the "test" container, I would strongly consider running it not in Docker, or else writing a shell script that launches the X server, checks that it actually started, and then runs the test. The docker-compose.yml file isn't the only tool you have here.
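A rough sketch of such a script (the display number, resolution and the xdpyinfo readiness check are assumptions; adjust them for your image):

#!/bin/sh
# Start a virtual X server in the background
Xvfb :1 -screen 0 1024x768x16 >/tmp/xvfb.log 2>&1 &
XVFB_PID=$!

# Wait until the display accepts connections instead of sleeping a fixed 20 seconds
for _ in $(seq 1 30); do
  if xdpyinfo -display :1 >/dev/null 2>&1; then
    break
  fi
  sleep 1
done

DISPLAY=:1.0 ENVIRONMENT=qa BASE_URL=https://example.com npm run automation
STATUS=$?

kill "$XVFB_PID"
exit $STATUS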
