integration tests with docker

I have a REST API. I want to have a docker-compose setup that:
starts the API server
"waits" until it's up and running
runs some API tests against the endpoints
stops everything once the test job has finished.
Now,
The first part I can do.
As for waiting for the backend to be up and running, as I understand it, depends_on does not quite cut it. The REST API does have a /ping endpoint, though, in case we need it.
I am struggling to find a minimal example online that:
uses volumes and does not explicitly copy the test files over.
runs the tests through a command in the docker-compose file (as opposed to in the Dockerfile).
Again, I am not sure whether there is an idiomatic way of stopping everything after the tests are done, but I did come across a somewhat related solution that suggests using docker-compose up --abort-on-container-exit. Is that the best way of achieving this?
currently my docker-compose file looks like this:
docker-compose.yml
version: '3.8'
networks:
  development:
    driver: bridge
services:
  app:
    build:
      context: ../
      dockerfile: ../Dockerfile
    command: sbt run
    image: sbt
    ports:
      - "8080:8080"
    volumes:
      - "../:/root/build"
    networks:
      - development
  tests:
    build:
      dockerfile: ./Dockerfile
    command: npm run test
    volumes:
      - .:/usr/tests/
      - /usr/tests/node_modules
    networks:
      - development
    depends_on:
      - app
and the node Dockerfile looks like this:
FROM node:16
ADD package*.json /usr/tests/
ADD test.js /usr/tests/
WORKDIR /usr/tests/
RUN npm install
Full repo is here: https://github.com/ShahOdin/dockerise-everything/pull/1

You can wait for another service to become available with the docker-compose-wait project.
Add the docker-compose-wait binary to the test container and run it before the API tests, inside the container's entrypoint.
You can also configure a time interval to wait before and after checking whether the service is ready.
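As a rough sketch (the release version below is an assumption; check the docker-compose-wait releases page for a current one), the node Dockerfile above would gain the wait binary and gate the tests on it:

# appended to the node Dockerfile shown above; version number is hypothetical
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
# wait for the hosts listed in WAIT_HOSTS, then run the tests
CMD /wait && npm run test

and the tests service would tell the binary which host/port to wait on (dropping its command: override so the image's CMD runs instead):

tests:
  build:
    dockerfile: ./Dockerfile
  environment:
    WAIT_HOSTS: "app:8080"
  networks:
    - development
  depends_on:
    - app

For the shutdown part, docker-compose up --abort-on-container-exit is the usual approach; docker-compose up --exit-code-from tests implies it and additionally propagates the test container's exit code, which is convenient in CI.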

Related

Why do we need the oneliner Dockerfile with just a FROM command inside the Spring-boot source code folder

I am learning how to dockerize Spring Boot applications using the docker-compose command, a docker-compose.yml, and a Dockerfile.
I am able to run the following example and understand how the docker-compose command works.
There is one thing I still don't understand, which is the one liner dockerfile inside the app folder. If you open the dockerfile, there is literally just a FROM command specifying the docker image.
Of course, the pom.xml is included in the app folder. That's what you need to compile a springboot application. And there is a command "mvn clean spring-boot:run". That is what triggers the compile process of the spring-boot code. But why can't we move the FROM command to the docker-compose.yml file? Is it not supported?
https://github.com/hellokoding/hellokoding-courses/tree/master/docker-examples/dockercompose-springboot-mysql-nginx
You are right that, in this case, it does not make any sense and the following change to the docker-compose file should still work:
app:
  restart: always
  build: ./app
  working_dir: /app
  volumes:
    - ./app:/app
    - ~/.m2:/root/.m2
  expose:
    - "8080"
  command: mvn clean spring-boot:run
  depends_on:
    - hk-mysql
to
app:
  restart: always
  image: maven:3.5-jdk-8
  working_dir: /app
  volumes:
    - ./app:/app
    - ~/.m2:/root/.m2
  expose:
    - "8080"
  command: mvn clean spring-boot:run
  depends_on:
    - hk-mysql
In a compose file you don't use from, you use image to refer to an existing (or pull-able) image.
I think the Dockerfile you are referring to is set up like this so it can be extended later. I noticed that it's part of an online course, which is why I expect more logic will be added to the Dockerfile later.

docker compose always building the Dockerfile thus it does not depend on db

I have a Dockerfile which is actually building a Maven Spring Boot project. My docker-compose.yml is below:
version: '3'
services:
  db:
    image: mysql
    restart: always
    environment:
      - MYSQL_DATABASE=calero
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - ./db:/var/lib/mysql
    ports:
      - "3306:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    environment:
      PMA_ARBITRARY: 1
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "8082:80"
    links:
      - "db:db"
  redsparrow:
    build: .
    restart: always
    ports:
      - "8081:8080"
    links:
      - "db:db"
    depends_on:
      - db
volumes:
  db:
    driver: "local"
And the Dockerfile is this
FROM maven:3.6.0-jdk-11 as build
WORKDIR /app
COPY . /app
RUN mvn clean package
FROM tomcat
COPY context.xml /usr/local/tomcat/webapps/manager/META-INF/context.xml
COPY tomcat-users.xml /usr/local/tomcat/conf/tomcat-users.xml
COPY --from=build /app/target/*.war /usr/local/tomcat/webapps
But what I am facing here is that docker-compose always tries to build redsparrow before spinning up the MySQL container, and since mvn clean package tries to access the database, which is not up yet, the build does not succeed.
I think I am missing something; the Spring Boot app (redsparrow) should only be built after the database container is up.
Please help!
As far as I know, the docker-compose.yml configuration doesn't provide the feature you outline in your question. The image of the service containing the build: . option will always be built in isolation. However, you could achieve what you want in other ways.
To sum up, the service at issue is a dockerized Java/Maven/Spring Boot project that relies on a dockerized MySQL database, and accessing that database is required to build your project with mvn clean package, probably due to the presence of integration tests in the test Maven phase.
To overcome this, I see two possible approaches (the first approach being less standard and less easy to implement than the second one; so I'll elaborate mostly on the latter):
You could rely on the docker-maven-plugin to spin up the MySQL container directly from Maven. See also this blog article. The practical issue here is that the docker commands are not directly available inside the considered Docker container, unless you rely on DinD (Docker-in-Docker).
A simpler approach would involve adapting the tests themselves rather than changing the docker setup:
this is closer to standard conventions assuming mvn test (triggered by mvn package) targets unit tests, while mvn verify (relying on the failsafe Maven plugin) targets integration tests, involving external databases or services;
still, if you want to keep a number of unit tests that involve database operations all the same, you might want to use an in-memory database engine such as H2, which is often used in the context of Spring Boot unit tests (see e.g. that tutorial);
then, you could move your integration tests into an extra docker-compose service, following the approach outlined in this tutorial and that article, for example:
integrationtest:
  build: ./integrationtest
  command: ./wait-for-it.sh -h db -p 3306 -s -t 150 -- mvn verify
  depends_on:
    - db
As an aside, note that the links: property is now deprecated.
Note also that the above .yml excerpt relies on wait-for-it because the depends_on: property only waits for the dependencies' containers to be started, not to be fully ready.
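If you prefer to avoid a wrapper script, another option (available in the 2.1+ compose file formats and again in the Compose Specification used by recent Docker Compose versions, but not in the 3.x schema) is to declare a healthcheck on db and have the dependent service wait for it. A minimal sketch, assuming mysqladmin ping against the root password from the question is an acceptable readiness probe:

db:
  image: mysql
  healthcheck:
    test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-proot"]
    interval: 5s
    timeout: 5s
    retries: 10

integrationtest:
  build: ./integrationtest
  command: mvn verify
  depends_on:
    db:
      condition: service_healthy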

Setting up docker-compose to watch for changes with multi-package golang project

I am using a standard structure for my go application.
It is built like this:
cmd/
  app/
    main.go
internal/
  app/
    server.go
  pkg/
    users/
      ...
pkg/
  dependency/
    ...
web/
  app/
    ...
docker-compose.yml
Dockerfile
The problem however is that with this structure it's very hard to mount and build the application dependencies. For example, if I use a file watcher such as fresh, it only watches a single directory and runs a particular file. If I update, say, pkg/dependency, it will not see those changes.
docker-compose looks like:
version: "3.1"
services:
core:
build: .
depends_on:
- mongo
- memcached
ports:
- 8080:8080
environment:
APP_ENV: dev
volumes:
- .:/go/src/github.com/me/app
mongo:
image: mongo
ports:
- 27017:27017
memcached:
image: memcached
ports:
- 11211:11211
Dockerfile:
FROM golang:1.10.0
WORKDIR /go/src/github.com/me/app
COPY . .
RUN go get -u github.com/golang/dep/cmd/dep
RUN dep ensure
WORKDIR /go/src/github.com/me/app/cmd/app/
RUN go install
RUN go get github.com/pilu/fresh
CMD ["fresh"]
Any help?
I would push back to that fresh repo and ask them.
If your file changes are getting saved to git, then you could set up a webhook like https://github.com/adnanh/webhook to listen for these git push actions and trigger your rebuild.
However, if they are just edits, then you could roll your own using something like https://github.com/hpcloud/tail to do the functional equivalent of a tail -f on an arbitrary set of files/dirs, which I have found to work nicely (my logs trigger a parse daemon for error checking).
But you're right, there might be an easier way given your use case.

docker-compose: how to see file-changes instantly (when developing)

I am new to Docker, so this may seem very basic to you; anyway, it's freaking me out at the moment.
I decided to develop a new web project on top of containers, and of course I thought about Docker. After finishing the tutorial and reading some Dockerfiles and so on, I decided to go with docker-compose.
I want to have multiple compose files, one for Development, one for Production, and so on. Now I have managed to orchestrate a basic PHP/MySQL/Redis application using 3 different services. The main application is PHP based and maintained in the project src. MySQL and Redis are simply configured with base images and do not require any business logic.
I can build the containers and bring them up with
build:
docker-compose -f compose-Development.yml build
up:
docker-compose -f compose-Development.yml up
Many files in the main application container are built by gulp (templates, css, etc.), and code will exist in both JavaScript and PHP.
I noticed that my app state does not change when I change my files. I would have to rebuild and restart my containers.
Having some experience with Vagrant, I would go for some kind of shared source during development. But how would I achieve that?
My application Dockerfile (for development) looks like this:
FROM webdevops/php-nginx:7.1
COPY ./ /app
COPY docker/etc/ /opt/docker/etc
# php config...
RUN ln -sf /opt/docker/etc/php/php.Development.ini /opt/docker/etc/php/php.ini
WORKDIR /app/
EXPOSE 80
The compose file:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile.Development
links:
- mysql
- redis
volumes:
- ./data/fileadmin:/app/public/fileadmin
- ./data/uploads:/app/public/uploads
env_file:
- docker/env/All.yml
- docker/env/Development.yml
ports:
- "80:80"
restart: always
# Mysql Container
mysql:
build:
context: docker/mysql/
dockerfile: Dockerfile
restart: always
volumes:
- mysql:/var/lib/mysql
env_file:
- docker/env/All.yml
- docker/env/Development.yml
# Cache Backend Container
redis:
build:
context: docker/redis/
dockerfile: Dockerfile
ports:
- "6379:6379"
volumes:
- redis:/data
env_file:
- docker/env/All.yml
- docker/env/Development.yml
restart: always
volumes:
mysql:
redis:
So far, I have copied chunks from some GitHub repositories. I know there might be other problems in my setup as well; for the moment the most blocking issue is the thing with the linked/copied source.
Kind regards,
Philipp
The idea of "Development/Production parity" confuses many on this front. This doesn't mean that you can simply have a single configuration and it will work across everything; it means you'll have much closer parity and that you can create an environment that resembles something very close to what you'll have in production.
What's wrong here is that currently you're building your image and it would be ready to ship out, it'd have your code, you have volumes set aside for uploads, etc. Awesome!
Unfortunately, this setup is not correct for development. If you want to be editing code on the fly - you need to attach your local working directory to the image as a volume as well. This would not be done in production; so it's very close - but not exactly the same setup.
Add the following into the app service's volumes section of your compose file and you should be good to go:
- .:/app
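For clarity, a minimal sketch of what the app service could then look like during development, reusing the paths from the compose file above (only the extra bind mount is new):

app:
  build:
    context: .
    dockerfile: Dockerfile.Development
  volumes:
    - .:/app
    - ./data/fileadmin:/app/public/fileadmin
    - ./data/uploads:/app/public/uploads

With the project root bind-mounted over /app, edits on the host are visible in the container immediately; the COPY ./ /app in the Dockerfile still matters for production images, where no bind mount is attached.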

docker-compose: using multiple Dockerfiles for multiple services

I'm using docker-compose and I'd like to use different Dockerfiles for different services' build steps. The docs seem to suggest to place different Dockerfiles in different directories, but I'd like them all to be in the same one (and perhaps distinguishable using the following convention: Dockerfile.postgres, Dockerfile.main...). Is this possible?
Edit: The scenario I have contains this docker-compose file:
main:
  build: .
  volumes:
    - .:/code
  environment:
    - DEBUG=true
postgresdb:
  extends:
    file: docker-compose.yml
    service: main
  build: utils/sql/
  ports:
    - "5432"
  environment:
    - DEBUG=true
where postgresdb's Dockerfile is:
FROM postgres
# http://www.slideshare.net/tarkasteve/developerweek-2015-docker-tutorial
ADD make-db.sh /docker-entrypoint-initdb.d/
and the main is:
FROM python:2.7
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
ADD . /code/
This works right now, but I'd like to extend postgresdb's Dockerfile by calling a Python script that creates tables in the database according to models built upon SQL Alchemy (the Python script would be called as python manage.py create_tables). I wanted to add it to the db's Dockerfile, but due to the isolation of the containers I can't use SQL Alchemy there because that image is based on the postgres image instead of Python's, and it doesn't contain the sqlalchemy package...
What can I do? I tried to use the main service in postgresdb, but unfortunately it doesn't carry python and its packages over, so I still can't write a single Dockerfile that creates the Postgres database (through the shell script) as well as its tables (through a Python script).
You have to add it in the build section.
That way, you can specify a different Dockerfile for each service:
services:
  service1:
    build:
      context: .
      args:
        - NODE_ENV=local
      dockerfile: Dockerfile_X
    ports:
      - "8765:8765"
This is not possible due to the way Docker handles build contexts.
You will have to use and place a Dockerfile in each directory that becomes part of the Docker build context for that service.
See: Dockerfile
You will in fact require a docker-compose.yml that looks like:
service1:
  build: service1
service2:
  build: service2
See: docker-compose
Update:
To address your particular use-case: whilst I understand what you're trying to do and why, I personally wouldn't do this myself. The isolation is a good thing and helps to manage expectations and complexity. I would perform the "database creation" as either another container based off your app's source code or within the app container itself.
Alternatively, you could look at more scripted and template-driven solutions such as ShutIt (I have no experience with it, but have heard good things about it).
FWIW: Separation of concerns ftw :)
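To illustrate the "another container based off your app's source code" idea, a hypothetical create-tables service (the name is made up) could reuse the main image, where SQLAlchemy is already installed, and just run the script from the question:

create-tables:
  build: .
  command: python manage.py create_tables
  links:
    - postgresdb

That keeps the postgres image untouched and the table-creation logic next to the models it depends on.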
You can use the dockerfile argument in your docker-compose.yml to specify an alternate one for a specific service.
I don't know when it was added, since the discussion is old, but you can see it's in the reference: https://docs.docker.com/compose/compose-file/#dockerfile
I tried it yesterday and it works for me.
In the base dir of my project I have Dockerfile and Dockerfile-service3, and in the docker-compose.yml:
version: '2'
services:
  service1:
    build:
      context: .
      args:
        - NODE_ENV=local
    ports:
      - "8765:8765"
    # other args skipped for clarity
  service2:
    build:
      context: .
      args:
        - NODE_ENV=local
    ports:
      - "8766:8766"
    # other args skipped for clarity
  service3:
    build:
      context: .
      dockerfile: Dockerfile-service3
      args:
        - NODE_ENV=local
    ports:
      - "8767:8767"
    # other args skipped for clarity
  service4:
    build:
      context: .
      args:
        - NODE_ENV=local
    ports:
      - "8768:8768"
    # other args skipped for clarity
In this way, all services except service3 will be built using the standard Dockerfile, and service3 will be built using Dockerfile-service3.
Creator of ShutIt here. Gratified to hear that people are hearing good things about it.
To be honest, in your position I'd write your own Dockerfile and use standard package management such as apt or yum. A quick check shows that, with an ubuntu image, python-pip and python-sqlalchemy are freely available.
There are more convoluted solutions that may work for you using ShutIt; happy to discuss this offline, as I think it's a bit off-topic. ShutIt was written for this kind of use case, as I could see that this would be a common problem given Dockerfiles' limited utility outside the microservices space.
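As a rough sketch of that suggestion (the base image, package names and the manage.py invocation come from the discussion above; the Postgres driver package is an assumption you would adjust to your setup):

FROM ubuntu:16.04
# install Python, pip, SQLAlchemy and a Postgres driver from the distro packages
RUN apt-get update && \
    apt-get install -y python python-pip python-sqlalchemy python-psycopg2 && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /code
ADD . /code/
# create the tables once the postgresdb service is reachable
CMD ["python", "manage.py", "create_tables"]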
