How to set up Testcontainers for parallel runs with docker-compose?

Here is the docker-compose file:
app:
  image: myimage
  depends_on:
    - nsqd
    - localstack
  command: ["run.sh"]
  environment:
    - "DYNAMODB=http://localstack:4569"
  ports:
    - 8080:8080
nsqd:
  image: nsqio/nsq
  command: /run
  ports:
    - "4150:4150"
    - "4151:4151"
localstack:
  image: localstack/localstack:latest
  ports:
    - 4569:4569
  environment:
    SERVICES: dynamodb
    DATA_DIR: /tmp/localstack/data
    HOSTNAME: localstack
This compose file is run in a Java JUnit test before any test method runs:
@Before
public void setUp() throws Exception {
    new DockerComposeContainer(new File("docker-compose.yaml"))
            .withExposedService("nsqd", 4150, Wait.forListeningPort())
            .withExposedService("localstack", 4569, Wait.forListeningPort())
            .withExposedService("app", 8080, Wait.forListeningPort())
            .start();
}
When all test methods run one by one there are no problems at all. But when I try to run more than two tests at the same time, I get errors like these:
ERROR: for localstack Cannot start service localstack: driver failed programming external connectivity on endpoint hwfdrbmwpwn1_localstack_1 (e33d2a3098e74b1b8d87e3e595d9d9504ccddd4fe9c0605b20ebd3f22f50daa5): Bind for 0.0.0.0:4569 failed: port is already allocated
ERROR: for nsqlookupd Cannot start service nsqlookupd: driver failed programming external connectivity on endpoint hwfdrbmwpwn1_nsqlookupd_1 (fe62cec02a23a184d65b3f02776a14d77fdfbe639645ea0a11e07e8f11010e37): Bind for 0.0.0.0:4161 failed: port is already allocated
And those ports differ from the ones passed to the withExposedService function. On the other hand, all services from the compose file are started in an isolated network, so there should not be any conflicts, yet they exist. Can anybody explain what is going on with the ports?
What additional configuration should be provided to Testcontainers to run the docker-compose services multiple times concurrently?

The port defined with withExposedService is the port as seen from inside the container; Testcontainers will bind that port to a random external port. The fixed ports in your error messages come from the ports: mappings in the compose file itself, which is why parallel runs collide. Have a read here:
https://www.testcontainers.org/features/networking
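For instance, instead of assuming fixed host ports, a test can ask Testcontainers for the host and the randomly mapped port at runtime (a minimal sketch; the service name and internal port are the ones from the compose file above):

DockerComposeContainer<?> environment =
        new DockerComposeContainer<>(new File("docker-compose.yaml"))
                .withExposedService("localstack", 4569, Wait.forListeningPort());
environment.start();

// Testcontainers maps internal port 4569 to a random free host port;
// look it up instead of hard-coding it.
String host = environment.getServiceHost("localstack", 4569);
Integer port = environment.getServicePort("localstack", 4569);
String dynamoDbEndpoint = "http://" + host + ":" + port;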
Do you also stop your docker compose containers before each test method?
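If not, keeping a reference to the container and stopping it in a teardown method avoids leaking stacks between tests (a minimal sketch, assuming JUnit 4 as in the question):

private DockerComposeContainer<?> environment;

@Before
public void setUp() {
    environment = new DockerComposeContainer<>(new File("docker-compose.yaml"))
            .withExposedService("app", 8080, Wait.forListeningPort());
    environment.start();
}

@After
public void tearDown() {
    // Tear the compose stack down so the next test starts from a clean slate.
    environment.stop();
}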
I would also suggest removing the port mappings from your docker-compose file, as they are not necessary with Testcontainers:
Note that it is not necessary to define ports to be exposed in the YAML file; this would inhibit reuse/inclusion of the file in other contexts.
Taken from: https://www.testcontainers.org/modules/docker_compose/
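Applied to the compose file from the question, that would look roughly like this (a sketch: only the host-side ports: mappings are removed; the container ports stay reachable via withExposedService):

app:
  image: myimage
  depends_on:
    - nsqd
    - localstack
  command: ["run.sh"]
  environment:
    # Services still reach each other by service name on the compose network.
    - "DYNAMODB=http://localstack:4569"
nsqd:
  image: nsqio/nsq
  command: /run
localstack:
  image: localstack/localstack:latest
  environment:
    SERVICES: dynamodb
    DATA_DIR: /tmp/localstack/data
    HOSTNAME: localstack

With no fixed host bindings left, parallel test runs no longer compete for ports like 4569 or 4150.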

If I understood your setup correctly, you want to start and stop your docker-compose containers for each test, potentially with several different docker-compose files in different tests (or different tests with the same file) at the same time.
There is an alternative library, docker-compose-rule by Palantir.
There is actually a collaboration going on between the two (Testcontainers and Palantir), since Testcontainers is far more generic while Palantir's library went deeper into docker-compose specifically.
The collaboration started in 2018, but for now the library is still maintained, so it might still have a specialization advantage that solves your problem.
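For reference, its usage looks roughly like this (a sketch based on the project's README; the service name here is a placeholder):

// JUnit 4 class rule from com.palantir.docker.compose;
// brings the compose stack up once per test class and tears it down after.
@ClassRule
public static DockerComposeRule docker = DockerComposeRule.builder()
        .file("docker-compose.yaml")
        .waitingForService("app", HealthChecks.toHaveAllPortsOpen())
        .build();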

Related

Docker Compose network_mode - adding argument causes local testing to fail

I'm trying to build an application that is able to use local integration testing via Docker Compose with Google Cloud emulator containers, while also being able to run that same Docker Compose configuration on a Docker-based CI/CD tool (Google Cloud Build).
The kind of docker-compose.yml configuration I'm using is:
version: '3.7'
services:
  main-application:
    build:
      context: .
      target: dev
    image: main-app-dev
    container_name: main-app-dev
    network_mode: $DOCKER_NETWORK
    environment:
      - MY_ENV=my_env
    command: ["sh", "-c", "PYTHONPATH=./ python app/main.py"]
    volumes:
      - ~/.config:/home/appuser/.config
      - ./app:/home/appuser/app
      - ./tests:/home/appuser/tests
    depends_on:
      - firestore
  firestore:
    image: google/cloud-sdk
    container_name: firestore
    network_mode: $DOCKER_NETWORK
    environment:
      - GCP_PROJECT_ID=dummy-project
    command: ["sh", "-c", "gcloud beta emulators firestore start --project=$$GCP_PROJECT_ID --host-port=0.0.0.0:9000"]
    ports:
      - "9000:9000"
I added the network_mode arguments to enable the configuration to use the "cloudbuild" network type available in the CI/CD pipeline, and that currently works perfectly. However, this network configuration is not available to local Docker, which is why I've tried to use environment variables to switch between the local and Cloud Build CI/CD environments.
Before I added these network_mode params/args for the CI/CD, the local testing was working just fine. However since I added them, my application either can't run, or can't connect to its accompanying services, like the firestore one specified in the YAML above.
I have tried the following valid Docker network modes with no success:
"bridge" - runs the service, but doesn't allow connection between containers
"host" - doesn't allow the service to run because of not being compatible with assigning ports
"none" - doesn't allow the service to connect externally
"service" - doesn't allow the service to run due to invalid mode/service
Anyone able to provide advice on what I'm missing here?
I would assume one of these network modes would be what Docker Compose would be using if the network_mode is not assigned, so I'm not sure why all of them are failing.
I want to avoid having a separate cloud build file for the remote and local configurations, and would also like to avoid the hassle of setting up my own docker network locally. Ideally if there were some way of only applying network_mode only remotely, that would work best in my case I think.
TL;DR:
Specifying network_mode does not give me the same result as not specifying it when running docker-compose up locally.
Due to running the configuration in the cloud I can't avoid specifying it.
Found a solution thanks to this thread and the comment by David Maze.
As far as I understand it, when Docker Compose is not given a specific network_mode for the containers, it creates its own private default network, named (by default) after the folder in which the docker-compose.yml file lives.
Specifying a network mode like the default "bridge" network, instead of this custom network created by Docker Compose, means container discovery between services isn't possible: main-application couldn't find the firestore:9000 container.
Basically all I had to do was set the network_mode variable to myapplication_default (given that the folder docker-compose.yml sat in was called "MyApplication") to force both apps to use the same custom network that docker-compose up sets up.
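In practice the switch can live in docker-compose's standard variable substitution (a sketch; the network names are the ones discussed above):

# .env file next to docker-compose.yml, picked up automatically for local runs:
DOCKER_NETWORK=myapplication_default

# On the CI/CD side, point the same variable at the Cloud Build network instead:
#   DOCKER_NETWORK=cloudbuild docker-compose up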

docker stack deploy depends_on

Given this compose file:
version: '3.8'
services:
  whoami1:
    image: containous/whoami
    depends_on:
      - whoami2
  whoami2:
    image: containous/whoami
When deployed to Docker Swarm with docker stack deploy -c docker-compose.yaml test, services whoami1 and whoami2 seem to start in random order and ignore the depends_on condition.
docker stack deploy -c docker-compose.yaml test
Creating network test_default
Creating service test_whoami1
Creating service test_whoami2
Does docker swarm support service startup sequencing via dependencies?
No, at least not built in.
Even with depends_on, whoami2 may not yet be ready to interact with whoami1, because it may need time to boot itself:
However, for startup Compose does not wait until a container is “ready” (whatever that means for your particular application) - only until it’s running. There’s a good reason for this.
https://docs.docker.com/compose/startup-order/
They hint at two possibilities to check if whoami2 is ready.
Use a tool such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
And depends_on is indeed ignored for docker swarm:
There are several things to be aware of when using depends_on:
(...)
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
https://docs.docker.com/compose/compose-file/#depends_on
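Putting the wait-for-it hint into practice could look roughly like this (a sketch; it assumes an image that ships a shell and wait-for-it.sh, which the stock containous/whoami image does not):

version: '3.8'
services:
  whoami1:
    image: myorg/whoami-with-wait   # hypothetical image containing wait-for-it.sh
    # Block until whoami2 accepts TCP connections on port 80, then start the real process.
    command: ["./wait-for-it.sh", "whoami2:80", "--", "/whoami"]
  whoami2:
    image: containous/whoami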

Networking in Docker Compose file

I am writing a docker-compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect the services in a compose file with one another?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example above. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

Running an executable inside a docker container from another container

I am trying to run an executable file from another docker container while already inside a docker container. Is this possible?
version: '3.7'
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
  winexe:
    build:
      context: ./winexe
      dockerfile: Dockerfile
    networks:
      - dataload
    ports:
      - '8001:8001'
    volumes:
      - '~/path/to/winexe:/usr/bin/winexe'
      - '~/.ssh:/ssh'
    depends_on:
      - py
networks:
  dataload:
    driver: bridge
I am trying to access Winexe from 'py'
Assuming you mean running another Docker container from inside a container, this can be done in several ways:
Install the docker command inside your container and:
Contact the hosting Docker instance over TCP/IP. For this you will have to have exposed the Docker host to the network, which is neither default nor recommended.
Map the docker socket (usually /var/run/docker.sock) in to your container using a volume. This will allow the docker command inside the container to contact the host instance directly.
Be aware this essentially gives the container root level access to the host! I'm sure there are many more ways to do the same, but approach number 2 is the one I see most often.
If you mean to run another executable inside another - already running - Docker container, you can do that in the above way as well by using docker exec or run some kind of daemon in the second container that accepts commands and runs the required command for you.
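A minimal sketch of approach 2 for the compose file in the question (it assumes the docker CLI is installed in the py image; the container name and winexe arguments are illustrative):

py:
  build: .
  tty: true
  networks:
    - dataload
  volumes:
    - './src:/app'
    - '~/.ssh:/ssh'
    # Hand the host's Docker daemon to this container (root-equivalent access!).
    - '/var/run/docker.sock:/var/run/docker.sock'

Inside py you can then run an executable in the already-running winexe container:

docker exec <project>_winexe_1 winexe --help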
So you need to think of your containers as if they were two separate computers, or servers, and they can interact accordingly.
Happily, docker-compose gives you a url you can use to communicate between the containers. In the case of your docker-compose file, you could access the winexe container from your py container like so:
http://winexe:8001 // or ws://winexe:8001 or postgres://winexe:8001 (you get the idea)
(I've used port 8001 here because that's the port you've made available for winexe – I have no idea if it could be used for this).
So now what you need is something in your winexe container that listens for that signal and sends a useful reply (like a server answering an ajax call from a browser).
Learn more here:
https://docs.docker.com/compose/networking/

Docker: docker compose file for "docker stack deploy"

I have a docker-compose.yml file which works with docker-compose up --build. My app works and everything is fine.
version: '3'
services:
  myapp:
    container_name: myapp
    restart: always
    build: ./myapp
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - myapp
But when I use docker stack deploy -c docker-compose.yml myapp, I get the following error:
Ignoring unsupported options: build, restart
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network myapp_default
Creating service myapp_myapp
failed to create service myapp_myapp: Error response from daemon: rpc error: code = InvalidArgument desc = ContainerSpec: image reference must be provided
any hints how I should "translate" the docker-compose.yml file to make it compatible with docker stack deploy?
To run containers in swarm mode, you do not build them on each swarm node individually. Instead you build the image once, typically on a CI server, push it to a registry server (often locally hosted, or you can use Docker Hub), and specify the image name inside your compose file with an "image" section for each service.
Doing that will get rid of the hard error. You'll likely remove the build section of the compose file since it no longer applies.
Specifying "container_name" is unsupported because it would break the ability to scale or perform updates (a container name must be unique within the docker engine). Let swarm name the containers and reference your app on the docker network by it's service name.
Specifying "depends_on" is not supported because containers may be started on different nodes, and rolling updates/failure recovery may remove some containers providing a service after the app started. Docker can retry the failing app until the other service starts up, or preferably you configure an entrypoint that waits for the dependencies to become available with some kind of ping for a minute or two.
Without seeing your Dockerfile, I'd also recommend setting up a healthcheck on each image. Swarm mode uses this to control rolling updates and recover from application failures.
Lastly, consider adding a "deploy" section to your compose file. This tells swarm mode how to deploy and update your service, including how many replicas, constraints on where to run, memory and CPU limits and requirements, and how fast to update the service. You can define a restart policy here as well but I recommend against it since I've seen docker engines restarting containers that conflict with swarm mode deploying containers on other nodes, or even a new container on the same node.
You can see the full compose file documentation with all of these options here: https://docs.docker.com/compose/compose-file/
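Put together, a swarm-ready translation of the compose file might look roughly like this (a sketch; the registry path, replica counts, and healthcheck command are placeholders, and the healthcheck assumes curl exists in the image):

version: '3'
services:
  myapp:
    # Built and pushed ahead of time; swarm nodes pull this instead of building.
    image: registry.example.com/myapp:1.0
    ports:
      - "8000:8000"
    command: /usr/local/bin/gunicorn -w 2 -b :8000 flaskplot:app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
  nginx:
    image: registry.example.com/nginx-custom:1.0
    ports:
      - "80:80"
    deploy:
      replicas: 1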
