I'm trying to use docker-compose to run continuous integration tests on a Jenkins server.
Here is my docker-compose.yml:
version: '3'
services:
  elasticsearch:
    container_name: elasticsearch_${INSTANCE}
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
    ports:
      - 9200:9200
      - 9300:9300
    command: elasticsearch -E transport.host=0.0.0.0
    environment:
      ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      discovery.type: single-node
  mainapp:
    container_name: mainapp_${INSTANCE}
    image: testbot:${INSTANCE}
    environment:
      ES_ADDRESS: http://elasticsearch_${INSTANCE}:9200
      SUBSET: ${SUBSET}
      DIRECTORY: ${DIRECTORY}
      INSTANCE: ${INSTANCE}
      TEST_CMD: ${TEST_CMD}
    command: /bin/bash /mainapp/build/tests/wrapper.sh
This works great, but when I try to run multiple tests at the same time, the previously running test exits with code 137 immediately. I think this is because the services are binding to the host network, and I can't do that with multiple containers.
For my purposes, the two services that are started only need to communicate with each other, not with the host at all. I'm a bit confused about exactly how to network this.
You can do this by specifying a different project name using the COMPOSE_PROJECT_NAME environment variable or the --project-name flag for docker-compose. All services, networks, and volumes are created and named per-project.
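For example, a minimal sketch (the project names test_a and test_b are made up) of launching two isolated copies of the stack side by side:
# Each project gets its own containers, networks, and volumes
INSTANCE=a docker-compose --project-name test_a up -d
INSTANCE=b COMPOSE_PROJECT_NAME=test_b docker-compose up -d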
You can drop the ports property.
If you wish, you can use the expose property instead (and then you only need to describe the container port, e.g. expose: - 9200) but expose is purely documentary and is not functionally required.
The ports property defines ports that will be exposed on the host.
If you don't want or need ports exposed on the host, you don't need it.
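As a hedged illustration, here is how the elasticsearch service from the question could look with the host port mappings dropped (everything else is copied from the question; the expose entry is optional):
services:
  elasticsearch:
    container_name: elasticsearch_${INSTANCE}
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
    command: elasticsearch -E transport.host=0.0.0.0
    environment:
      ES_JAVA_OPTS: "-Xms2g -Xmx2g"
      discovery.type: single-node
    # no ports: section, so nothing is bound on the host
    expose:
      - "9200"  # purely documentary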
I started mysqldb from a docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought Docker containers started on the bridge network wouldn't be available outside that network. How is the mysql CLI able to connect to this instance?
Below is my docker-compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
The critical section in your config is the one below. You have added a ports key to your service; this is Compose's way to publish ports. The left part is the port on the host system that the container port is published to, and the right part is the port the container actually listens on.
ports:
  - "3306:3306"
Also keep in mind that when you start Compose, a default network is created that joins all containers in the compose stack. That's why these containers can find each other, with the service name and/or container name as hostname.
You don't need to publish the ports for the containers to communicate with each other; I guess that's why you did it. You can, and probably should, remove any port mapping from internal services if possible. This adds extra security to your setup, because then it behaves the way you describe: only containers on the same network can find each other.
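As a hedged sketch of that last point, the database service from the question could drop its host port mapping entirely; the reco-tracker-docker service would still reach it at mysqldb-docker:3306 over the default compose network:
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    # no ports: entry, so 3306 is not published on the host;
    # reco-tracker-docker still reaches it at mysqldb-docker:3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql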
I'm running a Selenoid test automation script and would like to run it against a local application. However, I can't find out how to expose my local application (running on port 8787) to Selenoid. I found the following thread discussing a similar issue, but it doesn't solve my problem. The linked thread suggests using the host's IP address, but I want to make my tests system-independent: the host IP address is different on each system and is hard to retrieve in a system-independent way.
I already tried adding the expose field to my docker compose file:
version: '3'
services:
  selenoid:
    network_mode: bridge
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/run:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/run/video:/opt/selenoid/video"
      - "${PWD}/run/logs:/opt/selenoid/logs"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${PWD}/run/video
      - TZ=Europe/Amsterdam
    command: ["-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    expose:
      - "8787"
However, this doesn't work because the docker containers created by Selenoid do not get passed the same option.
Is there any way to expose my host port 8787 to my Selenoid container in a system/OS-independent way (either via a configuration in the docker-compose.yml file, a capability passed to the remote driver, or any other way)?
Selenoid runs browsers in standard Docker containers, so anything applicable to Docker is applicable to Selenoid browsers. Docker was created for the case when all interacting parts are packed into containers, and for that case you have legacy Docker links or modern Docker custom networks at your service. If you still want to run your application on the host machine without packing it into a container, you have to either use the host machine IP or, on some platforms, a special domain name that Docker provides, e.g. docker.for.mac.localhost on Mac.
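For instance, a hedged illustration assuming Docker for Mac, where the docker.for.mac.localhost name mentioned above resolves to the host machine:
# Hypothetical: point the tests at the host-run application instead of localhost
APP_URL=http://docker.for.mac.localhost:8787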
I finally realized that yes, the application I run actually runs in a Docker container, so linking them is as easy as putting Selenoid and the application in the same Docker network. The final docker-compose.yml is as follows:
version: '3'
networks:
  my_network_name:
    external:
      name: my_network_name # This assumes network is already created
services:
  selenoid:
    networks:
      my_network_name: null
    image: aerokube/selenoid:latest-release
    volumes:
      - "${PWD}/run:/etc/selenoid"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${PWD}/run/video:/opt/selenoid/video"
      - "${PWD}/run/logs:/opt/selenoid/logs"
    environment:
      - OVERRIDE_VIDEO_OUTPUT_DIR=${PWD}/run/video
      - TZ=Europe/Amsterdam
    command: ["-container-network", "my_network_name", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video", "-log-output-dir", "/opt/selenoid/logs"]
    ports:
      - "4444:4444"
    expose:
      - "8787"
I have 2 docker images, one for my backend and one for a mock database. I want to spin up these two images separately and link the backend to the database. To do this I have a connection string in my backend like so Data Source=192.168.99.100;Catalog=DB name;Integrated Security=True;MultipleActiveResultSets=True"; where 192.168.99.100 is the IP of my default Docker machine where the database container is running. So on my Windows machine this works perfectly and the backend container can communicate with the database which is running on another container. However, when some of my colleagues who use Mac and Linux use the same images they can't get the link to work because they obviously don't have the same IP for their Docker machine.
Is there any way to reference the database in the connection string so that it is the same no matter where it is running? For example use the name of the database container, instead of the IP or something similar?
You can also do this using plain docker. Basically you just need to create a bridge network, and then attach both containers to it.
Eg:
docker network create --driver=bridge mynetwork
docker run --network=mynetwork --name mydb mydb:latest
docker run --network=mynetwork --name myapp myapp:latest
Then inside the myapp container you can reference the database container using the hostname mydb (same as with docker-compose). You can still publish ports from the myapp container to your host using -p 3000:3000, etc.
Further reading: https://docs.docker.com/network/bridge/
You can use docker-compose services to achieve what you are looking for. Here is a simplified example docker-compose.yml file:
version: "3.5"
services:
db:
container_name: mock_db
restart: "no"
build: ./mock_db
expose:
- 5432 (or whatever your port is)
env_file: .env
command: your-command
server:
container_name: my_server
build: ./server
env_file: .env
ports:
- "8443:8443"
command: your-command
You can then reference the service name (in this case db) as the host part of your connection string.
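For illustration, a hedged sketch of what the connection string from the question could look like once the service name replaces the hard-coded IP (everything apart from the host is copied from the question):
Data Source=db;Catalog=DB name;Integrated Security=True;MultipleActiveResultSets=True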
You can read more about docker-compose configuration options here
I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've setup my Docker Compose file as the following. I get ECONNREFUSED though:
version: "3"
services:
web:
build: .
ports:
- 8080:8080
command: ["test"]
links:
- redis:127.0.0.1
redis:
image: redis:alpine
ports:
- 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There are technically possible ways to do it. One is to run all containers directly on the host network, with:
network_mode: "host"
However, that removes the Docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure if there's a syntax to do this in docker-compose, and it's not available in swarm mode since the containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot.
What you should do instead is make the location of the redis database a configuration parameter to your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose yml file to look like:
version: "3"
services:
web:
# you should include an image name
image: your_webapp_image_name
build: .
ports:
- 8080:8080
command: ["test"]
environment:
- REDIS_URL=redis:6379
# no need to link, it's deprecated, use dns and the network docker creates
#links:
# - redis:127.0.0.1
redis:
image: redis:alpine
# no need to publish the port if you don't need external access
#ports:
# - 6379
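And if the web app cannot read REDIS_URL directly, here is a hedged sketch of the entrypoint-script idea mentioned above; the file paths and the placeholder token are hypothetical:
#!/bin/sh
# entrypoint.sh: inject the Redis location into the app's config before starting it
: "${REDIS_URL:=redis:6379}"                              # default to the compose service name
sed -i "s|__REDIS_URL__|${REDIS_URL}|g" /app/config.json  # hypothetical config file and token
exec "$@"                                                 # then run the original command, e.g. ["test"]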
The Docker 'link' feature is being deprecated now that the new 'networking' feature has been released (link). I'm writing a docker-compose file with several containers, and it was fine using 'link' to connect them to each other (without any other commands).
Since I need to change the link configuration to networks, I would have to create a Docker network before 'docker-compose up'. Is there any docker-compose feature that creates the Docker network automatically? Or any other way of connecting the containers with some configuration?
By default, docker-compose with a v2 yml will spin up a network for your project. Any networks you define will also be created unless you explicitly tell it otherwise. Here's an example docker-compose.yml:
version: '2'
networks:
  dbnet:
  appnet:
services:
  db:
    image: busybox
    command: tail -f /dev/null
    networks:
      - dbnet
  app:
    image: busybox
    command: tail -f /dev/null
    networks:
      - dbnet
      - appnet
  proxy:
    image: busybox
    command: tail -f /dev/null
    ports:
      - 80
    networks:
      - appnet
And then when you spin it up, you'll see that it creates the networks defined:
$ docker-compose up -d
Creating network "test_dbnet" with the default driver
Creating network "test_appnet" with the default driver
Creating test_app_1
Creating test_db_1
Creating test_proxy_1
Note that linking containers also created an implicit dependency, so you may want to use depends_on in your yml to be explicit about any dependencies after removing your links.
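For example, a minimal sketch of making such a dependency explicit after dropping a link:
services:
  app:
    image: busybox
    depends_on:
      - db
  db:
    image: busybox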
docker-compose creates a default network for your compose project by itself. You only have to migrate your compose projects to version: '2' or version: '3' of the compose yaml format. Please read how to upgrade for more information.
With version 2 and 3, you don't have to specify links anymore, as all services will be in the default network if you don't explicitly specify other networks.
UPDATE: To make 2 containers talk to each other, you can simply use the service names which will resolve to container IPs. Links are now only required if for some reason a container expects a specific name, e.g. because it is hardcoded.
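A minimal sketch of that, with placeholder images and commands: no links are defined, yet web can reach db by its service name over the default network:
version: '2'
services:
  web:
    image: busybox
    command: ping db  # "db" resolves to the db service's container IP
  db:
    image: busybox
    command: tail -f /dev/null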