I have three Docker containers: one running nginx, one running PHP, and one running Serf by HashiCorp.
I want to use PHP's exec function to call the serf binary and fire off a Serf event.
In my docker-compose file I have written:
version: '2'
services:
  web:
    restart: always
    image: `alias`/nginx-pagespeed:1.11.4
    ports:
      - 80
    volumes:
      - ./web:/var/www/html
      - ./conf/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - php
    environment:
      - SERVICE_NAME=${DOMAIN}
      - SERVICE_TAGS=web
  php:
    restart: always
    image: `alias`/php-fpm:7.0.11
    links:
      - serf
    external_links:
      - mysql
    expose:
      - "9000"
    volumes:
      - ./web:/var/www/html
      - ./projects:/var/www/projects
      - ./conf/php:/usr/local/etc/php/conf.d
  serf:
    restart: always
    dns: 172.17.0.1
    image: `alias`/serf
    container_name: serf
    ports:
      - 7496:7496
      - 7496:7496/udp
    command: agent -node=${SERF_NODE} -advertise=${PRIVATE_IP}:7496 -bind=0.0.0.0:7496
I was imagining that in PHP I would do something like exec('serf serf event "test"'), where serf is the hostname of the container.
Or perhaps someone can give me an idea of how to set something like this up using an alternative method?
The "linked" containers allow network level discovery between containers. With docker networks, the linked feature is now considered legacy and isn't really recommended anymore. To run a command in another container, you'd need to either open up a network API functionality on the target container (e.g. a REST based http request to the target container), or you need to expose the host to the source container so it can run a docker exec against the target container.
The latter requires that you install the docker client in your source container, and then expose the server with either an open port on the host or mounting the /var/run/docker.sock in the container. Since this allows the container to have root access on the host, it's not a recommended practice for anything other than administrative containers where you would otherwise trust the code running directly on the host.
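As a rough sketch of the socket-mount variant (assuming the php image is extended with the docker CLI; service and container names as in the compose file above):
  php:
    volumes:
      # grants this container control of the host's Docker daemon (effectively root on the host)
      - /var/run/docker.sock:/var/run/docker.sock
From PHP you could then run something like exec('docker exec serf serf event "test"'), where the first serf is the container_name and the rest is the serf CLI's own event command.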
The only other option I can think of is to remove the isolation between the containers with a shared volume.
An ideal solution is to use a message queuing service that allows multiple workers to spin up and process requests at their own pace. The source container sends a request to the queue, and the target container listens for requests when it's running. This also allows the system to continue even when workers are currently down; requests simply queue up.
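A rough sketch of what that could look like here (Redis as the broker is an assumption, and the worker service is hypothetical):
  queue:
    image: redis:alpine
  php:
    environment:
      - QUEUE_HOST=queue          # PHP pushes "fire serf event" jobs onto a Redis list
  serf-worker:
    image: `alias`/serf
    # would need a small wrapper that pops jobs from the queue and calls `serf event ...` locally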
I started MySQL in a Docker container. I was surprised that I could connect to it via localhost using the command below:
mysql -uroot -proot -P3306 -h localhost
I thought Docker containers started on the bridge network won't be available outside that network. How is it that the mysql CLI is able to connect to this instance?
Below is my docker-compose file that runs the mysqldb-docker instance:
version: '3.8'
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    restart: 'unless-stopped'
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=reco-tracker-dev
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_PASSWORD=root
      - SPRING_DATASOURCE_URL="jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev"
    depends_on: [mysqldb-docker]
    env_file:
      - ./.env
volumes:
  mysqldb:
You have published the port(s). That means you can reach them on the host system on the published port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host to the outside world.
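For illustration, the plain docker run equivalent of that mapping would be roughly:
docker run -p 3306:3306 mysql:8.0.27   # host port 3306 -> container port 3306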
The critical section in your config is the part below. You have added a ports key to your service. This is Compose's way of publishing ports. The left part is the port it is published to on the host system; the right part is the port the container actually listens on.
ports:
  - "3306:3306"
Also keep in mind that when you start Compose, a default network is created that joins all containers in the compose stack. That's why these containers can find each other, with the service name and/or container name as hostname.
You don't need to publish the port(s) like you did in order for them to be able to communicate (I guess that's why you did it). You can and probably should remove any port mapping from internal services, if possible. This adds extra security to your setup, because then it behaves the way you describe: only containers in the same network can reach each other.
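For example, a tightened version might look roughly like this (assuming nothing on the host itself needs to reach MySQL; the Spring app keeps connecting through the service name):
services:
  mysqldb-docker:
    image: 'mysql:8.0.27'
    # no ports: entry, so MySQL is only reachable from containers on the compose network
    environment:
      - MYSQL_ROOT_PASSWORD=root
    volumes:
      - mysqldb:/var/lib/mysql
  reco-tracker-docker:
    image: 'reco-tracker-docker:v1'
    ports:
      - "8083:8083"   # still published, since clients outside Docker use it
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysqldb-docker:3306/reco-tracker-dev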
I scale a container:
ports:
  - "8086-8090:8085"
But what if I need it only inside my bridge network? In other words, does something like this exist?
expose:
  - "8086-8090:8085"
UPDATED:
I have a master container that:
- is exposed to the host network
- acts as a load balancer
I want to have N slaves of another container, reachable on assigned ports inside the docker network (not visible on the host network).
Connections between containers (over the Docker-internal bridge network) don't need ports: at all, and you can just remove that block. You only need ports: to accept connections from outside of Docker. If the process inside the container is listening on port 8085 then connections between containers will always use port 8085, regardless of what ports: mappings you have or if there is one at all.
expose: in a Compose file does almost nothing at all. You never need to include it, and it's always safe to delete it.
(This wasn't the case in first-generation Docker networking. However, Compose files v2 and v3 always provide what the Docker documentation otherwise calls a "user-defined bridge network", that doesn't use "exposed ports" in any way. I'm not totally clear why the archaic expose: and links: options were kept.)
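A minimal sketch of that setup, with hypothetical image names and the backend listening on 8085:
version: "3.8"
services:
  master:
    image: my-load-balancer     # hypothetical
    ports:
      - "8080:80"               # only the load balancer is published to the host
  slave:
    image: my-backend           # hypothetical, listens on 8085
    # no ports: block; master reaches it at http://slave:8085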
No extra changes needed!
Because of internal Docker DNS, the scaled instances are 'hidden' behind the single service name:
version: "3.8"
services:
  web:
    image: "nginx:latest"
then
docker-compose up -d --scale web=3
and other containers on the network that call http://web will have their requests spread across all instances via round-robin DNS. (A fixed host mapping like "8080:80" can't be kept on a scaled service, because every replica would try to claim the same host port; if you also need access from the host, publish a range such as "8080-8082:80" or put a single load balancer in front.)
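To see this in action, you can resolve the service name from another container on the same network (a sketch; other-service is a hypothetical service whose image has nslookup available, e.g. via bind-tools or dnsutils):
docker-compose exec other-service nslookup web
# one A record is returned per running web replica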
My app has 2 dependencies which I specify in my docker-compose, a postgres and kafka service:
services:
  postgres:
    image: postgres:9.6-alpine
    ports:
      - "5432:5432"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
I run my code and tests outside the docker network, and use these two containers as my dependencies.
As these both expose ports, I can configure my app to hit them via: localhost:5432, localhost:9092. This works.
The problem I have is when I want to test the app image itself; I add it as a service to the docker-compose file:
  app:
    image: myapp
    links:
      - postgres
      - kafka
The app is still configured to use localhost, so I allow the app container to access my network using --net=host
Whilst the app container can now access localhost:5432 and localhost:9092 (confirmed by curling from inside the container), the host names fail to resolve when the code runs and the dependencies are unreachable - possibly as a result of using localhost from inside the container and confusing the client libraries? I'm really not sure.
It feels like the use of localhost in the app configuration isn't the right approach here. Is it possible to refer to the service names 'postgres' and 'kafka' from outside the docker network?
Why do you want to keep using localhost:xxx in your app?
The best approach for you is to change the connection strings in your application when it is launched from docker-compose. Just use postgres:5432 and kafka:9092 and everything will work, because inside the docker-compose network all services are visible to each other under their service names.
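For example, you could pass the targets in as environment variables in the compose file (the variable names here are hypothetical; use whatever your app actually reads):
  app:
    image: myapp
    environment:
      - POSTGRES_URL=postgres:5432
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092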
If for some reason you insist on using localhost as the connection target, you need to switch all the services to host network mode. But remember: in this case ports are not published; the services listen directly on the host, on their original port values.
version: '3'
services:
  postgres:
    image: postgres:9.6-alpine
    network_mode: "host"
  kafka:
    image: wurstmeister/kafka
    network_mode: "host"
  app:
    image: myapp
    network_mode: "host"
And by the way, forget about links. They are deprecated.
I am trying to build a docker-compose file that will mimic my production environment with its various microservices. I am using a custom bridge network with an nginx proxy that routes port 80 and 443 requests to the correct service containers. The docker-compose file and the nginx conf files together specify the port mappings that allow the proxy container to route traffic for each DNS entry to its matching container.
Consequently, I can use my container names as DNS entries to access each container service from my host browser. I can also exec into each container and ping other containers by that same DNS hostname. However, I cannot successfully curl from one container to another by the container name alone.
It seems that I need to append the proxy port mapping to each inter-service API call when operating within the Docker environment. In my production environment each service has its own environment and can respond on ports 80 and 443. The code written for each service therefore ignores port specifications and simply calls each service by its DNS hostname. I would rather not have to append port mappings to each API call throughout the various code bases just so my services can talk to each other in the Docker environment.
Is there a tool or configuration setting that will allow my microservice containers to successfully call each other in Docker without the need of a proxy port map?
version: '3'
services:

  #---------------------
  # nginx proxy service
  #---------------------
  nginx_proxy:
    image: nginx:alpine
    networks:
      - test_network
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./site1/site1.test.conf:/etc/nginx/conf.d/site1.test.conf"
      - "./site2/site2.test.conf:/etc/nginx/conf.d/site2.test.conf"
    container_name: nginx_proxy

  #------------
  # site1.test
  #------------
  site1.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9001:9000"
    environment:
      - "VIRTUAL_HOST=site1.test"
    volumes:
      - "./site1:/site1"
    container_name: site1.test

  #------------
  # site2.test
  #------------
  site2.test:
    build: alpine:latest
    networks:
      - test_network
    ports:
      - "9002:9000"
    environment:
      - "VIRTUAL_HOST=site2.test"
    volumes:
      - "./site2:/site2"
    container_name: site2.test

# networks
networks:
  test_network:
http://hostname/ always means http://hostname:80/ (that is, TCP port 80 is the default port for HTTP URLs). So if you want one container to be able to reach the other as http://othercontainer/, the other container needs to be running an HTTP daemon of some sort on port 80 (which probably means it needs to at least be started as root within its container).
If your nginx proxy routes to all of the containers successfully, it's not wrong to just route all inter-container traffic through it (in a previous technology generation we would have called this a service bus). There's not a trivial way to do this in Docker, but you might be able to configure it as a standard HTTP proxy.
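If you do route inter-service calls through the proxy, each mounted site conf would look roughly like this (a sketch; it assumes the service process listens on port 9000, as in your mappings):
server {
    listen 80;
    server_name site1.test;
    location / {
        # the compose network's DNS resolves the container name to the service container
        proxy_pass http://site1.test:9000;
        proxy_set_header Host $host;
    }
}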
I would suggest making all of the outbound service URLs configurable in any case, probably as environment variables. You can imagine wanting to run multiple services together in a development environment (in which case the service URL might be http://localhost:9002), or in a pure-Docker environment like what you show (http://otherservice:9000), or in a hybrid multi-host Docker setup (http://other.host.example.com:9002), or in Kubernetes (http://otherservice.default.svc.cluster.local:9000).
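A sketch of that in the compose file, with a hypothetical variable name:
  site1.test:
    environment:
      - "VIRTUAL_HOST=site1.test"
      - "SITE2_BASE_URL=http://site2.test:9000"   # e.g. http://localhost:9002 when run outside Docker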
I'd like my web Docker container to access Redis on 127.0.0.1:6379 from within the web container. I've set up my Docker Compose file as follows, but I get ECONNREFUSED:
version: "3"
services:
web:
build: .
ports:
- 8080:8080
command: ["test"]
links:
- redis:127.0.0.1
redis:
image: redis:alpine
ports:
- 6379
Any ideas?
The short answer to this is "don't". Docker containers each get their own loopback interface, 127.0.0.1, that is separate from the host loopback and from that of other containers. You can't redefine 127.0.0.1, and if you could, that would almost certainly break other things.
There are technically possible ways to do it. One is to run all the containers directly on the host network, with:
network_mode: "host"
However, that removes the docker network isolation that you'll want with containers.
You can also attach one container to the network of another container (so they have the same loopback interface) with:
docker run --net container:$container_id ...
but I'm not sure if there's a syntax to do this in docker-compose, and it's not available in swarm mode since containers may run on different nodes. The main use I've had for this syntax is attaching network debugging tools like nicolaka/netshoot.
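For what it's worth, Compose does have a syntax for this: network_mode can point at another service (not in swarm mode, for the reason above). A sketch; web then shares redis's loopback interface, and any published ports would have to be declared on the redis service instead:
  web:
    build: .
    command: ["test"]
    network_mode: "service:redis"   # web shares redis's network namespace,
                                    # so 127.0.0.1:6379 inside web reaches redis
  redis:
    image: redis:alpine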
What you should do instead is make the location of the redis database a configuration parameter to your webapp container. Pass the location in as an environment variable, config file, or command line parameter. If the web app can't support this directly, update the configuration with an entrypoint script that runs before you start your web app. This would change your compose yml file to look like:
version: "3"
services:
web:
# you should include an image name
image: your_webapp_image_name
build: .
ports:
- 8080:8080
command: ["test"]
environment:
- REDIS_URL=redis:6379
# no need to link, it's deprecated, use dns and the network docker creates
#links:
# - redis:127.0.0.1
redis:
image: redis:alpine
# no need to publish the port if you don't need external access
#ports:
# - 6379
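If the web app can't read REDIS_URL directly, a small entrypoint script is one way to inject it before the app starts (a sketch; the config path and placeholder name are hypothetical):
#!/bin/sh
# entrypoint.sh: substitute the REDIS_URL env var into the app's config, then start the app
sed -i "s|__REDIS_URL__|${REDIS_URL}|g" /app/config.json
exec "$@"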