I currently have the following setup:
# https://github.com/SeleniumHQ/docker-selenium
version: "3"
services:
selenium-hub:
image: ${DOCKER_REGISTRY}selenium/hub:2.53.1-americium
container_name: selenium-hub
ports:
- 4444:4444
environment:
- NODE_MAX_SESSION=5
- GRID_DEBUG=false
selenium-chrome:
image: ${DOCKER_REGISTRY}selenium/node-chrome-debug:2.53.1-americium
container_name: chrome
ports:
- 5900:5900
depends_on:
- selenium-hub
environment:
- HUB_PORT_4444_TCP_ADDR=selenium-hub
- HUB_PORT_4444_TCP_PORT=4444
- SHM-SIZE=2g
- SCREEN_WIDTH=2560
- SCREEN_HEIGHT=1440
- GRID_DEBUG=false
volumes:
- /tmp/
- /dev/shm/:/dev/shm/
tomcat:
build:
context: .
args:
ARTIFACTORY: ${DOCKER_REGISTRY}
container_name: tomcat
restart: on-failure
ports:
- 8080:8080
depends_on:
- db
volumes:
- ./src/test/resources/tomcat/context.xml:/opt/tomcat/conf/context.xml
- ./src/test/resources/tomcat/tomcat-users.xml:/opt/tomcat/conf/tomcat-users.xml
The above config sets up a Selenium hub and deploys a webapp to a Tomcat container. The served resources contain hrefs along the lines of http://tomcat:8080/...
If I follow such an href from outside the containers, the hostname tomcat cannot be resolved, since that name only exists inside the container network. One solution would be to expose that internal DNS to the host machine, but I have no idea how.
Another would be to do a string replace on the href value, replacing tomcat with localhost, but that seems kind of dirty.
Does anyone know how I can expose the internal DNS to the host machine?
The answer can be found at https://docs.docker.com/config/containers/container-networking/ (see the notes on /etc/hosts and /etc/resolv.conf).
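In practice, since the tomcat container already publishes port 8080, the simplest variant of that approach is to map the hostname tomcat to the loopback address in the host's /etc/hosts file. A minimal sketch (Linux/macOS, requires root):

echo "127.0.0.1 tomcat" | sudo tee -a /etc/hosts

After that, http://tomcat:8080/... resolves on the host and lands on the published port.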
Related
I need to resolve a container name to its IP address from the Docker host.
The reason: I need a container to run on the host network, but it must also be able to resolve the container "backend", which it also connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I still have is the DNS/container name resolution from the host (network).
TL;DR
The container "connector" must be on the host network,but also be able to resolve the container name "backend" to the docker internal IP Address.
NOTE: Perhaps this is better suited for Super User or similar?
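One way to look up the Docker-internal address from the host is docker inspect; a minimal sketch, using the container name from the compose file above:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend

The resulting IP could then be written to the host's /etc/hosts as a backend entry, with the caveat that it changes whenever the container is recreated.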
I set up a Docker network with a db container, a nextcloud container, and an nginx container. I can access the Nextcloud website at 'ip-address':8080, but I want to access it without specifying port 8080. How can I do that?
This is my docker-compose.yml:
version: '2'
volumes:
  nextcloud:
  db:
services:
  db:
    image: mariadb
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_PASSWORD=
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
  app:
    image: nextcloud:fpm
    restart: always
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - MYSQL_PASSWORD=
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db
  web:
    image: nginx
    restart: always
    ports:
      - 8080:80
    links:
      - app
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    volumes_from:
      - app
What you want is to avoid having to specify the port when you request a URI. One way to do that is to use the default port for the protocol you are using (80 for HTTP, 443 for HTTPS, 21 for FTP, etc.), and rely on your client to fall back to that default automatically.
In a Docker Compose file, the syntax for publishing a port is <host_port>:<container_port> (see the documentation). That means 8080:80 publishes port 80 of the container on port 8080 of your Docker host.
In your case, the service exposes an HTTP server, so you have to publish it on the default port 80 in order to omit it. Update services.web.ports[0] from 8080:80 to 80:80, and you will be able to access Nextcloud at 'ip-address' alone.
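The relevant part of the compose file then becomes:

web:
  image: nginx
  restart: always
  ports:
    - 80:80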
I want to access service1 from inside the service2 container by using localhost:5432. How can I do so?
This is what my docker compose currently looks like:
services:
  service1:
    image: postgres:12
    ports:
      - '172.10.1.1:5432:5432'
    expose:
      - '5432'
    environment:
      - POSTGRES_USER=project
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  service2:
    build: .
    ports:
      - '172.10.1.1:1234:1234'
Please note that I know I can access it by using service1:5432 or just service1. But I would like to use localhost if possible.
It is not possible, because each container has its own IP.
But there is a workaround:
Set the network mode to host. The ports are then open on the host machine and accessible via 127.0.0.1. This does not work on Windows.
But I don't know of any good reason to use localhost for Postgres. Are you trying to authenticate via localhost? Don't do that; use a password instead.
Using the host network may be the solution you are looking for:
https://docs.docker.com/network/host/
services:
  service1:
    image: postgres:12
    network_mode: host
    expose:
      - '5432'
    environment:
      - POSTGRES_USER=project
      - POSTGRES_PASSWORD=pass
    volumes:
      - db_data:/var/lib/postgresql/data
  service2:
    build: .
    network_mode: host
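With both services in host mode, service2 (and the host itself) reaches Postgres at localhost:5432; note that the ports and expose entries have no effect once network_mode: host is set. A quick check, assuming the psql client is installed:

psql -h localhost -p 5432 -U project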
I have a setup where I build two containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is Elasticsearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access Elasticsearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 on localhost is unreachable from the server application.
How can I fix this?
It's never safe to use localhost here, since localhost means something different for your host system, for Elasticsearch, and for your server application. You're only able to access the containers via your host's localhost because you're mapping container ports onto your host's ports.
Instead:
put them in the same network
give the containers a name
access Elasticsearch through its container name, which Docker automatically resolves to the current IP of your Elasticsearch container.
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must then use the hostname elasticsearch to reach the Elasticsearch service, i.e. http://elasticsearch:9200.
Your serverapplication and elasticsearch are running in different containers. The localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers so that they can be reached by their service names. So from your serverapplication, you must use the name 'elasticsearch' to connect to it.
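A quick way to verify the name resolution, assuming curl is available inside the application image:

docker exec serverapplication curl -s http://elasticsearch:9200

This should print the same Elasticsearch default page you see at http://localhost:9200 on the host.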
I've been trying to connect two Docker containers: my Flask backend and my React frontend. When I use localhost in the request, the request goes through, but when I use the Docker container name, i.e. http://backend-service:5000/endpoint, the name can't be resolved. The documentation states that the containers connect to the same network automatically and that accessing services from one another should be as simple as that. I've tried adding links to the docker-compose file as well, with no luck.
Here is my docker-compose file:
version: '3'
services:
  backend-service:
    build: ./api
    expose:
      - 5000
    ports:
      - "5000:5000"
    volumes:
      - ./api:/usr/src/app
    environment:
      - FLASK_ENV=development
      - FLASK_APP=app.py
      - FLASK_DEBUG=1
  client-service:
    build: ./clientside
    expose:
      - 3000
    ports:
      - "3000:3000"
    volumes:
      - ./clientside/src:/usr/src/app/src
      - ./clientside/public:/usr/src/app/public
    links:
      - "backend-service:backend"