Why can't I connect to this docker compose service?

So I have this docker compose file:
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:5080/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
ports:
- 5080:9091
restart: unless-stopped
I'm new to docker compose and trying it out for the first time. I need to be able to access the transmission service via http://localhost:8080, but nginx is returning a 502.
How should I change my compose file so that http://localhost:8080 will connect to the transmission service?
How can I make the transmission service not accessible via http://localhost:5080 and only accessible via http://localhost:8080 using docker compose?

I have tested the compose file below, and it is working:
version: "2.1"
services:
nginx:
image: pottava/proxy
ports:
- 8080:80
environment:
- PROXY_URL=http://transmission-container:9091/
- BASIC_AUTH_USER=admin
- BASIC_AUTH_PASS=admin
- ACCESS_LOG=true
transmission:
image: linuxserver/transmission
container_name: transmission-container
expose:
- "9091"
restart: unless-stopped
You don't need to expose port 5080 to the host; the nginx container can access the container port directly. The proxy URL needs to point to port 9091. Now you can't access the transmission service directly but need to go through the proxy server.
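A quick way to verify this from inside the nginx container (a sketch; it assumes wget is available in the pottava/proxy image):
# run from the compose project directory; should return the transmission web UI
docker-compose exec nginx wget -qO- http://transmission-container:9091/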

You should be able to access the other container using the service name and container port:
- PROXY_URL=http://transmission:9091/
If you do not want to access the transmission service from localhost, do not declare the host port. Note that a short-syntax entry such as
ports:
  - 9091
still publishes container port 9091 on a random host port; to keep it reachable only from other containers, use expose (as in the compose file above) or omit the ports section entirely.

Related

Docker host: use Docker DNS to resolve a container name from the host network

I need to resolve a container name to its IP address from the Docker host.
The reason for this is that I need a container to run on the host network, but it must also be able to resolve the container "backend", which it also connects to. (The container must send & receive multicast packets.)
docker-compose.yml
version: "3"
services:
database:
image: mongo
container_name: database
hostname: database
ports:
- "27017:27017"
backend:
image: "project/backend:latest"
container_name: backend
hostname: backend
environment:
- NODE_ENV=production
- DATABASE_HOST=database
- UUID=5025f846-7587-11ed-9ca7-8b992b5e7dd3
ports:
- "8080:8080"
depends_on:
- database
tty: true
frontend:
image: "project/frontend:latest"
container_name: frontend
hostname: frontend
ports:
- "80:80"
- "443:443"
depends_on:
- backend
environment:
- BACKEND_HOST=backend
connector:
image: "project/connector:latest"
container_name: connector
hostname: connector
ports:
- "1900:1900/udp"
#expose:
# - "1900/udp"
environment:
- NODE_ENV=production
- BACKEND_HOST=backend
- STARTUP_DELAY=1500
depends_on:
- backend
network_mode: host
tty: true
How can I resolve the hostname "backend" via Docker from the Docker host?
dig backend @127.0.0.11 and dig backend @172.17.0.1 did not work.
A test with a Docker ubuntu image & socat proves that I can receive SSDP multicast packets:
docker run --net host -it --rm ubuntu
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:0.0.0.0,fork -
The only problem I now have is the DNS/container-name resolution from the host (network).
TL;DR
The container "connector" must be on the host network, but it must also be able to resolve the container name "backend" to the Docker-internal IP address.
NOTE: Perhaps this is better suited on superuser or similar?
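Not from the original thread, but a common way to look up a container's Docker-internal IP address from the host is docker inspect; the result could then be written to /etc/hosts or fed into a local resolver:
# queries the Docker engine for the container's IP on its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' backend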

How to connect to a database (MongoDB on server 2) from a Docker container (running on server 1)

Server 1 -> 10.0.0.47
Server 2 -> 10.0.1.202
All ports between these two servers are open, as they are in the same VPN on AWS.
version: '3.3'
networks:
  net:
    external: true
services:
  backend:
    image: test/test-backend:prod
    ports:
      - "8000:8000"
    depends_on:
      - discovery
ERROR: Connection refused
Note: when I change the compose file as below, the connection with mongo is established, but I am unable to access the service on port 8000:
networks:
  net:
    external: true
services:
  backend:
    image: test/test-backend:prod
    expose:
      - "27017:27017"
    ports:
      - "8000:8000"
    depends_on:
      - discovery
The expose instruction does not change anything; it's for documentation only. You can read more about it in the Dockerfile reference.
If the two servers are in the same Docker network, you could change the MongoDB port to 8000 in its installation configuration. Then you don't need to specify a port etc. in the docker-compose configuration.
If you want to access the MongoDB service from outside, you have to change the docker-compose configuration to:
ports:
  - "8000:27017"

Local Communication Between Services

I have 2 services inside my Docker cluster. frontend runs on port 8090, and backend runs on port 8000. How can I make frontend call backend via a local DNS name like fetch('https://backend.local/')? Because if I use the Docker hostname, I need to specify the port to call the backend. Do I need to run a local DNS server inside my Docker setup?
You have to create a software-defined network (SDN) in Docker; then all containers running in that network can communicate with each other using their container names, or you can define an alias for each and use that. A simple docker-compose file for a backend microservice and a MySQL database can be created with the configuration below.
version: '3.2'
networks:
  testNetwork:
services:
  mysql-dev:
    image: mysql:latest
    container_name: mysql-dev
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=root
    ports:
      - "3306:3306"
    networks:
      - testNetwork
  backend:
    image: backend:1.0
    container_name: backend
    environment:
      - DB_USER=root
      - DB_PASS=root
      - DB_NAME=root
      - DB_HOST=mysql-dev
      - DB_DIALECT=mysql
    ports:
      - "4000:4000"
    working_dir: /backend
    command: npm start
    networks:
      - testNetwork
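A quick way to verify the name resolution over testNetwork (a sketch; it assumes ping is available in the backend image):
# resolves mysql-dev via Docker's embedded DNS and sends one probe
docker exec backend ping -c 1 mysql-dev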

Access docker ports from a container inside another container at localhost

I have a setup where I build 2 containers with docker-compose.
One container is a web application; I can access it on port 8080. The other container is ElasticSearch, accessible on port 9200.
This is the content of my docker-compose.yml file:
version: '3'
services:
  serverapplication:
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
  elasticsearch:
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
When I browse to http://localhost:8080/serverapplication I can see my server application.
When I browse to http://localhost:9200/ I can see the default page of ElasticSearch.
But when I try to access ElasticSearch from inside the serverapplication, I get a "connection refused". It seems that port 9200 is unreachable at localhost for the server application.
How can I fix this?
It's never safe to use localhost, since localhost means something different for your host system, for elasticsearch, and for your server application. You're only able to access the containers from your host's localhost because you're mapping container ports onto your host's ports. Instead:
- put them in the same network
- give the containers a name
- access elasticsearch through its container name, which Docker automatically resolves to the current IP of your elasticsearch container
Code:
version: '3'
services:
  serverapplication:
    container_name: serverapplication
    build: "serverapplication"
    entrypoint:
      - bash
      - -x
      - init.sh
    command: ["jdbcUrl=${jdbcUrl} dbUser=${dbUser} dbUserPassword=${dbUserPassword}"]
    ports:
      - "8080:8080"
      - "8443:8443"
      - "8787:8787"
    networks:
      - my-network
  elasticsearch:
    container_name: elasticsearch
    build: "elasticsearch"
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Your server application must use the hostname elasticsearch to access the elasticsearch service, i.e. http://elasticsearch:9200
Your serverapplication and elasticsearch are running in different containers; the localhost of serverapplication is different from the localhost of elasticsearch.
docker-compose sets up a network between the containers such that they can be reached via their service names. So from your serverapplication, you must use the name elasticsearch to connect to it.
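For example, from inside the serverapplication container (assuming curl is available in its image):
# should print the ElasticSearch cluster info JSON
docker exec serverapplication curl -s http://elasticsearch:9200/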

Reach a Docker container from another Docker container via localhost:port

Current compose yaml:
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  php:
    container_name: docker_php
    build: .docker/php73
    volumes:
      - .:/srv/
    networks:
      - app-tier
  rabbitmq:
    container_name: docker_rabbitmq
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-localhost"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    ports:
      - "15672:15672"
      - "5672:5672"
    networks:
      - app-tier
My target is to reach the docker_rabbitmq container from docker_php via localhost:
# inside the php container's bash
telnet localhost 15672
How can I configure a network so that container A has a port mapping on localhost to container B?
You're limited by the internal port: if your two containers are in the same Docker-defined network, you can use the internally opened ports of the respective container. For the hostname of one container to be defined inside another, you can use the links attribute in the service definition in your docker-compose.yml.
Consider a microservice that you want to be accessed only by the containers on that network; exposing its ports on the host wouldn't make sense. Now, assuming rabbitmq is the service that you want to access from the php service, you need to define a link to rabbitmq in your php service definition (please note the link/host definition is not bi-directional: if you need php in your rabbitmq, you need to define a link to php in rabbitmq).
version: '3.7'
networks:
  app-tier:
    driver: bridge
services:
  php:
    container_name: docker_php
    build: .docker/php73
    volumes:
      - .:/srv/
    networks:
      - app-tier
    links:
      - rabbitmq
  rabbitmq:
    container_name: docker_rabbitmq
    image: "rabbitmq:3-management"
    hostname: "rabbitmq-localhost"
    environment:
      RABBITMQ_DEFAULT_USER: guest
      RABBITMQ_DEFAULT_PASS: guest
    networks:
      - app-tier
Now you can access the internal ports of rabbitmq from php, but note that the external ports are not accessible; those are for the host.
# inside your `php` container `bash`
telnet rabbitmq <internal_port>
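With the rabbitmq:3-management image, the internal ports are the image defaults, the same ones that appeared in the original port mappings:
# inside the php container
telnet rabbitmq 5672     # AMQP
telnet rabbitmq 15672    # management UI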
Also note that I removed the ports section from rabbitmq, so these ports are no longer accessible from the host.
Update
If you want the ports opened in rabbitmq to be accessible in php on localhost, the easiest and simplest way is to configure rabbitmq to run in the network namespace of php. To do this, simply add network_mode: "service:php" (the compose form of network_mode: "container:[container name/id]", which references a container directly). Note that Docker does not allow port mappings together with this network mode, so the ports section has to go; any ports needed on the host would be published on the php service instead.
rabbitmq:
  container_name: docker_rabbitmq
  image: "rabbitmq:3-management"
  hostname: "rabbitmq-localhost"
  environment:
    RABBITMQ_DEFAULT_USER: guest
    RABBITMQ_DEFAULT_PASS: guest
  network_mode: "service:php"
