Docker compose cannot bind to any port - docker

I have been trying to run the docker-compose up command, but every time it errors out saying the ports are not available. I have tried every random port I can think of, but they all give me the same error.
Here's my full error:
ERROR: for 5123b7524073_interaction Cannot start service interaction: Ports are not available: listen tcp 0.0.0.0:4005: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: for content Cannot start service content: Ports are not available: listen tcp 0.0.0.0:4000: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: for mongo Cannot start service mongo: Ports are not available: listen tcp 0.0.0.0:40000: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: for user Cannot start service user: Ports are not available: listen tcp 0.0.0.0:4003: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: for interaction Cannot start service interaction: Ports are not available: listen tcp 0.0.0.0:4005: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
ERROR: Encountered errors while bringing up the project.
and here's my compose file
version: "3.7"
services:
content:
container_name: content
restart: always
build: ./Content
ports:
- "4000:3000"
external_links:
- mongo
interaction:
container_name: interaction
restart: always
build: ./Interaction
ports:
- "4005:3002"
external_links:
- mongo
user:
container_name: user
restart: always
build: ./User
ports:
- "4003:3001"
external_links:
- mongo
mongo:
container_name: mongo
image: mongo
ports:
- "40000:27017"
I have tried running net stop winnat before starting Docker, and I made sure none of the ports I was using fell into a reserved range by running netsh interface ipv4 show excludedportrange protocol=tcp (the exact sequence I ran is sketched below the output).
Here's my output from running netsh:
Start Port End Port
---------- --------
5357 5357
7080 7080
50000 50059 *
* - Administered port exclusions.
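For reference, the rough command sequence (run from an elevated PowerShell prompt; winnat is the Windows NAT service that owns those port exclusions) was:
# stop the Windows NAT service so it releases its reserved port ranges
net stop winnat
# list the TCP port ranges still reserved by Windows
netsh interface ipv4 show excludedportrange protocol=tcp
# bring the stack up, then restore the NAT service
docker-compose up
net start winnat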
Does anybody know what the problem could be here? I am running Docker on Windows 10.

I just found the problem: my antivirus was blocking Docker from accessing those ports. After reinstalling Docker Desktop I made sure to run it with admin privileges; it then asked for firewall permissions for com.docker.backend or something like that. Grant those permissions and it works.
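For anyone hitting the same symptom, it is also worth checking whether another process already owns the published ports before blaming the firewall; a quick sketch using standard Windows tools, with the port numbers taken from the compose file above:
# see whether another process is already listening on the published ports
netstat -ano | findstr ":4000 :4003 :4005 :40000"
# map a PID from the netstat output to a process name (fill in the placeholder PID)
tasklist /FI "PID eq <pid-from-netstat>"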

Related

docker : how to open a port on a specific ip? Error starting userland proxy: cannot assign requested address

I have these two networks in my docker compose and I would like Docker to open a port on only one of the networks the container is connected to.
I have therefore specified an ipam block on my network and an IP on the published port, following this post: docker-compose - How to specify which network for listening port?
But then I get the error:
Error response from daemon: driver failed programming external connectivity on endpoint nginx (ffa5c03d45d2e985c8caf76449ae8920f8fd59aeb6aa8618c300bfc1204a480c): Error starting userland proxy: listen tcp4 172.101.0.2:80: bind: cannot assign requested address
I am running Docker with userns-remap, so the container processes are not root on the host, but when I run sudo docker compose up I get the same error.
Any idea where the issue could be coming from?
Here is the docker-compose.yml file:
---
version: '3.8'
services:
  nginx:
    cpus: 0.5
    mem_limit: 400m
    container_name: nginx
    cap_drop:
      #- CHOWN #Disables the ability to change file ownership and group ownership
      - DAC_OVERRIDE #Disables the ability to bypass file and directory read, write, and execute permission checks
      - DAC_READ_SEARCH #Disables the ability to bypass file and directory read permission checks and to search directories with execute permission
      - FOWNER #Disables the ability to override file ownership and group ownership
      - FSETID #Disables the ability to set the set-user-ID and set-group-ID bits on files
      - KILL #Disables the ability to send signals to any process or process group
      - NET_RAW #Disables the ability to use raw IP sockets
      #- SETGID #Disables the ability to set the effective group ID
      #- SETUID #Disables the ability to set the effective user ID
      - SETPCAP #Disables the ability to set the process capabilities
      - SYS_CHROOT #Disables the ability to change the root directory
      - SYS_MODULE #Disables the ability to load and unload kernel modules
      - SYS_PTRACE #Disables the ability to trace processes with ptrace
      - SYS_RAWIO #Disables the ability to perform raw I/O operations
      - SYS_TIME #Disables the ability to set system clocks and timers
      - SYS_TTY_CONFIG #Disables the ability to configure tty devices
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
    restart: unless-stopped
    volumes:
      - /logs/nginx:/var/log/nginx/
    ports:
      # - "172.101.0.2:80:80" #Error response from daemon: driver failed programming external connectivity on endpoint nginx (0f129795c4d99edafa4ba4d0b29845d1abad26e3ee3bf635ddb549831191d8cf): Error starting userland proxy: listen tcp4 172.101.0.2:80: bind: cannot assign requested address
      - "80:80"
    command: "/bin/sh -c 'nginx -g \"daemon off;\"'"
    depends_on:
      - app
    networks:
      backend:
      web:
        ipv4_address: 172.101.0.2
  app:
    cap_drop:
      - ALL
    cpus: 0.5
    mem_limit: 400m
    container_name: app
    restart: unless-stopped
    build:
      context: .
      dockerfile: ./app/Dockerfile
    volumes:
      - /logs/app:/app/logs
    networks:
      - backend
volumes:
  app_volume:
    name: app_volume
networks:
  web:
    name: web
    ipam:
      config:
        - subnet: 172.101.0.0/24
  backend:
    name: backend
    ipam:
      config:
        - subnet: 172.102.0.0/24
Your docker-compose.yml file tries to use ports: to specify a Docker-internal IP address, and you can't do that. It sounds like you're trying to use ports:-like functionality within the Docker-internal network, and that's not possible at all.
There are two networking mechanisms in play here:
Docker has an internal networking system. Containers internally happen to have individual IP addresses, but you almost never need to know or specify these. Instead, you can use a container's name as a DNS name, and connect to the container's "normal" port.
If you need to connect to a container from outside of Docker, then you need to specify ports:. That has an optional IP address part, but it specifies a host IP address, not a container address.
So say you have some sort of "gateway" system that has both a public interface 10.20.30.40/16, and also a private interface 192.168.73.1/24. When you run a container, you could specify ports: ['192.168.73.1:80:80'] to publish a container on only the internal network but not the external network. To reiterate, this is a host address that you've already configured using ifconfig or a similar mechanism.
For connections between containers, you have no options to selectively publish or remap ports. In your example, assuming the nginx main container process listens on port 80, http://nginx will reach it from any other container on the backend or web networks. ports: aren't required or used here; there is no option to change the port number, or to only listen on one network but not the other.
The minimum change you need to make is to delete the problematic ports: line; you don't need (and can't use) ports: to control the Docker-internal network.
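A minimal sketch of both cases, reusing the illustrative 192.168.73.1 host address from above (it is assumed that address already exists on the host):
services:
  nginx:
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
    ports:
      # publish on every host interface ...
      - "80:80"
      # ... or restrict publishing to one *host* address (never a container or network address)
      # - "192.168.73.1:80:80"
    networks:
      backend:
      web:
        ipv4_address: 172.101.0.2
  app:
    networks:
      - backend
    # other containers reach nginx as http://nginx on port 80; no ports: entry is needed for that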

Docker-compose multiple services listen on same port, different domains

How can I have multiple Docker services listening on the same port, just using different domains?
Is it even possible to define this in docker-compose, or do I need to have just one service listening on the port and then reroute the traffic to the respective services depending on the domain?
This example is failing since both services try to listen on the whole host interface (instead of just their domains):
docker-compose up
Creating network "test-docker_default" with the default driver
Creating test-docker_static_1 ... done
Creating test-docker_app_1 ...
Creating test-docker_app_1 ... error
ERROR: for test-docker_app_1 Cannot start service app: driver failed programming external connectivity on endpoint test-docker_app_1 (ef433ffad1af01ffa31cd8a69a8c15b69ca7e7b6935924d34891b92682570e68): Bind for 0.0.0.0:80 failed: port is already allocated
ERROR: for app Cannot start service app: driver failed programming external connectivity on endpoint test-docker_app_1 (ef433ffad1af01ffa31cd8a69a8c15b69ca7e7b6935924d34891b92682570e68): Bind for 0.0.0.0:80 failed: port is already allocated
docker-compose.yml
version: '3.3'
services:
  app:
    image: node
    depends_on:
      - static
    networks:
      default:
        aliases:
          - app.localhost
    ports:
      - 80:80
  static:
    image: nginx
    networks:
      default:
        aliases:
          - static.localhost
    ports:
      - 80:80
/etc/hosts
127.0.0.1 app.localhost
127.0.0.1 static.localhost
You can map only one container to a given host port. If you want to expose two services on the same host port, you should use a reverse proxy like Traefik (which is well integrated with Docker). The reverse proxy listens on host port 80 and forwards to a specific container, on a port that is not mapped to the host, based on rules such as the requested host name (alias) or URL path.
You should use a reverse proxy. You can, for instance, take a look at jwilder/nginx-proxy on Docker Hub; the documentation is quite good!
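A rough sketch of the nginx-proxy approach (the image name, the docker.sock mount, and the VIRTUAL_HOST convention come from the jwilder/nginx-proxy documentation; treat it as a starting point rather than a drop-in config):
version: '3.3'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"                        # only the proxy publishes port 80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  app:
    image: node
    environment:
      - VIRTUAL_HOST=app.localhost     # the proxy routes this domain to the container
  static:
    image: nginx
    environment:
      - VIRTUAL_HOST=static.localhost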

Docker container to container connect: connection refused

When everything runs standalone outside of Docker, it works with no problem when core attempts to do a GET from cerner. However, when everything is dockerized as below, I get:
Get http://cerner:8602/api/v1/patient/search: dial TCP 192.168.240.4:8602: connect: connection refused. The .4 is the IP of the cerner container and .2 is the IP of the core container.
Cerner is the name of the container being called from core. If I change the name to the IP address of the host server and use the published ports, it also works fine. It just does not allow container-to-container calls using the container's DNS name or IP. I have tried with and without the private network and get the same thing.
The containers are all built from scratch Go images.
version: '3.7'
services:
  caConnector:
    image: vertisoft/ca_connector:latest
    ports:
      - "8601:7001"
    env_file:
      - .env.ca_connector
    networks:
      - core-net
  fhir:
    image: vertisoft/fhir_connector:latest
    container_name: cerner
    ports:
      - "8602:7002"
    env_file:
      - .env.fhir_connector
    networks:
      - core-net
  core:
    image: vertisoft/core:latest
    ports:
      - "8600:7000"
    env_file:
      - .env.core
    networks:
      - core-net
networks:
  core-net:
    driver: bridge
For service-to-service communication you should call the other service on its container port, not its host port. In your case that means ports 7000 to 7002 whenever one container connects to another by container name.
Get http://cerner:8602/api/v1/patient/search: dial TCP 192.168.240.4:8602: connect: connection refused.
As the error shows, the client is attempting the connection on the published host port (8602).
For example
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
When you run docker-compose up, the following happens:
A network called myapp_default is created.
A container is created using web’s configuration. It joins the network myapp_default under the name web.
A container is created using db’s configuration. It joins the network myapp_default under the name db.
In v2.1+, overlay networks are always attachable.
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
It is important to note the distinction between HOST_PORT and CONTAINER_PORT. In the above example, for db, the HOST_PORT is 8001 and the container port is 5432 (the postgres default). Networked service-to-service communication uses the CONTAINER_PORT. When HOST_PORT is defined, the service is accessible outside the swarm as well.
Within the web container, your connection string to db would look like postgres://db:5432, and from the host machine, the connection string would look like postgres://{DOCKER_IP}:8001.
compose-networking
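Applied to the compose file in the question, core should call the cerner container by name on its container port; a sketch (the URL path is the one from the error message, and curl merely stands in for whatever HTTP client the Go service uses):
# from inside the core container: container name + container port (7002),
# not the published host port (8602)
curl http://cerner:7002/api/v1/patient/search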

Driver failed programming external connectivity: ... bind: cannot assign requested address

In my development environment I want to simulate a "web farm" deployment, where I have several "physical" nodes running multiple services, which belong to the same network.
So, say I have 5 nodes; each node will host 1 mysql, 1 nginx and 2 web apps, and mysql will bind to the same port 3306 but on different IP addresses.
I started to write the docker-compose config and got stuck on the very first step: Docker refuses to create a new IP in the given network and bind mysql to a port on that IP.
That's the configuration I'm trying to use:
version: "3"
services:
node1_sql:
image: mariadb:10.0.33
restart: always
networks:
skkb:
ipv4_address: 10.9.2.2
ports:
- 10.9.2.2:3306:3306
environment:
- MYSQL_DATABASE=dbname
- MYSQL_ROOT_PASSWORD=password
volumes:
- ./sql_data/1:/var/lib/mysql
command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci
networks:
skkb:
driver: bridge
ipam:
driver: default
config:
- subnet: 10.9.2.0/24
When I try to do docker-compose up I get the following error:
Creating network "node-r_skkb" with driver "bridge"
Creating node-r_node1_sql_1 ... error
ERROR: for node-r_node1_sql_1 Cannot start service node1_sql: driver failed programming external connectivity on endpoint node-r_node1_sql_1 (24a8412b80ebc95f5b15f5d4ea5281639d6914f312f525cf8803ed5179b906a7): Error starting userland proxy: listen tcp 10.9.2.2:3306: bind: cannot assign requested address
ERROR: for node1_sql Cannot start service node1_sql: driver failed programming external connectivity on endpoint node-r_node1_sql_1 (24a8412b80ebc95f5b15f5d4ea5281639d6914f312f525cf8803ed5179b906a7): Error starting userland proxy: listen tcp 10.9.2.2:3306: bind: cannot assign requested address
ERROR: Encountered errors while bringing up the project.
If I try to bind to 10.9.2.1, it works with no problem. So it seems to me that it cannot create the new IP address configured as ipv4_address: 10.9.2.2.
Any ideas how to fix that?

Syslog driver not working with docker compose and elk stack

I want to send logs from one container running my_service to another running the ELK stack with the syslog driver (so I will need the logstash-input-syslog plugin installed).
I am tweaking this elk image (and tagging it as elk-custom) via the following Dockerfile-elk
(using port 514 because this seems to be the default port)
FROM sebp/elk
WORKDIR /opt/logstash/bin
RUN ./logstash-plugin install logstash-input-syslog
EXPOSE 514
I am running my services via docker-compose, more or less as follows:
elk-custom:
  # image: elk-custom
  build:
    context: .
    dockerfile: Dockerfile-elk
  ports:
    - 5601:5601
    - 9200:9200
    - 5044:5044
    - 514:514
my_service:
  image: some_image_from_my_local_registry
  depends_on:
    - elk-custom
  logging:
    driver: syslog
    options:
      syslog-address: "tcp://elk-custom:514"
However:
ERROR: for b4cd17dc1142_namespace_my_service_1 Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving
ERROR: for api Cannot start service my_service: failed to initialize logging driver: dial tcp: lookup elk-custom on 10.14.1.31:53: server misbehaving
ERROR: Encountered errors while bringing up the project.
Any suggestions?
UPDATE: Apparently nothing is listening on port 514, because from within the container the command netstat -a shows nothing on this port... no idea why.
You need to use tcp://127.0.0.1:514 instead of tcp://elk-custom:514. The reason is that this address is resolved by the Docker daemon on the host, not from inside the container, which is why elk-custom is not reachable.
So this only works when you map the port (which you have done), the elk service is started first (which you have done), and the address is reachable from the Docker host, which is why you use tcp://127.0.0.1:514.
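A sketch of the corrected logging block, assuming port 514 stays mapped to the host as in the compose file above:
my_service:
  image: some_image_from_my_local_registry
  depends_on:
    - elk-custom
  logging:
    driver: syslog
    options:
      # resolved by the Docker daemon on the host, so point it at the host-mapped port
      syslog-address: "tcp://127.0.0.1:514"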
