How to Make a Docker Container Only Accessible via a Single IP Address - docker

I have a docker-compose file that looks something like the following:
version: "3.1"
services:
app:
container_name: Apache_web_server
image: httpd:2.4
ports:
- 40:80
restart: unless-stopped
volumes:
- ./web-root:/usr/local/apache2/htdocs
As it is currently configured, any IP can access the apache web server on port 40. I can change the ports section to this:
ports:
  - "127.0.0.1:40:80"
And it only allows traffic from localhost on port 40 into the container. However, if I change the ports section to this:
ports:
  - "192.168.1.24:40:80"
And try to start the container, I get this lovely error:
ERROR: for Apache_web_server Cannot start service app: driver failed programming external connectivity on endpoint Apache_web_server ([ID]): Error starting userland proxy: listen tcp4 192.168.1.24:40: bind: cannot assign requested address
ERROR: for app Cannot start service app: driver failed programming external connectivity on endpoint Apache_web_server ([ID]): Error starting userland proxy: listen tcp4 192.168.1.24:40: bind: cannot assign requested address
Does anyone know what's going on with this? I want to (in this example) restrict access to the apache web server to only requests from the IP 192.168.1.24.

First, check whether this address actually exists on the Linux host with the command ip a | grep -w inet; the error bind: cannot assign requested address means the host does not own the IP you are asking Docker to bind to.
Then check whether a similar port is already open on the desired interface, because the problem is a failed bind; you can check with netstat -nltp.
Finally, if there is still a problem, create a bridge network in docker-compose and test again.
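A quick way to run the first two checks from the host (a sketch; the grep for ':40 ' simply narrows the output to the port from the question):

# 1. List the IPv4 addresses assigned to the host's interfaces;
#    192.168.1.24 must appear here, or the bind will fail
ip a | grep -w inet

# 2. List listening TCP sockets; nothing else may already hold port 40
netstat -nltp | grep ':40 '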

Related

How to properly configure HAProxy in Docker Swarm to automatically route traffic to replicated services (via SSL)?

I'm trying to deploy a Docker Swarm of three host nodes with a single replicated service and put an HAProxy in front of it. I want the clients to be able to connect via SSL.
My docker-compose.yml:
version: '3.9'
services:
  proxy:
    image: haproxy
    ports:
      - 443:8080
    volumes:
      - haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - servers-network
  node-server:
    image: glusk/hackathon-2021:latest
    ports:
      - 8080:8080
    command: npm run server
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - servers-network
networks:
  servers-network:
    driver: overlay
My haproxy.cfg (based on the official example):
# Simple configuration for an HTTP proxy listening on port 80 on all
# interfaces and forwarding requests to a single backend "servers" with a
# single server "server1" listening on 127.0.0.1:8000
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 127.0.0.1:8000 maxconn 32
My hosts are Lightsail VPS Ubuntu instances and share the same private network.
node-server runs each HTTPS server task inside its own container on 0.0.0.0:8080.
The way I'm trying to make this work at the moment is to ssh into the manager node (which also has a static and public IP), copy over my configuration files from above, and run:
docker stack deploy --compose-file=docker-compose.yml hackathon-2021
but it doesn't work.
Well, first of all, regarding SSL (since it's the first thing you mention): you need to configure it using the certificate and listen on port 443, not port 80.
With that modification, your proxy configuration would already change to:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/hackaton2021.pem
    default_backend servers
That would be a really simplified configuration for allowing SSL connections.
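As an aside, HAProxy expects the certificate and the private key concatenated into a single .pem file. A sketch of producing a self-signed one for testing (file names and the CN are assumptions):

# Generate a self-signed certificate and key (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout hackaton2021.key -out hackaton2021.crt \
  -subj "/CN=example.com"

# HAProxy wants certificate and key in one file
cat hackaton2021.crt hackaton2021.key > hackaton2021.pem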
Now, let's look at access to the different services.
First of all, you cannot access the service on localhost; in fact, you shouldn't even expose the services' ports to the host. The reason? Those applications are already on the same network as the HAProxy, so the ideal is to take advantage of Docker's DNS to reach them directly.
In order to do this, first we need to be able to resolve the service names. For that you need to add the following section to your configuration:
resolvers docker
    nameserver dns1 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry 1s
    hold other 10s
    hold refused 10s
    hold nx 10s
    hold timeout 10s
    hold valid 10s
    hold obsolete 10s
The Docker Swarm DNS service is always available at 127.0.0.11.
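You can verify this resolution from inside any container attached to the overlay network (a sketch; in Swarm the bare service name resolves to the service VIP, while tasks.<service> resolves to the individual replica IPs):

nslookup node-server 127.0.0.11        # service VIP
nslookup tasks.node-server 127.0.0.11  # one address per replica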
Now, to your previously existing configuration, we have to add the servers, but using service-name discovery:
backend servers
    balance roundrobin
    server-template node- 2 node-server:8080 check resolvers docker init-addr libc,none
If you look at what we are doing, we are creating a server entry for each of the containers discovered in the Swarm for the node-server service (that is, the replicas), and we name each of them with the prefix node-.
Basically, that is equivalent to looking up the actual IP of each replica and adding them, stacked, as a basic server configuration.
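For illustration, if the two replicas happened to resolve to 10.0.1.3 and 10.0.1.4 (made-up addresses), the expanded equivalent would be:

backend servers
    balance roundrobin
    server node-1 10.0.1.3:8080 check
    server node-2 10.0.1.4:8080 check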
Your deployment also has some errors, since we aren't actually interested in exposing the node-server ports to the host, but only in creating the two replicas and using HAProxy for the networking.
For that, we should use the following Docker Compose:
version: '3.9'
services:
  proxy:
    image: haproxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - hackaton2021.pem:/etc/ssl/certs/hackaton2021.pem
      - haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    deploy:
      placement:
        constraints: [node.role == manager]
  node-server:
    image: glusk/hackathon-2021:latest
    command: npm run server
    deploy:
      mode: replicated
      replicas: 2
Remember to copy your haproxy.cfg and the self-signed (or real) certificate for your application to the instance before deploying the Stack.
Also, when you create that stack it will automatically create a network with the name <STACK_NAME>_default, so you don't need to define a network just for connecting the two services.
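Assuming all the files sit next to each other on the manager node, the deployment and a quick sanity check could look like this (the service names follow from the stack name):

# Deploy (or update) the stack from the manager node
docker stack deploy --compose-file=docker-compose.yml hackathon-2021

# Verify that both services converged to the desired replica counts
docker service ls
docker service logs hackathon-2021_proxy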

Access host from within a docker container

I have a dockerized app and I use the following docker-compose.yml to run it:
version: '3.1'
services:
  db:
    image: mysql:5.7
    ports:
      - "3306:3306"
    env_file:
      - ./docker/db/.env
    volumes:
      - ./docker/db/data:/var/lib/mysql:rw
      - ./docker/db/config:/etc/mysql/conf.d
    command: mysqld --sql_mode="NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
  php:
    build: ./docker/php/7.4/
    volumes:
      - ./docker/php/app.ini:/usr/local/etc/php/conf.d/docker-php-ext-app.ini:ro
      - ./docker/logs/app:/var/www/app/var/log:cached
      - .:/var/www/app:cached
    working_dir: /var/www/app
    links:
      - db
    env_file:
      - ./docker/php/.env
  webserver:
    image: nginx:1
    depends_on:
      - php
    volumes:
      - ./docker/webserver/app.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/logs/webserver/:/var/log/nginx:cached
      - .:/var/www/app:ro
    ports:
      - "80:80"
I have a server that is not dockerized running on my machine; I can access it via localhost:3000. I would like my php service to be able to access it.
I found people suggesting to add the following to my php service configuration:
extra_hosts:
  - "host.docker.internal:host-gateway"
But when I add this, then run docker-compose up -d and try docker exec -ti php_1 curl http://localhost:3000, I get curl: (7) Failed to connect to localhost port 3000 after 0 ms: Connection refused. I get the same error when I try to curl http://host.docker.internal:3000.
I desperately tried to add a port mapping to the php container:
ports:
  - 3000:3000
But then when I start the services I have the following error:
ERROR: for php_1 Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
ERROR: for php Cannot start service php: driver failed programming external connectivity on endpoint php_1 (9dacd567ee97b9a46699969f9704899b04ed0b61b32ff55c67c27cb6867b7cef): Error starting userland proxy: listen tcp4 0.0.0.0:3000: bind: address already in use
Which is obvious, since my server is already listening on port 3000.
I also tried to add
network_mode: host
But it fails because I already have links. I get the following error:
Cannot create container for service php: conflicting options: host type networking can't be used with links.
I am running docker v20.10.6 on Ubuntu 21.10.
Any help appreciated, thanks in advance!
Make sure you are using a version of Docker that supports host.docker.internal.
If you are using the Linux version, then 20.10+ supports it.
For other systems you should consult the documentation, and possibly some GitHub issues of docker-for-linux / the other OS-relevant projects.
After that...
Make sure extra_hosts is a direct child of the php service:
php:
  extra_hosts:
    host.docker.internal: host-gateway
  build: ./docker/php/7.4/
Try ping host.docker.internal first to check whether your host machine responds correctly.
Make sure that your service on port 3000 is working properly and there is no firewall issue.
Remember that localhost always means the local IP from the current container's point of view. That is, localhost inside a container maps to the container's own IP, not to your host machine's IP. This is the reason for the extra_hosts section.
Also, host.docker.internal is not your host's loopback interface.
If the service you are trying to reach listens only on the localhost interface, there is no way to reach it without doing some magic with iptables / the firewall.
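One example of such magic (a sketch, and it uses socat rather than iptables) is to re-publish the localhost-only service on all interfaces, so containers can reach it via host.docker.internal on the new port:

# Forward 0.0.0.0:3001 on the host to the 127.0.0.1-only service on 3000
socat TCP-LISTEN:3001,fork,reuseaddr TCP:127.0.0.1:3000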
You can check which service is listening on which interface / IP address by running the following command on your host machine: netstat -tulpn
This should return something like the following output:
$ netstat -tulpn
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:39195           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 ::1:631                 :::*                    LISTEN      -
From a docker container I can reach services listening on 0.0.0.0 (all interfaces), but I cannot access port 631 as it is bound only to 127.0.0.1:
$ docker run --rm -it --add-host="host.docker.internal:host-gateway" busybox
/ # ping host.docker.internal
PING host.docker.internal (172.17.0.1): 56 data bytes
64 bytes from 172.17.0.1: seq=0 ttl=64 time=0.124 ms
64 bytes from 172.17.0.1: seq=1 ttl=64 time=0.060 ms
^C
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.092/0.124 ms
/ # telnet host.docker.internal 631
telnet: can't connect to remote host (172.17.0.1): Connection refused
/ # telnet host.docker.internal 22
Connected to host.docker.internal
SSH-2.0-OpenSSH_8.6

Traefik v2 reverse proxy to a local server outside Docker

I have a simple server written in Python that listens on port 8000 inside a private network (HTTP communication). There is now a requirement to switch to HTTPS communication, and every client that sends a request to the server should get authenticated with its own cert/key pair.
I have decided to use Traefik v2 for this job. Please see the block diagram.
Traefik runs as a Docker image on a host that has IP 192.168.56.101. First I wanted to simply forward an HTTP request from a client to Traefik and then on to the Python server running outside Docker on port 8000. I would add the TLS functionality once the forwarding was working properly.
However, I cannot figure out how to configure Traefik to reverse proxy from e.g. 192.168.56.101/notify?wrn=1 to the Python server at 127.0.0.1:8000/notify?wrn=1.
When I send the above request to the server (curl "192.168.56.101/notify?wrn=1") I get "Bad Gateway" as an answer. What am I missing here? This is the first time I have worked with Docker and a reverse proxy/Traefik. I believe it has something to do with ports, but I cannot figure it out.
Here is my Traefik configuration:
docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
hostname: "traefik"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./traefik.yml:/traefik.yml:ro"
traefik.yml
## STATIC CONFIGURATION
log:
  level: INFO

api:
  insecure: true
  dashboard: true

entryPoints:
  web:
    address: ":80"

providers:
  docker:
    watch: true
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: "traefik.yml"

## DYNAMIC CONFIGURATION
http:
  routers:
    to-local-ip:
      rule: "Host(`192.168.56.101`)"
      service: to-local-ip
      entryPoints:
        - web
  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8000"
First, 127.0.0.1 will resolve to the traefik container itself and not to the docker host. You need to provide a private IP of the node, and it needs to be accessible from the traefik container.
There is a workaround to proxy to a service on the host:
change 127.0.0.1 to the IP of the docker0 interface, which should be 172.17.0.1,
and then make your python server listen on all interfaces (0.0.0.0);
if you use the simple python http server, nothing changes, since by default it listens on all interfaces.
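Putting the two changes together (a sketch; 172.17.0.1 assumes the default docker0 bridge), the dynamic part of traefik.yml would become:

http:
  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://172.17.0.1:8000"

and the Python server can be started on all interfaces with, for example:

python3 -m http.server 8000 --bind 0.0.0.0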

How to set a TCP service backend?

I have an MQTT broker started via docker-compose and managed by traefik:
mqtt:
  container_name: mqtt
  image: eclipse-mosquitto
  restart: always
  labels:
    - 'traefik.port=1883'
    - 'traefik.frontend.rule=Host:mqtt2.ex.com'
    - 'traefik.frontend.entryPoints=mqtt'
  ports:
    - 3883:1883
The relevant part of traefik.toml where I am trying to set up the backend:
[entryPoints]
  [entryPoints.http]
    address = ":80"
  [entryPoints.mqtt]
    address = ":2884"

[tcp] # YAY!
  [tcp.routers]
    [tcp.routers.mqtt]
      entrypoints = ["mqtt"]
      rule = "HostSNI(`*`)" # Catches every request
      service = "mqtt"
  [tcp.services]
    [tcp.services.mqtt.LoadBalancer]
I can access the broker via port 3883, but this is not what I intend to do (the exposed port above is just to make sure that the container is OK): I would like to proxy it through traefik like all my other HTTP services.
This, however, is not an HTTP service but a TCP one, and my problem is that I do not know how to configure such a TCP backend.
The documentation is very light on that feature; it just states that
Currently, LoadBalancer is the only supported kind of TCP Service.
However, since Traefik is an ever evolving project, other kind of TCP
Services will be available in the future, reason why you have to
specify it.
What does this mean in terms of configuring the backend? What should I add to either docker-compose.yaml or traefik.toml so that the backend is recognized as a TCP service? For the moment, it is seen as an HTTP one and the proxying does not work.
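For reference, in the Traefik v2 file provider a TCP load balancer takes a list of servers, each with an address field; a minimal sketch (the mqtt hostname assumes Traefik and the broker share a Docker network):

[tcp.services]
  [tcp.services.mqtt.loadBalancer]
    [[tcp.services.mqtt.loadBalancer.servers]]
      address = "mqtt:1883"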

ClusterJ cannot connect to dockerized MySQL cluster from outside the container

I have set up a MySQL cluster on my PC using the mysql/mysql-cluster image from Docker Hub, and it starts up fine. However, when I try to connect to the cluster from outside Docker (via the host machine) using ClusterJ, it doesn't connect.
Initially I was getting the following error: Could not alloc node id at 127.0.0.1 port 1186: No free node id found for mysqld(API)
So I created a custom mysql-cluster.cnf, very similar to the one distributed with the docker image, but with a new api endpoint:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[ndb_mgmd]
NodeId=1
hostname=192.168.0.2
datadir=/var/lib/mysql
[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql
[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql
[mysqld]
NodeId=4
hostname=192.168.0.10
[api]
This is the configuration used for clusterJ setup:
com.mysql.clusterj.connect:
  host: 127.0.0.1:1186
  database: my_db
Here is the docker-compose config:
version: '3'
services:
  # Sets up the MySQL cluster ndb_mgmd process
  database-manager:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.2
    command: ndb_mgmd
    ports:
      - "1186:1186"
    volumes:
      - /c/Users/myuser/conf/mysql-cluster.cnf:/etc/mysql-cluster.cnf
  # Sets up the first MySQL cluster data node
  database-node-1:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.3
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the second MySQL cluster data node
  database-node-2:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.4
    command: ndbd
    depends_on:
      - database-manager
  # Sets up the first MySQL server process
  database-server:
    image: mysql/mysql-cluster
    networks:
      database_net:
        ipv4_address: 192.168.0.10
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=my_db
      - MYSQL_USER=my_user
    command: mysqld
networks:
  database_net:
    ipam:
      config:
        - subnet: 192.168.0.0/16
When I try to connect to the cluster I get the following error: '127.0.0.1:1186' nodeId 0; Return code: -1 error code: 0 message: .
I can see that the app running ClusterJ is registered to the cluster, but then it disconnects. Here is an excerpt from the docker mysql manager logs:
database-manager_1 | 2018-05-10 11:18:43 [MgmtSrvr] INFO -- Node 3: Communication to Node 4 opened
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Alloc node id 6 succeeded
database-manager_1 | 2018-05-10 11:22:16 [MgmtSrvr] INFO -- Nodeid 6 allocated for API at 10.0.2.2
Any help solving this issue would be much appreciated.
Here is how ndb_mgmd handles the connection request from the ClusterJ application.
You connect to the MGM server on port 1186. Over this connection you receive the configuration, which contains the IP addresses of the data nodes. To connect to the data nodes, ClusterJ will then try to reach 192.168.0.3 and 192.168.0.4. Since ClusterJ is outside Docker, I presume those addresses point to some different place.
The management server will also provide a dynamic port to use when connecting to an NDB data node. It is a lot easier to manage this by setting ServerPort for the NDB data nodes. I usually use 11860 as ServerPort; 2202 is also popular.
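As a sketch, pinning the port would add one line to the cluster configuration from the question (the value is the suggestion above; the rest of the file stays unchanged):

[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
ServerPort=11860

With a fixed ServerPort, the data node ports could also be published in docker-compose, which an external ClusterJ client would need.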
I am not sure how you mix a Docker environment with an external environment. I assume it is possible to solve somehow by setting up proper IP translation tables in the correct places.
