I'm trying to access an ftpd server from the host using ftp localhost or ftp <my_ip>, but I'm getting ftp: connect: Connection refused.
version: '3'
services:
  ftpd-server:
    container_name: ftpd-server
    image: stilliard/pure-ftpd:hardened
    ports:
      - 21:21
      - 20:20
      - 30000-30009:30000-30009
    volumes:
      - './ftp/data:/home/username/'
      - './ftp/pass:/etc/pure-ftpd/passwd'
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "user"
      FTP_USER_PASS: "pass"
      FTP_USER_HOME: "/home/username"
    restart: always
Since I'm using PUBLICHOST: "0.0.0.0" and the port mapping 21:21, I was expecting to be able to connect.
Docker Log
Removing ftpd-server ... done
Removing network mytest_default
No stopped containers
Creating network "mytest_default" with the default driver
Creating ftpd-server ...
Creating ftpd-server ... done
Attaching to ftpd-server
ftpd-server | Creating user...
ftpd-server | Password:
ftpd-server | Enter it again:
ftpd-server | Setting default port range to: 30000:30009
ftpd-server | Setting default max clients to: 5
ftpd-server | Setting default max connections per ip to: 5
ftpd-server | Starting Pure-FTPd:
ftpd-server | pure-ftpd -l puredb:/etc/pure-ftpd/pureftpd.pdb -E -j -R -P 0.0.0.0 -s -A -j -Z -H -4 -E -R -G -X -x -p 30000:30009 -c 5 -C 5
How can I connect from the host machine to the FTP server running in the container?
You can add network_mode: host to your service definition to make it work.
services:
  ftpd-server:
    # ...
    network_mode: host
    # ...
Then test with:
$ ftp -p localhost 21
Connected to localhost.
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 1 of 5 allowed.
220-Local time is now 16:04. Server port: 21.
220-This is a private system - No anonymous login
220 You will be disconnected after 15 minutes of inactivity.
A working example on my side is as follows:
version: "1.0"
services:
  ftpd-server:
    image: stilliard/pure-ftpd:hardened
    ports:
      - "21:21"
      - "30000-30009:30000-30009"
    volumes:
      - './ftp/data:/home/username/'
      - './ftp/pass:/etc/pure-ftpd/passwd'
    environment:
      PUBLICHOST: "0.0.0.0"
      FTP_USER_NAME: "username"
      FTP_USER_PASS: "changeme!"
      FTP_USER_HOME: "/home/username"
    restart: always
Adding the line network_mode: host causes the following error with a recent Docker installation (in my case Docker version 20.10.13, build a224086 on Windows 10 with WSL2 support):
"host" network_mode is incompatible with port_bindings
This is a guard introduced in newer versions of Docker to prevent port misconfiguration; see the following link for details: https://forums.docker.com/t/docker-errors-invalidargument-host-network-mode-is-incompatible-with-port-bindings/103492
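If host networking is really what you want on a recent Engine, the ports: section has to be removed at the same time, because the container then shares the host's network stack and port bindings are meaningless. A minimal sketch, reusing the image from the example above (note that host networking traditionally only behaves this way on Linux hosts, not on Docker Desktop):

```yaml
services:
  ftpd-server:
    image: stilliard/pure-ftpd:hardened
    network_mode: host      # share the host's network stack
    # no "ports:" section here; port bindings are what triggers the error
    environment:
      PUBLICHOST: "0.0.0.0"
    restart: always
```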
Related
This is probably related to WSL in general, but Redis is my use case.
This works fine and I can connect like:
docker exec -it redis-1 redis-cli -c -p 7001 -a Password123
But I cannot make any connection from my local Windows PC to the container. I get:
Could not connect: Error 10061 connecting to host.docker.internal:7001. No connection could be made because the target machine actively refused it.
This is the same error I get when the container isn't running, so I'm not sure whether it's a Docker issue or a WSL one.
version: '3.9'
services:
  redis-cluster:
    image: redis:latest
    container_name: redis-cluster
    command: redis-cli -a Password123 -p 7001 --cluster create 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006 --cluster-replicas 1 --cluster-yes
    depends_on:
      - redis-1
      - redis-2
      - redis-3
      - redis-4
      - redis-5
      - redis-6
    network_mode: host
  redis-1:
    image: "redis:latest"
    container_name: redis-1
    network_mode: host
    entrypoint: >
      redis-server
      --port 7001
      --appendonly yes
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --masterauth Password123
      --requirepass Password123
      --bind 0.0.0.0
      --protected-mode no
  # Five more services the same as the above
According to the provided docker-compose.yml file, the container ports are not published, so they are unreachable from the outside (your Windows/WSL host). Check here for the official reference, and more about Docker and ports here.
As an example, for the redis-1 service you should add the following to the definition.
...
  redis-1:
    ports:
      - 7001:7001
...
The docker exec ... command works because the port is reachable from inside the container.
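To quickly check from the host side whether a published port is actually reachable, a small probe along these lines can help; this is a sketch using bash's /dev/tcp pseudo-device, and host.docker.internal with port 7001 are simply the values from the question:

```shell
# Report whether a TCP connection to host:port can be opened.
probe() {
  local host=$1 port=$2
  # bash opens /dev/tcp/<host>/<port> as a TCP connection; timeout caps the wait
  if timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

probe host.docker.internal 7001
```

If this prints "closed" while redis-cli works inside the container, the problem is the missing (or unreachable) port publication, not Redis itself.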
I am running a Docker container on CentOS Linux release 8.2.2004. The CentOS server itself has a stable internet connection. Ultimately I am trying to start a container with the following docker-compose.yaml:
version: '3'
services:
  postgres:
    restart: always
    image: postgres:12.1
    environment:
      POSTGRES_PASSWORD: mypass
      POSTGRES_USER: myusername
    networks:
      - myname
    volumes:
      - /home/user/data/myname:/var/lib/postgresql/data
    ports:
      - 5432:5432
  web:
    restart: always
    build: .
    environment:
      JPDA_ADDRESS: 8001
      JPDA_TRANSPORT: dt_socket
    networks:
      - myname
    depends_on:
      - postgres
    ports:
      - 80:8080
      - 8001:8001
    volumes:
      - /home/user/data/images:/data/images
networks:
  myname:
    driver: bridge
But upon docker-compose build the Maven repositories cannot be reached (due to a missing internet connection from within the container). Adding dns rules to the YAML doesn't change anything, and neither does setting network_mode: "host".
When I try to execute
docker run --dns 8.8.8.8 busybox nslookup google.com
;; connection timed out; no servers could be reached
Upon trying a normal ping, the connection fails too:
docker run -it busybox ping -c 1 8.8.8.8
1 packets transmitted, 0 packets received, 100% packet loss
However
docker run --rm -it busybox ping 172.17.0.1
seems to work just fine.
docker run --net=host -it busybox ping -c 1 8.8.8.8
Works too
How can I get Docker to connect to the internet?
Looks like it is a known issue with busybox. Check this thread: nslookup can not get service ip on latest busybox
In short, you must use a busybox version before 1.28.4.
I just ran the following command on a CentOS 7 with Docker 19 and it worked fine:
# docker run --dns 8.8.8.8 busybox:1.28.0 nslookup google.com
Server: 8.8.8.8
Address 1: 8.8.8.8 dns.google
Name: google.com
Address 1: 2a00:1450:4016:807::200e muc11s04-in-x0e.1e100.net
Address 2: 216.58.207.174 muc11s04-in-f14.1e100.net
I had to rewrite the build section in the docker-compose.yaml as follows in order to fix the problem:
build:
  context: .
  network: host
I want to enable SSL via docker-letsencrypt-nginx-proxy-companion.
This is the docker-compose.yml
version: "3.3"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs:ro
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
  db:
    # ---
  wordpress:
    # ---
    environment:
      # ---
      VIRTUAL_HOST: blog.ironsand.net
      LETSENCRYPT_HOST: blog.ironsand.net
      LETSENCRYPT_EMAIL: mymail@example.com
    restart: always
  letsencrypt-nginx-proxy-companion:
    container_name: letsencrypt
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - certs:/etc/nginx/certs
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_PROXY_CONTAINER: nginx-proxy
    restart: always
networks:
  default:
    external:
      name: nginx-proxy
volumes:
  certs:
  vhostd:
  html:
docker logs letsencrypt shows that a certificate exists already.
/etc/nginx/certs/blog.ironsand.net /app
Creating/renewal blog.ironsand.net certificates... (blog.ironsand.net)
2020-04-09 00:03:23,711:INFO:simp_le:1581: Certificates already exist and renewal is not necessary, exiting with status code 1.
/app
But the ACME challenge returns nothing. (A failure?)
$ docker exec letsencrypt bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
$
Port 443 is listening, but it is closed from outside.
// in remote server
$ sudo lsof -i:443
[sudo] password for ubuntu:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 10910 root 4u IPv6 633694 0t0 TCP *:https (LISTEN)
// from local pc
❯ nmap -p 443 blog.ironsand.net
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-09 09:44 JST
Nmap scan report for blog.ironsand.net (153.127.40.107)
Host is up (0.035s latency).
rDNS record for 153.127.40.107: ik1-418-41103.vs.sakura.ne.jp
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.21 seconds
I'm using packet filtering, but ports 80 and 443 are open, and I'm not using a firewall.
How can I investigate more where the problem exists?
I can't solve your problem directly, but I can offer some hints that may help you solve it.
Your command returns nothing.
bash -c 'echo "Hello world!" > /usr/share/nginx/html/.well-known/acme-challenge/hello-world'
This command only writes "Hello world!" to that location and normally returns nothing. See https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Redirections
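For illustration, the same redirection behaviour can be reproduced with any file path; the shell sends echo's output into the file and prints nothing itself (the path below is just an example):

```shell
# Redirection: echo's stdout goes into the file, not to the terminal,
# so the command itself produces no visible output.
echo "Hello world!" > /tmp/hello-world-demo

# Reading the file back confirms the write succeeded.
cat /tmp/hello-world-demo
```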
Look inside the certs folder.
Have a look into the certs folder and maybe clean it up. Check that the folder is mounted correctly in your nginx container. Open a shell in the container and check the SSL folder.
Check that the firewall isn't breaking anything.
No connection is possible from the outside, but what about from the inside? Log in on your Docker host and check the connection from there (openssl and curl are your friends).
Don't use SSL inside the container.
I often see problems when somebody tries to use SSL with ACME images and wild mounting of shared volumes, but I have never heard of problems when the same people use a normal reverse proxy instead. I describe a good setup below.
So just remove the whole Let's Encrypt code from your container and close the container's port 443.
(Additionally, you can switch to a non-root image and expose only ports that don't need root privileges.)
Then install nginx on your host and set up a reverse proxy (something like proxy_pass http://127.0.0.1:8080). Now install certbot and run it; it is straightforward.
Certbot can also maintain your certificates.
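A minimal sketch of such a host-side reverse proxy, assuming the container publishes plain HTTP on 127.0.0.1:8080 (the file path and upstream port are illustrative; certbot's nginx plugin would later rewrite this server block for HTTPS):

```nginx
# /etc/nginx/sites-available/blog.ironsand.net (hypothetical path)
server {
    listen 80;
    server_name blog.ironsand.net;

    location / {
        # Forward to the container's published HTTP port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

After enabling the site, running certbot with its nginx plugin obtains the certificate and adds the TLS configuration for you.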
I have a docker-compose file like this:
version: '3.5'
services:
  RedisServerA:
    container_name: RedisServerA
    image: redis:3.2.11
    command: "redis-server --port 26379"
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:26379
    expose:
      - 26379
  RedisServerB:
    container_name: RedisServerB
    image: redis:3.2.11
    command: "redis-server --port 6379"
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
    expose:
      - 6379
Now I do a vagrant ssh and do
ping RedisServerA
ping RedisServerB
They both work.
Now I try to connect to the redis server:
redis-cli -h RedisServerB
Works fine
Then I try to connect to the other
redis-cli -h RedisServerA -p 26739
It says:
Could not connect to Redis at RedisServerA:26739: Connection refused
Could not connect to Redis at RedisServerA:26739: Connection refused
Twice.
What am I missing here?
Typically in this setup you'd let each container run on its "natural" port. For connections from outside Docker you need the ports: mapping, and you'd access a container via its published port on the host's IP address. For connections between Docker containers (assuming they're on the same network, and if you used bare docker run, you manually created that network), you use the container name and the container's internal port number.
We can clean up the docker-compose.yml file by removing some unnecessary lines (container_name: and expose: don't really have a practical effect here) and letting the image run its default command: on the default port, remapping only with ports:. We'd get:
version: '3.5'
services:
  RedisServerA:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerA:/data
    ports:
      - 26379:6379
  RedisServerB:
    image: redis:3.2.11
    volumes:
      - ../docker/redis/RedisServerB:/data
    ports:
      - 6379:6379
Between containers, you'd use the default port:
redis-cli -h RedisServerA
redis-cli -h RedisServerB
From outside Docker you'd use the server's host name and the published ports:
redis-cli -h server.example.com -p 26379
redis-cli -h server.example.com
I am using docker-compose for deploying a server/client application on different devices on my local network. My setting is the following:
In my docker-compose.yml file, I have a service called 'server', which depends on two additional services ('database' and 'web'). These three services are running on the same device and are able to connect successfully with each other. The 'server' service deploys a Flask-based API which should ideally be waiting for requests from other devices in the same LAN.
In the very same docker-compose.yml file, I have a service called 'client', which runs an application that should be deployed on more than one device in the same LAN. The 'client' service, independently from the device where it is running, should be able to send requests to the 'server' service, which is on a different device in the same LAN.
Here is my docker-compose.yml file:
version: '3.5'

networks:
  outside:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.220.0/24

services:
  client:
    build: ./client
    environment:
      TZ: "Europe/Madrid"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             nmap -p 8080 192.168.220.220 &&
             python -u client/main_controller.py"
    restart: always
    volumes:
      - .:/code
    networks:
      outside:
  server:
    build: ./server
    environment:
      TZ: "Europe/Madrid"
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$TZ /etc/localtime &&
             echo $TZ > /etc/timezone &&
             python -u server/main_server.py"
    volumes:
      - .:/code
    ports:
      - "8080:8080" # host:container
    restart: always
    depends_on:
      - database
      - web
    networks:
      default:
      outside:
        ipv4_address: 192.168.220.220
  database:
    image: mysql:latest
    #command: ./database/run_db.sh #mysqld --user=root --verbose
    restart: always
    volumes:
      - ./database:/docker-entrypoint-initdb.d/:ro
    ports:
      - "3306:3306" # host:container
    environment:
      MYSQL_ROOT_PASSWORD: root
    networks:
      default:
  web:
    image: nginx:latest
    restart: always
    ports:
      - "8081:80"
    volumes:
      - ./interface:/www
      - ./interface/nginx.conf:/etc/nginx/conf.d/default.conf
    networks:
      default:
I am using the python requests library to send requests from 'client' to 'server' using the following url:
http://192.168.220.220:8080
My issue is that when I run both containers, 'client' and 'server', on the same device [deviceA], they communicate successfully.
But when I run the containers on different devices ('server' on a Mac OS X computer [deviceA], and 'client' on a Raspberry Pi [deviceB], both connected to the same LAN over wi-fi), the 'client' cannot reach the specified IP and port.
To test whether the device can reach the IP:port combination, I run the following command right after starting the 'client' service:
nmap -p 8080 192.168.220.220
Which gives the following output on [deviceA]:
client_1 | Starting Nmap 7.01 ( https://nmap.org ) at 2019-03-03 12:22 Europe
client_1 | Nmap scan report for raspberry_escape_controller_server_1.raspberry_escape_controller_outside (192.168.220.220)
client_1 | Host is up (0.00012s latency).
client_1 | PORT STATE SERVICE
client_1 | 8080/tcp open http-proxy
client_1 | MAC Address: <mac_address> (Unknown)
client_1 |
client_1 | Nmap done: 1 IP address (1 host up) scanned in 0.71 seconds
and the following one on [deviceB]:
client_1 | Starting Nmap 7.40 ( https://nmap.org ) at 2019-03-03 13:24 CET
client_1 | Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
client_1 | Nmap done: 1 IP address (0 hosts up) scanned in 0.78 seconds
----------- [EDIT 1] -----------
As suggested by DTG, here is the output of the netstat command on [deviceB]:
root@a9923f852423:/code# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags MSS Window irtt Iface
0.0.0.0         192.168.220.1   0.0.0.0         UG    0   0      0    eth0
192.168.220.0   0.0.0.0         255.255.255.0   U     0   0      0    eth0
It looks like it cannot see [deviceA], which should be at 192.168.220.220.
It looks to me that even though the service is up and running on [deviceA], some kind of firewall is not allowing external connections to it.
You may need to check the firewall configuration on [deviceA].
ROUTING ISSUES
If it is a routing issue, you should inspect the routing table on hostB with
netstat -nr
and check that a valid route to hostA exists.
If no valid route exists, you should add one with
sudo route add -net hostA_IP/MASK gw HOSTB_DEFAULT_GATEWAY
INTER-DOCKER COMMUNICATION
After you create the network, you can launch containers on it using the docker run --network= option. The containers you launch into this network must reside on the same Docker host. Each container in the network can immediately communicate with the other containers in the network.
Read more about understanding Docker communications:
See the Docker documentation here
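In docker-compose terms, the docker run --network= recipe above corresponds to putting both services on the same user-defined network and addressing each other by service name; a sketch with illustrative names:

```yaml
services:
  server:
    build: ./server
    networks:
      - appnet
  client:
    build: ./client
    networks:
      - appnet
    # inside this container, "http://server:8080" resolves to the server service

networks:
  appnet:
    driver: bridge
```

Remember that this name-based resolution only works between containers on the same Docker host; traffic between physical devices still has to go through published ports on the host's LAN address.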