Keepalived/Haproxy docker-compose connection refused - docker

Docker Version: Version 17.03.0-ce-mac2 (15654)
OS: macOS Sierra
I am trying to set up an HA environment using docker-compose. A quick overview of what the topology might look like: I will have at least two instances of keepalived and haproxy running, with haproxy in front of multiple servers. However, in this post I refer to only one instance each of keepalived, haproxy, and the server, for simplicity.
The problem that I have right now is that I am unable to direct traffic to the virtual IP address that I assign to keepalived. For testing purposes in my docker compose file I have a client that tries to communicate using the VIP, and it results in a connection refused error.
dial tcp 192.168.99.120:80: getsockopt: connection refused
However, if I reach out directly to haproxy there is no connection issue. Furthermore, I can communicate directly with haproxy from the host, but not with keepalived.
I feel like this has something to do with how networks work in docker but I am pretty new to using docker and have not been able to track down the issue. Any help would be much appreciated.
My configuration files are all included below.
docker-compose.yml:
version: '2'
services:
  keepalived1:
    image: neoassist/docker-keepalived:latest
    container_name: keepalived1
    volumes:
      - "./keepalived.conf:/etc/keepalived/keepalived.conf"
    environment:
      - VIRTUAL_IP=192.168.99.120
      - VIRTUAL_MASK=24
      - VRID=1
      - CHECK_IP=any
      - CHECK_PORT=80
      - INTERFACE=eth0
    entrypoint: sh -c 'sleep 4;/usr/bin/keepalived.sh'
    network_mode: "host"
    cap_drop:
      - NET_ADMIN
    privileged: true
  haproxy1:
    image: haproxy:latest
    container_name: haproxy1
    ports:
      - 7054:7054
    volumes:
      - "./haproxy1.cfg:/usr/local/etc/haproxy/haproxy.cfg"
    environment:
      - EXPOSE=7054
    links:
      - fabric-ca-server1:fabric-ca-server1
  fabric-ca-server1:
    image: hyperledger/fabric-ca
    container_name: fabric-ca-server1
    ports:
      - 7051:7054
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
    volumes:
      - "./fabric-ca-server:/etc/hyperledger/fabric-ca-server"
    command: sh -c "fabric-ca-server start -d -b admin:adminpw"
  admin-client:
    image: hyperledger/fabric-ca
    container_name: admin-client
    network_mode: "host"
    command: sh -c "sleep 14;fabric-ca-client enroll -d -u http://admin:adminpw@192.168.99.120"
haproxy.cfg
global
    maxconn 4096

defaults
    mode http
    maxconn 2000
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend server
    bind *:7054
    mode tcp
    default_backend server_cluster

backend server_cluster
    balance source
    mode tcp
    option tcpka
    server server1 fabric-ca-server1:7054
keepalived.conf
vrrp_script haproxy {
    script "pidof haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy_1 {
    virtual_router_id 1
    advert_int 1
    interface eth0
    nopreempt
    state BACKUP
    virtual_ipaddress {
        192.168.99.120/24 dev eth0
    }
    track_script {
        haproxy
    }
}
ifconfig from my mac has:
vboxnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 0a:00:27:00:00:00
        inet 192.168.99.1 netmask 0xffffff00 broadcast 192.168.99.255

I don't think this will ever work with Docker for Mac because it actually uses a VM under the covers to run your Docker containers. This should work on a system with native Docker support if you try using host networking rather than bridge networking.
My suggestion would be to look at either Docker swarm mode (not the standalone Docker Swarm) or Kubernetes, both of which provide mechanisms to scale services and load-balance across them via a single address.
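For illustration, here is a minimal, hypothetical sketch of the swarm-mode approach: the ingress routing mesh publishes one port on every node and load-balances across the replicas, which replaces the keepalived/haproxy layer entirely. The service name and replica count are illustrative, not the poster's exact setup.

# docker-stack.yml -- illustrative sketch
version: '3'
services:
  fabric-ca-server:
    image: hyperledger/fabric-ca
    command: sh -c "fabric-ca-server start -b admin:adminpw"
    deploy:
      replicas: 2        # swarm load-balances across these replicas
    ports:
      - "7054:7054"      # published on every node via the routing mesh

Deploy it with docker swarm init followed by docker stack deploy -c docker-stack.yml ca, and clients can then reach port 7054 on any node's IP.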

Related

Can't ping service inside Docker container from the host machine

I'm running a container via docker-compose on Ubuntu 20.04, and I can't ping or curl the web server that's running inside from the host machine that's running docker.
I've given the container a static IP, and if I open a shell in the container I can see the service running fine and curl it as expected.
My docker-compose.yml looks like this:
version: "2.1"
services:
container:
image: imagename
container_name: container
networks:
net:
ipv4_address: 172.20.0.5
environment:
- PUID=1000
- PGID=1000
- TZ=Europe/London
ports:
- 9000:9000
restart: unless-stopped
networks:
net:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
But if I curl -v 172.20.0.5:9000 from the same machine, I get
* Trying 172.20.0.5:9000...
* TCP_NODELAY set
* connect to 172.20.0.5 port 9000 failed: No route to host
* Failed to connect to 172.20.0.5 port 9000: No route to host
* Closing connection 0
curl: (7) Failed to connect to 172.20.0.5 port 9000: No route to host
My best guess is that it's something to do with iptables or firewall rules? I've not changed those at all from the default Docker setup. With host network mode it does work, but that exposes port 9000 publicly. I want it accessible only locally, and then to set it up behind a reverse proxy. Thanks.
The static IP you gave is within the network Docker created. Your host is correctly telling you that it has no route to that subnet. However, you are binding the container's port 9000 to your host's port 9000, so you should be able to ping/curl localhost:9000. If that doesn't work, your web server may need to listen on 0.0.0.0.
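A quick way to check both halves of that, as a sketch (the netstat call assumes the tool is present in the image):

# from the host: use the published port, not the container's bridge IP
curl -v http://localhost:9000

# inside the container: the server should be listening on 0.0.0.0:9000,
# not 127.0.0.1:9000 (a loopback-only bind is unreachable via the mapping)
docker exec container netstat -tln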

OpenVPN server on VPS no internet

I'm trying to get a simple OpenVPN server set up on a cheap Vultr vps through docker-compose.
I was able to generate certificates and such just fine, and can even connect to the server.
But when I try to connect to it on my Mac through Tunnelblick, I have no internet. My IPv6 traffic works, but it seems to just be using my home internet, not the VPN tunnel.
Whenever I try to reach anything over IPv4, it times out. Even ping 8.8.8.8 gives me a timeout error.
docker-compose:
version: '3.5'
services:
  openvpn:
    container_name: openvpn
    image: kylemanna/openvpn
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    network_mode: host
    ports:
      - "943:943"
      - "1194:1194/udp"
    privileged: true
    hostname: example.com
    volumes:
      - /lib/modules:/lib/modules:ro
      - /etc/openvpn:/etc/openvpn

volumes:
  openvpn-config:
    name: openvpn-config
It may be related to DNS nameserver settings not being pushed to clients. You can try manually assigning a nameserver (e.g. 8.8.8.8) in Tunnelblick.
As for IPv6 traffic not being encapsulated, I'd check whether the Docker engine is configured to handle such traffic. It looks like Kylemanna's image needs additional configuration (e.g. adding --ipv6 when starting the Docker daemon), as explained in its IPv6 Support documentation.
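If it is the DNS push, the server-side fix would look something like the following sketch. These are standard OpenVPN directives, though the config path inside this image is an assumption and may differ:

# in the OpenVPN server config (e.g. /etc/openvpn/openvpn.conf)
push "dhcp-option DNS 8.8.8.8"            # hand clients a working resolver
push "redirect-gateway def1 bypass-dhcp"  # route all IPv4 through the tunnel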

Docker Compose: Expose not working

docker ps -a
CONTAINER ID   IMAGE            COMMAND                  CREATED        STATUS       PORTS                NAMES
83b1503d2e7c   app_nginx        "nginx -g 'daemon ..."   2 hours ago    Up 2 hours   0.0.0.0:80->80/tcp   app_nginx_1
c9dd2231e554   app_web          "/home/start.sh"         2 hours ago    Up 2 hours   8000/tcp             app_web_1
baad0fb1fabf   app_gremlin      "/start.sh"              2 hours ago    Up 2 hours   8182/tcp             app_gremlin_1
b663a5f026bc   postgres:9.5.1   "docker-entrypoint..."   25 hours ago   Up 2 hours   5432/tcp             app_db_1
They all work fine:
app_nginx connects well with app_web
app_web connects well with postgres
Not working:
app_web is not able to connect with app_gremlin
docker-compose.yaml
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh
Errors:
Basically, I am not able to connect to the gremlin container from my app_web container.
All of the commands below were executed inside the app_web container.
curl:
root@49a8f08a7b82:/# curl 0.0.0.0:8182
curl: (7) Failed to connect to 0.0.0.0 port 8182: Connection refused
netstat:
root@49a8f08a7b82:/# netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.11:42681        0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN
udp        0      0 127.0.0.11:54232        0.0.0.0:*
Active UNIX domain sockets (only servers)
Proto RefCnt Flags       Type       State         I-Node   Path
nmap:
root@49a8f08a7b82:/# nmap -p 8182 0.0.0.0
Starting Nmap 7.60 ( https://nmap.org ) at 2018-06-22 09:28 UTC
Nmap scan report for 0.0.0.0
Host is up.
PORT     STATE    SERVICE
8182/tcp filtered vmware-fdm
Nmap done: 1 IP address (1 host up) scanned in 2.19 seconds
nslookup:
root@88626de0c056:/# nslookup app_gremlin_1
Server:  127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: app_gremlin_1
Address: 172.19.0.3
Experimenting:
For the gremlin container I added:
ports:
  - "8182:8182"
Then from the host I can connect to the gremlin container, BUT there is still no connection between the web and gremlin containers.
I am working on creating a minimal reproducible sample (just enough to recreate the issue); meanwhile, does anyone have any idea what the issue might be?
curl 0.0.0.0:8182
The 0.0.0.0 address is a wildcard that tells an app to listen on all network interfaces; you do not connect to it as a client. For container-to-container communication, you need:
containers on the same user-created network (compose does this for you by default)
to connect to the name of the service (or container name)
to connect to the port inside the other container, not the published port
In your case, the command should be:
curl http://gremlin:8182
Networking is namespaced for apps running inside containers, so each container gets its own loopback interface and an IP address on a bridge network. Moving an app into containers therefore means you need to listen on 0.0.0.0 and connect to the bridge IP using DNS.
You should also remove links and depends_on from your compose file; they don't apply in version 3. Links have long since been deprecated in favor of shared networks, and depends_on doesn't work in swarm mode, along with probably not doing what you wanted anyway, since it never checked for the target app to be running, only for the start of that container to have been kicked off.
One last note, expose doesn't affect the ability to communicate between containers on common networks or publish ports on the host. Expose simply sets meta data on the image that is documentation between the person creating the image and the person running the image. Applications are not required to use that value, but it's a good habit to make your app default to that value for the benefit of downstream users. Because of its role, unless you have another app checking for the exposed port list, like a self updating reverse proxy, there's no need to expose the port in the compose file unless you're giving the compose file to another person and they need the documentation.
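Putting that together, a trimmed-down sketch of the compose file following this advice (no links, no expose) could look like this; both services land on the project's default network, so the service names resolve via the built-in DNS:

version: '3'
services:
  web:
    build: .
    command: /home/start.sh
  gremlin:
    build: ./gremlin
    command: /start.sh

From inside web, curl http://gremlin:8182 then reaches the gremlin service directly.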
There is no link configured in the docker-compose.yaml between web and gremlin. Try the following:
version: '3'
services:
  db:
    image: postgres:9.5.12
  web:
    links:
      - gremlin
    build: .
    expose:
      - "8000"
    depends_on:
      - gremlin
    command: /home/start.sh
  nginx:
    build: ./nginx
    links:
      - web
    ports:
      - "80:80"
    command: nginx -g 'daemon off;'
  gremlin:
    build: ./gremlin
    expose:
      - "8182"
    command: /start.sh

Docker Container's network interface in promiscuous mode

I compose a 3-service architecture with a virtual bridged network to which the three services are attached. I want one of the containers to be able to listen to all the traffic within the virtual network (promiscuous mode). Is that possible? I've tried almost everything, but nothing seems to work.
What I've tried:
Giving full privileges to the container
Setting the container's eth0 interface to promiscuous mode (ifconfig eth0 promisc)
Restarting the network manager inside the container
Setting the veth corresponding to the container to promiscuous mode from the host machine
Modifying the mode from "bridge" to "passthru" in the macvlan configuration from the pipework script
Setting the container as the gateway in the network properties of the docker-compose file
Many of the above attempts result in the container's eth0 interface "thinking" it is in promiscuous mode; in fact, both ifconfig and syslog (from the host) say it is, but the container still sees only its own traffic.
I'm using Docker 1.11 and the base image inside the container is Ubuntu 14.04:latest
Below is listed my docker-compose file
Thanks in advance
docker-compose.yml
version: '2'
networks:
  snort_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/24
          gateway: 172.19.0.3
services:
  mysql:
    build:
      context: .
      dockerfile: MySql/MySqlFile
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    networks:
      snort_net:
        ipv4_address: 172.19.0.2
  snort:
    build:
      context: .
      dockerfile: Snort/SnortFile
    depends_on:
      - snorby
    env_file:
      - Snort/snort_variables.env
    networks:
      snort_net:
        ipv4_address: 172.19.0.3
    expose:
      - "80"
      - "21"
    ports:
      - "10100:80"
      - "10101:80/udp"
      - "21:21"
    cap_add:
      - NET_ADMIN
    privileged: true
  snorby:
    build:
      context: .
      dockerfile: Snorby/SnorbyFile
    depends_on:
      - mysql
    env_file:
      - Snorby/snorby_variables.env
    networks:
      snort_net:
        ipv4_address: 172.19.0.4
    ports:
      - "3000:3000"
I was able to get it working with the command below when creating the container, since I decided to just listen for all traffic:
administrator@gitlabrunner-prod01:~$ docker run --rm --privileged -t -d -p 23:22 --name ubuntu ubuntu
A container is effectively attached to a virtual switch; it's never going to see anything other than (a) unicast traffic to the container or (b) broadcast/multicast traffic on the docker network. If you have it set up as a network gateway, it would also see any traffic being sent from other containers to destinations outside the network (but would still not see communication between other containers on the same network).
If you were using Linux bridges rather than macvlan, you should be able to attach tcpdump to the docker bridge and get what you want (either by running it on the host, or by running it inside a container with --net=host).
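As a concrete sketch of that capture, run on the host (the br- interface name is per-network and hypothetical here; check ip link or docker network ls to find yours):

# the default bridge is docker0; user-defined bridge networks appear as br-<id>
ip link show | grep br-
sudo tcpdump -ni br-1a2b3c4d port 3306    # interface name is illustrative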

Docker for Mac Host Networking

I'm using Docker for Mac. I have two containers.
1st: A PHP application that attempts to connect to MySQL at localhost:3306.
2nd: MySQL
When running with links, they are able to reach each other.
However, I would like to avoid changing any of the code in the PHP application (e.g. changing localhost to "mysql") and stay with using localhost.
Host networking seems to do the trick; the problem is, when I enable host networking I can't access the PHP application on port 80 from my host Mac.
If I docker exec -it into the PHP application container and curl localhost, I see the HTML, so it looks like the port is just not being forwarded to the host machine?
Here is an example docker-compose setup: it runs MySQL in one container and phpMyAdmin in another, the containers are linked together, and you can access them from your host machine on ports 3316 and 8889.
my_mysql:
  image: mysql/mysql-server:latest
  container_name: my_mysql
  environment:
    - MYSQL_ROOT_PASSWORD=1234
    - MYSQL_DATABASE=test
    - MYSQL_USER=test
    - MYSQL_PASSWORD=test
  ports:
    - 3316:3306
  restart: always

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  container_name: my_myadmin
  links:
    - my_mysql:my_mysql
  environment:
    - PMA_ARBITRARY=0
    - PMA_HOST=my_mysql
  ports:
    - 8889:80
  restart: always
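Assuming the stack is up (docker-compose up -d) and a mysql client is installed on the Mac, both published ports are then reachable from the host:

curl http://localhost:8889                        # phpMyAdmin UI
mysql -h 127.0.0.1 -P 3316 -u test -ptest test    # MySQL via the published port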
