Unable to loop back to host IP using docker-compose.yml file on Windows Server 2019 - docker

I am using Windows Server 2019 with Containers and Hyper-V features enabled. Also, I made sure that Windows support for Linux containers is installed on the machine.
I need to use a docker-compose.yml file to bring up the Docker containers (web APIs), but I want the port exposed from the container to be accessible only on the host machine.
Below is the sample docker-compose.yml that I am using, with the port bound to the 127.0.0.1 loopback address.
webapi:
  image: webapi
  build:
    context: .
    dockerfile: webapi/Dockerfile
  container_name: webapi
  restart: always
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443
  ports:
    # Allow access to web APIs only on this local machine and on secure port.
    - "127.0.0.1:5443:443"
This solution works fine on a Windows 10 machine with Docker Desktop installed, but doesn't work on Windows Server 2019 with Docker EE installed. I get the below error, where webapi is a Linux image:
ERROR: for webapi Cannot start service webapi: failed to create endpoint webapi on network containers_default: Windows does not support host IP addresses in NAT settings
ERROR: Encountered errors while bringing up the project.
My Windows Server 2019 docker configuration looks like this:
PS C:\Users\xyz> docker version
Client: Mirantis Container Runtime
 Version: 20.10.5
 API version: 1.41
 Go version: go1.13.15
 Git commit: 105e9a6
 Built: 05/17/2021 16:36:02
 OS/Arch: windows/amd64
 Context: default
 Experimental: true

Server: Mirantis Container Runtime
 Engine:
  Version: 20.10.5
  API version: 1.41 (minimum version 1.24)
  Go version: go1.13.15
  Git commit: 1a7d997053
  Built: 05/17/2021 16:34:40
  OS/Arch: windows/amd64
  Experimental: true
PS C:\Users\xyz> docker info
Client:
 Context: default
 Debug Mode: false
 Plugins:
  app: Docker Application (Docker Inc., v0.8.0)
  cluster: Manage Mirantis Container Cloud clusters (Mirantis Inc., v1.9.0)
  registry: Manage Docker registries (Docker Inc., 0.1.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 5
 Server Version: 20.10.5
 Storage Driver: windowsfilter (windows) lcow (linux)
  Windows:
  LCOW:
 Logging Driver: json-file
 Plugins:
  Volume: local
  Network: ics internal l2bridge l2tunnel nat null overlay private transparent
  Log: awslogs etwlogs fluentd gcplogs gelf json-file local logentries splunk syslog
 Swarm: inactive
 Default Isolation: process
 Kernel Version: 10.0 17763 (17763.1.amd64fre.rs5_release.180914-1434)
 Operating System: Windows Server 2019 Datacenter Version 1809 (OS Build 17763.1999)
 OSType: windows
 Architecture: x86_64
 CPUs: 4
 Total Memory: 16GiB
Any help would be greatly appreciated.

After spending a fair amount of time on this, I didn't find a way to fix the "Windows does not support host IP addresses in NAT settings" error, nor did switching to any other network driver (bridge, host, etc.) help. However, I found a workaround that exposes the port only to the local machine: configuring the Kestrel web server of the web app (container) via the "AllowedHosts" parameter in appsettings.json. I set the parameter value like below:
{
  // Allow the web APIs to be accessible only on the host machine as a security measure.
  // Allow access only if the host in the URL matches the values mentioned in the list.
  "AllowedHosts": "localhost;127.0.0.1",
  "Serilog": {
    "Using": [],
    "MinimumLevel": {
      "Default": "Debug",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    },
    ...
}
All this does is check whether the host in the request URL is either localhost or 127.0.0.1, which approximates a loopback-only restriction.
You can also override this parameter by passing "AllowedHosts" as an environment variable in the yml file, like below:
webapi:
  image: webapi
  build:
    context: .
    dockerfile: webapi/Dockerfile
  container_name: webapi
  restart: always
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443
    - AllowedHosts=*
  ports:
    # Here the port is published on all interfaces and host filtering
    # is disabled via AllowedHosts=* above.
    - "5443:443"
Please note that AllowedHosts is not a way of safe-listing the IPs to accept connections from; it works on the basis of the target host mentioned in the URL (the Host header), not on where the connection comes from.
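A quick way to verify this behavior (a sketch; it assumes the container is published on 127.0.0.1:5443 as in the first compose file above, and uses a dev certificate, hence -k):

# Host matches the allow list ("localhost;127.0.0.1"): request is served.
curl -k https://127.0.0.1:5443/

# Non-matching Host header: ASP.NET Core's host filtering middleware
# rejects the request with 400 Bad Request.
curl -k https://127.0.0.1:5443/ -H "Host: some-other-host.example"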

Related

Attaching a second network to a Docker NGINX container causes it to stop responding to any of them

I've been trying to set up what might be a rather complicated Docker setup, and have run into a very weird issue. What I currently have is a collection of containers, all running different web services, and an Nginx container that routes them to be publicly accessible over HTTPS. This has worked fine, but meant I could only set up services that use HTTPS, and it ran over one of the 5 static IPs my ISP has given me, routed through my UniFi network.

When I went to add GitLab, I realized I needed to connect it to a separate public address, so that I could access port 22 for SSH-based Git clones. Since I already had the switch port connected to my modem on a VLAN (topology weirdness, it works fine), I simply tagged the server port to allow that VLAN through, and started using a macvlan network.

As soon as I added the macvlan to my Nginx container, it stopped working altogether. After spending several hours making sure my static IPs were actually set up correctly, I found out that if I attach more than one network to my Nginx container, it stops responding to anything at all. If I stick just the macvlan on it, it responds just fine, even over my static IP. But if there is more than one network, everything stops working: pings, TCP requests, everything. If I use docker network disconnect to remove a network from the running instance, it starts working again immediately.

I've tried this with just netcat on an Alpine instance, and can confirm that all inbound traffic stops immediately when a second network is attached, and resumes as soon as it's removed. I'm including a sample docker-compose file that shows this effect just by adding or removing the networks.
docker version:
Client: Docker Engine - Community
 Version: 20.10.13
 API version: 1.41
 Go version: go1.16.15
 Git commit: a224086
 Built: Thu Mar 10 14:07:51 2022
 OS/Arch: linux/amd64
 Context: default
 Experimental: true

Server: Docker Engine - Community
 Engine:
  Version: 20.10.13
  API version: 1.41 (minimum version 1.12)
  Go version: go1.16.15
  Git commit: 906f57f
  Built: Thu Mar 10 14:05:44 2022
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.5.10
  GitCommit: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc:
  Version: 1.0.3
  GitCommit: v1.0.3-0-gf46b6ba
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0
docker info:
Client:
 Context: default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.0-docker)
  compose: Docker Compose (Docker Inc., v2.2.3)
  scan: Docker Scan (Docker Inc., v0.12.0)

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 9
 Server Version: 20.10.13
 Storage Driver: zfs
  Zpool: Storage
  Zpool Health: ONLINE
  Parent Dataset: Storage/docker
  Space Used By Parent: 87704957952
  Space Available: 8778335683049
  Parent Quota: no
  Compression: off
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux nvidia runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc version: v1.0.3-0-gf46b6ba
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-104-generic
 Operating System: Ubuntu 20.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 39.18GiB
 Name: server2
 ID: <Redacted>
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
EDIT: forgot to add the docker compose file. Here it is:
services:
  nginx:
    image: nginx:1.21.6-alpine
    networks:
      public_interface:
        ipv4_address: 123.456.789.102 # Replaced with nonsense for privacy reasons
      private_interface:
        ipv4_address: 192.168.5.2
      web_interface:

networks:
  web_interface:
  public_interface:
    driver: macvlan
    driver_opts:
      parent: enp10s0.100
    ipam:
      config:
        - subnet: 123.456.789.101/29 # Replaced with nonsense for privacy reasons
          gateway: 123.456.789.108 # Replaced with nonsense for privacy reasons
  private_interface:
    driver: macvlan
    driver_opts:
      parent: enp10s0.305
    ipam:
      config:
        - subnet: 192.168.5.0/24
          gateway: 192.168.5.1
OK, time to answer this so I don't become the next #979. Turns out I was right about the routing: my issue actually lay not in Docker, but in how the network router in the kernel works. I confirmed this by running an application without Docker (just a simple Python HTTP server) and testing, finding the exact same issue.
The solution, it turns out, is to use a combination of routing tables, iptables, and packet marks. The first part depends on your network backend. I'm using Netplan, 'cause Ubuntu, which means I have to tell Netplan to set up the routing tables:
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      dhcp6: false
      gateway4: 192.168.1.1
    eth1:
      dhcp4: false
      dhcp6: false
      addresses:
        - 123.456.789.20/24 # Server address + subnet
      routes:
        - to: 0.0.0.0/0
          via: 123.456.789.1 # Gateway address
          metric: 500
          table: 100
      routing-policy:
        - from: 123.456.789.20 # Server address
          table: 100
If you're not using Docker, this patches everything nicely, and things "just work". If you are, you'll also need to add a packet mark and tell iptables to keep said mark when handing the packet to the Docker container. First, add a policy rule so that packets carrying the mark use the new routing table:
ip rule add fwmark 0x1 table 100
Followed by telling iptables to set the mark on new inbound connections on eth1 and restore it for the rest of each connection:
iptables -t mangle -A PREROUTING -i eth1 -m conntrack --ctstate NEW --ctdir ORIGINAL -j CONNMARK --set-mark 0x1
iptables -t mangle -A PREROUTING -m conntrack ! --ctstate NEW --ctdir REPLY -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
iptables -t mangle -A OUTPUT -m conntrack ! --ctstate NEW --ctdir REPLY -m connmark ! --mark 0x0 -j CONNMARK --restore-mark
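To check that everything took effect (a sketch; table 100, mark 0x1, and eth1 follow the examples above):

# The policy-routing rule should list the mark:
ip rule show
# expect something like: 32765: from all fwmark 0x1 lookup 100

# The dedicated table should carry the default route via the second gateway:
ip route show table 100

# The mangle rules should be present, and their packet counters
# should increase as marked traffic flows:
iptables -t mangle -L -v -n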
Hopefully that helps future docker users. It was certainly an experience.
I also wrote all of this up on my blog, along with a bit more detail of where things started, why I was in this pickle, and how I figured it out: https://wiki.faeranne.com/en/blogs/nexus-labs/docker-netplan-woes

RabbitMQ Docker Container Error: Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces

I am having a problem with running rabbitmq from within Docker on Windows Server 1709 (Windows Server core edition).
I am using docker-compose to create the rabbitmq service. If I run the docker-compose file on my local computer, everything works fine. When I run it on the Windows server (where Docker has been set up with LCOW support on Windows), I get the above-mentioned error occurring multiple times in the logs. Namely, this error is:
Error when reading /var/lib/rabbitmq/.erlang.cookie: eacces
It is worth noting that I receive this error even if I just do a manual pull of rabbitmq and a manual run with docker run -itd --rm --name rabbitmq rabbitmq:3-management
I am able to bash into the container for a short while before it crashes and exits and I see the following:
root@localhost:~# ls -la
---------- 2 root root 20 Jan 5 12:18 .erlang.cookie
On my localhost, the permissions look like this (which is correct):
root@localhost:~# ls -la
-r-------- 1 rabbitmq rabbitmq 20 Dec 28 00:00 .erlang.cookie
I can't understand why the permission structure is broken on the server.
Is it possible that this is an issue with LCOW support on Windows Server 1709 with Docker for Windows? Or is the problem with rabbitmq?
For reference, here is the docker compose file used:
version: "3.3"
services:
rabbitmq:
image: rabbitmq:3-management
container_name: rabbitmq
hostname: localhost
ports:
- "1001:5672"
- "1002:15672"
environment:
- "RABBITMQ_DEFAULT_USER=user"
- "RABBITMQ_DEFAULT_PASS=password"
volumes:
- d:/docker_data/rabbitmq:/var/lib/rabbitmq/mnesia
restart: always
For reference, here is the Docker information for the environment where the error is happening.
docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.10.0-ee-preview-3
Storage Driver: windowsfilter (windows) lcow (linux)
 LCOW:
Logging Driver: json-file
Plugins:
 Volume: local
 Network: ics l2bridge l2tunnel nat null overlay transparent
 Log: awslogs etwlogs fluentd json-file logentries splunk syslog
Swarm: inactive
Default Isolation: process
Kernel Version: 10.0 16299 (16299.15.amd64fre.rs3_release.170928-1534)
Operating System: Windows Server Datacenter
OSType: windows
Architecture: x86_64
CPUs: 4
Total Memory: 7.905GiB
Name: ServerName
Docker Root Dir: D:\docker-root
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
docker version
Client:
 Version: 17.10.0-ee-preview-3
 API version: 1.33
 Go version: go1.8.4
 Git commit: 1649af8
 Built: Fri Oct 6 17:52:28 2017
 OS/Arch: windows/amd64

Server:
 Version: 17.10.0-ee-preview-3
 API version: 1.34 (minimum version 1.24)
 Go version: go1.8.4
 Git commit: b8571fd
 Built: Fri Oct 6 18:01:48 2017
 OS/Arch: windows/amd64
 Experimental: true
I struggled with the same problem when running RabbitMQ inside an AWS ECS container.
Disclaimer: I didn't check this behavior in detail and this is only my assumption, so the stated cause may be wrong, but at least the solution works.
It feels like RabbitMQ creates the .erlang.cookie file on container start if it doesn't exist. And if the in-container user is root:
...
rabbitmq:
  image: rabbitmq:3-management
  # set container user to root
  # (quoted, since unquoted 0:0 can be misparsed by YAML)
  user: "0:0"
...
then .erlang.cookie will be created with root permissions. But RabbitMQ starts its child processes with rabbitmq user permissions, and .erlang.cookie is not writable (editable) by them in this case.
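A quick way to see the broken ownership from outside (a sketch; rabbitmq is the container name from the compose file above):

# Show owner/permissions of the cookie inside the running container:
docker exec rabbitmq ls -l /var/lib/rabbitmq/.erlang.cookie
# broken:  ----------  1 root     root     20 ... .erlang.cookie
# healthy: -r--------  1 rabbitmq rabbitmq 20 ... .erlang.cookie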
To avoid this problem, I created a custom image with the .erlang.cookie file already in place, using this Dockerfile:
ARG COOKIE_VALUE=SomeDefaultRandomString01
FROM rabbitmq:3.11-alpine
ARG COOKIE_VALUE=$COOKIE_VALUE
RUN printf 'log.console = true\nlog.console.level = warning\nlog.default.level = warning\nlog.connection.level = warning\nlog.channel.level = warning\nlog.file.level = warning\n' > /etc/rabbitmq/conf.d/10-logs_to_stdout.conf && \
    printf 'loopback_users.guest = false\n' > /etc/rabbitmq/conf.d/20-allow_remote_guest_users.conf && \
    printf 'management_agent.disable_metrics_collector = true' > /etc/rabbitmq/conf.d/30-disable_metrics_data.conf && \
    chown rabbitmq:rabbitmq /etc/rabbitmq/conf.d/* && mkdir -p /var/lib/rabbitmq/ && \
    echo "$COOKIE_VALUE" > /var/lib/rabbitmq/.erlang.cookie && chmod 400 /var/lib/rabbitmq/.erlang.cookie && \
    chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
where the .erlang.cookie value may be any random string, but it should be the same for all nodes in a RabbitMQ cluster (extra information here).
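Building and running the image might look like this (a sketch; the tag and cookie value are placeholders):

# Bake a cluster-wide cookie into the image at build time:
docker build --build-arg COOKIE_VALUE=MySharedClusterCookie01 -t my-rabbitmq:3.11 .

# Run it; the cookie already exists with rabbitmq:rabbitmq ownership,
# so the eacces error does not occur:
docker run -d --name rabbitmq -p 5672:5672 -p 15672:15672 my-rabbitmq:3.11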

docker stack communicate between containers

I'm trying to set up a swarm using Docker, but I'm having issues with communication between containers.
I have a cluster with 5 nodes: 1 manager and 4 workers.
3 apps: redis, splash, myapp
myapp has to be on the 4 workers
redis, splash just on the manager
myapp has to be able to communicate with redis and splash
I tried using the container name, but it's not working. It resolves the container name to a different IP.
ping splash # returns a different IP than the container actually has
I am deploying the swarm using docker stack:
docker stack deploy -c docker-stack.yml myapp
Linking containers also doesn't work.
Any ideas? Am I missing something?
root@swarm-manager:~# docker version
Client:
 Version: 17.09.0-ce
 API version: 1.32
 Go version: go1.8.3
 Git commit: afdb6d4
 Built: Tue Sep 26 22:42:18 2017
 OS/Arch: linux/amd64

Server:
 Version: 17.09.0-ce
 API version: 1.32 (minimum version 1.12)
 Go version: go1.8.3
 Git commit: afdb6d4
 Built: Tue Sep 26 22:40:56 2017
 OS/Arch: linux/amd64
 Experimental: false
docker-stack.yml contains:
version: "3"
services:
splash:
container_name: splash
image: scrapinghub/splash
ports:
- 8050:8050
- 5023:5023
deploy:
mode: global
placement:
constraints:
- node.role == manager
redis:
container_name: redis
image: redis
ports:
- 6379:6379
deploy:
mode: global
placement:
constraints:
- node.role == manager
myapp:
container_name: myapp
image: myapp_image:latest
environment:
REDIS_ENDPOINT: redis:6379
SPLASH_ENDPOINT: splash:8050
deploy:
mode: global
placement:
constraints:
- node.role == worker
entrypoint:
- ping google.com
---- EDIT ----
I tried with curl also. Didn't work.
docker stack deploy -c docker-stack.yml myapp
Creating network myapp_default
Creating service myapp_splash
Creating service myapp_redis
Creating service myapp_myapp
curl http://myapp_splash:8050
curl: (7) Failed to connect to myapp_splash port 8050: No route to host
curl http://splash:8050
curl: (7) Failed to connect to splash port 8050: No route to host
What worked is getting the actual container name of splash, which is some random generated string.
curl http://myapp_splash.d7bn0dpei9ijpba4q41vpl4zz.tuk1cimht99at9g0au8vj9lkz:8050
But this doesn't really help me.
Ping is not the proper tool to test connectivity to services: in swarm mode a service name resolves to a virtual IP (VIP), not to the container's own address, which is why ping reports a different IP than the container has. Try curl http://serviceName instead.
Other than that: containers can't be named when using stack deploy; instead, your service name (which coincidentally is the same here) is used to reach another service.
I managed to get it working using curl http://tasks.splash:8050 or http://tasks.myapp_splash:8050.
I don't know what is causing this issue, though. Feel free to comment with an answer.
It seems that containers in a stack are reachable as tasks.<service name>, so the command ping tasks.myservice works for me!
An interesting point to note: names like <stackname>_<service name> will also resolve and are pingable, but the IP address is incorrect. This is frustrating.
(For example, if you do docker stack deploy -c my.yml AA, you'll get a name like AA_myservice which will resolve to an incorrect address.)
To add to the above answer: from a name-resolution point of view, curl and ping do the same thing. Both resolve the name passed to them; then curl tries to connect using the specified protocol (HTTP in the example above), while ping sends ICMP echo requests.
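One way to see the difference from inside one of the tasks (a sketch; it assumes nslookup is available in the image and the stack was deployed as myapp, as above):

# The bare service name resolves to the single virtual IP (VIP)
# that load-balances across all tasks:
nslookup splash

# tasks.<service> resolves to the individual task (container) IPs:
nslookup tasks.splash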

Docker container can't resolve DNS to reach another AWS Ec2 Machine

I am not able to ping another machine/host (app2) by resolving its DNS name from a container running on host app1, even though /etc/resolv.conf is the same as the host's. I am making use of an AWS Route 53 private hosted zone to allow intercommunication by resolving DNS names rather than IPs.
Some basic info for this:
ubuntu@app1:~$ docker info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 10
Server Version: 1.13.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 31
 Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
 apparmor
Kernel Version: 3.13.0-106-generic
Operating System: Ubuntu 14.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.797 GiB
Name: app1
ID: 6GYC:GI6M:JNTM:MMSL:7LRD:BEUZ:RTRD:Q4AG:NEQU:XC5C:ALOK:N3LM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
############################################
ubuntu@app1:~$ docker version
Client:
 Version: 1.13.1
 API version: 1.26
 Go version: go1.7.5
 Git commit: 092cba3
 Built: Wed Feb 8 06:42:29 2017
 OS/Arch: linux/amd64

Server:
 Version: 1.13.1
 API version: 1.26 (minimum version 1.12)
 Go version: go1.7.5
 Git commit: 092cba3
 Built: Wed Feb 8 06:42:29 2017
 OS/Arch: linux/amd64
 Experimental: false
###########################################
ubuntu@app1:~$ docker exec -it container1 sh
/data # ping app2
ping: bad address 'app2'
/data # ping app2.mydomain
PING app2.mydomain (10.xx.xx.xx): 56 data bytes
##############################################
resolv.conf on the container
/data # cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.xx.xx.xx
search mydomain
resolv.conf on host
ubuntu@app1:~$ cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.XX.XX.xx [ same as the container's ]
search mydomain
From the Docker host I am able to ping app2 without giving the full domain (app2.mydomain), but the same is not working from the container.
When you call docker run, add the --net=host option to use the host's network stack. It will do the trick.
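For example (a sketch; your-image stands in for whatever image container1 was started from):

# Run on the host's network stack; the container then uses the host's
# resolver configuration (including the search domain) directly:
docker run -it --net=host your-image sh

# Inside the container, the short name should now resolve:
ping app2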

docker-proxy - Error starting userland proxy while trying to bind on 443

I'm trying to install Discourse with Docker on Ubuntu 16.04 LTS, with Apache listening on ports 80 and 443.
When I try to launch the app I get the following error:
starting up existing container
+ /usr/bin/docker start app
Error response from daemon: driver failed programming external connectivity on endpoint app (dade361e77fbf29f4d9667febe57a06f168f916148e10cc1365093d8f97026bb): Error starting userland proxy: listen tcp 0.0.0.0:443: listen: address already in use
Error: failed to start containers: app
From what I've found, docker-proxy is the process trying to bind to 443.
How can I solve this?
Some details...
docker version
Client:
 Version: 1.11.2
 API version: 1.23
 Go version: go1.5.4
 Git commit: b9f10c9
 Built: Wed Jun 1 22:00:43 2016
 OS/Arch: linux/amd64

Server:
 Version: 1.11.2
 API version: 1.23
 Go version: go1.5.4
 Git commit: b9f10c9
 Built: Wed Jun 1 22:00:43 2016
 OS/Arch: linux/amd64
docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 4
Server Version: 1.11.2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 25
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge
Kernel Version: 4.4.0-28-generic
Operating System: Ubuntu 16.04 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 31.39 GiB
Name: sd-12345
ID: 6OLH:SAG5:VWTW:BL7U:6QYH:4BBS:QHBN:37MY:DLXA:W64E:4EVZ:WBAK
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
perhaps, stop apache? – vitr Jul 22 '16 at 2:56
^^^ This comment from vitr should be the accepted answer:
Docker cannot proxy a service from within a container to a port on the host without first stopping any service that is already using that port.
In this case, Apache must be stopped with a command such as sudo service apache2 stop.
Then docker start app can be run, and Docker should do its thing unhindered.
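Before stopping anything, you can confirm which process is actually holding the port (a sketch; either command works on Ubuntu 16.04):

sudo netstat -tlnp | grep ':443'
# or
sudo ss -tlnp '( sport = :443 )'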
See the related question: docker run -> name is already in use by container
Edit /etc/docker/daemon.json and add:
{
  "userland-proxy": false
}
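Note that changes to /etc/docker/daemon.json take effect only after the Docker daemon is restarted:

sudo systemctl restart docker

With the userland proxy disabled, Docker publishes ports via iptables rules instead of a per-port docker-proxy process.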
