I have an app based on actix-web and I'm trying to containerize it. When I run the build locally it works fine, but when I build it through Docker, the app starts fine and binds to its port inside the container, but it is not accessible from the host.
How I'm running the container:
docker run -p 80:80 myapp
Dockerfile (abbreviated):
[...] // Builds binaries
FROM debian:buster-slim
COPY --from=server-builder /usr/src/myapp/target/release/myapp /usr/bin
COPY --from=frontend-builder /usr/src/frontend/dist /static
EXPOSE 80
CMD ["myapp"]
I don't have any other services running on port 80, and I've tried using different host ports. It isn't a problem with my binary, since it runs fine locally and the logs show it starts fine in Docker. It isn't a misconfiguration of the /static directory either: the server logs connections (even in production builds), and nothing reaches the server when it runs in Docker.
Here are network settings when running docker inspect
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f7fe576d1f3c1612bb5afa94c3c360a12b0c0d464881835a0347fcadad228343",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "80"
},
{
"HostIp": "::",
"HostPort": "80"
}
]
},
"SandboxKey": "/var/run/docker/netns/f7fe576d1f3c",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "b5939e954990ef71f4b69929beaf9af8f0b28f7fa9ee32363a9aefdd2f49107b",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "3eefce33be199f43e6aaa5866606a9a2da0daf25e662b510a18ed1820fb4d1d9",
"EndpointID": "b5939e954990ef71f4b69929beaf9af8f0b28f7fa9ee32363a9aefdd2f49107b",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:03",
"DriverOpts": null
}
}
}
OK, figured it out. First of all, the -p command line argument is host_port:container_port and not the other way around.
Second, the server was only accepting localhost connections; I'll see what's up with actix. A good way to debug that is to pass the --network host argument to docker run.
Related
I'm pretty new to Docker, but I'm running up against a wall trying to get my implementation to work through Docker Compose.
Docker Compose File
version: "3.4"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- 5000:5000
- 5001:5001
Dockerfile
# syntax=docker/dockerfile:1
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Install EF tools
RUN dotnet tool install --global dotnet-ef
ENV PATH="${PATH}:/root/.dotnet/tools"
# Generate Certificates
RUN dotnet dev-certs https -ep ${HOME}/.aspnet/https/aspnetapp.pfx -p TestPassword
RUN dotnet dev-certs https --trust
# Copy everything else and build
COPY . ./
COPY Setup.sh Setup.sh
RUN dotnet restore
RUN dotnet build -c Release -o out
# Build runtime image
RUN chmod +x ./Setup.sh
CMD /bin/bash ./Setup.sh
Setup.sh Entry Point
#!/bin/bash
set -e
run_cmd="dotnet run --project Artis.Merchant.API/Artis.Merchant.API.csproj --launch-profile Artis.Merchant.API"
until dotnet ef database update --project Artis.Models/Artis.Models.csproj; do
>&2 echo "SQL Server is starting up"
sleep 1
done
>&2 echo "SQL Server is up - executing command"
exec $run_cmd
Launch Settings for Project
"profiles": {
"Artis.Merchant.API": {
"commandName": "Project",
"launchBrowser": true,
"launchUrl": "swagger",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Local"
},
"applicationUrl": "https://localhost:5001;http://localhost:5000"
},
So both my HTTP and HTTPS endpoints seem to be unreachable when launched through Docker. The CLI output is exactly the same, so the app appears to be up and running in Docker. Any ideas what I'm doing wrong?
Locally running dotnet run --project Artis.Merchant.API/Artis.Merchant.API.csproj --launch-profile Artis.Merchant.API
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[14]
Now listening on: https://localhost:5001
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Local
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\Users\ddrob\source\Artis.Merchant.API
The above works fine and I can access it as normal through https://localhost:5001 or http://localhost:5000
Output from Docker after running docker compose up
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[14]
artis_api-app-1 | Now listening on: http://localhost:5000
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[14]
artis_api-app-1 | Now listening on: https://localhost:5001
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[0]
artis_api-app-1 | Application started. Press Ctrl+C to shut down.
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[0]
artis_api-app-1 | Hosting environment: Local
artis_api-app-1 | info: Microsoft.Hosting.Lifetime[0]
artis_api-app-1 | Content root path: /app/Artis.Merchant.API
I've been testing by making an HTTP GET call to https://localhost:5001/api/health, which should just return a 200. It works fine running locally; through Docker I get a socket hang up when accessing over plain HTTP, and "client network socket disconnected before secure TLS connection was established" when accessing over HTTPS.
EDIT
For more insight, here are the outputs from docker ps and docker inspect artis_api-app
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b71ffc2c5a1 artis_api-app "/bin/sh -c '/bin/ba…" 43 minutes ago Up 23 minutes 0.0.0.0:5000-5001->5000-5001/tcp artis_api-app-1
[
{
"Id": "sha256:505685ec26fb58cc87fe4ec5166f2b3c0978b251953f5e9d431776ad036fc839",
"RepoTags": [
"artis_api-app:latest"
],
"RepoDigests": [],
"Parent": "",
"Comment": "buildkit.dockerfile.v0",
"Created": "2022-09-14T18:40:33.7696322Z",
"Container": "",
"ContainerConfig": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"DockerVersion": "",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.dotnet/tools",
"ASPNETCORE_URLS=",
"DOTNET_RUNNING_IN_CONTAINER=true",
"DOTNET_VERSION=6.0.9",
"ASPNET_VERSION=6.0.9",
"DOTNET_GENERATE_ASPNET_CERTIFICATE=false",
"DOTNET_NOLOGO=true",
"DOTNET_SDK_VERSION=6.0.401",
"DOTNET_USE_POLLING_FILE_WATCHER=true",
"NUGET_XMLDOC_MODE=skip",
"POWERSHELL_DISTRIBUTION_CHANNEL=PSDocker-DotnetSDK-Debian-11"
],
"Cmd": [
"/bin/sh",
"-c",
"/bin/bash ./Setup.sh"
],
"ArgsEscaped": true,
"Image": "",
"Volumes": null,
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"Architecture": "amd64",
"Os": "linux",
"Size": 6245959471,
"VirtualSize": 6245959471,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/uqv91vlzccv2h5y9g9mzcbl2a/diff:/var/lib/docker/overlay2/dewh3h889lvrnae1qq1gaucga/diff:/var/lib/docker/overlay2/21fioiit0habui6whx5x56e4w/diff:/var/lib/docker/overlay2/xgcecdok1kdh5je5ti1xuyylx/diff:/var/lib/docker/overlay2/yxu9wlvfmwn5rp9lth9f9ff8s/diff:/var/lib/docker/overlay2/imqsg5pxz3xkzxsaucymd3xv5/diff:/var/lib/docker/overlay2/etcmf1dskml8g13hg0eeqgtrq/diff:/var/lib/docker/overlay2/3orhtkibg5z6eeosvazrsjjwx/diff:/var/lib/docker/overlay2/e903ebed1d7345de8bd1780dbb132091b2930ddaa014ef4f399ce87be1e105d4/diff:/var/lib/docker/overlay2/d49d1b24c5821a65689ef573e9f6e7f4562cc2cd7fb4d1e8f2f6144f31cbb1dc/diff:/var/lib/docker/overlay2/bbf9acaefd8441bb31972a56526870d63995056b59f59166560007d0a114b2f9/diff:/var/lib/docker/overlay2/2306e276d00d98d2aaff2af655cba48efe5b4c0c072f54b6883818a5063d2623/diff:/var/lib/docker/overlay2/7ab5f58b84e23d093901394612c6cc71f8b704f8408fcaab10ed9c2b799e71e4/diff:/var/lib/docker/overlay2/2cd518af44aa8bae9adaa6460d9c88a761f2fdccdd36b69ddcd4809d26fbb2a0/diff:/var/lib/docker/overlay2/ea6cf0ad92836e7d871430c029e19c688f5b7caeaee475f1b7fb18b215511cd9/diff:/var/lib/docker/overlay2/892c98581f39f02c87c8ca05c73b98bdc29ab6862ebb2e70ee8c7006d1f90ccf/diff",
"MergedDir": "/var/lib/docker/overlay2/u0v4b4saadkhe84ztlc2blsnj/merged",
"UpperDir": "/var/lib/docker/overlay2/u0v4b4saadkhe84ztlc2blsnj/diff",
"WorkDir": "/var/lib/docker/overlay2/u0v4b4saadkhe84ztlc2blsnj/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:b45078e74ec97c5e600f6d5de8ce6254094fb3cb4dc5e1cc8335fb31664af66e",
"sha256:5ec686cbc3c7aea55c4f80c574962f120e5f81ed0b7ccacbbde51f8d819f8247",
"sha256:e487c4dad54c656811ed2064a764f7240bb3b5936d497ec757991f9344be616d",
"sha256:64d665f70cc187b1c5a5c1fc8d0a4431ea0f0069c1985ce330c55523466c22b2",
"sha256:efccb7d95dee67a40c57966066356e47a301cc6028db923ab58fe4f21564fefb",
"sha256:b4d89feca49ab5217ab7d09079ab1f07c618570d0822ad74ccce634313cb0c91",
"sha256:d5471ff23747a10089f58397f39c2f66c6c8937687dd2b3592ab6fe09c6756d0",
"sha256:08affa1f86cb7d7262620e2517bc3308598db5d047135c5b7db00994d22f6701",
"sha256:6f1bf9eb1c14548bc0b119efb283637880394c2cae2117de367238ab3b7fcb80",
"sha256:be544cd1ea837e49241d66815afaeddfe79b3d869972f64451e3c2dcdf8c10eb",
"sha256:7c2ba258f59487f4dacf912a4f2c0a7598e2b4082283555fd5ca127da145cdc9",
"sha256:3bcd28ee7f58f12baceeb1ab2c097675ca35e66f3e350869a73d5bc508fcfd57",
"sha256:11f2bbe76890bfd069e0f1b75dd4acd35dc33f071f061e5c6751e5af7723e897",
"sha256:d5f7661f5ac2f07ba5836972af96f89358381cbb5399342ef6fd03b6306fe000",
"sha256:5879a59c7c2aba6ffcc9535592116cc92e7efd4d7acff9f7716e1e2ac167c3e8",
"sha256:fdedee5f422b0f13237dd54f9894c64c4323f48d7d06e10a7640991ce9dd62ce",
"sha256:01104bf88915fea5e113a37e57b450afa15ec647816e61b17bea508082a8ee32"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]
I noticed the output is different when I inspect the container artis_api-app-1 instead of the image named artis_api-app. I figured the only relevant portion of the former is the network settings output.
"NetworkSettings": {
"Bridge": "",
"SandboxID": "174a63cebce5ca8ff7c6b218a73c90e5b813caad60b008777a1fdb031c595ced",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5000"
}
],
"5001/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5001"
}
]
},
"SandboxKey": "/var/run/docker/netns/174a63cebce5",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"artis_api_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"artis_api-app-1",
"app",
"4b71ffc2c5a1"
],
"NetworkID": "dc60832ac2b09450067b3edeb1cd944ad9c7c4805a674da5dba456654db49125",
"EndpointID": "cd04c2b9261b8ec3a095a6c3db6001484e41a41ca0dd3c32a72c093cf3787e7b",
"Gateway": "172.22.0.1",
"IPAddress": "172.22.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:16:00:03",
"DriverOpts": null
}
}
}
EDIT 2
Here is my initial Program.cs entry point since I've had some questions regarding that.
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.ConfigureAppConfiguration((context, configBuilder) =>
{
string assemblyName = Assembly.GetExecutingAssembly().GetName().Name;
string envName = context.HostingEnvironment.EnvironmentName;
configBuilder.Sources.Clear();
configBuilder.AddJsonFile($"{assemblyName}.appsettings.json", optional: false, reloadOnChange: true);
configBuilder.AddJsonFile($"{assemblyName}.appsettings.{envName}.json", optional: true, reloadOnChange: true);
});
webBuilder.UseStartup<Startup>();
});
}
Answer
Along with the accepted answer I also needed to include a Kestrel configuration in my appsettings.json file such as this:
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://0.0.0.0:5000"
},
"Https": {
"Url": "https://0.0.0.0:5001"
}
},
"EndpointDefaults": {
"Url": "https://0.0.0.0:5001",
"Protocols": "Http1"
}
}
Your application is listening for connections inside the container (localhost) only:
Now listening on: http://localhost:5000
To be able to pass traffic through the published port, the application should listen on an external network interface; listening on http://0.0.0.0:5000 is enough.
Please try providing a value for the environment variable ASPNETCORE_URLS when running your container.
For example, in docker-compose:
version: "3.4"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- 5000:5000
- 5001:5001
environment:
- ASPNETCORE_ENVIRONMENT=Local
- ASPNETCORE_URLS=https://+:5001;http://+:5000
      # please review the provided path; based on your
      # setup I am unsure whether it is exact or not
- ASPNETCORE_Kestrel__Certificates__Default__Path=${HOME}/.aspnet/https/aspnetapp.pfx
      # consider using an env variable to provide the password,
      # to avoid putting sensitive information under version control
- ASPNETCORE_Kestrel__Certificates__Default__Password=TestPassword
This will allow your app to listen on all available network interfaces; otherwise it will only be accessible through localhost, and be aware that localhost in that context is the container itself.
You could provide analogous information in the Dockerfile as well.
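A sketch of the Dockerfile equivalent (the certificate path here assumes HOME is /root in the SDK image, matching the dev-certs step above; adjust if your setup differs):

```dockerfile
# Bind on all interfaces, not just localhost
ENV ASPNETCORE_URLS="https://+:5001;http://+:5000"
# Point Kestrel at the pfx generated during the build (path is an assumption)
ENV ASPNETCORE_Kestrel__Certificates__Default__Path=/root/.aspnet/https/aspnetapp.pfx
# Prefer a runtime secret over hard-coding the password like this
ENV ASPNETCORE_Kestrel__Certificates__Default__Password=TestPassword
```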
Please notice that, in order to support HTTPS, I included information about the location of the pfx bundle and the corresponding password you used when building the image; it is necessary for that purpose. Consider reading the provided Microsoft documentation.
A final word about the certificates: as you can see in the mentioned documentation, the certificates used by the application are usually mounted through a Docker volume, to avoid including them in your Docker image and making them public in practice. That is fine for a POC, but please never store your production certificates and passwords like this.
I had the same issue previously, caused by a port setting in Program.cs.
Please check your IWebHostBuilder: you might have specified a specific port like this
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
WebHost.CreateDefaultBuilder(args)
.UseStartup<Startup>()
.UseWebRoot("")
.UseUrls(urls: "http://localhost:5000");
and replace it with this (or, alternatively, keep UseUrls but bind to http://0.0.0.0:5000 instead of localhost):
public static IHostBuilder CreateWebHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
Overview
I am using a course to learn how to Dockerize my ASP.NET Core application. I have a networking issue with the token server I am trying to use in my configuration.
The ASP.NET Core Web application (webmvc) allows authorization through a token server (tokenserver).
docker-compose for the services
tokenserver
tokenserver:
build:
context: .\src\Services\TokenServiceApi
dockerfile: Dockerfile
image: shoes/token-service
environment:
- ASPNETCORE_ENVIRONMENT=ContainerDev
- MvcClient=http://localhost:5500
container_name: tokenserviceapi
ports:
- "5600:80"
networks:
- backend
- frontend
depends_on:
- mssqlserver
tokenserver knows about the webmvc url.
webmvc
webmvc:
build:
context: .\src\Web\WebMvc
dockerfile: Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=ContainerDev
- CatalogUrl=http://catalog
- IdentityUrl=http://10.0.75.1:5600
container_name: webfront
ports:
- "5500:80"
networks:
- frontend
depends_on:
- catalog
- tokenserver
Running the container confirms that webmvc will try to reach the identity server at http://10.0.75.1:5600.
By running ipconfig on my Windows machine I confirm that DockerNAT is using 10.0.75.1:
Ethernet adapter vEthernet (DockerNAT):
Connection-specific DNS Suffix . :
IPv4 Address. . . . . . . . . . . : 10.0.75.1
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . :
http://10.0.75.1:5600/ is not accessible from the host machine, while http://localhost:5600 is.
However, I have to rely on the DockerNAT IP because webmvc must access the service from its own container, where localhost:5600 does not make sense:
docker exec -it webfront bash
root@be382eb4608b:/app# curl -i -X GET http://10.0.75.1:5600
HTTP/1.1 404 Not Found
Date: Fri, 03 Jan 2020 08:55:48 GMT
Server: Kestrel
Content-Length: 0
root@be382eb4608b:/app# curl -i -X GET http://localhost:5600
curl: (7) Failed to connect to localhost port 5600: Connection refused
Token service container inspect (relevant parts)
"HostConfig": {
"Binds": [],
....
"NetworkMode": "shoesoncontainers_backend",
"PortBindings": {
"80/tcp": [
{
"HostIp": "",
"HostPort": "5600"
}
]
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6637a47944251a4dc59205dc6e03670bc4b03f8bf38a7be0dc11b72adf6a3afa",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5600"
}
]
},
"SandboxKey": "/var/run/docker/netns/6637a4794425",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"shoesoncontainers_backend": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"tokenserver",
"d31d9b5f4ec7"
],
"NetworkID": "a50a9cee66e6a65a2bb90a7035bae4d9716ce6858a17d5b22e147dfa8e33d686",
"EndpointID": "405b1beb5e20636bdf0d019b36494fd85ece86cfbb8c2d57283d64cc20e5d760",
"Gateway": "172.28.0.1",
"IPAddress": "172.28.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:1c:00:04",
"DriverOpts": null
},
"shoesoncontainers_frontend": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"tokenserver",
"d31d9b5f4ec7"
],
"NetworkID": "b7b3e8599cdae7027d0bc871858593f41fa9b938c13f906b4b29f8538f527ca0",
"EndpointID": "e702b29016b383b7d5872f8c55cad0f189d6f58f2631316cf0313f3df30331c0",
"Gateway": "172.29.0.1",
"IPAddress": "172.29.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:1d:00:03",
"DriverOpts": null
}
}
}
I have also created an inbound rule for port 5600 in Windows Defender Firewall with Advanced Security.
Question: How to access Docker container through DockerNAT IP address on Windows 10?
I think you are looking for host.docker.internal. It's a special DNS name which allows you to connect from a container to a service on the host, or to a container exposed on the host.
The official documentation.
You can find longer explanations here.
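As a sketch, the webmvc service from the question could then point at the token server through the host rather than the DockerNAT IP (on Docker Desktop for Windows, host.docker.internal resolves to the host from inside containers; the rest of the environment is copied from the question):

```yaml
webmvc:
  environment:
    - ASPNETCORE_ENVIRONMENT=ContainerDev
    - CatalogUrl=http://catalog
    - IdentityUrl=http://host.docker.internal:5600
```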
I am not sure why it does not work as expected, but using the information provided here I was able to figure out how to make it work:
You can try adding an inbound rule to the firewall:
Example:
protocol: any/tcp/udp/...
program: any
action: allow
local port: any
remote port: any
local address: 10.0.75.1
remote address: 10.0.75.0/24
or you can try using address 10.0.75.2 instead of 10.0.75.1
For me the second solution worked.
I have a container attached to two networks:
one is the reverse proxy network
the second is the internal network for the different containers of that project
The container needs to access an external SMTP server (on mailgun.com), but it doesn't look like, with docker-compose, you can put a container on one or more networks and give it access to the host network at the same time.
Is there a way to allow this container to initiate connections to the outside world?
and, if no, what common workarounds are used? (for example, adding an extra IP to the container to be on the host network, etc.)
This is the docker compose file:
version: '2.3'
services:
keycloak:
container_name: keycloak
image: jboss/keycloak
restart: unless-stopped
volumes:
- '/appdata/keycloak:/opt/jboss/keycloak/standalone/data'
expose:
- 8080
external_links:
- auth
networks:
- default
- nginx
environment:
KEYCLOAK_USER: XXXX
KEYCLOAK_PASSWORD: XXXX
PROXY_ADDRESS_FORWARDING: 'true'
ES_JAVA_OPTS: '-Xms512m -Xmx512m'
VIRTUAL_HOST: auth.XXXX.com
VIRTUAL_PORT: 80
LETSENCRYPT_HOST: auth.XXXX.com
LETSENTRYPT_EMAIL: admin@XXXX.com
networks:
default:
external:
name: app-network
nginx:
external:
name: nginx-proxy
The networks are as follows:
$ dk network ls
NETWORK ID NAME DRIVER SCOPE
caba49ae8b1c bridge bridge local
2b311986a6f6 app-network bridge local
67f70f82aea2 host host local
9e0e2fe50385 nginx-proxy bridge local
dab9f171e37f none null local
and nginx-proxy network info is:
$ dk network inspect nginx-proxy
[
{
"Name": "nginx-proxy",
"Id": "9e0e2fe503857c5bc532032afb6646598ee0a08e834f4bd89b87b35db1739dae",
"Created": "2019-02-18T10:16:38.949628821Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"360b49ab066853a25cd739a4c1464a9ac25fe56132c596ce48a5f01465d07d12": {
"Name": "keycloak",
"EndpointID": "271ed86cac77db76f69f6e76686abddefa871b92bb60a007eb131de4e6a8cb53",
"MacAddress": "02:42:ac:12:00:04",
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"379dfe83d6739612c82e99f3e8ad9fcdfe5ebb8cdc5d780e37a3212a3bf6c11b": {
"Name": "nginx-proxy",
"EndpointID": "0fcf186c6785dd585b677ccc98fa68cc9bc66c4ae02d086155afd82c7c465fef",
"MacAddress": "02:42:ac:12:00:03",
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"4c944078bcb1cca2647be30c516b8fa70b45293203b355f5d5e00b800ad9a0d4": {
"Name": "adminmongo",
"EndpointID": "65f1a7a0f0bcef37ba02b98be8fa1f29a8d7868162482ac0b957f73764f73ccf",
"MacAddress": "02:42:ac:12:00:06",
"IPv4Address": "172.18.0.6/16",
"IPv6Address": ""
},
"671cc99775e09077edc72617836fa563932675800cb938397597e17d521c53fe": {
"Name": "portainer",
"EndpointID": "950e4b5dcd5ba2a13acba37f50e315483123d7da673c8feac9a0f8d6f8b9eb2b",
"MacAddress": "02:42:ac:12:00:02",
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"90a98111cbdebe76920ac2ebc50dafa5ea77eba9f42197216fcd57bad9e0516e": {
"Name": "kibana",
"EndpointID": "fe1768274eec9c02c28c74be0104326052b9b9a9c98d475015cd80fba82ec45d",
"MacAddress": "02:42:ac:12:00:05",
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Update:
The following test was done to test the solution proposed by lbndev:
a test network was created:
# docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o "com.docker.network.driver.mtu"="1500" \
test_network
e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9
we can display the contents:
# docker inspect test_network
[
{
"Name": "test_network",
"Id": "e21057cf83eec70e9cfeed459d79521fb57e9f08477b729a8c8880ea83891ed9",
"Created": "2019-02-24T21:52:44.678870135+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
Then we can inspect the container:
I put the contents on pastebin: https://pastebin.com/5bJ7A9Yp since it's quite large and would make this post unreadable.
and testing:
# docker exec -it 5d09230158dd sh
sh-4.2$ ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10006ms
So, we couldn't get this solution to work.
Looks like your bridge network is missing a few options that allow it to reach the outside world.
Try executing docker network inspect bridge (the default bridge network). You'll see this in the options :
...
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
...
On your nginx-proxy network, these are missing.
You should delete your network and re-create it with these additional options. From the documentation on user-defined bridge networks and the docker network create command:
docker network create \
-o "com.docker.network.bridge.enable_icc"="true" \
-o "com.docker.network.bridge.enable_ip_masquerade"="true" \
-o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
-o "com.docker.network.driver.mtu"="1500" \
nginx-proxy
Enabling ICC or not is up to you.
What will enable you to reach your mail server is ip_masquerade being enabled. Without it, your physical infrastructure (network routers) would need to properly route the IPs of the Docker network subnet (which I assume is not the case).
Alternatively, you could configure your docker network's subnet, ip range and gateway, to match those of your physical network.
In the end, the problem turned out to be very simple:
In the daemon.json file, in the docker config, there was the following line:
{"iptables": false, "dns": ["1.1.1.1", "1.0.0.1"]}
It comes from the setup scripts we've been using, and we didn't know about the iptables: false setting.
It prevents Docker from updating the host's iptables; while the bridge networks were set up correctly, no communication with the outside was possible.
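For reference, the default is for Docker to manage iptables itself; a corrected daemon.json simply drops the flag (or sets it to true, shown here for emphasis):

```json
{
  "iptables": true,
  "dns": ["1.1.1.1", "1.0.0.1"]
}
```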
While simple in nature, it proved very long to find, so I’m posting it as an answer with the hope it might help someone.
Thanks to everyone involved for trying to solve this issue!
I am sure the answer here is something really obvious that I am missing. I have Docker for Windows installed on a Win 10 Pro machine. The Windows machine is on the 192.168.40.0/24 network.
I pull and install RabbitMQ as follows:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management
And I can see that it is running successfully:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3cabceeade6e rabbitmq:3-management "docker-entrypoint.s…" 7 minutes ago Up 7 minutes 4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 25672/tcp some-rabbit
However, I cannot telnet to either 5671 or 15672 on 127.0.0.1. I have also tried disabling the Windows firewall, with no luck.
Not sure how this relates but Docker is configured with the following networking settings:
EDIT: The IP address information is:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "707c66b726b25c80abfebb1712d3bb0ae588dd77c996013bb528de7ac061edd4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"15671/tcp": null,
"15672/tcp": null,
"25672/tcp": null,
"4369/tcp": null,
"5671/tcp": null,
"5672/tcp": null
},
"SandboxKey": "/var/run/docker/netns/707c66b726b2",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "6e5ba9a4596967d98def608e18c9fd925a6ce036a84cd9d616f9f35d561ce68d",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "38f30e8dcf669b9419be3a03f1f296e0bed71d970516c4a1e581d37772bd1b55",
"EndpointID": "6e5ba9a4596967d98def608e18c9fd925a6ce036a84cd9d616f9f35d561ce68d",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
So what have I missed here that is preventing me from accessing the web management interface at http://127.0.0.1:15672? While I can see the server is running on 172.17.0.2, that is clearly not on my network.
So I finally figured out my stupidity:
I was adding the ports at the end of the command:
docker run -d --hostname my-rabbit --name some-rabbit rabbitmq:3-management -p 15672:15672 -p 5672:5672
instead of before the image name (arguments after the image name are passed to the container's entrypoint, not interpreted by docker run):
docker run -d --hostname my-rabbit -p 15672:15672 -p 5672:5672 --name some-rabbit rabbitmq:3-management
I have a problem where docker-compose containers aren't able to reach the internet. Manually created containers via the docker cli or kubelet work just fine.
This is on an AWS EC2 node created using Kops with Calico overlay (I think that may be unrelated, however).
Here's the docker-compose:
version: '2.1'
services:
app:
container_name: app
image: "debian:jessie"
command: ["sleep", "99999999"]
app2:
container_name: app2
image: "debian:jessie"
command: ["sleep", "99999999"]
This fails:
# docker exec -it app ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
docker-compose container<->container works (as expected):
# docker exec -it app ping app2
PING app2 (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: icmp_seq=0 ttl=64 time=0.098 ms
Manually created container works fine:
# docker run -it -d --name app3 debian:jessie sh -c "sleep 99999999"
# docker exec -it app3 ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=37 time=9.972 ms
So it seems like docker-compose containers can't reach the internet.
Here's the NetworkSettings from app3, which works:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "54168ea912b9caa842b208f36dac80a588ebdc63501a700379fb1b732a41d3ac",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/54168ea912b9",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "cdddee0f3e25e7861a98ba6aff33652619a3970c061d0ed2a5dc5bd2b075b30d",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "46e8bc586d48c9a57e2886f7f35f7c2c8396f8084650fcc2bf1e74788df09e3f",
"EndpointID": "cdddee0f3e25e7861a98ba6aff33652619a3970c061d0ed2a5dc5bd2b075b30d",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
}
}
}
From one of the docker-compose containers (fails):
"NetworkSettings": {
"Bridge": "",
"SandboxID": "6b79a6b45f099c65f89adf59eb50eadff2362942f316b05cf20ae1959ca9b88b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/6b79a6b45f09",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"root_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"app2",
"4f48647ba5bb"
],
"NetworkID": "ffb540b2b9e2945908477a755a43d3505aea6ed94ef5fd944909a91fb104ce8e",
"EndpointID": "48aff2f00bb4bd670b5178b459a353ac45f7d3efbfb013c1026064022e7c4e59",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:02"
}
}
}
So it seems like the major difference is that the docker-compose containers aren't created with an IPAddress or Gateway.
Some background info:
# docker version
Client:
Version: 1.12.6
API version: 1.24
Go version: go1.6.4
Git commit: 78d1802
Built: Tue Jan 10 20:17:57 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Go version: go1.6.4
Git commit: 78d1802
Built: Tue Jan 10 20:17:57 2017
OS/Arch: linux/amd64
# docker-compose version
docker-compose version 1.15.0, build e12f3b9
docker-py version: 2.4.2
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
# ip route
default via 10.20.128.1 dev eth0
10.20.128.0/20 dev eth0 proto kernel scope link src 10.20.140.184
100.104.10.64/26 via 10.20.136.0 dev eth0 proto bird
100.109.150.192/26 via 10.20.152.115 dev tunl0 proto bird onlink
100.111.225.192 dev calic6f21d462fc scope link
blackhole 100.111.225.192/26 proto bird
100.111.225.193 dev calief8dddb6a0d scope link
100.111.225.195 dev cali8ca1dd867c3 scope link
100.111.225.196 dev cali34426885f86 scope link
100.111.225.197 dev cali6cae60de42a scope link
100.111.225.231 dev calibd569acd2f3 scope link
100.115.17.64/26 via 10.20.148.89 dev tunl0 proto bird onlink
100.115.237.64/26 via 10.20.167.9 dev tunl0 proto bird onlink
100.117.246.128/26 via 10.20.150.249 dev tunl0 proto bird onlink
100.118.80.0/26 via 10.20.162.215 dev tunl0 proto bird onlink
100.119.204.0/26 via 10.20.135.183 dev eth0 proto bird
100.123.178.128/26 via 10.20.170.43 dev tunl0 proto bird onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-bd6445b00ccf proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-ffb540b2b9e2 proto kernel scope link src 172.19.0.1
The iptables rules are a bit long, so I'm not posting them for now (if iptables were the problem, I would expect them to also interfere with the non-docker-compose containers, so I think iptables are unrelated).
Anyone know what's going on?