Docker port mapping weirdness - docker

I am using this docker-compose file:
version: "3.7"
services:
  mautrix-wsproxy:
    container_name: mautrix-wsproxy
    image: dock.mau.dev/mautrix/wsproxy
    restart: unless-stopped
    ports:
      - 29331
    environment:
      #LISTEN_ADDRESS: ":29331"
      APPSERVICE_ID: imessage
      AS_TOKEN: put your as_token here
      HS_TOKEN: put your hs_token here
      # These URLs will work as-is with docker networking
      SYNC_PROXY_URL: http://mautrix-syncproxy:29332
      SYNC_PROXY_WSPROXY_URL: http://mautrix-wsproxy:29331
      SYNC_PROXY_SHARED_SECRET: ...
  mautrix-syncproxy:
    container_name: mautrix-syncproxy
    image: dock.mau.dev/mautrix/syncproxy
    restart: unless-stopped
    environment:
      #LISTEN_ADDRESS: ":29332"
      DATABASE_URL: postgres://user:pass@host/mautrixsyncproxy
      HOMESERVER_URL: http://localhost:8008
      SHARED_SECRET: ...
But then docker ps shows
... dock.mau.dev/mautrix/wsproxy ... 0.0.0.0:49156->29331/tcp, :::49156->29331/tcp
And I have to use external port 49156 to connect to its internal port 29331.
Where the heck did this 49156 come from? How do I map it so it's 29331->29331?
docker inspect shows:
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": ...,
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {
        "29331/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "49156"
            },
            {
                "HostIp": "::",
                "HostPort": "49156"
            }
        ]
    },

In addition to the provided answers: if you want a fixed port mapping, you can get one by providing two ports separated by a colon, either with the --publish flag or in the ports array when using Compose. The left part is the port exposed on the host system; the right part is the port inside the container.
# make port 29331 inside the container available on port 8080 on the host system
docker run --publish 8080:29331 busybox
In your case, to answer your question:
How do I map it so it's 29331->29331?
services:
  mautrix-wsproxy:
    ports:
      - "29331:29331"
...

From the docs for ports:
There are three options:
Specify both ports (HOST:CONTAINER)
Specify just the container port (an ephemeral host port is chosen for the host port).
....
You're using the 2nd option, and just specifying the container port that you want to expose. Since you didn't specify a host port to map that to, "an ephemeral host port is chosen".

Looking at the documentation for ports in a compose file (docs.docker.com) we can read the following:
Short syntax
There are three options:
...
Specify just the container port (an ephemeral host port is chosen for the host port).
...
This means in essence that a random, free host port is chosen.
To explicitly map the container port to a known host port (even if it's the same as the container port), we use the HOST:CONTAINER syntax (see link above):
version: "3.7"
services:
  mautrix-wsproxy:
    ...
    ports:
      - "29331:29331"
    ...
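For completeness, the same fixed mapping can also be written in the compose long syntax (available from compose file format 3.2 onward), which spells out each field explicitly; a sketch equivalent to the short form above:

```yaml
services:
  mautrix-wsproxy:
    ports:
      - target: 29331      # port inside the container
        published: 29331   # port on the host
        protocol: tcp
```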

Related

Two services cannot see each other through a swarm overlay

I feel like this is simple, but I can't figure it out. I have two services, consul and traefik up in a single node swarm on the same host.
> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3g1obv9l7a9q consul_consul replicated 1/1 progrium/consul:latest
ogdnlfe1v8qx proxy_proxy global 1/1 traefik:alpine *:80->80/tcp, *:443->443/tcp
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090f1ed90972 progrium/consul:latest "/bin/start -server …" 12 minutes ago Up 12 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul_consul.1.o0j8kijns4lag6odmwkvikexv
20f03023d511 traefik:alpine "/entrypoint.sh -c /…" 12 minutes ago Up 12 minutes 80/tcp
Both containers have access to the "consul" overlay network, which was created as such.
> docker network create --driver overlay --attachable consul
ypdmdyx2ulqt8l8glejfn2t25
Traefik is complaining that it can't reach consul.
time="2019-03-18T18:58:08Z" level=error msg="Load config error: Get http://consul:8500/v1/kv/traefik?consistent=&recurse=&wait=30000ms: dial tcp 10.0.2.2:8500: connect: connection refused, retrying in 7.492175404s"
I can go into the traefik container and confirm that I can't reach consul through the overlay network, although it is pingable.
> docker exec -it 20f03023d511 ash
/ # nslookup consul
Name: consul
Address 1: 10.0.2.2
/ # curl consul:8500
curl: (7) Failed to connect to consul port 8500: Connection refused
# ping consul
PING consul (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.085 ms
However, if I look a little deeper, I find that they are connected, just that the overlay network isn't transmitting traffic to the actual destination for some reason. If I go directly to the actual consul ip, it works.
/ # nslookup tasks.consul
Name: tasks.consul
Address 1: 10.0.2.3 0327c8e1bdd7.consul
/ # curl tasks.consul:8500
Moved Permanently.
I could work around this, since technically there will only ever be one copy of consul running, but I'd like to know why the traffic isn't routing in the first place before I get deeper into it. I can't think of anything else to try. Here is various information related to this setup.
> docker --version
Docker version 18.09.2, build 6247962
> docker network ls
NETWORK ID NAME DRIVER SCOPE
cee3cdfe1194 bridge bridge local
ypdmdyx2ulqt consul overlay swarm
5469e4538c2d docker_gwbridge bridge local
5fd928ea1e31 host host local
9v22k03pg9sl ingress overlay swarm
> docker network inspect consul
[
    {
        "Name": "consul",
        "Id": "ypdmdyx2ulqt8l8glejfn2t25",
        "Created": "2019-03-18T14:44:27.213690506-04:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.2.0/24",
                    "Gateway": "10.0.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0327c8e1bdd7ebb5a7871d16cf12df03240996f9e590509984783715a4c09193": {
                "Name": "consul_consul.1.8v4bshotrco8fv3sclwx61106",
                "EndpointID": "ae9d5ef1d19b67e297ebf40f6db410c33e4e3c0266c56e539e696be3ed4c81a5",
                "MacAddress": "02:42:0a:00:02:03",
                "IPv4Address": "10.0.2.3/24",
                "IPv6Address": ""
            },
            "c21f5dfa93a2f43b747aedc64a343d94d6c1c2e6558d81bd4a52e2ba4b5fa90f": {
                "Name": "proxy_proxy.sb6oindhmfukq4gcne6ynb2o2.4zvco02we58i3ulbyrsw1b2ok",
                "EndpointID": "7596a208e0b05ba688f318814e24a2a1a3401765ed53ca421bf61c73e65c235a",
                "MacAddress": "02:42:0a:00:02:06",
                "IPv4Address": "10.0.2.6/24",
                "IPv6Address": ""
            },
            "lb-consul": {
                "Name": "consul-endpoint",
                "EndpointID": "23e74716ef54f3fb6537b305176b790b4bc4132dda55f20588d7ce4ca71d7372",
                "MacAddress": "02:42:0a:00:02:04",
                "IPv4Address": "10.0.2.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4099"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "e11b9bd30b31",
                "IP": "10.8.0.1"
            }
        ]
    }
]
> cat consul/docker-compose.yml
version: '3.1'
services:
  consul:
    image: progrium/consul
    command: -server -bootstrap
    networks:
      - consul
    volumes:
      - consul:/data
    deploy:
      labels:
        - "traefik.enable=false"
networks:
  consul:
    external: true
> cat proxy/docker-compose.yml
version: '3.3'
services:
  proxy:
    image: traefik:alpine
    command: -c /traefik.toml
    networks:
      # We need an external proxy network and the consul network
      # - proxy
      - consul
    ports:
      # Send HTTP and HTTPS traffic to the proxy service
      - 80:80
      - 443:443
    configs:
      - traefik.toml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # Deploy the service to all nodes that match our constraints
      mode: global
      placement:
        constraints:
          - "node.role==manager"
          - "node.labels.proxy==true"
      labels:
        # Traefik uses labels to configure routing to your services
        # Change the domain to your own
        - "traefik.frontend.rule=Host:proxy.mcwebsite.net"
        # Route traffic to the web interface hosted on port 8080 in the container
        - "traefik.port=8080"
        # Name the backend (not required here)
        - "traefik.backend=traefik"
        # Manually set entrypoints (not required here)
        - "traefik.frontend.entryPoints=http,https"
configs:
  # Traefik configuration file
  traefik.toml:
    file: ./traefik.toml
# This service will be using two external networks
networks:
  # proxy:
  #   external: true
  consul:
    external: true
It turned out there were two optional kernel configs, CONFIG_IP_VS_PROTO_TCP and CONFIG_IP_VS_PROTO_UDP, disabled in my kernel, which, you guessed it, enable TCP and UDP load balancing (the IPVS support that swarm's VIP routing depends on).
I wish I'd checked that about four hours sooner than I did.
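A quick way to check this on your own machine is to grep the kernel config for the IPVS options (a sketch; the config path varies by distro, and some kernels expose /proc/config.gz instead):

```shell
# Check whether the running kernel has the IPVS TCP/UDP options
# that swarm's VIP load balancing relies on.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
  result=$(grep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' "$cfg" || echo "IP_VS options not found in $cfg")
elif [ -r /proc/config.gz ]; then
  result=$(zcat /proc/config.gz | grep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' || echo "IP_VS options not found")
else
  result="kernel config not exposed on this system"
fi
echo "$result"
```

On a working swarm host you want to see both options set to y (built-in) or m (module).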

AWS ECS containers are not connecting but works perfectly in my local machine

I have an application (running at http://localhost:8080) that talks to a backend API which runs at http://localhost:8081. I have dockerized the frontend and the backend separately, and running them through docker-compose locally works perfectly without any issues. But when I run it in ECS, the frontend can't find http://localhost:8081 (the backend).
I am using an Auto Scaling group with an Elastic Load Balancer, and I have both containers defined in a single task definition, with the backend linked to the frontend. When I SSH into my ECS instance and run docker ps -a, I can see both of my containers running on the correct ports, exactly as on my local machine, and I can successfully ping each container from the other.
Task Definition
"CartTaskDefinition": {
    "Type": "AWS::ECS::TaskDefinition",
    "Properties": {
        "ContainerDefinitions": [
            {
                "Name": "cs-cart",
                "Image": "thishandp7/cs-cart",
                "Memory": 400,
                "PortMappings": [
                    {
                        "ContainerPort": "8080",
                        "HostPort": "8080"
                    }
                ],
                "Links": [
                    "cs-server"
                ]
            },
            {
                "Name": "cs-server",
                "Image": "thishandp7/cs-server",
                "Memory": 450,
                "PortMappings": [
                    {
                        "ContainerPort": "8081",
                        "HostPort": "8081"
                    }
                ]
            }
        ]
    }
}
Listeners in my ElasticLoadBalancer,
The first listener is for the frontend and the second one is for the backend
"Listeners": [
    {
        "LoadBalancerPort": 80,
        "InstancePort": 8080,
        "Protocol": "http"
    },
    {
        "LoadBalancerPort": 8081,
        "InstancePort": 8081,
        "Protocol": "tcp"
    }
],
EC2 instance security group ingress rules:
"SecurityGroupIngress": [
    {
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "SourceSecurityGroupId": { "Ref": "ElbSecurityGroup" }
    },
    {
        "IpProtocol": "tcp",
        "FromPort": 8081,
        "ToPort": 8081,
        "SourceSecurityGroupId": { "Ref": "ElbSecurityGroup" }
    },
    {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "CidrIp": "0.0.0.0/0"
    }
],
Docker Compose
version: "3.5"
services:
  cart:
    build:
      context: ..
      dockerfile: docker/Dockerfile
      args:
        APP_LOCATION: /redux-saga-cart/
        PORT: 8080
    networks:
      - server-cart
    ports:
      - 8080:8080
    depends_on:
      - server
  server:
    build:
      context: ..
      dockerfile: docker/Dockerfile
      args:
        APP_LOCATION: /redux-saga-shopping-cart-server/
        PORT: 8081
    ports:
      - 8081:8081
    networks:
      - server-cart
networks:
  server-cart:
Quick update: I have tried it with awsvpc network mode with application load balancer. Still not working
Thanks in advance.
What kind of Docker network mode are you using (bridge/host) on ECS? I don't think localhost will work properly in ECS containers. I had the same issue, so as a temporary test I used the private IP or DNS name of the EC2 host for the communication, e.g. http://10.0.1.100:8081.
Note: make sure to add a security group rule to allow traffic on port 8081 from within the EC2 instance (edit the EC2 security group to allow 8081 with the same security group ID as the source).
For production deployments, I would recommend using service discovery to identify the backend service (Consul by HashiCorp, or AWS private service discovery on ECS).
-- Update --
Since you are running both containers under the same task definition (and therefore the same ECS service), ECS will typically place both containers on the same host. By default, ECS launches containers in bridge mode on Linux, so each container should be able to reach the other through the Docker gateway IP, which is 172.17.0.1 on Linux. In your case, try configuring http://172.17.0.1:8081.
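Alternatively, since the task definition already links cs-server into cs-cart, bridge-mode links add a hosts entry for the linked container, so the frontend can usually reach the backend by the link name instead of a hardcoded gateway IP, provided the frontend process itself (not the user's browser) makes the API calls. A sketch of the container definition change; the BACKEND_URL variable name is hypothetical, so use whatever setting your frontend actually reads:

```json
{
    "Name": "cs-cart",
    "Image": "thishandp7/cs-cart",
    "Links": ["cs-server"],
    "Environment": [
        { "Name": "BACKEND_URL", "Value": "http://cs-server:8081" }
    ]
}
```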

Cannot access a Docker Swarm service through its published port on the host

I have deployed a Docker Swarm cluster on several machines and I am now trying to access to the server running in Docker from the host.
I use a docker-compose file to define my service, and the published port appears when I inspect the service:
"Endpoint": {
    "Spec": {
        "Mode": "vip",
        "Ports": [
            {
                "Protocol": "tcp",
                "TargetPort": 27017,
                "PublishedPort": 3017,
                "PublishMode": "host"
            }
        ]
    },
    "Ports": [
        {
            "Protocol": "tcp",
            "TargetPort": 27017,
            "PublishedPort": 3017,
            "PublishMode": "host"
        }
    ],
    "VirtualIPs": [
        {
            "NetworkID": "**********",
            "Addr": "10.0.0.34/24"
        }
    ]
}
I use host mode because the service is constrained to run on a particular machine, and I want it accessible only from this machine.
But when I list the processes listening on ports on the host machine, the port doesn't appear, and of course I cannot connect to the server from the host through the published port.
I am using iptables as a firewall and restrict open ports as much as possible, but the ports Docker Swarm needs are open.
Here is my docker-compose.yml file:
version: '3.4'
services:
  mongo-router:
    image: mongo
    networks:
      - mongo-cluster
    volumes:
      - db-data-router:/data/db
      - db-config-router:/data/configdb
    ports:
      - target: 27017
        published: 3017
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints:
          - node.labels.mongo.router == true
    command: mongos --configdb cnf/mongodb-cnf_mongo-cnf-1:27017,mongodb-cnf_mongo-cnf-2:27017,mongodb-cnf_mongo-cnf-3:27017
volumes:
  db-data-router:
  db-config-router:
networks:
  mongo-cluster:
    external: true
The network is an overlay network on which all services are subscribing.
I had a similar issue. After installing the Hyper-V feature on Windows (even though the CPU did not support Hyper-V), I was able to access published ports from the host (even in ingress mode).

How to launch a docker bundle with specified exposed ports?

If you are not familiar with docker bundles please read this.
So I have tried to create a simple docker bundle from the following docker-compose.yml
version: "2"
services:
  web:
    image: cohenaj194/apache-simple
    ports:
      - 32701:80
  nginx:
    image: nginx
    ports:
      - 32700:80
But the ports of the docker services this bundle created were not exposed, and I could not access any of the containers in my services through ports 32700 or 32701 as I specified in the docker-compose.yml. How am I supposed to expose the ports of docker bundle services?
Update: I believe my issue may be that my test.dab file that is created with docker-compose bundle does not contain any mention of port 32700 or 32701:
{
    "Services": {
        "nginx": {
            "Image": "nginx@sha256:d33834dd25d330da75dccd8add3ae2c9d7bb97f502b421b02cecb6cb7b34a1b6",
            "Networks": [
                "default"
            ],
            "Ports": [
                {
                    "Port": 80,
                    "Protocol": "tcp"
                }
            ]
        },
        "web": {
            "Image": "cohenaj194/apache-simple@sha256:6196c5bce25e5f76e0ea7cbe8e12e4e1f96bd36011ed37d3e4c5f06f6da95d69",
            "Networks": [
                "default"
            ],
            "Ports": [
                {
                    "Port": 80,
                    "Protocol": "tcp"
                }
            ]
        }
    },
    "Version": "0.1"
}
Attempting to insert the extra ports into this file also does not work and results in the following error:
Error reading test.dab: JSON syntax error at byte 229: invalid character ':' after object key:value pair
Update2: My services are accessible over the default ports docker swarm assigns to services when the host port is not defined:
user@hostname:~/test$ docker service inspect test_nginx --pretty
ID: 3qimd4roft92w3es3qooa9qy8
Name: test_nginx
Labels:
- com.docker.stack.namespace=test
Mode: Replicated
Replicas: 2
Placement:
ContainerSpec:
Image: nginx@sha256:d33834dd25d330da75dccd8add3ae2c9d7bb97f502b421b02cecb6cb7b34a1b6
Networks: 1v5nyqqjnenf7xlti346qfw8n
Ports:
Protocol = tcp
TargetPort = 80
PublishedPort = 30000
I can then get at my service from port 30000 however I want to be able to define the host port my services will use.
As of the Docker 1.12 release, there is no way to specify a "published" port in the bundle. The bundle is a portable format, and exposed ports are non-portable (in that if two bundles used the same ones they would conflict).
So exposed ports will not be part of the bundle configuration. Currently the only option is to run a docker service update to add the published port. In the future there may be other ways to achieve this.
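For example, the update call for the nginx service above might look like this (a sketch; the service name assumes the stack was deployed under the name "test", and actually running the command requires an active swarm, so the sketch only composes and prints it):

```shell
# Compose the "docker service update" call that adds a published port
# to a service created from the bundle; stack name "test" is an assumption.
service="test_nginx"
publish="32700:80"
cmd="docker service update --publish-add $publish $service"
echo "$cmd"
# run it against a live swarm with: eval "$cmd"
```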

Using the host ip in docker-compose

I want to create a docker-compose file that is able to run on different servers.
For that I have to be able to specify the host-ip or hostname of the server (where all the containers are running) in several places in the docker-compose.yml.
E.g. for a consul container where I want to define how the server can be found by fellow consul containers.
consul:
image: progrium/consul
command: -server -advertise 192.168.1.125 -bootstrap
I don't want to hardcode 192.168.1.125 obviously.
I could use env_file: to specify the hostname or IP and adapt it on every server, so I have that information in one place and can use it in docker-compose.yml. But env_file can only be used to set environment variables, not the advertise parameter.
Is there a better solution?
docker-compose allows you to use environment variables from the environment in which you run the compose command.
See the documentation at https://docs.docker.com/compose/compose-file/#variable-substitution
Assuming you can create a wrapper script, as @balver suggested, you can set an environment variable called EXTERNAL_IP that holds the value of $(docker-machine ip).
Example:
#!/bin/sh
export EXTERNAL_IP=$(docker-machine ip)
exec docker-compose "$@"
and
# docker-compose.yml
version: "2"
services:
  consul:
    image: consul
    environment:
      - "EXTERNAL_IP=${EXTERNAL_IP}"
    command: agent -server -advertise ${EXTERNAL_IP} -bootstrap
Unfortunately, if you are using random port assignment there is no way to add an EXTERNAL_PORT, so the ports must be mapped statically.
PS: Something very similar is enabled by default in HashiCorp Nomad, also includes mapped ports. Doc: https://www.nomadproject.io/docs/jobspec/interpreted.html#interpreted_env_vars
I've used docker internal network IP that seems to be static: 172.17.0.1
Is there a better solution?
Absolutely! You don't need the host ip at all for communication between containers. If you link containers in your docker-compose.yaml file, you will have access to a number of environment variables that you can use to discover the ip addresses of your services.
Consider, for example, a docker-compose configuration with two containers: one using consul, and one running some service that needs to talk to consul.
consul:
  image: progrium/consul
  command: -server -bootstrap
webserver:
  image: larsks/mini-httpd
  links:
    - consul
First, by starting consul with just -server -bootstrap, consul figures out its own advertise address, for example:
consul_1 | ==> Consul agent running!
consul_1 | Node name: 'f39ba7ef38ef'
consul_1 | Datacenter: 'dc1'
consul_1 | Server: true (bootstrap: true)
consul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
consul_1 | Cluster Addr: 172.17.0.4 (LAN: 8301, WAN: 8302)
consul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1 | Atlas: <disabled>
In the webserver container, we find the following environment variables available to pid 1:
CONSUL_PORT=udp://172.17.0.4:53
CONSUL_PORT_8300_TCP_START=tcp://172.17.0.4:8300
CONSUL_PORT_8300_TCP_ADDR=172.17.0.4
CONSUL_PORT_8300_TCP_PROTO=tcp
CONSUL_PORT_8300_TCP_PORT_START=8300
CONSUL_PORT_8300_UDP_END=udp://172.17.0.4:8302
CONSUL_PORT_8300_UDP_PORT_END=8302
CONSUL_PORT_53_UDP=udp://172.17.0.4:53
CONSUL_PORT_53_UDP_ADDR=172.17.0.4
CONSUL_PORT_53_UDP_PORT=53
CONSUL_PORT_53_UDP_PROTO=udp
CONSUL_PORT_8300_TCP=tcp://172.17.0.4:8300
CONSUL_PORT_8300_TCP_PORT=8300
CONSUL_PORT_8301_TCP=tcp://172.17.0.4:8301
CONSUL_PORT_8301_TCP_ADDR=172.17.0.4
CONSUL_PORT_8301_TCP_PORT=8301
CONSUL_PORT_8301_TCP_PROTO=tcp
CONSUL_PORT_8301_UDP=udp://172.17.0.4:8301
CONSUL_PORT_8301_UDP_ADDR=172.17.0.4
CONSUL_PORT_8301_UDP_PORT=8301
CONSUL_PORT_8301_UDP_PROTO=udp
CONSUL_PORT_8302_TCP=tcp://172.17.0.4:8302
CONSUL_PORT_8302_TCP_ADDR=172.17.0.4
CONSUL_PORT_8302_TCP_PORT=8302
CONSUL_PORT_8302_TCP_PROTO=tcp
CONSUL_PORT_8302_UDP=udp://172.17.0.4:8302
CONSUL_PORT_8302_UDP_ADDR=172.17.0.4
CONSUL_PORT_8302_UDP_PORT=8302
CONSUL_PORT_8302_UDP_PROTO=udp
CONSUL_PORT_8400_TCP=tcp://172.17.0.4:8400
CONSUL_PORT_8400_TCP_ADDR=172.17.0.4
CONSUL_PORT_8400_TCP_PORT=8400
CONSUL_PORT_8400_TCP_PROTO=tcp
CONSUL_PORT_8500_TCP=tcp://172.17.0.4:8500
CONSUL_PORT_8500_TCP_ADDR=172.17.0.4
CONSUL_PORT_8500_TCP_PORT=8500
CONSUL_PORT_8500_TCP_PROTO=tcp
There is a set of variables for each port EXPOSEd by the consul image. For example, in the webserver container we could interact with the consul REST API by connecting to:
http://${CONSUL_PORT_8500_TCP_ADDR}:8500/
With the new version of Docker Compose (1.4.0) you should be able to do something like this:
docker-compose.yml
consul:
  image: progrium/consul
  command: -server -advertise HOSTIP -bootstrap
bash
$ sed -e "s/HOSTIP/${HOSTIP}/g" docker-compose.yml | docker-compose --file - up
This is thanks to the new feature:
Compose can now read YAML configuration from standard input, rather than from a file, by specifying - as the filename. This makes it easier to generate configuration dynamically:
$ echo 'redis: {"image": "redis"}' | docker-compose --file - up
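The substitution itself is plain sed; here is a self-contained sketch of what the pipeline feeds to docker-compose, with an inline template string standing in for the docker-compose.yml file:

```shell
# Render the HOSTIP placeholder the same way the sed pipeline above does
HOSTIP=192.168.1.125
template='consul:
  image: progrium/consul
  command: -server -advertise HOSTIP -bootstrap'
rendered=$(printf '%s\n' "$template" | sed -e "s/HOSTIP/${HOSTIP}/g")
printf '%s\n' "$rendered"
```

The rendered text is what docker-compose reads from standard input when you pass `--file -`.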
Environment variables, as suggested in the earlier solution, are created by Docker when containers are linked. But the env vars are not automatically updated if the container is restarted, so using them in production is not recommended.
In addition to creating the environment variables, Docker also updates the host entries in the /etc/hosts file. In fact, the Docker documentation recommends using the host entries from /etc/hosts instead of the environment variables.
Reference: https://docs.docker.com/userguide/dockerlinks/
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
extra_hosts works. It's hard-coded into docker-compose.yml, but for my current static setup that's all I need at this moment.
version: '3'
services:
  my_service:
    container_name: my-service
    image: my-service:latest
    extra_hosts:
      - "myhostname:192.168.0.x"
    ...
    networks:
      - host
networks:
  host:
Create a script that sets your host IP in an environment variable at every boot.
sudo vi /etc/profile.d/docker-external-ip.sh
Then copy inside this code:
export EXTERNAL_IP=$(hostname -I | awk '{print $1}')
Now you can use it in your docker-compose.yml file:
version: '3'
services:
  my_service:
    container_name: my-service
    image: my-service:latest
    environment:
      - EXTERNAL_IP=${EXTERNAL_IP}
    extra_hosts:
      - my.external-server.net:${EXTERNAL_IP}
    ...
environment --> sets EXTERNAL_IP as a system environment variable inside your docker container
extra_hosts --> adds this host entry to your docker container
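As an aside, hostname -I prints all of the host's addresses on one line, and the awk '{print $1}' filter in the boot script keeps only the first one. A self-contained sketch, with a hard-coded address list standing in for the hostname -I output:

```shell
# Simulated `hostname -I` output: several addresses separated by spaces
addrs="192.168.0.42 172.17.0.1 10.8.0.1"
# awk '{print $1}' selects the first whitespace-separated field
first=$(printf '%s\n' "$addrs" | awk '{print $1}')
echo "$first"
```

Note that "first" here just means whichever address the kernel lists first, so on multi-homed hosts you may want to pick the interface explicitly instead.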
