If you are not familiar with docker bundles, please read this.
So I have tried to create a simple docker bundle from the following docker-compose.yml:
version: "2"
services:
web:
image: cohenaj194/apache-simple
ports:
- 32701:80
nginx:
image: nginx
ports:
- 32700:80
But the ports of the services this bundle created were not exposed, and I could not access any of the containers in my services through ports 32700 or 32701 as I specified in the docker-compose.yml. How am I supposed to expose the ports of docker bundle services?
Update: I believe my issue may be that the test.dab file created by docker-compose bundle does not contain any mention of ports 32700 or 32701:
{
  "Services": {
    "nginx": {
      "Image": "nginx@sha256:d33834dd25d330da75dccd8add3ae2c9d7bb97f502b421b02cecb6cb7b34a1b6",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 80,
          "Protocol": "tcp"
        }
      ]
    },
    "web": {
      "Image": "cohenaj194/apache-simple@sha256:6196c5bce25e5f76e0ea7cbe8e12e4e1f96bd36011ed37d3e4c5f06f6da95d69",
      "Networks": [
        "default"
      ],
      "Ports": [
        {
          "Port": 80,
          "Protocol": "tcp"
        }
      ]
    }
  },
  "Version": "0.1"
}
Attempting to insert the extra ports into this file also does not work and results in the following error:
Error reading test.dab: JSON syntax error at byte 229: invalid character ':' after object key:value pair
Update 2: My services are accessible over the default ports Docker swarm assigns to services when the host port is not defined:
user@hostname:~/test$ docker service inspect test_nginx --pretty
ID:      3qimd4roft92w3es3qooa9qy8
Name:    test_nginx
Labels:
 - com.docker.stack.namespace=test
Mode:    Replicated
 Replicas:   2
Placement:
ContainerSpec:
 Image:      nginx@sha256:d33834dd25d330da75dccd8add3ae2c9d7bb97f502b421b02cecb6cb7b34a1b6
Networks: 1v5nyqqjnenf7xlti346qfw8n
Ports:
 Protocol = tcp
 TargetPort = 80
 PublishedPort = 30000
I can then get at my service from port 30000; however, I want to be able to define the host port my services will use.
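For reference, the port swarm picked can also be read straight from the service (a sketch using the service name from above); this is the same Endpoint.Ports data shown by a plain docker service inspect:
# print just the endpoint ports of the service as JSON
docker service inspect test_nginx --format '{{json .Endpoint.Ports}}'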
As of the Docker 1.12 release, there is no way to specify a "published" port in the bundle. The bundle is a portable format, and exposed ports are non-portable (in that if two bundles used the same ones they would conflict).
So exposed ports will not be part of the bundle configuration. Currently the only option is to run a docker service update to add the published port. In the future there may be other ways to achieve this.
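For example, after this bundle is deployed as a stack named test, the published ports could be added along these lines (a sketch; the service names test_nginx and test_web follow from the stack and compose files above):
# publish host port 32700 -> container port 80 on the nginx service
docker service update --publish-add 32700:80 test_nginx
# publish host port 32701 -> container port 80 on the web service
docker service update --publish-add 32701:80 test_web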
Related
In short, I've got the problem that containers in a swarm can't reach containers that sit on another node. The worker node is in my home network, so it is not directly accessible externally.
Setup:
- Manager node that is a publicly available server; let's give it the IP A.A.A.A
- Worker node that is at home, behind a router, with the internal IP B.B.B.B; the router has the public IP C.C.C.C
The worker can join the swarm without a problem, and the manager can allocate containers to that worker without problems, so some sort of communication is established and working.
What is not working is that containers on the manager can't reach containers on the worker, and vice versa (but they can reach containers on the same node).
docker node ls shows the worker node as Ready and Active. docker node inspect <NODE NAME> shows the IP C.C.C.C under Status.
minimal working example:
docker-compose
version: "3.8"
services:
manager1:
image: jwilder/whoami
hostname: manager1
deploy:
placement:
constraints:
- node.role == manager
manager2:
image: jwilder/whoami
hostname: manager2
deploy:
placement:
constraints:
- node.role == manager
worker1:
image: jwilder/whoami
hostname: worker1
deploy:
placement:
constraints:
- node.role == worker
worker2:
image: jwilder/whoami
hostname: worker2
deploy:
placement:
constraints:
- node.role == worker
deploying with docker stack deploy -c docker-compose.yml testing
docker network inspect testing_default -v on manager shows
"Peers": [
{
"Name": "f0de4150d01e",
"IP": "A.A.A.A"
}
],
"Services": {
"testing_manager1": {
"VIP": "10.0.25.5",
"Ports": [],
"LocalLBIndex": 21646,
"Tasks": [
{
"Name": "testing_manager1.1.w6b2wufu96vk1jmtez9dtewr0",
"EndpointID": "213b7182882e267f249edc52be57f6c56d83efafeba471639f2cbb9398854fe0",
"EndpointIP": "10.0.25.6",
"Info": {
"Host IP": "A.A.A.A"
}
}
]
},
"testing_manager2": {
"VIP": "10.0.25.8",
"Ports": [],
"LocalLBIndex": 21645,
"Tasks": [
{
"Name": "testing_manager2.1.5w51imw8toh81oyeruu48z2pr",
"EndpointID": "41eeb9eaf97cd3f744873ccea9577332e24c799f61171c59447e084de9c829a4",
"EndpointIP": "10.0.25.9",
"Info": {
"Host IP": "A.A.A.A"
}
}
]
}
}
docker network inspect testing_default -v on worker shows
"Peers": [
{
"Name": "75fba815742b",
"IP": "B.B.B.B"
},
{
"Name": "f0de4150d01e",
"IP": "A.A.A.A"
}
],
"Services": {
"testing_worker1": {
"VIP": "10.0.25.10",
"Ports": [],
"LocalLBIndex": 293,
"Tasks": [
{
"Name": "testing_worker1.1.ol4x1h560613l7e7yqv94sj68",
"EndpointID": "3a9dc067b4a0e7e5d26fabdcb887b823f49bfad21fc0ec159edd8dd4f976b702",
"EndpointIP": "10.0.25.11",
"Info": {
"Host IP": "B.B.B.B"
}
}
]
},
"testing_worker2": {
"VIP": "10.0.25.2",
"Ports": [],
"LocalLBIndex": 292,
"Tasks": [
{
"Name": "testing_worker2.1.m2d5fwn83uxg9b7udakq1o41x",
"EndpointID": "8317415fe2b0fa77d1195d33e91fa3354fcfd00af0bab5161c69038eb8fe38bb",
"EndpointIP": "10.0.25.3",
"Info": {
"Host IP": "B.B.B.B"
}
}
]
}
}
So the worker sees the manager as a peer, but does not see the other services. What confuses me is that the Host IP for worker services is B.B.B.B, the internal IP of the worker node (a 192.168.x.x address), instead of the external IP of my home network.
Attaching to one of the containers with docker exec -it <CONTAINER ID> /bin/sh and executing wget -qO- <ANOTHER CONTAINER'S IP>:8000 returns fine for containers on the same node, but Host unreachable for containers on the other node. (Testing with the defined hostnames returns "bad address" for the ones on the other node.)
Looking at the docs, it says at https://docs.docker.com/engine/swarm/swarm-tutorial/#open-protocols-and-ports-between-the-hosts that some ports need to be open between the hosts; see the sketch below.
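For reference, these are the ports the tutorial lists, sketched here as ufw rules (the firewall is an assumption; translate to your own setup, and note the home router would also need to forward them to the worker):
# cluster management communication with manager nodes
sudo ufw allow 2377/tcp
# container network discovery among nodes
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
# overlay network (VXLAN) data traffic
sudo ufw allow 4789/udp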
I was under the impression that creating the swarm comes with a virtual network between the nodes (which kind of seems to be the case, as the services can be created without a problem, so there is a connection). But as it did not work like that, I tested it with just plain port forwarding, which resulted in the manager "sometimes" seeing the other services when inspecting the network, but the containers still can't reach each other.
Am I supposed to spin up a VPN for the nodes to be inside the same network, or what am I missing?
I am using this docker-compose file:
version: "3.7"
services:
mautrix-wsproxy:
container_name: mautrix-wsproxy
image: dock.mau.dev/mautrix/wsproxy
restart: unless-stopped
ports:
- 29331
environment:
#LISTEN_ADDRESS: ":29331"
APPSERVICE_ID: imessage
AS_TOKEN: put your as_token here
HS_TOKEN: put your hs_token here
# These URLs will work as-is with docker networking
SYNC_PROXY_URL: http://mautrix-syncproxy:29332
SYNC_PROXY_WSPROXY_URL: http://mautrix-wsproxy:29331
SYNC_PROXY_SHARED_SECRET: ...
mautrix-syncproxy:
container_name: mautrix-syncproxy
image: dock.mau.dev/mautrix/syncproxy
restart: unless-stopped
environment:
#LISTEN_ADDRESS: ":29332"
DATABASE_URL: postgres://user:pass#host/mautrixsyncproxy
HOMESERVER_URL: http://localhost:8008
SHARED_SECRET: ...
But then docker ps shows
... dock.mau.dev/mautrix/wsproxy ... 0.0.0.0:49156->29331/tcp, :::49156->29331/tcp
And I have to use external port 49156 to connect to its internal port 29331.
Where the heck did this 49156 come from? How do I map it so it's 29331->29331?
docker inspect shows:
"NetworkSettings": {
"Bridge": "",
"SandboxID": ...,
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"29331/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "49156"
},
{
"HostIp": "::",
"HostPort": "49156"
}
]
},
In addition to the provided answers: if you want a fixed port mapping, you can get one by providing two ports separated by a colon with the publish flag, or in the ports array when using compose. The left part is the port exposed on the host system, and the right part is the port inside the container.
# make port 29331 inside the container available on port 8080 on the host system
docker run --publish 8080:29331 busybox
In your case, to answer your question:
How do I map it so it's 29331->29331 ?
services:
  mautrix-wsproxy:
    ports:
      - "29331:29331"
    ...
From the docs for ports:
There are three options:
Specify both ports (HOST:CONTAINER)
Specify just the container port (an ephemeral host port is chosen for the host port).
...
You're using the 2nd option, and just specifying the container port that you want to expose. Since you didn't specify a host port to map that to, "an ephemeral host port is chosen".
Looking at the documentation for ports in a compose file (docs.docker.com), we can read the following:
Short syntax
There are three options:
...
Specify just the container port (an ephemeral host port is chosen for the host port).
...
This means in essence that a random, free host port is chosen.
To explicitly map the container port to a known host port (even if it is the same as the container port), we use the HOST:CONTAINER syntax (see the link above):
version: "3.7"
services:
mautrix-wsproxy:
...
ports:
- "29331:29331"
...
I feel like this is simple, but I can't figure it out. I have two services, consul and traefik, up in a single-node swarm on the same host.
> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3g1obv9l7a9q consul_consul replicated 1/1 progrium/consul:latest
ogdnlfe1v8qx proxy_proxy global 1/1 traefik:alpine *:80->80/tcp, *:443->443/tcp
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
090f1ed90972 progrium/consul:latest "/bin/start -server …" 12 minutes ago Up 12 minutes 53/tcp, 53/udp, 8300-8302/tcp, 8400/tcp, 8500/tcp, 8301-8302/udp consul_consul.1.o0j8kijns4lag6odmwkvikexv
20f03023d511 traefik:alpine "/entrypoint.sh -c /…" 12 minutes ago Up 12 minutes 80/tcp
Both containers have access to the "consul" overlay network, which was created as follows.
> docker network create --driver overlay --attachable consul
ypdmdyx2ulqt8l8glejfn2t25
Traefik is complaining that it can't reach consul.
time="2019-03-18T18:58:08Z" level=error msg="Load config error: Get http://consul:8500/v1/kv/traefik?consistent=&recurse=&wait=30000ms: dial tcp 10.0.2.2:8500: connect: connection refused, retrying in 7.492175404s"
I can go into the traefik container and confirm that I can't reach consul through the overlay network, although it is pingable.
> docker exec -it 20f03023d511 ash
/ # nslookup consul
Name: consul
Address 1: 10.0.2.2
/ # curl consul:8500
curl: (7) Failed to connect to consul port 8500: Connection refused
/ # ping consul
PING consul (10.0.2.2): 56 data bytes
64 bytes from 10.0.2.2: seq=0 ttl=64 time=0.085 ms
However, if I look a little deeper, I find that they are connected; the overlay network just isn't transmitting traffic to the actual destination for some reason. If I go directly to the actual consul IP, it works.
/ # nslookup tasks.consul
Name: tasks.consul
Address 1: 10.0.2.3 0327c8e1bdd7.consul
/ # curl tasks.consul:8500
Moved Permanently.
I could work around this, since technically there will only ever be one copy of consul running, but I'd like to know why the traffic isn't routing in the first place before I get deeper into it. I can't think of anything else to try. Here is various information related to this setup.
> docker --version
Docker version 18.09.2, build 6247962
> docker network ls
NETWORK ID NAME DRIVER SCOPE
cee3cdfe1194 bridge bridge local
ypdmdyx2ulqt consul overlay swarm
5469e4538c2d docker_gwbridge bridge local
5fd928ea1e31 host host local
9v22k03pg9sl ingress overlay swarm
> docker network inspect consul
[
    {
        "Name": "consul",
        "Id": "ypdmdyx2ulqt8l8glejfn2t25",
        "Created": "2019-03-18T14:44:27.213690506-04:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.2.0/24",
                    "Gateway": "10.0.2.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0327c8e1bdd7ebb5a7871d16cf12df03240996f9e590509984783715a4c09193": {
                "Name": "consul_consul.1.8v4bshotrco8fv3sclwx61106",
                "EndpointID": "ae9d5ef1d19b67e297ebf40f6db410c33e4e3c0266c56e539e696be3ed4c81a5",
                "MacAddress": "02:42:0a:00:02:03",
                "IPv4Address": "10.0.2.3/24",
                "IPv6Address": ""
            },
            "c21f5dfa93a2f43b747aedc64a343d94d6c1c2e6558d81bd4a52e2ba4b5fa90f": {
                "Name": "proxy_proxy.sb6oindhmfukq4gcne6ynb2o2.4zvco02we58i3ulbyrsw1b2ok",
                "EndpointID": "7596a208e0b05ba688f318814e24a2a1a3401765ed53ca421bf61c73e65c235a",
                "MacAddress": "02:42:0a:00:02:06",
                "IPv4Address": "10.0.2.6/24",
                "IPv6Address": ""
            },
            "lb-consul": {
                "Name": "consul-endpoint",
                "EndpointID": "23e74716ef54f3fb6537b305176b790b4bc4132dda55f20588d7ce4ca71d7372",
                "MacAddress": "02:42:0a:00:02:04",
                "IPv4Address": "10.0.2.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4099"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "e11b9bd30b31",
                "IP": "10.8.0.1"
            }
        ]
    }
]
> cat consul/docker-compose.yml
version: '3.1'
services:
  consul:
    image: progrium/consul
    command: -server -bootstrap
    networks:
      - consul
    volumes:
      - consul:/data
    deploy:
      labels:
        - "traefik.enable=false"
networks:
  consul:
    external: true
> cat proxy/docker-compose.yml
version: '3.3'
services:
  proxy:
    image: traefik:alpine
    command: -c /traefik.toml
    networks:
      # We need an external proxy network and the consul network
      # - proxy
      - consul
    ports:
      # Send HTTP and HTTPS traffic to the proxy service
      - 80:80
      - 443:443
    configs:
      - traefik.toml
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      # Deploy the service to all nodes that match our constraints
      mode: global
      placement:
        constraints:
          - "node.role==manager"
          - "node.labels.proxy==true"
      labels:
        # Traefik uses labels to configure routing to your services
        # Change the domain to your own
        - "traefik.frontend.rule=Host:proxy.mcwebsite.net"
        # Route traffic to the web interface hosted on port 8080 in the container
        - "traefik.port=8080"
        # Name the backend (not required here)
        - "traefik.backend=traefik"
        # Manually set entrypoints (not required here)
        - "traefik.frontend.entryPoints=http,https"
configs:
  # Traefik configuration file
  traefik.toml:
    file: ./traefik.toml
# This service will be using two external networks
networks:
  # proxy:
  #   external: true
  consul:
    external: true
There were two optional kernel configs, CONFIG_IP_VS_PROTO_TCP and CONFIG_IP_VS_PROTO_UDP, disabled in my kernel which, you guessed it, enable TCP and UDP load balancing (the IPVS protocol support that swarm's VIP-based service load balancing relies on).
I wish I'd checked that about four hours sooner than I did.
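For anyone checking their own kernel, something along these lines should show whether those options are set (paths vary by distro, and /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is enabled):
# check the running kernel's config for IPVS TCP/UDP support
zgrep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' /proc/config.gz
# or, on distros that ship the config under /boot
grep -E 'CONFIG_IP_VS_PROTO_(TCP|UDP)' /boot/config-$(uname -r)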
I have an application (running at http://localhost:8080) that talks to a backend API running at http://localhost:8081. I have dockerized the frontend and the backend separately, and running them through docker-compose locally works perfectly without any issues. But when I run it in ECS, the frontend can't find http://localhost:8081 (the backend).
I am using an Auto Scaling group with an Elastic Load Balancer, and I have both containers defined in a single task definition. Also, I have the backend linked to the frontend. When I SSH into my ECS instance and run docker ps -a, I can see both of my containers running at the correct ports, exactly like on my local machine (result of docker ps -a), and I can successfully ping one container from the other.
Task Definition
"CartTaskDefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"ContainerDefinitions": [
{
"Name": "cs-cart",
"Image": "thishandp7/cs-cart",
"Memory": 400,
"PortMappings":[
{
"ContainerPort": "8080",
"HostPort": "8080"
}
],
"Links": [
"cs-server"
]
},
{
"Name": "cs-server",
"Image": "thishandp7/cs-server",
"Memory": 450,
"PortMappings":[
{
"ContainerPort": "8081",
"HostPort": "8081"
}
],
}
]
}
}
Listeners in my Elastic Load Balancer; the first listener is for the frontend and the second one is for the backend:
"Listeners" : [
{
"LoadBalancerPort": 80,
"InstancePort": 8080,
"Protocol": "http"
},
{
"LoadBalancerPort": 8081,
"InstancePort": 8081,
"Protocol": "tcp"
}
],
EC2 instance security group ingress rules:
"SecurityGroupIngress" : [
{
"IpProtocol" : "tcp",
"FromPort" : 8080,
"ToPort" : 8080,
"SourceSecurityGroupId" : { "Ref": "ElbSecurityGroup" }
},
{
"IpProtocol" : "tcp",
"FromPort" : 8081,
"ToPort" : 8081,
"SourceSecurityGroupId" : { "Ref": "ElbSecurityGroup" }
},
{
"IpProtocol" : "tcp",
"FromPort" : 22,
"ToPort" : 22,
"CidrIp" : "0.0.0.0/0"
}
],
Docker Compose
version: "3.5"
services:
cart:
build:
context: ..
dockerfile: docker/Dockerfile
args:
APP_LOCATION: /redux-saga-cart/
PORT: 8080
networks:
- server-cart
ports:
- 8080:8080
depends_on:
- server
server:
build:
context: ..
dockerfile: docker/Dockerfile
args:
APP_LOCATION: /redux-saga-shopping-cart-server/
PORT: 8081
ports:
- 8081:8081
networks:
- server-cart
networks:
server-cart:
Quick update: I have tried it with the awsvpc network mode and an Application Load Balancer. Still not working.
Thanks in advance.
Which Docker network mode are you using (bridge/host) on ECS? I don't think localhost will work properly in ECS containers. I had the same issue, so as a temporary test I used the private IP or DNS name of the EC2 host for my communication, e.g. http://10.0.1.100:8081.
Note: make sure to add a security group rule to allow 8081 traffic from within EC2 (edit the EC2 security group to allow 8081 with the same security group ID as source).
For production deployments, I would recommend using service discovery to identify the backend service (Consul by HashiCorp) or AWS private service discovery on ECS.
-- Update --
Since you are running both containers under the same task definition (under the same ECS service), ECS will typically bring both Docker containers up on the same host, so try something like the following.
By default, ECS brings containers up in bridge mode on Linux.
You should then be able to have the containers communicate using the Docker gateway IP, 172.17.0.1, on Linux. So for your case, try configuring http://172.17.0.1:8081, as sketched below.
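One way to sanity-check this from the ECS host (a sketch; the container name placeholder is an assumption, take the real one from docker ps, and wget must exist in your image):
# open a shell in the frontend container
docker exec -it <frontend-container> sh
# then, inside the container, test the backend via the default bridge gateway
wget -qO- http://172.17.0.1:8081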
I have deployed a Docker Swarm cluster on several machines, and I am now trying to access the server running in Docker from the host.
I use a docker-compose file to define my service, and the exposed port appears when I inspect the service:
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 27017,
"PublishedPort": 3017,
"PublishMode": "host"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 27017,
"PublishedPort": 3017,
"PublishMode": "host"
}
],
"VirtualIPs": [
{
"NetworkID": "**********",
"Addr": "10.0.0.34/24"
}
]
}
I use host mode because the service is constrained to run on a particular machine, and I want it accessible only from this machine.
But when I list the processes listening on ports on the host machine, the port doesn't appear.
And of course I cannot connect to the server from the host through the exposed port.
I am using iptables as a firewall and restrict open ports as much as possible, but the ports Docker Swarm needs are open.
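For context, with publish mode host the port is bound only on the node actually running the task (the routing mesh is bypassed), so a check like this would have to happen on the node matching the placement constraint (a sketch; 3017 is the published port from the stack file below):
# list listening TCP sockets and filter for the published port
sudo ss -tlnp | grep 3017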
Here is my docker-compose.yml file:
version: '3.4'
services:
  mongo-router:
    image: mongo
    networks:
      - mongo-cluster
    volumes:
      - db-data-router:/data/db
      - db-config-router:/data/configdb
    ports:
      - target: 27017
        published: 3017
        protocol: tcp
        mode: host
    deploy:
      placement:
        constraints:
          - node.labels.mongo.router == true
    command: mongos --configdb cnf/mongodb-cnf_mongo-cnf-1:27017,mongodb-cnf_mongo-cnf-2:27017,mongodb-cnf_mongo-cnf-3:27017
volumes:
  db-data-router:
  db-config-router:
networks:
  mongo-cluster:
    external: true
The network is an overlay network to which all services are subscribed.
I had a similar issue. After installing the Hyper-V feature on Windows (even though the CPU did not support Hyper-V), I was able to access published ports from the host (even in ingress mode).