I have a docker deployment with 3 services (using docker-compose) and the following port mappings:
nginx (90 → 80)
node (3000 → 3000)
python (8001 → 8000)
The python service is a demo aiohttp app served on port 8000
The node app is a simple SSR frontend served on port 3000
Nginx acts as a reverse proxy and has this clause to route traffic to the python app:
location /api/ {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_buffering off;
    proxy_pass http://python:8000;
}
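For reference, that route is reached from the host through the published nginx port (90 → 80 per the mapping above), e.g.:
$ curl http://localhost:90/api/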
And this one to route to the node app:
location / {
    proxy_pass http://node:3000;
    include /etc/nginx/node_params;
}
The problem is that neither of the other two containers can connect to the python container:
$ docker-compose exec nginx curl 'http://python:8000/api/'
curl: (7) Failed to connect to python port 8000: Connection refused
Same by using the IP directly:
$ docker-compose exec node curl 'http://172.18.0.5:8000/api/'
curl: (7) Failed to connect to 172.18.0.5 port 8000: Connection refused
Checking open ports also fails:
$ docker-compose exec nginx nc -vz python 8000
$ <no response>
Only the python container can connect to itself:
$ docker-compose exec python curl 'http://python:8000/api/'
Response ok
$ docker-compose exec python nc -vz python 8000
python (172.18.0.5:8000) open
The other service (node) can be accessed normally. Pinging the python container also works.
The only way the python container can be reached is from outside the Docker network, via the mapped port (8001), i.e.:
$ curl http://localhost:8001/api/
Response ok
It works with any IP and even from other hosts over the internet:
$ curl http://my-app.mydomain.com:8001/api/
Response ok
I am also unable to reproduce this problem: the same project runs completely fine on my local machine. The only differences are that the server runs Docker 17 (Docker version 17.06.0-ce, build 02c1d87) whereas my local machine runs Docker 18 (Docker version 18.09.5, build e8ff056), and the server is on Fedora 24 versus Fedora 29 on my machine.
What am I doing wrong?
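Some host-side checks that help narrow this kind of thing down (generic commands, shown for reference; the forward-ports output further below is where the answer turned out to be):
$ sudo iptables -t nat -L -n | grep 8000          # any unexpected DNAT/forward rules touching port 8000?
$ sudo firewall-cmd --list-forward-ports          # firewalld port forwards configured on the host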
This is my docker-compose.yml file
version: '3.7'
services:
  python:
    build: api
    ports:
      - 8001:8000
    networks:
      default:
        aliases:
          - python
    restart: always
    volumes:
      - cdn:/app/cdn
  frontend:
    build:
      context: nuxt
    ports:
      - 3000:3000
    networks:
      default:
        aliases:
          - node
    restart: always
  nginx:
    build:
      context: nginx
    ports:
      - 90:80
    restart: always
    volumes:
      - cdn:/app/cdn
volumes:
  cdn:
Edit:
$ docker inspect project_python_1
[
{
"Id": "98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e",
"Created": "2019-05-07T14:03:17.714587695Z",
"Path": "/bin/sh",
"Args": [
"-c",
"cd src && python -m api"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 5268,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-05-07T14:03:18.860468562Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:6b9059304a2e0f5316204acaf37423a557dc8d14dbc3bc72e169430ff38df73c",
"ResolvConfPath": "/var/lib/docker/containers/98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e/hostname",
"HostsPath": "/var/lib/docker/containers/98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e/hosts",
"LogPath": "/var/lib/docker/containers/98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e/98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e-json.log",
"Name": "/project_python_1",
"RestartCount": 0,
"Driver": "overlay2",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"cdn:/app/cdn:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "project_default",
"PortBindings": {
"8000/tcp": [
{
"HostIp": "",
"HostPort": "8001"
}
]
},
"RestartPolicy": {
"Name": "always",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/013c07caf2f6fd59e99a7ec626355e8820d7fe6c0d2f83d5ed0fd2a0c2688ea9-init/diff:/var/lib/docker/overlay2/b1986769f12e6919ad34bb2184a4822a18d01c402b187d8caf7d1088f6020da1/diff:/var/lib/docker/overlay2/919b177579f26bde763973564af0a3762db5fb9d801b9804f5038fb9c60e4250/diff:/var/lib/docker/overlay2/22389c009280043fe76e9e2631e59aa3d6ee35a827613114e39db5f4d29783b7/diff:/var/lib/docker/overlay2/098414feeb05448f0b70dad272c9c81976171d7626e902c9325c5a454b666e59/diff:/var/lib/docker/overlay2/91cf4d7cef0ffb067991afc5b99ebb7ffee6fb02ce6e258304b23202a49d71a9/diff:/var/lib/docker/overlay2/7d13e7a43ebd06c9babf901e9630ff663c5036886df08038ccbda5f730e7c3a5/diff:/var/lib/docker/overlay2/f8db754b7d72fc8cd0fcfdd758a9491ffc1029e7cac0f5f884d8f0ca26aee253/diff:/var/lib/docker/overlay2/b0cb3c0f4b0d1eba56f353767142bdccbe08b9d15cddf0b52f2173cb771f850a/diff:/var/lib/docker/overlay2/228b0ee3f88b6b9ab9a436612f416acb02dd7196fb3870ba632c973f560ca75e/diff:/var/lib/docker/overlay2/ee2d7a211a67bc164f787443de343de51efc89e00592a7516acd26f1a02bf520/diff:/var/lib/docker/overlay2/40a529d74eb8c72cbc3e57db301678996e229b4b4de31a5b3f5642c44018c499/diff:/var/lib/docker/overlay2/95534c69b64738866cd6a87a73dda2f049a28745bea72dbd54c6fb6f662202e3/diff:/var/lib/docker/overlay2/69ce7a7e7ad79423e0abab05a3b4270a4a309686ab4410759e05248286799cb6/diff:/var/lib/docker/overlay2/6525630fd688dbae59699c3cf1246cc5a202e4a4265b6cc17e238cd90867ad54/diff:/var/lib/docker/overlay2/66f8ad83ba1c1bd4c719ebfc004b85f4b6aef9bb15fba5f5ea9b5a58f7eb198c/diff:/var/lib/docker/overlay2/a1ca64fad83b74d88984bd7378905308ed5e9bc142f9fb50392b4414b6076eb2/diff",
"MergedDir": "/var/lib/docker/overlay2/013c07caf2f6fd59e99a7ec626355e8820d7fe6c0d2f83d5ed0fd2a0c2688ea9/merged",
"UpperDir": "/var/lib/docker/overlay2/013c07caf2f6fd59e99a7ec626355e8820d7fe6c0d2f83d5ed0fd2a0c2688ea9/diff",
"WorkDir": "/var/lib/docker/overlay2/013c07caf2f6fd59e99a7ec626355e8820d7fe6c0d2f83d5ed0fd2a0c2688ea9/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "cdn",
"Source": "/var/lib/docker/volumes/cdn/_data",
"Destination": "/app/cdn",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "98f3624ea086",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8000/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"DEBUG=1",
"PATH=scripts:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"GPG_KEY=0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D",
"PYTHON_VERSION=3.7.3",
"PYTHON_PIP_VERSION=19.1"
],
"Cmd": [
"/bin/sh",
"-c",
"cd src && python -m api"
],
"ArgsEscaped": true,
"Image": "project_python",
"Volumes": {
"/app/cdn": {}
},
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "0f0fe6053d92416fd77f6efba7e8282f385c447b8a8d40aa866554ee282896d7",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "project",
"com.docker.compose.service": "python",
"com.docker.compose.version": "1.24.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "397d60b1dbe4733910c9ae2c0dabc1bdb3046d784b25f8fb4f72c28f6d458ff2",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8001"
}
]
},
"SandboxKey": "/var/run/docker/netns/397d60b1dbe4",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"project_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"98f3624ea086",
"api",
"python"
],
"NetworkID": "4145a30ce48519a895707d607265635012341f73db63b9fedf6e86d68fad6641",
"EndpointID": "4b4bafed80cb88693e2c3f3c1b0268f95afefc3eb7e713ce88d20392d36fa85c",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:05",
"DriverOpts": null
}
}
}
}
]
Okay, so I found the culprit: the machine I was deploying to had a port forward set up via firewalld, 8000 → 80, on the main interface eth0, and Docker was applying it when containers tried to reach the python container. I.e. when the nginx container tried to connect to the python container on port 8000, it was actually hitting port 80 upstream and thus failing. A workaround is to either remove the port forward or use an unforwarded port. I have no idea why Docker would apply the system's firewalld rules inside its internal networks.
This is the output of firewall-cmd --list-all
FedoraServer (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: dhcpv6-client
ports: 22/tcp 9090/tcp 90/tcp 8001/tcp 3000/tcp
protocols:
masquerade: no
forward-ports: port=8000:proto=tcp:toport=80:toaddr=
source-ports:
icmp-blocks:
rich rules:
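If dropping that forward is acceptable, removing it with firewall-cmd should look something like this (zone name taken from the output above):
$ sudo firewall-cmd --zone=FedoraServer --permanent --remove-forward-port=port=8000:proto=tcp:toport=80
$ sudo firewall-cmd --reload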
And this is the output of docker network inspect project_default:
[
{
"Name": "project_default",
"Id": "4145a30ce48519a895707d607265635012341f73db63b9fedf6e86d68fad6641",
"Created": "2019-05-07T09:03:17.425575867-05:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"02f1f96b74eb292eeff1eb623e725a41c2a14aa0fc40f727ba78e0a620812254": {
"Name": "project_nginx_1",
"EndpointID": "68c3c7fb40d2e56d6601136a123fc8b7834c0503e3da99be56fac40750247a37",
"MacAddress": ...,
"IPv4Address": "172.18.0.3/16",
"IPv6Address": ""
},
"1e93a55f0d329f4cc8beb681c3e17c6aec1ded73de5dca2fc1eaf49dae788516": {
"Name": "project_mongo_1",
"EndpointID": "3a0a6ae0dfdc922b5fa6032c492376643e4b61415743af7afae2de33576f3acf",
"MacAddress": ...,
"IPv4Address": "172.18.0.4/16",
"IPv6Address": ""
},
"39ac596559da13506abcce9941a06441f42bd1c2d153d118bd13ff9a57f8c538": {
"Name": "project_node_1",
"EndpointID": "6753668d5fb20d908660b48bb757f9b6755c5f4f0bae69c7e02f5431c8f0e575",
"MacAddress": ...,
"IPv4Address": "172.18.0.2/16",
"IPv6Address": ""
},
"98f3624ea0866665204167d9975b050977836b843c8294639e245897c0c8e44e": {
"Name": "project_python_1",
"EndpointID": "4b4bafed80cb88693e2c3f3c1b0268f95afefc3eb7e713ce88d20392d36fa85c",
"MacAddress": ...,
"IPv4Address": "172.18.0.5/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "project",
"com.docker.compose.version": "1.24.0"
}
}
]
This is the stripped output of docker ps:
PORTS NAMES
0.0.0.0:8001->8000/tcp project_python_1
27017/tcp, 28017/tcp project_mongo_1
0.0.0.0:90->80/tcp project_nginx_1
0.0.0.0:3000->3000/tcp project_node_1
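With the forward-port removed and firewalld reloaded, the earlier in-network check should succeed again:
$ docker-compose exec nginx curl 'http://python:8000/api/'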
First step: I loaded the images from a local drive:
docker load -i postgres10.tar
docker load -i drupaldrush1.tar
Second step: I started the containers:
docker run -p5432:5432 postgres:10
docker run -p8081:8081 drupaldrush:1
Third step: displaying containers:
docker ps
results in:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b77bcc79d599 drupaldrush:1 "docker-php-entrypoi…" 33 seconds ago Up 32 seconds 80/tcp, 0.0.0.0:8081->8081/tcp flamboyant_easley
97b9ba5f2779 postgres:10 "docker-entrypoint.s…" About a minute ago Up About a minute 0.0.0.0:5432->5432/tcp competent_fermat
BUT the container is not available at localhost:8081
Fourth step: inspecting the container:
docker inspect flamboyant_easley
resulting in (among other information):
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"8081/tcp": [
{
"HostIp": "",
"HostPort": "8081"
}
]
},
and
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "d5e552bf9c57050fe2debfc7d38a784580309fa0b72c4854a563e78295128912",
"EndpointID": "f61b02c5997b2e391add348686f658b4c596dd60495365cee0fee539743d4792",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
Problem: The container is not available at localhost:8081, nor at 172.17.0.3:8081 or 172.17.0.1:8081
Question: What do I have to do to make it reachable at localhost:8081?
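(A hedged aside, not from the original steps: the PORTS column above shows the image exposing 80/tcp while the published mapping targets container port 8081, so if the PHP app actually listens on 80 inside the container, the publish flag would need to point at 80 instead, e.g.:)
$ docker run -d -p 8081:80 drupaldrush:1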
The problems came from Docker for Windows. Now I have a Linux laptop and everything works just fine.
What I am working on:
nginx (openresty) with memcached and docker-compose.
From nginx I am able to connect to the memcached container by specifying resolver 127.0.0.11; in docker-compose it works fine.
But when I deploy it on AWS multi-container Beanstalk I get a timeout error:
failed to connect: memcache could not be resolved (110: Operation timed out)
but from the nginx container I am able to ping memcached.
NGINX.conf
location /health-check {
    resolver 127.0.0.11 ipv6=off;
    access_by_lua_block {
        local memcached = require "resty.memcached"
        local memc, err = memcached:new()
        if not memc then
            ngx.say("failed to instantiate memc: ", err)
            return
        end
        memc:set_timeout(1000) -- 1 sec
        local ok, err = memc:connect("memcache", 11211)
        if not ok then
            ngx.say("failed to connect: ", err)
            return
        end
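Locally, the route can be exercised through the published openresty port from the compose file below, e.g.:
$ curl http://localhost:8080/health-check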
DOCKER-COMPOSE.YML
version: "3"
services:
  memcache:
    image: memcached:alpine
    container_name: memcached
    ports:
      - "11211:11211"
    expose:
      - "11211"
    networks:
      - default
  nginx:
    image: openresty/openresty:alpine
    container_name: nginx
    volumes:
      # Nginx files
      - ./nginx/:/etc/nginx/:ro
      # Web files
      - ./web/:/var/www/web/:ro
    entrypoint: openresty -c /etc/nginx/nginx.conf
    ports:
      - "8080:8080"
    networks:
      - default
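For a quick local check that the embedded DNS (the 127.0.0.11 resolver from the config above) resolves the name, something like this should work (assumes busybox nslookup is present in the alpine image; both the service name and the container_name are worth trying):
$ docker-compose exec nginx nslookup memcache 127.0.0.11
$ docker-compose exec nginx nslookup memcached 127.0.0.11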
DOCKERRUN.AWS.JSON
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "current-nginx",
"host": {
"sourcePath": "/var/app/current/nginx"
}
},
{
"name": "web",
"host": {
"sourcePath": "/var/www/web/"
}
}
],
"containerDefinitions": [
{
"name": "memcache",
"image": "memcached:alpine",
"essential": true,
"memory": 1000,
"portMappings": [
{
"hostPort": 11211,
"containerPort": 11211
}
]
},
{
"name": "nginx",
"image": "openresty/openresty:alpine",
"essential": true,
"memory": 1000,
"entryPoint": [
"openresty",
"-c",
"/etc/nginx/nginx.conf"
],
"links": [
"memcache"
],
"portMappings": [
{
"hostPort": 8080,
"containerPort": 8080
},
{
"hostPort": 80,
"containerPort": 8080
}
],
"mountPoints": [
{
"sourceVolume": "web",
"containerPath": "/var/www/web/",
"readOnly": false
},
{
"sourceVolume": "current-nginx",
"containerPath": "/etc/nginx",
"readOnly": false
}
]
}
]
}
You have a typo:
memc:connect("memcache", 11211)
should be
memc:connect("memcached", 11211)
(you are missing a "d").
I have 2 Spring Boot micro-service applications, i.e. a web application and a metastore application. This is the properties file for my web application:
spring:
  thymeleaf:
    prefix: classpath:/static/
  application:
    name: web-server
  profiles:
    active: native
server:
  port: ${port:8383}
---
host:
  metadata: http://10.**.**.***:5011
Dockerfile for web application:
FROM java:8-jre
MAINTAINER **** <******>
ADD ./ms.console.ivu-ivu.1.0.1.jar /app/
CMD chmod +x /app/*
CMD ["java","-jar", "/app/ms.console.web-web.1.0.1.jar"]
EXPOSE 8383
Dockerfile for metadata application:
FROM java:8-jre
MAINTAINER ******* <********>
ADD config/* /deploy/config/
CMD chmod +x ./deploy/config/*
COPY ./ms.metastore.1.0.1.jar /deploy/
CMD chmod +x ./deploy/ms.metastore.1.0.1.jar
CMD ["java","-jar","./deploy/ms.metastore.1.0.1.jar"]
EXPOSE 5011
I am using Mesos and Marathon for cluster management. The Marathon script for the metastore is:
{
"id": "/ms-metastore",
"cmd": null,
"cpus": 1,
"mem": 2000,
"disk": 0,
"instances": 0,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"docker": {
"forcePullImage": true,
"image": "*****/****:ms-metastore",
"parameters": [],
"privileged": true
},
"volumes": [],
"portMappings": [
{
"containerPort": 5011,
"hostPort": 0,
"labels": {},
"protocol": "tcp",
"servicePort": 10000
}
]
},
"networks": [
{
"mode": "container/bridge"
}
],
"portDefinitions": [],
"fetch": [
{
"uri": "file:///etc/docker.tar.gz",
"extract": true,
"executable": false,
"cache": false
}
]
}
The Marathon script for the web application is:
{
"id": "/ms-console",
"cmd": null,
"cpus": 1,
"mem": 2000,
"disk": 0,
"instances": 0,
"acceptedResourceRoles": [
"*"
],
"container": {
"type": "DOCKER",
"docker": {
"forcePullImage": true,
"image": "****/****:ms-console",
"parameters": [],
"privileged": true
},
"volumes": [],
"portMappings": [
{
"containerPort": 8383,
"hostPort": 0,
"labels": {},
"protocol": "tcp",
"servicePort": 10000
}
]
},
"networks": [
{
"mode": "container/bridge"
}
],
"portDefinitions": [],
"fetch": [
{
"uri": "file:///etc/docker.tar.gz",
"extract": true,
"executable": false,
"cache": false
}
]
}
From the web application I connect to the metastore using an IP address that is hard coded (in the properties above). I created Docker images for both and ran them on my server. The metastore is now running on a different machine, so my web application is unable to reach that IP.
All you need to do here is expose 5011 as the host port on the metadata server running on the "different machine", using -p:
docker run -d -p 5011:5011 metadata_image ....
Now your web application should be able to access the metadata server at http://$different_machine_ip:5011/
$different_machine_ip = Metadata server IP
However, since they need to be tightly coupled, I would suggest running the web app and the metadata server on the same machine if your metadata server is stateless.
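If the hard-coded IP itself is the problem, a hedged alternative is to override the host.metadata property at run time instead of baking it into the YAML; Spring Boot picks up command-line properties, so something along these lines should work (web_image is a placeholder tag, jar path taken from the Dockerfile above):
$ docker run -d -p 8383:8383 web_image java -jar /app/ms.console.web-web.1.0.1.jar --host.metadata=http://$different_machine_ip:5011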
I'm trying to start a Docker container with Docker's Remote API. I was able to start the container but was unable to expose and map the container's port to a host port.
I need the Remote API JSON for the following command:
docker run -i -t --expose 80 -p 80:80 my_image_nodejs nodejs /var/www/server.js
Right now I'm using the JSON below.
{
"Image": "f96f6e304cfcd630ee51af87baf30dfd42cf1f361da873a2f62ce6654d7a4c6b",
"Memory": 0,
"MemorySwap": 0,
"VolumesFrom": "",
"Cmd": [
"nodejs",
"/var/www/server.js",
"-D"
],
"PortBindings": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "80"
}
]
},
"ExposedPorts": {
"80/tcp": {}
}
}
Thanks in advance
This works for me:
Container create:
ExposedPorts: {"80/tcp": {}, "22/tcp": {}}
Container start:
PortBindings: {"80/tcp": [{ "HostPort": "80" }], "22/tcp": [{ "HostPort": "22" }]}
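Spelled out as requests, with the split described above (older Remote API versions accepted a HostConfig body such as this on the start call; newer versions expect everything at create time, as in the answer below; <id> stands for the id returned by the create call and $DOCKER_DAEMON for your daemon endpoint):
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"Image":"my_image_nodejs","Cmd":["nodejs","/var/www/server.js"],"ExposedPorts":{"80/tcp":{}}}' \
    $DOCKER_DAEMON/containers/create
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"PortBindings":{"80/tcp":[{"HostPort":"80"}]}}' \
    $DOCKER_DAEMON/containers/<id>/start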
If you know how to set up Env, I've just sent my question :-)
I believe your request should be like the one below:
curl -X POST -H "Content-Type: application/json" -d '{
"AttachStdin":false,"AttachStdout":true,"AttachStderr":true,
"ExposedPorts": { "80/tcp": {}},
"Cmd": [
"nodejs","/var/www/server.js","-D"
],
"HostConfig":{
"PortBindings": { "80/tcp": [{ "HostPort": "80" }] }
},
"Image":"my_image_nodejs",
"Tag":"latest"
}' $DOCKER_DAEMON/containers/create
where $DOCKER_DAEMON is the host listening for remote requests.
The PortBindings and ExposedPorts go in different sections. You may want to refer to the Docker Remote API v1.22 documentation for more detail.
Hope this helps.