Exposing multiple ports from Docker within Elastic Beanstalk

From reading the AWS documentation, it appears that when using Docker as the platform on Elastic Beanstalk (EB) (as opposed to Tomcat, etc.), only a single port can be exposed. I'm trying to understand why Amazon created this restriction; it seems you can't even serve both HTTP and HTTPS.
I'd like to use Docker as the container since it allows me to run several interconnected server processes within the same container, some of which require multiple ports (e.g. RTSP). Are there any workarounds for this kind of application, where, say, an RTSP server and an HTTP server are both running within the same Docker container on EB?

Even though none of the documentation explains it, a Single Container Docker environment does support mapping multiple ports:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8080"
    },
    {
      "HostPort": "9000",
      "ContainerPort": "8090"
    }
  ]
}
With the above configuration, container port 8080 will be mapped to the host machine's port 80, and container port 8090 will be mapped to the host machine's port 9000.
To be clear: the first port in the list is always mapped to the host machine's port 80, and the remaining ports are mapped to their specified HostPort, or to the same number as the ContainerPort when no HostPort is given.
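To illustrate that second rule, a sketch (my own extension of the example above, not taken from the documentation): adding a third entry with no HostPort, e.g. for port 8443, would map host port 8443 to container port 8443, while the first entry still lands on host port 80 and the second on host port 9000.
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    { "ContainerPort": "8080" },
    { "HostPort": "9000", "ContainerPort": "8090" },
    { "ContainerPort": "8443" }
  ]
}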

You could write an on-start config file for Elastic Beanstalk's load balancer/reverse proxy to forward the additional ports to its EC2 instance(s). An example from Ben Delarre:
"Resources" : {
  "AWSEBLoadBalancerSecurityGroup": {
    "Type" : "AWS::EC2::SecurityGroup",
    "Properties" : {
      "GroupDescription" : "Enable 80 inbound and 8080 outbound",
      "VpcId": "vpc-un1que1d",
      "SecurityGroupIngress" : [ {
        "IpProtocol" : "tcp",
        "FromPort" : "80",
        "ToPort" : "80",
        "CidrIp" : "0.0.0.0/0"
      } ],
      "SecurityGroupEgress": [ {
        "IpProtocol" : "tcp",
        "FromPort" : "8080",
        "ToPort" : "8080",
        "CidrIp" : "0.0.0.0/0"
      } ]
    }
  },
  "AWSEBLoadBalancer" : {
    "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
    "Properties" : {
      "Subnets": ["subnet-un1que1d2"],
      "Listeners" : [ {
        "LoadBalancerPort" : "80",
        "InstancePort" : "8080",
        "Protocol" : "HTTP"
      } ]
    }
  }
}
Ref:
Customizing AWS Elastic Beanstalk environment resources: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-resources.html
Elastic Load Balancing listener configuration quick reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-elb-listenerconfig-quickref.html
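For context, a sketch of how such a customization is typically applied: the "Resources" block above goes into a config file inside the application source bundle, for example .ebextensions/loadbalancer.config (the file name is my choice; any *.config file under .ebextensions is picked up), wrapped as a complete JSON document:
{
  "Resources": {
    "AWSEBLoadBalancerSecurityGroup": { ... },
    "AWSEBLoadBalancer": { ... }
  }
}
Only the outer structure is shown here; the property bodies are exactly those listed above. YAML is also accepted for .config files.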

In its current form, the Docker support in Elastic Beanstalk is marginal at best. FWIW I wrote a blog post evaluating EB that touched on this. I found that in addition to your observation about ports, it's not possible to run multiple containers, nor to even customize the docker run command. Hopefully they'll extend support in a future update.

Related

How to expose a docker port to the host in an Elastic Beanstalk Docker environment?

Current environment:
I'm having an issue in my Beanstalk Docker environment with exposing the expected port through to the host. I can see that my Docker container is running successfully inside the Docker daemon (docker ps shows it up and running), but I cannot reach it via port 8080 on the Beanstalk endpoint, although it does work on port 80.
Issue: I'm trying to access my EB endpoint on the same port (8080) that I use in the Dockerfile. How can I do that?
Here is my sample Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789.dkr.ecr.us-east-1.amazonaws.com/registry",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080,
      "HostPort": 8080
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/path/to/log",
      "ContainerDirectory": "/path/to/log"
    }
  ]
}
You should create the container with -p 8080:80 style arguments; as far as I can see, you have only done the equivalent of -p 8080.
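For reference, the plain docker run equivalent of that suggestion would be something like the following (the image name is taken from the question's Dockerrun.aws.json); in -p HOST:CONTAINER form, host port 8080 is mapped to container port 80:
docker run -d -p 8080:80 123456789.dkr.ecr.us-east-1.amazonaws.com/registry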

Docker port range binding using REST api

I need to bind ports 1024 to 2048 to my host when running a container via the REST API. I've tried using syntax similar to docker run's, but with no luck:
PortBindings: {
  "1024-2048": [{ "HostPort": "1024-2048" }],
}
How to achieve this?
I believe you can't; you will need to list each port separately (see the sketch after the inspect output below). However, given that it's done programmatically, this shouldn't be a problem. For reference, when you create a container using the Docker CLI and specify a range, e.g. docker run -p 1000-1010:1000-1010/tcp, and you then inspect the created container using docker inspect, you will see that all ports in that range are listed separately there, too, e.g.:
"PortBindings": {
  "1000/tcp": [
    {
      "HostIp": "",
      "HostPort": "1000"
    }
  ],
  "1001/tcp": [
    {
      "HostIp": "",
      "HostPort": "1001"
    }
  ],
  "1002/tcp": [
  .....
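A rough sketch of doing the same programmatically from a shell (curl with Unix-socket support is assumed, and "my-image" is a placeholder): expand the range client-side and send one binding per port to POST /containers/create, the same way the CLI does.
# Build one port/tcp entry per port in the range.
exposed=$(for p in $(seq 1024 2048); do printf '"%d/tcp":{},' "$p"; done)
bindings=$(for p in $(seq 1024 2048); do
  printf '"%d/tcp":[{"HostIp":"","HostPort":"%d"}],' "$p" "$p"
done)

# Create the container with the expanded ExposedPorts and PortBindings maps
# (the ${var%,} expansions strip the trailing commas).
curl --unix-socket /var/run/docker.sock \
     -H 'Content-Type: application/json' \
     -d "{\"Image\": \"my-image\",
          \"ExposedPorts\": {${exposed%,}},
          \"HostConfig\": {\"PortBindings\": {${bindings%,}} }}" \
     http://localhost/containers/create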
You are missing the protocol. From the documentation for the Docker Engine API v1.24:
PortBindings - A map of exposed container ports and the host port they
should map to. A JSON object in the form
{ <port>/<protocol>: [{ "HostPort": "<port>" }] }
Take note that the port is specified as a string, not an integer value.
So your request should have:
PortBindings: {
  "1024-2048/tcp": [{ "HostPort": "1024-2048" }],
}

Setting Team City Build Agent Port Number in Marathon

Trying to deploy a teamcity build agent on the Mesosphere Marathon platform and having problems with the port mappings.
By default the TeamCity server will try to talk to the TeamCity agent on port 9090.
Therefore I set the container port like so:
"containerPort": 9090
However, when I deploy the TeamCity agent container, Marathon maps port 9090 to a port in the 30000 range.
When the TeamCity server talks back to the container on port 9090, it fails because the port is mapped to the 30000 range.
I've figured out how to get this dynamic port into the TeamCity config file by running the following sed command in the Marathon args:
"args": ["sh", "-c", "sed -i -- \"s/ownPort=9090/ownPort=$PORT0/g\" buildAgent.properties; bin/agent.sh run"],
When the container is spun up it will swap out ownPort=9090 for ownPort=$PORT0 in buildAgent.properties and then start the agent.
However, now that the agent is on port 30000, "containerPort": 9090 is invalid; it should be "containerPort": $PORT0, but that is invalid JSON, as containerPort must be an integer.
I have tried setting "containerPort": 0, which should dynamically assign a port, but with this value I cannot get the container to start: it just disappears straight away and Marathon keeps trying to redeploy it.
When I log onto the Mesos slave host and run docker ps -a, I can see the container's ports are blank:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28*********0 teamcityagent "\"sh -c 'sed -i -- 7 minutes ago Exited (137) 2 minutes ago mes************18a8
This is the Marathon JSON file I'm using; the Marathon version is 0.8.2:
{
  "id": "teamcityagent",
  "args": ["sh", "-c", "sed -i -- \"s/ownPort=9090/ownPort=$PORT0/g\" buildAgent.properties; bin/agent.sh run"],
  "cpus": 0.05,
  "mem": 4000.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "teamcityagent",
      "forcePullImage": true,
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 0,
          "hostPort": 0,
          "servicePort": 0,
          "protocol": "tcp"
        }
      ]
    }
  }
}
Any help would be greatly appreciated!
Upgrading from Marathon 0.8.2 to 0.9.0 fixed the issue: with "containerPort": 0, a port is now dynamically assigned properly, the container starts up, and the TeamCity server can communicate with it.

Docker's containers communication using Consul

I have read about service discovery for Docker using Consul, but I can't understand it.
Could you explain to me how I can run two Docker containers, discover the host of the second container from the first using Consul, and send a message to it?
You would need to run a Consul agent in client mode inside each Docker container. Each Docker container will need a Consul service definition file to let the agent know to advertise its service to the Consul servers.
They look like this:
{
  "service": {
    "name": "redis",
    "tags": ["master"],
    "address": "127.0.0.1",
    "port": 8000,
    "checks": [
      {
        "script": "/usr/local/bin/check_redis.py",
        "interval": "10s"
      }
    ]
  }
}
And a Service Health Check to monitor the health of the service. Something like this:
{
  "check": {
    "id": "redis",
    "name": "Redis",
    "script": "/usr/local/bin/check_redis_ping_returns_pong.sh",
    "interval": "10s"
  }
}
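Once both definition files are in place, the agent inside each application container has to be started in client mode pointing at them. A minimal sketch (the paths and the server address are placeholders):
# Run the Consul agent in client mode, loading the service and check
# definitions above from a config directory, and join the Consul servers.
consul agent -data-dir=/tmp/consul \
             -config-dir=/etc/consul.d \
             -join 10.0.0.10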
In the other Docker container, your code would find the Redis service either via DNS or via the Consul servers' HTTP API:
dig @localhost -p 8600 redis.service.consul
curl $CONSUL_SERVER/v1/health/service/redis?passing
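For example, a sketch of consuming the HTTP API from the other container (jq is assumed to be available, and CONSUL_SERVER is the same placeholder used above, e.g. http://consul-server:8500); it queries for healthy redis instances and extracts the first address/port pair:
curl -s "$CONSUL_SERVER/v1/health/service/redis?passing" \
  | jq -r '.[0].Service | "\(.Address):\(.Port)"'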

Binding a port to a host interface using the REST API

The documentation for the commandline interface says the following:
To bind a port of the container to a specific interface of the host
system, use the -p parameter of the docker run command:
General syntax
docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image>
When no host interface is provided, the port is bound to all available interfaces of the host machine (aka INADDR_ANY, or 0.0.0.0). When no host port is provided, one is dynamically allocated. The possible combinations of options for a TCP port are the following.
So I was wondering: how do I do the same with the REST API?
With POST /containers/create I tried:
"PortSpecs": ["5432:5432"] this seems to expose the port but not bind it to the host interface.
"PortSpecs": ["5432"] gives me the same result as the previous one.
"PortSpecs": ["0.0.0.0:5432:5432"] this returns the error Invalid hostPort: 0.0.0.0 which makes sense.
When I do sudo docker ps, the container shows 5432/tcp, which should be 0.0.0.0:5432/tcp.
Inspecting the container gives me the following:
"NetworkSettings": {
  "IPAddress": "172.17.0.25",
  "IPPrefixLen": 16,
  "Gateway": "172.17.42.1",
  "Bridge": "docker0",
  "PortMapping": null,
  "Ports": {
    "5432/tcp": null
  }
}
Full inspect can be found here.
This is an undocumented feature. I found my answer on the mailing list:
When creating the container you have to set ExposedPorts:
"ExposedPorts": { "22/tcp": {} }
When starting your container you need to set PortBindings:
"PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }
There already is an issue on github about this.
Starting containers with PortBindings in the HostConfig was deprecated in v1.10 and removed in v1.12.
Both these configuration parameters should now be included when creating the container.
POST /containers/create
{
  "Image": image_id,
  "ExposedPorts": {
    "22/tcp": {}
  },
  "HostConfig": {
    "PortBindings": { "22/tcp": [{ "HostPort": "" }] }
  }
}
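To tie this back to the original question about binding to a specific host interface: each binding also accepts a HostIp. A sketch of the full REST call (curl with Unix-socket support is assumed and the image name is a placeholder); 0.0.0.0 publishes on all interfaces, while a specific interface address restricts the binding to it:
curl --unix-socket /var/run/docker.sock \
     -H 'Content-Type: application/json' \
     -d '{
           "Image": "my-postgres-image",
           "ExposedPorts": { "5432/tcp": {} },
           "HostConfig": {
             "PortBindings": { "5432/tcp": [{ "HostIp": "0.0.0.0", "HostPort": "5432" }] }
           }
         }' \
     http://localhost/containers/create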
I know this question has already been answered; I used the above solution, and here is how I did it in Java using the Docker Java Client v3.2.5:
// Build a "host:container" port binding from the two port values
// (hostPort, containerPort and imageName are defined elsewhere).
PortBinding portBinding = PortBinding.parse(hostPort + ":" + containerPort);
HostConfig hostConfig = HostConfig.newHostConfig()
        .withPortBindings(portBinding);

// Create the container with the binding and the matching exposed port.
CreateContainerResponse container = dockerClient.createContainerCmd(imageName)
        .withHostConfig(hostConfig)
        .withExposedPorts(ExposedPort.parse(containerPort + "/tcp"))
        .exec();
