I need to bind ports 1024 to 2048 to my host when running a container via the REST API. I've tried using syntax similar to docker run, but with no luck:
PortBindings: {
"1024-2048": [{ "HostPort": "1024-2048" }],
}
How can I achieve this?
I believe you can't. You will need to list each port separately. However, given that it's done programmatically, this shouldn't be a problem. For reference, when you create a Docker container using the Docker CLI and specify a range, e.g., docker run -p 1000-1010:1000-1010/tcp, and then inspect the created container using docker inspect, you will see that all ports in that range are listed separately there, too, e.g.:
"PortBindings": {
"1000/tcp": [
{
"HostIp": "",
"HostPort": "1000"
}
],
"1001/tcp": [
{
"HostIp": "",
"HostPort": "1001"
}
],
"1002/tcp": [
.....
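Since the expansion has to happen on your side anyway, here is a minimal Python sketch (purely illustrative; the function name is my own) of building the per-port maps that the Engine API expects:

def expand_port_range(start, end, proto="tcp"):
    """Build per-port ExposedPorts / PortBindings maps for a port range."""
    exposed = {f"{port}/{proto}": {} for port in range(start, end + 1)}
    bindings = {f"{port}/{proto}": [{"HostPort": str(port)}] for port in range(start, end + 1)}
    return exposed, bindings

exposed_ports, port_bindings = expand_port_range(1024, 2048)
# exposed_ports goes into "ExposedPorts" and port_bindings into
# "HostConfig"."PortBindings" of the POST /containers/create payload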
You are missing the protocol. From the documentation for the Docker Engine API v1.24:
PortBindings - A map of exposed container ports and the host port they
should map to. A JSON object in the form
{ <port>/<protocol>: [{ "HostPort": "<port>" }] }
Take note that the port is specified as a string, not an integer.
So your request should have:
PortBindings: {
"1024-2048/tcp": [{ "HostPort": "1024-2048" }],
}
Related
I have an ECS task where I have the main container and a sidecar container. I'm creating the task on EC2 and the network mode is bridge. My main container needs to talk to the sidecar container. But I am unable to do so.
My task definition is:
[
{
"name": "my-sidecar-container",
"image": "ECR image name",
"memory": "256",
"cpu": "256",
"essential": true,
"portMappings": [
{
"containerPort": "50051",
"hostPort": "50051",
"protocol": "tcp"
}
],
"links": [
"app"
]
},
{
"name": "app",
"image": "<app image URL here>",
"memory": "256",
"cpu": "256",
"essential": true
}
]
The sidecar is a gRPC server.
To check whether I can list all the gRPC endpoints, I run the following from my main app container, but it does not work:
root@my-main-app# ./grpcurl -plaintext localhost:50051 list
Failed to dial target host "localhost:50051": dial tcp 127.0.0.1:50051: connect: connection refused
But if I use the EC2 private IP, it works, e.g.:
root@my-main-app# ./grpcurl -plaintext 10.0.56.69:50051 list
grpc.reflection.v1alpha.ServerReflection
health.v1.Health
server.v1.MyServer
So it is definitely a networking issue. Wondering how to fix it!
If you're using bridge mode and linking, then you need to use the link name as the address instead of localhost. You would need to link the sidecar container to the app container (you are currently doing the opposite) and then use the sidecar's link name as the address.
If you were using awsvpc mode, then you would use localhost:containerport to communicate between containers in the same task.
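As a rough sketch of what that change looks like when registering the task definition with boto3 (the family name here is made up, and the other values simply mirror the question), the link moves onto the app container so the app can reach the sidecar at my-sidecar-container:50051:

import boto3  # AWS SDK for Python; assumes credentials and region are configured

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="my-task",  # hypothetical family name
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "my-sidecar-container",
            "image": "ECR image name",
            "memory": 256,
            "cpu": 256,
            "essential": True,
            "portMappings": [
                {"containerPort": 50051, "hostPort": 50051, "protocol": "tcp"}
            ],
        },
        {
            "name": "app",
            "image": "<app image URL here>",
            "memory": 256,
            "cpu": 256,
            "essential": True,
            # link the sidecar into the app, so the app can resolve it by name
            "links": ["my-sidecar-container"],
        },
    ],
)
# inside the app container: ./grpcurl -plaintext my-sidecar-container:50051 list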
I am setting up debugging of FastAPI running in a container with VS Code. When I launch the debugger, the FastAPI app runs in the container, but when I access the webpage from the host there is no response from the server.
However, if I start the container from the command line with the following command, I can access the webpage from the host:
docker run -p 8001:80/tcp with-batch:v2 uvicorn main:app --host 0.0.0.0 --port 80
Here is the tasks.json file:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--port",
"80"
],
"module": "uvicorn"
}
},
{
"type": "docker-build",
"label": "docker-build",
"platform": "python",
"dockerBuild": {
"tag": "with-batch:v2"
}
}
]
}
Here is the launch.json file:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Debug Flask App",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}/app",
"remoteRoot": "/app"
}
],
"projectType": "fastapi"
}
}
]
}
The debug console, the docker-run: debug terminal, and the Python Debug Console outputs were posted as screenshots; the debug console shows Uvicorn running on http://127.0.0.1:80.
Explanation
The reason you are not able to access your container at that port is that VS Code runs your container with a random, unique localhost port mapped to it.
You can see this by running docker container inspect {container_name} which should print out a JSON representation of the running container. In your case you would write docker container inspect withbatch-dev
The JSON is an array of objects, in this case just the one object, which has a "NetworkSettings" key containing a "Ports" object that would look similar to:
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "55016"
}
]
}
That port 55016 would be the port you can connect to at localhost:55016
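If you would rather look that port up programmatically than read the inspect JSON by hand, a small sketch with the Python Docker SDK (my own addition, reusing the container name from the question) could look like this:

import docker  # pip install docker; talks to the local Docker daemon

client = docker.from_env()
container = client.containers.get("withbatch-dev")  # container name from the question

# same data as `docker container inspect` -> NetworkSettings.Ports
ports = container.attrs["NetworkSettings"]["Ports"]
host_port = ports["80/tcp"][0]["HostPort"]
print(f"App is reachable at http://localhost:{host_port}")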
Solution
With some tinkering and reading of the documentation, it seems "projectType": "fastapi" should launch your browser for you at that specific port. Additionally, your debug console output shows Uvicorn running on http://127.0.0.1:80. 127.0.0.1 is localhost (also known as the loopback interface), which means the process in your docker container is only listening for internal connections. Think of docker containers as being in their own subnetwork relative to your computer (there are exceptions to this, but that's not important). If they want to accept outside connections (from your computer or other containers), they need to tell the container's virtual network interface to do so. In the context of a server, you use the address 0.0.0.0 to indicate that you want to listen on all IPv4 addresses on that interface.
That got a little deep, but suffice it to say, you should be able to add --host 0.0.0.0 to your run arguments and you will be able to connect. You would add this in tasks.json, under the python args of the docker-run task, alongside the other uvicorn arguments:
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--host",
"0.0.0.0",
"--port",
"80"
],
"module": "uvicorn"
}
},
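For reference, the --host 0.0.0.0 argument is equivalent to what you would pass when launching Uvicorn from Python yourself; a minimal sketch (not part of the original setup, assuming the FastAPI instance is named app in main.py):

# equivalent of: uvicorn main:app --host 0.0.0.0 --port 80
import uvicorn

from main import app  # assumes the FastAPI app object lives in main.py

if __name__ == "__main__":
    # 0.0.0.0 listens on all interfaces inside the container, so the
    # published host port (8001 -> 80) can actually reach the server
    uvicorn.run(app, host="0.0.0.0", port=80)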
I am currently deploying a Docker service with the Mesos + Marathon combination.
This means that the IP address of the Docker container is constantly changing.
For example, if you run MongoDB on Marathon, you would use the configuration below.
The port mapping specifies the port exposed on the host. After a day the service automatically shuts down and restarts, and the IP changes.
While looking into Mesos-DNS and studying the docker command, I learned that you can find the IP of a service by an alias name if you specify a network alias in Docker.
I thought this method would make access easier, without using Mesos-DNS.
However, in Marathon the Docker service is defined in JSON format, like below.
I am asking because I do not know how to specify the Docker network alias option, or what keyword or method to use.
{
"id": "mongodbTest",
"instances": 1,
"cpus": 2,
"mem": 2048.0,
"container": {
"type": "DOCKER",
"docker": {
"image": "mongo:latest",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 27017,
"hostPort": 0,
"servicePort": 0,
"protocol": "tcp"
}
]
},
"volumes": [
{
"containerPath": "/etc/mesos-mg",
"hostPath": "/var/data/mesos-mg",
"mode": "RW"
}
]
}
}
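For reference, the Docker network-alias technique mentioned above looks roughly like this with the Python Docker SDK (a sketch of the plain-Docker approach only, with made-up network and alias names; how to express it in the Marathon JSON is exactly what is being asked):

import docker  # pip install docker

client = docker.from_env()

# aliases only work on user-defined networks, not the default bridge
net = client.networks.create("my-net", driver="bridge")

# start mongo and attach it to the network under the alias "mongodb"
mongo = client.containers.run("mongo:latest", detach=True)
net.connect(mongo, aliases=["mongodb"])

# any other container attached to "my-net" can now reach it at mongodb:27017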
From reading the AWS documentation, it appears that when using Docker as the platform on Elastic Beanstalk (EB) (as opposed to Tomcat, etc.), only a single port can be exposed. I'm trying to understand why Amazon created this restriction; it seems you can't even serve both HTTP and HTTPS.
I'd like to use Docker as the container since it allows me to run several interconnected server processes within the same container, some of which require multiple ports (e.g. RTSP). Are there any workarounds for this kind of application, where say an RTSP and HTTP server can both be running within the same Docker container on EB?
Even though none of the documentation explains it, the Single Container Docker environment does support mapping multiple ports:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "8080"
},
{
"HostPort": "9000",
"ContainerPort": "8090"
}
]
}
With the above configuration, container port 8080 will be mapped to the host machine's port 80, and container port 8090 will be mapped to the host machine's port 9000.
To be clear, the first port in the list is always mapped to the host machine's port 80; the remaining ports are mapped to their specified HostPort, or to the same number as the container port when no host port is given.
You could write an on-start config file for Elastic Beanstalk's load balancer / reverse proxy to forward the additional ports to its EC2 instance(s). An example from Ben Delarre:
"Resources" : {
"AWSEBLoadBalancerSecurityGroup": {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable 80 inbound and 8080 outbound",
"VpcId": "vpc-un1que1d",
"SecurityGroupIngress" : [ {
"IpProtocol" : "tcp",
"FromPort" : "80",
"ToPort" : "80",
"CidrIp" : "0.0.0.0/0"
}],
"SecurityGroupEgress": [ {
"IpProtocol" : "tcp",
"FromPort" : "8080",
"ToPort" : "8080",
"CidrIp" : "0.0.0.0/0"
} ]
}
},
"AWSEBLoadBalancer" : {
"Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties" : {
"Subnets": ["subnet-un1que1d2"],
"Listeners" : [ {
"LoadBalancerPort" : "80",
"InstancePort" : "8080",
"Protocol" : "HTTP"
} ]
}
}
}
Ref:
Customizing AWS EB's ENV http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-resources.html
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/using-elb-listenerconfig-quickref.html
In its current form, the Docker support in Elastic Beanstalk is marginal at best. FWIW I wrote a blog post evaluating EB that touched on this. I found that in addition to your observation about ports, it's not possible to run multiple containers, nor to even customize the docker run command. Hopefully they'll extend support in a future update.
The documentation for the commandline interface says the following:
To bind a port of the container to a specific interface of the host
system, use the -p parameter of the docker run command:
General syntax
docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image>
When no host interface is provided, the port is bound to
all available interfaces of the host machine (aka INADDR_ANY, or
0.0.0.0). When no host port is provided, one is dynamically allocated. The possible combinations of options for TCP port are the following
So I was wondering how I do the same with the REST API.
With POST /containers/create I tried:
"PortSpecs": ["5432:5432"] this seems to expose the port but not bind it to the host interface.
"PortSpecs": ["5432"] gives me the same result as the previous one.
"PortSpecs": ["0.0.0.0:5432:5432"] this returns the error Invalid hostPort: 0.0.0.0 which makes sense.
When I do sudo docker ps the container shows 5432/tcp which should be 0.0.0.0:5432/tcp.
Inspecting the container gives me the following:
"NetworkSettings": {
"IPAddress": "172.17.0.25",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"5432/tcp": null
}
}
Full inspect can be found here.
This is an undocumented feature. I found my answer on the mailing list:
When creating the container you have to set ExposedPorts:
"ExposedPorts": { "22/tcp": {} }
When starting your container you need to set PortBindings:
"PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }
There is already an issue on GitHub about this.
Starting containers with PortBindings in the HostConfig was deprecated in v1.10 and removed in v1.12.
Both these configuration parameters should now be included when creating the container.
POST /containers/create
{
"Image": image_id,
"ExposedPorts": {
"22/tcp": {}
},
"HostConfig": {
"PortBindings": { "22/tcp": [{ "HostPort": "" }] }
}
}
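For completeness, here is a rough sketch of that request using Python and requests, assuming the daemon is reachable over TCP on localhost:2375 (many setups expose only the unix socket instead, so adjust the URL, image, and ports to your environment):

import requests

DOCKER_API = "http://localhost:2375"  # hypothetical endpoint

payload = {
    "Image": "my-image:latest",  # placeholder image
    "ExposedPorts": {"22/tcp": {}},
    "HostConfig": {
        "PortBindings": {"22/tcp": [{"HostPort": "11022"}]}
    },
}

# create the container with the port configuration in place...
resp = requests.post(f"{DOCKER_API}/containers/create", json=payload)
resp.raise_for_status()
container_id = resp.json()["Id"]

# ...then start it; no HostConfig is passed at start time on current API versions
requests.post(f"{DOCKER_API}/containers/{container_id}/start").raise_for_status()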
I know this question has already been answered; I used the above solution, and here is how I did it in Java using the Docker Java Client v3.2.5:
// parse "hostPort:containerPort" into a binding, e.g. "11022:22"
PortBinding portBinding = PortBinding.parse(hostPort + ":" + containerPort);
HostConfig hostConfig = HostConfig.newHostConfig()
        .withPortBindings(portBinding);
// expose the container port and attach the host config when creating the container
CreateContainerResponse container =
        dockerClient.createContainerCmd(imageName)
                .withHostConfig(hostConfig)
                .withExposedPorts(ExposedPort.parse(containerPort + "/tcp"))
                .exec();