I am new to ECS and I am trying to deploy a couple of containers in an ECS task using Fargate.
One container runs an Angular 2 app served by nginx; the other container is the backend, running Spring Boot and listening on port 42048.
I am using the awsvpc network mode with Fargate, and I have to do it that way.
The Angular app communicates with the backend at localhost:42048/some_url, and this works fine in my local Docker setup, but in AWS the front end can't find the backend. Currently I have the ports mapped as 80 for the front end and 42048 for the backend; when deployed locally, the front end was able to reach the backend at localhost:42048.
Any help would be appreciated. Thank you
Linking is not allowed with the awsvpc network mode.
You can use linking only when the network mode is set to bridge.
links
Type: string array
Required: no
The links parameter allows containers to communicate with each other
without the need for port mappings. Only supported if the network
mode of a task definition is set to bridge. The name:internalName
construct is analogous to name:alias in Docker links. Up to 255
letters (uppercase and lowercase), numbers, hyphens, and underscores
are allowed. For more information about linking Docker containers, go
to
https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/.
This parameter maps to Links in the Create a container section of the
Docker Remote API and the --link option to docker run.
Note
This parameter is not supported for Windows containers or tasks using the awsvpc network mode.
Important
Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.
See task_definition_parameters in the AWS documentation.
With the bridge network mode, you have to define both containers in the same task definition and then reference the container name in the links parameter.
That is, mention the name of the backend container in the frontend container's definition.
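For bridge mode, a rough sketch of what that looks like in the task definition (illustrative only; the image names are reused from the example further down, and the port values are assumptions):

"networkMode": "bridge",
"containerDefinitions": [
    {
        "name": "backend",
        "image": "my-repo/springboot",
        "cpu": 256,
        "memory": 1024,
        "essential": true,
        "portMappings": [ { "containerPort": 42048 } ]
    },
    {
        "name": "frontend",
        "image": "my-repo/angularapp",
        "cpu": 256,
        "memory": 1024,
        "essential": true,
        "links": [ "backend" ],
        "portMappings": [ { "containerPort": 80, "hostPort": 80 } ]
    }
]

With the link in place, the frontend container can reach the backend at http://backend:42048 instead of localhost.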
With Fargate, if you want to access your backend at localhost:42048, you can try configuring your frontend and backend in the same task definition. When the task is deployed, all containers defined in the same task definition run on the same underlying host and can reach each other via localhost.
Remember that Fargate storage is ephemeral and your backend shouldn't maintain application state in the container.
...
"containerDefinitions": [
{
"name": "frontend",
"image": "my-repo/angularapp",
"cpu": 256,
"memory": 1024,
"essential": true,
"portMappings": [ {
"containerPort": 8080,
"hostPort": 8080
}
]
},
{
"name": "backend",
"image": "my-repo/springboot",
"cpu": 256,
"memory": 1024,
"essential": true,
"portMappings": [ {
"containerPort": 42048,
"hostPort": 42048
}
]
}
]
...
But I'm afraid this approach isn't suitable for production-grade use.
I came across an interesting issue which I tried to resolve and investigate on my own. I feel like the solution is at my fingertips, but I just can't grasp it.
Any help would be really appreciated.
Common Docker network:
a local bridge, with the default driver: "Subnet": "172.19.0.0/16", "Gateway": "172.19.0.1"
a proxy container (nginx) which handles SSL for two domains and the internal routing to the two containers on the network
"120253b9613d95bb4d540abe3676c7d309cdc9ac531cc81de9acd548737b829e": {
"Name": "youtrack",
"EndpointID": "0ff42cc51535663df36a47f79f41f4df5bdb229c411b2aa0200fffc0c3e7b824",
"MacAddress": "02:42:ac:13:00:03",
"IPv4Address": "172.19.0.3/16",
"IPv6Address": ""
},
"639a75318859f2b60c93585a77d259f919f307a3f0653fd75cbcc8cad932e3ac": {
"Name": "proxy",
"EndpointID": "7923e5fbe27e0b2a4ff3b8f765a5a2fb34b3b97c10f2545fb875facd04d71fdb",
"MacAddress": "02:42:ac:13:00:02",
"IPv4Address": "172.19.0.2/16",
"IPv6Address": ""
},
"8b0d7cf3f4d09d9fe281e742e747c9947528519e730c0e58a07bec9a6d097083": {
"Name": "gitlab",
"EndpointID": "84b454e38b9178f5f55cefb310f839de1abcd2ed0e7b58018e9522e08dfbff01",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
}
When communicating from the outside world with container-one.company.com and container-two.company.com through the proxy, there are no issues; everything works as expected.
The thing is, these two containers have an integration over HTTPS (a VCS integration) between YouTrack, the issue tracker, and GitLab, the Git code repository. The main integration property is the URL of the other server: container-one.company.com's settings include the container-two.company.com domain as the HTTPS endpoint for the integration, and container-two.company.com's settings include the container-one.company.com URL for the VCS integration.
I have already checked that BOTH domains resolve to the SERVER IP and not Docker's internal IP.
For testing I set up a reverse proxy with an IDENTICAL config, but pointing BACK to the server: SERVER1 had all three containers, and SERVER2 had only an nginx container serving both domains, which proxied back to SERVER1's proxy. The only difference was that both domains now resolved to an IP that was not the containers' own server but a different one.
That's why I think there is an issue with how Docker handles these network calls: it somehow tries to "optimize" both calls to go through Docker's internal network instead of through the proxy, like all external calls do.
I can also mention that nginx's config is really only SSL termination and mapping based on the domain: if the domain is container-one.company.com, proxy_pass to the youtrack container and its port; if it is container-two.company.com, proxy_pass to the gitlab container and its internal port. All plain HTTP behind the proxy, no complications.
NOTE:
I have already tried the following solutions:
ports:
- "IP:443:443"
and the --add-host property.
The behaviour was the same in both cases.
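For illustration, the attempts looked roughly like this (a sketch only; the host IP and image names are placeholders, not values from the actual setup):

# publish the proxy's HTTPS port bound explicitly to the host's public IP
docker run -d --name proxy -p 203.0.113.10:443:443 my-proxy-image

# force the other domain to resolve to the host's public IP inside the container
docker run -d --name youtrack --add-host container-two.company.com:203.0.113.10 my-youtrack-image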
UPDATE1:
For more clarification, I have adjusted the question and provided two diagrams with additional explanations of the behaviour.
This is the current setup and the desired setup. As you can see, when trying to set up the VCS integration ON the same host, from the YouTrack container to the GitLab one using the FQDN, it fails, saying the URL does not map/point to a valid repository.
However, if you move both domains to a different server, it works: the top-level proxy (in the second diagram) is actually on a different server, and the container points to that proxy (I even used the add-host property, so it isn't even a valid domain, just the dummy proxy on a different host); then the integration went through.
Meanwhile, this kind of configuration (moving the FQDN from the same host to a different one and PROXYing it back to the original, with NO changes to ANY of the original proxy configurations) works like a charm.
TLDR:
The issue was that UFW did not let the full 443:443 binding through, so I had to add:
sudo ufw allow 443
sudo ufw reload
The symptoms were:
Every other PC in our network COULD connect to port 443.
Every local application COULD connect to port 443.
ONLY containers ON the same host COULD NOT connect to port 443, even with a full IP:PORT:PORT binding on the proxy container.
With the ports configuration:
- "IP:443:443"
it then worked.
Full debugging was done with:
docker run --rm -it --network container:gitlab nicolaka/netshoot:latest
which allowed telnet, ping, and curl tests that showed that, even though ping went through, connections on port 443 did not.
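The checks from inside that netshoot container looked roughly like this (a sketch; the domain is the one from the question):

ping -c 3 container-two.company.com         # ICMP went through
nc -zv container-two.company.com 443        # TCP connect to 443 timed out before the UFW rule
curl -vk https://container-two.company.com  # hung as well; both worked after `ufw allow 443`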
I'm trying to debug a container DNS resolution issue with Docker on Ubuntu Linux. As described at https://docs.docker.com/config/containers/container-networking/#dns-services, Docker uses an embedded DNS server inside each container. Are there any commands that list the entries of Docker's embedded DNS server (like the entries in /etc/resolv.conf)?
I have tried docker inspect and docker network inspect.
I also tried starting dockerd in debug mode but have not found anything useful. It does show a config file being read, like below.
INFO[2020-07-13T14:39:58.517777580+05:45] detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/systemd/resolve/resolv.conf
But I want to list the runtime DNS entries of the Docker network served by the embedded DNS address 127.0.0.11. Is that possible?
It is possible, but you have to parse the JSON printed by docker network inspect.
Run docker network ls to get the running networks names, and then docker network inspect NETWORK_NAME to see the containers in it.
Look for the "Containers" keyword in the JSON, it is a list of connected devices. Look for the instance with the "IPv4Address": "127.0.0.11/24" entry, the "Name" key is the DNS name.
I.E. countly_countly-endpoint is the DNS name that resolves to ip 10.0.8.4/24:
"lb-countly_countly": {
"Name": "countly_countly-endpoint",
"EndpointID": "9f7abf354b5fbeed0be6483b53516641f6c6bbd37ab991544423f5aeb8bdb771",
"MacAddress": "02:42:0a:00:08:04",
"IPv4Address": "10.0.8.4/24",
"IPv6Address": ""
}
Note that the countly_ prefix matches the network name shown by docker network ls; that way you can be sure the names are unique, and you can configure your services to talk to each other using DNS names of the form NETWORK-NAME_SERVICE-NAME.
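If you just want the name-to-address pairs, a quick way to extract them (a sketch that assumes jq is installed; replace NETWORK_NAME with your network's name):

docker network inspect NETWORK_NAME \
  | jq '.[0].Containers | to_entries[] | {name: .value.Name, ip: .value.IPv4Address}'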
One of the machines where we need to deploy Docker containers has its eth0 IP set within the Docker IP range (172.17.0.1/16).
The problem is that when we try to access this server through NAT from outside (SSH etc.), everything "hangs". I guess the packets get misdirected by the Docker iptables rules.
What is the recommendation in this case if we cannot change the eth0 IP?
Docker should avoid subnet collisions if it sees all of the in-use subnets when it creates its networks. However, if you change networks (e.g. on a laptop), you want to set up address pools for Docker to use. Steps for this are in my slides here:
https://sudo-bmitch.github.io/presentations/dc2018eu/tips-and-tricks-of-the-captains.html#19
The important detail is to set up an /etc/docker/daemon.json file containing:
{
    "bip": "10.15.0.0/24",
    "default-address-pools": [
        {"base": "10.20.0.0/16", "size": 24},
        {"base": "10.40.0.0/16", "size": 24}
    ]
}
Adjust the IP ranges as needed. Stop all containers in the bad networks, delete the containers, delete any user-created networks, restart the Docker engine, and then recreate the user-created networks and containers (often the last two steps just involve removing and redeploying a compose project or swarm stack).
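Roughly, that sequence looks like this (a sketch only; the container, network, and project names are placeholders, and the last step assumes the workload is a compose project):

docker ps -a                          # find the containers attached to the colliding networks
docker rm -f app-container            # stop and remove them
docker network rm my-app-network      # remove the user-created networks
sudo systemctl restart docker         # restart the engine so daemon.json is picked up
docker compose up -d                  # or docker-compose up -d: recreate networks and containers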
Note, it wasn't clear if you were attempting to connect to your host or container. You should not be connecting directly to a container IP externally (with very few exceptions). Instead you publish the desired ports that you need to be able to access externally, and you connect to the host IP on that published port to reach the container. E.g.
docker run -d -p 8080:80 nginx
will start nginx with its normal port 80 inside the container, which you normally cannot reach externally. Publishing host port 8080 (it could just as easily be 80 to match the container port) maps connections on that host port through to container port 80.
One important prerequisite is that the application inside the container must listen on all interfaces, not just 127.0.0.1, so it can be reached from outside that container's network namespace.
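A quick way to check which interface the app binds to (a sketch; the container name is a placeholder and it assumes ss is available in the image):

docker exec my-container ss -lnt
# a LISTEN entry on 0.0.0.0:80 (or *:80) is reachable through the published port,
# while one on 127.0.0.1:80 is not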
I'm using Docker 1.9.1's remote API to create a container.
One thing I'm trying to accomplish is that, among all the exposed ports of an image, I only want to publish a few of them (in other words, give them host port mappings); at the same time I don't want to manage which host ports to use, but want Docker to pick random available ones.
For example, if an image exposes ports 80, 443 and 22, what I want is something like this in docker run terms (I know this is not possible on the command line like this):
docker run -p {a random available port}:80 image
Can I achieve something like this through the remote API? Right now I can only set PublishAllPorts = true, but that publishes all the ports and wastes too many host ports.
The Docker REST API for creating and starting a container allows you to define port bindings. For a random mapping to a host port, use "PortBindings": { "80/tcp": [{ "HostPort": "" }] }.
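A minimal sketch of such a create-container request (assumptions: the daemon's remote API is reachable at localhost:2375, API version v1.21 matching Docker 1.9.1, and the image name is a placeholder). Empty "HostPort" values publish only the listed ports on random available host ports:

curl -s -X POST "http://localhost:2375/v1.21/containers/create?name=web" \
  -H "Content-Type: application/json" \
  -d '{
        "Image": "my-image",
        "ExposedPorts": { "80/tcp": {}, "443/tcp": {} },
        "HostConfig": {
          "PortBindings": {
            "80/tcp":  [{ "HostPort": "" }],
            "443/tcp": [{ "HostPort": "" }]
          }
        }
      }'
# after starting the container, `docker port web` shows which host ports were assigned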
I have a dockerized Dropwizard service deployed on Marathon. I am using Hazelcast as a distributed cache, which I start as part of my Dropwizard service. I have placed a constraint to ensure that each container is started on a unique host.
"constraints": [
[
"hostname",
"UNIQUE"
]
],
I have exposed two ports on my Docker container: 10012 for my service and 10013 for Hazelcast. I am using Zookeeper for my Dropwizard service discovery. Thus, when I start up my Hazelcast instance, I have access to the hostnames of all the machines on which my Docker containers are running, and I add all of them as below.
TcpIpConfig tcpIpConfig = join.getTcpIpConfig();
// finder is a handle to a service discovery service and the following gets me all the hosts on which my docker containers will run.
List<ServiceNode<ShardInfo>> nodes = finder.getAllNodes();
nodes.stream()
     .peek(serviceNode -> log.info("Adding " + serviceNode.representation() + " to hazelcast."))
     .map(serviceNode -> serviceNode.getHost())
     .forEach(host -> tcpIpConfig.addMember(host));
tcpIpConfig.setRequiredMember(null).setEnabled(true);
Now the issues:
If I use network type BRIDGE while deploying on Marathon, I don't know the Docker container IPs, and thus my two Docker containers don't know each other. It looks something like this:
ip-10-200-2-219.ap-southeast-1.compute.internal (docker host) - 172.12.1.18 (docker container ip)
ip-10-200-2-220.ap-southeast-1.compute.internal (docker host) - 172.12.1.20 (docker container ip)
From zookeeper I get the docker host IPs but not the docker container IPs.
If I use network type HOST then everything works, but the issue is that I then have to make sure that the hosts my Docker containers run on always have ports 10012 and 10013 available. (With BRIDGE the Docker container ports are bound to random host ports.)
Analysis:
The two Docker containers are inside their own networks, localized to the slaves. They need to recognise each other using the public IP of the slave and the bridged host port to which 5701 (or whatever Hazelcast port you are using) is mapped.
Solution
In the TCP/IP configuration, set the public address and port when starting the instance. All instances will do this, and they will talk to each other using the Marathon slave IP and the random port mapped to it.
Use the HOST and PORT_5701 variables provided by Marathon and available inside the container to do this.
Config hzConfig = new Config();
hzConfig.getNetworkConfig().setPublicAddress(
        String.format("%s:%s",
                      System.getenv("HOST"),
                      System.getenv("PORT_5701")));
Refer to the Hazelcast network configuration documentation to understand a bit more about the public address option.
You can use the TCP/IP discovery mechanism and make sure the Hazelcast nodes bind to the public IP of the Docker container. This solution only helps, though, if you know the Docker container IPs before your deployment.
<hazelcast>
    ...
    <network>
        ...
        <join>
            <multicast enabled="false">
            </multicast>
            <tcp-ip enabled="true">
                <member>docker-host1</member>
                <member>docker-host2</member>
                <member>172.12.1.20</member>
                <member>192.168.1.21</member>
            </tcp-ip>
            ...
        </join>
        ...
    </network>
    ...
</hazelcast>
Getting Hazelcast members to discover each other and form a cluster is tricky. I ended up following the advice of #bitsofinfo:
https://github.com/hazelcast/hazelcast/issues/9219
#santanu's answer is correct. The public address needs to be set properly for Hazelcast members to be able to discover each other. Here's a parameterized way of doing this: https://github.com/gagangoku/hazelcast-docker