Cannot reach container in ECS Fargate cluster that looks properly configured - docker

I have an ECS Fargate cluster that I configured after reading instructions in another Stack Overflow post. I have several containers that I've pushed into ECR repositories and can launch successfully. But going to http://PUBLIC-IP-ADDRESS does not reach the service exposed by the container.
In my most recent test I used the httpd container from Docker Hub because it is simple and serves a default web page. Still no luck.
The VPC has two subnets - public and private - and was constructed per the instructions in the above-linked post. I am attaching the containers -- as an ECS Service -- to the public subnet and configuring the Service to assign a public IP address.
Public subnet (CIDR 10.0.1.0/24) has this route table:
10.0.0.0/16 local
0.0.0.0/0 igw-0ad0671cc2924857e
Network ACL inbound rules
100 ALL Traffic ALL ALL 0.0.0.0/0 ALLOW
* ALL Traffic ALL ALL 0.0.0.0/0 DENY
Network ACL outbound rules
100 ALL Traffic ALL ALL 0.0.0.0/0 ALLOW
* ALL Traffic ALL ALL 0.0.0.0/0 DENY
(These are the default rules)
The private subnet (CIDR 10.0.2.0/24) has the same configuration, except that its route table points to a NAT gateway instead of the internet gateway. The NAT gateway itself lives in the public subnet.
The only thing I did differently from the VPC configuration instructions is the security group. When creating the service, I used the default security group that came with the VPC. This security group allows all traffic, both inbound and outbound.
For the Task Definition, I created an httpd task definition using the awsvpc network mode (it's a Fargate cluster), 0.5 GB memory, 0.25 vCPU, exposing port 80 on the container.
For the Service, I attached it to the VPC and the public subnet, named it httpd, and enabled assignment of a public IP address.
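For reference, the service creation is roughly equivalent to this CLI call (the cluster name, subnet ID, and security group ID are placeholders for my actual values):
aws ecs create-service --cluster my-cluster --service-name httpd --task-definition httpd --desired-count 1 --launch-type FARGATE --network-configuration "awsvpcConfiguration={subnets=[subnet-PUBLIC],securityGroups=[sg-DEFAULT],assignPublicIp=ENABLED}"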
The Service and the contained Task launch correctly, and the Task shows a public IP address. Accessing that IP address results in a long wait until the browser eventually gives up (times out).
UPDATE --
I was not aware of the need for a load balancer, so I attempted to add one. It made no difference.
Adding the load balancer required creating more public subnets configured as above. The Application Load Balancer is attached to the VPC and to the public subnets, and it listens on HTTP (port 80).
I then re-created the service for the httpd container, configuring it for the load balancer as best I could. The service description gives this summary:
Target Group Name Container Name Container Port
ecs-ecs-go-httpd httpd 80

An ELB shouldn't be needed if you are trying to hit a specific container. Run a TCP test against port 80 and see what the response is.
PowerShell: Test-NetConnection <IP address> -Port 80
Bash: nc -z <IP address> 80
If you cannot connect over 80 then move back through the pieces (a CLI sketch of these checks follows the list):
Open up the task
Click on the link to the ENI
Double check the security group has 80 open
Open up the task definition
Open up the container definition
Ensure that the host port and container port are both 80
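If you prefer the CLI, roughly the same checks can be done like this (the ENI ID, security group ID, and task definition name are placeholders):
aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0 --query 'NetworkInterfaces[0].Groups'
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query 'SecurityGroups[0].IpPermissions'
aws ecs describe-task-definition --task-definition httpd --query 'taskDefinition.containerDefinitions[0].portMappings'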
If it is not an issue with either the security group or the task definition that leaves only the VPC configuration and the container image.
Test the container image by launching it locally and hitting it on localhost:80
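For the httpd image from Docker Hub, a quick local test would look something like this:
docker run -d -p 80:80 --name httpd-test httpd
curl http://localhost:80
If the default "It works!" page comes back locally, the image itself is fine and the problem is somewhere in the AWS setup.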
Test the VPC configuration by launching another EC2 instance in the same subnet as your container with the same security group
SSH in
If you can SSH in then the issue is most likely not a routing issue. If you cannot SSH in, double check the route table, the VPC ACL, and the subnet ACL.
From the new EC2 instance (example commands are sketched after this list)
Ensure 80 is wide open and try to hit your container with its local IP
Install a web server
Try to hit this from outside on port 80
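From that instance, the checks might look roughly like this (assuming Amazon Linux; the container's private IP is a placeholder):
curl http://10.0.1.50:80
sudo yum install -y httpd && sudo systemctl start httpd
Then, from your own machine: curl http://EC2-PUBLIC-IP:80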
If you can hit 80 on your EC2 instance with the same security group in the same subnet then you have eliminated any possible issues with networking and can focus on what might be wrong with the ECS configuration.
Lastly, don't forget to lock your security group and ACLs back down if you opened anything up wider.

Related

Squid proxy service on Docker with multiple IPs on the same interface

I am running squid in Docker and have a problem connecting to other sites from a selected IP.
Connections always go out from the default host IP, not the additional failover IP.
My setup:
a) server
- dedicated server at ovh.org
- 1 dedicated IP that comes with the server, plus 6 additional IPs from the OVH 'failover IP' service
- each failover IP is added to the main interface, so the main eno1 interface has 7 IPs
- I added all the failover IPs following the guide on ovh.org
b) problem
- I added my failover IP to squid.conf, but when I connect to this IP remotely and go through squid, outgoing traffic always uses the host IP, not the failover IP. What is wrong?
- my gist with the docker-compose setup and squid.conf:
https://gist.github.com/mxcdh/22baa3d7fa2d9dcb2279520b81d71afa
P.S.
When I log in to the host (not into squid in Docker) and test the failover IP directly from the terminal, it works, but through squid it does not.

docker swarm causing problems with iptables

I'm having an issue with Docker Swarm modifying iptables when I need control over the ports. I am trying to use UFW to make my own rules.
My setup has an nginx proxy that routes all traffic on designated ports to containers on different nodes, over specific ports on a local network interface. On all of the node servers I want to block every single port on the public interface, so that all traffic has to come via the proxy server.
The problem is that Docker opens the ports on the public interface. Say I have a container published as 80:80; Docker opens port 80 on the public interface, even though I don't want that server to be directly accessible. Traffic needs to come in via the reverse proxy and down through the private network interface only.
I read that with docker-compose you can bind the port to the 127.0.0.1 IP address instead of letting Docker bind to 0.0.0.0, like this:
"127.0.0.1:80:80"
However, this doesn't work in a Docker Swarm YAML config; when I try it, I get this error:
error decoding 'Ports': Invalid hostPort: 127.0.0.1
This is causing me a headache. I don't want Docker touching my iptables rules at all, but I can't find a solid answer on how to stop it.
At the moment I am using the OVH firewall directly on the IP addresses, but this isn't an ideal solution, as the OVH firewall is basic and doesn't allow me to set the port ranges I need.

Can we use a DNS name for a service running on a Docker Swarm on my local system?

Say I have a Swarm of 3 nodes on my local system, and I create a service, say Drupal, with a replication of 3 in this swarm. Each node then has one container running Drupal. When I want to access this in my browser, I have to use the IP address of one of the nodes, <IP Address>:8080, to reach Drupal.
Is there a way I can set a DNS name for this service and access it using the DNS name instead of an IP address and port number?
You need to configure the DNS server that you use on the host making the query. So if your laptop queries the public DNS, you need to create a public DNS entry that would resolve from the internet (on a domain you own). This should resolve to the docker host IPs running the containers, or an LB in front of those hosts. And then you publish the port on the host to the container you want to access.
You should not be trying to talk directly to the container IP; container IPs are not routable from outside of the Docker host. And the Docker DNS used for service discovery is for container-to-container communication, which is separate from communication from outside of Docker that goes through a published port.
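A rough sketch of that approach with the Drupal service from the question (the service name and the domain are placeholders):
docker service create --name drupal --replicas 3 --publish published=8080,target=80 drupal
Then create a DNS A record such as drupal.example.com pointing at the swarm node IPs (or at a load balancer in front of them) and browse to http://drupal.example.com:8080.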

Port forwarding in Jelastic with Docker

I have a simple application with a REST API on port 4567, and I run it in my Docker container in the Jelastic cloud.
Now I want to expose port 4567 to the outside world. When I run Docker locally I can do it like this: docker run -d -p 4567:4567 -ti myapp /bin/bash
But how can I do that in Jelastic without an external IP? I've also tried to use Jelastic endpoints, but the port is not available.
I also found some information in Jelastic's docs: "In case your Docker container does not have an external IP attached, Jelastic performs an automatic port redirect.
This means that if application listens to a custom port on TCP level, Jelastic will try to automatically detect it and forward all the incoming requests to this port number.
As a result, in most cases, your dockerized app or service will become available over the Internet under the corresponding node’s domain right after creation."
To build the Docker image I use a Dockerfile, and it has an "EXPOSE 4567" line.
#Catalina,
Note that there is no need to expose ports in Jelastic, because it uses PCS container-based virtualization, which is more technologically advanced than the native Docker containers' implementation: it has built-in support for virtual host-routed network adapters.
By default, Jelastic automatically detects the ports that an application is predefined to listen on in the corresponding Docker image settings, and applies the required redirects to make the container accessible right after deployment.
These are the ports the Shared Load Balancer (SLB) listens on and can forward to the containers:
80 -> HTTP
8080 -> HTTP
8686 -> HTTP
8443 -> SSL
4848 (glassfish admin) -> SSL
4949 (wildfly admin) -> HTTP
7979 (import/export feature) -> SSL
If you want to specify a port other than the one selected by the auto-redirect functionality, you can do so by setting the JELASTIC_EXPOSE Docker variable in the environment settings wizard.
The JELASTIC_EXPOSE variable accepts the following values:
0 or DISABLED or FALSE - to disable auto-redirect
a number within the 1-65535 range - to define the required port for setting the corresponding redirect
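For the REST API from the question, that would mean setting, for example:
JELASTIC_EXPOSE=4567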
Alternatively, you can map the required private port via an endpoint (to make it accessible over the Shared LB) and bind your service to the received address and shared port.

I cannot access my Docker container image via HTTP

I created an image with apache2 running locally in a Docker container via a Dockerfile exposing port 80, then pushed it to my Docker Hub repository.
I created a new Container Engine instance in my project on Google Cloud. Within it I have two machines, the master and Node1.
Then I created a Pod specifying the name of my image on Docker Hub and configuring the ports "containerPort" and "hostPort" as 6379 and 80 respectively.
I accessed Node1 via SSH and ran $ sudo docker ps -l, and found that my Docker container is indeed there.
I created a service, configuring the ports as in the Pod: "containerPort" and "hostPort" as 6379 and 80 respectively.
I checked that the firewall allows access to port 80. Even though it did not seem necessary, I also created a rule to allow access through port 6379.
But when I enter http://IP_ADDRESS:PORT, the site is not available.
Any idea what's wrong?
If you are using a service to access your pod, you should configure the service to use an external load balancer (similarly to what is done in the guestbook example's frontend service definition) and you should not need to specify a host port in your pod definition.
Once you have an external load balancer created, then you should open a firewall rule to allow external access to the load balancer which will allow packets to reach the service (and pods backing it) running in your cluster.
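A minimal sketch of that approach with kubectl and gcloud, assuming the pod is named apache-pod (a placeholder) and apache listens on container port 80:
kubectl expose pod apache-pod --type=LoadBalancer --port=80 --target-port=80
kubectl get services
gcloud compute firewall-rules create allow-apache-http --allow tcp:80
Once kubectl get services shows an external IP for the new service, the site should be reachable at http://EXTERNAL-IP:80.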

Resources