Unable to connect to linked docker container in ECS

So here's what I'm trying to do:
Nginx container linked to -> Rails container running Puma
Using docker-compose, this solution works great. I'm able to start both containers and the NGINX container has access to the service running on port 3000 in the linked container. I've been working through lots of headaches when moving this to AWS ECS, unfortunately.
First, the relevant bits of the Dockerfile for Rails:
ENV RAILS_ROOT /www/apps/myapp
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
.... lots of files get put in their proper places ....
EXPOSE 3000
VOLUME [/www/apps/myapp/]
CMD puma -C config/puma.rb
I confirmed that puma is starting as expected and appears to be serving tcp traffic on port 3000.
Relevant parts of my nginx config:
upstream puma {
  server myapp:3000 fail_timeout=0;
}
server {
listen 80 default deferred;
server_name *.myapp.com;
location ~ (\.php$|\.aspx$|wp-admin|myadmin) {
return 403;
}
root /www/apps/myapp/public;
try_files $uri/index.html $uri @puma;
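The named location that try_files falls back to isn't shown in the excerpt; a typical proxy block for it (a sketch, not the poster's actual config) would look something like:

```nginx
location @puma {
  # Pass anything not served from disk on to the Puma upstream defined above.
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://puma;
}
```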
Nginx Dockerfile:
ENV RAILS_ROOT /www/apps/myapp
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
EXPOSE 80
Here's my ECS task definition:
{
"family": "myapp",
"containerDefinitions": [
{
"name": "web",
"image": "%REPOSITORY_URI%:nginx-staging",
"cpu": 512,
"memory": 512,
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp"
},
{
"containerPort": 443,
"protocol": "tcp"
}
],
"links": [
"myapp"
],
"volumesFrom": [
{
"sourceContainer": "myapp",
"readOnly": false
}
],
"essential": true,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-myapp-staging",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-myapp-nginx"
}
}
},
{
"image": "%REPOSITORY_URI%:v_%BUILD_NUMBER%",
"name": "myapp",
"cpu": 2048,
"memory": 2056,
"essential": true,
...bunch of environment variables, etc.
}
]
}
I am able to ping the myapp container from inside my nginx container, so I don't think it's a security group issue.

This turned out to be an AWS security group issue after all. I had foolishly expected the Rails app to alert me that it couldn't reach the database, but instead it just hung there forever until I manually started a console with rails c. Then I got the timeout, which led to a speedy resolution.
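For anyone hitting something similar: a successful ping only proves ICMP connectivity, and security groups filter TCP separately, so a TCP-level probe against the database endpoint would have surfaced this immediately. A minimal sketch in Python (the hostname and port below are placeholders, not the poster's real endpoint):

```python
import socket

def tcp_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and DNS failures
        return False

# Placeholder endpoint -- substitute your real database host and port:
# print(tcp_reachable("mydb.abc123.us-west-2.rds.amazonaws.com", 5432))
```

A False here with a working ping is a strong hint that a security group (or NACL) is dropping the TCP traffic.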

Related

vscode debug FastAPI in container on MacOS

I am setting up debugging of FastAPI running in a container with VS Code. When I launch the debugger, the FastAPI app runs in the container, but when I access the webpage from the host there is no response from the server.
However, if I start the container from command line with the following command, I can access the webpage from host.
docker run -p 8001:80/tcp with-batch:v2 uvicorn main:app --host 0.0.0.0 --port 80
Here is the tasks.json file:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--port",
"80"
],
"module": "uvicorn"
}
},
{
"type": "docker-build",
"label": "docker-build",
"platform": "python",
"dockerBuild": {
"tag": "with-batch:v2"
}
}
]
}
here is the launch.json file:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Debug Flask App",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}/app",
"remoteRoot": "/app"
}
],
"projectType": "fastapi"
}
}
]
}
The debug console, the docker-run: debug terminal, and the Python Debug Console output (screenshots in the original post) show the app starting, with Uvicorn reporting that it is running on http://127.0.0.1:80.
Explanation
The reason you are not able to access your container at that port is that VS Code runs your container with a random, unique host port mapped to the container's exposed port.
You can see this by running docker container inspect {container_name}, which prints a JSON representation of the running container. In your case that would be docker container inspect withbatch-dev.
The JSON is an array of objects (in this case just one), each with a "NetworkSettings" key whose "Ports" object looks similar to:
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "55016"
}
]
}
That port 55016 would be the port you can connect to at localhost:55016
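You can also pull that port out programmatically; a small sketch (assuming the JSON shape shown above; `docker container inspect --format` can do the same in one shot) that extracts the host port from the inspect output:

```python
import json

# Shape of `docker container inspect` output: an array with one object per
# container, trimmed here to the fields that matter.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {"80/tcp": [{"HostIp": "0.0.0.0", "HostPort": "55016"}]}}}]
""")

ports = inspect_output[0]["NetworkSettings"]["Ports"]
host_port = ports["80/tcp"][0]["HostPort"]
print(host_port)  # the port to hit at localhost:<host_port>
```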
Solution
With some tinkering and documentation, it seems "projectType": "fastapi" should be launching your browser for you at that specific port. Additionally, your debug console output shows Uvicorn running on http://127.0.0.1:80. 127.0.0.1 is localhost (also known as the loopback interface), which means the process in your Docker container is only listening for connections from inside the container. Think of Docker containers as being on their own subnetwork relative to your computer (there are exceptions to this, but that's not important here). For a server to accept outside connections (from your computer or from other containers), it has to listen on the container's virtual network interface; binding to the address 0.0.0.0 means "listen on all IPv4 addresses on this machine's interfaces".
That got a little deep, but suffice it to say: add --host 0.0.0.0 to your run arguments and you should be able to connect. You would add this in tasks.json, in the docker-run task, where your other python args are specified:
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--host",
"0.0.0.0",
"--port",
"80"
],
"module": "uvicorn"
}
},

How to expose docker port with host in a Elastic Beanstalk Docker environment?

Current environment :
I'm having an issue in my Beanstalk Docker environment exposing the expected port through to the host. I can see my Docker container running successfully inside the Docker daemon, but I cannot reach it via port 8080 on the Beanstalk endpoint, although port 80 works.
Issue: I'm trying to access my EB endpoint on the same port (8080) that I'm using in the Dockerfile. How can I do that?
Here is the output of docker ps
Here is my sample Dockerrun.aws.json
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "123456789.dkr.ecr.us-east-1.amazonaws.com/registry",
"Update": "true"
},
"Ports": [
{
"ContainerPort": 8080,
"HostPort": 8080
}
],
"Volumes": [
{
"HostDirectory": "/path/to/log",
"ContainerDirectory": "/path/to/log"
}
]
}
You should create the container publishing the port as -p 8080:80 (host port 8080 mapped to container port 80); as far as I can see, you only published -p 8080.
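If the app inside the container actually listens on port 80, the Dockerrun.aws.json equivalent of -p 8080:80 would be the fragment below (a sketch; adjust the ContainerPort if your app listens elsewhere):

```json
"Ports": [
  {
    "ContainerPort": 80,
    "HostPort": 8080
  }
]
```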

Can you tell me the solution to the change of service ip in mesos + marathon combination?

I am currently running a Docker service with the Mesos + Marathon combination, which means the IP address of the container is constantly changing.
For example, to put MongoDB on Marathon you would use JSON like the definition below; the port mapping specifies which port is exposed on the host. After a day the service automatically shuts down and restarts, and the IP changes.
While looking into Mesos-DNS and studying Docker commands, I learned that you can look up a service's IP by an alias name by specifying a network alias in Docker, and I thought that method would make access easier without using Mesos-DNS.
However, Marathon runs the Docker service from a JSON definition like the one below, and I can't find the option, keyword, or method for specifying a Docker network alias there.
{
"id": "mongodbTest",
"instances": 1,
"cpus": 2,
"mem": 2048.0,
"container": {
"type": "DOCKER",
"docker": {
"image": "mongo:latest",
"network": "BRIDGE",
"portMappings": [
{
"containerPort": 27017,
"hostPort": 0,
"servicePort": 0,
"protocol": "tcp"
}
]
},
"volumes": [
{
"containerPath": "/etc/mesos-mg",
"hostPath": "/var/data/mesos-mg",
"mode": "RW"
}
]
}
}

Mounted volume using volume-from is empty

So here's what I'm trying to do:
Nginx container linked to -> Rails container running Puma
Using docker-compose, this solution works great. I'm able to start both containers and the NGINX container has access to the volume in the linked container using volumes_from.
First, the relevant bits of the Dockerfile for Rails:
ENV RAILS_ROOT /www/apps/myapp
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
.... lots of files get put in their proper places ....
EXPOSE 3000
VOLUME [/www/apps/myapp/]
CMD puma -C config/puma.rb
The Nginx Dockerfile is pretty basic; the relevant parts are here:
ENV RAILS_ROOT /www/apps/myapp
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
EXPOSE 80
EXPOSE 443
Again, this all works great in docker-compose. However, in ECS, I'm trying to use the following task definition:
{
"family": "myapp",
"containerDefinitions": [
{
"name": "web",
"image": "%REPOSITORY_URI%:nginx-staging",
"cpu": 512,
"memory": 512,
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp"
},
{
"containerPort": 443,
"protocol": "tcp"
}
],
"links": [
"myapp"
],
"volumesFrom": [
{
"sourceContainer": "myapp",
"readOnly": false
}
],
"essential": true,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-myapp-staging",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-myapp-nginx"
}
}
},
{
"image": "%REPOSITORY_URI%:v_%BUILD_NUMBER%",
"name": "myapp",
"cpu": 2048,
"memory": 2056,
"essential": true,
...bunch of environment variables, etc.
}
]
}
The task starts in ECS as expected, and the myapp container looks perfect. However, when I check out the nginx container on the EC2 instance host with
docker exec -it <container> bash
I land in /www/apps/myapp, but the directory is empty. I've tried mounting drives and several other things, and I'm at a loss here... does anyone have any ideas as to how to make the files from the linked container usable in my nginx container?
And of course, right after posting this I found the solution. So no one else has to feel this pain, here it is: the VOLUME declaration was missing its quotes. Instead of
VOLUME [/www/apps/myapp/]
it needs to be
VOLUME ["/www/apps/myapp/"]
sigh
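For the curious, the quotes matter because the JSON (exec) form in a Dockerfile must parse as valid JSON; when it doesn't, Docker falls back to treating the argument as a plain string, so the brackets end up as part of the volume path. A rough illustration of that fallback (this mimics the behavior; it is not Docker's actual parser):

```python
import json

def parse_volume_arg(arg):
    """Approximate how a Dockerfile VOLUME argument is interpreted: a valid
    JSON array yields its elements; anything else is one literal path."""
    try:
        paths = json.loads(arg)
        if isinstance(paths, list):
            return paths
    except json.JSONDecodeError:
        pass
    return [arg]

print(parse_volume_arg('[/www/apps/myapp/]'))    # ['[/www/apps/myapp/]'] - literal brackets!
print(parse_volume_arg('["/www/apps/myapp/"]'))  # ['/www/apps/myapp/']
```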

Mesos cannot deploy container from private Docker registry

I have a private Docker registry that is accessible at https://docker.somedomain.com (over standard port 443, not 5000). My infrastructure includes a Mesosphere setup with the Docker containerizer enabled. I am trying to deploy a specific container to a Mesos slave via Marathon; however, the task always fails almost immediately, with no data in the stderr and stdout of that sandbox.
I tried deploying an image from the standard Docker registry and it appears to work fine, so I'm having trouble figuring out what is wrong. My private Docker registry does not require password authentication (turned off while debugging this), and if I shell into the Mesos slave instance and sudo su to root, I can run 'docker pull docker.somedomain.com/services/myapp' successfully every time.
Here is my Marathon post data for starting the task:
{
"id": "myapp",
"cpus": 0.5,
"mem": 64.0,
"instances": 1,
"container": {
"type": "DOCKER",
"docker": {
"image": "docker.somedomain.com/services/myapp:2",
"network": "BRIDGE",
"portMappings": [
{ "containerPort": 7000, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
]
},
"volumes": [
{
"containerPath": "application.yml",
"hostPath": "/var/myapp/application.yml",
"mode": "RO"
}
]
},
"healthChecks": [
{
"protocol": "HTTP",
"portIndex": 0,
"path": "/",
"gracePeriodSeconds": 5,
"intervalSeconds": 20,
"maxConsecutiveFailures": 3
}
]
}
I've been stuck on this for almost a day now, everything I've tried seems to be yielding the same result. Any insights on this would be much appreciated.
My versions:
Mesos: 0.22.1
Marathon: 0.8.2
Docker: 1.6.2
So this turns out to be an issue with volumes
"volumes": [
{
"containerPath": "/application.yml",
"hostPath": "/var/myapp/application.yml",
"mode": "RO"
}
]
Mounting at the root path of the container may be legal in Docker, but Mesos appears not to handle this behavior. Modifying the containerPath to a non-root path resolves it, i.e.:
"volumes": [
{
"containerPath": "/var",
"hostPath": "/var/myapp",
"mode": "RW"
}
]
If it were a problem between Marathon and the registry, the answer would be in your registry's HTTP logs: if Marathon connects, there will be an entry. The Mesos master log should contain a clue as well.
It doesn't really sound like a problem between Marathon and Registry though. Are you sure you have 'docker,mesos' in /etc/mesos-slave/containerizers?
Did you, despite having no authentication, try to follow "Using a Private Docker Repository"?
To supply credentials to pull from a private repository, add a .dockercfg to the uris field of your app. The $HOME environment variable will then be set to the same value as $MESOS_SANDBOX so Docker can automatically pick up the config file.
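Concretely, that would look something like the following in the Marathon app definition (the path is a placeholder for wherever your .dockercfg archive is actually hosted):

```json
"uris": [
  "file:///etc/docker/.dockercfg"
]
```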
