Mounted volume using volume-from is empty - ruby-on-rails

So here's what I'm trying to do:
Nginx container linked to -> Rails container running Puma
Using docker-compose, this solution works great. I'm able to start both containers and the NGINX container has access to the volume in the linked container using volumes_from.
First, the relevant bits of the Dockerfile for Rails:
ENV RAILS_ROOT /www/apps/myapp
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
.... lots of files get put in their proper places ....
EXPOSE 3000
VOLUME [/www/apps/myapp/]
CMD puma -C config/puma.rb
Nginx config is pretty basic, relevant parts are here:
ENV RAILS_ROOT /www/apps/myapp
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
EXPOSE 80
EXPOSE 443
Again, this all works great in docker-compose. However, in ECS, I'm trying to use the following task definition:
{
"family": "myapp",
"containerDefinitions": [
{
"name": "web",
"image": "%REPOSITORY_URI%:nginx-staging",
"cpu": 512,
"memory": 512,
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp"
},
{
"containerPort": 443,
"protocol": "tcp"
}
],
"links": [
"myapp"
],
"volumesFrom": [
{
"sourceContainer": "myapp",
"readOnly": false
}
],
"essential": true,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-myapp-staging",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-myapp-nginx"
}
}
},
{
"image": "%REPOSITORY_URI%:v_%BUILD_NUMBER%",
"name": "myapp",
"cpu": 2048,
"memory": 2056,
"essential": true,
...bunch of environment variables, etc.
}
The task starts in ECS as expected, and the myapp container looks perfect. However, when I check out the nginx container on the EC2 instance host with
docker exec -it <container> bash
I land in /www/apps/myapp, but the directory is empty. I've tried to mount drives and do several other things and I'm at a loss here... anyone have any ideas as to how to get the files from the linked container to be usable in my nginx container?

And of course, right after posting this I found the solution. So no one else has to feel this pain, here it is. This:
VOLUME [/www/apps/myapp/]
needed to be:
VOLUME ["/www/apps/myapp/"]
sigh
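For anyone wondering why the quotes matter: the JSON-array form of a Dockerfile instruction has to use double quotes. If the brackets do not parse as valid JSON, Docker falls back to treating the argument as a plain string, so the unquoted line declares a volume at the literal bracketed path rather than at /www/apps/myapp/, and there is nothing for volumesFrom to share. A minimal sketch of the difference:

# Broken: not valid JSON, so the bracketed text itself is taken as the path
VOLUME [/www/apps/myapp/]

# Correct: JSON array with a double-quoted path, exports /www/apps/myapp/
VOLUME ["/www/apps/myapp/"]

# The plain-string form (no brackets) works as well
VOLUME /www/apps/myapp/

You can check what a running container actually exposes with docker inspect -f '{{ json .Mounts }}' <container>.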

Related

vscode debug FastAPI in container on MacOS

I am setting up debugging of FastAPI running in a container with VS Code. When I launch the debugger, the FastAPI app runs in the container, but when I access the webpage from the host, there is no response from the server.
However, if I start the container from the command line with the following command, I can access the webpage from the host:
docker run -p 8001:80/tcp with-batch:v2 uvicorn main:app --host 0.0.0.0 --port 80
Here is the tasks.json file:
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "2.0.0",
"tasks": [
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--port",
"80"
],
"module": "uvicorn"
}
},
{
"type": "docker-build",
"label": "docker-build",
"platform": "python",
"dockerBuild": {
"tag": "with-batch:v2"
}
}
]
}
Here is the launch.json file:
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Debug Flask App",
"type": "docker",
"request": "launch",
"preLaunchTask": "docker-run: debug",
"python": {
"pathMappings": [
{
"localRoot": "${workspaceFolder}/app",
"remoteRoot": "/app"
}
],
"projectType": "fastapi"
}
}
]
}
(The debug console output, the docker-run: debug terminal output, and the Python Debug Console output were attached as screenshots; the debug console showed Uvicorn running on http://127.0.0.1:80.)
Explanation
The reason you are not able to access your container at that port is that VS Code runs your container with a random, unique localhost port mapped to it.
You can see this by running docker container inspect {container_name}, which prints a JSON representation of the running container. In your case you would write docker container inspect withbatch-dev.
The output is an array of objects, in this case just the one object, with a "NetworkSettings" key whose "Ports" key would look similar to:
"Ports": {
"80/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "55016"
}
]
}
That port 55016 would be the port you can connect to at localhost:55016
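If you'd rather not scan the full JSON, a Go-template format string can pull out just the port mappings (using the same withbatch-dev container name as above):

# Print only the published ports of the running container
docker container inspect --format '{{ json .NetworkSettings.Ports }}' withbatch-dev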
Solution
With some tinkering and documentation, it seems "projectType": "fastapi" should be launching your browser for you at that specific port. Additionally, your debug console output shows Uvicorn running on http://127.0.0.1:80. 127.0.0.1 is localhost (also known as the loopback interface), which means the process in your docker container is only listening for internal connections. Think of docker containers as being in their own subnetwork relative to your computer (there are exceptions to this, but that's not important here). If a process wants to accept outside connections (from your computer or other containers), it needs to tell the container's virtual network interface to listen for them; in the context of a server, you use the address 0.0.0.0 to say you want to listen on all IPv4 addresses on that interface.
That got a little deep, but suffice it to say, you should be able to add --host 0.0.0.0 to your run arguments and you would be able to connect. You would add this to tasks.json, in the docker-run object, where your other python args are specified:
{
"type": "docker-run",
"label": "docker-run: debug",
"dockerRun": {
"image": "with-batch:v2",
"volumes": [
{
"containerPath": "/app",
"localPath": "${workspaceFolder}/app"
}
],
"ports": [
{
"containerPort": 80,
"hostPort": 8001,
"protocol": "tcp"
}
]
},
"python": {
"args": [
"main:app",
"--host",
"0.0.0.0",
"--port",
"80"
],
"module": "uvicorn"
}
},

starting container process caused: exec: "[\"/bin/sh -c\"": stat ["/bin/sh -c": no such file or directory

I am building and deploying an application via Docker and ECS Fargate. I have my entrypoint command defined in the ECS task definition. After pushing the image to a private ECR repository, I get this error when ECS Fargate attempts to deploy the image. Any advice would be helpful. Below are the Dockerfile, the task definition, and the error.
Dockerfile
FROM centos:7
COPY /src/main/build/application.zip /tmp/application.zip
COPY /src/main/residual-container-setup/application/init.sh /tmp/init.sh
#Environment variables and Entry point being defined via task definition
Task Definition
{
"containerDefinitions": [
{
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/application",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "ecs"
}
},
"entryPoint": [
"[\"/bin/sh -c\"",
"\"/tmp/init.sh\"]"
],
"portMappings": [
{
"hostPort": 9003,
"protocol": "tcp",
"containerPort": 9003
}
],
"cpu": 0,
"environment": [
{
"name": "HOST",
"value": "dev.application.com"
},
{
"name": "REST_PORT",
"value": "8003"
}
],
"mountPoints": [],
"volumesFrom": [],
"image": "xxxxxxxxxxxx.dkr.ecr.us-east-2.amazonaws.com/application:latest",
"essential": true,
"name": "application"
}
]
}
Error
container_linux.go:380: starting container process caused: exec: "[\"/bin/sh -c\"": stat ["/bin/sh -c": no such file or directory
I attempted to run the container locally with the following command: docker run -it $docker_image /bin/sh
I was unable to even exec into the container. I believe I may need to install something additional in the image to get this to work. Any advice would be helpful.
Update
I have updated the Dockerfile to set execute permissions on the init script using the following command: chmod +x /tmp/init.sh
I have also updated the task definition entryPoint attribute to ["/bin/sh", "-c", "/tmp/init.sh"]
After making these changes I am now being presented with the following:
container_linux.go:380: starting container process caused: exec: "-c": executable file not found in $PATH
Your entrypoint is defined incorrectly.
The way you wrote it, Linux thinks the path to the binary is "/bin/sh -c" (the whole string, space included). If you check the container image, I'm pretty sure you won't find a file with that name either.
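For reference, each argument has to be its own element of the entryPoint array, with no embedded quoting; written out in the task definition, the form described in the update above looks like:

"entryPoint": [
  "/bin/sh",
  "-c",
  "/tmp/init.sh"
]

If /tmp/init.sh is executable and starts with a shebang line, ["/tmp/init.sh"] alone should also work as the entrypoint.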

How to keep logs for ECS container?

I'm running ECS tasks, and recently the service CPU hit 100% and went down.
I waited for the instance to settle down and sshed in.
I was looking for logs, but it seems the docker container restarted and the logs are all gone (the logs from when the CPU was high).
Next time, how do I make sure I can see the logs, at least to diagnose the problem?
I have the following, hoping to see some logs somewhere (mounted on the host machine):
"mountPoints": [
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "logs"
}
],
But there's no /var/log/uwsgi on the host machine.
And I probably need syslog and stuff...
With your current configuration, the logs depend entirely on the path that you define in the volumes section.
"mountPoints": [
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "logs"
}
],
The source path is defined in the volume named logs, not in /var/log/uwsgi, so you are mounting /var/log/uwsgi (container path) -> the logs volume (host path). You will find these logs at whatever path the logs volume defines. But it is better to set something like
{
"readOnly": null,
"containerPath": "/var/log/uwsgi",
"sourceVolume": "volume_name"
}
then volume config
"volumes": [
{
"name": "logs",
"host": {
"sourcePath": "/home/ec2-user/logs"
}
}
]
From the documentation:
In the task definition volumes section, define a bind mount with name
and sourcePath values.
"volumes": [
{
"name": "webdata",
"host": {
"sourcePath": "/ecs/webdata"
}
}
]
In the containerDefinitions section, define a container with
mountPoints values that reference the name of the defined bind mount
and the containerPath value to mount the bind mount at on the
container.
"containerDefinitions": [
{
"name": "web",
"image": "nginx",
"cpu": 99,
"memory": 100,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"essential": true,
"mountPoints": [
{
"sourceVolume": "webdata",
"containerPath": "/usr/share/nginx/html"
}
]
}
]
bind-mounts-ECS
As for my suggestion, I would go with the awslogs log driver.
When working in AWS, the best approach is to push all logs to CloudWatch; note that the awslogs driver only pushes the container's stdout and stderr to CloudWatch.
Using the awslogs driver you do not need to worry about the instance or the container: the logs end up in CloudWatch, and you can stream them to ELK as well.
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-wordpress",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-example"
}
}
using_awslogs
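One more note, since the awslogs driver only forwards stdout and stderr: if uwsgi writes its logs to a file under /var/log/uwsgi rather than to stdout, a common workaround (the same trick the official nginx image uses for its access log) is to symlink the log file to stdout in the Dockerfile. The file name below is only an assumption for illustration; adjust it to match your uwsgi config:

# Assumed log path, for illustration: send the file-based uwsgi log to the
# container's stdout so the awslogs driver can forward it to CloudWatch
RUN mkdir -p /var/log/uwsgi && ln -sf /dev/stdout /var/log/uwsgi/uwsgi.log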

Unable to connect to linked docker container in ECS

So here's what I'm trying to do:
Nginx container linked to -> Rails container running Puma
Using docker-compose, this setup works great: I'm able to start both containers, and the NGINX container has access to the service running on port 3000 in the linked container. Unfortunately, I've been running into lots of headaches moving this to AWS ECS.
First, the relevant bits of the Dockerfile for Rails:
ENV RAILS_ROOT /www/apps/myapp
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
.... lots of files get put in their proper places ....
EXPOSE 3000
VOLUME [/www/apps/myapp/]
CMD puma -C config/puma.rb
I confirmed that puma is starting as expected and appears to be serving tcp traffic on port 3000.
Relevant parts of my nginx config:
upstream puma {
server myapp:3000 fail_timeout=0;
}
server {
listen 80 default deferred;
server_name *.myapp.com;
location ~ (\.php$|\.aspx$|wp-admin|myadmin) {
return 403;
}
root /www/apps/myapp/public;
try_files $uri/index.html $uri @puma;
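The excerpt stops before the @puma named location that try_files falls back to. It isn't shown in the post, but in this kind of setup it typically just proxies to the upstream defined at the top; a rough sketch of such a location (illustrative, not the poster's actual file):

location @puma {
  # "puma" is the upstream defined above; "myapp" in that upstream resolves
  # to the linked Rails container on port 3000
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Host $http_host;
  proxy_redirect off;
  proxy_pass http://puma;
}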
Nginx dockerfile:
ENV RAILS_ROOT /www/apps/myapp
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
EXPOSE 80
Here's my ECS task definition:
{
"family": "myapp",
"containerDefinitions": [
{
"name": "web",
"image": "%REPOSITORY_URI%:nginx-staging",
"cpu": 512,
"memory": 512,
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp"
},
{
"containerPort": 443,
"protocol": "tcp"
}
],
"links": [
"myapp"
],
"volumesFrom": [
{
"sourceContainer": "myapp",
"readOnly": false
}
],
"essential": true,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "awslogs-myapp-staging",
"awslogs-region": "us-west-2",
"awslogs-stream-prefix": "awslogs-myapp-nginx"
}
}
},
{
"image": "%REPOSITORY_URI%:v_%BUILD_NUMBER%",
"name": "myapp",
"cpu": 2048,
"memory": 2056,
"essential": true,
...bunch of environment variables, etc.
}
I am able to ping the myapp container from inside my nginx container, so I don't think it's a security group issue.
This turned out to be an AWS security group issue. I had foolishly expected the Rails app to alert me that it couldn't reach the database, but instead it just hung there forever until I manually started it with rails c. Then I got the timeout, which led to a speedy resolution.

Mesos cannot deploy container from private Docker registry

I have a private Docker registry that is accessible at https://docker.somedomain.com (over standard port 443, not 5000). My infrastructure includes a Mesosphere setup with the Docker containerizer enabled. I am trying to deploy a specific container to a Mesos slave via Marathon; however, this always fails, with Mesos failing the task almost immediately and no data in the stderr or stdout of that sandbox.
I tried deploying an image from the standard Docker registry and it appears to work fine, so I'm having trouble figuring out what is wrong. My private Docker registry does not require password authentication (turned off for debugging this), AND if I shell into the Mesos slave instance and sudo su to root, I can run a 'docker pull docker.somedomain.com/services/myapp' successfully every time.
Here is my Marathon post data for starting the task:
{
"id": "myapp",
"cpus": 0.5,
"mem": 64.0,
"instances": 1,
"container": {
"type": "DOCKER",
"docker": {
"image": "docker.somedomain.com/services/myapp:2",
"network": "BRIDGE",
"portMappings": [
{ "containerPort": 7000, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
]
},
"volumes": [
{
"containerPath": "application.yml",
"hostPath": "/var/myapp/application.yml",
"mode": "RO"
}
]
},
"healthChecks": [
{
"protocol": "HTTP",
"portIndex": 0,
"path": "/",
"gracePeriodSeconds": 5,
"intervalSeconds": 20,
"maxConsecutiveFailures": 3
}
]
}
I've been stuck on this for almost a day now; everything I've tried seems to yield the same result. Any insights on this would be much appreciated.
My versions:
Mesos: 0.22.1
Marathon: 0.8.2
Docker: 1.6.2
So this turned out to be an issue with the volumes:
"volumes": [
{
"containerPath": "/application.yml",
"hostPath": "/var/myapp/application.yml",
"mode": "RO"
}
]
Mounting at the root path of the container may be legal in Docker, but Mesos appears not to handle this behavior. Modifying the containerPath to a non-root path resolves it, i.e.
"volumes": [
{
"containerPath": "/var",
"hostPath": "/var/myapp",
"mode": "RW"
}
]
If it is a problem between Marathon and the registry, the answer should be in the HTTP logs of your registry. If Marathon connects, there will be an entry. And the Mesos master log should contain a clue as well.
It doesn't really sound like a problem between Marathon and the registry, though. Are you sure you have 'docker,mesos' in /etc/mesos-slave/containerizers?
Did you (despite having no authentication) try to follow Using a Private Docker Repository?
To supply credentials to pull from a private repository, add a .dockercfg to the uris field of your app. The $HOME environment variable will then be set to the same value as $MESOS_SANDBOX so Docker can automatically pick up the config file.
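A sketch of what that looks like in a Marathon app definition, assuming the .dockercfg is hosted somewhere the slaves can fetch it (the URL below is only a placeholder): the file listed in uris is downloaded into the task sandbox before the container starts, and since $HOME points at the sandbox, docker pull finds the credentials there.

{
  "id": "myapp",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "docker.somedomain.com/services/myapp:2",
      "network": "BRIDGE"
    }
  },
  "uris": [
    "https://files.example.com/docker/.dockercfg"
  ]
}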
