Does ECS task definition support volume mapping syntax?

The docker-compose spec supports volume mapping syntax under services, for example:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_VERSION: ${DOCKER_VERSION}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
Following "AWSTemplateFormatVersion": "2010-09-09", the corresponding ECS task definition has volume syntax un-readable(with MountPoints and Volumes), as shown below:
"EcsTaskDefinition": {
"Type": "AWS::ECS::TaskDefinition",
"Properties": {
"ContainerDefinitions": [
{
"Name": "jenkins",
"Image": "xyzaccount/jenkins:ecs",
"Memory": 995,
"PortMappings": [ { "ContainerPort": 8080, "HostPort": 8080 } ],
"MountPoints": [
{
"SourceVolume": "docker",
"ContainerPath": "/var/run/docker.sock"
},
{
"SourceVolume": "jenkins_home",
"ContainerPath": "/var/jenkins_home"
}
]
}
],
"Volumes": [
{
"Name": "jenkins_home",
"Host": { "SourcePath": "/ecs/jenkins_home" }
},
{
"Name": "docker",
"Host": { "SourcePath": "/var/run/docker.sock" }
}
]
}
}
Does the ECS task definition syntax in CloudFormation (now) support a volume mapping syntax similar to docker-compose?

Yes, of course ECS supports mounting the Docker socket, but the syntax is a bit different. Add a DOCKER_HOST environment variable to the task definition, and the source path should start with //.
"volumes": [
{
"name": "docker",
"host": {
"sourcePath": "//var/run/docker.sock"
}
}
]
The // prefix worked in the case of AWS ECS. Also, you need to add the DOCKER_HOST environment variable to your task definition:
"environment": [
{
"name": "DOCKER_HOST",
"value": "unix:///var/run/docker.sock"
}
]
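Since the question asks about CloudFormation specifically: in CloudFormation's PascalCase syntax for AWS::ECS::TaskDefinition, the same settings would look roughly like the sketch below (untested; it just merges the answer's volume and environment entries back into the question's template, and JSON allows no inline comments):
"ContainerDefinitions": [
  {
    "Name": "jenkins",
    "Image": "xyzaccount/jenkins:ecs",
    "Environment": [
      { "Name": "DOCKER_HOST", "Value": "unix:///var/run/docker.sock" }
    ],
    "MountPoints": [
      { "SourceVolume": "docker", "ContainerPath": "/var/run/docker.sock" }
    ]
  }
],
"Volumes": [
  { "Name": "docker", "Host": { "SourcePath": "//var/run/docker.sock" } }
]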

Related

Filebeat 7.10.1 add_docker_metadata adds only container.id

I'm using Filebeat 7.10.1 installed on the host system (not in a Docker container), running as a service by root, configured according to
https://www.elastic.co/guide/en/beats/filebeat/current/add-docker-metadata.html
and https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-container.html
Filebeat config, filebeat.yml:
filebeat.inputs:
- type: container
  enabled: true
  paths:
    - '/var/lib/docker/containers/*/*.log'
processors:
- add_docker_metadata: ~
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false
setup.kibana:
output.logstash:
  hosts: ["<logstash_host>:5044"]
The container was started with:
docker run --rm -d -l my-label --label com.example.foo=bar -p 80:80 nginx
Filebeat picks up the logs and successfully sends them to the endpoint (in my case to Logstash, which resends them to Elasticsearch), but the JSON generated by Filebeat contains only container.id, without container.name, container.labels, and container.image.
It looks like this (copy-paste from Kibana):
{
  "_index": "logstash-2021.02.10",
  "_type": "_doc",
  "_id": "s4a4i3cB8j0XLXFVuyMm",
  "_version": 1,
  "_score": null,
  "_source": {
    "@version": "1",
    "ecs": {
      "version": "1.6.0"
    },
    "@timestamp": "2021-02-10T11:33:54.000Z",
    "host": {
      "name": "<some_host>"
    },
    "input": {
      "type": "container"
    },
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "log": {
      .....
    },
    "stream": "stdout",
    "container": {
      "id": "15facae2115ea57c9c99c13df815427669e21053791c7ddd4cd0c8caf1fbdf8c-json.log"
    },
    "agent": {
      "version": "7.10.1",
      "ephemeral_id": "adebf164-0b0d-450f-9a50-11138e519a27",
      "id": "0925282e-319e-49e0-952e-dc06ba2e0c43",
      "name": "<some_host>",
      "type": "filebeat",
      "hostname": "<some_host>"
    }
  },
  "fields": {
    "log.timestamp": [
      "2021-02-10T11:33:54.000Z"
    ],
    "@timestamp": [
      "2021-02-10T11:33:54.000Z"
    ]
  },
  "highlight": {
    "log.logger_name": [
      "@kibana-highlighted-field@gw_nginx@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1612956834000
  ]
}
What am I doing wrong? How do I configure Filebeat to send container.name, container.labels, and container.image?
So after looking at the filebeat-debug output and the paths on the filesystem, issue closed.
Reason: the symlink /var/lib/docker -> /data/docker produces unexpected behavior.
Solution:
filebeat.inputs:
- type: container
  enabled: true
  paths:
    - '/data/docker/containers/*/*.log' # use realpath
processors:
- add_docker_metadata:
    match_source_index: 3 # subfolder to extract container id from the path
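For context: add_docker_metadata extracts the container ID from the log path split on /. With the default Docker layout (/var/lib/docker/containers/<id>/...) the ID sits at index 4, which appears to be the processor's default; under the real path used here it sits one segment earlier, hence match_source_index: 3. A rough illustration as comments (the shortened ID below is hypothetical):
# /data/docker/containers/15facae2115e.../15facae2115e...-json.log
#   index 0: data
#   index 1: docker
#   index 2: containers
#   index 3: 15facae2115e...   <- container ID, so match_source_index: 3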

ECS Application sourcebundle validation error: We expected a VALUE token but got: START_ARRAY

I'm making a multi-container Docker build on Elastic Beanstalk, and whenever I run eb deploy
I get the error ECS Application sourcebundle validation error: We expected a VALUE token but got: START_ARRAY
I think something might be wrong with my Dockerrun.aws.json, but I can't seem to figure out what it is.
Here's my Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "ELASTICSEARCH_URL",
          "value": "elasticsearch:9200"
        }
      ],
      "essential": true,
      "image": "902260087874.dkr.ecr.ap-southeast-1.amazonaws.com/the-medical-agora",
      "memory": 128,
      "links": [
        "db",
        "elasticsearch"
      ],
      "mountPoints": [
        {
          "containerPath": "/usr/src/app",
          "sourceVolume": "."
        }
      ],
      "name": "app",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 80
        }
      ]
    },
    {
      "memory": 128,
      "essential": true,
      "image": "postgres:10.3-alpine",
      "mountPoints": [
        {
          "containerPath": "/var/lib/postgresql/data",
          "sourceVolume": "Db"
        }
      ],
      "name": "db",
      "portMappings": [
        {
          "containerPort": 5432,
          "hostPort": 5432
        }
      ]
    },
    {
      "memory": 128,
      "essential": true,
      "image": "docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4",
      "mountPoints": [
        {
          "containerPath": "/usr/share/elasticsearch/data",
          "sourceVolume": "Esdata1"
        }
      ],
      "name": "elasticsearch"
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "esdata1"
      },
      "name": "Esdata1"
    },
    {
      "host": {
        "sourcePath": "db"
      },
      "name": "Db"
    },
    {
      "host": {
        "sourcePath": "."
      },
      "name": "_"
    }
  ]
}
Which is weird, because when I ran a Dockerrun.aws.json JSON schema linter against it, it passed.
The project also works when I run it with eb local run. It only seems to break when I deploy it to Elastic Beanstalk.
Hey guys, after reading the docs for eb deploy I discovered the problem.
Although I had fixed the Dockerrun.aws.json file, the fix isn't reflected by eb deploy until you make a new git commit.
So I just ran git add . and git commit, and then git push for good measure.
After that, eb deploy used my new Dockerrun.aws.json and my problems were resolved.
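In short, eb deploy ships the most recent git commit by default, so the fix only takes effect after committing. A sketch of the sequence (eb deploy --staged is an alternative that deploys files staged in the git index without a commit):
git add Dockerrun.aws.json
git commit -m "fix Dockerrun.aws.json"
git push   # optional, for good measure
eb deploy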

Nginx Docker AWS, Nginx is not able to resolve 127.0.0.11 in multi-container

What I am working on:
nginx (openresty) with memcached and docker-compose.
From nginx I am able to connect to the memcached container by specifying resolver 127.0.0.11; in docker-compose it works fine.
But when I deploy it on AWS multi-container Beanstalk I get a timeout error:
failed to connect: memcache could not be resolved (110: Operation timed out)
yet from the nginx container I am able to ping memcached.
NGINX.conf
location /health-check {
    resolver 127.0.0.11 ipv6=off;
    access_by_lua_block {
        local memcached = require "resty.memcached"
        local memc, err = memcached:new()
        if not memc then
            ngx.say("failed to instantiate memc: ", err)
            return
        end
        memc:set_timeout(1000) -- 1 sec
        local ok, err = memc:connect("memcache", 11211)
        if not ok then
            ngx.say("failed to connect: ", err)
            return
        end
    }
}
DOCKER-COMPOSE.YML
version: "3"
services:
  memcache:
    image: memcached:alpine
    container_name: memcached
    ports:
      - "11211:11211"
    expose:
      - "11211"
    networks:
      - default
  nginx:
    image: openresty/openresty:alpine
    container_name: nginx
    volumes:
      # Nginx files
      - ./nginx/:/etc/nginx/:ro
      # Web files
      - ./web/:/var/www/web/:ro
    entrypoint: openresty -c /etc/nginx/nginx.conf
    ports:
      - "8080:8080"
    networks:
      - default
DOCKERRUN.AWS.JSON
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "current-nginx",
      "host": {
        "sourcePath": "/var/app/current/nginx"
      }
    },
    {
      "name": "web",
      "host": {
        "sourcePath": "/var/www/web/"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "memcache",
      "image": "memcached:alpine",
      "essential": true,
      "memory": 1000,
      "portMappings": [
        {
          "hostPort": 11211,
          "containerPort": 11211
        }
      ]
    },
    {
      "name": "nginx",
      "image": "openresty/openresty:alpine",
      "essential": true,
      "memory": 1000,
      "entryPoint": [
        "openresty",
        "-c",
        "/etc/nginx/nginx.conf"
      ],
      "links": [
        "memcache"
      ],
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        },
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/www/web/",
          "readOnly": false
        },
        {
          "sourceVolume": "current-nginx",
          "containerPath": "/etc/nginx",
          "readOnly": false
        }
      ]
    }
  ]
}
You have a typo:
memc:connect("memcache", 11211)
should be
memc:connect("memcached", 11211)
(you are missing a "d").
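For clarity, the fixed line in context (a one-line excerpt; "memcached" matches the container_name set in the docker-compose file above):
-- inside access_by_lua_block in NGINX.conf:
local ok, err = memc:connect("memcached", 11211) -- was "memcache"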

Docker-Compose Up Works but Eb Local Run does not

Attached is my docker-compose file. It's a very simple project with a database and phpMyAdmin to access it.
web:
  build: ./docker_web/
  links:
    - db
  ports:
    - "80:80"
  volumes:
    - "./docker_web/www/:/var/www/site"
db:
  image: mysql:latest
  restart: always
  volumes:
    - "./.data/db:/var/lib/mysql"
  environment:
    MYSQL_ROOT_PASSWORD: ^^^^
    MYSQL_DATABASE: electionbattle
    MYSQL_USER: admin
    MYSQL_PASSWORD: ^^^^
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  container_name: phpmyadmin
  environment:
    - PMA_ARBITRARY=1
  restart: always
  ports:
    - 8081:80
  volumes:
    - /sessions
  links:
    - db
If I run this it works fine. I created the equivalent Amazon Dockerrun file for Elastic Beanstalk, and it starts up, but for some reason it can't find the volume holding my persisted database data in the .data folder.
I've tried changing .data to just data, with no luck.
Also, I get a weird error when trying to do eb deploy:
2016-09-24 19:56:10 UTC-0700 ERROR ECS task stopped due to: Essential container in task exited. (db: CannotCreateContainerError: API error (500): create ./.data/db/: "./.data/db/" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed web: phpmyadmin: )
I have no idea how to fix this error or why it's happening. Any ideas?
Oops, forgot to add my Amazon file :).
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "web",
      "host": {
        "sourcePath": "./docker_web/www/"
      }
    },
    {
      "name": "db",
      "host": {
        "sourcePath": "./data/db/"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "197984628663.dkr.ecr.us-west-1.amazonaws.com/electionbattleonline",
      "memory": 200,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/www/site",
          "readOnly": false
        }
      ],
      "links": [
        "db"
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ]
    },
    {
      "name": "db",
      "image": "mysql:latest",
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "^^^^"
        },
        {
          "name": "MYSQL_DATABASE",
          "value": "electionbattleonline"
        },
        {
          "name": "MYSQL_USER",
          "value": "admin"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "^^^^"
        }
      ],
      "portMappings": [
        {
          "hostPort": 3306,
          "containerPort": 3306
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "db",
          "containerPath": "/var/lib/mysql",
          "readOnly": false
        }
      ],
      "essential": true,
      "memory": 200
    },
    {
      "name": "phpmyadmin",
      "image": "phpmyadmin/phpmyadmin",
      "environment": [
        {
          "name": "PMA_ARBITRARY",
          "value": "1"
        }
      ],
      "essential": true,
      "memory": 128,
      "links": [
        "db"
      ],
      "portMappings": [
        {
          "hostPort": 8081,
          "containerPort": 80
        }
      ]
    }
  ]
}
Don't use relative paths.
Use eb local run to test before deploying; it will help you solve deployment issues. Your Dockerrun.aws.json file is converted into a docker-compose.yml file and started using a local copy of Docker.
You can find the generated docker-compose.yml in your project directory at .elasticbeanstalk/docker-compose.yml. You will notice that your volumes are missing from that docker-compose config file.
To fix this change your volumes to:
"volumes": [
{
"name": "web",
"host": {
"sourcePath": "/var/app/current/docker_web/www/"
}
},
{
"name": "db",
"host": {
"sourcePath": "/var/app/current/data/db/"
}
}
],
and create those directories in your app; then eb local run will correctly convert them.
eb deploy should now work correctly.
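One way to create the matching directories in the app source (a sketch; the .keep files are hypothetical placeholders so that git tracks the otherwise-empty directories):
mkdir -p docker_web/www data/db
touch docker_web/www/.keep data/db/.keep    # hypothetical placeholders
git add docker_web/www/.keep data/db/.keep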

Docker Compose - Mount 2 volumes with the same path to a container

Here is my docker-compose config file
{
  "version": "2",
  "services": {
    "data1": {
      "image": "myimage:v1",
      "volumes": ["/agents"]
    },
    "data2": {
      "image": "myimage:v2",
      "volumes": ["/agents"]
    },
    "app": {
      "image": "app:latest",
      "volumes_from": ["data1", "data2"]
    }
  }
}
Since the volumes from both the "data1" and "data2" services are at the path "/agents", my "app" container will only get one of them. Is there a way to specify a different path for each volume that I mount?
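(This question has no answer in the thread. One possible approach, as a sketch rather than the thread's solution: volumes_from copies the mount definitions verbatim, so the two /agents mounts collide; named volumes let each data image keep /agents internally while app mounts each volume at its own path. The names agents1/agents2 are hypothetical.)
{
  "version": "2",
  "volumes": {
    "agents1": {},
    "agents2": {}
  },
  "services": {
    "data1": {
      "image": "myimage:v1",
      "volumes": ["agents1:/agents"]
    },
    "data2": {
      "image": "myimage:v2",
      "volumes": ["agents2:/agents"]
    },
    "app": {
      "image": "app:latest",
      "volumes": [
        "agents1:/agents/v1",
        "agents2:/agents/v2"
      ]
    }
  }
}
A named volume is initialized from the image's content at the mount path the first time it is used, which preserves the data-container pattern here.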
