Attached is my docker-compose file. It's a very simple project with a web container, a database, and phpMyAdmin to access it.
web:
  build: ./docker_web/
  links:
    - db
  ports:
    - "80:80"
  volumes:
    - "./docker_web/www/:/var/www/site"
db:
  image: mysql:latest
  restart: always
  volumes:
    - "./.data/db:/var/lib/mysql"
  environment:
    MYSQL_ROOT_PASSWORD: ^^^^
    MYSQL_DATABASE: electionbattle
    MYSQL_USER: admin
    MYSQL_PASSWORD: ^^^^
phpmyadmin:
  image: phpmyadmin/phpmyadmin
  container_name: phpmyadmin
  environment:
    - PMA_ARBITRARY=1
  restart: always
  ports:
    - 8081:80
  volumes:
    - /sessions
  links:
    - db
If I run this it works fine. I created the equivalent Dockerrun.aws.json for Elastic Beanstalk's multi-container Docker platform and it starts up, but for some reason it can't find the volume holding my persisted database data in the .data folder.
I've tried changing .data to just data, with no luck.
I also get a weird error when running eb deploy:
2016-09-24 19:56:10 UTC-0700 ERROR ECS task stopped due to: Essential container in task exited. (db: CannotCreateContainerError: API error (500): create ./.data/db/: "./.data/db/" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed web: phpmyadmin: )
I have no idea how to fix this error or why it's happening. Any ideas?
Oops, forgot to add my Amazon file :).
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "web",
      "host": {
        "sourcePath": "./docker_web/www/"
      }
    },
    {
      "name": "db",
      "host": {
        "sourcePath": "./data/db/"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "197984628663.dkr.ecr.us-west-1.amazonaws.com/electionbattleonline",
      "memory": 200,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/www/site",
          "readOnly": false
        }
      ],
      "links": [
        "db"
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ]
    },
    {
      "name": "db",
      "image": "mysql:latest",
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "^^^^"
        },
        {
          "name": "MYSQL_DATABASE",
          "value": "electionbattleonline"
        },
        {
          "name": "MYSQL_USER",
          "value": "admin"
        },
        {
          "name": "MYSQL_PASSWORD",
          "value": "^^^^"
        }
      ],
      "portMappings": [
        {
          "hostPort": 3306,
          "containerPort": 3306
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "db",
          "containerPath": "/var/lib/mysql",
          "readOnly": false
        }
      ],
      "essential": true,
      "memory": 200
    },
    {
      "name": "phpmyadmin",
      "image": "phpmyadmin/phpmyadmin",
      "environment": [
        {
          "name": "PMA_ARBITRARY",
          "value": "1"
        }
      ],
      "essential": true,
      "memory": 128,
      "links": [
        "db"
      ],
      "portMappings": [
        {
          "hostPort": 8081,
          "containerPort": 80
        }
      ]
    }
  ]
}
Don't use relative paths.
Use eb local run to test before deploying; it will help you solve deployment issues. Your Dockerrun.aws.json file is converted into a docker-compose.yml file and started with a local copy of Docker.
You can find the generated docker-compose.yml in your project directory at .elasticbeanstalk/docker-compose.yml. You will notice that your volumes are missing from that generated file.
To fix this, change your volumes to:
"volumes": [
  {
    "name": "web",
    "host": {
      "sourcePath": "/var/app/current/docker_web/www/"
    }
  },
  {
    "name": "db",
    "host": {
      "sourcePath": "/var/app/current/data/db/"
    }
  }
],
Then create the directories in your app, and eb local run will convert them correctly.
eb deploy should now work correctly.
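Putting the fix together: the host directories have to exist in the source bundle before deploying. A minimal sketch, assuming the same project layout as above (the eb commands are shown commented out because they require the EB CLI and a configured environment):

```shell
# Create the host directories referenced by the Dockerrun volumes
# (paths match the example above; adjust for your own layout).
mkdir -p docker_web/www data/db

# Sanity-check they exist before packaging the source bundle.
test -d docker_web/www && test -d data/db && echo "volume dirs ready"

# Then test the converted docker-compose locally and deploy:
# eb local run
# eb deploy
```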
Node server and redis containers are defined within the same docker-compose.yml:
version: '3.9'
services:
  backend:
    image: 127.0.0.1:5000/backend
    container_name: backend
    hostname: backend
    restart: always
    networks:
      - wellerman
    volumes:
      - ./:/app/src
    build: .
    environment:
      REDIS_HOST: 'localhost'
      REDIS_PORT: 6379
    ports:
      - '3231:4000'
    depends_on:
      - redis
    links:
      - redis
  redis:
    container_name: redis
    hostname: redis
    restart: always
    networks:
      - wellerman
    image: redis:latest
    command: --port 6379
    ports:
      - '6379:6379'
networks:
  wellerman:
    external: true
The network is configured as follows (the snapshot was taken while the backend node server was briefly up; redis, although it works correctly, is not reported as connected to the network):
docker network inspect wellerman
[
    {
        "Name": "wellerman",
        "Id": "xiqats19pc6gb6tvmre1thyhh",
        "Created": "2021-08-26T15:56:49.678973091Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.15.0/24",
                    "Gateway": "10.0.15.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "dd114e62bb8dffe866e1293af30cb7ad86516dfaaae5eae04b9347507b26023c": {
                "Name": "backend_backend.1.w1suc8ytt8jfxvs9syspx1uv6",
                "EndpointID": "dd114e62bb8dffe866e1293af30cb7ad86516dfaaae5eae04b9347507b26023b",
                "MacAddress": "02:42:0a:00:0f:25",
                "IPv4Address": "10.0.15.37/24",
                "IPv6Address": ""
            },
            "lb-wellerman": {
                "Name": "wellerman-endpoint",
                "EndpointID": "dd114e62bb8dffe866e1293af30cb7ad86516dfaaae5eae04b9347507b26023a",
                "MacAddress": "02:42:0a:00:0f:26",
                "IPv4Address": "10.0.15.38/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4111"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "ba2cce29a569",
                "IP": "10.12.63.2"
            }
        ]
    }
]
The node server terminates if it doesn't have an established connection with the redis instance, and since this situation persists, it keeps crashing.
Redis seems to work as intended:
docker service inspect t7znrullne5x
[
    {
        "ID": "t7krullne5xvkze8xr3znlott",
        "Version": {
            "Index": 233743
        },
        "CreatedAt": "2021-08-26T15:55:05.529166159Z",
        "UpdatedAt": "2021-08-26T15:55:05.534908594Z",
        "Spec": {
            "Name": "backend_redis",
            "Labels": {
                "com.docker.stack.image": "redis:latest",
                "com.docker.stack.namespace": "backend"
            },
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "redis:latest@sha256:66ce9bc742609650afc3de7009658473ed601db4e926a5b16d239303383bacad",
                    "Labels": {
                        "com.docker.stack.namespace": "backend"
                    },
                    "Args": [
                        "--port",
                        "6379"
                    ],
                    "Hostname": "redis",
                    "Privileges": {
                        "CredentialSpec": null,
                        "SELinuxContext": null
                    },
                    "StopGracePeriod": 10000000000,
                    "DNSConfig": {},
                    "Isolation": "default"
                },
                "Resources": {},
                "RestartPolicy": {
                    "Condition": "any",
                    "Delay": 5000000000,
                    "MaxAttempts": 0
                },
                "Placement": {
                    "Platforms": [
                        {
                            "Architecture": "amd64",
                            "OS": "linux"
                        },
                        {
                            "OS": "linux"
                        },
                        {
                            "OS": "linux"
                        },
                        {
                            "Architecture": "arm64",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "386",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "mips64le",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "ppc64le",
                            "OS": "linux"
                        },
                        {
                            "Architecture": "s390x",
                            "OS": "linux"
                        }
                    ]
                },
                "Networks": [
                    {
                        "Target": "xiqats19pc6gb6tvmre1thyhh",
                        "Aliases": [
                            "redis"
                        ]
                    }
                ],
                "ForceUpdate": 0,
                "Runtime": "container"
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "RollbackConfig": {
                "Parallelism": 1,
                "FailureAction": "pause",
                "Monitor": 5000000000,
                "MaxFailureRatio": 0,
                "Order": "stop-first"
            },
            "EndpointSpec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 6379,
                        "PublishedPort": 6379,
                        "PublishMode": "ingress"
                    }
                ]
            }
        },
        "Endpoint": {
            "Spec": {
                "Mode": "vip",
                "Ports": [
                    {
                        "Protocol": "tcp",
                        "TargetPort": 6379,
                        "PublishedPort": 6379,
                        "PublishMode": "ingress"
                    }
                ]
            },
            "Ports": [
                {
                    "Protocol": "tcp",
                    "TargetPort": 6379,
                    "PublishedPort": 6379,
                    "PublishMode": "ingress"
                }
            ],
            "VirtualIPs": [
                {
                    "NetworkID": "iyu2r81xo9fjiyhxve36dcuxx",
                    "Addr": "10.0.0.143/24"
                },
                {
                    "NetworkID": "xiqats19pc6gb6tvmre1thyhh",
                    "Addr": "10.0.15.7/24"
                }
            ]
        }
    }
]
From the info above it looks like redis is connected to the wellerman network (though network inspect doesn't show it).
Looking at the logs from the node server, I find ECONNREFUSED (-111) errors when it tries to reach the redis instance.
If I change the REDIS_HOST env variable from localhost to redis, I get an ENOTFOUND (-3008) error.
Also, setting it to 127.0.0.1 or the actual IP of the host doesn't change much: the connection between these two containers seems not to exist.
What would you suggest checking or changing?
I managed to get this thing working.
Here's what I did:
Launched redis as a separate instance:
docker run --name redis -p 6379:6379 -d redis:latest
Moved all environment variables into the project's Dockerfile, setting the redis host address to the server's IP address. Then built the project, tagged it, and launched it as a service:
docker build . -t backend:v1
docker service create --detach=true --name backend -p 3231:4000 backend:v1
After a couple of seconds the whole instance was running and ready for scaling.
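For reference, another avenue that might have avoided splitting the stack apart: when the overlay network is declared in the stack file itself (rather than as external), swarm lets you mark it attachable, so standalone docker run containers can join it too. A hypothetical fragment, not taken from the original setup:

```yaml
networks:
  wellerman:
    driver: overlay
    attachable: true  # allows standalone containers (docker run) to join this swarm overlay network
```

This only takes effect when deployed with docker stack deploy; plain docker-compose up treats swarm-scoped overlay networks differently, which is one common source of the symptoms described above.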
Reading this docker-compose file (source: https://docs.traefik.io/user-guides/docker-compose/acme-dns/#setup):
version: "3.3"
services:
  traefik:
    image: "traefik:v2.1.0"
    container_name: "traefik"
    command:
      - "--log.level=DEBUG"
How can I set the --log.level=DEBUG command in this Marathon deployment file:
{
  "id": "/whoami",
  "cpus": 0.1,
  "mem": 256.0,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "traefik:v2.1.0",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  }
}
I think you just need to add "args": ["--log.level=DEBUG"] to your JSON.
{
  "id": "/whoami",
  "cpus": 0.1,
  "mem": 256.0,
  "instances": 1,
  "args": ["--log.level=DEBUG"],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "traefik:v2.1.0",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  }
}
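To apply the modified definition, one common route (assuming a reachable Marathon master at http://localhost:8080; the whoami.json filename is just for illustration) is to validate the JSON locally and PUT it to Marathon's /v2/apps endpoint:

```shell
# Write the app definition to a file (shortened here to the fields that matter).
cat > whoami.json <<'EOF'
{
  "id": "/whoami",
  "args": ["--log.level=DEBUG"],
  "container": {
    "type": "DOCKER",
    "docker": { "image": "traefik:v2.1.0", "network": "BRIDGE" }
  }
}
EOF

# Catch JSON syntax errors before sending the definition to Marathon.
python3 -m json.tool whoami.json > /dev/null && echo "JSON OK"

# Create or update the app (commented out: needs a running Marathon master):
# curl -X PUT -H "Content-Type: application/json" \
#      http://localhost:8080/v2/apps/whoami -d @whoami.json
```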
The docker-compose spec supports volume mapping syntax under services, for example:
version: '2'
volumes:
  jenkins_home:
    external: true
services:
  jenkins:
    build:
      context: .
      args:
        DOCKER_GID: ${DOCKER_GID}
        DOCKER_VERSION: ${DOCKER_VERSION}
        DOCKER_COMPOSE: ${DOCKER_COMPOSE}
    volumes:
      - jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
Following "AWSTemplateFormatVersion": "2010-09-09", the corresponding ECS task definition has a much harder-to-read volume syntax (with MountPoints and Volumes), as shown below:
"EcsTaskDefinition": {
  "Type": "AWS::ECS::TaskDefinition",
  "Properties": {
    "ContainerDefinitions": [
      {
        "Name": "jenkins",
        "Image": "xyzaccount/jenkins:ecs",
        "Memory": 995,
        "PortMappings": [ { "ContainerPort": 8080, "HostPort": 8080 } ],
        "MountPoints": [
          {
            "SourceVolume": "docker",
            "ContainerPath": "/var/run/docker.sock"
          },
          {
            "SourceVolume": "jenkins_home",
            "ContainerPath": "/var/jenkins_home"
          }
        ]
      }
    ],
    "Volumes": [
      {
        "Name": "jenkins_home",
        "Host": { "SourcePath": "/ecs/jenkins_home" }
      },
      {
        "Name": "docker",
        "Host": { "SourcePath": "/var/run/docker.sock" }
      }
    ]
  }
}
Does the ECS task definition syntax in CloudFormation (now) support volume mapping similar to docker-compose?
Yes, ECS supports Docker socket mounting, but the syntax is a bit different: add a DOCKER_HOST environment variable to the task definition, and the source path should start with //.
"volumes": [
  {
    "name": "docker",
    "host": {
      "sourcePath": "//var/run/docker.sock"
    }
  }
]
The // worked in the case of AWS ECS.
You also need to add the DOCKER_HOST environment variable to your task definition:
"environment": [
  {
    "name": "DOCKER_HOST",
    "value": "unix:///var/run/docker.sock"
  }
]
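A quick way to confirm the mount works from inside the container is to talk to the daemon straight over the socket. A sketch, assuming curl is available in the image and the socket was mounted as above:

```shell
# DOCKER_HOST tells docker clients where the daemon lives.
export DOCKER_HOST=unix:///var/run/docker.sock

# Portable existence check that works whether or not a daemon is running:
if [ -S /var/run/docker.sock ]; then
  echo "docker socket mounted"
else
  echo "docker socket missing"
fi

# With a live daemon, query it directly (commented out: needs the mounted socket):
# curl --unix-socket /var/run/docker.sock http://localhost/version
```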
I'm doing a multi-container Docker build on Elastic Beanstalk, and whenever I run eb deploy
I get the error: ECS Application sourcebundle validation error: We expected a VALUE token but got: START_ARRAY
I think something might be wrong with my Dockerrun.aws.json, but I can't seem to figure out what it is.
Here's my Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "ELASTICSEARCH_URL",
          "value": "elasticsearch:9200"
        }
      ],
      "essential": true,
      "image": "902260087874.dkr.ecr.ap-southeast-1.amazonaws.com/the-medical-agora",
      "memory": 128,
      "links": [
        "db",
        "elasticsearch"
      ],
      "mountPoints": [
        {
          "containerPath": "/usr/src/app",
          "sourceVolume": "."
        }
      ],
      "name": "app",
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 80
        }
      ]
    },
    {
      "memory": 128,
      "essential": true,
      "image": "postgres:10.3-alpine",
      "mountPoints": [
        {
          "containerPath": "/var/lib/postgresql/data",
          "sourceVolume": "Db"
        }
      ],
      "name": "db",
      "portMappings": [
        {
          "containerPort": 5432,
          "hostPort": 5432
        }
      ]
    },
    {
      "memory": 128,
      "essential": true,
      "image": "docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4",
      "mountPoints": [
        {
          "containerPath": "/usr/share/elasticsearch/data",
          "sourceVolume": "Esdata1"
        }
      ],
      "name": "elasticsearch"
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "esdata1"
      },
      "name": "Esdata1"
    },
    {
      "host": {
        "sourcePath": "db"
      },
      "name": "Db"
    },
    {
      "host": {
        "sourcePath": "."
      },
      "name": "_"
    }
  ]
}
Which is weird, because when I ran it through a Dockerrun.aws.json JSON schema linter it seemed fine.
The project also works when I run it with eb local run. It only seems to break when I deploy it to Elastic Beanstalk.
Hey guys, after reading the docs for eb deploy I discovered the problem.
Although I had fixed the Dockerrun.aws.json file, eb deploy doesn't pick up the change until I make a new git commit.
So I ran git add . and git commit, and then git push for good measure.
After that, eb deploy used my new Dockerrun.aws.json and my problems were resolved.
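In other words, eb deploy packages the most recent git commit, not the working tree, so edits to Dockerrun.aws.json stay invisible until committed. A self-contained sketch of the check (the temp repo and file contents are illustrative, not from the original project):

```shell
# Simulate a project with an uncommitted Dockerrun.aws.json change.
repo=$(mktemp -d) && cd "$repo"
git init -q .
echo '{"AWSEBDockerrunVersion": 2}' > Dockerrun.aws.json
git add Dockerrun.aws.json
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add Dockerrun"

# Edit the file; this change is NOT in any commit yet.
echo '{"AWSEBDockerrunVersion": 2, "volumes": []}' > Dockerrun.aws.json

# Anything listed here will be missing from the bundle `eb deploy` uploads:
git status --porcelain

# Commit first (or use `eb deploy --staged` to deploy staged changes instead):
git add Dockerrun.aws.json
git -c user.name=demo -c user.email=demo@example.com commit -q -m "fix volumes"
git status --porcelain   # now empty, safe to run `eb deploy`
```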
What I am working on:
nginx (openresty) with memcached and docker-compose.
From nginx I can connect to the memcached container by specifying resolver 127.0.0.11; in docker-compose this works fine.
But when I deploy it on AWS multi-container Beanstalk I get a timeout error:
failed to connect: memcache could not be resolved (110: Operation timed out)
although from the nginx container I am able to ping memcached.
NGINX.conf
location /health-check {
    resolver 127.0.0.11 ipv6=off;
    access_by_lua_block {
        local memcached = require "resty.memcached"
        local memc, err = memcached:new()
        if not memc then
            ngx.say("failed to instantiate memc: ", err)
            return
        end

        memc:set_timeout(1000) -- 1 sec

        local ok, err = memc:connect("memcache", 11211)
        if not ok then
            ngx.say("failed to connect: ", err)
            return
        end
    }
}
DOCKER-COMPOSE.YML
version: "3"
services:
  memcache:
    image: memcached:alpine
    container_name: memcached
    ports:
      - "11211:11211"
    expose:
      - "11211"
    networks:
      - default
  nginx:
    image: openresty/openresty:alpine
    container_name: nginx
    volumes:
      # Nginx files
      - ./nginx/:/etc/nginx/:ro
      # Web files
      - ./web/:/var/www/web/:ro
    entrypoint: openresty -c /etc/nginx/nginx.conf
    ports:
      - "8080:8080"
    networks:
      - default
DOCKERRUN.AWS.JSON
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "current-nginx",
      "host": {
        "sourcePath": "/var/app/current/nginx"
      }
    },
    {
      "name": "web",
      "host": {
        "sourcePath": "/var/www/web/"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "memcache",
      "image": "memcached:alpine",
      "essential": true,
      "memory": 1000,
      "portMappings": [
        {
          "hostPort": 11211,
          "containerPort": 11211
        }
      ]
    },
    {
      "name": "nginx",
      "image": "openresty/openresty:alpine",
      "essential": true,
      "memory": 1000,
      "entryPoint": [
        "openresty",
        "-c",
        "/etc/nginx/nginx.conf"
      ],
      "links": [
        "memcache"
      ],
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        },
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "web",
          "containerPath": "/var/www/web/",
          "readOnly": false
        },
        {
          "sourceVolume": "current-nginx",
          "containerPath": "/etc/nginx",
          "readOnly": false
        }
      ]
    }
  ]
}
You have a typo:
memc:connect("memcache", 11211)
should be
memc:connect("memcached", 11211)
(you are missing a "d").