Elastic Beanstalk & Docker: problem with Elastic Beanstalk spawning multiple Docker containers

I'm forced to use Elastic Beanstalk (EB) and Docker for deployment. When I build & run my container locally, it boots up and runs well. I'm using supervisord to boot some Ruby code (Clockwork and Rails/Puma).
When deploying with EB, I can watch it spawn several consecutive containers until everything chokes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
232bbe498977 a4a6fd70537b "supervisord -c /etc…" About a minute ago Up About a minute 80/tcp silly_williams
a9e21774575e a4a6fd70537b "supervisord -c /etc…" 2 minutes ago Up 2 minutes 80/tcp trusting_murdock
945f51ef510f a4a6fd70537b "supervisord -c /etc…" 3 minutes ago Up 3 minutes 80/tcp blissful_stonebraker
6e51470ddce8 a4a6fd70537b "supervisord -c /etc…" 4 minutes ago Up 4 minutes 80/tcp lucid_ramanujan
2689568ceb6d a4a6fd70537b "supervisord -c /etc…" 4 minutes ago Up 4 minutes 80/tcp keen_mestorf
Where should I be looking for the root of this behavior? Could the container itself be causing it, or is EB configured the wrong way?
(I apologize for being a bit unspecific with details, since I'm not in full control of the environment.)

I eventually realized I had been tampering with some settings and had set monitoring to Basic. Once I put it back to Enhanced, it booted only one container and things started to work again!
In:
Elastic Beanstalk > [my application] > Configuration > monitoring > System: Enhanced.
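If the environment is managed with configuration files rather than the console, the same setting can be applied from an .ebextensions file; a minimal sketch using the documented health-reporting namespace (the file name is arbitrary):
# .ebextensions/healthreporting.config
option_settings:
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced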

Related

Access data from external container

I have some Docker containers running, as shown below:
77beec19859a nginx:latest "nginx -g 'daemon of…" 9 seconds ago Up 4 seconds 0.0.0.0:8000->80/tcp dockerisedphp_web_1
d48461d800e0 php:fpm "docker-php-entrypoi…" 9 seconds ago Up 4 seconds 9000/tcp dockerisedphp_php_1
a6ed456a4cc2 phpmyadmin/phpmyadmin "/docker-entrypoint.…" 12 hours ago Up 12 hours 0.0.0.0:8080->80/tcp sc-phpmyadmin
9e0dda76c110 firewatchdocker_webserver "docker-php-entrypoi…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp 7.4.x-webserver
beba7cb1ee14 firewatchdocker_mysql "docker-entrypoint.s…" 12 hours ago Up 12 hours 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
000e0f21d46e redis:latest "docker-entrypoint.s…" 12 hours ago Up 12 hours 0.0.0.0:6379->6379/tcp sc-redis
The problem is: my PHP script needs to access the data in MySQL inside the mysql container, from the dockerisedphp_web_1 container.
Is this kind of data exchange between containers possible?
I'm using docker-compose to bring everything up.
If you only need to do something with the data, and you don't need it on the host, you can use docker exec.
If you want to copy it to the host, or copy data from the host into a container, you can use docker cp.
You can use docker exec to run the mysql client inside the mysql container, write the results to a file, and then use docker cp to copy the output to the host.
Or you can just do something like docker exec mysql-container-name mysql mysql-args > output on the host.
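For example, something along these lines should work (the container name mysql matches the listing above; the database/table names and the MYSQL_ROOT_PASSWORD variable are assumptions to adapt):
# Run a query inside the mysql container and capture the output on the host:
docker exec mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SELECT * FROM mydb.mytable"' > output.txt
# Or dump a database inside the container, then copy the dump to the host:
docker exec mysql sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" mydb > /tmp/dump.sql'
docker cp mysql:/tmp/dump.sql ./dump.sql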

Why can't I go to localhost using Laradock?

I'm getting the error "This page isn't working".
I ran the following command inside the Laradock directory, yet it's not connecting when I go to localhost:
docker-compose up -d nginx postgres
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19433b191832 laradock_nginx "/bin/bash /opt/star…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp laradock_nginx_1
e7f68a9d841d laradock_php-fpm "docker-php-entrypoi…" 5 minutes ago Up 5 minutes 9000/tcp laradock_php-fpm_1
3c73fedff4aa laradock_workspace "/sbin/my_init" 5 minutes ago Up 5 minutes 0.0.0.0:2222->22/tcp laradock_workspace_1
eefb58598ee5 laradock_postgres "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:5432->5432/tcp laradock_postgres_1
ea559a775854 docker:dind "dockerd-entrypoint.…" 5 minutes ago Up 5 minutes 2375/tcp laradock_docker-in-docker_1
docker-compose ps returns these results:
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------
laradock_docker-in-docker_1 dockerd-entrypoint.sh Up 2375/tcp
laradock_nginx_1 /bin/bash /opt/startup.sh Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
laradock_php-fpm_1 docker-php-entrypoint php-fpm Up 9000/tcp
laradock_postgres_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp
Any help would be much appreciated.
I figured this out. I had edited my docker-compose file's volume to be /local/path/to/default.conf:/etc/nginx/sites-available
This is a problem because nginx looks for a default.conf file, but the volume mapping was mounting my file as sites-available itself. I had assumed the Docker volume would place the file inside the sites-available directory, not replace the directory with a file.
The correct volume syntax should be:
/local/path/to/default.conf:/etc/nginx/sites-available/default.conf
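In docker-compose.yml terms, the fix looks something like this (paths and service name are illustrative):
services:
  nginx:
    volumes:
      # Mount the file onto a file path inside the container,
      # not onto the sites-available directory itself:
      - ./nginx/default.conf:/etc/nginx/sites-available/default.conf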

Unexpected extra container created when deploying a service to a swarm

I'm observing an odd behavior of Swarm when I create a service with Docker in swarm mode.
Basically, I create a service from a private registry, with a bind mount:
docker service create --mount type=bind,src=/some/shared/filesystem/mod_tile,dst=/osm/mod_tile,ro --name="mod_tile" --publish 8082:80 --replicas 3 --with-registry-auth my-registry:5050/repo1/mod_tile
This goes well, and my services are replicated the way I expected.
But when I perform a docker ps on the manager, I see my expected container, as well as an unexpected second container running from the same image, with a different name:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ca33d my-registry:5050/mod_tile:latest "apachectl -D FOREGR…" About a minute ago Up About a minute vigilant_kare.1.fn5u
619e7 my-registry:5050/mod_tile:latest "apachectl -D FOREGR…" 3 minutes ago Up 3 minutes mod_tile.3.dyismrc
4f1ebf demo/demo-tomcat:0.0.1 "./entrypoint.sh" 7 days ago Up 7 days (healthy) 9900/tcp, 0.0.0.0:8083->8080/tcp tomcatgeoserver
d3adf some.repo:5000/manomarks/visualizer:latest "npm start" 8 days ago Up 8 days 8080/tcp supervision_visualizer.1.ok27kbz
673c1 some.repo:5000/grafana/grafana:latest "/run.sh" 8 days ago Up 8 days 3000/tcp supervision_grafana.1.pgqko8
some.repo:5000/portainer:latest "/portainer --extern…" 8 days ago Up 8 days 9000/tcp supervision_portainer.1.vi90w6
bd9b1 some.repo:5000/prom/prometheus:latest "/bin/prometheus -co…" 8 days ago Up 8 days 9090/tcp supervision_prometheus.1.j4gyn02
d8a8b some.repo:5000/cadvisor:0.25.0 "/usr/bin/cadvisor -…" 8 days ago Up 8 days 8080/tcp supervision_cadvisor.om7km
bd46d some.repo:5000/prom/node-exporter:latest "/bin/node_exporter …" 8 days ago Up 8 days 9100/tcp supervision_nodeexporter.om7kmd
04b53 some.repo:5000/sonatype/nexus3 "sh -c ${SONATYPE_DI…" 9 days ago Up 2 hours 0.0.0.0:5050->5050/tcp, 0.0.0.0:8081->8081/tcp nexus_registry
At first, I thought it was a leftover container from previous attempts, so I stopped it... but a few seconds later, it was up again! No matter how many times I stop it, it gets restarted.
So I guess it is there on purpose... but I don't understand: I already have my 3 replicas running (I checked on all nodes), and even when I promote another node, the extra container appears only on the leader...
It may come from one of my other containers (used for supervision), but so far I haven't been able to figure out which one...
Does anyone have an idea why this extra container is created?
EDIT 05/07
Here is the result of docker service ps for the mod_tile service. The 3 replicas are there, one on each node. The extra container does not appear in this output.
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
c77gc mod_tile.1 my-registry:5050/mod_tile:latest VM3 Running Running 15 hours ago
u7465 mod_tile.2 my-registry:5050/mod_tile:latest VM4 Running Running 15 hours ago
dyism mod_tile.3 my-registry:5050/mod_tile:latest VM2 Running Running 15 hours ago
It looks like you have a second service defined with the name "vigilant_kare", probably auto-named because no name was provided.
Swarm mode automatically replaces a stopped or deleted container to return you to the target state. To delete a container managed by swarm mode, you need to delete the service that manages it:
docker service rm vigilant_kare
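To confirm where the stray container comes from before removing anything, you can list the services the swarm manages (vigilant_kare is the name taken from the docker ps output above):
docker service ls                   # every service in the swarm; look for vigilant_kare
docker service ps vigilant_kare     # which node its task runs on, and its task history
docker service rm vigilant_kare     # remove the service so swarm stops recreating the container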

Docker - run deployed asp.net application

How do I run this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c2cafa3cb28b voiconshop:dev "tail -f /dev/null" 8 minutes ago Up 8 minutes 5000/tcp, 0.0.0.0:32769->80/tcp dockercompose5647427199741822447_voiconshop_1
a14af67cb5f1 e898d5096181 "dotnet VoiConShop..." 4 hours ago Up 4 hours 8889/tcp, 0.0.0.0:8080->2000/tcp loving_volhard
I opened Chrome and entered http://localhost:8889/ or http://localhost:5000/.
However, the site can't be reached.
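Judging by the PORTS column above, 8889/tcp and 5000/tcp are only exposed inside the containers, not published to the host; the host-published ports are 32769 (mapped to container port 80) and 8080 (mapped to container port 2000). One way to check which host port to browse to, using the container name from the listing:
docker port dockercompose5647427199741822447_voiconshop_1   # prints the host->container port mappings
Assuming the apps actually listen on those container ports, http://localhost:32769/ and http://localhost:8080/ are the URLs worth trying.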

dokku error: cannot find entity for app

I'm trying to restart Dokku after a server reboot. Everything I do (redis:link, dokku deploy, pushing a new version of the app from my computer...) fails with a "could not find entity" error message.
2014/07/16 09:02:45 Error: Could not find entity for domain_freek
I feel like it might have something to do with the Docker containers or images, so I tried restarting the latest Docker container for the app, but no dice. Dokku still gives me a "could not find entity" error.
root@domainfreek3:/home/dokku/domain_freek# docker restart `cat /home/dokku/domain_freek/CONTAINER`
b5823fa2703f
root@domainfreek3:/home/dokku/domain_freek# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b5823fa2703f dokku/domain_freek:latest /bin/bash -c '/start 6 minutes ago Up 10 seconds 5000/tcp compassionate_tesla
f6d165a32e92 2c035b41f308 /bin/bash -c '/start 27 minutes ago Up 25 minutes 5000/tcp boring_heisenberg
96c33f5458de jezdez/redis:latest /usr/bin/redis-serve 27 minutes ago Up 27 minutes 6379/tcp redis_domain_freek
f76d9f0d944b postgresql/domain_freek:latest /usr/bin/start_pgsql 24 hours ago Up 18 hours 0.0.0.0:49153->5432/tcp sick_einstein
root@domainfreek3:/home/dokku/domain_freek# dokku deploy domain_freek
-----> Checking status of PostgreSQL
Found image postgresql/domain_freek database
Checking status... ok.
2014/07/16 09:04:47 Error: Could not find entity for domain_freek
root@domainfreek3:/home/dokku/domain_freek# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f6d165a32e92 2c035b41f308 /bin/bash -c '/start 27 minutes ago Up 25 minutes 5000/tcp boring_heisenberg
96c33f5458de jezdez/redis:latest /usr/bin/redis-serve 27 minutes ago Up 27 minutes 6379/tcp redis_domain_freek
f76d9f0d944b postgresql/domain_freek:latest /usr/bin/start_pgsql 24 hours ago Up 18 hours 0.0.0.0:49153->5432/tcp sick_einstein
Any advice would be appreciated!
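One thing that may be worth checking after a reboot (a guess based on the CONTAINER file shown above, not a confirmed fix): whether the container ID Dokku recorded for the app still refers to an existing container:
cat /home/dokku/domain_freek/CONTAINER    # the ID Dokku last recorded for the app
docker ps -a | grep domain_freek          # containers that actually exist after the reboot
If the recorded ID no longer matches a live container, redeploying the app so Dokku writes fresh state is a common way out.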
