Access data from external container - docker

I have some docker images running like below:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77beec19859a nginx:latest "nginx -g 'daemon of…" 9 seconds ago Up 4 seconds 0.0.0.0:8000->80/tcp dockerisedphp_web_1
d48461d800e0 php:fpm "docker-php-entrypoi…" 9 seconds ago Up 4 seconds 9000/tcp dockerisedphp_php_1
a6ed456a4cc2 phpmyadmin/phpmyadmin "/docker-entrypoint.…" 12 hours ago Up 12 hours 0.0.0.0:8080->80/tcp sc-phpmyadmin
9e0dda76c110 firewatchdocker_webserver "docker-php-entrypoi…" 12 hours ago Up 12 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp 7.4.x-webserver
beba7cb1ee14 firewatchdocker_mysql "docker-entrypoint.s…" 12 hours ago Up 12 hours 0.0.0.0:3306->3306/tcp, 33060/tcp mysql
000e0f21d46e redis:latest "docker-entrypoint.s…" 12 hours ago Up 12 hours 0.0.0.0:6379->6379/tcp sc-redis
The problem is: my PHP script needs to access the data in the MySQL database inside the mysql container from the container dockerisedphp_web_1.
Is this kind of data exchange between containers possible?
I'm using docker-compose to bring it all up.

If you only need to do something with the data, and you don't need it on the host, you can use docker exec.
If you want to copy it to the host, or copy data from the host into a container, you can use docker cp.
You can use docker exec to run the mysql client inside the mysql container, write the results to a file, and then use docker cp to copy the output to the host.
Or you can just run something like docker exec mysql-container-name mysql mysql-args > output on the host.
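For example, a minimal sketch of that approach against the mysql container from the listing above (it assumes the MYSQL_ROOT_PASSWORD environment variable is set in the container, and mydb and users are placeholder names; adjust them to your schema):

# run a query inside the mysql container and capture the result on the host
docker exec mysql sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" mydb -e "SELECT * FROM users;"' > output.txt

# or write a dump inside the container and copy it to the host afterwards
docker exec mysql sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" mydb > /tmp/dump.sql'
docker cp mysql:/tmp/dump.sql ./dump.sql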

Related

Airflow Docker Unhealthy triggerer

I'm trying to set up Airflow on my machine using Docker and the docker-compose file provided by Airflow here: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#docker-compose-yaml
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4d8de8f7782 apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8080/tcp airflow_airflow-scheduler_1
3315f125949c apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 8080/tcp airflow_airflow-worker_1
2426795cb59f apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp airflow_airflow-webserver_1
cf649cd645bb apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (unhealthy) 8080/tcp airflow_airflow-triggerer_1
fa6b181113ae apache/airflow:2.2.0 "/usr/bin/dumb-init …" About a minute ago Up About a minute (healthy) 0.0.0.0:5555->5555/tcp, :::5555->5555/tcp, 8080/tcp airflow_flower_1
b6e05f63aa2c postgres:13 "docker-entrypoint.s…" 2 minutes ago Up 2 minutes (healthy) 5432/tcp airflow_postgres_1
177475be25a3 redis:latest "docker-entrypoint.s…" 2 minutes ago Up 2 minutes (healthy) 6379/tcp airflow_redis_1
I followed all the steps described in that URL, and every Airflow component is working great, but the Airflow triggerer shows an unhealthy status.
I'm fairly new to Docker, I only know the basics, and I don't really know how to debug this. Can anyone help?
Try following all the steps on their website, including:
mkdir ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
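If the triggerer still reports unhealthy after that, one way to see what its healthcheck is failing on (a generic Docker debugging sketch, not from the Airflow docs; the container name is taken from the docker ps output above):

# show the recent healthcheck results for the triggerer container
docker inspect --format '{{json .State.Health}}' airflow_airflow-triggerer_1

# and check its logs for errors
docker logs airflow_airflow-triggerer_1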
I don't know why, but it works after that, although the triggerer still shows as unhealthy.

Docker stack deploy cannot deploy service to different node in swarm cluster

I am trying to deploy the application on multiple instances from the master node. After deploying, the application runs only on the master node; the service is never deployed to a different node in the Docker swarm cluster.
Here is my docker-compose file:
version: "3"
services:
  mydb:
    image: localhost:5000/mydb-1
    environment:
      TZ: "Asia/Colombo"
    ports:
      - 9042:9042
    volumes:
      - /root/data/cdb:/var/lib/cassandra
      - /root/logs/cdb:/var/log/cassandra
I then scaled the service with the command docker service scale mydb-1_mydb=5:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7fxxxxxxxx7 localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 5 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.2.q77i258vn2xynlgein9s7tdpb
34fcxxxx14bd localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 4 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.1.s2mzitj8yzb0zo7spd3dmpo1j
9axxxx1efb localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 8 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.3.zgyev3p4qdg7hf7h67oeedutr
f14xxxee59 localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 2 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.4.r0themodonzzr1izdbnppd5bi
e3xxx16d localhost:5000/mydb-1:latest "docker-entrypoint.sh" 16 seconds ago Up 6 seconds 7000-7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp mydb-1_mydb.5.bdebi4
All replicas are running only on the master node. Does anyone know the issue?
Your image appears to be locally built, with a name that cannot be resolved on other nodes (localhost:5000/mydb-1). In swarm mode, images should be pushed to a registry, and that registry needs to be accessible by all nodes. You can run your own registry service on one of your nodes (there is an official registry image for this), or you can push to Docker Hub. If the registry is private, you also need to perform a docker login on the node running the stack deploy and include the registry credentials in that deploy, e.g.
docker stack deploy -c compose.yml --with-registry-auth stack-name
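A minimal sketch of getting the image into a registry every node can reach (registry-host is a placeholder for a hostname or IP that resolves on all nodes; localhost:5000 only ever resolves to each node's own machine):

# re-tag the locally built image with the registry's address and push it
docker tag localhost:5000/mydb-1 registry-host:5000/mydb-1
docker push registry-host:5000/mydb-1

# then reference registry-host:5000/mydb-1 as the image: in the compose file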
Thanks. I found the issue and fixed it.
volumes:
  - /root/data/cdb:/var/lib/cassandra
  - /root/logs/cdb:/var/log/cassandra
If you bind mount a host path into your service’s containers, the path must exist on every swarm node.
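For example, with the bind-mount sources from the compose file above, something like this would need to be run on every node that might host a replica (a sketch; the paths are the ones shown above):

# create the bind-mount source directories on each swarm node
mkdir -p /root/data/cdb /root/logs/cdb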
docker service scale zkr_zkr=2
After scaling up, the service is now running on my node:
root@beta-node-1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f9bxxx15861 localhost:5000/zookeeper:latest "/docker-entrypoint.…" 40 minutes ago Up 40 minutes 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zkr_zkr.3.qpr8qp5y
01dxxxx64bc localhost:5000/zookeeper:latest "/docker-entrypoint.…" 40 minutes ago Up 40 minutes 2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp zkr_zkr.1.g2uee5j

Why can't I go to localhost using Laradock?

I'm getting the error: This page isn’t working.
I ran the following command inside the Laradock directory, yet it's not connecting when I go to localhost: docker-compose up -d nginx postgres
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19433b191832 laradock_nginx "/bin/bash /opt/star…" 5 minutes ago Up 5 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp laradock_nginx_1
e7f68a9d841d laradock_php-fpm "docker-php-entrypoi…" 5 minutes ago Up 5 minutes 9000/tcp laradock_php-fpm_1
3c73fedff4aa laradock_workspace "/sbin/my_init" 5 minutes ago Up 5 minutes 0.0.0.0:2222->22/tcp laradock_workspace_1
eefb58598ee5 laradock_postgres "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:5432->5432/tcp laradock_postgres_1
ea559a775854 docker:dind "dockerd-entrypoint.…" 5 minutes ago Up 5 minutes 2375/tcp laradock_docker-in-docker_1
docker-compose ps returns these results:
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------------
laradock_docker-in-docker_1 dockerd-entrypoint.sh Up 2375/tcp
laradock_nginx_1 /bin/bash /opt/startup.sh Up 0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
laradock_php-fpm_1 docker-php-entrypoint php-fpm Up 9000/tcp
laradock_postgres_1 docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
laradock_workspace_1 /sbin/my_init Up 0.0.0.0:2222->22/tcp
Any help would be much appreciated.
I figured this out. I had edited my docker-compose volume to be /local/path/to/default.conf:/etc/nginx/sites-available
This is a problem because nginx looks for a default.conf file, but that volume mapping mounted the local file as sites-available itself. I had assumed Docker would place the file inside the sites-available directory, not turn the mount target into a file.
The correct volume syntax is:
/local/path/to/default.conf:/etc/nginx/sites-available/default.conf
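In docker-compose form, the corrected stanza would look like this (a sketch; the nginx service name comes from the docker-compose up command above, and the local path is a placeholder):

services:
  nginx:
    volumes:
      - /local/path/to/default.conf:/etc/nginx/sites-available/default.conf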

Docker Container Not accessible after commit

I just committed a Docker container and get the following listing:
[root@kentuckianatradenew log]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19d7479f4f66 newsslcyclos "catalina.sh run" 12 minutes ago Up 12 minutes 0.0.0.0:80->80/tcp, 8080/tcp n2n33
88175386c0da cyclos/db "docker-entrypoint.s…" 26 hours ago Up 21 minutes 5432/tcp cyclos-db
But when I browse to it by IP, it is not accessible, while the same setup was fine before the commit.
docker port 19d7479f4f66
80/tcp -> 0.0.0.0:80

Elastic Beanstalk & Docker: problem with elastic beanstalk spawning multiple docker containers

I'm forced to use Elastic Beanstalk (EB) and Docker for deployment. When I build and run my container locally, it boots up and runs well. I'm using supervisord to boot some Ruby code (Clockwork and Rails/Puma).
When deploying using EB, I see EB spawn several consecutive containers until everything just chokes:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
232bbe498977 a4a6fd70537b "supervisord -c /etc…" About a minute ago Up About a minute 80/tcp silly_williams
a9e21774575e a4a6fd70537b "supervisord -c /etc…" 2 minutes ago Up 2 minutes 80/tcp trusting_murdock
945f51ef510f a4a6fd70537b "supervisord -c /etc…" 3 minutes ago Up 3 minutes 80/tcp blissful_stonebraker
6e51470ddce8 a4a6fd70537b "supervisord -c /etc…" 4 minutes ago Up 4 minutes 80/tcp lucid_ramanujan
2689568ceb6d a4a6fd70537b "supervisord -c /etc…" 4 minutes ago Up 4 minutes 80/tcp keen_mestorf
Where should I be looking for the root of this behavior? Could the container be causing it, or is EB configured the wrong way?
(I apologize that I'm a bit unspecific with details, since I'm not in full control of the environment.)
I eventually realized I had been tampering with some settings and had set monitoring to Basic. Once it was set to Enhanced, EB booted only one container and things started to work again!
The setting is under:
Elastic Beanstalk > [my application] > Configuration > Monitoring > System: Enhanced.
