docker compose connect ssh tunnel server on load container - docker

I usually do a:
ssh -o StrictHostKeyChecking=no -L 33333:remote_server user@domain.name -i /usr/lib/key.pem
from my local computer to a remote aws server.
Now I would like to connect automatically from my ssh-container service when the container starts. Is that possible? If so, what am I doing wrong, and how can I achieve this?
The container starts, and if I execute "ssh -o StrictHostKeyChecking=no -L 33333:remote_server user@domain.name -i /usr/lib/key.pem" from its shell it does connect to my remote server.
The reason I'm trying this is that I have another service, "phpmyadmin-container", which requires that SSH connection to reach the MySQL database through the tunnel.
Aside from that, I have another problem to solve with phpMyAdmin: if I go into the ssh-container's shell I can see that I'm not connected.
Any help would be appreciated.
My yml file looks like this:
version: '3'
services:
  phpmyadmin-container:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: 127.0.0.1
      PMA_PORT: 33333
      PMA_USERNAME: user
      PMA_USERNAME_PASSWORD: pass
    ports:
      - '8080:80'
  ssh-container:
    image: nazarpc/webserver:ssh-v1
    restart: always
    volumes:
      # this is where I have the key.pem (it could be anywhere, am I right?)
      - /usr/lib:/usr/lib
    command: ssh -o StrictHostKeyChecking=no -L 33333:remote_server user@domain.name -i /usr/lib/key.pem
    # I have also tried it as a list: ["ssh", "-o", "StrictHostKeyChecking=no", ...]
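For reference, a minimal sketch of how the ssh-container service might look if the tunnel is meant to hold the container in the foreground; the -N flag keeps ssh running without executing a remote command, the 0.0.0.0 bind makes the forwarded port reachable from other containers, and the remote MySQL port (3306 here) is an assumption that may need adjusting:

  ssh-container:
    image: nazarpc/webserver:ssh-v1
    restart: always
    volumes:
      - /usr/lib:/usr/lib
    # -N: do not run a remote command, just keep the tunnel open so the container stays up
    # 0.0.0.0: listen on all interfaces so other containers can reach port 33333
    # 3306 is assumed to be the MySQL port on the remote side
    command: ssh -N -o StrictHostKeyChecking=no -L 0.0.0.0:33333:remote_server:3306 -i /usr/lib/key.pem user@domain.name

Note that for phpmyadmin-container to actually use the tunnel, PMA_HOST would likely need to point at ssh-container rather than 127.0.0.1, since each service runs in its own network namespace.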

Related

Docker compose can't access ports of other containers

I have 2 containers running with docker compose. One of the containers is executing a shell script which should check if the other container has already started and is running on port 9990.
Even though the container starts, the shell script echoes nothing.
keycloak:
  image: jboss/keycloak:latest
  volumes:
    - ./imports/cache_reload/disable-theme-cache.cli:/opt/jboss/startup-scripts/disable-theme-cache.cli
    - ./imports/themes/custom/:/opt/jboss/keycloak/themes/custom-theme/
    - ./imports/realm/realm-export.json:/opt/jboss/realms/custom-import.json
  environment:
    DB_VENDOR: MYSQL
    DB_ADDR: mysql
    DB_DATABASE: keycloak
    DB_USER: keycloak
    DB_PASSWORD: password
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: Pa55w0rd
  ports:
    - 8080:8080
  depends_on:
    - mysql
keycloak_installer:
  image: solr:6.6-alpine
  volumes:
    - ./imports/scripts/import-realm.sh:/docker-entrypoint-initdb.d/init.sh
  depends_on:
    - keycloak
The shell script is the following:
echo "MOIN LEUDE TRYMACS HIER!"
while ! nc -z localhost 9990; do
    sleep 1
    echo "Waiting for keycloak server startup 9990..."
    echo "$(nc -z localhost 9990)"
done
The first echo is printed, but then nothing else is printed.
The keycloak container is running on port 9990.
Please help, thanks
You have to understand a bit more about how networking works in Docker Compose.
To solve your issue, you need to:
Add a network in your docker-compose file for each container (there is a default network, but to understand the mechanism you can define it explicitly). It should look like this (under ports, for example) for the first container (named keycloak):
ports:
  - 8080:8080
networks:
  - keycloak_network
On the second container (named keycloak_installer), add the same network; containers on a shared network can reach each other's ports without publishing them to the host:
depends_on:
  - keycloak
networks:
  - keycloak_network
In your script, address the first container explicitly by its service name, which is now resolvable over the shared network. Change your check to:
nc -z keycloak 9990
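Putting it together, a sketch of the relevant parts of a version 3 compose file; the top-level networks: block is also needed so Compose actually creates keycloak_network:

version: '3'
services:
  keycloak:
    image: jboss/keycloak:latest
    ports:
      - 8080:8080
    networks:
      - keycloak_network
    depends_on:
      - mysql
  keycloak_installer:
    image: solr:6.6-alpine
    volumes:
      - ./imports/scripts/import-realm.sh:/docker-entrypoint-initdb.d/init.sh
    depends_on:
      - keycloak
    networks:
      - keycloak_network

networks:
  keycloak_network:
    driver: bridge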

Configure docker volumes to share data across host and containers

I am stuck trying to configure Docker volumes to share files between my host and my container so the container can use them. Let me explain.
I have a Rails Docker app with Puma as the web server. I want Puma to be able to see and use the SSL .key and .crt files, so for this project I am also using docker-compose in "production mode", but I do not know how to make this work.
My setup is this:
The Ubuntu 18.04 production server host has the SSL files inside /home/ubuntu/my_app_keys; the containers also run on this host.
/home/ubuntu/docker-compose.yml
version: '3'
services:
  postgres:
    image: postgres:10.5
    environment:
      POSTGRES_DB: my_app_production
    env_file:
      - ~/production.env
  redis:
    image: redis:4.0.11
  web:
    image: my_app:latest
    command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' -e production
    ports:
      - '3000:3000'
    volumes:
      - /home/ubuntu/my_app_keys
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
  sidekiq:
    image: my_app_sidekiq:latest
    command: bundle exec sidekiq -C config/sidekiq.yml
    depends_on:
      - postgres
      - redis
    env_file:
      - ~/production.env
    restart: always
So, as you can see, command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt' is looking for the SSL files in /home/ubuntu/my_app_keys. When I execute docker-compose up, Puma cannot find the SSL files and exits with:
/usr/local/bundle/gems/puma-3.9.1/lib/puma/minissl.rb:180:in `key=': No such key file '/home/ubuntu/my_app_keys/server.key' (ArgumentError)
I think it is because key=/home/ubuntu/my_app_keys/server.key&cert=/home/ubuntu/my_app_keys/server.crt point to paths in the container context, but I have the cert and key in my host context.
So I included a volume in docker-compose in order to bind-mount the files:
volumes:
  - /home/ubuntu/my_app_keys
but without luck, same error.
In the container context my app lives in the /var/www/my_app directory, so I tried to specify an absolute path (I imagined the SSL files could not be shared because they were not in the same directory where my app lived), so I added, as the compose-file docs say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
and changed in the compose file:
command: bundle exec rails server -p 3000 -b 'ssl://127.0.0.1:3000?key=server.key&cert=server.crt' -e
When I execute docker-compose up, my web service exits with an error:
web | Could not locate Gemfile or .bundle/ directory
The only way the web service runs is with (but then no SSL files exist):
volumes:
  - /home/ubuntu/my_app_keys
So, I do not know what to do now. Any help?
When your Docker Compose YAML file says:
volumes:
  - /home/ubuntu/my_app_keys
It means, "make /home/ubuntu/my_app_keys in container space persist across restarts of the container; it will start off empty unless the Dockerfile did something special; it's not connected to any specific host content".
When you say:
volumes:
  - /home/ubuntu/my_app_keys:/var/www/my_app
It means, "totally replace the contents of /var/www/my_app in container space with the contents of /home/ubuntu/my_app_keys on the host". (The path names in host and container space don't need to be the same.)
As a bonus question, when you say:
rails server -b 'ssl://127.0.0.1:3000?...'
It means, "only listen for inbound connections on port 3000 initiated from within this Docker container; don't accept any connections from outside the container at all, whether from the same physical host, other containers, or elsewhere."
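To tie those two points together, a sketch of what the web service could look like, assuming the keys are mounted into their own directory inside the container (so they do not shadow the app code at /var/www/my_app) and Puma binds to 0.0.0.0 so the published port is reachable from outside the container; the container path /var/www/my_app_keys is an assumption:

  web:
    image: my_app:latest
    # bind to 0.0.0.0 so connections from outside the container are accepted
    command: bundle exec rails server -p 3000 -b 'ssl://0.0.0.0:3000?key=/var/www/my_app_keys/server.key&cert=/var/www/my_app_keys/server.crt' -e production
    ports:
      - '3000:3000'
    volumes:
      # bind-mount the host key directory to a separate path inside the container
      - /home/ubuntu/my_app_keys:/var/www/my_app_keys:ro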

Connect from one Docker container to the other one

I am running a Java app inside a Docker container which is supposed to connect to MySQL running in another container. I have tried multiple options suggested in the forums, but nothing really works. Here is my Docker Compose file:
version: "3"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: app1
    environment:
      - DB_HOST=Imrans-MacBook-Pro.local
      - DB_PORT=3306
    ports:
      - 8080:8080
    networks:
      - backend
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7.20
    hostname: mysql
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=app1
    ports:
      - 3306:3306
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
    networks:
      - backend
networks:
  backend:
    driver: bridge
Where DB_HOST=Imrans-MacBook-Pro.local is my laptop's name. This did not work. Some suggest that the container name can be used, so I tried DB_HOST=mysql; that never worked either.
The only thing that works, from time to time, is passing the laptop's IP address, which is not what I want to do. So, what is a good way to create communication between those containers?
MySQL is running in a container, so there are two things that you should consider here:
If MySQL is running in a container then you will need to link the app container to the mysql container. This will allow them to talk to each other using Docker's inter-container communication. The containers talk to each other using hostnames to resolve their respective internal IP addresses. Later in my answer I will show you how to get the two containers to communicate with each other using a compose file.
The mysql container should make use of a Docker volume to store the database. This will allow you to store the database and related files on the file system of the host (the server or machine where the containers are running). The Docker volume will then be mounted as a directory in the container, so the container can read and write to a directory on the machine where the containers are running. This means that even if the containers are all deleted or removed, you will still have the database data persist. Here is a nice beginner-friendly article on Docker volumes and using them with MySQL:
https://severalnines.com/blog/mysql-docker-containers-understanding-basics
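For completeness, a sketch of how a named volume for the mysql service could look in a compose file (mysql_data is just an example name; /var/lib/mysql is the directory where MySQL stores its data inside the container):

services:
  mysql:
    image: mysql:5.7.20
    volumes:
      # persist the database files in a Docker-managed volume on the host
      - mysql_data:/var/lib/mysql

volumes:
  mysql_data: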
Container communication using Docker Compose:
You have the containers "app" and "mysql". You want to be able to access "app" on localhost, and you want "app" to be able to connect to mysql. How are you going to do this?
1. You need to expose a port for the "app" container so you can access it on localhost. Docker containers have their own internal network, and it is closed to you unless you expose some ports with Docker.
2. You need to link the "mysql" container to "app" without exposing "mysql"'s ports to the rest of the world.
This config should work for what you want to achieve:
version: "2"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: app1:latest
    links:
      - mysql
    environment:
      - DB_HOST=mysql
      # This is the hostname that app will reach the mysql container on.
      # If you go into the app container with:
      #   docker exec -it <app container id> bash
      #   apt-get update -y && apt-get install iputils-ping -y
      # then you should be able to ping the mysql container with:
      #   ping -c 2 mysql
      - DB_PORT=3306
    ports:
      - 8080:8080
      # You will access "app" on localhost:8080 in your browser, if this is running on your own machine.
  mysql: # hostname actually gets set here so no need to set it later
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=app1
      # Remember to use a volume if you would like this container's data to persist or if you would like
      # to restore a database backup.
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
Now you can just start it up with:
$ docker-compose up
If you ran this before then just make sure to run this first before running docker-compose up:
$ docker-compose down
Let me know if that helps.
I have, in the past, gotten this to work without explicitly setting the networking part in Docker Compose. Because the services in a Docker Compose file are placed on a shared Docker network with each other, you really shouldn't have to do anything extra to get this to work: by default you should be able to attach into the container for your Spring app, ping mysql, and have it work.
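For example, assuming the Compose service is named app and ping is installed in the image, a quick check from the host would be:

docker-compose exec app ping -c 2 mysql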
DB host should be localhost or 127.0.0.1

How to connect to postgres database with docker container

I'm trying to use pgAdmin to connect to the postgres container in my Docker setup, but I can't seem to get it to connect. Here is my docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/livingrecipe
    ports:
      - '3000:3000'
    env_file:
      - .env
    links:
      - postgres
      - elasticsearch
  postgres:
    image: postgres
  elasticsearch:
    image: elasticsearch
I have searched around and it looks like localhost:5432 doesn't work unless you're inside the VM, but I can't find the VM IP. Looking in Kitematic under the ports, this is what it shows:
I've tried specifying the ports in the docker-compose.yml file, but then I get an annoying error that says the ports are already allocated, and I can't figure out for the life of me what is using those ports, so I'm not sure what's going on there. Any help pointing me in the right direction, either getting pgAdmin to work or another way to access the database through a GUI like pgAdmin, would be appreciated.
You could run this in your shell:
docker ps
This will give you the list of containers running on your machine. Choose the container ID of the postgres container and type in the shell:
docker inspect <container_id>
This will give you a JSON document with info about your container. Find the IPAddress key. In pgAdmin, use that IP and the port you've specified before (5432, I guess).
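As a shortcut, the same address can be pulled out directly with an inspect format string (replace <container_id> with the ID shown by docker ps):

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>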

Local hostnames for Docker containers

Beginner Docker question here,
So I have a development environment in which I'm running a modular app; it works using Docker Compose to run 3 containers: server, client, database.
The docker-compose.yml looks like this:
#############################
# Server
#############################
server:
  container_name: server
  domainname: server.dev
  hostname: server
  build: ./server
  working_dir: /app
  ports:
    - "3000:3000"
  volumes:
    - ./server:/app
  links:
    - database

#############################
# Client
#############################
client:
  container_name: client
  domainname: client.dev
  hostname: client
  image: php:5.6-apache
  ports:
    - "80:80"
  volumes:
    - ./client:/var/www/html

#############################
# Database
#############################
database:
  container_name: database
  domainname: database.dev
  hostname: database
  image: postgres:9.4
  restart: always
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=dbdev
    - PG_TRUST_LOCALNET=true
  ports:
    - "5432:5432"
  volumes:
    - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
You can see I'm assigning a .dev domainname to each one. This works fine for seeing one machine from another one (the Docker internal network); for example, here I'm pinging server.dev from client.dev's CLI:
root@client:/var/www/html# ping server.dev
PING server.dev (127.0.53.53): 56 data bytes
64 bytes from 127.0.53.53: icmp_seq=0 ttl=64 time=0.036 ms
This works great internally, but not on my host OS network.
For convenience, I would like to assign domains in MY local network, not the Docker containers' network, so that I can, for example, type client.dev in my browser's URL bar and load the Docker container.
Right now, I can only access them if I use the Docker IP, which is dynamic:
client: 192.168.99.100:80
server: 192.168.99.100:3000
database: 192.168.99.100:5432
Is there an automated/convenient way to do this that doesn't involve manually adding the IP to my /etc/hosts file?
BTW I'm on OSX if that has any relevance.
Thanks!
Edit: I found this Github issue which seems to be related: https://github.com/docker/docker/issues/2335
As far as I understood, they seem to say that this is not available out of the box, and they suggest external tools like:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Is that correct? And if so, which one should I go for in my particular scenario?
OK, so since it seems that there is no native way to do this with Docker, I finally opted for this alternate solution from Ryan Armstrong, which consists of dynamically updating the /etc/hosts file.
I chose this since it was convenient for me: it works as a script, and I already had a startup script, so I could just append this function to it.
The following example creates a hosts entry named docker.local which
will resolve to your docker-machine IP:
update-docker-host(){
    # clear existing docker.local entry from /etc/hosts
    sudo sed -i '' '/[[:space:]]docker\.local$/d' /etc/hosts
    # get ip of running machine
    export DOCKER_IP="$(echo ${DOCKER_HOST} | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
    # update /etc/hosts with docker machine ip
    [[ -n $DOCKER_IP ]] && sudo /bin/bash -c "echo \"${DOCKER_IP} docker.local\" >> /etc/hosts"
}

update-docker-host
This will automatically add or update the /etc/hosts line on my host OS when I start the Docker machine through my startup script.
Anyway, as I found out during my research, apart from editing the hosts file, you could also solve this problem by setting up a custom DNS server.
I also found several projects on GitHub which apparently aim to solve this problem, although I didn't try them:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Extending @eduwass's own answer, here's what I did manually (without a script).
1. As mentioned in the question, define domainname: myapp.dev and hostname: www in the docker-compose.yml file
2. Bring up your Docker containers as normal
3. Run docker-compose exec client cat /etc/hosts to get the container's hosts file (where client is your service name)
(Example output: 172.18.0.6 www.myapp.dev)
4. Open your local (host machine) /etc/hosts file and add that line: 172.18.0.6 www.myapp.dev
If your Docker service container changes IPs or does anything fancy you will want a more complex solution, but this is working for my simple needs at the moment.
Another solution would be to use a browser with a proxy extension that sends the requests through a proxy container that knows how to resolve the domains. If you are considering jwilder/nginx-proxy for production mode, then your issue can be easily solved with mitm-nginx-proxy-companion.
Here is an example based on your original stack:
version: '3.3'
services:
  server:
    build: ./server
    working_dir: /app
    volumes:
      - ./server:/app
  client:
    environment:
      - VIRTUAL_HOST=client.dev
    image: php:5.6-apache
    volumes:
      - ./client:/var/www/html
  database:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=dbdev
      - PG_TRUST_LOCALNET=true
    volumes:
      - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Run docker-compose up
Add a proxy extension to your browser, with proxy address being 127.0.0.1:8080
Access http://client.dev
The request will follow the route:
Access a local development domain in a browser
The proxy extension forwards that request to mitm-nginx-proxy-companion instead of the “real” internet
mitm-nginx-proxy-companion tries to resolve the domain name through the dns server in the same container
If the domain is not a “local” one, it will forward the request to the “real” internet
But if the domain is a “local” one, it will forward the request to the nginx-proxy
The nginx-proxy in its turn forwards the request to the appropriate container that includes the service we want to access
Side notes:
links was removed, as it is outdated and has been replaced by Docker networks
you don't need to add domain names to the server and database containers. client will be able to access them on the server and database hostnames because they are all in the same network (similar to what links did previously)
you don't need to use ports on the server and database containers, because ports only publishes ports to the host through 127.0.0.1. PHP in the client container only makes "back-end" requests to other containers, and because those containers are in the same network you can already reach them with database:5432 and server:3000. The same goes for server <-> database connections.
I am the author of mitm-nginx-proxy-companion
To make a whole domain resolve to localhost you can use dnsmasq. In this case, if you choose the domain .dev, any subdomain will point to your container. But you should know about the problems with the .dev zone (it is now a real TLD owned by Google, and browsers force HTTPS on it).
Or you can use a bash script to launch your docker-compose which, on start, adds a line to /etc/hosts and removes it again when you kill the process:
#!/usr/bin/env bash
sudo sed -i '1s;^;127.0.0.1 example.dev\n;' /etc/hosts
trap 'sudo sed -i "/example.dev/d" /etc/hosts' 2
docker-compose up
My Bash script with aliases, without docker-machine
Based on http://cavaliercoder.com/blog/update-etc-hosts-for-docker-machine.html
#!/bin/bash

# alias
declare -A aliasArr
aliasArr[docker_name]="alias1,alias2"

# clear existing *.docker.local entries from /etc/hosts
sudo sed -i '/\.docker\.local$/d' /etc/hosts

# iterate over each machine
docker ps -a --format "{{.Names}}" \
  | while read -r MACHINE; do
    MACHINE_IP="$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${MACHINE} 2>/dev/null)"
    if [[ ${aliasArr[$MACHINE]} ]]
    then
        DOMAIN_NAME=$(echo ${aliasArr[$MACHINE]} | tr "," "\n")
    else
        DOMAIN_NAME=( ${MACHINE} )
    fi
    for addr in $DOMAIN_NAME
    do
        echo "add ${MACHINE_IP} ${addr}.docker.local"
        [[ -n $MACHINE_IP ]] && sudo /bin/bash -c "echo \"${MACHINE_IP} ${addr}.docker.local\" >> /etc/hosts"
        export no_proxy=$no_proxy,$MACHINE_IP
    done
done
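A typical way to use it (assuming you save the script as update-docker-hosts.sh, a name made up here) is to run it after bringing the containers up:

docker-compose up -d
./update-docker-hosts.sh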
