I am trying to use a Docker container to set up an SSH tunnel to a remote database that is only reachable via SSH. I have a Docker network with several containers and want to make the database available to all the containers in the network.
The Dockerfile for the SSH container looks like this:
FROM debian:stable
RUN apt-get update && apt-get -y --force-yes install openssh-client autossh postgresql-client
COPY .ssh /root/.ssh
RUN chown root:root /root/.ssh/config
EXPOSE 12345
ENTRYPOINT ["/usr/bin/autossh", "-M", "0", "-v", "-T", "-N", "-4", "-L", "12345:localhost:1234", "user@remotedb" ]
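(Note: by default ssh binds a -L forward to the loopback interface inside the container, so only processes in that container can reach it; a variant that binds on all interfaces would look like the following, untested sketch:)
ENTRYPOINT ["/usr/bin/autossh", "-M", "0", "-v", "-T", "-N", "-4", "-L", "0.0.0.0:12345:localhost:1234", "user@remotedb" ]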
Inside the .ssh directory are my keys and the config file, which looks like this:
Host remotedb
    StrictHostKeyChecking no
    ServerAliveInterval 30
    ServerAliveCountMax 3
The tunnel itself works in this container, meaning I can access the database from inside it as localhost:12345.
Now I want to also access it from the other containers in the same network.
My docker-compose.yml looks like this (I commented out some trials):
version: '2'
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 10.12.0.0/16
          gateway: 10.12.0.1
services:
  service_1:
    image: my/image:alias
    volumes:
      - somevolume
    # links:
    #   - my_ssh
    ports:
      - "8080"
    environment:
      ENV1: blabla
    networks:
      my_network:
        ipv4_address: 10.12.0.12
  my_ssh:
    build:
      context: ./dir_with_Dockerfile
    # ports:
    #   - "23456:12345"
    expose:
      - "12345"
    networks:
      my_network:
        ipv4_address: 10.12.0.13
I've tried to access the remote database from inside service_1 using the hostnames 'my_ssh', the ipv4_address, and 'localhost', combined with ports 12345 and 23456. None of these combinations worked. Where am I going wrong?
Or how else could I achieve a permanent connection from my containers to the remote database?
More of a suggestion than an answer: setting up OpenVPN on your database network and your Docker swarm would allow you to connect the two networks together. It would also make it easier for you to configure more hosts in the future.
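For illustration only (the image name is a placeholder, untested): one common pattern is a VPN client sidecar container whose network namespace the other services share via network_mode:
services:
  vpn:
    image: some/openvpn-client   # placeholder: a container running the OpenVPN client
    cap_add:
      - NET_ADMIN                # required to create the tun device
    devices:
      - /dev/net/tun
  service_1:
    image: my/image:alias
    network_mode: "service:vpn"  # share the vpn container's network stack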
I'm using Docker Desktop on Windows and I'm trying to get 3 containers running inside Docker Desktop.
After some research and testing, I got the 3 containers running [WEB - API - DB]; everything seems to compile/run without issues in the logs, but I can't access my web container from outside.
Here are my Dockerfiles and docker-compose file. What did I miss or get wrong?
[WEB] dockerfile
FROM node:16.17.0-bullseye-slim
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
CMD ["npm", "run", "start"]
[API] dockerfile
FROM openjdk:17.0.1-jdk-slim
WORKDIR /app
COPY ./target/test-0.0.1-SNAPSHOT.jar /app
#EXPOSE 2022 (the issue is the same with or without this line)
CMD ["java", "-jar", "test-0.0.1-SNAPSHOT.jar"]
Docker-compose file
version: "3.8"
services:
### FRONTEND ###
web:
container_name: wallet-web
restart: always
build: ./frontend
ports:
- "80:4200"
depends_on:
- "api"
networks:
customnetwork:
ipv4_address: 172.20.0.12
#networks:
# - "api"
# - "web"
### BACKEND ###
api:
container_name: wallet-api
restart: always
build: ./backend
ports:
- "2022:2022"
depends_on:
- "db"
networks:
customnetwork:
ipv4_address: 172.20.0.11
#networks:
# - "api"
# - "web"
### DATABASE ###
db:
container_name: wallet-db
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
customnetwork:
ipv4_address: 172.20.0.10
#networks:
# - "api"
# - "web"
networks:
customnetwork:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
gateway: 172.20.0.1
# api:
# web:
Listening on:
[screenshot of the container's startup output omitted]
I found several issues similar to mine, but the solutions didn't work for me.
If I understand correctly, you are trying to access it on port 80. To do that, you have to map container port 4200 to host port 80 in the YAML file: 80:4200 instead of 4200:4200.
https://docs.docker.com/config/containers/container-networking/
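In compose syntax the mapping is always "host-port:container-port", e.g.:
ports:
  - "80:4200"   # host port 80 -> container port 4200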
Have you looked in the browser's development console to see if any error shows up there? Your docker-compose doesn't seem to have any issue.
However, let's try to debug it:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6245eaffd67e nginx "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:4200->80/tcp test-api-1
Copy the container ID, then execute:
docker exec -it 6245eaffd67e /bin/bash
Now you are inside the container. Instead of the ID you can also use the container's name.
curl http://localhost:80
Note: in my case I just created a container from an nginx image.
In your case, use the port where your app is running. Check it in your code if you aren't sure. A lot of JavaScript frameworks start on port 3000 by default.
If you get an error like "curl: command not found", install curl in your image:
FROM node:16.17.0-bullseye-slim
# To install dependencies you need root permissions, so we switch to the root user
USER root
RUN apt update -y && apt install curl -y
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
#EXPOSE 4200 (the issue is the same with or without this line)
# We don't want to run the image as root, so we switch back to the node user
# (this user is defined in the node:16.17.0-bullseye-slim image)
USER node
CMD ["npm", "run", "start"]
Now curl should work (if it doesn't already).
The same should work from your host.
Here is an important thing:
localhost always refers to the physical computer or the container itself, whichever one you are on. Every container and your PC each have their own localhost, and they are not the same.
In the docker-compose file you map a host port to a container port, so your PC (the host, where Docker is running) can reach the container's port through the host port you defined.
If you still can't access it from your host, try changing the host ports (2022, 4200, etc.). It's possible that something conflicts on your Windows machine.
Docker networks can sometimes create conflicts, too.
Execute a docker-compose down so the network gets deleted and recreated.
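For example:
docker-compose down
docker-compose up -d --force-recreate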
Still not working?
Reset Docker Desktop to factory settings and check that you have the latest version (that's always a good idea).
If all this doesn't help, let me know so we can debug further.
For the sake of clarity, I'm posting here the docker-compose I used to check. I just used nginx to test the ports, as I don't have your images.
version: "3.8"
services:
### FRONTEND ###
web:
restart: always
image: nginx
ports:
- "4200:80"
depends_on:
- "api"
networks:
- "web"
### BACKEND ###
api:
restart: always
image: nginx
ports:
- "2022:80"
depends_on:
- "db"
networks:
- "api"
- "web"
### DATABASE ###
db:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
networks:
- "api"
networks:
api:
web:
Update:
You can log what happens in the container like so:
docker logs <container id or name>
If you are using Visual Studio Code, there is an excellent extension for Docker, also by Microsoft:
Just search for "docker" in the extensions. It has something like 20,000,000 downloads and can help you a lot with debugging containers etc. After installing it, you'll see the Docker icon in the left toolbar.
If you can see the errors that occur directly in the logs, maybe you can post them (at least partially), so it's possible to understand what's going on. Please also tell us something about your frontend app's architecture (React app, Angular, ...). Some frameworks need to be started on 0.0.0.0 instead of 127.0.0.1 or they won't be reachable.
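For example, an Angular dev server can be told to listen on all interfaces through its start script in package.json (a sketch, assuming the standard Angular CLI setup):
"scripts": {
  "start": "ng serve --host 0.0.0.0"
}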
I can't install Pi-hole in my container because it doesn't officially support my CPU architecture ("mips"). It has however been done by https://www.reddit.com/r/pihole/comments/fnhfb8/pihole_for_mips_ci20/, but I'm not sure how to modify my yml file to retrieve his code instead of the official Pi-hole code. Everything needed should be in this post, hopefully.
I suspect I need to modify something in the .yml file, but maybe it's one of the commands that is wrong.
The error I get is "no matching manifest for linux/mipsle in the manifest list entries". If there's another solution, I'd be open to trying that too.
docker-compose.yml
version: "3.3"
# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
pihole:
container_name: pihole
image: pihole/pihole:2021.09
hostname: pihole
environment:
TZ: SE
# WEBPASSWORD: 'set a secure password here or it will be random'
# Volumes store your data between container upgrades
volumes:
- './pihole/etc-pihole/:/etc/pihole/'
- './pihole/etc-dnsmasq.d/:/etc/dnsmasq.d/'
- './pihole/var-log/:/var/log'
- './pihole/etc-cont-init.d/10-fixroutes.sh:/etc/cont-init.d/10-fixroutes.sh'
# Recommended but not required (DHCP needs NET_ADMIN)
# https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
cap_add:
- NET_ADMIN
restart: unless-stopped
networks:
internal:
lan:
ipv4_address: 192.168.1.3
networks:
internal:
lan:
name: lan
driver: macvlan
driver_opts:
parent: br-lan.20
ipam:
config:
- subnet: 192.168.1.0/24
Making the volumes:
mkdir -p ./pihole/etc-pihole/
mkdir -p ./pihole/etc-dnsmasq.d/
mkdir -p ./pihole/var-log/
mkdir -p ./pihole/var-log/lighttpd
chown 33:33 ./pihole/var-log/lighttpd
mkdir -p ./pihole/etc-cont-init.d/
Setting the routes:
echo '#!/usr/bin/with-contenv bash
set -e
echo "fixing routes"
ip route del default
ip route add default via 172.18.0.1
echo "done fixing routes"' >> ./pihole/etc-cont-init.d/10-fixroutes.sh
chmod 755 ./pihole/etc-cont-init.d/10-fixroutes.sh
Running the container:
cd ~
docker-compose up -d pihole
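My best guess so far (an untested sketch; the directory name is hypothetical) is to replace the image: line with a build: section pointing at a local clone of his code:
services:
  pihole:
    container_name: pihole
    build: ./docker-pi-hole-mips   # local clone of the MIPS port, instead of image: pihole/pihole:2021.09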
I am trying to set up a dockerized Redis cluster spanning multiple host machines.
In my current setup I have two hosts with public IP addresses, and I start a similar configuration on both. This config consists of a compose.yml:
services:
  redis-cluster:
    container_name: node-redis
    build:
      context: ../../
      dockerfile: deployment/node/cluster-dockerfile
    restart: always
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7002:7002"
    networks:
      node_net:
        ipv4_address: 10.20.0.6
networks:
  node_net:
    driver: bridge
    ipam:
      config:
        - subnet: 10.20.0.0/16
          gateway: 10.20.0.1
which is identical on both hosts.
The Dockerfile uses supervisord to start 3 Redis instances (on ports 7000, 7001 and 7002), like so:
FROM ubuntu:20.04
RUN apt update && \
DEBIAN_FRONTEND=noninteractive apt install -y redis-server supervisor
COPY ./deployment/production-node/cluster-files/node1 /app/cluster-files
COPY ./deployment/production-node/cluster-files/node1/supervisord.conf /etc/supervisor/supervisord.conf
CMD supervisord -c /etc/supervisor/supervisord.conf && \
sleep infinity
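The supervisord.conf itself is not shown here; a minimal sketch of such a file (the program names and file paths are illustrative) would be:
[supervisord]
nodaemon=false

[program:redis-7000]
command=redis-server /app/cluster-files/redis-7000.conf
autorestart=true

[program:redis-7001]
command=redis-server /app/cluster-files/redis-7001.conf
autorestart=true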
Each redis instance is configured as such:
port <port number>
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
masterauth pass
requirepass pass
protected-mode no
bind 0.0.0.0
unixsocket /tmp/redis.sock
loglevel debug
logfile "serverlog.7000.txt"
cluster-config-file nodes7000.conf
cluster-announce-ip <public ip of host machine>
cluster-announce-port <port number>
After running docker compose up on both hosts, with the Redis instances starting correctly, I try to use
redis-cli to create the cluster like so:
redis-cli -a pass --cluster create <host1-ip>:7000 <host1-ip>:7001 \
<host1-ip>:7002 <host2-ip>:7000 <host2-ip>:7001 <host2-ip>:7002 \
--cluster-replicas 1
This results in waiting infinitely for the cluster to join.
After some consideration I figured that this might be caused by not exposing the proper cluster bus ports in Docker. To solve this, I changed the compose file to list additional ports:
- "7000:7000"
- "7001:7001"
- "7002:7002"
- "17000:17000"
- "17001:17001"
- "17002:17002"
And added this line to the redis.conf files:
cluster-port 17000 <and 17001, 17002, matching the client port of each instance>
After those changes I am not even able to connect to a single instance; I get an instant connection refused when trying to create the cluster.
As of now I am not sure how to solve this problem and would be grateful for any hints on how to properly configure this kind of Redis cluster without starting the containers in host network mode.
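One thing I have not tried yet (a sketch based on my reading of the Redis docs): Redis derives the cluster bus port as the client port + 10000, and behind Docker's port mapping it can be announced explicitly with cluster-announce-bus-port, e.g. for the instance on port 7000:
cluster-announce-ip <public ip of host machine>
cluster-announce-port 7000
cluster-announce-bus-port 17000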
I have a docker-compose file with three services (Solr, PostgreSQL and pgAdmin), all sharing a Docker network.
version: '2'
services:
  solr:
    image: solr:7.7.2
    ports:
      - '8983:8983'
    networks:
      primus-dev:
        ipv4_address: 10.105.1.101
    volumes:
      - data:/opt/solr/server/solr/mycores
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - primus
      - /opt/solr/server/solr/configsets/sample_techproducts_configs
    environment:
      - SOLR_HEAP=2048m
    logging:
      options:
        max-size: 5m
  db:
    image: "postgres:11.5"
    container_name: "primus_postgres"
    ports:
      - "5432:5432"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.102
    volumes:
      - primus_dbdata:/var/lib/postgres/data
    environment:
      - POSTGRES_DB=primus75
      - POSTGRES_USER=primus
      - POSTGRES_PASSWORD=primstav
  pgadm4:
    image: "dpage/pgadmin4"
    networks:
      primus-dev:
        ipv4_address: 10.105.1.103
    ports:
      - "3050:80"
    volumes:
      - /home/nils/docker-home:/var/docker-home
    environment:
      - PGADMIN_DEFAULT_EMAIL=nils.weinander@kulturit.se
      - PGADMIN_DEFAULT_PASSWORD=dev
networks:
  primus-dev:
    driver: bridge
    ipam:
      config:
        - subnet: 10.105.1.0/24
volumes:
  data:
  primus_dbdata:
This works just fine after docker-compose up (at least pgAdmin can talk to PostgreSQL).
But then I have a script (actually a make target, but that's not the point here) which builds, runs and deletes a container with docker-compose run:
docker-compose run -e HOME=/app -e PYTHONPATH=/app/server -u 0 --rm backend \
bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'
This does not work, as reindex_mp.py cannot reach Solr on 10.105.1.101: the one-shot container is not on the same Docker network. So, is there a way to tell docker-compose to use a named network with docker-compose run? docker run has an option --network, but that is not available for docker-compose.
You can create a docker network outside your docker-compose and use that network while running services in docker-compose.
docker network create my-custom-created-network
Now, inside your docker-compose file, use this network like this:
services:
  serv1:
    image: img
    networks:
      - my-custom-created-network
networks:
  my-custom-created-network:
    external: true
The network creation example creates a bridge network.
To access containers across hosts, use an overlay network.
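An attachable overlay network (this requires swarm mode) can be created like this, for example:
docker network create --driver overlay --attachable my-overlay-network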
You can also use the network created by docker-compose and connect containers to it.
Docker creates a default network for each compose file; services that do not have any network configuration specified will use the default network created for that compose file.
You can find the network name by executing this command:
docker network ls
Use the appropriate network name when starting a container, like this:
docker run [options] --network <network-name> <image-name>
Note: containers on the same network are reachable by their container names; you can leverage this instead of using IPs.
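With the network declared as external in your compose file, your one-shot command should then attach to it automatically, e.g.:
docker-compose run -e HOME=/app -e PYTHONPATH=/app/server -u 0 --rm backend \
    bash -c 'cd /app/server && python tools/reindex_mp.py -s -n'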
We're making the move to Docker from Vagrant.
Our first aim is to move some services out. In this case I'm trying to host a Redis server in a Docker container and connect to it from my Vagrant machine.
On the Vagrant machine there is an apache2 webserver hosting a Laravel app.
It's the connection part I'm struggling with. Currently I have:
Dockerfile.redis
FROM redis:3.2.12
RUN redis-server
docker-compose.yml (concatenated)
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
I've tried various ways to connect to this:
Attempt 1
Using the host IP 10.0.2.2 in the Laravel config. Results in a "Connection refused".
Attempt 2
Set up a network in the docker compose
redis:
  build:
    context: .
    dockerfile: Dockerfile.redis
  working_dir: /opt
  networks:
    app_net:
      ipv4_address: 172.16.238.10
  ports:
    - "6379:6379"
networks:
  app_net:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
This instead results in timeouts. Most solutions seem to require a gateway configured on the network, but this isn't configurable in docker-compose 3. Is there maybe a way around this?
If anyone can give any guidance, that would be great; most guides talk about connecting to Docker containers from inside a Vagrant machine rather than from one.
FYI, this is using Docker for Mac and version 3 of the docker-compose file format.
We were able to get this going using purely docker-compose, without having a Dockerfile for Redis at all:
redis:
  image: redis
  container_name: redis
  working_dir: /opt
  ports:
    - "6379:6379"
Once done like this, we were able to connect to Redis from within the Vagrant machine using:
redis-cli -h 10.0.2.2
Or with the following in Laravel (although we're using environment variables to set these):
'redis' => [
    'client' => 'phpredis',
    'default' => [
        'host' => '10.0.2.2',
        'password' => null,
        'port' => 6379,
        'database' => 0,
    ]
]
Your Attempt 1 should actually work. When you create a service without defining a network, docker-compose automatically creates a bridge network for you. For example:
When you run docker-compose up on this:
version: '3'
services:
  redis:
    build:
      context: .
      dockerfile: Dockerfile.redis
    working_dir: /opt
    ports:
      - "6379:6379"
docker-compose creates a bridge network named <project name>_default, which is docker_compose_test_default in my case, as shown below:
me#myshell:~/docker_compose_test $ docker network ls
NETWORK ID NAME DRIVER SCOPE
6748b1ea4b85 bridge bridge local
4601c6ea30c3 docker_compose_test_default bridge local
80033acaa6e4 host host local
When you inspect your container, you can see that an IP has already been assigned to it:
docker inspect e6b196f952af
...
"Networks": {
"bridge": {
...
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
You can then use this IP to connect from the host or your Vagrant box:
me#myshell:~/docker_compose_test $ redis-cli -h 172.18.0.2 -p 6379
172.18.0.2:6379> ping
PONG