My docker-compose.yml file creates a service, api, listening on port 9000, and a remote server (123.1.2.3) needs to access it. An SSH tunnel is required because a firewall prevents direct access to port 9000 on the Docker host.
version: "3.9"
services:
api:
image: my/api-service:latest
ports:
- "9000:9000"
I'm currently creating this SSH tunnel manually by running this command on the Docker host:
ssh -fN -R 9000:127.0.0.1:9000 root@123.1.2.3
Is it possible to create another service in this docker-compose.yml file that creates this SSH tunnel on docker-compose up, using an SSH private key in the same directory as the docker-compose.yml file?
This should be possible. Looking at a copy of the ssh(1) man page, the ssh -R option sets up a port forward from the remote machine back to the local machine
ssh -R port:host:hostport
where port on the remote host is forwarded through the ssh tunnel, making outbound connections to host:hostport from the local system.
In your case, if the ssh tunnel was launched from a container, you could use normal Docker networking, and connect to api:9000, using the standard port number of the container.
The first thing you'll need is an image with the ssh client installed. This is easy enough to build ourselves, so we'll do that:
FROM ubuntu:22.04
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes openssh-client
Do not copy the ssh keys into the image. Anything that's in an image can be trivially extracted later, and you don't want to compromise your ssh keys this way.
Instead, we'll bind mount our ssh keys into the container when it runs. ssh is extremely particular about the permissions of the ssh keys, so you need to make sure the container runs as the same numeric user ID as on your host system. Run
id -u
and remember that number (on an Ubuntu system, it might be 1000).
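For instance (the key filename is an assumption; ssh refuses keys that are readable by group or others):
# On the Docker host: your numeric user ID, and the key's owner/permissions
id -u
ls -ln ~/.ssh/id_rsa   # expect mode -rw------- and an owner matching id -u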
Now, in the Compose file, in addition to the original server, we need to
Build the image;
Specify the user ID;
Inject the ssh keys;
Create some concept of a "home directory"; and
Specify the actual ssh command to run.
version: '3.8'
services:
  api:
    image: my/api-service:latest
    # ports: ['9000:9000'] # optional
  tunnel:
    build:
      context: .
      dockerfile: Dockerfile.ssh
    user: "1000"
    volumes:
      - /home/yourname/.ssh:/home/.ssh
    environment:
      HOME: /home
    command: ssh -N -R 9000:api:9000 root@123.1.2.3
In the last line, the first 9000 is the port number on the remote system, and api:9000 is the service name and container port of the target container. A ports: entry would also publish the port on the local system, but it isn't required (or considered) for connections between containers. I've omitted the ssh -f option so that the ssh tunnel runs as a foreground process, as the only process in its container.
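If the tunnel is up, the forwarded port answers on the remote server's loopback (remote forwards bind to 127.0.0.1 unless GatewayPorts is enabled in sshd). A quick check from 123.1.2.3, assuming the API speaks HTTP:
curl http://127.0.0.1:9000/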
My system is composed of two parts: a Postgres local_db, and a Node.js Express server that communicates with it via the Prisma ORM. Whenever the Node.js server receives a GET request to localhost:4000/, it replies with a 200 status code, as shown in the code:
app.get("/", (_req, res) => res.send("Hello!"))
Basically, this behavior is used later for a health check.
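In practice, the health check just probes this endpoint and expects the 200 (the exact probe command isn't shown here; something along these lines):
curl -f http://localhost:4000/ || exit 1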
The database is instantiated by the docker-compose.yml (I omit parts not related to networking):
services:
  timescaledb:
    image: timescale/timescaledb:2.8.1-pg14
    container_name: timescale-db
    ports:
      - "5000:5432"
And a Node.js backend runs in a container, whose Dockerfile is (omitting the parts related to building the Node.js app):
FROM node:18
# Declare and set environment variables
ENV TIMESCALE_DATABASE_URL="postgres://postgres:password@localhost:5000/postgres?connect_timeout=300"
# Build app
RUN npm run build
# Expose the listening port
EXPOSE 4000
# Run container as non-root (unprivileged) user
# The node user is provided in the Node.js base image
USER node
CMD npx prisma migrate deploy; node ./build/main.js
The container is made to run via:
docker run -it --network host --name backend my-backend-image
However, despite the container actually finding and successfully connecting to the database (and populating it), I cannot access localhost:4000 from the host machine: it tells me the connection was refused. Using curl I get the same reply:
$ curl -f http://localhost:4000
curl: (7) Failed to connect to localhost port 4000: Connection refused
I have even tried connecting to localhost's actual IP, 127.0.0.1:4000, but the connection is still refused, and to the Docker daemon address http://172.17.0.1:4000, but that connection keeps hanging.
I do not understand why I cannot access it, even though I set the --network host flag when running the container, which should map the container's ports one-to-one onto my host machine.
Let's say I have two docker containers. One containing an image with a proxy on 0.0.0.0:PORT. The other container needs to use that proxy.
To clarify, I DO NOT want to use the host for anything here. So no network_mode: host and running the proxy on the host machine. I want to containerize both the proxy and the service that will use that proxy.
I use docker-compose so if you could provide me with an example of that, I would be glad.
If you need to know, the proxy is the tor proxy using this image.
Thank you! (:
I suggest using bridge networking mode, which is the default for docker compose. Just to give you an example:
version: '3.7'
services:
  tor:
    image: osminogin/tor-simple
    restart: always
  curl:
    image: curlimages/curl
    tty: true
    stdin_open: true
    command: ["sh"]
    depends_on:
      - tor
Here you can see that we set up the tor-simple proxy and a curl container, which will be used to send a request to the Tor network via the proxy. By default, docker compose sets up a single network in which each container's hostname is the service name itself, so tor for the proxy and curl for the curl image.
To prove that we can connect to Tor through the proxy, first we bring up the containers with docker compose up. Then we can attach to the curl container with docker attach <container-id>, which gives us a shell (because of command: ["sh"], and because the container runs with a TTY and stdin open).
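For example (the exact container name may differ; docker ps shows it):
docker compose up -d
docker attach $(docker ps -q -f name=curl)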
Now we can validate our Tor connection:
curl --socks5 tor:9050 --socks5-hostname tor:9050 -s https://check.torproject.org/ | cat | grep -m 1 Congratulations | xargs
Please note that curl connects to tor-simple using port 9050 (tor:9050), which then proxies the request to https://check.torproject.org/.
This should print something like:
Congratulations. This browser is configured to use Tor.
I've been asked to configure an Ubuntu 18.04 server with Docker for multiple users.
Purpose:
We have multiple testers who write test cases, but our laptops aren't fast enough to build the project and run test cases in a Docker environment.
We already have a Jenkins server, but we need to build/test our code BEFORE pushing to git.
I've been given a high-end Ubuntu 18.04 server.
I have to configure the server so that all our testers can run/debug our test cases in isolated environments.
When testers push their changes to the remote server, the project should build and run in an isolated environment. Multiple users can work on the same project, but one tester's builds must NOT affect another's.
I already installed Docker and tried changing only docker-compose.yml and adding different networks (using multiple accounts, of course), but it was very painful.
I need multiple Selenoid servers (for different users) and different Allure reports with Docker. I need the ability to build and run tests using our docker-compose files, and the ability to run the actual project on different ports so we can go through the system while writing test cases.
Is it possible to configure such an environment without changing the project's docker-compose.yml?
What's the approach I should take?
You can use Docker in Docker (the docker:dind image) to run multiple instances of the Docker daemon on the same host, and have each tester use a different DOCKER_HOST to run their Compose stack. Each app instance will be deployed on a separate Docker daemon and isolated without requiring any change to docker-compose.yml.
Docker in Docker can be used to run a Docker daemon from another Docker daemon. (The Docker daemon is the process that actually manages your containers when you use docker.) See the Docker architecture documentation and the original DinD blog post for details.
Example: run 2 Docker daemons exposing the app port
Let's consider 2 testers with this docker-compose.yml:
version: "3"
services:
  app:
    image: my/app:latest
    ports:
      - 8080:80
Run 2 instances of the Docker daemon, exposing the daemon port and any port that will be exposed by Docker Compose (see below for why):
# Run docker dind and map port 23751 on localhost
# Expose daemon port 8080 on host port 8081 (the port that will be used by Tester1)
# --privileged is required to run dind (dind-rootless exists but is experimental)
# DOCKER_TLS_CERTDIR="" deploys an unsecured daemon:
# it's easier to use but should only be used for testing/dev purposes
docker run -d \
  -p 23751:2375 \
  -p 8081:8080 \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# Second daemon using port 23752
docker run -d \
  -p 23752:2375 \
  -p 8082:8080 \
  --privileged \
  --name dockerd-tester2 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
Each tester can run their own stack on their Docker daemon by setting DOCKER_HOST env var:
# Tester 1 shell
# use dockerd-tester1 daemon on port 23751
export DOCKER_HOST=tcp://localhost:23751
# run our stack
docker-compose up -d
Same for Tester 2 on dockerd-tester2 port:
# Tester 2 shell
export DOCKER_HOST=tcp://localhost:23752
docker-compose up -d
Interacting with Tester 1 and 2's stacks
Need the ability to build and run tests using our docker-compose files and need the ability to run the actual project on different ports
The exposed ports for each tester will be exposed on the Docker daemon host and reachable via http://$DOCKER_HOST:$APP_PORT instead of localhost:$APP_PORT (that's why we also exposed the app port on each daemon).
Considering our docker-compose.yml, testers will be able to access application such as:
# Tester 1
# port 8081 is linked to port 8080 of Docker daemon running our app container
# itself redirect on port 8080
# in short: 8081 -> 8080 -> 80
curl localhost:8081
# Tester 2
# 8082 -> 8080 -> 80
curl localhost:8082
Alternative without exposing ports, using Docker daemon IP directly
Similar to the first example, you can also interact with the deployed app by using Docker daemon IP directly:
# Run daemon without exposing ports
docker run -d \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind
# Retrieve daemon IP
docker inspect --format '{{ .NetworkSettings.IPAddress }}' dockerd-tester1
# output like 172.17.0.2
# use it!
export DOCKER_HOST=tcp://172.17.0.2:2375
docker-compose up -d
# our app ports are exposed on the daemon
curl 172.17.0.2:8080
Here we contact the daemon directly via its IP instead of exposing its port on localhost.
You can even define your Docker daemons with static IPs in a docker-compose.yml such as:
version: "3"
services:
dockerd-tester1:
image: docker:dind
privileged: true
environment:
DOCKER_TLS_CERTDIR: ""
networks:
dind-net:
# static IP to set as DOCKER_HOST
ipv4_address: 10.5.0.6
# same for dockerd-tester2
# ...
networks:
dind-net:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
And then
export DOCKER_HOST=tcp://10.5.0.6:2375
# ...
Notes:
This may have some performance impact depending on the machine on which the daemons are deployed.
You can use dind-rootless instead of dind to avoid the --privileged flag.
It's better to avoid DOCKER_TLS_CERTDIR: "" for security reasons; see the TLS instructions for the docker image for detailed TLS usage. A sketch of a TLS-enabled daemon follows.
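A minimal sketch, adapted from the docker image documentation (the host port and volume names are assumptions for this setup):
# Certificates are generated under /certs; with TLS the daemon listens on 2376
docker run -d \
  -p 23751:2376 \
  --privileged \
  --name dockerd-tester1 \
  -e DOCKER_TLS_CERTDIR=/certs \
  -v dind-certs-ca:/certs/ca \
  -v dind-certs-client:/certs/client \
  docker:dind
# Clients then need the generated client certificates, e.g.:
# export DOCKER_HOST=tcp://localhost:23751 DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=/path/to/client/certs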
The OP already has a CI/CD system running. The question is: how can testers write new test cases in their own environment, one that is not running on their local machine?
I suggest that you set up a k8s (Kubernetes) instance on your new "high-end" server. The installation of minikube is very easy and is enough when you have only one server (aka node).
With k8s you can control (or, with the correct verb, "orchestrate") your Docker containers.
You can do one of these things next:
Write a script for the test laptops so they can start new environments. You can use the $USER variable for the correct naming (see the sketch after this list). Be aware that the testers would then have access to k8s.
My favorite: don't create environments for users, create them for merge requests. They are not bound to users and can be created by your version control system (e.g. GitLab). A tester opens an MR, your server sets up a new environment, and the tester is ready to go. And your testers have no access to k8s.
Not recommended, but possible: create environments manually for each tester.
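A minimal sketch of the first option, assuming kubectl access from the laptops (the namespace naming and the manifest file are placeholders, not part of the setup above):
#!/bin/sh
# Create (or reuse) a per-tester namespace and deploy the project into it
NS="test-${USER}"
kubectl create namespace "${NS}" 2>/dev/null || true
# deploy.yaml stands in for the project's Kubernetes manifests
kubectl apply --namespace "${NS}" -f deploy.yaml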
I am running a Debian Docker container on a Windows 10 machine which needs to access a particular URL on port 9000 (164.16.240.30:9000).
The host machine can access it fine via the browser; however, when I log in to the terminal and run wget 172.17.240.30:9000, I get failed: No route to host.
In an attempt to resolve this I added:
ports:
- 9000:9000
to the docker-compose.yml file, however that doesn't seem to have made any difference.
In case you can't guess, I'm new to this, so what would you try next?
Entire docker-compose.yml file:
version: '3.4'
services:
  tokengeneratorapi:
    network_mode: host
    image: ${DOCKER_REGISTRY}tokengeneratorapi
    build:
      context: .
      dockerfile: TokenGeneratorApi/Dockerfile
    ports:
      - 5000:80
      - 9000
    environment:
      ASPNETCORE_ENVIRONMENT: local
      SSM_PATH: /ic/env1/tokengeneratorapi/
      AWS_ACCESS_KEY_ID:
      AWS_SECRET_ACCESS_KEY:
Command I'm running:
docker-compose build --build-arg BRANCH=featuretest --build-arg CHANGE_ID=99 --build-arg CHANGE_TARGET=develop --build-arg SONAR_SERVER=164.16.240.30
It seems it's the container that has connectivity issues, so your proposed solution is unlikely to work, since that only maps a host port to a container port (and your target URL is not the actual host anyway).
Check out https://docs.docker.com/compose/compose-file/#network_mode and try setting it to host.
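Something along these lines in the service definition (only the relevant line shown; the rest of the file stays as in the question):
services:
  tokengeneratorapi:
    network_mode: host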
Your browser has access to 164.16.240.30:9000 because it goes through a proxy (a typical enterprise environment), so the proxy has network connectivity to 164.16.240.30. That doesn't mean your host has the same network connectivity; actually, it looks like it doesn't. That is why a direct wget from the container or from the terminal fails with No route to host.
Everything must go through the proxy. Try to configure the proxy properly: Linux apps usually use the http_proxy and https_proxy environment variables, but apps may have their own proxy options, and you may eventually have to configure it at the source code level. It depends on the app/code used.
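For example, with a placeholder proxy address (replace it with your real corporate proxy):
# set the proxy for the current shell, then retry the request
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
wget 164.16.240.30:9000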
I think the issue is that you use host mode in your docker-compose config file. Also, have you allowed the ports through the iptables firewall on the Debian machine? How about Windows?
network_mode: host
This actually bypasses the Docker bridge completely, so the ports section you specify is not applied; all the ports will be opened on the host system. You can check with
netstat -tunlp | grep 5000
and you will see that port 5000 is not open and mapped to port 80 of the container as you would expect. However, ports 80 and 9000 should be open on the Debian network, not bound to any Docker bridge, only to the Debian IP.
From here: https://docs.docker.com/network/host/
WARNING: Published ports are discarded when using host network mode
A solution could be to remove the network_mode line, and it will work as expected.
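As a sketch, the same service with the host networking line removed so the ports mapping takes effect (build and environment settings from the question omitted here for brevity):
services:
  tokengeneratorapi:
    image: ${DOCKER_REGISTRY}tokengeneratorapi
    ports:
      - 5000:80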
Your code doesn't allow your container access to 164.16.240.30:9000. You should wget 164.16.240.30:9000 from the terminal instead of 172.17.240.30:9000.
I commonly see solutions that expose a docker container's port to the host.
In my case I want to forward a local port from one container, to another.
Let's say I run a service in container A that has a hard-coded configuration to access the db on localhost:3306, but I want to run the db server in container B.
What is the best way to port-forward from A-localhost:3306 to B-IP:3306?
Install socat in your container and at startup run
socat TCP-LISTEN:3306,fork TCP:B-IP:3306 &
This will listen locally on port 3306 and pass any traffic bidirectionally to B-IP:3306. socat is available in a package named socat, so you can run any of the below commands to install it:
$ yum install -y socat
$ apt install -y socat
$ apk add socat
Edit-1
You can even do this without touching your original container.
Dockerfile
FROM alpine
RUN apk update && apk add socat
Build the image as below:
docker build -t socat .
Now run a container from same
docker run --name mysql-bridge-a-to-b --net=container:<containerAid> socat socat TCP-LISTEN:3306,fork TCP:BIP:3306
This will run the socat container on A's network, so when it listens on A's network, localhost:3306 becomes available in A even though container A was not touched.
You can simply run the container with network mode equal to host.
docker run --network=host ...
In that case, from the container point of view, localhost or 127.0.0.1 will refer to the host machine. Thus if your db is running in another container B that listens on 3306, an address of localhost:3306 in container A will hit the database in container B.
If you want container B's port to be exposed as a localhost port in container A, you can start container B with the network option set to container mode, so that container B runs in container A's network namespace.
Example:
docker run --net=container:A postgres
Where:
A is the name or identifier of the container you want to map into.
This will start postgres in a container in the same network namespace as A, so any port opened in the postgres container is opened on the same interface as A and will be available on localhost inside container A.
For development on Windows we can simply use host.docker.internal: see docker-for-windows networking
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows.
So in your case (a minimal sketch follows this list):
container A publishes port 3306
this port is now available on the host
container B can simply connect to host.docker.internal:3306
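A minimal sketch under Docker Desktop for Windows (the MySQL image and credentials are placeholders for illustration):
# container A publishes the db port on the Windows host
docker run -d --name db-a -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:8
# inside container B the db is then reachable at host.docker.internal:3306,
# e.g. as a connection string: mysql://root:secret@host.docker.internal:3306/mydb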
This solution is way overkill for the use case stated, but is worth knowing in case you run across this answer while researching a larger scope.
HashiCorp Consul installs a localhost agent that will proxy a localhost port to the IP address and port of a service registered within its directory.
https://www.consul.io/intro
I have no relation to Hashicorp besides using their products in systems I've built.
Containers can access each other internally; you don't need to expose or forward anything.
You can do it with Docker-Compose.
docker-compose.yml example:
version: '3'
services:
  web:
    build: .web
    ports:
      - "5000:5000"
  mysql:
    build: .mysql
    ports:
      - "3306:3306"
Here you have 2 services, web and mysql, and each container can reach the other using the names you defined as services.
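For example, a quick check from the web service (assumes the web image provides a shell and getent):
# the database hostname resolves by service name on the compose network
docker compose exec web getent hosts mysql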
Hope this helps