Forward SSH connections based on user name - docker

I found numerous sites explaining SSH port forwarding, SSH reverse proxying, SSH multiplexing, etc., involving sshpiper, sslh, running an SSH SOCKS server, just configuring the local SSH server, and so on,
so I'm quite puzzled right now and might be asking a very common and/or simple question:
As you might already guess from the title, I want to set up a git server (GitLab) inside a Docker container listening for SSH connections on port 22, without having to use a different port for default SSH operations (terminal, scp, etc.) on the host (as suggested here).
I.e.
ssh alice@myserver.org should still be possible, as well as
git clone git@myserver.org:path/to/project
and I don't want to do any setup on the client computer
If you prefer a picture:
                                    +-------- myserver.org --------+
                                    |  +---+      +- typical -+    |
+--------+  alice@myserver.org:22   |  |   |      |    SSH    |    |
| client | -------------------------+->| ? +----->|  service  |    |
+--------+  all names but `git`     |  |   |      +-----------+    |
                                    |  |   |                       |
                                    |  |   |      +-- docker -+    |
+--------+  git@myserver.org:22     |  |   |      |   with    |    |
| client | -------------------------+->| ? +----->|  GitLab   |    |
+--------+  only user `git`         |  |   |      |           |    |
                                    |  +---+      +-----------+    |
                                    +------------------------------+
Can you tell me the recommended/most common way to do this? This question sounds promising, but the answer seems to involve configuring the client (which I want to avoid).

This project may help you: https://github.com/tg123/sshpiper.
SSH Piper works as a transparent proxy and routes connections by username, source IP, etc. (a configuration sketch follows the diagram below):
+---------+                    +------------------+              +-----------------+
|         |                    |                  |              |                 |
|   Bob   +---ssh -l bob------>+                  +-------------->  Bob's machine  |
|         |                    |                  |              |                 |
+---------+                    |    SSH Piper     |              +-----------------+
                               |  (pipe-by-name)  |
+---------+                    |                  |              +-----------------+
|         |                    |                  |              |                 |
|  Alice  +---ssh -l alice---->+                  +--------------> Alice's machine |
|         |                    |                  |              |                 |
+---------+                    +------------------+              +-----------------+

 Downstream                         SSH Piper                         Upstream
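For the setup in the question, here is a hypothetical sketch of sshpiper's working-directory style of routing by username. The directory layout, file name, and upstream addresses are assumptions on my part; consult the sshpiper README for the exact daemon flags and the current configuration format:
# one directory per downstream user; each holds an sshpiper_upstream
# file naming the upstream that user should be piped to
mkdir -p /var/sshpiper/git /var/sshpiper/alice
echo "git@gitlab-container:22" > /var/sshpiper/git/sshpiper_upstream
echo "alice@localhost:2222" > /var/sshpiper/alice/sshpiper_upstream
# sshpiperd then takes over port 22, piping user `git` to the GitLab
# container and everyone else to the host sshd (moved to e.g. 2222)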

First of all, thanks for reading the TheDockerExperts blog, hope our articles help you! Let me explain how we do SSH proxying in our company.
We have HAProxy listening on TCP port 22 and sending traffic to the GitLab server, while the host itself uses a custom SSH port. Unfortunately, since this is plain TCP balancing, there is no way to route based on domain names or users. You can take a small VPS, spin up HAProxy on it, and use it to balance your Git traffic.
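For illustration, a minimal haproxy.cfg sketch of that TCP pass-through (the GitLab backend address 10.0.0.5:22 is an assumption; adjust it to your environment):
# plain TCP pass-through: no name- or user-based routing is possible here
frontend ssh_in
    bind *:22
    mode tcp
    default_backend gitlab_ssh

backend gitlab_ssh
    mode tcp
    server gitlab 10.0.0.5:22 check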
Hope this will help you!

Related

Is there a way to relay veth in docker to remote host by tunnel?

I do not want the container to access my host LAN or the internet directly via NAT.
Is it possible to provide a TCP or UDP tunnel relaying the veth in Docker to a remote proxy, so that the container cannot access my local network resources and uses the remote host to reach the internet (just like a proxy)?
+-------------------+
|  lan 10.1.2.3/24  |        +-------------+
|      tunnel       +<------>+ remote host |
+---------+---------+        +-------------+
          |
  +-------+-------+
  |  veth01-peer  |
  +-------+-------+
          |
+---------+--------------+
|  +------------------+  |
|  |      veth01      |  |
|  |  192.168.1.100   |  |
|  +------------------+  |
|         docker         |
+------------------------+

Docker Swarm: bypass load balancer and make direct request to specific containers

I have two containers running in a swarm. Each exposes a /stats endpoint which I am trying to scrape.
However, using the swarm port obviously results in the queries being load balanced, so the stats are all intermingled:
+--------------------------------------------------+
|                      Server                      |
|    +-------------+           +-------------+     |
|    |             |           |             |     |
|    | Container A |           | Container B |     |
|    |             |           |             |     |
|    +-------------+           +-------------+     |
|             \                   /                |
|              \                 /                 |
|               +--------------+                   |
|               |              |                   |
|               | Swarm Router |                   |
|               |              |                   |
|               +--------------+                   |
|                      |                           |
+----------------------|---------------------------+
                       |
                    A Stats
                    B Stats
                    A Stats
                    B Stats
                       |
                       v
I want to keep the load balancer for application requests, but also need a direct way to make requests to each container to scrape the stats.
+--------------------------------------------------+
|                      Server                      |
|    +-------------+           +-------------+     |
|    |             |           |             |     |
|    | Container A |           | Container B |     |
|    |             |           |             |     |
|    +-------------+           +-------------+     |
|      |      \                   /       |        |
|      |       \                 /        |        |
|      |        +--------------+          |        |
|      |        |              |          |        |
|      |        | Swarm Router |          |        |
|      v        |              |          v        |
|      |        +--------------+          |        |
|      |               |                  |        |
+------|---------------|------------------|--------+
       |               |                  |
    A Stats            |               B Stats
    A Stats      Normal Traffic        B Stats
    A Stats            |               B Stats
       |               |                  |
       |               |                  |
       v               |                  v
A dynamic solution would be ideal, but since I don't intend to do any dynamic scaling, something like hardcoded ports for each container would be fine:
::8080 Both containers via load balancer
::8081 Direct access to container A
::8082 Direct access to container B
Can this be done with swarm?
From inside an overlay network you can get the IP addresses of all replicas with a tasks.<service_name> DNS query:
; <<>> DiG 9.11.5-P4-5.1+deb10u5-Debian <<>> -tA tasks.foo_test
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19860
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;tasks.foo_test. IN A
;; ANSWER SECTION:
tasks.foo_test. 600 IN A 10.0.1.3
tasks.foo_test. 600 IN A 10.0.1.5
tasks.foo_test. 600 IN A 10.0.1.6
This is mentioned in the documentation.
Also, if you use Prometheus to scrape those endpoints for metrics, you can combine the above with dns_sd_configs to set the targets to scrape (here is an article showing how). This is easy to get running but somewhat limited in features (especially in large environments).
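For instance, a minimal prometheus.yml job along those lines (the service name foo_test, port 8080, and the /stats path are assumptions carried over from the question):
scrape_configs:
  - job_name: 'swarm-tasks'
    metrics_path: /stats
    dns_sd_configs:
      # resolves to one A record per task, bypassing the VIP load balancer
      - names: ['tasks.foo_test']
        type: 'A'
        port: 8080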
A more advanced way to achieve the same is to use dockerswarm_sd_config (docs, example configuration). This way the list of endpoints is gathered by querying the Docker daemon, and comes with some useful labels (e.g. node name, service name, custom labels).
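A comparable sketch using dockerswarm_sd_configs (Prometheus needs access to the Docker socket or API; the service-name filter is an assumption, matching the question's example):
scrape_configs:
  - job_name: 'swarm-tasks'
    metrics_path: /stats
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock
        role: tasks
    relabel_configs:
      # keep only the tasks of the service we want to scrape
      - source_labels: [__meta_dockerswarm_service_name]
        regex: foo_test
        action: keep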
While less than ideal, you can introduce a microservice that acts as an intermediary to the other containers exposing /stats. This microservice would have to be configured with the individual endpoints and operate on the same network as those endpoints.
This doesn't bypass the load balancer, but instead makes it irrelevant.
The intermediary could roll up the information, or you could make it more sophisticated by having it return a list of opaque identifiers which the caller can then use to query the intermediary for each container individually.
It is a bit of an "anti-pattern" in the sense that you have a highly coupled stats proxy that must be configured to reach each endpoint.
That said, it is good in the sense that you don't have to expose individual containers outside of the proxy. From a security perspective this may be better, because you're not leaking additional information out of your swarm.
You can try publishing a specific container's port on the host machine by adding this to your service definition:
ports:
  - target: 8081
    published: 8081
    protocol: tcp
    mode: host
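Applied to the question's two containers, a compose sketch could look like this (service names and image are placeholders), yielding ::8081 and ::8082 as direct per-container entry points while the routing-mesh port stays load balanced:
services:
  container_a:
    image: myapp:latest   # placeholder image
    ports:
      - target: 8080      # container port serving /stats
        published: 8081   # direct access to container A
        protocol: tcp
        mode: host
  container_b:
    image: myapp:latest   # placeholder image
    ports:
      - target: 8080
        published: 8082   # direct access to container B
        protocol: tcp
        mode: host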

Allow container to read host network statistics, but bind to docker network

tl; dr? Jump straight to Question ;)
Context & Architecture
This application is designed with a micro-service architecture in mind and notably contains two components:
monitor: probes system metrics and reports them via HTTP
controller: reads the metrics reported by monitor and takes actions according to rules defined in a configuration file.
+------------------------------------------------------+
| host /                                               |
+-----/                                                |
|                                                      |
|  +-----------------+        +-------------------+    |
|  | monitor /       |        | controller /      |    |
|  +--------/        |        +-----------/       |    |
|  |  +----------+   |        |  +-------------+  |    |
|  |  | REST :80 |>--+--------+->| application |  |    |
|  |  +----------+   |        |  +-------------+  |    |
|  +-----------------+        +-------------------+    |
|                                                      |
+------------------------------------------------------+
Trouble with Docker
The only way I found for monitor to be able to read network statistics not confined to its own Docker network stack was to start its container with --network=host. The following question assumes this is the only solution. If (fortunately) I am mistaken, please answer with an alternative.
version: "3.2"
services:
  monitor:
    build: monitor
    network_mode: host
  controller:
    build: controller
    network_mode: host
Question
Is there a way for monitor to serve its report on a Docker network even though it reads statistics from the host network stack?
Or, is there a way for controller not to be on --network=host even though it connects to monitor, which is?
(note: I use docker-compose but a pure docker answer suits me)

How to know a process is running under docker?

I may be asking a very beginner-level question, but I need a way to distinguish processes running under Docker from non-Docker processes on a box. The ps command output gives me the feeling that the process is simply running on the Linux box, and I cannot confirm whether it is under the hood of Docker.
In the same context, is it possible/feasible for a process under Docker to be started with the Docker root file system?
Is that feasible, or is there any other solution for this?
You can identify Docker processes via the process tree on the Docker host (or on the VM, if using Docker for Mac/Windows).
For example, the parent process of 2924 (haproxy) is 2902,
the parent process of 2902 (haproxy-start) is 2881,
and 2881 will be a docker-containerd process, which is itself managed by the dockerd process.
To view your process listing in a tree format use ps -ejH or pstree (available in the psmisc package).
To get a quick list of what's running under dockerd:
/ # pstree $(pgrep dockerd)
dockerd-+-docker-containe-+-docker-containe-+-java---17*[{java}]
| | `-8*[{docker-containe}]
| |-docker-containe-+-sinopia-+-4*[{V8 WorkerThread}]
| | | |-{node}
| | | `-4*[{sinopia}]
| | `-8*[{docker-containe}]
| |-docker-containe-+-node-+-4*[{V8 WorkerThread}]
| | | `-{node}
| | `-8*[{docker-containe}]
| |-docker-containe-+-tinydns
| | `-8*[{docker-containe}]
| |-docker-containe-+-dnscache
| | `-8*[{docker-containe}]
| |-docker-containe-+-apt-cacher-ng
| | `-8*[{docker-containe}]
| `-20*[{docker-containe}]
|-2*[docker-proxy---6*[{docker-proxy}]]
|-docker-proxy---5*[{docker-proxy}]
|-2*[docker-proxy---4*[{docker-proxy}]]
|-docker-proxy---8*[{docker-proxy}]
`-28*[{dockerd}]
Show the parents of a PID (-s)
/ # pstree -aps 3744
init,1
`-dockerd,1721 --pidfile=/run/docker.pid -H unix:///var/run/docker.sock --swarm-default-advertise-addr=eth0
`-docker-containe,1728 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim ...
`-docker-containe,3711 8d923b3235eb963b735fda847b745d5629904ccef1245d4592cc986b3b9b384a...
`-java,3744 -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp/zookeeper/bin/../build/cl
|-{java},4174
|-{java},4175
|-{java},4176
|-{java},4177
|-{java},4190
|-{java},4208
|-{java},4209
|-{java},4327
|-{java},4328
|-{java},4329
|-{java},4330
|-{java},4390
|-{java},4416
|-{java},4617
|-{java},4625
|-{java},4629
`-{java},4632
Show all children of docker, including namespace changes (-S):
/ # pstree -apS $(pgrep dockerd)
dockerd,1721 --pidfile=/run/docker.pid -H unix:///var/run/docker.sock --swarm-default-advertise-addr=eth0
|-docker-containe,1728 -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim ...
| |-docker-containe,3711 8d923b3235eb963b735fda847b745d5629904ccef1245d4592cc986b3b9b384a...
| | |-java,3744,ipc,mnt,net,pid,uts -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp/zookeeper/bin/../build/cl
| | | |-{java},4174
| | | |-{java},4175
| | | |-{java},4629
| | | `-{java},4632
| | |-{docker-containe},3712
| | `-{docker-containe},4152
| |-docker-containe,3806 49125f8274242a5ae244ffbca121f354c620355186875617d43876bcde619732...
| | |-sinopia,3841,ipc,mnt,net,pid,uts
| | | |-{V8 WorkerThread},4063
| | | |-{V8 WorkerThread},4064
| | | |-{V8 WorkerThread},4065
| | | |-{V8 WorkerThread},4066
| | | |-{node},4062
| | | |-{sinopia},4333
| | | |-{sinopia},4334
| | | |-{sinopia},4335
| | | `-{sinopia},4336
| | |-{docker-containe},3814
| | `-{docker-containe},4038
| |-docker-containe,3846 2a756d94c52d934ba729927b0354014f11da6319eff4d35880a30e72e033c05d...
| | |-node,3910,ipc,mnt,net,pid,uts lib/dnsd.js
| | | |-{V8 WorkerThread},4204
| | | |-{V8 WorkerThread},4205
| | | |-{V8 WorkerThread},4206
| | | |-{V8 WorkerThread},4207
| | | `-{node},4203
The commands lxc-ls and lxc-ps may be installable on your Linux distribution. They let you list the running LXC containers and the processes running within those containers, respectively. You should be able to link the output from lxc-ls to lxc-ps and get a list of all containerized processes.
The big caveat is that you asked about Docker, and not every Docker instance runs on LXC, nor is it necessarily a localhost process. Docker defines an API that can be called to list remote Docker instances, so this technique will not help with enumerating processes on remote machines.
On Windows, Docker behaves a little differently. Container processes do not run as children of a parent process; they run as separate processes on the host.
They can be viewed with, for example, PowerShell:
Get-Process powershell
For example, the host's process list while running a microsoft/iis container will include an additional powershell process (since the microsoft/iis container runs PowerShell as its main executable).

Dreamfactory - Service user is deactivated

While meddling with an experimental DreamFactory 2.1 installation, the user service was accidentally disabled through the admin console. The message when trying to log in is:
Service user is deactivated
How do I get around this problem? Is there a configuration file or something that I need to edit to turn this back on? After a bit of probing, I saw this in the table called "service" in the MySQL db (bitnami_dreamfactory):
+------------+-----------+
| name       | is_active |
+------------+-----------+
| system     |         1 |
| api_docs   |         1 |
| files      |         0 |
| db         |         0 |
| email      |         0 |
| user       |         0 |
| mysql      |         0 |
| mongodb    |         1 |
| scr-insert |         1 |
| testdb     |         1 |
| test-mlabs |         1 |
+------------+-----------+
Can I just go ahead and issue an update statement to set 0 to 1 for the 'user' service?
Thanks,
M&M
Yes, and then clear the application cache using 'php artisan cache:clear'.
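For reference, a minimal sketch of that fix (table and column names are taken from the dump above; run the statement against the bitnami_dreamfactory database, then clear the cache from the DreamFactory install directory):
UPDATE service SET is_active = 1 WHERE name = 'user';
php artisan cache:clear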
