I have an OpenStack (MicroStack) deployment running on my server (Ubuntu 20.04), which I'll call S1. On it, I have an Ubuntu 20.04 instance up and running (floating IP 10.20.20.100), from which ping 8.8.8.8 works. I can ping and SSH into this instance via the FIP from the controller node (S1).
My intention is to access this instance from my local machine (not S1), via WSL, using the floating IP:
LOCAL_PC(WSL)$> ssh 10.20.20.100
I'm looking into using NAT (SNAT/DNAT), but I could use some clarification on the proper way of performing such forwarding!
Thank you in advance!
BR
You can use SSH ProxyJump to do this, using your Ubuntu server as a proxy or 'jump host':
ssh -J user@proxy user@vm
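With the setup from the question, and assuming S1 is reachable from your local machine and the instance uses the default ubuntu user (adjust both to your environment), that would look something like:
LOCAL_PC(WSL)$> ssh -J <user>@<S1_address> ubuntu@10.20.20.100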
Well, I've found the solution to the problem in question:
I've issued the following command on my local machine to SSH into the VM:
ssh -i ~/.ssh/id_rsa -o ProxyCommand='ssh -i ~/.ssh/id_rsa -W %h:%p <user>@<jumphost/controller_node>' <user>@<target_instance_in_openstack=10.20.20.100>
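To make this persistent, the same jump can be written into ~/.ssh/config (a minimal sketch; the Host aliases, user names, and controller address are placeholders for your own values):
Host controller
    HostName <jumphost/controller_node>
    User <user>
    IdentityFile ~/.ssh/id_rsa

Host openstack-vm
    HostName 10.20.20.100
    User <user>
    IdentityFile ~/.ssh/id_rsa
    ProxyJump controller
After that, a plain ssh openstack-vm from the local machine should reach the instance.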
I want to create a Docker container on one machine (say, a CentOS machine) and then access that container from another machine (which may be CentOS or Mac). How can we do that? Is it possible with macvlan networking? If yes, what are the steps? If not, what is the way?
It depends on your final goal. Here are some approaches, depending on what you want to achieve:
Manage container and execute bash in the container on a remote host:
The easiest way is to use the environment variable DOCKER_HOST:
export DOCKER_HOST=ssh://vagrant@192.168.5.178
docker exec -ti centos_remote /bin/bash
You can find more information in this answer https://stackoverflow.com/a/51897942/2816703
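On newer Docker clients (19.03+), the same effect can be achieved with a named context instead of exporting the variable in every shell; a sketch, assuming the same user and host as above:
docker context create remote --docker "host=ssh://vagrant@192.168.5.178"
docker context use remote
docker exec -ti centos_remote /bin/bash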
Use the container as a form of virtual machine on which user can ssh:
First you will need a container that is running sshd. You will expose port 22 on another port on the host network. Finally, you will use ssh with -p to connect to that port. Here is a working example:
$ sudo docker run -d -P --name test_sshd rastasheep/ubuntu-sshd:14.04
$ sudo docker port test_sshd 22
0.0.0.0:49154
$ ssh root@localhost -p 49154
# The password is `root`
root@test_sshd $
or, if you are on a remote machine, use the host IP address xxx.xxx.xxx.xxx to connect to the container:
$ ssh root@xxx.xxx.xxx.xxx -p 49154
# The password is `root`
You can also pre-select a port (in this case port 22000) and test from the host:
~# docker run -d -p 22000:22 --name test_sshd rastasheep/ubuntu-sshd:14.04
~# ssh root@<ipaddress> -p 22000
Set up a network layer (L2/L3) between the hosts:
Using macvlan is one approach; another is ipvlan. In both cases, you are converting the host network adapter into a virtual router, after which you need to set up the routes. You can find a detailed explanation at http://networkstatic.net/configuring-macvlan-ipvlan-linux-networking/ and a minimal example below.
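As a starting point, a macvlan network can be created like this (a sketch; the subnet, gateway, and parent interface eth0 are assumptions that must match your physical LAN):
# create a macvlan network bridged onto the host's eth0
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet
# attach a container with its own LAN-visible address
docker run -d --network macnet --ip 192.168.1.50 --name test_sshd rastasheep/ubuntu-sshd:14.04
The container is then reachable at 192.168.1.50 from other machines on the LAN (though, as a known macvlan limitation, not from its own host).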
For context - I am attempting to deploy OKD in an air-gapped environment, which requires mirroring an image registry. This private, secured registry is then pulled from by other machines in the network during the installation process.
To describe the environment: the host machine where the registry container is running is on CentOS 7.6. The other machines are all VMs running Fedora CoreOS under libvirt. The VMs and the host are connected by a libvirt virtual network, which includes DHCP settings (dnsmasq) giving the VMs static IPs. The host machine also hosts the DNS server, which, as far as I can tell, is configured properly, as I can ping every machine from every other machine using its fully qualified domain name and can reach specific ports (such as the port the Apache server listens on). Podman is used instead of Docker for container management in OKD, but as far as I can tell the commands are exactly the same.
I have the registry running in the air-gapped environment using the following command:
sudo podman run --name mirror-registry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z \
-v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e REGISTRY_AUTH=htpasswd \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.pem -e REGISTRY_HTTP_TLS_KEY=/certs/registry-key.pem \
-d docker.io/library/registry:latest
It is accessible using curl -u username:password https://host-machine.example.local:5000/v2/_catalog, which returns {"repositories":[]}. I believe this confirms that my TLS and authentication configurations are correct. However, if I transfer the ca.pem file (used to sign the SSL certificates the registry uses) over to one of the VMs on the virtual network and attempt the same curl command, I get an error:
connect to 192.168.x.x port 5000 failed: Connection refused
Failed to connect to host-machine.example.local port 5000: Connection refused
Closing connection 0
This is quite strange to me, as I've been able to use this method to communicate with the registry from the VMs in the past, and I'm not sure what has changed.
After some further digging, it seems like there is some sort of issue with the port itself, but I can't be sure where the issue is stemming from. For example, if I run sudo netstat -tulpn | grep LISTEN on the host, I receive a line indicating that podman (conmon) is listening on the correct port:
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 48337/conmon
but if I test whether the port is accessible from the VM, (nc -zvw5 192.168.x.x 5000) I get a similar error: Ncat: Connection refused. If I use the same test on any of the other listening ports on the host, it indicates successful connections to those ports.
Please note, I have completely disabled firewalld, so as far as I know, all ports are open.
I'm not sure if the issue is with my DNS settings, or the virtual network, or with the registry itself and I'm not quite sure how to further diagnose the issue. Any insight would be much appreciated.
I am trying to use the Python in a Docker container on a remote machine as the interpreter in PyCharm. Since that is a mouthful, here is a diagram:
There is a Jupyter Notebook running in the container, which I am able to connect to through my local browser (although this is just for testing the connection). The command I am using to launch the Docker container is
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 -p 7722:22 --ipc=host latest:latest
I can forward port 8888, which the Jupyter notebook is running on, with ssh -L 8888:0.0.0.0:8888 BBB.BBB.BBB.BBB and thus use it on the local machine. But I don't much like using Jupyter for development and would like to use the Python interpreter in the Docker container in PyCharm.
When I select "Add Python Interpreter" in PyCharm, I get the following options:
The documentation for PyCharm suggests using the "Add Python Interpreter/Docker" tool, which looks like this:
However, the documentation doesn't say how to set up the Docker container and the connections when Docker is on a remote machine.
So my questions are: should I use a Unix or a TCP socket to connect to my remote docker? Or should I somehow forward all the relevant ports from the container and use the "SSH Interpreter" option? And if so, how do I set this all up? Am I setting up my Docker Container properly in the first place?
I think I have trawled through every forum and online resource over the last two days, but have not come any closer to getting this to work. I have also tried to get this working in Spyder, but to no avail either. So any advice is very appreciated!
Many thanks!
Thank you for depicting the dilemma so poignantly and clearly in your cartoon :-). My colleague and I were trying to do something similar and what ultimately worked beautifully was creating an SSH config directly to the Docker container jumping from the remote machine, and then setting it as a remote SSH interpreter so that pycharm doesn't even realize it's a Docker container. It also works well for vscode.
set up the ssh service in the docker container (a subset of the steps in https://dev.to/s1ntaxe770r/how-to-setup-ssh-within-a-docker-container-i5i; the port 22 stuff wasn't needed)
docker exec -it <container> bash: opens an interactive admin prompt in the container
apt-get install openssh-server
service ssh start
confirm with service ssh status -> * sshd is running
determine IP and test SSHing from remote machine into container (adapted from https://phoenixnap.com/kb/how-to-ssh-into-docker-container, steps 2 and 3)
from normal command prompt on remote machine (not within container): docker inspect -f "{{ .NetworkSettings.IPAddress }}" <container> to get container IP
test: ping -c 3 <container_ip>
ssh: ssh <container_ip>; should drop you into the container as your user; however, requires container to be configured properly (docker run cmd has -v /etc/passwd:/etc/passwd:ro \ etc.). It may ask for a password. note: if you do this for a different container later that is assigned the same IP, you will get a warning and may need to delete the previous key from known_hosts; just follow the instructions in the warning.
test SSH from local machine
if you don't have it set up already, set up passwordless ssh key-based authentication to the remote machine with the docker container
make an SSH command that uses your remote machine as a jump server to the container: ssh -J <remote_machine> <container_ip>, as described in https://wiki.gentoo.org/wiki/SSH_jump_host; if successful you should drop into the container just as you did from the remote machine
save this setup in your ~/.ssh/config; follow the ProxyJump Example from https://wiki.gentoo.org/wiki/SSH_jump_host (see the sketch after these steps)
test config with ssh <container_host_name_defined_in_ssh_config>; should also drop you into interactive container
configure pycharm (or vscode or any IDE that accepts remote SSH interpreter)
Preferences -> Project -> Python Interpreter -> Add -> SSH Interpreter -> New server configuration
host: <container_host_name_defined_in_ssh_config>
port: 22
username: <username_on_remote_server>
select the interpreter; you can navigate using the folder icon, which will walk you through paths within the Docker container, or you can enter the result of which python from the container
follow pycharm prompts
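For reference, the ~/.ssh/config entry from the jump-host step above might look something like this (a sketch; the host aliases, user name, and container IP are placeholders for your own values):
Host remote
    HostName <remote_machine_address>
    User <username_on_remote_server>

Host docker-container
    HostName <container_ip>
    User <username_on_remote_server>
    ProxyJump remote
ssh docker-container should then drop you straight into the container, and docker-container is the host name to give PyCharm in the server configuration.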
I want to use docker-machine with a remote server's Docker daemon through SSH, so there is no need to open port 2376 on the remote server.
Local Host:
$ docker-machine create --driver generic --generic-ip-address [IP_Address] \
    --generic-engine-port 2376 --generic-ssh-key ~/.ssh/id_rsa \
    --generic-ssh-user root [Host]
Remote host:
$ docker daemon -H tcp://127.0.0.1:2376
Result of executing the Local Host command:
$ docker-machine create --driver generic --generic-ip-address [IP_Address] \
    --generic-engine-port 2376 --generic-ssh-key ~/.ssh/id_rsa \
    --generic-ssh-user root [Host]
...
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
As per nmap, remote port 2376 is closed, so the error makes sense.
I have tried tunneling through ssh by executing the following in my local host:
$ ssh -L 2376:127.0.0.1:2376 [Remote_Host]
** Note: docker-machine is trying to reach the Docker daemon on the remote host, so the tunnel should be useful **
I thought maybe using ssh -R or a combination of both would work, but I have not been able to make it work yet. Do you have any idea or workaround to make this work?
Do not hesitate to point me to a completely different approach to solve this.
Thanks in advance.
Have you tried rdocker? It seems to do exactly what you are looking for. Cheers
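If rdocker doesn't fit, note that recent Docker clients (18.09 and later) can talk to a remote daemon directly over SSH, with no TCP port exposed at all; a minimal sketch, assuming key-based SSH access as root:
$ export DOCKER_HOST=ssh://root@[Host]
$ docker ps
With this approach the remote daemon can keep its default local socket, so the -H tcp://127.0.0.1:2376 flag is no longer needed at all.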
I've been looking on Google but I cannot find any answer.
Is it possible to connect to a VirtualBox Docker container that I just started up? I have the IP of the virtual machine, but if I try to connect by SSH it of course asks me for a password.
Regards.
See https://github.com/BITPlan/docker-stackoverflowanswers/tree/master/33232371 to repeat the steps.
On my Mac OS X machine
docker-machine env default
shows
export DOCKER_HOST="tcp://192.168.99.100:2376"
So I added an entry
192.168.99.100 docker
to my /etc/hosts
so that ping docker works.
As a Dockerfile I am using:
# Ubuntu image
FROM ubuntu:14.04
which I am building with
docker build -t bitplan/sshtest:0.0.1 .
and testing with
docker run -it bitplan/sshtest:0.0.1 /bin/bash
Now ssh docker will react with
The authenticity of host 'docker (192.168.99.100)' can't be established.
ECDSA key fingerprint is SHA256:osRuE6B8bCIGiL18uBBrtySH5+iGPkiHHiq5PZNfDmc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'docker,192.168.99.100' (ECDSA) to the list of known hosts.
wf@docker's password:
But here you are connecting to the docker machine, not your image!
SSH is at port 22. You need to redirect it to another port and configure your image to support SSH access for root or a valid user.
See e.g. https://docs.docker.com/examples/running_ssh_service/
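Along the lines of that linked example, a minimal sshd-enabled image might look like this (a sketch; the root password, the sed line, and the 0.0.2 tag below are placeholder choices to adapt):
# Ubuntu image with an SSH daemon
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
# set a root password and allow root login (for testing only!)
RUN echo 'root:THEPASSWORD' | chpasswd
RUN sed -i 's/PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Built and started with docker run -d -p 2222:22 bitplan/sshtest:0.0.2, the container should then be reachable with ssh root@docker -p 2222 (using the /etc/hosts alias from above) rather than landing on the docker machine.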
Are you trying to connect to a running container, or to the VirtualBox image running the Docker daemon?
If the first: you cannot just SSH into a running container unless that container is running an SSH daemon. The easiest way to get a shell into a running container is with docker exec -ti <container name/id> /bin/sh. Do a docker ps to see running containers.
If the second: if your host was created with docker-machine, then you can ssh into it with docker-machine ssh <machine name>. You can see all of your running machines with docker-machine ls.
If this doesn't help, can you clarify your question a little and provide details on how you're creating your image and starting the container?
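Concretely, the two cases look something like this (assuming a docker-machine named default):
# case 1: shell into a running container
$ docker ps
$ docker exec -ti <container name/id> /bin/sh
# case 2: ssh into the VirtualBox VM running the daemon
$ docker-machine ls
$ docker-machine ssh default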
You can use SSH keys for passwordless access.
Here's an intro:
https://wiki.archlinux.org/index.php/SSH_keys
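The usual setup is a two-step sketch (the user name and VM address are placeholders):
# generate a key pair on your local machine (if you don't have one)
$ ssh-keygen -t ed25519
# copy the public key into the target VM's authorized_keys
$ ssh-copy-id user@<vm_ip>
# subsequent logins should no longer prompt for a password
$ ssh user@<vm_ip>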