Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 years ago.
I am using Windows with CentOS in a Docker container. I am already connected to a VPN using OpenVPN on Windows, but when I try to reach a host from the dockerized CentOS, it says it is unable to connect.
Is it possible to reach that host from CentOS?
From what I've gathered from your question, you should follow both of the steps below.
By default you will be using the bridge network driver, so you may not need to be concerned with Step 1; I mention it in case you were experimenting with non-default settings.
Use the bridge or host network driver for your Docker container to get access to the network of your host machine (the one running the containers). Go through https://docs.docker.com/network/ for more info.
Configure the Docker proxy: add your VPN/proxy settings to the files described at https://docs.docker.com/network/proxy/, as appropriate for your use case.
Short Version:
Add the following to your Dockerfile, or pass their equivalents if you use docker run to execute your builds. For more info, refer to the link in Step 2.
ENV HTTP_PROXY "proxy"
ENV HTTPS_PROXY "proxy"
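If you launch with docker run instead, the same settings can be passed as environment flags; a minimal sketch (the proxy URL is a placeholder, substitute your own VPN/proxy endpoint):

```shell
# Placeholder proxy address; replace with the proxy/VPN endpoint you actually use.
docker run --rm \
  -e HTTP_PROXY="http://proxy.example.com:8080" \
  -e HTTPS_PROXY="http://proxy.example.com:8080" \
  centos:7 env
```

The trailing env just prints the container's environment so you can confirm the variables arrived.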
SELinux may be preventing the connection to the VPN. You can check the log messages with the following command:
grep "SELinux is preventing" /var/log/messages
If the log messages show that SELinux is involved, then to confirm that SELinux alone is responsible, edit /etc/selinux/config and change the line SELINUX=enforcing to:
SELINUX=permissive
and restart CentOS. This makes SELinux only generate log messages without enforcing its policies. If this resolves the problem, you should create appropriate SELinux policies based on those log messages, which tell you exactly which restrictions block the VPN connection; your custom policies must grant the required permissions. Then revert the change in the file above and restart CentOS, so you keep SELinux security along with VPN access.
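Once permissive mode confirms SELinux is the culprit, the recorded denials can be turned into a custom policy module; a sketch assuming the standard audit tooling (ausearch/audit2allow) is installed, with myopenvpn as an arbitrary module name:

```shell
# Collect recent AVC denials and generate/compile a policy module from them
ausearch -m avc -ts recent | audit2allow -M myopenvpn
# Install the generated module
semodule -i myopenvpn.pp
# Re-enable enforcing mode (or set SELINUX=enforcing in /etc/selinux/config and reboot)
setenforce 1
```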
You can also check the permissions of the *.pem files used by OpenVPN with the following command:
ls -l /home/some-user/.cert/nm-openvpn/
for user 'some-user' or
ls -l /root/.cert/nm-openvpn/
for user 'root'.
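Private keys are normally expected to be readable by their owner only; a runnable illustration on a scratch file (the path is made up for the demo, not one of the real nm-openvpn paths):

```shell
# Create a scratch key file and give it the restrictive mode expected
# for private key material (owner read/write only).
mkdir -p /tmp/nm-openvpn-demo
touch /tmp/nm-openvpn-demo/client.pem
chmod 600 /tmp/nm-openvpn-demo/client.pem
stat -c '%a' /tmp/nm-openvpn-demo/client.pem   # prints 600
```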
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 6 months ago.
I saw in a few issues (such as https://github.com/docker/for-mac/issues/2716) that network_mode: host is not supported on Mac.
However, when I run a Kubernetes cluster using kind, and run Spark on K8s with the Spark master address set to k8s://127.0.0.1:59369 (the same address and port as the K8s control plane), I get a Connection Refused error unless I use network_mode: host. Error line:
Caused by: java.net.ConnectException: Failed to connect to /127.0.0.1:59369
This confuses me: I thought network_mode: host should have no effect on Mac, yet there clearly is an effect, and it is blocking me from using the bridge network and adding a port mapping for another application in this container.
Any ideas how to resolve this?
Specifying host networking on Docker Desktop setups does do something; the problem is that the "host" network it uses isn't the "host" you'd intuitively expect as the person typing at the keyboard.
Docker Desktop on macOS launches a hidden Linux VM, and the containers run inside that VM. When you run docker run -p, Docker Desktop is able to forward that port from the VM to the host, so port mappings work normally. If you use docker run --net=host, though, you get access to the VM's host network; since this generally disables Docker's networking layer, Docker isn't able to discover what the container might be doing, and it can't forward anything from the actual host into the VM to the container. That's the sense in which host networking "doesn't work".
In practice I see host networking suggested for four things:
Processes with unpredictable or a very large number of port mappings, where docker run -p can't be used
To actually manage the host's network environment, in spite of running in a container
To access non-container processes on the same host, where host.docker.internal would also work
To access the published ports of other containers, without setting up Docker networking
I think you're running into this last case here. Kind publishes a port for the Kubernetes API server, so port 59369 is accessible both on the actual host and inside the Linux VM. If your Spark container enables host networking it uses the VM's host network, but the other container's published port is still reachable there, which is why the http://localhost:59369 URL still works.
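If you want to avoid host networking, one approach that often works on Docker Desktop is pointing the bridged container at the Mac's published port via the special name host.docker.internal. A hypothetical sketch (the image name and environment variable are assumptions, and kind's API-server certificate may not cover that hostname, so TLS settings might need adjusting):

```shell
docker run --rm \
  -e SPARK_MASTER="k8s://https://host.docker.internal:59369" \
  my-spark-image:latest
```

This keeps the container on the bridge network, so you can still publish ports for the other application in the same container.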
Closed 1 year ago.
I've installed GitLab EE on Docker. I'd like to authenticate via SSH instead of a password, but each time I try, the connection is closed. The SSH port mapping is 1122->22, so I'm connecting with git@gitlab.example -p 1122. I also opened the port in ufw and checked that the OpenSSH server is running in the container.
Error: Connection closed by HOST port 1122
I searched for a long time but didn't find anything, so I'd be glad for any suggestions.
Potential problem with Docker and UFW
A while ago I was wondering how to make UFW and Docker work together. (The GitLab service doesn't seem to be the problem; I'm pretty sure you could have had the same issue with any service at all.)
Check out this thread: What is the best practice of docker + ufw under Ubuntu
And also consider this:
To persist the iptables rule, install the iptables-persistent package for your server distro; in my case (Debian) it is sudo apt install iptables-persistent, and the package installation will add the NAT rule to a persistent file which is executed on boot. ~afboteros
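For reference, the NAT rule referred to in that thread looks roughly like this (172.17.0.0/16 is Docker's default bridge subnet; adjust to yours):

```shell
# Masquerade traffic from the Docker bridge leaving via other interfaces
iptables -t nat -A POSTROUTING ! -o docker0 -s 172.17.0.0/16 -j MASQUERADE
# Persist it across reboots on Debian/Ubuntu
sudo apt install iptables-persistent
sudo netfilter-persistent save
```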
Potential problem with Gitlab and Docker
When running GitLab through Docker, heavily port-bound services like SSH may need to be configured to match the exposed port. If you set the SSH service to 1122 as you intended, and bind it that way in the Dockerfile or run command, you may be able to make it work.
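As a sketch of what that can look like with the official image (hostname and ports taken from the question; treat this as illustrative rather than a verified working setup):

```shell
docker run -d --hostname gitlab.example \
  -p 80:80 -p 443:443 -p 1122:22 \
  gitlab/gitlab-ee:latest
# Then, in /etc/gitlab/gitlab.rb, advertise the mapped port in clone URLs:
#   gitlab_rails['gitlab_shell_ssh_port'] = 1122
# and apply it with: gitlab-ctl reconfigure
```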
Official Gitlab documentation
Closed 2 years ago.
Using Google Cloud Compute Engine to run several interconnected services:
IPFS node (gcloud compute engine)
Postgres database (gcloud SQL)
Nginx + Docker app (gcloud compute engine)
In attempting to forward and open the correct ports for service interconnectivity, I ran into some issues, specifically when opening non-web, non-SSH ports, and/or when listening on non-standard web ports with Nginx (e.g. to forward HTTP requests on non-standard ports to the Docker container).
Using Nmap, I discovered over 900 ports in the 'filtered' state. I assume this is because Google Cloud virtual hosting uses bc.googleusercontent.com as the primary host.
An example: port 8020 is reported as the 'intu-ec-svcdisc' service, which I found to be the INTUIT service discoverer.
I'm hoping to find a way to open the several of these ports that I need.
As per the nmap documentation:
Filtered means that a firewall, filter, or other network obstacle is blocking the port so that Nmap cannot tell whether it is open or closed. Closed ports have no application listening on them, though they could open up at any time.
By default, GCP only allows incoming traffic on ports 22/tcp and 3389/tcp to your instances; however, you have the option to tag your GCE resource with http-server and/or https-server to allow ports 80/tcp and 443/tcp as well.
What you are seeing in your nmap output is exactly that: those ports show as open/closed because they can actually reach your GCE instance, while the GCP firewall blocks additional incoming traffic by default, which nmap reports as "filtered".
You would need to open your desired ports by following the guide or video linked here; keep in mind that you may also have a firewall enabled inside your operating system that could be turning those requests away.
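For example, opening port 8020 from the question could look like this (the rule name and target tag are placeholders for your own):

```shell
gcloud compute firewall-rules create allow-port-8020 \
  --allow=tcp:8020 \
  --target-tags=my-ipfs-node \
  --source-ranges=0.0.0.0/0
```

Then apply the same tag (here my-ipfs-node) to the instance that should accept the traffic, and narrow --source-ranges if the port shouldn't be open to the whole internet.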
Closed 1 year ago.
I have Docker running inside an Ubuntu 18.04 VM (VMware Player), which is hosted on a Windows PC. Docker has a container for GitLab which I can access through localhost:4000 from my VM. The question is: how can I access the very same GitLab instance from my Windows PC? From my understanding there are two layers I need to connect: first Docker with the VM host, and second the VM host with the Windows host. I've tried creating a bridged connection between the Windows host and the VM, but I couldn't make it work. Please provide a detailed answer with steps if possible.
OK problem solved thanks to PanosMR.
The solution for me was to set the VM network as host-only, then assign a subnet to the VM, like 192.168.42.0 with mask 255.255.255.0.
After that I checked which IP my VM was assigned: it was 192.168.42.128. Then, in Docker inside my Ubuntu VM, I set the GitLab container's --publish IP to that same VM IP, plus the port.
For example --publish 192.168.42.128:4000:80 and boom! When the GitLab container started, I had access from my Windows PC on that IP.
That was the simplest solution I've ever seen, and the only one that really worked.
If I remember correctly, VirtualBox has a settings screen to configure port forwarding. Search Google around that.
Closed 2 years ago.
I'm trying to ssh from my Windows host to a Docker Ubuntu container. I know I can use docker exec -it <container-name> /bin/bash to get a shell; however, I want to do a normal ssh root@192.168.xx.xx login, because I want to simulate a remote computer login and it also works easily with my PyCharm.
However, after I installed openssh-server and started it, logging in with ssh from my host is still not possible:
:~$ ssh root@192.168.99.105
The authenticity of host '192.168.99.105 (192.168.99.105)' can't be established.
ECDSA key fingerprint is SHA256:********
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.105' (ECDSA) to the list of known hosts.
root@192.168.99.105's password: xxx
Permission denied, please try again.
How can I solve this problem? I just want to simply ssh to this container...
To answer the question asked in the title:
Why I can not simply ssh to docker container from my windows host?
Docker is a way to configure Linux kernel settings on a process to limit what the process can see (namespaces) and how many resources that process can use (cgroups). So this question becomes "why can't I ssh into a process", and the answer is typically: because that process is not an sshd server. The Ubuntu image for Docker is not a virtual machine with all the associated daemons; it doesn't even include the kernel. Instead, it's a minimal filesystem with utilities found in an Ubuntu environment (like apt-get and bash).
On the other hand, the docker exec command does work because it is running a second command in the same isolated environment as the rest of the container. So if bash is installed in the image, then docker exec -it $container_id bash will run an interactive shell with the same namespaces and cgroups as the rest of your container processes.
If you want to ssh into your container, my advice is that you don't. This is similar to a code smell, a sign you are treating containers like a VM, and will have issues with the immutability and ephemeral nature of containers. The goal of working with containers is to have all your changes pushed into version control, build a new image, and deploy that image, for every change to the production environment. This eliminates the risk of state drift where interactive changes were made over time by one person and not known to the person trying to rebuild the environment later.
If you still prefer to ignore the advice, or your application is explicitly an sshd server, then you need to install and configure sshd as your running application inside of the container. There's documentation from Docker on how to do this, and lots of examples on Docker Hub from various individuals if you search on sshd (note that I don't believe any of these are official so I wouldn't recommend any of them).
You likely need to configure sshd on the container to allow root access and/or enable password authentication.
sudo sed -i 's|[#]*PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
echo PermitRootLogin yes | sudo tee -a /etc/ssh/sshd_config
sudo service sshd restart
One or both of these commands may help if your container image is Ubuntu/Debian based. I personally have never had the need to ssh into a Docker container.
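If you do decide a container should run sshd as its main process, a minimal Dockerfile sketch along the lines of the Docker documentation might look like this (the root password and base image are placeholders; not production advice):

```dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y openssh-server \
 && mkdir -p /var/run/sshd \
 && echo 'root:changeme' | chpasswd \
 && sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
# Run sshd in the foreground so it stays PID 1 of the container
CMD ["/usr/sbin/sshd", "-D"]
```

Built and started with a mapping like -p 2222:22, this would let you ssh root@localhost -p 2222 from the host.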