Can't ssh to GitLab EE in a Docker container [closed]

I've installed GitLab EE in Docker. I'd like to authenticate via SSH instead of a password, but each time I try, the connection is closed. The SSH port mapping is 1122->22, so I'm connecting with git@gitlab.example -p 1122. I've also allowed the port in ufw and checked that the OpenSSH server is running in the container.
Error: Connection closed by HOST port 1122
I've been searching for a long time without finding anything, so I'd be glad for any suggestions.

Potential problem with Docker and UFW
A while ago I was wondering how to make UFW and Docker work together (the GitLab service doesn't seem to be the problem; you could well have hit the same issue with any service).
Check out this thread: What is the best practice of docker + ufw under Ubuntu
And also consider this:
To persist the iptables rule, install the iptables-persistent package according to your server distro. In my case (Debian) it is sudo apt install iptables-persistent, and the package installation will add the NAT rule to a persistent file which is executed on boot. ~afboteros
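A minimal sketch of what that can look like on Debian/Ubuntu, assuming the host port 1122 from the question (adjust to your own setup):

# Allow the mapped SSH port through UFW
sudo ufw allow 1122/tcp

# Install iptables-persistent so manually added NAT rules survive a reboot
sudo apt install iptables-persistent

# Save the currently loaded rules (netfilter-persistent ships with the package)
sudo netfilter-persistent save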
Potential problem with Gitlab and Docker
When running GitLab through Docker, some tightly port-bound services such as SSH may need to be configured to match the exposed port. If you set GitLab's SSH port to 1122 as you intended, and bind it the same way in the Dockerfile or run command, it may work.
Official Gitlab documentation
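As a hedged sketch (hostname taken from the question, image tag is a placeholder), the official image lets you pass that setting through GITLAB_OMNIBUS_CONFIG while publishing the matching host port:

# Map host port 1122 to the container's SSH port 22 and tell GitLab to
# advertise 1122 in its SSH clone URLs
docker run --detach \
  --hostname gitlab.example \
  --publish 1122:22 --publish 443:443 --publish 80:80 \
  --env GITLAB_OMNIBUS_CONFIG="gitlab_rails['gitlab_shell_ssh_port'] = 1122" \
  --name gitlab \
  gitlab/gitlab-ee:latest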

Related

How to disable gcloud ports, or why does Google Cloud Compute Engine have so many ports filtered or open? [closed]

I am using Google Cloud Compute Engine to run several interconnected services:
IPFS node (gcloud compute engine)
Postgres database (gcloud SQL)
Nginx + Docker app (gcloud compute engine)
While trying to forward and open the correct ports for service interconnectivity, I ran into some issues, specifically when opening non-web, non-SSH ports and/or listening on non-standard web ports with Nginx (e.g. for forwarding HTTP requests on non-standard ports to the Docker container).
Using Nmap, I discovered over 900 ports in the 'filtered' state. I'm assuming this is because Google Cloud virtual hosting uses bc.googleusercontent.com as the primary host.
An example: port 8020 is reported as filtered and identified as the 'intu-ec-svcdisc' service, which turns out to be the Intuit service discoverer.
I'm hoping to find a way to open several of these ports that I need.
As per the Nmap documentation:
Filtered means that a firewall, filter, or other network obstacle is blocking the port so that Nmap cannot tell whether it is open or closed. Closed ports have no application listening on them, though they could open up at any time.
By default, GCP only allows incoming traffic on 22/tcp and 3389/tcp to your instances; however, you can tag your GCE resource with http-server and/or https-server to allow ports 80/tcp and 443/tcp as well.
What you are seeing in your Nmap output is exactly that: the few ports that can actually reach your GCE instance show up as open or closed, while the GCP firewall blocks all other incoming traffic by default, which is why the rest appear as "filtered".
You would need to open your desired ports there by following the referenced guide or video; keep in mind that a firewall inside the operating system could also be turning those requests away.
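As a hedged sketch of what opening one of those ports can look like with the gcloud CLI, using port 8020 from the question (rule name, target tag, instance name, zone, and source range are placeholders):

# Create an ingress rule that allows TCP 8020 to instances tagged ipfs-node
gcloud compute firewall-rules create allow-ipfs-8020 \
    --direction=INGRESS \
    --allow=tcp:8020 \
    --target-tags=ipfs-node \
    --source-ranges=0.0.0.0/0

# Attach the tag to the instance that should accept this traffic
gcloud compute instances add-tags my-ipfs-instance --tags=ipfs-node --zone=us-central1-a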

Connect a Docker container's port, running inside an Ubuntu VM, with the VM host machine's network [closed]

I have Docker running inside an Ubuntu 18.04 VM (VMware Player) which is hosted on a Windows PC. Docker has a container for GitLab which I can access at localhost:4000 from the VM. The question is: how can I access that same GitLab instance from my Windows PC? From my understanding there are two layers I need to connect: first, Docker with the VM host, and second, the VM host with the Windows host. I've tried creating a bridged connection between the Windows host and the VM, but I couldn't make it work. Please provide a detailed answer with steps if possible.
OK, problem solved thanks to PanosMR.
The solution for me was to set the VM network to host-only, then assign a subnet to the VM, such as 192.168.42.0 with a mask of 255.255.255.0.
After that I checked which IP my VM was assigned: it was 192.168.42.128. Then, in Docker inside my Ubuntu VM, I set the GitLab container's --publish address to that same VM IP plus the port.
For example --publish 192.168.42.128:4000:80, and boom! When the GitLab container started, I had access from my Windows PC on that IP.
That was the simplest solution I've ever seen, and the only one that really worked for me.
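A minimal sketch of that run command, assuming the host-only adapter gave the VM 192.168.42.128 and using a placeholder GitLab image tag:

# Bind the container's port 80 to port 4000 on the VM's host-only IP only
docker run --detach \
  --publish 192.168.42.128:4000:80 \
  --name gitlab \
  gitlab/gitlab-ce:latest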
If I remember correctly, VirtualBox has a settings screen to configure port forwarding; search Google for that.

Add an insecure registry to Docker on Ubuntu [closed]

I am trying to add a private registry to Docker on an Ubuntu machine, using Nexus as the repository.
Below is a screenshot of the Nexus configuration.
On the Docker host I added DOCKER_OPTS="--insecure-registry=xx.xx.xx.xx:8083" to /etc/default/docker.
After these changes I restarted Docker using the commands below:
systemctl daemon-reload
systemctl restart docker
Now when I run docker info it does not show my private registry.
Is anything missing in my configuration?
Try adding the insecure registry entry to /etc/docker/daemon.json instead.
File content:
{ "insecure-registries": ["registry.example.com"] }
Then restart the Docker daemon:
sudo systemctl restart docker
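A sketch of the whole sequence, reusing the (redacted) address from the question; adapt the host and port to your Nexus Docker connector:

# /etc/docker/daemon.json
{ "insecure-registries": ["xx.xx.xx.xx:8083"] }

# Reload and restart the daemon, then confirm the entry shows up
sudo systemctl daemon-reload
sudo systemctl restart docker
docker info        # look for the "Insecure Registries" section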

Why can I not simply ssh to a Docker container from my Windows host? [closed]

I'm trying to ssh from my Windows host to a Docker Ubuntu container. I know I can use docker exec -it <container-name> /bin/bash to get a shell, but I want to do a normal "ssh root@192.168.xx.xx" login because I want to simulate a remote machine login, and it also works easily with PyCharm.
However, after installing openssh-server and starting it, logging in via ssh from my host is still not possible:
:~$ ssh root@192.168.99.105
>>> The authenticity of host '192.168.99.105 (192.168.99.105)' can't be established.
ECDSA key fingerprint is SHA256:********
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.105' (ECDSA) to the list of known hosts.
root@192.168.99.105's password: xxx
Permission denied, please try again.
How can I solve this problem? I just want to simply ssh to this container...
To answer the question asked in the title:
Why can I not simply ssh to a Docker container from my Windows host?
Docker is a way to configure Linux kernel settings on a process to limit what the process can see (namespaces) and how many resources that process can use (cgroups). So this question becomes "why can't I ssh into a process", and the answer is typically: because that process is not an sshd server. The Ubuntu image for Docker is not a virtual machine with all the associated daemons; it doesn't even include the kernel. Instead, it's a minimal filesystem with the utilities found in an Ubuntu environment (like apt-get and bash).
On the other hand, the docker exec command does work because it is running a second command in the same isolated environment as the rest of the container. So if bash is installed in the image, then docker exec -it $container_id bash will run an interactive shell with the same namespaces and cgroups as the rest of your container processes.
If you want to ssh into your container, my advice is that you don't. This is similar to a code smell, a sign you are treating containers like a VM, and will have issues with the immutability and ephemeral nature of containers. The goal of working with containers is to have all your changes pushed into version control, build a new image, and deploy that image, for every change to the production environment. This eliminates the risk of state drift where interactive changes were made over time by one person and not known to the person trying to rebuild the environment later.
If you still prefer to ignore the advice, or your application is explicitly an sshd server, then you need to install and configure sshd as your running application inside of the container. There's documentation from Docker on how to do this, and lots of examples on Docker Hub from various individuals if you search on sshd (note that I don't believe any of these are official so I wouldn't recommend any of them).
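A minimal sketch of such an image, in the spirit of Docker's old "Dockerizing an SSH service" example; the base image tag and root password are placeholders you should change:

# Dockerfile: run sshd as the container's main process
FROM ubuntu:22.04
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y openssh-server \
    && mkdir -p /var/run/sshd \
    && echo 'root:changeme' | chpasswd \
    && sed -i 's/#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config \
    && sed -i 's/#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Build it with docker build -t my-sshd . , run it with docker run -d -p 2222:22 my-sshd, and connect with ssh root@localhost -p 2222 (on Docker Toolbox, use the machine's 192.168.99.x address instead of localhost).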
You likely need to configure sshd in the container to allow root access and/or enable password authentication:
sudo sed -i 's|[#]*PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
echo PermitRootLogin yes | sudo tee -a /etc/ssh/sshd_config
sudo service ssh restart
(The service is named ssh on Debian/Ubuntu images; on CentOS/RHEL it is sshd.)
One or both of these commands may help if your container image is Ubuntu/Debian based. I personally have never had the need to ssh into a Docker container.

Unable to reach host from dockerized CentOS through OpenVPN proxy [closed]

I am using Windows with CentOS in a Docker container. I am already connected to a VPN using OpenVPN on Windows, but when I try to reach a host from the dockerized CentOS, it says it is unable to connect.
Is it possible to reach that host from CentOS?
From what I've gathered from your question, you should follow both of the steps below.
(You will be using the bridge network driver by default, so you may not need to be concerned with Step 1; it is mentioned in case you were experimenting with the default settings.)
1. Use the bridge or host network driver for your Docker container so it can reach the network of the host machine that runs the containers. See https://docs.docker.com/network/ for more info.
2. Configure the Docker proxy. Add your VPN/proxy settings to the files mentioned in https://docs.docker.com/network/proxy/ as appropriate for your use case.
Short Version:
Add the following to your Dockerfile, or its equivalent if you use docker run to execute your builds. For more info refer to the link in Step 2.
ENV HTTP_PROXY "proxy"
ENV HTTPS_PROXY "proxy"
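Alternatively (per the proxy page linked in Step 2), the Docker client can inject these variables into every container via ~/.docker/config.json; the proxy address below is a placeholder:

{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}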
SELinux may also be preventing the connection to the VPN. You can check the log messages with the following command:
grep "SELinux is preventing" /var/log/messages
If the log messages show that SELinux is involved in the problem, you can confirm it is the only thing in the way by editing /etc/selinux/config and changing the line SELINUX=enforcing to:
SELINUX=permissive
and restarting CentOS. In permissive mode SELinux only generates log messages without enforcing its policies. If the problem goes away, you should create appropriate SELinux policies based on those log messages, which tell you which restrictions are preventing the VPN connection; your custom policies must grant the required permissions. Then you can revert the change in the file above and restart CentOS to get SELinux enforcement back while still being able to reach the VPN.
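A hedged sketch of turning those denials into a custom policy module with the standard SELinux tools (the module name is a placeholder; review the generated rules before loading them):

# Collect recent AVC denials and generate a local policy module from them
ausearch -m avc -ts recent | audit2allow -M my-openvpn-local

# Load the generated module
semodule -i my-openvpn-local.pp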
You can also check the permissions of the *.pem files associated with OpenVPN using the following command:
ls -l /home/some-user/.cert/nm-openvpn/
for the user 'some-user', or
ls -l /root/.cert/nm-openvpn/
for the user 'root'.
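If those key files turn out to be unreadable by the user that runs OpenVPN, a typical (hypothetical) fix is to tighten their permissions and ownership:

# Make the certificates/keys readable only by their owner
chmod 600 /home/some-user/.cert/nm-openvpn/*.pem
chown some-user: /home/some-user/.cert/nm-openvpn/*.pem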
