My question may seem similar to some others, but I'm new and don't have enough reputation to add a comment to them:
gitlab in docker behind traefik proxy fails (usually)
I will try to be specific:
Traefik won't forward SSH requests because they are not HTTP.
I want GitLab CE on a VPS, which I usually connect to over SSH.
Is it OK to expose GitLab on a free port on my server to listen for SSH connections? Is there no risk of confusion between the command to enter the VPS and the command to push to GitLab?
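To illustrate the setup in question (the port 2222 and the host name are placeholders): the host's own sshd keeps port 22, while the GitLab container publishes its SSH on another port, so the two commands never collide:
ssh admin@vps.example.com                                      # administer the VPS (host sshd, port 22)
git clone ssh://git@vps.example.com:2222/group/project.git     # talk to GitLab's sshd (container, port 2222)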
I'm setting up Atlassian Crowd using Docker. I'm using docker compose to run Crowd from this GitHub repository: https://github.com/teamatldocker/crowd
Crowd is running on port 8095, and now I need to make it available on port 80. I tried nginx (there are many Docker images out there) but I didn't have any success.
What is the easiest and best way to have port 8095 proxied to port 80?
Thank you.
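One common approach (a sketch, assuming the Crowd container is reachable on the compose network under the service name crowd, which is an assumption) is to run nginx in the same compose project with a server block along these lines, and publish the nginx container's port 80 on the host:
server {
    listen 80;
    location / {
        # "crowd" is the assumed compose service name; adjust to your setup
        proxy_pass http://crowd:8095;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}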
I'm just going to put this here, because it was very difficult to find information on this topic and I ended up solving it myself.
Setup
Bastion host in AWS with a public IP address
Registry (image registry:2) on a private subnet behind the bastion host
Successful SSH port forwarding through the bastion, connecting localhost:5000 to registry:5000 (an example command is sketched after this list)
curl localhost:5000/v2/_catalog returns the list of repositories in the registry.
So far so good.
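For reference, the tunnel mentioned above was established with a command along these lines (the bastion address and user are placeholders):
ssh -N -L 5000:registry:5000 ec2-user@bastion.example.com
-N skips opening a remote shell; -L binds localhost:5000 and forwards it through the bastion to registry:5000.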
docker tag {my image} localhost:5000/{my image}
docker push localhost:5000/{my image}
Result
The push refers to repository [localhost:5000/{my image}]
Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
How do we connect to a registry port forwarded to localhost?
I have found some obscure posts suggesting that you need to make a custom privileged container and do your ssh bastion port forwarding inside the container. This is essentially working around a problem introduced by the fact that the docker daemon is actually running inside a virtual machine!
You can find a hint here: https://docs.docker.com/docker-for-windows/networking/
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows.
So given the above, I reasoned that even though this advice is for containers, the docker daemon itself is probably acting on docker cli commands from within a similar context.
Therefore, you first need to add host.docker.internal:5000 as an insecure registry in your Docker daemon setup. On Docker for Windows, this can be found in Settings > Daemon > Insecure registries. Unfortunately host.docker.internal doesn't count as localhost, so this step is required (Docker allows localhost insecure registries by default). Then simply:
docker tag {my image} host.docker.internal:5000/{my image}
docker push host.docker.internal:5000/{my image}
Success!
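For the record, on engines configured through daemon.json rather than the Docker for Windows UI, the equivalent setting would presumably be the following (restart the daemon after editing):
{
  "insecure-registries": ["host.docker.internal:5000"]
}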
Hopefully this helps some other very confused developers.
I have a machine with SSH running on it. Now I wanted to run GitLab inside a Docker container, so I followed the instructions here: https://docs.gitlab.com/omnibus/docker/. The instructions say to bind the container's SSH port 22 to the host machine's SSH port (22). I was unable to do this because that port was already bound to the OpenSSH server on the host machine, so I bound the container's SSH port to some other port, say 222. GitLab got set up, but when I try to clone a project over SSH I am not able to.
Is there a way to fix this issue? What could be the reason? I suspect it's the port mapping. I want to keep SSH running on my host machine, run GitLab inside the container, and still be able to use SSH for commit, clone and push.
Docker port mapping is one thing, but you also need to adapt the GitLab Rails configuration in gitlab.rb to specify the custom SSH port:
gitlab_rails['gitlab_shell_ssh_port'] = 222
and restart the container.
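Putting it together, a sketch (image tag, host name and port choices are illustrative):
docker run -d --name gitlab -p 80:80 -p 443:443 -p 222:22 gitlab/gitlab-ce:latest
With gitlab_rails['gitlab_shell_ssh_port'] = 222 set, GitLab advertises clone URLs that carry the custom port, e.g.:
git clone ssh://git@gitlab.example.com:222/group/project.git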
I am running a Docker registry service in a locally-hosted docker-machine VM in a Docker 1.13 swarm on OS X via:
docker service create --name registry --with-registry-auth --publish 5000:5000 registry:2
The service is running and I can push/pull images on a swarm manager. However, when I try to push images to the service from the machine hosting the swarm VM using port 5000, I get:
Get https://<IP of swarm manager>:5000/v1/_ping: http: server gave HTTP response to HTTPS client
Does anyone know how to securely access a docker registry service from outside the swarm? Possibly a FAQ, but I haven't found an article addressing it on the docker site. They all seem to deal with container TLS settings or accessing the server from within the swarm (which is rather nice).
thanks!
The documentation on securing the registry socket deals with TLS settings because that's exactly what you need to configure. The registry documentation discusses this at a high level. The same steps used to create a TLS CA, key, and certificate for the docker socket can be used for the registry socket and are documented on Docker's site.
Note that if you generate your own certificates from your own CA, you'll need to trust that CA. There are various ways to do this just for docker, but the easiest (and possibly less secure) solution is to add it to the list of root CAs on your host. That procedure varies per Linux distribution.
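As a rough sketch of those steps, adapted from the registry documentation's docker run example to a swarm service (the host name and paths are placeholders, and the bind-mounted certs directory must exist on whichever node the task lands on):
# Create a self-signed certificate for the registry host (use a real CA in production)
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/domain.key -x509 -days 365 \
  -out certs/domain.crt -subj "/CN=myregistry.example.com"
# Recreate the service with the certs mounted and TLS enabled
docker service create --name registry --publish 5000:5000 \
  --mount type=bind,source=$(pwd)/certs,target=/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
# On each client, trust the certificate for docker only (no OS-level change)
sudo mkdir -p /etc/docker/certs.d/myregistry.example.com:5000
sudo cp certs/domain.crt /etc/docker/certs.d/myregistry.example.com:5000/ca.crt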
I'm new to Docker and Docker swarm and am trying out both.
Initially I started a Docker daemon and was able to connect to it on its HTTP port, i.e. 2375. I installed the Docker cloud plugin in Jenkins, added http://daemon-IP:2375, and was able to create containers. It creates a container, does my build inside it, and destroys the container.
My query is: will I be able to connect to Docker swarm on an HTTP port the same way I am connecting to a standalone Docker daemon? Is there any documentation on it, or is my understanding of swarm wrong?
Please suggest.
Thanks
Yes, you can connect to a remote host the same way you do via the Unix socket. People very often forget that Docker is a client-server architecture and your "docker run ..." commands are basically just commands issued by the docker client.
If you set certain environment variables:
DOCKER_HOST=tcp://ip.address.of.host:port
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/directory/where/certs/are
(The last two are needed only for TLS connections, which I would highly recommend. You'd have to set up https://docs.docker.com/engine/security/https/, which is recommended for a production environment.)
Once you've set your DOCKER_HOST environment variable, if you issue a docker command and get a response, it will be from the remote host if everything is set up correctly.
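A minimal sketch (the host name and cert directory are placeholders; 2376 is the conventional port for the TLS-protected socket):
export DOCKER_HOST=tcp://swarm-manager.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/swarm-certs
docker info    # the response now comes from the remote daemon, not the local one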