I am trying to set up GitLab inside a Docker container, where my Docker container is running on a remote CentOS 7 server. GitLab itself appears to be running just fine, because I can reach it with wget from inside the container and from the server itself. However, I cannot wget it from my local machine. I would like to configure it so that I can access it from anywhere, but I have no idea how to do this configuration or what is missing.
I created the docker container like this:
sudo docker run --detach --hostname gitlab.example.com \
  --publish 18080:80 --publish 12222:22 --publish 1443:443 \
  --name gitlab --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
just as the official documentation http://doc.gitlab.com/omnibus/docker/#after-starting-a-container describes.
I would like to access via http://public_ip:18080. How can I achieve that?
I already tried opening port 18080 in CentOS:
$ sudo firewall-cmd --zone=public --add-port=18080/tcp --permanent
$ sudo firewall-cmd --reload
but it's not working (and I don't really know what I'm doing here).
The container is running and the command
docker ps -a
shows this:
88510679f781 gitlab/gitlab-ce:latest "/assets/wrapper" About an hour ago Up About an hour 0.0.0.0:12222->22/tcp, 0.0.0.0:18080->80/tcp, 0.0.0.0:1443->443/tcp gitlab
I can access the web page from inside the server, and from other machines inside the same cluster using the server's private IP and port 18080. However, I cannot access it using the server's public IP.
EDIT:
As pointed out in the replies, the issue was that the firewall was not allowing connections on every port. Changing the mapping to a port that was open did the trick.
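For anyone debugging a similar setup, a quick way to tell whether the port mapping or the firewall is at fault (a sketch assuming firewalld and the 18080 mapping above):

```shell
# Is Docker actually listening on the host port?
sudo ss -tlnp | grep 18080

# Did the firewalld rule survive the reload? (assumes the public zone)
sudo firewall-cmd --zone=public --list-ports

# Does the endpoint answer locally? If this works on the server but not
# from outside, the problem is between the server and the client.
curl -I http://localhost:18080
```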
Related
I have a container running nifi (--name nifi) exposing port 8080 and another container running nifi registry (--name nifireg) exposing port 10808. I can get to both UI's, and I am able to connect nifi to the registry in the registry services by using the registry container's IP (172.17.0.5). These containers are also on a docker network called nifi-net. My issue is that the registry client is unable to talk to the registry when using the container name.
From the nifi container I can ping by container IP as well as by name (ping nifireg), so there is some level of connectivity. But if I change the registry client to point to http://nifireg:18080 or even http://nifi-net.nifireg:18080, it hangs for a while and then eventually returns this error:
Unable to obtain listing of buckets: java.net.ConnectException: Connection refused (Connection refused)
What needs to be done to allow nifi to connect to the nifi registry using the container name?
EDIT: Here is how I set everything up:
docker run -d --name nifi -p 8080:8080 apache/nifi
docker run -d --name nifireg -p 18080:18080 apache/nifi-registry
I added the networking after the fact, but that shouldn't be an issue.
docker network create nifi-net
docker network connect nifi-net nifi
docker network connect nifi-net nifireg
I don't understand why this solved the problem, but destroying the containers and recreating them with the --net nifi-net option at spin-up solved the problem.
docker run -d --name nifi --net nifi-net -p 8080:8080 apache/nifi
docker run -d --name nifireg --net nifi-net -p 18080:18080 apache/nifi-registry
The docs state that you can add them to a network after the fact, and I am able to ping from one container to the other using the name. I guess it's just a lesson that I need to use docker networking more.
I would suggest using docker-compose to manage the deployment, since you can define the network once in docker-compose.yaml and not have to worry about it again.
Plus it lets you learn about docker networking :P
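A minimal compose file for this pair might look like the following (a sketch; the service names are assumptions). Compose puts both services on a shared default network, so each can reach the other by service name:

```yaml
version: "3"
services:
  nifi:
    image: apache/nifi
    ports:
      - "8080:8080"
  nifireg:
    image: apache/nifi-registry
    ports:
      - "18080:18080"
```

With this layout, the registry client URL inside nifi would be http://nifireg:18080.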
I have been trying to create a Grafana container on my Tumbleweed server using Podman. I used NetworkManager instead of wickedd on this server. I of course published port 3000 when I ran the container:
sudo podman run -d -p 3000:3000 --name=grafana_hub -v grafana-storage:/var/lib/grafana grafana/grafana
and whitelisted the port in firewalld:
sudo firewall-cmd --zone=public --add-port=3000/tcp
but no dice.
I can access the web server with curl http://localhost:3000 on the host or even curl http://<host LAN IP>:3000 from the host, however if I run the latter on another machine on the LAN, it times out. I am at a loss here. Is there something different about Podman networking from Docker I am missing?
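One thing worth checking for this symptom (an assumption, not a confirmed diagnosis) is whether the LAN interface is actually in the public zone the rule was added to, and whether the rule was made permanent, since the firewall-cmd above omitted --permanent:

```shell
# Which zone is each interface actually assigned to?
sudo firewall-cmd --get-active-zones

# Is 3000/tcp listed in that zone?
sudo firewall-cmd --zone=public --list-ports

# A rule added without --permanent is lost on reload/reboot
sudo firewall-cmd --zone=public --add-port=3000/tcp --permanent
sudo firewall-cmd --reload
```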
I have installed a docker image "gitlab/gitlab-ce" on windows 10 and I try to run with following command.
docker run --detach --hostname https://localhost/ --publish 40443:80 --name GitLab --restart always --volume d:\gitlab\config:/etc/gitlab --volume d:\gitlab\logs:/var/log/gitlab --volume d:\gitlab\data:/var/opt/gitlab gitlab/gitlab-ce:latest
but it isn't accessible in the browser:
"This site can’t be reached.
localhost unexpectedly closed the connection."
I don't know what is wrong here. In the GitLab documentation the host name is "gitlab.example.com", and I don't know what domain that refers to.
You should use your own IP address instead of "localhost", because localhost resolves to a different address inside Docker. Check your IP address with the ipconfig command and use it in the docker run command.
I have written it up step by step in this answer: https://stackoverflow.com/a/66357935/11040700
Your Docker container is exposed on port 40443.
Ref.: https://docs.docker.com/config/containers/container-networking/
P.S. The hostname should be specified without https://
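A corrected invocation might look like this (a sketch: the --hostname value is only used by GitLab to generate its URLs, and it must be a bare host name, not a URL):

```shell
docker run --detach --hostname localhost --publish 40443:80 \
  --name GitLab --restart always \
  --volume d:\gitlab\config:/etc/gitlab \
  --volume d:\gitlab\logs:/var/log/gitlab \
  --volume d:\gitlab\data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
```

Then browse to http://localhost:40443, since host port 40443 is mapped to the container's port 80.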
I'm looking at documentation here, and see the following line:
$ docker run -it --network some-network --rm redis redis-cli -h some-redis
What should go in the --network some-network field? My docker run command in the field before did default port mapping of docker run -d -p 6379:6379, etc.
I'm starting my redis server with default docker network configuration, and see this is in use:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abcfa8a32de9 redis "docker-entrypoint.s…" 19 minutes ago Up 19 minutes 0.0.0.0:6379->6379/tcp some-redis
However, using the default bridge network produces:
$ docker run -it --network bridge --rm redis redis-cli -h some-redis
Could not connect to Redis at some-redis:6379: Name or service not known
Ignore the --network bridge command and use:
docker exec -it some-redis redis-cli
Docker includes support for networking containers through the use of network drivers. By default, Docker provides two network drivers for you, the bridge and the overlay drivers. You can also write a network driver plugin so that you can create your own drivers but that is an advanced task.
Read more here.
https://docs.docker.com/engine/tutorials/networkingcontainers/
https://docs.docker.com/v17.09/engine/userguide/networking/
You need to run
docker network create some-network
It doesn't matter what name some-network is, just so long as the Redis server, your special CLI container, and any clients talking to the server all use the same name. (If you're using Docker Compose this happens for you automatically and the network will be named something like directoryname_default; use docker network ls to find it.)
If your Redis server is already running, you can use docker network connect to attach the existing container to the new network. This is one of the few settings you're able to change after you've created a container.
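Concretely, assuming the server container is named some-redis as in the listing above, the whole flow might look like:

```shell
# Create a user-defined network and attach the already-running server to it
docker network create some-network
docker network connect some-network some-redis

# A client container on the same network can now resolve the server by name
docker run -it --network some-network --rm redis redis-cli -h some-redis
```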
If you're just trying to run a client to talk to this Redis, you don't need Docker for this at all. You can install the Redis client tools locally and run redis-cli, pointing at your host's IP address and the first port in the docker run -p option. The Redis wire protocol is simple enough that you can also use primitive tools like nc or telnet as well.
I am using gitlab-runner inside a container and register from that container.
docker run -d --name gitlab-runner --restart always \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
I started my container with those commands, and everything works as expected. However, as you can see, I didn't specify any ports in my commands. So is it using something else (I don't know what)? It still works fine even if I change the network (my custom network).
I am just a newbie with Docker, but the definition of a container says that each container has an isolated environment and can't communicate with the outside without port mappings. Right?
You don't have to bind a port on the host, because the runner polls your GitLab instance periodically and GitLab hands it jobs to do.
The runner initiates the connection, so you don't have to expose an entry point.
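For illustration, even registering the runner happens over an outbound connection from inside the container; nothing ever connects in. The URL and token below are placeholders:

```shell
# Registration is an outbound HTTPS request from the runner to GitLab
docker exec -it gitlab-runner gitlab-runner register \
  --url https://gitlab.example.com/ \
  --registration-token PLACEHOLDER_TOKEN
```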