Docker gitlab container healthy but not accessible - docker

Hello,
I have the following problem on docker 18.06.1-ce.
I have an owncloud container that works with the following configurations:
Image : owncloud/server:10.0
Status : healthy
Ports : 0.0.0.0:4090->80/tcp, 0.0.0.0:4093->443/tcp
So far, so good, this container is functional.
Now, I want to add a gitlab container with the following configurations:
Image : gitlab/gitlab-ce:latest
Status : healthy
Ports : 0.0.0.0:2222->22/tcp, 0.0.0.0:8080->80/tcp, 0.0.0.0:4443->443/tcp
The problem is that I can't access the GitLab container on the ports listed above (connection failed).
I tried to install the container in a different way:
By docker run command:
docker run --detach --hostname nsXXXXX.ip-XX-XXX-XX.eu \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'https://nsXXXXX.ip-XX-XXX-XX.eu:4443'; gitlab_rails['lfs_enabled'] = true;" \
  --publish 4443:443 --publish 8080:80 --publish 2222:22 \
  --name gitlab --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
And by docker-compose:
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: 'nsXXXXXXX.ip-XX-XXX-XX.eu'
  privileged: true
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://nsXXXXXXX.ip-XX-XXX-XX.eu:4443/'
      gitlab_rails['gitlab_shell_ssh_port'] = 4182
  ports:
    - '4180:80'
    - '4443:443'
    - '4182:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    - '/srv/gitlab/data:/var/opt/gitlab'
My Docker host is a dedicated Debian Stretch server hosted by Kimsufi.
Do you have any ideas to help me? Thank you very much.

Solved : https://forum.gitlab.com/t/docker-gitlab-container-healthy-but-not-accessible/20042/5
It was necessary to map the host port to the same port as in the external URL inside the container... Beginner's error :)
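In terms of the original docker run command, only the HTTPS publish flag needs to change: when external_url contains a port, Omnibus configures its bundled nginx to listen on that port inside the container, so the host port has to be mapped to 4443 rather than 443. A sketch of the corrected command (all other flags unchanged):
docker run --detach --hostname nsXXXXX.ip-XX-XXX-XX.eu \
  --env GITLAB_OMNIBUS_CONFIG="external_url 'https://nsXXXXX.ip-XX-XXX-XX.eu:4443'; gitlab_rails['lfs_enabled'] = true;" \
  --publish 4443:4443 --publish 8080:80 --publish 2222:22 \
  --name gitlab --restart always \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest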

Related

Install local portainer with docker compose

I'm reading the docker-ce docs for installation. It's simply two lines:
docker volume create portainer_data &&
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
Is there a way to do all this with a docker-compose file, so the Portainer container will be grouped with all the others when executing docker-compose down?
This French tutorial got something working:
portainer:
  container_name: portainer
  image: portainer/portainer-ce:latest # latest might not be the best option
  restart: unless-stopped
  command: -H unix:///var/run/docker.sock
  ports:
    - 9000:9000
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro # not sure about those 3 volumes
    - dataportainer:/data
volumes:
  dataportainer:
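For the question as originally asked, the two lines from the docs also transpose almost mechanically into a compose file. A minimal sketch (the named volume replaces the docker volume create step):
version: '3'
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: always
    ports:
      - 8000:8000
      - 9443:9443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
volumes:
  portainer_data:
With this in place, docker-compose up -d starts Portainer and docker-compose down stops it together with anything else in the same file.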

Prometheus cAdvisor with docker swarm

I have set up cAdvisor for Prometheus as a Docker service on a swarm cluster and need to dynamically monitor the cluster's nodes using active service discovery.
When I start cAdvisor with the docker service command, it works fine and I can successfully discover the cluster nodes dynamically. But when I pass the same parameters in a docker-compose file, I cannot see any nodes. Following is the docker-compose configuration for cAdvisor.
cadvisor:
  image: google/cadvisor
  container_name: cadvisor
  ports:
    - target: 8080
      mode: host
      published: 8040
  network_mode: "host"
  deploy:
    mode: replicated
  command:
    - --docker_only=true
  labels:
    - "prometheus-job=cadvisor"
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
    - /var/run/docker.sock:/var/run/docker.sock:rw
Docker service command:
docker service create --name cadvisor -l prometheus-job=cadvisor \
--mode=global --publish published=8040,target=8080,mode=host \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
--mount type=bind,src=/,dst=/rootfs,ro \
--mount type=bind,src=/var/run,dst=/var/run \
--mount type=bind,src=/sys,dst=/sys,ro \
--mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,ro \
google/cadvisor -docker_only
Any help in this regard will be appreciated.
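No definitive answer here, but lining the two up: the service command runs in global mode and puts the prometheus-job label on the service itself, while the compose file uses mode: replicated and container-level labels; network_mode is also ignored by docker stack deploy. A stack file that mirrors the service command more closely might look like this (untested sketch):
version: "3.7"
services:
  cadvisor:
    image: google/cadvisor
    command: ["-docker_only"]
    ports:
      - target: 8080
        published: 8040
        mode: host
    deploy:
      mode: global                  # matches --mode=global
      labels:
        - "prometheus-job=cadvisor" # matches -l prometheus-job=cadvisor
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /:/rootfs:ro
      - /var/run:/var/run
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro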

Cannot connect to the Docker daemon from within container

I run a container (cAdvisor) that needs to access the Docker Engine of the host.
When I run it as a service with the command line, everything works fine:
docker service create --name cadvisor \
  --network clusternetwork -p 8080:8080 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock,ro \
  --mount type=bind,src=/,dst=/rootfs,ro \
  --mount type=bind,src=/sys,dst=/sys,ro \
  --mount type=bind,src=/var/lib/docker,dst=/var/lib/docker,ro \
  gcr.io/google-containers/cadvisor:latest
But when I transpose the following service to a docker-compose file and run it using docker stack deploy -c myCadvisor-compose.yml cAdvisor, it doesn't work and I get the following error: failed to get docker info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Here is my docker-compose file. Did I forget to transpose something from the above service call?
version: "3.7"
services:
cadvisor:
image: gcr.io/google-containers/cadvisor:latest
ports:
- 8080:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock,ro
- /:/rootfs,ro
- /sys:/sys,ro
- /var/lib/docker:/var/lib/docker,ro
networks:
- clusternetwork
networks:
clusternetwork:
external: true
I have tested your code. The main issue is the ","; you must use a ":" instead. RW or RO doesn't matter. In my case the working one looks like this:
volumes:
  - /:/rootfs:ro
  - /var/run/docker.sock:/var/run/docker.sock:rw
  - /sys:/sys:ro
  - /var/lib/docker/:/var/lib/docker:ro
Cheers Jules
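Applying Jules' fix to the original stack file, the corrected version would read:
version: "3.7"
services:
  cadvisor:
    image: gcr.io/google-containers/cadvisor:latest
    ports:
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:rw
      - /:/rootfs:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
    networks:
      - clusternetwork
networks:
  clusternetwork:
    external: true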

docker run vs docker-compose: one of these things is not like the other

I have an nginx proxy set up with a shell script that looks something like this:
docker run --detach --name nginx-proxy --publish 80:80 --publish 443:443 --volume /etc/nginx/certs \
--volume /etc/nginx/vhost.d --volume /usr/share/nginx/html --volume /var/run/docker.sock:/tmp/docker.sock:ro --restart unless-stopped jwilder/nginx-proxy:alpine
echo proxy up
docker run --detach --name nginx-proxy-letsencrypt --volumes-from nginx-proxy --volume /var/run/docker.sock:/var/run/docker.sock:ro \
--restart unless-stopped jrcs/letsencrypt-nginx-proxy-companion
echo ssl companion up
docker run -d \
  -e VIRTUAL_HOST=[domain] \
  -e "LETSENCRYPT_HOST=[domain]" \
  -e "LETSENCRYPT_EMAIL=[emailaddress]" \
  --name [domain] \
  --expose 80 \
  --restart always \
  -v /code/[domain]:/var/www/html \
  fauria/lamp
echo test site up at [domain]
and this site works properly and functions as expected.
I then stop the web server container and use the following docker-compose.yaml, and it fails with a 502.
version: '3.3'
services:
  lamp:
    restart: always
    image: fauria/lamp
    container_name: [domain]
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]
Why? Aren't they the same? What am I missing?
When you use docker-compose, it creates a Docker network for you in which all of the services in the file can communicate with each other. Since you simply stopped the container and started it with docker-compose, it no longer shares a network with the proxy and companion containers, which is why you get the 502 error. What you need to do is add the other containers to your docker-compose file, and make sure you connect to them using the service name (instead of localhost, use http://service_name:443). Alternatively, you can somehow give the containers in your Docker network access to your localhost, but I'm not sure how to do that. Maybe you need to use 0.0.0.0 instead of 127.0.0.1?
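To illustrate the shared-network idea: instead of putting everything in one file, the lamp service can also join a pre-existing network that the proxy is attached to, declared as external. A sketch (the network name proxy-net is hypothetical and has to match whatever network the nginx-proxy container actually uses):
version: '3.3'
services:
  lamp:
    image: fauria/lamp
    restart: always
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
    networks:
      - proxy-net
networks:
  proxy-net:
    external: true # created beforehand, e.g. with: docker network create proxy-net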
The problem was that I was not connecting my docker-compose service to the default bridge network used by the proxy image.
version: '3.3'
services:
  lamp:
    restart: always
    image: fauria/lamp
    network_mode: bridge
    container_name: [domain]
    expose:
      - "80"
    volumes:
      - /code/[domain]:/var/www/html
    environment:
      - VIRTUAL_HOST=[domain]
      - LETSENCRYPT_HOST=[domain]
      - LETSENCRYPT_EMAIL=[emailaddress]

Running Kudu in Docker with master-to-tserver two-way connection / circular link issues - docker-compose

How can you run Kudu under Docker, which requires two containers (one for the master and one for the tserver), when the two containers need to connect to each other by DNS?
Kudu can be run under Docker using the following commands:
docker run --name kudu-master --hostname kudu-master --detach --publish 8051:8051 --publish 7051:7051 kunickiaj/kudu master
and:
docker run --name kudu-tserver --hostname kudu-tserver --detach --publish 8050:8050 --publish 7050:7050 --link kudu-master --env KUDU_MASTER=kudu-master kunickiaj/kudu tserver
However, the above defines a one-way link, from kudu-tserver to kudu-master, and not vice versa.
For Kudu to function correctly, both kudu-master and kudu-tserver need to be able to connect to each other.
How can the Docker containers be configured, so that the two way link works?
Docker image reference
Similar image reference
The link parameter of docker run is a legacy feature which may eventually be removed (references [1] and [2]).
You can raise multiple Docker containers and connect them to each other using docker-compose.
To get this working, create a folder named kudu and place the following docker-compose.yml file under it:
version: '3'
services:
  kudu-master:
    image: "kunickiaj/kudu"
    hostname: kudu-master
    ports:
      - "8051:8051"
      - "7051:7051"
    command:
      master
    networks:
      kudu_network:
        aliases:
          - kudu-master
  kudu-tserver:
    image: "kunickiaj/kudu"
    hostname: kudu-tserver
    ports:
      - "8050:8050"
      - "7050:7050"
    environment:
      - KUDU_MASTER=kudu-master
    command:
      tserver
    networks:
      kudu_network:
        aliases:
          - kudu-tserver
networks:
  kudu_network:
This file defines two services (kudu-master and kudu-tserver) and a network within which both have aliases that are visible to the rest of the network, i.e. to each other. [File reference]
Then run docker-compose using the following command line:
docker-compose -f "filePathToYourDockerComposeYmlFile" up -d
or, if you want to recreate the Docker containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" up -d --force-recreate
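To verify that the two-way link actually works, one option is to resolve and ping each alias from the opposite container. A sketch, assuming the compose project is named kudu (so the default container names are kudu_kudu-master_1 and kudu_kudu-tserver_1) and that ping is available in the image:
docker exec kudu_kudu-master_1 ping -c 1 kudu-tserver
docker exec kudu_kudu-tserver_1 ping -c 1 kudu-master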
Other useful commands [reference]:
To stop the containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" stop
To remove the containers:
docker-compose -f "filePathToYourDockerComposeYmlFile" rm -f
