Connect Nginx Docker container to 16 workers - docker

I have an Nginx Docker container, and 16 load-balanced web servers, each exposing a port on the host machine (8081-8096):
docker run -d \
--restart always \
--name "web.${name}" \
-v /srv/web/web-bundle:/bundle \
-p "${port}":80 \
kadirahq/meteord:base
My Nginx container was previously linked to the single web container, before I tried to scale:
docker run -d \
--name nginx \
--link web.1:web.1 \
-v /srv/nginx:/etc/nginx \
-v /srv/nginx/html:/usr/share/nginx/html \
-p 80:80 \
-p 443:443 \
nginx
Nginx upstream config:
upstream web {
    ip_hash;
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
    # ... you get the point
}
I need this Nginx container to be able to hit 127.0.0.1:8081-8096 on the host, but that doesn't work: inside the container, 127.0.0.1 is the container's own loopback, not the host's. I don't want to add 16 --link flags! That seems wrong.
What is the proper way to do this?

With nginx you have no way to spread requests across a range of ports without listing each one.
I recommend trying this out: https://github.com/jwilder/nginx-proxy
It is an nginx container that can automatically discover any other containers that need to be proxied. It reads special environment variables from those containers in order to know how to proxy them.
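A minimal sketch of its documented usage, with web.example.com as a placeholder hostname:
docker run -d -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
# any container started with VIRTUAL_HOST set is picked up automatically
docker run -d -e VIRTUAL_HOST=web.example.com kadirahq/meteord:base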
Use --network instead of --link. As long as you put all the containers on the same network, you don't need to link them; --link is deprecated.
docker network create mynet
docker run --network mynet ........
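For example, a minimal sketch of that approach, reusing the container names web.1 through web.16 from the question:
docker network create mynet
# the workers no longer need host port mappings; nginx reaches them by name
docker run -d \
--restart always \
--name web.1 \
--network mynet \
-v /srv/web/web-bundle:/bundle \
kadirahq/meteord:base
# ... repeat for web.2 through web.16 ...
docker run -d \
--name nginx \
--network mynet \
-v /srv/nginx:/etc/nginx \
-v /srv/nginx/html:/usr/share/nginx/html \
-p 80:80 \
-p 443:443 \
nginx
The upstream block can then point at the containers by name on their internal port (server web.1:80; instead of server 127.0.0.1:8081;), since Docker's embedded DNS resolves container names on user-defined networks. If the dotted names give the resolver any trouble, plainer names like web1 avoid the doubt.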

Related

How to access GitLab's metrics (Prometheus and Grafana) from a Docker installation?

I installed GitLab using a Docker image on an Ubuntu virtual machine running on a Mac M1 as follows (https://hub.docker.com/r/yrzr/gitlab-ce-arm64v8):
docker run \
--detach \
--restart always \
--name gitlab-ce \
--privileged \
--memory 4096M \
--publish 22:22 \
--publish 80:80 \
--publish 443:443 \
--hostname 127.0.0.1 \
--env GITLAB_OMNIBUS_CONFIG=" \
nginx['redirect_http_to_https'] = true; " \
--volume /srv/gitlab-ce/conf:/etc/gitlab:z \
--volume /srv/gitlab-ce/logs:/var/log/gitlab:z \
--volume /srv/gitlab-ce/data:/var/opt/gitlab:z \
yrzr/gitlab-ce-arm64v8:latest
All seems to be working correctly on localhost, except that I can't access the metrics; I get an "unable to connect" error on:
Prometheus: http://localhost:9090
Grafana: http://localhost/-/grafana
I tried enabling metrics as described in the documentation, and ran docker exec -it gitlab-ce gitlab-ctl reconfigure.
What am I missing?
Thanks
When GitLab uses localhost, this resolves to localhost inside the container, not on the host (so your Mac).
There are two options to solve this:
Use host.docker.internal instead of localhost (this resolves to the internal IP address used by the host) - see this doc for more info
Configure your container to use the host network by adding --network=host to the docker run command, which will let your container and host share the same network stack (however, this is not supported by Docker Desktop for Mac according to this)
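Also note that the docker run in the question never publishes the Prometheus port. A sketch of the extra flags (assuming the default port 9090; prometheus['listen_address'] is the documented omnibus setting for binding beyond localhost):
--publish 9090:9090 \
--env GITLAB_OMNIBUS_CONFIG=" \
nginx['redirect_http_to_https'] = true; \
prometheus['listen_address'] = '0.0.0.0:9090'; " \
followed by docker exec -it gitlab-ce gitlab-ctl reconfigure as in the question.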

Running gitlab and jenkins with https in docker swarm

Context: I want to run GitLab and Jenkins in Docker Swarm with HTTPS. I succeeded in making them run on the default ports (8080 for Jenkins and 80 for GitLab, over HTTP).
My problem is that when I try to run, for example, GitLab on port 443, I get nothing, even though I published my container on that port and modified the external URL in the gitlab.rb file (I've been following the official docs).
And for Jenkins it's even harder to get HTTPS working: it means either adding a reverse proxy or an SSL certificate.
sudo docker service create -u 0 --name jenkins_stack \
--network devops-net --replicas 1 --publish 8443:8443 \
--publish 50000:50000 --mount src=jenkins-volume,dst=/var/jenkins_home \
--hostname jenkins jenkins/jenkins

sudo docker service create -u 0 --name gitlabstack \
--network devops-net --replicas 1 --publish 80:80 --publish 443:443 \
--mount src=gitlab-data,dst=/var/opt/gitlab \
--mount src=gitlab-logs,dst=/var/log/gitlab \
--mount src=gitlab-config,dst=/etc/gitlab \
--hostname gitlab gitlab/gitlab-ce
Above you will find the docker lines to create the services.
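From the docs, my understanding is that the GitLab side also needs something like this in gitlab.rb (a sketch; gitlab.example.com is a placeholder, and the certificates are assumed to sit in the gitlab-config volume under ssl/):
external_url 'https://gitlab.example.com'
# omnibus looks for /etc/gitlab/ssl/<hostname>.crt and .key by default
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.example.com.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.example.com.key"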
I'd really appreciate it, if someone can share any video or tutorial on how to run gitlab/jenkins on docker swarm with https.
I'm sorry if I've been unclear.

How can I access GitLab through Docker

I'm trying to run GitLab in a Docker container on my web server.
I can reach my server at the address 192.168.80.xxx.
Here is what I've done:
Get the GitLab image from Docker Hub:
docker pull gitlab/gitlab-ce
Then run:
docker run --detach \
--hostname 192.168.80.xxx \
--publish 443:443 --publish 8081:80 --publish 2289:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce
Since port 80 (Apache) and port 22 are already in use, I've mapped them to 8081 and 2289 instead.
Now I go to 192.168.80.xxx:8081 in my browser, but nothing seems to work.
I was wondering: OK, what if I try to reach my container through its IP address?
So I get back its address with the following line :
docker inspect 4170434ef181
And I tried http://172.17.0.2:8081 - nothing either...
So, how can I use this container? How can I access it?
For information, my document root path is defined as DocumentRoot "/var/apache/www" in my httpd.conf file.
Cheers
--hostname 192.168.80.xxx -> usually you set a name here, not an IP address. And this hostname is (by default) only valid inside your Docker network.
If you are publishing ports 443 and 8081, you should be able to access your server at 192.168.80.xxx:8081 or 192.168.80.xxx:443 - unless some additional firewall is in the way.
Is your container running (docker ps)?
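For example, a quick sanity check from the host (a sketch, assuming the container name gitlab from the question; GitLab can take a few minutes to finish booting):
# is the container up?
docker ps --filter name=gitlab
# does the published port answer?
curl -I http://192.168.80.xxx:8081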

Connection refused when trying to run Kong API Gateway using a docker container

I am trying to run Kong API Gateway in a Docker container. I followed the instructions on hub.docker.com/_/kong/ and started the Cassandra database and Kong.
I have Cassandra running using the below command:
docker run -d --name kong-database \
-p 9042:9042 \
cassandra:3
and Kong running using the below command:
docker run -d --name kong \
--link kong-database:kong-database \
-e "KONG_DATABASE=cassandra" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 7946:7946 \
-p 7946:7946/udp \
kong:latest
Both containers are running. (I don't have enough reputation to embed pictures here right now, so please see a screenshot here: my container list.)
However when I do:
$ curl http://127.0.0.1:8001
I get this:
curl: (7) Failed to connect to 127.0.0.1 port 8001: Connection refused
Can anyone let me know what the possible reason is?
OK - check the logs of the kong container for any errors, if there are any (docker logs kong).
If there aren't any errors, please check whether any process is actually listening on the port (sudo netstat -anp | grep 8001). That will help us know whether the container's port 8001 was properly bound to the server's port 8001, and on which IP the port is listening.
If the port is bound but connections are still refused, it might be an issue with running Docker on the bridge network, which can fail to bind the port to localhost. Try re-running the container on the host network (--net host). Then it should work fine.
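A sketch of that retry, with two caveats not in the original commands: --link cannot be combined with host networking, and since the Cassandra container already publishes 9042 on the host, Kong can reach it at 127.0.0.1:
docker rm -f kong
docker run -d --name kong \
--net host \
-e "KONG_DATABASE=cassandra" \
-e "KONG_CASSANDRA_CONTACT_POINTS=127.0.0.1" \
kong:latest
# the -p flags are dropped: published ports are ignored with --net host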

How do I set up a simple dockerized RabbitMQ cluster?

I've been doing a bit of reading up about setting up a dockerized RabbitMQ cluster, and Google turns up all sorts of results for doing so on the same machine.
I am trying to set up a RabbitMQ cluster across multiple machines.
I have three machines with the names dockerswarmmodemaster1, dockerswarmmodemaster2 and dockerswarmmodemaster3.
On the first machine (dockerswarmmodemaster1), I issue the following command:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 -p 15672:15672 \
-p 25672:25672 --hostname dockerswarmmodemaster1 --name roger_rabbit \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' rabbitmq:3-management
Now this starts up RabbitMQ just fine, and I can go to the admin page on 15672 and see that it is working as expected.
I then SSH to my second machine (dockerswarmmodemaster2) and this is the bit I am stuck on. I have been trying variations on the following command:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 \
-p 15672:15672 -p 25672:25672 --name jessica_rabbit -e CLUSTERED=true \
-e CLUSTER_WITH=rabbit@dockerswarmmodemaster1 \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
No matter what I try, the web page on both RabbitMQ machines says that there is no cluster under the 'cluster links' section. I haven't tried involving the third machine yet.
So - some more info:
The machine names are resolvable by DNS.
I have tried using the --net=host switch in the docker run command on both machines; no change.
I am not using docker swarm or swarm mode.
I do not have docker compose installed. I'd prefer not to use it if possible.
Is there any way of doing this from the docker run command, or will I have to download the RabbitMQ admin CLI and join the cluster manually?
You can use the plugin https://github.com/aweber/rabbitmq-autocluster to create a RabbitMQ Docker cluster.
The plugin uses etcd2 or Consul for service discovery, so you don't need to use the rabbitmqctl command line.
I used it with Docker Swarm, but that is not required.
The official container does not seem to support the environment variables CLUSTERED and CLUSTER_WITH. It supports only the variables listed in the RabbitMQ Configuration documentation.
According to the official Clustering Guide, one possible solution is a configuration file. Thus, you can just provide your own configuration to the container.
The modified default configuration in your case would look like this:
[
    { rabbit, [
        { loopback_users, [ ] },
        { cluster_nodes, {['rabbit@dockerswarmmodemaster1'], disc }}
    ]}
].
Save this snippet to, for example, /home/user/rmq/rabbitmq.config.
Hint: if you want to see the node in the management console, you need to add another file /home/user/rmq/enabled_plugins containing only the line
[rabbitmq_management].
After that, your command will look like this:
docker run -d -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 15671:15671 \
-p 15672:15672 -p 25672:25672 --name jessica_rabbit \
-v /home/user/rmq:/etc/rabbitmq \
-e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
PS: You may also need to consider setting the environment variable RABBITMQ_USE_LONGNAME.
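For example, that would be one more flag on the docker run above (a sketch):
-e RABBITMQ_USE_LONGNAME=true \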
In order to create a cluster, all RabbitMQ nodes that will form the cluster must be reachable from one another by node name (hostname).
You need to specify a hostname for each Docker container with the --hostname option, and add /etc/hosts entries for all the other containers; you can do this with the --add-host option or by manually editing the /etc/hosts file.
So, here is an example of a three-node RabbitMQ cluster using Docker containers (the rabbitmq:3-management image).
First, create a network so that you can assign IPs: docker network create --subnet=172.18.0.0/16 mynet1. We are going to have the following:
3 docker containers named rab1con, rab2con and rab3con
their IPs will be 172.18.0.11, 172.18.0.12 and 172.18.0.13 respectively
their hostnames will be rab1, rab2 and rab3 respectively
all of them must share the same Erlang cookie
Spin up the first one
docker run -d --net mynet1 --ip 172.18.0.11 --hostname rab1 \
--add-host rab2:172.18.0.12 --add-host rab3:172.18.0.13 \
--name rab1con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
second one
docker run -d --net mynet1 --ip 172.18.0.12 --hostname rab2 \
--add-host rab1:172.18.0.11 --add-host rab3:172.18.0.13 \
--name rab2con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
last one
docker run -d --net mynet1 --ip 172.18.0.13 --hostname rab3 \
--add-host rab2:172.18.0.12 --add-host rab1:172.18.0.11 \
--name rab3con -e RABBITMQ_ERLANG_COOKIE='secret cookie here' \
rabbitmq:3-management
Then, in container rab2con, do
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rab1
rabbitmqctl start_app
and the same in rab3con and that's it.
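From the host, those steps can be run through docker exec, and the result verified from any node (a sketch using the container names above):
docker exec rab2con rabbitmqctl stop_app
docker exec rab2con rabbitmqctl join_cluster rabbit@rab1
docker exec rab2con rabbitmqctl start_app
# repeat for rab3con, then check that all three nodes are listed
docker exec rab1con rabbitmqctl cluster_status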
