Binding (sub)domain to traefik dashboard has 503 error - docker

I'm running Traefik with Docker and binding a (sub)domain to the dashboard, but I get a 503 error when I request it.
Traefik is a modern reverse proxy, and I run it with Docker using the command below. To see the dashboard without a direct URL I first published port 8080 to 8080, and that way I can see the dashboard. In the dashboard the route rule Host:monitor.monitor.my_domain shows up as the frontend and http://172.20.0.3:8080 as the backend, but when I try to access http://monitor.my_domain I get a 503 error.
docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $PWD/traefik.toml:/traefik.toml \
-v $PWD/acme.json:/acme.json \
-p 80:80 \
-p 443:443 \
-l traefik.frontend.rule=Host:monitor.my_domain \
-l traefik.port=8080 \
--network web \
--name traefik \
traefik:1.7.6-alpine

You did not post your traefik.toml, and overall your question is very difficult to parse. Without more information it seems unlikely that anyone will be able to help.
If you make sure that monitor.my_domain resolves to your traefik instance and then run:
docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 80:80 \
-p 443:443 \
-l traefik.frontend.rule=Host:monitor.my_domain \
-l traefik.port=8080 \
--name traefik \
traefik:1.7.6-alpine --api --docker
This will work. Navigate in your browser to http://monitor.my_domain (which you made sure resolves to Traefik) and you will see the dashboard.
My advice is to try that first and make sure it works. If it does, find out what is different in your own setup by tweaking this example or your own, step by step, until either this one breaks or yours works.
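For reference, a minimal traefik.toml equivalent to the --api --docker flags used above might look like the following (Traefik 1.7 syntax; a sketch only, adjust entrypoints and provider options to your setup):
# traefik.toml — minimal sketch equivalent to --api --docker
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

# enables the dashboard/API (the --api flag)
[api]

# enables the Docker provider (the --docker flag)
[docker]
  endpoint = "unix:///var/run/docker.sock"
  watch = true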

Related

starting a NIFI container with my template/flow automatically loaded

I want to create a NiFi container and pass it a template/flow to be loaded automatically when the container is created (without human intervention).
I couldn't find any volumes or environment variables related to this.
I tried the following (suggested by ChatGPT):
docker run -d -p 8443:8443 \
-v /path/to/templates:/templates \
-e TEMPLATE_FILE_PATH=/templates/template.xml \
--name nifi apache/nifi
and
docker run -d -p 8443:8443 \
-e NIFI_INIT_FLOW_FILE=/Users/l1/Desktop/kaleidoo/devops/files/git_aliases/flow.json \
-v /Users/l1/Desktop/kaleidoo/docker/test-envs/flow.json:/flow.json:ro \
--name nifi apache/nifi
Neither of them worked, and I couldn't find anything about NIFI_INIT_FLOW_FILE or TEMPLATE_FILE_PATH in the documentation.
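As far as I can tell, neither NIFI_INIT_FLOW_FILE nor TEMPLATE_FILE_PATH is an environment variable recognized by the apache/nifi image. One workaround sometimes used (an assumption about the image's default layout, not a documented feature) is to pre-seed the canvas by mounting an existing flow archive over the path NiFi loads at startup:
# Sketch only: assumes the apache/nifi image's default NiFi home
# (/opt/nifi/nifi-current) and a flow.xml.gz exported from another
# NiFi instance. Adjust the host path to your environment.
docker run -d -p 8443:8443 \
  -v /path/to/flow.xml.gz:/opt/nifi/nifi-current/conf/flow.xml.gz \
  --name nifi apache/nifi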

How to access Gitlab's metrics (Prometheus and Grafana) from Docker installation?

I installed GitLab using a Docker image on an Ubuntu virtual machine running on a Mac M1, as follows (https://hub.docker.com/r/yrzr/gitlab-ce-arm64v8):
docker run \
--detach \
--restart always \
--name gitlab-ce \
--privileged \
--memory 4096M \
--publish 22:22 \
--publish 80:80 \
--publish 443:443 \
--hostname 127.0.0.1 \
--env GITLAB_OMNIBUS_CONFIG=" \
nginx['redirect_http_to_https'] = true; " \
--volume /srv/gitlab-ce/conf:/etc/gitlab:z \
--volume /srv/gitlab-ce/logs:/var/log/gitlab:z \
--volume /srv/gitlab-ce/data:/var/opt/gitlab:z \
yrzr/gitlab-ce-arm64v8:latest
Everything seems to be working correctly on localhost, except that I can't access the metrics; I get an "unable to connect" error on:
Prometheus: http://localhost:9090
Grafana: http://localhost/-/grafana
I tried enabling metrics as described in the documentation, and ran docker exec -it gitlab-ce gitlab-ctl reconfigure.
What am I missing?
Thanks
When GitLab uses localhost, this resolves to localhost inside the container, not on the host (your Mac).
There are two options to solve this:
Use host.docker.internal instead of localhost (this resolves to the internal IP address used by the host) - see this doc for more info
Configure your container to use the host network by adding --network=host to the docker run command, which lets your container and the host share the same network stack (however, this is not supported by Docker Desktop for Mac according to this)
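For the second option, the docker run command from the question would drop the --publish flags (they have no effect with host networking) and add --network host, roughly like this (other flags from the original command kept as needed):
docker run \
  --detach \
  --restart always \
  --name gitlab-ce \
  --privileged \
  --memory 4096M \
  --network host \
  --volume /srv/gitlab-ce/conf:/etc/gitlab:z \
  --volume /srv/gitlab-ce/logs:/var/log/gitlab:z \
  --volume /srv/gitlab-ce/data:/var/opt/gitlab:z \
  yrzr/gitlab-ce-arm64v8:latest
As noted above, this only works where the Docker host itself runs Linux; Docker Desktop for Mac does not support host networking.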

How to connect my Kibana to ElasticSearch in the docker run command?

I'm just trying to learn how to set up Kibana and Elasticsearch using plain docker commands (i.e. not using Docker Compose).
Below are the commands I run:
docker network create es-net
docker run -d --name es-cluster \
--net es-net -p 9200:9200 \
-e "xpack.security.enabled=false" \
-e "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:7.2.0
docker run -d --net es-net -p 5601:5601 \
-e ELASTICSEARCH_URL=http://es-cluster:9200 \
docker.elastic.co/kibana/kibana:7.2.0
Somehow Kibana does not connect to Elasticsearch when I open http://localhost:5601/; it always shows the message "Kibana server is not ready yet".
I followed the answer in Kibana on Docker cannot connect to Elasticsearch to make sure ELASTICSEARCH_URL is set correctly, but it still doesn't come up. Is there anything I'm missing?
Note: tested with curl 0.0.0.0:9200; Elasticsearch is already running.
It looks like, since I'm on Kibana 7.2.0, the variable has changed from ELASTICSEARCH_URL to ELASTICSEARCH_HOSTS,
as per https://www.elastic.co/guide/en/kibana/current/docker.html:
docker run -d --net es-net -p 5601:5601 \
-e ELASTICSEARCH_HOSTS=http://es-cluster:9200 \
docker.elastic.co/kibana/kibana:7.2.0
With this in place, everything should work.
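If you want to sanity-check both sides from the host before opening the browser (assuming the ports published in the commands above):
# Elasticsearch should answer on the published port
curl http://localhost:9200
# Kibana's status endpoint reports whether it has connected to Elasticsearch
curl http://localhost:5601/api/status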

Running a local kibana in a container

I am trying to use the Kibana console with my local Elasticsearch (container).
In the Elasticsearch documentation I see:
docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.2
which lets me run the community edition with a quick one-liner.
Looking at the Kibana documentation, I only see:
docker pull docker.elastic.co/kibana/kibana:6.2.2
Replacing pull with run, it looks for X-Pack (I think that means it is not the community edition) and fails to find Elasticsearch:
Unable to revive connection: http://elasticsearch:9200/
Is there a one-liner that can easily set up Kibana locally in a container?
All I need is to work with the console (the Sense replacement).
If you want to use Kibana with Elasticsearch locally in Docker, they have to communicate with each other. To do so, according to the docs, you need to link the containers.
You can give a name to the elasticsearch container with --name:
docker run \
--name elasticsearch_container \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
And then link this container to kibana:
docker run \
--name kibana \
--publish 5601:5601 \
--link elasticsearch_container:elasticsearch_alias \
--env "ELASTICSEARCH_URL=http://elasticsearch_alias:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Port 5601 is published locally so you can access it from your browser. You can check in the monitoring section that Elasticsearch's health is green.
EDIT (24/03/2020):
The --link option may eventually be removed and is now a legacy Docker feature.
The idiomatic way to reproduce the same setup is to first create a user-defined bridge network:
docker network create elasticsearch-kibana
And then create the containers inside it:
Version 6
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_URL=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:6.2.2
Version 7
As pointed out above, the environment variable changed in version 7; it is now ELASTICSEARCH_HOSTS.
docker run \
--name elasticsearch_container \
--network elasticsearch-kibana \
--publish 9200:9200 \
--publish 9300:9300 \
--env "discovery.type=single-node" \
docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run \
--name kibana \
--publish 5601:5601 \
--network elasticsearch-kibana \
--env "ELASTICSEARCH_HOSTS=http://elasticsearch_container:9200" \
docker.elastic.co/kibana/kibana:7.6.2
User-defined bridges provide automatic DNS resolution between containers, which means containers can reach each other by their container names.
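You can confirm that resolution from inside the Kibana container, for example (assuming curl is available inside the image):
# should return the Elasticsearch banner JSON, resolved via the bridge's DNS
docker exec kibana curl -s http://elasticsearch_container:9200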
It is convenient to use docker-compose as well. For instance, the file below, stored in your home directory, lets you start Kibana with a single command: docker-compose up -d
# docker-compose.yml
version: "2"
services:
  kibana:
    image: "docker.elastic.co/kibana/kibana:6.2.2"
    container_name: "kibana"
    environment:
      - "ELASTICSEARCH_URL=http://<elasticsearch-endpoint>:9200"
      - "XPACK_GRAPH_ENABLED=false"
      - "XPACK_ML_ENABLED=false"
      - "XPACK_REPORTING_ENABLED=false"
      - "XPACK_SECURITY_ENABLED=false"
      - "XPACK_WATCHER_ENABLED=false"
    ports:
      - "5601:5601"
    restart: "unless-stopped"
In addition, the Kibana service can be part of your project in a development environment (if docker-compose is used).
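If you take the docker-compose route, Elasticsearch can live in the same file and compose's default network provides the name resolution; roughly like this for the 6.2.2 images used above (a sketch, not a production config):
# docker-compose.yml (sketch: Elasticsearch and Kibana together)
version: "2"
services:
  elasticsearch:
    image: "docker.elastic.co/elasticsearch/elasticsearch:6.2.2"
    environment:
      - "discovery.type=single-node"
    ports:
      - "9200:9200"
  kibana:
    image: "docker.elastic.co/kibana/kibana:6.2.2"
    environment:
      # the service name "elasticsearch" resolves on the compose network
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch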

Connect Nginx Docker container to 16 workers

I have an Nginx Docker container and 16 load-balanced web servers, each exposing a port on the host machine (8081-8096):
docker run -d \
--restart always \
--name "web.${name}" \
-v /srv/web/web-bundle:/bundle \
-p "${port}":80 \
kadirahq/meteord:base
My Nginx container was previously linked to the single web container, before I tried to scale:
docker run -d \
--name nginx \
--link web.1:web.1 \
-v /srv/nginx:/etc/nginx \
-v /srv/nginx/html:/usr/share/nginx/html \
-p 80:80 \
-p 443:443 \
nginx
Nginx upstream config:
upstream web {
ip_hash;
server 127.0.0.1:8081;
server 127.0.0.1:8082;
server 127.0.0.1:8083;
# ... you get the point
}
I need this Nginx container to be able to hit 127.0.0.1:8081-8096, but it doesn't seem to allow this. I don't want to add 16 --link flags! That seems wrong.
What is the proper way to do this?
With nginx you have no way to spread requests across a range of ports without specifying each one.
I recommend trying this out: https://github.com/jwilder/nginx-proxy
That is an nginx container that can automatically discover any other containers that need to be proxied. It reads a special environment variable from the other containers in order to know how to proxy them.
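The quick-start pattern from that project's README looks roughly like this (VIRTUAL_HOST is the hostname you want routed to each backend; web.example.com is just a placeholder):
# the proxy watches the Docker socket and generates its nginx config automatically
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# each backend container declares the hostname it should serve
docker run -d -e VIRTUAL_HOST=web.example.com kadirahq/meteord:base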
Use --network instead of --link. As long as you put all the containers on the same network, you don't need to link them; --link is being deprecated.
docker network create mynet
docker run --network mynet ........
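With nginx and all the web containers on the same user-defined network, the upstream block can address each backend by its container name instead of a host port; something like this (a sketch: web.1 … web.16 are the container names from the question, and 80 is the port the app listens on inside each container):
upstream web {
    ip_hash;
    server web.1:80;
    server web.2:80;
    server web.3:80;
    # ... one entry per container, up to web.16
}
The -p "${port}":80 host mappings are then no longer needed for nginx to reach the workers.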
