Why is Portainer ignoring my certificate even though I have specified --sslcert and --sslkey?

I have Portainer CE 2.9.2 running in a Docker container. I'm starting it with the --sslcert and --sslkey options to specify my own certificate, but the browser keeps presenting the built-in certificate, self-signed by localhost, instead of mine.
I'm starting Portainer with Ansible's community.docker module. The syntax is nearly identical to docker compose. Here is the task in the Ansible playbook:
- name: Run Portainer
  docker_container:
    image: portainer/portainer-ce
    name: portainer
    hostname: portainer
    state: started
    restart: yes
    restart_policy: unless-stopped
    ports:
      - 8000:8000
      - 9000:9000
      - 9443:9443
    volumes:
      - /opt/docker/portainer/certs:/certs
      - /opt/docker/portainer/data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    command: --sslcert /certs/uno.home.crt --sslkey /certs/uno.home.key
Using docker inspect, I can see that the command-line arguments were picked up and that the /certs bind mount is there.
"Args": [
"--sslcert",
"/certs/uno.home.crt",
"--sslkey",
"/certs/uno.home.key"
]
...
"HostConfig": {
"Binds": [
"/opt/docker/portainer/certs:/certs:rw",
"/opt/docker/portainer/data:/data:rw",
"/var/run/docker.sock:/var/run/docker.sock:rw"
]
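For a quicker look at just these fields, docker inspect also accepts Go templates; a small sketch (portainer being the container name from the task above):
$ docker inspect -f '{{.Args}}' portainer
$ docker inspect -f '{{range .HostConfig.Binds}}{{println .}}{{end}}' portainer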
I can also verify the presence of the certificate files inside the container.
$ docker cp portainer:/certs .
$ ls certs
uno.home.crt uno.home.key
But when I open up a browser on port 9443, I get a certificate signed by localhost, not the cert I have placed in the /opt/docker/portainer/certs directory.
I don't believe the certificate itself is the problem, as I have used the very same cert with an Nginx reverse proxy setup and it works as expected. My best guess is that Portainer is ignoring my certificate in favor of its built-in one, because the certificate displayed by the browser is the same whether or not I pass the --sslcert / --sslkey options. But I can't figure out where I've gone wrong.
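For anyone reproducing this, the certificate the server actually presents can be checked without a browser; a minimal sketch using openssl (uno.home stands in for my hostname, adjust as needed):
$ echo | openssl s_client -connect uno.home:9443 2>/dev/null | openssl x509 -noout -subject -issuer
In my case this reports localhost as both subject and issuer, i.e. the built-in certificate.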
The log file shows no errors:
$ docker logs portainer
level=info msg="2021/11/05 00:12:36 [INFO] [main,compose] [message: binary is missing, falling-back to compose plugin] [error: docker-compose binary not found]"
2021/11/05 00:12:36 server: Reverse tunnelling enabled
2021/11/05 00:12:36 server: Fingerprint 79:94:35:05:71:59:7a:eb:e9:03:a2:61:ad:1a:c5:11
2021/11/05 00:12:36 server: Listening on 0.0.0.0:8000...
level=info msg="2021/11/05 00:12:36 [INFO] [cmd,main] Starting Portainer version 2.9.2"
level=info msg="2021/11/05 00:12:36 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]"
level=info msg="2021/11/05 00:12:36 [DEBUG] [internal,init] [message: start initialization monitor ]"
level=info msg="2021/11/05 00:12:36 [INFO] [http,server] [message: starting HTTPS server on port :9443]"
level=info msg="2021/11/05 00:12:36 [INFO] [http,server] [message: starting HTTP server on port :9000]"
All the examples I've found on the web say the docker compose style configuration should be done like this:
command:
  --ssl
  --sslcert /certs/portainer.crt
  --sslkey /certs/portainer.key
Besides the file names and the --ssl, that's what I've got. I removed the --ssl after seeing a message in the Portainer log saying it is a deprecated option, accepted only for backward compatibility.
I suppose the fact that it ignores my cert could be a bug, though I don't want to file a bug report if it's just user error on my part. Can anyone see where I've gone wrong in the configuration of this thing?

This was indeed a bug, and it was fixed by the Portainer team: https://github.com/portainer/portainer/issues/6021

Related

Deploy GitLab docker image on mac: Cannot see gitlab.example.com

I am trying to run the GitLab Docker image locally on macOS Big Sur, following the steps from the documentation: https://docs.gitlab.com/ee/install/docker.html. I can never reach https://gitlab.example.com locally. I tried both GitLab EE and CE, and different versions of the images, including latest. I also tried both a plain docker run and docker-compose, and updated Docker Desktop to the latest version, 4.10.1. I went through the logs, and in all cases I see the same error in the Gitaly log below:
~/gitlab/logs/gitaly/current:
{"level":"warning","msg":"[core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {\n \"Addr\": \"/var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.0\",\n \"ServerName\": \"/var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.0\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Type\": 0,\n \"Metadata\": null\n}. Err: connection error: desc = \"transport: Error while dialing dial unix /var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.0: connect: no such file or directory\"","pid":344,"system":"system","time":"2022-07-26T09:57:38.226Z"}
{"level":"warning","msg":"[core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\n \"Addr\": \"/var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.1\",\n \"ServerName\": \"/var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.1\",\n \"Attributes\": null,\n \"BalancerAttributes\": null,\n \"Type\": 0,\n \"Metadata\": null\n}. Err: connection error: desc = \"transport: Error while dialing dial unix /var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.1: connect: no such file or directory\"","pid":344,"system":"system","time":"2022-07-26T09:57:38.228Z"}
{"level":"warning","msg":"spawned","supervisor.args":["bundle","exec","bin/ruby-cd","/var/opt/gitlab/gitaly","/opt/gitlab/embedded/service/gitaly-ruby/bin/gitaly-ruby","344","/var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.0"],"supervisor.name":"gitaly-ruby.0","supervisor.pid":400,"time":"2022-07-26T09:57:38.228Z"}
{"address":"/var/opt/gitlab/gitaly/gitaly.socket","level":"info","msg":"listening at unix address","time":"2022-07-26T09:57:38.234Z"}
{"level":"warning","msg":"spawned","supervisor.args":["bundle","exec","bin/ruby-cd","/var/opt/gitlab/gitaly","/opt/gitlab/embedded/service/gitaly-ruby/bin/gitaly-ruby","344","/var/opt/gitlab/gitaly/run/gitaly-344/sock.d/ruby.1"],"supervisor.name":"gitaly-ruby.1","supervisor.pid":401,"time":"2022-07-26T09:57:38.234Z"}
But I am not sure that this is the cause; I also checked the other logs and they seem error-free to me. Of course, I will be happy to provide more logs if you want.
This is my setup (Docker Desktop 4.10.1), run with docker-compose; my 'docker ps -a' status follows below:
web:
  image: 'gitlab/gitlab-ce:latest'
  container_name: 'gitlab'
  restart: unless-stopped
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'https://gitlab.example.com'
  ports:
    - '80:80'
    - '443:443'
    - '22:22'
  volumes:
    - '$GITLAB_HOME/config:/etc/gitlab'
    - '$GITLAB_HOME/logs:/var/log/gitlab'
    - '$GITLAB_HOME/data:/var/opt/gitlab'
The list of active Docker containers says the container is healthy, but I actually see nothing in the browser via https://gitlab.example.com.
sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b0fb6252c7f3 gitlab/gitlab-ce:latest "/assets/wrapper" 3 days ago Up 3 minutes (healthy) 0.0.0.0:22->22/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp gitlab
I also tried adding the Docker host IP to my hosts file, but with no success:
192.168.31.182 host.docker.internal
192.168.31.182 gateway.docker.internal
192.168.31.182 gitlab.example.com
I still see nothing in the browser at https://gitlab.example.com.
Adding more info to @sytech's comment: "That's just a placeholder URL... Did you replace gitlab.example.com with your actual GitLab instance URL?"
You either need to add an /etc/hosts entry routing gitlab.example.com to localhost, or modify the configuration to use localhost.
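For the first option, the entry would look something like this on the machine running the browser (a sketch, using the loopback address since the ports are published on the host):
127.0.0.1   gitlab.example.com
For the second option, change the compose file to: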
hostname: 'localhost'
environment:
  GITLAB_OMNIBUS_CONFIG: |
    external_url 'localhost'
These settings tell the nginx proxy which URL should be forwarded to GitLab. That's useful when you're running multiple sites from one server. If you are just running it locally, you should just use localhost.
I also tried to add docker host IP to my hosts file - but no success:
You are forwarding the ports from the Docker host, so just use localhost. It probably didn't work because the Docker IP you're looking at belongs to a different Docker network.
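A quick way to verify the published ports independently of any name resolution, as a sketch assuming the 443:443 mapping from the compose file above:
$ curl -kI https://localhost/
If the port mapping works, this returns an HTTP response (-k skips certificate verification).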

Connect the Cassandra container to application web container failed - Error: 202 Connecting to Node

So, I created two Docker images and I want to connect one to the other with Docker Compose. The first image is Cassandra 3.11.11 (from the official Docker Hub) and the other I created myself, with Tomcat 9.0.54 and my Spring Boot application.
I ran the docker-compose.yml below to connect the two containers, where cassandra:latest is the Cassandra image and centos7-tomcat9-myapp is my web app's image.
version: '3'
services:
  casandra:
    image: cassandra:latest
  myapp:
    image: centos7-tomcat9-myapp
    depends_on:
      - casandra
    environment:
      - CASSANDRA_HOST=cassandra
I ran this command line to start the web app's image: docker run -it --rm --name fe3c2f120e01 -p 8888:8080 centos7-tomcat9-app
In the console log, Spring Boot shows me the error below. It happens because the myapp container cannot connect to the Cassandra container.
2021-10-15 15:12:14.240  WARN 1 --- [s0-admin-1] c.d.o.d.i.c.control.ControlConnection : [s0] Error connecting to Node(endPoint=127.0.0.1:9042, hostId=null, hashCode=47889c49), trying next node (ConnectionInitException: [s0|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (io.netty.channel.StacklessClosedChannelException))
What am I doing wrong?
EDIT
This is the nodetool status about the cassandra's image:
[root@GDBDEV04 cassandradb]# docker exec 552d359d177e nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load       Tokens  Owns (effective)  Host ID                               Rack
UN  172.18.0.3  84.76 KiB  16      100.0%            685b6e0a-13c2-4d41-ba99-f3b0fa94477c  rack1
EDIT 2
I need to connect the Cassandra DB image with the web application image, which is different from connecting two microservices. I tried to change the 127.0.0.0 (inside cassandra.yaml) to 0.0.0.0 (only to test) and the error persists. I think something is missing in my docker-compose.yml for sure. However, I don't know what.
Finally I found the error. In my case, I needed to fix the docker-compose.yml file by adding the Cassandra and Tomcat ports, and in my application.properties (the Spring Boot config file) I changed the cluster's name.
Docker-compose.yml:
version: '3'
services:
  cassandra:
    image: cassandra:latest
    ports:
      - "9044:9042"
  myapp:
    image: centos7-tomcat9-myapp
    ports:
      - "8086:8080"
    depends_on:
      - cassandra
    environment:
      - CASSANDRA_HOST=cassandra
application.properties:
# CASSANDRA (CassandraProperties)
cassandra.cluster = Test Cluster
cassandra.contactpoints=${CASSANDRA_HOST}
This question helped me resolve my problem: Accessing docker container mysql databases
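As a quick sanity check for a setup like this, you can confirm from the host that both published ports answer. A sketch, assuming the port mappings above:
$ nc -vz localhost 9044   # Cassandra CQL port as published by the compose file
$ nc -vz localhost 8086   # Tomcat port as published by the compose file
Note that inside the Compose network the app reaches Cassandra by its service name (cassandra) on the container port 9042; the published host ports are only for access from outside.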

Included container doesn't work with docker compose

I have a Kafka/Zookeeper container and a Divolte container in https://github.com/divolte/docker-divolte/blob/master/docker-compose.yml, which start and work correctly with
docker-compose up -d --build
I want to add the HDFS container https://hub.docker.com/r/mdouchement/hdfs/, which starts and works correctly with
docker run -p 22022:22 -p 8020:8020 -p 50010:50010 -p 50020:50020 -p 50070:50070 -p 50075:50075 -it mdouchement/hdfs
But after adding this code to the yml:
hdfs:
  image: mdouchement/hdfs
  environment:
    DIVOLTE_KAFKA_BROKER_LIST: kafka:9092
  ports:
    - "22022:22"
    - "8020:8020"
    - "50010:50010"
    - "50020:50020"
    - "50070:50070"
    - "50075:50075"
  depends_on:
    - kafka
The web UI at http://localhost:50070 and the namenode at http://localhost:8020/ do not answer. Could you help me add the new container? Which of the HDFS ports do I have to use as the source connection port?
The logs of the HDFS container are:
2020-02-21T15:11:47.613270635Z Starting OpenBSD Secure Shell server: sshd.
2020-02-21T15:11:50.440130986Z Starting namenodes on [localhost]
2020-02-21T15:11:54.616344960Z localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
2020-02-21T15:11:54.616369660Z localhost: starting namenode, logging to /opt/hadoop/logs/hadoop-root-namenode-278b399bc998.out
2020-02-21T15:11:59.328993612Z localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
2020-02-21T15:11:59.329016212Z localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-root-datanode-278b399bc998.out
2020-02-21T15:12:06.078269195Z Starting secondary namenodes [0.0.0.0]
2020-02-21T15:12:10.837364362Z 0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
2020-02-21T15:12:10.839375064Z 0.0.0.0: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-root-secondarynamenode-278b399bc998.out
2020-02-21T15:12:17.249040842Z starting portmap, logging to /opt/hadoop/logs/hadoop--portmap-278b399bc998.out
2020-02-21T15:12:18.253954832Z DEPRECATED: Use of this script to execute hdfs command is deprecated.
2020-02-21T15:12:18.253993233Z Instead use the hdfs command for it.
2020-02-21T15:12:18.254002633Z
2020-02-21T15:12:21.277829129Z starting nfs3, logging to /opt/hadoop/logs/hadoop--nfs3-278b399bc998.out
2020-02-21T15:12:22.284864146Z DEPRECATED: Use of this script to execute hdfs command is deprecated.
2020-02-21T15:12:22.284883446Z Instead use the hdfs command for it.
2020-02-21T15:12:22.284887146Z
Port description:
Ports
Portmap -> 111
NFS -> 2049
HDFS namenode -> 8020 (hdfs://localhost:8020)
HDFS datanode -> 50010
HDFS datanode (ipc) -> 50020
HDFS Web browser -> 50070
HDFS datanode (http) -> 50075
HDFS secondary namenode -> 50090
SSH -> 22
The docker-compose ps output is:
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
divolte-streamsets-quickstart_divolte_1 /opt/divolte/start.sh Up 0.0.0.0:8290->8290/tcp
divolte-streamsets-quickstart_hdfs_1 /bin/sh -c service ssh sta ... Exit 0
divolte-streamsets-quickstart_kafka_1 supervisord -n Up 2181/tcp, 9092/tcp, 9093/tcp, 9094/tcp, 9095/tcp, 9096/tcp, 9097/tcp, 9098/tcp, 9099/tcp
divolte-streamsets-quickstart_streamsets_1 /docker-entrypoint.sh dc -exec Up 0.0.0.0:18630->18630/tcp
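Worth noting: in this output the hdfs service is in state "Exit 0", i.e. its container has already stopped, so none of its ports can answer regardless of the mapping. Its output after exit can still be inspected, for example with:
$ docker-compose logs hdfs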

Portainer not loading properly

I have downloaded the Portainer image and created the container on the Docker manager node, using the command below.
docker run -d -p 61010:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
But after some time the container exits. Also, when I access Portainer on the above port, it just says Portainer is loading and nothing happens. Please find below the logs for Portainer:
2019/10/16 16:20:58 server: Reverse tunnelling enabled
2019/10/16 16:20:58 server: Fingerprint 43:68:57:37:e4:3f:f7:98:bd:52:13:39:c6:6d:24:c9
2019/10/16 16:20:58 server: Listening on 0.0.0.0:8000...
2019/10/16 16:20:58 Starting Portainer 1.22.1 on :9000
2019/10/16 16:20:58 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]
2019/10/16 16:25:58 No administrator account was created after 5 min. Shutting down the Portainer instance for security reasons.
2019/10/16 16:30:12 Templates already registered inside the database. Skipping template import.
2019/10/16 16:30:12 server: Reverse tunnelling enabled
2019/10/16 16:30:12 server: Fingerprint 43:68:57:37:e4:3f:f7:98:bd:52:13:39:c6:6d:24:c9
2019/10/16 16:30:12 server: Listening on 0.0.0.0:8000...
2019/10/16 16:30:12 Starting Portainer 1.22.1 on :9000
2019/10/16 16:30:12 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message: starting tunnel management process]
2019/10/16 16:35:12 No administrator account was created after 5 min. Shutting down the Portainer instance for security reasons.
I am not sure whether Portainer is running on 61010. Also, do I need to install the Agent for this to work? Please help me resolve this.
Follow the docs and it should work:
Quick start: If you are running Linux, deploying Portainer is as simple as:
$ docker volume create portainer_data
$ docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Voilà, you can now use Portainer by accessing port 9000 on the server where Portainer is running.
Once you access localhost:9000 in the browser, you will be required to create an admin account; afterwards you will see the Portainer UI.
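The logs you posted also show why the container keeps stopping: Portainer shuts itself down if no administrator account is created within 5 minutes. A restart brings it back long enough to create one. A sketch, assuming the container is named portainer as in the command above:
$ docker restart portainer
# then promptly open the published port in a browser and create the admin account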

Traefik query alive/dead backends

We have a Traefik installation on Docker Swarm with several services balanced through Traefik. Each service has at least two backends balanced with wrr and a healthcheck.
Is there a way (API, REST endpoint, logfile, whatever) to find out which frontends have dead backends? By dead I mean backends that Traefik's healthcheck has found ineligible for balancing.
What is the best practice for this?
I see two ways of getting that info:
Traefik log
Look at the Traefik log, which provides traces for healthchecks:
time="2019-03-05T22:19:35Z" level=debug msg="Refreshing health check for backend: backend-web-so-55004614",
time="2019-03-05T22:19:35Z" level=warning msg="Health check still failing. Backend: \"backend-web-so-55004614\" URL: \"http://192.168.80.2:80\" Reason: received error status code: 404",
time="2019-03-05T22:19:36Z" level=debug msg="Refreshing health check for backend: backend-web-so-55004614",
time="2019-03-05T22:19:36Z" level=warning msg="Health check still failing. Backend: \"backend-web-so-55004614\" URL: \"http://192.168.80.2:80\" Reason: received error status code: 404",
Traefik /metrics
If it is not convenient to parse the Traefik logs, you could activate Traefik's Prometheus metrics (Prometheus is the default metrics backend):
docker run -d -v /var/run/docker.sock:/var/run/docker.sock -p "80:80" -p "8080:8080" traefik --api --docker
Then you can make an HTTP query to http://localhost:8080/metrics and look for the lines containing _backend_server_up. Each of these lines tells you that the backend is up and healthy. If a backend is missing, that means it is unhealthy or stopped:
traefik_backend_server_up{backend="backend-robots",url="http://172.23.0.3:80"} 1
traefik_backend_server_up{backend="backend-smtp-ui",url="http://172.25.0.3:8025"} 1
traefik_backend_server_up{backend="backend-varnish-admin",url="http://172.23.0.8:6085"} 1
traefik_backend_server_up{backend="backend-varnish-http",url="http://172.23.0.8:6081"} 1
traefik_backend_server_up{backend="backend-web-apps",url="http://172.21.0.2:80"} 1
traefik_backend_server_up{backend="backend-web-report",url="http://172.19.0.6:80"} 1
You could have a script querying this URL, or you could install Prometheus, which has alerting rules.
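A minimal sketch of such a script, assuming the metrics endpoint above; it just extracts the relevant gauge lines for further processing:
$ curl -s http://localhost:8080/metrics | grep '_backend_server_up'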
