How to health check an HAProxy docker container

The HAProxy image comes with a really compact Debian base, without ping, wget, curl, or other commands to verify with. How can a docker-compose health check be used for it, to verify that HAProxy is up and running?

You would configure a health check in the haproxy.cfg which you pass to the docker container. The health check portion could look like:
frontend frontend_name
    ...
    use_backend healthcheck if { path_beg /health }

backend healthcheck
    server disabled-server 127.0.0.1:1 disabled
    errorfile 503 /path/to/template.html
And the health check template file:
HTTP/1.0 200 OK
Cache-Control: no-cache
Connection: close
Content-Type: text/plain

up
This works by routing requests from the frontend to a dedicated health-check backend on whatever path you like (/health in this example). The backend has no usable servers, so it would normally respond with a 503; the errorfile directive lets you return a custom error response instead, in this case a 200.
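From the host (where curl is available), a quick sanity check of that endpoint might look like this; the published port 80 is an assumption and depends on your bind and port mapping:

    curl -s http://localhost:80/health
    # should print the template body: up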

Something like this may work:
echo "" > /dev/tcp/${HOSTNAME}/${PORT} || exit 1
which uses bash's /dev/tcp built-in to test a connection to a port you know HAProxy should be listening on, and will fail if it cannot connect.
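Wired into docker-compose, that check might look like the following sketch; the image tag and port 80 are assumptions for illustration (the check relies on bash, which the Debian-based HAProxy images include):

    services:
      haproxy:
        image: haproxy:2.4   # assumed tag; use whatever version you run
        volumes:
          - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
        healthcheck:
          # bash /dev/tcp check: fails if nothing is listening on port 80
          test: ["CMD", "bash", "-c", "echo '' > /dev/tcp/127.0.0.1/80 || exit 1"]
          interval: 30s
          timeout: 5s
          retries: 3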

Related

GitLab can't reach PlantUml in docker container

So I have a GitLab EE server (Omnibus) installed and set up on Ubuntu 20.04.
Next, following the official documentation on GitLab PlantUML integration, I started PlantUML in a docker container with the following command:
docker run -d --name plantuml -p 8084:8080 plantuml/plantuml-server:tomcat
Next, I also configured the /etc/gitlab/gitlab.rb file and added the following line for redirection, as my GitLab server is using SSL:
nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://plantuml:8080/; \n}\n"
In the GitLab server GUI, in the admin panel under Settings -> General, I expanded PlantUML and set the value of PlantUML URL in two ways:
1st approach:
https://HOSTNAME:8084/-/plantuml
Then, when trying to reach it in the browser at this address (https://HOSTNAME:8084/-/plantuml), I get:
This site can’t provide a secure connection.
HOSTNAME sent an invalid response.
ERR_SSL_PROTOCOL_ERROR
2nd approach:
I also tried a different value in Settings -> General -> PlantUML -> PlantUML URL:
https://HOSTNAME/-/plantuml
Then, when trying to reach it in the browser at this address (https://HOSTNAME/-/plantuml), I get:
502
Whoops, GitLab is taking too much time to respond
In both cases when I trace logs with gitlab-ctl tail I get the same errors:
[crit] *901 SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking, client: CLIENT_IP, server: 0.0.0.0:443
[error] 1123593#0: *4 connect() failed (113: No route to host) while connecting to upstream
My question is: which of the two ways above is correct for accessing PlantUML with this configuration, and is there any configuration I am missing?
I believe the issue is that you are running PlantUML in a docker container and then trying to reach it from GitLab (on localhost) by container name.
To check whether that is the issue, please change
proxy_pass http://plantuml:8080/
to
proxy_pass http://localhost:8080/
and try again with the first approach.
Your second approach seems to be missing the container port in the URL.
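For reference, the adjusted gitlab.rb line might then look like this; note that with the -p 8084:8080 mapping shown above, the container's port 8080 is published on host port 8084, so the host-local address may need to be localhost:8084 rather than localhost:8080 (an assumption based on the port mapping, adjust to your setup):

    nginx['custom_gitlab_server_config'] = "location /-/plantuml/ { \n proxy_cache off; \n proxy_pass http://localhost:8084/; \n}\n"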

NGINX localhost upstream configuration

I am running a multi-service app orchestrated by docker-compose, and for testing purposes I want to run it on localhost (macOS).
With this NGINX configuration:
upstream fe {
    server fe:3000;
}

upstream be {
    server be:4000;
}

server {
    server_name localhost;
    listen 80;

    location / {
        proxy_pass http://fe;
    }

    location /api/ {
        proxy_pass http://be;
    }
}
I am able to get FE in browser from http://localhost/ and BE from http://localhost/api/ as expected.
The issue is that the FE refuses to communicate with the BE, failing with this error:
Error: Network error: request to http://localhost/api/graphql failed, reason: connect ECONNREFUSED 127.0.0.1:80
(It's NEXT.JS FE with NODE/EXPRESS/APOLLO-GQL BE)
Note: I need the BE upstream because I need to download files from emails directly via URL.
Am I missing some NGINX headers, DNS configuration etc.?
Thanks in advance!
The initial call to Apollo happens server side in Next.js (inside the FE container), which means the BE has to be addressed over the docker network; it cannot be localhost, because for that call localhost is the FE container itself. In my case the call goes to process.env.BE, which is set to http://be:4000.
For other calls, however (such as sending a login request from the browser), the docker network is unknown, since the browser calls from localhost and has no access to the docker network; those calls have to be addressed to localhost/api/graphql.
I was able to achieve this with just a small change in my FE httpLink (the Apollo connection function):
uri: isBrowser ? `/api/graphql` : `${process.env.BE}/api/graphql`
NGINX config is the same as above.
NOTE: This only needs to be handled in the local environment; on a remote server it works fine without this 'hack' and the address is always domain.com/api/graphql.
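For context, a fuller sketch of that Apollo client setup might look like the following; the isBrowser detection and the BE env var are assumptions based on the snippet above:

    // apollo.js - sketch of the connection function described above
    import { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client';

    // on the server (Next.js SSR) there is no window object
    const isBrowser = typeof window !== 'undefined';

    const client = new ApolloClient({
      cache: new InMemoryCache(),
      link: new HttpLink({
        // browser: relative URL resolved through NGINX on localhost;
        // server side: direct docker-network address, e.g. http://be:4000
        uri: isBrowser ? '/api/graphql' : `${process.env.BE}/api/graphql`,
      }),
    });

    export default client;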

docker nginx-proxy "Bad Gateway"

I have what I think is exactly the setup prescribed in the documentation. Easy peasy, development-only, no SSL ... But I'm getting "Bad Gateway."
docker exec ... cat /etc/nginx/conf.d/default.conf
... seems to correctly identify the internal IP address of the other container of interest ... which means that scanning for the VIRTUAL_HOST env var obviously worked:
upstream my_site.local {
    [...]
    server 172.16.238.5:80;  # CORRECT!
}
When I do docker logs app_server I see ... silence. The server isn't being contacted.
When I do docker logs nginx_proxy I see this:
failed (111: connection refused) while connecting to upstream, client 172.16.238.1 [...] upstream: "172.16.238.5:80/"
The other container specifies EXPOSE 80 ... so why is the connection being refused, and who is refusing it?
Well, as I said above, I realized the error of my ways and did this:
VIRTUAL_PROTO=fastcgi
VIRTUAL_ROOT=/var/www
... and within the Dockerfile of the app container I apparently did need to EXPOSE 9000. (This being the default port used by php-fpm for FastCGI purposes.)
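In docker-compose terms, the relevant part of the app service might look like this sketch; the service name, build context, and document root are assumptions for illustration:

    services:
      app_server:
        build: .
        environment:
          - VIRTUAL_HOST=my_site.local
          - VIRTUAL_PROTO=fastcgi   # have nginx-proxy speak FastCGI instead of HTTP
          - VIRTUAL_ROOT=/var/www   # document root for the FastCGI scripts
        expose:
          - "9000"                  # php-fpm's default FastCGI port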

Running Boot2docker behind proxy, getting FATA[0020] Forbidden for any interaction with Docker hub

I followed the instructions to set up a proxy for boot2docker, and am getting the following FATA errors. Any clue?
FATA[0020] Get https://index.docker.io/v1/repositories/library/busybox/images: Forbidden - while trying to pull images
FATA[0020] Error response from daemon: Server Error: Post https://index.docker.io/v1/users/: Forbidden while trying to login
FATA[0000] Error response from daemon: Get https://index.docker.io/v1/search?q=ubuntu: Forbidden - while searching for images
Updated to include the result of curl -v https://index.docker.io:443:
* Rebuilt URL to: https://index.docker.io:443/
* About to connect() to proxy 34363bd0dd54 port 8099 (#0)
* Trying 192.168.59.3...
* Adding handle: conn: 0x9adbad8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x9adbad8) send_pipe: 1, recv_pipe: 0
* Connected to 34363bd0dd54 (192.168.59.3) port 8099 (#0)
* Establish HTTP proxy tunnel to index.docker.io:443
> CONNECT index.docker.io:443 HTTP/1.1
> Host: index.docker.io:443
> User-Agent: curl/7.33.0
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 403 Forbidden
< Server: squid/3.4.9
< Mime-Version: 1.0
< Date: Fri, 29 May 2015 17:56:22 GMT
< Content-Type: text/html
< Content-Length: 3151
< X-Squid-Error: ERR_ACCESS_DENIED 0
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from localhost
< Via: 1.1 localhost (squid/3.4.9)
< Connection: keep-alive
<
* Received HTTP code 403 from proxy after CONNECT
* Connection #0 to host 34363bd0dd54 left intact
curl: (56) Received HTTP code 403 from proxy after CONNECT
Looks like it is a proxy issue. I am running a proxy server on the host machine, accessing it by its host name in the boot2docker VM's http_proxy and https_proxy, but curl host_proxy:port works with no issues.
I was experiencing the same issue, where I would get a 403 error when trying to get lxc-docker to install from get.docker.com (it failed because it could not complete apt-get update). In my case, I have the following setup:
VM Provider: VirtualBox (Ubuntu 14.04 (Trusty))
Environment: Vagrant
Provisioner: chef-zero (via Vagrant)
PROXY: At first I had forgotten about this, but I am running apt-cacher-ng on my host machine (my Macbook Pro) to keep data downloads to a minimum when I'm running apt-get install on Vagrant VM's. In a nutshell, apt-cacher-ng sets up an apt mirror on my Mac for Ubuntu VM's to pull packages from.
I realized that apt-cacher-ng doesn't support SSL repositories (https), but does support normal http repositories. Since the Docker repository uses https, I had to find a workaround.
Before I fixed anything, I had the following in my /etc/apt/apt.conf.d/10mirror file in my Ubuntu VM's (localip is the IP address of my Mac which runs the apt-cacher-ng server):
Acquire::http { Proxy "http://#{localip}:3142"; };
The above line means my Ubuntu VM's were getting packages through apt-cacher-ng, but failing when a repository used https. By adding the following line beneath that line, things then started to work normally:
Acquire::https { Proxy "false"; };
At this point, the contents of /etc/apt/apt.conf.d/10mirror are as follows:
Acquire::http { Proxy "http://#{localip}:3142"; };
Acquire::https { Proxy "false"; };
Now apt-get update succeeds, Docker installs successfully, and I'm back to normal testing. In case you are using Vagrant to set up the 10mirror file, here are the lines I have in my Vagrantfile which do the job:
oak.vm.provision "shell", inline: "echo 'Acquire::http { Proxy \"http://#{localip}:3142\"; };' > /etc/apt/apt.conf.d/10mirror"
oak.vm.provision "shell", inline: "echo 'Acquire::https { Proxy \"false\"; };' >> /etc/apt/apt.conf.d/10mirror";

Logging in to private docker registry v2 behind haproxy

I am trying to set up a new Docker Registry (v2) with HAProxy. For the Docker Registry I am using the image from the docker hub and running it with docker run -d -p 5000:5000 -v /path/to/registry:/tmp/registry registry:2.0.1. And this is a subset of my HAProxy configuration:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

userlist auth_list
    group docker_registry users root
    user root password ***PASSWORD***

backend docker-registry
    server 127.0.0.1:5000_localhost 127.0.0.1:5000 cookie 127.0.0.1:5000_localhost

frontend shared-frontend
    mode http
    bind *:80
    bind *:443 ssl crt *** CERT FILES ***
    option accept-invalid-http-request
    acl domain_d.mydomain.com hdr(host) -i d.mydomain.com
    acl auth_docker_registry_root http_auth(auth_list) root
    redirect scheme https if !{ ssl_fc } domain_d.mydomain.com
    http-request auth realm Registry if !auth_docker_registry_root { ssl_fc } domain_d.mydomain.com
    use_backend docker-registry if domain_d.mydomain.com
The important things to note are that I am using HAProxy to do SSL termination and HTTP auth rather than the registry.
My issue occurs when I try to login to the new registry. If I run docker login https://d.mydomain.com/v2/ then enter the user root and password I get the following error messages:
Docker Client:
FATA[0009] Error response from daemon: invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
Docker Daemon:
ERRO[0057] Handler for POST /auth returned error: invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
ERRO[0057] HTTP Error: statusCode=500 invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
So I try adding --insecure-registry d.mydomain.com to:
/etc/default/docker with DOCKER_OPTS= -H unix:///var/run/docker.sock --insecure-registry d.mydomain.com
the arguments of starting docker manually with docker -d --insecure-registry d.mydomain.com
Neither of these, nor any other fix I have found online, works. Each time, after restarting docker, attempting to log in again gives me the same error message.
A few other things I have tried:
In a browser going to d.mydomain.com results in a 404
In a browser going to d.mydomain.com/v2/ results in: {}
Replacing https://d.mydomain.com/v2/ in the login command with each of the following, with no success:
http://d.mydomain.com/v2/
d.mydomain.com/v2/
http://d.mydomain.com/
d.mydomain.com/
This setup, with HAProxy doing the SSL termination and HTTP auth, has worked in the past using the first version of the registry and older versions of docker. So has anything changed in Docker registry v2? Does this still work? If nothing has changed, why doesn't the --insecure-registry flag do anything anymore?
Also, I have been working on getting this working for a while so I may have forgotten all the things I have tried. If there is something that may work, let me know and I will give it a try.
Thanks,
JamesStewy
Edit
This edit has been moved to the answer below
I have got it working. So here is my new config:
haproxy.cfg
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

userlist auth_list
    group docker_registry users root
    user root password ***PASSWORD***

backend docker-registry
    server 127.0.0.1:5000_localhost 127.0.0.1:5000 cookie 127.0.0.1:5000_localhost

backend docker-registry-auth
    errorfile 503 /path/to/registry_auth.http

frontend shared-frontend
    mode http
    bind *:80
    bind *:443 ssl crt *** CERT FILES ***
    option accept-invalid-http-request
    acl domain_d.mydomain.com hdr(host) -i d.mydomain.com
    redirect scheme https if !{ ssl_fc } domain_d.mydomain.com
    acl auth_docker_registry_root http_auth(auth_list) root
    use_backend docker-registry-auth if !auth_docker_registry_root { ssl_fc } domain_d.mydomain.com
    rsprep ^Location:\ http://(.*) Location:\ https://\1
    use_backend docker-registry if domain_d.mydomain.com
registry_auth.http
HTTP/1.0 401 Unauthorized
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Docker-Distribution-Api-Version: registry/2.0
WWW-Authenticate: Basic realm="Registry"

<html><body><h1>401 Unauthorized</h1>
You need a valid user and password to access this content.
</body></html>
The difference is that the http-request auth line has been replaced with use_backend docker-registry-auth. The docker-registry-auth backend has no servers, so it will always produce a 503 error, but its 503 errorfile has been changed to registry_auth.http. In registry_auth.http the status code is overridden to 401, the WWW-Authenticate header is set to Basic realm="Registry", the basic HAProxy 401 error page is supplied as the body and, most importantly, the Docker-Distribution-Api-Version header is set to registry/2.0.
As a result, this hacky work-around behaves exactly like the old http-request auth line, except that the custom Docker-Distribution-Api-Version header is now set. This allows the setup to pass the check which starts on line 236 of https://github.com/docker/docker/blob/v1.7.0/registry/endpoint.go.
So now when I run docker login d.mydomain.com, login is successful and my credentials are added to .docker/config.json.
The second issue was that I couldn't push to the new registry even though the login succeeded. This was fixed by adding the rsprep line in the frontend, which rewrites the Location header (if present) to turn http:// into https://.
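As a side note, rsprep is deprecated in newer HAProxy versions (and removed in the 2.x series); on those versions an equivalent rewrite would be something like the following sketch, untested against this exact setup:

    http-response replace-header Location ^http://(.*)$ https://\1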
I also found this bit of documentation for future reference.
As a small clarification to the previous answer: I had to change this line:
WWW-Authenticate: Basic realm="Registry"
To this:
WWW-Authenticate: Basic realm="Registry realm"
and then everything worked...
BTW, hashing the password can be done using mkpasswd (part of the whois deb package).
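For example, something like this (a sketch; the -m flag selects the hashing method, and HAProxy's userlist password directive expects an encrypted hash, as opposed to insecure-password for plaintext):

    # generate a SHA-512 crypt hash for the userlist entry
    mkpasswd -m sha-512
    # then in haproxy.cfg:
    #   user root password $6$<generated-hash>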
