I'm trying to parse an sslscan result so that it shows only the IP/host and the certificate section, but I can't get it to work. For now I am just trying some awk, sed, and grep, but it's not working. Maybe I will need some Python with regex to make it work?
Full output:
OpenSSL 1.1.1n-dev xx XXX xxxx
Connected to 10.10.10.10
Testing SSL server 10.10.10.10 on port 443 using SNI name 10.10.10.10
SSL/TLS Protocols:
SSLv2 disabled
SSLv3 disabled
TLSv1.0 disabled
TLSv1.1 disabled
TLSv1.2 enabled
TLSv1.3 disabled
SSL Certificate:
Signature Algorithm: sha256WithRSAEncryption
RSA Key Strength: 2048
Subject: xxx
Issuer: xxx
Not valid before: Apr 7 18:55:56 2020 GMT
Not valid after: Apr 7 18:55:56 2021 GMT
What I need:
Testing SSL server 10.10.10.10 on port 443 using SNI name 10.10.10.10
SSL Certificate:
Signature Algorithm: sha256WithRSAEncryption
RSA Key Strength: 2048
Subject: xxx
Issuer: xxx
Not valid before: Apr 7 18:55:56 2020 GMT
Not valid after: Apr 7 18:55:56 2021 GMT
What I’ve tried so far:
sed -ne '/SSL Certificate/,$ p' ssltest_grep_cert.txt
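A small variation on that sed seems to produce exactly the output above (a sketch, only checked against the sample output shown here): print the "Testing SSL server" line on its own, then everything from "SSL Certificate:" to the end of the file:
sed -n -e '/Testing SSL server/p' -e '/SSL Certificate:/,$p' ssltest_grep_cert.txt
# awk equivalent, relying on the same two marker lines
awk '/Testing SSL server/ {print; next} /SSL Certificate:/ {c=1} c' ssltest_grep_cert.txt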
Related
I'm having issues connecting to any server over IP + TLS, but only from within a Docker container running on the (default) bridge network. I always get OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to W.X.Y.Z. I've tried tcpdump (in the container) and Wireshark (locally on the host) to no avail.
My work partner has the same OS/Docker version and cannot reproduce the issue. I'm at a loss as to how to debug this.
I have tried:
various images (ubuntu and alpine)
various clients (curl and wget)
various TLS versions (1.3 and 1.2)
My container:
FROM ubuntu:latest
RUN apt update && apt upgrade -y && apt install -y curl tcpdump openssl wget
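For reference, the image is built and tagged roughly like this (a sketch; "repro" is the tag used in the run command below):
docker build -t repro .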
The issue:
docker run -it --rm repro /bin/bash
# in the docker bash shell, if I try to curl a regular https hostname, all is well:
root@ba6f8aab182d:/# curl -v https://www.google.com
* Trying 172.217.10.36:443...
* TCP_NODELAY set
* Connected to www.google.com (172.217.10.36) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* ...
# if I try again but this time with the ip instead of the hostname
root@ba6f8aab182d:/# curl -v https://172.217.10.36
* Trying 172.217.10.36:443...
* TCP_NODELAY set
* Connected to 172.217.10.36 (172.217.10.36) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 172.217.10.36:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 172.217.10.36:443
The above 2 calls (curl HOSTNAME then curl MATCHINGIP) work just fine on the host machine.
Extra information in the ubuntu container:
root@ba6f8aab182d:/# openssl version
OpenSSL 1.1.1f 31 Mar 2020
root@ba6f8aab182d:/# curl --version
curl 7.68.0 (x86_64-pc-linux-gnu) libcurl/7.68.0 OpenSSL/1.1.1f zlib/1.2.11 brotli/1.0.7 libidn2/2.2.0 libpsl/0.21.0 (+libidn2/2.2.0) libssh/0.9.3/openssl/zlib nghttp2/1.40.0 librtmp/2.3
Release-Date: 2020-01-08
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS brotli GSS-API HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets
Extra information from the host:
$ docker version
Client: Docker Engine - Community
Cloud integration: 1.0.12
Version: 20.10.5
API version: 1.41
Go version: go1.13.15
Git commit: 55c4c88
Built: Tue Mar 2 20:13:00 2021
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.5
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 363e9a8
Built: Tue Mar 2 20:15:47 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.4
GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
runc:
Version: 1.0.0-rc93
GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
docker-init:
Version: 0.19.0
GitCommit: de40ad0
$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "4b8797bccccd628a6280199eb5c0372cd08d521a88a29243b174718569e9cc7e",
"Created": "2021-04-15T17:24:33.631745871Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"ba6f8aab182daebd4f0b0dc449929585637cd46bc532f61991bfa28c40e09ceb": {
"Name": "flamboyant_zhukovsky",
"EndpointID": "e29407baf0eb8ac069416a8f787794b548d31f05bf9fb6c223fb58c935aff24c",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
EDIT
Trying openssl s_client -cipher ALL -servername 172.217.10.36:443 -connect 172.217.10.36:443 in the container, I get:
root@ba6f8aab182d:/# openssl s_client -cipher ALL -servername 172.217.10.36:443 -connect 172.217.10.36:443
CONNECTED(00000003)
write:errno=0
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 403 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
While on the host I get:
openssl s_client -cipher ALL -servername 172.217.10.36:443 -connect 172.217.10.36:443
CONNECTED(00000003)
depth=2 OU = GlobalSign Root CA - R2, O = GlobalSign, CN = GlobalSign
verify return:1
depth=1 C = US, O = Google Trust Services, CN = GTS CA 1O1
verify return:1
depth=0 C = US, ST = California, L = Mountain View, O = Google LLC, CN = www.google.com
verify return:1
---
Certificate chain
0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
i:/C=US/O=Google Trust Services/CN=GTS CA 1O1
1 s:/C=US/O=Google Trust Services/CN=GTS CA 1O1
i:/OU=GlobalSign Root CA - R2/O=GlobalSign/CN=GlobalSign
---
Server certificate
-----BEGIN CERTIFICATE-----
[... certificate was here ...]
-----END CERTIFICATE-----
subject=/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com
issuer=/C=US/O=Google Trust Services/CN=GTS CA 1O1
---
No client certificate CA names sent
Server Temp Key: ECDH, X25519, 253 bits
---
SSL handshake has read 3206 bytes and written 339 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-CHACHA20-POLY1305
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-CHACHA20-POLY1305
Session-ID: 052A69E409C0705AEAB8A180228C3F8E91A530504EFB06BA9214365F6B99DCAC
Session-ID-ctx:
Master-Key: A4EAC218352BBEAF3A43AB625266304DCF495FFE8A916C638679473AD20DC01B508158B8C0AA39A97003FEC5B8ABD7EC
TLS session ticket lifetime hint: 100800 (seconds)
TLS session ticket:
[... a lot of stuff here ...]
Start Time: 1618514134
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
It seems the issue is with Docker for Mac 3.3.0 and 3.3.1.
The solution was to downgrade to 3.2.2, even though the Docker engine version is the same.
See https://github.com/docker/for-mac/issues/5568
I am trying to set up a docker network consisting of two containers:
MockServer running on 443
Client (fedora) issuing requests to MockServer
I've installed the MockServer CA certificate (X.509), taken from https://github.com/mock-server/mockserver/blob/master/mockserver-core/src/main/resources/org/mockserver/socket/CertificateAuthorityCertificate.pem,
into /etc/pki/ca-trust/source/anchors/key.pem and then ran the update-ca-trust command.
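Concretely, the install step was roughly the following (a sketch; the PEM file had already been downloaded from the link above):
# copy the MockServer CA into the system trust anchors, then rebuild the trust store
cp CertificateAuthorityCertificate.pem /etc/pki/ca-trust/source/anchors/key.pem
update-ca-trust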
Still, when I try to reach MockServer with curl, I receive this:
bash-4.2# curl https://www.hostname.net/simpleFirst --verbose
* About to connect() to www.hostname.net port 443 (#0)
* Trying 172.20.128.2...
* Connected to www.hostname.net (172.20.128.2) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* Server certificate:
* subject: C=UK,ST=England,L=London,O=MockServer,CN=localhost
* start date: Jul 24 14:52:38 2020 GMT
* expire date: Jul 29 14:52:38 2021 GMT
* common name: localhost
* issuer: C=UK,ST=England,L=London,O=MockServer,CN=www.mockserver.com
* NSS error -8179 (SEC_ERROR_UNKNOWN_ISSUER)
* Peer's Certificate issuer is not recognized.
* Closing connection 0
curl: (60) Peer's Certificate issuer is not recognized.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
Any advice or help is very much appreciated. Thanks in advance!
I have this Dockerfile:
$ cat terraform.Dockerfile
FROM alpine
MAINTAINER Carlos Nunez <dev@carlosnunez.me>
RUN wget -O /tmp/terraform.zip https://releases.hashicorp.com/terraform/0.12.9/terraform_0.12.9_linux_amd64.zip && \
unzip /tmp/terraform.zip -d /
RUN apk update && apk add --no-cache ca-certificates curl
USER nobody
When I do
$ docker-compose run terraform /terraform init
I get
$ docker-compose run terraform /terraform init
2020/03/29 08:25:36 [INFO] Terraform version: 0.12.9
2020/03/29 08:25:36 [INFO] Go runtime version: go1.12.9
2020/03/29 08:25:36 [INFO] CLI args: []string{"/terraform", "init"}
2020/03/29 08:25:36 [DEBUG] Attempting to open CLI config file: /.terraformrc
2020/03/29 08:25:36 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2020/03/29 08:25:36 [INFO] CLI command args: []string{"init"}
2020/03/29 08:25:36 [ERR] Checkpoint error: mkdir /.terraform.d: permission denied
Initializing the backend...
2020/03/29 08:25:36 [TRACE] Meta.Backend: no config given or present on disk, so returning nil config
2020/03/29 08:25:36 [TRACE] Meta.Backend: backend has not previously been initialized in this working directory
2020/03/29 08:25:36 [DEBUG] New state was assigned lineage "cff52927-0e9b-8ef4-8aeb-2b176dbc40a6"
2020/03/29 08:25:36 [TRACE] Meta.Backend: using default local state only (no backend configuration, and no existing initialized backend)
2020/03/29 08:25:36 [TRACE] Meta.Backend: instantiated backend of type <nil>
2020/03/29 08:25:36 [DEBUG] checking for provider in "."
2020/03/29 08:25:36 [DEBUG] checking for provider in "/"
2020/03/29 08:25:36 [DEBUG] checking for provisioner in "."
2020/03/29 08:25:36 [DEBUG] checking for provisioner in "/"
2020/03/29 08:25:36 [INFO] Failed to read plugin lock file .terraform/plugins/linux_amd64/lock.json: open .terraform/plugins/linux_amd64/lock.json: no such file or directory
2020/03/29 08:25:36 [TRACE] Meta.Backend: backend <nil> does not support operations, so wrapping it in a local backend
2020/03/29 08:25:36 [TRACE] backend/local: state manager for workspace "default" will:
- read initial snapshot from terraform.tfstate
- write new snapshots to terraform.tfstate
- create any backup at terraform.tfstate.backup
2020/03/29 08:25:36 [TRACE] statemgr.Filesystem: reading initial snapshot from terraform.tfstate
2020/03/29 08:25:36 [TRACE] statemgr.Filesystem: snapshot file has nil snapshot, but that's okay
2020/03/29 08:25:36 [TRACE] statemgr.Filesystem: read nil snapshot
2020/03/29 08:25:36 [DEBUG] checking for provider in "."
2020/03/29 08:25:36 [DEBUG] checking for provider in "/"
2020/03/29 08:25:36 [DEBUG] plugin requirements: "aws"=""
2020/03/29 08:25:36 [DEBUG] Service discovery for registry.terraform.io at https://registry.terraform.io/.well-known/terraform.json
2020/03/29 08:25:36 [TRACE] HTTP client GET request to https://registry.terraform.io/.well-known/terraform.json
Initializing provider plugins...
- Checking for available provider plugins...
2020/03/29 08:25:36 [DEBUG] Failed to request discovery document: Get https://registry.terraform.io/.well-known/terraform.json: x509: certificate signed by unknown authority
Registry service unreachable.
This may indicate a network issue, or an issue with the requested Terraform Registry.
Error: registry service is unreachable, check https://status.hashicorp.com/ for status updates
I saw several links online describing the same or a similar error, where it was solved by installing curl.
I do have curl in the container; I verified it:
$ docker-compose run terraform curl --version
curl 7.67.0 (x86_64-alpine-linux-musl) libcurl/7.67.0 OpenSSL/1.1.1d zlib/1.2.11 nghttp2/1.40.0
Release-Date: 2019-11-06
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS HTTP2 HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL TLS-SRP UnixSockets
I also have the certificates installed:
$ docker-compose run terraform ls -lR /etc/ssl
Here is the output of curl -v:
$ docker-compose run --entrypoint 'curl -v --insecure https://registry.terraform.io/.well-known/terraform.json' terraform
* Trying 151.101.190.49:443...
* TCP_NODELAY set
* Connected to registry.terraform.io (151.101.190.49) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Fastly, Inc.; CN=q2.shared.global.fastly.net
* start date: Apr 1 14:48:12 2020 GMT
* expire date: Aug 29 17:17:53 2020 GMT
* issuer: C=US; ST=CA; O=paloalto networks; OU=IT; CN=decrypt.paloaltonetworks.com
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55cce9444220)
> GET /.well-known/terraform.json HTTP/2
> Host: registry.terraform.io
> user-agent: curl/7.67.0
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 200
< server: Cowboy
< cache-control: stale-if-error=31536000, public, max-age=3600
< content-security-policy: default-src 'self'; script-src 'self' 'unsafe-inline' https://www.google-analytics.com https://cdn.segment.com https://www.googletagmanager.com https://a.optnmstr.com; style-src 'self' 'unsafe-inline' https://maxcdn.bootstrapcdn.com https://fonts.googleapis.com https://p.typekit.net https://use.typekit.net; img-src 'self' data: https: https://www.google-analytics.com; font-src 'self' https://maxcdn.bootstrapcdn.com https://fonts.googleapis.com https://fonts.gstatic.com https://use.typekit.net; connect-src 'self' https://www.google-analytics.com https://api.segment.io https://sentry.io https://api.omappapi.com https://api.opmnstr.com https://api.optmnstr.com
< content-type: application/json
< feature-policy:
< last-modified: Fri, 10 Apr 2020 08:49:04 GMT
< referrer-policy: no-referrer-when-downgrade
< strict-transport-security: max-age=31536000; includeSubDomains; preload
< x-content-type-options: nosniff
< x-frame-options: DENY
< x-xss-protection: 1; mode=block
< via: 1.1 vegur
< via: 1.1 varnish
< accept-ranges: bytes
< date: Sat, 11 Apr 2020 06:07:54 GMT
< via: 1.1 varnish
< age: 63
< x-served-by: cache-dca17758-DCA, cache-pao17436-PAO
< x-cache: HIT, HIT
< x-cache-hits: 1, 1
< vary: Accept-Encoding
< content-length: 62
<
{"modules.v1":"/v1/modules/","providers.v1":"/v1/providers/"}
* Connection #0 to host registry.terraform.io left intact
Run update-ca-certificates after you install the ca-certificates package. Docker layer caching may prevent the install step from re-running, so the CA certificates are likely out of date.
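Applied to the Dockerfile from the question, that would look roughly like this (a sketch; rebuilding with --no-cache forces the layer to be rebuilt so the RUN step actually re-executes):
RUN apk update && apk add --no-cache ca-certificates curl && update-ca-certificates
# rebuild without the layer cache so the step above re-runs
docker-compose build --no-cache terraform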
Running update-ca-certificates didn't work for me. What did work was mapping a copy of the node's certificate file onto the container's certificate file, as described below.
First check whether the node the Docker container runs on can reach Terraform: run "curl -v https://registry.terraform.io/.well-known/terraform.json" on both the Docker node and the container.
If the node's curl works and the container's fails, make a copy of the node's certificate file; its location is shown in the curl output (the CAfile line). Then map that copy onto the container's certificate file (whose location you get from the container's curl output).
If curl fails on both, update the node's certificates first and then try the method above.
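In docker-compose terms the mapping could look roughly like this (a sketch; both paths are assumptions — use the CAfile path shown by curl on your node for the host side, and the CAfile path from the container's curl output for the container side):
services:
  terraform:
    volumes:
      # host CA bundle mounted read-only over the container's CA bundle
      - /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro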
I have a QNAP NAS behind my router with public IP 1.2.3.4. I have a certificate for xxxx.yyyy.cz. The certificate is valid and I am able to reach my NAS over HTTPS. I installed the Docker registry:2.7 on my NAS. This is the container environment configuration (a docker run equivalent is sketched below):
REGISTRY_HTTP_ADDR 0.0.0.0:5443
REGISTRY_HTTP_TLS_CERTIFICATE /certs/client.cert
REGISTRY_HTTP_TLS_KEY /certs/client.key
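For reference, on a plain Docker host this configuration would correspond to roughly the following docker run command (a sketch; the container is actually created through the NAS UI, and the host-side certs path is a placeholder):
docker run -d --name registry -p 5443:5443 \
  -v /path/on/nas/certs:/certs \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:5443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/client.cert \
  -e REGISTRY_HTTP_TLS_KEY=/certs/client.key \
  registry:2.7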
I set up TCP port forwarding from 5443 to 5443. In the certs directory there are 3 files:
/certs # ls -al
total 24
drwxrwxrwx 2 root root 4096 Oct 20 17:02 .
drwxr-xr-x 1 root root 4096 Oct 20 17:01 ..
-rwxrwxrwx 1 root root 1688 Oct 20 16:42 ca.crt
-rwxrwxrwx 1 root root 2060 Oct 20 16:42 client.cert
-rwxrwxrwx 1 root root 1704 Oct 20 16:42 client.key
I am able to get a response from the registry with curl or via a browser:
$ curl --cacert Downloads/certs/ca.crt https://xxxx.yyyy.cz:5443/v2/_catalog ; echo $?
{"repositories":[]}
0
So I am sure the certificates are right and the registry is running correctly. However, in the container logs I keep seeing these messages:
2019/10/20 17:51:10 http: TLS handshake error from 1.2.3.4:58164: tls: first record does not look like a TLS handshake
2019/10/20 17:51:30 http: TLS handshake error from 1.2.3.4:58334: tls: first record does not look like a TLS handshake
2019/10/20 17:51:50 http: TLS handshake error from 1.2.3.4:58498: tls: first record does not look like a TLS handshake
2019/10/20 17:52:11 http: TLS handshake error from 1.2.3.4:58654: tls: first record does not look like a TLS handshake
2019/10/20 17:52:31 http: TLS handshake error from 1.2.3.4:58810: tls: first record does not look like a TLS handshake
2019/10/20 17:52:51 http: TLS handshake error from 1.2.3.4:58982: tls: first record does not look like a TLS handshake
2019/10/20 17:53:12 http: TLS handshake error from 1.2.3.4:59136: tls: first record does not look like a TLS handshake
When I try to push something to my registry, I receive an error:
$ docker push xxxx.yyyy.cz:5443/myimage:latest
The push refers to repository [xxxx.yyyy.cz:5443/myimage]
Get https://xxxx.yyyy.cz:5443/v2/: x509: certificate signed by unknown authority
and in the Docker logs I can see this error message:
2019/10/20 18:43:28 http: TLS handshake error from 1.2.3.4:41632: remote error: tls: bad certificate
I followed this and this set of instructions, but it did not help. After I logged into the container, I checked the sha256 checksums of my cert files; they are okay.
How can I use TLS on my Docker registry, and why does it not accept my certs?
Why does it not work via the docker command?
I had a problem with client.cert. It should also contain ca.crt, as mentioned here in the section USE AN INTERMEDIATE CERTIFICATE:
A certificate issuer may supply you with an intermediate certificate. In this case, you must concatenate your certificate with the intermediate certificate to form a certificate bundle. You can do this using the cat command:
cat domain.crt intermediate-certificates.pem > certs/domain.crt
You can use the certificate bundle just as you use the domain.crt file in the previous example.
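For the files from the question, that means roughly the following (a sketch; client-bundle.cert is just a name I picked, so the original client.cert isn't overwritten while it is being read):
# server certificate first, then the CA certificate
cat client.cert ca.crt > client-bundle.cert
# then point REGISTRY_HTTP_TLS_CERTIFICATE at /certs/client-bundle.cert and restart the registry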
I have a Docker Swarm cluster on AWS which I am trying to load balance using HAProxy. My setup, which sits inside a VPC, looks similar to this:
haproxy_server 10.10.0.10
docker_swarm_master1 10.10.0.12
docker_swarm_master2 10.10.0.13
docker_swarm_worker3 10.10.0.14
My only Tomcat container is currently on master_1 and below is my current HAProxy config file:
global
log 127.0.0.1 local0
log 127.0.0.1 local0 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
maxconn 2000
frontend servers
bind *:80
bind *:8443 ssl crt /etc/haproxy/certs/ssl.pem
default_backend hosts
backend hosts
mode http
balance roundrobin
option httpchk OPTIONS /
option forwardfor
option http-server-close
server swarm 10.10.0.12:8443 check inter 5000
I am able to see the index.html page in the webapps directory when I do the following from the HAProxy server:
curl -k https://10.10.0.12:8443/docs/index.html
However, when I try the curl command below, I get a 503 Service Unavailable error:
curl -k https://10.10.0.10:8443/docs/index.html
Anyone know what I am doing wrong? I have spent half the day on this to no avail.
EDIT
curl -XOPTIONS -vk https://10.10.0.10:8443/docs/index.html
* Trying 10.10.0.10...
* Connected to 10.10.0.10 (10.10.0.10) port 8443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 692 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: *.secreturl.com (does not match '10.10.0.10')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,CN=*.secreturl.com
* start date: Sat, 27 Jun 2016 16:39:39 GMT
* expire date: Tue, 11 Jun 2020 18:09:38 GMT
* issuer: C=US,ST=Arizona,L=Scottsdale,O=GoDaddy.com\, Inc.,OU=http://certs.godaddy.com/repository/,CN=Go Daddy Secure Certificate Authority - G2
* compression: NULL
* ALPN, server did not agree to a protocol
> OPTIONS / HTTP/1.1
> Host: 10.10.0.10:8443
> User-Agent: curl/7.47.0
> Accept: */*
>
* HTTP 1.0, assume close after body
< HTTP/1.0 503 Service Unavailable
< Cache-Control: no-cache
< Connection: close
< Content-Type: text/html
<
<html><body><h1>503 Service Unavailable</h1>
No server is available to handle this request.
</body></html>
* Closing connection 0
curl -XOPTIONS -vk https://10.10.0.12:8443/docs/index.html
* Trying 10.10.0.12...
* Connected to 10.10.0.12 (10.10.0.12) port 8443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 692 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* common name: *.secreturl.com (does not match '10.10.0.10')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,CN=*.secreturl.com
* start date: Sat, 27 Jun 2016 16:39:39 GMT
* expire date: Tue, 11 Jun 2020 18:09:38 GMT
* issuer: C=US,ST=Arizona,L=Scottsdale,O=GoDaddy.com\, Inc.,OU=http://certs.godaddy.com/repository/,CN=Go Daddy Secure Certificate Authority - G2
* compression: NULL
* ALPN, server did not agree to a protocol
> OPTIONS / HTTP/1.1
> Host: 10.10.0.12:8443
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Allow: GET, HEAD, POST, PUT, DELETE, OPTIONS
< Content-Length: 0
< Date: Sat, 24 Dec 2016 18:39:27 GMT
<
* Connection #0 to host 10.10.0.12 left intact
If you get a 503 Service Unavailable, then your health check is failing.
With your configuration, HAProxy will send OPTIONS http://10.10.0.12:8443/ over plain HTTP, which will fail because your backend only accepts HTTPS connections. To fix that, tell HAProxy to use HTTPS:
server swarm 10.10.0.12:8443 check inter 5000 ssl verify none
Note: you can enable the stats page with
listen haproxy_admin
bind 127.0.0.1:22002
mode http
stats enable
stats uri /
That should help you debug further issues.
Edit:
The stats page shows L7STS/404, which is the HTTP status code HAProxy gets from the health check. HAProxy currently checks https://10.10.0.12:8443/ while you test https://10.10.0.12:8443/docs/index.html. Perhaps you should use that URL in your check:
option httpchk OPTIONS /docs/index.html
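Putting both changes together, the backend section would look roughly like this (a sketch based on the config from the question):
backend hosts
    mode http
    balance roundrobin
    option httpchk OPTIONS /docs/index.html
    option forwardfor
    option http-server-close
    server swarm 10.10.0.12:8443 check inter 5000 ssl verify none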