I ran a WSO2 service using Docker, followed all the instructions in WSO2's documentation, and everything worked fine, including its own Hello World API. Next, I created my own Hello World API on my local machine on port :8082. I set both the Production Endpoint and the Sandbox Endpoint to http://localhost:8082, but every time I try to test the API it gives the following error:
Failed to fetch.
Possible Reasons:
CORS
Network Failure
URL scheme must be "http" or "https" for CORS request.
Update:
I replaced localhost with my machine's IP and used netcat on my local machine and telnet in the container to check the connection, and it was okay. I don't know what my next step is. Is there anything specific I should consider for developing my Hello World API that I'm missing?
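For reference, this is the kind of connectivity check described above (a sketch; wso2am is a placeholder for your actual container name, and <HOST_IP> is the machine IP mentioned above):
# on the host: listen on the backend port (BSD netcat syntax)
nc -l 8082
# inside the API Manager container: verify it can reach the host
docker exec -it wso2am telnet <HOST_IP> 8082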
I assume both your API Gateway and backend API run with the same hostname, which causes an SSL failure. I recommend you create a new keystore for your backend, export its certificate, and import that certificate into the trust store of the API Gateway:
keytool -keystore backend.jks -genkey -alias backend
keytool -export -keystore backend.jks -alias backend -file backend.crt
keytool -import -file backend.crt -alias backend -keystore <APIM_HOME>/repository/resources/security/client-truststore.jks
This enables the Gateway to trust the backend server (host) and lets the two communicate seamlessly.
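To confirm the import, you can list the trust store contents (a hedged note: wso2carbon is the stock WSO2 trust store password; yours may differ):
keytool -list -alias backend -keystore <APIM_HOME>/repository/resources/security/client-truststore.jks -storepass wso2carbon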
Thanks.
I have configured an Azure load balancer that points to my public IP over HTTP, and I can reach my website and it works fine.
Now I want a routing rule that redirects application traffic from the HTTP protocol to the secure HTTPS protocol with the help of an Azure Application Gateway.
Can we simply add our HTTPS services to the health probe after installing an SSL certificate on the load balancer? I don't have much knowledge of networking, so any help is highly appreciated.
I tried to reproduce the same setup in my environment and it works successfully.
You can configure your public IP address for HTTPS using an Azure Application Gateway. First, try to create a self-signed certificate like below:
New-SelfSignedCertificate `
-certstorelocation cert:\localmachine\my `
-dnsname www.contoso.com
# To create the .pfx file
$pwd = ConvertTo-SecureString -String "Azure" -Force -AsPlainText
Export-PfxCertificate `
-cert cert:\localMachine\my\<YOURTHUMBPRINT> `
-FilePath c:\appgwcert.pfx `
-Password $pwd
Then try to create an application gateway; you can use your existing public IP address.
In the routing rule, set the frontend IP to public, set the protocol to HTTPS on port 443, and upload the .pfx certificate.
Then create a listener on port 8080 with HTTP, add the same routing rule, and verify that the backend status is healthy.
When I tested the HTTP protocol with the IP address, it redirected to HTTPS successfully.
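For completeness, the same HTTP-to-HTTPS redirect can also be scripted with the Azure CLI rather than the portal (a minimal sketch; the resource group, gateway, and listener names are placeholders, and it assumes the HTTPS listener already exists):
# create a permanent redirect that sends HTTP traffic to the existing HTTPS listener
az network application-gateway redirect-config create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name httpToHttps \
  --type Permanent \
  --target-listener httpsListener \
  --include-path true \
  --include-query-string true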
I am trying to secure client-daemon communication on Windows by creating a certificate authority (CA).
The lab setup shown in the image above is used in the example, but the book says my lab will be different, and I don't know how to find IP addresses like 10.0.0.10, 10.0.0.11 and 10.0.0.12. I know the node names are docker.exe (client) and dockerd.exe (daemon), but what are their IP addresses?
The default installation puts them on the same host and configures them to communicate over a local IPC socket: //./pipe/docker_engine
It's also possible to configure them to communicate over the network. By default, network communication occurs over an unsecured HTTP socket on port 2375/tcp
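For example (a sketch, not from the book), a client selects either endpoint explicitly with -H:
# over the local IPC pipe (the default on Windows)
docker -H npipe:////./pipe/docker_engine version
# over the unsecured TCP socket, if the daemon has been configured to listen on it
docker -H tcp://127.0.0.1:2375 version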
I don't know which part of this is relevant or helpful, but I need to know the IP addresses of the Docker daemon and client.
In response to the comments, I am also adding this:
I am following along with the book Docker Deep Dive and I am trying to secure client daemon communication. I am creating a file called extfile.cnf which has the following inside:
subjectAltName = DNS:node3,IP=10.0.0.12
extendedKeyUsage = serverAuth
I need to know what to put instead of 10.0.0.12.
When I put localhost, 127.0.0.1, 127.0.0.1:2375, tcp://127.0.0.1:2375, or anything else, and then run the following command:
openssl x509 -req -days 730 -sha256 -in daemon.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out daemon-cert.pem -extfile extfile.cnf
The error is:
x509: Error on line 1 of config file "extfile.cnf"
7C0A0000:error:07000065:configuration file routines:def_load_bio:missing equal sign:crypto\conf\conf_def.c:511:HERE--> ■sline 1
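A hedged observation on that error: the mangled '■s' in openssl's output usually means the config file was saved in a UTF-16/BOM encoding (which happens when the file is created with PowerShell's > redirection), which openssl cannot parse. Note also that the standard openssl SAN syntax is IP: with a colon, not IP=. A sketch of recreating the file as plain ASCII from Git Bash or WSL:
printf 'subjectAltName = DNS:node3,IP:10.0.0.12\nextendedKeyUsage = serverAuth\n' > extfile.cnf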
First post here so apologies if my etiquette isn't quite on point!
I'm having a bit of trouble understanding how certificate authorities work between separate machines, specifically when it comes to MQTT and the Mosquitto broker.
I have generated my server and client certificates using this link and got them working absolutely fine on localhost. For the server I used the common name RPi-Host, i.e. the hostname, and for the clients I used 'localhost'. An example of the commands I use to generate a certificate for a client is given below, where %NAME is just the name of the cert:
Generate Key with:
$ openssl genrsa -out <%NAME>.key 2048
Generate certificate request with:
$ openssl req -out <%NAME>.csr -key <%NAME>.key -new
Link to main CA:
$ openssl x509 -req -in <%NAME>.csr -CA ../ca/ca.crt -CAkey ../ca/ca.key -CAcreateserial -out <%NAME>.crt -days 365
Let's say I'd generated client and client2 certificates; I can then run the commands below in 2 different terminals on the RPi-Host and connect with no problem at all:
Subscribe to MQTT broker:
$ mosquitto_sub -p 8883 --cafile ca.crt --cert client2.crt --key client2.key -h localhost -t /world
Publish to MQTT broker:
$ mosquitto_pub -p 8883 --cafile ../ca/ca.crt --cert client.crt --key client.key -h localhost -m hello! -t /world
However, if I change the -h localhost to 192.168.0.190, i.e. the IP address, I immediately get:
Error: A TLS Error occurred.
...which isn't very helpful!
The aim is to connect to this broker from a separate machine; however, I'm stumped just trying to do it on the same machine with its own IP address! Do I need to do something fancy with the common name when generating the certificate? Sadly I have not yet found a tutorial which covers connecting with Mosquitto and TLS across 2 separate machines.
Any pointers appreciated, and terribly sorry if I'm missing the obvious!
Alex
The hostname (or IP address*) that you use to connect to the remote machine MUST match the CN/SAN value in the certificate presented by that machine.
localhost shouldn't really ever be used in certificates as it is just a placeholder which says "This machine". Using TLS/SSL with localhost doesn't do anything useful. You should always generate certificates with the externally used hostname of the broker.
If you can't set up proper hostnames in a DNS server, then you should probably add suitable entries to the /etc/hosts file on all the client machines, mapping the broker's hostname to its address.
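For example, using the hostname and IP address from the question, each client's /etc/hosts would gain a line like:
192.168.0.190   RPi-Host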
The temporary workaround for the error is to add --insecure to the mosquitto_pub and mosquitto_sub command lines (not -i, which sets the client id). This tells them to ignore any mismatch between the hostname and the name in the certificate. But this is only a workaround, as it basically negates one of TLS/SSL's two key features (1. proving the machine you are connecting to is the machine it claims to be, 2. enabling encryption of the messages passing back and forth between the client/broker).
* Using raw IP addresses for TLS is possible, but it adds another level of difficulty getting the entries in the certificates right.
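If you do want the broker reachable by raw IP as well, one approach (a sketch, not part of the answer above; the key and CA file names are assumptions based on the question) is to issue the broker's certificate with both the hostname and the IP address in a subjectAltName extension:
# request a broker certificate for the externally used hostname
openssl req -new -key server.key -out server.csr -subj "/CN=RPi-Host"
# sign it, adding both the hostname and the IP as SANs
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 365 \
  -extfile <(printf 'subjectAltName = DNS:RPi-Host,IP:192.168.0.190')
Clients can then connect with either -h RPi-Host or -h 192.168.0.190 without tripping hostname verification.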
I have configured Neo4j to use encrypted connections, both with HTTPS in the browser and with the Bolt protocol. I have a valid certificate signed by a CA, and the browser works fine for accessing and running queries. The problem comes with cypher-shell over the Bolt protocol. I'm getting this error:
cypher-shell --encryption true -d database -a bolt://ip_address:7687 -u user -p password --debug
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:380)
... 42 more
Both https and bolt use the same certificate and private key. The TLS configuration is:
# Bolt SSL configuration
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
dbms.ssl.policy.bolt.private_key=neo4j.key
dbms.ssl.policy.bolt.public_certificate=neo4j.crt
# Https SSL configuration
dbms.ssl.policy.https.enabled=true
dbms.ssl.policy.https.base_directory=certificates/https
dbms.ssl.policy.https.private_key=neo4j.key
dbms.ssl.policy.https.public_certificate=neo4j.crt
# Bolt connector
dbms.connector.bolt.enabled=true
dbms.connector.bolt.tls_level=REQUIRED
#dbms.connector.bolt.listen_address=0.0.0.0:7687
# HTTP Connector. There can be zero or one HTTP connectors.
dbms.connector.http.enabled=false
#dbms.connector.http.listen_address=:7474
# HTTPS Connector. There can be zero or one HTTPS connectors.
dbms.connector.https.enabled=true
#dbms.connector.https.listen_address=0.0.0.0:7473
I'm using Neo4j 4.0.3 Community Edition.
How can I solve this problem so I can use the Bolt protocol?
Use either
cypher-shell -u user -p password --debug -a host --encryption true
Or
cypher-shell -u user -p password --debug -a bolt+s://host
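A hedged note on why the first form can still fail: cypher-shell runs on the JVM, and the SunCertPathBuilderException above means the JVM does not trust the CA that signed the certificate. If that CA is missing from the default Java trust store, one option (a sketch; the alias is arbitrary, the cacerts path varies by Java version, and changeit is the stock password) is to import it:
keytool -importcert -trustcacerts -alias neo4j_ca -file ca.crt \
  -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit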
I am trying to set up a new Docker Registry (v2) with HAProxy. For the Docker Registry I am using the image from Docker Hub, running it with docker run -d -p 5000:5000 -v /path/to/registry:/tmp/registry registry:2.0.1. And this is a subset of my HAProxy configuration:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

userlist auth_list
    group docker_registry users root
    user root password ***PASSWORD***

backend docker-registry
    server 127.0.0.1:5000_localhost 127.0.0.1:5000 cookie 127.0.0.1:5000_localhost

frontend shared-frontend
    mode http
    bind *:80
    bind *:443 ssl crt *** CERT FILES ***
    option accept-invalid-http-request
    acl domain_d.mydomain.com hdr(host) -i d.mydomain.com
    acl auth_docker_registry_root http_auth(auth_list) root
    redirect scheme https if !{ ssl_fc } domain_d.mydomain.com
    http-request auth realm Registry if !auth_docker_registry_root { ssl_fc } domain_d.mydomain.com
    use_backend docker-registry if domain_d.mydomain.com
The important things to note are that I am using HAProxy to do SSL termination and HTTP auth rather than the registry.
My issue occurs when I try to login to the new registry. If I run docker login https://d.mydomain.com/v2/ then enter the user root and password I get the following error messages:
Docker Client:
FATA[0009] Error response from daemon: invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
Docker Daemon:
ERRO[0057] Handler for POST /auth returned error: invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
ERRO[0057] HTTP Error: statusCode=500 invalid registry endpoint https://d.mydomain.com/v2/: https://d.mydomain.com/v2/ does not appear to be a v2 registry endpoint. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry d.mydomain.com` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/d.mydomain.com/ca.crt
So I tried adding --insecure-registry d.mydomain.com to:
/etc/default/docker with DOCKER_OPTS="-H unix:///var/run/docker.sock --insecure-registry d.mydomain.com"
the arguments when starting Docker manually: docker -d --insecure-registry d.mydomain.com
Neither of these, nor any other option I have found online, works. Each time, after restarting Docker and attempting to log in again, I get the same error message.
A few other things I have tried:
In a browser going to d.mydomain.com results in a 404
In a browser going to d.mydomain.com/v2/ results in: {}
Replacing https://d.mydomain.com/v2/ in the login command with each of these, with no success:
http://d.mydomain.com/v2/
d.mydomain.com/v2/
http://d.mydomain.com/
d.mydomain.com/
This setup with HAProxy doing the SSL termination and HTTP auth has worked in the past using the first version of the registry and older versions of docker. So has anything in Docker registry v2 changed? Does this still work? If it hasn't changed, why won't the --insecure-registry flag do anything anymore?
Also, I have been working on getting this working for a while so I may have forgotten all the things I have tried. If there is something that may work, let me know and I will give it a try.
Thanks,
JamesStewy
Edit
This edit has been moved to the answer below
I have got it working. So here is my new config:
haproxy.cfg
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

userlist auth_list
    group docker_registry users root
    user root password ***PASSWORD***

backend docker-registry
    server 127.0.0.1:5000_localhost 127.0.0.1:5000 cookie 127.0.0.1:5000_localhost

backend docker-registry-auth
    errorfile 503 /path/to/registry_auth.http

frontend shared-frontend
    mode http
    bind *:80
    bind *:443 ssl crt *** CERT FILES ***
    option accept-invalid-http-request
    acl domain_d.mydomain.com hdr(host) -i d.mydomain.com
    redirect scheme https if !{ ssl_fc } domain_d.mydomain.com
    acl auth_docker_registry_root http_auth(auth_list) root
    use_backend docker-registry-auth if !auth_docker_registry_root { ssl_fc } domain_d.mydomain.com
    rsprep ^Location:\ http://(.*) Location:\ https://\1
    use_backend docker-registry if domain_d.mydomain.com
registry_auth.http
HTTP/1.0 401 Unauthorized
Cache-Control: no-cache
Connection: close
Content-Type: text/html
Docker-Distribution-Api-Version: registry/2.0
WWW-Authenticate: Basic realm="Registry"

<html><body><h1>401 Unauthorized</h1>
You need a valid user and password to access this content.
</body></html>
The differences: the http-request auth line has been replaced with use_backend docker-registry-auth. The backend docker-registry-auth has no servers, so it will always return a 503 error, but its 503 errorfile has been changed to registry_auth.http. In registry_auth.http the status code is overridden to 401, the header WWW-Authenticate is set to Basic realm="Registry", the basic HAProxy 401 error page is supplied and, most importantly, the header Docker-Distribution-Api-Version is set to registry/2.0 (note the blank line separating the headers from the body, which HTTP requires).
As a result, this hacky workaround behaves exactly the same as the old http-request auth line, except that the custom header Docker-Distribution-Api-Version is now set. That allows the setup to pass the check which starts on line 236 of https://github.com/docker/docker/blob/v1.7.0/registry/endpoint.go.
So now when I run docker login d.mydomain.com, login is successful and my credentials are added to .docker/config.json.
The second issue was that I couldn't push to the new repository even though login succeeded. This was fixed by adding the rsprep line to the frontend. What that line does is rewrite the Location header (if present), turning all http:// into https://.
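As a side note (an assumption about newer HAProxy versions, not part of the original fix): rsprep has since been deprecated, and the equivalent rewrite in modern configurations would look something like:
http-response replace-header Location http://(.*) https://\1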
I also found this bit of documentation for future reference.
As a small clarification to the previous answer: I had to change this line:
WWW-Authenticate: Basic realm="Registry"
To this:
WWW-Authenticate: Basic realm="Registry realm"
and then everything worked...
BTW, hashing the password can be done using mkpasswd (part of the whois deb package).
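For example (a sketch; the generated hash is pasted into the userlist in place of the plain-text password):
mkpasswd -m sha-512 mypassword
# then in haproxy.cfg:
#   user root password $6$<generated-hash>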