Portainer + Keycloak 20 OAuth login: "unauthorized" / "Unable to login via OAuth"

I have many apps using Keycloak for authentication, but Portainer is the only one that does not work yet.
I am using Portainer's custom OAuth provider, configured as follows:
With the Keycloak client set up:
The URLs should all be correct; they are taken from https://auth.mydomain.com/realms/my-realm/.well-known/openid-configuration
However, when I try to log in to Portainer, I get the error messages "unauthorized" and "Unable to login via OAuth". Does anyone know what I have missed?
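For reference, the three URLs Portainer's custom OAuth form asks for come straight from that discovery document. A minimal sketch of pulling them out; the endpoint values below are illustrative placeholders, not taken from the real server:

```python
import json

# A trimmed example of what Keycloak serves at
# /realms/<realm>/.well-known/openid-configuration
# (values are illustrative, not from the real server).
discovery = json.loads("""
{
  "authorization_endpoint": "https://auth.mydomain.com/realms/my-realm/protocol/openid-connect/auth",
  "token_endpoint": "https://auth.mydomain.com/realms/my-realm/protocol/openid-connect/token",
  "userinfo_endpoint": "https://auth.mydomain.com/realms/my-realm/protocol/openid-connect/userinfo"
}
""")

# These three URLs map onto Portainer's custom OAuth provider fields.
for key in ("authorization_endpoint", "token_endpoint", "userinfo_endpoint"):
    print(key, "->", discovery[key])
```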

I had the same issue. In my case I was running both Keycloak and Portainer in the same Kubernetes cluster, and I hadn't configured CoreDNS to forward to my upstream DNS server correctly. Starting Portainer with --log-level=DEBUG revealed that Portainer could not resolve the Keycloak server while trying to exchange the authorization code for a token.
I fixed the issue by correcting the forward block in the CoreDNS ConfigMap and mounting the root CA certificate at /etc/ssl/certs/cacert.pem in the Portainer container.
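The corrected Corefile looked roughly like this; a sketch only, with 10.0.0.53 standing in for the upstream DNS server that can resolve auth.mydomain.com:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # Forward everything outside the cluster domain to the internal
    # DNS server that can resolve auth.mydomain.com (placeholder IP).
    forward . 10.0.0.53
    cache 30
    loop
    reload
}
```

The forward plugin can also take /etc/resolv.conf instead of an IP if the node's resolver is already correct.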


Docker login issue with Harbor exposed as NodePort service

I am trying to deploy Harbor on a k8s cluster without much effort and complexity. So I followed the Bitnami Harbor Helm chart and deployed a healthy, running Harbor instance exposed as a NodePort service. I know the standard is a LoadBalancer-type service, but as I don't have the setup required to provision a load balancer automatically, I decided to stay away from that complexity. This is not a public cloud environment where the LB gets deployed automatically.
Now, I can access the Harbor GUI using the https://<node-ip>:<https-port> URL. However, despite several attempts I cannot connect to this Harbor instance from my local Docker Desktop instance. I have imported the CA into my machine's keychain, but because the certificate carries a dummy domain name rather than the IP address, Docker doesn't trust that Harbor endpoint. So I created a local DNS record in my /etc/hosts file to link the domain name in Harbor's certificate to the node IP address of the cluster. With that arrangement, Docker seems to be happy with the certificate presented, but it does not carry the port through to the endpoint. So, in the subsequent internal calls for authentication against Harbor, it fails with the error given below. I then also tried to follow the advice in the Harbor documentation to connect to Harbor over HTTP, but that configuration killed the Docker daemon and did not let it even start.
~/.docker » docker login -u admin https://core.harbor.domain:30908
Password:
Error response from daemon: Get "https://core.harbor.domain:30908/v2/": Get "https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry": EOF
As you can see in the above error, the second Get URL does not have a port in it, which will not work.
So, how can I configure my Docker Desktop to connect to a Harbor service exposed on a NodePort? I use Docker Desktop on macOS.
I got a similar error when I used docker login core.harbor.domain:30003 from another host. The error looks like: Error response from daemon: Get https://core.harbor.domain:30003/v2/: Get https://core.harbor.domain/service/token?account=admin&client_id=docker&offline_token=true&service=harbor-registry: dial tcp 1...*:443: connect: connection refused.
However, docker can log in to Harbor from the host on which the Helm chart installed it.
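A sketch of the kind of fix that usually resolves the missing port: Harbor builds the token-service URL from its configured external URL, so that URL has to include the NodePort. Assuming the chart's externalURL value (the key name here is taken from the chart's documented parameters; verify against your chart version):

```yaml
# values.yaml sketch for the Harbor Helm chart
externalURL: https://core.harbor.domain:30908
```

With /etc/hosts still mapping core.harbor.domain to the node IP, docker login core.harbor.domain:30908 should then receive a token URL that keeps the port.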

How to set up docker with no_proxy settings

I'm setting up three docker containers on my own machine using docker compose:
One is a portal written with React.js (called portal)
One is a middleware layer with GraphQL (called gateway)
One is an auth service with node.js (called auth)
I also have a bunch of services already running behind a corporate firewall.
For the most part, gateway will request resources behind the firewall, so I have configured the docker containers to proxy requests through a Squid proxy with access to the additional services. However, requests to my local auth service and other local services should not be proxied. As such, I have the following Docker proxy configuration (note the noProxy settings):
~/.docker/config.json
...
"proxies": {
  "default": {
    "httpProxy": "http://172.30.245.96:3128",
    "httpsProxy": "http://172.30.245.96:3128",
    "noProxy": "auth,localhost,127.0.0.1,192.168.0.1/24"
  }
}
...
With the above setup, portal requests go directly to gateway through the browser using http://192.168.0.15/foo, but when gateway makes requests to auth using http://auth:3001/bar, they do not go directly to auth; instead they go through the proxy, which I am trying to avoid.
I can see that the auth request is sent through the proxy from the Squid proxy errors:
<p>The following error was encountered while trying to retrieve the URL: http://auth:3001/bar</p>
How can I set up the docker containers to respect the noProxy setting using Docker service names like auth? It appears to me that the request from gateway to auth is mistakenly being proxied through 172.30.245.96:3128, causing it to fail. Thanks
Your Docker configuration seems fine, but your host doesn't understand how to resolve the name auth. Based on the IP given (192.168.x.x), I'll assume that you're attempting to reach the container service from the host. Add an entry for auth into your host's /etc/hosts (C:\Windows\System32\Drivers\etc\hosts if on Windows).
Take a look at Linked docker-compose containers making http requests for more details.
If you run into issues reaching services from within the container, check docker-compose resolve hostname in url for an example.
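One more thing worth checking on the noProxy side: many HTTP clients match no_proxy entries as plain hostname suffixes and do not understand CIDR ranges at all, so an entry like 192.168.0.1/24 may silently never match. A small stdlib sketch (Python purely for illustration) of how such matching typically behaves:

```python
import os
import urllib.request

# Simulate the environment Docker injects into a container from the
# "proxies" section of ~/.docker/config.json.
os.environ["no_proxy"] = "auth,localhost,127.0.0.1,192.168.0.1/24"

# urllib matches no_proxy entries by hostname: "auth" is listed, so a
# request to http://auth:3001 would bypass the proxy here...
print(urllib.request.proxy_bypass_environment("auth"))

# ...but "gateway" is not listed, and the CIDR-style entry is treated
# as a literal string, so other 192.168.0.x hosts are NOT bypassed.
print(urllib.request.proxy_bypass_environment("gateway"))
print(urllib.request.proxy_bypass_environment("192.168.0.55"))
```

If the app inside gateway uses a client with the same semantics, listing auth explicitly (as the config above already does) is what makes the bypass work, not the CIDR entry.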

jhipster microservices running mixed in docker and locally. Gateway cannot access UAA

I have a question about JHipster working combined with Docker and localhost. I have started the registry and the UAA apps using docker compose, and everything is fine. Then I started one microservice and the gateway locally. Both of them are successfully shown in the registry's instances view. The problem is that when the gateway tries to connect to the UAA (uaa/oauth/token), it fails (I/O error on POST request for http://uaa/oauth/token). I have tried to map uaa to localhost in /etc/hosts, but it did not help. Does anybody have an idea how to deal with this issue? Thanks in advance
The UAA server will have a port as well as a host name. Both will need to be specified. To specify the port you will need to change your application.properties.
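A sketch of what that can look like when UAA runs in Docker and the gateway runs on the host; the 9999 port here is JHipster's usual UAA default, so adjust it to your generated app:

```yaml
# docker-compose sketch: publish UAA's port to the host
services:
  uaa:
    ports:
      - "9999:9999"
```

Combined with a host entry (127.0.0.1 uaa in /etc/hosts) and the port configured on the gateway side, http://uaa:9999/oauth/token becomes reachable from the locally running gateway.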

How can I login to a Docker registry which also has basic auth?

I want to access a Docker registry which has nginx in front of it. This nginx has basic authentication enabled. I tried to log in with docker login url-of-registry, but it doesn't seem to work. I always get:
Error response from daemon: login attempt to https://docker-registry-url/v2/ failed with status: 401 Unauthorized
I would expect that I first have to enter the basic auth credentials and then the Docker registry credentials. How can I login to such Docker registry?
The approach is documented under Authenticate proxy with nginx in the Docker docs. There you can find the nginx configuration needed to authenticate against a registry behind the proxy.
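A condensed sketch of that recipe; the upstream name, server_name, and htpasswd path are placeholders to adapt:

```nginx
# Minimal sketch based on Docker's "Authenticate proxy with nginx"
# recipe; upstream, server_name, and file paths are placeholders.
upstream docker-registry {
    server registry:5000;
}

server {
    listen 443 ssl;
    server_name docker-registry-url;

    # registry blobs can be large
    client_max_body_size 0;

    location /v2/ {
        auth_basic           "Registry realm";
        auth_basic_user_file /etc/nginx/registry.htpasswd;

        # advertise the v2 API even on error responses
        add_header Docker-Distribution-Api-Version registry/2.0 always;

        proxy_pass                         http://docker-registry;
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout                 900;
    }
}
```

With this in place, docker login sends the basic-auth credentials itself; there is no separate nginx prompt followed by registry credentials.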

issue with Docker registry having nginx authentication

I have installed my private Docker registry with nginx HTTP basic authentication, where the registry is running on port 5000 and nginx on 8000. I am able to log in to the registry after creating credentials. But I am hitting a strange issue that does not allow me to pull or push any image. It would be very helpful if you could give pointers on the same. The error message I get is:
FATA[0000] HTTP code 401, Docker will not send auth headers over HTTP.
Thanks in advance,
~Yash
