I'm currently trying to build an application with Zuul + Eureka, using Spring Boot, running on port 9000. Everything works fine until I add @EnableOAuth2Sso to the application. From that moment on, no service (not even the Zuul + Eureka application itself) can register with Eureka anymore.
When I try to access http://localhost:9000, the browser shows me the login page; I enter my dummy user credentials (user, password) and I can see Eureka's dashboard, which shows no clients registered (as shown in the images at the bottom of this question).
Can anyone point me in a direction to go? I think I could tell the Zuul + Eureka security part to permit all requests to Eureka, but I don't know how to do it.
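One way this is often done (a sketch only, not tested against this exact setup; the SecurityConfig class name is mine) is a WebSecurityConfigurerAdapter that leaves the /eureka/** endpoints open while keeping SSO for everything else:

import org.springframework.boot.autoconfigure.security.oauth2.client.EnableOAuth2Sso;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableOAuth2Sso
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                // let Eureka clients register and fetch the registry without the SSO login
                .antMatchers("/eureka/**").permitAll()
                // everything else still goes through the OAuth2 SSO flow
                .anyRequest().authenticated()
            .and()
            // Eureka clients POST to /eureka without a CSRF token
            .csrf().disable();
    }
}

With Spring Boot 1.x, @EnableOAuth2Sso is then usually placed on this security configuration rather than on the main application class.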
My application.yml:
server:
  port: 9000

logging:
  level:
    com:
      netflix:
        eureka: OFF
        discovery: OFF

eureka:
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 2
  client:
    register-with-eureka: true
    fetch-registry: true
    serviceUrl:
      defaultZone: http://127.0.0.1:9000/eureka/
    healthcheck:
      enabled: true

security:
  sessions: ALWAYS
  basic.enabled: false
  oauth2:
    client:
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
      clientId: acme
      clientSecret: acmesecret
    resource:
      userInfoUri: http://localhost:9999/uaa/user
      jwt:
        keyValue: |
          -----BEGIN PUBLIC KEY-----
          #here goes my public key
          -----END PUBLIC KEY-----
My annotated application class:
@EnableZuulProxy
@EnableEurekaServer
@EnableEurekaClient
@EnableOAuth2Sso
@SpringBootApplication
public class GatewayApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}
I have deployed JupyterHub and Keycloak instances with Helm charts. I'm trying to authenticate users with an OpenID Connect identity provider from Keycloak, but I'm pretty confused about the settings. I have followed instructions from here saying I should use a GenericOAuthenticator when implementing Keycloak.
To configure the OpenID Connect client I followed this.
I also created a group membership mapper and an audience mapper and added them to the mappers of the JupyterHub "jhub" client, as well as a group like this, and created two test users, adding one of them to that group.
My problem is: when I try to log in I get a 403 Forbidden error and a URL similar to this:
https://jhub.compana.com/hub/oauth_callback?state=eyJzdGF0ZV9pZCI6ICJmYzE4NzA0ZmVmZTk0MGExOGU3ZWMysdfsdfsghfgh9LHKGJHDViLyJ9&session_state=ffg334-444f-b510-1f15d1444790&code=d8e977770a-1asdfasdf664-a790-asdfasdf.a6aac533-c75d-d555f-b510-asdasd.aaaaasdf73353-ce76-4aa9-894e-123asdafs
My questions are:
Am I right about using OAuth proxy? Do I need it if I'm using Keycloak? According to the JupyterHub docs, there are two authentication flows, so I'm using OAuth proxy as the external authenticator, but I'm not positive about the way I'm doing that.
JupyterHub is often deployed with oauthenticator, where an external
identity provider, such as GitHub or KeyCloak, is used to authenticate
users. When this is the case, there are two nested oauth flows: an
internal oauth flow where JupyterHub is the provider, and an external
oauth flow, where JupyterHub is a client.
Does Keycloak already have a default OIDC identity provider? The menu doesn't show any after the installation. Should I have done this for each client, since it's asking for an Authorization URL, or is it redundant?
I tried to find this out, but it only offers the possibility to define my own default identity provider, according to this.
Is there a way to test the OAuth flow from the terminal or with Postman in a way that lets me inspect the responses?
I could get an ID token with:
curl -k -X POST https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token -d grant_type=password -d username=myuser -d password=mypassword -d client_id=my-client -d scope=openid -d response_type=id_token -d client_secret=myclientsecret
But how can I try to log in from the console?
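One way to inspect the raw responses outside the browser is to drive the token and userinfo endpoints directly. Below is a minimal Java 11+ sketch reusing the hostnames, realm, client, and credentials from the curl command above; all of those values are assumptions about your setup, and the JVM has to trust the Keycloak TLS certificate (the curl above used -k to skip that check):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeycloakFlowCheck {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        String base = "https://keycloak.company.com/auth/realms/company/protocol/openid-connect";

        // 1. Resource-owner password grant against the token endpoint (same call as the curl above).
        String form = "grant_type=password&client_id=my-client&client_secret=myclientsecret"
                + "&username=myuser&password=mypassword&scope=openid";
        HttpRequest tokenRequest = HttpRequest.newBuilder()
                .uri(URI.create(base + "/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        String tokenBody = http.send(tokenRequest, HttpResponse.BodyHandlers.ofString()).body();
        System.out.println("token response: " + tokenBody);

        // 2. Pull out the access_token (crude regex; use a JSON library in real code) and call userinfo.
        String accessToken = tokenBody.replaceAll("(?s).*\"access_token\"\\s*:\\s*\"([^\"]+)\".*", "$1");
        HttpRequest userInfoRequest = HttpRequest.newBuilder()
                .uri(URI.create(base + "/userinfo"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();
        System.out.println("userinfo: " + http.send(userInfoRequest, HttpResponse.BodyHandlers.ofString()).body());
    }
}

This only exercises the password grant and the userinfo lookup; the authorization-code login that JupyterHub actually performs involves browser redirects, so it cannot be fully reproduced from a console.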
Keycloak console screenshots:
identity provider list
Relevant files:
Jupyterhub-values.yaml:
hub:
  config:
    Authenticator:
      enable_auth_state: true
    JupyterHub:
      authenticator_class: generic-oauth
    GenericOAuthenticator:
      client_id: jhubclient
      client_secret: abcsecret
      oauth_callback_url: https://jhub.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
  extraEnv:
    OAUTH2_AUTHORIZE_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
    OAUTH2_TOKEN_URL: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
    OAUTH_CALLBACK_URL: https://keycloak.company.com/hub/company

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
  hosts:
    - jhub.company.com
keycloak-values.yaml:
Mostly default values, but I added the following for HTTPS:
extraEnvVars:
  - name: KEYCLOAK_PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: KEYCLOAK_ENABLE_TLS
    value: "true"
  - name: KEYCLOAK_FRONTEND_URL
    value: "https://keycloak.company.com/auth"

ingress:
  enabled: true
  servicePort: https
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.org/redirect-to-https: "true"
    nginx.org/server-snippets: |
      location /auth {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $host;
        proxy_set_header X-Forwarded-Proto $scheme;
      }
I could make it work with this configuration:
hub:
  config:
    Authenticator:
      enable_auth_state: true
      admin_users:
        - admin
      allowed_users:
        - testuser1
    GenericOAuthenticator:
      client_id: jhub
      client_secret: nrjNivxuJk2YokEpHB2bQ3o97Y03ziA0
      oauth_callback_url: https://jupyter.company.com/hub/oauth_callback
      authorize_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/auth
      token_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/token
      userdata_url: https://keycloak.company.com/auth/realms/company/protocol/openid-connect/userinfo
      login_service: keycloak
      username_key: preferred_username
      userdata_params:
        state: state
    JupyterHub:
      authenticator_class: generic-oauth
Creating the ingress myself like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jhub-ingress
  namespace: jhub
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - jupyter.company.com
      secretName: letsencrypt-cert-tls-jhub
  rules:
    - host: jupyter.company.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy-http
                port:
                  number: 8000
I also removed the OAuth proxy deployment, since this appears to already be handled by Keycloak and it is actually redundant.
Then I created regular user and admin roles and groups in Keycloak.
It appears the users didn't have the proper permissions in Keycloak.
I am facing a problem with my authentication with Keycloak. Everything works fine when my Nuxt app is running locally (npm run dev), but when it is inside a Docker container, something goes wrong.
Windows 10
Docker 20.10.11
Docker-compose 1.29.2
nuxt: ^2.15.7
@nuxtjs/auth-next: ^5.0.0-1637745161.ea53f98
@nuxtjs/axios: ^5.13.6
I have a Docker service containing Keycloak and LDAP: keycloak:8180 and myad:10389. My Nuxt app is running on port 3000.
On the front-end side, here is my configuration, which works great when I launch my app locally with "npm run dev":
server: {
  port: 3000,
  host: '0.0.0.0'
},
...
auth: {
  strategies: {
    local: false,
    keycloak: {
      scheme: 'oauth2',
      endpoints: {
        authorization: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/auth',
        token: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/token',
        userInfo: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/userinfo',
        logout: 'http://localhost:8180/auth/realms/<realm>/protocol/openid-connect/logout?redirect_uri=' + encodeURIComponent('http://localhost:3000')
      },
      token: {
        property: 'access_token',
        type: 'Bearer',
        name: 'Authorization',
        maxAge: 300
      },
      refreshToken: {
        property: 'refresh_token',
        maxAge: 60 * 60 * 24 * 30
      },
      responseType: 'code',
      grantType: 'authorization_code',
      clientId: '<client_id>',
      scope: ['openid'],
      codeChallengeMethod: 'S256'
    }
  },
  redirect: {
    login: '/',
    logout: '/',
    home: '/home'
  }
},
router: {
  middleware: ['auth']
}
}
And here are my Keycloak and Nuxt docker-compose configurations:
keycloak:
  image: quay.io/keycloak/keycloak:latest
  container_name: keycloak
  hostname: keycloak
  environment:
    - DB_VENDOR=***
    - DB_ADDR=***
    - DB_DATABASE=***
    - DB_USER=***
    - DB_SCHEMA=***
    - DB_PASSWORD=***
    - KEYCLOAK_USER=***
    - KEYCLOAK_PASSWORD=***
    - PROXY_ADDRESS_FORWARDING=true
  ports:
    - "8180:8080"
  networks:
    - ext_sd_bridge

networks:
  ext_sd_bridge:
    external:
      name: sd_bridge

client_ui:
  image: ***
  container_name: client_ui
  hostname: client_ui
  ports:
    - "3000:3000"
  networks:
    - sd_bridge

networks:
  sd_bridge:
    name: sd_bridge
When my Nuxt app is inside its container, the authentication seems to work, but the redirections act strangely. As you can see, I am always redirected to my login page ("/") after the redirection to "/home":
Browser network
Am I missing something or is there something I am doing wrong?
I figured out what my problem was.
Basically, my nuxt.config.js was wrong for use inside a Docker container. I had to change the auth endpoints to:
endpoints: {
  authorization: '/auth/realms/<realm>/protocol/openid-connect/auth',
  token: '/auth/realms/<realm>/protocol/openid-connect/token',
  userInfo: '/auth/realms/<realm>/protocol/openid-connect/userinfo',
  logout: '/auth/realms/<realm>/protocol/openid-connect/logout?redirect_uri=' + encodeURIComponent('http://localhost:3000')
}
And proxy the "/auth" requests to the hostname of my Keycloak Docker container (note that my Keycloak and Nuxt containers are in the same network in my docker-compose files):
proxy: {
  '/auth': 'http://keycloak:8180'
}
At this point, every request was working fine except the "/authenticate" one, because "keycloak:8180/authenticate" ends up in the browser URL and, of course, the browser doesn't know "keycloak".
For this to work, I added this environment variable to my Keycloak docker-compose:
KEYCLOAK_FRONTEND_URL=http://localhost:8180/auth
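In the docker-compose configuration shown earlier, that variable just goes into the keycloak service's environment list, for example:

keycloak:
  environment:
    # ...existing DB_* and KEYCLOAK_* variables...
    - KEYCLOAK_FRONTEND_URL=http://localhost:8180/auth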
With this variable, the full process of authentication/redirection is working like a charm, with Keycloak and Nuxt in their containers :)
I'm a new learner of Docker. I'm trying to start the Eureka server and the config server, but the config server gives the error "Cannot execute the request on any known server", and because of this error I can't start the other microservices.
application.yml(eureka service)
server:
  port: 5002

eureka:
  client:
    service-url:
      defaultZone: http://localhost:5002/eureka/
    register-with-eureka: false
    fetch-registry: false
application.yml(config microservice)
server:
  port: 9095

spring:
  application:
    name: config-server
  profiles:
    active:
      - native
  cloud:
    config:
      server:
        native:
          search-locations:
            - classpath:/

eureka:
  instance:
    hostname: localhost
    port: 5002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
and I have two separate Dockerfiles for these, in which FROM, ADD, EXPOSE, and ENTRYPOINT are used (a sketch follows below).
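For reference, a minimal sketch of what such a Dockerfile might look like for the Eureka service (the base image and jar name here are assumptions, not taken from the question):

FROM openjdk:17-jdk-slim
# copy the packaged Spring Boot jar into the image
ADD target/eureka-server.jar app.jar
EXPOSE 5002
ENTRYPOINT ["java", "-jar", "/app.jar"]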
I'm trying to set up a private Docker registry that is secured via the token method. The authentication server it is hitting is a private authentication server. I'm getting this error when trying to log in: level=info msg="unable to get token signing key"
I am seeing a JWT token being generated and returned from the authentication server.
Docker Registry Config:
version: 0.1
log:
  accesslog:
    disabled: false
  level: info
  fields:
    service: registry
    environment: development
storage:
  delete:
    enabled: true
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  host: https://127.0.0.1:5000
  addr: 0.0.0.0:5000
  debug:
    addr: 0.0.0.0:5001
  secret: notasecret123
  tls:
    certificate: /certs/registry.crt
    key: /certs/registry.key
  headers:
    X-Content-Type-Options: [nosniff]
auth:
  token:
    realm: https://localhost:3443/dockerauth
    service: https://localhost:5000
    issuer: https://localhost:3443
    rootcertbundle: /certs/registry.crt
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
JWT signing:
// fs ships with Node; jsonwebtoken is the npm package doing the signing.
// `payload` is not shown in the question, but for the registry token scheme it
// should carry the "access" claims (type/name/actions) the registry expects.
var fs = require('fs');
var jwt = require('jsonwebtoken');

var i = 'https://localhost:3443';
var s = 'registry';
var a = 'https://localhost:5000';

var signOptions = {
    issuer: i,
    subject: s,
    audience: a,
    expiresIn: "1h",
    algorithm: "RS256"
};

var registryKey = fs.readFileSync('/app/bin/keys/registry.key');
var token = jwt.sign(payload, registryKey, signOptions);
I've tried creating an RSA public/private key pair to sign the JWT token and then setting rootcertbundle to the RSA public key, but I got an error indicating that I needed a PEM cert. So I created a PEM cert and got the same error I started out with.
I want to be able to authenticate against an Identity Server (STS) from outside and inside a docker machine.
I am having trouble setting a correct authority that works both inside and outside the container. If I set the authority to the internal name mcoidentityserver:5000, the API can authenticate, but the client cannot get a token, as the client lies outside of the docker network. If I set the authority to the external name localhost:5000, the client can get a token, but the API doesn't recognise the authority name (because localhost in this case is the host machine).
What should I set the Authority to? Or perhaps I need to adjust the docker networking?
Diagram
The red arrow is the part that I'm having trouble with.
Detail
I am setting up a Windows 10 docker development environment that uses an ASP.NET Core API (on Linux), Identity Server 4 (ASP.NET Core on Linux) and a PostgreSQL database. PostgreSQL isn't a problem, included in the diagram for completeness. It's mapped to 9876 because I also have a PostgreSQL instance running on the host for now. mco is a shortened name of our company.
I have been following the Identity Server 4 instructions to get up and running.
Code
I'm not including the docker-compose.debug.yml because it has run commands pertinent only to running in Visual Studio.
docker-compose.yml
version: '2'

services:
  mcodatabase:
    image: mcodatabase
    build:
      context: ./Data
      dockerfile: Dockerfile
    restart: always
    ports:
      - 9876:5432
    environment:
      POSTGRES_USER: mcodevuser
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mcodev
    volumes:
      - postgresdata:/var/lib/postgresql/data
    networks:
      - mconetwork
  mcoidentityserver:
    image: mcoidentityserver
    build:
      context: ./Mco.IdentityServer
      dockerfile: Dockerfile
    ports:
      - 5000:5000
    networks:
      - mconetwork
  mcoapi:
    image: mcoapi
    build:
      context: ./Mco.Api
      dockerfile: Dockerfile
    ports:
      - 56107:80
    links:
      - mcodatabase
    depends_on:
      - "mcodatabase"
      - "mcoidentityserver"
    networks:
      - mconetwork

volumes:
  postgresdata:

networks:
  mconetwork:
    driver: bridge
docker-compose.override.yml
This is created by the Visual Studio plugin to inject extra values.
version: '2'

services:
  mcoapi:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
  mcoidentityserver:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "5000"
API Dockerfile
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Mco.Api.dll"]
Identity Server Dockerfile
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
EXPOSE 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT ["dotnet", "Mco.IdentityServer.dll"]
API Startup.cs
Where we tell the API to use the Identity Server and set the Authority.
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
    {
        // This can't work because we're running in docker and it doesn't understand what localhost:5000 is!
        Authority = "http://localhost:5000",
        RequireHttpsMetadata = false,
        ApiName = "api1"
    });

    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
    }

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}
Identity Server Startup.cs
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentityServer()
            .AddTemporarySigningCredential()
            .AddInMemoryApiResources(Config.GetApiResources())
            .AddInMemoryClients(Config.GetClients());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole();

        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseIdentityServer();

        app.Run(async (context) =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}
Identity Server Config.cs
public class Config
{
    public static IEnumerable<ApiResource> GetApiResources()
    {
        return new List<ApiResource>
        {
            new ApiResource("api1", "My API")
        };
    }

    public static IEnumerable<Client> GetClients()
    {
        return new List<Client>
        {
            new Client
            {
                ClientId = "client",

                // no interactive user, use the clientid/secret for authentication
                AllowedGrantTypes = GrantTypes.ClientCredentials,

                // secret for authentication
                ClientSecrets =
                {
                    new Secret("secret".Sha256())
                },

                // scopes that client has access to
                AllowedScopes = { "api1" }
            }
        };
    }
}
Client
Running in a console app.
var discovery = DiscoveryClient.GetAsync("localhost:5000").Result;
var tokenClient = new TokenClient(discovery.TokenEndpoint, "client", "secret");
var tokenResponse = tokenClient.RequestClientCredentialsAsync("api1").Result;

if (tokenResponse.IsError)
{
    Console.WriteLine(tokenResponse.Error);
    return 1;
}

var client = new HttpClient();
client.SetBearerToken(tokenResponse.AccessToken);

var response = client.GetAsync("http://localhost:56107/test").Result;
if (!response.IsSuccessStatusCode)
{
    Console.WriteLine(response.StatusCode);
}
else
{
    var content = response.Content.ReadAsStringAsync().Result;
    Console.WriteLine(JArray.Parse(content));
}
Thanks in advance.
Ensure IssuerUri is set to an explicit constant. We had similar issues with accessing the Identity Server instance by IP/hostname and resolved it this way:
services.AddIdentityServer(x =>
{
    x.IssuerUri = "my_auth";
})
P.S. Why don't you unify the authority URL to hostname:5000? Yes, it is possible for the Client and the API to both call the same URL hostname:5000 if:
5000 port is exposed (I see it's OK)
DNS is resolved inside the docker container.
You have access to hostname:5000 (check firewalls, network topology, etc.)
DNS is the trickiest part. If you have any trouble with it, I recommend you try reaching the Identity Server by its exposed IP instead of resolving the hostname.
To make this work I needed to pass in two environment variables in the docker-compose.yml and set up CORS on the identity server instance so that the API was allowed to call it. Setting up CORS is outside the remit of this question; this question covers it well.
Docker-Compose changes
The identity server needs IDENTITY_ISSUER, which is the name that the identity server will give itself. In this case I've used the IP of the docker host and the port of the identity server.
mcoidentityserver:
  image: mcoidentityserver
  build:
    context: ./Mco.IdentityServer
    dockerfile: Dockerfile
  environment:
    IDENTITY_ISSUER: "http://10.0.75.1:5000"
  ports:
    - 5000:5000
  networks:
    - mconetwork
The API needs to know where the authority is. We can use the docker network name for the authority because the call doesn't need to go outside the docker network; the API only calls the identity server to check the token.
mcoapi:
  image: mcoapi
  build:
    context: ./Mco.Api
    dockerfile: Dockerfile
  environment:
    IDENTITY_AUTHORITY: "http://mcoidentityserver:5000"
  ports:
    - 56107:80
  links:
    - mcodatabase
    - mcoidentityserver
  depends_on:
    - "mcodatabase"
    - "mcoidentityserver"
  networks:
    - mconetwork
Using these values in C#
Identity Server Startup.cs
You set the Identity Issuer name in ConfigureServices:
public void ConfigureServices(IServiceCollection services)
{
    var sqlConnectionString = Configuration.GetConnectionString("DefaultConnection");

    services
        .AddSingleton(Configuration)
        .AddMcoCore(sqlConnectionString)
        .AddIdentityServer(x => x.IssuerUri = Configuration["IDENTITY_ISSUER"])
        .AddTemporarySigningCredential()
        .AddInMemoryApiResources(Config.GetApiResources())
        .AddInMemoryClients(Config.GetClients())
        .AddCorsPolicyService<InMemoryCorsPolicyService>()
        .AddAspNetIdentity<User>();
}
API Startup.cs
We can now set the Authority to the environment variable.
app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
{
    Authority = Configuration["IDENTITY_AUTHORITY"],
    RequireHttpsMetadata = false,
    ApiName = "api1"
});
Drawbacks
As shown here, this docker-compose would not be fit for production, as the hard-coded identity issuer is a local IP. Instead you would need a proper DNS entry that maps to the docker instance running the identity server. To do this I would create a docker-compose override file and build production with the overridden value.
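As a sketch, such an override file (with a placeholder hostname; identity.example.com is not from this setup) could look like:

version: '2'

services:
  mcoidentityserver:
    environment:
      IDENTITY_ISSUER: "https://identity.example.com"

docker-compose then merges it over the base file when both are passed with -f, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d.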
Thanks to ilya-chumakov for his help.
Edit
Further to this, I have written up the entire process of building a Linux docker + ASP.NET Core 2 + OAuth with Identity Server on my blog.
If you are running your docker containers in the same network, you can do the following:
Add IssuerUri in your identity server.
services.AddIdentityServer(x =>
{
    x.IssuerUri = "http://<your_identity_container_name>";
})
This will set your identity server's URI, so your other web API services can use this URI to reach your identity server.
Add Authority in your web API that has to use the identity server.
services.AddAuthentication(options =>
{
    options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
}).AddJwtBearer(o =>
{
    o.Authority = "http://<your_identity_container_name>";
    o.Audience = "api1"; // API resource name
    o.RequireHttpsMetadata = false;
    o.IncludeErrorDetails = true;
});