Spring Security OAuth redirect endpoint not found

I have added Spring Security to an existing JEE application in order to add OAuth support to it.
The security configuration is set to protect the REST API, and that part seems to work fine.
When the UI requests a protected URL, the response contains a redirect to 'oauth2/authorize/keycloak'.
But that's where the story ends, since the request to 'oauth2/authorize/keycloak' itself returns a 404.
I am pretty out of date with Spring Security (I last used it with Spring applications about 8 years ago), and I have no idea where to find the implementation of the 'oauth2/authorize/keycloak' endpoint in order to figure out what is missing or wrong in my setup.
The relevant part of my dependency tree looks as follows:
[INFO] | +- com.mycompany.auth:authentication-sso-configuration:jar:1.0.0-SNAPSHOT:compile
[INFO] | | +- org.reactivestreams:reactive-streams:jar:1.0.3:compile
[INFO] | | +- org.springframework.security:spring-security-oauth2-client:jar:5.3.3.RELEASE:compile
[INFO] | | | +- com.nimbusds:oauth2-oidc-sdk:jar:7.5:compile
[INFO] | | | | +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
[INFO] | | | | +- com.nimbusds:content-type:jar:2.0:compile
[INFO] | | | | +- net.minidev:json-smart:jar:2.3:compile (version selected from constraint [1.3.1,2.3])
[INFO] | | | | | \- net.minidev:accessors-smart:jar:1.2:compile
[INFO] | | | | | \- org.ow2.asm:asm:jar:5.0.4:compile
[INFO] | | | | \- com.nimbusds:lang-tag:jar:1.4.4:compile
[INFO] | | | +- org.springframework.security:spring-security-oauth2-core:jar:5.3.3.RELEASE:compile
[INFO] | | | \- org.springframework:spring-core:jar:5.2.6.RELEASE:compile
[INFO] | | | \- org.springframework:spring-jcl:jar:5.2.6.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-oauth2-jose:jar:5.3.3.RELEASE:compile
[INFO] | | | \- com.nimbusds:nimbus-jose-jwt:jar:8.18.1:compile
[INFO] | | +- org.springframework.security:spring-security-oauth2-resource-server:jar:5.3.3.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-core:jar:5.3.3.RELEASE:compile
[INFO] | | | +- org.springframework:spring-aop:jar:5.2.6.RELEASE:compile
[INFO] | | | +- org.springframework:spring-beans:jar:5.2.6.RELEASE:compile
[INFO] | | | +- org.springframework:spring-context:jar:5.2.6.RELEASE:compile
[INFO] | | | \- org.springframework:spring-expression:jar:5.2.6.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-web:jar:5.3.3.RELEASE:compile
[INFO] | | | \- org.springframework:spring-web:jar:5.2.6.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-config:jar:5.3.3.RELEASE:compile
[INFO] | | +- org.springframework.security:spring-security-saml2-service-provider:jar:5.3.3.RELEASE:compile
[INFO] | | | +- org.opensaml:opensaml-core:jar:3.4.5:compile
[INFO] | | | | +- io.dropwizard.metrics:metrics-core:jar:3.1.2:compile
[INFO] | | | | \- net.shibboleth.utilities:java-support:jar:7.5.1:compile
[INFO] | | | +- org.opensaml:opensaml-saml-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-xmlsec-api:jar:3.4.5:compile
[INFO] | | | | | \- org.opensaml:opensaml-security-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-soap-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-messaging-api:jar:3.4.5:compile
[INFO] | | | | +- org.opensaml:opensaml-profile-api:jar:3.4.5:compile
[INFO] | | | | \- org.opensaml:opensaml-storage-api:jar:3.4.5:compile
[INFO] | | | \- org.opensaml:opensaml-saml-impl:jar:3.4.5:compile
[INFO] | | | +- org.opensaml:opensaml-security-impl:jar:3.4.5:compile
[INFO] | | | +- org.opensaml:opensaml-xmlsec-impl:jar:3.4.5:compile
[INFO] | | | | \- org.apache.santuario:xmlsec:jar:2.0.10:compile
[INFO] | | | | \- com.fasterxml.woodstox:woodstox-core:jar:5.0.3:compile
[INFO] | | | | \- org.codehaus.woodstox:stax2-api:jar:3.1.4:compile
[INFO] | | | +- org.opensaml:opensaml-soap-impl:jar:3.4.5:compile
[INFO] | | | \- org.apache.velocity:velocity:jar:1.7:compile
[INFO] | | +- org.apache.logging.log4j:log4j-api:jar:2.13.3:compile
[INFO] | | +- org.apache.logging.log4j:log4j-core:jar:2.13.3:compile
[INFO] | | +- org.yaml:snakeyaml:jar:1.26:compile
[INFO] | | +- commons-collections:commons-collections:jar:3.2.2:compile
[INFO] | | +- org.bouncycastle:bcprov-jdk15on:jar:1.66:compile
[INFO] | | +- org.cryptacular:cryptacular:jar:1.2.4:compile
[INFO] | | \- org.apache.commons:commons-configuration2:jar:2.7:compile
[INFO] | | \- org.apache.commons:commons-text:jar:1.8:compile
And this is the configuration for OAuth
# OAuth2 login manifest
oauth2Login:
  authorizationCode:
    authorizationUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/auth"
    scope:
      - "openid"
      - "finx"
    redirectUriTemplate: "{baseUrl}/login/oauth2/code/{registrationId}"
    tokenUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/token"
    userInfoUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/userinfo"
    jwkSetKeyUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/certs"
  registrationId: "keycloak"
  clientId: "finx_oauth2"
  clientSecret:
    vaultType: PLAIN_TEXT
    secret: "my-secret"
  clientName: "FinX"
  entryPoints:
    - pathMatcher: "/ledger-api/**"
    - pathMatcher: "/ledger-api-internal/**"
    - pathMatcher: "/ledger-api-ui/**"
# OAuth2 resource server
oauth2ResourceServer:
  keySetUri: "http://localhost:8180/auth/realms/master/protocol/openid-connect/certs"
  pathMatchers:
    - "/api/**"
    - "/orchestration-api/**"
I have been digging through the Spring source code to find the implementation of the 'oauth2/authorize/keycloak' endpoint, but this is not an easy task.
So I am looking for pointers on what could be missing or wrong in my configuration.

By default, the OAuth 2.0 login page is auto-generated by the DefaultLoginPageGeneratingFilter.
The authorization request URI for a client defaults to OAuth2AuthorizationRequestRedirectFilter.DEFAULT_AUTHORIZATION_REQUEST_BASE_URI + "/{registrationId}". Since your configuration sets registrationId: "keycloak", that resolves to /oauth2/authorization/keycloak (note that the default path segment is authorization, not authorize as in the URL from your question).
Please check your WebSecurityConfigurerAdapter configuration. Try overriding the default login page by configuring oauth2Login().loginPage() and (optionally) oauth2Login().authorizationEndpoint().baseUri().
The following listing shows an example:
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .oauth2Login()
            .loginPage("/login/oauth2")
            // ...
            .authorizationEndpoint()
                .baseUri("/login/oauth2/authorization");
    // ...
}
Please check OAuth 2.0 Login - Advanced Configuration for more information.
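One more thing worth double-checking in a plain JEE setup (i.e. without Spring Boot's auto-configuration): the filters registered by oauth2Login() can only serve /oauth2/authorization/keycloak if a ClientRegistrationRepository containing a "keycloak" registration is available to them. The following is only a sketch of what such a bean could look like, with the values copied from your manifest (classes are from org.springframework.security.oauth2.client.registration and org.springframework.security.oauth2.core); the bean wiring itself is an assumption about your setup, not something taken from your posted configuration:
@Bean
public ClientRegistrationRepository clientRegistrationRepository() {
    // Values mirror the manifest in the question; the userNameAttributeName is a
    // typical Keycloak choice and is an assumption, not taken from the question.
    ClientRegistration keycloak = ClientRegistration.withRegistrationId("keycloak")
            .clientId("finx_oauth2")
            .clientSecret("my-secret")
            .authorizationGrantType(AuthorizationGrantType.AUTHORIZATION_CODE)
            .redirectUriTemplate("{baseUrl}/login/oauth2/code/{registrationId}")
            .scope("openid", "finx")
            .authorizationUri("http://localhost:8180/auth/realms/master/protocol/openid-connect/auth")
            .tokenUri("http://localhost:8180/auth/realms/master/protocol/openid-connect/token")
            .userInfoUri("http://localhost:8180/auth/realms/master/protocol/openid-connect/userinfo")
            .jwkSetUri("http://localhost:8180/auth/realms/master/protocol/openid-connect/certs")
            .userNameAttributeName("preferred_username")
            .clientName("FinX")
            .build();
    return new InMemoryClientRegistrationRepository(keycloak);
}
With such a repository in place, the redirect filter set up by oauth2Login() has something to resolve the keycloak registration against when the browser follows the redirect.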

Related

No code signing identity found and can not create a new one because you enabled `readonly`

When I use fastlane to publish an iOS app from GitHub Actions:
- name: Deploy to TestFlight/PGY
  run: |
    cd ./ios
    bundle exec fastlane beta
  env:
    FLUTTER_ROOT: ${{ secrets.FLUTTER_ROOT }}
    APPLE_ID: ${{ secrets.APPLE_ID }}
    GIT_URL: ${{ secrets.GIT_URL }}
    PGY_USER_KEY: ${{ secrets.PGY_USER_KEY }}
    PGY_API_KEY: ${{ secrets.PGY_API_KEY }}
    TEAM_ID: ${{ secrets.TEAM_ID }}
    ITC_TEAM_ID: ${{ secrets.ITC_TEAM_ID }}
    FASTLANE_USER: ${{ secrets.FASTLANE_USER }}
    FASTLANE_PASSWORD: ${{ secrets.FASTLANE_PASSWORD }}
    FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD: ${{ secrets.FASTLANE_APPLE_APPLICATION_SPECIFIC_PASSWORD }}
    FASTLANE_SESSION: ${{ secrets.FASTLANE_SESSION }}
    MATCH_PASSWORD: ${{ secrets.MATCH_PASSWORD }}
    MATCH_KEYCHAIN_NAME: ${{ secrets.MATCH_KEYCHAIN_NAME }}
    MATCH_KEYCHAIN_PASSWORD: ${{ secrets.MATCH_KEYCHAIN_PASSWORD }}
it shows this error:
No code signing identity found and can not create a new one because you enabled `readonly`
This is the relevant part of the log:
+-----------------------+---------+--------+
| Used plugins |
+-----------------------+---------+--------+
| Plugin | Version | Action |
+-----------------------+---------+--------+
| fastlane-plugin-pgyer | 0.2.2 | pgyer |
+-----------------------+---------+--------+
[14:58:29]: Sending anonymous analytics information
[14:58:29]: Learn more at https://docs.fastlane.tools/#metrics
[14:58:29]: No personal or sensitive data is sent.
[14:58:29]: You can disable this by adding `opt_out_usage` at the top of your Fastfile
[14:58:29]: ------------------------------
[14:58:29]: --- Step: default_platform ---
[14:58:29]: ------------------------------
[14:58:29]: Driving the lane 'ios beta' 🚀
[14:58:29]: --------------------------
[14:58:29]: --- Step: xcode_select ---
[14:58:29]: --------------------------
[14:58:29]: Setting Xcode version to /Applications/Xcode_12.4.app for all build steps
[14:58:29]: -----------------------------
[14:58:29]: --- Step: create_keychain ---
[14:58:29]: -----------------------------
[14:58:29]: Found keychain '~/Library/Keychains/***', creation skipped
[14:58:29]: If creating a new Keychain DB is required please set the `require_create` option true to cause the action to fail
[14:58:29]: $ security list-keychains -d user
[14:58:29]: ▸ "/Users/runner/Library/Keychains/***-db"
[14:58:29]: Found keychain '/Users/runner/Library/Keychains/***-db' in list-keychains, adding to search list skipped
[14:58:29]: -------------------
[14:58:29]: --- Step: is_ci ---
[14:58:29]: -------------------
[14:58:30]: -------------------
[14:58:30]: --- Step: match ---
[14:58:30]: -------------------
[14:58:30]: Successfully loaded '/Users/runner/work/flutter-netease-music/flutter-netease-music/ios/fastlane/Matchfile' 📄
+----------------+-----------------------------------------------------------------------------------------------------------------+
| Detected Values from './fastlane/Matchfile' |
+----------------+-----------------------------------------------------------------------------------------------------------------+
| git_url | *** |
| git_branch | master |
| storage_mode | git |
| type | adhoc |
| app_identifier | ["com.reddwarf.musicapp"] |
| username | *** |
+----------------+-----------------------------------------------------------------------------------------------------------------+
+--------------------------------+-----------------------------------------------------------------------------------------------------------------+
| Summary for match 2.191.0 |
+--------------------------------+-----------------------------------------------------------------------------------------------------------------+
| app_identifier | ["com.reddwarf.musicapp"] |
| git_url | *** |
| type | adhoc |
| readonly | true |
| keychain_name | *** |
| generate_apple_certs | true |
| skip_provisioning_profiles | false |
| username | *** |
| team_id | *** |
| storage_mode | git |
| git_branch | master |
| shallow_clone | false |
| clone_branch_directly | false |
| force | false |
| force_for_new_devices | false |
| skip_confirmation | false |
| skip_docs | false |
| platform | ios |
| derive_catalyst_app_identifier | false |
| fail_on_name_taken | false |
| skip_certificate_matching | false |
| skip_set_partition_list | false |
| verbose | false |
+--------------------------------+-----------------------------------------------------------------------------------------------------------------+
[14:58:30]: Cloning remote git repo...
[14:58:30]: If cloning the repo takes too long, you can use the `clone_branch_directly` option in match.
[14:58:30]: Checking out branch master...
[14:58:30]: 🔓 Successfully decrypted certificates repo
[14:58:30]: Couldn't find a valid code signing identity for distribution... creating one for you now
+---------------------------+-----------------------------------------------------+
| Lane Context |
+---------------------------+-----------------------------------------------------+
| DEFAULT_PLATFORM | ios |
| PLATFORM_NAME | ios |
| LANE_NAME | ios beta |
| KEYCHAIN_PATH | ~/Library/Keychains/*** |
| ORIGINAL_DEFAULT_KEYCHAIN | "/Users/runner/Library/Keychains/***-db" |
+---------------------------+-----------------------------------------------------+
[14:58:30]: No code signing identity found and can not create a new one because you enabled `readonly`
+------+------------------+-------------+
| fastlane summary |
+------+------------------+-------------+
| Step | Action | Time (in s) |
+------+------------------+-------------+
| 1 | default_platform | 0 |
| 2 | xcode_select | 0 |
| 3 | create_keychain | 0 |
| 4 | is_ci | 0 |
| 💥 | match | 0 |
+------+------------------+-------------+
[14:58:30]: fastlane finished with errors
What should I do to fix the problem?

I see that is_ci also ran. Does your match command look like this: match(..., readonly: is_ci, ...), and are you running it on a CI service (GitHub Actions, Jenkins, or similar)?
If so, run it locally first; that will generate all the relevant certificates and provisioning profiles. Then run it on your CI service again.
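If the adhoc certificate/profile really does not exist yet in the match storage repo, some run has to be allowed to create it; readonly: is_ci only blocks that on CI. A minimal sketch of the one-off local step (run on a machine that can sign in to the Apple Developer account; 'adhoc' matches the type shown in the match summary above):
cd ios
# readonly defaults to false when match is invoked directly, so this run
# may create the missing signing identity and push it to the match repo
bundle exec fastlane match adhoc
After that, the CI run with readonly: is_ci will simply download what was created.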

Problem with using GPU with Docker-compose

I want to run a container based on python:3.8.8-slim-buster that needs access to the GPU.
When I build it from this Dockerfile:
FROM python:3.8.8-slim-buster
CMD ["sleep", "infinity"]
and then run it with the "--gpus all" flag and exec nvidia-smi (see the commands sketched after the output below), I get a proper response:
Sat Jun 19 12:26:57 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.27 Driver Version: 465.27 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| N/A 45C P8 N/A / N/A | 301MiB / 1878MiB | 14% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
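(For reference, the build/run/exec steps described above correspond roughly to the following commands; the image and container names are just placeholders:)
docker build -t py38-gpu-test .
docker run -d --gpus all --name py38-gpu-test py38-gpu-test
docker exec py38-gpu-test nvidia-smi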
And when I use this docker-compose file:
services:
  test:
    image: tensorflow/tensorflow:2.5.0-gpu
    command: sleep infinity
    deploy:
      resources:
        reservations:
          devices:
            - capabilities: [gpu]
and exec nvidia-smi after running it, I get the same response.
But when I replace the image in the docker-compose file with python:3.8.8-slim-buster, like in the Dockerfile, I get this response:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "nvidia-smi": executable file not found in $PATH: unknown
I appreciate any help figuring this out.

Heroku pg:push puts my tables in the wrong schema

It looks like heroku pg:push has put all my data into a schema named after my local database, not into public where it would be accessible to my app.
How do I fix this?
Schema | Name | Type | Owner
--------------------+------------------------------+-------+----------------
information_schema | sql_features | table | postgres
information_schema | sql_implementation_info | table | postgres
information_schema | sql_languages | table | postgres
information_schema | sql_packages | table | postgres
information_schema | sql_parts | table | postgres
information_schema | sql_sizing | table | postgres
information_schema | sql_sizing_profiles | table | postgres
lorax_development | ac_coa | table | iykrnaofpnlzod
lorax_development | acbkacct | table | iykrnaofpnlzod
lorax_development | accommodation | table | iykrnaofpnlzod
lorax_development | accommodation_copy | table | iykrnaofpnlzod
lorax_development | address | table | iykrnaofpnlzod
lorax_development | advert | table | iykrnaofpnlzod
lorax_development | affiliation | table | iykrnaofpnlzod
lorax_development | agency | table | iykrnaofpnlzod
lorax_development | always | table | iykrnaofpnlzod
lorax_development | answer | table | iykrnaofpnlzod
lorax_development | ar_internal_metadata | table | iykrnaofpnlzod
lorax_development | best_month | table | iykrnaofpnlzod
lorax_development | bklin | table | iykrnaofpnlzod
...
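(For context, a push of this general form is presumably what produced the listing above, since the local database name matches the unexpected schema; the app name is a placeholder:)
heroku pg:push lorax_development DATABASE_URL --app <your-heroku-app>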

docker-flink not showing all log statements

I am using two docker-flink images with AMIDST and the following sample code. AMIDST is a probabilistic graphical model framework that supports Flink.
One image runs as the JobManager, the other as a TaskManager. The JobManager is reachable via DNS, and I provide my own log4j.properties based on the startup script bin/flink-console.sh used by these images.
public class ParallelMLExample {

    private static final Logger LOG = LoggerFactory.getLogger(ParallelMLExample.class);

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env;

        //Set-up Flink session
        env = ExecutionEnvironment.getExecutionEnvironment();
        env.getConfig().disableSysoutLogging();

        //generate a random dataset
        DataFlink<DataInstance> dataFlink = new DataSetGenerator().generate(env, 1234, 1000, 5, 0);

        //Creates a DAG with the NaiveBayes structure for the random dataset
        DAG dag = DAGGenerator.getNaiveBayesStructure(dataFlink.getAttributes(), "DiscreteVar4");
        LOG.info(dag.toString());

        //Create the Learner object
        ParameterLearningAlgorithm learningAlgorithmFlink = new ParallelMaximumLikelihood();

        //Learning parameters
        learningAlgorithmFlink.setBatchSize(10);
        learningAlgorithmFlink.setDAG(dag);

        //Initialize the learning process
        learningAlgorithmFlink.initLearning();

        //Learn from the flink data
        LOG.info("########## BEFORE UPDATEMODEL ##########");
        learningAlgorithmFlink.updateModel(dataFlink);
        LOG.info("########## AFTER UPDATEMODEL ##########");

        //Print the learnt Bayes Net
        BayesianNetwork bn = learningAlgorithmFlink.getLearntBayesianNetwork();
        LOG.info(bn.toString());
    }
}
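(The custom log4j.properties mentioned above is not included in the question; for reference, a file of roughly this shape, following Flink's console-logging defaults, is assumed:)
# assumed sketch only - the actual file is not shown in the question
log4j.rootLogger=INFO, console

# Flink's own chatter is silenced on purpose; application loggers stay at INFO
log4j.logger.org.apache.flink=WARN
log4j.logger.akka=INFO
log4j.logger.com.ness=INFO

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n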
The problem is that I only see LOG.info() entries up until the updateModel call; after that, silence. If I comment out this call, I can see the remaining entries. I am silencing Flink's own entries on purpose here.
Creating flink_jobmanager_1 ... done
Creating flink_jobmanager_1 ...
Creating flink_taskmanager_1 ... done
Attaching to flink_jobmanager_1, flink_taskmanager_1
jobmanager_1 | Starting Job Manager
jobmanager_1 | config file:
taskmanager_1 | Starting Task Manager
jobmanager_1 | jobmanager.rpc.address: jobmanager
taskmanager_1 | config file:
jobmanager_1 | jobmanager.rpc.port: 6123
jobmanager_1 | jobmanager.heap.mb: 1024
taskmanager_1 | jobmanager.rpc.address: jobmanager
jobmanager_1 | taskmanager.heap.mb: 1024
taskmanager_1 | jobmanager.rpc.port: 6123
jobmanager_1 | taskmanager.numberOfTaskSlots: 1
taskmanager_1 | jobmanager.heap.mb: 1024
jobmanager_1 | taskmanager.memory.preallocate: false
taskmanager_1 | taskmanager.heap.mb: 1024
jobmanager_1 | parallelism.default: 1
taskmanager_1 | taskmanager.numberOfTaskSlots: 2
jobmanager_1 | web.port: 8081
taskmanager_1 | taskmanager.memory.preallocate: false
jobmanager_1 | blob.server.port: 6124
taskmanager_1 | parallelism.default: 1
jobmanager_1 | query.server.port: 6125
taskmanager_1 | web.port: 8081
jobmanager_1 | Starting jobmanager as a console application on host c16d9156ff68.
taskmanager_1 | blob.server.port: 6124
taskmanager_1 | query.server.port: 6125
taskmanager_1 | Starting taskmanager as a console application on host 76c78378d35c.
jobmanager_1 | 2018-02-18 15:31:42,809 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
taskmanager_1 | 2018-02-18 15:31:43,897 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
jobmanager_1 | 2018-02-18 15:32:18,667 INFO com.ness.ParallelMLExample - DAG
jobmanager_1 | DiscreteVar0 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar1 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar2 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar3 has 1 parent(s): {DiscreteVar4}
jobmanager_1 | DiscreteVar4 has 0 parent(s): {}
jobmanager_1 |
jobmanager_1 | 2018-02-18 15:32:18,679 INFO com.ness.ParallelMLExample - ########## BEFORE UPDATEMODEL ##########
The updateModel method starts with a new Configuration(), then retrieves the data set. It then runs a map, reduce, and collect against the supplied data set, but it does not seem to be messing with the root loggers...
What am I missing?

Application templates and instances manager for docker deployment?

I'm looking into application deployment with docker containers for production on a few servers (not hundreds).
I can see deployment managers like docker-compose, which deploy according to a YAML service description file.
Official docker-compose.yml example file:
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis
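(Deploying such a description is then a single command:)
docker-compose up -d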
I'm looking for a solution to manage and produce these YAML files and to communicate with deployment managers like docker-compose.
This solution should permit managing application templates, deployed instances of them, their configuration, etc.
Illustration of it:
Docker
+-------------------+
docker-compose.yml | |
+---------------+ +-------+ | containers |
| APP manager |------->|Mysql_a| | +---------------+ |
| | |Mysql_b+-----------+ | |MySQL_a |Mysq| |
| MySQL Tpl | |Mysql_c| docker-compose | +---------------+ |
| Wordpress tpl | |Wp_a | | | |l_b |Mysql_c | |
| | +---+---+ | | +---------+-----+ |
| Mysql_a | | +------+ |Wp_a | | |
| Mysql_b +----------> | | | +---------+ | |
| Mysql_c | | | | | | |
| Wp_a | | | | | | |
+---------------+ | | | | | |
+---------------+ | +---------------+ |
+-------------------+
My first thought is Panamax, but is it appropriate? What other open source solutions exist?
