Spring Cloud Data Flow Security Issue - oauth-2.0

We want to test Spring Cloud Data Flow security using the UAA server (Cloud Foundry). Please help us with an authentication failure.
Step 1: Download the UAA server WAR from Maven.
Step 2: Set up the UAA bundled Spring Boot project:
a. git clone https://github.com/pivotal/uaa-bundled.git
b. cd uaa-bundled
c. Copy uaa server war to src/main/resources
d. ./mvnw clean install
e. java -jar target/uaa-bundled-1.0.0.BUILD-SNAPSHOT.jar
The UAA server starts on port 8080.
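As a quick sanity check (not part of the original steps), you can confirm UAA is reachable before continuing; the /uaa/login page and the uaac target command used in Step 3 are assumed to be available:
curl -i http://localhost:8080/uaa/login
uaac target http://localhost:8080/uaa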
Step 3: Run the uaac commands:
uaac target http://localhost:8080/uaa
uaac token client get admin -s adminsecret
uaac client add dataflow --name dataflow --secret dataflow --scope cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,sample.create,sample.view,dataflow.create,dataflow.deploy,dataflow.destroy,dataflow.manage,dataflow.modify,dataflow.schedule,dataflow.view --authorized_grant_types password,authorization_code,client_credentials,refresh_token --authorities uaa.resource,dataflowcreate,dataflow.deploy,dataflow.destroy,dataflow.manage,dataflow.modify,dataflow.schedule,dataflow.view,sample.view,sample.create --redirect_uri http://localhost:9393/login --autoapprove openid
uaac group add "sample.view"
uaac group add "sample.create"
uaac group add "dataflow.view"
uaac group add "dataflow.create"
uaac user add springrocks -p mysecret --emails springrocks@someplace.com
uaac user add vieweronly -p mysecret --emails mrviewer@someplace.com
uaac member add "sample.view" springrocks
uaac member add "sample.create" springrocks
uaac member add "dataflow.view" springrocks
uaac member add "dataflow.create" springrocks
uaac member add "sample.view" vieweronly
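As a hedged sanity check after the commands above, the uaac listing commands (assuming they exist in your uaac version) can confirm what was created:
uaac clients   # should include the dataflow client
uaac users     # should list springrocks and vieweronly
uaac groups    # should include the dataflow.* and sample.* groups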
Running the curl command below, authentication succeeds:
C:\Users\rajesh>curl -v -d"username=springrocks&password=mysecret&client_id=dataflow&grant_type=password" -u "dataflow:dataflow" http://localhost:8080/uaa/oauth/token -d 'token_format=opaque'
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
* Server auth using Basic with user 'dataflow'
> POST /uaa/oauth/token HTTP/1.1
> Host: localhost:8080
> Authorization: Basic ZGF0YWZsb3c6ZGF0YWZsb3c=
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Length: 99
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 99 out of 99 bytes
< HTTP/1.1 200
< Cache-Control: no-store
< Pragma: no-cache
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: DENY
< X-Content-Type-Options: nosniff
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Fri, 21 May 2021 20:41:57 GMT
<
{"access_token":"eyJhbGciOiJIUzI1NiIsImprdSI6Imh0dHBzOi8vbG9jYWxob3N0OjgwODAvdWFhL3Rva2VuX2tleXMiLCJraWQiOiJsZWdhY3ktdG9rZW4ta2V5IiwidHlwIjoiSldUIn0.eyJqdGkiOiJmNDYyMmY4NDJmZTE0ZjVkYjM2MmFhOWM1ZD
k5ZTU2NyIsInN1YiI6IjcxYjQ2NWI0LWFkZGItNDNhMi1iYjk3LTgxMjJjOTgwZWM5MiIsInNjb3BlIjpbImRhdGFmbG93LnZpZXciLCJzY2ltLnVzZXJpZHMiLCJzYW1wbGUuY3JlYXRlIiwib3BlbmlkIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc
3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIiwiZGF0YWZsb3cuY3JlYXRlIiwic2FtcGxlLnZpZXciXSwiY2xpZW50X2lkIjoiZGF0YWZsb3ciLCJjaWQiOiJkYXRhZmxvdyIsImF6cCI6ImRhdGFmbG93IiwiZ3JhbnRfdHlwZSI6InBhc3N
3b3JkIiwidXNlcl9pZCI6IjcxYjQ2NWI0LWFkZGItNDNhMi1iYjk3LTgxMjJjOTgwZWM5MiIsIm9yaWdpbiI6InVhYSIsInVzZXJfbmFtZSI6InNwcmluZ3JvY2tzIiwiZW1haWwiOiJzcHJpbmdyb2Nrc0Bzb21lcGxhY2UuY29tIiwiYXV0aF90aW1lIjoxNj
IxNjI5NzE3LCJyZXZfc2lnIjoiODA1MTk3ODYiLCJpYXQiOjE2MjE2Mjk3MTcsImV4cCI6MTYyMTY3MjkxNywiaXNzIjoiaHR0cDovL2xvY2FsaG9zdDo4MDkwL3VhYS9vYXV0aC90b2tlbiIsInppZCI6InVhYSIsImF1ZCI6WyJzY2ltIiwiY2xvdWRfY29ud
HJvbGxlciIsInBhc3N3b3JkIiwiZGF0YWZsb3ciLCJvcGVuaW QiLCJzYW1wbGUiXX0.cbT2p9agOAxDfv2-kwM9XdaL-m1lnVC5_gKPxdxRRPQ","token_type":"bearer","id_token":"eyJhbGciOiJIUzI1NiIsImprdSI6Imh0dHBzOi8vbG9jYW
xob3N0OjgwODAvdWFhL3Rva2VuX2tleXMiLCJraWQiOiJsZWdhY3ktdG9rZW4ta2V5IiwidHlwIjoiSldUIn0.eyJzdWIiOiI3MWI0NjViNC1hZGRiLTQzYTItYmI5Ny04MTIyYzk4MGVjOTIiLCJhdWQiOlsiZGF0YWZsb3ciXSwiaXNzIjoiaHR0cDovL2xvY
2FsaG9zdDo4MDkwL3VhYS9vYXV0aC90b2tlbiIsImV4cCI6MTYyMTY3MjkxNywiaWF0IjoxNjIxNjI5NzE3LCJhbXIiOlsicHdkIl0sImF6cCI6ImRhdGFmbG93Iiwic2NvcGUiOlsib3BlbmlkIl0sImVtYWlsIjoic3ByaW5ncm9ja3NAc29tZXBsYWNlLmNv
bSIsInppZCI6InVhYSIsIm9yaWdpbiI6InVhYSIsImp0aSI6ImY0NjIyZjg0MmZlMTRmNWRiMzYyYWE5YzVkOTllNTY3IiwiZW1haWxfdmVyaWZpZWQiOnRydWUsImNsaWVudF9pZCI6ImRhdGFmbG93IiwiY2lkIjoiZGF0YWZsb3ciLCJncmFudF90eXBlIjo
icGFzc3dvcmQiLCJ1c2VyX25hbWUiOiJzcHJpbmdyb2NrcyIsInJldl9zaWciOiI4MDUxOTc4NiIsInVzZXJfaWQiOiI3MWI0NjViNC1hZGRiLTQzYTItYmI5Ny04MTIyYzk4MGVjOTIiLCJhdXRoX3RpbWUiOjE2MjE2Mjk3MTd9.4COLuUIisv2PMvFHewFta
Bhm6BgykMV6nLskhUM3Qac","refresh_token":"eyJhbGciOiJIUzI1NiIsImprdSI6Imh0dHBzOi8vbG9jYWxob3N0OjgwODA vdWFhL3Rva2VuX2tleXMiLCJraWQiOiJsZWdhY3ktdG9rZW4ta2V5IiwidHlwIjoiSldUIn0.eyJqdGkiOiIxOTQ4OT
ZiNDBlMGM0YWE1ODhkNzg2ODM1Zjg4ZDYwZS1yIiwic3ViIjoiNzFiNDY1YjQtYWRkYi00M2EyLWJiOTctODEyMmM5ODBlYzkyIiwiaWF0IjoxNjIxNjI5NzE3LCJleHAiOjE2MjQyMjE3MTcsImNpZCI6ImRhdGFmbG93IiwiY2xpZW50X2lkIjoiZGF0YWZsb
3ciLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgwOTAvdWFhL29hdXRoL3Rva2VuIiwiemlkIjoidWFhIiwiYXVkIjpbInNjaW0iLCJjbG91ZF9jb250cm9sbGVyIiwicGFzc3dvcmQiLCJkYXRhZmxvdyIsIm9wZW5pZCIsInNhbXBsZSJdLCJncmFudGVkX3Nj
b3BlcyI6WyJkYXRhZmxvdy52aWV3Iiwic2NpbS51c2VyaWRzIiwic2FtcGxlLmNyZWF0ZSIsIm9wZW5pZCIsImNsb3VkX2NvbnRyb2xsZXIucmVhZCIsInBhc3N3b3JkLndyaXRlIiwiY2xvdWRfY29udHJvbGxlci53cml0ZSIsImRhdGFmbG93LmNyZWF0ZSI
sInNhbXBsZS52aWV3Il0sImFtciI6WyJwd2QiXSwiYXV0aF90aW1lIjoxNjIxNjI5NzE3LCJncmFudF90eXBlIjoicGFzc3dvcmQiLCJ1c2VyX25hbWUiOiJzcHJpbmdyb2NrcyIsIm9yaWdpbiI6InVhYSIsInVzZXJfaWQiOiI3MWI0NjViNC1hZGRiLTQzYT
ItYmI5Ny04MTIyYzk4MGVjOTIiLCJyZXZfc2lnIjoiODA1MTk3ODYifQ.xZfW4vo26DUOlByX6yLVG4jmvq0qprdP4AufGA4B40Q","expires_in":43199,"scope":"dataflow.view scim.use rids sample.create openid cloud_controller.
read password.write cloud_controller.write dataflow.create sample.view","jti":"f4622f842fe14f5db362aa9c5d99e567"}* Connection #0 to host localhost left intact
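To double-check what UAA issued, the token could be introspected against the same endpoint the Data Flow configuration below points at; this is only a sketch, with $TOKEN standing for the access_token value above and the dataflow client (which holds the uaa.resource authority) making the call:
curl -u dataflow:dataflow -d "token=$TOKEN" http://localhost:8080/uaa/introspect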
Step 4: Run the Spring Cloud Data Flow server using the following application.yml.
application.yml:
spring:
  security:
    oauth2:
      client:
        registration:
          uaa:
            client-id: springrocks
            client-secret: mysecret
            redirect-uri: '{baseUrl}/login/oauth2/code/{registrationId}'
            authorization-grant-type: authorization_code
            scope:
              - openid
        provider:
          uaa:
            jwk-set-uri: http://localhost:8080/uaa/token_keys
            token-uri: http://localhost:8080/uaa/oauth/token
            user-info-uri: http://localhost:8080/uaa/userinfo
            user-name-attribute: springrocks@someplace.com
            authorization-uri: http://localhost:8080/uaa/oauth/authorize
      resourceserver:
        opaquetoken:
          introspection-uri: http://localhost:8080/uaa/introspect
          client-id: dataflow
          client-secret: dataflow
Run the following command:
java -jar spring-cloud-dataflow-server-2.7.2.jar --spring.config.additional-location=application.yml
The server starts on port 9393.
Step 5: Open the URL http://localhost:9393/dashboard.
Click on the OAuth2 Login link.
On the Cloud Foundry login page, enter the username and password.
But the authentication fails.
The UAA server logs are below:
[2021-05-23 11:43:15.641] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- DisableIdTokenResponseTypeFilter: Processing id_token disable filter
[2021-05-23 11:43:15.641] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- DisableIdTokenResponseTypeFilter: pre id_token disable:false pathinfo:null request_uri:/uaa/oauth/authorize response_type:code
[2021-05-23 11:43:15.641] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- DisableIdTokenResponseTypeFilter: post id_token disable:false pathinfo:null request_uri:/uaa/oauth/authorize response_type:code
[2021-05-23 11:43:15.641] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- SecurityFilterChainPostProcessor$HttpsEnforcementFilter: Filter chain 'uiSecurity' processing request GET /uaa/oauth/authorize
[2021-05-23 11:43:15.647] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... INFO --- SamlKeyManagerFactory: Loaded service provider certificate legacy-saml-key
[2021-05-23 11:43:15.649] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... INFO --- NonSnarlMetadataManager: Initialized local service provider for entityID: cloudfoundry-saml-login
[2021-05-23 11:43:15.650] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- NonSnarlMetadataManager: Found metadata EntityDescriptor with ID
[2021-05-23 11:43:15.651] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- FixHttpsSchemeRequest: Request X-Forwarded-Proto null
[2021-05-23 11:43:15.651] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- UaaSavedRequestCache: Removing DefaultSavedRequest from session if present
[2021-05-23 11:43:15.676] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... DEBUG --- SessionResetFilter: Evaluating user-id for session reset:e943b779-297b-4008-8a5d-4748cb2ef575
[2021-05-23 11:43:15.694] cloudfoundry-identity-server/uaa - 7344 [http-nio-8080-exec-4] .... INFO --- UaaAuthorizationEndpoint: Handling OAuth2 error: error="invalid_client", error_description="No client with requested id: springrocks"

The client-id and client-secret should be "dataflow". Here is my working configuration:
uaac script:
uaac target http://localhost:8080/uaa
uaac token client get admin -s adminsecret
uaac client add dataflow \
--name dataflow \
--secret dataflow \
--scope cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,sample.create,sample.view,dataflow.create,dataflow.deploy,dataflow.destroy,dataflow.manage,dataflow.modify,dataflow.schedule,dataflow.view \
--authorized_grant_types password,authorization_code,client_credentials,refresh_token \
--authorities uaa.resource,dataflow.create,dataflow.deploy,dataflow.destroy,dataflow.manage,dataflow.modify,dataflow.schedule,dataflow.view,sample.view,sample.create \
--redirect_uri http://localhost:9393/login \
--autoapprove openid
uaac group add "sample.view"
uaac group add "sample.create"
uaac group add "dataflow.view"
uaac group add "dataflow.create"
uaac group add "dataflow.deploy"
uaac group add "dataflow.destroy"
uaac group add "dataflow.manage"
uaac group add "dataflow.modify"
uaac group add "dataflow.schedule"
uaac user add admindf -p password --emails admindf@someplace.com
uaac user add vieweronly -p password --emails mrviewer@someplace.com
uaac member add "sample.view" admindf
uaac member add "sample.create" admindf
uaac member add "dataflow.view" admindf
uaac member add "dataflow.create" admindf
uaac member add "dataflow.deploy" admindf
uaac member add "dataflow.destroy" admindf
uaac member add "dataflow.manage" admindf
uaac member add "dataflow.modify" admindf
uaac member add "dataflow.schedule" admindf
uaac member add "sample.view" vieweronly
uaac member add "dataflow.view" vieweronly
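Before starting the server, the new client and user can be verified with a password-grant token request, mirroring the curl from the question (a sketch against the same local UAA):
uaac token owner get dataflow admindf -s dataflow -p password
curl -u "dataflow:dataflow" -d "username=admindf&password=password&grant_type=password" http://localhost:8080/uaa/oauth/token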
application.yml:
spring:
  security:
    oauth2:
      client:
        registration:
          uaa:
            client-id: dataflow
            client-secret: dataflow
            redirect-uri: '{baseUrl}/login/oauth2/code/{registrationId}'
            authorization-grant-type: authorization_code
            scope:
              - openid
              - dataflow.view
              - dataflow.create
              - dataflow.manage
              - dataflow.deploy
              - dataflow.destroy
              - dataflow.modify
              - dataflow.schedule
        provider:
          uaa:
            jwk-set-uri: http://localhost:8080/uaa/token_keys
            token-uri: http://localhost:8080/uaa/oauth/token
            user-info-uri: http://localhost:8080/uaa/userinfo
            user-name-attribute: user_name
            authorization-uri: http://localhost:8080/uaa/oauth/authorize
      resourceserver:
        opaquetoken:
          introspection-uri: http://localhost:8080/uaa/introspect
          client-id: dataflow
          client-secret: dataflow
  cloud:
    dataflow:
      security:
        authorization:
          provider-role-mappings:
            uaa:
              map-oauth-scopes: true
              role-mappings:
                ROLE_VIEW: dataflow.view
                ROLE_CREATE: dataflow.create
                ROLE_MANAGE: dataflow.manage
                ROLE_DEPLOY: dataflow.deploy
                ROLE_DESTROY: dataflow.destroy
                ROLE_MODIFY: dataflow.modify
                ROLE_SCHEDULE: dataflow.schedule
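With this application.yml in place, the server is started the same way as in Step 4 of the question:
java -jar spring-cloud-dataflow-server-2.7.2.jar --spring.config.additional-location=application.yml
Then log in at http://localhost:9393/dashboard as admindf; the dataflow.* scopes granted to that user are mapped to the corresponding Data Flow roles by the provider-role-mappings block above.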

Related

AKHQ ON AKS: Failed to read configuration file

When I try to deploy AKHQ on AKS I get the following error:
2022-01-27 14:07:06,353 ERROR main i.m.runtime.Micronaut Error starting Micronaut server: Failed to read configuration file: /app/application.yml
The configuration file (application.yml) looks like this:
micronaut:
  security:
    enabled: false
akhq:
  server:
    access-log: # Access log configuration (optional)
      enabled: true # true by default
      name: org.akhq.log.access # Logger name
      format: "[Date: {}] [Duration: {} ms] [Url: {} {}] [Status: {}] [Ip: {}] [User: {}]" # Logger format
  # list of kafka cluster available for akhq
  connections:
    kafka-cluster-ssl:
      properties:
        bootstrap.servers: "FQN-Address-01:9093,FQN-Address-02:9093,FQN-Address-03:909"
        security.protocol: SSL
        ssl.truststore.location: /app/truststore.jks
        ssl.truststore.password: truststor-pwd
        ssl.keystore.location: /app/keystore.jks
        ssl.keystore.password: keystore-pwd
        ssl.key.password: key-pwd
I also granted read permission to the file in the Dockerfile, but that didn't help.
Dockerfile
FROM tchiotludo/akhq:0.20.0
# add ssl producer/consumer config and root ca file
ADD ./resources/ /app/
USER root
RUN chmod +r application.yml
RUN chmod +x gen-certs.sh
RUN ./gen-certs.sh
I used an older version of the AKHQ Docker image, and AKHQ is running now.
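For reference, a minimal sketch of that workaround; the exact tag that works is not confirmed here, and 0.19.0 is only an assumption:
FROM tchiotludo/akhq:0.19.0
ADD ./resources/ /app/
USER root
RUN chmod +r application.yml
RUN chmod +x gen-certs.sh
RUN ./gen-certs.sh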

Setup Rancher to manage my Kubernetes Cluster

I want to set up Rancher to manage my Kubernetes cluster.
I installed Rancher on Ubuntu, and when I try to connect I get the following error.
Steps that I ran:
curl --insecure -sfL https://localhost/v3/import/zkzmqg5kjpcqmw9qplrskzwwvwhgbbzdk2z9dn579rhpp66wt65wgj.yaml | kubectl apply -f -
I got this error:
curl --insecure -sfL https://localhost/v3/import/zkzmqg5kjpcqmw9qplrskzwwvwhgbbzdk2z9dn579rhpp66wt65wgj.yaml | kubectl apply -f -
Error from server (Forbidden): <html><head><meta http-equiv='refresh' content='1;url=/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s'/><script>window.location.replace('/login?from=%2Fswagger-2.0.0.pb-v1%3Ftimeout%3D32s');</script></head><body style='background-color:white; color:white;'>
Authentication required
<!--
You are authenticated as: anonymous
Groups that you are in:
Permission you need to have (but didn't): hudson.model.Hudson.Read
... which is implied by: hudson.security.Permission.GenericRead
... which is implied by: hudson.model.Hudson.Administer
-->
Any idea how I can fix the problem?
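The HTML in that error looks like a Jenkins login page rather than a Kubernetes API response, so one thing worth checking (an assumption, not a confirmed diagnosis) is which server kubectl is actually talking to:
kubectl config view --minify   # shows the server address of the current context
kubectl cluster-info           # should be answered by the Kubernetes API server, not another web app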

IBM Cloud Private 2.1.0.3 The conditional check failed

I am trying to install IBM Cloud Private Community Edition but struggle with the execution of the sudo docker run command from the installation instructions:
> sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
When I execute it, it returns the following output with an error message:
user#kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$ sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0.3 install
PLAY [Checking Python interpreter] *********************************************
TASK [Checking Python interpreter] *********************************************
changed: [127.0.0.1]
PLAY [Checking prerequisites] **************************************************
TASK [Gathering Facts] *********************************************************
ok: [127.0.0.1]
TASK [docker-engine-check : Getting Docker engine version] *********************
changed: [127.0.0.1]
TASK [docker-engine-check : Checking docker engine if installed] ***************
changed: [127.0.0.1]
TASK [docker-engine : include] *************************************************
TASK [docker-engine : include] *************************************************
TASK [containerd-engine-check : Getting containerd version] ********************
TASK [containerd-engine-check : Checking cri-containerd if installed] **********
TASK [containerd-engine : include] *********************************************
TASK [containerd-engine : include] *********************************************
TASK [network-check : Checking for the network pre-check file] *****************
ok: [127.0.0.1 -> localhost]
TASK [network-check : include_tasks] *******************************************
included: /installer/playbook/roles/network-check/tasks/calico.yaml for 127.0.0.1
TASK [network-check : Calico Validation - Verifying hostname for lowercase] ****
TASK [network-check : Calico Validation - Initializing interface list to be verified] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is first-found] ***
TASK [network-check : Calico Validation - Updating regex string to match interfaces to be excluded] ***
TASK [network-check : Calico Validation - Getting list of interfaces to be considered] ***
TASK [network-check : Calico Validation - Excluding default interface if defined] ***
TASK [network-check : Calico Validation - Finding Interface reg-ex when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is interface(reg-ex)] ***
TASK [network-check : Calico Validation - Finding Domain/IP when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding IP for the Domain when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when lo is found] ***
changed: [127.0.0.1]
TASK [network-check : Calico Validation - Finding Interface when autodetection_method is can-reach] ***
ok: [127.0.0.1]
TASK [network-check : Calico Validation - Finding MTU for the detected Interface(s)] ***
fatal: [127.0.0.1]: FAILED! => {"msg": "The conditional check 'hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined' failed. The error was: error while evaluating conditional (hostvars[inventory_hostname]['ansible_'~item]['mtu'] is defined): 'dict object' has no attribute u'ansible_'\n\nThe error appears to have been in '/installer/playbook/roles/network-check/tasks/calico.yaml': line 86, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Calico Validation - Finding MTU for the detected Interface(s)\n ^ here\n"}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
127.0.0.1 : ok=12 changed=6 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
user#kim:/opt/ibm-cloud-private-ce-2.1.0.3/cluster$
I am working on Ubuntu 14.04 with Docker version 17.12.1-ce, build 7390fc6.
My hosts file looks like this:
[master]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[worker]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[proxy]
127.0.0.1 ansible_user="user" ansible_ssh_pass="6CEd29CN" ansible_become=true ansible_become_pass="6CEd29CN" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
#[management]
#4.4.4.4
#[va]
#5.5.5.5
The config.yaml file looks like this:
# Licensed Materials - Property of IBM
# IBM Cloud private
# # Copyright IBM Corp. 2017 All Rights Reserved
# US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
---
###### docker0: 172.17.0.1
###### eth0: 192.168.240.14
## Network Settings
network_type: calico
# network_helm_chart_path: < helm chart path >
## Network in IPv4 CIDR format
network_cidr: 127.0.0.1/8
## Kubernetes Settings
service_cluster_ip_range: 127.0.0.1/24
## Makes the Kubelet start if swap is enabled on the node. Remove
## this if your production env want to disble swap.
kubelet_extra_args: ["--fail-swap-on=false"]
# cluster_domain: cluster.local
# cluster_name: mycluster
# cluster_CA_domain: "{{ cluster_name }}.icp"
# cluster_zone: "myzone"
# cluster_region: "myregion"
## Etcd Settings
etcd_extra_args: ["--grpc-keepalive-timeout=0", "--grpc-keepalive-interval=0", "--snapshot-count=10000"]
## General Settings
# wait_for_timeout: 600
# docker_api_timeout: 100
## Advanced Settings
default_admin_user: user
default_admin_password: 6CEd29CN
# ansible_user: <username>
# ansible_become: true
# ansible_become_password: <password>
## Kubernetes Settings
# kube_apiserver_extra_args: []
# kube_controller_manager_extra_args: []
# kube_proxy_extra_args: []
# kube_scheduler_extra_args: []
## Enable Kubernetes Audit Log
# auditlog_enabled: false
## GlusterFS Settings
# glusterfs: false
## GlusterFS Storage Settings
# storage:
# - kind: glusterfs
# nodes:
# - ip: <worker_node_m_IP_address>
# device: <link path>/<symlink of device aaa>,<link path>/<symlink of device bbb>
# - ip: <worker_node_n_IP_address>
# device: <link path>/<symlink of device ccc>
# - ip: <worker_node_o_IP_address>
# device: <link path>/<symlink of device ddd>
# storage_class:
# name:
# default: false
# volumetype: replicate:3
## Network Settings
## Calico Network Settings
# calico_ipip_enabled: true
# calico_tunnel_mtu: 1430
calico_ip_autodetection_method: can-reach=127.0.0.1
## IPSec mesh Settings
## If user wants to configure IPSec mesh, the following parameters
## should be configured through config.yaml
# ipsec_mesh:
# enable: true
# interface: <interface name on which IPsec will be enabled>
# subnets: []
# exclude_ips: "<list of IP addresses separated by a comma>"
# kube_apiserver_secure_port: 8001
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# cluster_lb_address: none
## External loadbalancer IP or domain
## Or floating IP in OpenStack environment
# proxy_lb_address: none
## Install in firewall enabled mode
# firewall_enabled: false
## Allow loopback dns server in cluster nodes
# loopback_dns: false
## High Availability Settings
# vip_manager: etcd
## High Availability Settings for master nodes
# vip_iface: eth0
# cluster_vip: 127.0.1.1
## High Availability Settings for Proxy nodes
# proxy_vip_iface: eth0
# proxy_vip: 127.0.1.1
## Federation cluster Settings
# federation_enabled: false
# federation_cluster: federation-cluster
# federation_domain: cluster.federation
# federation_apiserver_extra_args: []
# federation_controllermanager_extra_args: []
# federation_external_policy_engine_enabled: false
## vSphere cloud provider Settings
## If user wants to configure vSphere as cloud provider, vsphere_conf
## parameters should be configured through config.yaml
# kubelet_nodename: hostname
# cloud_provider: vsphere
# vsphere_conf:
# user: <vCenter username for vSphere cloud provider>
# password: <password for vCenter user>
# server: <vCenter server IP or FQDN>
# port: [vCenter Server Port; default: 443]
# insecure_flag: [set to 1 if vCenter uses a self-signed certificate]
# datacenter: <datacenter name on which Node VMs are deployed>
# datastore: <default datastore to be used for provisioning volumes>
# working_dir: <vCenter VM folder path in which node VMs are located>
## Disabled Management Services Settings
## You can disable the following management services: ["service-catalog", "metering", "monitoring", "istio", "vulnerability-advisor", "custom-metrics-adapter"]
disabled_management_services: ["istio", "vulnerability-advisor", "custom-metrics-adapter"]
## Docker Settings
# docker_env: []
# docker_extra_args: []
## The maximum size of the log before it is rolled
# docker_log_max_size: 50m
## The maximum number of log files that can be present
# docker_log_max_file: 10
## Install/upgrade docker version
# docker_version: 17.12.1
## ICP install docker automatically
# install_docker: true
## Ingress Controller Settings
## You can add your ingress controller configuration, and the allowed configuration can refer to
## https://github.com/kubernetes/ingress-nginx/blob/nginx-0.9.0/docs/user-guide/configmap.md#configuration-options
# ingress_controller:
# disable-access-log: 'true'
## Clean metrics indices in Elasticsearch older than this number of days
# metrics_max_age: 1
## Clean application log indices in Elasticsearch older than this number of days
# logs_maxage: 1
## Uncomment the line below to install Kibana as a managed service.
# kibana_install: true
# STARTING_CLOUDANT
# cloudant:
# namespace: kube-system
# pullPolicy: IfNotPresent
# pvPath: /opt/ibm/cfc/cloudant
# database:
# password: fdrreedfddfreeedffde
# federatorCommand: hostname
# federationIdentifier: "-0"
# readinessProbePeriodSeconds: 2
# readinessProbeInitialDelaySeconds: 90
# END_CLOUDANT
My goal is to set up ICP on my local machine (single node) and I'm very thankful for any help regarding this issue.
So I resolved this error by uncommenting calico_ipip_enabled: true and setting it to false.
After that, though, I got another error because of my loopback IP:
fatal: [127.0.0.1] => A loopback IP is used in your DNS server configuration. For more details, see https://ibm.biz/dns-fails.
But there is a fix/workaround: setting loopback_dns: true, as mentioned in the link.
I can't close this question here but this is how I resolved it.
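For reference, the two config.yaml changes described above would look roughly like this (a sketch of the edits, not the full file):
## Calico Network Settings
calico_ipip_enabled: false
## Allow loopback dns server in cluster nodes
loopback_dns: true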
The supported OS for IBM Cloud Private is Ubuntu 16.04. Please check the URL below:
https://www.ibm.com/support/knowledgecenter/SSBS6K_2.1.0.3/supported_system_config/supported_os.html
Please check the system requirements.
I am also trying to install the setup; I didn't face this issue.
Normally, you cannot specify 127.0.0.1 as an ICP node in the ICP hosts file.
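If you do need to move off 127.0.0.1, here is a sketch of the hosts file using the node's real interface address (192.168.240.14 appears as eth0 in the config.yaml comments above; adjust the user, password, and port to your environment):
[master]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[worker]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"
[proxy]
192.168.240.14 ansible_user="user" ansible_ssh_pass="<password>" ansible_become=true ansible_become_pass="<password>" ansible_port="22" ansible_ssh_common_args="-oPubkeyAuthentication=no"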

KeyCloak bearer-only client shouldn't be able to invoke a secured endpoint when its credentials are wrong, but it can. Why?

I have a Spring Boot application with this configuration:
server:
  port: 9292
keycloak:
  auth-server-url: http://localhost:8180/auth
  realm: SampleRealm
  resource: non-existing
  public-client: false
  principal-attribute: preferred_username
  credentials:
    secret: wrong-secret
  bearer-only: true
I get an access token using another valid client (cli1, secret1):
curl -X POST \
-H "Authorization: Basic c2ItYXBwOmEyY2ViZmI2LTBjMzgtNDNiNS1hMDAwLThhYmUzYjU5YjJiMQ==" \
-H "Content-Type: application/x-www-form-urlencoded" \
-d 'username=someuser&password=somepassword&grant_type=password' \
"http://localhost:8180/auth/realms/SampleRealm/protocol/openid-connect/token"
Now I use that bearer token to invoke my Spring Boot Service:
curl -X GET \
http://localhost:9292/me \
-H 'Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJPVE5xWF9jYWRXbEc1dGZYRmJVdEJ2V25hb2NTTGhuSm9LWndpOGxkYjZZIn0.eyJqdGkiOiI5ZTdlMjRmZC1lZDZmLTQzZTItYTFjZC1iMjlkMWRkN2I5ZWUiLCJleHAiOjE1MTk2NjQwODMsIm5iZiI6MCwiaWF0IjoxNTE5NjYzNzgzLCJpc3MiOiJodHRwOi8vbG9jYWxob3N0OjgxODAvYXV0aC9yZWFsbXMvU2FtcGxlUmVhbG0iLCJhdWQiOiJzYi1hcHAiLCJzdWIiOiIyNTFlZjNhNS1iNTRkLTQ4MmMtYTAzZS0wN2MzN2M0OGE5ZWIiLCJ0eXAiOiJCZWFyZXIiLCJhenAiOiJzYi1hcHAiLCJhdXRoX3RpbWUiOjAsInNlc3Npb25fc3RhdGUiOiJlNjdjMDBiYy0zODUxLTQ4ZjYtYTIxZi1hNDVhOGI0NGQyOGMiLCJhY3IiOiIxIiwiYWxsb3dlZC1vcmlnaW5zIjpbXSwicmVhbG1fYWNjZXNzIjp7InJvbGVzIjpbInVtYV9hdXRob3JpemF0aW9uIiwidXNlciJdfSwicmVzb3VyY2VfYWNjZXNzIjp7ImFjY291bnQiOnsicm9sZXMiOlsibWFuYWdlLWFjY291bnQiLCJtYW5hZ2UtYWNjb3VudC1saW5rcyIsInZpZXctcHJvZmlsZSJdfX0sIm5hbWUiOiJKb3NlIiwicHJlZmVycmVkX3VzZXJuYW1lIjoiamFpbmlnbyIsImdpdmVuX25hbWUiOiJKb3NlIiwiZW1haWwiOiJqYWluaWdvQHByb2ZpbGUuZXMifQ.bkMPSEvUHnVr5QoCsldKcFjKw3E_3Rhdu_SJ6LbgUehysAsLuG6pyjAQ4uqShTKphuXjOUf3E1eFMlttKSxZstCqP7iRU-OyHueGZ-_zGNx1ycvDBWSxCSmQufu9cx_dmnYW4NR9u5sSsZ052eDX0T0VgCvxeTtLJCsoH741SmJIVUvzrkPagKF_M_INVBQ3qaOds74o088qJy4GVJ8eZGqgsW9YOW6nNLV6kERwLAD9WZJoEARCdTBuGARTVJZuJ0lYVI0-jI0wN88T1G3vX3DZS0HIAROmgIait89PZ5wyfOu9u6ohTyFsi3uHV6uSJcN7x7t51snnBpr9KSSMMQ' \
-H 'Cache-Control: no-cache'
The Spring Boot app accepts the call to the secured endpoint, but it shouldn't, because the resource (non-existing) and secret (wrong-secret) don't actually exist; they haven't even been configured in KeyCloak! Why is this working? Shouldn't the client have its client-id and client-secret validated?
o.k.a.BearerTokenRequestAuthenticator : Verifying access_token
o.k.a.BearerTokenRequestAuthenticator : access_token: xxxxxxxxxx.signature
o.k.a.rotation.JWKPublicKeyLocator : Going to send request to retrieve new set of realm public keys for client non-existing
o.k.a.rotation.JWKPublicKeyLocator : Realm public keys successfully retrieved for client non-existing. New kids: [OTNqX_cadWlG5tfXFbUtBvWnaocSLhnJoKZwi8ldb6Y]
o.k.a.BearerTokenRequestAuthenticator : successful authorized
"Realm public keys successfully retrieved for client non-existing"? What? The non-existing client doesn't exist!
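Judging from that log, a bearer-only adapter only fetches the realm's public keys and verifies the token signature; the resource name and secret are not exchanged for plain bearer requests. One commonly suggested hardening (an assumption, requiring a Keycloak adapter version that supports the verify-token-audience property and a client that actually exists in the realm) is to reject tokens whose audience does not include this client:
keycloak:
  auth-server-url: http://localhost:8180/auth
  realm: SampleRealm
  resource: my-api              # hypothetical client id registered in SampleRealm
  bearer-only: true
  verify-token-audience: true   # reject tokens not issued for this client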

Openldap 2.4 within Docker container

I'm setting up an OpenLDAP server (slapd) within a Docker container. I just took the latest CentOS image (tags: latest, centos7, 7)
and then installed the following packages:
openldap-servers-2.4.44-5.el7.x86_64
openldap-2.4.44-5.el7.x86_64
openldap-clients-2.4.44-5.el7.x86_64
openldap-devel-2.4.44-5.el7.x86_64
and then just started the service with /usr/sbin/slapd -F /etc/openldap/slapd.d/
Then I try to add the domain and root user to the LDAP configuration using this LDIF file (db.ldif):
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootDN
olcRootDN: cn=myadminuser,dc=mydomain,dc=com

dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcRootPW
olcRootPW: {SSHA}blablablabla
Then when I run ldapmodify -Y EXTERNAL -H ldapi:/// -f db.ldif
it throws this: ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)
I see the port is open because telnet can connect to it, and ldapsearch from another machine actually works: ldapsearch -h $myserver -p $myport -x
It responds with this:
# extended LDIF
#
# LDAPv3
# base <> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 32 No such object
# numResponses: 1
I really don't know what I'm missing.
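-Y EXTERNAL binds over the ldapi:/// Unix socket rather than the TCP port, so one thing to try (an assumption, since the active slapd listeners aren't shown above) is starting slapd with the ldapi listener enabled explicitly:
/usr/sbin/slapd -h "ldap:/// ldapi:///" -F /etc/openldap/slapd.d/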
