Spring Cloud Config server authentication - spring-security

Is it better to store the config server username and password as environment variables (on both the client and the server), or to use a keystore? The keystore password ends up being stored as an environment variable anyway, so why use a keystore at all? Or is there a better way to implement authentication for the Spring Cloud Config server?

In our case, the config server is ONLY for backend services, not for clients. We have multiple clients, such as iOS, Android and a web app, and each kind of client manages its own configuration.
Furthermore, we simply use HTTP basic authentication on the config server and store the username and password as environment variables on the instance, so the username and password are not exposed at the source-code level. In addition, our config server is not exposed to the public network.
Hope this will give you some hints.
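For illustration, a minimal sketch of that setup using standard Spring properties; the CONFIG_SERVER_USER and CONFIG_SERVER_PASSWORD environment variable names are just placeholders I chose, not anything prescribed:

# config server, application.properties (with spring-boot-starter-security on the classpath)
spring.security.user.name=${CONFIG_SERVER_USER}
spring.security.user.password=${CONFIG_SERVER_PASSWORD}

# config client, bootstrap.properties (or the equivalent in newer Spring Cloud versions)
spring.cloud.config.uri=http://config-server:8888
spring.cloud.config.username=${CONFIG_SERVER_USER}
spring.cloud.config.password=${CONFIG_SERVER_PASSWORD}

With this, the actual values live only in the environment of each process, never in source control.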

Related

Dynamically set Client URLs on Keycloak Startup

In Keycloak there is already a way to export the whole realm with all clients, users, roles, etc.
This results in a file that can be used to import that realm during Keycloak startup. This works like a charm, but the problem is that the URLs of the clients in Keycloak are hardcoded, in my case to localhost.
I'm looking for a way to set the base URLs of the clients dynamically, so that Keycloak can be deployed with an imported realm and everything works out of the box. Unfortunately, Keycloak doesn't seem to allow environment variables in the client configuration via the Keycloak Admin Dashboard.
As a consequence, using environment variables in the realm-export.json itself is not allowed either.
The Keycloak Docker image (jboss/keycloak) does not even include envsubst. It's really frustrating to already have a JSON file that does most of the configuration at container startup and still have to configure the client URLs manually afterwards.
Any solution? Thanks in advance.
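For what it's worth, a sketch of the kind of substitution described above, assuming a hypothetical __CLIENT_BASE_URL__ placeholder in realm-export.json and a CLIENT_BASE_URL environment variable; sed is used because envsubst is not in the image, and the file path is only an example:

# run before Keycloak starts and imports the realm
sed -i "s|__CLIENT_BASE_URL__|${CLIENT_BASE_URL}|g" /tmp/realm-export.json

The placeholder name, the variable name, and where to hook this in (e.g. a custom entrypoint script) all depend on your setup.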

Jenkins pointing server to domain created

Good morning,
I have created a Jenkins server in AWS and I am able to access the platform using the server's IP address; however, I want to access it more securely.
I have set up a subdomain on my hosting service, set the server's IP as an A record, and also defined this in the configuration section of Jenkins.
However, when I access the URL https://domainname I get nothing, but if I add :8080 at the end it takes me to the Jenkins platform.
What am I missing here?
Thanks
I recommend using an AWS Application Load Balancer in front of your Jenkins web server. Jenkins listens on port 8080 by default, while https://domainname goes to port 443 where nothing is listening, which is why the bare URL returns nothing.
The ALB can terminate HTTPS (hosting the certificate if you are using AWS Certificate Manager) and forward traffic to port 8080, and you can then point your DNS record at the ALB's name.
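A rough sketch of that forwarding path with the AWS CLI; every ID, ARN and name below is a placeholder, and the ALB and ACM certificate are assumed to already exist:

# target group that forwards to Jenkins on port 8080
aws elbv2 create-target-group --name jenkins-tg --protocol HTTP --port 8080 --vpc-id <vpc-id>
aws elbv2 register-targets --target-group-arn <tg-arn> --targets Id=<jenkins-instance-id>

# HTTPS listener on 443 that terminates TLS with the ACM certificate
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>

Finally, change the DNS record from an A record on the instance IP to an alias/CNAME pointing at the ALB's DNS name.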

Keycloak Docker import LDAP bind credentials without exposing them

I have a Keycloak Docker image and I import the configuration of my realm from a JSON file, and it works, so far so good.
But in my configuration there is an LDAP provider, which doesn't have the right credentials (Bind DN and Bind Credentials). They are not included in the JSON for security reasons, so I have to insert the credentials manually in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to manually insert the credentials after each startup.
I thought about inserting them in the JSON file inside the container with a shell script or whatever and then importing the resulting file when starting keycloak. The problem is that the credentials would then be exposed in clear text in the JSON file inside the container. So anybody with access to the container would be able to see them.
I'm thinking about inserting the credentials in that JSON file based on environment variables (these are securely stored in the Gitlab runner and masked in the logs), starting keycloak and then removing the JSON file on the fly after keycloak successfully starts without exposing the credentials in any of the layers. But I couldn't find a way to do that.
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (there are examples in the Keycloak documentation) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (see the import/export documentation) in your docker-compose, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
This way, your configuration is persistent, so you just enter your LDAP credentials the first time and don't need complex operations in your pipelines.
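A minimal docker-compose sketch of that idea; the Postgres credentials, service names and volume name are placeholders, the DB_* variables are the ones the jboss/keycloak image reads, and the realm import itself stays configured however you already do it:

version: '3'
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: <db-password>
    volumes:
      - keycloak_db:/var/lib/postgresql/data
  keycloak:
    image: jboss/keycloak
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: <db-password>
    command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
    ports:
      - "8080:8080"
    depends_on:
      - postgres
volumes:
  keycloak_db:

Because the realm now lives in the Postgres volume, the bind credentials you enter once in the Admin Console survive container restarts.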

com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException

Hello, I've followed the tutorial https://developers.sap.com/tutorials/s4sdk-odata-service-cloud-foundry.html
step by step so far, and I'm having issues running the solution on my local machine.
I'm running Windows 10 and, following the tutorial, I have set an environment variable as follows:
destinations=[{name: "ErpQueryEndpoint", url: "xxxx.s4hana.ondemand.com", username: "INT_USER", password: "xxxxxxxx"}]
When I run the solution on localhost I get this:
Message Error occured while handling request: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: com.sap.cloud.sdk.cloudplatform.connectivity.exception.DestinationAccessException: Failed to get destinations of provider service instance: Failed to get access token for destination service. If your application is running on Cloud Foundry, make sure to have a binding to both the destination service and the authorization and trust management (xsuaa) service, AND that you either properly secured your application or have set the "ALLOW_MOCKED_AUTH_HEADER" environment variable to true. Please note that authentication types with user propagation, for example, principal propagation or the OAuth2 SAML Bearer flow, require that you secure your application and will not work when using the "ALLOW_MOCKED_AUTH_HEADER" environment variable. If your application is not running on Cloud Foundry, for example, when deploying to a local container, consider declaring the "destinations" environment variable to configure destinations.
Be sure to set the destinations variable so it is visible to your application. You can check using System.getenv("destinations"); in your code.
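A minimal sketch of that check (the class name and messages are my own; the relevant call is just System.getenv):

public class DestinationsCheck {
    public static void main(String[] args) {
        // Read the variable exactly as the SDK would see it in this process.
        String destinations = System.getenv("destinations");
        if (destinations == null) {
            System.err.println("'destinations' is NOT visible to this process - set it in the same shell or IDE run configuration.");
        } else {
            System.out.println("destinations = " + destinations);
        }
    }
}

On Windows, a variable set via the System dialog is only picked up by processes started afterwards, so restart your IDE or terminal before re-checking.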

Access Pivotal SSO tile in local development

Our ops team has configured an SSO tile that connects to ADFS. I am building a sample application that utilizes an SSO service instance. I can deploy my application to PCF and remote-debug my SSO configuration. These things work.
What I need is a way to access the SSO service instance while I am developing on my PC. Otherwise, the only way to verify my code really works is to deploy the application to PCF and either add log statements or configure remote debugging, both of which are pretty time-consuming.
I looked into configuring ssh access to pivotal services. That works for database service instances, but not for SSO service instance. Has anyone figured it out?
After repeated trial and error, I found the solution. Posting it here in case someone else has a similar issue.
In PCF, add a new application under your SSO plan. The auth redirect URL for this application should point to your localhost; in my case it is http://localhost:8080.
Run cf env for your app, copy only the p-identity section, and save it to vcap_services.json. Then update the clientId and clientSecret with the values from the application created in the previous step.
Use the following command to start your application
VCAP_APPLICATION=true VCAP_SERVICES=$(cat vcap_services.json) SPRING_PROFILES_ACTIVE=... ./gradlew bootRun
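For reference, a sketch of what vcap_services.json might contain; the instance name and auth domain are placeholders, and the exact field names should be taken from your own cf env output rather than from this example:

{
  "p-identity": [
    {
      "label": "p-identity",
      "name": "my-sso-instance",
      "credentials": {
        "auth_domain": "https://my-plan.login.sys.example.com",
        "client_id": "<client id of the new application>",
        "client_secret": "<client secret of the new application>"
      }
    }
  ]
}

The file holds the whole VCAP_SERVICES value, which is why it can be passed directly as VCAP_SERVICES=$(cat vcap_services.json) in the command above.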
