I have a Keycloak Docker image and I import my realm configuration from a JSON file. It works, so far so good.
But my configuration contains an LDAP provider that doesn't have the right credentials (Bind DN and Bind Credentials). They are left out of the JSON for security reasons, so I have to enter the credentials manually in the Admin Console after startup.
I am now trying to find a secure way to automate that without exposing the credentials in clear text, so that we don't have to manually insert the credentials after each startup.
I thought about inserting them into the JSON file inside the container with a shell script or similar and then importing the resulting file when starting Keycloak. The problem is that the credentials would then sit in clear text in the JSON file inside the container, so anybody with access to the container would be able to see them.
I'm thinking about inserting the credentials into that JSON file from environment variables (these are securely stored in the GitLab runner and masked in the logs), starting Keycloak, and then removing the JSON file on the fly once Keycloak has started successfully, without exposing the credentials in any of the layers. But I couldn't find a way to do that.
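Roughly what I have in mind is an entrypoint wrapper along these lines (just a sketch; the placeholder names, file paths and the sleep-based cleanup are assumptions, not something I have working):

#!/bin/sh
# fill the placeholders in the realm template from (masked) environment variables
sed -e "s|__LDAP_BIND_DN__|${LDAP_BIND_DN}|g" \
    -e "s|__LDAP_BIND_CREDENTIAL__|${LDAP_BIND_CREDENTIAL}|g" \
    /opt/jboss/realm-template.json > /tmp/realm.json

# start Keycloak in the background with the realm import
/opt/jboss/tools/docker-entrypoint.sh -b 0.0.0.0 -Dkeycloak.import=/tmp/realm.json &
KC_PID=$!

# naive cleanup once startup should be done; a proper health check would be better
sleep 120 && rm -f /tmp/realm.json
wait $KC_PID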
Can anybody think of an idea of how this can be achieved?
Any help would be much appreciated.
A workaround is to bind your Keycloak instance to an external database with a persistent volume (examples from Keycloak here) and to change the migration strategy from OVERWRITE_EXISTING to IGNORE_EXISTING (documentation here) in your docker-compose, like this:
command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
In this way, your configuration is persistent so you just enter your LDAP credentials the first time and don't need complex operations with pipelines.
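For context, here is a sketch of what the compose file could look like with the jboss/keycloak image and a PostgreSQL backend (all names, passwords and paths below are placeholders):

version: '3'
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: change_me
    volumes:
      - keycloak_db:/var/lib/postgresql/data   # persistent volume keeps the realm config
  keycloak:
    image: jboss/keycloak
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_PASSWORD: change_me
      KEYCLOAK_IMPORT: /tmp/realm.json   # however you already import your realm
    command: '-b 0.0.0.0 -Dkeycloak.migration.strategy=IGNORE_EXISTING'
    ports:
      - '8080:8080'
volumes:
  keycloak_db:

With IGNORE_EXISTING the import only applies when the realm is not already in the database, so the LDAP credentials you enter by hand survive restarts.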
After some time of trying, I managed to get InfluxDB and Grafana to play together in my Docker environment, and then I had a look into my InfluxDB bucket. From everything I can see, it doesn't look like HA is actually writing anything to that bucket.
Going through the UI of InfluxDB I see there are buckets and sources, etc. and I wonder if I have to somehow add HA there as a source.
On the other side, I have my configuration.yaml in HA, and there it looks like this:
influxdb:
  host: 192.168.1.110
  port: !secret influx_port
  database: home_assistant
  username: !secret influx_username
  password: !secret influx_password
Is there any way I can figure out if HA is actually writing to the bucket, or can you already tell that nothing is being written because I am missing an essential part?
In the standard Home Assistant installation there is an entity called Sun, with the entity id sun.sun. I would expect this entity to be logged in the database, but I can't find it there.
The HomeAssistant logs show the following error:
InfluxDB database is not accessible due to '401: {"code":"unauthorized","message":"Unauthorized"}'. Please check that the database, username and password are correct and that the specified user has the correct permissions set.
The name of the database is correct, and the username and password are the ones I use to log in to InfluxDB.
When I look at the directory permissions in the InfluxDB Docker container, they belong to a DSM user who is in the user group.
Changing the credentials in my secrets.yaml to those of the DSM user leads to the same error message as before.
I am running
Home Assistant 2023.1.7
Frontend 20230110.0 - latest
and
InfluxDB v2.6.1
Alright, for those interested... I managed.
Since I do not have any certificates (yet), the connection runs over HTTP, while the InfluxDB v2 integration points at HTTPS by default. Hence, I added a simple
ssl: false
to the configuration file.
Then I got an error message basically saying the bucket "Home Assistant" was not found. No wonder, that's not the name of the bucket... So, in v2, you do not specify a database (name) in the configuration, but a bucket. Initially I expected the token to take care of that, but that's not the case, so I added the line
bucket: !secret influx_bucket
to my configuration and defined the name of the bucket in the secrets file.
Checked the configuration file, restarted HA, and Bob is your uncle...
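For anyone else hitting this, the influxdb section of my configuration.yaml ends up looking roughly like this (the api_version, token and organization lines are what the v2 flavour of the integration expects; the secret names are just mine):

influxdb:
  api_version: 2
  ssl: false
  host: 192.168.1.110
  port: !secret influx_port
  token: !secret influx_token
  organization: !secret influx_org
  bucket: !secret influx_bucket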
I am currently running the latest versions of NiFi and PostgreSQL via Docker Compose.
As of the 1.14 release of NiFi, when you access the UI in the browser it connects via HTTPS and asks you for a username and password every time you log in. It's too cumbersome to go to the nifi-app.log file and look for the generated credentials every time I access the UI. I know you can change the setting that keeps HTTPS as the default, but I am not sure how to do that in a Docker container. Can anyone help me with this?
You could use one of the environment variables described in the documentation, such as AUTH.
You can find the full explanations here
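For example, a sketch based on the apache/nifi image documentation (the username and password values are placeholders; the password must be at least 12 characters) that pins the single-user credentials so you don't have to dig them out of nifi-app.log:

services:
  nifi:
    image: apache/nifi:latest
    ports:
      - '8443:8443'
    environment:
      # fixed single-user credentials instead of the auto-generated ones in nifi-app.log
      SINGLE_USER_CREDENTIALS_USERNAME: admin
      SINGLE_USER_CREDENTIALS_PASSWORD: ctsBtRBKHRAx69EqUghvvgEvjnaLjFEB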
In Keycloak there is already a way to export the whole realm with all clients, users, roles etc.
This results in a file that can be used to import that realm during keycloak startup. This works like a charm, but the problem is that the URLs of the clients in keycloak are hardcoded, in my case to localhost.
I'm looking for a way to set the Base URLs of the clients dynamically, so that Keycloak can be deployed with an imported realm and everything works out of the box. Unfortunately, Keycloak doesn't seem to allow environment variables in the client configuration via the Keycloak Admin Dashboard.
As a consequence, using environment variables in the realm-export.json itself is also not allowed :/
The Keycloak Docker container (jboss/keycloak) does not even have envsubst. It's really frustrating to already have a JSON file that does most of the configuration at container startup and still have to configure the client URLs manually afterwards.
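Something as simple as this at container startup is the kind of thing I'm after (just a sketch; the placeholder and variable names are mine):

# realm-export.json would contain e.g. "baseUrl": "__CLIENT_BASE_URL__"
sed -i "s|__CLIENT_BASE_URL__|${CLIENT_BASE_URL}|g" /tmp/realm-export.json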
Any solution? Thanks in advance.
I am running an application inside of Docker that requires me to leverage google-bigquery. When I run it outside of Docker, I just have to go to the link below (redacted) and authorize. However, the link doesn't work when I copy-paste it from the Docker terminal. I have tried port mapping as well and no luck either.
Code:
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
# Make clients.
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
Response:
requests_oauthlib.oauth2_session - DEBUG - Generated new state
Please visit this URL to authorize this application:
Please see the available solutions on this page; it's constantly updated:
gcloud credential helper
Standalone Docker credential helper
Access token
Service account key
In short, you need to use a service account key file. Make sure you either use a secret manager or issue a service account key file specifically for the Docker image.
You need to place the service account key file into the Docker container either at build or runtime.
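As a sketch (the paths and the image name are placeholders), you can mount the key at runtime and point the client libraries at it via the standard GOOGLE_APPLICATION_CREDENTIALS variable, in which case bigquery.Client() can also pick up the credentials automatically:

docker run \
  -v /host/path/key.json:/secrets/key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
  my-bigquery-app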
I'm running a docker private registry inside a kubernetes cluster using the standard registry:2 image. The image has basic functionality to provide user authentication using the Apache htpasswd utility.
In my case multiple users need to access the repository, so I need to set up usernames and passwords for several different users. What would be the best approach to implement this?
I got the single-user htpasswd-based authentication working, but I can't seem to find a way to enable auth for multiple users, i.e. proper access control.
The registry is SSL enabled (TLS at the ingress level).
There are multiple ways this could be done. First of all, it's possible to have multiple users in the htpasswd file. It was not working with Docker because Docker requires the passwords to be hashed with the bcrypt algorithm.
Use the -B flag while creating the htpasswd file.
sudo htpasswd -c -B /etc/apache2/.htpasswd <username1>
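Additional users go into the same file by dropping the -c flag (which would otherwise recreate the file):

sudo htpasswd -B /etc/apache2/.htpasswd <username2>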
Another way this could be done is with the nginx authentication annotations.
nginx.ingress.kubernetes.io/auth-url: "url to auth service"
If the service returns 200, nginx forwards the request; otherwise it returns an authentication error response. This allows a lot of custom logic, since you create and manage the authentication server yourself.
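As a sketch (the service URL is a placeholder), the annotation sits on the registry's Ingress like this:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: 'http://auth-service.default.svc.cluster.local/validate'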