How to access specific JSON keys while using CSI Driver in AKS - azure-aks

I have a secret stored as JSON. Using Env Injector I was able to access specific keys inside the JSON with
<name_of_AzureKeyVaultSecret>#azurekeyvault?<key-name>
I have tried to find a way to replicate this with the CSI driver but haven't found anything. Is there a way to access individual keys, or do I need to break the JSON down into separate secrets and access them from there?
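For context, the Secrets Store CSI driver mounts each Key Vault object as a file inside the pod, so one workaround is to parse the JSON in the application itself. A minimal sketch; the mount path and the secret name "my-json-secret" are assumptions, not anything from the question:

```python
import json
from pathlib import Path

# Assumption: the SecretProviderClass mounts the Key Vault secret named
# "my-json-secret" under the default CSI mount path /mnt/secrets-store.
SECRET_FILE = Path("/mnt/secrets-store/my-json-secret")

def read_json_key(key_name: str) -> str:
    """Load the mounted secret file and return a single key from the JSON payload."""
    payload = json.loads(SECRET_FILE.read_text())
    return payload[key_name]

if __name__ == "__main__":
    # e.g. the JSON secret looks like {"username": "...", "password": "..."}
    print(read_json_key("username"))
```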

Related

Airflow fernet key does not mask credentials

I am using Apache Airflow 2.2.3 with Python 3.9 and run everything in docker containers.
When I add connections to airflow I do it via the GUI because this way the passwords were supposed to be encrypted. In order for the encryption to work I installed the python package "apache-airflow[crypto]" on my local machine and generated a Fernet Key that I then put in my docker-compose.yaml as the variable "AIRFLOW__CORE__FERNET_KEY: 'MY_KEY'".
I also added the package "apache-airflow[crypto]" to my airflow repositories requirements.txt so that airflow can handle fernet keys.
My questions are the following:
When I add the fernet key as an environment variable as described, I can see the fernet key in the docker-compose.yaml and also when I enter the container and use os.environ["AIRFLOW__CORE__FERNET_KEY"] it's shown - isn't that unsafe? As far as I understand it credentials can be decrypted using this fernet key.
When I add connections to airflow I can get their properties via the container CLI by using "airflow connections get CONNECTION_NAME". Although I added the Fernet Key I see the password in plain text here - isn't that supposed to be hidden?
Unlike passwords, the values (connection strings) in the GUI's "Extra" field do not disappear and are even readable in the GUI. How can I hide those credentials from the GUI and from the CLI?
The airflow GUI tells me that my connections are encrypted so I think that the encryption did work somehow. But what is meant by that statement though when I can clearly see the passwords?
I think you are making wrong assumptions about "encryption" and "security". The assumption that you can prevent a user who has access to the running software (which the Airflow CLI gives you) from reading those secrets is unrealistic and not really "physically achievable".
The Fernet key is used to encrypt data "at rest" in the database. If your database content is stolen (but not your Airflow program/configuration), your data is protected. This is the ONLY purpose of the Fernet key: it protects your data stored in the database "at rest". But once you have the key (from the Airflow runtime) you can decrypt it. Usually the database lives on some remote server and has backups. As long as the backups are not kept together with the key, then if your Airflow instance is "safe" but your database or a backup gets "stolen", no one will be able to use that data.
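To illustrate the point: Fernet is symmetric encryption, so anyone holding the key from the Airflow runtime can decrypt whatever sits encrypted in the database. A minimal sketch using the cryptography package (the connection password is a made-up example):

```python
from cryptography.fernet import Fernet

# This is the kind of key you would put in AIRFLOW__CORE__FERNET_KEY.
key = Fernet.generate_key()
fernet = Fernet(key)

# What ends up in the database "at rest" is the ciphertext...
ciphertext = fernet.encrypt(b"my-connection-password")

# ...but whoever has the key (i.e. the running Airflow instance, or anyone
# with access to its configuration) gets the plaintext back.
print(fernet.decrypt(ciphertext).decode())  # my-connection-password
```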
Yes. If you have access to a running Airflow instance, you are supposed to be able to read passwords in clear text. How else do you expect Airflow to work? It needs to read the passwords to authenticate. If you can run the Airflow program, the data needs to be accessible; there is no way around it, and you cannot do it differently - this is impossible by design. What you CAN do to protect your data better is use a Secrets Manager https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/secrets-backend/index.html - but at most that gives you the possibility of rotating secrets frequently. Airflow, when running, needs access to those passwords, otherwise you would not be able to, well, authenticate. And once you have access to the Airflow runtime (for example via the CLI), there is no way to prevent access to the passwords that Airflow has to know at runtime. This is a basic property of any system that needs to authenticate with an external system and is accessible at runtime. Airflow is written in Python and you can easily write code that uses its runtime, so there is no way to physically protect the runtime passwords that need to be known to the "Airflow core". At runtime it needs to know the credentials to connect and communicate with external systems, and once you have access to the system, you have - by definition - access to all secrets that the system uses at runtime. No system in the world can do it differently; that's just the nature of it. Frequent rotation and temporary credentials are the only way to deal with it, so that a potentially leaked credential cannot be used for long.
Modern Airflow (2.1+, I believe) has a secret masker that also masks sensitive data from extras when you specify it: https://airflow.apache.org/docs/apache-airflow/stable/security/secrets/mask-sensitive-values.html. The secret masker also masks sensitive data in logs, because logs can be archived and backed up, so - similarly to the database - it makes sense to protect them. The UI, unlike the CLI (which gives you access to the runtime of the Airflow "core"), is just a front-end and does not give you access to the running core, so masking sensitive data there also makes sense.

How to write data with defined token or user/password into InfluxDB 2.0

I need to set up InfluxDB 2.0.3 so that data can be written to it with a predefined user/password or token.
My use case is to provision a software component before deploying InfluxDB, and this component needs a predefined configuration to write into InfluxDB (it cannot interact with the InfluxDB API; only the write operation is implemented).
I understand from the documentation that a user/password can no longer be used to write data in this release, and that only a token provides this feature. Is that true?
On the other hand, a solution would be to manually set a predefined token in InfluxDB, but I couldn't find this feature in the API or documentation.
Does someone have another solution or a way to bypass this limitation?
Thanks
You cannot define a token's string when creating one in InfluxDB. You will need to create one when you deploy it, and pass the newly created token to whatever software will be writing to InfluxDB.
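For reference, once that token has been generated and passed to the writing component, the write itself can be done with the influxdb-client Python library. A sketch; the URL, org, bucket and token values are placeholders:

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholders: URL, org, bucket and the token generated at deploy time.
client = InfluxDBClient(url="http://localhost:8086",
                        token="TOKEN_CREATED_AT_DEPLOY_TIME",
                        org="my-org")

write_api = client.write_api(write_options=SYNCHRONOUS)

# Write a single point into the bucket the token is scoped to.
point = Point("measurement").tag("host", "server01").field("value", 42.0)
write_api.write(bucket="my-bucket", record=point)

client.close()
```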

The best place to store Google service account JSON file?

Current setup: using Docker to generate the image of the application, deploy the image via Google Container Engine (GKE).
It is not ideal to keep the Google service account JSON file in the code and later wrap it into the image. Is there any other way to store the JSON file?
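One common pattern, sketched here under the assumption that the key file is mounted into the container (for example from a Kubernetes Secret) rather than baked into the image, is to point GOOGLE_APPLICATION_CREDENTIALS at the mounted file and let the client library pick it up; the mount path below is hypothetical:

```python
import os
import google.auth

# Assumption: the JSON key is mounted into the container (e.g. from a
# Kubernetes Secret) at this path instead of being copied into the image.
os.environ.setdefault("GOOGLE_APPLICATION_CREDENTIALS",
                      "/var/secrets/google/service-account.json")

# google-auth reads GOOGLE_APPLICATION_CREDENTIALS and returns the
# credentials plus the project they belong to.
credentials, project_id = google.auth.default()
print(project_id)
```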

ec2 roles vs ec2 roles with temporary keys for s3 access

So I have a standard Rails app running on ec2 that needs access to s3. I am currently doing it with long-term access keys, but rotating keys is a pain, and I would like to move away from this. It seems I have two alternative options:
One, assigning the ec2 instance a role with the proper permissions to access the s3 bucket. This seems easy to set up, yet not having any access keys seems like a bit of a security threat. If someone is able to access the server, it would be very difficult to stop access to s3. Example
Two, I can 'assume the role' using the Ruby SDK and STS classes to get temporary access keys from the role and use them in the Rails application. I am pretty confused about how to set this up, but could probably figure it out. It seems like a very secure method, as even if someone gets access to your server, the temporary access keys make it considerably harder to access your s3 data over the long term. General methodology of this setup.
I guess my main question is which should I go with? Which is the industry standard nowadays? Does anyone have experience setting up STS?
Sincere thanks for the help and any further understanding on this issue!
All of the methods in your question require AWS Access Keys. These keys may not be obvious but they are there. There is not much that you can do to stop someone once they have access inside the EC2 instance other than terminating the instance. (There are other options, but that is for forensics)
You are currently storing long term keys on your instance. This is strongly NOT recommended. The recommended "best practices" method is to use IAM Roles and assign a role with only required permissions. The AWS SDKs will get the credentials from the instance's metadata.
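For example, a sketch in Python with boto3 (the question uses the Ruby SDK, but the behaviour is the same): with an IAM role attached to the instance, the client is created without any explicit keys and the SDK fetches temporary credentials from the instance metadata automatically. The bucket name is a placeholder.

```python
import boto3

# No access keys in code or config: with an IAM role attached to the EC2
# instance, boto3 pulls temporary credentials from the instance metadata
# service and refreshes them automatically.
s3 = boto3.client("s3")

# Bucket name is a placeholder.
for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"])
```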
You are giving some thought to using STS. However, you need credentials to call STS to obtain temporary credentials. STS is an excellent service, but it is designed for handing out short-term temporary credentials to others - such as the case where your web server creates credentials via STS to hand to your users for limited use cases, such as accessing files on S3 or sending an email. The fault in your thinking about STS is that once the bad guy has access to your server, he will just steal the keys that you call STS with, thereby defeating the purpose of calling STS.
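To make the "handing out short-term credentials to others" case concrete, here is a sketch with boto3 and AssumeRole; the role ARN and session name are placeholders, and this is the pattern for delegating limited access, not for the server's own S3 calls:

```python
import boto3

sts = boto3.client("sts")

# Placeholders: the role ARN and session name. The assumed role should carry
# only the minimal S3 permissions the downstream party needs.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/limited-s3-read",
    RoleSessionName="downstream-user",
    DurationSeconds=900,  # keep the credentials short-lived
)

creds = resp["Credentials"]

# These temporary keys (plus the session token) are what you hand out;
# they expire after DurationSeconds.
scoped_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```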
In summary, follow best practices for securing your server such as NACLs, security groups, least privilege, minimum installed software, etc. Then use IAM Roles and assign the minimum privileges to your EC2 instance. Don't forget the value of always backing up your data to a location that your access keys CANNOT access.

Is all WSO2 API Manager's configuration saved in the database?

Say one implements a WSO2 API Manager Docker instance connecting to a separate database (like MySQL) which is not dockerized. Say some API configuration is made within the API Manager (like referencing a Swagger file in a GitHub repository).
Say someone rebuilds the WSO2 API Manager Docker image (to modify CSS files, for example): will the past configuration still be available from the separate database? Or does one have to reconfigure everything in the new Docker instance?
To put it in another way, if one needs to reconfigure everything, is there an easy way to do it? Something automatic?
All the configurations are stored in the database. (Some are stored in the internal registry, but the registry persists its data in the database in the end.)
API artifacts (Synapse files) are saved in the file system [1]. You can use API Manager's API import/export tool to migrate API artifacts (and all other related files such as Swagger definitions, images, sequences, etc.) from one server to another.
[1] <APIM_HOME>/repository/deployment/server/synapse-configs/default/api/
