I have a Dockerfile where I copy everything from my GitHub repository into the image at build time. One of the files, config.json, contains sensitive user data such as a username and password, so it gets copied in as well. I want this data to be encrypted before it goes into the image, and decrypted again when the image is deployed to Kubernetes. Can anyone suggest a good way to do this?
You shouldn't put this data in the container image at all. Use a tool like Sealed Secrets, Lockbox, or sops-operator to encrypt the values separately; they then get decrypted into a Secret object in Kubernetes, which you can mount into your container as a volume. The software still sees the same config.json file, but the data is stored externally.
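For example, with Sealed Secrets the workflow looks roughly like this (a sketch; names like app-config and the file paths are placeholders, and it assumes the Sealed Secrets controller is installed in the cluster):

# Build a plain Secret locally (never committed) and seal it with the
# cluster's public key; only the controller in the cluster can decrypt it.
kubectl create secret generic app-config \
  --from-file=config.json \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > sealed-config.yaml   # safe to commit to Git

kubectl apply -f sealed-config.yaml   # the controller creates the real Secret

In the Deployment you then mount the app-config Secret as a volume at the path where the app expects config.json, so the application code doesn't change at all.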
As other people have mentioned, the technically correct way to do this is to treat secrets like ordinary config and have something external to the container do the secret-fu to keep everything safe.
However, sometimes you may be in a situation in which the technically correct thing is not the practically correct thing, and you need to ship config and/or secrets inside your artifact/Docker image.
If you just need to encrypt a single file, generating a key and doing symmetric encryption with a tool like gpg is probably the easiest way to go.
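A minimal sketch of that (CONFIG_KEY stands in for wherever you keep the passphrase, e.g. injected into the pod from a Kubernetes Secret; the loopback pinentry options are needed for non-interactive use with gpg 2.1+):

# At build/packaging time: encrypt config.json with a passphrase
gpg --batch --pinentry-mode loopback --passphrase "$CONFIG_KEY" \
    --symmetric --cipher-algo AES256 config.json   # writes config.json.gpg

# At container start-up: decrypt it back
gpg --batch --yes --pinentry-mode loopback --passphrase "$CONFIG_KEY" \
    --decrypt --output config.json config.json.gpg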
If you are encrypting many files, or encrypting them frequently, it may make sense to use asymmetric encryption instead. In that case PKCS#7/CMS is a reasonable fit, and the openssl binary conveniently has a cms subcommand for encrypting and decrypting CMS content.
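Roughly like this (file names are placeholders; the keypair is a self-signed cert generated once, with the private key kept only where decryption happens):

# One-time: generate a keypair; distribute cert.pem to whoever encrypts
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=config-encryption" -keyout key.pem -out cert.pem

# Encrypt for the holder of key.pem
openssl cms -encrypt -aes256 -in config.json \
    -outform PEM -out config.json.cms cert.pem

# Decrypt where key.pem lives (e.g. inside the deployed container)
openssl cms -decrypt -inform PEM -in config.json.cms \
    -inkey key.pem -out config.json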
Related
My docker build requires a .env-local file for local development. I keep some env vars there that are also defined in the online environment. One of those values is... a password :) It's my personal password for some API usage, and I don't want anyone to see it!
The .env-local file is git-ignored, so there's hardly a chance I would accidentally push it. However, the password sits there in plain text, so someone could see the inside of my .env-local while I'm screen sharing or collaborating with my repo open.
How can I store the password more securely? It doesn't have to be a 100% ultra-secure method; I just don't want someone to see my password by accident...
I'm giving Amazon Web Services a try for the first time and getting stuck on understanding the credentials process.
From a tutorial on awsblog.com, I gather that I can upload a file to one of my AWS "buckets" as follows:
s3 = Aws::S3::Resource.new
s3.bucket('bucket-name').object('key').upload_file('/source/file/path')
In the above circumstance, I'm assuming he's using the default credentials (as described here in the documentation), where he's using particular environment variables to store the access key and secret or something like that. (If that's not the right idea, feel free to set me straight.)
The thing I'm having a hard time understanding is the meaning behind .object('key'). What is this? I've created a bucket easily enough, but is it supposed to have a specific key? If so, how do I create it? If not, what is supposed to go into .object()?
I figure this MUST be documented somewhere, but I haven't been able to find it (maybe I'm misreading the documentation). Thanks to anyone who can give me some direction here.
Because S3 doesn't have traditional directories, what you would consider the entire file path on your client machine, e.g. \some\directory\test.xls, becomes the 'key'. The object is the data in the file.
Bucket names are unique across all of S3, and keys must be unique within your bucket.
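To make that concrete, here is the same idea using the AWS CLI (bucket-name and the paths are placeholders; the Ruby call above does the equivalent):

# The key is just the object's full name inside the bucket;
# prefixes like some/directory/ merely act like folders.
aws s3api put-object --bucket bucket-name \
    --key some/directory/test.xls --body /source/file/path/test.xls

aws s3 ls s3://bucket-name/some/directory/   # list objects under that prefix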
As far as credentials go, there are multiple ways of providing them:
1. Supply the access key id and secret access key directly in your code.
2. Store them in a config file somewhere on your machine (the location varies by OS type).
3. When your code runs in production, i.e. on an EC2 instance, the best practice is to start the instance with an IAM role assigned; anything that runs on that machine then automatically has all of the permissions of that role. This is the best/safest option for code that runs in EC2.
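For the second option, the SDK automatically picks up a shared credentials file; on Linux/macOS it lives at ~/.aws/credentials (the values below are placeholders, and running aws configure writes the same file interactively):

cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF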
I use Heroku to deploy a Rails app. I store sensitive data such as API keys and passwords in Heroku's environment variables, and then use the data in rake tasks that utilize various APIs.
I am just wondering how secure Heroku's environment variables are. Is there a way to hash these variables while retaining the ability to use them in the background somehow?
I came across a previous thread here: Is it secure to store passwords as environment variables (rather than as plain text) in config files?.
But it doesn't quite cover instances where I still need the unhashed password to perform important background tasks.
Several things (mostly my opinion):
--
1. API Key != Password
When you talk about API keys, you're talking about a public token, which is generally already very secure. The nature of APIs nowadays is that they need some sort of prior authentication (either at app or user level) to create a more robust level of security.
I would first check what type of data you're storing in the ENV variables. If it's pure passwords (for email etc.), perhaps consider migrating your setup to one of the cloud providers (SendGrid / Mandrill etc.), allowing you to use only API keys.
The beauty of API keys is that they can be changed without affecting the base account, and they limit interaction to the constraints of the API. Passwords affect the base account.
--
2. ENV Vars are OS-level
They are part of the operating environment in which a process runs. For example, a running process can query the value of the TEMP environment variable to discover a suitable location to store temporary files, or the HOME or USERPROFILE variable to find the directory structure owned by the user running the process.
You must remember that environment variables basically mean you store the data in the environment you're operating in. This generally means the "OS", but it can be a virtual instance of an OS too, if required.
The bottom line is that your ENV vars live on the server itself. The same way text files sit in a directory on the hard drive, environment variables reside in the OS.
Unless the server itself were hacked, it would be very difficult to get at the ENV variable data programmatically, at least in my experience.
What are you looking for? Security against who or what?
Every piece of information stored in a config file or in ENV is readable by everyone who has access to the server. And even more important, every gem you bundle can read that information and send it somewhere.
You cannot encrypt the information, because then you would need to store the decryption key somewhere. Same problem.
IMO both – environment variables and config files – are secure as long as you can trust everyone who has access to your servers, and you have carefully reviewed the source code of all libraries and gems bundled with your app.
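To illustrate: on Linux, anyone who can run code as the app's user (or as root) can dump a process's environment straight out of /proc (the PID below is a placeholder):

tr '\0' '\n' < /proc/1234/environ   # prints every ENV var of process 1234

And any gem loaded into the process can of course just read ENV directly and send it anywhere it likes.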
Playing with the ssh and public_key applications in Erlang, I've discovered a nice feature.
I was trying to connect to my running Erlang SSH daemon using an RSA key, but the authentication was failing and I was being prompted for a password.
After some debugging and tracing (and a couple of coffees), I realized that, for some weird reason, an invalid key for my user was in place. The authorized_keys file contained two keys: the wrong one appeared earlier in the file, while the correct one was appended at the end.
When the Erlang SSH application diffed the provided key against the ones in authorized_keys, it found the first entry (completely ignoring the second one - the correct one). It then fell back to other authentication mechanisms (first trying DSA instead of RSA, then prompting for a password).
The question is: is this behavior intended, or should the SSH server check for multiple entries for the same user in the authorized_keys file? Is this generic SSH behaviour, or is it specific to the Erlang implementation?
Yes, it's 'first failure' authentication, and I've come across your issue several times. As far as implementation goes, it was explained to me that the daemon iterates over the authorized_keys file looking for a matching login, and only THEN checks the key.
This seems to be the standard implementation.
One of my Rails applications is going to depend on a secret key held in memory, so all of its functions will only become available once an administrator goes to a certain page and uploads the valid key.
The problem is that this key needs to be stored securely: no other processes on the same machine should be able to access it (so memcached and the filesystem are not suitable). One idea would be to store it in a configuration variable in the application, but newly spawned instances won't have access to that variable. Any thoughts on how to implement this on RubyEE/Apache/mod_passenger?
There is really no way to accomplish that goal (this is the same problem all DRM systems have).
You can't keep things secret from the operating system. Your application has to have the key somewhere in memory and the operating system kernel can read any memory location it wants to.
You need to be able to trust the operating system, which means that you can then also trust it to properly enforce file access permissions. This in turn means you can store the key in a file that only the Rails user process can read.
Think of it this way: even if you had no key at all, what is to stop an attacker on the server from simply changing the application code itself to gain access to the disabled functionality?
I would use the filesystem, with read access only for the file owner (using chmod 400 file), and ensure the Ruby process is the only process running as that user.
You can get more complex than that, but it all boils down to using Unix users and permissions.
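A minimal sketch of that setup (the user name and path are placeholders):

chown railsapp:railsapp /etc/myapp/secret.key
chmod 400 /etc/myapp/secret.key   # owner may read; no one else has any access

ls -l /etc/myapp/secret.key
# -r-------- 1 railsapp railsapp ... /etc/myapp/secret.key

Note this protects against other non-root users, not against root itself, which is consistent with the point above about having to trust the operating system.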
Encrypt it heavily in the filesystem?
What about treating it like a regular password, and using a salted hash? Once the user authenticates, he has access to the functions of the website.