I am currently facing an issue and cannot find a proper answer on the internet.
In my Jenkins directory I have a file with a lot of confidential data, such as usernames and passwords. I need to encrypt it and be able to use that data inside a pipeline.
I was thinking of encrypting the file and adding the key to a Jenkins credential, but I am not sure whether that is a good idea.
Do you have ideas about how I can perform this operation in the most secure way?
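In case it helps, this is roughly what I was imagining; the credential ID and file names below are made up:

pipeline {
    agent any
    stages {
        stage('decrypt') {
            steps {
                // the symmetric key is stored as a Jenkins string credential
                withCredentials([string(credentialsId: 'secrets-key', variable: 'KEY')]) {
                    // decrypt the confidential file only for the duration of the build
                    sh 'gpg --batch --pinentry-mode loopback --passphrase "$KEY" --output secrets.properties --decrypt secrets.properties.gpg'
                }
            }
        }
    }
}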
Thank you in advance for your answer!
I wish to have a job that accepts an SSH private key as a parameter, uses it securely, and discards it at the end.
I found a lot of examples that use withCredentials(), but that requires me to store the credentials in Jenkins first, which I would rather not do.
Also, I read that the file parameter does not get discarded at the end, according to this issue.
Is there an elegant way to do this?
I am a junior DE at my first job and have to work a lot with Docker. I have a CI/CD setup made by another team, and part of it is a Dockerfile that takes ARGs, some of them with sensitive data (e.g. a password).
While I have a general understanding of Docker, I was wondering: what is the best practice for passing sensitive data (e.g. a password) into Dockerfile arguments? For example, I have ARG CONDA_PASSWORD, which should take a password stored locally in a text file. What is the best practice for getting my local login and password into the Dockerfile arguments? Those credentials are needed for authorisation in the next steps of the pipeline.
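To make it concrete, here is roughly what the setup looks like (heavily simplified, names changed):

# Dockerfile (simplified)
ARG CONDA_USER
ARG CONDA_PASSWORD
# ...CONDA_USER/CONDA_PASSWORD are then used for authorisation in later steps...

# and the image is built like:
docker build --build-arg CONDA_USER=... --build-arg CONDA_PASSWORD=... .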
I couldn't find any definite answer in the Docker documentation or online. My initial idea was to set them in .bashrc as environment variables (export CONDA_USER=...), but I am not sure about the safety of such a solution. Thanks for the help!
I have a Dockerfile that copies everything from GitHub into an image and builds it. One of the files is config.json, which contains sensitive user data such as a username and password, and it gets copied along with the rest. I want this data to be encrypted when it is passed to the Dockerfile, and decrypted again when the image is deployed onto Kubernetes. Can anyone suggest an ideal method of doing this?
You shouldn't put this in the container image at all. Use a tool like Sealed Secrets, Lockbox, or sops-operator to encrypt the values separately; those values then get decrypted into a Secret object in Kubernetes, which you can mount into your container as a volume. The software sees the same config.json file, but it is stored externally.
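As a sketch, the decrypted Secret and its mount could look roughly like this (app-config, my-app, and /etc/app are placeholder names; with Sealed Secrets you would commit an encrypted SealedSecret instead, and its controller produces a Secret like this one):

apiVersion: v1
kind: Secret
metadata:
  name: app-config              # placeholder name
type: Opaque
stringData:
  config.json: |
    {"username": "...", "password": "..."}
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest      # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app   # the app reads /etc/app/config.json
          readOnly: true
  volumes:
    - name: config
      secret:
        secretName: app-config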
As other people have mentioned, the technically correct way to do this is to treat secrets like ordinary config and have something external to the container do the secret-fu to keep everything safe.
However, sometimes you may be in a situation in which the technically correct thing is not the practically correct thing, and you need to ship config and/or secrets in your artifact/Docker image.
If you just need to encrypt a single file, generating a key and doing symmetric encryption using a tool like gpg may be the easiest way to go about doing this.
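For example, a minimal sketch using a passphrase as the shared key (the file names and the $FILE_KEY variable are placeholders for wherever you store the passphrase):

# encrypt the file symmetrically with AES-256; gpg prompts for a passphrase
gpg --symmetric --cipher-algo AES256 --output config.json.gpg config.json

# decrypt it later, e.g. in a build step (newer gpg needs --pinentry-mode loopback
# to accept a passphrase non-interactively)
gpg --batch --pinentry-mode loopback --passphrase "$FILE_KEY" --output config.json --decrypt config.json.gpg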
If you are encrypting many files, or encrypting them frequently, it may make sense to use asymmetric encryption instead. In that case PKCS#7/CMS may make sense, and the openssl binary conveniently has a cms subcommand for encrypting and decrypting CMS content.
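A sketch of that, assuming you already have a recipient certificate (cert.pem) and the matching private key (key.pem):

# encrypt the file for the holder of cert.pem
openssl cms -encrypt -aes256 -in config.json -outform PEM -out config.json.cms cert.pem

# decrypt it with the matching private key
openssl cms -decrypt -inform PEM -in config.json.cms -inkey key.pem -out config.json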
I'm giving Amazon Web Services a try for the first time and getting stuck on understanding the credentials process.
From a tutorial on awsblog.com, I gather that I can upload a file to one of my AWS "buckets" as follows:
s3 = Aws::S3::Resource.new
s3.bucket('bucket-name').object('key').upload_file('/source/file/path')
In the above circumstance, I'm assuming he's using the default credentials (as described here in the documentation), where he's using particular environment variables to store the access key and secret or something like that. (If that's not the right idea, feel free to set me straight.)
The thing I'm having a hard time understanding is the meaning behind the .object('key'). What is this? I've generated a bucket easily enough, but is it supposed to have a specific key? If so, how do I create it? If not, what is supposed to go into .object()?
I figure this MUST be out there somewhere, but I haven't been able to find it (maybe I'm misreading the documentation). Thanks to anyone who gives me some direction here.
Because S3 doesn't have traditional directories, what you would consider the entire 'file path' on your client machine, e.g. \some\directory\test.xls, becomes the 'key'. The object is the data in the file.
Bucket names are unique across all of S3, and keys must be unique within your bucket.
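So the snippet from your question, using that example path as the key, would look something like this (the bucket name and local path are placeholders):

# the whole path becomes the key; S3 has no real directories
s3 = Aws::S3::Resource.new
s3.bucket('my-bucket').object('some/directory/test.xls').upload_file('/local/source/test.xls')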
As far as credentials go, there are multiple ways of providing them. One is to supply the access key id and secret access key right in your code; another is to store them in a config file somewhere on your machine (the location varies by OS). When your code runs in production, i.e. on an EC2 instance, the best practice is to start the instance with an IAM Role assigned; anything that runs on that machine then automatically has all of the permissions of that role. This is the best/safest option for code that runs in EC2.
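A rough sketch of the first two options with the Ruby SDK (the key values are placeholders; on EC2 with an IAM Role you would create the resource with no credentials at all and the SDK picks up the role automatically):

# 1. supply the keys directly in code (simple, but least safe)
s3 = Aws::S3::Resource.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('AKIA...', 'wJalr...')
)

# 2. rely on the default provider chain: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
#    environment variables, the ~/.aws/credentials file, or an instance profile
s3 = Aws::S3::Resource.new(region: 'us-east-1')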
I want to build a website where people can upload files to my S3 bucket via a Rails app. I want the uploads to be encrypted so that I have no knowledge of what is being uploaded, and I want only the user to have the key to decrypt them.
Could someone give me some suggestions on how to go about this or some methods of achieving this?
You can only encrypt it locally; anything that happens server-side (or even at the ISP) can manipulate what is delivered somehow.
Lichtamberg is right: the best and most secure way would be for the user to do it client-side. Perhaps you could tell them which encryption formats are accepted (such as GPG) and provide instructions for doing so, or recommend tools that might make it easier.
You could probably enforce this in your code by checking whether an uploaded file is encrypted, and rejecting it if not. The check would be similar to an image-upload feature that rejects non-image files, for instance.
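For instance, a rough sketch of such a check for ASCII-armored GPG uploads (a heuristic only; the binary OpenPGP format would need a separate magic-byte check):

# reject uploads whose first line isn't an ASCII-armored PGP header
def looks_encrypted?(path)
  File.open(path, 'rb') { |f| f.readline.start_with?('-----BEGIN PGP MESSAGE-----') }
rescue EOFError
  false
end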