I have already created a data bag item, which exists on the Chef server.
Now I am trying to pass that data bag item's secret value on to a Docker container.
I am creating the data bag as follows:
knife data bag create bag_secrets bag_masterkey --secret-file C:\path\data_bag_secret
I am retrieving the value of that data bag item in a Chef recipe as follows:
# Read the secret key file whose path is stored on the node
secret = Chef::EncryptedDataBagItem.load_secret(node['secret'])
# Decrypt the item created above: data bag "bag_secrets", item "bag_masterkey"
masterkey = Chef::EncryptedDataBagItem.load("bag_secrets", "bag_masterkey", secret)
What logic do I need to add to pass the data bag secret on to a Docker container?
I've said this twice now on different questions: DO NOT USE ENCRYPTED DATA BAGS LIKE THIS; IT IS NOT SAFE.
I think you fundamentally misunderstand the security model of encrypted data bags: they exist only for data that the Chef Server itself must not be able to read. The cost is that you have to manage key distribution yourself. For Docker that would probably mean sidecar containers or data volumes, but running chef-client inside a container is relatively rare, so you'll have to sort that out yourself. I would recommend working with a security/infosec engineer at your company to figure out the right security model for your usage.
I am trying to pull images from the same Artifactory repo using two different access tokens, because one image is available to one user and the other image is accessible to a different user.
I tried using docker login, but I can only log in to a repo once. Is there a way to specify in the docker-compose.yml file a user and token that Compose should use in order to pull the image?
The docker-compose file specification does not support providing credentials per service / image.
But putting this technicality aside, the described use case clearly indicates there is a user who needs access to both images...
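For what it's worth, here is a minimal sketch of that single-user approach, assuming one account/token that can read both images; the registry host and the USER/TOKEN variables are placeholders:

# Log in once with an account that can pull both images.
echo "$TOKEN" | docker login --username "$USER" --password-stdin my-company.jfrog.io

# Compose reuses the credentials stored by docker login for every service.
docker-compose pull
docker-compose up -d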
I have a Dockerfile where I copy everything from GitHub into the image and build it. One of the files is config.json, which contains sensitive user data such as a username and password, and it gets copied along with the rest. I want this data to be encrypted when it goes into the image and decrypted again when the image is deployed onto Kubernetes. Can anyone please suggest an ideal method of doing this?
You shouldn't put this in the container image at all. Use a tool like Sealed Secrets, Lockbox, or sops-operator to encrypt the values separately; those then get decrypted into a Secret object in Kubernetes, which you can mount into your container as a volume so the software sees the same config.json file but it's stored externally.
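For illustration, a rough sketch of the end state those tools produce, using plain kubectl and hypothetical names (app-config, myapp) instead of the sealed/encrypted workflow:

# Create the Secret from config.json (in practice Sealed Secrets or
# sops-operator produces this object for you from the encrypted source).
kubectl create secret generic app-config --from-file=config.json

# Mount it so the software still reads config.json from /etc/myapp;
# "myapp" is a placeholder deployment/container name.
kubectl patch deployment myapp --patch '
spec:
  template:
    spec:
      volumes:
        - name: app-config
          secret:
            secretName: app-config
      containers:
        - name: myapp
          volumeMounts:
            - name: app-config
              mountPath: /etc/myapp
              readOnly: true
'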
As other people have mentioned, the technically correct way to do this is to treat secrets like ordinary config and have something external to the container do the secret-fu to keep everything safe.
However, sometimes the technically correct thing is not the practically correct thing, and you need to ship config and/or secrets in your artifact/Docker image.
If you just need to encrypt a single file, generating a key and doing symmetric encryption with a tool like gpg may be the easiest way to go.
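A minimal sketch of that symmetric approach, assuming the passphrase reaches the deploy environment some other way (here a placeholder PASSPHRASE environment variable):

# Encrypt config.json with a passphrase before baking it into the image.
gpg --batch --pinentry-mode loopback --passphrase "$PASSPHRASE" \
    --symmetric --cipher-algo AES256 \
    --output config.json.gpg config.json

# At container start, decrypt it back in place.
gpg --batch --pinentry-mode loopback --passphrase "$PASSPHRASE" \
    --decrypt --output config.json config.json.gpg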
If you are encrypting many files, or encrypting them frequently, it may make sense to use asymmetric encryption instead. In that case PKCS#7/CMS may make sense, and the openssl binary conveniently has a cms subcommand for encrypting and decrypting CMS content.
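And a sketch of that asymmetric variant with the openssl cms subcommand, where cert.pem and key.pem are a placeholder recipient certificate and private key:

# Encrypt for the recipient using only their public certificate.
openssl cms -encrypt -aes256 -in config.json \
    -outform PEM -out config.json.cms cert.pem

# Decrypt on the target, which holds the matching private key.
openssl cms -decrypt -inform PEM -in config.json.cms \
    -recip cert.pem -inkey key.pem -out config.json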
I'm deploying KSQLDB in a docker container and I need to create some tables (if they don't exist) when the database starts. Is there a way to do that? Any examples?
As of version 0.11 you would need something that can query the server's REST endpoint to determine which tables already exist, and then submit SQL to create any missing ones. This is obviously a little clunky.
I believe the soon-to-be-released 0.12 comes with CREATE OR REPLACE support for creating streams and tables. With that feature, all you'd need is a script in your Docker image with a few curl commands that waits for the server to become available and then fires in a SQL script with your table definitions using CREATE OR REPLACE.
The 0.12 release also comes with IF NOT EXISTS syntax support for streams, tables, connectors and types. So you can do:
CREATE STREAM IF NOT EXISTS FOO (ID INT) WITH (..);
Details of what to pass to the server can be found in the REST API docs.
Alternatively, you should be able to script sending the commands in using the CLI.
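For example, a minimal sketch of the curl-based approach described above, assuming the server listens on localhost:8088 and using a throwaway stream definition:

#!/bin/sh
# Placeholder server address and DDL, for illustration only.
KSQL_URL="http://localhost:8088"

# Wait for the ksqlDB server to come up.
until curl -sf "$KSQL_URL/info" > /dev/null; do
  echo "waiting for ksqlDB..."
  sleep 2
done

# Submit the DDL; IF NOT EXISTS makes the script safe to re-run.
curl -s -X POST "$KSQL_URL/ksql" \
  -H "Content-Type: application/vnd.ksql.v1+json; charset=utf-8" \
  -d "{\"ksql\": \"CREATE STREAM IF NOT EXISTS FOO (ID INT) WITH (KAFKA_TOPIC='foo', VALUE_FORMAT='JSON', PARTITIONS=1);\", \"streamsProperties\": {}}"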
The title says it all. I have a VM instance set up in my google cloud for generating some model data. A friend of mine also has a new account. We're both basically using the free credits Google provides. We're trying to figure out if there is a way that I can generate the data in my VM instance and then transfer it to my friend's GCS Bucket. He hasn't set up any buckets yet, so we're also open to suggestions on the type of storage that would help us do this task.
I realize I can set up a persistent disk and mount it to my own VM instance. But that isn't our goal right now. We just need to know if there is a way to transfer data from one Google account to another. Any input is appreciated.
There is a way to do this: have your friend create the bucket and then give your email address permission to access it. Then, from your VM, you can use the gsutil command to copy the files to the bucket.
1) Have your friend create the bucket in the console.
2) In the permissions section, he will click Add Member, add your email, and give you the Storage Object Creator role.
3) Then SSH into your VM and use gsutil to copy the files, for example gsutil cp testfile.txt gs://friend_bucket (a combined sketch follows this list).
4) If you get a 403 error, you probably have to run gcloud auth login first.
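Steps 3 and 4 collected into one sketch; the bucket name, file names and directory are placeholders:

# Run on your VM. Only needed if gsutil returns a 403:
gcloud auth login

# Copy a single file, or a whole directory of generated data, into your
# friend's bucket.
gsutil cp testfile.txt gs://friend_bucket
gsutil -m cp -r ./model_output gs://friend_bucket/model_output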
I'm giving Amazon Web Services a try for the first time and getting stuck on understanding the credentials process.
From a tutorial from awsblog.com, I gather that I can upload a file to one of my AWS "buckets" as follows:
require 'aws-sdk' # load the AWS SDK so Aws::S3::Resource is available

s3 = Aws::S3::Resource.new
s3.bucket('bucket-name').object('key').upload_file('/source/file/path')
In the above circumstance, I'm assuming he's using the default credentials (as described here in the documentation), where he's using particular environment variables to store the access key and secret or something like that. (If that's not the right idea, feel free to set me straight.)
The thing I'm having a hard time understanding is the meaning behind .object('key'). What is this? I've generated a bucket easily enough, but is it supposed to have a specific key? If so, how do I create it? If not, what is supposed to go into .object()?
I figure this MUST be documented somewhere, but I haven't been able to find it (maybe I'm misreading the documentation). Thanks to anyone who gives me some direction here.
Because S3 doesn't have traditional directories, what you would consider the entire 'file path' on your client machine, e.g. \some\directory\test.xls, becomes the 'key'. The object is the data in the file.
Bucket names are unique across all of S3, and keys must be unique within your bucket.
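As a rough illustration with the AWS CLI (the bucket name and paths are made up): everything after the bucket name is the key, and the "directories" inside it are just a naming convention.

# Upload: the key here is "some/directory/test.xls".
aws s3 cp ./some/directory/test.xls s3://bucket-name/some/directory/test.xls

# "Listing a directory" is really just a prefix query over keys.
aws s3 ls s3://bucket-name/some/directory/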
As far as credentials go, there are multiple ways of providing them: one is to supply the access key id and secret access key directly in your code; another is to store them in a config file on your machine (the location varies by OS). When your code runs in production, i.e. on an EC2 instance, the best practice is to start the instance with an IAM role assigned, so that anything running on that machine automatically has the permissions of that role. This is the best/safest option for code that runs on EC2.
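For completeness, a sketch of the first two options for local development (all values are placeholders); on EC2 you would skip both and rely on the instance's IAM role:

# Option 1: environment variables picked up by the SDK's default credential chain.
export AWS_ACCESS_KEY_ID="AKIA...placeholder"
export AWS_SECRET_ACCESS_KEY="placeholder-secret"
export AWS_REGION="us-east-1"

# Option 2: a shared credentials file at ~/.aws/credentials, e.g.:
#   [default]
#   aws_access_key_id = AKIA...placeholder
#   aws_secret_access_key = placeholder-secret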