How to store a file as a Bitbucket pipeline variable?

I'm using Bitbucket Pipelines and I need to store the .env file (for example) as a variable so that I can use it in my deployment. When I stored it as a plain-text variable, it came out as single-line text and the app couldn't use it.

If your file contains linebreaks, they will be mangled by the input field in the pipeline variables page.
A solution is to encode the file content with base64 and decode the variable when writing it back to a file.
Encode the file locally and store the output as a pipeline variable (here, MYVAR):
base64 < .env
Then decode it back into a file during the build:
pipelines:
  default:
    - step:
        script:
          - echo $MYVAR | base64 --decode > .env
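One caveat: GNU base64 wraps its output at 76 characters by default, which would reintroduce linebreaks into the value you paste. If that bites you, disabling wrapping helps (the -w flag is GNU coreutils; BSD/macOS base64 doesn't wrap by default):
base64 -w 0 < .env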
Beware that if your file contains secrets and you mark the base64-encoded variable as secured, you effectively lose the security feature that prevents accidental prints of its value in the pipeline logs: Bitbucket masks only the literal secured value, not its decoded form. See Bitbucket: Show value of variables marked as secret
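For instance, even with MYVAR marked as secured, only the literal encoded string is masked; printing the decoded content would land in the log in cleartext:
echo $MYVAR | base64 --decode   # decoded output is NOT masked in the log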

Related

DBT - environment variables and running dbt

I am relatively new to dbt and I have been reading about env_var. I want to use it in a couple of situations, am having difficulty, and am looking for some support.
Firstly, I am trying to use it in my profiles.yml file to replace the user and password, so that these can be set when dbt is invoked. When trying to test this locally (before implementing it on our AWS side), I am failing to find the right syntax and not finding anything useful online.
I have tried variations of:
dbt run --vars '{DBT_USER: my_username, DBT_PASSWORD=my_password}'
but it is not recognized, and the error gives nothing useful. When running dbt run by itself it does ask for DBT_USER, so it is expecting it, but it doesn't detail how to supply it.
I would also like to use it in my dbt_project.yml for the schema but I assume that this will be similar to the above, just a third variable at the end. Is that the case?
Thanks
var and env_var are two separate features of dbt.
You can use var to access a variable you define in your dbt_project.yml file. The --vars command-line option lets you override the values of these vars at runtime. See the docs for var.
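As a minimal sketch (the variable name is illustrative, not from the question), a var declared in dbt_project.yml can be read with the var() macro and overridden at runtime:
# dbt_project.yml
vars:
  my_schema: analytics
-- in a model: select * from {{ var('my_schema') }}.some_table
# override the default on the command line (quoted YAML/JSON dict):
dbt run --vars '{"my_schema": "analytics_dev"}'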
You should use env_var to access environment variables that you set outside of dbt for your system, user, or shell session. Typically you would use environment variables to store secrets like your profile's connection credentials.
To access environment variables in your profiles.yml file, you replace the values for username and password with a call to the env_var macro, as they do in the docs for env_var:
profile:
  target: prod
  outputs:
    prod:
      type: postgres
      host: 127.0.0.1
      # IMPORTANT: Make sure to quote the entire Jinja string here
      user: "{{ env_var('DBT_USER') }}"
      password: "{{ env_var('DBT_PASSWORD') }}"
      ....
Then, BEFORE you issue the dbt run command, you need to set the DBT_USER and DBT_PASSWORD environment variables for your system, user, or shell session. How you do this depends on your OS, but there are lots of good instructions available. To set a var for your shell session (on Unix OSes), that could look like this:
$ export DBT_USER=my_username
$ export DBT_PASSWORD=abc123
$ dbt run
Note that storing passwords in environment variables isn't necessarily more secure than keeping them in your profiles.yml file, since they're stored in plaintext and not protected from being dumped into logs, etc. (you shouldn't be checking profiles.yml into source control either). You should consider at least using an environment variable name prefixed with DBT_ENV_SECRET_ so that dbt keeps it out of logs. See the docs for more info.
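A sketch of that, with an illustrative variable name:
$ export DBT_ENV_SECRET_PASSWORD=abc123
# profiles.yml
password: "{{ env_var('DBT_ENV_SECRET_PASSWORD') }}"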

Change base64 encoded Docker ENV variable during build process to decoded value

For our GitLab runner we have some variables saved on GitLab. One of them is a base64 encoded USER_DB_PASSWORD_ENCODED variable.
I pass the variable to the Docker build command used for our tests and can access it like this in the Dockerfile:
ARG USER_DB_PASSWORD_ENCODED
ENV USER_DB_PASSWORD_ENCODED=${USER_DB_PASSWORD_ENCODED}
From here my app could access this ENV variable USER_DB_PASSWORD_ENCODED, but I need to decode it to be able to use it in the app. For this purpose I tried this sequence:
RUN echo "$USER_DB_PASSWORD_ENCODED" | base64 --decode > /temp
RUN USER_DB_PASSWORD_ENCODED=$(cat /temp); echo "Output: $USER_DB_PASSWORD_ENCODED"
ENV USER_DB_PASSWORD=$USER_DB_PASSWORD_ENCODED
RUN echo $USER_DB_PASSWORD
I decode the encoded variable into a /temp file, try to assign that value to the existing variable, and try to assign that existing variable to a new variable, with the name that is actually used in the app.
The decoding works, and the output echo shows me the decoded value correctly, but when I echo the new variable, it still shows me the encoded value.
How can I properly deal with an encoded variable and overwrite an existing/create a new ENV variable in a Dockerfile?
An alternative idea was to not define a separate ENV variable, but decode the value directly into a .env file in the directory where it is needed, e.g.
RUN echo "$USER_DB_PASSWORD_ENCODED" | base64 --decode > /api/.env
But then I have the problem of only getting the decoded value into the file, while I also need to prepend the value with USER_DB_PASSWORD= for it to be recognized by the app
In the end, I went with my alternative approach:
RUN echo -n "USER_DB_PASSWORD=" > api/.env
RUN echo -n "$USER_DB_PASSWORD_ENCODED" | base64 --decode >> api/.env
The first command creates a .env file where the application can read it and writes the string USER_DB_PASSWORD= to it (without a trailing newline).
The second command appends the decoded value of the encoded and masked GitLab variable to the existing line in the existing .env file.
This way, the cleartext value is never visible in the job log and the container is destroyed after a few minutes of running the tests.
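If the app had genuinely needed the decoded value as a real environment variable rather than a .env file, a common alternative (not part of the original answer) is to decode at container startup in an entrypoint script, since an ENV instruction cannot capture output produced by RUN:
#!/bin/sh
# entrypoint.sh: decode once at container start, then hand off to the app
export USER_DB_PASSWORD="$(echo "$USER_DB_PASSWORD_ENCODED" | base64 -d)"
exec "$@"
with the Dockerfile ending in:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]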

How to encrypt your Travis keys

The Travis docs say that the easiest way to encrypt keys, e.g. for uploading to S3, is to use their command-line tool.
Are there other ways to do this that don't involve installing Ruby etc. just to use their command-line tool?
There happens to be a JavaScript method, available here, with the corresponding GitHub repo here.
To use encrypted S3 keys is moderately confusing because the principles are not well explained in the Travis docs.
In the top-left field of the form mentioned above, you enter your Travis-CI userid/repo-name; this allows the script to pull down the public key that Travis has created for your repository.
In the top-right field, you enter:
AWS_ACCESS_KEY_ID:...the..access..string..from..Amazon.IAM...
Click on Encrypt and copy the string generated below Encrypted Data.
Then, in the same top-right field, you enter:
AWS_SECRET_ACCESS_KEY:...the.very.secret.string.from.Amazon.IAM...
and again copy the encrypted string. Note that the encrypted strings change each time due to random data being included in the encryption.
These encrypted key pairs are decrypted by Travis and exported as environment variables. You enter them in the .travis.yml file like this:
env:
  global:
    # travis encrypt AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
    - secure: "--first-very--long-encrypted-string--="
    # travis encrypt AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
    - secure: "--second--very-long-encrypted-string--="
    - AWS_S3_BUCKET_NAME=yourbucketname
Now, in the deploy section, you reference them using the names you used for the encrypted pairs:
deploy:
  provider: s3
  # these are set up in the global env
  access_key_id: $AWS_ACCESS_KEY_ID
  secret_access_key: $AWS_SECRET_ACCESS_KEY
  bucket: $AWS_S3_BUCKET_NAME
  skip_cleanup: true
  upload-dir: travis-builds
If you had used the name ACCESS_ID in global env when you encrypted it, then in deploy you would refer to it as $ACCESS_ID
The upload-dir is created in the named bucket.
When your build runs in Travis, the decrypted keys are not exposed. Instead what you see is:
See https://docs.travis-ci.com/user/workers/container-based-infrastructure/ for details.
Setting environment variables from .travis.yml
$ export AWS_ACCESS_KEY_ID=[secure]
$ export AWS_SECRET_ACCESS_KEY=[secure]
$ export AWS_S3_BUCKET_NAME=yourbucketname

Amazon ElasticBeanstalk: configure environment variable with newlines

I want to add an environment variable with newlines, such as an RSA private key, to my application using ElasticBeanstalk. There is a form for this purpose in the console (a screenshot appeared here in the original post), and the AWS CLI can be used as well.
I didn't want to add a key file to my build, as we build from git, and keys in version control can be a security hazard, so I used this workaround instead:
# From your shell: Base64 encode the RSA private key file
# -w 0 disables wrapping, we don't want new lines
base64 -w 0 id_rsa
Base64 encoded data doesn't have newlines, so you can use the output directly as an ElasticBeanstalk environment variable. You can then use this variable inside your application like so:
# From the shell
echo "$SSH_PRIVATE_KEY" | base64 --decode - > .ssh/id_rsa
# Or just decode it with some other programming language of your choice
This way, you don't have to include the file that you're referencing in your build; you can keep the key entirely in the environment variable.
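One practical follow-up (not from the original answer): if the decoded file is an SSH private key, ssh will refuse to use it while it is group- or world-readable, so a chmod is usually needed:
echo "$SSH_PRIVATE_KEY" | base64 --decode > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa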

How to parse variables from a parameter file in a K Shell script

I have a shell script that should read its parameters from an external file, to get files via FTP:
parameters.txt:
FTP_SERVER=ftpserver.foo.org
FTP_USER_NAME=user
FTP_USER_PASSWORD=pass
FTP_SOURCE_DIRECTORY="/data/secondary/"
FTP_FILE_NAME="core.lst"
I cannot find how to read these variables into my FTP_GET.sh script. I have tried using read, but it just echoed the vars and didn't store them as required.
Assuming that 'K Shell' is Korn Shell, and that you are willing to trust the contents of the file, then you can use the dot command '.':
. parameters.txt
This will read and interpret the file in the current shell. The feature has been in Bourne shell since it was first released, and is in the Korn Shell and Bash too. The C Shell equivalent is source, which Bash also treats as a synonym for dot.
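Note that when the filename contains no slash, dot searches $PATH for it rather than the current directory (unless . is in your PATH), so an explicit path is the safer habit:
. ./parameters.txt
echo "$FTP_SERVER"   # ftpserver.foo.org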
If you don't trust the file then you can read the values with read, validate the values, and then use eval to set the variables:
while read line
do
    # Check the line - which is HARD!
    eval "$line"
done < parameters.txt
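A minimal sketch of what that check might look like (assuming the file only ever contains simple NAME=value assignments; the pattern is illustrative and still not bulletproof):
while IFS= read -r line; do
  case $line in
    ''|'#'*) ;;                    # skip blank lines and comments
    [A-Za-z_]*=*) eval "$line" ;;  # looks like a plain NAME=value assignment
    *) echo "skipping suspicious line: $line" >&2 ;;
  esac
done < parameters.txt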
