I want to add an environment variable that contains newlines, such as an RSA private key, to my application on ElasticBeanstalk. I have the following form for this purpose, and the AWS CLI as well.
I didn't want to add a key file to my build, as we build from git, and keys in version control can be a security hazard, so I used this workaround instead:
# From your shell: Base64 encode the RSA private key file
# -w 0 disables wrapping, we don't want new lines
base64 -w 0 id_rsa
Base64 encoded data doesn't have newlines, so you can use the output directly as an ElasticBeanstalk environment variable. You can then use this variable inside your application like so:
# From the shell
echo "$SSH_PRIVATE_KEY" | base64 --decode - > .ssh/id_rsa
# Or just decode it with some other programming language of your choice
This way, you don't have to include the file that you're referencing into your build, but you can contain the key completely in the environment variable.
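The round trip can be checked end to end with a sketch like the following (the file content is a stand-in for a real key; note that -w 0 is a GNU coreutils option, so on macOS you may need base64 | tr -d '\n' instead):

```shell
# Create a stand-in multi-line "key" (illustrative; use your real id_rsa)
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIEfake\n-----END RSA PRIVATE KEY-----\n' > id_rsa

# Encode without wrapping: the result is a single line, safe for an env var field
SSH_PRIVATE_KEY=$(base64 -w 0 id_rsa)

# Decode it back, as the application would
mkdir -p .ssh
printf '%s' "$SSH_PRIVATE_KEY" | base64 --decode > .ssh/id_rsa

# The round trip is lossless
cmp -s id_rsa .ssh/id_rsa && echo "round trip OK"
```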
Related
I'm using Bitbucket Pipelines and I need to store the .env file (for example) as a variable so that I can use it in my deployment. When I stored it as a plain-text variable, it was echoed as single-line text and the app couldn't use it.
If your file contains linebreaks, they will be mangled by the input field in the pipeline variables page.
A solution is to encode the file content with base64 and decode the variable when writing it back to a file.
base64 < .env
pipelines:
  default:
    - step:
        script:
          - echo $MYVAR | base64 --decode > .env
Beware that if your file contains secrets and you mark the base64-encoded variable as secured, you will lose a security feature that prevents accidental printing of its value in the pipeline logs: only the encoded form is masked, not the decoded one. See Bitbucket: Show value of variables marked as secret
I am using a base.env as an env_file for several of my Docker services. In this base.env, several parts of the environment variables repeat throughout the file. For example, the port and IP are the same for three different environment variables.
I would like to specify these in an environment variable and reuse those variables to fill out the other environment variables.
Here is base.env:
### Kafka
# kafka's port is 9092 by default in the docker-compose file
KAFKA_PORT_NUMBER=9092
KAFKA_TOPIC=some-topic
KAFKA_IP=kafka
KAFKA_CONN: //$KAFKA_IP:$KAFKA_PORT_NUMBER/$KAFKA_TOPIC
# kafka topic that is to be created. Note that ':1:3' should remain the same.
KAFKA_CREATE_TOPICS=$KAFKA_TOPIC:1:3
# the url for connecting to kafka
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://$KAFKA_IP:$KAFKA_PORT_NUMBER
I have tried writing
KAFKA_CONN: //$${KAFKA_IP}:$${KAFKA_PORT_NUMBER}/$${KAFKA_TOPIC}
in the environment section of the appropriate service in the docker-compose.yml, but this gets interpreted as a literal string in the container.
Is there a way to do what I want in the base.env file?
Thank you for your help!
You can actually do it like this (at least with the vlucas/dotenv package for PHP; not sure about others, so please check yourself):
MAIL_NAME=${MAIL_FROM}
Read more about it here
There is no way to do this in an env_file, since it is not run as a bash command. This means the variable is not created first and then substituted into the next variable it appears in; the values are read in exactly as they appear in the env_file.
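The distinction can be seen in a quick shell experiment (file name and values are illustrative): reading the file yields the literal text, which is what Docker does with an env_file, while sourcing the same file in a shell expands the reference.

```shell
# Write an env file containing an unexpanded reference (illustrative values)
cat > base.env <<'EOF'
KAFKA_IP=kafka
KAFKA_CONN=//$KAFKA_IP:9092
EOF

# Read literally, as docker's env_file does: the '$KAFKA_IP' text survives
grep '^KAFKA_CONN=' base.env

# Source it in a shell instead: the reference is expanded line by line
set -a
. ./base.env
set +a
echo "$KAFKA_CONN"   # //kafka:9092
```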
I used $-interpolation in Node.js and React.js, and both worked:
POSTGRES_PORT=5432
DATABASE_URL="postgresql://root@localhost:${POSTGRES_PORT}/dbname"
and in React:
REACT_APP_DOMAIN=domain.com
#API Configurations
REACT_APP_API_DOMAIN=$REACT_APP_DOMAIN
I know that I am a little late to the party, but I had the same question and found a way to do it. There is a package called env-cmd, which allows you to use a .js file as an .env file. The file simply needs to export an object whose keys are your environment variable names and whose values are, well, the values. This lets you run JavaScript before the environment variables are exported, and thus use some environment variables to set others.
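As a sketch of that approach (file names and values are hypothetical, and the .js support is as the answer above describes; please verify against the env-cmd docs), the config file computes derived variables before they are exported:

```shell
# Write a .js config that env-cmd can load (file name is an assumption)
cat > env.js <<'EOF'
const KAFKA_IP = "kafka";
const KAFKA_PORT_NUMBER = "9092";
const KAFKA_TOPIC = "some-topic";

// Derived values are computed in JavaScript before export
module.exports = {
  KAFKA_IP,
  KAFKA_PORT_NUMBER,
  KAFKA_CONN: `//${KAFKA_IP}:${KAFKA_PORT_NUMBER}/${KAFKA_TOPIC}`,
};
EOF

# Then run your app through env-cmd, e.g.:
# npx env-cmd -f ./env.js node app.js
```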
I temporarily managed to deal with this by creating a script that replaces one env file's vars with another env file's values, like so:
.env.baseurl:
BASEURL1=http://127.0.0.1
BASEURL2=http://192.168.1.10
.env.uris.default:
URI1=${BASEURL1}/uri1
URI2=${BASEURL2}/uri2
URI3=${BASEURL2}/uri3
convert-env.sh:
#!/bin/bash
# To allow using sed correctly from same file multiple times
cp ./.env.uris.default ./.env.uris
# Go through each variable in .env.baseurl and store them as key value
for VAR in $(cat ./.env.baseurl); do
key=$(echo $VAR | cut -d "=" -f1)
value=$(echo $VAR | cut -d "=" -f2)
# Replace env vars by values in ./.env.uris
sed -i "s/\${$key}/$value/g" ./.env.uris
done
then you can run the docker run command to start the container and load it with your env vars (resolved from .env.baseurl into .env.uris):
docker run -d --env-file "./.env.uris" <image>
This is not the best solution but helped me for now.
Using Next.js, in the .env.local file I have the following variables:
NEXT_PUBLIC_BASE_URL = http://localhost:5000
NEXT_PUBLIC_API_USERS_URL_REGISTER = ${NEXT_PUBLIC_BASE_URL}/api/users/register
It works well; I used the variable NEXT_PUBLIC_BASE_URL inside the variable NEXT_PUBLIC_API_USERS_URL_REGISTER.
There is a simple way to do this; you just need to run:
env >>/root/.bashrc && source /root/.bashrc
This appends all environment variables to /root/.bashrc and, when that file is sourced, expands any references that were not expanded when the env file was passed in.
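A minimal sketch of why this works (variable names and values are illustrative): when a line like KAFKA_CONN=//$KAFKA_IP:9092 is sourced by the shell, the unquoted reference is expanded, whereas the env-file mechanism had passed it through literally.

```shell
# A variable passed in literally (e.g. from an env-file) still contains '$KAFKA_IP'
export KAFKA_IP=kafka
export KAFKA_CONN='//$KAFKA_IP:9092'

# Dump it with env and source the dump: the reference is now expanded
env | grep '^KAFKA_CONN=' > envdump
. ./envdump
echo "$KAFKA_CONN"   # //kafka:9092
```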
You can use something like this: ${yourVar}
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${KAFKA_IP}:${KAFKA_PORT_NUMBER}
I tested this on a PHP / Laravel .env and it works fine.
For our GitLab runner we have some variables saved on GitLab. One of them is a base64 encoded USER_DB_PASSWORD_ENCODED variable.
I pass the variable to the Docker build command used for our tests and can access it like this in the Dockerfile:
ARG USER_DB_PASSWORD_ENCODED
ENV USER_DB_PASSWORD_ENCODED=${USER_DB_PASSWORD_ENCODED}
From here my app could access this ENV variable USER_DB_PASSWORD_ENCODED, but I need to decode it to be able to use it in the app. For this purpose I tried this sequence:
RUN echo "$USER_DB_PASSWORD_ENCODED" | base64 --decode > /temp
RUN USER_DB_PASSWORD_ENCODED=$(cat /temp); echo "Output: $USER_DB_PASSWORD_ENCODED"
ENV USER_DB_PASSWORD=$USER_DB_PASSWORD_ENCODED
RUN echo $USER_DB_PASSWORD
I decode the encoded variable into a /temp file, try to assign that value to the existing variable, and then assign the existing variable to a new variable with the name that is actually used in the app.
The decoding works, and the output echo shows me the decoded value correctly, but when I echo the new variable, it still shows me the encoded value.
How can I properly deal with an encoded variable and overwrite an existing/create a new ENV variable in a Dockerfile?
An alternative idea was to not define a separate ENV variable, but decode the value directly into a .env file in the directory where it is needed, e.g.
RUN echo "$USER_DB_PASSWORD_ENCODED" | base64 --decode > /api/.env
But then I have the problem of only getting the decoded value into the file, while I also need to prepend the value with USER_DB_PASSWORD= for it to be recognized by the app.
In the end, I went with my alternative approach:
RUN echo -n "USER_DB_PASSWORD=" > api/.env
RUN echo -n "$USER_DB_PASSWORD_ENCODED" | base64 --decode >> api/.env
The first command creates a .env file where it can be read by the application and writes the string USER_DB_PASSWORD= to it (without a newline).
The second command appends the decoded value of the encoded and masked GitLab variable to the existing line in the existing .env file.
This way, the cleartext value is never visible in the job log and the container is destroyed after a few minutes of running the tests.
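The effect of those two commands can be sketched outside Docker (the value is made up, and printf is used here instead of echo -n for shell portability):

```shell
# Stand-in for the masked GitLab variable (value is illustrative)
USER_DB_PASSWORD_ENCODED=$(printf 's3cr3t-pw' | base64)

# First command: write the key without a trailing newline
printf 'USER_DB_PASSWORD=' > .env

# Second command: append the decoded value to the same line
printf '%s' "$USER_DB_PASSWORD_ENCODED" | base64 --decode >> .env

cat .env   # USER_DB_PASSWORD=s3cr3t-pw
```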
The Travis docs say that the easiest way to encrypt keys (e.g. for uploading to S3) is to use their command-line tool.
Are there other ways to do this that don't involve installing Ruby etc. just to use their command-line tool?
There happens to be a JavaScript method, and it's available here with the corresponding GitHub repo here.
Using encrypted S3 keys is moderately confusing because the principles are not well explained in the Travis docs.
In the top left field of the form mentioned above, you enter your Travis-CI userid/repo-name; this allows the script to pull down the public key that Travis has created for your repository.
In the top right field, you enter:
AWS_ACCESS_KEY_ID:...the..access..string..from..Amazon.IAM...
Click on Encrypt and copy the string generated below Encrypted Data
Then, in the same top right field, you enter:
AWS_SECRET_ACCESS_KEY:...the.very.secret.string.from.Amazon.IAM...
and again copy the encrypted string. Note that the encrypted strings change each time, due to random data being included in them.
These encrypted key pairs are decrypted by Travis and exported as environment variables. You enter them in the .travis.yml file like this:
global:
  # travis encrypt AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
  - secure: "--first-very--long-encrypted-string--="
  # travis encrypt AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
  - secure: "--second--very-long-encrypted-string--="
  - AWS_S3_BUCKET_NAME=yourbucketname
Now, in the deploy section, you reference them using the names you chose for the encrypted pairs:
deploy:
  provider: s3
  # these are set up in the global env
  access_key_id: $AWS_ACCESS_KEY_ID
  secret_access_key: $AWS_SECRET_ACCESS_KEY
  bucket: $AWS_S3_BUCKET_NAME
  skip_cleanup: true
  upload-dir: travis-builds
If you had used the name ACCESS_ID in global env when you encrypted it, then in deploy you would refer to it as $ACCESS_ID
The upload-dir is created in the named bucket.
When your build runs in Travis, the decrypted keys are not exposed. Instead what you see is:
See https://docs.travis-ci.com/user/workers/container-based-infrastructure/ for details.
Setting environment variables from .travis.yml
$ export AWS_ACCESS_KEY_ID=[secure]
$ export AWS_SECRET_ACCESS_KEY=[secure]
$ export AWS_S3_BUCKET_NAME=yourbucketname