CircleCI mapping example

Somehow I feel this should be very simple, and I cannot seem to find a way to do it with the CircleCI config.
I have a simple task: take one value and map it to another based on a predefined map.
Something like this:
buckets:
  staging:
    dashboard: staging-dashboard.example.com
    referrer: referrer-stage.example.com
YAML allows me to define this without problems. My issue is I cannot figure out how to use it in CircleCI config:
deploy:
  executor: gcp/google
  parameters: *build_parameters
  steps:
    - attach_workspace:
        root: .
    - gcp/install
    - configure_google_sdk
    - gcp/initialize:
        gcloud-service-key: GCLOUD_SERVICE_KEY_DECODED
    - get_bucket: <<< HOW_TO_DO_THIS >>>
        environment: <<parameters.environment>>
        dashboard_type: <<parameters.dashboard_type>>
    - gcp-storage/upload:
        source_path: dist/apps/<<parameters.dashboard_type>>-admin/**
        destination_bucket: $BUCKET_NAME
My only solution so far has been to write a custom bash script which maps things inside code and sets the $BUCKET_NAME environment variable. It seems like massive overkill for such a simple thing, and the last thing I want is to store a mapping in some cryptic bash script.
Any better ideas?
With GitHub Actions I can read from JSON files directly. Maybe something like this exists for CircleCI?
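For what it's worth, the bash workaround can stay small and readable. A minimal sketch (the values are the ones from the buckets YAML above; map_bucket is a hypothetical helper, and appending to $BASH_ENV is how CircleCI persists environment variables between steps):

```shell
# Hypothetical helper: map environment/dashboard_type to a bucket host,
# mirroring the YAML map from the question.
map_bucket() {
  local environment="$1" dashboard_type="$2"
  case "${environment}/${dashboard_type}" in
    staging/dashboard) echo "staging-dashboard.example.com" ;;
    staging/referrer)  echo "referrer-stage.example.com" ;;
    *) echo "unknown mapping: ${environment}/${dashboard_type}" >&2; return 1 ;;
  esac
}

# In a CircleCI step, persist the result for later steps:
# echo "export BUCKET_NAME=$(map_bucket staging dashboard)" >> "$BASH_ENV"
```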

Related

DBT - environment variables and running dbt

I am relatively new to DBT and I have been reading about env_var. I want to use this in a couple of situations, am having difficulty, and am looking for some support.
Firstly, I am trying to use it in my profiles.yml file to replace the user and password, so that these can be set when dbt is invoked. When trying to test this locally (before implementing it on our AWS side), I am failing to find the right syntax and not finding anything useful online.
I have tried variations of:
dbt run --vars '{DBT_USER: my_username, DBT_PASSWORD=my_password}'
but it is not recognized, and the error output gives nothing useful. When running dbt run by itself it does ask for DBT_USER, so it is expected, but it doesn't detail how to supply it.
I would also like to use it in my dbt_project.yml for the schema but I assume that this will be similar to the above, just a third variable at the end. Is that the case?
Thanks
var and env_var are two separate features of dbt.
You can use var to access a variable you define in your dbt_project.yml file. The --vars command-line option lets you override the values of these vars at runtime. See the docs for var.
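To illustrate, a hedged sketch of var usage (the variable name start_date is made up, not from this question): declare it in dbt_project.yml, read it with {{ var(...) }} in a model, and override it at runtime with a quoted, valid YAML dict:

```yaml
# dbt_project.yml (illustrative variable name)
vars:
  start_date: '2020-01-01'

# in a model:
#   select * from events where created_at >= '{{ var("start_date") }}'
# override at runtime:
#   dbt run --vars '{start_date: 2021-06-30}'
```

Note the runtime override must be one quoted YAML/JSON dict; mixing `key: value` and `key=value` forms, as in the command in the question, won't parse.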
You should use env_var to access environment variables that you set outside of dbt for your system, user, or shell session. Typically you would use environment variables to store secrets like your profile's connection credentials.
To access environment variables in your profiles.yml file, you replace the values for username and password with a call to the env_var macro, as they do in the docs for env_var:
profile:
  target: prod
  outputs:
    prod:
      type: postgres
      host: 127.0.0.1
      # IMPORTANT: Make sure to quote the entire Jinja string here
      user: "{{ env_var('DBT_USER') }}"
      password: "{{ env_var('DBT_PASSWORD') }}"
      ....
Then BEFORE you issue the dbt run command, you need to set the DBT_USER and DBT_PASSWORD environment variables for your system, user, or shell session. How to do this depends on your OS, but there are lots of good instructions for it. To set a var for your shell session (on Unix OSes), that could look like this:
$ export DBT_USER=my_username
$ export DBT_PASSWORD=abc123
$ dbt run
Note that storing passwords in environment variables isn't necessarily more secure than keeping them in your profiles.yml file, since they're stored in plaintext and not protected from being dumped into logs, etc. (You shouldn't be checking profiles.yml into source control.) You should consider at least using an environment variable name prefixed with DBT_ENV_SECRET_ so that dbt keeps it out of logs. See the docs for more info.
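For example, the password entry above could be switched to a secret-prefixed variable (the name here is hypothetical) so dbt scrubs its value from logs:

```yaml
# profiles.yml -- dbt masks env vars whose names start with DBT_ENV_SECRET_
password: "{{ env_var('DBT_ENV_SECRET_PASSWORD') }}"
```

You would then export DBT_ENV_SECRET_PASSWORD instead of DBT_PASSWORD before running dbt.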

Docker compose build time args from file

I'm aware of the variable substitution available, where I could use a .env file at the root of the project and be done, but in this case I'm adapting an existing project where existing .env file locations are expected, and I would like to avoid having var entries in multiple files!
See documentation for more info, and all the code is available as WIP on the docker-support branch of the repo, but I'll succinctly describe the project and issue below:
Project structure
|- root
|  |- .env                      # mongo and mongo-express vars (not on git!)
|  |- docker-compose.yaml       # builds and ups a staging env
|  |- docker-compose.prod.yaml  # future wip
|  |- api                       # the saas-api service
|  |  |- Dockerfile             # if 'docked' directly should build production
|  |  |- .env                   # api relative vars (not on git!)
|  |- app                       # the saas-app service
|  |  |- Dockerfile             # if 'docked' directly should build production
|  |  |- .env                   # app relative vars (not on git!)
Or see the whole thing here, it works great by the way for the moment, but there's one problem with saas-app when building an image for staging/production that I could identify so far.
Issue
At build time Next.js builds a static version of the pages, using webpack to do its thing with process.env substitution, so the actual eventual runtime vars need to be included at the docker build stage. That way Next.js doesn't need to rebuild again at runtime, and I can also safely spawn multiple instances when traffic requires it!
I'm aware that if the same vars are not sent at runtime it will have to rebuild again, defeating the point of this exercise, but that's precisely what I'm trying to prevent here, so that if the wrong values are sent it's on us and not the project!
And I also need to consider Next.js BUILD ID management, but that's for another time/question.
Attempts
I've been testing with including the ARG and ENV declarations for each of the variables expected by the app in its Dockerfile, e.g.:
ARG GA_TRACKING_ID=
ENV GA_TRACKING_ID ${GA_TRACKING_ID}
This works as expected, however it forces me to manually declare them on the docker-compose.yml file, which is not ideal:
saas-app:
  build:
    context: app
    args:
      GA_TRACKING_ID: UA-xXxXXXX-X
I cannot use variable substitution here because my root .env does not include this var; it's in ./app/.env. I also tested leaving the value empty, but it is not picked up from the env_file or environment definitions, which I believe is as expected.
I've pastebinned a full output of docker-compose config with the existing version in the repository.
Ideally, I'd like:
saas-app:
  build:
    args:
      LOG_LEVEL: notice
      NODE_ENV: development
      PORT: '3000'
    context: /home/pedro/src/opensource/saas-boilerplate/app
  command: yarn start
  container_name: saas-app
  depends_on:
    - saas-api
  environment:
    ...
To become:
saas-app:
  build:
    args:
      LOG_LEVEL: notice
      NODE_ENV: development
      PORT: '3000'
      BUCKET_FOR_POSTS: xxxxxx
      BUCKET_FOR_TEAM_AVATARS: xxxxxx
      GA_TRACKING_ID: ''
      LAMBDA_API_ENDPOINT: xxxxxxapi
      STRIPEPUBLISHABLEKEY: pk_test_xxxxxxxxxxxxxxx
      URL_API: http://api.saas.localhost:8000
      URL_APP: http://app.saas.localhost:3000
    context: /home/pedro/src/opensource/saas-boilerplate/app
  command: yarn start
  container_name: saas-app
  depends_on:
    - saas-api
  environment:
    ...
Questions
How would I be able to achieve this, if possible, but:
Without merging the existing .env files into a single root, or having to duplicate vars on multiple files.
Without manually declaring the values on the compose file, or having to infer them on the command e.g. docker-compose build --build-arg GA_TRACKING_ID=UA-xXxXXXX-X?
Without having to COPY each .env file during the build stage, because it doesn't feel right and/or secure?
Maybe an args_file option in the compose build section would be a valid feature request for the compose team; would you also say so?
Or perhaps a root option in the compose file where you could set more than one .env file for variable substitution?
Or perhaps another solution I'm not seeing? Any ideas?
I wouldn't mind sending each .env file as a config or secret; it's a cleaner solution than splitting the compose files. Is anyone running such an example in production?
Rather than trying to pass around and merge values in multiple .env files, would you consider making one master .env and having the API and APP services inherit from the same root .env?
I've managed to achieve a compromise that does not affect any of the existing development workflows, nor does it allow for app to build without env variables (a requirement that will be more crucial for production builds).
I've basically decided to reuse docker's built-in ability to read the .env file and use those values for variable substitution in the compose file; here's an example:
# compose
COMPOSE_TAG_NAME=stage
# common to api and app (build and run)
LOG_LEVEL=notice
NODE_ENV=development
URL_APP=http://app.saas.localhost:3000
URL_API=http://api.saas.localhost:8000
API_PORT=8000
APP_PORT=3000
# api (run)
MONGO_URL=mongodb://saas:secret@saas-mongo:27017/saas
SESSION_NAME=saas.localhost.sid
SESSION_SECRET=3NvS3Cr3t!
COOKIE_DOMAIN=.saas.localhost
GOOGLE_CLIENTID=
GOOGLE_CLIENTSECRET=
AMAZON_ACCESSKEYID=
AMAZON_SECRETACCESSKEY=
EMAIL_SUPPORT_FROM_ADDRESS=
MAILCHIMP_API_KEY=
MAILCHIMP_REGION=
MAILCHIMP_SAAS_ALL_LIST_ID=
STRIPE_TEST_SECRETKEY=
STRIPE_LIVE_SECRETKEY=
STRIPE_TEST_PUBLISHABLEKEY=
STRIPE_LIVE_PUBLISHABLEKEY=
STRIPE_TEST_PLANID=
STRIPE_LIVE_PLANID=
STRIPE_LIVE_ENDPOINTSECRET=
# app (build and run)
STRIPEPUBLISHABLEKEY=
BUCKET_FOR_POSTS=
BUCKET_FOR_TEAM_AVATARS=
LAMBDA_API_ENDPOINT=
GA_TRACKING_ID=
See the updated docker-compose.yml. I've also made use of Extension fields to make sure only the correct and valid vars are sent across on build and run.
It breaks rule 1. from the question, but I feel it's a good enough compromise, because it no longer relies on the other .env files, which would potentially contain development keys most of the time anyway!
Unfortunately we will need to maintain the compose file if the vars change in the future, and the same .env file has to be used for a production build, but since that will probably be done externally on some CI/CD, that doesn't worry me much.
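As a sketch of that extension-field setup (the anchor name and exact var list here are illustrative, not copied from the repo): shared build args are declared once and merged into the service with a YAML anchor, with values still coming from the root .env via substitution:

```yaml
# docker-compose.yml (illustrative)
x-app-build-args: &app-build-args
  NODE_ENV: ${NODE_ENV}
  GA_TRACKING_ID: ${GA_TRACKING_ID}
  STRIPEPUBLISHABLEKEY: ${STRIPEPUBLISHABLEKEY}

services:
  saas-app:
    build:
      context: app
      args:
        <<: *app-build-args
```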
I'm posting this but not fully closing the question, if anyone else could chip in with a better idea, I'd be greatly appreciated.

How to safely pass variables that may contain special characters to a docker-compose file in Ansible

I am currently using an ansible script to deploy a docker-compose file (using the docker_service module), which sets a series of environment variables which are read by the .NET Core service running inside the docker container, like this:
(...)
environment:
  - Poller:Username={{ poller_username }}
  - Poller:Password={{ poller_password }}
(...)
The variables for poller_username and poller_password are being loaded from an Ansible Vault (which will be moved to a Hashicorp Vault eventually), and are interpolated into the file with no problem.
However, I have come across a scenario where this logic fails: the user has a '$' in the middle of his password. This means that instead of the environment variable being set to 'abc$123' it's instead set to 'abc', causing my application to fail.
Writing a debug task, I get the password output to the console correctly; but if I do docker exec <container_name> env, I see the wrong password.
Is there a Jinja filter I can use to ensure the password is compliant with docker-compose standards? It doesn't seem viable to me to guarantee the password will never have a $.
EDIT: {{ poller_password | replace("$","$$") }} works, but this isn't a very elegant solution to have in, potentially, every variable I use in the docker-compose module.
For this particular scenario, the {{ poller_password | replace("$","$$") }} solution seems to be inevitable. Thankfully, it appears to be the only case that requires this caution.
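Applied to the compose template from the question, that looks like this (docker-compose reads $$ as a literal $, so the interpolated password survives intact):

```yaml
environment:
  - Poller:Username={{ poller_username }}
  - Poller:Password={{ poller_password | replace("$", "$$") }}
```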
I had a similar situation; it was not a $ but some other character. I ended up using
something: !unsafe "{{ variable }}"
as I couldn't find a better way.

Travis CI Build to deploy on Cloud Foundry fails

I am trying to deploy a python Flask Application on Cloudfoundry but it fails.
It shows the output
The app cannot be mapped to route hello.cfapps.io because the route exists in a different space.
Here is what my travis.yml looks like:
stages:
  - test
  - deploy
language: python
python:
  - '3.6'
env:
  - PORT=8080
cache: pip
script: python hello.py &
jobs:
  include:
    - stage: test
      install:
        - pip install -r requirements.txt
        - pip install -r tests/requirements_test.txt
      script:
        - python hello.py &
        - python tests/test.py
    - stage: deploy
      deploy:
        provider: cloudfoundry
        username: vaibhavgupta0702@gmail.com
        password:
          secure: myencryptedpassword
        api: https://api.run.pivotal.io
        organization: Hello_Flask
        space: development
        on:
          repo: vaibhavgupta0702/flask_helloWorld
Here is what my manifest.yml file looks like
---
applications:
  - name: hello
    memory: 128M
    buildpacks:
      - https://github.com/vaibhavgupta0702/flask_helloWorld.git
    command: python hello.py &
    timeout: 60
    env:
      PORT: 8080
I do not understand why the error is coming. Any help would be highly appreciated.
The app cannot be mapped to route hello.cfapps.io because the route exists in a different space.
This means exactly what it says. The domain cfapps.io is a shared domain which can be used by many people on the platform. When you see this error, it is telling you that someone else using the platform has already pushed an app which is utilizing that route.
There are a couple of possibilities here:
Routes are scoped to a space. If you have multiple spaces, it's possible that the route in question could be used by an app in one of your other spaces. What you can do is run cf routes --orglevel. This will list all the routes in all the spaces under your organization. If you see the route hello listed under one of your spaces, simply run cf delete-route cfapps.io --hostname hello in the space where the route exists. That will delete it. Then deploy again.
Someone else is using the route. This means it would be in another org & space where you can't see it being used. In this case, there's not much you can do. You just need to pick another route or use a custom, private domain (note that custom, private domains require you to register a domain name & configure DNS as described here).
You can pick another route in a couple ways.
Use a random route. This works OK for testing, but not for anything where you want a consistent address. To use, just add random-route: true to your manifest.
Change your app name. By default, the route assigned to your app will be <app-name>.<default-domain>. Thus you get hello.cfapps.io because hello is your app name and cfapps.io is the default domain on PWS. If you change your app name to something unique, that'll result in a unique route that no one else is using.
Specifically define one or more routes. You can do this in your manifest.yml file. You need to add a routes: block and then add one or more routes.
Example:
---
...
routes:
- route: route1.example.com
- route: route2.example.com
- route: route3.example.com

Microservices: What does the ESHOP_OCELOT_VOLUME_SPEC line in the docker-compose file mean?

I am looking at the code in eShopOnContainers, in docker-compose.override.yml. I can see this line in volumes:
volumes:
  - ./src/ApiGateways/Web.Bff.Shopping/apigw:${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}
webshoppingapigw:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - IdentityUrl=http://identity.api  #Local: You need to open your local dev-machine firewall at range 5100-5110.
  ports:
    - "5202:80"
  volumes:
    - ./src/ApiGateways/Web.Bff.Shopping/apigw:${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}
What does the ${ESHOP_OCELOT_VOLUME_SPEC ...} part in volumes do? I would think it creates a volume of some sort, but I can't see where ESHOP_OCELOT_VOLUME_SPEC is defined in the project, not even inside the .env file.
When I looked inside docker-compose.override.prod, the ${ESHOP_OCELOT_VOLUME_SPEC} line isn't even there.
Currently I get an exception running the sample code, so I tried to follow the code from eShopOnContainers but write a simpler version that I can follow easily. I started with the ApiGateway and am building up from there.
I don't know if this question is eligible to be asked; people here are very fussy about questions.
volumes:
  - ./src/ApiGateways/Web.Bff.Shopping/apigw:${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}
That means:
Mount ./src/ApiGateways/Web.Bff.Shopping/apigw at the path given by $ESHOP_OCELOT_VOLUME_SPEC.
If $ESHOP_OCELOT_VOLUME_SPEC is empty or not defined, use /app/configuration as the mount path.
That gives a user the opportunity to override the default path with a path of their choosing. Since the substitution is done by docker-compose when it reads the file, the variable has to be set in the environment of the shell that runs compose, e.g.:
ESHOP_OCELOT_VOLUME_SPEC=/my/path docker-compose up ...
ESHOP_OCELOT_VOLUME_SPEC is an environment variable. Its value may be exported/set somewhere in the code or on the instance. ESHOP_OCELOT_VOLUME_SPEC is replaced with its value at that point, which is why you were not able to see ESHOP_OCELOT_VOLUME_SPEC in docker, only its value.
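The ${VAR:-default} form compose uses here follows shell parameter expansion, so its semantics can be checked in a plain shell (the variable name and paths are the ones from the compose file):

```shell
# ${VAR:-default} expands to the default when VAR is unset or empty
unset ESHOP_OCELOT_VOLUME_SPEC
echo "${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}"   # -> /app/configuration

# Once the variable is set, its value wins over the default
ESHOP_OCELOT_VOLUME_SPEC=/my/path
echo "${ESHOP_OCELOT_VOLUME_SPEC:-/app/configuration}"   # -> /my/path
```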
