Rails 5.2 credentials + asset precompile

I have a continuous integration pipeline that takes a Rails app and packages it as a Docker image.
As one of the steps of this packaging process, I want to precompile assets.
I was doing this on Rails 5.1, where I had to provide a dummy SECRET_KEY_BASE to let it go through:
SECRET_KEY_BASE=1 RAILS_ENV=production rails assets:precompile
I am moving to Rails 5.2 now and want to start using credentials. I am trying the following command:
RAILS_ENV=production rails assets:precompile
If I don't set RAILS_MASTER_KEY, it shows me an error:
Missing encryption key to decrypt file with. Ask your team for your
master key and write it to /home/config/master.key or put it in the
ENV['RAILS_MASTER_KEY'].
If I provide a dummy (incorrect) RAILS_MASTER_KEY, it complains that it can't decrypt the credentials.
I don't want to give the real RAILS_MASTER_KEY to CI.
So the question is: how can I compile assets without it, and what are the workarounds?

I'm not seeing a solution either. Another way would be to set config/environments/production.rb to contain the line:
config.require_master_key = false
and continue to use SECRET_KEY_BASE=1 RAILS_ENV=production rails assets:precompile.
I haven't found a better way. At least this way seems better than maintaining a fake master key.
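For reference, a minimal sketch of the two pieces this answer combines (paths and the dummy value are just what's already used above):
# config/environments/production.rb
Rails.application.configure do
  # Don't abort boot when config/master.key / RAILS_MASTER_KEY is absent,
  # e.g. during asset precompilation on CI.
  config.require_master_key = false
end
Then, in the CI step:
SECRET_KEY_BASE=1 RAILS_ENV=production bundle exec rails assets:precompile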

I created a fake credentials.yml.enc with a matching fake RAILS_MASTER_KEY, and I use that pair when precompiling assets.
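One possible way to generate such a throwaway pair (a sketch; run it in a scratch checkout so the real files aren't overwritten):
# Remove any existing pair, then let Rails generate a fresh one.
# EDITOR=true makes the edit step a no-op, so the command just creates
# config/master.key and config/credentials.yml.enc.
rm -f config/credentials.yml.enc config/master.key
EDITOR=true bundle exec rails credentials:edit
# Commit the fake credentials.yml.enc for CI and pass the fake key:
RAILS_MASTER_KEY="$(cat config/master.key)" RAILS_ENV=production bundle exec rails assets:precompile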

We can easily solve the issue in Docker 1.13 and higher (assuming your CI also runs in a Docker container) by passing the master.key to the CI container using Docker secrets. Note that this only works in Docker Swarm mode, but a single Docker host can also act as a swarm node. To turn a single node (for example, your local development system) into a swarm node, use the init command and follow the instructions:
docker swarm init
https://docs.docker.com/engine/reference/commandline/swarm_init/
Then, in the docker-compose.yml of your CI container, declare the master.key as a Docker secret and target it to the right position in the container:
version: '3.4'
services:
  my_service:
    ...
    secrets:
      - source: master_key
        target: /my_root/config/master.key
        uid: '1000'
        gid: '1000'
        mode: 0440
    ...
    security_opt:
      - no-new-privileges
    ...
secrets:
  master_key:
    file: config/master.key
https://docs.docker.com/compose/compose-file/
As you can see, we can also assign dedicated access rights to the master.key inside the container and protect it from privilege escalation. For further explanation of Docker Swarm secrets, visit:
https://docs.docker.com/engine/swarm/secrets/#how-docker-manages-secrets
Why should this be the preferred solution to the issue? Your secret master.key is no longer stored in the CI Docker container, but kept securely in the encrypted Raft log of the Docker Swarm infrastructure, and you don't have to perform any acrobatic contortions with fake keys.
BTW, I use this approach to protect my domain-specific expertise in public Docker containers: using the target parameter, each Docker secret can carry generic strings or binary content up to 500 KB in size, including pieces of code that contain sensitive knowledge.
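If you prefer the CLI over the file: entry in docker-compose.yml, the same secret can be created directly on a swarm node (the secret name here matches the compose example above):
docker secret create master_key config/master.key
docker secret ls   # verify it was stored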

Related

Rails .env with gitlab CI/CD

I've run into a problem with environment variables while getting a Rails project through GitLab CI. Currently I'm using the dotenv gem to store my project's credentials, and I've also assigned the environment variables in the GitLab CI environment. For example, database.yml:
host: <%= ENV['DATABASE_HOST'] %>
.env file:
DATABASE_HOST=somehost
gitlab CI variable:
DATABASE_HOST=somehost
I put the .env file in .gitignore and assumed Rails would use the variables from GitLab CI, but I'm getting a database access error. I found a workaround: create local .env files and shared ones, as the dotenv gem's instructions suggest, then put the local files in .gitignore and push the shared files with the credentials for GitLab CI/CD to the repository.
But I'm struggling to understand how secure this approach is. And, in general, what is the best practice for using environment variables/credentials in a Rails project with GitLab CI/CD?
In most cases .env contains sensitive information, so it's not good practice to commit it to any version control system.
https://dev.to/somedood/please-dont-commit-env-3o9h - a detailed guide to the risks involved with .env files
I usually try to avoid dotenv on CI because it adds overhead to the setup. You can conditionally load dotenv for some environments but exclude it from CI/CD, using a custom ENV variable like so:
Dotenv::Railtie.load unless ENV['GITLAB_CI']
and setting it in your GitLab CI variables as GITLAB_CI = true.
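An alternative sketch that achieves the same thing at the Gemfile level (assuming dotenv-rails rather than plain dotenv):
# Gemfile: only bundle dotenv where it's needed, so CI never loads it
gem 'dotenv-rails', groups: [:development, :test]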
Regarding your original question: if you really want to have a .env file, you can follow the recommendation from this answer, https://stackoverflow.com/a/55581164/992000; for documentation reasons I'll post it here too:
Create your environment variables in your gitlab repo config
Create setup_env.sh:
#!/bin/bash
echo DATABASE_HOST=$DATABASE_HOST >> .env
Modify your .gitlab-ci.yml, adding the lines below to your before_script: section:
- chmod +x ./setup_env.sh
- ./setup_env.sh
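Put together, a hypothetical .gitlab-ci.yml excerpt (job name, image, and the final script step are placeholders, not from the original answer):
test:
  image: ruby:2.5
  before_script:
    - chmod +x ./setup_env.sh
    - ./setup_env.sh        # writes the CI variables into .env
  script:
    - bundle exec rspec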

Rails: Use environment variables defined with Figaro in Docker

If I'm not wrong, it seems that variables defined with Figaro are not available in the Docker container.
I have env files to configure the PostgreSQL DB:
POSTGRES_USER=ENV['db_user']
POSTGRES_PASSWORD=ENV['db_password']
POSTGRES_DB=ENV['db_name']
I have the application.yml file copied to the container along with all the other Rails app files (I verified it with ls in the container shell).
Is this normal behaviour?
I also faced this issue when working on a Rails application.
The thing to note here is that Docker reads environment variables from a .env file or an env_file (my_variables.env). Environment variables placed in .yml or .yaml files (application.yml) are not recognized by Docker at runtime. Here's a link to Docker's official documentation that explains this further: The “.env” file
Examples of such environment variables are database connection settings, which are required during the application startup process (see the sketch after this list), like:
database_name
database_username
database_password
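A minimal sketch of wiring those startup variables through Docker Compose instead of application.yml (service name, image, and file names are placeholders):
# docker-compose.yml
services:
  web:
    image: my_rails_app
    env_file:
      - my_variables.env   # plain KEY=value lines, read by Docker at startup
# my_variables.env would contain, e.g.:
#   database_name=myapp_production
#   database_username=myapp
#   database_password=secret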
However, you can still use the Figaro gem for defining variables that are not required during application startup, like:
RAILS_MASTER_KEY
That's all.
I hope this helps.

What is the purpose of this information in a separate .yml file

I'm pretty new to this, and I was curious why this information may have been given to me in a separate .yml file to be used in a RoR app.
I assumed it was info to put into my bash profile, as the app itself has corresponding environment variables.
BASE_URL: 'http://localhost.com:5000'
development:
  MAX_THREADS: '1'
  PORT: '5000'
  WEB_CONCURRENCY: '1'
test:
I'm also curious why you would want to set your URL differently, as the information suggests.
Thanks a bunch.
I'd say changing the default port is a matter of preference, unless there's another part of the stack the development team likes to leave running at 3000 by default, for example a Node.js server or other projects.
The .yml file you've been given should be picked up when running bundle exec <command>, but it is not part of your bash environment variables.
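Assuming the file is a Figaro-style config/application.yml (the question doesn't name the gem, so this is a guess), the values surface as ordinary environment variables inside the app:
# Anywhere in the Rails app, once the YAML has been loaded:
ENV['PORT']           # => '5000' in development
ENV['MAX_THREADS']    # => '1'
ENV['BASE_URL']       # => 'http://localhost.com:5000'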

Dokku - Persistent storage

What is the best method to set up persistent storage for a Rails/Dokku app? The Dokku docs don't seem to say anything about the subject. When I used Google to search the docs site, the only thing it returned was the dokku-volume-plugin, which I've tried without success.
I can create a volume for my app:
dokku volume:add myapp /public
but nothing gets written to the volume.
Is this the current (2015) best way to set up persistent storage with Dokku? If it is, am I missing something?
I use dokku-volume-plugin without any problems. Here's how it works.
The dokku volumes:add myapp /app/uploads/ command adds a volume that persists, on the host, the files stored inside your app's /app/uploads/ directory. If your app writes into that directory, the files are instead written on the host; they are actually stored in the folder /home/dokku/.o_volume/.
From what I can tell, the only difference between your command and mine is the trailing slash; dokku volume:add myapp /public/ should fix your issue.
Alternatively, you could try an Amazon S3 based solution.
For the archives, so that nobody walks down the wrong path:
The current (2016, Dokku > 0.5) path has changed. I used @mixxorz's approach in the past with success, but as of now the built-in storage plugin has taken over:
(... ssh dokku@host || dokku ...) storage:mount <app> /var/lib/dokku/data/storage:/app/public/uploads
It's well documented at http://dokku.viewdocs.io/dokku/dokku-storage/ .
The concepts stay the same.
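For example, a sketch run on the Dokku host (app name and host subdirectory are placeholders; commands per the dokku-storage docs linked above):
mkdir -p /var/lib/dokku/data/storage/myapp
dokku storage:mount myapp /var/lib/dokku/data/storage/myapp:/app/public/uploads
dokku ps:rebuild myapp   # restart the app so the mount takes effect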

Auto deploy after committing

I'm developing in Ruby on Rails and write some articles with Hexo.
My project's source code repo is on a GitLab server.
So my deploy flow is:
Commit the production code to GitLab from my workspace.
Log in to the web server, then pull the production code from GitLab.
Restart the web server, or regenerate the articles for Hexo.
Is there any way to renew the web server in one step?
Hexo runs on Node, so I'm not sure how Ruby on Rails fits in. I would recommend you take a look at the Hexo documentation, which has a few plugins for deployment.
Pasting an excerpt that's valid at the moment:
Edit _config.yml:
deploy:
  type: git
  message: [message]
  repo:
    gitlab: <repository url>,[branch]
    gitcafe: <repository url>,[branch]
It's better to set up a GitLab hook that calls the hexo generate command.
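A hypothetical GitLab CI job for the Hexo side (job name, image, and branch are assumptions, and hexo is assumed to be a local dependency in package.json):
deploy_blog:
  image: node:8
  only:
    - master
  script:
    - npm install
    - ./node_modules/.bin/hexo generate
    - ./node_modules/.bin/hexo deploy   # uses the deploy: section of _config.yml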
