I created an Azure DevOps project that deploys my code from GitHub. My project is a Rails app that runs on Docker. When I create the Azure DevOps project, it creates the CI/CD pipelines for me. The deployment works, but I can't figure out how to automatically migrate the database when we deploy. I know I can do it manually, but I'd prefer not to, since we might forget when we deploy.
I tried to run the following commands in the "Post Deployment Actions":
rake db:migrate
bundle exec rake db:migrate
rbenv exec bundle exec rake db:migrate
docker-compose run web rake db:migrate
I've also generated my own kuduscript (using the kuduscript tool) and added a line for the migration, but it did not work. I don't know whether my deployment script isn't being read at all or whether that line just doesn't work.
Am I missing something? Should I try to figure out how to migrate through Docker instead? I've looked at all these links but they all run the migration manually.
https://learn.microsoft.com/en-us/azure/app-service/containers/quickstart-ruby
https://medium.com/paris-rb/deploying-your-rails-postgresql-app-on-microsoft-azure-180f8a9fab47
https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-ruby-postgres-app
Not sure if you ever figured this out, but I just had to do this for a migration I'm working on. The best way to do what you're after, from what I've found, is to set up a stage in your release pipeline [screenshot of the stage setup omitted] where, basically:
Your service connection logs into the container registry,
pulls the image you just built and pushed to the registry, and
uses a docker run command to run your image with all the necessary environment variables and anything else it needs to run properly.
You can then use another task, if needed, to run bundle exec rake db:migrate inside the container via docker exec.
If the migrate task runs successfully, it should exit 0, allowing you to then run your App Service on the latest tagged image with the database changes already applied.
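For illustration, here is a minimal sketch of the shell steps such a stage might run. The registry host, image name, container name, and environment variables are all hypothetical; substitute your own pipeline variables and service connection details.

# hypothetical registry, image, and credentials; adjust to your pipeline's variables
docker login myregistry.azurecr.io -u "$ACR_USER" -p "$ACR_PASSWORD"
docker pull myregistry.azurecr.io/myapp:$BUILD_BUILDID
# start the freshly built image with the env vars the app needs
docker run -d --name migrator \
  -e DATABASE_URL="$DATABASE_URL" \
  myregistry.azurecr.io/myapp:$BUILD_BUILDID
# run the migration inside the running container; a non-zero exit fails the stage
docker exec migrator bundle exec rake db:migrate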
Scope of problem
Rails 4.2.11
Ansible 2.1.1.0
Ubuntu 14
Ubuntu user: deploy
I have a Rails app which I deploy to an Ubuntu server with an Ansible script.
My problem is understanding why the Rails app creates log files with root permissions when the Ansible script executes rake tasks.
In my example it's running rake db:migrate, but the same behaviour occurs with rake assets:precompile.
As the screenshot (in the original post) shows, the application is deployed as user 'deploy', but after the rake task runs, two log files are created with root permissions. After a restart of the web server it crashes with a permission denied error, so I have to manually change ownership back to deploy:deploy.
The structure of the Rails logger also looks suspicious: note the #dev=IO:<STDERR> value. I've checked another project, and there I see something like #dev=#<File:/var/www/.../log/production.log>.
I tried to explore the Rails 4 source code, but so far I've had no luck understanding from it what is happening. My only idea is that Rails raises an exception when creating the log file, and STDERR becomes the fallback output.
Please help if you have had a similar problem, or point out where I should look.
The rake task runs as the root user.
To answer the question fully, you would need to post the complete Ansible script (or the bundle script), in case you manually added a sudo command or something unusual.
There are several possible places to define the user in Ansible.
Read the become section of the ansible documentation https://docs.ansible.com/ansible/latest/user_guide/become.html
Or give it a try with:
- name: Run db:migrate
  shell: ... rake cmd ...
  become: yes
  become_user: deploy
  become_method: su
Change the become_method to your needs, e.g. sudo, or add become_flags: '-s /bin/sh'.
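To make the effect concrete, the become settings above translate to roughly the following shell invocations on the target host (the application path is hypothetical):

# become_method: su  -- switch to the deploy user via su
su - deploy -c 'cd /var/www/myapp/current && bundle exec rake db:migrate'
# become_method: sudo -- same idea via sudo
sudo -u deploy -H sh -c 'cd /var/www/myapp/current && bundle exec rake db:migrate'

Without become_user, Ansible runs the task as the connecting (or escalated) user, which is why the log files end up owned by root.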
I was wondering if it is possible to run migrations automatically during deployment with Google App Engine. I have been using AWS Elastic Beanstalk for a while, where they ran automatically, but now I am considering moving to Google App Engine for my future projects.
Right now, I must run this command manually:
bundle exec rake appengine:exec -- bundle exec rake db:migrate GAE_CONFIG=app.yml
Thank you
WARNING: As discussed in the comments, there is a race condition in migrations if deployment happens on multiple containers in parallel, because every container will try to run the migration. A solution is being discussed in the comments; I will update this answer when we land on something.
Disclaimer: This answer is not exactly what was asked for, but it solves the same problem, and it works. And from what I can tell from the question, doing it with some App Engine config is not a requirement; the asker just wants the migrations to run automatically.
I will expand on my comment on the question; here is something I tried, and it works. I am a strong believer in KISS (keep it simple, stupid). So instead of trying to figure out App Engine (which I have never used anyway), if I were you, I would take a generic approach: plug into the Rails server boot process and trigger the migrations there. For this we have multiple approaches.
From my understanding of App Engine, and as suggested by the official docs, App Engine has an app.yaml file, and this file has an entry something like:
entrypoint: rails server
So we will use this entrypoint to plug in our code and run migrations before starting the server. To do this:
Make a new file in the bin directory; I named it rails_with_migrations.sh, but you can name it whatever you like.
Give it execute permissions with chmod +x bin/rails_with_migrations.sh.
Put this code inside it:
#!/bin/bash
bundle exec rake db:migrate
bundle exec rails "$@"
Of course you can set whatever RAILS_ENV you want for these commands.
Now in app.yaml, in the entrypoint section, instead of rails server give it bin/rails_with_migrations.sh server, and that should be it. It worked locally; it should work everywhere.
NOTE: In entrypoint: I have bin/rails_with_migrations.sh server; here server is a parameter to the rails command, and you can pass as many parameters as you like, all of which will be forwarded to the rails command via "$@"'s magic. This allows you to pass the port and any other parameters you may need for your environment. It also allows you to run a Rails console locally with bin/rails_with_migrations.sh console, which will also trigger the migrations.
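A couple of invocation examples, assuming the wrapper lives at bin/rails_with_migrations.sh (the bind address and port are illustrative; parameters pass straight through via "$@"):

# boots the server after running migrations
bin/rails_with_migrations.sh server -b 0.0.0.0 -p 8080
# opens a console after running migrations
bin/rails_with_migrations.sh console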
UPDATE1: As per the comment, I checked what happens if the migration fails, and it starts the server even if the migration fails. We can of course alter this behavior in our sh file.
UPDATE2: The shell script with migration error-code handling will look something like:
#!/bin/bash
bundle exec rake db:migrate
if [ $? -eq 0 ]
then
  bundle exec rails "$@"
else
  echo "Failure: migrations failed, please check application logs for more details." >&2
  exit 1
fi
This update prevents the server from starting and makes the script exit with a non-zero code, which should indicate that the command failed.
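An equivalent, slightly more compact variant (my own sketch, not from the original answer) relies on set -e to abort on the first failing command, and exec so the Rails server replaces the shell process and receives signals directly:

#!/bin/bash
set -e                       # abort immediately if any command fails
bundle exec rake db:migrate  # a failure here stops the script with a non-zero exit
exec bundle exec rails "$@"  # replace the shell with the rails process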
I have a Rails 6.0.0.rc1 application (with the appengine gem installed) that I deployed to GCP. Is there a way to log into a remote Rails console on the instance that runs the application? I tried this:
bundle exec rake appengine:exec -- bundle exec rails c
which gives the following output:
...
---------- EXECUTE COMMAND ----------
bundle exec rails c
Loading production environment (Rails 6.0.0.rc1)
Switch to inspect mode.
...
so apparently it executed the command, but closed the connection right after.
Is there an easy way to do this?
As reference: On Heroku this would simply be:
heroku run rails c --app my-application
There are a few steps involved:
https://gist.github.com/kyptin/e5da270a54abafac2fbfcd9b52cafb61
If you're running a Rails app in Google App Engine's flexible environment, it takes a bit of setup to get to a rails console attached to your deployed environment. I wanted to document the steps for my own reference and also as an aid to others.
Open the Google App Engine -> instances section of the Google Cloud Platform (GCP) console.
Select the "SSH" drop-down for a running instance. (Which instance? Both of my instances are in the same cluster, and both are running Rails, so it didn't matter for me. YMMV.) You have a choice about how to connect via ssh.
Choose "Open in browser window" to open a web-based SSH session, which is convenient but potentially awkward.
Choose "View gcloud command" to view and copy a gcloud command that you can use from a terminal, which lets you use your favorite terminal app but may require the extra steps of installing the gcloud command and authenticating the gcloud command with GCP.
When you're in the SSH session of your choice, run sudo docker ps to see what docker containers are presently running.
Identify the container of your app. Here's what my output looked like (abbreviated for easier reading). My app's container was the first one.
jeff#aef-default-425eaf...hvj:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND NAMES
38e......552 us.gcr.io/my-project/appengine/default... "/bin/sh -c 'exec bun" gaeapp
8c0......0ab gcr.io/google_appengine/cloud-sql-proxy "/cloud_sql_proxy -di" focused_lalande
855......f92 gcr.io/google_appengine/api-proxy "/proxy" api
7ce......0ce gcr.io/google_appengine/nginx-proxy "/var/lib/nginx/bin/s" nginx_proxy
25f......bb8 gcr.io/google_appengine/fluentd-logger "/opt/google-fluentd/" fluentd_logger
Note the container name of your app (gaeapp in my case), and run sudo docker exec -it gaeapp bash (substituting your container's name) to open a shell inside the container.
Add ruby and node to your environment: export PATH=$PATH:/rbenv/versions/2.3.4/bin:/rbenv/bin:/nodejs/bin
cd /app to get to your application code.
Set any environment variables that your Rails application expects. For example: export DATABASE_URL='...'
If you don't know what your app needs, you can view the full environment of the app with cat app.yaml.
Run bin/rails console production to start a Rails console in the Rails production environment.
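Condensed into a single session, the in-instance steps look roughly like this (the container name, Ruby version path, and environment variables are the ones from my instance; yours may differ):

sudo docker ps                       # find your app's container name
sudo docker exec -it gaeapp bash     # open a shell inside it
export PATH=$PATH:/rbenv/versions/2.3.4/bin:/rbenv/bin:/nodejs/bin
cd /app
export DATABASE_URL='...'            # plus any other env vars your app expects
bin/rails console production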
I hadn't used Heroku for a while. I heard Heroku changed some things, so I wanted to give it another try.
But after I clicked the "Deploy Branch" button, my app still wasn't working.
So I checked the build log and realized Heroku doesn't seem to run the db:migrate command.
But it did run the assets:precompile command, and I can't find anywhere to click to run db:migrate.
So I have to do it with command-line tools, right?
This is a well-known limitation of Heroku. It won't run your migrations out of the box. However, you can automate it in a couple of ways:
You can write a simple script that first pushes the new code to the Heroku git repository and then runs the migrations (see the sketch after this list). The problem is that you need to run this script locally on your machine.
You can add this buildpack and then set the environment variable DEPLOY_TASKS to db:migrate. You can do this via the UI, via the command line with heroku config:set DEPLOY_TASKS='db:migrate', or you can add everything to app.json so it works out of the box with the deploy button.
You can use the release phase by adding release: rake db:migrate to your Procfile.
Please keep in mind that there are many issues related to migrating your database during deployment. You can read about them in the docs for the release phase.
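For the first option, a minimal sketch of such a script might look like this (the remote name, branch, and app name are hypothetical; adjust to your setup):

#!/bin/bash
set -e                                             # stop if the push fails
git push heroku main                               # push new code; triggers the Heroku build
heroku run rake db:migrate --app my-application    # then run migrations remotely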
I'm using Capistrano to deploy a Rails application. I'm thinking of a situation where there were database changes, so I can't simply cap deploy because the migrations need to run before the code is updated. I realize there's a cap deploy:migrations, but that's a little more automatic than I'd like. I'd like to:
Push the new code to the releases directory, but not update the symlink or restart the application.
ssh into the server, run rake db:abort_if_pending_migrations to confirm that the migrations I want to run are the only pending ones, then run rake db:migrate manually.
Complete the deploy, updating the symlink and restarting the application.
Is there any easy way to do this with the built-in Capistrano tasks, or would I need to write my own deployment steps to accomplish this?
I should mention too that I'm thinking of cases (like adding columns) where the migration can be run on a live database. For more destructive changes I realize I'd need to bring down the site with a maintenance page during the deploy.
Try:
cap deploy:update_code
Do what you described, logging in to the server manually or via cap shell.
cap deploy:symlink deploy:restart
See cap -e deploy:update_code deploy:symlink deploy:restart deploy:shell for more information.
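Put together, the flow looks roughly like this (Capistrano 2 task names; the host and release path are hypothetical):

cap deploy:update_code                # push code to a new release dir; no symlink or restart
ssh deploy@your-server                # hypothetical host
cd /var/www/myapp/releases/<latest>   # hypothetical release directory
bundle exec rake db:abort_if_pending_migrations RAILS_ENV=production   # inspect pending migrations
bundle exec rake db:migrate RAILS_ENV=production
exit
cap deploy:symlink deploy:restart     # finish the deploy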
I hope this will be helpful to you.