How to rollback database in docker container using elixir phoenix releases and the example MyApp.Release.rollback in the guides - docker

I cannot figure out how to roll back a database when trying to do it through a Phoenix app running in a Docker container. I am trying to simulate locally what it would be like when migrating on a remote server.
I am running it locally by running:
docker run -it -p 4000:4000 -e DATABASE_URL=ecto://postgres:postgres@host.docker.internal/my_app_dev -e SECRET_KEY_BASE=blahblah my-app-tag:v1
I view the running containers with:
docker ps
I bash into the container
docker exec -it 8943918c8f4f /bin/bash
cd into app/bin
cd bin
try to roll back
./my_app rpc 'MyApp.Release.rollback(MyApp.Repo, "20191106071140")'
=> 08:43:45.516 [info] Already down
If the rollback had actually run, the application should blow up as I exercise the affected features. But it doesn't.
If I try eval
./my_app eval 'MyApp.Release.rollback(MyApp.Repo, "20191106071140")'
=>
08:46:22.033 [error] GenServer #PID<0.207.0> terminating
** (RuntimeError) connect raised KeyError exception: key :database not found. The exception details are hidden, as they may contain sensitive data such as database credentials. You may set :show_sensitive_data_on_connection_error to true when starting your connection if you wish to see all of the details
(elixir) lib/keyword.ex:393: Keyword.fetch!/2
(postgrex) lib/postgrex/protocol.ex:92: Postgrex.Protocol.connect/1
(db_connection) lib/db_connection/connection.ex:69: DBConnection.Connection.connect/2
(connection) lib/connection.ex:622: Connection.enter_connect/5
(stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: nil
** (EXIT from #PID<0.163.0>) shutdown
I am trying to ensure I know how to deploy an application to a remote host (Heroku, AWS), have it automatically migrate on every deploy, and also have the option to run a command to roll back one step at a time.
I am not finding any information. The debugging above is the first step in creating this migrate/rollback functionality for a remote server, tested on my local machine first.
The migrate/rollback code is taken directly from https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands
Any help/direction would be greatly appreciated.
Thank you

In the first place, the rpc call should succeed. Make sure the migration in question is indeed up before running my_app rpc. Note that the second argument is the version to revert to, not the migration to revert.
Regarding eval: one should start, or at least load, the application before any attempt to access its config. As per the documentation:
You can start an application by calling Application.ensure_all_started/1. However, if for some reason you cannot start an application, maybe because it will run other services you do not want, you must at least load the application by calling Application.load/1. If you don't load the application, any attempt at reading its environment or configuration may fail. Note that if you start an application, it is automatically loaded before started.
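In concrete terms, the difference looks like this (a sketch; :my_app stands in for your OTP application name):
# Load only: makes the app's config readable without starting any processes
Application.load(:my_app)
Application.fetch_env!(:my_app, :ecto_repos)

# Start: loads first, then boots the whole supervision tree,
# including services you may not want in a one-off task
Application.ensure_all_started(:my_app)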
For the migration to succeed, one needs the Ecto SQL application (:ecto_sql, whose application module is Ecto.Adapters.SQL.Application) started and your application loaded (to access configs).
That said, something like this should work.
def my_rollback(version) do
  Application.load(:my_app)
  Application.ensure_all_started(:ecto_sql)

  Ecto.Migrator.with_repo(MyApp.Repo, &Ecto.Migrator.run(&1, :down, to: version))
end
And call it as
./my_app eval 'MyApp.Release.my_rollback(20191106071140)'
Still, rpc should start the required applications out of the box (and it indeed does, according to the message you get back), so I'd suggest you triple-check that the migration you are asking to bring down is already up, and that you pass the proper version to downgrade to.
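If it helps to verify that from inside the container, a small status helper could be added to the release module (migration_status/0 is a hypothetical name, not from the Phoenix guide; Ecto.Migrator.migrations/1 returns {status, version, name} tuples):
# Hypothetical helper, assuming the MyApp.Release conventions from the guide
def migration_status do
  Application.load(:my_app)
  Application.ensure_all_started(:ecto_sql)

  Ecto.Migrator.with_repo(MyApp.Repo, fn repo ->
    repo
    |> Ecto.Migrator.migrations()
    |> Enum.each(fn {status, version, name} ->
      # Prints e.g. "up    20191106071140    add_users"
      IO.puts("#{status}\t#{version}\t#{name}")
    end)
  end)
end
Called as ./my_app eval 'MyApp.Release.migration_status()', this shows whether the version you want to revert is actually up.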

There were two issues here, and thanks to @aleksei-matiushkin I got it working.
The first issue was not having Application.load(:my_app) in the function.
The second issue was that I was calling the rollback functions (both mine and @aleksei-matiushkin's) with a string rather than an integer. Now I call it like: ./my_app eval 'MyApp.Release.my_rollback(20191106071140)'
The file now looks like this:
defmodule MyApp.Release do
  @app :my_app

  def migrate do
    for repo <- repos() do
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  def rollback(repo, version) do
    setup_for_rollback()
    {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
  end

  def my_rollback(version) do
    setup_for_rollback()
    rollback(MyApp.Repo, version)
  end

  defp setup_for_rollback() do
    Application.load(@app)
    Application.ensure_all_started(:ecto_sql)
  end

  defp repos do
    Application.load(@app)
    Application.fetch_env!(@app, :ecto_repos)
  end
end
I am not sure if this is an idiomatic implementation. I did not have any issues when excluding Application.ensure_all_started(:ecto_sql), but since it was recommended I'll leave it in.
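With that module in the release, both entry points can be exercised from the container like this (release name assumed to be my_app, version illustrative):
# Run all pending migrations, e.g. on every deploy
./my_app eval 'MyApp.Release.migrate()'

# Roll back, passing the target version as an integer, not a string
./my_app eval 'MyApp.Release.my_rollback(20191106071140)'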

Related

Looking for a convenient way to start and stop applications with docker-compose

For each of my projects, I have configured a docker development environment consisting of several containers. I often switch between projects. That requires stopping one set of containers and starting another. I currently do it like this:
$ cd project1
$ docker-compose stop
$ cd ../project2
$ docker-compose up -d
So I need to remember which application is currently running, cd into the directory where its docker-compose.yml is, stop it, then remember what other project I want to run, cd there and start it.
Is there a better way? Like a utility that remembers which multicontainer applications I have, can stop the currently running one and run another one without manual cding and docker-composeing?
(By the way, what's the correct term for a set of containers hosting parts of a single application?)
I hope docker-compose-ui will help you manage your applications.
I think the real problem here is this:
That requires stopping one set of containers and starting another.
You shouldn't need to stop one project to start another.
Instead of mapping to the same host ports I would not map any ports at all. Then use a script to lookup the IP of the container, and connect directly to that:
#!/bin/bash
# Look up the container's IP address on its network(s); the tr call trims
# the whitespace the template emits (assumes a single network)
cip=$(docker inspect -f '{{range $key, $value := .NetworkSettings.Networks}} {{ $value.IPAddress}} {{end}}' "$1" | tr -d '[:space:]')
This will look up the container ip. Combine that with a command to open the url:
url="http://$cip:8080/"
xdg-open "$url" || open "$url"
All together this will let you run the application without having to map any host ports. When host ports don't exist, you don't have to stop other projects.
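For example, if the snippets above are saved together as open-container.sh (name assumed), usage would look like:
$ ./open-container.sh project2_web_1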
If you are reasonably proficient in Ruby, you can use scaffolding for this.
A barebones example using threads (to start several docker-compose sessions from one process and then stop them all together):
require 'docker-compose'

threads = []
project_paths = %w(/project/path1 /project/path2 /project/path3 /project/path)

project_paths.each do |path|
  # One thread per project; each thread owns a docker-compose session
  threads.push(Thread.new do
    Docker::Compose::Session.new(dir: path).up
  end)
end

begin
  threads.each do |thread|
    thread.join
  end
rescue SystemExit, Interrupt
  # CTRL+C: take all sessions down together
  threads.each do |thread|
    thread.kill
  end
rescue Exception => e
  handle_exception e
end
It uses the docker-compose gem and threads.
Just set project_paths to the folders of your projects. If you want to end them all, use CTRL+C.
You can of course go beyond that, using a daemon and trying to start/stop specific ones by giving them names and such, but I guess as a starting point for scaffolding, that should be enough.

Rails - Run system command in production

I'm trying to run a C++ executable in my Rails app that is placed in a folder called "algo", like this:
result = `cd algo && ./my_main #{str} -1 -1 #{id}`
In development it works flawlessly, but in production in the cloud it does not run.
Consider that:
1) In the cloud (a virtual machine), I can run the same executable without problems from the console terminal by navigating through the Rails application folders; it only fails when I try to run it from the Rails application
2)
Rails.logger.info result
Returns nothing
3)
Rails.logger.info `pwd`
Does return the current folder of the project
4)
Rails.logger.info $?
Only returns: pid 35314 exit 127
5)
Rails.logger.info File.exist?("algo/my_main")
Returns true
6)
In the config/environments/production.rb the log level is config.log_level = :info
7)
No error appears in log/production.log, like the one you would see in development in the terminal
8)
I also tried other commands: system(), exec(), and %x(), with the same result
9)
Finally, I ran sudo chmod -R 777 on the virtual machine, on the parent folder of the Rails app folder. I think that is implicit in point 1, but I mention it for clarity.
You should always use absolute paths for any code that will be executed by a script. The PATH variable may be different for the user executing the script than it is for the user that you use (your exit status 127 means "command not found"), and it's much better to be 100% precise about the file path you want than to rely on PATH.
Along the same lines, make sure the user that runs the Rails server has execute permission on the executable. If in doubt, log in as that user and attempt to run it.
You also need to escape both str and id for security reasons. Even if these variables are not currently derived in any way from submitted parameters, there's always a possibility that whatever function contains this code might be executed with user-submitted variables at some point. Basically, it's better to be safe than sorry, because this is the kind of security hole that could allow anyone on the Internet to execute arbitrary code on your server.
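A minimal sketch combining both suggestions (it assumes the algo folder sits directly under the Rails root); Shellwords is part of Ruby's standard library:
require 'shellwords'

# Absolute path instead of relying on the working directory or PATH
algo_dir = Rails.root.join('algo').to_s

# Escape anything that could ever carry user input
safe_str = Shellwords.escape(str)
safe_id  = Shellwords.escape(id.to_s)

result = `cd #{Shellwords.escape(algo_dir)} && ./my_main #{safe_str} -1 -1 #{safe_id}`
Rails.logger.info "my_main exited #{$?.exitstatus}: #{result}"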

GitlabHQ - W denied for rails

At work I've been tasked with setting up our Git server with a front end, and I found GitlabHQ, which looks amazing.
I've installed it all semi-successfully, but I cannot push my repos at all, even though it tells me I need to push them.
Since I've never used GitLabHQ before, my first question: is this message normal when adding projects?
You should push repository to proceed.
After push you will be able to browse code, commits etc.
And every time I run
git push -u origin master
I get this,
W access for focus DENIED to rails
(Or there may be no repository at the given path. Did you spell it correctly?)
fatal: The remote end hung up unexpectedly
Is anyone able to help, since I can't expect the team to keep SSHing in?
Thanks.
EDIT:
Server = Ubuntu Server 11.10 fully updated and I followed these instructions: https://github.com/gitlabhq/gitlabhq/wiki/V2.0-easy-setup-for-ubuntu
This was fixed by re-running the install (it must have failed silently the first time) and killing the process once it had started, with:
lsof -i :3000
kill -9 {whatever PID was returned from above}
Then re-run via bundle (the command differs between production and development); I use this:
bundle exec rails s -e production -d

Postgres Server error -> PGError: could not connect to server

I get the error below when trying to start my rails app on the localhost:
PGError (could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
From what I have read it sounds like this is most likely a problem in connecting to the Postgres server, and may indicate that it is not started?
It all started when I was attempting my first (yay noobs!) merge using git. There was a conflict (having to do with the Rubymine workspace.xml file), and I started to open up a conflict resolution program. It then seemed that there was really no need to track workspace.xml at all and so I quit from the resolution program, intending to run "git rm --cached" on the file. I think in quitting the program something went foul, and I ended up restarting, before untracking the file, and then completing the merge. Further evidence that something was gummed up is that my terminal shell didn't open up correctly until I restarted the machine.
Now, as far as I can tell, everything in the merge went fine (there were trivial differences in the two branches anyway), but for the life of me I can't seem to get rid of this PGError. If it is as simple as starting the server, then I'd love help on how to do that.
(Other context: OS X, Rails 3, postgresql83 installed via MacPorts, pg gem.)
EDIT - I have been trying to start up the server, but am not succeeding. e.g., I tried:
pg_ctl start -D /opt/local/var/db/postgresql83/defaultdb
This seems to be the right path for the data (it finds the postgresql.conf file), but the response I get is "cannot execute binary file."
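One way to narrow down a "cannot execute binary file" error (a guess, since MacPorts installs version-specific binaries under /opt/local/lib/postgresql83/bin) is to check which pg_ctl is actually being invoked and what kind of file it is:
which pg_ctl
file $(which pg_ctl)
file /opt/local/lib/postgresql83/bin/pg_ctl
If file reports a binary for a different architecture, or something that is not a Mach-O executable at all, that would explain the message.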
Try sudo port load postgresql83-server - this should work with the latest 8.3 port from MacPorts.
If this doesn't work, try sudo port selfupdate; sudo port upgrade outdated and then try again.
Note - this may take absolutely ages.
and may indicate that it is not started?
Yes, sounds like the server is not running on your local machine.
See the description of this error in the PostgreSQL manual:
http://www.postgresql.org/docs/8.3/static/server-start.html#CLIENT-CONNECTION-PROBLEMS
To start the server, try something along the following lines (adjust for the pgsql version number and logfile):
sudo su postgres -c '/opt/local/lib/postgresql84/bin/pg_ctl -D /opt/local/var/db/postgresql84/defaultdb -l /opt/local/var/log/postgresql84/postgres.log start'
To stop the server,
sudo su postgres -c '/opt/local/lib/postgresql84/bin/pg_ctl -D /opt/local/var/db/postgresql84/defaultdb stop'
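To confirm whether the server is actually running before or after these commands, something like this should work (paths assumed to match the MacPorts layout above):
# Is a postmaster process alive?
ps aux | grep postgre[s]

# Ask pg_ctl directly (adjust the data directory for your version)
sudo su postgres -c '/opt/local/lib/postgresql84/bin/pg_ctl -D /opt/local/var/db/postgresql84/defaultdb status'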

Optimizing Rails loading for maintenance scripts

I wrote a script that does maintenance tasks for a Rails application. The script uses a class that uses models defined in the application. Just as an example, let's say the application defines a model User, and my class (used within the script) sends messages to it, like User.find(id).
I am looking for ways to optimize this script, because right now it has to load the application environment: require '../config/environment'. This takes ~15 seconds.
Had the script not used the application codebase to do its job, I could have replaced the model abstractions with raw SQL. But unfortunately I can't do that, because I would have to repeat code in the script that is already present in the codebase. Not only would this violate the DRY principle and require a lot of work; the script would also not be very maintainable in case the model methods I am using change.
I would like to hear ideas how to approach this problem. The script is not run from the application itself, but from the shell (with Capistrano for instance).
I hope I've described the problem clear enough. Thank you.
Could you write a little daemon that sits in a read on a pipe (or named FIFO, or Unix domain socket, or, with more complexity, a TCP port) and accepts 'commands' that would be run on your database?
#!/usr/bin/ruby
require '../config/environment'

# Block on the fifo; each line written to it is treated as a command
while (true) do
  File.open("/tmp/fifo", "r") do |f|
    f.each_line do |line|
      case line
      when "cleanup" then puts "clean!"
      when "publish" then puts "published!"
      else puts "invalid command, ignoring"
      end
    end
  end
end
You could start this thing up with vixie cron's @reboot specifier, or you could run it via Capistrano commands, or run it out of init or init scripts. Then you write your Capistrano rules (that you have now) to simply echo commands into the fifo:
First,
mkfifo /tmp/fifo
In one terminal:
$ ./env.rb
In another terminal:
$ echo -n "cleanup" > /tmp/fifo
$ echo -n "publish" > /tmp/fifo
$ echo -n "go away" > /tmp/fifo
The output in the first terminal looks like this:
clean!
published!
invalid command, ignoring
You could make the matching as friendly (perhaps allow plain echo, rather than require echo -n as my example does) or unfriendly as you want. And the commands that get run can of course call into your model files to do their work.
Please make sure you choose a good location for your fifo -- /tmp/ is probably a bad place, as many distributions clear it on reboot. Also make sure you set the fifo owner and permission (chown and chmod) appropriately for your application -- you might not want to allow your Firefox's flash plugin to write to this file and command your database.
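For example, a more careful setup might look like this (the path, appuser, and deploy group are illustrative assumptions):
# A location that survives reboots, unlike /tmp
sudo mkdir -p /var/lib/myapp
sudo mkfifo /var/lib/myapp/maintenance.fifo

# The daemon's user may read and write; the deploy group may only write;
# everyone else gets nothing
sudo chown appuser:deploy /var/lib/myapp/maintenance.fifo
sudo chmod 620 /var/lib/myapp/maintenance.fifo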
