I am using dokku-alot to deploy my Rails 4 app to my staging server and everything is working just swell.
One requirement of my current project concerns seed data. I've had to keep my seeds.rb file out of version control because it contains sensitive information. However, I can't figure out how to add the seeds.rb file to the container after a build.
I've tried ssh root@myhost ap_name, which gets me into the VM, but even if I scp the files in there, the container doesn't see them. How can I drop a few files into the location where my Rails code lives inside the Docker image?
Depending on how much information is in your seeds.rb file, you could use environment variables. This is the solution I ended up using.
You basically set the variable: config:set my-app SECRET=whateversupersecretinfo. Then in your code, you can read that variable with ENV['SECRET']. (This works pretty much the same way on Heroku.) Not sure if that would solve your use case, but leaving this answer here for posterity.
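For example, a minimal sketch of the idea (the dokku prefix on the command and the AdminUser model in seeds.rb are assumptions for illustration; SECRET is just an example name):

# On the host, set the variable once:
#   dokku config:set my-app SECRET=whateversupersecretinfo

# db/seeds.rb -- AdminUser is a hypothetical model
AdminUser.create!(
  email: "admin@example.com",
  api_key: ENV.fetch("SECRET") # raises KeyError if the variable was never set
)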
Side note: in Node.js you can read these variables with process.env.SECRET.
Related
I have a Rails app with some scripts that ingest data, and these scripts are run with rails runner. The data files I need to read live in my Rails codebase, e.g. in resources/data. I need to be able to run these scripts in local dev or on Heroku, but I'm having difficulty coming up with a clean and permanent way for the code to figure out the path to these data files in each environment. Referring to RAILS_ROOT on Heroku didn't work for me, and I thought that was going to be the best approach. (I know that on Heroku the path is /app/resources... but that doesn't really help solve the problem.)
What's the best way to do this? I've seen one approach that looked for tell-tale strings in ENV but that's a hack.
Thanks
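One common approach, offered here only as a sketch: build the paths off Rails.root, which resolves to the application's root directory at runtime both locally and on Heroku. The file name below is hypothetical:

# e.g. inside a script run with rails runner
data_path = Rails.root.join("resources", "data", "cities.csv") # cities.csv is a made-up example
File.foreach(data_path) do |line|
  # ingest each line here
end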
I'm converting my RoR app to use Capistrano v3. I have a number of configuration files that are generated by ERB. Most of these files, like /etc/logrotate.d/app_name, are referenced from outside my app, so I like the idea of linking them into my shared/config directory. Capistrano supports managing linked files via the linked_files array. So far so good. But the files to be linked don't technically exist until I run ERB, and Capistrano runs deploy:check:linked_files as the first step of :starting, at which point the files don't exist and the check fails.
So my question is: what's a good way to handle this? Do I check empty config files into my config directory, let Capistrano link them to shared, and then overwrite them via ERB at a later stage? That doesn't feel right. I can't generate them before the :starting task because on an initial deploy the source tree isn't there yet. Any suggestions?
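One possible approach, sketched here under assumptions rather than as a known-good recipe: render the ERB templates locally on the deploy machine and upload the results into shared/config before Capistrano runs the linked_files check. The template path and file names are hypothetical:

# lib/capistrano/tasks/config.rake (hypothetical location)
require "erb"
require "stringio"

namespace :config do
  desc "Render ERB templates into shared/config so deploy:check:linked_files passes"
  task :render do
    on roles(:app) do
      # Read and render the template on the machine running cap...
      template = ERB.new(File.read("config/deploy/templates/logrotate_app_name.erb"))
      rendered = StringIO.new(template.result(binding))
      # ...then upload the result into the shared directory on the server.
      upload! rendered, "#{shared_path}/config/logrotate_app_name"
    end
  end
end

# Run it before Capistrano verifies the linked files exist.
before "deploy:check:linked_files", "config:render"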
I have a Redmine (www.redmine.org) installation pushed up onto Heroku (cedar stack). On my local instance of Redmine, file uploads work like this: the database simply stores some data about the file, including a name and the location of the file on disk, and the file itself is stored on disk under [app-location]/files (Redmine is a Ruby on Rails application). When my Redmine project is pushed to Heroku, the files directory is nowhere to be found. From what I've read about Heroku's filesystem, this is no surprise. But what is surprising and confusing is that file uploads still work, and I didn't set up S3, which is the common recommendation for file uploads on Heroku. I checked the Heroku database to get the data about the file upload.
Here are the steps I took to locate the file.
heroku run rails c
and – to get the location of the most recent file – ran:
Attachment.last.diskfile
which returned:
=> "/app/files/2014/06/140610184025_Very-Basic-Globe-icon.png"
This path simply does not exist on the Heroku instance (using heroku run bash and listing directories or running a find). I also downloaded a dump of the Heroku database and imported it locally. The database data shows up on my local instance, but the file can't be found (no surprise).
So my questions are:
Where is the Heroku instance storing the files really?
Is there a way for me to back those files up locally without relying on Amazon S3?
This app should remain fairly small, so I'm not concerned about massive scalability; I just want to be able to retrieve the uploaded files if I ever need them.
I know this question is a bit old, and you may have already found a solution, but just in case other people stumble on this question:
Heroku really is storing the files where it says it is. When you run heroku run bash, Heroku spins up a one-off dyno to run the command, which means you are not given a command prompt in the dyno that is actually running your app. This is why you are not able to find the file you're looking for.
There are currently no official add-ons that support backing up physical files (only databases), but you could write your own custom script to back up your data to wherever you choose (S3 or otherwise). To do so, you will likely need to use Heroku Scheduler to run your backup script in a cron-like way.
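A minimal sketch of such a script as a rake task (the task name and archive destination are assumptions, and keep in mind that a one-off dyno gets its own copy of the filesystem, so the task has to run somewhere the uploaded files are actually visible):

# lib/tasks/backup.rake -- hypothetical task name
namespace :backup do
  desc "Archive the uploaded files so they can be copied to external storage"
  task files: :environment do
    archive = "/tmp/files-#{Time.now.strftime('%Y%m%d%H%M%S')}.tar.gz"
    # the directory to archive is an assumption; adjust it to where your uploads live
    system("tar", "-czf", archive, "-C", "/app", "files") || abort("tar failed")
    # push the archive to storage of your choice (S3, SFTP, etc.) before the dyno exits
    puts "wrote #{archive}"
  end
end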
To give you some context, I'm trying to use Figaro to add environment variables safely, without having to worry about security risks. The problem is that I can't seem to get Engine Yard to play nice in production.
I went and did a touch application.yml and then vim application.yml, i, and command+v to paste in the API keys and whatnot. I know the ENV['VARIABLES'] work, because in development all my RSpec and Cucumber tests (which use the APIs) pass.
When I've got everything ready, I add this to the .gitignore:
# Ignore application configuration
/config/application.yml
Afterwards, I deploy the site. I open it up and data isn't going to the APIs anymore. OK...
I cd into config and discover application.yml isn't there anymore. I paste it back in... and redeploy the site, since now it knows to ignore that file, but I'm not seeing changes on production. I check back... and it's gone again!
Stumped on what's going on.
Simply putting a file into your deployed application's filesystem will not work because you get a clean environment each time you deploy. EngineYard cannot know that you want that particular file copied to that particular location without a little bit of extra work.
Their official recommendation is to put your YAML configuration files in /data/<app>/shared/config and symlink them to /data/<app>/current/config each time you deploy using deploy hooks.
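A sketch of what that could look like as an Engine Yard deploy hook (the choice of hook file and the literal paths are placeholders, as in the paragraph above; before_restart is one possible hook that runs late in the deploy, after the new release is in place):

# deploy/before_restart.rb
# Point the release's config at the shared application.yml kept outside version control.
run "ln -nfs /data/<app>/shared/config/application.yml /data/<app>/current/config/application.yml"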
So, I have an app deployed on Heroku, and I'm trying to populate its database through a script I wrote. I have the data in a text file, and the script runs through the file and populates the database. I don't want to push the data file to the Heroku server, since it's a very large file.
Is there any way to do this on Heroku? It works fine locally, but I can't get it to work on the Heroku server.
I've tried
heroku run rails runner PATH/TO/SCRIPT LOCAL/PATH/TO/DATABASE --app my_app
to no avail.
To run a local script on Heroku:
irbify.rb script.rb | heroku run rails console --app=my_app
irbify.rb is a silly tiny script I wrote to convert a script to a single eval statement.
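The original irbify.rb isn't included in the post; as an assumption, the idea can be as small as wrapping the whole file in one eval so it survives being piped into a console:

# irbify.rb -- a guess at the shape of such a script, not the author's actual code
source = File.read(ARGV.fetch(0))
puts "eval(#{source.dump})"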
Regarding passing data: you can serialize it in some form and put it inside the script.
Hope it helps someone.
UPDATE: this does not work well for anything beyond trivial datasets.
You can also upload your script to a gist and then do:
require "open-uri"
binding.eval(open("your gist raw url").read)
I had to use global variables ($dollar_prefixed) since local variables would not survive into the eval context (I was using Pry); otherwise it went well.