I have a Redmine (www.redmine.org) installation pushed up to Heroku (cedar stack). On my local instance of Redmine, file uploads work like this: the database stores some metadata about the file, including its name and its location on disk, and the file itself is stored on disk under [app-location]/files (Redmine is a Ruby on Rails application). When my Redmine project is pushed to Heroku, the files directory is nowhere to be found. From what I've read about Heroku's filesystem, this is no surprise. What is surprising and confusing is that file uploads still work, even though I didn't set up S3, which is the common recommendation for file uploads on Heroku. I checked the Heroku database to get the data about the file upload.
Here are the steps I took to locate the file.
heroku run rails c
and – to get the location of the most recent file – ran:
Attachment.last.diskfile
which returned:
=> "/app/files/2014/06/140610184025_Very-Basic-Globe-icon.png"
This path simply does not exist on the Heroku instance (using heroku run bash and listing directories or running a find). I also downloaded a dump of the Heroku database and imported it locally. The database data shows up on my local instance, but the file can't be found (no surprise).
So my questions are:
Where is the Heroku instance storing the files really?
Is there a way for me to back those files up locally without relying
on Amazon s3?
This app should remain fairly small, so I am not concerned about massive scalability, I just want to be able to get the file uploads if one day needed.
I know this question is a bit old, and you may have already found a solution, but just in case other people stumble on this question:
Heroku really is storing the files where it says it is. When you run heroku run bash, Heroku spins up a one-off dyno to run the command, which means you will not get a command prompt in the dyno that is actually running your app. This is why you are not able to find the file you're looking for.
There are currently no official add-ons that support backing up physical files (only databases), but you could write your own custom script to back up your data to wherever you choose (S3 or otherwise). To do so, you will likely need to use Heroku Scheduler to run your backup script in a cron-like way.
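For example, a minimal Scheduler-run backup script might archive the uploads directory before shipping it off the dyno. This is just a sketch: the paths here are illustrative stand-ins for /app/files, and the actual upload to external storage is not shown.

```ruby
require "fileutils"
require "tmpdir"

# Illustrative stand-in for the dyno's /app/files directory.
files_dir = File.join(Dir.tmpdir, "files")
FileUtils.mkdir_p(files_dir)
File.write(File.join(files_dir, "example.txt"), "demo upload")

# Archive the whole directory under a dated name...
archive = File.join(Dir.tmpdir, "files-backup-#{Time.now.strftime('%Y%m%d')}.tar.gz")
system("tar", "czf", archive,
       "-C", File.dirname(files_dir), File.basename(files_dir)) or raise "tar failed"

# ...then upload `archive` to durable storage (S3, SFTP, etc.)
# before the one-off dyno is recycled.
```

Scheduler would run a script like this once a day; anything left on the dyno's disk after the run is discarded.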
Related
When I deployed a new app to nginx using Capistrano, I followed a tutorial and ran git mv database.yml database.yml.example and git mv secrets.yml secrets.yml.example, then created a new database.yml file on the remote server. But now when I want to run the app on my local machine, it shows me an error:
No such file - ["config/database.yml"]
Because there is no database.yml on my local repo.
Can I create an new and empty database.yml to fix this?
The guide just tells you that storing database credentials in a repository is bad practice and that you shouldn't do it, but that doesn't mean you don't need the files at all. Your application still needs them, so you definitely need to create database.yml; just don't store it in the main code repo. Security-critical information like this is better kept wherever you keep the rest of your authentication data: a separate credentials repository, a key-password store, or whatever place you trust with such information.
PS: Of course, if you're just learning, it's not a big deal; you COULD keep your "root-123" credentials in the repository. But it's better to develop the right habit from the beginning, or at least understand why this information should be kept separate.
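For reference, the database.yml.example you check into the repo usually contains placeholders only; each developer (or each server) copies it to config/database.yml and fills in real values. The values below are just placeholders:

```yaml
# config/database.yml.example -- copy to config/database.yml and edit
development:
  adapter: postgresql
  database: myapp_development
  username: CHANGE_ME
  password: CHANGE_ME
  host: localhost
```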
I wrote a script to fetch content from another site and save it to my public/ directory. Because my local network environment is poor, I deployed it to Heroku so it can do the fetching instead of my doing it locally.
Just something simple like this
movie_file = "#{Rails.root}/public/movie_list#{year}.json"
File.open(movie_file, "w"){|f| f.write(JSON.pretty_generate($movie_list))}
However, when I run it on Heroku (just a rake task), it seems it can't write into the public/ directory; I get a "no such page" error. I found this answer: Problems with public directory when deploying Node.js app with Heroku
But the article it links to is unavailable, and I'm not sure if it's still accurate.
I'm wondering:
Is there any workaround that lets me save the file on the server (maybe somewhere other than public/) so I can then download it to my computer?
Or, instead of writing the file into public/, could I upload it to some other free storage?
======================
UPDATE:
Finally, I first saved the file to tmp/, then saved it to Qiniu (a Chinese counterpart to AWS), and you could save it to AWS as well.
The storage on a Heroku dyno should be regarded as ephemeral: a dyno restart will cause saved files to disappear, and a file written on one dyno is not visible from other dynos.
You should use the dyno to upload your files to permanent storage, such as AWS S3, from which you can download them through your browser.
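As a sketch, writing to a temporary location instead of public/ does work on a dyno, with the caveat that the file must be pushed to permanent storage before the dyno restarts. The data and filename below are illustrative:

```ruby
require "json"
require "tmpdir"

year = 2014                                   # illustrative
movie_list = [{ "title" => "Example Movie" }] # stand-in for the fetched data

# The temp dir is writable on a Heroku dyno, but its contents are ephemeral.
path = File.join(Dir.tmpdir, "movie_list#{year}.json")
File.write(path, JSON.pretty_generate(movie_list))

# Next step (not shown): upload `path` to S3/Qiniu with their SDK
# before the dyno recycles.
```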
No permanent filesystem for Heroku?
I deployed "Harrys Prelauncher" on Heroku and am trying to do the teardown (currently just testing). See here: https://github.com/harrystech/prelaunchr#teardown
After running the rake task ...
heroku run rake prelaunchr:create_winner_csvs
... a CSV file is created in /lib/assets, but I don't know how to access the file (it works locally in development).
How can I download or access the file?
Heroku uses an "ephemeral" filesystem that is not guaranteed to preserve changes made at runtime. Simply put, if it's not pushed to git (I assume you're using git with Heroku), it's not guaranteed to exist in all the instances of your app. It may exist in one of them, but you have no simple way of accessing that specific filesystem. And you shouldn't, really.
It's done like that so that multiple instances of the same app can be fired up seamlessly. Of course, that requires some discipline: storage of any meaningful state outside: in the database, on external disk, anywhere. The benefit of this is horizontal scalability: should you be short on resources, you can fire up another web dyno that would (normally) behave exactly the same way. New dynos are started from bundles that are packed on git push and thus do not contain any changes you may have made in another instance.
A workaround may be running heroku run bash, so that you end up in an interactive shell linked to another instance of your bundle.
Then you can make that file (by running your rake task) and access its contents in any way you deem reasonable. Text files can be echoed into the console with cat and copy-pasted anywhere else. That's a dirty way.
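For binary or awkward-to-copy files, a slightly less fragile variant of the same trick is printing the file as base64 from the console and decoding it locally. The sample file below stands in for whatever CSV is sitting on the dyno:

```ruby
require "base64"
require "tmpdir"

# Stand-in for the file generated on the one-off dyno.
sample = File.join(Dir.tmpdir, "winners.csv")
File.write(sample, "name,email\nAda,ada@example.com\n")

# Print this from `heroku run console`, copy it from the terminal...
encoded = Base64.strict_encode64(File.binread(sample))
puts encoded

# ...then decode it on your own machine to recover the exact bytes.
decoded = Base64.decode64(encoded)
```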
A much cleaner way would be rigging the app to send the file in question via email, and that's one of the few reasonable options if the rake task is invoked by the Rails app itself.
I ran into this problem recently while developing the Prelaunchr campaign for a client. Assuming you have a local version of your app, you can "pull" your Heroku database down to your local machine, set that as your development database in database.yml, and run the rake task from your local app, which should now have the same database as your heroku version. Here is the command to pull the db (subbing out name_for_database & heroku_app_name with your own):
heroku pg:pull HEROKU_POSTGRESQL_COPPER_URL name_for_database --app heroku_app_name
Make sure to restart your local server to see the new database info populated.
I am using dokku-alot to deploy my Rails 4 app to my staging server and everything is working just swell.
One requirement I have with my current project is in regards to seed file data. I've had to keep my seeds.rb file out of version control because of sensitive information. However, I can't figure out how to add the seeds.rb file into the container after a build.
I've tried ssh root@myhost ap_name, which gets me into the VM, but even if I scp the files in there, the container doesn't see them. How can I drop a few files where my Rails code lives in the Docker image?
Depending on how much information is in your seeds.rb file, you could use environment variables. This is the solution I ended up using.
You basically set the variable: dokku config:set my-app SECRET=whateversupersecretinfo. Then in your code, you can read that variable with ENV['SECRET']. (This works pretty much the same way on Heroku.) Not sure if that solves your use case, but I'm leaving this answer here for posterity.
Subnote: in Node.js you can read these variables as process.env.SECRET.
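In a Rails seeds.rb, that pattern might look like the sketch below. The variable names are illustrative, and the fallback defaults are only sensible for local development:

```ruby
# Read sensitive seed values from the environment, set earlier with
# `dokku config:set my-app ADMIN_EMAIL=... ADMIN_PASSWORD=...`
# (or `heroku config:set` on Heroku). Names here are hypothetical.
admin_email    = ENV.fetch("ADMIN_EMAIL", "admin@example.com")
admin_password = ENV.fetch("ADMIN_PASSWORD", "change-me-in-production")

# A real seeds.rb would now create records with these values, e.g.:
# User.create!(email: admin_email, password: admin_password)
puts "Seeding admin account for #{admin_email}"
```

This keeps the secrets out of the repository entirely while letting seeds.rb itself stay in version control.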
When Capistrano deploys a Rails app, it creates a shared/ directory to store files that should be shared across releases and not re-exported every time. In my application I have several things in the shared/ directory that rarely change (so they belong there rather than in the application tree), but I'd still like them to be version controlled for the times when they do change.
What is the best way to approach version controlling those files but keeping them separate from the repository Capistrano is exporting from?
The /shared directory is really for un-versioned data. For example, you might store bundled gems there so you don't have to re-install all your gems every release. You can also store your logs there so they don't get overwritten every time you deploy. You can store pid files there so you don't lose the process IDs of critical processes during a deploy. You might even store user-generated or partially processed data there so that it is not removed during a release. If a file is meant to be versioned and has a chance of changing, though, I would recommend keeping it with the rest of your files and out of the shared directory.
That said, you can always also write deploy scripts to pre-populate data in your shared directory, like database configuration files. These scripts will get run on each deploy and can be entirely customized. For example, your database config script might only write the config file if it doesn't already exist.
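A plain-Ruby sketch of that idea: write a default config into the shared area only when one isn't already there, so a hand-edited copy on the server survives every deploy. The paths and config values below are illustrative, not the actual Capistrano task API:

```ruby
require "fileutils"
require "tmpdir"
require "yaml"

# Stand-in for Capistrano's shared/config directory on the server.
shared_config = File.join(Dir.tmpdir, "shared", "config")
FileUtils.mkdir_p(shared_config)
db_yml = File.join(shared_config, "database.yml")

# Only write the default if no config exists yet, so manual edits
# made on the server are never clobbered by a deploy.
unless File.exist?(db_yml)
  default = { "production" => { "adapter"  => "postgresql",
                                "database" => "myapp_production" } }
  File.write(db_yml, YAML.dump(default))
end
```

In a real deploy this logic would live in a Capistrano task that runs on each release, just before the app symlinks shared files into place.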
Another common use of the shared directory is for configuration files. Versioning and source control for configuration files is a very good idea, but it should be managed in a system configuration management tool. In my environment, I manage code releases with Capistrano and system configuration with Puppet. That way, there is still source control over configuration files, but they are kept distinct from the code deploy process. In turn, the code deploy process is kept independent of system configuration.