What is the best method to set up persistent storage for a Rails/Dokku app? The Dokku docs don't seem to say anything about the subject. When I used Google to search the docs site, the only thing it returned was the dokku-volume-plugin, which I've tried without success.
I can create a volume for my app:
dokku volume:add myapp /public
but nothing gets written to the volume.
Is this the current (2015) best way to set up persistent storage with Dokku? If it is, am I missing something?
I use dokku-volume-plugin without any problems. Here's how it works.
The dokku volumes:add myapp /app/uploads/ command adds a volume that is persisted on the host for the files stored inside your app's /app/uploads/ directory. If your app writes into that directory, the files are instead written on the host. The files are actually stored in the folder /home/dokku/.o_volume/.
From what I can tell, the only difference between your command and mine is the trailing slash. dokku volume:add myapp /public/ should fix your issues.
Alternatively, you could try an Amazon S3-based solution.
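For example, a Rails app can push uploads straight to S3 with CarrierWave and the fog gem. Here is a rough sketch, assuming both gems are installed and the bucket already exists; the bucket name and env var names are placeholders, not anything Dokku provides:

# config/initializers/carrierwave.rb -- illustrative sketch only
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
    region:                ENV.fetch("AWS_REGION", "us-east-1")
  }
  config.fog_directory = "my-app-uploads" # placeholder bucket name
end

# and in the uploader class:
# storage :fog

With that in place the uploaded files never touch the container's filesystem, so nothing is lost on a rebuild.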
For the archives, so that nobody walks down the wrong path:
The recommended path has changed (as of 2016, dokku > 0.5). I used @mixxorz's approach in the past with success, but as of now the built-in storage plugin seems to have taken over the stage:
(... ssh dokku@host || dokku ...) storage:mount <app> /var/lib/dokku/data/storage:/app/public/uploads
It's well documented at http://dokku.viewdocs.io/dokku/dokku-storage/.
The concepts stay the same.
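To make the idea concrete, here is a minimal Ruby sketch (the paths come from the example mount above; the filename is made up): anything the app writes under public/uploads inside the container lands in /var/lib/dokku/data/storage on the host and survives rebuilds.

require "fileutils"

# public/uploads inside the container is backed by the host directory
# mounted above, so this file persists across deploys and rebuilds.
upload_dir = Rails.root.join("public", "uploads")
FileUtils.mkdir_p(upload_dir)
File.write(upload_dir.join("example.txt"), "persisted on the host")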
I have a Ruby on Rails app running on Elastic Beanstalk and I wanted to upload some large files, possibly around 5 GB.
To do this, I added a config file at .ebextensions/nginx/01_upload_file_size.config with the following content:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      client_max_body_size 20G;
After I deploy the code to EB, I restart the nginx server using the command sudo service nginx reload. This seems to work for a while.
Uploading large files the next day gives me a 'Connection is reset' error. The log file log/nginx/error.log tells me: client intended to send too large body: 24084848 bytes
I have no idea why this occurs. It seems like the config file is ignored or reset after a short time, but I can't find any reference to this happening in the documentation. Note that when I SSH into the EB environment again and restart nginx, I can upload large files without a problem.
After looking into everything, I saw these events in my EB console:
Added instance [i-076127f714faac566] to your environment.
Removed instance [i-0c51791325b54873c] from your environment.
I also noticed that the IP address of the host changes when the config resets.
I think that when the instances were automatically added and removed by EB, the config file was not applied, or nginx was not restarted the way I did manually via SSH.
So the question is: how do I make sure that client_max_body_size is always set to 20G, even after an instance is removed and re-added? Or, how do I make the config persistent so I don't have to manually restart the nginx server?
I think you have two questions here - why is EB replacing your instance, and how can you automate the restart of nginx.
Answering the first question will take a bit of research on your part, but I suspect it may be the default CloudWatch alarm that kills instances when network traffic drops below a certain threshold.
The second question should be fairly straightforward; following the documentation, you should be able to add a section to 01_upload_file_size.config that automatically restarts nginx during the deployment process:
container_commands:
  01_restart_nginx:
    command: "service nginx reload"
I would also check to make sure that the /etc/nginx/conf.d/proxy.conf file is actually being created - I don't know if folders under .ebextensions are supported. You might need to move your config file to .ebextensions/01_upload_file_size.config.
I'm currently building a Laravel 5.1 system, that is being automatically deployed to several servers in several steps (local, test and production).
Unfortunately, I have an issue with the optimized class loader. During deployment, Composer runs and, via the composer.json file, the two commands php artisan clear-compiled and php artisan optimize run without any problems.
My problem is that Laravel, at some point during execution of a page, tries to write to /bootstrap/cache/services.json, but this fails since the (system-wise) user that created the folder is not the same as the user that tries to write to the file. (It also doesn't make sense that it tries to optimize, since the optimizer file has already been created.)
Is it possible to disable the "on-the-fly" class loader optimizer? (And if it is, what are the consequences?)
Before any "You should just change permissions to ...", I'd like to point out that this is currently not a viable solution. Everything is versioned, so the folder on the server is named something like server/project/20151122192701/laravel, and I don't think our tech guys are interested in changing permissions every time we commit to production :)
I ended up deleting php artisan clear-compiled and php artisan optimize from composer.json to prevent the commands from running when committing. I also added !services.json to /bootstrap/cache/.gitignore (to make it committable) and committed services.json with new writable permissions (755).
This is to prevent the deploy user from deleting services.json and recreating it with non-writable permissions...
I had some other problems with Laravel also caching views and sessions, but these were solved by caching views in the system temp folder (I know this is probably not the best solution, but it works) and using memcached for sessions.
I wrote a script to fetch content from another site and save it to my public/ directory. Due to my poor network environment, I deployed it to Heroku and want it to do the job there instead of my running it locally.
Just something simple like this
movie_file = "#{Rails.root}/public/movie_list#{year}.json"
File.open(movie_file, "w"){|f| f.write(JSON.pretty_generate($movie_list))}
However, when I run it on Heroku (just a rake task), it seems it can't write into the public/ directory; I get a 'no such page' error. And I found this answer: Problems with public directory when deploying Node.js app with Heroku
But the original article it links to is no longer available on Heroku, and I'm not sure if it's still true.
I'm wondering:
Is there any workaround so that I can save the file somewhere on the server (maybe somewhere other than public/) and then download it to my computer?
Or, instead of writing the file into public/, could I upload it to some other free storage?
======================
UPDATE:
In the end, I first save the file to tmp/ and then upload it to Qiniu (a Chinese counterpart of AWS); you could save it to AWS instead.
The storage on a Heroku dyno should be regarded as ephemeral: a dyno restart will cause saved files to disappear, and the files will not be visible from other dynos.
You should use the dyno to upload your files to permanent storage, such as AWS S3, from which you can download them through your browser.
No permanent filesystem for Heroku?
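A minimal sketch of that pattern, assuming the aws-sdk-s3 gem and an existing bucket (the bucket name is a placeholder; year and $movie_list come from the script above): write the JSON to tmp/ on the dyno, then push it to S3 and download it from there.

require "aws-sdk-s3" # gem "aws-sdk-s3"; credentials come from AWS_* config vars

# Write the generated JSON to the dyno's ephemeral tmp/ directory.
tmp_file = Rails.root.join("tmp", "movie_list#{year}.json")
File.write(tmp_file, JSON.pretty_generate($movie_list))

# Push it to durable storage; "my-backup-bucket" is a placeholder.
s3 = Aws::S3::Resource.new(region: ENV.fetch("AWS_REGION", "us-east-1"))
s3.bucket("my-backup-bucket").object("movie_list#{year}.json").upload_file(tmp_file.to_s)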
I am using dokku-alot to deploy my Rails 4 app to my staging server and everything is working just swell.
One requirement I have with my current project is in regards to seed file data. I've had to keep my seeds.rb file out of version control because of sensitive information. However, I can't figure out how to add the seeds.rb file into the container after a build.
I've tried ssh root@myhost ap_name, which gets me into the VM, but even if I scp the files in there, the container doesn't see them. How can I drop a few files where my Rails code is in the Docker image?
Depending on how much information is in your seeds.rb file, you could use environment variables. This is the solution I ended up using.
You basically set the variable with dokku config:set my-app SECRET=whateversupersecretinfo. Then in your code you can read that value with ENV['SECRET']. (This works pretty much the same way on Heroku.) Not sure if that solves your use case, but I'm leaving this answer here for posterity.
Side note: in Node.js you can read these variables as process.env.SECRET.
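As a rough sketch of that approach (the config var names and the User model are made up for illustration; the values are set once with dokku config:set and never live in the repo):

# db/seeds.rb -- illustrative only; ADMIN_EMAIL and ADMIN_PASSWORD are
# hypothetical config vars set via `dokku config:set my-app ADMIN_EMAIL=... ADMIN_PASSWORD=...`
User.find_or_create_by!(email: ENV.fetch("ADMIN_EMAIL")) do |user|
  user.password = ENV.fetch("ADMIN_PASSWORD")
end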
I have a Redmine (www.redmine.org) installation pushed up onto Heroku (cedar stack). On my local instance of Redmine, file uploads work by having the database store some data about the file, including a name and its location on disk, while the file itself is just stored on disk under [app-location]/files (Redmine is a Ruby on Rails application). When my Redmine project is pushed to Heroku, the files directory is nowhere to be found. From what I've read about Heroku's filesystem, this is no surprise. But what is surprising and confusing is that file uploads still work, and I didn't set up S3, which is the common recommendation for file uploads on Heroku. I checked the Heroku database to get the data about the file upload.
Here are the steps I took to locate the file.
heroku run rails c
and – to get the location of the most recent file – ran:
Attachment.last.diskfile
which returned:
=> "/app/files/2014/06/140610184025_Very-Basic-Globe-icon.png"
This path simply does not exist on the Heroku instance (using heroku run bash and listing directories or running a find). I also downloaded a dump of the Heroku database and imported it locally. The database data shows up on my local instance, but the file can't be found (no surprise).
So my questions are:
Where is the Heroku instance storing the files really?
Is there a way for me to back those files up locally without relying on Amazon S3?
This app should remain fairly small, so I am not concerned about massive scalability; I just want to be able to get the uploaded files if they're one day needed.
I know this question is a bit old, and you may have already found a solution, but just in case other people stumble on this question:
Heroku really is storing the files where it says it is. What happens when you run heroku run bash is that Heroku spins up a one-off dyno to run the command. This means you are not given a command prompt in the dyno that is actually running your app, which is why you are not able to find the file you're looking for.
There are currently no official add-ons that support backing up physical files (only databases); however, you could write your own custom script to back up your data to wherever you choose (S3 or otherwise). To do so, you will likely need to use Heroku Scheduler to run your backup script in a cron-like way.
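As a rough illustration of the kind of script described above (the task name and bucket are assumptions, and it relies on the aws-sdk-s3 gem), something like this could be run from Heroku Scheduler:

# lib/tasks/backup_attachments.rake -- hypothetical backup task
require "aws-sdk-s3"

namespace :backup do
  desc "Copy everything under files/ to an S3 bucket"
  task attachments: :environment do
    s3 = Aws::S3::Resource.new(region: ENV.fetch("AWS_REGION", "us-east-1"))
    bucket = s3.bucket("my-redmine-backups") # placeholder bucket name

    Dir.glob(Rails.root.join("files", "**", "*").to_s).each do |path|
      next unless File.file?(path)
      key = path.sub("#{Rails.root}/", "") # key mirrors the path relative to the app root
      bucket.object(key).upload_file(path)
    end
  end
end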