Upload to heroku rails app with file path string - ruby-on-rails

I'm trying to upload a file to my rails app by sending the user's local location of the file via a parameter, so the input URL would look like this:
http://rails-app.herokuapp.com/element?file=C:\temp\data.txt
This is easy when I'm working on my local machine, since I can just use File.read(filename); however, this doesn't work on Heroku. Thanks for any help!

First of all, Heroku's filesystem is ephemeral and the deployed slug is read-only, so you can't upload anything directly to Heroku and expect it to persist.
Use something like Amazon S3 to keep files.
The second issue is your approach.
When the app runs locally, it has access to your C:\ drive.
But an app running on a remote server has no access to your computer's C:\ drive, so it can't fetch the file.
You should accept the upload either through a browser file field or as an HTTP link that is accessible to anyone.
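A minimal sketch of those two options in a Rails controller follows; the controller name, parameter names, and the S3 hand-off are assumptions for illustration, not from the question:

```ruby
# A minimal sketch of the two workable approaches, assuming a hypothetical
# ElementsController; parameter names are illustrative only.
require 'open-uri'

class ElementsController < ApplicationController
  def create
    contents =
      if params[:file].respond_to?(:read)
        # Option 1: a browser file field (<input type="file" name="file">).
        # Rails hands you an ActionDispatch::Http::UploadedFile you can read.
        params[:file].read
      elsif params[:url].present?
        # Option 2: a publicly reachable HTTP(S) link the server can download.
        URI.parse(params[:url]).open.read
      end

    # Persist `contents` to an external store such as S3 here; anything written
    # to Heroku's own filesystem will not survive a restart or deploy.
    head :ok
  end
end
```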

Related

Move Images From Parse To S3 AWS

I need help moving the images I have from Parse to S3 on AWS. I have viewed numerous supposed guides and GitHub projects, but everything stops short of giving you all the information. One even says you need a GCS bucket set up, but gives no details on how to set one up. Someone please help me with this. I have the S3 file adapter all set up in my index.js for the app, but none of the images are there; they are still hosted on Parse.
If you are referring to old images that were hosted with parse.com and you want to move them across to your own environment, it can be done with the utility tool:
Get all files across all classes in a Parse database. Print file URLs
to console OR transfer to S3, GCS, or filesystem. Rename files so that
Parse Server no longer detects that they are hosted by Parse. Update
MongoDB with new file names.
https://github.com/parse-server-modules/parse-files-utils
Moving forward, if you have set up your S3 bucket correctly, all new images from your app will be stored there.
https://github.com/ParsePlatform/parse-server/wiki/Configuring-File-Adapters

How can i set "FILES_ROOT" to a folder on server?

I want to use RoxyFileMan to manage uploaded images and files, but I need to save them on the server. As you know, RoxyFileMan uploads files and images into a folder named Uploads inside the fileman directory, and we can change FILES_ROOT to another local path to change the directory files get uploaded to.
However, I want to upload files to the server and then read them back from the server after they've been uploaded, so they can be edited in CKEditor.
Can anyone please provide advice/guidance on how to achieve this outcome?
It's not entirely clear what you mean, but if I understand you correctly, you are running RoxyFileMan on a local server and want to upload files to a remote online server, right?
Roxy uploads files to a directory on the server it runs from. That means if it runs from localhost, it will use a directory on your localhost; if it runs from a server, it will use a directory on that server.
As far as I know, you cannot upload from localhost to an online server directly.
You could maybe achieve that with a custom script that opens an FTP connection, but then you would also have to rework the code to load images from there, which would be rather clumsy.
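For what it's worth, the "custom script over an FTP connection" idea could look something like this with Ruby's standard library; the host, credentials, and paths are placeholders, and RoxyFileMan itself would still need to be pointed at the remote directory:

```ruby
# A rough sketch of pushing an uploaded file to a remote server over FTP,
# using Ruby's standard Net::FTP; host, credentials, and paths are placeholders.
require 'net/ftp'

Net::FTP.open('ftp.example.com', 'username', 'password') do |ftp|
  ftp.passive = true
  # Copy a locally uploaded file into the remote server's Uploads directory.
  ftp.putbinaryfile('Uploads/image.png', '/var/www/fileman/Uploads/image.png')
end
```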

How to reference and update a file on S3 from Rails 4

I have a Rails 4 application that needs to use a number of Excel files representing rosters (20 or so, grouped by their own individual committee) that have to be read in and editable by the user. Pre-deploy, I had the system working perfectly: the files lived in public/rosters and could be referenced and edited by any authenticated user. Unfortunately, when I deployed to Heroku I could no longer do this.
I have been using an S3 bucket to host the other files necessary for this and other related apps, and it has been working wonderfully for what I've been using it for, so I decided to try it as a solution to this problem. Unfortunately, it would appear that I can only access the files the way I had been by making them publicly accessible, which is not something that I want to do.
So my question is this: what would be the best way to reference these files (ideally authenticating with my access_key_id and secret_access_key) and allow a user to push changes that overwrite the file in the S3 bucket?
You have to use aws-sdk-ruby to write files to S3; it authenticates with your access_key_id and secret_access_key. Check its documentation. Hope this helps. Thanks!
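As a minimal sketch with the aws-sdk-s3 gem (the bucket name, object key, and environment variable names are placeholders), you can read and overwrite a private object without ever making it public:

```ruby
# A minimal sketch using the aws-sdk-s3 gem; bucket, key, and env var names
# are placeholders, not taken from the question.
require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(
  region:            ENV['AWS_REGION'],
  access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

roster = s3.bucket('my-roster-bucket').object('rosters/finance_committee.xlsx')

# Read the private object server-side...
original_bytes = roster.get.body.read

# ...and overwrite it with the user's edited version.
roster.put(body: updated_spreadsheet_bytes)
```

The object stays private the whole time because reads and writes happen server-side with your credentials; if the browser ever needs to fetch a file directly, a presigned URL is the usual way to grant temporary access.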

Creating a dashboard using csv files

I am trying to create a dashboard using CSV files, Highcharts.js, and HTML5. In a local development environment I can render the charts using CSVs both on my file system and hosted on the web. The current goal is to deploy the dashboard live on Heroku.
The CSVs will be updated manually - for now - once per day in a consistent format as required by Highcharts. The web application should be able to render the charts with these new, "standardized" CSVs whenever the dashboard page is requested. My question is: where do I host these CSVs? Do I use S3? Do I keep them on my local file system and manually push the updates to Heroku daily? If the CSVs are hosted on another machine, is there a way for my application (and only my application) to access them securely?
Thanks!
Use the carrierwave_direct gem to upload the file directly from the client to an Amazon S3 bucket.
https://github.com/dwilkie/carrierwave_direct
You basically give the trusted, logged-in client a temporary key to upload the file and nothing else, and the client then returns information about the uploaded file to your web app. Make sure you have set the upload to be private to prevent any third parties from trying to brute-force their way to the CSV. You will then need to create a background worker to do the actual work on the CSV file. The gem has some good docs on how to do this.
https://github.com/dwilkie/carrierwave_direct#processing-and-referencing-files-in-a-background-process
In short, in the background process you download the file temporarily to Heroku, parse it, get the data you need, then discard the copy on Heroku and, if you want, the copy on S3. This way you get around the Heroku issue of no permanent file storage, and the issue of tied-up dynos with direct uploads, because there is nothing like nginx handling file uploads on Heroku.
Also make sure that the file size does not exceed the available memory of your worker dyno, otherwise it will crash. Since you don't seem to need to worry about concurrency, I would suggest https://github.com/resque/resque. A sketch of such a worker is below.
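Here is a rough sketch of that background step, assuming Resque and the aws-sdk-s3 gem; the bucket, key, queue name, and the Metric model are hypothetical placeholders:

```ruby
# A sketch of the background processing step; bucket, queue, and the Metric
# model are hypothetical placeholders.
require 'resque'
require 'aws-sdk-s3'
require 'csv'

class ProcessCsvJob
  @queue = :csv_imports

  def self.perform(s3_key)
    s3  = Aws::S3::Resource.new(region: ENV['AWS_REGION'])
    obj = s3.bucket(ENV['S3_BUCKET']).object(s3_key)

    # Pull the CSV into memory on the worker dyno; keep files small enough to
    # fit in the dyno's RAM, as noted above.
    rows = CSV.parse(obj.get.body.read, headers: true)
    rows.each { |row| Metric.create!(row.to_h) } # hypothetical model

    # Nothing was written to Heroku's filesystem, so there is nothing to clean
    # up locally; uncomment to discard the S3 copy as well.
    # obj.delete
  end
end

# Enqueue from the controller once carrierwave_direct reports the upload key:
# Resque.enqueue(ProcessCsvJob, params[:key])
```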

Uploaded files disappear during new push?

I have Paperclip working just fine where I can upload files to my site, but whenever I make updates and push a new version of the site, all of the files I uploaded via Paperclip seem to disappear (all the information that was entered into the database remains, though).
I assume the problem is that I haven't pulled the files from the live version of the site, but whenever I do a git pull it tells me everything is up to date. Is there any way for me to download the files I've uploaded? (I would prefer not to use Amazon S3 to store the files currently.)
The files you have uploaded are stored in the public folder. Files written there at runtime are not part of your repository, so they are not included in the next deploy, and that is why they seem to disappear.
If you use Amazon S3, the images will be stored on S3 and it will provide a URL for each image, so you will be able to access the images reliably.
You can also save images on Dropbox. The application below stores its images on Dropbox and runs on Heroku; you may use it as a reference:
https://github.com/aman199002/Album-App # open-source app that stores albums on Dropbox
http://album-app.herokuapp.com # the application running on Heroku
When you deploy your application to Heroku, your pushed code is compiled into what is called a slug - essentially an archive of everything needed to run your application. When you restart or scale your application, your original slug is copied to be run. However, it's a read-only slug, so when you upload files they exist only on the dyno that received them: with multiple dynos the files wouldn't exist across all of them, they do not persist when your application is restarted or when you push new code, and there is no way to retrieve them afterwards.
Your only way to persist files on Heroku is to use an external cloud storage solution like Amazon S3, Rackspace Cloud Files, etc. Fortunately it's very simple to have Paperclip use S3 as its storage mechanism - there's a tutorial at https://devcenter.heroku.com/articles/paperclip-s3
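Following that article, the Paperclip-to-S3 wiring is roughly this; the environment variable names are the usual convention, not requirements:

```ruby
# config/environments/production.rb -- a minimal sketch following the Heroku
# article linked above; env var names are conventional placeholders.
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket:            ENV['S3_BUCKET_NAME'],
    access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    s3_region:         ENV['AWS_REGION']
  }
}
```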
