In my Ruby on Rails application, I would like to be able to read (only read) files from a directory in my own Dropbox.
All the tutorials I have found authorize access to the visitor's Dropbox, and so require logging in to Dropbox via their OAuth login page.
Is there a way to do it using credentials I'd save in a file in my application (and so without needing to log in manually)?
I'd also like to be able to do the same from Google Drive.
Thanks!
I followed the steps below and was able to read from and write to my Dropbox.
Title:
Using Dropbox with Ruby on Rails on Heroku
Objective:
Heroku does not offer persistent storage and suggests Amazon S3, which requires a credit card to register and use.
So Dropbox may be a good replacement, at least at the training and development level.
Steps:
1. Install the SDK
command: gem install dropbox-sdk
link: https://www.dropbox.com/developers-v1/core/sdks/ruby
2. Create a Dropbox account if you don't have one
link: https://www.dropbox.com
action: create an account
3. Create an app on the Dropbox platform
link: https://www.dropbox.com/developers/apps
action: specify a name for your app and you will be given an App Key and App Secret
remarks: an app can have access to your whole Dropbox or just to a specific folder
4. Try this basic tutorial to test what you can do
link: https://www.dropbox.com/developers-v1/core/start/ruby
action:
a. Replace 'INSERT_APP_KEY' and 'INSERT_APP_SECRET' with your app keys
b. Execute the Ruby script
c. Browse the given link to authorize and generate an access token
d. Copy and paste the code into the script console and continue
caution:
The script tries to load a local file first, so be sure to create it at the proper path.
Execution steps:
a. Authenticate
b. Upload a file
c. Download the file and write it locally
5. You can generate an access token on your app's page and reuse it, instead of generating it every time with APP_KEY & APP_SECRET.
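For example, a minimal sketch with the v1 dropbox-sdk gem, assuming the generated token is stored in an env variable (the folder and file paths here are made up):

require 'dropbox_sdk'

# Reuse a token generated once from your app's page in the App Console,
# instead of running the OAuth flow on every run.
client = DropboxClient.new(ENV['DROPBOX_ACCESS_TOKEN'])

# Read-only usage: list a folder and fetch a file.
client.metadata('/reports')['contents'].each do |entry|
  puts entry['path']
end
contents = client.get_file('/reports/summary.txt')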
6. To use Dropbox with Rails (CarrierWave)
link: https://github.com/robin850/carrierwave-dropbox
steps:
6a. Include gem 'carrierwave-dropbox' in your Gemfile
6b. Run bundle install
6c. Run:
rake dropbox:authorize APP_KEY=app_key APP_SECRET=app_secret ACCESS_TYPE=dropbox|app_folder
6d. Set the appropriate settings in your uploader file (e.g. ImageUploader, or your custom-named uploader):
class ImageUploader < CarrierWave::Uploader::Base
  storage :dropbox

  def initialize(*)
    super
    CarrierWave.configure do |config|
      ...
      # Dropbox settings
      ...
    end
  end
end
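For the Dropbox settings, a sketch based on the carrierwave-dropbox README (verify the exact keys against the gem version you install):

CarrierWave.configure do |config|
  config.dropbox_app_key             = 'app_key'
  config.dropbox_app_secret          = 'app_secret'
  config.dropbox_access_token        = 'access_token'
  config.dropbox_access_token_secret = 'access_token_secret'
  config.dropbox_user_id             = 'user_id'
  config.dropbox_access_type         = 'dropbox' # or 'app_folder'
end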
7. If you are using source control, it is better to set the values as environment variables and use those instead.
This link shows how to set and persist environment variables on Ubuntu:
link: https://help.ubuntu.com/community/EnvironmentVariables
On production (Heroku), set environment variables as follows:
usage: heroku config:set ACCESS_TOKEN_SECRET='your_app_access_token_secret'
link: https://devcenter.heroku.com/articles/config-vars
It is good practice to create a carrierwave.rb file in config/initializers and place all the settings in that file; the settings can also be applied conditionally for production and development.
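A sketch of such an initializer, assuming the credentials live in env variables and that Dropbox storage is only wanted in production:

# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  if Rails.env.production?
    config.storage              = :dropbox
    config.dropbox_app_key      = ENV['DROPBOX_APP_KEY']
    config.dropbox_app_secret   = ENV['DROPBOX_APP_SECRET']
    config.dropbox_access_token = ENV['DROPBOX_ACCESS_TOKEN']
    # ...set the remaining dropbox_* keys from the block above via ENV too
  else
    config.storage = :file # keep development uploads on the local disk
  end
end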
Beware that Dropbox may be slow and you may get application errors, so try it with smaller files, and load them with pagination if there are too many.
This guide is also published on LinkedIn:
https://www.linkedin.com/pulse/using-dropbox-ruby-rails-heroku-serjik-isagholian?trk=prof-post
I would like to create some_javascript_file.js after a user submits a form and save this file in the public directory of my app.
When testing this locally I can simply use File.new("./public/some_javascript_file.js", "w") to accomplish this, but this solution doesn't work when the app is deployed to Heroku.
Do you have any suggestions? Thank you
This is because of how Heroku works: as stated in the docs, the filesystem is ephemeral and you can't rely on it. If you want persistence, you should upload this JS file to an external service, such as AWS S3. Or you might want to deploy your app to a different environment, such as a self-managed VPS, where the filesystem is persistent and your file won't disappear.
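For the original question (persisting a generated JS file), a hypothetical sketch with the aws-sdk-s3 gem; the bucket name, key, and env variables are placeholders:

require 'aws-sdk-s3' # gem 'aws-sdk-s3'

# Write the generated file to S3 instead of the ephemeral dyno filesystem.
js = "console.log('generated after form submit');"
s3 = Aws::S3::Client.new(region: ENV['AWS_REGION'])
s3.put_object(
  bucket:       ENV['S3_BUCKET'],
  key:          'some_javascript_file.js',
  body:         js,
  content_type: 'application/javascript'
)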
I'm building a fairly basic Ruby on Rails app, I'll be using about 2000 images, and this is my first real dive into aws/s3. The app won't have any user interaction, so I'm not sure if it's better to have all of the images on the app, and then upload them to my bucket, or add them to my bucket manually, and then download them to the app from there. The AWS documentation is a bit all over the place.
I currently have carrierwave installed and not sure what the next steps should be, or how to retrieve images from S3 into rails. I'll be using Heroku as well, but I've already set up the config with my AWS credentials.
uploaders/photo_uploader.rb
class PhotoUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def content_type_whitelist
    /image\//
  end
end
initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider: "AWS",
    aws_access_key_id: ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
  }
  config.fog_directory = ENV["S3_BUCKET"]
end
First step is to integrate image uploading, and you can use a number of libraries to make this happen.
You want to grab the dotenv-rails gem so you can securely manage the credentials you will need for AWS S3; it is a standard tool for a production-ready RoR app.
Next you want the carrierwave-aws gem and the carrierwave gem itself, which will manage everything, so that's three gems thus far. The fourth and final gem is mini_magick, which CarrierWave requires for its image-processing methods.
Second step is to sign up for an AWS account to use an S3 bucket. You cannot keep the images on the app itself: if you do, they will not survive a deploy to Heroku, whose ephemeral filesystem will get rid of them.
Once you've added these gems, run bundle install and then build out the basic functionality.
Here is some documentation on carrierwave: https://github.com/carrierwaveuploader/carrierwave
The documentation in the above link will walk you through how to properly install carrierwave.
So you will do something like:
rails generate uploader Photo
In your photo_uploader.rb file, you want to uncomment this code:
def extension_whitelist
  %w(jpg jpeg gif png)
end
You want this uncommented to serve as a validator of the file types you can upload: if a file is not a jpg, jpeg, gif, or png, RoR will throw an error. This whitelist is handy, so I strongly recommend it.
Next, you have to set up the mapping between your uploader and the database.
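A minimal sketch of that mapping; the model and column names here are examples, and the migration version should match your Rails:

# db/migrate/xxxx_add_photo_to_pictures.rb
class AddPhotoToPictures < ActiveRecord::Migration[5.2]
  def change
    add_column :pictures, :photo, :string # stores the uploaded filename
  end
end

# app/models/picture.rb
class Picture < ApplicationRecord
  mount_uploader :photo, PhotoUploader # attaches the uploader to the column
end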
So, fast-forwarding to the part where you need to connect AWS to your app: this is where your dotenv-rails gem comes in. By the way, all of these gems can be found on rubygems.org.
In the root of your folder, you are going to create a file called .env.
In the .env file you are going to add these:
S3_BUCKET_NAME=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_REGION=
Never push the AWS keys to any code versioning tool like GitHub.
Go into your .gitignore file and ensure the .env file is listed there, so git will not track that file.
To get your AWS credentials, click on your name in the AWS console and you will see a dropdown with "My Security Credentials" as an option.
Next, create your bucket.
To test successful integrating with your RoR app, go to rails console and run this command:
ENV.fetch('S3_BUCKET_NAME')
If you get an error at this stage, you may need to go to config/application.rb and add:
require "dotenv-rails"
Once you've done that, go back into rails c and run ENV.fetch('S3_BUCKET_NAME') again; you should be good to go if you followed the steps correctly.
You should have an initializers folder, and in there you are going to create a carrierwave.rb file.
Inside of that file you are going to paste all the code thats under the Usage section of this documentation:
https://github.com/sorentwo/carrierwave-aws
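For reference, the Usage block you paste in looks roughly like this (check the README for the current key names; this is a sketch, not a verbatim copy):

CarrierWave.configure do |config|
  config.storage    = :aws
  config.aws_bucket = ENV.fetch('S3_BUCKET_NAME')
  config.aws_acl    = 'public-read'
  config.asset_host = 'http://example.com' # removed in a later step below
  config.aws_credentials = {
    access_key_id:     ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    region:            ENV.fetch('AWS_REGION')
  }
end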
Go back to your photo_uploader.rb file and change storage :file to storage :aws.
Home stretch here: go back to the carrierwave.rb file. There is one line of code you need to completely remove from what you copied and pasted from the above link, and it is this line:
config.asset_host = "http://example.com"
Now you can start up your rails server and instead of pointing to your local file system it should now be pointing to your bucket.
You need to upload all these images through the application: after installing carrierwave and fog-aws, you need to create a model, controller, and form for uploading images.
It sounds like you are unsure how to show the images after they are uploaded, right?
Put simply, assuming the images upload properly, and since you did not provide those names, let's imagine the table is images, the model is Image, and the column is picture.
images_controller.rb
class ImagesController < ApplicationController
  def index
    @images = Image.all
  end
end
view/images/index.html.erb
<% @images.each do |image| %>
  <%= image_tag image.picture.url %>
<% end %>
Note
This is not to promote a product.
If you need to see a sample with source code, this is the BitBucket repository and this is the live Heroku app; a Stripe test card number and a CVC code must be provided, so type anything like 232.
I am trying to use the Facebook marketing API SDK to upload images to Facebook.
This is the sdk
I want the user to be able to click to select a file from the browser and run the upload via Rails and the SDK.
Basically, here is the flow I am trying to do.
user selects a file
clicks upload
The backend controller processes the request and uploads it to Facebook via the API.
However, the issue I am running into is that, for security reasons, browsers do not have access to the file path, which the Facebook SDK asks for.
ad_account.adimages.create({
  'logo1.png' => File.open('./assets/logo1.jpg')
})
If I use ActionDispatch::Http::UploadedFile, which is built into Rails, or CarrierWave, I get access to the tempfile, which has a name similar to RackMultipart20170803-89798-1e9hr.
If I try to upload that to Facebook, I get an error saying:
API does not accept files of this type
Does anyone have an idea of the best option? The only thing I can think of is to upload the file to a host like Cloudinary, then get the URL from Cloudinary and upload to Facebook via that URL.
You are right, a possible solution for your case is using Cloudinary.
Cloudinary's Ruby integration library is available as an open-source Ruby GEM.
To install the Cloudinary Ruby GEM, run:
gem install cloudinary
If you use Rails 3.x or higher, edit your Gemfile, add the following line and run bundle.
gem 'cloudinary'
Your cloud_name account parameter is required to build URLs for your media assets. api_key and api_secret are further needed to perform secure API calls to Cloudinary.
Setting the configuration parameters can be done either programmatically in each call to a Cloudinary method or globally using a cloudinary.yml configuration file, located under the config directory of your Rails project.
Here's an example of a cloudinary.yml file:
production:
  cloud_name: "sample"
  api_key: "874837483274837"
  api_secret: "a676b67565c6767a6767d6767f676fe1"
Uploading directly from the browser is done using Cloudinary's jQuery plugin
http://cloudinary.com/documentation/jquery_integration
To ensure that all uploads were authorized by your application, a secure signature must first be generated in your server-side Rails code.
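For example, with the cloudinary gem's view helpers, a signed direct-upload field can be rendered like this (a minimal sketch; the :image field name is an example):

<%# app/views/photos/_form.html.erb %>
<%= cl_image_upload_tag(:image) %>

The helper embeds a signature generated server-side, so only uploads authorized by your application are accepted.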
Full disclosure: I work as a software engineer at Cloudinary.
A solution I found is to create a duplicate copy of the uploaded file in the public folder and then process it from there.
# Copy the Rack tempfile to a path under its original name, so it
# has a normal filename and extension again.
uploaded_file = params["file"]
file_name = uploaded_file.original_filename
file_path = File.expand_path(uploaded_file.tempfile.path)

# FILE_PATH is a directory constant defined elsewhere in the app,
# e.g. pointing into the public folder.
file = File.open("#{FILE_PATH}/#{file_name}", "wb")
file.write uploaded_file.tempfile.read
file.close
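With that in place, the copied file (which now has a real name and extension) can be handed to the SDK call from the question; a hypothetical continuation:

# FILE_PATH and file_name are from the snippet above.
ad_account.adimages.create({
  file_name => File.open("#{FILE_PATH}/#{file_name}")
})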
I have a Ruby on Rails site with models using CarrierWave for file handling, currently using local storage. I want to start using cloud storage and I need to migrate existing local files to the cloud. I am wondering if anyone can point out a method for doing this?
Bonus points for using a model attribute that would allow me to do this row-by-row in the background without interrupting my site for extended downtime (in other words, some model rows would still have local storage while others used cloud storage).
My first instinct is to create a new uploader for each model that uses cloud storage, so I have two uploaders on each model, then transferring the files from one to the other, setting an attribute to indicate which file should be used until they are all transferred, then removing the old uploader. That seems a little excessive.
Minimal to Possibly Zero Downtime Procedure
In my opinion, the easiest and fastest way to accomplish what you want with almost no downtime is this (I will assume the AWS cloud, but a similar procedure applies to any cloud service):
Figure out and set up your assets bucket, bucket policies, etc., for making the assets publicly accessible.
Using s3cmd (a command-line tool for interacting with S3) or a GUI app, copy the entire assets folder from the filesystem to the appropriate folder in S3.
In your app, set up CarrierWave and update your models/uploaders for :fog storage.
Do not restart your application yet. Instead bring up rails console and for your models, check that the new assets URL is correct and accessible as planned. For example, for a video model with picture asset, you can check this way:
Video.first.picture.url
This will give you a full cloud URL based on the updated settings. Copy the URL and paste in a browser to make sure that you can get to it fine.
If this works for at least one instance of each model that has assets, you are good to restart your application.
Upon restart, all your assets are being served from cloud, and you didn't need any migrations or multiple uploaders in your models.
(Based on a comment by @Frederick Cheung): using s3cmd (or something similar), rsync or sync the assets folder from the filesystem to S3 to account for any assets that were uploaded between steps 2 and 5.
PS: If you need help setting up carrierwave for cloud storage, let me know.
I'd try the following steps:
Change the storage in the uploaders to :fog or what ever you want to use
Write a migration like rails g migration MigrateFiles to let carrierwave get the current files, process them and upload them to the cloud.
If your model looks like this:
class Video
  mount_uploader :attachment, VideoUploader
end
The migration would look like this:
@videos = Video.all
@videos.each do |video|
  video.remote_attachment_url = video.attachment_url
  video.save
end
If you execute this migration the following should happen:
CarrierWave downloads each image, because you specified a remote URL for the attachment (its current location, like http://test.com/images/1.jpg), and saves it to the cloud, because you changed the storage in the uploader.
Edit:
As San pointed out, this will not work directly. You should first create an extra column, run a migration to copy the current attachment URLs of all the videos into that column, change the uploader, and then run the above migration using the copied URLs from the new column. With another migration, delete the column again. Not that clean and easy, but done in a few minutes.
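A hypothetical sketch of that extra-column variant (the column and model names are examples):

class AddLegacyAttachmentUrlToVideos < ActiveRecord::Migration
  def change
    add_column :videos, :legacy_attachment_url, :string
  end
end

# Step 1: copy the current local URLs before switching the uploader.
Video.find_each do |video|
  video.update_column(:legacy_attachment_url, video.attachment_url)
end

# Step 2: after switching the uploader to :fog, re-upload from the
# saved URLs, then drop the column in a final migration.
Video.find_each do |video|
  video.remote_attachment_url = video.legacy_attachment_url
  video.save!
end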
When we use Heroku, most people suggest Cloudinary: free and simple to set up.
My case was that we used the Cloudinary service and needed to move to AWS S3 for various reasons.
This is what I did with the uploader:
class AvatarUploader < CarrierWave::Uploader::Base
  def self.set_storage
    if ENV['UPLOADER_SERVICE'] == 'aws'
      :fog
    else
      nil
    end
  end

  if ENV['UPLOADER_SERVICE'] == 'aws'
    include CarrierWave::MiniMagick
  else
    include Cloudinary::CarrierWave
  end

  storage set_storage
end
Also, set up the rake task:
task :migrate_cloudinary_to_aws do
  # Capture the Cloudinary-backed records before switching the uploader.
  profile_image_old_url = []
  Profile.where("picture IS NOT NULL").each do |profile_image|
    profile_image_old_url << profile_image
  end

  # Switch the provider and reload the uploader so storage becomes :fog.
  ENV['UPLOADER_SERVICE'] = 'aws'
  load("#{Rails.root}/app/uploaders/avatar_uploader.rb")

  # Re-upload each picture to S3 from its old Cloudinary URL.
  Profile.where("picture IS NOT NULL OR cover IS NOT NULL").each do |profile_image|
    old_profile_image = profile_image_old_url.detect { |image| image.id == profile_image.id }
    profile_image.remote_picture_url = old_profile_image.picture.url
    profile_image.save
  end
end
The trick is switching the uploader's provider via an env variable. Good luck!
I migrated the CarrierWave files to Amazon S3 with s3cmd and it works.
Here are the steps to follow:
Change the storage type of the uploader to :fog.
Create a bucket on Amazon S3 if you don't already have one.
Install s3cmd on the remote server: sudo apt-get install s3cmd
Configure s3cmd: s3cmd --configure.
You will need to enter the public and secret keys here, provided by Amazon.
Sync the files with this command: s3cmd sync /path_to_your_files s3://bucket_name/
Set the --acl-public flag to upload the files as public and avoid permission issues.
Restart your server.
Notes:
sync will not duplicate your records; it first checks whether each file is already present on the remote server.
I have a rails web application that contains a Windows executable file the user can download and install on his PC. This program will communicate with certain peripherals on the user's PC using web services to the web application.
Where should I put such a program in my Rails tree, and how should I make this "available" for users to download?
Assets
Alternatively to putting them in the public dir, you may wish to create a downloads folder in the asset pipeline: /app/assets/downloads
This will give you the ability to call downloads_path and to precompile each file.
To get this to work, you'll be best off creating a helper method like this:
#app/helpers/application_helper.rb
def downloads_path path
  asset_path("/downloads/#{path}")
end

#app/views/application/index.html.erb
<%= downloads_path("download.exe") %>
This has the added benefit of allowing you to precompile the files and have them accessible from the likes of CDNs, as Michael Szyndel recommended.