Accessing data stored in Amazon S3 through Rails - ruby-on-rails

My goal is to make graphs from data within Excel files that users have uploaded to Amazon S3.
I've already implemented the functionality for users to upload the Excel files with CarrierWave; now I need to access that data and make it presentable for use with a charting library (Highcharts).
The task I am stuck on is directly accessing the data in S3 through Rails. Once the data is pulled, it should be fairly straightforward to manipulate it for Highcharts.
Any suggestions would be much appreciated!

You can use the AWS SDK (this example uses the version 1 AWS::S3 API):
require 'aws-sdk'

# retrieve the access key and secret key from the environment
access_key_id = ENV["ACCESS_KEY_ID"]
secret_access_key = ENV["SECRET_ACCESS_KEY"]

# create an instance of the S3 interface
s3 = AWS::S3.new(access_key_id: access_key_id, secret_access_key: secret_access_key)

# get the bucket
bucket = s3.buckets['your-bucket-name']

# iterate over the objects and read their contents
bucket.objects.each do |object|
  puts object.key
  puts object.read
end

With version 2 of the AWS SDK (the Aws:: namespace) you can download every object in a bucket like this:
s3 = Aws::S3::Client.new
bucket = Aws::S3::Bucket.new('AWS_BUCKET NAME HERE')

bucket.objects.each do |obj|
  File.open("#{Rails.root}/#{obj.key}", 'wb') do |file|
    s3.get_object(bucket: bucket.name, key: obj.key, response_target: file)
  end
end
Or:
s3 = Aws::S3::Client.new

s3.list_objects(bucket: 'AWS_BUCKET NAME HERE').each do |response|
  response.contents.each do |obj|
    File.open("#{Rails.root}/#{obj.key}", 'wb') do |file|
      s3.get_object(bucket: 'AWS_BUCKET NAME HERE', key: obj.key, response_target: file)
    end
  end
end
There is an official aws-sdk Ruby gem; see the AWS SDK for Ruby version 2 documentation.
For environment variable configuration you can use figaro or dotenv (for the development environment), or set the variables in your ~/.bashrc file.
Note:
You need to create an S3 bucket and obtain your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Before development you can test and browse your S3 bucket data using a Chrome S3 browser extension.
If you store the ENV variables in ~/.bashrc, run source ~/.bashrc (or . ~/.bashrc) so the changes take effect.
Code Reference
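As a minimal sketch, once those variables are available you can configure the version 2 SDK globally in an initializer (the file name and the AWS_REGION variable are assumptions; adjust to your setup):
# config/initializers/aws.rb -- sketch assuming the aws-sdk v2 gem and that
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_REGION are set via figaro,
# dotenv, or your shell
require 'aws-sdk'

Aws.config.update(
  region: ENV['AWS_REGION'],
  credentials: Aws::Credentials.new(
    ENV['AWS_ACCESS_KEY_ID'],
    ENV['AWS_SECRET_ACCESS_KEY']
  )
)
After this, Aws::S3::Client.new picks up the credentials and region automatically.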

Related

Sitemap generator not uploading sitemap

I started playing with a test app trying to upload sitemaps using Amazon S3. I've been following https://github.com/kjvarga/sitemap_generator trying to figure out the gem and have only been half successful. A sitemap will generate in the public folder, but not upload to the S3 bucket.
I've added the config/sitemap.rb found in the tutorial above.
require 'rubygems'
require 'sitemap_generator'
require 'aws-sdk'

SitemapGenerator::Sitemap.create_index = true
SitemapGenerator::Sitemap.default_host = 'https://www.myapp.herokuapp.com'

SitemapGenerator::Sitemap.create do
  add '/home', :changefreq => 'daily', :priority => 0.9
end

SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new(fog_provider: 'AWS',
  aws_access_key_id: 'KEY',
  aws_secret_access_key: 'SECRET',
  fog_directory: 'DIR',
  fog_region: 'REGION')
I run
rails sitemap:refresh
in my terminal and it generates the maps; it just doesn't upload them. No errors, no clues as to what didn't happen, nothing. It even tells me that Google and Bing were successfully pinged.
Of course I can visit my AWS bucket and manually upload these files but that feels...wrong. I've used shrine for images in the past and am used to uploading to a cache. There must be something I missed.
Check your secrets: if the AWS account environment variables aren't set, the S3 adapter will never work. Also check the policy on your bucket.
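For example, a minimal sketch of wiring the adapter to environment variables instead of hard-coded strings (the variable names are assumptions; match them to whatever you set with heroku config:set):
# config/sitemap.rb -- sketch assuming the ENV vars exist on the Heroku app
SitemapGenerator::Sitemap.adapter = SitemapGenerator::S3Adapter.new(
  fog_provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  fog_directory: ENV['FOG_DIRECTORY'],
  fog_region: ENV['FOG_REGION']
)
Note that in this sketch the adapter is assigned before SitemapGenerator::Sitemap.create is called; if it is only assigned afterwards, the sitemap may already have been written with the default file adapter.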

How does one transfer files to S3 from heroku rails 3 AWS v1

In my Rails 3 app I'm generating a file when a user performs an action. This file needs to be permanent, so it needs to be stored somewhere like AWS S3. What is the simplest way to do this with the AWS SDK (aws-sdk v1)?
The image is generated using the barby gem (https://github.com/toretore/barby), so the user doesn't need to upload it.
Use the AWS SDK for Ruby to copy the file to S3. The official documentation pretty much gives you the code:
require 'aws-sdk'
s3 = Aws::S3::Resource.new(region:'us-west-2')
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/source/file')
Correct, Mark B! That is with version 2 of the AWS SDK.
With version 1 it is:
require 'aws-sdk'
s3 = AWS::S3.new
# Upload a file.
key = File.basename(file_name)
s3.buckets[bucket_name].objects[key].write(:file => file_name)
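Tying the two together, here is a minimal sketch of writing a barby barcode to a temp file and pushing it to S3 with the v1 API (the symbology, bucket name and key are illustrative assumptions; the PNG outputter also needs the chunky_png gem):
require 'aws-sdk'
require 'barby'
require 'barby/barcode/code_128'
require 'barby/outputter/png_outputter'
require 'tempfile'

# generate the barcode PNG in a temp file (Code 128 is just an example symbology)
barcode = Barby::Code128B.new('ORDER-12345')
tempfile = Tempfile.new(['barcode', '.png'])
tempfile.binmode
tempfile.write(Barby::PngOutputter.new(barcode).to_png)
tempfile.close

# upload it with the v1 SDK; credentials come from AWS.config or the environment
s3 = AWS::S3.new
s3.buckets['your-bucket-name'].objects['barcodes/order-12345.png'].write(file: tempfile.path)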

"You need to give a SHA parameter" error when serving Dragonfly images through Cloudfront

I've successfully installed and uploaded images in my Rails app using the dragonfly gem. Loading images works fine if I load it directly through the app. However, if I load the images through Cloudfront, I get an error page that says "You need to give a SHA parameter" which is strange because the SHA is included in the URL already:
For context, here are my Dragonfly settings:
# config/initializers/dragonfly.rb
require 'dragonfly'

# Configure
Dragonfly.app.configure do
  plugin :imagemagick
  secret "randomsecret"
  url_format "/media/:job/:name"
  datastore :file,
    root_path: Rails.root.join('public/system/dragonfly', Rails.env),
    server_root: Rails.root.join('public')
end

# Logger
Dragonfly.logger = Rails.logger

# Mount as middleware
Rails.application.middleware.use Dragonfly::Middleware

# Add model functionality
if defined?(ActiveRecord::Base)
  ActiveRecord::Base.extend Dragonfly::Model
  ActiveRecord::Base.extend Dragonfly::Model::Validations
end
I've already read the CDN section in the documentation of dragonfly, but it didn't fix the problem. The only difference it made is that it included the url_host when calling image.url instead of just the relative path that it normally returns.
By default, CloudFront does not forward query strings to the origin. Have you verified that you have query string forwarding enabled?
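If query string forwarding turns out to be the culprit, the fix is in the CloudFront distribution settings (forward query strings to the origin) rather than in Rails; the Dragonfly side only needs to know the CDN host, roughly as in this sketch (the CloudFront domain is a placeholder):
# config/initializers/dragonfly.rb -- sketch; keep the rest of your configuration as-is
Dragonfly.app.configure do
  plugin :imagemagick
  secret "randomsecret"
  url_format "/media/:job/:name"
  # serve generated URLs through the CloudFront distribution (placeholder domain)
  url_host 'https://dxxxxxxxxxxxx.cloudfront.net'
  datastore :file,
    root_path: Rails.root.join('public/system/dragonfly', Rails.env),
    server_root: Rails.root.join('public')
end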

Ruby On Rails uploading songs to Rackspace Cloud Files container

Ok, so here's the deal: I'm creating a web app and adding the ability for users to upload music to their profiles. I'm using Rackspace Cloud Files for storage, and I'm having a bit of trouble completing two tasks. First, I'm having trouble writing the code to upload a file to my container. Second, I need to generate the URL of the file to store in the database. I'm very new to integrating APIs, so I don't have a lot of knowledge.
object = container.create_object 'filename', false
object.write file
Is this code correct for uploading the files?
Using fog directly
First of all, if you're not already using it, the officially supported Ruby library for interacting directly with Cloud Files from Ruby is fog. To start, add it to your Gemfile:
gem 'fog'
Then run bundle install to install it.
To upload a file and get its public url, using fog directly:
# Use your Rackspace API key, not your password; you can find your API key by logging
# in to the control panel, clicking on your name in the top-right, and choosing
# "account settings".
service = Fog::Storage.new(
  provider: 'rackspace',
  rackspace_username: ENV['RACKSPACE_USERNAME'],
  rackspace_api_key: ENV['RACKSPACE_API_KEY']
)

dir = service.directories.create key: 'directory-name', public: true

files_to_upload.each do |path|
  file = dir.files.create key: File.basename(path), body: File.open(path, 'r')
  puts "public URL for #{file.key} is #{file.public_url}"
end
Using CarrierWave
However! What you're doing is a pretty common use case in Rails, so there's a gem for it: CarrierWave. Add the following lines to your Gemfile:
gem 'fog'
gem 'carrierwave'
And run bundle install to install them. Now configure CarrierWave to use Cloud Files:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'rackspace',
    :rackspace_username => 'xxxxxx',
    :rackspace_api_key => 'yyyyyy'
  }
  config.fog_directory = 'name_of_directory'
end
Next generate an uploader:
rails g uploader Song
Now you can use the SongUploader to store and retrieve Song data. See the generated code, or the CarrierWave docs, for more details.
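For completeness, a minimal sketch of wiring the generated uploader to a model (the Song model and its song column are assumptions for illustration):
# app/models/song.rb -- sketch; assumes a songs table with a string `song` column
class Song < ActiveRecord::Base
  mount_uploader :song, SongUploader
end

# usage, e.g. in a controller or console:
# song = Song.new
# song.song = params[:file]   # an uploaded file
# song.save!
# song.song.url               # the stored file's URL, e.g. to keep in the database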

Carrierwave storage path management with a staging instance on Heroku

I have two instances of my app in production on Heroku, staging.myapp.com and www.myapp.com, and I am following this workflow: Staging instance on Heroku.
As I am using CarrierWave with AWS S3, I would like to know if it is possible to modify the storage path in order to separate the two instances, e.g.:
def store_dir
  instance = "staging" | "production"  # pseudocode: whichever instance this is
  "#{instance}/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
I keep my assets in separate buckets and do it like this in my CarrierWave initializer:
config.fog_directory = "myappname-#{Rails.env}-assets"
so it will use a bucket named myappname-production-assets or myappname-staging-assets.
Make sure you read 'Configuring Carrierwave' and 'Using Amazon S3' on https://github.com/jnicklas/carrierwave.
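If you would rather keep a single bucket and vary the path, here is a sketch of the store_dir approach from the question, keyed off an environment variable (APP_INSTANCE is an assumed name; both Heroku apps typically run with RAILS_ENV=production, so Rails.env alone may not distinguish them):
# in the uploader -- sketch; APP_INSTANCE is an assumed env var set per Heroku app,
# e.g. "staging" on staging.myapp.com and "production" on www.myapp.com
def store_dir
  instance = ENV.fetch('APP_INSTANCE', Rails.env)
  "#{instance}/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end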
