I'm looking for someone who can give me some tips - or, ideally, knows where to find a step-by-step guide - for working with private Rackspace containers (via temp URL) using Fog in a Rails app. I've gotten fairly far using just their documentation, but none of the temp URLs I generate seem to be valid (they return 401 errors).
Anyone have any tips? I know this is fairly vague, but I was hoping there might be a comprehensive guide out there - I wasn't able to find one by googling around.
Thanks!
EDITED
So in response to a comment, I tried following the directions from the getting started guide exactly. When I go to the URL returned by the code below, I get ERR_CONNECTION_REFUSED. Any ideas?
require "fog"
#storage = Fog::Storage.new(:rackspace_username => '{myUsername}',
:rackspace_api_key => '{myAPIKey}',
:rackspace_region => '{myRegion}',
:provider => 'Rackspace')
directory = #storage.directories.get('{myContainer}')
directory.public = false
directory.save
file = directory.files.create(
:key => 'somefile.txt',
:body => 'Rackspace is awesome!'
)
account = #storage.account
account.meta_temp_url_key = '{myTempUrlKey}'
account.save
#storage = Fog::Storage.new(:rackspace_username => '{myUsername}',
:rackspace_api_key => '{myAPIKey}',
:rackspace_region => '{myRegion}',
:rackspace_temp_url_key => '{myTempUrlKey}',
:provider => 'Rackspace')
directory = #storage.directories.get('{myContainer}')
file = directory.files.get('somefile.txt')
temp_url = file.url(Time.now.to_i + 1000000)
puts temp_url
SOLVED
By getting rid of the directory, file, and temp_url variables at the end and instead using

@storage.get_object_https_url('{myContainer}', 'somefile.txt', Time.now + 60)

which I found in the fog source.
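For reference, a minimal sketch of the final working flow (the credentials, container, and key are placeholders as above):

require "fog"

# The client must be initialized with the temp URL key so it can sign URLs.
@storage = Fog::Storage.new(:rackspace_username     => '{myUsername}',
                            :rackspace_api_key      => '{myAPIKey}',
                            :rackspace_region       => '{myRegion}',
                            :rackspace_temp_url_key => '{myTempUrlKey}',
                            :provider               => 'Rackspace')

# Generates a signed HTTPS URL that expires 60 seconds from now.
puts @storage.get_object_https_url('{myContainer}', 'somefile.txt', Time.now + 60)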
Related
I have a Rails application and use the fog gem to upload files to cloud storage (Rackspace). So far I have been successfully uploading local files:
@service = Fog::Storage.new(options)
directory = @service.directories.new :key => 'test'
directory.files.create :key => path, :body => file, :content_type => content_type
I now have a new requirement: I want to take a remote link (a public URL) and have its contents uploaded to cloud storage. Is there a way to achieve this without downloading the file locally or loading the whole thing into memory?
I'm looking for something like this:
directory.files.create :key => path, :body => 'url-to-remote-file', :content_type => content_type
A stream-based approach would also be quite helpful.
Thanks.
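One possible approach (a suggestion of mine, not from the thread) is to fetch the remote file with open-uri and pass the resulting IO to fog. This avoids holding the whole file in memory, though OpenURI does spool large bodies to a Tempfile on disk, so it is not fully download-free. The URL below is a placeholder:

require 'open-uri'

# URI.open (Ruby 2.5+; plain open() on older Rubies) returns an IO-like
# object; large responses are buffered to a Tempfile, not held in RAM.
remote_io = URI.open('http://example.com/path/to/remote-file.jpg')

directory.files.create(
  :key          => path,
  :body         => remote_io,
  :content_type => remote_io.content_type
)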
I'm adding Brakeman to a Rails project but I'm running into an issue. I want it to ignore my Gemfile and Gemfile.lock, but when I run it with a command like
brakeman --skip-files Gemfile.lock,Gemfile
it still touches those files. We use other systems to monitor our gems, but is it not possible to ignore the gem files completely? I could use a brakeman.ignore file, of course, but would prefer not to. Thanks for any assistance.
I believe this is the check to which you are referring:
https://github.com/presidentbeef/brakeman/blob/master/lib/brakeman/scanner.rb#L39-L40
Brakeman.notify "Processing gems..."
process_gems
The process_gems function is defined here:
https://github.com/presidentbeef/brakeman/blob/master/lib/brakeman/scanner.rb#L131-L152
#Process Gemfile
def process_gems
  gem_files = {}

  if @app_tree.exists? "Gemfile"
    gem_files[:gemfile] = { :src => parse_ruby(@app_tree.read("Gemfile")), :file => "Gemfile" }
  elsif @app_tree.exists? "gems.rb"
    gem_files[:gemfile] = { :src => parse_ruby(@app_tree.read("gems.rb")), :file => "gems.rb" }
  end

  if @app_tree.exists? "Gemfile.lock"
    gem_files[:gemlock] = { :src => @app_tree.read("Gemfile.lock"), :file => "Gemfile.lock" }
  elsif @app_tree.exists? "gems.locked"
    gem_files[:gemlock] = { :src => @app_tree.read("gems.locked"), :file => "gems.locked" }
  end

  if gem_files[:gemfile] or gem_files[:gemlock]
    @processor.process_gems gem_files
  end
rescue => e
  Brakeman.notify "[Notice] Error while processing Gemfile."
  tracker.error e.exception(e.message + "\nWhile processing Gemfile"), e.backtrace
end
The AppTree::exists? function is defined here:
https://github.com/presidentbeef/brakeman/blob/master/lib/brakeman/app_tree.rb#L82-L84
def exists?(path)
  File.exist?(File.join(@root, path))
end
The GemProcessor::process_gems function is defined here:
https://github.com/presidentbeef/brakeman/blob/master/lib/brakeman/processors/gem_processor.rb#L11
...lots of code...
I don't see any code that would skip this functionality when a particular switch is passed to brakeman, and the AppTree::exists? function does not take the --skip-files option into account.
Unfortunately, I believe the current answer is that you cannot ignore the gem files completely.
You could create a PR to do what you want and see if the Brakeman team includes it in the next release:
https://brakemanscanner.org/docs/contributing/
Let us know if you discover a way to solve your problem.
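If you do pursue a patch, here is a minimal sketch of one possible approach (hypothetical code, not anything that exists in Brakeman) that makes AppTree::exists? consult the skip list before reporting a file as present:

# Hypothetical sketch - not actual Brakeman code.
# Assumes @skip_files holds the patterns passed via --skip-files.
def exists?(path)
  return false if @skip_files && @skip_files.any? { |pattern| File.fnmatch(pattern, path) }
  File.exist?(File.join(@root, path))
end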
I have (what I thought was) a perfectly working Heroku-Carrierwave-AWS setup.
I can upload images like a charm.
I now need to send the respective images, via a JSON request, to an app. This was working on my test server, but for some reason I'm getting the following in my Heroku logs from my Rails call:
Started POST "/downloadUserPhotos" for ?? at 2014-05-03 03:27:38 +0000
Errno::ENOENT (No such file or directory @ rb_sysopen - uploads/photo/mainphoto/1/largeimage.jpg):
app/controllers/stats_controller.rb:22:in `read'
app/controllers/stats_controller.rb:22:in `downloadPhotos'
I'm pretty sure this has something to do with the following Ruby/Rails code:
def downloadPhotos
  @photos = Photo.find_by_user_id(current_user.id)
  @mainphoto = Base64.strict_encode64(File.read(@photos.mainphoto.current_path))
end
When I use my console on Heroku and type the following:
@photos = Photo.find(1)
It works and I get the correct record shown. When I ask for current_path for mainphoto, I get:
irb(main):002:0> @photos.mainphoto.current_path
=> "uploads/photo/mainphoto/1/largeimage.jpg"
So, it knows it exists. And it's in the right place.
Can anyone enlighten me (or point me in the right direction) as to why I can't use File.read? And, more importantly, how do I get it to read the image file and encode it?
This has perplexed me somewhat.
I've tried using @photos.mainphoto.url, but other than giving me the whole URL, it still doesn't let File.read find the file.
My CarrierWave config is:
CarrierWave.configure do |config|
  config.fog_credentials = {
    # Configuration for Amazon S3
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region                => ENV['S3_REGION']
  }
  config.cache_dir = "#{Rails.root}/tmp/uploads"
  config.fog_directory = ENV['S3_BUCKET_NAME']
end
And I have the following in my Uploader:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.user.id}"
end
Thanks in advance.
In the line:

@mainphoto = Base64.strict_encode64(File.read(@photos.mainphoto.current_path))

change current_path to url. With fog storage the file lives on S3, not on the Heroku dyno's local filesystem, so the local path does not exist there.
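Note that File.read cannot open an HTTP(S) URL directly, so after that change you also need something that can fetch from a URL. A minimal sketch using open-uri (my assumption, not part of the original answer):

require 'open-uri'

def downloadPhotos
  @photos = Photo.find_by_user_id(current_user.id)
  # Fetch the image from its fog/S3 URL instead of a nonexistent local path.
  @mainphoto = Base64.strict_encode64(URI.open(@photos.mainphoto.url).read)
end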
I'm trying to upload an image to Amazon S3 with this Ruby code:
require 'net/http/post/multipart'

url = URI.parse('http://public.domain.com/')

File.open("/tmp/uup_1114.jpg") do |jpg|
  req = Net::HTTP::Post::Multipart.new url.path,
    'key'            => s3_key,
    'acl'            => s3_acl,
    'content_type'   => s3_content_type,
    'AWSAccessKeyId' => s3_AWSAccessKeyId,
    'policy'         => s3_policy,
    'signature'      => s3_signature,
    'file'           => UploadIO.new(jpg, "image/png", "image.jpg")

  res = Net::HTTP.start(url.host, url.port) do |http|
    http.request(req)
  end
end
And I'm getting this error back from Amazon:
InvalidArgument: Bucket POST must contain a field named 'key'. If it is specified, please check the order of the fields.
It looks like the 'file' field goes first in the request body, and that is what causes the error above. I can't figure out how to make the file field come last.
I have successfully used the AWS SDK for Ruby to create POST forms, though in my case users were uploading from a browser into an AWS account. Still, this may help:
the aws-sdk gem has a method on a bucket called presigned_post(options) that creates a pre-signed POST that works fine.
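For illustration, a minimal sketch with the current aws-sdk-s3 gem (the bucket name and key are placeholders, and the original answer predates this exact API):

require 'aws-sdk-s3'

s3     = Aws::S3::Resource.new(region: 'us-east-1')
bucket = s3.bucket('my-bucket') # placeholder bucket name

# presigned_post returns the form URL plus a hash of fields
# that are already consistent with the signed policy.
post = bucket.presigned_post(key: 'uploads/image.jpg', content_type: 'image/jpeg')

post.url    # the URL to POST the multipart form to
post.fields # form fields to send before the file part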
See also
https://forums.aws.amazon.com/thread.jspa?messageID=296867
It's better to use the AWS::S3 gem (http://amazon.rubyforge.org/) and its S3Object class.
If you're experiencing problems, check that your local computer's clock is correct (it really matters for request signing) and try setting
AWS::S3.const_set('DEFAULT_HOST', "s3-eu-west-1.amazonaws.com")
if you're working with bucket(s) located in Europe.
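A minimal sketch of an upload with that gem, assuming placeholder credentials and bucket name:

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'my-access-key',  # placeholder
  :secret_access_key => 'my-secret-key'   # placeholder
)

# Store the local file under the given key in the named bucket.
AWS::S3::S3Object.store('image.jpg', open('/tmp/uup_1114.jpg'), 'my-bucket')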
I have a bunch of jpeg files in a folder on my server, and I'm attempting to attach them to their corresponding Property instances through a rake task.
property.rb has the following code:
has_attached_file :temp_photo,
  :styles => PropertyImage::STYLES,
  :url    => "/assets/:class/:attachment/:id_partition/:style_:basename.:extension",
  :path   => "#{Rails.root}/public/assets/:class/:attachment/:id_partition/:style_:basename.:extension"
I use paperclip on other models, and there are no issues whatsoever, but I get a problem when I attempt the following:
p = Property.find(id)
file = File.open(temp_file_path)
p.temp_photo = file
p.save
# => false
file.close
p.errors
# => "/tmp/stream20110524-1126-1cunv0y-0.jpg is not recognized by the 'identify' command."
The file definitely exists, and I've tried changing the permissions. Restarting the server doesn't help. The problem seems specific to driving this from the command line, as the normal form / HTTP upload approach works fine. This is only a temporary set-up, so I'm looking for a working way to import a batch of files into my Rails app's Paperclip model.
Any suggestions?
path = 'target_file_path'
attach_name = 'temp_photo'

p = Property.find(id)
attach = Paperclip::Attachment.new(attach_name, p, p.class.attachment_definitions[attach_name.to_sym])

file = File.open(path)
attach.assign file
attach.save
file.close
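As an aside (my assumption, not part of the original answer): the "not recognized by the 'identify' command" error often means Paperclip cannot find the ImageMagick binaries in the environment the rake task runs under. Pointing Paperclip at them explicitly sometimes fixes the plain assignment approach as well:

# config/initializers/paperclip.rb
# The path is an example - run `which identify` to find the right one.
Paperclip.options[:command_path] = '/usr/local/bin'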