Nginx + CarrierWave + Fog + GoogleCloud - ruby-on-rails

I have tried the solutions and answers that already exist for this setup, but they do not solve my problem, so I am opening a new thread.
I set up a test server with these technologies on my own home machine and it works perfectly, storing uploads in Google Cloud Storage. When I spin up a server on Google Cloud with the same configuration, it does not work.
I have ruled out problems with the Fog credentials, because they work in the pre-production environment. I have also tried user permissions of 644 and 755.
When I upload a photo, instead of being persisted to Google Cloud Storage, it lands in the project root folder with filenames such as RackMultipart20160414-4440-ry1g0r.jpg, and a copy is stored in the public/uploads/tmp folder.
I don't know what else it could be. My settings are:
config/initializers/carrier_wave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'Google',
    :google_storage_access_key_id => Figaro.env.google_storage_access_key_id,
    :google_storage_secret_access_key => Figaro.env.google_storage_secret_access_key
  }
  config.fog_directory = 'uploads-prod'
  config.fog_public = true
  config.storage = :fog
  config.root = Rails.root.join('public')
end
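One way to double-check credentials and bucket access on the failing server is to talk to Google Cloud Storage through fog directly from a Rails console, bypassing CarrierWave entirely. A minimal sketch, reusing the same Figaro-backed keys (the bucket name matches fog_directory above):
require 'fog/google'

storage = Fog::Storage.new(
  provider: 'Google',
  google_storage_access_key_id: Figaro.env.google_storage_access_key_id,
  google_storage_secret_access_key: Figaro.env.google_storage_secret_access_key
)

# If this returns the bucket, credentials and connectivity are fine and the
# problem lies elsewhere (e.g. the uploader never reaching the store step).
directory = storage.directories.get('uploads-prod')
puts directory ? 'bucket reachable' : 'bucket not found / not accessible'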
content_uploader.rb
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
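For reference, with this store_dir an upload mounted as image on a hypothetical Content record with id 42 resolves to:
# "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
# => "uploads/content/image/42"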
Any idea or solution?
Thank you

Related

Not able to change S3 path of images in refinerycms (using dragonfly) in rails

I am currently migrating a 10-year-old on-prem application to Heroku. We have a good amount of data on our servers.
The dragonfly/refinerycms data is stored at public/system/refinery/..., with images stored under the refinery folder as images/2021/11/25/486ucenknk_image.png; the same is the case for resources.
But when I set the following in images.rb:
config.s3_datastore = true
files start saving to the S3 bucket, as expected, but with a different path:
2021/11/25/02/01/37/5f6e0f21-658c-4cf2-9edc-da7cb8575ab8/images.png
meaning the path includes the time of day in the folder structure as well.
I have tried changing this path in many places, without success. I tried changing url_format as well, but it does not seem to affect the store location at all.
I have attached the config files for both.
config/initializers/dragonfly.rb
# config/initializers/dragonfly.rb
require 'dragonfly/s3_data_store'

# Configure
Dragonfly.app.configure do
  protect_from_dos_attacks true
  secret "Some secret"
  url_format "/media/:job/:name"
  datastore :s3,
            bucket_name: ENV['S3_BUCKET'],
            access_key_id: ENV['S3_KEY'],
            secret_access_key: ENV['S3_SECRET'],
            url_scheme: 'https'
end

# Logger
Dragonfly.logger = Rails.logger

# Mount as middleware
Rails.application.middleware.use Dragonfly::Middleware

# Add model functionality
if defined?(ActiveRecord::Base)
  ActiveRecord::Base.extend Dragonfly::Model
  ActiveRecord::Base.extend Dragonfly::Model::Validations
end

Excon.defaults[:write_timeout] = 500
config/initializers/refinery/images.rb
# config/initializers/refinery/images.rb
# encoding: utf-8
Refinery::Images.configure do |config|
# Configure S3 (you can also use ENV for this)
# The s3_backend setting by default defers to the core setting for this but can be set just for images.
config.s3_datastore = false
config.s3_bucket_name = ENV['S3_BUCKET']
config.s3_access_key_id = ENV['S3_KEY']
config.s3_secret_access_key = ENV['S3_SECRET']
config.s3_region = 'us-east-1'
# Configure Dragonfly
config.dragonfly_verify_urls = false
config.datastore_root_path = "/refinery/images"
end
If anyone has encountered a problem like this before, please help me. Thanks in advance.
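There is no accepted answer here, but for anyone hitting the same issue: Dragonfly's model layer has a storage_options hook that controls the path an attachment is stored under, and the dragonfly-s3_data_store gem accepts a root_path prefix. A minimal sketch of both mechanisms with plain Dragonfly (not Refinery-specific; the accessor name and path scheme are assumptions):
# In config/initializers/dragonfly.rb: prefix every stored path (hypothetical prefix).
datastore :s3,
          bucket_name: ENV['S3_BUCKET'],
          access_key_id: ENV['S3_KEY'],
          secret_access_key: ENV['S3_SECRET'],
          url_scheme: 'https',
          root_path: 'system/refinery'

# On a model: override the per-attachment storage path to mirror the old layout.
class Image < ActiveRecord::Base
  dragonfly_accessor :file do
    storage_options do |attachment|
      { path: "images/#{Time.now.strftime('%Y/%m/%d')}/#{attachment.name}" }
    end
  end
end
Whether Refinery's own models pick this up depends on how it defines its accessors, so treat this as a starting point rather than a drop-in fix.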

Carrierwave fog local storage full attachment path

I am building a Rails app with CarrierWave and Fog for attachment storage. In my test environment, I am using Fog local storage.
I am looking for a way to get the full attachment path with this configuration.
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: 'Local',
    local_root: '/Users/me/fog',
    endpoint: '/Users/me/fog',
  }
  config.fog_directory = 'test.myapp.com'
  config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
When I use any other storage option (like AWS S3), I can get the full URL to an attachment just by calling my_object.my_attachment_url or my_object.my_attachment.path.
However, when using Local storage, I only get a path relative to my configured root, like my_object/my_attachment/1/test.jpg.
Is there any way through carrierwave or fog to get the full path to this local file?
For my example, the output I am looking for would be: /Users/me/fog/test.myapp.com/my_object/my_attachment/1/test.jpg
For me, the answer was modifying the CarrierWave uploader class.
I had
def store_dir
  "#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
which worked fine for AWS S3, as all the S3-specific info was inserted before this string. However, to get this working with Fog local storage as well, I added:
if Rails.env.test?
  def base_path
    "#{File.expand_path(CONFIG.fog_local_root)}/#{CONFIG.fog_directory}/"
  end
else
  def base_path
    ''
  end
end
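With that in place, the absolute location from the question can be assembled from the two pieces. A sketch using the question's example names (CONFIG is the poster's own settings object):
uploader = my_object.my_attachment
# base_path => "/Users/me/fog/test.myapp.com/"
# path      => "my_object/my_attachment/1/test.jpg"
full_path = "#{uploader.base_path}#{uploader.path}"
# => "/Users/me/fog/test.myapp.com/my_object/my_attachment/1/test.jpg"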

Carrierwave + fog + aws s3 and rails in production

I have just migrated from Paperclip to CarrierWave and managed to get uploading to S3 working locally on my machine. But after deploying the Rails application to my server (which runs Ubuntu and Passenger with nginx; it was a mission getting it to work), when I try to upload an image it tries to save it to public/uploads/..., which fails with a permission-denied error. I have looked and searched everywhere to find out why it's not working, and have found nothing.
My Uploader file:
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::Compatibility::Paperclip

  storage :fog

  def store_dir
    "/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
fog.rb
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    provider: 'AWS',                   # required
    aws_access_key_id: '*******',      # required
    aws_secret_access_key: '********', # required
    region: 'ap-southeast-2',          # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'publicrant' # required
  # config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
OK, so after hours of googling and failing miserably to find a solution, it turns out that in a production environment CarrierWave does need to put the file temporarily in uploads/tmp before it pushes it to the S3 bucket.
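Given that, the fix is usually to make sure the temporary cache location exists and is writable by the app server user; CarrierWave also lets you point it somewhere else explicitly (the same setting the Heroku-oriented initializer further down this page uses). A sketch:
CarrierWave.configure do |config|
  # CarrierWave caches each upload here before pushing it to the S3 bucket,
  # so this directory must be writable by the user Passenger/nginx runs as.
  config.cache_dir = "#{Rails.root}/tmp/uploads"
end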
It seems like you don't have read permissions for other users (o+r). Check them using the command:
namei -lm <absolute path to your current/public>
and grant read permissions:
chmod o+r <directory>
(For directories, the execute bit is also needed so they can be traversed, i.e. chmod o+rx.) In your case I think it will be the /home/<user> directory.

How to make a file uploaded to S3 private

I have a Rails app in which employers can upload files for a freelancer to work on. I am using Amazon S3 to store the files. The problem is that Amazon S3 assigns the file a URL such that anyone who has it can access the file. Employers will often upload private files that only the freelancer should be able to see. How do I make it so that when an employer uploads a file, only the freelancer can see it?
Here is the file uploader code:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider => 'AWS',
    :aws_access_key_id => ENV['AWS_ACCESS'],
    :aws_secret_access_key => ENV['AWS_SECRET']
  }
  config.fog_directory = ENV['S_BUCKET']
end
Use the config.fog_public = false option to make the files private, and fog_authenticated_url_expiration (a time in seconds) to add a TTL to each file URL. See the Fog storage module for more info: https://github.com/carrierwaveuploader/carrierwave/blob/master/lib/carrierwave/storage/fog.rb
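Concretely, that amounts to something like this in the initializer (the 10-minute TTL is an arbitrary example value):
CarrierWave.configure do |config|
  config.fog_public = false                      # objects are stored private
  config.fog_authenticated_url_expiration = 600  # signed URLs valid for 10 minutes
end
With fog_public set to false, my_object.my_attachment_url returns a signed, expiring URL instead of a public one.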

Configure Environment to Use Different Storage Paths on Amazon S3 with Carrierwave

I would like to have distinct folders in my S3 bucket to keep the production uploads separate from the development environment.
I am not sure how to do this; here is the skeleton I've come up with in the CarrierWave initializer:
if Rails.env.test? or Rails.env.development?
  CarrierWave.configure do |config|
    # configure dev storage path
  end
end

if Rails.env.production?
  CarrierWave.configure do |config|
    # configure prod storage path
  end
end
Two options:
Option 1: You don't care about organizing the files by model ID
In your carrierwave.rb initializer:
primary_folder = Rails.env.production? ? "production" : "test"

CarrierWave.configure do |config|
  # stores in either "production/..." or "test/..." folders
  config.store_dir = "#{primary_folder}/uploads/images"
end
Option 2: You DO care about organizing the files by model ID (e.g. user ID)
In your uploader file (e.g. image_uploader.rb within the uploaders directory):
class ImageUploader < CarrierWave::Uploader::Base
  ...

  # Override the directory where uploaded files will be stored.
  def store_dir
    primary_folder = Rails.env.production? ? "production" : "test"
    # stores in either "production/..." or "test/..." folders
    "#{primary_folder}/uploads/images/#{model.id}"
  end

  ...
end
Consider the following initializer:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.enable_processing = true

  # For testing, upload files to local `tmp` folder.
  if Rails.env.test?
    config.storage = :file
    config.root = "#{Rails.root}/tmp/"
  elsif Rails.env.development?
    config.storage = :file
    config.root = "#{Rails.root}/public/"
  else # staging, production
    config.fog_credentials = {
      :provider => 'AWS',
      :aws_access_key_id => ENV['S3_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET']
    }
    config.cache_dir = "#{Rails.root}/tmp/uploads" # To let CarrierWave work on Heroku
    config.fog_directory = ENV['S3_BUCKET']
    config.fog_public = false
    config.storage = :fog
  end
end
In development, uploads are sent to the local public directory; in test mode, to the Rails tmp directory. Finally, in the "else" environments (usually production or staging) the files are directed to S3, using environment variables to determine which bucket and AWS credentials to use.
Use different Amazon S3 buckets for your different environments. In your various environment .rb files, set an environment-specific asset_host. Then you can avoid detecting the Rails environment in your uploader.
For example, in production.rb:
config.action_controller.asset_host = "production_bucket_name.s3.amazonaws.com"
The asset_host in development.rb becomes:
config.action_controller.asset_host = "development_bucket_name.s3.amazonaws.com"
etc.
(Also consider using a CDN instead of hosting directly from S3).
Then your uploader becomes:
class ImageUploader < CarrierWave::Uploader::Base
  ...

  # Override the directory where uploaded files will be stored.
  def store_dir
    "uploads/images/#{model.id}"
  end

  ...
end
This is a better technique from the standpoint of replicating production in your various other environments.
