I have a new requirement for an existing app and wanted to ask other developers how to go about developing it. I want to export a CSV every 2 days and provide my sales team with feeds, which they bulk import into a designated data storage location, say Google Drive. They can then check the file, do uploads, etc.
I want to run a Heroku Scheduler job every 2 days that exports data from the app and saves the file, in a specific format with a specific header, to that storage.
I know that I can write a class method which generates the format, use the strategy pattern to get a specific header, and serve it via respond_to with a CSV format so a user can access that file through a URL. But how can I write a rake task which creates the file and uploads it to the specific location I specify?
I will really appreciate any direction.
In Rails, rake tasks are usually stored in lib/tasks and have a .rake extension. For the CSV part, you can use Ruby's built-in CSV library. After generating a CSV you can save it locally or upload it to any service you want (e.g. S3, Google Drive, etc.). For example, take S3:
# lib/tasks/csv_tasks.rake
require 'csv'

namespace :csv do
  desc 'Generates new feed'
  # depend on :environment so the Rails app (and the Employee model) is loaded
  task feed: :environment do
    client = Aws::S3::Client.new(
      region: 'eu-west-1',
      access_key_id: 'access_key_id',
      secret_access_key: 'secret_access_key'
    )

    # build the CSV in memory: header row first, then one row per employee
    feed_csv = CSV.generate do |csv|
      # headers
      csv << [:first_name, :last_name]
      # CSV rows
      Employee.find_each do |employee|
        csv << [employee.first_name, employee.last_name]
      end
    end

    # upload the generated CSV to the bucket
    client.put_object(
      body: feed_csv,
      bucket: 'my_bucket',
      key: 'feed.csv'
    )
  end
end
Then, in Heroku Scheduler, schedule the defined task: rake csv:feed
You might also consider having a model for your files in order to save their paths and then display them easily in your application.
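For instance, a minimal sketch of that idea, assuming a hypothetical Feed model with a key column that stores the S3 object key:

# app/models/feed.rb (hypothetical model, not part of the original app)
class Feed < ApplicationRecord
  validates :key, presence: true
end

# in the rake task, after the upload succeeds:
Feed.create!(key: 'feed.csv')

A record per export lets you list past feeds in a view and build their S3 URLs from the stored keys.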
I advise you to save your S3 or other credentials in the secrets.yml file or the credentials file (Rails 5.2+). To use the AWS SDK for Ruby, add this to your Gemfile:
gem 'aws-sdk', '~> 3'
And require it in the rake task, if needed. For more info about how to work with S3 from Ruby, see the AWS SDK for Ruby documentation.
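For example, a minimal sketch of reading the keys from the encrypted credentials (Rails 5.2+) instead of hardcoding them in the task, assuming you stored them under an aws key via rails credentials:edit:

# config/credentials.yml.enc is assumed to contain:
#   aws:
#     access_key_id: "..."
#     secret_access_key: "..."
client = Aws::S3::Client.new(
  region: 'eu-west-1',
  access_key_id: Rails.application.credentials.dig(:aws, :access_key_id),
  secret_access_key: Rails.application.credentials.dig(:aws, :secret_access_key)
)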
Related
I have been given a CSV file that contains emails and passwords.
My task is to go through this CSV file and create users using the devise gem.
You will probably wonder why I have been given people's emails and passwords (I am wondering as well), but I've been told not to worry about it.
My csv file looks like this:
Email,Password,Password_confirmation,,
email1#email.com ,password1,password1,,
email2#email.com ,password2,password2,,
email3#email.com ,password3,password3,,
I wrote the following code in my seeds.rb file:
require 'csv'
CSV.foreach("db/fixtures/users.csv", :col_sep => ",", :headers => true) do |row|
User.create(email: row['email'], password: row['password'], password_confirmation: row['password_confirmation'])
end
I run rails db:seed to create the users, but nothing happens, and there are no error messages either.
Any support would be welcomed.
First of all, it's a really bad idea to put a file with actual passwords under version control and to import it in seeds.rb. It would be much better to add a rake task for that purpose that accepts the CSV file path as an argument, and to run it once in a suitable environment.
About the code: the headers in a CSV import are case sensitive, so it would work if you read the row values via row['Email'], row['Password'], etc.
Also, make sure that filling in those fields is enough to save the user; there may be some other required fields in your particular model.
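Putting those two points together, a minimal sketch of the corrected loop (assuming the same file path and a standard Devise User model):

require 'csv'

CSV.foreach("db/fixtures/users.csv", :col_sep => ",", :headers => true) do |row|
  # headers are matched case-sensitively, so use 'Email' rather than 'email';
  # strip whitespace because the sample rows have trailing spaces after the emails
  user = User.create(
    email: row['Email'].to_s.strip,
    password: row['Password'],
    password_confirmation: row['Password_confirmation']
  )
  puts user.errors.full_messages.join(', ') unless user.persisted?
end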
I've been implementing an Active Storage Google strategy on Rails 5.2. At the moment I am able to upload files using the Rails console without problems; the only thing I am missing is whether there is a way to specify a directory inside a bucket. Right now I am uploading as follows:
bk.file.attach(io: File.open(bk.source_dir.to_s), filename: "file.tar.gz", content_type: "application/x-tar")
The configuration in my storage.yml:
google:
  service: GCS
  project: my-project
  credentials: <%= Rails.root.join("config/myfile.json") %>
  bucket: bucketname
But in my bucket there are different directories, such as bucketname/department1 and so on. I have been through the documentation and have not found a way to specify further directories, so all my uploads end up at the top level of the bucket.
Sorry, I’m afraid Active Storage doesn’t support that. You’re intended to configure Active Storage with a bucket it can use exclusively.
Maybe you can try metaprogramming, something like this:
Create config/initializers/active_storage_service.rb to add a set_bucket method to ActiveStorage::Service:
module Methods
  def set_bucket(bucket_name)
    # update config bucket
    config[:bucket] = bucket_name
    # update current bucket
    @bucket = client.bucket(bucket_name, skip_lookup: true)
  end
end

ActiveStorage::Service.class_eval { include Methods }
Update your bucket before uploading or downloading files:
ActiveStorage::Blob.service.set_bucket "my_bucket_name"
bk.file.attach(io: File.open(bk.source_dir.to_s), filename: "file.tar.gz", content_type: "application/x-tar")
I uploaded multiple PDF files to the following path (user/pdf/) in AWS S3, so the path for each file is going to be like user/pdf/file1.pdf, user/pdf/file2.pdf, etc.
In my website (Angular front-end and Rails backend), I'm trying to do four things:
1) Retrieve the files under a certain path (user/pdf/).
2) Make a view which lists the names of the files retrieved from that path.
3) Let users click the name of a file and have it open via the S3 endpoint.
4) Delete a file by clicking a button.
I was looking into the AWS S3 docs, but I could not find the related API calls. I would love to get some help on performing the above actions.
You should review the Ruby S3 SDK docs.
Listing objects from a bucket:
# enumerate ALL objects in the bucket (even if the bucket contains
# more than 1k objects)
bucket.objects.each do |obj|
  puts obj.key
end

# enumerate at most 20 objects with the given prefix
bucket.objects.with_prefix('photos/').each(:limit => 20) do |photo|
  puts photo.key
end
Getting an object:
# makes no request, returns an AWS::S3::S3Object
obj = bucket.objects['key']
Deleting an object:
bucket.objects.delete('abc')
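For point 3 (letting users open a file in the browser), a minimal sketch with the same v1 SDK; url_for returns a presigned, time-limited URL, and the key here is just an assumed example matching your user/pdf/ layout:

obj = bucket.objects['user/pdf/file1.pdf']
# presigned GET URL, valid for 10 minutes
url = obj.url_for(:read, :expires => 10 * 60)
puts url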
I'm making an Angular-Rails web app now. I successfully retrieve files from a certain path in AWS S3.
Let's say I call the code below:
@files = bucket.objects.with_prefix('pdf/folder/')
@files.each(:limit => 20) do |file|
  puts file.key
end
file.key prints pdf/folder/file1.pdf, pdf/folder/file2.pdf, etc.
I do not want the whole path, just the names of the files, like file1.pdf, file2.pdf, etc.
Is a regex the only way, or is there an API call for this in AWS S3? I was reading the docs and could not find a related API function.
The call you want is probably File#basename:
puts File.basename(file.key)
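Applied to the loop above, a minimal sketch:

@files.each(:limit => 20) do |file|
  puts File.basename(file.key)  # => "file1.pdf", "file2.pdf", ...
end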
In my Rails 3 application, I would like to export static JSON data to an Amazon S3 bucket, which can later be retrieved and parsed by an AJAX call from said application.
The JSON will be generated from the app's database.
My design requirements will probably only need something like a rake task to initiate the export to S3. Every time the rake task is run, it will overwrite the files. Preferably the file name will correspond to the ID of the record the JSON data is generated from.
Does anyone have any experience with this and can point me in the right direction?
This can be accomplished with the aws-sdk gem.
Your task can be broken into two basic steps: 1) generate a temporary local file with your JSON data, and 2) upload it to S3. A very basic, procedural example:
require 'aws-sdk'

# generate local file
record = Record.find(1)
file_name = "my-json-data-#{record.id}"
local_file_path = "/tmp/#{file_name}"

File.open(local_file_path, 'w') do |file|
  file.write(record.to_json)
end

# upload to S3
s3 = AWS::S3.new(
  :access_key_id => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY')
bucket = s3.buckets['my-s3-bucket-key']
object = bucket.objects[file_name]
object.write(Pathname.new(local_file_path))
Check out the S3Object docs for more info.
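Since you mentioned a rake task and file names keyed to record IDs, here is a minimal sketch wrapping the same calls (Record is the placeholder model from the example above):

# lib/tasks/json_export.rake (a sketch, not a drop-in implementation)
namespace :json do
  desc 'Export each record as JSON to S3'
  task :export => :environment do
    s3 = AWS::S3.new(
      :access_key_id => 'YOUR_ACCESS_KEY_ID',
      :secret_access_key => 'YOUR_SECRET_ACCESS_KEY')
    bucket = s3.buckets['my-s3-bucket-key']

    Record.find_each do |record|
      # the object key is derived from the record id, so re-running the task
      # overwrites the previously exported files
      bucket.objects["my-json-data-#{record.id}"].write(record.to_json)
    end
  end
end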