Starting with Active Storage you can now define mirrors for storing your files.
local:
  service: Disk
  root: <%= Rails.root.join("storage") %>
amazon:
  service: S3
  access_key_id: <%= Rails.application.credentials.dig(:aws, :access_key_id) %>
  secret_access_key: <%= Rails.application.credentials.dig(:aws, :secret_access_key) %>
  region: us-east-1
  bucket: mybucket
mirror:
  service: Mirror
  primary: local
  mirrors:
    - amazon
    - another_mirror
If you add a mirror at some later point, you have to take care of copying all existing files, e.g. from "local" to "amazon" or "another_mirror".
Is there a convenient method to keep the files in sync?
Or a method to run a validation that checks whether all files are available on each service?
I have a couple of solutions that might work for you, one for Rails <= 6.0 and one for Rails >= 6.1:
Firstly, you need to iterate through your ActiveStorage blobs:
ActiveStorage::Blob.all.each do |blob|
  # work with blob
end
then...
Rails <= 6.0
You will need the blob's key, checksum, and the local file on disk.
local_file = ActiveStorage::Blob.service.primary.path_for blob.key
# I'm picking the first mirror as an example,
# but you can select a specific mirror if you want
mirror = blob.service.mirrors.first
mirror.upload blob.key, File.open(local_file), checksum: blob.checksum
You may also want to skip the upload if the file already exists on the mirror:
mirror = blob.service.mirrors.first
# If the file doesn't exist on the mirror, upload it
unless mirror.exist? blob.key
  # Upload file to mirror
end
Putting it together, a rake task might look like:
# lib/tasks/active_storage.rake
namespace :active_storage do
  desc 'Ensures all files are mirrored'
  task mirror_all: [:environment] do
    # Iterate through each blob
    ActiveStorage::Blob.all.each do |blob|
      # We assume the primary storage is local
      local_file = ActiveStorage::Blob.service.primary.path_for blob.key
      # Iterate through each mirror
      blob.service.mirrors.each do |mirror|
        # If the file doesn't exist on the mirror, upload it
        mirror.upload(blob.key, File.open(local_file), checksum: blob.checksum) unless mirror.exist? blob.key
      end
    end
  end
end
You may run into a situation like @Rystraum mentioned, where you need to mirror from somewhere other than the local disk. In that case, the rake task could look like this:
# lib/tasks/active_storage.rake
namespace :active_storage do
  desc 'Ensures all files are mirrored'
  task mirror_all: [:environment] do
    # All services in our rails configuration
    all_services = [ActiveStorage::Blob.service.primary, *ActiveStorage::Blob.service.mirrors]
    # Iterate through each blob
    ActiveStorage::Blob.all.each do |blob|
      # Select services where file exists
      services = all_services.select { |service| service.exist? blob.key }
      # Skip blob if file doesn't exist anywhere
      next unless services.present?
      # Select services where file doesn't exist
      mirrors = all_services - services
      # Open the local file (if one exists)
      disk_service = services.find { |service| service.is_a? ActiveStorage::Service::DiskService }
      local_file = File.open(disk_service.path_for(blob.key)) if disk_service
      # Upload local file to mirrors (if one exists)
      mirrors.each do |mirror|
        mirror.upload blob.key, local_file, checksum: blob.checksum
      end if local_file.present?
      # If no local file exists, download a remote file and upload it to the mirrors (thanks @Rystraum)
      services.first.open blob.key, checksum: blob.checksum do |temp_file|
        mirrors.each do |mirror|
          mirror.upload blob.key, temp_file, checksum: blob.checksum
        end
      end unless local_file.present?
    end
  end
end
While the first rake task answers the OP's question, the latter is much more versatile:
It can be used with any combination of services
A DiskService is not required
Uploading via a DiskService is prioritized
It avoids extra exist? calls, since we only call it once per service per blob
Rails >= 6.1
It's super easy, just call this on each blob:
blob.mirror_later
Wrapping it up as a rake task looks like:
# lib/tasks/active_storage.rake
namespace :active_storage do
  desc 'Ensures all files are mirrored'
  task mirror_all: [:environment] do
    ActiveStorage::Blob.all.each do |blob|
      blob.mirror_later
    end
  end
end
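As for the OP's other question (validating that every file is actually available on each service), I'm not aware of a built-in check, but a rough sketch along the same lines could look like this. The task name and output format are made up; exist?, primary and mirrors are the same calls used above:
# lib/tasks/active_storage.rake (hypothetical verification task)
namespace :active_storage do
  desc 'Reports blobs that are missing from any service'
  task verify_mirrors: [:environment] do
    all_services = [ActiveStorage::Blob.service.primary, *ActiveStorage::Blob.service.mirrors]
    ActiveStorage::Blob.all.each do |blob|
      # Collect every configured service that does not have this blob's key
      missing = all_services.reject { |service| service.exist? blob.key }
      puts "#{blob.key} is missing from: #{missing.map { |s| s.class.name }.join(', ')}" if missing.any?
    end
  end
end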
(03-11-2021) On Rails > 6.1.4.1, using activestorage > 6.1.4.1, and with the following setup:
Gemfile:
gem 'azure-storage-blob', github: 'Azure/azure-storage-ruby'
config/environments/production.rb
# Store uploaded files on the local file system (see config/storage.yml for options).
config.active_storage.service = :mirror # or :microsoft, or :amazon
config/storage.yml:
amazon:
  service: S3
  access_key_id: XXX
  secret_access_key: XXX
  region: XXX
  bucket: XXX
microsoft:
  service: AzureStorage
  storage_account_name: YYY
  storage_access_key: YYY
  container: YYY
mirror:
  service: Mirror
  primary: amazon
  mirrors: [ microsoft ]
This does NOT work:
ActiveStorage::Blob.all.each do |blob|
  blob.mirror_later
end && puts("Mirroring done!")
What DID work is:
ActiveStorage::Blob.all.each do |blob|
  ActiveStorage::Blob.service.try(:mirror, blob.key, checksum: blob.checksum)
end && puts("Mirroring done!")
I'm not sure why that is; maybe future versions of Rails will support it, maybe it needs additional background job setup, or maybe it would have happened eventually (it never did for me).
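For what it's worth, mirror_later only enqueues a background job (ActiveStorage::MirrorJob in 6.1), so it has no visible effect unless an Active Job backend is actually processing the queue, which would explain the behaviour above. A console sketch that runs the same work inline, assuming the 6.1 job signature, would be:
ActiveStorage::Blob.find_each do |blob|
  # Performs the mirror copy synchronously instead of enqueueing it
  ActiveStorage::MirrorJob.perform_now(blob.key, checksum: blob.checksum)
end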
TL;DR
If you need to do mirroring for your entire storage immediately, add this rake task and execute it on your given environment with bundle exec rails active_storage:mirror_all:
lib/tasks/active_storage.rake
namespace :active_storage do
  desc 'Ensures all files are mirrored'
  task mirror_all: [:environment] do
    ActiveStorage::Blob.all.each do |blob|
      ActiveStorage::Blob.service.try(:mirror, blob.key, checksum: blob.checksum)
    end && puts("Mirroring done!")
  end
end
Optional:
Once you've mirrored all the blobs, you'll probably want to change their service names so they actually get served from the right storage:
namespace :active_storage do
  desc 'Change each blob service name to microsoft'
  task switch_to_microsoft: [:environment] do
    ActiveStorage::Blob.all.each do |blob|
      blob.service_name = 'microsoft'
      blob.save
    end && puts("All blobs will now be served from microsoft!")
  end
end
Finally, change config.active_storage.service in production.rb, or make the mirror's primary the service you want future uploads to go to.
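For example, reusing the storage.yml above, flipping the mirror so that microsoft becomes the primary (and therefore the target of future uploads) might look like this; just a sketch of the config change:
mirror:
  service: Mirror
  primary: microsoft
  mirrors: [ amazon ]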
I've worked on top of https://stackoverflow.com/a/57579839/365218 so the rake task does not assume that the file is stored locally.
I started with S3 and, due to cost concerns, decided to move the files to disk and use S3 and Azure as mirrors instead.
So my situation is that for some files my primary (disk) doesn't have the file, and my complete set of files is actually on my first mirror.
So it's two things:
Move files from S3 to disk
Add a new mirror, and keep it up to date
namespace :active_storage do
  desc "Ensures all files are mirrored"
  task mirror_all: [:environment] do
    ActiveStorage::Blob.all.each do |blob|
      source_mirror = if blob.service.primary.exist? blob.key
                        blob.service.primary
                      else
                        blob.service.mirrors.find { |m| m.exist? blob.key }
                      end
      source_mirror.open(blob.key, checksum: blob.checksum) do |file|
        blob.service.primary.upload(blob.key, file, checksum: blob.checksum) unless blob.service.primary.exist? blob.key
        blob.service.mirrors.each do |mirror|
          next if mirror == source_mirror
          mirror.upload(blob.key, file, checksum: blob.checksum) unless mirror.exist? blob.key
        end
      end
    rescue StandardError
      puts blob.key.to_s
    end
  end
end
Everything is stored according to ActiveStorage's keys, so as long as your bucket names and file names aren't changed in the transfer, you can just copy everything over to the new service. See this post for how to copy stuff over.
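If you prefer to do the copy from Ruby rather than with the provider's own tooling, a rough sketch (assuming Rails 6.1+, where ActiveStorage::Blob.services is available, and with :old_service and :new_service as placeholder names from config/storage.yml) could be:
old_service = ActiveStorage::Blob.services.fetch(:old_service)
new_service = ActiveStorage::Blob.services.fetch(:new_service)

ActiveStorage::Blob.find_each do |blob|
  # Stream each file out of the old service and into the new one under the same key
  old_service.open(blob.key, checksum: blob.checksum) do |file|
    new_service.upload(blob.key, file, checksum: blob.checksum)
  end
end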
Related
The guide says that I can save an attachment to disk to run a process on it like this:
message.video.open do |file|
system '/path/to/virus/scanner', file.path
# ...
end
My model has an attachment defined as:
has_one_attached :zip
And then in the model I have defined:
def process_zip
  zip.open do |file|
    # process the zip file
  end
end
However, I am getting an error on the zip.open call:
private method `open' called
How can I save the zip locally for processing?
As an alternative in Rails 5.2 you can do this:
def process_zip
  # Download the zip file into the temp dir
  zip_path = "#{Dir.tmpdir}/#{zip.filename}"
  File.open(zip_path, 'wb') do |file|
    file.write(zip.download)
  end
  # Zip::File comes from the rubyzip gem
  Zip::File.open(zip_path) do |zip_file|
    # process the zip file
    # ...
    puts "processing file #{zip_file}"
  end
end
That’s an edge guide (note edgeguides.rubyonrails.org in the URL); it applies to the master branch of the rails/rails repository on GitHub. The latest changes in master haven’t been included in a released version of Rails yet.
You’re likely using Rails 5.2. Use edge Rails to take advantage of ActiveStorage::Blob#open:
gem "rails", github: "rails/rails"
I'm receiving a file in the request params through a standard file input:
def create
  file = params[:file]
  upload = Upload.create(file: file, filename: "img.png")
end
However, for large uploads, I'd like to do this in a background job.
Popular background job options like Sidekiq or Resque depend on Redis to store their parameters, so I can't just pass a file object through Redis.
I could use a Tempfile, but on some platforms, such as Heroku, local storage is not reliable.
What options do I have to make this work reliably on "any" platform?
I would suggest uploading directly to a service like Amazon S3 and then processing the file as you see fit in a background job.
When the user uploads the file, you can rest assured it will be safely stored in S3. You can use a private bucket to prohibit public access. Then you can process the upload in a background task by passing the file's S3 URI and letting your background worker download the file.
I don't know what your background worker does with the file, but it goes without saying that downloading it again might not be necessary. It's stored somewhere after all.
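Schematically, the flow could look like the sketch below. The bucket name, the UploadProcessingJob class and its process call are placeholders, and the S3 calls assume the aws-sdk-s3 gem:
# In the controller: push the raw bytes to a private bucket, then enqueue the job
def create
  key = "uploads/#{SecureRandom.uuid}/#{params[:file].original_filename}"
  s3 = Aws::S3::Resource.new
  s3.bucket("my-private-bucket").object(key).put(body: params[:file].read)
  UploadProcessingJob.perform_later(key)
  head :accepted
end

# In the background job: download the object again and process it
class UploadProcessingJob < ApplicationJob
  def perform(key)
    s3 = Aws::S3::Resource.new
    data = s3.bucket("my-private-bucket").object(key).get.body.read
    process(data) # placeholder for whatever work you need to do
  end
end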
I've used the carrierwave-direct gem in the past with success. Since you're mentioning Heroku, they have a detailed guide for uploading files directly to S3.
No tempfile
It sounds like you want to either speed up image uploading or push it into the background. Here are my suggestions from another post; maybe they'll help if that's what you're looking for.
The reason I found this question is that I wanted to save a CSV file and have my background job add the info in that file to the database.
I have a solution.
Because the question is a bit unclear and I'm too lazy to post and answer my own question, I'll just post the answer here. lol
Like the other dudes said, save the file on some cloud storage service. For Amazon, you need:
# Gemfile
gem 'aws-sdk', '~> 2.0' # for storing images on AWS S3
gem 'paperclip', '~> 5.0.0' # image processor if you want to use images
You also need this. Use the same code in production.rb, but with a different bucket name.
# config/environments/development.rb
Rails.application.configure do
  config.paperclip_defaults = {
    storage: :s3,
    s3_host_name: 's3-us-west-2.amazonaws.com',
    s3_credentials: {
      bucket: 'my-bucket-development',
      s3_region: 'us-west-2',
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    }
  }
end
You also need a migration
# db/migrate/20000000000000_create_files.rb
class CreateFiles < ActiveRecord::Migration[5.0]
  def change
    create_table :files do |t|
      t.attachment :import_file
    end
  end
end
and a model
class Company < ApplicationRecord
  after_save :start_file_import

  has_attached_file :import_file, default_url: '/missing.png'
  validates_attachment_content_type :import_file, content_type: %r{\Atext\/.*\Z}

  def start_file_import
    return unless import_file_updated_at_changed?
    FileImportJob.perform_later id
  end
end
and a job
class FileImportJob < ApplicationJob
  queue_as :default

  def perform(file_id)
    file = File.find file_id
    filepath = file.import_file.url

    # fetch file
    response = HTTParty.get filepath
    # we only need the contents of the response
    csv_text = response.body
    # use the csv gem to create a csv table
    csv = CSV.parse csv_text, headers: true
    p "csv class: #{csv.class}" # => "csv class: CSV::Table"

    # loop through each table row and do something with the data
    csv.each_with_index do |row, index|
      if index == 0
        p "row class: #{row.class}" # => "row class: CSV::Row"
        p row.to_hash # hash of all the keys and values from the csv file
      end
    end
  end
end
In your controller
def create
  @file.create file_params
end

def file_params
  params.require(:file).permit(:import_file)
end
First you should save the file to storage (either local or AWS S3).
Then pass the filepath or a uuid as a parameter to the background job.
I strongly recommend not passing a Tempfile as a parameter. That keeps the object in memory, where it can get out of date and cause stale data problems.
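For instance, with Active Storage you could persist the attachment first and hand the job nothing but the record id; Import, its has_one_attached :file, and ImportJob are illustrative names, and attachment.open assumes Rails 6+:
# Controller: attach the uploaded file, then enqueue the job with just the id
def create
  import = Import.create!
  import.file.attach(params[:file])
  ImportJob.perform_later(import.id)
end

# Job: reload the record and download the file from storage only when it runs
class ImportJob < ApplicationJob
  def perform(import_id)
    import = Import.find(import_id)
    import.file.open do |tempfile|
      # process tempfile.path here
    end
  end
end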
I'm working on a project that is migrating data from a customer's old, busted DB into Rails objects to be worked on later. Similarly, I need to convert these objects into a CSV and upload it to a neutral FTP (this is to allow a coworker to build the example pages through Sugar CRM). I've created rake files to do all of this, and it was successful. Now I'm going to continue this process for each object that I create in Rails (relative to the previous DB), and, best case, I'd like these generated when I run rake generate scaffold <object>.
Here is my import rake:
desc "Import Clients from db"
task :get_busted_clients => [:environment] do
#old_clients = Busted::Client.all
#old_clients.each do |row|
#client = Client.new();
#client.client_id = row.NUMBER
#client.save
end
end
Here is my CSV convert/FTP upload rake:
desc "Exports db's to local CSV and uploads them to FTP"
task :export_clients_CSV => [:environment] do
# Required libraries for CSV read/write and NET/FTP IO #
require 'csv'
require 'net/ftp'
# Pull all Editor objects into clients for reading #
clients = Client.all
puts "Creating CSV file for <Clients> and updating column names..."
# Open a new CSV file that uses the column headers from Client #
CSV.open("clients.csv", "wb",
:write_headers => true, :headers => Client.column_names) do |csv|
puts "--Loading each entry..."
# Load all entries from Client into the CSV file row by row #
clients.each do |client|
# This line specifically puts the attributes in the rows WITH RESPECT TO#
# THE COLUMNS
csv << client.attributes.values_at(*Client.column_names)
end
puts "--Done loading each entry..."
end
puts "...Data populated. Finished bulding CSV. Closing File."
puts "------------------------"
# Upload CSV File to FTP server by requesting new FTP connection, assigning credentials
# and informing the client what file to look for and what to name it
puts "Uploading <Clients>..."
ftp = Net::FTP.new('192.168.xxx.xxx')
ftp.login(user = "user", passwd = "passwd")
ftp.puttextfile("clients.csv", "clients.csv")
ftp.quit()
puts "...Finished."
end
I ran rails generate generator get_busted and put this in my get_busted_generator.rb:
class GetBustedGenerator < Rails::Generators::NamedBase
  source_root File.expand_path('../templates', __FILE__)

  def generate_get_busted
    copy_file "getbusted.rake", "lib/tasks/#{file_name}.rake"
  end
end
After that, I got lost. I can't find anything on templating a rake file or the syntax for doing so.
Rails has been a recent endeavor, and I may be overlooking something in the design of my solution.
TL;DR: Is templating a rake file a bad thing? Are there alternative solutions? If not, what's the syntax for generating either script customized to the object (or please point me in the right direction)?
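For what it's worth, templating a rake file isn't a bad thing; it's exactly what generator templates are for. A sketch of what that could look like (the template filename and its contents are made up here, but template, file_name and class_name are standard Rails::Generators methods):
# lib/generators/get_busted/get_busted_generator.rb
class GetBustedGenerator < Rails::Generators::NamedBase
  source_root File.expand_path('../templates', __FILE__)

  def generate_get_busted
    # `template` runs the file through ERB with this generator's binding,
    # unlike `copy_file`, which copies it verbatim
    template "get_busted.rake.erb", "lib/tasks/get_busted_#{file_name.pluralize}.rake"
  end
end

# lib/generators/get_busted/templates/get_busted.rake.erb
desc "Import <%= class_name.pluralize %> from db"
task :get_busted_<%= file_name.pluralize %> => [:environment] do
  Busted::<%= class_name %>.all.each do |row|
    record = <%= class_name %>.new
    record.<%= file_name %>_id = row.NUMBER
    record.save
  end
end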
I've been knocking my head around with Heroku while trying to download a zip file containing all my receipt files.
The files are stored on Amazon S3, and it all works fine on my development machine.
I thought it had to do with Tempfile and abandoned that previous solution, since Heroku has some strict policies for its filesystem, so I used the tmp folder, but the problem doesn't seem to be there. I already tried loading directly from S3 (using open-uri) into the zip file, but that doesn't seem to work on Heroku either.
What might be wrong with my code, that Heroku doesn't load the files into the zip?
Here is my model method:
def zip_receipts(search_hash=nil)
  require 'zip/zip'
  require 'zip/zipfilesystem'

  t = File.open("#{Rails.root}/tmp/#{Digest::MD5.hexdigest(rand(12).to_s)}_#{Process.pid}", 'w')
  # t = Tempfile.new(Digest::MD5.hexdigest(rand(12).to_s))

  # Give the path of the temp file to the zip outputstream, it won't try to open it as an archive.
  Zip::ZipOutputStream.open(t.path) do |zos|
    logger.debug("search hash Zip: #{search_hash.inspect}")
    self.feed(search_hash).each do |receipt|
      begin
        require 'open-uri'
        require 'tempfile'

        # configure the filename
        filen = File.basename(receipt.receipt_file_file_name)
        ext = File.extname(filen)
        filen_noext = File.basename(receipt.receipt_file_file_name, '.*')
        filen = filen_noext + SecureRandom.hex(10) + ext
        logger.info("Info Zip - Filename: #{filen}")

        # Create a new entry in the zip file
        zos.put_next_entry(filen)
        # logger.info("Info Zip - Added entry: #{zos.inspect}")

        # Add the contents of the file, reading directly from Amazon
        tfilepath = "#{Rails.root}/tmp/#{File.basename(filen, ext)}_#{Process.pid}"
        open(tfilepath, "wb") do |file|
          file << open(receipt.authenticated_url(:original), :ssl_verify_mode => OpenSSL::SSL::VERIFY_NONE).read
        end
        zos.print IO.binread tfilepath
        # logger.info("Info Zip - Extracted from amazon: #{zos.inspect}")
      rescue Exception => e
        logger.info("exception #{e}")
      end # closes the exception begin
    end # closes receipts cycle
  end # closes zip file stream cycle

  # The temp file will be deleted some time...
  t.close
  # returns the path for send_file in the controller to act on
  t.path
end
My controller:
def download_all
  @user = User.find_by_id(params[:user_id])
  filepath = @user.zip_receipts
  # Send it using the right mime type, with a download window and a nice file name.
  send_file(filepath, type: 'application/zip', disposition: 'attachment', filename: "MyReceipts.zip")
end
I'm also including my view and routes, in case it helps anyone else trying to implement a download-all feature.
routes.rb
resources :users do
  post 'download_all'
end
my view
<%= link_to "Download receipts", user_download_all_path(user_id:user.id), method: :post %>
The problem seemed to be with the search hash and the SQL query, not the code itself. For some reason the receipts get listed but aren't downloaded, so that's an altogether different issue.
In the end I have this code for the model:
def zip_receipts(search_hash=nil)
  require 'zip/zip'
  require 'zip/zipfilesystem'

  t = File.open("#{Rails.root}/tmp/MyReceipts.zip_#{Process.pid}", "w")
  # t = Tempfile.new(Digest::MD5.hexdigest(rand(12).to_s))
  # "#{Rails.root}/tmp/RecibosOnline#{SecureRandom.hex(10)}.zip"
  puts "Zip- Receipts About to enter"

  # Give the path of the temp file to the zip outputstream, it won't try to open it as an archive.
  Zip::ZipOutputStream.open(t.path) do |zos|
    self.feed(search_hash).each do |receipt|
      begin
        require 'open-uri'
        require 'tempfile'

        filen = File.basename(receipt.receipt_file_file_name)
        ext = File.extname(filen)
        filen_noext = File.basename(receipt.receipt_file_file_name, '.*')
        filen = filen_noext + SecureRandom.hex(10) + ext
        # puts "Info Zip - Filename: #{filen}"

        # Create a new entry in the zip file
        zos.put_next_entry(filen)
        zos.print open(receipt.authenticated_url(:original), :ssl_verify_mode => OpenSSL::SSL::VERIFY_NONE).read
      rescue Exception => e
        puts "exception #{e}"
      end # closes the exception begin
    end # closes receipts cycle
  end # closes zip file stream cycle

  # The temp file will be deleted some time...
  t.close
  # returns the path for send_file in the controller to act on
  t.path
end
What is the best way to get a temporary directory with nothing in it using Ruby on Rails? I need the API to be cross-platform compatible. The stdlib tmpdir won't work.
The Dir object has a method mktmpdir which creates a temporary directory:
require 'tmpdir' # Not needed if you are using rails.

Dir.mktmpdir do |dir|
  puts "My new temp dir: #{dir}"
end
The temporary directory will be removed after execution of the block.
The Dir.tmpdir function in Ruby core (not the stdlib page you linked to) should be cross-platform.
To use this function you need to require 'tmpdir'.
A general approach I'm using now:
def in_tmpdir
  path = File.expand_path "#{Dir.tmpdir}/#{Time.now.to_i}#{rand(1000)}/"
  FileUtils.mkdir_p path
  yield path
ensure
  FileUtils.rm_rf(path) if File.exist?(path)
end
So in your code you can:
in_tmpdir do |tmpdir|
  puts "My tmp dir: #{tmpdir}"
  # work with files in the dir
end
The temporary dir will be removed automatically when your method finishes.
Ruby has Dir.mktmpdir, so just use that.
require 'tmpdir'

Dir.mktmpdir('prefix_unique_to_your_program') do |dir|
  ### your work here ###
end
See http://www.ruby-doc.org/stdlib-1.9.3/libdoc/tmpdir/rdoc/Dir.html
Or build your own using Tempfile, which is process- and thread-unique, and use it to make a quick temp dir:
require 'tempfile'
require 'fileutils'

Tempfile.open('prefix_unique_to_your_program') do |tmp|
  tmp_dir = tmp.path + "_dir"
  begin
    FileUtils.mkdir_p(tmp_dir)
    ### your work here ###
  ensure
    FileUtils.rm_rf(tmp_dir)
  end
end
See http://www.ruby-doc.org/stdlib-1.9.3/libdoc/tempfile/rdoc/Tempfile.html for optional suffix/prefix options.
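For reference, both Dir.mktmpdir and Tempfile.open accept either a string prefix or a [prefix, suffix] pair; a couple of quick examples (the printed paths are only illustrative):
require 'tmpdir'
require 'tempfile'

Dir.mktmpdir(['myapp-', '-workdir']) do |dir|
  puts dir      # e.g. /tmp/myapp-20240101-1234-abc123-workdir
end

Tempfile.open(['report-', '.csv']) do |tmp|
  puts tmp.path # e.g. /tmp/report-20240101-1234-abc123.csv
end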
require 'tmpdir' # not needed if you are loading Rails
tmp_dir = File.join(Dir::tmpdir, "my_app_#{Time.now.to_i}_#{rand(100)}")
Dir.mkdir(tmp_dir)
Works for me.
You can use Dir.mktmpdir.
Using the block form will remove the temporary directory when the block ends.
Dir.mktmpdir do |dir|
  File.open("#{dir}/foo", 'w') { |f| f.write('foo') }
end
Or if you need multiple temp directories to exist at the same time, for example
context 'when there are duplicate tasks' do
  it 'raises a DuplicateTask error' do
    begin
      tmp_dir1 = Dir.mktmpdir('foo')
      tmp_dir2 = Dir.mktmpdir('bar')
      File.new("#{tmp_dir1}/task_name", 'w+')
      File.new("#{tmp_dir2}/task_name", 'w+')
      expect { subject.filepath('task_name') }.to raise_error(TaskFinder::DuplicateTask)
    ensure
      FileUtils.remove_entry tmp_dir1
      FileUtils.remove_entry tmp_dir2
    end
  end
end
Dir.mktmpdir creates a temporary directory under Dir.tmpdir (you'll need to require 'tmpdir' to see what that evaluates to).
If you want to use your own path, Dir.mktmpdir takes an optional second argument, tmpdir, which is used when a non-nil value is given. E.g.
Dir.mktmpdir(nil, "/var/tmp") { |dir| "dir is '/var/tmp/d...'" }
I started to tackle this by hijacking Tempfile; see below.
It should clean itself up as Tempfile does, but it doesn't always do so yet.
It also doesn't yet delete the files inside the temp dir.
Anyway, I'm sharing it here; it might be useful as a starting point.
require 'tempfile'

class Tempdir < Tempfile
  require 'tmpdir'

  def initialize(basename, tmpdir = Dir::tmpdir)
    super
    p = self.path
    File.delete(p)
    Dir.mkdir(p)
  end

  def unlink # copied from tempfile.rb
    # keep this order for thread safeness
    begin
      Dir.unlink(@tmpname) if File.exist?(@tmpname)
      @@cleanlist.delete(@tmpname)
      @data = @tmpname = nil
      ObjectSpace.undefine_finalizer(self)
    rescue Errno::EACCES
      # may not be able to unlink on Windows; just ignore
    end
  end
end
This can be used the same way as Tempfile, eg:
Tempdir.new('foo')
All methods on Tempfile, and in turn on File, should work.
I've only briefly tested it, so no guarantees.
Update: gem install files, then
require "files"
dir = Files do
  file "hello.txt", "stuff"
end
See below for more examples.
Here's another solution, inspired by a few other answers. This one is suitable for inclusion in a test (e.g. rspec or spec_helper.rb). It makes a temporary dir based on the name of the including file, stores it in an instance variable so it persists for the duration of the test (but is not shared between tests), and deletes it on exit (or optionally doesn't, if you want to check its contents after the test run).
def temp_dir options = {:remove => true}
  @temp_dir ||= begin
    require 'tmpdir'
    require 'fileutils'
    called_from = File.basename caller.first.split(':').first, ".rb"
    path = File.join(Dir::tmpdir, "#{called_from}_#{Time.now.to_i}_#{rand(1000)}")
    Dir.mkdir(path)
    at_exit { FileUtils.rm_rf(path) if File.exist?(path) } if options[:remove]
    File.new path
  end
end
(You could also use Dir.mktmpdir (which has been around since Ruby 1.8.7) instead of Dir.mkdir but I find the API of that method confusing, not to mention the naming algorithm.)
Usage example (and another useful test method):
def write name, contents = "contents of #{name}"
  path = "#{temp_dir}/#{name}"
  File.open(path, "w") do |f|
    f.write contents
  end
  File.new path
end
describe "#write" do
before do
#hello = write "hello.txt"
#goodbye = write "goodbye.txt", "farewell"
end
it "uses temp_dir" do
File.dirname(#hello).should == temp_dir
File.dirname(#goodbye).should == temp_dir
end
it "writes a default value" do
File.read(#hello).should == "contents of hello.txt"
end
it "writes a given value" do
# since write returns a File instance, we can call read on it
#goodbye.read.should == "farewell"
end
end
Update: I've used this code to kickstart a gem I'm calling files which intends to make it super-easy to create directories and files for temporary (e.g. unit test) use. See https://github.com/alexch/files and https://rubygems.org/gems/files . For example:
require "files"
files = Files do                              # creates a temporary directory inside Dir.tmpdir
  file "hello.txt"                            # creates file "hello.txt" containing "contents of hello.txt"
  dir "web" do                                # creates directory "web"
    file "snippet.html",                      # creates file "web/snippet.html"...
      "<h1>Fix this!</h1>"                    # ...containing "<h1>Fix this!</h1>"
    dir "img" do                              # creates directory "web/img"
      file File.new("data/hello.png")         # containing a copy of hello.png
      file "hi.png", File.new("data/hello.png") # and a copy of hello.png named hi.png
    end
  end
end                                           # returns a string with the path to the directory
Check out the Ruby STemp library: http://ruby-stemp.rubyforge.org/rdoc/
If you do something like this:
dirname = STemp.mkdtemp("#{Dir.tmpdir}/directory-name-template-XXXXXXXX")
dirname will be a string pointing to a directory that is guaranteed not to have existed previously. You get to define what you want the directory name to start with; the X's get replaced with random characters.
EDIT: someone mentioned this didn't work for them on 1.9, so YMMV.