Rails detect changes to files programmatically - ruby-on-rails

I would like to write a method that programmatically detects whether any of the files in my Rails app have been changed. Is it possible to do something like an MD5 of the whole app and store that in a session variable?
This is mostly for having some fun with cache manifest. I already have a dynamically generated cache and it works well in production. But in my dev environment, I would like the id of that cache to update whenever I change anything in the app directory (as opposed to every 10 seconds, which is how I have it set up right now).
Update
File.ctime(".") would be perfect, except that "." is not marked as having changed when deeper directory files have changed.
Does it make sense to iterate through all directories in "." and add together the ctimes for each?

Have you considered using Guard?
You can programmatically do anything whenever a file in your project changes.
There is a nice RailsCast about it.
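For example, a minimal Guardfile sketch, assuming the guard-shell plugin is installed (the watched pattern and the command are just illustrations):
# Guardfile — minimal sketch, requires the guard-shell gem
guard :shell do
  # run an arbitrary command whenever anything under app/ changes
  watch(%r{^app/.+}) { |m| puts "#{m[0]} changed at #{Time.now}" }
end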

There is a simple ruby gem called filewatcher. This is the most advanced example:
require 'filewatcher'

FileWatcher.new(["README.rdoc"]).watch() do |filename, event|
  if (event == :changed)
    puts "File updated: " + filename
  end
  if (event == :delete)
    puts "File deleted: " + filename
  end
  if (event == :new)
    puts "New file: " + filename
  end
end

File.ctime is the key. Iterate through the app's directories and create a unique id from the sum of their ctimes:
cache_id = 0
ignore_files = ['.', '..', 'log']

Dir.glob('./**/*') do |this_file|
  # skip anything in the ignore list
  next if ignore_files.include?(this_file)
  # a directory's ctime changes when files inside it are added, removed or renamed
  cache_id += File.ctime(this_file).to_i if File.directory?(this_file)
end
Works like a charm; the page only re-caches when it needs to, even in development.
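As a rough sketch of where that id goes (the action and helper names here are assumptions, not from the original post), the snippet above can be wrapped in a helper and used by the dynamically generated manifest action:
# Hedged sketch — compute_cache_id is a hypothetical helper wrapping the loop above
def manifest
  @cache_id = compute_cache_id
  render layout: false, content_type: 'text/cache-manifest'
end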

Related

Rails FTP OPEN CSV

I have the following code to connect my rails app to my FTP. This works great. However, I want to use open-uri to open the csv file so I can parse it. Any ideas how to do this? I think it's an easy thing to do but I'm missing something.
require 'net/ftp'
ftp = Net::FTP.new
ftp.connect("xxx.xxx.xx.xxx",21)
ftp.login("xxxxx","xxxx")
ftp.chdir("/")
ftp.passive = true
puts ftp.list("TEST.csv")
You'll need to use #gettextfile.
A) Get the file to a local temporary file and read its content
# Creating a tmp file can be done differently as well.
# It may also be omitted, in which case `gettextfile`
# will create a file in the current directory.
require 'tmpdir'

Dir::Tmpname.create(['TEST', '.csv']) do |file_name|
  ftp.gettextfile('TEST.csv', file_name)
  content = File.read(file_name)
end
B) Pass a block to gettextfile and get the content one line at a time
content = ''
ftp.gettextfile('TEST.csv') do |line|
  content << line
end
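Either way, once the CSV text is in content (or in the temp file), parsing it is just the standard library. A minimal sketch, assuming the first row is a header row:
require 'csv'

rows = CSV.parse(content, headers: true)
rows.each do |row|
  puts row.to_h.inspect
end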

change path of screenshot taken by calabash in ios simulator on scenario failure using ruby

I am using Calabash to test an iOS native app. On scenario failure, Calabash takes a screenshot, names it screenshot_0, and saves it in my project directory.
I want to know how I can change the path of the screenshot and how I can change the name of the file.
I used the following:
dir_path = "/Users/bmw/Desktop/Calabash/screenshots"

unless Dir.exist?(dir_path)
  Dir.mkdir(dir_path, 0777)
  puts "=========Directory is created at #{dir_path}"
else
  puts "=========Directory already exists at #{dir_path}"
end

# Run after each scenario
After do |scenario|
  # Check: did the scenario fail?
  if scenario.failed?
    time = Time.now.strftime('%Y_%m_%d_%H_%M_%S_')
    name_of_scenario = time + scenario.name.gsub(/\s+/, "_").gsub("/", "_")
    puts "Name of snapshot is #{name_of_scenario}"
    file_path = File.expand_path(dir_path) + '/' + name_of_scenario + '.png'
    page.driver.simulator.save_screenshot file_path
    puts "Snapshot is taken"
    puts "#===========================================================#"
    puts "Scenario:: #{scenario.name}"
    puts "#===========================================================#"
  end
end
I had seen page.driver.browser.save_screenshot somewhere, replaced browser with simulator, and that did not work.
Is there any way to change the location where the iOS simulator screenshots are saved without touching failure_helpers?
Calabash exposes an environment variable named SCREENSHOT_PATH that you can use to set the path where screenshots are saved.
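For example, one way to set it when launching the tests (the directory shown is just an illustration; note the trailing slash, since the value is used as a raw prefix):
SCREENSHOT_PATH=/tmp/calabash_screenshots/ bundle exec cucumber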
As for the file name, you might want to try and use the screenshot API. Reading your comment you seem to have tried it already, but I think you might not have used the proper signature.
Looking at the source for screenshot we see that it's defined like this:
def screenshot(options={:prefix => nil, :name => nil})
...
As you can see, it's expecting an options hash, so what you should try is:
screenshot({:name => name_of_scenario })
Also note that the documentation says that the use of screenshot_embed is preferred to screenshot.
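Putting it together, the After hook could look roughly like this. This is a sketch only: dir_path and name_of_scenario are the variables from your own snippet, and screenshot_embed is assumed to accept the same :prefix/:name options as screenshot:
After do |scenario|
  if scenario.failed?
    name_of_scenario = Time.now.strftime('%Y_%m_%d_%H_%M_%S_') +
                       scenario.name.gsub(/\s+/, "_").gsub("/", "_")
    # prefix needs a trailing slash because it is prepended to the file name
    screenshot_embed({:prefix => dir_path + '/', :name => name_of_scenario})
  end
end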

Carrierwave & Zipfiles: Using an extracted file as a version

Something I'm not getting about the version process...
I have a zip file with a file inside, and I want to upload the file as a "version" of the zip:
Uploader:
version :specificFile do
  process :extract_file
end

def extract_file
  file = nil
  Zip::ZipFile.open(current_path) do |zip_file|
    file = zip_file.select { |f| f.name.match(/specificFile/) }.first
    zip_file.extract(file, "tmp/" + file.name.gsub("/", "-")) { true }
  end
  File.open("tmp/" + file.name.gsub("/", "-"))
end
Usage:
=link_to "Specific File", instance.uploader.specificFile.url
Only this just nets me two copies of the zip. Clearly, there's something I'm missing about how version / process works, and I haven't been able to find documentation that actually explains the magic.
So how do I do this, and what am I missing?
This provided the "why", although it took a bit to understand:
How do you create a new file in a CarrierWave process?
To rephrase, when you go to create a version, carrierwave makes a copy of the file and then passes the process the file path. When the process exits, carrierwave will upload the contents of that path - not the file the process returns, which is what I thought was going on.
Working code:
version :specificFile do
  process :extract_file

  def full_filename(for_file = model.logo.file)
    "SpecificFile.ext"
  end
end

def extract_file
  file = nil
  Zip::ZipFile.open(current_path) do |zip_file|
    file = zip_file.select { |f| f.name.match(/specificFile/) }.first
    zip_file.extract(file, "tmp/" + file.name.gsub("/", "-")) { true }
  end
  File.delete(current_path)
  FileUtils.cp("tmp/" + file.name.gsub("/", "-"), current_path)
end
So, to make what I want to happen, happen, I:
Tell carrierwave to use a particular filename. I'm using a hardcoded value but you should be able to use whatever you want.
Overwrite the contents of current_path with the contents you want under the version name. In my case, I can't just overwrite the zip while I'm "in it" (I think), so I make a copy of the file I care about and overwrite the zip via File and FileUtils.
PS - It would be nice to avoid the duplication of the zip, but it doesn't look like you can tell carrierwave to skip the duplication.

Ruby recurse directory

I am trying to recurse a directory and all its subdirectories. I don't want to use Find or any other way except this one:
task :locate do
  Dir.chdir(Dir.pwd + "/public/servers_info/config/deploy/")
  puts "Current Directory is: " + Dir.pwd
  dir = Dir.pwd

  def get_information(dir)
    Dir.foreach(".") do |f|
      next if f == '.' or f == '..'
      if File.directory? f
        puts f
        #puts Dir.pwd+"/"+f
        get_information(Dir.pwd + "/" + f)
      else
        puts "Not Directory"
      end
    end
  end

  get_information(dir)
end
I am pretty sure that it should work, I just don't know why it gets stuck in the first directory! It enters the base directory, checks whether the file is a directory or not, and then runs the SAME function again. But it doesn't! It gets stuck on the first folder and I get an error! Any help?
Your code is always looking at the "current" (.) directory. Your get_information method takes a dir parameter, which you never use.
Since you never use that parameter, you never change directories.
What you're trying to do is easier with Dir.glob, but if you're wedded to your solution, you'll need to change Dir.foreach(".") to something like Dir.foreach(dir).
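For instance, a minimal corrected sketch of the same method, keeping the question's structure but actually using the parameter and building full paths:
def get_information(dir)
  Dir.foreach(dir) do |f|
    next if f == '.' || f == '..'
    path = File.join(dir, f)
    if File.directory?(path)
      puts path
      get_information(path)   # descend into the subdirectory
    else
      puts "Not Directory"
    end
  end
end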
Edited to add: If all you want is to print out a list of subdirectories, I would do
puts Dir.glob('*/**').select { |f| File.directory? f }
This includes only directories. If you want pretty close to the exact output of your existing code, I would do something like:
puts Dir.glob('*/**').map { |f| File.directory?(f) ? f : "Not a Directory" }
Check out the docs for Dir.glob.

Generating a CSV and uploading it to S3 when finished in a background job

I'm providing users with the ability to download an extremely large amount of data via CSV. To do this, I'm using Sidekiq and putting the task off into a background job once they've initiated it. What I've done in the background job is generate a csv containing all of the proper data, storing it in /tmp and then call save! on my model, passing the location of the file to the paperclip attribute which then goes off and is stored in S3.
All of this is working perfectly fine locally. My problem now lies with Heroku and its ability to store files for only a short duration, depending on which dyno you're on. My background job is unable to find the tmp file that gets saved because of how Heroku deals with these files. I guess I'm searching for a better way to do this. If there's some way that everything can be done in memory, that would be awesome. The only problem is that paperclip expects an actual file object as an attribute when you're saving the model. Here's what my background job looks like:
class CsvWorker
  include Sidekiq::Worker

  def perform(report_id)
    puts "Starting the jobz!"
    report = Report.find(report_id)
    items = query_ranged_downloads(report.start_date, report.end_date)
    csv = compile_csv(items)
    update_report(report.id, csv)
  end

  def update_report(report_id, csv)
    report = Report.find(report_id)
    report.update_attributes(csv: csv, status: true)
    report.save!
  end

  def compile_csv(items)
    clean_items = items.compact
    path = File.new("#{Rails.root}/tmp/uploads/downloads_by_title_#{Process.pid}.csv", "w")
    csv_string = CSV.open(path, "w") do |csv|
      csv << ["Item Name", "Parent", "Download Count"]
      clean_items.each do |row|
        if !row.item.nil? && !row.item.parent.nil?
          csv << [
            row.item.name,
            row.item.parent.name,
            row.download_count
          ]
        end
      end
    end
    return path
  end
end
I've omitted the query method for readability's sake.
I don't think Heroku's temporary file storage is the problem here. The warnings around that mostly center around the facts that a) dynos are ephemeral, so anything you write can and will disappear without notice; and b) dynos are interchangeable, so the presence of inter-request tempfiles is a matter of luck when you have more than one web dyno running. However, in no situation do temporary files just vanish while your worker is running.
One thing I notice is that you're actually creating two temporary files with the same name:
> path = File.new("/tmp/filename", "w")
=> #<File:/tmp/filename>
> path.fileno
=> 3
> CSV.open(path, "w") do |csv| csv << %w(foo bar baz); puts csv.fileno end
4
=> nil
You could change the path = line to just set the filename (instead of opening it for writing), and then make update_report open the filename for reading. I haven't dug into what Paperclip does when you give it an empty, already-overwritten, opened-for-writing file handle, but changing that flow may well fix the issue.
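A rough sketch of that first option, keeping the names from the worker above (treat it as a starting point, not a drop-in fix):
# Sketch: build only the path string here, so there is no dangling write handle.
def compile_csv(items)
  path = "#{Rails.root}/tmp/uploads/downloads_by_title_#{Process.pid}.csv"
  CSV.open(path, "w") do |csv|
    csv << ["Item Name", "Parent", "Download Count"]
    items.compact.each do |row|
      next if row.item.nil? || row.item.parent.nil?
      csv << [row.item.name, row.item.parent.name, row.download_count]
    end
  end
  path
end

# Sketch: open the finished file for reading before handing it to Paperclip.
def update_report(report_id, csv_path)
  report = Report.find(report_id)
  File.open(csv_path) do |file|
    report.update_attributes(csv: file, status: true)
  end
end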
Alternately, you could do this in memory instead: generate the CSV as a string and give it to Paperclip as a StringIO. (Paperclip supports certain non-file objects, including StringIOs, using e.g. Paperclip::StringioAdapter.) Try something like:
# returns a CSV as a string
def compile_csv(items)
  CSV.generate do |csv|
    # ...
  end
end

def update_report(report_id, csv)
  report = Report.find(report_id)
  report.update_attributes(csv: StringIO.new(csv), status: true)
  report.save!
end
