Interactive prompt with thor - ruby-on-rails

I want to ask the user for their flickr_id, flickr_apikey and that sort of thing, but I'd be happiest doing it inside my install command so the invocation doesn't end up as one long, heavy line full of arguments.
So, something like:
$ thor PhotoonRails:install
We're about to install your system.. blaa, blaa, blaa...
We have to know your Flickr ID, get it here http://idgettr.com/
Flickr ID: {here you should type your id}
We also have to know your Flickr API key, make one here ...
API Key: {here you should type your key}
and so on. Do you get the idea, and can it be done?

Indeed it can!
You are looking for ask.
An example:
class PhotoonRails < Thor
  desc "install", "install my cool stuff"
  def install
    say("We're about to install your system.. blaa, blaa, blaa... We have to know your Flickr ID, get it here http://idgettr.com")
    flickr_id = ask("Flickr ID: ")

    say("We also have to know your Flickr API key, make one here ...")
    flickr_api_key = ask("API Key: ")

    # validate flickr creds
    # do cool stuff

    say("Complete!", Thor::Shell::Color::GREEN)
  end
end

It's also possible to pass the color as a symbol:
say "Caution!", :yellow
ask 'Agreed?', :bold
# Restrict the accepted answers:
ask "We have noticed.", :green, limited_to: ['proceed', 'exit']
# Provide a default value (the :blue is optional):
ask 'Type app name', :blue, default: 'blog'
The full list of colors available in Thor is here: http://www.rubydoc.info/github/wycats/thor/Thor/Shell/Color
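For instance, combining those options inside a command might look like the sketch below (the prompts, choices and defaults are just placeholders, not part of the original answer):

class PhotoonRails < Thor
  desc "install", "install my cool stuff"
  def install
    # Only accept one of the listed answers; Thor keeps asking otherwise
    mode = ask("Install mode:", :green, limited_to: ['basic', 'full'])

    # Fall back to 'blog' if the user just presses Enter
    app_name = ask("Type app name:", :blue, default: 'blog')

    say("Installing #{app_name} (#{mode} mode)...", :yellow)
  end
end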

Related

Rails: Pathname issue with WSL

Hey, we are 3 students and we all use the same rails db:seed. Our project is git pulled and coordinated, but...
One of us uses Linux, and rails db:seed works for him.
One uses Mac, and rails db:seed works for him too.
I use WSL, and it doesn't work!
I've tried both Windows-style and WSL-style paths.
Thanks if anyone can guide me!
db/seeds.rb
# This file should contain all the record creation needed to seed the database with its default values.
# The data can then be loaded with the bin/rails db:seed command (or created alongside the database with db:setup).
#
# Examples:
#
# movies = Movie.create([{ name: 'Star Wars' }, { name: 'Lord of the Rings' }])
# Character.create(name: 'Luke', movie: movies.first)
require 'faker'

RealEstate.destroy_all
User.destroy_all
Category.destroy_all

Category.create(title: "House")
Category.create(title: "Flat")

10.times do
  User.create(email: Faker::Internet.email, password: Faker::Internet.password)
end

30.times do
  re = RealEstate.create(
    title: Faker::Space.galaxy,
    description: Faker::Lorem.paragraph_by_chars(number: 256),
    address: Faker::Address.full_address,
    location: Faker::Address.city,
    price: Faker::Number.number(digits: 8),
    user: User.all.sample,
    category: Category.all.sample
  )
  re.images.attach(io: File.open(ENV['SAMPLE_IMAGES']), filename: 'sample_image')
end

puts "%" * 50
puts " Database seeded!"
puts "%" * 50
Resolved!
I actually went to the folder where the file was stored and simply typed "pwd".
So, in my case the pathname was: '/home/pedrofromperu/next/images/indian.jpg'
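To tie that back to the seed file, ENV['SAMPLE_IMAGES'] just needs to hold that Linux-style path; a quick sanity check from the Rails console (using the path from my case) might be:

# Use the Linux-style path that pwd reported under WSL, not a Windows-style one.
ENV['SAMPLE_IMAGES'] = '/home/pedrofromperu/next/images/indian.jpg'
File.exist?(ENV['SAMPLE_IMAGES'])  # => true once the path is correct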
This issue you are experiencing happens to lots of people who switch between Unix-like and Windows operating systems. Paths in Windows are written using '\' instead of '/'. This is particularly confusing when using WSL and PowerShell in Windows Terminal - you have to keep track of which shell environment you are using. Congrats on using 'print working directory', pwd, to solve the problem.
One thing you could do if you change environments a lot (and this is just one approach) is use the OS gem, like so:
require 'os'
OS.windows? # returns true or false.
You could then either provide different paths or get fancy and replace the characters in your string.
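For example, a small sketch along those lines (both paths below are placeholders to adapt):

require 'os'

# Pick a host-appropriate location for the seed image; adjust both paths to your setup.
sample_image_path =
  if OS.windows?
    'C:/Users/pedro/next/images/indian.jpg'
  else
    '/home/pedrofromperu/next/images/indian.jpg'
  end

ENV['SAMPLE_IMAGES'] ||= sample_image_path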

Rails+resque background job import not adding anything to the database

I have an issue with importing a lot of records from a user provided excel file into a database. The logic for this is working fine, and I’m using ActiveRecord-import to cut down on the number of database calls. However, when a file is too large, the processing can take too long and Heroku will return a timeout. Solution: Resque and moving the processing to a background job.
So far, so good. I’ve needed to add CarrierWave to upload the files to S3 because I can’t just hold the file in memory for the background job. The upload portion is also working fine, I created a model for them and am passing the IDs through to the queued job to retrieve the file later as I understand I can’t pass a whole ActiveRecord object through to the job.
I’ve installed Resque and Redis locally, and everything seems to be setup correctly in that regard. I can see the jobs I’m creating being queued and then run without failing. The job seems to run fine, but no records are added to the database. If I run the code from my job line by line in the console, the records are added to the database as I would expect. But when the queued jobs I’m creating run, nothing happens.
I can’t quite work out where the problem might be.
Here’s my upload controller’s create action:
def create
  @upload = Upload.new(upload_params)
  if @upload.save
    Resque.enqueue(ExcelImportJob, @upload.id)
    flash[:info] = 'File uploaded. Data will be processed and added to the database.'
    redirect_to root_path
  else
    flash[:warning] = 'Upload failed. Please try again.'
    render :new
  end
end
This is a simplified version of the job with fewer sheet columns for clarity:
class ExcelImportJob < ApplicationJob
  @queue = :default

  def perform(upload_id)
    file = Upload.find(upload_id).file.file.file
    data = parse_excel(file)
    if header_matches? data
      # Create a database entry for each row, ignoring the first header row,
      # importing in batches with activerecord-import
      sales = []
      data.drop(1).each_with_index do |row, index|
        sales << Sale.new(row)
        if index % 2500 == 0
          Sale.import sales
          sales = []
        end
      end
      Sale.import sales
    end
  end

  def parse_excel(upload)
    # Open the uploaded excel document
    doc = Creek::Book.new upload
    # Map rows to the hash keys from the database
    doc.sheets.first.rows.map do |row|
      { date: row.values[0],
        title: row.values[1],
        author: row.values[2],
        isbn: row.values[3],
        release_date: row.values[5],
        units_sold: row.values[6],
        units_refunded: row.values[7],
        net_units_sold: row.values[8],
        payment_amount: row.values[9],
        payment_amount_currency: row.values[10] }
    end
  end

  # Returns true if the header row matches the expected format
  def header_matches?(data)
    data.first == { :date => 'Date',
                    :title => 'Title',
                    :author => 'Author',
                    :isbn => 'ISBN',
                    :release_date => 'Release Date',
                    :units_sold => 'Units Sold',
                    :units_refunded => 'Units Refunded',
                    :net_units_sold => 'Net Units Sold',
                    :payment_amount => 'Payment Amount',
                    :payment_amount_currency => 'Payment Amount Currency' }
  end
end
I can probably have some improved logic anyway as right now I’m holding the whole file in memory, but that isn’t the issue I’m having – even with a small file that has only 500 or so rows, the job doesn’t add anything to the database.
Like I said my code worked fine when I wasn’t using a background job, and still works if I run it in the console. But for some reason the job is doing nothing.
This is my first time using Resque so I don’t know if I’m missing something obvious? I did create a worker and as I said it does seem to run the job. Here’s the output from Resque’s verbose formatter:
*** resque-1.27.4: Waiting for default
*** Checking default
*** Found job on default
*** resque-1.27.4: Processing default since 1508342426 [ExcelImportJob]
*** got: (Job{default} | ExcelImportJob | [15])
*** Running before_fork hooks with [(Job{default} | ExcelImportJob | [15])]
*** resque-1.27.4: Forked 63706 at 1508342426
*** Running after_fork hooks with [(Job{default} | ExcelImportJob | [15])]
*** done: (Job{default} | ExcelImportJob | [15])
In the Resque dashboard the jobs aren’t logged as failed. They get executed and I can see an increment in the ‘processed’ jobs on the stats page. But as I say the DB remains untouched. What’s going on? How can I debug the job more clearly? Is there a way to get into it with Pry?
It looks like my problem was with Resque.enqueue(ExcelImportJob, @upload.id).
I changed my code to ExcelImportJob.perform_later(@upload.id) and now my code actually runs!
I also added a resque.rake task to lib/tasks as described here: http://bica.co/2015/01/20/active-job-resque/.
That link also notes how to use rails runner to call the job without running the full Rails server and triggering the job, which is useful for debugging.
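For reference, that rake task usually boils down to loading Resque's bundled tasks plus a setup hook that boots the Rails environment, so jobs can see models like Upload and Sale; a minimal sketch (my own summary, not copied from the article):

# lib/tasks/resque.rake
require 'resque/tasks'

# Boot the Rails app before a worker starts, so ActiveRecord models are available in jobs.
task 'resque:setup' => :environment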
Strangely, I didn't quite manage to get the job to print anything to STDOUT as suggested by @hoffm, but at least it led me down a good avenue of inquiry.
I still don't fully understand why calling Resque.enqueue added my jobs to the queue and indeed seemed to run them, yet the code wasn't executed, so if someone has a better grasp and an explanation, that would be much appreciated.
TL;DR: calling perform_later rather than Resque.enqueue fixed the problem but I don't know why.
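One way to make sense of it: Resque.enqueue is built around "plain" Resque jobs, i.e. classes with a @queue ivar and a class-level perform, whereas ExcelImportJob above is an ActiveJob subclass and is meant to be enqueued through perform_later. A plain Resque job would look roughly like this sketch (illustrative only, not code from the question):

# A plain Resque job (no ActiveJob involved). Resque.enqueue(PlainExcelImportJob, upload.id)
# stores the class name and arguments, and the worker later calls this class-level perform.
# An ActiveJob subclass doesn't expose that interface directly, which is the likely mismatch here.
class PlainExcelImportJob
  @queue = :default

  def self.perform(upload_id)
    upload = Upload.find(upload_id)
    # ... same parsing/import logic as ExcelImportJob#perform ...
  end
end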

Base CRM Rails Gem legacy search?

It looks like Base CRM has upgraded their API and replaced all of their endpoints/parameters.
Previously I was able to retrieve "Won" deals using this call:
session = BaseCrm::Session.new("<LEGACY_ACCESS_TOKEN>")
session.deals.all(stage: :won, sort_by: :last_activity, sort_order: :desc, page: 1)
This query recently started ignoring my parameters, yet it continued to respond with unfiltered data (that was fun when I realized that was happening).
The new syntax is:
client = BaseCRM::Client.new(access_token: "<YOUR_PERSONAL_ACCESS_TOKEN>")
client.deals.where(organization_id: google.id, hot: true)
yet this does not work:
client.deals.where(stage_name: :won)
client.deals.where(stage_name: "Won")
client.deals.where(stage_id: 8) # specified ID found in Base Docs for "Won"
etc.
I've looked into the most recent updates to the Base CRM Gem as well as the Base CRM API Docs but have not found a solution to searching by specific deal stage.
Has anyone had any luck with the new API and this kind of query?
Is there a way to use the legacy API?
I've left a message with Base but I really need to fix this, you know, yesterday.
Thanks for your help!
ADDITIONAL INFO
The legacy API/gem responded with JSON, whereas the v2 API/gem responds with a BaseCRM::Deal object:
$ session.deals.find(123456)
#<BaseCRM::Deal
dropbox_email="dropbox#67890.deals.futuresimple.com",
name="Cool Deal Name",
owner_id=54321,
creator_id=65432,
value=2500,
estimated_close_date=nil,
last_activity_at="2016-04-21T02:29:43Z",
tags=[],
stage_id=84588,
contact_id=098765432,
custom_fields={:"Event Location"=>"New York, NY", :Source=>"Friend"},
last_stage_change_at="2016-04-21T02:08:20Z",
last_stage_change_by_id=559951,
created_at="2016-04-18T22:16:35Z",
id=123456,
updated_at="2016-04-21T02:08:20Z",
organization_id=nil,
hot=false,
currency="USD",
source_id=1466480,
loss_reason_id=nil
>
Check out stage_id. Is this a bug? According to the docs, stage_id should be an integer between 1 and 10.

Unable to retrieve excerpt from Postgres using pg_search gem

Update
First of all, there is no method "context". That was a word my brain made up at some point and stuck with; obviously I should have been calling .excerpt(). Second, I was running the command against the returned array, not against an individual instance of PgSearch::Document.
Two mistakes, but yes, the code does in fact work.
End Update
First some system info:
Ruby 1.9.3p194
Rails 3.2.13
pg_search 0.5.7
Postgres 9.2.3 (with unaccent enabled)
I'm trying to follow the progress made in this thread: (How to show excerpts from pg-search multisearch results)
Okay so assuming the use of the following query cribbed from that post:
@query = params[:query]
PgSearch.multisearch(@query).select("ts_headline(pg_search_documents.content, plainto_tsquery('english', ''' ' || unaccent('#{@query}') || ' ''' || ':*')) AS excerpt")
returns:
=> [#<PgSearch::Document id: 7, content: "1 <p>You think water moves fast? You should see ice...", searchable_id: 2, searchable_type: "Release", created_at: "2013-03-27 18:58:52", updated_at: "2013-03-27 18:58:52">]
It successfully returns some search results but they don't have the context method at all. It's as if I just called multisearch without the select method.
I'm a newbie when it comes to SQL and Postgres so I'm not exactly sure where to start in debugging that snippet. I would love some help debugging or getting an explanation of what is happening.
Also, an aside that I think is important, I want to thank anyone who works on pg_search or responds to questions like these. You make the world a better place.
You have to select the other columns you need as well.
For example
sanitized = ActionController::Base.helpers.sanitize(params[:q])
@results = PgSearch.multisearch(params[:q])
  .select(:id, :content, :searchable_id, :searchable_type)
  .select("ts_headline(pg_search_documents.content, plainto_tsquery('english', ''' ' || '#{sanitized}' || ' ''' || ':*')) AS excerpt")
This will return the id, content, searchable_id, searchable_type and excerpt.
Also notice the sanitizing. You don't want to suffer an SQL injection attack. :]
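With that select in place, and keeping the update at the top in mind, excerpt is available on each returned record rather than on the relation as a whole; a small usage sketch:

# excerpt is the ts_headline alias selected above; call it per document, not on @results itself.
@results.each do |document|
  puts document.excerpt
end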

Testing interactive thor tasks

I have the following thor command:
require 'highline'

class Import < Thor
  desc "files", "Import files into the database"
  method_option "path", :required => true, :desc => "Path to folder containing new files", :aliases => "-p", :type => :string
  def files
    require './config/environment'
    line = HighLine.new
    line.say(line.color("Identified files as Version 15 (English)", :green))
    if line.agree(line.color("Are you sure you want to import?", :yellow))
      line.say(line.color("Finished. Imported 70,114 items", :green))
    else
      line.say(line.color("Aborting...", :red))
    end
  end
end
Now, obviously, at the moment this just outputs some text to the screen. However, what I need to do is write a test for the command that checks the output is what I expect, and that lets me stub out the heavy lifting once I start hooking it in.
I've had a look at Aruba, but this doesn't appear to like interactivity for some reason, and it's not clear why.
Therefore, does anyone have any ideas on how this might be testable (with RSpec)?
Aruba is a pretty complete set of steps for testing command-line apps. If it's not working for you, it might be because Aruba defaults all file operations into tmp/aruba.
But neimOo is right about how to write the scenario with Aruba (the `When I run ... interactively` and `And I type "yes"` steps shown below).
Here is how you can do this with Aruba
Scenario: Test import
When I run `thor import` interactively
And I type "yes"
Then the stdout should contain "Finished. Imported 70,114 items"
Here you can find a lot of Aruba examples:
https://github.com/cucumber/aruba/blob/master/features/interactive.feature
And here is the implementation itself:
https://github.com/cucumber/aruba/blob/master/lib/aruba/cucumber.rb
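If you would rather drive this from RSpec than Cucumber, one option is to stub HighLine's prompt and assert on the output. This is only a sketch: the require path for the Import class is an assumption, and it presumes ./config/environment loads cleanly in the test environment (or that you stub that require out):

require 'highline'
require_relative '../lib/tasks/import' # assumption: wherever the Import Thor class is defined

RSpec.describe Import do
  it "reports the number of imported items when the user agrees" do
    # Answer "yes" to the agree prompt instead of waiting for real input
    allow_any_instance_of(HighLine).to receive(:agree).and_return(true)

    expect {
      described_class.start(["files", "--path", "/tmp/files"])
    }.to output(/Finished\. Imported 70,114 items/).to_stdout
  end
end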
