I'm writing a simple app that tracks how many lines of code there are in my Rails application, so I can see how many lines I write per hour. At the moment I'm using a crontab entry that runs every 10 minutes and appends the line count to a file.
0,10,20,30,40,50 * * * * cd bimble ; find . \( -name '*.rb' -o -name '*.erb' \) | xargs wc -l | tail -1 | sed 's/total//' >>linesOfCode.txt
Rather than writing to a file, I would like to send the numberOfLines value to a Rails app. What would be the simplest way to do this?
Do I have to write an API, something like this: http://squarism.com/2011/04/01/how-to-write-a-ruby-rails-3-rest-api/ — or is there a better way to do it?
Thanks for your help!
A full API sounds like overkill. You can have a controller action that responds to GET, pulls the value you want out of the params hash, and does whatever you need with it inside Rails. Something like:
def getline
  lines = params[:lines]
  # do whatever with the lines value here: save it to a model, etc.
end
Then you can simply have your cron job send that value via wget, along the lines of:
wget "http://rails_url/controller/getline?lines=`cat linesOfCode.txt`" &> /dev/null
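To flesh that out, here is a sketch of the receiving side (the LineCount model and the controller wiring are hypothetical names, not from your app). It's worth validating the param strictly, since the output of the wc pipeline can carry whitespace or junk:

```ruby
# Sketch only; LineCount and the getline route are hypothetical names.

# Strict integer parsing: reject anything that isn't a plain number
# so junk from the shell pipeline never reaches the database.
def parse_line_count(raw)
  Integer(raw.to_s.strip)
rescue ArgumentError
  nil
end

# In the controller, something like:
#   def getline
#     lines = parse_line_count(params[:lines])
#     LineCount.create!(count: lines, recorded_at: Time.now) if lines
#     head :ok
#   end

parse_line_count("  1234\n")  # => 1234
parse_line_count("total")     # => nil
```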
Related
I have a KeePass database with up to 100 entries containing URLs. Many of the entries have URLs that look like this:
https://banking.consorsfinanz.de/onlinebanking-cfg/loginFormAction.do
Now I want to "shorten/clean up" these URLs to this:
https://banking.consorsfinanz.de/
I could export the database to CSV and re-import it, but that forces me to create a new database, which I'm trying to avoid. Is there maybe another way? If not, can somebody write a line of code, preferably one that runs on Windows (if not, Linux is also possible), to fix this in the CSV?
Something like:
Search for the third occurrence of / and delete everything after it, OR
Search for * //*/ and delete everything afterwards
could work, or am I wrong?
Thank you!
Awk
awk 'BEGIN{FS=OFS="/"}{print $1,$2,$3,""}'
example:
$ awk 'BEGIN{FS=OFS="/"}{print $1,$2,$3,""}' <<< "https://domain.name/foo/bar/blah/whatever"
https://domain.name/
Sed
sed 's#\(https://[^/]*/\).*#\1#'
example:
$ sed 's#\(https://[^/]*/\).*#\1#' <<<"https://domain.name/foo/bar/blah/whatever"
https://domain.name/
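If you end up post-processing the exported CSV with a script instead, the same truncation is easy with Ruby's standard-library URI (a sketch, assuming the URL column has already been isolated):

```ruby
require 'uri'

# Rebuild the URL from just its scheme and host, dropping the path.
def shorten(url)
  u = URI.parse(url)
  "#{u.scheme}://#{u.host}/"
end

shorten("https://banking.consorsfinanz.de/onlinebanking-cfg/loginFormAction.do")
# => "https://banking.consorsfinanz.de/"
```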
I have a Rails webapp full of students with test scores. Each student has exactly one test score.
I want the following functionality:
1.) The user enters an arbitrary test score into the website and presses "enter"
2.) "Some weird magic where data is passed from Rails database to bash script"
3.) The following bash script is run:
./tool INPUT file.txt
where:
INPUT = the arbitrary test score
file.txt = a list of all student test scores in the database
4.) "More weird magic where output from the bash script is sent back up to a rails view and made displayable on the webpage"
And that's it.
I have no idea how to do the weird magic parts.
My attempt at a solution:
In the rails dbconsole, I can do this:
SELECT score FROM students;
which gives me a list of all the test scores (which satisfies the "file.txt" argument to the bash script).
But I still don't know how my bash script is supposed to gain access to that data.
Is my controller supposed to pass the data down to the bash script? Or is my model supposed to? And what's the syntax for doing so?
I know I can run a bash script from the controller like this:
system("./tool")
But, unfortunately, I still need to pass the arguments to my script, and I don't see how I can do that...
You can just use the built-in Ruby tools for running shell commands:
https://ruby-doc.org/core-2.3.1/Kernel.html#method-i-60
For example, in one of my systems I need to get image orientation:
exif_orientation = `exiftool -Orientation -S "#{image_path}"`.to_s.chomp
Judging from my use of .to_s, running the command may sometimes return nil, and I don't want an error when trying to chomp nil. Normal output includes a trailing line ending, which I feed to chomp.
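Putting the pieces together for your case, the "weird magic" can be sketched like this (./tool is the name from the question; everything else, including the keyword argument, is illustrative — in a real controller the scores would come from the Student model, e.g. Student.pluck(:score)):

```ruby
require 'tempfile'
require 'shellwords'

# Write the scores to a temp file, run the external tool with the
# user's input and that file as arguments, and capture its stdout
# (which the controller can then hand to the view).
def run_tool(input_score, scores, tool: './tool')
  Tempfile.create('scores') do |f|
    f.puts(scores)
    f.flush
    # Shellwords.escape guards against shell injection from user input.
    `#{tool} #{Shellwords.escape(input_score.to_s)} #{Shellwords.escape(f.path)}`
  end
end
```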
I need to go through every folder and file in PHP to find some specific keywords, in this case translation calls such as $this-> and $translator->.
I need to collect those results and put them into a new file.
Here is what I tried before, using Ruby:
this = File.readlines("folder_path.php")
# To get any line containing $this->, should I use grep? I tried grep before, but it didn't give the result I need.
that = File.open("new_file.txt", "w")
that << this
that.close
I hope that isn't confusing. Thanks.
Just use grep:
grep -Rn --include='*.php' '$this->' . > result.txt
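Since you started in Ruby, here is a sketch closer to your original attempt (the output file name and keywords are from the question; the helper name is mine). It walks the tree and writes grep-style path:line:text entries:

```ruby
# Walk every .php file under root and collect grep-style
# "path:line_number:line" entries for lines containing any keyword.
def scan_php(root, keywords)
  results = []
  Dir.glob(File.join(root, '**', '*.php')).sort.each do |path|
    File.readlines(path).each_with_index do |line, i|
      results << "#{path}:#{i + 1}:#{line}" if keywords.any? { |k| line.include?(k) }
    end
  end
  results
end

File.write('result.txt', scan_php('.', ['$this->', '$translator->']).join)
```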
I have a set of tags from which I need to extract some data. I know this might be simple, but I'm not able to get at the part I need. The tag is shown below.
<Response><Result>Success</Result></Response>
I want to extract whatever comes between the tags. In this case, 'Success'.
I tried using the grep command, but couldn't get it done. Any help would be appreciated.
echo "<Response><Result>Success</Result></Response>" | perl -pe 's/.*>([^<]+)<.*/$1/'
If the data is saved in a file:
perl -pe 's/.*>([^<]+)<.*/$1/' infile
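The same extraction is easy in Ruby too; a sketch (for anything beyond a one-off snippet this small, a real XML parser such as REXML or Nokogiri is safer than a regex):

```ruby
# Pull out the text between <Result> and </Result>.
def result_text(xml)
  xml[%r{<Result>([^<]*)</Result>}, 1]
end

result_text("<Response><Result>Success</Result></Response>")
# => "Success"
```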
How do I get a complete list of all the URLs that my Rails application could generate?
I don't want the routes that I get from rake routes; instead I want the actual URLs corresponding to all the dynamically generated pages in my application...
Is this even possible?
(Background: I'm doing this because I want a complete list of URLs for some load testing I want to do, which has to cover the entire breadth of the application)
I was able to produce useful output with the following command:
$ wget --spider -r -nv -nd -np http://localhost:3209/ 2>&1 | ack -o '(?<=URL:)\S+'
http://localhost:3209/
http://localhost:3209/robots.txt
http://localhost:3209/agenda/2008/08
http://localhost:3209/agenda/2008/10
http://localhost:3209/agenda/2008/09/01
http://localhost:3209/agenda/2008/09/02
http://localhost:3209/agenda/2008/09/03
^C
A quick reference of the wget arguments:
# --spider don't download anything.
# -r, --recursive specify recursive download.
# -nv, --no-verbose turn off verboseness, without being quiet.
# -nd, --no-directories don't create directories.
# -np, --no-parent don't ascend to the parent directory.
About ack
ack is like grep but uses Perl regexps, which are more complete/powerful.
-o tells ack to output only the matched substring, and the pattern I used looks for any non-space characters preceded by 'URL:'.
You could pretty quickly hack together a program that grabs the output of rake routes and parses it to put together a list of the URLs.
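A sketch of that idea: feed the rake routes output into a small parser and keep only the GET paths without dynamic segments (paths containing :id and the like need real records from the database, which a routes listing can't provide, so they are skipped here):

```ruby
# Given `rake routes` output, return the static GET paths.
def static_get_paths(routes_output)
  routes_output.lines.filter_map do |line|
    cols = line.split
    verb = cols.find { |c| %w[GET POST PUT PATCH DELETE].include?(c) }
    path = cols.find { |c| c.start_with?('/') }
    next unless verb == 'GET' && path
    path = path.sub('(.:format)', '')
    path unless path.include?(':')
  end
end

# Usage (from the app root): static_get_paths(`rake routes`)
```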
What I have, typically, done for load testing is to use a tool like WebLOAD and script several different types of user sessions (or different routes users can take). Then I create a mix of user sessions and run them through the website to get something close to an accurate picture of how the site might run.
Typically I will also do this on a total of 4 different machines running about 80 concurrent user sessions to realistically simulate what will be happening through the application. This also makes sure I don't spend overly much time optimizing infrequently visited pages and can, instead, concentrate on overall application performance along the critical paths.
Check out the Spider Integration Tests written by Courtnay Gasking:
http://pronetos.googlecode.com/svn/trunk/vendor/plugins/spider_test/doc/classes/Caboose/SpiderIntegrator.html