Profiling a JRuby Rails application outputs <unknown> elements

Environment: Linux Mint 32 bit, JRuby-1.6.5 [ i386 ], Rails 3.1.3.
I am trying to profile my rails application deployed on JRuby 1.6.5 on WEBrick (in development mode).
My JRUBY_OPTS: "-Xlaunch.inproc=false --profile.flat"
In one of my models, I introduced an explicit sleep(5) and ensured that this method is called as part of before_save hook while saving the model. Pseudo code...
class Invoice < ActiveRecord::Base
  # <some properties here...>
  before_save :delay

  private

  def delay
    sleep(5)
  end
end
The above code ensures that, just before an instance of Invoice gets persisted, the delay method is invoked automatically.
Now, when I profile the code that creates this model instance (through an rspec unit test), I get the following output:
 6.31  0.00  6.31   14  RSpec::Core::ExampleGroup.run
 6.30  0.00  6.30   14  RSpec::Core::ExampleGroup.run_examples
 6.30  0.00  6.30    1  RSpec::Core::Example#run
 6.30  0.00  6.30    1  RSpec::Core::Example#with_around_hooks
 5.58  0.00  5.58    1  <unknown>
 5.43  0.00  5.43    2  Rails::Application::RoutesReloader#reload!
 5.00  0.00  5.00    1  <unknown>
 5.00  5.00  0.00    1  Kernel#sleep
 4.87  0.00  4.87   40  ActiveSupport.execute_hook
 4.39  0.00  4.39    3  ActionDispatch::Routing::RouteSet#eval_block
 4.38  0.00  4.38    2  Rails::Application::RoutesReloader#load_paths
In the above output, why do I see those two <unknown> entries instead of Invoice#delay or something similar?
In fact, when I start my Rails server (WEBrick) with the same JRUBY_OPTS (mentioned above), all my application code frames show up as <unknown> entries in the profiler output!
Am I doing anything wrong?

Looks like you maxed out the profile methods limit.
Set -Xprofile.max.methods in JRUBY_OPTS to a big number (the default is 100000 and is never enough), e.g.
export JRUBY_OPTS="--profile.flat -Xprofile.max.methods=10000000"
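A minimal sketch of applying that for the rspec run from the question (the spec path is an assumption; keep -Xlaunch.inproc=false if you still need it):
export JRUBY_OPTS="-Xlaunch.inproc=false --profile.flat -Xprofile.max.methods=10000000"
jruby -S rspec spec/models/invoice_spec.rb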

Related

Ceph cluster down, Reason OSD Full - not starting up

Cephadm Pacific v16.2.7
Our Ceph cluster is stuck with pgs degraded and OSDs down.
Reason: the OSDs got filled up.
Things we tried:
Changed the ratio values to the maximum possible combination (not sure if done right?)
backfillfull < nearfull, nearfull < full, and full < failsafe_full
ceph-objectstore-tool - tried to delete some pgs to recover space
Tried to mount the OSD and delete PGs to recover some space, but not sure how to do it in BlueStore.
Global Recovery Event - stuck forever
ceph -s
  cluster:
    id:     a089a4b8-2691-11ec-849f-07cde9cd0b53
    health: HEALTH_WARN
            6 failed cephadm daemon(s)
            1 hosts fail cephadm check
            Reduced data availability: 362 pgs inactive, 6 pgs down, 287 pgs peering, 48 pgs stale
            Degraded data redundancy: 5756984/22174447 objects degraded (25.962%), 91 pgs degraded, 84 pgs undersized
            13 daemons have recently crashed
            3 slow ops, oldest one blocked for 31 sec, daemons [mon.raspi4-8g-18,mon.raspi4-8g-20] have slow ops.

  services:
    mon: 5 daemons, quorum raspi4-8g-20,raspi4-8g-25,raspi4-8g-18,raspi4-8g-10,raspi4-4g-23 (age 2s)
    mgr: raspi4-8g-18.slyftn(active, since 3h), standbys: raspi4-8g-12.xuuxmp, raspi4-8g-10.udbcyy
    osd: 19 osds: 15 up (since 2h), 15 in (since 2h); 6 remapped pgs

  data:
    pools:   40 pools, 636 pgs
    objects: 4.28M objects, 4.9 TiB
    usage:   6.1 TiB used, 45 TiB / 51 TiB avail
    pgs:     56.918% pgs not active
             5756984/22174447 objects degraded (25.962%)
             2914/22174447 objects misplaced (0.013%)
             253 peering
             218 active+clean
              57 undersized+degraded+peered
              25 stale+peering
              20 stale+active+clean
              19 active+recovery_wait+undersized+degraded+remapped
              10 active+recovery_wait+degraded
               7 remapped+peering
               7 activating
               6 down
               2 active+undersized+remapped
               2 stale+remapped+peering
               2 undersized+remapped+peered
               2 activating+degraded
               1 active+remapped+backfill_wait
               1 active+recovering+undersized+degraded+remapped
               1 undersized+peered
               1 active+clean+scrubbing+deep
               1 active+undersized+degraded+remapped+backfill_wait
               1 stale+active+recovery_wait+undersized+degraded+remapped

  progress:
    Global Recovery Event (2h)
      [==========..................] (remaining: 4h)
Some versions of BlueStore were susceptible to the BlueFS log growing extremely large - beyond the point of making booting the OSD impossible. This state is indicated by booting that takes very long and fails in the _replay function.
This can be fixed by:
ceph-bluestore-tool fsck --path <osd path> --bluefs_replay_recovery=true
It is advised to first check if the rescue process would be successful:
ceph-bluestore-tool fsck --path <osd path> --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true
If the above fsck is successful, the fix procedure can be applied.
Special thank you: this has been solved with the help of a dewDrive Cloud backup faculty member.

Parsing a huge file in Fortran

I am trying to parse an output file of a popular QM program, in order to extract data corresponding to two related properties: 'frequencies' and 'intensities'. An example of how the output file looks can be found below:
Max difference between off-diagonal Polar Derivs IMax= 2 JMax= 3 KMax= 13 EMax= 8.65D-04
Full mass-weighted force constant matrix:
Low frequencies --- -2.0296 -1.7337 -1.3848 -0.0005 -0.0003 0.0007
Low frequencies --- 216.4611 263.3990 368.1703
Diagonal vibrational polarizability:
18.1080784 9.1046025 11.9153848
Diagonal vibrational hyperpolarizability:
127.1032599 2.7794305 -8.7599786
Harmonic frequencies (cm**-1), IR intensities (KM/Mole), Raman scattering
activities (A**4/AMU), depolarization ratios for plane and unpolarized
incident light, reduced masses (AMU), force constants (mDyne/A),
and normal coordinates:
1 2 3
A A A
Frequencies -- 216.4611 263.3989 368.1703
Red. masses -- 3.3756 1.0427 3.0817
Frc consts -- 0.0932 0.0426 0.2461
IR Inten -- 3.6192 21.7801 0.2120
Raman Activ -- 1.0049 0.1635 0.9226
Depolar (P) -- 0.6948 0.6536 0.7460
Depolar (U) -- 0.8199 0.7905 0.8546
Atom AN X Y Z X Y Z X Y Z
1 6 0.00 0.00 0.22 0.00 0.01 0.02 0.06 0.15 -0.01
2 7 0.00 0.00 0.00 0.00 0.00 0.00 0.10 -0.02 0.00
3 6 0.00 0.00 -0.23 0.00 -0.01 0.00 0.01 -0.07 0.00
4 6 0.00 0.00 0.00 0.00 0.00 0.00 -0.08 -0.02 0.00
5 6 0.00 0.00 0.21 0.00 0.01 -0.03 -0.06 0.15 0.00
6 6 0.00 0.00 0.11 0.00 0.01 0.00 -0.01 0.17 0.00
7 7 -0.02 0.00 -0.22 0.00 0.03 0.00 -0.01 -0.26 0.00
8 1 0.10 -0.02 -0.32 0.02 -0.30 0.66 0.34 -0.39 -0.13
9 1 0.07 -0.02 -0.39 -0.05 -0.25 -0.63 -0.37 -0.40 0.12
10 1 0.00 0.00 0.39 0.01 0.01 0.07 0.18 0.22 -0.03
11 1 0.00 0.00 -0.53 0.00 -0.01 0.01 0.02 -0.15 0.01
12 1 0.00 0.00 -0.03 -0.01 0.00 -0.02 -0.18 -0.09 0.00
13 1 0.00 0.00 0.31 0.00 0.00 -0.09 -0.18 0.22 0.03
4 5 6
A A A
Frequencies -- 411.0849 501.4206 548.5728
Red. masses -- 3.4204 2.8766 6.5195
Frc consts -- 0.3406 0.4261 1.1559
IR Inten -- 4.2311 30.8234 6.3698
Raman Activ -- 0.1512 0.8402 4.2329
Depolar (P) -- 0.7404 0.1511 0.4224
Depolar (U) -- 0.8508 0.2625 0.5939
Atom AN X Y Z X Y Z X Y Z
1 6 0.00 0.00 0.20 0.00 -0.01 0.01 0.02 -0.12 -0.01
2 7 0.00 0.00 -0.21 0.00 0.00 -0.16 0.06 -0.18 0.02
3 6 0.00 0.00 -0.03 0.01 0.00 0.15 0.32 -0.01 -0.02
4 6 0.00 0.00 0.27 0.01 0.00 -0.08 0.18 0.10 0.01
5 6 0.00 0.00 -0.23 0.00 0.00 -0.03 0.11 0.19 0.00
6 6 0.00 0.00 -0.02 0.00 0.00 0.32 -0.26 0.01 -0.04
7 7 0.00 -0.01 0.01 -0.04 0.00 -0.04 -0.39 0.02 0.04
8 1 -0.01 0.05 -0.10 0.17 0.03 -0.36 -0.36 0.06 -0.08
9 1 -0.02 0.04 0.16 0.15 -0.01 -0.35 -0.30 0.02 -0.11
10 1 0.01 0.01 0.48 0.01 0.00 -0.35 0.22 -0.01 0.03
11 1 0.00 0.00 -0.12 0.01 0.00 0.23 0.31 0.13 -0.02
12 1 0.00 0.00 0.54 0.00 0.00 -0.39 -0.02 -0.03 0.05
13 1 -0.01 0.00 -0.47 0.01 0.00 -0.45 0.34 0.06 0.04
7 8 9
A A A
Frequencies -- 629.8582 652.6212 716.4846
Red. masses -- 7.0000 1.4491 2.4272
Frc consts -- 1.6362 0.3637 0.7341
IR Inten -- 9.4587 253.3389 18.8342
Raman Activ -- 3.5151 11.7363 0.2311
Depolar (P) -- 0.7397 0.2892 0.7423
Depolar (U) -- 0.8504 0.4486 0.8521
Atom AN X Y Z X Y Z X Y Z
1 6 0.24 -0.18 -0.01 -0.02 0.03 -0.04 0.00 0.00 -0.12
2 7 0.30 0.27 0.02 -0.02 0.00 0.04 0.00 0.00 0.17
3 6 0.06 0.12 -0.02 -0.03 -0.01 -0.04 0.00 0.00 -0.15
4 6 -0.23 0.23 0.01 0.02 -0.04 0.02 0.00 0.00 0.18
5 6 -0.22 -0.20 -0.01 0.02 0.00 -0.04 0.00 0.00 -0.08
6 6 -0.04 -0.15 -0.02 0.04 0.01 -0.04 0.00 0.00 0.13
7 7 -0.13 -0.07 0.06 -0.05 0.00 0.14 0.01 0.00 -0.01
8 1 0.02 -0.03 -0.20 0.30 0.13 -0.57 0.00 -0.02 0.05
9 1 0.00 -0.12 -0.26 0.29 -0.10 -0.63 -0.01 0.02 0.05
The code I'm using is:
program gau_parser
  implicit none
  integer :: ierr                  ! Error value for read statement
  integer, parameter :: iu = 20    ! input unit
  integer, parameter :: ou = 30    ! output unit
  character (len=*), parameter :: search_str = " Frequencies --"  ! this is the property I'm looking for
  ! ^===============^ there are 15 characters here. First character is blank.
  !
  ! NOTE: a typical string looks like this: " Frequencies --   411.0849   501.4206   548.5728"
  !                                           ==============    ========   ========   ========
  !                                           search_str        xx(1)      xx(2)      xx(3)
  !
  ! the string length is 73 but may be variable, though very seldom more than 80
  !
  real :: xx(3)                    ! this will be the three values associated to the above property
  character (len=80) :: text
  character (len=15) :: word

  open (unit=iu, file="dummy.log", action="read")     ! read the file I wish to parse
  open (unit=ou, file='output.log', action="write")   ! Open a file where I wish the parse results to be written to!

  do                                        ! the search is done line by line, until the end of the file
    read (iu,"(a)",iostat=ierr) text        ! read line into character variable
    if (ierr /= 0) then
      cycle                                 ! If a reading error occurs, advance to new line
    end if
    read (text,*) word                      ! read first word of line
    if (word == search_str) then            ! found search string at beginning of line
      read (text,*) word, xx                ! read the entire line
      write (30,*) word, xx                 ! write the entire line
    end if
  end do                                    ! finish the search cycle

end program gau_parser
My questions are the following:
a) The present code is compilable, but 'hangs up' upon execution. Can anyone compile their own version and see if the same is happening to them? What (user-induced) error may be causing such behavior?
b) How can I make the multiple values of 'xx' be written into a single array in sequence? That is, they should be read like this from the parsed file:
word xx(1) xx(2) xx(3)
...
junk
...
word xx(4) xx(5) xx(6)
...
more junk
...
word xx(7) xx(8) xx(9)
I know that I've stated in the program that the array is of dimension(3), but that is just for the test's sake. In reality, it must be allocatable but of unspecified size until the end of the parsed file is reached, at which point its size is determined (INQUIRE:SIZE). My idea is to print it into a scratch file, evaluate it, and then write it back into memory as an array of dimension xx(INQUIRE:SIZE). Any thoughts on the matter would be most welcome!
EDIT: After trying to debug the program, I realized that it was actually looping! I've inserted a couple of write statements to see what could be going wrong
open (unit=iu, file="dummy.log", action="read")      ! read the file I wish to parse
print *, 'file opened'
! open (unit=ou, file='output.log', action="write")  ! Open a file where I wish the parse results to be written to!
do                                        ! the search is done line by line, until the end of the file
  print *, 'Do loop has started'
  read (iu,"(a)",iostat=ierr) text        ! read line into character variable
  if (ierr /= 0) then
    write (*,*) 'Error!'
    cycle                                 ! If a reading error occurs, advance to new line
  end if
and ... voilà! My screen started to fill up with a flurry of
Error!
Do has started
messages! In essence, I'm stuck in a loop! Where have I failed?
There is a subtle error in the code. The statement
read (iu,"(a)",iostat=ierr) text ! read line into character variable
reads a line of text from the file into the variable text, and it uses the edit descriptor "(a)" which means that text is what you expect it to be. On the other hand the statement
read (text,*) word
uses list-directed input (that's what the * means), and it does not get, for example, the string " Frequencies" (with a leading blank) from the line. Helpfully, the compiler strips off the leading blank characters and word gets the string Frequencies (no leading space). This will never match the searched-for string, which has a leading blank.
An aside: especially when developing code, do not let loops run indefinitely; put in a reasonable maximum loop iteration, e.g. do ix = 1, 200 for your test case. This will stop you wasting time staring at a computation which ain't ever going to finish.
The reason that the code runs forever is that there is no end condition. Instead, the block of code
if (ierr /= 0) then
  cycle ! If a reading error occurs, advance to new line
end if
sends execution back to the do statement - ad infinitum. I would use a stopping condition like this:
IF (IS_IOSTAT_END(ierr)) EXIT
The function IS_IOSTAT_END frees you from having to figure out what error code end-of-file causes on your compiler; the values of those codes are not standardised. IS_IOSTAT_EOR is useful to check for end-of-record.
The next error you will find is that the statement
read (text,*) word
won't make word match Frequencies -- either. Again, using list-directed input means that the compiler will treat blank spaces in the input file as separators, and the line of code will only get Frequencies into word. But that leads to another problem,
read (text,*) word,xx ! read the entire line
will try to read the string -- into the real variable xx, with unhappy results.
One solution, perhaps the solution, to this series of problems is to use an explicit edit descriptor in the read statements, like this. First, change
read (text,*) word
to
read (text,'(a15)') word
Next, you have to change the line that reads xx to something like
read (text,'(a15,3(f18.4))') word,xx ! read the entire line
You will find that, as it stands, this line does not read all 3 values into xx correctly. That's because the edit descriptor 3(f18.4) does not quite properly describe the layout of the line; in fact it may need f18.4,2(fNN.4), where of course you replace NN with the proper field width for your file. And it's time you did some of the work.
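Putting the pieces together, here is a minimal sketch of how the corrected loop could look. It is not the exact code hinted at above: the a15 descriptor and IS_IOSTAT_END are as described, but reading the numbers with a list-directed read of the remainder of the line, text(16:), is an assumption made to avoid hard-coding field widths; swap in explicit f descriptors if you prefer.
program gau_parser_sketch
  implicit none
  integer :: ierr
  integer, parameter :: iu = 20, ou = 30
  character (len=*), parameter :: search_str = " Frequencies --"
  real :: xx(3)
  character (len=80) :: text
  character (len=15) :: word

  open (unit=iu, file="dummy.log", action="read")
  open (unit=ou, file="output.log", action="write")

  do
    read (iu, "(a)", iostat=ierr) text
    if (is_iostat_end(ierr)) exit           ! leave the loop at end of file instead of cycling forever
    if (ierr /= 0) cycle                    ! skip lines with other read errors

    read (text, "(a15)") word               ! keep the leading blank so the comparison can match search_str
    if (word == search_str) then
      read (text(16:), *, iostat=ierr) xx   ! read the three numbers from the rest of the line
      if (ierr == 0) write (ou, *) word, xx
    end if
  end do

  close (iu)
  close (ou)
end program gau_parser_sketch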

Test Plan with ApacheBench(AB) testing tool

I am trying out load testing here. My backend is Ruby (2.2) on Rails (3).
I read many pages about how to work with the ab testing tool.
Here is what I have tried:
ab -n 100 -c 30 url
Result:
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 52.74.130.35 (be patient).....done
Server Software: nginx/1.6.2
Server Hostname: 52.74.130.35
Server Port: 80
Document Path: url
Document Length: 1372 bytes
Concurrency Level: 3
Time taken for tests: 10.032 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 181600 bytes
HTML transferred: 137200 bytes
Requests per second: 9.97 [#/sec] (mean)
Time per request: 300.963 [ms] (mean)
Time per request: 100.321 [ms] (mean, across all concurrent requests)
Transfer rate: 17.68 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2    9   25.0      5    227
Processing:   176  289  136.5    257   1134
Waiting:      175  275   77.9    256    600
Total:        180  298  139.2    264   1143

Percentage of the requests served within a certain time (ms)
  50%    264
  66%    285
  75%    293
  80%    312
  90%    361
  95%    587
  98%   1043
  99%   1143
This seems to be working perfectly. But my problem is that I want to test many APIs, not just one. So I have to write a script in which I list all the APIs with particular probabilities (weights) and load test them.
I know how it's possible with Locust, but Locust does not support passing nested JSON as parameters.
Can somebody help with this?
Also let me know if there is any problem/ambiguity in the question itself.

Sphinx returns old data after indexer --rotate

I have Sphinx version 2.0.4 fully working.
Whenever I want to reindex data, I use indexer:
/usr/bin/indexer --config /etc/sphinxsearch/sphinx.conf XXX --rotate
It gives output:
root#dsphinx:~# /usr/bin/indexer --config /etc/sphinxsearch/sphinx.conf XXX --rotate
using config file '/etc/sphinxsearch/sphinx.conf'...
indexing index 'XXX'...
collected 9536 docs, 55.8 MB
sorted 4.7 Mhits, 100.0% done
WARNING: 2 duplicate document id pairs found
total 9536 docs, 55758410 bytes
total 3.930 sec, 14187197 bytes/sec, 2426.34 docs/sec
total 4 reads, 0.005 sec, 2926.5 kb/call avg, 1.3 msec/call avg
total 262 writes, 0.062 sec, 311.5 kb/call avg, 0.2 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=14068).
The problem is that process 14068 gives old indexed data.
If I reload the service (/etc/init.d/sphinxsearch reload), this process ID changes and Sphinx returns the newly indexed data.
Is this a bug, or am I not doing something right?
How are you running queries?
Are you using any sort of persistent connection manager in your client? If so, it might be holding connections open, which doesn't give searchd a chance to actually restart.
(i.e. the restart will be delayed until all connections are closed)

reducing jitter of serial ntp refclock

I am currently trying to connect my DIY DCF77 clock to ntpd (using Ubuntu). I followed the instructions here: http://wiki.ubuntuusers.de/Systemzeit.
With ntpq I can see the DCF77 clock
~$ ntpq -c peers
remote refid st t when poll reach delay offset jitter
==============================================================================
+dispatch.mxjs.d 192.53.103.104 2 u 6 64 377 13.380 12.608 4.663
+main.macht.org 192.53.103.108 2 u 12 64 377 33.167 5.008 4.769
+alvo.fungus.at 91.195.238.4 3 u 15 64 377 16.949 7.454 28.075
-ns1.blazing.de 213.172.96.14 2 u - 64 377 10.072 14.170 2.335
*GENERIC(0) .DCFa. 0 l 31 64 377 0.000 5.362 4.621
LOCAL(0) .LOCL. 12 l 927 64 0 0.000 0.000 0.000
So far this looks OK. However, I have two questions.
What exactly is the sign of the offset? Is .DCFa. ahead of the system clock or behind the system clock?
.DCFa. points to refclock-0, which is a DIY DCF77 clock emulating a Meinberg clock. It is connected to my Ubuntu Linux box with an FTDI USB-serial adapter running at 9600 7E2. I verified with a DSO that it emits the time with jitter significantly below 1 ms. So I assume the jitter is introduced by either the FTDI adapter or the kernel. How would I find out, and how can I reduce it?
Part One:
Positive offsets indicate time in the client is behind time on the server.
Negative offsets indicate that time in the client is ahead of time on the server.
I always remember this as "what needs to happen to my clock?"
+0.123 = Add 0.123 to me
-0.123 = Subtract 0.123 from me
Part Two:
Yes, USB serial converters add jitter. Get a real serial port :) You can also use setserial and tell it that the serial port needs to be low_latency. Just apt-get setserial.
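A minimal sketch of what that might look like (the device name /dev/ttyUSB0 is an assumption; use whichever serial device your refclock driver actually opens):
sudo apt-get install setserial
sudo setserial /dev/ttyUSB0 low_latency   # ask the driver for low-latency behaviour on this port
setserial -a /dev/ttyUSB0                 # verify that the low_latency flag is now set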
Bonus Points:
Lose the unreferenced local clock entry. NO LOCL!!!!
