I have a file like this reported by EMMA.
OVERALL COVERAGE SUMMARY:
[class, %] [method, %] [block, %] [line, %] [name]
6% (2/33)! 3% (4/150)! 1% (48/4378)! 1% (11.8/799)! all classes
OVERALL STATS SUMMARY:
total packages: 2
total classes: 33
total methods: 150
total executable files: 18
total executable lines: 799
As you can see, EMMA does not report SLOC; instead it says "total executable lines: 799". What exactly are executable lines?
SLOC includes blank lines and comments too. EMMA reports code coverage only on the lines that contain code, i.e. the executable lines of code.
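For instance, in a small hypothetical class like the one below (my own toy example, not from the report above), a physical line count would include every line, while EMMA's "total executable lines" counts only the lines that actually compile to bytecode:

```java
// Blank lines, comments, and declaration-only lines do not
// count toward EMMA's "total executable lines"; only the
// statement lines (marked "executable" below) do.
public class Adder {

    public static int add(int a, int b) {
        int sum = a + b;   // executable
        return sum;        // executable
    }
}
```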
I am reading chapter 16 of OSTEP, on memory segmentation.
In an example in that section, the book translates the 15KB virtual address to a physical address:
| Segment | Base | Size | Grow Positive |
| Code | 32KB | 2K | 1 |
| Heap | 34KB | 2K | 1 |
| Stack | 28KB | 2K | 0(negative) |
to translate the 15KB virtual address to physical (in the textbook):
1. 15KB in binary => 11 1100 0000 0000
2. the top 2 bits (11) determine the segment, which is the stack.
3. we are left with 3KB, used to obtain the correct offset:
4. 3KB - maximum segment size = 3KB - 4KB = -1KB
5. physical address = 28KB - 1KB = 27KB
My question is: in step 4, why is the maximum segment size 4KB? Isn't it 2KB?
in step 4, why is the maximum segment 4KB--isn't it 2KB?
For that part of the book, they're assuming that the hardware uses the highest 2 bits of the (14-bit) virtual address to determine which segment is being used. This leaves "14 - 2 = 12 bits" for the offset within a segment, so it's impossible for the hardware to support segments larger than 4 KiB (because the offset is 12 bits and 2**12 is 4 KiB).
Of course, just because the maximum possible size of a segment is 4 KiB doesn't mean you can't have a smaller segment (e.g. a 2 KiB segment). For expand-down segments I'd assume that the hardware being described in the book does something like "if(max_segment_size - offset >= segment_limit) { segmentation_fault(); }", so if the segment's limit is 2 KiB and "max_segment_size - offset = 4 KiB - 3 KiB = 1 KiB", it'd be fine (no segmentation fault) because 1 KiB is less than the segment limit (2 KiB).
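The book's example can be sketched in code. The constants below are the values from the table above (14-bit addresses, 12 offset bits, stack at base 28KB with a 2KB limit, growing down); the exact form of the bounds check is my guess at the hardware behavior, not something the book specifies:

```java
public class SegmentTranslator {
    // 14-bit virtual addresses: top 2 bits select the segment,
    // leaving 12 bits of offset, so the max segment size is 4 KiB.
    static final int OFFSET_BITS = 12;
    static final int MAX_SEGMENT_SIZE = 1 << OFFSET_BITS; // 4096

    // Stack segment from the table: base 28KB, limit 2KB, grows down.
    static final int STACK_BASE = 28 * 1024;
    static final int STACK_LIMIT = 2 * 1024;

    // Translate a virtual address in the stack segment.
    static int translateStack(int vaddr) {
        int segment = vaddr >> OFFSET_BITS;          // top 2 bits
        if (segment != 3)                            // 0b11 = stack
            throw new IllegalArgumentException("not a stack address");
        int offset = vaddr & (MAX_SEGMENT_SIZE - 1); // low 12 bits
        // For a grow-down segment, the effective offset is
        // offset - max segment size (a negative number).
        int negOffset = offset - MAX_SEGMENT_SIZE;
        if (-negOffset > STACK_LIMIT)
            throw new IllegalStateException("segmentation fault");
        return STACK_BASE + negOffset;
    }

    public static void main(String[] args) {
        // 15KB = binary 11 1100 0000 0000 -> 28KB - 1KB = 27KB
        System.out.println(translateStack(15 * 1024)); // prints 27648
    }
}
```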
Note: Because no modern CPUs and no modern operating systems use segmentation (and because segmentation works differently on other CPUs - e.g. with segment registers and not "highest N bits select segment"); I'd be tempted to quickly skim through chapter 16 without paying much attention. The important part is "paging" (starting in chapter 18 of the book).
We have a Java application running on Mule. We have the Xmx value configured as 6144M, but we routinely see the overall memory usage climb and climb. It was getting close to 20 GB the other day before we proactively restarted it.
Thu Jun 30 03:05:57 CDT 2016
top - 03:05:58 up 149 days, 6:19, 0 users, load average: 0.04, 0.04, 0.00
Tasks: 164 total, 1 running, 163 sleeping, 0 stopped, 0 zombie
Cpu(s): 4.2%us, 1.7%sy, 0.0%ni, 93.9%id, 0.2%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 24600552k total, 21654876k used, 2945676k free, 440828k buffers
Swap: 2097144k total, 84256k used, 2012888k free, 1047316k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3840 myuser 20 0 23.9g 18g 53m S 0.0 79.9 375:30.02 java
The jps command shows:
10671 Jps
3840 MuleContainerBootstrap
The jstat command shows:
S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
37376.0 36864.0 16160.0 0.0 2022912.0 1941418.4 4194304.0 445432.2 78336.0 66776.7 232 7.044 17 17.403 24.447
The startup arguments are (sensitive bits have been changed):
3840 MuleContainerBootstrap -Dmule.home=/mule -Dmule.base=/mule -Djava.net.preferIPv4Stack=TRUE -XX:MaxPermSize=256m -Djava.endorsed.dirs=/mule/lib/endorsed -XX:+HeapDumpOnOutOfMemoryError -Dmyapp.lib.path=/datalake/app/ext_lib/ -DTARGET_ENV=prod -Djava.library.path=/opt/mapr/lib -DksPass=mypass -DsecretKey=aeskey -DencryptMode=AES -Dkeystore=/mule/myStore -DkeystoreInstance=JCEKS -Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf -Dmule.mmc.bind.port=1521 -Xms6144m -Xmx6144m -Djava.library.path=%LD_LIBRARY_PATH%:/mule/lib/boot -Dwrapper.key=a_guid -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=10744 -Dwrapper.version=3.5.19-st -Dwrapper.native_library=wrapper -Dwrapper.arch=x86 -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 -Dwrapper.lang.domain=wrapper -Dwrapper.lang.folder=../lang
Adding up the "capacity" items from jstat shows that only my 6144m is being used for the Java heap. Where the heck is the rest of the memory being used? Stack memory? Native heap? I'm not even sure how to proceed.
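(For reference, here is the arithmetic: summing the capacity columns S0C + S1C + EC + OC from the jstat output above comes out to exactly the 6144m heap, with PC on top as perm gen. A quick sketch, with the column values hard-coded from the output:)

```java
public class JstatCapacitySum {
    // Capacity columns from the jstat output above, in KB.
    static final double S0C = 37376.0, S1C = 36864.0, EC = 2022912.0;
    static final double OC = 4194304.0, PC = 78336.0;

    // Survivor + eden + old gen capacities together make up -Xmx.
    static double heapKb() {
        return S0C + S1C + EC + OC;
    }

    public static void main(String[] args) {
        System.out.println(heapKb() / 1024 + " MB heap");             // 6144.0 MB = -Xmx6144m
        System.out.println((heapKb() + PC) / 1024 + " MB with perm"); // 6220.5 MB
    }
}
```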
If left to continue growing, it will consume all memory on the system and we will eventually see the system freeze up throwing swap space errors.
I have another process that is starting to grow. Currently at about 11g resident memory.
pmap 10746 > pmap_10746.txt
cat pmap_10746.txt | grep anon | cut -c18-25 | sort -h | uniq -c | sort -rn | less
Top 10 entries by count:
119 12K
112 1016K
56 4K
38 131072K
20 65532K
15 131068K
14 65536K
10 132K
8 65404K
7 128K
Top 10 entries by allocation size:
1 6291456K
1 205816K
1 155648K
38 131072K
15 131068K
1 108772K
1 71680K
14 65536K
20 65532K
1 65512K
And top 10 by total size:
Count Size Aggregate
1 6291456K 6291456K
38 131072K 4980736K
15 131068K 1966020K
20 65532K 1310640K
14 65536K 917504K
8 65404K 523232K
1 205816K 205816K
1 155648K 155648K
112 1016K 113792K
This seems to be telling me that because the Xmx and Xms are set to the same value, there is a single allocation of 6291456K for the java heap. Other allocations are NOT java heap memory. What are they? They are getting allocated in rather large chunks.
Expanding in a bit more detail on Peter's answer.
You can take a binary heap dump from within VisualVM (right click on the process in the left-hand side list, and then on heap dump - it'll appear right below shortly after). If you can't attach VisualVM to your JVM, you can also generate the dump with this:
jmap -dump:format=b,file=heap.hprof $PID
Then copy the file and open it with VisualVM (File, Load, select the heap dump file type, find the file).
As Peter notes, a likely cause of the leak is uncollected DirectByteBuffers (e.g. some instance of another class is holding on to references to the buffers, so they are never GC'd).
To identify where these references are coming from, you can use VisualVM to examine the heap and find all instances of DirectByteBuffer in the "Classes" tab. Find the DirectByteBuffer class, right click, and go to the instances view.
This will give you a list of instances. You can click on one and see who's keeping a reference to each one:
Note that in the bottom pane we have a "referent" of type Cleaner and two "mybuffer" entries. These are fields in other classes that reference the DirectByteBuffer instance we drilled into (it should be OK to ignore the Cleaner and focus on the others).
From this point on you need to proceed based on your application.
Another equivalent way to get the list of DBB instances is from the OQL tab. This query:
select x from java.nio.DirectByteBuffer x
Gives us the same list as before. The benefit of using OQL is that you can execute more complex queries. For example, this gets all the instances that are keeping a reference to a DirectByteBuffer:
select referrers(x) from java.nio.DirectByteBuffer x
What you can do is take a heap dump and look for objects which store data off-heap, such as ByteBuffers. Those objects will appear small but are proxies for larger off-heap memory areas. See if you can determine why lots of them might be retained.
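As a small illustration of this (my own toy example, not code from the application in question): a direct ByteBuffer's backing store is allocated in native memory outside -Xmx, while only the small wrapper object lives on the Java heap, which is why such objects look tiny in a heap dump.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    // Allocates a direct buffer: the backing bytes live in native
    // memory, not on the Java heap, so they don't count against -Xmx.
    static ByteBuffer allocate(int bytes) {
        return ByteBuffer.allocateDirect(bytes);
    }

    public static void main(String[] args) {
        ByteBuffer buf = allocate(64 * 1024 * 1024); // 64 MB of native memory
        System.out.println(buf.isDirect());          // true
        System.out.println(buf.capacity());          // 67108864
    }
}
```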
So I need to print out an individual iPhone app's memory usage for a soak test. It would help greatly if there were a stored log monitoring usage against time (run periodically within the automated test).
To do this I've jailbroken the iPhone and installed mobile terminal. My plan was to use top -p to filter out the rest of the processes and then pipe the output to a log file. The data could then be retrieved later and analysed.
Unfortunately, when I run for PID 616:
top -p 616
then all I get is 616 printed off multiple times:
Processes: 77 total, 1 running, 5 stuck, 71 sleeping... 335 threads 02:38:09
Load Avg: 1.23, 0.93, 0.90 CPU usage: 3.33% user, 0.00% sys, 96.67% idle
SharedLibs: num = 0, resident = 0 code, 0 data, 0 linkedit.
MemRegions: num = 0, resident = 0 + 0 private, 0 shared.
PhysMem: 108M wired, 152M active, 39M inactive, 497M used, 519M free.
VM: 28G + 0 904390(0) pageins, 32065(0) pageouts
PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE VSIZE
616
616
616
616
616
616
616
616
616
616
616
616
616
616
616
616
616
616
I've looked around, and it seems that the flags for top on the iPhone are slightly different, but I can't find a specific description. Can anyone show me how to print out the data for just one process?
Thanks.
If you want to find out the proper command line switches for top, or anything else, try something like this:
>> top --help
You will see, however, that the PID (-p) switch isn't supported in the version for jailbroken iOS.
However, if you use this:
>> top -l 2 | grep 616
It should give you what you need (in the second line of output). The -l switch gives you N samples. You need at least 2, because top calculates CPU% as a delta between samples, so with only 1 sample, it will always be 0%. If you only need memory usage, though, you can probably use:
>> top -l 1 | grep 616
Using just top | grep 616 doesn't work, because top runs continuously. You probably just want a single value, and should then let top exit.
Note: you'll probably need to install grep from Cydia, also. Just search for grep. It's a package published by Saurik.
Warning: because you're using grep to search for the right PID, your code that parses the log file may need to validate its input. The right output will be in the file, but if the numeric PID also matches any other lines, you'll get additional data. For example, if the PID you search for also happens to be the number of MB of memory used by another process, you'll get extra lines of output. The first column in your file, however, will always be the PID.
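One way to guard against those false matches when post-processing the log is to accept only lines whose first whitespace-delimited column is exactly the PID. A rough sketch (class and method names are my own, and the sample lines are made up):

```java
import java.util.List;
import java.util.stream.Collectors;

public class TopLogFilter {
    // Keep only lines whose first column is exactly the target PID,
    // so a "616M" memory value elsewhere on a line doesn't match.
    static List<String> filterByPid(List<String> lines, String pid) {
        return lines.stream()
                .filter(line -> {
                    String[] cols = line.trim().split("\\s+");
                    return cols.length > 0 && cols[0].equals(pid);
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "616  MyApp   2.0  00:01.02  12M  8M  30M",
                "999  Other   0.5  00:00.10  616M 1M  20M"); // grep would match this too
        System.out.println(filterByPid(sample, "616"));      // keeps only the first line
    }
}
```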
I've set up Mahout to provide some classification for news articles, so I can extract only those news articles which are of interest.
I've gone through and manually trained the titles of these news articles, approximately 80,000 of them (both articles I want and don't want).
I have written an app which outputs the top words and their scores, and certain keywords are creeping high up in the top words.
Some of these so-called top words are false positives - they are only at the top because every title contains them,
such as 'stratford herald' (the name of the newspaper). Is there any way to remove them once a model has already been created?
There are about 20 top words which I would like to simply get rid of (or have Mahout ignore when providing the best labels), but I don't want this to be an exercise on the input (i.e. filtering out the names I'd like to exclude from the training input); I'd prefer to remove them afterwards, as I've already spent a lot of time manually training.
home: 1067
dorset: 1493
details: 908
back: 867
poole: 1651
set: 819
help: 743
get: 812
bournemouth: 14728
new: 2661
avon: 2684
local: 3092
cherries: 1244
police: 1011
over: 1813
echo: 6526
null: 79983
after: 2292
stratford: 2657
school: 1395
jobs: 881
job: 6982
car: 772
herald: 2817
nurse: 1174
man: 1335
manager: 1071
day: 759
time: 764
council: 824
upon: 2676
Number of labels: 2
Number of documents in training set: 79983
Top 75 words for label negative_article
stratford: 10748.598348617554
herald: 7579.555884361267
avon: 7484.692479610443
upon: 7476.3635239601135
local: 7426.4039397239685
after: 3837.6605548858643
man: 3512.4373264312744
police: 2586.899124145508
over: 1537.557123184204
woman: 1434.1630334854126
Top 75 words for label other
bournemouth: 39076.86379265785
job: 24028.39960718155
echo: 22974.801107406616
new: 10888.526140213013
stratford: 8045.635549545288
poole: 7493.278381347656
over: 7077.8266887664795
school: 7011.863867282867
local: 7004.647378444672
dorset: 6961.040742397308
Environment: Linux Mint 32 bit, JRuby-1.6.5 [ i386 ], Rails 3.1.3.
I am trying to profile my rails application deployed on JRuby 1.6.5 on WEBrick (in development mode).
My JRUBY_OPTS: "-Xlaunch.inproc=false --profile.flat"
In one of my models, I introduced an explicit sleep(5) and ensured that this method is called as part of the before_save hook while saving the model. Pseudo code...
class Invoice < ActiveRecord::Base
  # <some properties here...>

  before_save :delay

  private

  def delay
    sleep(5)
  end
end
The above code ensures that just before an instance of Invoice gets persisted, the delay method is invoked automatically.
Now, when I profile the code that creates this model instance (through an rspec unit test), I get the following output:
6.31 0.00 6.31 14 RSpec::Core::ExampleGroup.run
6.30 0.00 6.30 14 RSpec::Core::ExampleGroup.run_examples
6.30 0.00 6.30 1 RSpec::Core::Example#run
6.30 0.00 6.30 1 RSpec::Core::Example#with_around_hooks
5.58 0.00 5.58 1 <unknown>
5.43 0.00 5.43 2 Rails::Application::RoutesReloader#reload!
5.00 0.00 5.00 1 <unknown>
5.00 5.00 0.00 1 Kernel#sleep
4.87 0.00 4.87 40 ActiveSupport.execute_hook
4.39 0.00 4.39 3 ActionDispatch::Routing::RouteSet#eval_block
4.38 0.00 4.38 2 Rails::Application::RoutesReloader#load_paths
In the above output, why do I see those two <unknown> entries instead of Invoice#delay or something similar?
In fact, when I start my Rails server (WEBrick) with the same JRUBY_OPTS (mentioned above), all my application code frames show up as <unknown> entries in the profiler output!
Am I doing anything wrong?
Looks like you maxed out the profile methods limit.
Set -Xprofile.max.methods in JRUBY_OPTS to a big number (the default is 100000 and is never enough). E.g.:
export JRUBY_OPTS="--profile.flat -Xprofile.max.methods=10000000"