Why are some external libc symbols not resolved in my "perf report" output?

My "perf report" output has things like this:
    18.31%   0.00%  tclsqlite3  libc.so.6  [.] 0x00007f0aaa322be4
            |
            ---0x7f0aaa322be4

    18.28%  18.28%  tclsqlite3  libc.so.6  [.] 0x00000000001a0be4
            |
            ---__strncat_ssse3
               ttThreadMain
               sqlite3_step
               sqlite3VdbeExec
               sqlite3VdbeHalt
               sqlite3HctBtreeCommitPhaseTwo
               btreeFlushData
               |
               --18.27%--sqlite3HctDbInsert
                          |
                          --18.21%--0x7f0aaa322be4
and elsewhere stuff like:
     |--63.81%--sqlite3HctDbInsert
     |          |--43.69%--hctDbInsert
     |           --18.24%--0x7f0aaa322be4
How should this be interpreted? Is the 18% some libc function? How does one get a symbol for it?


Include path has been specified but still failed to include the header in the path in a Bazel C++ project

I have a project with a directory structure like this:
root
|--src
|  |--project1
|     |--model
|     |  |--include
|     |  |  |--model
|     |  |     |--modelA.hpp
|     |  |     |--modelB.hpp
|     |  |--modelA.cpp
|     |  |--modelB.cpp
|     |  |--BUILD #1
|     |...
|     |--view
|     |...
|     |--common
|        |--include
|        |  |--common
|        |     |--data_type.hpp
|        |--BUILD #2
|--WORKSPACE
As other packages in this project use the same self-defined data types, I defined them in a package named common.
Now I include data_type.hpp in modelA.hpp:
...
#include "common/data_type.hpp"
...
Referring to the stage3 example in the tutorial, BUILD #1 looks like this:
cc_library(
    name = "modelA",
    hdrs = ["include/model/modelA.hpp"],
    deps = ["//src/project/common:data_type"],
    copts = ["-Isrc/project/common/include"],
)
and BUILD #2, which defines the dependency module data_type, looks like this:
cc_library(
    name = "data_type",
    hdrs = ["include/common/data_type.hpp"],
    visibility = ["//visibility:public"],
)
However, when I built the code, I got:
src/project/model/include/model/modelA.hpp: fatal error: common/data_type.hpp: No such file or directory
Why do I still get this error even though I have defined copts = ["-Isrc/heimdallr/common/include"]?
Please check the Header inclusion checking section of the C/C++ Rules in the Bazel documentation: all include paths are interpreted relative to the workspace directory. Kindly refer to this issue for more information. Thank you!
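For example, a minimal sketch (my suggestion, not part of the original answer) that drops the per-target copts and instead puts an includes attribute on the data_type target; Bazel resolves includes relative to the package and propagates it to all dependents:

# BUILD #2, revised (hypothetical): expose the headers under include/
# so that dependents can write #include "common/data_type.hpp".
cc_library(
    name = "data_type",
    hdrs = ["include/common/data_type.hpp"],
    includes = ["include"],  # added to the include path of every dependent
    visibility = ["//visibility:public"],
)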

Any doable approach to use multiple GPUs, multiple processes with TensorFlow?

I am using a docker container to run my experiment. I have multiple GPUs available and I want to use all of them for my experiment; I mean I want to utilize all GPUs for one program. To do so, I used the tf.distribute.MirroredStrategy suggested on the TensorFlow site, but it is not working. Here are the full error messages on gist.
Here is the info on the available GPUs:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.06    Driver Version: 450.51.06    CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:6A:00.0 Off |                    0 |
| N/A   31C    P8    15W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:6B:00.0 Off |                    0 |
| N/A   31C    P8    15W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla T4            Off  | 00000000:6C:00.0 Off |                    0 |
| N/A   34C    P8    15W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla T4            Off  | 00000000:6D:00.0 Off |                    0 |
| N/A   34C    P8    15W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
My current attempt
Here is my attempt, using tf.distribute.MirroredStrategy:
device_type = "GPU"
devices = tf.config.experimental.list_physical_devices(device_type)
devices_names = [d.name.split("e:")[1] for d in devices]
strategy = tf.distribute.MirroredStrategy(devices=devices_names[:3])

with strategy.scope():
    model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])
The above attempt is not working and gave the error listed in the gist above. I can't find another way of using multiple GPUs for a single experiment.
Does anyone have a workable approach to make this happen? Any thoughts?
Is MirroredStrategy the proper way to distribute the workload?
The approach is correct, as long as the GPUs are on the same host. The TensorFlow manual has examples of how tf.distribute.MirroredStrategy can be used with Keras to train on the MNIST set.
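For instance, a minimal sketch along the lines of that Keras MNIST example (the model shape and hyperparameters here are placeholders, not from the question):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs by default
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Build and compile the model inside the strategy scope so its variables
# are mirrored across the GPUs; training afterwards works as usual.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model.fit(x_train / 255.0, y_train, epochs=1, batch_size=64)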
Is MirroredStrategy the only strategy?
No, there are multiple strategies that can be used to achieve workload distribution. For example, tf.distribute.MultiWorkerMirroredStrategy can also be used to distribute the work on multiple devices through multiple workers.
The TF documentation explains the strategies and their limitations, and provides some examples to help kick-start the work.
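As a rough sketch of the multi-worker variant (the host names and port are hypothetical; the same script runs on every worker, each with its own task index):

import json
import os

import tensorflow as tf

# TF_CONFIG tells each worker who its peers are; only "index" differs per host.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["worker0.example.com:12345", "worker1.example.com:12345"]},
    "task": {"type": "worker", "index": 0},
})

strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    # Any Keras model, built and compiled inside the scope as before.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")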
The strategy is throwing an error
According to this issue on GitHub, the ValueError: SyncOnReadVariable does not support 'assign_add' ... is a bug in TensorFlow that is fixed in TF 2.4.
You can try to upgrade the TensorFlow libraries with:
pip install --ignore-installed --upgrade tensorflow
Implementing variables that are not aware of the distributed strategy
If you have tried the standard example from the documentation and it works fine, but your model does not, you might have variables that are incorrectly set up, or you might be using distributed variables that lack support for the aggregation functions required by the distributed strategy.
As per the TF documentation:
"A distributed variable is variables created on multiple devices. As discussed in the glossary, mirrored variable and SyncOnRead variable are two examples."
To better understand how to implement custom support for distributed variables, check the following page in the documentation.

Alternative to grep in a Lua script

I have the following text file, output.txt, that I created (it has 15 columns, counting the | separators):
[66] | alert:n | 3.0 | 10/22/2020-14:45:50.066928 | local_ip | 123.123.123.123 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[67] | alert:n | 3.0 | 10/22/2020-14:45:51.096955 | local_ip | 12.12.12.11 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[68] | alert:n | 3.0 | 10/22/2020-14:45:53.144942 | 123.123.123.123 | local_ip | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[69] | alert:n | 3.0 | 10/22/2020-14:45:57.176956 | local_ip | 68.73.203.109 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[70] | alert:n | 3.0 | 10/22/2020-14:46:05.240953 | 123.123.123.123 | local_ip | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
[71] | alert:n | 3.0 | 10/22/2020-14:46:21.624979 | local_ip | 68.73.203.109 | United States of America | SURICATA STREAM ESTABLISHED SYNACK resend with different ACK
I'm familiar with bash scripting; say I want to count the total occurrences of the specific IP 123.123.123.123 in the 9th column, I can implement it like this:
#!/bin/bash
ip="123.123.123.123"
report="output.txt"
src_ip_count=$(grep "${ip}" "${report}" | awk '{ print $9 }' | grep -v "local_ip" | uniq -c | awk '{ print $1 }')
echo "${src_ip_count}"   # print the count
and the output is:
[root@me lua-output]# ./test.sh
2
How do I implement the same code as above in Lua? I know there is the popen function that can be used, but is there a native way to do this in Lua? Also, if I use popen, I need to pass the variables $ip and $report inside that command, which I'm not sure is possible.
There's a bunch of ways to go about this, really. Assuming you read your data from stdin (though the same works for any file you open manually), you can do something like this:
local c = 0
for line in io.lines() do -- or file:lines() if you have a different file
  if line:find("123.123.123.123", 1, true) then -- plain find ("." is magic in Lua patterns); only lines containing the IP we care about
    if true then -- whatever other conditions you want to apply
      c = c + 1
    end
  end
end
print(c)
Lua doesn't have a concept of what a "column" is, so you have to build that yourself as well. Either use a pattern to count spaces, or split the string into a table and index it.
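For example, a minimal pure-Lua sketch (no external commands) that mimics the $9 field check from the bash version, assuming output.txt is in the current directory:

-- Count lines whose 9th whitespace-separated field is the target IP,
-- using awk's default field numbering (the bare "|" separators count
-- as fields, just as they do for awk).
local ip = "123.123.123.123"
local count = 0

for line in io.lines("output.txt") do
  local fields = {}
  for field in line:gmatch("%S+") do
    fields[#fields + 1] = field
  end
  if fields[9] == ip then
    count = count + 1
  end
end

print(count)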
You asked whether it is possible to use variables inside popen in Lua. It is possible, and you can use the grep command from Lua.
So in Lua you can do this:
-- Lua script shelling out to grep via io.popen
local ip = "123.123.123.123"
local report = "output.txt"
local cmd = "grep -F " .. ip .. " " .. report .. " | awk '{ print $9 }' | grep -v 'local_ip' | uniq -c | awk '{ print $1 }'"
local handle = io.popen(cmd)
local src_ip_count = handle:read("*a")
print(src_ip_count)
handle:close()
output:
2

FitNesse: Convert Fit fixture to Slim

Looking for a solution to convert a Fit fixture for a FitNesse test to Slim.
I got the Command-Line Fit fixture.
Since my whole FitNesse test system runs on Slim, I need CommandLineFixture as Slim to execute a bash script from my test.
Any other workaround for this would work for me.
I am trying to execute a script from a FitNesse test, and this script writes some text into a file on the server where my FitNesse server is running.
But what I am observing with the provided fixture is that it opens the file and does not write any text into it.
So I just wanted to check: does FitNesse have any constraint on executing a script that writes into a file?
Also, I have given full rwx permissions to the text file.
Below is my modified script:
!define TEST_SYSTEM {slim}
!path ../fixtures/*.jar
|Import |
| nl.hsac.fitnesse.fixture.slim.ExecuteProgramTest |
|script |
|set |-c |as argument|0 |
|set |ls -l / |as argument|1 |
|execute|/bin/bash |
|check |exit code |0 |
|show |standard out |
|check |standard error|!--! |
Executing the above test fetched no response and gave this result:
Test Pages: 0 right, 0 wrong, 1 ignored, 0 exceptions
Assertions: 0 right, 0 wrong, 0 ignored, 0 exceptions
(0.456 seconds)
I already had a helper method to start a program in my fixture library, but I started work on a fixture today. Would the execute program test fixture work for you?
Example usage:
We can run a program with some arguments, check its exit code and show its output.
|script |execute program test |
|set |-c |as argument|0|
|set |ls -l / |as argument|1|
|execute|/bin/bash |
|check |exit code |0 |
|show |standard out |
|check |standard error|!--! |
The default timeout for program execution is one minute, but we can set a custom timeout. Furthermore we can control the directory it is invoked from, set all arguments using a list and get its output 'unformatted'.
|script |execute program test |
|check |timeout |60000 |
|set timeout of |100 |milliseconds|
|set working directory|/ |
|set |-c, ls -l |as arguments|
|execute |/bin/bash |
|check |exit code |0 |
|show |raw standard out |
|check |raw standard error|!--! |
The timeout can also be set in seconds, and environment variables can be passed (the process's output is escaped to ensure it is displayed properly).
|script |execute program test |
|set timeout of|1 |seconds |
|set value |Hi <there> |for |BLA|
|set |-c |as argument|0 |
|set |!-echo ${BLA}-!|as argument|1 |
|execute |/bin/bash |
|check |exit code |0 |
|check |raw standard out |!-Hi <there>
-!|
|check|standard out|{{{Hi <there>
}}}|

" Error Compilation error: encoded string too long:" when making a build

I have a Grails project that runs correctly in dev mode, but when I try to create a war file it gives me the following message and stops the build:
| Compiling 1 source files
| Compiling 1 source files.
| Compiling 1 source files..
| Compiling 1 source files...
| Compiling 1 source files....
| Compiling 1 source files.....
| Compiling 16 GSP files for package [ProjectName]
| Compiling 16 GSP files for package [ProjectName].
| Error Compilation error: encoded string too long: 108421 bytes
Grails doesn't give me any other info about which GSP or line has the problem. Has anyone seen this happen?
Here are the Grails stats; I would say it's a fairly small project:
+----------------------+-------+-------+
| Name                 | Files |  LOC  |
+----------------------+-------+-------+
| Controllers          |     6 |   624 |
| Domain Classes       |     6 |   109 |
| Java Helpers         |     1 |    96 |
| Unit Tests           |    12 |   565 |
| Scripts              |     1 |     4 |
+----------------------+-------+-------+
| Totals               |    26 |  1398 |
+----------------------+-------+-------+
It seems this is a Grails bug in versions prior to 2.3.7, but it's fixed in 2.3.7 and above.
You have two options: upgrade, or follow the steps below.
Find all the GSP pages with a file size greater than 64K.
Add <% /* comment to break the static gsp block */ %> in the middle of those static pages (add it at the end of HTML tags, for example after </p>); see the sketch below.
This will make Grails think it's processing two chunks and allows the page to be processed.
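A sketch of what that workaround might look like inside an oversized, mostly static GSP (the markup surrounding the comment is hypothetical):

<!-- ... first chunk of static markup, approaching the 64K limit ... -->
</p>
<% /* comment to break the static gsp block */ %>
<p>
<!-- ... remainder of the static markup ... -->
</p>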
I've seen this before. Exactly what @tim_yates commented! Refactored some GSPs (using includes, for example) and all was good again. Also, doing a little research on this, I found some interesting things about DataOutputStream.java: it seems to have a 64KB limit for String objects.
Maybe this can also help you.
Cheers!
I never knew what the problem was; all I did was move all the needed files to a brand new project, and the error disappeared!
