How to use pin to trace memory access of a process

I am using pin to trace the memory accesses of a program.
The following command works as expected:
pin -t obj-intel64/pinatrace.so -- ./HelloWorld
And it will start the 'HelloWorld' process.
But if I start HelloWorld first and then attach the pintool to the process via the following command:
pin -pid pid_of_helloworld -t obj-intel64/pinatrace.so
No output file (i.e. pinatrace.out) is generated.
Does anyone know how to trace the memory accesses of an existing process with pintool? Thanks in advance.

Related

perf can't find ELF for unwinding when tracing app in docker

I am tracing an application running inside a docker container. To do so, I am attaching to it with the following command
perf record -o /tmp/perd.data --call-graph dwarf --pid <pid>
The tracing works fine, but when I try to get a report, it doesn't show any of my application's functions; they are all unknown.
I have also tried hotspot, and I get the following error:
PerfUnwind::MissingElfFile: Could not find ELF file for /workspace/build/release/bin/shared-libs/libdeLog.so. This can break stack unwinding and lead to missing symbols.
I think the issue is that, since the app is running in a container, the libraries are in a directory (/workspace/build/release/bin/shared-libs) that only exists inside the container, so when I run perf report on the host, it can't find them.
How can I fix that?
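One approach for this kind of setup (a sketch, assuming the container is still running and `<container>` stands for its name or id) is to copy the container's library tree to the host and point perf report at it with --symfs, which makes perf resolve binaries relative to that directory instead of the host's /:

```shell
# copy the container's library subtree to a host-side sysroot,
# preserving the original path layout
mkdir -p sysroot
docker cp <container>:/workspace/build/release/bin/shared-libs \
  sysroot/workspace/build/release/bin/shared-libs
# resolve symbols against the copied tree instead of the host filesystem
perf report -i /tmp/perd.data --symfs sysroot
```

The path layout under the --symfs root must mirror the paths recorded in perf.data, which is why the full /workspace/... prefix is recreated.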

Docker Compose down. Can I run a command before the actual stop?

I have a setup with docker-compose which creates a screen and runs a process in it.
That's because when I run docker-compose with -d, the process runs in the background, and attaching to the container gives me a new shell.
What I need is the shell with the actual process...
So when I use my docker-compose script I use screen to run the process in a screen instance.
When I open a shell I can connect to the shell of the running process using the screen -r <screen_name> command
But because the process is running in a screen, the docker-compose down command won't stop the container properly and gets stuck while trying. Instead, I have to force the stop, and that is not what I want, because it is not a proper way of ending my process.
So I thought I need a way to define a stop command before the actual stopping happens.
Any tips are appreciated.
PS: Yes, it's Minecraft
EDIT 1: After Calum Halpin's comment I don't need screen anymore, so now I only need a way to pipe something like "exit" to stdin.
EDIT 2: I guess I still need screen. When attached to the shell, I can't escape from there without killing the terminal session and therefore the process...
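For the stdin requirement, one common pattern (a sketch; the pipe path and the server.jar invocation are placeholders for the real server command) is to give the server a named pipe as stdin, kept open by tail -f, so commands can be written to it at any time without attaching a terminal:

```shell
# create a named pipe and start the server reading commands from it;
# tail -f keeps the pipe open so the server never sees EOF
mkfifo /tmp/mc-input
tail -f /tmp/mc-input | java -jar server.jar nogui &
# later, e.g. before `docker-compose down`, send the server its own
# shutdown command instead of killing it:
echo "stop" > /tmp/mc-input
```

With this, no screen session is needed, and the pre-stop step is just an echo into the pipe.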

How to stop and run Vapor again in Xcode?

I've followed the Vapor tutorial to create a hello app. In Xcode, when I run the Run scheme on my Mac, the app starts and runs as I can see by opening http://localhost:8080/. After making some changes in the code, I stop the Run scheme and I expect the Vapor server to shutdown. However, it continues to serve requests.
Message from debugger: The LLDB RPC server has exited unexpectedly. Please file a bug if you have reproducible steps.
Program ended with exit code: -1
Obviously when I make some changes and run the Run scheme again, I get the following runtime error:
Swift/ErrorType.swift:200: Fatal error: Error raised at top level: bind(descriptor:ptr:bytes:) failed: Address already in use (errno: 48)
Program ended with exit code: 9
How do I stop or restart the server?
This is a long standing issue with Xcode/LLDB. You have a few options:
attach to the process and stop it via Xcode
run killall Run
run lsof -i :8080 to find the process connected to port 8080 and then kill <process_id> (this is useful if you're running multiple apps side by side and only want to terminate the orphaned one)
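The last two steps can be combined into a single line (a sketch; lsof's -t flag prints only the PIDs, which makes the output suitable for command substitution):

```shell
# kill whatever process is listening on port 8080
kill $(lsof -ti :8080)
```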
This is quite frustrating and to be honest I don't know why this happens, but I do the following to terminate the process:
In Xcode,
Go to Debug -> Attach to Process
At the very top of the sub-menu is the "Likely targets" section, with an entry Run (nnnn). It will have the icon of the Terminal application
Click to attach
Then stop the Xcode Run in the usual way.
Out of interest, the next time you run your Vapor app, if you open the Debug Navigator, at the top you will see the Terminal icon with "Run PID nnnn", where nnnn is the PID. If you go to Debug -> Attach to Process again, you can see this at the top of the sub-menu as before, but you won't be able to attach to it because it is already being debugged.
Hope this helps you or someone in the future.

gdbserver does not attach to a running process in a docker container

In my docker container (based on SUSE distribution SLES 15) both the C++ executable (with debug enhanced code) and the gdbserver executable are installed.
Before doing anything productive the C++ executable sleeps for 5 seconds, then initializes and processes data from a database. The processing time is long enough to attach it to gdbserver.
The C++ executable is started in the background and its process id is returned to the console.
Immediately afterwards the gdbserver is started and attaches to the same process id.
Problem: The gdbserver complains not being able to connect to the process:
Cannot attach to lwp 59: No such file or directory (2)
Exiting
In another attempt, I have copied the same gdbserver executable to /tmp in the docker container.
Starting this gdbserver gave a different error response:
Cannot attach to process 220: Operation not permitted (1)
Exiting
It has been verified that in both cases the process is still running: 'ps -e' clearly shows the process id and the process name.
If the process is already finished, a different error message is thrown; this is clear and needs not be explained:
gdbserver: unable to open /proc file '/proc/79/status'
The gdbserver was started once from outside of the container and once from inside.
In both scenarios the gdbserver refused to attach to the running process:
$ kubectl exec -it POD_NAME --container debugger -- gdbserver --attach :44444 59
Cannot attach to lwp 59: No such file or directory (2)
Exiting
$ kubectl exec -it POD_NAME -- /bin/bash
bash-4.4$ cd /tmp
bash-4.4$ ./gdbserver 10.0.2.15:44444 --attach 220
Cannot attach to process 220: Operation not permitted (1)
Exiting
Can someone explain what causes gdbserver to refuse attaching to the specified process,
and give advice on how to overcome the mismatch, i.e. what do I need to examine to prepare the right handshake between the C++ executable and the gdbserver?
The basic reason why gdbserver could not attach to the running C++ process is due to
a security enhancement in Ubuntu (versions >= 10.10):
By default, process A cannot trace a running process B unless B is a direct child of A
(or A runs as root).
Direct debugging is still always allowed, e.g. gdb EXE and strace EXE.
The restriction can be loosened by changing the value of /proc/sys/kernel/yama/ptrace_scope from 1 (the default) to 0 (tracing allowed for all processes). The security setting can be changed with:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
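To make the relaxed setting survive reboots, the same value can be written to a sysctl configuration file (a sketch; on Ubuntu the Yama default is shipped in /etc/sysctl.d/10-ptrace.conf, but any file under /etc/sysctl.d works):

```shell
# persist the relaxed ptrace scope across reboots
echo 'kernel.yama.ptrace_scope = 0' | sudo tee /etc/sysctl.d/10-ptrace.conf
# reload all sysctl settings without rebooting
sudo sysctl --system
```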
All credit for the description of the ptrace scope belongs to the following post,
see the second answer by Eliah Kagan - thank you for the thorough explanation! - here:
https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
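Inside containers, the same ptrace restriction often surfaces as "Operation not permitted", because Docker drops the SYS_PTRACE capability by default. A sketch of granting it when starting the container (the image name is a placeholder; in Kubernetes the equivalent is adding SYS_PTRACE to the pod's securityContext capabilities):

```shell
# grant the ptrace capability and relax seccomp so gdbserver can attach
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined my-image
```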

Stopping OrientDB service fails, ETL import not possible

My goal is to import data from CSV-files into OrientDB.
I use the OrientDB 2.2.22 Docker image.
When I try to execute the /orientdb/bin/oetl.sh config.json script within Docker, I get the error: "Can not open storage it is acquired by other process".
I guess this is because the OrientDB service is still running. But if I try to stop it, I get the next error:
./orientdb.sh stop
./orientdb.sh: return: line 70: Illegal number: root
or
./orientdb.sh status
./orientdb.sh: return: line 89: Illegal number: root
The only way to use the ./oetl.sh script is to stop the Docker instance and restart it in interactive mode running the shell, but this is awkward, because to use OrientDB Studio I have to stop Docker again and start it in normal mode.
As Roberto Franchini mentioned above, setting the dbURL parameter in the Loader to use a remote URL fixed the first issue "Can not open storage it is acquired by other process".
The issue with ./orientdb.sh still exists, but with the remote-URL approach I don't need to shut down and restart the service anymore.
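A sketch of the relevant loader section of config.json for the remote-URL approach (the database name mydb and the credentials are placeholders; other sections of the ETL config are unchanged):

```json
{
  "loader": {
    "orientdb": {
      "dbURL": "remote:localhost/mydb",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbType": "graph"
    }
  }
}
```

With a remote: URL the ETL process talks to the running server over the network instead of opening the storage files directly, so the two no longer conflict.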
