perf: Couldn't synthesize bpf events

I am trying to get the perf tool running on one of our Linux setups, which doesn't (and can't) have the Linux sources installed.
So I downloaded the Linux source on another machine and compiled perf there (cd tools/perf; make).
I copied the perf binary to my target machine.
However, when I start recording, it prints "Couldn't synthesize bpf events":
root> perf record -a -g --call-graph dwarf -p 836
Warning:
PID/TID switch overriding SYSTEM
Couldn't synthesize bpf events.
[ perf record: Woken up 1 times to write data ]
Failed to read max cpus, using default of 4096
[ perf record: Captured and wrote 0.057 MB perf.data ]
Linux version running on our target machine: 5.4.66-rt38-intel-pk-preempt-rt
Code I used to compile perf: https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git/log/?h=v5.4-rt
Because of this 'Couldn't synthesize bpf events' warning, I suspect I am not getting the user-space stack in the perf report.
What should I do to get rid of this error and fetch the user-space stack of a running process with perf? Advice, please!

CONFIG_BPF_SYSCALL was not enabled in the kernel config.
After enabling it, the 'Couldn't synthesize bpf events' warning is gone.
Marking this as answered.
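For anyone hitting the same warning: a quick way to check whether the running kernel was built with BPF syscall support is to grep the kernel config (a minimal sketch; where the config lives varies by distro and kernel build):
# If the kernel exposes its own config (requires CONFIG_IKCONFIG_PROC):
zgrep CONFIG_BPF_SYSCALL /proc/config.gz
# Or, if the config file was installed next to the kernel image:
grep CONFIG_BPF_SYSCALL /boot/config-$(uname -r)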

Related

perf can't find ELF for unwinding when tracing app in docker

I am tracing an application running inside a Docker container. To do so, I attach to it with the following command:
perf record -o /tmp/perf.data --call-graph dwarf --pid <pid>
The tracing works fine, but when I try to get a report it doesn't show any of my application's functions; they are all unknown.
I have also tried hotspot, and I get the following error:
PerfUnwind::MissingElfFile: Could not find ELF file for /workspace/build/release/bin/shared-libs/libdeLog.so. This can break stack unwinding and lead to missing symbols.
I think the issue is that, since the app runs in a container, the libraries live in a particular directory (/workspace/build/release/bin/shared-libs), and when I run perf report on the host it can't find them, because that directory only exists inside the container.
How can I fix that?
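One workaround worth trying (a sketch, assuming the overlay2 storage driver; the container id is a placeholder): point perf report at the container's root filesystem as seen from the host with --symfs:
# Locate the container's merged rootfs on the host (overlay2 assumed)
ROOTFS=$(docker inspect --format '{{.GraphDriver.Data.MergedDir}}' <container-id>)
perf report -i /tmp/perf.data --symfs "$ROOTFS"
With --symfs, perf resolves paths like /workspace/... relative to that directory, so the container's libraries become visible for unwinding.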

gdbserver does not attach to a running process in a docker container

In my Docker container (based on the SUSE distribution SLES 15), both the C++ executable (built with debug info) and the gdbserver executable are installed.
Before doing anything productive, the C++ executable sleeps for 5 seconds, then initializes and processes data from a database. The processing takes long enough to attach gdbserver to it.
The C++ executable is started in the background and its process id is printed to the console.
Immediately afterwards, gdbserver is started and attaches to that same process id.
Problem: gdbserver complains that it cannot attach to the process:
Cannot attach to lwp 59: No such file or directory (2)
Exiting
In another attempt, I copied the same gdbserver executable to /tmp inside the Docker container.
Starting this gdbserver gave a different error:
Cannot attach to process 220: Operation not permitted (1)
Exiting
It has been verified that in both cases the process is still running: 'ps -e' clearly shows the process id and the process name.
If the process has already finished, a different error message is thrown; this is clear and needs no explanation:
gdbserver: unable to open /proc file '/proc/79/status'
The gdbserver was started once from outside the container and once from inside it.
In both scenarios gdbserver refused to attach to the running process:
$ kubectl exec -it POD_NAME --container debugger -- gdbserver --attach :44444 59
Cannot attach to lwp 59: No such file or directory (2)
Exiting
$ kubectl exec -it POD_NAME -- /bin/bash
bash-4.4$ cd /tmp
bash-4.4$ ./gdbserver 10.0.2.15:44444 --attach 220
Cannot attach to process 220: Operation not permitted (1)
Exiting
Can someone explain what causes gdbserver to refuse to attach to the specified process,
and give advice on how to overcome the mismatch, i.e. what do I need to examine to prepare the right handshake between the C++ executable and gdbserver?
The basic reason gdbserver could not attach to the running C++ process is
a security enhancement in Ubuntu (versions >= 10.10):
by default, process A cannot trace a running process B unless B is a direct child of A
(or A runs as root).
Direct debugging is still always allowed, e.g. gdb EXE and strace EXE.
The restriction can be loosened by changing the value of /proc/sys/kernel/yama/ptrace_scope from 1 (the default) to 0 (tracing allowed for all processes). The security setting can be changed with:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
All credit for the description of the ptrace scope belongs to the following post;
see the second answer by Eliah Kagan - thank you for the thorough explanation! - here:
https://askubuntu.com/questions/143561/why-wont-strace-gdb-attach-to-a-process-even-though-im-root
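To keep the relaxed setting across reboots, you can persist it via sysctl (a sketch; it assumes your host reads drop-ins from /etc/sysctl.d, and the file name is arbitrary):
# Persist the relaxed ptrace scope across reboots
echo 'kernel.yama.ptrace_scope = 0' | sudo tee /etc/sysctl.d/10-ptrace.conf
# Reload all sysctl configuration without rebooting
sudo sysctl --system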

Docker out of memory exception

I am trying to add this plugin to my Sylius project, which runs in Docker: https://github.com/FriendsOfSylius/SyliusImportExportPlugin. But I get an out-of-memory exception every time.
I tried: docker-compose exec php composer require friendsofsylius/sylius-import-export-plugin --dev
I got: Fatal error: Allowed memory size of 2147483648 bytes exhausted (tried to allocate 4096 bytes) in phar:///usr/bin/composer/src/Composer/DependencyResolver/Solver.php on line 223
After some time I tried some other commands* too and got the same error again, so I don't think it is my own memory limit, especially since the exhausted size is over 2 GB. Does anyone have an idea why the memory is exhausted every time?
I am using a macOS system with Mojave 10.14.5.
*For example:
docker-compose exec php composer update --profile --ignore-platform-reqs --dry-run
docker-compose exec php composer require rubenrua/symfony-clean-tags-composer-plugin
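One thing worth checking (a sketch; this assumes the limit comes from PHP's memory_limit inside the php container, which Composer honors and which can be lifted via its COMPOSER_MEMORY_LIMIT environment variable):
# Disable Composer's memory limit for this one invocation (-1 means unlimited)
docker-compose exec php env COMPOSER_MEMORY_LIMIT=-1 composer require friendsofsylius/sylius-import-export-plugin --dev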

How do I run Docker Swarm's integration tests?

I've followed the instructions at https://github.com/docker/swarm/blob/master/CONTRIBUTING.md to run Swarm's integration tests, but they do not work. The command ./test/integration/run.sh gives unusual error messages (see http://pastebin.com/hynTXkNb for the full output).
The message about swappiness is the first thing that looks wrong. My kernel does support it - I tested it. /proc/sys/vm/swappiness is readable and writable, and configurable through sysctl.conf.
The next line that looks wrong is the chmod. It tries to access a file in /usr/local/bin, which is wrong because Docker is installed to /usr/bin, and because that file wouldn't be writable by anyone but root anyway.
I know the daemon is running, and working correctly. For example:
user#box:~$ docker run debian /bin/echo 'hello world asdf'
WARNING: Your kernel does not support memory swappiness capabilities, memory swappiness discarded.
hello world asdf
Does anyone know how to fix this issue and get the integration tests running?
If not, does anyone at least know where to dig into Docker's code to find out what is failing?
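A generic way to pinpoint which command in the harness actually fails (plain bash tracing, no Swarm-specific assumptions):
# Print every command the test script executes and keep a log for inspection
bash -x ./test/integration/run.sh 2>&1 | tee /tmp/swarm-integration.log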

Where is my /etc/sysctl.conf file? PostgreSQL FATAL: could not create shared memory segment

My goal is to install and fully set up PostgreSQL by following the RailsCasts video.
P.S. I am on Mountain Lion 10.8.
$ brew install postgresql
seems okay.
$ initdb /usr/local/var/postgres
A stream of ok's, then...
FATAL: could not create shared memory segment: Cannot allocate memory
DETAIL: Failed system call was shmget(key=1, size=2072576, 03600).
HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space, or exceeded your kernel's SHMALL parameter. You can either reduce the request size or reconfigure the kernel with larger SHMALL. To reduce the request size (currently 2072576 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
So, like a good young SO grasshopper, I start googling and come to this SO post:
PostgreSQL installation error -- Cannot allocate memory
The suggested answer in that post led me to this answer: http://willbryant.net/software/mac_os_x/postgres_initdb_fatal_shared_memory_error_on_leopard
$ sudo sysctl -w kern.sysv.shmall=65536
Password:
kern.sysv.shmall: 1024 -> 65536
$ sudo sysctl -w kern.sysv.shmmax=16777216
kern.sysv.shmmax: 4194304 -> 16777216
It looks like everything worked so far, but in order to protect my changes from a reboot, I need to update my /etc/sysctl.conf file. The problem is that I can't find it!
How do I locate this file? From my peanut-sized understanding of computers, no such file path exists - and if it did, what comes before the /etc? It certainly is not on my desktop. All I get is "no such file exists", but I don't know how to find this file.
Embarrassing. I was trying to cd into my file as if it were a directory. Just do $ cd /etc and look there.
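If sysctl.conf does not exist under /etc at all (on OS X it is not created by default, though it is read at boot when present - an assumption worth verifying on your version), you can create it with the values from above:
# Persist the shared-memory settings across reboots by creating /etc/sysctl.conf
sudo sh -c 'printf "kern.sysv.shmall=65536\nkern.sysv.shmmax=16777216\n" >> /etc/sysctl.conf'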
