I have an AIX 5.3 Server that has multiple users.
Is there a way I can dump the memory of a specific user or a specific process?
You can generate a core file with the gencore command:
gencore <pid of process> <file_name>
Make sure that full core dump support is enabled, or enable it first with:
chdev -l sys0 -a fullcore=true
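For example, to dump a specific user's process, you can look up its PID first and then generate the core (the user name, PID, and output path below are placeholders):
ps -fu someuser                # list that user's processes to find the PID
gencore 1234 /tmp/core.1234    # dump the memory image of PID 1234 to /tmp/core.1234
The resulting core file can then be inspected with a debugger such as dbx.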
I am trying to get the perf tool running on one of our Linux setups, which doesn't/can't have the Linux sources.
So I downloaded the Linux source on another machine and compiled perf (cd tools/perf; make).
I copied the perf binary to my target machine.
However, when I start recording, it says "couldn't synthesize bpf events".
root> perf record -a -g --call-graph dwarf -p 836
Warning:
PID/TID switch overriding SYSTEM
Couldn't synthesize bpf events.
[ perf record: Woken up 1 times to write data ]
Failed to read max cpus, using default of 4096
[ perf record: Captured and wrote 0.057 MB perf.data ]
Linux version running on our target machine: 5.4.66-rt38-intel-pk-preempt-rt
Code I used to compile perf: https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-stable-rt.git/log/?h=v5.4-rt
Because I get this 'couldn't synthesize bpf events' warning, I think I am not getting the user-space stack in the perf report.
What should I do to get rid of this error and fetch the user-space stack of a running process using perf? Advice, please!
CONFIG_BPF_SYSCALL was not enabled in the kernel config.
After enabling it, I can see that the 'couldn't synthesize bpf events' warning is gone.
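If your kernel exposes its configuration, you can verify the option before rebuilding (these two locations are common conventions, but neither is guaranteed to exist on a given distro):
zcat /proc/config.gz | grep CONFIG_BPF_SYSCALL      # available if CONFIG_IKCONFIG_PROC is enabled
grep CONFIG_BPF_SYSCALL /boot/config-$(uname -r)    # available if the distro ships the build config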
Marking it as answered.
influxdb 1.5.2
I've tried switching from the inmem index type to tsi1 according to the documentation:
https://docs.influxdata.com/influxdb/v1.5/administration/upgrading/#switching-from-in-memory-tsm-based-index-to-disk-tsi-based-index
change index-version = "tsi1" in the config file
stop influxdb
run the index migration for all data: sudo -H -u influxdb bash -c 'influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal/'
start the influxdb service
The index dirs were created, but the system started using even more memory than before :(
Also, I've checked the modification dates of the files inside the index dir, and they haven't changed in hours (they still date from when I completed the buildtsi command).
How can I be sure that influxdb started using the new index type?
I see that the devs are working on visibility in newer versions of influxdb:
https://github.com/influxdata/influxdb/pull/9777
https://github.com/influxdata/influxdb/issues/9707
But for now (in 1.5.x) it's absolutely unclear to me.
Make sure that the index was built successfully.
If your memory is not sufficient, the build process will be killed by the out-of-memory detection mechanism before it successfully finishes.
InfluxDB will then ignore the incomplete index files and use the inmem index instead.
Check /var/log/messages for OOM kills.
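For example (the log location and exact kernel message vary by distro, so treat these as sketches):
grep -i 'out of memory' /var/log/messages    # look for OOM-killer activity around the buildtsi run
dmesg | grep -i 'killed process'             # the kernel logs which process the OOM killer terminated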
I'm trying to increase the C-stack size in RStudio Server 0.99 on CentOS 6 by editing the /etc/rstudio/rserver.conf file as follows:
rsession-stack-limit-mb=20
But "rstudio-server verify-installation" returns this message:
The option 'rsession-stack-limit-mb' is deprecated and will be discarded.
If I put this setting in /etc/rstudio/rsession.conf, I get this message:
unrecognised option 'rsession-stack-limit-mb'
Can someone help me find the right configuration?
Thanks in advance
Diego
I guess you use the free version of RStudio Server. According to https://github.com/rstudio/rstudio/blob/master/src/cpp/server/ServerOptions.cpp, it seems you need a commercial version if you'd like to manage memory limits in RStudio Server.
Alternatively, you can use the "ulimit" command on CentOS, e.g., "ulimit -s 20000", and then run R from the Linux command line or in batch mode.
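A minimal sketch of that workaround (ulimit -s takes a value in KB, so 20000 roughly matches the 20 MB you tried; the script name is a placeholder):
ulimit -s 20000          # raise the stack limit for this shell session, in KB
R CMD BATCH myscript.R   # run R in batch mode under the raised limit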
I've just been looking into how to back up the database and have found that neo4j-shell -c dump > my-db-dump.cql looks like a good solution, which exports everything as a Cypher script that recreates everything when run (a bit like mysqldump for MySQL).
However, according to the official documentation, neo4j-shell has been deprecated in favour of cypher-shell, and I can't find an equivalent dump function for cypher-shell. Is there one? If not, what should I do instead of neo4j-shell -c dump? Or is there a better way of backing up the database (I have the community edition)? One advantage of the above solution is that you don't have to stop the database.
The most useful option is to shut down the database and then take backups using the new neo4j-admin command.
If you cannot shut down the graph, then you can manually copy the "graph.db" directory somewhere else and run neo4j-shell against the new location using the -path option. As far as version 3.1.1 is concerned, neo4j-shell works perfectly.
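A sketch of both approaches, assuming a Neo4j 3.1+ install where neo4j-admin dump is available (the paths below are assumptions; adjust them to your layout):
neo4j-admin dump --database=graph.db --to=/backups/graph.db.dump    # offline: run only after stopping the server
cp -a data/databases/graph.db /tmp/graph.db.copy                    # online alternative: copy the store aside
neo4j-shell -path /tmp/graph.db.copy -c dump > my-db-dump.cql       # dump the copy without touching the live DB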
I have a Solr setup with two cores. I want to schedule a core (core1, backend) for a full import frequently (e.g. every 5 mins), then swap it with the live (core0, serving) core via a shell command through a scheduler.
For the full-import command, I am using the following shell command:
wget -o - -q -t 1 http://localhost:8080/solr/core1/dataimport?command=full-import
That works fine. If I do the core swap from the browser by hitting
http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0, I get the latest update instantly in search. But if I schedule this URL as a shell command, similar to the dataimport one, it doesn't do the swap.
Did you try with
curl "http://localhost:8080/solr/admin/cores?action=SWAP&core=core1&other=core0"
from the shell? Note that without quotes the shell treats each & as a control operator (it backgrounds the command), so only ...?action=SWAP actually reaches Solr; quoting the whole URL keeps the core=core1 and other=core0 parameters attached.
There is a catch with SWAPs, though:
Apache Solr allows you to swap two cores around for non-Cloud configurations. They take each other's name, so it is a good way to push an updated core into production without downtime.
But an interesting question is how this is achieved. Normally, a core's name is its directory name too. So, does Solr rename the directory on the filesystem as well?
Not really! Instead, the name property in the core.properties file is updated to the name of the other core. Usually that property is used to give an alternative name to the core for when the directory naming conventions are not suitable.
The gotcha is - of course - that you still have two directories with right-looking names for the cores you see in the Admin UI. So it is very easy to forget that extra redirection/rename step when troubleshooting somebody else's - or even your own old - setup.
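To make that concrete, here is a hypothetical illustration of the state after swapping core0 and core1 (directory names assumed): the directories keep their old names, but each core.properties now carries the other core's name.
cat core0/core.properties    # prints: name=core1
cat core1/core.properties    # prints: name=core0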