Check total memory of Redis v2.8

I'm using Redis Desktop Manager for Windows to connect and run commands.
The INFO command returns information and statistics about the Redis server.
Part of the return string for the INFO command is as below (taken from here):
...
# Memory
used_memory:9338208
used_memory_human:8.91M
used_memory_rss:14454784
used_memory_rss_human:13.79M
used_memory_peak:13677584
used_memory_peak_human:13.04M
total_system_memory:4142215168
total_system_memory_human:3.86G
...
But in my case I'm not getting the total_system_memory and total_system_memory_human values.
They are simply missing when I run the INFO command.
Actual output when I run INFO:
...
# Memory
used_memory:561892576
used_memory_human:535.86M
used_memory_rss:575049728
used_memory_peak:562210816
used_memory_peak_human:536.17M
used_memory_lua:36864
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
...
So how can I find out the total memory of my Redis instance?

It's about the version change.
The first output you are referring to comes from a higher version, probably 3.2 (a guess).
Your server is running an older one.
You can check the version in the INFO command output; the field is redis_version.
Edit:
Think of it as a feature. In the older version there is no such field as total system memory; the higher versions added it. That's all.
Basically, in v2.8 the total memory available to Redis is the RAM of your system, unless you have set a maxmemory value in the redis.conf file.
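For example, a quick sketch of both checks from the command line (assuming redis-cli can reach your server on the default host and port):
redis-cli INFO server | grep redis_version
redis-cli CONFIG GET maxmemory
A maxmemory of 0 means Redis is bounded only by the system RAM.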

Related

influxdb: unclear usage of tsi1 index after upgrading from in-memory type

influxdb 1.5.2
I've tried switching from the inmem index type to tsi1 according to the documentation:
https://docs.influxdata.com/influxdb/v1.5/administration/upgrading/#switching-from-in-memory-tsm-based-index-to-disk-tsi-based-index
1. change index-version = "tsi1" in the config file
2. stop influxdb
3. run the index migration for all data: sudo -H -u influxdb bash -c 'influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal/'
4. run the influxdb service
The index dirs were created, but the system started using even more memory than before :(
I also checked the modification dates of the files inside the index dir, and they hadn't changed hours later (they still matched the time when I completed the buildtsi command).
How can I be sure that influxdb has started using the new index type?
I see that the devs are working on visibility in newer versions of influxdb:
https://github.com/influxdata/influxdb/pull/9777
https://github.com/influxdata/influxdb/issues/9707
But for now (in 1.5.x) it's absolutely unclear to me.
Make sure that the index was built successfully.
If your memory is not sufficient, the build process will be killed by the out-of-memory detection mechanism before it finishes successfully.
InfluxDB will then ignore the incomplete index files and use the inmem index instead.
Check /var/log/messages for OOM kills.
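For example, a quick sketch of both checks (the database/retention-policy/shard path components below are placeholders for your own layout):
grep -i 'out of memory' /var/log/messages
dmesg | grep -i 'killed process'
ls -l /var/lib/influxdb/data/<database>/<retention_policy>/<shard_id>/index
If the build completed, each shard directory should contain an index directory with timestamps from when buildtsi finished.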

Geth 1.8.3, can't sync to rinkeby.io - Stuck at: IPC endpoint opened

I can't seem to get the latest version of Geth, 1.8.3, working. It starts but never begins syncing.
I'm trying to get it onto the rinkeby.io testnet for smart contract testing.
I had success with 1.7.3 after downloading it and using ADD to copy the files into the image. But I would like to automate the build so that it can be used in Google Cloud with a deployment.yml.
Currently I'm testing on my local machine, since I know the firewall worked with 1.7.3 (no specific rules set).
The Dockerfile builds just fine and I can see that it shows up in the rinkeby.io node list, but even after 1 hour not a single block has been synced.
It's stuck on: IPC endpoint opened
With 1.7.3 it took 10-15 seconds to start the sync.
Dockerfile
# ----- 1st stage build -----
FROM golang:1.9-alpine as builder
RUN apk add --no-cache make gcc musl-dev linux-headers git curl
WORKDIR /
RUN curl -o rinkeby.json https://www.rinkeby.io/rinkeby.json
RUN git clone https://github.com/ethereum/go-ethereum.git
RUN cd /go-ethereum && make geth
# ----- 2nd stage build -----
FROM alpine:latest
RUN apk add --no-cache ca-certificates
COPY --from=builder rinkeby.json $HOME/rinkeby.json
COPY --from=builder /go-ethereum/build/bin/geth /usr/local/bin/
VOLUME [ "/root/.rinkeby/geth/lightchaindata" ]
EXPOSE 8545 8546 30303 30304 30303/udp 30304/udp
RUN geth --datadir=$HOME/.rinkeby init rinkeby.json
CMD ["sh", "-c", "geth --networkid=4 --datadir=$HOME/.rinkeby --rpcaddr 0.0.0.0 --syncmode=fast --ethstats='Oxy:Respect my authoritah!#stats.rinkeby.io' --bootnodes=enode://a24ac7c5484ef4ed0c5eb2d36620ba4e4aa13b8c84684e1b4aab0cebea2ae45cb4d375b77eab56516d34bfbd3c1a833fc51296ff084b770b94fb9028c4d25ccf#52.169.42.101:30303?discport=30304"]
Console output when I run the docker image:
INFO [04-04|13:14:10] Maximum peer count ETH=25 LES=0 total=25
INFO [04-04|13:14:10] Starting peer-to-peer node instance=Geth/v1.8.4-unstable-6ab9f0a1/linux-amd64/go1.9.2
INFO [04-04|13:14:10] Allocated cache and file handles database=/root/.rinkeby/geth/chaindata cache=768 handles=1024
WARN [04-04|13:14:10] Upgrading database to use lookup entries
INFO [04-04|13:14:10] Database deduplication successful deduped=0
INFO [04-04|13:14:10] Initialised chain configuration config="{ChainID: 4 Homestead: 1 DAO: <nil> DAOSupport: false EIP150: 2 EIP155: 3 EIP158: 3 Byzantium: 1035301 Constantinople: <nil> Engine: clique}"
INFO [04-04|13:14:10] Initialising Ethereum protocol versions="[63 62]" network=4
INFO [04-04|13:14:10] Loaded most recent local header number=0 hash=6341fd…67e177 td=1
INFO [04-04|13:14:10] Loaded most recent local full block number=0 hash=6341fd…67e177 td=1
INFO [04-04|13:14:10] Loaded most recent local fast block number=0 hash=6341fd…67e177 td=1
INFO [04-04|13:14:10] Regenerated local transaction journal transactions=0 accounts=0
INFO [04-04|13:14:10] Starting P2P networking
INFO [04-04|13:14:12] UDP listener up self=enode://6d27f79b944aa75787213835ff512b03ec51434b2508a12735bb365210e57b0084795e5275150974cb976525811c65a49b756ac069ca78e4bd6929ea4d609b65@[::]:30303
INFO [04-04|13:14:12] Stats daemon started
INFO [04-04|13:14:12] RLPx listener up self=enode://6d27f79b944aa75787213835ff512b03ec51434b2508a12735bb365210e57b0084795e5275150974cb976525811c65a49b756ac069ca78e4bd6929ea4d609b65@[::]:30303
INFO [04-04|13:14:12] IPC endpoint opened url=/root/.rinkeby/geth.ipc
I found out what was wrong after rebuilding the Dockerfile from scratch.
At the end of my CMD line there is this ending:
?discport=30304
By removing it, it works as intended.
This was a remnant from a different user's code.
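For reference, a sketch of the corrected flag (the same enode as in the Dockerfile above, just without the trailing query string):
--bootnodes=enode://a24ac7c5484ef4ed0c5eb2d36620ba4e4aa13b8c84684e1b4aab0cebea2ae45cb4d375b77eab56516d34bfbd3c1a833fc51296ff084b770b94fb9028c4d25ccf@52.169.42.101:30303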
For me the problem is that the fast sync keeps running for hours; it takes forever and never stops on its own.
I ran this command:
geth --rinkeby --syncmode "fast" --rpc --rpcapi db,eth,net,web3,personal --cache=1024 --rpcport 8545 --rpcaddr 127.0.0.1 --rpccorsdomain "*"
It keeps consuming storage while running (over 5 GB); sometimes I got the error "No space left on device", so I deleted everything and started over.
I ran eth.syncing; here's what I got:
https://imgur.com/a/fj2fvpy (after 3 hours of sync)
https://imgur.com/a/SqrnW8x (30 min after the first eth.syncing check)
I tried 100 * eth.syncing.currentBlock / eth.syncing.highestBlock and got 99.9969...; it keeps slowing down and never reaches 99.9970.
I am running Geth/v1.8.4-stable-2423ae01/linux-amd64/go1.10 on Ubuntu 16.04 LTS.
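Note that with fast sync the block percentage alone can be misleading: the state trie has to finish downloading too. A quick sketch for the geth console, assuming your build exposes the state counters in eth.syncing (1.8.x does):
eth.syncing.pulledStates / eth.syncing.knownStates
As long as pulledStates trails knownStates, the sync is still pulling state entries even though currentBlock looks nearly caught up.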
After going round and round, I discovered that using the '--bootnodes' flag is the simplest way to get your private peers to recognize each other across a network:
Geth.exe --bootnodes "enode://A@B:30303" --datadir="C:\Ledger" --nat=extip:192.168.0.9 ...
When spinning up your IPC pipe, specify a node that is likely to always be up (likely your first init'd node). You also need to be very explicit about IP addresses using the '--nat' flag. In the example above, A is the node ID (public key) and B is the IP address of the node that will be used to auto-detect all other nodes and build peer connections.
I derived this approach from information found in this link on the Geth GitHub site.

RStudio Server C-Stack memory allocation setting in rsession.conf

I'm trying to increase the C-stack size in RStudio Server 0.99 on CentOS 6 by editing the /etc/rstudio/rserver.conf file as follows:
rsession-stack-limit-mb=20
But "rstudio-server verify-installation" returns this message:
The option 'rsession-stack-limit-mb' is deprecated and will be discarded.
If I put this setting within /etc/rstudio/rsession.conf I obtain this message:
unrecognised option 'rsession-stack-limit-mb'
Someone can help me to find right configuration?
Thanks in advance
Diego
I guess you are using the free version of RStudio Server. According to https://github.com/rstudio/rstudio/blob/master/src/cpp/server/ServerOptions.cpp, it seems you need the commercial version if you'd like to manage memory limits in RStudio Server.
Alternatively, you can use the "ulimit" command on CentOS, e.g. "ulimit -s 20000", and then run R from the Linux command line or in batch mode.
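A minimal sketch of that workaround (ulimit -s takes kilobytes, so 20000 roughly matches the 20 MB the deprecated option targeted; Cstack_info() is base R and reports the stack limit the session sees):
ulimit -s 20000
R -e 'Cstack_info()'
Note that ulimit only affects processes started from the same shell, so it will not apply to sessions spawned by the RStudio Server daemon.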

How can I run Neo4j with a larger heap size, -server mode, and the correct GC strategy?

As someone who has never really messed with the JVM much, how can I ensure my Neo4j instances are running with all of the recommended JVM settings, e.g. heap size, server mode, and -XX:+UseConcMarkSweepGC?
Should these be set inside a config file? Can I set them dynamically at runtime? Are they set at a system level? Can I have different settings when running two instances of Neo4j on the same machine?
It is a bit fuzzy to me at what point all of these things get set.
I am running Neo4j inside a Docker container, so that is something to consider as well.
The Dockerfile is as follows; I am starting Neo4j with the console command.
FROM dockerfile/java:oracle-java8
# INSTALL OS DEPENDENCIES AND NEO4J
ADD /files/neo4j-enterprise-2.1.3-unix.tar.gz /opt/neo
RUN rm /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
ADD /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
#RUN mv -f /files/neo4j-server.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j-server.properties
EXPOSE 7474
CMD ["console"]
ENTRYPOINT ["/opt/neo/neo4j-enterprise-2.1.3/bin/neo4j"]
Ok, so you are using the Neo4j server script. In this case you should configure the low-level JVM properties in neo4j.properties, which should also live in the conf directory. Basically, do the same thing for neo4j.properties as you already do for neo4j-server.properties: create the properties file in your Docker context and configure the properties you want to add. Then in the Dockerfile use:
ADD /files/neo4j.properties /opt/neo/neo4j-enterprise-2.1.3/conf/neo4j.properties
The syntax in the properties file is the following (from the documentation):
# initial heap size (in MB)
wrapper.java.initmemory=<value>
# maximum heap size (in MB)
wrapper.java.maxmemory=<value>
# additional literal JVM parameter, where N is a number for each
wrapper.java.additional.N=<value>
See also http://docs.neo4j.org/chunked/stable/server-performance.html.
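Putting it together, a sketch of a neo4j.properties fragment covering the settings asked about (the heap size is an illustrative value, not a recommendation):
# 4 GB heap, same initial and maximum size (illustrative)
wrapper.java.initmemory=4096
wrapper.java.maxmemory=4096
# server mode and the CMS collector
wrapper.java.additional.1=-server
wrapper.java.additional.2=-XX:+UseConcMarkSweepGC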
One way to test whether the settings are applied is to run jinfo <pid> in the Docker container, where <pid> is the process id of the Neo4j JVM. To enter the container, you can either change the entrypoint to /bin/bash at the command line when you run the container, or you can use nsenter. The latter would be my choice.
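A sketch of that verification, assuming docker exec is available, the container is named neo4j, and pgrep plus the JDK's jinfo exist in the image (all assumptions):
# dump the JVM flags of the java process inside the container
docker exec neo4j sh -c 'jinfo $(pgrep -f java)'
Look for flags such as -XX:MaxHeapSize and -XX:+UseConcMarkSweepGC in the output.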

Criu/crtools restore fails to restore process on a different machine

I am trying to save a process to disk using CRIU. I am able to dump and restore it on the same machine, but when I try to restore the saved image on a different machine I get an error.
I executed the yes command and found its pid using ps aux | grep yes.
Then, to dump it, I ran:
sudo ./criu dump -t 7483 -D ~/dumped --shell-job
Then I copied the "dumped" directory to another machine and tried to restore it with the following command:
sudo ./criu restore -t 7483 -D ../dumped/ --shell-job
but got the following error:
(00.058476) Error (cr-restore.c:956): 7483 killed by signal 7
(00.058526) Error (cr-restore.c:1279): Restoring FAILED.
How do I resolve this? I want to migrate a process to a different machine with an exactly similar configuration.
Configuration:
Ubuntu 12.04 64-bit desktop
Linux 3.11.0-19-generic
RAM: 4 GB
Output of lscpu
Are you able to restore this process on the machine where you dumped it?
Could you run the restore with additional flags to get a verbose log? Like so:
sudo ./criu restore -D ../dumped/ --shell-job -v4 -o restore.log
And provide this log somehow?
Btw, the -t option at restore is obsolete. But it doesn't matter in this case. =)
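Signal 7 is SIGBUS, and a common cause when restoring on different hardware is a CPU feature mismatch between the two machines. A hedged sketch, assuming your criu build includes the cpuinfo subcommand:
sudo ./criu cpuinfo dump -D ~/dumped
sudo ./criu cpuinfo check -D ../dumped/
Run the dump on the source machine so the CPU info is stored with the images, then run the check on the destination before restoring.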
