Redis-server closed when using redis-benchmark - docker

I am using redis-benchmark on Ubuntu inside a Docker container.
I am experimenting with memory and CPU performance by limiting the container's resources through Docker.
For example:
docker run --cpus="2" -m=4M -it [imageID]
My image is based on ubuntu:latest with redis-server installed.
When I get into the container, I run redis-server in the background and then run redis-benchmark:
10000 changes in 60 seconds. Saving...
Background saving started by pid 17
DB saved on disk
RDB: 0MB of memory used by copy-on-write
Background saving terminated with success
Error: Server closed the connection
[1]+ Killed redis-server
An error shows up.
My redis-benchmark command, which uses -r to spread requests over a keyspace of 100000 random keys:
redis-benchmark -q -r 100000 -c 1
If this is caused by the memory limit, I would expect this command:
redis-benchmark -c 1 -r 100000 -n 400
to shut down redis-server as well.
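A quick way to check whether the kernel OOM killer is what terminated redis-server (a sketch; <containerID> is a placeholder, and since redis-server runs in the background rather than as the container's main process, the host kernel log is the more reliable signal):
# true only if Docker saw the container's main process OOM-killed
docker inspect -f '{{.State.OOMKilled}}' <containerID>
# the host kernel logs every OOM kill, including background processes
dmesg | grep -iE "out of memory|killed process"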

Related

How to use libvips to shrink giant images with limited memory

I have a Ruby on Rails web application that allows users to upload images, which then automatically get resized into small thumbnails using libvips and the ImageProcessing ruby gem. Sometimes users legitimately need to upload 100MP+ images. These large images break our server, which only has 1GB of RAM. If it's relevant, these images are almost always JPEGs.
What I'm hoping is to use libvips to first scale down these images to a size my server can handle (maybe under 8,000x8,000 pixels) without using lots of RAM. Then I would use that image to do the other things we already do, like converting the colorspace to sRGB, resizing, stripping metadata, etc.
Is this possible? If so, can you give an example of a vips or vipsthumbnail Linux CLI command?
I found a feature in ImageMagick that should theoretically solve this issue, mentioned in the two links below. But I don't want to switch the whole system to ImageMagick just for this.
https://legacy.imagemagick.org/Usage/formats/#jpg_read
https://github.com/janko/image_processing/wiki/Improving-ImageMagick-performance
P.S.: I'm using Heroku so if the RAM usage peaks at up to 2GB the action should still work.
(I've always been confused about why image processing seems to always require loading the entire image in RAM at once...)
UPDATE:
I'm providing more context because jcupitt's command is still failing for me.
This is the main software that is installed on the Docker container that is running libvips, as defined in the Dockerfile:
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y postgresql-client
# uglifier requires nodejs -- `apt-get install nodejs` only installs older version by default
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN apt-get install -y libvips libvips-dev libvips-tools
# install pdftotext
RUN apt-get install -y xpdf
I am limiting the memory usage of the sidekiq container to 500MB to be more similar to the production server. (I also tried limiting both the memory and the memory reservation to 1GB, and the same thing happens.) This is the config as specified in docker-compose.yml:
sidekiq:
  depends_on:
    - db
    - redis
  build: .
  command: sidekiq -c 1 -v -q mailers -q default -q low -q searchkick
  volumes:
    - '.:/myapp'
  env_file:
    - '.env'
  deploy:
    resources:
      limits:
        memory: 500M
      reservations:
        memory: 500M
This is the exact command I'm trying, based on the command that jcupitt suggested.
First I run docker stats --all to see the sidekiq container's memory usage after booting up, before running libvips:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
4d7e9ff9c7c7 sidekiq_1 0.48% 210.2MiB / 500MiB 42.03% 282kB / 635kB 133MB / 0B 7
I also check docker-compose exec sidekiq top and see a higher RAM limit; I think this is normal for Docker, since top reports the host's memory rather than the container's limit:
top - 18:39:48 up 1 day, 3:21, 0 users, load average: 0.01, 0.08, 0.21
Tasks: 3 total, 1 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.2 us, 1.5 sy, 0.0 ni, 97.1 id, 0.2 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3929.7 total, 267.4 free, 1844.1 used, 1818.1 buff/cache
MiB Swap: 980.0 total, 61.7 free, 918.3 used. 1756.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 607688 190620 12848 S 0.3 4.7 0:10.31 ruby
54 root 20 0 6984 3260 2772 R 0.3 0.1 0:00.05 top
39 root 20 0 4092 3256 2732 S 0.0 0.1 0:00.03 bash
Then I run the command:
docker-compose exec sidekiq bash
root@4d7e9ff9c7c7:/myapp# vipsheader /tmp/shrine20220728-1-8yqju5.jpeg
/tmp/shrine20220728-1-8yqju5.jpeg: 23400x15600 uchar, 3 bands, srgb, jpegload
VIPS_CONCURRENCY=1 vipsthumbnail /tmp/shrine20220728-1-8yqju5.jpeg --size 500x500
Then in another Terminal window I check docker stats --all again
In maybe 0.5s the memory usage shoots up to 500MB, the vipsthumbnail process dies, and the shell just prints "Killed".
libvips will almost always stream images rather than loading them in memory, so you should not see high memory use.
For example:
$ vipsheader st-francis.jpg
st-francis.jpg: 30000x26319 uchar, 3 bands, srgb, jpegload
$ ls -l st-francis.jpg
-rw-rw-r-- 1 john john 227612475 Sep 17 2020 st-francis.jpg
$ /usr/bin/time -f %M:%e vipsthumbnail st-francis.jpg --size 500x500
87412:2.57
So that's 87MB of memory and 2.5s. The image is around 3GB uncompressed. You should get the same performance with ruby-vips.
In fact there's not much useful concurrency for this sort of operation, so you can run libvips with a small threadpool.
$ VIPS_CONCURRENCY=1 /usr/bin/time -f %M:%e vipsthumbnail st-francis.jpg --size 500x500
52624:2.49
So with one thread in the threadpool it's about the same speed, but memory use is down to 50MB.
There are a few cases when this will fail. One is with interlaced (also called progressive) images.
These represent the image as a series of passes of increasingly higher detail. This can help when displaying an image to the user (the image appears in slowly increasing detail, rather than as a line moving down the screen), but unfortunately this also means that you don't get the final value of the first pixel until the entire image has been decompressed. This means you have to decompress the whole image into memory, and makes this type of file extremely unsuitable for large images of the sort you are handling.
You can detect an interlaced image in ruby-vips with:
if image.get_typeof("interlaced") != 0
  error "argh! can't handle this"
end
I would do that test early on in your application and block upload of this type of file.
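If you would rather do the same check from the shell before the file ever reaches Ruby, the field should also be visible to vipsheader (a sketch; it assumes, like the ruby-vips snippet above, that jpegload sets an "interlaced" field on progressive JPEGs, and upload.jpg is a placeholder):
# vipsheader -f prints a named header field and fails if it is absent
if vipsheader -f interlaced upload.jpg >/dev/null 2>&1; then
  echo "progressive JPEG detected -- rejecting upload"
fi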

Cannot run nodetool commands and cqlsh to Scylla in Docker

I am new to Scylla and I am following the instructions to try it in a container as per this page: https://hub.docker.com/r/scylladb/scylla/.
The following command ran fine.
docker run --name some-scylla --hostname some-scylla -d scylladb/scylla
I see the container is running.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e6c4e19ff1bd scylladb/scylla "/docker-entrypoint.…" 14 seconds ago Up 13 seconds 22/tcp, 7000-7001/tcp, 9042/tcp, 9160/tcp, 9180/tcp, 10000/tcp some-scylla
However, I'm unable to use nodetool or cqlsh. I get the following output.
$ docker exec -it some-scylla nodetool status
Using /etc/scylla/scylla.yaml as the config file
nodetool: Unable to connect to Scylla API server: java.net.ConnectException: Connection refused (Connection refused)
See 'nodetool help' or 'nodetool help <command>'.
and
$ docker exec -it some-scylla cqlsh
Connection error: ('Unable to connect to any servers', {'172.17.0.2': error(111, "Tried connecting to [('172.17.0.2', 9042)]. Last error: Connection refused")})
Any ideas?
Update
Looking at docker logs some-scylla, I see some errors in the logs; the last one is as follows.
2021-10-03 07:51:04,771 INFO spawned: 'scylla' with pid 167
Scylla version 4.4.4-0.20210801.69daa9fd0 with build-id eb11cddd30e88ef39c32c847e70181b5cf786355 starting ...
command used: "/usr/bin/scylla --log-to-syslog 0 --log-to-stdout 1 --default-log-level info --network-stack posix --developer-mode=1 --overprovisioned --listen-address 172.17.0.2 --rpc-address 172.17.0.2 --seed-provider-parameters seeds=172.17.0.2 --blocked-reactor-notify-ms 999999999"
parsed command line options: [log-to-syslog: 0, log-to-stdout: 1, default-log-level: info, network-stack: posix, developer-mode: 1, overprovisioned, listen-address: 172.17.0.2, rpc-address: 172.17.0.2, seed-provider-parameters: seeds=172.17.0.2, blocked-reactor-notify-ms: 999999999]
ERROR 2021-10-03 07:51:05,203 [shard 6] seastar - Could not setup Async I/O: Resource temporarily unavailable. The most common cause is not enough request capacity in /proc/sys/fs/aio-max-nr. Try increasing that number or reducing the amount of logical CPUs available for your application
2021-10-03 07:51:05,316 INFO exited: scylla (exit status 1; not expected)
2021-10-03 07:51:06,318 INFO gave up: scylla entered FATAL state, too many start retries too quickly
Update 2
The reason for the error was described on the Docker Hub page linked above. I had to start the container specifying the number of CPUs with --smp 1, as follows.
docker run --name some-scylla --hostname some-scylla -d scylladb/scylla --smp 1
According to the above page:
This command will start a Scylla single-node cluster in developer mode
(see --developer-mode 1) limited by a single CPU core (see --smp).
Production grade configuration requires tuning a few kernel parameters
such that limiting number of available cores (with --smp 1) is the
simplest way to go.
Multiple cores requires setting a proper value to the
/proc/sys/fs/aio-max-nr. On many non production systems it will be
equal to 65K. ...
As you have found out, in order to use additional CPU cores you'll need to increase the fs.aio-max-nr kernel parameter.
You may run as root:
# sysctl -w fs.aio-max-nr=65535
That should be enough for most systems. Should you still see an error preventing Scylla from using all of your CPU cores, increase the value further.
Do note that the above setting is not persistent. Edit /etc/sysctl.conf to make it persistent across reboots.
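For example, a minimal sketch of applying the setting and making it survive a reboot (the value is the one suggested above; raise it if Scylla still complains):
# apply immediately
sysctl -w fs.aio-max-nr=65535
# persist across reboots
echo "fs.aio-max-nr = 65535" >> /etc/sysctl.conf
# re-read the file to confirm
sysctl -p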

docker - driver "devicemapper" failed to remove root filesystem after process in container killed

I am using Docker version 17.06.0-ce on Red Hat with devicemapper storage. I am launching a container running a long-running service. The master process inside the container sometimes dies for whatever reason, and I get the following error message:
/bin/bash: line 1: 40 Killed python -u scripts/server.py start go
I would like the container to exit and be restarted by Docker. However, the container never exits. If I remove it manually, I get the following error:
Error response from daemon: driver "devicemapper" failed to remove root filesystem.
After googling, I tried a bunch of things:
docker rm -f <container>
rm -f <path to mount>
umount <path to mount>
All of these result in "device is busy". The only remedy right now is to reboot the host system, which is obviously not a long-term solution.
Any ideas?
I had the same problem and the solution was a real surprise.
So here is the error from docker rm:
$ docker rm 08d51aad0e74
Error response from daemon: driver "devicemapper" failed to remove root filesystem for 08d51aad0e74060f54bba36268386fe991eff74570e7ee29b7c4d74047d809aa: remove /var/lib/docker/devicemapper/mnt/670cdbd30a3627ae4801044d32a423284b540c5057002dd010186c69b6cc7eea: device or resource busy
Then I did the following (basically going through all processes and looking for the docker mount in their mountinfo):
$ grep docker /proc/*/mountinfo | grep 958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac
/proc/20416/mountinfo:629 574 253:15 / /var/lib/docker/devicemapper/mnt/958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,relatime shared:288 - xfs /dev/mapper/docker-253:5-786536-958722d105f8586978361409c9d70aff17c0af3a1970cb3c2fb7908fe5a310ac rw,nouuid,attr2,inode64,logbsize=64k,sunit=128,swidth=128,noquota
This gave me the PID of the offending process keeping the mount busy: 20416 (the number after /proc/).
So I ran ps -p on it and, to my surprise, found:
[devops@dp01app5030 SeGrid]$ ps -p 20416
PID TTY TIME CMD
20416 ? 00:00:19 ntpd
A true WTF moment. Some more googling then led me to this issue: https://github.com/docker/for-linux/issues/124
Turns out I had to restart the ntp daemon, and that fixed the issue!!!
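For reference, a sketch of that fix (the service name is an assumption and varies by distro: ntpd on Red Hat-family systems, ntp on Debian/Ubuntu):
# release the mount held by the time daemon
sudo systemctl restart ntpd
# the stuck container should now be removable
docker rm 08d51aad0e74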

How to increase the swap space available in the boot2docker virtual machine?

I would like to run a docker container that requires a lot of memory on a machine that doesn't have much RAM. I have been trying to increase the swap space available to the container, to no avail. Here is the last command I tried:
docker run -d -m 1000M --memory-swap=10000M --name=my_container my_image
Following these tips on how to check memory metrics I found the following:
$ boot2docker ssh
docker#boot2docker:~$ cat /sys/fs/cgroup/memory/docker/35af5a072751c7af80ce7a255a01ab3c14b3ee0e3f15341f7bb22a777091c67b/memory.stat
cache 454656
rss 65015808
rss_huge 29360128
mapped_file 208896
writeback 0
swap 0
pgpgin 31532
pgpgout 22702
pgfault 49372
pgmajfault 0
inactive_anon 28672
active_anon 65183744
inactive_file 241664
active_file 16384
unevictable 0
hierarchical_memory_limit 1048576000
hierarchical_memsw_limit 10485760000
total_cache 454656
total_rss 65015808
total_rss_huge 29360128
total_mapped_file 208896
total_writeback 0
total_swap 0
total_pgpgin 31532
total_pgpgout 22702
total_pgfault 49372
total_pgmajfault 0
total_inactive_anon 28672
total_active_anon 65183744
total_inactive_file 241664
total_active_file 16384
total_unevictable 0
Is it possible to run a container that requires 5G of memory on a machine that only has 4G of physical memory?
This GitHub issue was very helpful in figuring out how to increase the swap space available in the boot2docker-vm. Adapting it to my situation, I used the following commands to ssh into the boot2docker-vm and set up a new 4GB swapfile:
boot2docker ssh
export SWAPFILE=/mnt/sda1/swapfile
sudo dd if=/dev/zero of=$SWAPFILE bs=1024 count=4194304
sudo mkswap $SWAPFILE
sudo chmod 600 $SWAPFILE
sudo swapon $SWAPFILE
exit
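One caveat: boot2docker's root filesystem is ephemeral, so the swapon does not survive a VM restart even though the swapfile on /mnt/sda1 does. A sketch of re-enabling it at boot, assuming the standard /var/lib/boot2docker/bootlocal.sh startup hook:
# run inside boot2docker ssh; bootlocal.sh is executed at the end of boot
echo "swapon /mnt/sda1/swapfile" | sudo tee -a /var/lib/boot2docker/bootlocal.sh
sudo chmod +x /var/lib/boot2docker/bootlocal.sh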

SSH and -bash: fork: Cannot allocate memory VPS Ubuntu

I am hosting my Rails app on an Ubuntu 12.04 VPS with Nginx + Unicorn. After deployment everything is fine, but a few hours later, when I ssh to the VPS, I get this message:
-bash: fork: Cannot allocate memory
-bash: wait_for: No record of process 4201
-bash: wait_for: No record of process 4201
If I run any command, it just returns:
-bash: fork: Cannot allocate memory.
Seems you have run out of memory. Many VPS servers are set up with no swap, so when you run out of memory, the kernel will kill things off in a seemingly random manner.
The easiest way to fix it is to get more memory provisioned to your VPS, which will likely cost more money. The next best way (other than running less software and optimizing the memory use of everything that runs) is to add a swap partition or swap file.
For a 1GB swap file (as root):
dd if=/dev/zero of=/swapfile bs=1M count=1024
mkswap /swapfile
swapon /swapfile
Be sure to add it to /etc/fstab too as:
/swapfile none swap defaults 0 0
That will make it come back after reboot.
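To confirm the swap came online, a quick check:
# both should now list /swapfile with the expected size
swapon -s
free -m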
To get out of this condition without rebooting, you can trigger the OOM killer manually as follows:
echo 1 > /proc/sys/kernel/sysrq
echo f > /proc/sysrq-trigger
echo 0 > /proc/sys/kernel/sysrq
Having done so, you can inspect dmesg to find the process responsible for taking all your memory.
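For example (a sketch; the exact wording of the messages varies by kernel version):
# show which process the OOM killer selected and why
dmesg | grep -iE "out of memory|killed process"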
