How to generate a grayscale PAM file using ImageMagick?

It is trivial to generate P5 PGM / P6 PPM files using ImageMagick:
$ convert -depth 12 -size 512x512 xc:white white12.pgm
$ convert -depth 12 -size 512x512 xc:white white12.ppm
This correctly gives:
% head -3 white12.ppm white12.pgm
==> white12.ppm <==
P6
512 512
4095
==> white12.pgm <==
P5
512 512
4095
Now I am struggling to get a PAM (P7) grayscale file. All I could find is:
$ convert -depth 12 -size 512x512 xc:white white12.pam
which gives the RGB one:
% head -7 white12.pam
P7
WIDTH 512
HEIGHT 512
DEPTH 3
MAXVAL 4095
TUPLTYPE RGB
ENDHDR
How can I generate the GRAYSCALE PAM one?
I reported a bug upstream, in case there is something bogus in my ImageMagick version:
https://github.com/ImageMagick/ImageMagick/issues/5027

I think there is a bug here in ImageMagick. This might be a workaround:
magick -depth 12 -size 512x512 gradient: -colorspace gray -compress none pgm:- | awk '
  NR==2 {w=$1; h=$2}  # save width and height from header line 2
  NR<4  {next}        # ignore header lines 1-3
  NR==4 {printf("P7\nWIDTH %d\nHEIGHT %d\nDEPTH 1\nMAXVAL 4095\nTUPLTYPE GRAYSCALE\nENDHDR\n", w, h);}
  1                   # pass remaining lines through unchanged' > result.pam
That produces this:
P7
WIDTH 512
HEIGHT 512
DEPTH 1
MAXVAL 4095
TUPLTYPE GRAYSCALE
ENDHDR
65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535 65535
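If the netpbm tools are installed, pamfile can sanity-check the rewritten file (a sketch: pamfile parses the PAM header and reports width, height, depth, maxval and tuple type):
$ pamfile result.pam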

In ImageMagick, to create text-format NetPBM files, add -compress none:
convert -depth 12 -size 512x512 xc:white -compress none white12.pam
head -7 white12.pam
P7
WIDTH 512
HEIGHT 512
DEPTH 3
MAXVAL 4095
TUPLTYPE RGB
ENDHDR
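Combining the two answers, a binary grayscale PAM can also be assembled by hand: write the P7 header yourself, then append the raw samples from a binary gray PGM. A sketch, assuming ImageMagick emits the usual three-line PGM header with no comment lines:
magick -depth 12 -size 512x512 xc:white -colorspace gray gray12.pgm
{ printf 'P7\nWIDTH 512\nHEIGHT 512\nDEPTH 1\nMAXVAL 4095\nTUPLTYPE GRAYSCALE\nENDHDR\n'
  tail -n +4 gray12.pgm   # skip the 3-line P5 header, keep the raw samples
} > gray12.pam
Both formats store samples as 2-byte big-endian values when MAXVAL exceeds 255, so the pixel data can be appended unchanged.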

Related

Destination Host unreachable from inside Docker Container to remote SQL Server Database behind VPN

I'm trying to "dockerize" a .NET application.
My database is on the company's servers and we connect to them through a VPN with two-factor authentication. I can do this correctly with no problem.
My app runs correctly without Docker, and I can access the database from the app and from other tools like SSMS.
The problem comes when I try to run the app from a Docker container. Here is my docker compose file:
services:
  orenes.procedimientos.firma.api:
    environment:
      - TZ=CET
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:6555
      - ASPNETCORE_ConnectionStrings__FirmasConnString=Server=10.1.33.34;Database=YYYY;User ID=ZZZZ;password=******
    image: ${DOCKER_REGISTRY-}orenesprocedimientosfirmaapi
    extra_hosts:
      - "SV-GORDEVSQL:10.1.33.34"
    build:
      context: ../../
      dockerfile: Orenes.Procedimientos.Firma.API/Dockerfile
      network: host
    ports:
      - 6555:6555
    networks:
      - vpn
networks:
  vpn:
    ipam:
      config:
        - subnet: 10.1.0.0/20
If I go inside the container and try to ping the server address, I receive
From 10.1.0.1 icmp_seq=2 Destination Host Unreachable
This is the output of the ifconfig command:
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.1.0.2 netmask 255.255.255.0 broadcast 10.1.0.255
ether 02:42:0a:01:00:02 txqueuelen 0 (Ethernet)
RX packets 330 bytes 394185 (384.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 278 bytes 16675 (16.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 15 bytes 1300 (1.2 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 15 bytes 1300 (1.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
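(For anyone reproducing this, the network Compose created and the container's routing table can be inspected as below. This is only a diagnostic sketch: the network name <project>_vpn is an assumption, and the image must contain iproute2.)
docker network inspect <project>_vpn --format '{{json .IPAM.Config}}'
docker compose exec orenes.procedimientos.firma.api ip route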
Running Docker Desktop on Windows 10.
If someone can help, I will be really grateful.
Thx!

Docker cannot get specific IP

Hi, I need to assign a specific IP to each Docker container for my test automation program called SIPp.
I cannot ping or telnet to 192.168.173.215.
Here is my configuration:
version: '3.3'
services:
  sipp4:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: sipp4
    networks:
      mynetwork:
        ipv4_address: 192.168.128.2
    volumes:
      - ./sipp-3.4.1/:/opt/app/sipp
    environment:
      - "TZ=America/Los_Angeles"
    ulimits:
      nofile:
        soft: 200000
        hard: 400000
    working_dir: /opt/app/sipp
    command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 192.168.128.2 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
networks:
  mynetwork:
    ipam:
      driver: default
      config:
        - subnet: 192.168.128.0/18
          gateway: 192.168.128.200
I am sure about the subnet and gateway because I can assign an IP in that range to a VMware virtual host.
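(As a cross-check, the network Docker actually created and the host's route to the target can be inspected like this; a sketch, with the network name assumed to be <project>_mynetwork:)
docker network inspect <project>_mynetwork
ip route get 192.168.173.215   # run on the Docker host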
Here is ifconfig inside the container (bash):
ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.128.2 netmask 255.255.192.0 broadcast 192.168.191.255
ether 02:42:c0:a8:80:02 txqueuelen 0 (Ethernet)
RX packets 7 bytes 586 (586.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 5 bytes 210 (210.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 3 bytes 1728 (1.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3 bytes 1728 (1.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Here is the ip a output:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
389: eth0@if390: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:80:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.128.2/18 brd 192.168.191.255 scope global eth0
valid_lft forever preferred_lft forever
On the other hand, when I use the configuration below (host networking), the container can ping and access 192.168.173.215, and the auto-assigned IP is 172.17.0.1:
sipp1:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: sipp1
  network_mode: host
  volumes:
    - ./sipp-3.4.1/:/opt/app/sipp
  environment:
    - "TZ=America/Los_Angeles"
  ulimits:
    nofile:
      soft: 200000
      hard: 400000
  working_dir: /opt/app/sipp
  command: ./sipp 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -i 172.17.0.1 -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err
When I use the configuration below, it gets IP 172.18.0.2 and again cannot ping anywhere:
sipp4:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: sipp4
  volumes:
    - ./sipp-3.4.1/:/opt/app/sipp
  environment:
    - "TZ=America/Los_Angeles"
  ulimits:
    nofile:
      soft: 200000
      hard: 400000
  working_dir: /opt/app/sipp
  command: 192.168.173.215:5060 -sf callerCall.xml -inf callerCall.csv -aa -m 1 -trace_msg -t un -skip_rlimit -trace_err

EventStoreDB v5 fails to run after using sudo

I normally run EventStore with eventstore --run-projections=System, but when trying to run queries, macOS throws a security complaint: "libjs1.dylib" cannot be opened because the developer cannot be verified.
So I ran sudo eventstore --run-projections=System. This did not fix the issue. I went to System Preferences > Security > General and gave access to libjs1.dylib, and the query ran and returned no value.
Then I realized all the data from EventStore was gone.
I thought maybe it was because of sudo? I ran without sudo and now I get this error:
[34307,01,18:12:12.463]
"ES VERSION:" "5.0.5.0" ("HEAD"/"b92e517ced0aada066f9f525e02082cdbdb34d7f", "Fri, 13 Sep 2019 16:29:49 +0200")
[34307,01,18:12:12.487] "OS:" MacOS (Unix 19.5.0.0)
[34307,01,18:12:12.494] "RUNTIME:" "5.16.0.220 (2018-06/bb3ae37d71a Fri Nov 16 17:12:11 EST 2018)" (64-bit)
[34307,01,18:12:12.494] "GC:" "2 GENERATIONS"
[34307,01,18:12:12.494] "LOGS:" "/var/log/eventstore"
MODIFIED OPTIONS:
RUN PROJECTIONS: System (Command Line)
DEFAULT OPTIONS:
HELP: False (<DEFAULT>)
VERSION: False (<DEFAULT>)
LOG: /var/log/eventstore (<DEFAULT>)
CONFIG: <empty> (<DEFAULT>)
DEFINES: <empty> (<DEFAULT>)
WHAT IF: False (<DEFAULT>)
START STANDARD PROJECTIONS: False (<DEFAULT>)
DISABLE HTTP CACHING: False (<DEFAULT>)
MONO MIN THREADPOOL SIZE: 10 (<DEFAULT>)
INT IP: 127.0.0.1 (<DEFAULT>)
EXT IP: 127.0.0.1 (<DEFAULT>)
INT HTTP PORT: 2112 (<DEFAULT>)
EXT HTTP PORT: 2113 (<DEFAULT>)
INT TCP PORT: 1112 (<DEFAULT>)
INT SECURE TCP PORT: 0 (<DEFAULT>)
EXT TCP PORT: 1113 (<DEFAULT>)
EXT SECURE TCP PORT ADVERTISE AS: 0 (<DEFAULT>)
EXT SECURE TCP PORT: 0 (<DEFAULT>)
EXT IP ADVERTISE AS: <empty> (<DEFAULT>)
EXT TCP PORT ADVERTISE AS: 0 (<DEFAULT>)
EXT HTTP PORT ADVERTISE AS: 0 (<DEFAULT>)
INT IP ADVERTISE AS: <empty> (<DEFAULT>)
INT SECURE TCP PORT ADVERTISE AS: 0 (<DEFAULT>)
INT TCP PORT ADVERTISE AS: 0 (<DEFAULT>)
INT HTTP PORT ADVERTISE AS: 0 (<DEFAULT>)
INT TCP HEARTBEAT TIMEOUT: 700 (<DEFAULT>)
EXT TCP HEARTBEAT TIMEOUT: 1000 (<DEFAULT>)
INT TCP HEARTBEAT INTERVAL: 700 (<DEFAULT>)
EXT TCP HEARTBEAT INTERVAL: 2000 (<DEFAULT>)
GOSSIP ON SINGLE NODE: False (<DEFAULT>)
CONNECTION PENDING SEND BYTES THRESHOLD: 10485760 (<DEFAULT>)
CONNECTION QUEUE SIZE THRESHOLD: 50000 (<DEFAULT>)
FORCE: False (<DEFAULT>)
CLUSTER SIZE: 1 (<DEFAULT>)
NODE PRIORITY: 0 (<DEFAULT>)
MIN FLUSH DELAY MS: 2 (<DEFAULT>)
COMMIT COUNT: -1 (<DEFAULT>)
PREPARE COUNT: -1 (<DEFAULT>)
ADMIN ON EXT: True (<DEFAULT>)
STATS ON EXT: True (<DEFAULT>)
GOSSIP ON EXT: True (<DEFAULT>)
DISABLE SCAVENGE MERGING: False (<DEFAULT>)
SCAVENGE HISTORY MAX AGE: 30 (<DEFAULT>)
DISCOVER VIA DNS: True (<DEFAULT>)
CLUSTER DNS: fake.dns (<DEFAULT>)
CLUSTER GOSSIP PORT: 30777 (<DEFAULT>)
GOSSIP SEED: <empty> (<DEFAULT>)
STATS PERIOD SEC: 30 (<DEFAULT>)
CACHED CHUNKS: -1 (<DEFAULT>)
READER THREADS COUNT: 4 (<DEFAULT>)
CHUNKS CACHE SIZE: 536871424 (<DEFAULT>)
MAX MEM TABLE SIZE: 1000000 (<DEFAULT>)
HASH COLLISION READ LIMIT: 100 (<DEFAULT>)
DB: /var/lib/eventstore (<DEFAULT>)
INDEX: <empty> (<DEFAULT>)
MEM DB: False (<DEFAULT>)
SKIP DB VERIFY: False (<DEFAULT>)
WRITE THROUGH: False (<DEFAULT>)
UNBUFFERED: False (<DEFAULT>)
CHUNK INITIAL READER COUNT: 5 (<DEFAULT>)
PROJECTION THREADS: 3 (<DEFAULT>)
WORKER THREADS: 5 (<DEFAULT>)
PROJECTIONS QUERY EXPIRY: 5 (<DEFAULT>)
FAULT OUT OF ORDER PROJECTIONS: False (<DEFAULT>)
INT HTTP PREFIXES: <empty> (<DEFAULT>)
EXT HTTP PREFIXES: <empty> (<DEFAULT>)
ENABLE TRUSTED AUTH: False (<DEFAULT>)
ADD INTERFACE PREFIXES: True (<DEFAULT>)
CERTIFICATE STORE LOCATION: <empty> (<DEFAULT>)
CERTIFICATE STORE NAME: <empty> (<DEFAULT>)
CERTIFICATE SUBJECT NAME: <empty> (<DEFAULT>)
CERTIFICATE THUMBPRINT: <empty> (<DEFAULT>)
CERTIFICATE FILE: <empty> (<DEFAULT>)
CERTIFICATE PASSWORD: <empty> (<DEFAULT>)
USE INTERNAL SSL: False (<DEFAULT>)
DISABLE INSECURE TCP: False (<DEFAULT>)
SSL TARGET HOST: n/a (<DEFAULT>)
SSL VALIDATE SERVER: True (<DEFAULT>)
AUTHENTICATION TYPE: internal (<DEFAULT>)
AUTHENTICATION CONFIG: <empty> (<DEFAULT>)
DISABLE FIRST LEVEL HTTP AUTHORIZATION: False (<DEFAULT>)
PREPARE TIMEOUT MS: 2000 (<DEFAULT>)
COMMIT TIMEOUT MS: 2000 (<DEFAULT>)
UNSAFE DISABLE FLUSH TO DISK: False (<DEFAULT>)
BETTER ORDERING: False (<DEFAULT>)
UNSAFE IGNORE HARD DELETE: False (<DEFAULT>)
SKIP INDEX VERIFY: False (<DEFAULT>)
INDEX CACHE DEPTH: 16 (<DEFAULT>)
OPTIMIZE INDEX MERGE: False (<DEFAULT>)
GOSSIP INTERVAL MS: 1000 (<DEFAULT>)
GOSSIP ALLOWED DIFFERENCE MS: 60000 (<DEFAULT>)
GOSSIP TIMEOUT MS: 500 (<DEFAULT>)
ENABLE HISTOGRAMS: False (<DEFAULT>)
LOG HTTP REQUESTS: False (<DEFAULT>)
LOG FAILED AUTHENTICATION ATTEMPTS: False (<DEFAULT>)
ALWAYS KEEP SCAVENGED: False (<DEFAULT>)
SKIP INDEX SCAN ON READS: False (<DEFAULT>)
REDUCE FILE CACHE PRESSURE: False (<DEFAULT>)
INITIALIZATION THREADS: 1 (<DEFAULT>)
STRUCTURED LOG: True (<DEFAULT>)
MAX AUTO MERGE INDEX LEVEL: 2147483647 (<DEFAULT>)
WRITE STATS TO DB: True (<DEFAULT>)
[34307,01,18:12:12.508] {"defaults":{"Help":"False","Version":"False","Log":"/var/log/eventstore","Config":"","Defines":"System.String[]","WhatIf":"False","StartStandardProjections":"False","DisableHTTPCaching":"False","MonoMinThreadpoolSize":"10","IntIp":"127.0.0.1","ExtIp":"127.0.0.1","IntHttpPort":"2112","ExtHttpPort":"2113","IntTcpPort":"1112","IntSecureTcpPort":"0","ExtTcpPort":"1113","ExtSecureTcpPortAdvertiseAs":"0","ExtSecureTcpPort":"0","ExtIpAdvertiseAs":null,"ExtTcpPortAdvertiseAs":"0","ExtHttpPortAdvertiseAs":"0","IntIpAdvertiseAs":null,"IntSecureTcpPortAdvertiseAs":"0","IntTcpPortAdvertiseAs":"0","IntHttpPortAdvertiseAs":"0","IntTcpHeartbeatTimeout":"700","ExtTcpHeartbeatTimeout":"1000","IntTcpHeartbeatInterval":"700","ExtTcpHeartbeatInterval":"2000","GossipOnSingleNode":"False","ConnectionPendingSendBytesThreshold":"10485760","ConnectionQueueSizeThreshold":"50000","Force":"False","ClusterSize":"1","NodePriority":"0","MinFlushDelayMs":"2","CommitCount":"-1","PrepareCount":"-1","AdminOnExt":"True","StatsOnExt":"True","GossipOnExt":"True","DisableScavengeMerging":"False","ScavengeHistoryMaxAge":"30","DiscoverViaDns":"True","ClusterDns":"fake.dns","ClusterGossipPort":"30777","GossipSeed":"System.Net.IPEndPoint[]","StatsPeriodSec":"30","CachedChunks":"-1","ReaderThreadsCount":"4","ChunksCacheSize":"536871424","MaxMemTableSize":"1000000","HashCollisionReadLimit":"100","Db":"/var/lib/eventstore","Index":null,"MemDb":"False","SkipDbVerify":"False","WriteThrough":"False","Unbuffered":"False","ChunkInitialReaderCount":"5","ProjectionThreads":"3","WorkerThreads":"5","ProjectionsQueryExpiry":"5","FaultOutOfOrderProjections":"False","IntHttpPrefixes":"System.String[]","ExtHttpPrefixes":"System.String[]","EnableTrustedAuth":"False","AddInterfacePrefixes":"True","CertificateStoreLocation":"","CertificateStoreName":"","CertificateSubjectName":"","CertificateThumbprint":"","CertificateFile":"","CertificatePassword":"","UseInternalSsl":"False","DisableInsecureTCP":"False","SslTargetHost":"n/a","SslValidateServer":"True","AuthenticationType":"internal","AuthenticationConfig":"","DisableFirstLevelHttpAuthorization":"False","PrepareTimeoutMs":"2000","CommitTimeoutMs":"2000","UnsafeDisableFlushToDisk":"False","BetterOrdering":"False","UnsafeIgnoreHardDelete":"False","SkipIndexVerify":"False","IndexCacheDepth":"16","OptimizeIndexMerge":"False","GossipIntervalMs":"1000","GossipAllowedDifferenceMs":"60000","GossipTimeoutMs":"500","EnableHistograms":"False","LogHttpRequests":"False","LogFailedAuthenticationAttempts":"False","AlwaysKeepScavenged":"False","SkipIndexScanOnReads":"False","ReduceFileCachePressure":"False","InitializationThreads":"1","StructuredLog":"True","MaxAutoMergeIndexLevel":"2147483647","WriteStatsToDb":"True"},"modified":{"RunProjections":"System"}}
[34307,01,18:12:12.518] Quorum size set to 1
[34307,01,18:12:12.526] Cannot find plugins path: "/usr/local/share/eventstore/plugins"
[34307,01,18:12:12.558] Unhandled exception while starting application:
EXCEPTION OCCURRED
Access to the path "/var/lib/eventstore/writer.chk" is denied.
[34307,01,18:12:12.572] "Access to the path "/var/lib/eventstore/writer.chk" is denied."
EXCEPTION OCCURRED
Access to the path "/var/lib/eventstore/writer.chk" is denied.
(I tried running with sudo again and the program starts without problem, but with no data.)
I'm guessing sudo changes the default paths, and this doesn't get reversed when running without sudo.
I also posted this on the forum: https://discuss.eventstore.com/t/fail-to-run-after-using-sudo/2748/2
Have you checked the docs? https://developers.eventstore.com/server/5.0.9/server/server/default-directories.html#macos
The data isn't gone, but the database path for the root user won't match the path for the regular user, so your database has been created elsewhere.
Here is the note from the docs:
On macOS you will get a permissions error if you run EventStore without sudo. We advise changing the configuration file and changing the Db option to a place where you have access as the normal user.
When running with sudo, the database is located at /var/lib/eventstore. I suspect that when you run it without sudo, the database gets placed somewhere under /usr/local/Caskroom/eventstore/5.0.8/EventStore-OSS-MacOS-macOS-v5.0.8.
Honestly, I avoid running EventStoreDB on macOS from the cask and use Docker instead. It's more predictable and lets you run different versions without trouble.
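For example, both options sketched (the data/log paths and the Docker image tag are assumptions; adjust to your setup):
# run as a normal user, pointing the database at a user-writable directory
eventstore --db "$HOME/eventstore-data" --log "$HOME/eventstore-logs" --run-projections=System
# or run v5 in Docker instead
docker run -d --name eventstore -p 1113:1113 -p 2113:2113 eventstore/eventstore:release-5.0.9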

Phusion Passenger Rails not balancing requests across workers

Below is the output when I run passenger-status on the server:
Requests in queue: 0
* PID: 1821 Sessions: 0 Processed: 2971 Uptime: 15m 11s
CPU: 14% Memory : 416M Last used: 0s ago
* PID: 1847 Sessions: 0 Processed: 1066 Uptime: 15m 11s
CPU: 6% Memory : 256M Last used: 2s ago
* PID: 1861 Sessions: 0 Processed: 199 Uptime: 15m 11s
CPU: 1% Memory : 238M Last used: 3s ago
* PID: 1875 Sessions: 0 Processed: 37 Uptime: 15m 10s
CPU: 0% Memory : 196M Last used: 15s ago
* PID: 1900 Sessions: 0 Processed: 7 Uptime: 15m 10s
CPU: 0% Memory : 136M Last used: 33s ago
* PID: 1916 Sessions: 0 Processed: 4 Uptime: 15m 10s
CPU: 0% Memory : 126M Last used: 33s ago
* PID: 1932 Sessions: 0 Processed: 1 Uptime: 15m 10s
CPU: 0% Memory : 132M Last used: 14m 44s ago
* PID: 1946 Sessions: 0 Processed: 0 Uptime: 15m 10s
CPU: 0% Memory : 68M Last used: 15m 10s ago
* PID: 1962 Sessions: 0 Processed: 0 Uptime: 15m 9s
CPU: 0% Memory : 53M Last used: 15m 9s ago
* PID: 1980 Sessions: 0 Processed: 0 Uptime: 15m 9s
CPU: 0% Memory : 53M Last used: 15m 9s ago
The stack we are running is Nginx + Passenger + Rails.
My concern: the docs say Passenger distributes load across the workers it spawns, but as you can see from the output above, only the top two workers get nearly all the requests; the rest are mostly idle.
Also, memory usage by the top workers increases over time.
Is this expected behaviour?
How can I rectify this, and can I improve performance in any way?
Also, my Passenger config is below:
passenger_max_pool_size 20;
passenger_min_instances 10;
passenger_max_instances_per_app 0;
passenger_pre_start <api-endpoint>;
passenger_pool_idle_time 0;
passenger_max_request_queue_size 0;
Silly me, I made a comment a few minutes ago and now I've found the answer.
Summary: Passenger deliberately routes each request to the first process that has spare capacity instead of using round-robin, so the top-heavy distribution is expected; nothing is wrong with your application.
The link explains most of it:
https://www.phusionpassenger.com/library/indepth/ruby/request_load_balancing.html#traffic-may-appear-unbalanced-between-processes
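One way to confirm this yourself: push enough concurrent traffic that the first processes stay busy, then watch the Sessions and Processed columns spread across all workers (a sketch; <api-endpoint> is the same placeholder as in the config above, and ab ships with apache2-utils):
ab -n 2000 -c 20 <api-endpoint>/
watch -n 1 passenger-status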

Heavy load web service because of blocked Phusion Passenger queues

We are developing a web service using Ruby 2 on Rails 4, Mongoid 4, and MongoDB 2.6. It uses Sidekiq 3.3.0 and Redis 2.8 and runs on Phusion Passenger 5.0.4 + Nginx 1.7.10. It only serves mobile clients and an AngularJS web client via JSON APIs.
Normally everything works fine and APIs respond in under 1 s, but during rush hours the service comes under heavy load (APIs return 503 Service Unavailable). Below are our Nginx and Mongoid configs:
Nginx config
passenger_root /home/deployer/.rvm/gems/ruby-2.1.3/gems/passenger-4.0.53;
#passenger_ruby /usr/bin/ruby;
passenger_max_pool_size 70;
passenger_min_instances 1;
passenger_max_requests 20; # A workaround if apps are mem-leaking
passenger_pool_idle_time 300;
passenger_max_instances_per_app 30;
passenger_pre_start http://production_domain/;
## Note: there are 2 apps with the same config
server {
    listen 80;
    server_name production_domain;
    passenger_enabled on;
    root /home/deployer/app_name-production/current/public;
    more_set_headers 'Access-Control-Allow-Origin: *';
    more_set_headers 'Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE, HEAD';
    more_set_headers 'Access-Control-Allow-Headers: DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    if ($request_method = 'OPTIONS') {
        # more_set_headers 'Access-Control-Allow-Origin: *';
        # add_header 'Access-Control-Allow-Origin' '*';
        # add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        # add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-FooA$
        # add_header 'Access-Control-Max-Age' 1728000;
        # add_header 'Content-Type' 'text/plain charset=UTF-8';
        # add_header 'Content-Length' 0;
        return 200;
    }
    access_log /var/log/nginx/app_name-production.access.log;
    error_log /var/log/nginx/app_name-production.error.log;
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /etc/nginx/html/;
    }
    rails_env production;
}
Mongoid config
development:
  sessions:
    default:
      another:
        uri: mongodb://127.0.0.1:27017/database_name
test:
  sessions:
    default:
      another:
        uri: mongodb://127.0.0.1:27017/database_name
        options:
          pool_size: 10
          pool_timeout: 15
          retry_interval: 1
          max_retries: 30
          refresh_interval: 10
          timeout: 15
staging:
  sessions:
    default:
      another:
        uri: mongodb://staging_domain/staging_database
        options:
          pool_size: 10
          pool_timeout: 15
          retry_interval: 1
          max_retries: 30
          refresh_interval: 10
          timeout: 15
production:
  sessions:
    default:
      another:
        uri: mongodb://production_domain/production_database
        options:
          pool_size: 30
          pool_timeout: 15
          retry_interval: 1
          max_retries: 30
          refresh_interval: 10
          timeout: 15
And here are the Passenger logs when under heavy load (the Sidekiq config is further below):
Version : 5.0.4
Date : 2015-04-04 09:31:14 +0700
Instance: MxPcaaBy (nginx/1.7.10 Phusion_Passenger/5.0.4)
----------- General information -----------
Max pool size : 120
Processes : 62
Requests in top-level queue : 0
----------- Application groups -----------
/home/deployer/memo_rails-staging/current/public (staging)#default:
App root: /home/deployer/memo_rails-staging/current
Requests in queue: 0
* PID: 20453 Sessions: 0 Processed: 639 Uptime: 14h 34m 26s
CPU: 0% Memory : 184M Last used: 14s ago
* PID: 402 Sessions: 0 Processed: 5 Uptime: 13h 0m 42s
CPU: 0% Memory : 171M Last used: 23m 35s ago
* PID: 16081 Sessions: 0 Processed: 3 Uptime: 10h 26m 9s
CPU: 0% Memory : 163M Last used: 24m 9s ago
* PID: 30300 Sessions: 0 Processed: 1 Uptime: 4h 19m 43s
CPU: 0% Memory : 164M Last used: 24m 15s ago
/home/deployer/memo_rails-production/current/public (production)#default:
App root: /home/deployer/memo_rails-production/current
Requests in queue: 150
* PID: 25924 Sessions: 1 Processed: 841 Uptime: 20m 49s
CPU: 3% Memory : 182M Last used: 7m 58s ago
* PID: 25935 Sessions: 1 Processed: 498 Uptime: 20m 49s
CPU: 2% Memory : 199M Last used: 5m 40s ago
* PID: 25948 Sessions: 1 Processed: 322 Uptime: 20m 49s
CPU: 1% Memory : 200M Last used: 7m 57s ago
* PID: 25960 Sessions: 1 Processed: 177 Uptime: 20m 49s
CPU: 0% Memory : 158M Last used: 19s ago
* PID: 25972 Sessions: 1 Processed: 115 Uptime: 20m 48s
CPU: 0% Memory : 151M Last used: 7m 56s ago
* PID: 25987 Sessions: 1 Processed: 98 Uptime: 20m 48s
CPU: 0% Memory : 179M Last used: 7m 56s ago
* PID: 25998 Sessions: 1 Processed: 77 Uptime: 20m 48s
CPU: 0% Memory : 145M Last used: 7m 2s ago
* PID: 26012 Sessions: 1 Processed: 97 Uptime: 20m 48s
CPU: 0% Memory : 167M Last used: 19s ago
* PID: 26024 Sessions: 1 Processed: 42 Uptime: 20m 47s
CPU: 0% Memory : 148M Last used: 7m 55s ago
* PID: 26038 Sessions: 1 Processed: 44 Uptime: 20m 47s
CPU: 0% Memory : 164M Last used: 1m 0s ago
* PID: 26050 Sessions: 1 Processed: 29 Uptime: 20m 47s
CPU: 0% Memory : 142M Last used: 7m 54s ago
* PID: 26063 Sessions: 1 Processed: 41 Uptime: 20m 47s
CPU: 0% Memory : 168M Last used: 1m 1s ago
* PID: 26075 Sessions: 1 Processed: 23 Uptime: 20m 47s
CPU: 0% Memory : 126M Last used: 7m 51s ago
* PID: 26087 Sessions: 1 Processed: 19 Uptime: 20m 46s
CPU: 0% Memory : 120M Last used: 7m 50s ago
* PID: 26099 Sessions: 1 Processed: 37 Uptime: 20m 46s
CPU: 0% Memory : 131M Last used: 7m 3s ago
* PID: 26111 Sessions: 1 Processed: 20 Uptime: 20m 46s
CPU: 0% Memory : 110M Last used: 7m 49s ago
* PID: 26126 Sessions: 1 Processed: 28 Uptime: 20m 46s
CPU: 0% Memory : 172M Last used: 1m 56s ago
* PID: 26141 Sessions: 1 Processed: 20 Uptime: 20m 45s
CPU: 0% Memory : 107M Last used: 7m 19s ago
* PID: 26229 Sessions: 1 Processed: 20 Uptime: 20m 21s
CPU: 0% Memory : 110M Last used: 11s ago
* PID: 26241 Sessions: 1 Processed: 9 Uptime: 20m 21s
CPU: 0% Memory : 105M Last used: 7m 47s ago
* PID: 26548 Sessions: 1 Processed: 23 Uptime: 19m 14s
CPU: 0% Memory : 125M Last used: 7m 44s ago
* PID: 27465 Sessions: 1 Processed: 30 Uptime: 15m 23s
CPU: 0% Memory : 109M Last used: 2m 22s ago
* PID: 27501 Sessions: 1 Processed: 28 Uptime: 15m 18s
CPU: 0% Memory : 117M Last used: 7m 15s ago
* PID: 27511 Sessions: 1 Processed: 34 Uptime: 15m 17s
CPU: 0% Memory : 144M Last used: 5m 40s ago
* PID: 27522 Sessions: 1 Processed: 30 Uptime: 15m 17s
CPU: 0% Memory : 110M Last used: 26s ago
* PID: 27533 Sessions: 1 Processed: 38 Uptime: 15m 17s
CPU: 0% Memory : 110M Last used: 4m 44s ago
* PID: 27555 Sessions: 1 Processed: 27 Uptime: 15m 15s
CPU: 0% Memory : 120M Last used: 1m 29s ago
* PID: 27570 Sessions: 1 Processed: 21 Uptime: 15m 14s
CPU: 0% Memory : 107M Last used: 7m 1s ago
* PID: 27590 Sessions: 1 Processed: 8 Uptime: 15m 13s
CPU: 0% Memory : 105M Last used: 7m 34s ago
* PID: 27599 Sessions: 1 Processed: 13 Uptime: 15m 13s
CPU: 0% Memory : 107M Last used: 7m 0s ago
* PID: 27617 Sessions: 1 Processed: 26 Uptime: 15m 12s
CPU: 0% Memory : 114M Last used: 4m 49s ago
* PID: 27633 Sessions: 1 Processed: 19 Uptime: 15m 11s
CPU: 0% Memory : 137M Last used: 1m 14s ago
* PID: 27643 Sessions: 1 Processed: 15 Uptime: 15m 11s
CPU: 0% Memory : 132M Last used: 6m 19s ago
* PID: 27661 Sessions: 1 Processed: 23 Uptime: 15m 10s
CPU: 0% Memory : 112M Last used: 9s ago
* PID: 27678 Sessions: 1 Processed: 24 Uptime: 15m 9s
CPU: 0% Memory : 108M Last used: 6m 53s ago
* PID: 27692 Sessions: 1 Processed: 9 Uptime: 15m 9s
CPU: 0% Memory : 105M Last used: 7m 22s ago
* PID: 28400 Sessions: 1 Processed: 19 Uptime: 12m 45s
CPU: 0% Memory : 111M Last used: 1m 25s ago
* PID: 28415 Sessions: 1 Processed: 26 Uptime: 12m 45s
CPU: 0% Memory : 149M Last used: 3m 45s ago
* PID: 28439 Sessions: 1 Processed: 14 Uptime: 12m 44s
CPU: 0% Memory : 106M Last used: 59s ago
* PID: 28477 Sessions: 1 Processed: 12 Uptime: 12m 42s
CPU: 0% Memory : 108M Last used: 1m 34s ago
* PID: 28495 Sessions: 1 Processed: 14 Uptime: 12m 41s
CPU: 0% Memory : 108M Last used: 18s ago
* PID: 29315 Sessions: 1 Processed: 7 Uptime: 10m 1s
CPU: 0% Memory : 107M Last used: 7m 0s ago
* PID: 29332 Sessions: 1 Processed: 13 Uptime: 10m 0s
CPU: 0% Memory : 108M Last used: 5m 39s ago
* PID: 29341 Sessions: 1 Processed: 7 Uptime: 10m 0s
CPU: 0% Memory : 105M Last used: 6m 53s ago
* PID: 29353 Sessions: 1 Processed: 11 Uptime: 10m 0s
CPU: 0% Memory : 119M Last used: 5m 4s ago
* PID: 29366 Sessions: 1 Processed: 16 Uptime: 9m 59s
CPU: 0% Memory : 119M Last used: 3m 13s ago
* PID: 29377 Sessions: 1 Processed: 10 Uptime: 9m 59s
CPU: 0% Memory : 113M Last used: 1m 34s ago
* PID: 29388 Sessions: 1 Processed: 2 Uptime: 9m 59s
CPU: 0% Memory : 97M Last used: 7m 28s ago
* PID: 29400 Sessions: 1 Processed: 6 Uptime: 9m 59s
CPU: 0% Memory : 103M Last used: 6m 53s ago
* PID: 29422 Sessions: 1 Processed: 17 Uptime: 9m 58s
CPU: 0% Memory : 132M Last used: 1m 24s ago
* PID: 29438 Sessions: 1 Processed: 1 Uptime: 9m 57s
CPU: 0% Memory : 96M Last used: 6m 52s ago
* PID: 29451 Sessions: 1 Processed: 21 Uptime: 9m 56s
CPU: 0% Memory : 133M Last used: 2m 10s ago
* PID: 29463 Sessions: 1 Processed: 19 Uptime: 9m 56s
CPU: 0% Memory : 111M Last used: 27s ago
* PID: 29477 Sessions: 1 Processed: 23 Uptime: 9m 56s
CPU: 0% Memory : 117M Last used: 14s ago
* PID: 30625 Sessions: 1 Processed: 7 Uptime: 6m 49s
CPU: 0% Memory : 106M Last used: 1m 21s ago
* PID: 30668 Sessions: 1 Processed: 2 Uptime: 6m 44s
CPU: 0% Memory : 105M Last used: 1m 13s ago
* PID: 30706 Sessions: 1 Processed: 16 Uptime: 6m 43s
CPU: 0% Memory : 148M Last used: 1m 11s ago
* PID: 30718 Sessions: 1 Processed: 12 Uptime: 6m 43s
CPU: 0% Memory : 112M Last used: 1m 16s ago
I have some questions:
It seems that someone with a slow internet connection is requesting our service, leading to Passenger processes being blocked. We have to restart Nginx to get the web service working again. Does anyone have experience with this?
We also use Sidekiq for worker queues. Most of our workers are implemented without hitting MongoDB, and they work fine.
But there are 2 workers we use to update users' data, which query, update and insert data into the database. We've tried to optimize all of these tasks using MongoDB bulk commands (update & insert).
Normally, when a small number of users request the web service, the workers work fine and busy queues are processed in about 1 minute; but when more requests come in, busy queues block the whole system. We have to restart Nginx, again, to get it working. Below is our Sidekiq config:
development:
  :concurrency: 5
  :logfile: ./log/sidekiq_development.log
  :pidfile: ./log/sidekiq.pid
staging:
  :concurrency: 5
  :logfile: ./log/sidekiq_staging.log
  :pidfile: ./log/sidekiq.pid
production:
  :concurrency: 15
  :logfile: ./log/sidekiq_production.log
  :pidfile: ./log/sidekiq.pid
  :queues:
    - ...
We don't have any experience with these problems. Does anyone have any ideas?
Update 1:
After some monitoring while the server was under heavy load, we saw that the MongoDB process has many page faults and stacked-up read queues. Below is what mongostat logged during the downtime:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 2 *0 *0 0 4|0 0 79g 160g 3.36g 137 memo_v2:2.6% 0 17|0 8|0 36k 8k 61 15:05:22
*0 6 *0 *0 0 1|0 0 79g 160g 3.38g 144 memo_v2:2.1% 0 30|0 3|0 722b 11k 61 15:05:23
1595 15 1 *0 0 5|0 0 79g 160g 3.41g 139 memo_v2:19.7% 0 20|0 8|0 164k 179k 61 15:05:25
1 18 2 *0 1 6|0 0 79g 160g 3.38g 198 memo_v2:14.4% 0 31|0 1|0 3k 122k 61 15:05:26
2 20 4 *0 0 7|0 0 79g 160g 3.38g 169 memo_v2:8.6% 0 29|0 1|0 3k 157k 61 15:05:27
1 6 23 *0 0 4|0 0 79g 160g 3.39g 190 memo_v2:18.7% 0 32|0 1|0 1k 63k 61 15:05:28
1 4 42 *0 0 4|0 0 79g 160g 3.1g 115 memo_v2:35.9% 0 30|0 0|1 1k 20k 61 15:05:29
1 5 51 *0 0 4|0 0 79g 160g 3.11g 177 memo_v2:30.0% 0 28|0 1|0 1k 23k 61 15:05:30
*0 6 20 *0 0 2|0 0 79g 160g 3.12g 174 memo_v2:40.9% 0 28|0 1|0 15k 7k 61 15:05:31
2 9 *0 *0 1 7|0 0 79g 160g 3.1g 236 memo_v2:4.4% 0 26|0 2|0 2k 31k 61 15:05:32
Has anyone faced this before?
I don't have enough reputation to post a comment, so I'll have to add a very lacklustre answer.
I don't have any experience with this stack, but if you are correct that slow clients are the cause of the Passenger issues, then I'd suggest you ensure there is adequate buffering in front of the Passenger processes.
For Nginx, the important setting looks to be proxy_buffers. The section titled "Using Buffers to Free Up Backend Servers" in the following article talks about the Nginx module: https://www.digitalocean.com/community/tutorials/understanding-nginx-http-proxying-load-balancing-buffering-and-caching
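Note, though, that with Passenger's Nginx integration the app is not behind proxy_pass, so proxy_buffers may not apply; the analogous knob looks to be Passenger's own response buffering. A sketch (directive name from the Passenger Nginx reference; verify against your version, since full buffering also changes streaming behaviour):
server {
    listen 80;
    server_name production_domain;
    passenger_enabled on;
    # buffer whole responses so a slow client ties up Nginx, not a Ruby process
    passenger_buffer_response on;
}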
For the MongoDB issue, it sounds like you just need to dig a little deeper. If you can find where the issue is happening in the code, the solution will probably present itself. The article linked by Hongli looks very good for that.
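To find where that is happening on the MongoDB side, the built-in profiler can record slow operations; a sketch for MongoDB 2.6, assuming the memo_v2 database seen in the mongostat output:
# log operations slower than 100 ms, then inspect the most recent slow ops
mongo memo_v2 --eval 'db.setProfilingLevel(1, 100)'
mongo memo_v2 --eval 'printjson(db.system.profile.find().sort({ts:-1}).limit(5).toArray())'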
