I have Telegraf configured and running with -input-filter phpfpm.
The input filter is configured as follows:
[phpfpm]
urls = ["http://127.0.0.1:8080/fpmstats"]
This URL works and returns correct php-fpm stats:
pool: www
process manager: dynamic
start time: 03/Sep/2016:13:25:25 +0000
start since: 1240
accepted conn: 129
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 2
active processes: 1
total processes: 3
max active processes: 1
max children reached: 0
slow requests: 0
The Telegraf output is configured for InfluxDB as follows:
[[outputs.influxdb]]
urls = ["udp://172.17.0.16:8089"] # Stick to UDP
database = "telegraf"
precision = "s"
retention_policy = "autogen"
write_consistency = "any"
timeout = "5s"
username = "telegraf"
password = "password"
user_agent = "telegraf"
udp_payload = 1024
This is 'almost' working, and data is being received by InfluxDB, but only a couple of the measurements.
SHOW TAG KEYS FROM "phpfpm"
This shows only the following tag keys:
host
pool
I expected to see values for accepted conn, listen queue, idle processes, and so on, but I cannot see any 'useful' data being posted to InfluxDB.
Am I missing something in terms of where the phpfpm values are stored in InfluxDB?
Or is this a configuration problem?
I had a problem getting the HTTP collector to work, so I stuck with UDP - is this a bad idea?
Data in InfluxDB is separated into measurements, tags, and fields.
Measurements are high-level buckets of data.
Tags are indexed values.
Fields are the actual data.
The data that you're working with has the measurement phpfpm and two tags host and pool.
I expected to see values for accepted conn, listen queue, idle processes, and so on, but I cannot see any 'useful' data being posted to InfluxDB.
The values that you're looking for are most likely fields. To verify that this is the case, run the query:
SHOW FIELD KEYS FROM "phpfpm"
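If the fields are listed there, you can pull the actual values with an ordinary SELECT. As a sketch, the field names below (accepted_conn, listen_queue, idle_processes, active_processes) are what the Telegraf phpfpm plugin typically reports, but check them against your SHOW FIELD KEYS output since names can differ between Telegraf versions:
SELECT "accepted_conn", "listen_queue", "idle_processes", "active_processes" FROM "phpfpm" WHERE time > now() - 1h GROUP BY "pool"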
First I have to mention that I run on CentOS 7, tuned to support 1 million connections. I tested with a simple C server and client and connected 512,000 clients. I could have connected more, but I did not have enough RAM to spawn more Linux client machines, since from one machine I can open about 65,536 connections; 8 machines * 64,000 connections each = 512,000.
I made a simple Erlang server to which I want to connect 1 million, or at least half a million, clients using the same C client. The problem I'm having now is memory related. For each successful gen_tcp:accept call I spawn a process. Around 50,000 open connections cost me 3.7 GB of RAM on the server, whereas with the C server I could hold 512,000 open connections using 1.9 GB of RAM. It is true that in the C server I did not create a process after accept to handle anything, I just called accept again in a while loop, but even so, people on the web have done this in Erlang with less memory (ejabberd, Riak).
I presume that the flags I pass to the Erlang VM should do the trick. From what I read in the documentation and on the web, this is what I have: erl +K true +Q 64200 +P 134217727 -env ERL_MAX_PORTS 40960000 -env ERTS_MAX_PORTS 40960000 +a 16 +hms 1024 +hmbs 1024
This is the server code. I open one listener that monitors port 5001 by calling start(1, 5001).
start(Num, LPort) ->
    case gen_tcp:listen(LPort, [{reuseaddr, true}, {backlog, 9000000000}]) of
        {ok, ListenSock} ->
            start_servers(Num, ListenSock),
            {ok, Port} = inet:port(ListenSock),
            Port;
        {error, Reason} ->
            {error, Reason}
    end.

start_servers(0, _) ->
    ok;
start_servers(Num, LS) ->
    spawn(?MODULE, server, [LS, 0]),
    start_servers(Num - 1, LS).

server(LS, Nr) ->
    io:format("before accept ~w~n", [Nr]),
    case gen_tcp:accept(LS) of
        {ok, S} ->
            io:format("after accept ~w~n", [Nr]),
            spawn(ex, server, [LS, Nr + 1]),
            proc_lib:hibernate(?MODULE, loop, [S]);
        Other ->
            io:format("accept returned ~w - goodbye!~n", [Other]),
            ok
    end.

loop(S) ->
    ok = inet:setopts(S, [{active, once}]),
    receive
        {tcp, S, _Data} ->
            Answer = 1, % Not implemented in this example
            gen_tcp:send(S, Answer),
            proc_lib:hibernate(?MODULE, loop, [S]);
        {tcp_closed, S} ->
            io:format("Socket ~w closed [~w]~n", [S, self()]),
            ok
    end.
Given this configuration, my beam process consumed about 2.5 GB of memory just at startup, without your module even loaded.
However, if you reduce the maximum number of processes to a reasonable value, like +P 60000 for a 50,000-connection test, memory consumption drops rapidly.
With a 60,000-process limit, the VM used only 527 MB of virtual memory at startup.
I tried to reproduce your test, but unfortunately I was only able to launch 30,000 netcat clients on my system before running out of memory (because of the client jobs). Even then, I only saw the VM's memory consumption grow to about 570 MB.
So my suggestion is that your numbers come from the high startup memory consumption and not from a great number of open connections. In any case, you should pay attention to how the stats change as the number of open connections grows, rather than to the absolute values.
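One simple way to watch that change, offered here as a sketch rather than part of the original benchmark, is to poll the VM's memory counters from the Erlang shell while the clients connect:
%% Print the VM's total and process memory, in MB (run repeatedly as clients connect).
io:format("total=~p MB processes=~p MB~n",
          [erlang:memory(total) div (1024*1024),
           erlang:memory(processes) div (1024*1024)]).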
I finally used the following configuration for my benchmark:
erl +K true +Q 64200 +P 60000 -env ERL_MAX_PORTS 40960000 -env ERTS_MAX_PORTS 40960000 +a 16 +hms 1024 +hmbs 1024
Then I launched the clients with the command:
for i in `seq 1 50000`; do nc 127.0.0.1 5001 & done
Apart from the tuning you have already done, you can adjust the TCP buffers as well. By default they take the OS default values, but you can pass {recbuf, Size} and {sndbuf, Size} to gen_tcp:listen. This may reduce the memory footprint significantly.
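For example, a listener with explicitly small per-socket buffers could look like the sketch below; the 4096-byte buffers and the backlog value are arbitrary illustrations, so size them to your actual traffic:
%% LPort is the listen port, as in start/2 from the question.
{ok, ListenSock} = gen_tcp:listen(LPort, [{reuseaddr, true},
                                          {backlog, 1024},
                                          {recbuf, 4096},   %% per-socket receive buffer
                                          {sndbuf, 4096}]). %% per-socket send buffer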
The Redis server version I use is 2.8.9 from the MSOpenTech GitHub. Can anyone shed light on why the Redis INFO command indicates that used memory is 21 GB even though the RDB file saved on disk is less than 4 GB? I did successfully run a SAVE command before noting down the size of the RDB file. The qfork heap file is 30 GB, as configured in redis.windows.conf.
Configuration :
maxheap 30gb
max-memory 20 Gb
appendonly no
save 18000 1
The server has 192 GB of physical RAM but unfortunately only about 60 GB of free disk space, and I had to set maxheap and max-memory to 30 GB and 20 GB respectively so that I have additional space to persist the data on disk.
I'm using Redis as a cache, and the save interval is large because seeding the data takes a long time and I don't want constant writing to file. Once seeding is done, the DB is updated with newer data once a day.
My questions are :
How is the saved RDB file so small? Is it solely due to compression (rdbcompression yes)? If so, can the same compression mechanism be used to store data in memory too? I make use of lists extensively.
Before I ran the SAVE command, the working set and private bytes in Process Explorer were very small. Is there a way I can break down memory usage by data structure? For example: lists use x amount, hashes use y amount, etc.
Is there any way I can store the AOF file (I turned off AOF and use RDB because the AOF files were filling up disk space fast) on a network path (a shared drive or NAS)? I tried setting the dir config to \\someip\some folder, but the service failed to start with the message "Cant CHDIR to location".
I'm unable to post images, but this is what Process Explorer has to say about the redis-server instance:
Virtual Memory:
Private Bytes : 72,920 K
Peak Private Bytes : 31,546,092 K
Virtual Size : 31,558,356 K
Page faults : 12,479,550
Physical Memory:
Working Set : 26,871,240 K
WS Private : 63,260 K
WS Shareable : 26,807,980 K
WS Shared : 3,580 K
Peak Working Set : 27,011,488 K
The latest saved dump.rdb is 3.81 GB and the heap file is 30 GB.
# Server
redis_version:2.8.9
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:1fe181ad2447fe38
redis_mode:standalone
os:Windows
arch_bits:64
multiplexing_api:winsock_IOCP
gcc_version:0.0.0
process_id:12772
run_id:553f2b4665edd206e632b7040aa76c0b76083f4d
tcp_port:6379
uptime_in_seconds:24087
uptime_in_days:0
hz:50
lru_clock:14825512
config_file:D:\RedisService/redis.windows.conf
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:21484921736
used_memory_human:20.01G
used_memory_rss:21484870536
used_memory_peak:21487283360
used_memory_peak_human:20.01G
used_memory_lua:3156992
mem_fragmentation_ratio:1.00
mem_allocator:dlmalloc-2.8
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1407328559
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:1407328560
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:9486
total_commands_processed:241141370
instantaneous_ops_per_sec:0
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:30143
keyspace_misses:81
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:1341134
FYI, this question has also been posted to the pgsql-general mailing list.
We have had a problem since we migrated from 8.4 to 9.1.
When we run:
ANALYSE VERBOSE;
(statistics on the whole database, with 500 tables and 1 TB of data across all tables)
we now get this message:
org.postgresql.util.PSQLException: ERROR: out of shared memory Hint: You might need to increase max_locks_per_transaction.
When we were on 8.4 there was no error. Here is our specific postgresql.conf configuration on the server:
default_statistics_target = 200
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effective_cache_size = 7GB
work_mem = 48MB
wal_buffers = 32MB
checkpoint_segments = 64
shared_buffers = 2304MB
max_connections = 150
random_page_cost = 2.0
max_locks_per_transaction = 128
max_locks_per_transaction was at its default value (64) before; we have already tried increasing it according to the error hint.
We have also already tried increasing the Linux shared memory settings. Do you have any suggestions?
I'd try lowering maintenance_work_mem (try 256MB) and setting max_locks_per_transaction to a much higher value, e.g. 1024.
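A minimal sketch of those postgresql.conf changes (the values are just the suggestions above, not tuned for your workload; note that max_locks_per_transaction only takes effect after a server restart):
maintenance_work_mem = 256MB
max_locks_per_transaction = 1024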
I spawn 250 threads at a time, and each of them comes back to update some data in the database. I am using a PostgreSQL database in my Rails 2 application. I have set the pool size to 100 and max connections to 100, but the problem is that after 100 connections the remaining threads cause errors like "FATAL ERROR: Too many clients". What I want is to kill each thread as soon as it completes its work. How should I achieve this?
Here is my code:
Assume detail is an array containing 250 items.
threads = []
detail.each do |item|
  threads << Thread.new(item) do |item|
    # block of code
  end
end
threads.each { |t| t.join }
I hope you are using Rails 2.2, where connection pooling was introduced. Check this:
http://guides.rubyonrails.org/2_2_release_notes.html#connection-pooling
and this http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/ConnectionPool.html
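As a rough sketch of how you might keep 250 threads from exhausting a 100-connection pool, you can check a connection out only for the duration of the database work and let it return to the pool when the block finishes. This assumes Rails 2.2+ and the ConnectionPool API linked above; the body of the block is your own update logic:
threads = []
detail.each do |item|
  threads << Thread.new(item) do |i|
    # Borrow a connection from the pool only while we need it;
    # it is returned automatically when this block exits.
    ActiveRecord::Base.connection_pool.with_connection do
      # update the database for +i+ here
    end
  end
end
threads.each { |t| t.join }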