We are performing set operations on Redis to extract and filter data for targeting. The sets are represented as follows:
fruits = {'orange', 'lemon', 'apple'}
vegetables = {'tomato'}
citric = {'orange', 'lemon', 'tomato'}
We are using the Jedis client to run SUNION and SINTER; however, we have observed that even with a concurrency of 100, the Redis service returns timeouts, even for an operation as simple as SMEMBERS. The set contains no more than 7 items.
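For reference, these are the operations in question and the results they should produce on the sample sets (shown in redis-cli form; element order in the replies is arbitrary):
> SMEMBERS fruits
1) "orange"
2) "lemon"
3) "apple"
> SINTER fruits citric
1) "orange"
2) "lemon"
> SUNION fruits vegetables
1) "orange"
2) "lemon"
3) "apple"
4) "tomato"
The pool setup and the call that times out: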
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.maxActive = 1000; // arguably high
jedisPoolConfig.minIdle = 300;    // arguably high
jedisPoolConfig.maxIdle = 500;    // arguably high

JedisPool jedisPool = new JedisPool(jedisPoolConfig, "localhost", 6379, 1000);
Jedis jedis = jedisPool.getResource();
List<String> availableAds = new ArrayList<String>(jedis.smembers("fruits"));
jedisPool.returnResource(jedis);
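One thing worth checking in the snippet above: if smembers throws (for example the connection error below), the Jedis instance is never returned, so under 100 concurrent requests the pool can leak connections until it stalls. A minimal sketch of the borrow/return pattern for the Jedis 2.x-era API used here; the AdTargeting class is just an illustration, not part of the original code:

import java.util.ArrayList;
import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.exceptions.JedisConnectionException;

public class AdTargeting {
    private final JedisPool jedisPool;

    public AdTargeting(JedisPool jedisPool) {
        this.jedisPool = jedisPool;
    }

    public List<String> availableAds() {
        Jedis jedis = jedisPool.getResource();
        boolean broken = false;
        try {
            return new ArrayList<String>(jedis.smembers("fruits"));
        } catch (JedisConnectionException e) {
            broken = true;
            jedisPool.returnBrokenResource(jedis); // discard connections that hit an I/O or protocol error
            throw e;
        } finally {
            if (!broken) {
                jedisPool.returnResource(jedis);   // always hand healthy connections back to the pool
            }
        }
    }
}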
Java Exception
redis.clients.jedis.exceptions.JedisConnectionException : It seems
like server has closed the connection.
Redis Error Log
Protocol error from client: addr=x.x.x.x:xxxx fd=270 idle=0 flags=N
db=0 sub=0 psub=0 qbuf=96 obl=47 oll=0 events=rw cmd=smembers
The test was run on an Amazon EC2 medium instance (c1.medium), and the servlet was load-tested with blitz.io.
I followed this article: https://www.linkedin.com/pulse/integrating-gatling-influx-graphite-grafana-live-tests-phani-bushan/
It works well with the gatling-demo project provided there, but it does not work with my own Gatling project.
Gatling versions used in the pom:
<gatling.version>3.7.4</gatling.version>
<gatling-plugin.version>4.1.1</gatling-plugin.version>
<maven-jar-plugin.version>3.2.0</maven-jar-plugin.version>
<scala-maven-plugin.version>4.5.6</scala-maven-plugin.version>
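For context, these properties are typically wired into the pom roughly as follows (a sketch using the standard Gatling/Scala plugin and dependency coordinates, not copied from the project in question):

<dependencies>
  <dependency>
    <groupId>io.gatling.highcharts</groupId>
    <artifactId>gatling-charts-highcharts</artifactId>
    <version>${gatling.version}</version>
    <scope>test</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>${scala-maven-plugin.version}</version>
    </plugin>
    <plugin>
      <groupId>io.gatling</groupId>
      <artifactId>gatling-maven-plugin</artifactId>
      <version>${gatling-plugin.version}</version>
    </plugin>
  </plugins>
</build>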
Gatling Config setup:
gatling {
  data {
    writers = [console, file, graphite]  # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite)
    console {
      #light = false                     # When set to true, displays a light version without detailed request stats
      #writePeriod = 5                   # Write interval, in seconds
    }
    file {
      #bufferSize = 8192                 # FileDataWriter's internal data buffer size, in bytes
    }
    leak {
      #noActivityTimeout = 30            # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
    }
    graphite {
      light = false                      # only send the all* stats
      host = "localhost"                 # The host where the Carbon server is located
      port = 2003                        # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
      protocol = "tcp"                   # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
      rootPathPrefix = "gatling"         # The common prefix of all metrics sent to Graphite
      bufferSize = 8192                  # Internal data buffer size, in bytes
      writePeriod = 1                    # Write period, in seconds
    }
  }
}
I followed the BlazeMeter article on monitoring Gatling tests with Grafana and InfluxDB, but no data is sent to InfluxDB and no database named "graphite" is created.
InfluxDB is up and listening on port 2003. This is the log from InfluxDB:
2018-06-24T09:48:17Z Listening on TCP: [::]:2003 service=graphite addr=:2003
I set the gatling.conf fields to these:
data {
  #writers = [console, file]           # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite, jdbc)
  console {
    #light = false                     # When set to true, displays a light version without detailed request stats
  }
  file {
    #bufferSize = 8192                 # FileDataWriter's internal data buffer size, in bytes
  }
  leak {
    #noActivityTimeout = 30            # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
  }
  graphite {
    light = false                      # only send the all* stats
    host = "localhost"                 # The host where the Carbon server is located
    port = 2003                        # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp"                   # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling"         # The common prefix of all metrics sent to Graphite
    bufferSize = 8192                  # GraphiteDataWriter's internal data buffer size, in bytes
    writeInterval = 1                  # GraphiteDataWriter's write interval, in seconds
  }
}
gatling.conf is in the src/test/resources folder, and I verified by debugging that Gatling does load this config file.
What have I missed?
Your data writers configuration is invalid: the writers line is commented out, so the graphite DataWriter is never enabled. Set it to:
writers = [console, file, graphite]
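With that line commented out, Gatling falls back to the default writers and the graphite DataWriter never starts, so nothing is ever sent to port 2003. Trimmed to the relevant parts of the config shown above, the corrected block looks roughly like this:

data {
  writers = [console, file, graphite]  # graphite must be listed here, not commented out
  graphite {
    host = "localhost"
    port = 2003
  }
}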
This question has been asked before, but I haven't managed to solve my issue: I can't get Gatling to send its data to InfluxDB in real time.
I'm on Windows 10.
Gatling version: 2.3.0 (the latest).
InfluxDB version: 1.3.5 (the latest is 1.3.6).
My gatling.conf:
data {
  writers = [console, file, graphite]  # The list of DataWriters to which Gatling write simulation data (currently supported : console, file, graphite, jdbc)
  console {
    #light = false                     # When set to true, displays a light version without detailed request stats
  }
  file {
    #bufferSize = 8192                 # FileDataWriter's internal data buffer size, in bytes
  }
  leak {
    #noActivityTimeout = 30            # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
  }
  graphite {
    #light = false                     # only send the all* stats
    host = "127.0.0.1"                 # The host where the Carbon server is located
    port = "2003"                      # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp"                   # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling"         # The common prefix of all metrics sent to Graphite
    #bufferSize = 8192                 # GraphiteDataWriter's internal data buffer size, in bytes
    #writeInterval = 1                 # GraphiteDataWriter's write interval, in seconds
  }
}
My influxdb.conf:
[http]
# Determines whether HTTP endpoint is enabled.
enabled = true
# The bind address used by the HTTP service.
bind-address = "127.0.0.1:8086"
###
### [[graphite]]
###
### Controls one or many listeners for Graphite data.
###
[[graphite]]
# Determines whether the graphite endpoint is enabled.
enabled = true
database = "gatlingdb"
# retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
# consistency-level = "one"
templates = [
  "gatling.*.*.*.*.measurement.simulation.request.status.field"
]
My gatlingdb database is created in InfluxDB, but it stays empty.
When I run:
C:\InfluxDB-1.3.5-1>influx -host 127.0.0.1
I'm connected to InfluxDB
>USE gatlingdb
I'm connected to my database. Then:
>SHOW SERIES
and
>SELECT * FROM gatling
Neither returns anything; the database is empty.
Note: I query FROM gatling because my gatling.conf sets rootPathPrefix = "gatling".
I didn't install Graphite, but I saw that InfluxDB accepts the graphite protocol, so I assume I can send data from Gatling straight to InfluxDB. I have certainly missed something.
I have successfully connected InfluxDB to Grafana and can display data from other databases; only the Gatling-to-InfluxDB link is missing.
Thanks in advance for your help, I definitely need it!
Anthony
I've almost finished an article that shows all the steps required to build the whole monitoring infrastructure with Gatling, Grafana and InfluxDB (by the way, without installing Graphite separately), and it has worked very well for me.
I expect to publish it on the blazemeter.com blog in just a few days, so stay tuned there:
http://blazemeter.com/blog
There you will also find a ready-made solution to spin everything up inside Docker.
Until then (if this is urgent for you), I can share my InfluxDB config section:
[[graphite]]
enabled = true
bind-address = ":2003"
database = "graphite"
retention-policy = ""
protocol = "tcp"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
consistency-level = "one"
separator = "."
udp-read-buffer = 0
gatling.conf:
graphite {
  light = false              # only send the all* stats
  host = "localhost"         # The host where the Carbon server is located
  port = 2003                # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
  protocol = "tcp"           # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
  rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
  bufferSize = 8192          # GraphiteDataWriter's internal data buffer size, in bytes
  writeInterval = 1          # GraphiteDataWriter's write interval, in seconds
}
The first thing you need to check is that InfluxDB actually accepts incoming metrics via the graphite protocol. For example, in the InfluxDB startup logs you should find this line:
influxdb_1 | [I] 2018-01-26T13:40:37Z Listening on TCP: [::]:2003 service=graphite addr=:2003
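If that line is there, a quick way to rule out the Gatling side is to push one hand-crafted metric over the plaintext graphite protocol and check that it lands in the database (a sketch assuming a Unix-like shell with nc available and the database = "graphite" name from the [[graphite]] section above; the metric path is made up):

# send one fake metric to the graphite listener
echo "gatling.testsim.testrequest.ok.count 1 $(date +%s)" | nc localhost 2003

# a moment later it should show up in InfluxDB
influx -execute 'SHOW MEASUREMENTS' -database 'graphite'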
I want to create a connection to a Tarantool database in init_by_lua_block or init_worker_by_lua_block and then use that connection in each content_by_lua_block:
init_by_lua_block {
    local tnt = require 'resty.tarantool'
    local tar, err = tnt:new({
        host = '127.0.0.1',
        port = 3312,
        user = 'user',
        password = 'password',
        socket_timeout = 2000
    })
    local res, err = tar:connect()
}
But the cosocket API is disabled in the init_*_by_lua* directives. How can I create the connection once instead of opening a new connection for each request?
Use https://github.com/perusio/lua-resty-tarantool#set_keepalive
Calling set_keepalive instead of closing the connection pushes it into a connection pool when the request finishes, so the underlying connection is kept alive and reused across requests: create (or fetch) the connection in content_by_lua_block and let the pool do the rest.
I have a web service built with Grails that connects to a MySQL database. Since I upgraded to 2.4.3 I've had problems with the connection pool not releasing connections, resulting in an exception:
org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-bio-8080-exec-216] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:50; busy:50; idle:0; lastwait:30000]
This is my DataSource.groovy:
dataSource {
    url = "jdbc:mysql://..."
    username = "xxx"
    password = "xxx"
    pooled = true
    properties {
        maxActive = 50
        maxAge = 10 * 60000
        timeBetweenEvictionRunsMillis = 5000
        minEvictableIdleTimeMillis = 60000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}

dataSource_survey {
    url = "jdbc:mysql://..."
    username = "xxx"
    password = "xxx"
    pooled = true
    properties {
        maxActive = 50
        maxAge = 10 * 60000
        timeBetweenEvictionRunsMillis = 5000
        minEvictableIdleTimeMillis = 60000
        numTestsPerEvictionRun = 3
        testOnBorrow = true
        testWhileIdle = true
        testOnReturn = true
        validationQuery = "SELECT 1"
    }
}
I've read the Grails JIRA and some people seem to have similar problems, but I haven't been able to fix it with the information provided there.
Being able to see the status of the connection pool would help debugging a great deal. How can I check, at runtime, how many connections in the pool are idle or busy?
The connection pool is registered as a javax.sql.DataSource, but that interface only has methods for getting a Connection (one with username/password and one without), accessing a log writer, and getting/setting the login timeout. Everything else is left to the vendor, and there is very little commonality between vendors in how pools are configured initially and how they are worked with and monitored while the app runs.
So you really need to find out which library is used for the pool and use its API. Ideally that would be as simple as accessing the dataSource bean (that's easy: dependency-inject it into a service/controller/etc. like any bean, as a class-scope field, in this case def dataSource) and printing its class name. But we wrap the datasource in a few proxies to add some important behaviors, so the underlying pool is not easy to get at that way.
You're in luck though: for cases like this we leave the original unproxied instance alone and register it as the dataSourceUnproxied bean, which you can also dependency-inject (just don't use it to get connections; only use it to read information).
For a long time we used commons-pool to manage datasources, but a while back we switched to the Tomcat JDBC Pool because benchmark tests showed it was faster than any other pool they looked at (including C3P0), and its configuration options are based on commons-pool's, so it was basically a drop-in replacement with a significant performance boost and more configurability.
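To make that concrete, here is a minimal sketch of reading the pool counters through the unproxied bean, assuming the Tomcat JDBC Pool described above; the PoolMonitor class itself is just an illustration, but the org.apache.tomcat.jdbc.pool types and getters are the library's own:

import javax.sql.DataSource;
import org.apache.tomcat.jdbc.pool.ConnectionPool;

public class PoolMonitor {

    // In Grails, dependency-inject the unproxied bean (def dataSourceUnproxied) and pass it in here;
    // do not borrow connections from it, only read its counters.
    public static String describe(DataSource dataSourceUnproxied) {
        if (!(dataSourceUnproxied instanceof org.apache.tomcat.jdbc.pool.DataSource)) {
            return "Not a Tomcat JDBC pool: " + dataSourceUnproxied.getClass().getName();
        }
        ConnectionPool pool = ((org.apache.tomcat.jdbc.pool.DataSource) dataSourceUnproxied).getPool();
        if (pool == null) {
            return "Pool not created yet (no connection has been borrowed so far)";
        }
        return "size=" + pool.getSize()
                + " busy=" + pool.getActive()
                + " idle=" + pool.getIdle()
                + " waiting=" + pool.getWaitCount();
    }
}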