Gatling to InfluxDB connection in Windows - influxdb

I am using Gatling and InfluxDB on Windows 10. I am trying to send results from Gatling to InfluxDB, but the results are not being pushed to InfluxDB. Can someone help me?
My Gatling graphite config:
data {
  #writers = [console, file, graphite]
  console {
    #light = false
    #writePeriod = 5
  }
  file {
    bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
  }
  leak {
    #noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
  }
  graphite {
    #light = false # only send the all* stats
    host = "localhost" # The host where the Carbon server is located
    port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
    bufferSize = 8192 # Internal data buffer size, in bytes
    writePeriod = 1 # Write period, in seconds
  }
}
My InfluxDB config file is:
[[graphite]]
enabled = true
database = "gatlingdb"
retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
udp-read-buffer = 0
separator = "."
templates = [
"gatling.*.*.*.count measurement.simulation.request.status.field",
"gatling.*.*.*.min measurement.simulation.request.status.field",
"gatling.*.*.*.max measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles50 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles75 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles95 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles99 measurement.simulation.request.status.field"
]
Not sure why it is not working.

Uncomment the line #writers = [console, file, graphite] so that it reads writers = [console, file, graphite]. With that line commented out, the graphite data writer is never enabled, so Gatling does not push anything to port 2003.
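After the change, the top of the data block in your Gatling configuration should look like this (everything else stays as shown above):
data {
  writers = [console, file, graphite]
  ...
}
Then rerun the simulation and check that points arrive in InfluxDB, for example by running USE gatlingdb followed by SHOW MEASUREMENTS in the influx CLI.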

Related

Flume configuration not working, showing exception

I am trying to transfer a 9 GB file to HDFS using Flume from a spool directory. I have the following Flume configuration.
#initialize agent's source, channel and sink
wagent.sources = wavetronix
wagent.channels = memoryChannel2
wagent.sinks = flumeHDFS
# Setting the source to spool directory where the file exists
wagent.sources.wavetronix.type = spooldir
wagent.sources.wavetronix.spoolDir = /johir/WAVETRONIX/output/Yesterday
wagent.sources.wavetronix.fileHeader = false
wagent.sources.wavetronix.basenameHeader = true
#agent.sources.wavetronix.fileSuffix = .COMPLETED
# Setting the channel to memory
wagent.channels.memoryChannel2.type = memory
# Max number of events stored in the memory channel
wagent.channels.memoryChannel2.capacity = 50000
agent.channels.memoryChannel2.batchSize = 1000
wagent.channels.memoryChannel2.transactioncapacity = 1000
# Setting the sink to HDFS
wagent.sinks.flumeHDFS.type = hdfs
#agent.sinks.flumeHDFS.useLocalTimeStamp = true
wagent.sinks.flumeHDFS.hdfs.path =/user/root/WAVETRONIXFLUME/%Y-%m-%d/
wagent.sinks.flumeHDFS.hdfs.useLocalTimeStamp = true
wagent.sinks.flumeHDFS.hdfs.filePrefix= %{basename}
wagent.sinks.flumeHDFS.hdfs.fileType = DataStream
# Write format can be text or writable
wagent.sinks.flumeHDFS.hdfs.writeFormat = Text
# use a single csv file at a time
wagent.sinks.flumeHDFS.hdfs.maxOpenFiles = 1
wagent.sinks.flumeHDFS.hdfs.rollCount=0
wagent.sinks.flumeHDFS.hdfs.rollInterval=0
wagent.sinks.flumeHDFS.hdfs.rollSize = 6400000
wagent.sinks.flumeHDFS.hdfs.batchSize =1000
# never rollover based on the number of events
wagent.sinks.flumeHDFS.hdfs.rollCount = 0
# rollover file based on max time of 1 min
#agent.sinks.flumeHDFS.hdfs.rollInterval = 0
# agent.sinks.flumeHDFS.hdfs.idleTimeout = 600
# Connect source and sink with channel
wagent.sources.wavetronix.channels = memoryChannel2
wagent.sinks.flumeHDFS.channel = memoryChannel2
But I am getting the following exception.
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor"
java.lang.OutOfMemoryError: Java heap space
at java.util.concurrent.ConcurrentHashMap.putVal(ConcurrentHashMap.java:1043)
at java.util.concurrent.ConcurrentHashMap.putIfAbsent(ConcurrentHashMap.java:1535)
at java.lang.ClassLoader.getClassLoadingLock(ClassLoader.java:463)
at java.lang.ClassLoader.loadClass(ClassLoader.java:404)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.log4j.spi.LoggingEvent.<init>(LoggingEvent.java:165)
at org.apache.log4j.Category.forcedLog(Category.java:391)
at org.apache.log4j.Category.log(Category.java:856)
at org.slf4j.impl.Log4jLoggerAdapter.warn(Log4jLoggerAdapter.java:479)
at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:461)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
Can anyone help me to solve this problem?
Please edit the file ${FLUME_HOME}/conf/flume-env.sh and add the following line:
export JAVA_OPTS="-Xms1000m -Xmx12000m -Dcom.sun.management.jmxremote"
You can adjust the -Xms and -Xmx values to match the memory available on your machine.
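After saving flume-env.sh, restart the agent so that the new JVM options take effect. A sketch of the launch command, where the config file name is an assumption and the agent name wagent comes from the configuration above:
flume-ng agent --conf ${FLUME_HOME}/conf --conf-file ${FLUME_HOME}/conf/wagent.conf --name wagent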

Flume Multiplexing not working

I have configured my Flume agent as below, but somehow the agent doesn't run properly. It keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header called state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
### # # Define a source on agent and connect to channel memory-channel.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
### Define a memory channel on agent called memory-channel.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
##Define sinks that outputs to hdfs.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
Nevertheless, I think such a configuration will never work since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor.
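A minimal sketch of such an interceptor on source s1 (the regular expression is an assumption about where the state code appears in each line of country.txt; adjust it to your data):
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
foo.sources.s1.interceptors.i1.regex = ^([A-Z]{2}),
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state
With this in place every event carries a state header, so the multiplexing selector can route AZ events to chn-az and everything else to chn-oth.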

Scapy DNS sniff with additional records

I have a Python/Scapy sniffer for DNS.
I am able to sniff DNS messages and get the IP/UDP source and destination addresses and ports, as well as the DNS data, but I have problems parsing and getting the additional answers and additional records when there is more than one.
With ls(DNS), ls(DNSQR) and ls(DNSRR) I can see which DNS fields Scapy exposes, but I do not know how to get the additional records.
I would appreciate some help or a solution to work this out.
My Python/Scapy script is below:
#!/usr/bin/env python
from scapy.all import *
from datetime import datetime
import time
import datetime
import sys
############# MODIFY THIS PART IF NECESSARY ###############
interface = 'eth0'
filter_bpf = 'udp and port 53'
# ------ SELECT/FILTER MSGS
def select_DNS(pkt):
    pkt_time = pkt.sprintf('%sent.time%')
    # ------ SELECT/FILTER DNS MSGS
    try:
        if DNSQR in pkt and pkt.dport == 53:
            # queries
            print '[**] Detected DNS Message at: ' + pkt_time
            p_id = pkt[DNS].id
            cli_ip = pkt[IP].src
            cli_port = pkt.sport
            srv_ip = pkt[IP].dst
            srv_port = pkt.dport
            query = pkt[DNSQR].qname
            q_class = pkt[DNSQR].qclass
            qr_class = pkt[DNSQR].sprintf('%qclass%')
            type = pkt[DNSQR].sprintf('%qtype%')
            #
        elif DNSRR in pkt and pkt.sport == 53:
            # responses
            pkt_time = pkt.sprintf('%sent.time%')
            p_id = pkt[DNS].id
            srv_ip = pkt[IP].src
            srv_port = pkt.sport
            cli_ip = pkt[IP].dst
            cli_port = pkt.dport
            response = pkt[DNSRR].rdata
            r_class = pkt[DNSRR].rclass
            rr_class = pkt[DNSRR].sprintf("%rclass%")
            type = pkt[DNSRR].sprintf("%type%")
            ttl = pkt[DNSRR].ttl
            len = pkt[DNSRR].rdlen
            #
            print response
    except:
        pass
# ------ START SNIFFER
sniff(iface=interface, filter=filter_bpf, store=0, prn=select_DNS)
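To read every answer and additional record when there is more than one, one option (a sketch that drops into the script above and relies on its from scapy.all import *) is to walk the record chains that this version of Scapy stores in dns.an and dns.ar; the records in a section are linked through .payload, and ancount/arcount tell you how many to expect:
def dump_rr_sections(pkt):
    # Walk the answer and additional-record chains of a DNS packet.
    dns = pkt[DNS]
    for label, rr, count in (('answer', dns.an, dns.ancount),
                             ('additional', dns.ar, dns.arcount)):
        for i in range(count):
            if not hasattr(rr, 'rrname'):
                break  # ran past the end of the chain
            print label, i, rr.rrname, rr.sprintf('%type%'), getattr(rr, 'rdata', None)
            rr = rr.payload
Calling dump_rr_sections(pkt) in the DNSRR branch, right after print response, prints one line per record instead of only the first rdata.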

Graphite carbon-relay not working

I have two Graphite setups and I am trying to relay traffic between the two, but somehow carbon-relay is not working.
My cache runs on 2003/2004 and the relay on 2013/2014.
These are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers would be helpful.
As part of a recent project at work, I figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destinations of carbon-relay should point to carbon-cache's pickle receiver port (2004 here) instead of the line receiver port.
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify relay-rules.conf so it uses the same destinations specified in carbon.conf:
relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
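One quick way to check the relay end to end after the change is to push a single test metric into the relay's line receiver and see whether it shows up behind both caches. A sketch in Python, assuming the relay listens on localhost:2013 as configured above:
import socket
import time

# Graphite plaintext protocol: "<metric path> <value> <unix timestamp>\n"
sock = socket.create_connection(("localhost", 2013))
sock.sendall(("test.relay.metric 42 %d\n" % int(time.time())).encode())
sock.close()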

Wireshark does not get Scapy Modbus response

I'm running the code example below against a remote PLC with Wireshark capturing. Why do I only see the query (I should see the response too)? The PLC does seem to send the response, since Scapy's output says Received 1 packets, got 1 answers, remaining 0 packets.
Any ideas why this is happening?
I also performed the sniffing using Scapy's sniff() function, but the result is the same (I only get the query).
#! /usr/bin/env python
import logging
logging.getLogger("scapy").setLevel(1)
from scapy import *
from modLib import *
# IP for all transmissions
ip = IP(dst="192.168.10.131")
# Sets up the session with a TCP three-way handshake
# Send the syn, receive the syn/ack
tcp = TCP( flags = 'S', window = 65535, sport = RandShort(), dport = 502, options = [('MSS', 1360 ), ('NOP', 1), ('NOP', 1), ('SAckOK', '')])
synAck = sr1 ( ip / tcp )
# Send the ack
tcp.flags = 'A'
tcp.sport = synAck[TCP].dport
tcp.seq = synAck[TCP].ack
tcp.ack = synAck[TCP].seq + 1
tcp.options = ''
send( ip / tcp )
# Creates and sends the Modbus Read Holding Registers command packet
# Send the ack/push i.e. the request, receive the data i.e. the response
tcp.flags = 'AP'
adu = ModbusADU()
pdu = ModbusPDU03()
adu = adu / pdu
tcp = tcp / adu
data = sr1(( ip / tcp ), timeout = 2)
data.show()
# Acknowledges the response
# Ack the data response
# TODO: note, the 17 below should be replaced with a read packet length method...
tcp.flags = 'A'
tcp.seq = data[TCP].ack
tcp.ack = data[TCP] + 17
tcp.payload = ''
finAck = sr1( ip / tcp )
First, you have a bug in your code (which is also present in the original version at http://www.digitalbond.com/scadapedia/security-controls/scapy-modbus-extensions/): you need to add .seq there: tcp.ack = data[TCP].seq + 17.
As said in the comment, you could write tcp.ack = data[TCP].seq + len(data[TCP].payload).
Anyway, it is generally pointless to redo the TCP stack's work by hand for this kind of task.
I would do something like this:
from scapy import *
from modLib import *
import socket
sock = socket.socket()
sock.connect(("192.168.10.131", 502))
s = StreamSocket(sock, basecls=ModbusADU)
ans, unans = s.sr(ModbusADU()/ModbusPDU03())
ans.show()
Does this work better?
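If the PLC answers, s.sr() returns the usual Scapy (answered, unanswered) pair, so the reply can be inspected directly; a quick sketch, assuming a single request/response pair as above:
query, reply = ans[0]
reply.show()  # the ModbusADU returned by the PLC, dissected thanks to basecls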
