Scapy DNS sniff with additional records - parsing

I have a Python/Scapy sniffer for DNS.
I am able to sniff DNS messages and get the IP/UDP source and destination addresses and ports, as well as the DNS layer, but I have problems parsing the answer and additional records when there is more than one.
From ls(DNS), ls(DNSQR) and ls(DNSRR) I can see which DNS fields are available, but I do not know how to get the additional records.
I would appreciate some help or a solution to work this out.
My Python/Scapy script is below:
#!/usr/bin/env python
from scapy.all import *
from datetime import datetime
import time
import sys
############# MODIFY THIS PART IF NECESSARY ###############
interface = 'eth0'
filter_bpf = 'udp and port 53'
# ------ SELECT/FILTER MSGS
def select_DNS(pkt):
    pkt_time = pkt.sprintf('%sent.time%')
    # ------ SELECT/FILTER DNS MSGS
    try:
        if DNSQR in pkt and pkt.dport == 53:
            # queries
            print '[**] Detected DNS Message at: ' + pkt_time
            p_id = pkt[DNS].id
            cli_ip = pkt[IP].src
            cli_port = pkt.sport
            srv_ip = pkt[IP].dst
            srv_port = pkt.dport
            query = pkt[DNSQR].qname
            q_class = pkt[DNSQR].qclass
            qr_class = pkt[DNSQR].sprintf('%qclass%')
            q_type = pkt[DNSQR].sprintf('%qtype%')   # renamed from 'type' to avoid shadowing the builtin
        elif DNSRR in pkt and pkt.sport == 53:
            # responses
            pkt_time = pkt.sprintf('%sent.time%')
            p_id = pkt[DNS].id
            srv_ip = pkt[IP].src
            srv_port = pkt.sport
            cli_ip = pkt[IP].dst
            cli_port = pkt.dport
            response = pkt[DNSRR].rdata
            r_class = pkt[DNSRR].rclass
            rr_class = pkt[DNSRR].sprintf("%rclass%")
            r_type = pkt[DNSRR].sprintf("%type%")    # renamed from 'type' to avoid shadowing the builtin
            ttl = pkt[DNSRR].ttl
            rd_len = pkt[DNSRR].rdlen                # renamed from 'len' to avoid shadowing the builtin
            print response
    except Exception:
        # malformed packets are silently ignored
        pass
# ------ START SNIFFER
sniff(iface=interface, filter=filter_bpf, store=0, prn=select_DNS)
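To get at every answer, authority and additional record, you can walk the an, ns and ar sections of the DNS layer using the ancount, nscount and arcount fields. Below is a minimal sketch (the helper names iter_rrs and print_all_records are just illustrative; depending on the Scapy version the sections are either plain lists of records or chains linked through .payload, so the helper tries to handle both):
# Iterate over one record section, whether it is a list (newer Scapy)
# or a chain of records linked through .payload (older Scapy).
def iter_rrs(section, count):
    if isinstance(section, list):
        for rr in section[:count]:
            yield rr
    else:
        rr = section
        for _ in range(count):
            if rr is None or isinstance(rr, NoPayload):
                break
            yield rr
            rr = rr.payload

def print_all_records(pkt):
    dns = pkt[DNS]
    for label, section, count in (('answer', dns.an, dns.ancount),
                                  ('authority', dns.ns, dns.nscount),
                                  ('additional', dns.ar, dns.arcount)):
        for rr in iter_rrs(section, count):
            # not every record class carries rdata (e.g. SOA), hence getattr
            print label, rr.rrname, rr.sprintf('%type%'), getattr(rr, 'rdata', '')
In the elif DNSRR in pkt branch of select_DNS you would then call print_all_records(pkt) instead of reading only the first pkt[DNSRR].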

Related

Gatling to InfluxDB connection in Windows

I am using Gatling and InfluxDB on Windows 10. I am trying to send some results from Gatling to InfluxDB, but the results are not being pushed to InfluxDB. Can someone help me?
My graphite config file:
data {
  #writers = [console, file, graphite]
  console {
    #light = false
    #writePeriod = 5
  }
  file {
    bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
  }
  leak {
    #noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
  }
  graphite {
    # light = false             # only send the all* stats
    host = "localhost"          # The host where the Carbon server is located
    port = 2003                 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp"            # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling"  # The common prefix of all metrics sent to Graphite
    bufferSize = 8192           # Internal data buffer size, in bytes
    writePeriod = 1             # Write period, in seconds
  }
}
My InfluxDB config file is:
[[graphite]]
enabled = true
database = "gatlingdb"
retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
udp-read-buffer = 0
separator = "."
templates = [
"gatling.*.*.*.count measurement.simulation.request.status.field",
"gatling.*.*.*.min measurement.simulation.request.status.field",
"gatling.*.*.*.max measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles50 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles75 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles95 measurement.simulation.request.status.field",
"gatling.*.*.*.percentiles99 measurement.simulation.request.status.field"
]
Not sure why it is not working.
Uncomment the writers line, i.e. change #writers = [console, file, graphite] to writers = [console, file, graphite] in your gatling.conf, so that the graphite data writer is actually enabled (see the snippet below).
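With that single change, the top of the data block reads (everything else stays as above):
data {
  writers = [console, file, graphite]
  # ... rest of the block unchanged ...
}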

How to fetch all the records every minute from a SQL table using Apache Flume

I am trying to get all the data from a SQL table every minute using Flume.
Can someone please suggest what config changes need to be made?
Configs:
agent.channels = ch1
agent.sinks = kafkaSink
agent.sources = sql-source
agent.channels.ch1.type = memory
agent.channels.ch1.capacity = 1000000
agent.sources.sql-source.channels = ch1
agent.sources.sql-source.type = org.keedio.flume.source.SQLSource
# URL to connect to database
agent.sources.sql-source.connection.url = jdbc:sybase:Tds:abcServer:4500
# Database connection properties
agent.sources.sql-source.user = user
agent.sources.sql-source.password = XXXXXXX
agent.sources.sql-source.table = person
agent.sources.sql-source.columns.to.select = *
# Increment column properties
agent.sources.sql-source.incremental.column.name = person_id
# Incremental value is the point from which you want to start taking data from the table (0 will import the entire table)
agent.sources.sql-source.incremental.value = 0
# Query delay: the query will be sent every configured number of milliseconds
agent.sources.sql-source.run.query.delay=1000
# The status file is used to save the last read row
agent.sources.sql-source.status.file.path = /dump/apache-flume-1.6.0-bin
agent.sources.sql-source.status.file.name = sql-source.status
Change the value of agent.sources.sql-source.run.query.delay to 60000, i.e. run the query once every 60 seconds.
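In the source configuration above, that means:
# one minute, expressed in milliseconds
agent.sources.sql-source.run.query.delay = 60000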

Flume Multiplexing not working

I have configured my Flume agent as below. Somehow, the agent doesn't run properly; it keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header called state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
### Define a source on the agent and connect it to the memory channels.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
### Define a memory channel on agent called memory-channel.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
##Define sinks that outputs to hdfs.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
Nevertheless, I think such a configuration will never work, since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor; a sketch follows.
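A rough sketch of such an interceptor on the source (the regex is hypothetical and assumes each line of country.txt begins with a two-letter state code followed by a comma; adjust it to the actual file format):
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
# hypothetical pattern: capture the leading state code into the "state" header
foo.sources.s1.interceptors.i1.regex = ^([A-Z]{2}),
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state
With that in place, events whose state header is AZ are routed to chn-az and everything else falls through to chn-oth via the selector's default mapping.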

Graphite carbon-relay not working

I have two Graphite setups and I am trying to relay traffic between the two, but somehow carbon-relay is not working.
My cache runs on 2003/2004 and the relay on 2013/2014.
The following are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers would be helpful.
As part of a recent project at work, I figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destinations of carbon-relay should point to carbon-cache's pickle receiver port instead.
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify the relay-rules.conf with the same destinations specified in carbon.conf
relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b

Wireshark does not get Scapy Modbus response

I'm running the code example below against a remote PLC with Wireshark running. Why do I only get the query (I should get the response too)? It seems that the PLC sends the response, since the output of Scapy says Received 1 packets, got 1 answers, remaining 0 packets.
Any ideas why this is happening?
I also performed the sniffing using the sniff() function from Scapy, but the result is the same (I only get the query).
#! /usr/bin/env python
import logging
logging.getLogger("scapy").setLevel(1)
from scapy import *
from modLib import *
# IP for all transmissions
ip = IP(dst="192.168.10.131")
# Sets up the session with a TCP three-way handshake
# Send the syn, receive the syn/ack
tcp = TCP( flags = 'S', window = 65535, sport = RandShort(), dport = 502, options = [('MSS', 1360 ), ('NOP', 1), ('NOP', 1), ('SAckOK', '')])
synAck = sr1 ( ip / tcp )
# Send the ack
tcp.flags = 'A'
tcp.sport = synAck[TCP].dport
tcp.seq = synAck[TCP].ack
tcp.ack = synAck[TCP].seq + 1
tcp.options = ''
send( ip / tcp )
# Creates and sends the Modbus Read Holding Registers command packet
# Send the ack/push i.e. the request, receive the data i.e. the response
tcp.flags = 'AP'
adu = ModbusADU()
pdu = ModbusPDU03()
adu = adu / pdu
tcp = tcp / adu
data = sr1(( ip / tcp ), timeout = 2)
data.show()
# Acknowledges the response
# Ack the data response
# TODO: note, the 17 below should be replaced with a read packet length method...
tcp.flags = 'A'
tcp.seq = data[TCP].ack
tcp.ack = data[TCP] + 17
tcp.payload = ''
finAck = sr1( ip / tcp )
First, you have a bug in your code (which is present in the original version http://www.digitalbond.com/scadapedia/security-controls/scapy-modbus-extensions/), you need to add .seq there: tcp.ack = data[TCP].seq + 17.
As said in the comment, you could write tcp.ack = data[TCP].seq + len(data[TCP].payload).
Anyway, it is generally pointless to do the TCP stack's work yourself for this kind of thing.
I would do something like this:
from scapy import *
from modLib import *
import socket
sock = socket.socket()
sock.connect(("192.168.10.131", 502))
s = StreamSocket(sock, basecls=ModbusADU)
ans, unans = s.sr(ModbusADU()/ModbusPDU03())
ans.show()
Does this work better?
