Graphite carbon-relay not working

I have two Graphite setups and I am trying to relay traffic between the two, but somehow carbon-relay is not working.
My cache runs on ports 2003/2004 and the relay on 2013/2014.
These are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers will be helpful.

As part of a recent project at work, I figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destinations of the carbon-relay should point at the carbon-cache's pickle receiver port instead.
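For reference, a datapoint on the pickle port is a 4-byte big-endian length header followed by a pickled list of (path, (timestamp, value)) tuples, which a plaintext receiver cannot parse. A minimal sketch of sending one datapoint in that format (host, port and metric name are illustrative):
import pickle
import socket
import struct
import time

# Carbon's pickle format: a list of (metric_path, (timestamp, value)) tuples,
# prefixed with the payload length packed as a 4-byte big-endian unsigned int.
datapoints = [("test.pickle.metric", (int(time.time()), 42.0))]
payload = pickle.dumps(datapoints, protocol=2)
header = struct.pack("!L", len(payload))

sock = socket.create_connection(("127.0.0.1", 2004))  # cache's PICKLE_RECEIVER_PORT
sock.sendall(header + payload)
sock.close()
With that in mind, the corrected relay section: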
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify relay-rules.conf with the same destinations specified in carbon.conf:
relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
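To verify the whole chain, you can push a test metric into the relay's line receiver and check that it shows up on both caches. A minimal sketch using the plaintext protocol (metric name and host are arbitrary):
import socket
import time

# Plaintext protocol: one "<metric.path> <value> <timestamp>\n" line per datapoint.
line = "test.relay.metric 1 %d\n" % int(time.time())

sock = socket.create_connection(("127.0.0.1", 2013))  # relay's LINE_RECEIVER_PORT
sock.sendall(line.encode())
sock.close()
If the relay is forwarding correctly, a whisper file for test.relay.metric should appear under both destinations.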

Related

The first few messages are lost when transmitted to MQTT clients that were offline

I have a VerneMQ server and MQTT clients using the paho MQTT library (with Python or C, it doesn't matter). Both subscribers and publishers use QoS 2 and clean_session == False. The problem: while the subscriber is offline, I send some messages and some of them are lost. After a detailed study of the parameters, I found out that the first max_inflight_messages are lost. Here is what I mean: in the config file vernemq.conf I set max_inflight_messages = 20 (the default). The subscriber goes offline, I send 21 messages, the subscriber comes back online, and the first 20 are lost while the 21st is delivered. I tried it many times with different numbers of messages, with the same result: the first 20 messages are lost, and from 21 on they are received. When I try max_inflight_messages = 1, the first message is lost and the others are received. Any ideas? My vernemq.conf file:
allow_anonymous = on
allow_register_during_netsplit = off
allow_publish_during_netsplit = off
allow_subscribe_during_netsplit = off
allow_unsubscribe_during_netsplit = off
allow_multiple_sessions = off
coordinate_registrations = on
max_inflight_messages = 20
max_online_messages = 1000
max_offline_messages = 1000
max_message_size = 0
upgrade_outgoing_qos = off
listener.max_connections = 10000
listener.nr_of_acceptors = 10
listener.tcp.default = 0.0.0.0:1883
listener.vmq.clustering = 0.0.0.0:44053
listener.http.default = 0.0.0.0:8888
systree_enabled = on
systree_interval = 20000
graphite_enabled = off
graphite_host = localhost
graphite_port = 2003
graphite_interval = 20000
shared_subscription_policy = prefer_local
plugins.vmq_passwd = off
plugins.vmq_acl = on
plugins.vmq_diversity = off
plugins.vmq_webhooks = off
plugins.vmq_bridge = off
metadata_plugin = vmq_plumtree
vmq_acl.acl_file = ./etc/vmq.acl
vmq_acl.acl_reload_interval = 10
vmq_passwd.password_file = ./etc/vmq.passwd
vmq_passwd.password_reload_interval = 10
vmq_diversity.script_dir = ./share/lua
vmq_diversity.auth_postgres.enabled = off
vmq_diversity.postgres.ssl = off
vmq_diversity.postgres.password_hash_method = crypt
vmq_diversity.auth_cockroachdb.enabled = off
vmq_diversity.cockroachdb.ssl = on
vmq_diversity.cockroachdb.password_hash_method = bcrypt
vmq_diversity.auth_mysql.enabled = off
vmq_diversity.mysql.password_hash_method = password
vmq_diversity.auth_mongodb.enabled = off
vmq_diversity.mongodb.ssl = off
vmq_diversity.auth_redis.enabled = off
vmq_bcrypt.pool_size = 1
log.console = both
log.console.level = debug
log.console.file = ./log/console.log
log.error.file = ./log/error.log
log.syslog = off
log.crash = on
log.crash.file = ./log/crash.log
log.crash.maximum_message_size = 64KB
log.crash.size = 10MB
log.crash.rotation = $D0
log.crash.rotation.keep = 5
nodename = VerneMQ@127.0.0.1
distributed_cookie = vmq
erlang.async_threads = 64
erlang.max_ports = 262144
leveldb.maximum_memory.percent = 70
The problem was in the paho MQTT library. When the client connects to the broker it immediately receives all queued messages, but the handlers for those messages were only assigned when it subscribed to the concrete topic, so everything delivered before the subscribe call was dropped.
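In other words, the fix is to register the message handler and (re)subscribe before the broker can start replaying the queued messages. A minimal sketch with the paho-mqtt 1.x API; the broker address and topic are placeholders:
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # (Re)subscribe inside on_connect so the subscription exists as soon
    # as the broker starts replaying queued QoS 2 messages.
    client.subscribe("some/topic", qos=2)

def on_message(client, userdata, msg):
    # Catch-all handler, assigned before connect(), so no early message is dropped.
    print(msg.topic, msg.payload)

client = mqtt.Client(client_id="subscriber-1", clean_session=False)
client.on_connect = on_connect
client.on_message = on_message
client.connect("127.0.0.1", 1883)
client.loop_forever()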

Gatling to InfluxDB connection in Windows

I am using Gatling and InfluxDB on Windows 10 and am trying to send results from Gatling to InfluxDB, but the results are not being pushed. Can someone help me?
My Gatling config file (graphite section):
data {
  #writers = [console, file, graphite]
  console {
    #light = false
    #writePeriod = 5
  }
  file {
    bufferSize = 8192 # FileDataWriter's internal data buffer size, in bytes
  }
  leak {
    #noActivityTimeout = 30 # Period, in seconds, for which Gatling may have no activity before considering a leak may be happening
  }
  graphite {
    #light = false # only send the all* stats
    host = "localhost" # The host where the Carbon server is located
    port = 2003 # The port to which the Carbon server listens to (2003 is default for plaintext, 2004 is default for pickle)
    protocol = "tcp" # The protocol used to send data to Carbon (currently supported : "tcp", "udp")
    rootPathPrefix = "gatling" # The common prefix of all metrics sent to Graphite
    bufferSize = 8192 # Internal data buffer size, in bytes
    writePeriod = 1 # Write period, in seconds
  }
}
My InfluxDB config file is:
[[graphite]]
enabled = true
database = "gatlingdb"
retention-policy = ""
bind-address = ":2003"
protocol = "tcp"
consistency-level = "one"
batch-size = 5000
batch-pending = 10
batch-timeout = "1s"
udp-read-buffer = 0
separator = "."
templates = [
  "gatling.*.*.*.count measurement.simulation.request.status.field",
  "gatling.*.*.*.min measurement.simulation.request.status.field",
  "gatling.*.*.*.max measurement.simulation.request.status.field",
  "gatling.*.*.*.percentiles50 measurement.simulation.request.status.field",
  "gatling.*.*.*.percentiles75 measurement.simulation.request.status.field",
  "gatling.*.*.*.percentiles95 measurement.simulation.request.status.field",
  "gatling.*.*.*.percentiles99 measurement.simulation.request.status.field"
]
Not sure why it is not working.
Uncomment the line #writers = [console, file, graphite] in your Gatling config; while it is commented out, the graphite writer is never enabled, so nothing is sent to Carbon/InfluxDB.
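That is, the top of the data section should look like this (the rest stays as above):
data {
  writers = [console, file, graphite]
  ...
}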

How to connect to PDB in Oracle 12c

I'm running a fresh installation of Oracle 12c on Solaris 10 and I can connect to the CDB using Toad just fine. Please tell me how I can now connect to the PDB database named PDBORCL, as described in this guide: https://oracle-base.com/articles/12c/multitenant-connecting-to-cdb-and-pdb-12cr1
These are the contents of my tnsnames.ora file:
# tnsnames.ora Network Configuration File: /bkofa/oracle/app/oracle/product/12.1.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
ORCL12 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = afxortsts)(PORT = 1523))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl12)
    )
  )
pdbORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = afxortsts)(PORT = 1523))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdborcl)
    )
  )
Here are the contents of my listener.ora file:
# listener.ora Network Configuration File: /bkofa/oracle/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
# Generated by Oracle configuration tools.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = afxortsts)(PORT = 1523))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orcl12)
      (SID_NAME = orcl12)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = pdborcl)
      (SID_NAME = pdborcl)
    )
  )
These are the containers by the way:
SELECT name, pdb
FROM v$services
ORDER BY name;
NAME             PDB
---------------  ---------
SYS$BACKGROUND   CDB$ROOT
SYS$USERS        CDB$ROOT
orcl12           CDB$ROOT
orcl12XDB        CDB$ROOT
pdborcl          PDBORCL
Still when I try to connect to PDB using any combination of commands this is what I get:
bash-3.2$ lsnrctl status
LSNRCTL for Solaris: Version 12.1.0.2.0 - Production on 13-APR-2016 15:42:28
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=afxortsts)(PORT=1523)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Solaris: Version 12.1.0.2.0 - Production
Start Date 12-APR-2016 13:56:56
Uptime 1 days 1 hr. 45 min. 36 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /bkofa/oracle/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
Listener Log File /bkofa/oracle/app/oracle/diag/tnslsnr/afxortsts/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=afxortsts)(PORT=1523)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "orcl12" has 1 instance(s).
Instance "orcl12", status UNKNOWN, has 1 handler(s) for this service...
Service "pdborcl" has 1 instance(s).
Instance "pdborcl", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
bash-3.2$ sqlplus '/ as sysdba'
SQL*Plus: Release 12.1.0.2.0 Production on Wed Apr 13 15:42:44 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> connect sys/oracle123@172.16.1.118:1523/pdborcl as sysdba
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect sys@pdborcl
Enter password:
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
SVR4 Error: 2: No such file or directory
Additional information: 2581
Additional information: -2057892281
Process ID: 0
Session ID: 0 Serial number: 0
SQL> connect sys@172.16.1.118:1523/pdborcl as sysdba
ERROR:
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA
SQL>
Oh, I should make this clear: I'm using port 1523 because an older Oracle 10g instance is already running on this system, and I wanted to avoid any port conflict with it.
You should not declare the services in SID_LIST_LISTENER, especially pdborcl, which is not an instance but a service within the instance. So remove this part:
(SID_DESC =
(GLOBAL_DBNAME = pdborcl)
(SID_NAME = pdborcl)
)
The instance should register itself with the listener. If it has not, run the following while connected to the CDB:
alter system set local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=afxortsts)(PORT=1523)))';
alter system register;
Below is my config, which works:
listener.ora:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1525))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1525))
    )
  )
tnsnames.ora:
LISTENER_CATCDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1526))
# CDB
CATCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1526))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = catcdb)
    )
  )
# PDB
CATDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1526))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = catdb)
    )
  )
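With the service registered, the PDB is reached through its service name rather than a SID. For example, using the names from my config above (the asker's setup would use pdborcl instead of catdb):
sqlplus sys@CATDB as sysdba
Or, from a session already connected to the CDB root:
alter session set container = CATDB;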

Flume Multiplexing not working

I have configured my Flume agent as below. Somehow the agent doesn't run properly; it keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header named state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
### Define a source on the agent and connect it to the channels.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
### Define a memory channel on agent called memory-channel.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
## Define sinks that output to HDFS.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
Nevertheless, I think such a configuration will never work since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor.
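For example, assuming each line of country.txt starts with a two-letter state code followed by a comma, a regex_extractor interceptor could populate the state header like this (the regex is illustrative and must match your actual data):
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
foo.sources.s1.interceptors.i1.regex = ^([A-Z]{2}),
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state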

scapy dns sniff with additional records

I have a Python/scapy sniffer for DNS.
I am able to sniff DNS messages and get the IP/UDP source and destination addresses and ports, as well as the DNS fields, but I have problems parsing additional answers and additional records when there is more than one.
From ls(DNS), ls(DNSQR) and ls(DNSRR) I can see which DNS fields scapy exposes, but I do not know how to get at the additional records.
I would appreciate some help or a solution to work this out.
My Python/scapy script is below:
#!/usr/bin/env python
from scapy.all import *

############# MODIFY THIS PART IF NECESSARY ###############
interface = 'eth0'
filter_bpf = 'udp and port 53'

# ------ SELECT/FILTER MSGS
def select_DNS(pkt):
    pkt_time = pkt.sprintf('%sent.time%')
    # ------ SELECT/FILTER DNS MSGS
    try:
        if DNSQR in pkt and pkt.dport == 53:
            # queries
            print '[**] Detected DNS Message at: ' + pkt_time
            p_id = pkt[DNS].id
            cli_ip = pkt[IP].src
            cli_port = pkt.sport
            srv_ip = pkt[IP].dst
            srv_port = pkt.dport
            query = pkt[DNSQR].qname
            q_class = pkt[DNSQR].qclass
            qr_class = pkt[DNSQR].sprintf('%qclass%')
            q_type = pkt[DNSQR].sprintf('%qtype%')
        elif DNSRR in pkt and pkt.sport == 53:
            # responses: these fields only cover the FIRST resource record
            p_id = pkt[DNS].id
            srv_ip = pkt[IP].src
            srv_port = pkt.sport
            cli_ip = pkt[IP].dst
            cli_port = pkt.dport
            response = pkt[DNSRR].rdata
            r_class = pkt[DNSRR].rclass
            rr_class = pkt[DNSRR].sprintf("%rclass%")
            r_type = pkt[DNSRR].sprintf("%type%")
            ttl = pkt[DNSRR].ttl
            rd_len = pkt[DNSRR].rdlen
            print response
    except Exception:
        # ignore packets that don't parse as expected
        pass

# ------ START SNIFFER
sniff(iface=interface, filter=filter_bpf, store=0, prn=select_DNS)
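To get at all answer, authority and additional records rather than only the first DNSRR, you can walk the packet using the section counts from the DNS header. A minimal sketch (show_all_rrs is my own helper name; it assumes each record is parsed as a DNSRR, which may not hold for e.g. EDNS OPT records):
def show_all_rrs(pkt):
    dns = pkt[DNS]
    # ancount, nscount and arcount are the record counts of the answer,
    # authority and additional sections respectively.
    total = dns.ancount + dns.nscount + dns.arcount
    for i in range(total):
        rr = pkt.getlayer(DNSRR, i + 1)  # getlayer() is 1-indexed
        if rr is None:
            break
        print rr.rrname, rr.sprintf('%type%'), rr.ttl, rr.rdata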
