Ganglia: no nodes are shown in the Ganglia web UI (CentOS 7)

I installed the Ganglia server and client on the same machine, but once everything was up, no nodes appeared in the web UI. Neither Google nor Baidu turned up a solution to this problem. I need help.
This is my gmetad.conf:
[root@tools etc]# egrep -v "^#|^$" gmetad.conf
data_source "trainor" localhost 127.0.0.1
setuid_username "apache"
rrd_rootdir "/var/lib/ganglia/rrds"
case_sensitive_hostnames 0
Here is my gmond.conf:
[root@tools etc]# egrep -v "^#|^$" gmond.conf
globals {
user = apache
}
cluster {
name = "trainor"
owner = "apache"
latlong = "unspecified"
url = "unspecified"
}
udp_recv_channel {
port = 8649
}
tcp_accept_channel {
port = 8649
}

Do you have a udp_send_channel set? In my experience (with 3.1.7), gmond doesn't report a node's own stats over the TCP channel (XML reporting) unless it receives them over UDP (raw stats collection).
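A minimal sketch of what that could look like in gmond.conf, assuming everything runs on the one machine as described (the loopback address is an assumption; in a real cluster you would point this at the collector node or a multicast group):
udp_send_channel {
  # send this node's own metrics back to its own udp_recv_channel
  host = 127.0.0.1
  port = 8649
}
With that in place, gmond feeds its own stats in over UDP, so they appear in the XML it serves on the TCP port.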
You can use gstat to connect to gmond and see what it is outputting, or netcat to the TCP port:
nc node1.domain.com 8649
I found these pages the most useful:
https://github.com/ganglia/monitor-core/wiki/Ganglia-Quick-Start
http://timstaley.co.uk/posts/ganglia-setup-explained/


MQTT5 User Properties with Mosquitto Bridge

I am running a local Mosquitto (MQTT) broker that connects to a remote Mosquitto broker using the built-in MQTT bridge functionality of Mosquitto. My mosquitto.conf looks like this:
# =================================================================
# Listeners
# =================================================================
listener 1883
# =================================================================
# Security
# =================================================================
allow_anonymous true
# =================================================================
# Bridges
# =================================================================
connection myConnectionName
address <<Remote Broker IP>>:1883
remote_username <<Remote Broker Username>>
remote_password <<Remote Broker Password>>
topic mytopic/# out 1 "" B2/
bridge_protocol_version mqttv50
cleansession false
bridge_attempt_unsubscribe true
upgrade_outgoing_qos true
max_queued_messages 5000
For testing I run a MqttPublisher, a C# console application that uses the MQTTnet library (version 3), and a MqttSubscriber (also a C# console application with MQTTnet).
Now I want the Publisher to publish MQTT messages with User Properties (introduced by MQTT 5).
I build the message like this:
using System;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Client.Options;

class Program
{
    static void Main()
    {
        // Create a new MQTT client instance
        var factory = new MqttFactory();
        var mqttClient = factory.CreateMqttClient();

        // Set up the options for the MQTT client
        var options = new MqttClientOptionsBuilder()
            .WithClientId("MqttPublisher")
            .WithTcpServer("localhost", 1883)
            .WithProtocolVersion(MQTTnet.Formatter.MqttProtocolVersion.V500)
            .WithCleanSession()
            .Build();

        mqttClient.ConnectAsync(options).Wait();

        var i = 0;
        while (true)
        {
            Console.WriteLine("Client connected: " + mqttClient.IsConnected);

            // Build a message carrying an MQTT 5 user property
            var message = new MqttApplicationMessageBuilder()
                .WithTopic("mytopic/test")
                .WithUserProperty("UPTest", "Hi UP")
                .WithPayload("Hello World: " + i)
                .Build();

            mqttClient.PublishAsync(message).Wait();
            Console.WriteLine("Published message with payload: " + System.Text.Encoding.UTF8.GetString(message.Payload));

            i++;
            System.Threading.Thread.Sleep(1000);
        }

        // Never reached while the loop above runs forever
        mqttClient.DisconnectAsync().Wait();
    }
}
With the subscriber (also configured with WithProtocolVersion(MQTTnet.Formatter.MqttProtocolVersion.V500)), I get all the messages when I subscribe to the topic, and the received MQTTnet.MqttApplicationMessage contains the user property.
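For reference, a minimal sketch of how the subscriber can read those properties with MQTTnet v3 (the handler body is illustrative, not my actual subscriber code):
mqttClient.UseApplicationMessageReceivedHandler(e =>
{
    // MQTT 5 user properties arrive as a list of name/value pairs
    var props = e.ApplicationMessage.UserProperties;
    if (props != null)
    {
        foreach (var up in props)
            Console.WriteLine(up.Name + " = " + up.Value); // e.g. "UPTest = Hi UP"
    }
});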
The messages are also published to the remote MQTT broker thanks to the configured MQTT bridge. However, if I subscribe to the remote broker with my MqttSubscriber, I am not getting the user properties anymore.
Is there any way to configure the Mosquitto bridge so that the user properties are sent as well? I can't find a way, and any help and comments are appreciated.
Thanks
Joshua
Using mosquitto 2.0.15, I have verified that MQTT v5 message properties are passed over the bridge.
Broker one.conf
listener 1883
allow_anonymous true
Broker two.conf
listener 1884
allow_anonymous true
connection one
address 127.0.0.1:1883
topic foo/# both 0
bridge_protocol_version mqttv50
Subscriber connected to Broker two
$ mosquitto_sub -t 'foo/#' -V mqttv5 -p 1884 -F "%t %P %p"
foo/bar foo:bar ben
Publisher connected to Broker one
$ mosquitto_pub -D PUBLISH user-property foo bar -t 'foo/bar' -m ben -p 1883
As you can see, the %P in the subscriber's output format shows the user property foo with a value of bar when subscribed over the bridge.

Why does FreeRADIUS fail to process the accounting response from a Fortigate?

I have configured a FreeRADIUS proxy (3.0.16) on Ubuntu (4.15.0-47-generic). It receives RADIUS accounting packets from another RADIUS server running on Ubuntu and forwards them to a RADIUS server running on a Fortigate.
Radius Server ---> Proxy Radius Server ---> Fortigate Radius Server
I have configured copy-acct-to-home-server to use the realm defined in proxy.conf.
proxy.conf (realm definition)
home_server myFortigate {
type = acct
ipaddr = <IP address of Fortigate Interface Running Radius>
port = 1813
secret = superSecret
}
home_server_pool myFortigatePool {
type = fail-over
home_server = myFortigate
}
realm myFortigateRealm {
acct_pool = myFortigatePool
nostrip
}
copy-acct-to-home-server entry
preacct {
preprocess
update control {
Proxy-To-Realm := myFortigateRealm
}
suffix
}
After starting freeradius -X, I also run tcpdump from a new session:
tcpdump -ni eth01 port 1812 or port 1813
and get the following log:
15:03:40.225570 IP RADIUS_PROXY_IP.56813 > FORTIGATE_INTERFACE_IP.1813: RADIUS, Accounting-Request (4), id: 0x31 length: 371
15:03:40.236155 IP FORTIGATE_INTERFACE_IP.1813 > RADIUS_PROXY_IP.56813: RADIUS, Accounting-Response (5), id: 0x31 length: 27
This basically shows that the proxy is sending the accounting request to the Fortigate RADIUS server and receiving the accounting response.
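As a further isolation step, a test accounting request can be hand-crafted with radclient (a sketch; the port and secret come from the home_server definition above):
echo "Acct-Status-Type = Start" | radclient -x FORTIGATE_INTERFACE_IP:1813 acct superSecret
radclient verifies the signature on the reply, so this can reveal whether the Fortigate is signing its responses with the expected shared secret.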
But strangely, the freeradius -X debug output shows a request timeout for the same RADIUS server on the Fortigate, and it ultimately tags the server as a zombie:
Starting proxy to home server FORTIGATE_INTERFACE_IP port 1813
(14) Proxying request to home server FORTIGATE_INTERFACE_IP port 1813 timeout 30.000000
Waking up in 0.3 seconds.
(14) Expecting proxy response no later than 29.667200 seconds from now
Waking up in 3.5 seconds.
and finally it gives up:
(25) accounting {
(25) [ok] = ok
(25) } # accounting = ok
(25) ERROR: Failed to find live home server: Cancelling proxy
(25) WARNING: No home server selected
(25) Clearing existing &reply: attributes
(25) Found Post-Proxy-Type Fail-Accounting
(25) Post-Proxy-Type sub-section not found. Ignoring.
So the situation is: the RADIUS proxy is sending accounting packets to the Fortigate RADIUS server (visible in both the FreeRADIUS and Fortigate logs), and tcpdump shows that the proxy is receiving the accounting response from the Fortigate, but for some reason the freeradius process doesn't recognize (or cannot read) the accounting response. It may be an interoperability issue, or I may have missed setting some flag. I'm requesting help from the experts to isolate and rectify the issue.

Source data from syslog into Flume

I tried to set up a Flume agent to source data from a syslog server.
Basically, I have set up a syslog server on a server (server1) to receive syslog events and forward all messages to a different server (server2), where the Flume agent is installed; from there all data should finally be sunk into a Kafka cluster.
The Flume configuration is below.
# For each one of the sources, the type is defined
agent.sources.syslogSrc.type = syslogudp
agent.sources.syslogSrc.port = 9090
agent.sources.syslogSrc.host = server2
# The channel can be defined as follows.
agent.sources.syslogSrc.channels = memoryChannel
# Each channel's type is defined.
agent.channels.memoryChannel.type = memory
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.memoryChannel.capacity = 100
# config for kafka sink
agent.sinks.kafkaSink.channel = memoryChannel
agent.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.kafkaSink.kafka.topic = flume
agent.sinks.kafkaSink.kafka.bootstrap.servers = <kafka.broker.list>:9092
agent.sinks.kafkaSink.kafka.flumeBatchSize = 20
agent.sinks.kafkaSink.kafka.producer.acks = 1
agent.sinks.kafkaSink.kafka.producer.linger.ms = 1
agent.sinks.kafkaSink.kafka.producer.compression.type = snappy
But somehow the syslog data is not getting into the Flume agent.
I'd appreciate your advice.
I have set up a syslog server on a server (server1)
The syslogudp source must bind to the server1 host:
agent.sources.syslogSrc.host = server1
then forward all messages to a different server (server2)
The "different server" refers to the sink:
agent.sinks.kafkaSink.kafka.bootstrap.servers = server2:9092
A Flume agent is only a process that hosts these components (source, channel, sink) to facilitate the flow of events.
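Once the bindings are sorted out, a quick end-to-end test is to send a UDP syslog datagram at the source and watch the topic at the sink (a sketch using standard tooling; the host names, port, and broker-list placeholder are the ones from the configuration above):
# send a test syslog message over UDP to the Flume syslog source
logger --server server1 --port 9090 --udp "flume test message"
# watch the Kafka topic the sink writes to
kafka-console-consumer.sh --bootstrap-server <kafka.broker.list>:9092 --topic flume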

Where does stomp_interface come from?

In order to enable HTTPS communication between OpsCenter and DSE nodes, I have to set stomp_interface to opscenter.mydomain.com in /var/lib/datastax-agent/conf/address.yaml on each node. (After the fix, I no longer have to do this.)
Whenever I run a configure job from OpsCenter, it changes this stomp_interface value back to nn.nn.nn.nn. (After the fix, it still does this, but it no longer breaks the agent communications.)
Where does this parameter come from? Can I set it on the OpsCenter node in the /etc/opscenter/clusters/cluster_name.conf file?
Is it part of the [agents] section?
What is the parameter name and value that I should be adding?
opscenterd.conf is now as follows (the fix was to add the incoming_interface line):
# opscenterd.conf
[webserver]
port = 8888
interface = 0.0.0.0
ssl_keyfile = /var/lib/opscenter/ssl/opscenter.key
ssl_certfile = /var/lib/opscenter/ssl/opscenter.pem
ssl_port = 8443
[authentication]
enabled = True
[stat_reporter]
[agents]
use_ssl = true
incoming_interface = opscenter.mydomain.com
address.yaml before fix:
use_ssl: 1
stomp_interface: 1.2.3.4 (the OpsCenter external IP; opscenter.mydomain.com also works)
stomp_port: 61620
local_interface: 2.3.4.5 (the external IP for this cluster node)
agent_rpc_interface: 0.0.0.0
agent_rpc_broadcast_address: 2.3.4.5
poll_period: 60
disk_usage_update_period: 60
rollup_rate: 200
rollup_rate_unit: second
jmx_host: 127.0.0.1
jmx_port: 7199
jmx_user: someuser
jmx_pass: somepassword
status_reporting_interval: 20
ec2_metadata_api_host: 169.254.169.254
metrics_enabled: true
jmx_metrics_threadpool_size: 5
hosts: ["2.3.4.5", "3.4.5.6", "4.5.6.7", "5.6.7.8"]
cassandra_port: 9042
thrift_port: 9160
cassandra_user: someuser
cassandra_pass: somepassword
runs_sudo: true
cassandra_install_location: /usr/share/dse
cassandra-conf: /etc/dse/cassandra/cassandra.yaml
cassandra_binary_location: /usr/bin
cassandra_conf_location: /etc/dse/cassandra
dse_env_location: /etc/dse
dse_binary_location: /usr/bin
dse_conf_location: /etc/dse
spark_conf_location: /etc/dse/spark
monitored_cassandra_user: someuser
monitored_cassandra_pass: somepassword
tcp_response_timeout: 120000
pong_timeout_ms: 120000
cluster_name.conf (I updated the seed_hosts to match those in the address.yaml hosts config in order to satisfy a Best Practices alert that they should all be the same):
[destinations]
active =
[kerberos]
default_service =
opscenterd_client_principal =
opscenterd_keytab_location =
agent_keytab_location =
agent_client_principal =
[agents]
ssl_keystore_password =
ssl_keystore =
[jmx]
password = somepassword
port = 7199
username = someuser
[cassandra]
ssl_truststore_password =
cql_port = 9042
seed_hosts = 2.3.4.5, 3.4.5.6, 4.5.6.7, 5.6.7.8
username = someuser
password = somepassword
ssl_keystore_password =
ssl_keystore =
ssl_truststore =
Based on your comment asking for further information, I figured it out.
I added incoming_interface = opscenter.mydomain.com to the [agents] section of opscenterd.conf. (That wasn't present before markc's comment.)
I restarted the opscenterd service.
Next, I was able to go back to OpsCenter Lifecycle Manager and do a fresh Install and Configure on the cluster, and all of the job steps completed successfully.
(Note: don't change the rack names on nodes from what they were before, and select autoBootStrap = true on the Configure/Install requests.)
The datastax-agents are fully up and active. After the Configure and Install, the address.yaml files contained the public IP address of the OpsCenter node as the stomp_interface. (I changed one stomp_interface manually to opscenter.mydomain.com, and that also works.)
I will also edit the question and post the requested information.
Thanks markc!

Can't connect Java client to MarkLogic database

I've just installed a MarkLogic NoSQL database out of the box on a Windows machine.
I wrote a simple Java client to put data into the database, but I get this error:
org.apache.http.conn.HttpHostConnectException: Connection to http://my.caci.local:8003 refused
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:158)
The MarkLogic database is started. This is the code:
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8003, "admin", "admin", Authentication.DIGEST);
XMLDocumentManager docMgr = client.newXMLDocumentManager();
BinaryDocumentManager binMgr = client.newBinaryDocumentManager();
DOMHandle handle = new DOMHandle();
for (int i = 0; i < AANT_PERSONEN; i++) {
    Document document = createDocument(i);
    String docId = "/zaak/" + 20;
    handle.set(document);
    docMgr.write(docId, handle);
}
....
The MarkLogic console reports the following ports to be active on my.caci.local:
Default :: Admin : 8001 [HTTP]
Default :: App-Services : 8000 [HTTP]
Default :: HealthCheck : 7997 [HTTP]
Default :: Manage : 8002 [HTTP]
I'm new to MarkLogic, and my question is: what port should I use to connect from my Java client?
In agreement with MystyxMac, I notice the console does not report a REST server on port 8003.
Here's the documentation for setting up a REST server:
http://docs.marklogic.com/guide/rest-dev/intro#id_97899
You should also add users for the rest-reader, rest-writer, and rest-admin roles.
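For example, a REST instance on port 8003 can be created through the Management API on port 8002 (a sketch; the instance name and content database here are placeholders):
curl --anyauth --user admin:admin -X POST -i \
  -H "Content-Type: application/json" \
  -d '{"rest-api": {"name": "my-rest-api", "port": "8003", "database": "Documents"}}' \
  http://localhost:8002/v1/rest-apis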
Hoping that helps,
Erik Hennum
For testing purposes you can simply switch the port you are using to 8000.
From the documentation:
When you install MarkLogic Server, a pre-configured REST API instance is available on port 8000. This instance uses the Documents database as the content database and the Modules database as the modules database.
The instance on port 8000 is convenient for getting started, but you will usually create a dedicated instance for production purposes.
http://docs.marklogic.com/guide/rest-dev/service#id_15309
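With that change, the client construction from the question would become, for example:
DatabaseClient client = DatabaseClientFactory.newClient("localhost", 8000, "admin", "admin", Authentication.DIGEST);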
