Flume: Multiple sources adding logs to a single sink

I am trying to collect logs from different directories on a single machine into a file on the local file system or into HDFS.
I have registered two sources, r1 and r2.
Both sources point to a single channel, c1.
There is one sink, k1, attached to the channel.
Please find the configuration file below:
# Name the components on this agent
a1.sources = r1
a1.sources = r2
a1.sinks = k1
a1.channels = c1
a1.sources.r2.type = exec
a1.sources.r2.command = tail -f /PATH/bper-peg-pt-rest.log
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /PATH/bper-peg-ejb.log
# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/vbsc/Desktop/flume_project_logging/logs_aggregated
a1.sinks.k1.sink.rollInterval = 0
# Use file channel
a1.channels.c1.type = file
# Bind the source and sink to the channel
a1.sinks.k1.channel = c1
a1.sources.r2.channels = c1
a1.sources.r1.channels = c1
But when I start Flume with agent a1, only one source (r2) gets started.
Flume agent startup logs:
16/06/14 14:38:09 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
16/06/14 14:38:09 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/vbsc/Desktop/flume_project_logging/flume_tailSource.conf
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
16/06/14 14:38:09 INFO node.AbstractConfigurationProvider: Creating channels
16/06/14 14:38:09 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type file
16/06/14 14:38:10 INFO node.AbstractConfigurationProvider: Created channel c1
16/06/14 14:38:10 INFO source.DefaultSourceFactory: Creating instance of source r2, type exec
16/06/14 14:38:10 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: file_roll
16/06/14 14:38:10 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r2, k1]
16/06/14 14:38:10 INFO node.Application: Starting new configuration:{ sourceRunners:{r2=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:r2,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#4ad9cb27 counterGroup:{ name:null counters:{} } }} channels:{c1=FileChannel c1 { dataDirs: [/root/.flume/file-channel/data] }} }
16/06/14 14:38:10 INFO node.Application: Starting Channel c1
16/06/14 14:38:10 INFO file.FileChannel: Starting FileChannel c1 { dataDirs: [/root/.flume/file-channel/data] }...
16/06/14 14:38:10 INFO file.Log: Encryption is not enabled
16/06/14 14:38:10 INFO file.Log: Replay started
16/06/14 14:38:10 INFO file.Log: Found NextFileID 13, from [/root/.flume/file-channel/data/log-9, /root/.flume/file-channel/data/log-11, /root/.flume/file-channel/data/log-13, /root/.flume/file-channel/data/log-12, /root/.flume/file-channel/data/log-10]
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFileV3: Starting up with /root/.flume/file-channel/checkpoint/checkpoint and /root/.flume/file-channel/checkpoint/checkpoint.meta
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /root/.flume/file-channel/checkpoint/checkpoint.meta
16/06/14 14:38:10 INFO file.FlumeEventQueue: QueueSet population inserting 0 took 0
16/06/14 14:38:10 INFO file.Log: Last Checkpoint Tue Jun 14 14:37:49 CEST 2016, queue depth = 0
16/06/14 14:38:10 INFO file.Log: Replaying logs with v2 replay logic
16/06/14 14:38:10 INFO file.ReplayHandler: Starting replay of [/root/.flume/file-channel/data/log-9, /root/.flume/file-channel/data/log-10, /root/.flume/file-channel/data/log-11, /root/.flume/file-channel/data/log-12, /root/.flume/file-channel/data/log-13]
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-9
16/06/14 14:38:10 INFO tools.DirectMemoryUtils: Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
16/06/14 14:38:10 INFO tools.DirectMemoryUtils: Direct Memory Allocation: Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20316160, Remaining = 20316160
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 58602
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 58602 in /root/.flume/file-channel/data/log-9
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-10
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 20798
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 20798 in /root/.flume/file-channel/data/log-10
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-11
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 3178
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 3178 in /root/.flume/file-channel/data/log-11
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-12
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 3264
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 3264 in /root/.flume/file-channel/data/log-12
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-13
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 3264
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 3264 in /root/.flume/file-channel/data/log-13
16/06/14 14:38:10 INFO file.ReplayHandler: read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
16/06/14 14:38:10 INFO file.FlumeEventQueue: Search Count = 0, Search Time = 0, Copy Count = 0, Copy Time = 0
16/06/14 14:38:10 INFO file.Log: Rolling /root/.flume/file-channel/data
16/06/14 14:38:10 INFO file.Log: Roll start /root/.flume/file-channel/data
16/06/14 14:38:10 INFO file.LogFile: Opened /root/.flume/file-channel/data/log-14
16/06/14 14:38:10 INFO file.Log: Roll end
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFile: Start checkpoint for /root/.flume/file-channel/checkpoint/checkpoint, elements to sync = 0
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1465907890431, queueSize: 0, queueHead: 373
16/06/14 14:38:10 INFO file.Log: Updated checkpoint for file: /root/.flume/file-channel/data/log-14 position: 0 logWriteOrderID: 1465907890431
16/06/14 14:38:10 INFO file.FileChannel: Queue Size after replay: 0 [channel=c1]
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
16/06/14 14:38:11 INFO node.Application: Starting Sink k1
16/06/14 14:38:11 INFO node.Application: Starting Source r2
16/06/14 14:38:11 INFO source.ExecSource: Exec source starting with command:tail -f /PATH/bper-peg-pt-rest.log
16/06/14 14:38:11 INFO sink.RollingFileSink: Starting org.apache.flume.sink.RollingFileSink{name:k1, channel:c1}...
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
16/06/14 14:38:11 INFO sink.RollingFileSink: RollInterval is not valid, file rolling will not happen.
16/06/14 14:38:11 INFO sink.RollingFileSink: RollingFileSink k1 started.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r2: Successfully registered new MBean.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r2 started
16/06/14 14:38:11 INFO source.ExecSource: Command [tail -f /PATH/bper-peg-pt-rest.log] exited with 1
Thanks

I needed to declare the two sources on a single line, as below:
a1.sources = r1 r2
Earlier, I was declaring them as
a1.sources = r1
a1.sources = r2
so the second assignment overwrote the first and only one source (r2) was getting registered.
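For reference, a minimal corrected agent definition might look like the sketch below (same component names, paths and sink settings as in the question; only the a1.sources line changes):
# Name the components on this agent
a1.sources = r1 r2
a1.sinks = k1
a1.channels = c1
# Two exec sources, each tailing a different log file
# (tail -F instead of -f would also keep following across log rotation)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /PATH/bper-peg-ejb.log
a1.sources.r2.type = exec
a1.sources.r2.command = tail -f /PATH/bper-peg-pt-rest.log
# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/vbsc/Desktop/flume_project_logging/logs_aggregated
a1.sinks.k1.sink.rollInterval = 0
# Use file channel
a1.channels.c1.type = file
# Bind both sources and the sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sinks.k1.channel = c1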

Related

Milo OPC UA - unable to connect to server from client when the server restarts; server is set to 'USER_TOKEN_POLICY_USERNAME'

I have a Milo OPC UA server with USER_TOKEN_POLICY_USERNAME enabled and use a UsernameIdentityValidator to set the username and password.
On the Milo client side, I pass a UsernameProvider to setIdentityProvider.
When I run this setup, everything works fine.
But when I restart the OPC UA server, the Milo client won't reconnect. I'm getting the exception below:
[milo-shared-thread-pool-2] Skipping validation for certificate: C=DE, ST=" ", L=Locality, OU=OrganizationUnit, O=Organization, CN=AggrServer#7aaf488fd8d6
29.01.2021 09:25:48.282+0000 INFO [m.o.serv.KafkaConsumer(1bc715b8)] [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] Sent record successfully to topic : NSCH_TEST_Data.
29.01.2021 09:26:55.681+0000 WARN [o.e.m.opcua.sdk.client.SessionFsm] [milo-shared-thread-pool-3] [2] Keep Alive failureCount=4 exceeds failuresAllowed=3
29.01.2021 09:26:55.681+0000 WARN [o.e.m.opcua.sdk.client.SessionFsm] [milo-shared-thread-pool-3] [2] Keep Alive failureCount=5 exceeds failuresAllowed=3
29.01.2021 09:26:55.682+0000 INFO [m.o.MiloConnectorRemote(7b76b59d)] [milo-shared-thread-pool-6] opc.tcp://192.168.56.101:4840: onSessionInactive: OpcUaSession{sessionId=NodeId{ns=1, id=Session:fc6fdb4f-0e8a-441d-ba25-45d067d434e7}, sessionName=OpcUa#0b8bc292754c}
29.01.2021 09:26:55.682+0000 INFO [m.o.MiloConnectorRemote(7b76b59d)] [milo-shared-thread-pool-6] opc.tcp://192.168.56.101:4840: sessionInactive: OpcUaSession{sessionId=NodeId{ns=1, id=Session:fc6fdb4f-0e8a-441d-ba25-45d067d434e7}, sessionName=OpcUa#0b8bc292754c}
29.01.2021 09:26:55.682+0000 INFO [m.o.MiloConnectorRemote(7b76b59d)] [milo-shared-thread-pool-6] opc.tcp://192.168.56.101:4840: notify Observer-opc.tcp://192.168.56.101:4840 about ConnectionEvent(state=Connecting, prevState=Connected, label=opc.tcp://192.168.56.101:4840)
29.01.2021 09:26:55.683+0000 INFO [m.opcua.OpcUaObserverImpl(754d0f4a)] [milo-shared-thread-pool-6] Observer-opc.tcp://192.168.56.101:4840: handle the event ConnectionEvent(state=Connecting, prevState=Connected, label=opc.tcp://192.168.56.101:4840)
29.01.2021 09:26:55.683+0000 INFO [m.o.OpcUaObserverImpl$ModelReadyChangeChecker(3dd6dea0)] [milo-shared-thread-pool-6] OpcUaObserverImpl-opc.tcp://192.168.56.101:4840: stop
29.01.2021 09:26:55.683+0000 INFO [m.opcua.OpcUaObserverImpl(754d0f4a)] [milo-shared-thread-pool-6] Observer-opc.tcp://192.168.56.101:4840: notify 2 listeners about ModelUnavailableEvent#1791022155[uri=opc.tcp://192.168.56.101:4840,nodesCount=0,label=Observer-opc.tcp://192.168.56.101:4840]
29.01.2021 09:26:55.683+0000 INFO [m.opcua.OpcUaObserverImpl(754d0f4a)] [DefaultDispatcher-worker-1] Observer-opc.tcp://192.168.56.101:4840: notify Subscriber-opc.tcp://192.168.56.101:4840 about ModelUnavailableEvent#1791022155[uri=opc.tcp://192.168.56.101:4840,nodesCount=0,label=Observer-opc.tcp://192.168.56.101:4840]
29.01.2021 09:26:55.683+0000 INFO [opcua.MiloSubscriber(364cd1b9)] [DefaultDispatcher-worker-1] Subscriber-opc.tcp://192.168.56.101:4840: unsubscribe 1 subscriptions
29.01.2021 09:26:55.683+0000 INFO [m.opcua.OpcUaObserverImpl(754d0f4a)] [DefaultDispatcher-worker-2] Observer-opc.tcp://192.168.56.101:4840: notify SyncProcessor-opc.tcp://192.168.56.101:4840 about ModelUnavailableEvent#1791022155[uri=opc.tcp://192.168.56.101:4840,nodesCount=0,label=Observer-opc.tcp://192.168.56.101:4840]
29.01.2021 09:26:55.683+0000 INFO [m.opcua.serv.SyncProcessor(2474528)] [DefaultDispatcher-worker-2] SyncProcessor: ignore the event ModelUnavailableEvent#1791022155[uri=opc.tcp://192.168.56.101:4840,nodesCount=0,label=Observer-opc.tcp://192.168.56.101:4840]
29.01.2021 09:26:55.686+0000 INFO [opcua.MiloSubscriber(364cd1b9)] [DefaultDispatcher-worker-1] SyncExecutor-Subscriber(364cd1b9)-opc.tcp://192.168.56.101:4840: SyncExecutor-Subscriber(364cd1b9)-opc.tcp://192.168.56.101:4840: unsubscribe, subscriptionId=1
29.01.2021 09:26:55.686+0000 INFO [opcua.MiloSubscriber(364cd1b9)] [DefaultDispatcher-worker-1] Subscriber-opc.tcp://192.168.56.101:4840: delete subscription SyncExecutor-Subscriber(364cd1b9)-opc.tcp://192.168.56.101:4840(SyncExecutor-Subscriber(364cd1b9)-opc.tcp://192.168.56.101:4840)
29.01.2021 09:27:11.685+0000 WARN [opcua.MiloSubscriber(364cd1b9)] [DefaultDispatcher-worker-1] [Subscriber-opc.tcp://192.168.56.101:4840: deleteSubscription(1) of SyncExecutor-Subscriber(364cd1b9)-opc.tcp://192.168.56.101:4840] return null, because of UaException: status=Bad_ConnectionRejected, message=io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /192.168.56.101:4840
29.01.2021 09:27:27.703+0000 WARN [o.e.m.o.s.c.s.ClientCertificateValidator$InsecureValidator] [milo-shared-thread-pool-5] Skipping validation for certificate: C=DE, ST=" ", L=Locality, OU=OrganizationUnit, O=Organization, CN=AggrServer#7aaf488fd8d6
29.01.2021 09:27:31.782+0000 WARN [o.e.m.o.s.c.s.ClientCertificateValidator$InsecureValidator] [milo-shared-thread-pool-2] Skipping validation for certificate: C=DE, ST=" ", L=Locality, OU=OrganizationUnit, O=Organization, CN=AggrServer#7aaf488fd8d6
29.01.2021 09:27:39.806+0000 WARN [o.e.m.o.s.c.s.ClientCertificateValidator$InsecureValidator] [milo-shared-thread-pool-6] Skipping validation for certificate: C=DE, ST=" ", L=Locality, OU=OrganizationUnit, O=Organization, CN=AggrServer#7aaf488fd8d6
29.01.2021 09:27:55.830+0000 WARN [o.e.m.o.s.c.s.ClientCertificateValidator$InsecureValidator] [milo-shared-thread-pool-3] Skipping validation for certificate: C=DE, ST=" ", L=Locality, OU=OrganizationUnit, O=Organization, CN=AggrServer#7aaf488fd8d6
NEW LOGS
02.02.2021 18:32:55.541+0000 WARN [opcua.MiloSubscriber(3c5d9688)] [DefaultDispatcher-worker-3] [Subscriber-opc.tcp://192.168.56.101:4840: deleteSubscription(1) of SyncExecutor-Subscriber(3c5d9688)-opc.tcp://192.168.56.101:4840] return null, because of UaException: status=Bad_ConnectionRejected, message=io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /192.168.56.101:4840
02.02.2021 18:32:55.542+0000 INFO [opcua.MiloBrowser(1d141b2d)] [DefaultDispatcher-worker-2] idNameTypeSet.nodes.size
02.02.2021 18:32:55.542+0000 INFO [m.o.OpcUaObserverImpl$ModelReadyChangeChecker(3c8bf12c)] [DefaultDispatcher-worker-2] OpcUaObserverImpl-opc.tcp://192.168.56.101:4840: exit model checking, because stopped externally
02.02.2021 18:33:59.790+0000 INFO [m.o.MiloConnectorRemote(74c9951c)] [milo-shared-thread-pool-3] opc.tcp://192.168.56.101:4840: onSessionActive: OpcUaSession{sessionId=NodeId{ns=1, id=Session:d27e7db7-4401-4f08-8c17-7bfaf9075fe4}, sessionName=OpcUa#154c9f72aa09}
02.02.2021 18:33:59.790+0000 INFO [m.o.MiloConnectorRemote(74c9951c)] [milo-shared-thread-pool-3] opc.tcp://192.168.56.101:4840: notify Observer-opc.tcp://192.168.56.101:4840 about ConnectionEvent(state=Connected, prevState=Connecting, label=opc.tcp://192.168.56.101:4840)
02.02.2021 18:33:59.790+0000 INFO [m.opcua.OpcUaObserverImpl(ff09afd)] [milo-shared-thread-pool-3] Observer-opc.tcp://192.168.56.101:4840: handle the event ConnectionEvent(state=Connected, prevState=Connecting, label=opc.tcp://192.168.56.101:4840)
02.02.2021 18:33:59.790+0000 INFO [m.o.OpcUaObserverImpl$ModelReadyChangeChecker(3c8bf12c)] [milo-shared-thread-pool-3] OpcUaObserverImpl-opc.tcp://192.168.56.101:4840: start
02.02.2021 18:33:59.790+0000 INFO [m.o.OpcUaObserverImpl$ModelReadyChangeChecker(3c8bf12c)] [milo-shared-thread-pool-3] OpcUaObserverImpl-opc.tcp://192.168.56.101:4840: modelReadyChecking=MinMaxInterval(min=10, max=30, timeUnit=SECONDS, current=10, step=3), modelChangeChecking=MinMaxInterval(min=60, max=1800, timeUnit=SECONDS, current=60, step=180), modelReadyMinNodesCount=0
02.02.2021 18:33:59.804+0000 INFO [m.o.OpcUaObserverImpl$ModelReadyChangeChecker(3c8bf12c)] [DefaultDispatcher-worker-2] OpcUaObserverImpl-opc.tcp://192.168.56.101:4840: -> check(modelReadyMinNodesCount=0,modelChangeCheckingRunning=false)
02.02.2021 18:33:59.804+0000 INFO [opcua.MiloBrowser(1d141b2d)] [DefaultDispatcher-worker-2] In nodesCount method
02.02.2021 18:33:59.817+0000 INFO [opcua.MiloBrowser(1d141b2d)] [DefaultDispatcher-worker-2] nodesCount=3605
It seems there is an issue with client/server certificate validation.
OPC UA PKI, X.509 and the rest are complex, hard to understand and even harder to configure properly, so this can't be answered in a few words. If you are just starting with OPC UA, try to skip server security policies and user identification until you have learned more about them.
Server and client need certificates in order to encrypt and decrypt the user authentication.
But do run some checks:
Check whether the client has the server certificate in its trusted directory.
Check whether the server certificate has changed. The server should not regenerate its self-signed certificate on every start, only during installation or administration.
Workarounds:
Disable client and/or server security checks, if possible.
Use another security profile, e.g. http://opcfoundation.org/UA/SecurityPolicy#None (see the sketch below), but then you may not use user identification policies.
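As a rough illustration of the second workaround, here is a minimal client sketch, assuming the Milo 0.6.x client API; the endpoint URL is taken from the logs, and the anonymous identity reflects the advice to skip user identification while security is disabled:
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.sdk.client.api.identity.AnonymousProvider;
import org.eclipse.milo.opcua.stack.core.security.SecurityPolicy;

public class NoSecurityClientSketch {
    public static void main(String[] args) throws Exception {
        OpcUaClient client = OpcUaClient.create(
            "opc.tcp://192.168.56.101:4840",
            // pick the endpoint that advertises SecurityPolicy None
            endpoints -> endpoints.stream()
                .filter(e -> SecurityPolicy.None.getUri().equals(e.getSecurityPolicyUri()))
                .findFirst(),
            // no username token while security is disabled
            configBuilder -> configBuilder
                .setIdentityProvider(new AnonymousProvider())
                .build()
        );
        client.connect().get();
        // ... browse/read/subscribe as before ...
        client.disconnect().get();
    }
}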
I think the meaningful Exception to extract from your new logs is this:
io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /192.168.56.101:4840
Simple networking error. The server isn't there, isn't running, a firewall is in the way, etc...
It's not anything you're doing wrong in client code right now.
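If you want to rule that out quickly, you can check from the client machine whether anything is listening on the endpoint at all, for example with netcat (address and port taken from the logs):
nc -vz 192.168.56.101 4840
If the connection is refused there as well, the problem is on the server or network side, not in the Milo client code.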

Graphaware Framework and UUID not starting on Neo4j GrapheneDB

I am trying to get the GraphAware Framework and UUID module running on a GrapheneDB instance. I have followed the instructions to zip the JAR and neo4j.properties files and uploaded them using the GrapheneDB web interface, but UUIDs are not added when I create a new node.
neo4j.properties file
dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
#UIDM becomes the module ID:
com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper
#optional, default is uuid:
com.graphaware.module.UIDM.uuidProperty=uuid
#optional, default is false:
com.graphaware.module.UIDM.stripHyphens=true
#optional, default is all nodes:
#com.graphaware.module.UIDM.node=hasLabel('Label1') || hasLabel('Label2')
#optional, default is no relationships:
#com.graphaware.module.UIDM.relationship=isType('Type1')
com.graphaware.module.UIDM.relationship=com.graphaware.runtime.policy.all.IncludeAllBusinessRelationships
#optional, default is uuidIndex
com.graphaware.module.UIDM.uuidIndex=uuidIndex
#optional, default is uuidRelIndex
com.graphaware.module.UIDM.uuidRelationshipIndex=uuidRelIndex
Log Output
2017-03-02 10:20:40.184+0000 INFO Neo4j Server shutdown initiated by request
2017-03-02 10:20:40.209+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] stopped
2017-03-02 10:20:40.209+0000 INFO Stopping...
2017-03-02 10:20:40.982+0000 INFO Stopped.
2017-03-02 10:20:43.402+0000 INFO Starting...
2017-03-02 10:20:43.820+0000 INFO Bolt enabled on 0.0.0.0:7475.
2017-03-02 10:20:45.153+0000 INFO [c.g.r.b.RuntimeKernelExtension] GraphAware Runtime disabled.
2017-03-02 10:20:48.130+0000 INFO Started.
2017-03-02 10:20:48.343+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] started
2017-03-02 10:20:48.350+0000 INFO Mounted unmanaged extension [com.graphaware.server] at [/graphaware]
2017-03-02 10:20:48.724+0000 INFO Mounting GraphAware Framework at /graphaware
2017-03-02 10:20:48.755+0000 INFO Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:20:52.633+0000 INFO Remote interface available at http://localhost:7474/
Messages.log Extract
2017-03-02 10:33:59.991+0000 INFO [o.n.k.i.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2017-03-02 10:34:01.846+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED START ---
2017-03-02 10:34:02.526+0000 INFO [c.g.s.f.b.GraphAwareBootstrappingFilter] Mounting GraphAware Framework at /graphaware
2017-03-02 10:34:02.547+0000 INFO [c.g.s.f.c.GraphAwareWebContextCreator] Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:34:06.100+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED END ---
It looks like the framework has not started, even though I have set enabled=true in the properties file.
Environment Setup
Neo4j Community Edition 3.1.1
graphaware-server-3.1.0.44
graphaware-uuid-3.1.0.44.13
Thanks

Neo4j randomly shutting down

I am running Neo4j on an EC2 instance, but for some reason it randomly shuts down from time to time. Is there a way to check the shutdown logs? And is there a way to restart the server automatically? I couldn't locate the log folder, but here's what my messages.log file looks like. This section covers the time frame when the server went down (before 2015-04-13 05:39:59.084+0000) and when I manually restarted the server (at 2015-04-13 05:39:59.084+0000). You can see that there is no record of a server issue or shutdown. The time frame before 2015-03-05 08:18:47.084+0000 contains info about the previous server restart.
2015-03-05 08:18:44.180+0000 INFO [o.n.s.m.Neo4jBrowserModule]: Mounted Neo4j Browser at [/browser]
2015-03-05 08:18:44.253+0000 INFO [o.n.s.w.Jetty9WebServer]: Mounting static content at [/webadmin] from [webadmin-html]
2015-03-05 08:18:44.311+0000 INFO [o.n.s.w.Jetty9WebServer]: Mounting static content at [/browser] from [browser]
2015-03-05 08:18:47.084+0000 INFO [o.n.s.CommunityNeoServer]: Server started on: http://0.0.0.0:7474/
2015-03-05 08:18:47.084+0000 INFO [o.n.s.CommunityNeoServer]: Remote interface ready and available at [http://0.0.0.0:7474/]
2015-03-05 08:18:47.084+0000 INFO [o.n.k.i.DiagnosticsManager]: --- SERVER STARTED END ---
2015-04-13 05:39:59.084+0000 INFO [o.n.s.CommunityNeoServer]: Setting startup timeout to: 120000ms based on -1
2015-04-13 05:39:59.265+0000 INFO [o.n.k.InternalAbstractGraphDatabase]: No locking implementation specified, defaulting to 'community'
2015-04-13 05:39:59.383+0000 INFO [o.n.k.i.DiagnosticsManager]: --- INITIALIZED diagnostics START ---
2015-04-13 05:39:59.384+0000 INFO [o.n.k.i.DiagnosticsManager]: Neo4j Kernel properties:
2015-04-13 05:39:59.389+0000 INFO [o.n.k.i.DiagnosticsManager]: neostore.propertystore.db.mapped_memory=78M
2015-04-13 05:39:59.389+0000 INFO [o.n.k.i.DiagnosticsManager]: neostore.nodestore.db.mapped_memory=21M

Error in TwitterAgent in Cloudera Flume

Execution gets stuck somewhere after this:
14/10/02 07:33:31 INFO channel.DefaultChannelFactory: Creating instance of channel MemChannel type memory
14/10/02 07:33:31 INFO node.AbstractConfigurationProvider: Created channel MemChannel
14/10/02 07:33:31 INFO sink.DefaultSinkFactory: Creating instance of sink: HDFS, type: hdfs
14/10/02 07:33:32 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/10/02 07:33:32 INFO node.AbstractConfigurationProvider: Channel MemChannel connected to [HDFS]
14/10/02 07:33:32 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{HDFS=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#72cf095e counterGroup:{ name:null counters:{} } }} channels:{MemChannel=org.apache.flume.channel.MemoryChannel{name: MemChannel}} }
14/10/02 07:33:32 INFO node.Application: Starting Channel MemChannel
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean.
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: MemChannel started
14/10/02 07:33:33 INFO node.Application: Starting Sink HDFS
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean.
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: HDFS started

Flume 1.5.0 + Reading log data from remote Linux server

I am new to Flume. I have Flume and Hadoop installed on one server, and the logs are available on another server.
Through Flume, I am trying to read the logs. Here is my configuration file.
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.type = syslogtcp
agent1.sources.avro-source1.bind = 10.209.4.224
agent1.sources.avro-source1.port = 5140
# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://delvmplldsst02:54310/flume/events
agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
agent1.sinks.hdfs-sink1.hdfs.batchSize = 20
agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = hdfs-sink1
#chain the different components together
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sources.avro-source1.channels = ch1
I am not sure exactly which source type to use in this scenario. I am starting the Flume agent as shown below on the other server:
bin/flume-ng agent --conf-file conf/flume.conf -f /var/log/wtmp -Dflume.root.logger=DEBUG,console -n agent1
Here is the log for the above command
14/06/25 00:37:17 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
14/06/25 00:37:17 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:conf/flume.conf
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Added sinks: hdfs-sink1 Agent: agent1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent1]
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Creating channels
14/06/25 00:37:17 INFO channel.DefaultChannelFactory: Creating instance of channel ch1 type memory
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Created channel ch1
14/06/25 00:37:17 INFO source.DefaultSourceFactory: Creating instance of source avro-source1, type syslogtcp
14/06/25 00:37:17 INFO sink.DefaultSinkFactory: Creating instance of sink: hdfs-sink1, type: hdfs
14/06/25 00:37:17 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Channel ch1 connected to [avro-source1, hdfs-sink1]
14/06/25 00:37:17 INFO node.Application: Starting new configuration:{ sourceRunners:{avro-source1=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:avro-source1,state:IDLE} }} sinkRunners:{hdfs-sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor#5954864a counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} }
14/06/25 00:37:17 INFO node.Application: Starting Channel ch1
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
14/06/25 00:37:17 INFO node.Application: Starting Sink hdfs-sink1
14/06/25 00:37:17 INFO node.Application: Starting Source avro-source1
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: hdfs-sink1: Successfully registered new MBean.
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink1 started
14/06/25 00:37:17 INFO source.SyslogTcpSource: Syslog TCP Source starting...
This is where the process gets stuck; it does not proceed any further, and I don't know where it went wrong.
Could someone please help me with this?
I did not install Flume on the server where the log files are. Should I install Flume there as well?
Flume version in use: 1.5.0
Hadoop version installed: 1.0.4
Thanks in advance
You will need to configure the other server to forward its syslog output to your logging server. That configuration depends on exactly which syslog daemon you are running.
To me, the log output makes it appear that Flume started correctly.
The problem is probably with syslog.
Your Flume agent appears to have started fine; the reason it appears to be idle is that it is not receiving any events from syslog.
Make sure your syslog daemon is sending events to port 5140.
As for agent1.sources.avro-source1.bind, you can listen on any interface by replacing the IP with 0.0.0.0 (if you plan to listen to multiple servers).
You can check the forwarding rule in /etc/rsyslog.conf:
*.* #hostnameofflume:flumesourceport
In your case it should be
*.* #10.209.4.224:5140 (assuming this IP is that of your Flume host)
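Putting the two answers together, a sketch of the forwarding rule on the server that produces the logs and the matching source definition on the Flume host might look like this. Note that in rsyslog the forwarding prefix is actually @ for UDP or @@ for TCP; since the agent uses a syslogtcp source, TCP forwarding is assumed here, and the IP and port are the ones from the question:
# /etc/rsyslog.conf on the server that produces the logs
*.* @@10.209.4.224:5140

# Flume agent configuration on 10.209.4.224
agent1.sources.avro-source1.type = syslogtcp
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 5140
After changing rsyslog.conf, restart the rsyslog service so the new rule takes effect.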
