Flume 1.5.0: Reading log data from a remote Linux server

I am new to Flume. I have Flume and Hadoop installed on one server, and the logs are available on another server.
I am trying to read the logs through Flume. Here is my configuration file:
# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory
# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.type = syslogtcp
agent1.sources.avro-source1.bind = 10.209.4.224
agent1.sources.avro-source1.port = 5140
# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.hdfs-sink1.type = hdfs
agent1.sinks.hdfs-sink1.hdfs.path = hdfs://delvmplldsst02:54310/flume/events
agent1.sinks.hdfs-sink1.hdfs.fileType = DataStream
agent1.sinks.hdfs-sink1.hdfs.writeFormat = Text
agent1.sinks.hdfs-sink1.hdfs.batchSize = 20
agent1.sinks.hdfs-sink1.hdfs.rollSize = 0
agent1.sinks.hdfs-sink1.hdfs.rollCount = 0
# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = hdfs-sink1
#chain the different components together
agent1.sinks.hdfs-sink1.channel = ch1
agent1.sources.avro-source1.channels = ch1
I am not sure which source type to use in this scenario. I am starting the Flume agent as below on the other server:
bin/flume-ng agent --conf-file conf/flume.conf -f /var/log/wtmp -Dflume.root.logger=DEBUG,console -n agent1
Here is the log for the above command
14/06/25 00:37:17 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
14/06/25 00:37:17 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:conf/flume.conf
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Added sinks: hdfs-sink1 Agent: agent1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Processing:hdfs-sink1
14/06/25 00:37:17 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent1]
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Creating channels
14/06/25 00:37:17 INFO channel.DefaultChannelFactory: Creating instance of channel ch1 type memory
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Created channel ch1
14/06/25 00:37:17 INFO source.DefaultSourceFactory: Creating instance of source avro-source1, type syslogtcp
14/06/25 00:37:17 INFO sink.DefaultSinkFactory: Creating instance of sink: hdfs-sink1, type: hdfs
14/06/25 00:37:17 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/06/25 00:37:17 INFO node.AbstractConfigurationProvider: Channel ch1 connected to [avro-source1, hdfs-sink1]
14/06/25 00:37:17 INFO node.Application: Starting new configuration:{ sourceRunners:{avro-source1=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:avro-source1,state:IDLE} }} sinkRunners:{hdfs-sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@5954864a counterGroup:{ name:null counters:{} } }} channels:{ch1=org.apache.flume.channel.MemoryChannel{name: ch1}} }
14/06/25 00:37:17 INFO node.Application: Starting Channel ch1
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: ch1: Successfully registered new MBean.
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: ch1 started
14/06/25 00:37:17 INFO node.Application: Starting Sink hdfs-sink1
14/06/25 00:37:17 INFO node.Application: Starting Source avro-source1
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: hdfs-sink1: Successfully registered new MBean.
14/06/25 00:37:17 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: hdfs-sink1 started
14/06/25 00:37:17 INFO source.SyslogTcpSource: Syslog TCP Source starting...
Here the process gets stuck and does not proceed any further. I do not know where it went wrong.
Could someone please help me with this?
I did not install Flume on the server where the log files are. Should I install Flume there as well?
Flume version used: 1.5.0
Hadoop version installed: 1.0.4
Thanks in advance

You will need to configure the other server to forward its syslog output to your Flume server. That configuration depends on exactly which syslog daemon you are running.
From the log output, the agent appears to have started correctly.
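As a quick check before touching the syslog configuration (an addition beyond the original answer; it assumes nc/netcat is installed on the log server), you can hand-craft a single syslog line over TCP and see whether the agent consumes it:
# send one RFC 3164-style test message to the Flume syslogtcp source
# (IP and port are taken from the agent configuration above)
echo "<13>Jun 25 00:40:00 testhost test: hello flume" | nc 10.209.4.224 5140
If the event arrives, the HDFS sink should shortly write a file under /flume/events.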

The problem is probably with syslog.
Your Flume agent appears to have started fine; the reason it appears to be idle is that it is not receiving any events from syslog.
Make sure your syslog daemon is sending events to
port = 5140
As for
agent1.sources.avro-source1.bind, you can bind to all interfaces by replacing the IP with 0.0.0.0 (useful if you plan to listen to multiple servers).
You can check that in /etc/rsyslog.conf:
*.* @@hostnameofflume:flumesourceport
In your case it should be
*.* @@10.209.4.224:5140 (assuming this IP is that of your Flume host)
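Putting the answer together, a minimal sketch of the rsyslog side (the file path and restart command assume a stock rsyslog installation; adjust for your distribution):
# /etc/rsyslog.conf on the server that produces the logs
# @@ forwards over TCP, matching the syslogtcp source; a single @ would use UDP
*.* @@10.209.4.224:5140
# restart rsyslog so the forwarding rule takes effect
sudo service rsyslog restart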

Related

Graphaware Framework and UUID not starting on Neo4j GrapheneDB

I am trying to get the GraphAware Framework and UUID module running on a GrapheneDB instance. I have followed the instructions to zip the JAR and neo4j.properties files and uploaded them using the GrapheneDB web interface, but UUIDs are not added when I create a new node.
neo4j.properties file
dbms.unmanaged_extension_classes=com.graphaware.server=/graphaware
com.graphaware.runtime.enabled=true
#UIDM becomes the module ID:
com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper
#optional, default is uuid:
com.graphaware.module.UIDM.uuidProperty=uuid
#optional, default is false:
com.graphaware.module.UIDM.stripHyphens=true
#optional, default is all nodes:
#com.graphaware.module.UIDM.node=hasLabel('Label1') || hasLabel('Label2')
#optional, default is no relationships:
#com.graphaware.module.UIDM.relationship=isType('Type1')
com.graphaware.module.UIDM.relationship=com.graphaware.runtime.policy.all.IncludeAllBusinessRelationships
#optional, default is uuidIndex
com.graphaware.module.UIDM.uuidIndex=uuidIndex
#optional, default is uuidRelIndex
com.graphaware.module.UIDM.uuidRelationshipIndex=uuidRelIndex
Log Output
2017-03-02 10:20:40.184+0000 INFO Neo4j Server shutdown initiated by request
2017-03-02 10:20:40.209+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] stopped
2017-03-02 10:20:40.209+0000 INFO Stopping...
2017-03-02 10:20:40.982+0000 INFO Stopped.
2017-03-02 10:20:43.402+0000 INFO Starting...
2017-03-02 10:20:43.820+0000 INFO Bolt enabled on 0.0.0.0:7475.
2017-03-02 10:20:45.153+0000 INFO [c.g.r.b.RuntimeKernelExtension] GraphAware Runtime disabled.
2017-03-02 10:20:48.130+0000 INFO Started.
2017-03-02 10:20:48.343+0000 INFO [c.g.s.f.b.GraphAwareServerBootstrapper] started
2017-03-02 10:20:48.350+0000 INFO Mounted unmanaged extension [com.graphaware.server] at [/graphaware]
2017-03-02 10:20:48.724+0000 INFO Mounting GraphAware Framework at /graphaware
2017-03-02 10:20:48.755+0000 INFO Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:20:52.633+0000 INFO Remote interface available at http://localhost:7474/
Messages.log Extract
2017-03-02 10:33:59.991+0000 INFO [o.n.k.i.DiagnosticsManager] --- STARTED diagnostics for KernelDiagnostics:StoreFiles END ---
2017-03-02 10:34:01.846+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED START ---
2017-03-02 10:34:02.526+0000 INFO [c.g.s.f.b.GraphAwareBootstrappingFilter] Mounting GraphAware Framework at /graphaware
2017-03-02 10:34:02.547+0000 INFO [c.g.s.f.c.GraphAwareWebContextCreator] Will try to scan the following packages: {com.**.graphaware.**,org.**.graphaware.**,net.**.graphaware.**}
2017-03-02 10:34:06.100+0000 INFO [o.n.k.i.DiagnosticsManager] --- SERVER STARTED END ---
It looks like the framework is not being started, even though I have set enabled=true in the properties file.
Environment Setup
Neo4j Community Edition 3.1.1
graphaware-server-3.1.0.44
graphaware-uuid-3.1.0.44.13
Thanks

Flume: Multiple sources adding logs to single sink

I am trying to collect the logs from different directories on a single machine into a local file system file or HDFS.
I have registered two sources, r1 and r2.
Both sources point to a single channel, c1.
There is one sink, k1, attached to the channel.
Please find the configuration file below:
# Name the components on this agent
a1.sources = r1
a1.sources = r2
a1.sinks = k1
a1.channels = c1
a1.sources.r2.type = exec
a1.sources.r2.command = tail -f /PATH/bper-peg-pt-rest.log
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /PATH/bper-peg-ejb.log
# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/vbsc/Desktop/flume_project_logging/logs_aggregated
a1.sinks.k1.sink.rollInterval = 0
# Use file channel
a1.channels.c1.type = file
# Bind the source and sink to the channel
a1.sinks.k1.channel = c1
a1.sources.r2.channels = c1
a1.sources.r1.channels = c1
But when I start Flume with agent a1, only one source (r2) gets started.
Flume agent startup logs:
16/06/14 14:38:09 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
16/06/14 14:38:09 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/vbsc/Desktop/flume_project_logging/flume_tailSource.conf
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Added sinks: k1 Agent: a1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Processing:k1
16/06/14 14:38:09 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [a1]
16/06/14 14:38:09 INFO node.AbstractConfigurationProvider: Creating channels
16/06/14 14:38:09 INFO channel.DefaultChannelFactory: Creating instance of channel c1 type file
16/06/14 14:38:10 INFO node.AbstractConfigurationProvider: Created channel c1
16/06/14 14:38:10 INFO source.DefaultSourceFactory: Creating instance of source r2, type exec
16/06/14 14:38:10 INFO sink.DefaultSinkFactory: Creating instance of sink: k1, type: file_roll
16/06/14 14:38:10 INFO node.AbstractConfigurationProvider: Channel c1 connected to [r2, k1]
16/06/14 14:38:10 INFO node.Application: Starting new configuration:{ sourceRunners:{r2=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:r2,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4ad9cb27 counterGroup:{ name:null counters:{} } }} channels:{c1=FileChannel c1 { dataDirs: [/root/.flume/file-channel/data] }} }
16/06/14 14:38:10 INFO node.Application: Starting Channel c1
16/06/14 14:38:10 INFO file.FileChannel: Starting FileChannel c1 { dataDirs: [/root/.flume/file-channel/data] }...
16/06/14 14:38:10 INFO file.Log: Encryption is not enabled
16/06/14 14:38:10 INFO file.Log: Replay started
16/06/14 14:38:10 INFO file.Log: Found NextFileID 13, from [/root/.flume/file-channel/data/log-9, /root/.flume/file-channel/data/log-11, /root/.flume/file-channel/data/log-13, /root/.flume/file-channel/data/log-12, /root/.flume/file-channel/data/log-10]
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFileV3: Starting up with /root/.flume/file-channel/checkpoint/checkpoint and /root/.flume/file-channel/checkpoint/checkpoint.meta
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFileV3: Reading checkpoint metadata from /root/.flume/file-channel/checkpoint/checkpoint.meta
16/06/14 14:38:10 INFO file.FlumeEventQueue: QueueSet population inserting 0 took 0
16/06/14 14:38:10 INFO file.Log: Last Checkpoint Tue Jun 14 14:37:49 CEST 2016, queue depth = 0
16/06/14 14:38:10 INFO file.Log: Replaying logs with v2 replay logic
16/06/14 14:38:10 INFO file.ReplayHandler: Starting replay of [/root/.flume/file-channel/data/log-9, /root/.flume/file-channel/data/log-10, /root/.flume/file-channel/data/log-11, /root/.flume/file-channel/data/log-12, /root/.flume/file-channel/data/log-13]
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-9
16/06/14 14:38:10 INFO tools.DirectMemoryUtils: Unable to get maxDirectMemory from VM: NoSuchMethodException: sun.misc.VM.maxDirectMemory(null)
16/06/14 14:38:10 INFO tools.DirectMemoryUtils: Direct Memory Allocation: Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 20316160, Remaining = 20316160
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 58602
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 58602 in /root/.flume/file-channel/data/log-9
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-10
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 20798
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 20798 in /root/.flume/file-channel/data/log-10
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-11
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 3178
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 3178 in /root/.flume/file-channel/data/log-11
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-12
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 3264
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 3264 in /root/.flume/file-channel/data/log-12
16/06/14 14:38:10 INFO file.ReplayHandler: Replaying /root/.flume/file-channel/data/log-13
16/06/14 14:38:10 INFO file.LogFile: fast-forward to checkpoint position: 3264
16/06/14 14:38:10 INFO file.LogFile: Encountered EOF at 3264 in /root/.flume/file-channel/data/log-13
16/06/14 14:38:10 INFO file.ReplayHandler: read: 0, put: 0, take: 0, rollback: 0, commit: 0, skip: 0, eventCount:0
16/06/14 14:38:10 INFO file.FlumeEventQueue: Search Count = 0, Search Time = 0, Copy Count = 0, Copy Time = 0
16/06/14 14:38:10 INFO file.Log: Rolling /root/.flume/file-channel/data
16/06/14 14:38:10 INFO file.Log: Roll start /root/.flume/file-channel/data
16/06/14 14:38:10 INFO file.LogFile: Opened /root/.flume/file-channel/data/log-14
16/06/14 14:38:10 INFO file.Log: Roll end
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFile: Start checkpoint for /root/.flume/file-channel/checkpoint/checkpoint, elements to sync = 0
16/06/14 14:38:10 INFO file.EventQueueBackingStoreFile: Updating checkpoint metadata: logWriteOrderID: 1465907890431, queueSize: 0, queueHead: 373
16/06/14 14:38:10 INFO file.Log: Updated checkpoint for file: /root/.flume/file-channel/data/log-14 position: 0 logWriteOrderID: 1465907890431
16/06/14 14:38:10 INFO file.FileChannel: Queue Size after replay: 0 [channel=c1]
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: c1 started
16/06/14 14:38:11 INFO node.Application: Starting Sink k1
16/06/14 14:38:11 INFO node.Application: Starting Source r2
16/06/14 14:38:11 INFO source.ExecSource: Exec source starting with command:tail -f /PATH/bper-peg-pt-rest.log
16/06/14 14:38:11 INFO sink.RollingFileSink: Starting org.apache.flume.sink.RollingFileSink{name:k1, channel:c1}...
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: k1 started
16/06/14 14:38:11 INFO sink.RollingFileSink: RollInterval is not valid, file rolling will not happen.
16/06/14 14:38:11 INFO sink.RollingFileSink: RollingFileSink k1 started.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r2: Successfully registered new MBean.
16/06/14 14:38:11 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r2 started
16/06/14 14:38:11 INFO source.ExecSource: Command [tail -f /PATH/bper-peg-pt-rest.log] exited with 1
Thanks
I needed to declare the two sources on a single line, as below:
a1.sources = r1 r2
Earlier, I was doing it as
a1.sources = r1
a1.sources = r2
so the second assignment overwrote the first and only one source was getting registered.
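For reference, a minimal corrected sketch of the component declarations from the configuration above (only the sources line changes; everything else stays as posted):
# Name the components on this agent
a1.sources = r1 r2
a1.sinks = k1
a1.channels = c1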

Neo4j randomly shutting down

I am running Neo4j on an EC2 instance, but for some reason it randomly shuts down from time to time. Is there a way to check the shutdown logs? And is there a way to automatically restart the server? I couldn't locate the log folder, but here is what my messages.log file looks like. This section covers the timeframe when the server went down (before 2015-04-13 05:39:59.084+0000) and when I manually restarted the server (at 2015-04-13 05:39:59.084+0000). You can see that there is no record of a server issue or shutdown. The time frame before 2015-03-05 08:18:47.084+0000 contains info from the previous server restart.
2015-03-05 08:18:44.180+0000 INFO [o.n.s.m.Neo4jBrowserModule]: Mounted Neo4j Browser at [/browser]
2015-03-05 08:18:44.253+0000 INFO [o.n.s.w.Jetty9WebServer]: Mounting static content at [/webadmin] from [webadmin-html]
2015-03-05 08:18:44.311+0000 INFO [o.n.s.w.Jetty9WebServer]: Mounting static content at [/browser] from [browser]
2015-03-05 08:18:47.084+0000 INFO [o.n.s.CommunityNeoServer]: Server started on: http://0.0.0.0:7474/
2015-03-05 08:18:47.084+0000 INFO [o.n.s.CommunityNeoServer]: Remote interface ready and available at [http://0.0.0.0:7474/]
2015-03-05 08:18:47.084+0000 INFO [o.n.k.i.DiagnosticsManager]: --- SERVER STARTED END ---
2015-04-13 05:39:59.084+0000 INFO [o.n.s.CommunityNeoServer]: Setting startup timeout to: 120000ms based on -1
2015-04-13 05:39:59.265+0000 INFO [o.n.k.InternalAbstractGraphDatabase]: No locking implementation specified, defaulting to 'community'
2015-04-13 05:39:59.383+0000 INFO [o.n.k.i.DiagnosticsManager]: --- INITIALIZED diagnostics START ---
2015-04-13 05:39:59.384+0000 INFO [o.n.k.i.DiagnosticsManager]: Neo4j Kernel properties:
2015-04-13 05:39:59.389+0000 INFO [o.n.k.i.DiagnosticsManager]: neostore.propertystore.db.mapped_memory=78M
2015-04-13 05:39:59.389+0000 INFO [o.n.k.i.DiagnosticsManager]: neostore.nodestore.db.mapped_memory=21M

Error in TwitterAgent in Cloudera Flume

Execution gets stuck somewhere after this:
14/10/02 07:33:31 INFO channel.DefaultChannelFactory: Creating instance of channel MemChannel type memory
14/10/02 07:33:31 INFO node.AbstractConfigurationProvider: Created channel MemChannel
14/10/02 07:33:31 INFO sink.DefaultSinkFactory: Creating instance of sink: HDFS, type: hdfs
14/10/02 07:33:32 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
14/10/02 07:33:32 INFO node.AbstractConfigurationProvider: Channel MemChannel connected to [HDFS]
14/10/02 07:33:32 INFO node.Application: Starting new configuration:{ sourceRunners:{} sinkRunners:{HDFS=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@72cf095e counterGroup:{ name:null counters:{} } }} channels:{MemChannel=org.apache.flume.channel.MemoryChannel{name: MemChannel}} }
14/10/02 07:33:32 INFO node.Application: Starting Channel MemChannel
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean.
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: MemChannel started
14/10/02 07:33:33 INFO node.Application: Starting Sink HDFS
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean.
14/10/02 07:33:33 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: HDFS started

unable to figure jboss error

Can someone figure out what the error in JBoss is? When I enter localhost:8080 in the URL, it says INVALID REQUEST. PLEASE CHECK URL.
I am using JBoss 5.0.1.
JBOSS_HOME: D:\Jboss\jboss-5.0.1.GA
JAVA: C:\Program Files\Java\jdk1.6.0_13\bin\java
JAVA_OPTS: -Dfile.encoding=UTF-8 -Dprogram.name=run.bat -server -Xms512m -Xmx1024m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
CLASSPATH: D:\Jboss\jboss-5.0.1.GA\bin\run.jar
===============================================================================
20:46:16,640 INFO [ServerImpl] Starting JBoss (Microcontainer)...
20:46:16,640 INFO [ServerImpl] Release ID: JBoss [Morpheus] 5.0.1.GA (build: SVNTag=JBoss_5_0_1_GA date=200902231221)
20:46:16,640 INFO [ServerImpl] Bootstrap URL: null
20:46:16,656 INFO [ServerImpl] Home Dir: D:\Jboss\jboss-5.0.1.GA
20:46:16,656 INFO [ServerImpl] Home URL: file:/D:/Jboss/jboss-5.0.1.GA/
20:46:16,656 INFO [ServerImpl] Library URL: file:/D:/Jboss/jboss-5.0.1.GA/lib/
20:46:16,656 INFO [ServerImpl] Patch URL: null
20:46:16,656 INFO [ServerImpl] Common Base URL: file:/D:/Jboss/jboss-5.0.1.GA/common/
20:46:16,656 INFO [ServerImpl] Common Library URL: file:/D:/Jboss/jboss-5.0.1.GA/common/lib/
20:46:16,656 INFO [ServerImpl] Server Name: default
20:46:16,656 INFO [ServerImpl] Server Base Dir: D:\Jboss\jboss-5.0.1.GA\server
20:46:16,671 INFO [ServerImpl] Server Base URL: file:/D:/Jboss/jboss-5.0.1.GA/server/
20:46:16,671 INFO [ServerImpl] Server Config URL: file:/D:/Jboss/jboss-5.0.1.GA/server/default/conf/
20:46:16,671 INFO [ServerImpl] Server Home Dir: D:\Jboss\jboss-5.0.1.GA\server\default
20:46:16,671 INFO [ServerImpl] Server Home URL: file:/D:/Jboss/jboss-5.0.1.GA/server/default/
20:46:16,671 INFO [ServerImpl] Server Data Dir: D:\Jboss\jboss-5.0.1.GA\server\default\data
20:46:16,671 INFO [ServerImpl] Server Library URL: file:/D:/Jboss/jboss-5.0.1.GA/server/default/lib/
20:46:16,671 INFO [ServerImpl] Server Log Dir: D:\Jboss\jboss-5.0.1.GA\server\default\log
20:46:16,671 INFO [ServerImpl] Server Native Dir: D:\Jboss\jboss-5.0.1.GA\server\default\tmp\native
20:46:16,687 INFO [ServerImpl] Server Temp Dir: D:\Jboss\jboss-5.0.1.GA\server\default\tmp
20:46:16,687 INFO [ServerImpl] Server Temp Deploy Dir: D:\Jboss\jboss-5.0.1.GA\server\default\tmp\deploy
20:46:17,296 INFO [ServerImpl] Starting Microcontainer, bootstrapURL=file:/D:/Jboss/jboss-5.0.1.GA/server/default/conf/bootstrap.xml
20:46:17,859 INFO [VFSCacheFactory] Initializing VFSCache [org.jboss.virtual.plugins.cache.CombinedVFSCache]
20:46:17,859 INFO [VFSCacheFactory] Using VFSCache [CombinedVFSCache[real-cache: null]]
20:46:18,140 INFO [CopyMechanism] VFS temp dir: D:\Jboss\jboss-5.0.1.GA\server\default\tmp
20:46:18,156 INFO [ZipEntryContext] VFS force nested jars copy-mode is enabled.
20:46:20,218 INFO [ServerInfo] Java version: 1.6.0_13,Sun Microsystems Inc.
20:46:20,234 INFO [ServerInfo] Java Runtime: Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
20:46:20,234 INFO [ServerInfo] Java VM: Java HotSpot(TM) Server VM 11.3-b02,Sun Microsystems Inc.
20:46:20,234 INFO [ServerInfo] OS-System: Windows XP 5.1,x86
20:46:20,265 INFO [JMXKernel] Legacy JMX core initialized
20:46:22,593 INFO [ProfileServiceImpl] Loading profile: default from: org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@126c5a5(root=D:\Jboss\jboss-5.0.1.GA\server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
20:46:22,593 INFO [ProfileImpl] Using repository:org.jboss.system.server.profileservice.repository.SerializableDeploymentRepository@126c5a5(root=D:\Jboss\jboss-5.0.1.GA\server, key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default])
20:46:22,593 INFO [ProfileServiceImpl] Loaded profile: ProfileImpl@1e779a7{key=org.jboss.profileservice.spi.ProfileKey@143b82c3[domain=default,server=default,name=default]}
20:46:24,421 INFO [WebService] Using RMI server codebase: http://127.0.0.1:9283/
20:46:30,656 INFO [NativeServerConfig] JBoss Web Services - Stack Native Core
20:46:30,671 INFO [NativeServerConfig] 3.0.5.GA
20:46:42,218 INFO [JMXConnectorServerService] JMX Connector server: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:2290/jmxconnector
20:46:42,375 INFO [MailService] Mail Service bound to java:/Mail
20:46:44,015 WARN [JBossASSecurityMetadataStore] WARNING! POTENTIAL SECURITY RISK. It has been detected that the MessageSucker component which sucks messages from one node to another has not had its password changed from the installation default. Please see the JBoss Messaging user guide for instructions on how to do this.
20:46:44,031 WARN [AnnotationCreator] No ClassLoader provided, using TCCL: org.jboss.managed.api.annotation.ManagementComponent
20:46:44,203 INFO [TransactionManagerService] JBossTS Transaction Service (JTA version) - JBoss Inc.
20:46:44,203 INFO [TransactionManagerService] Setting up property manager MBean and JMX layer
20:46:44,656 INFO [TransactionManagerService] Initializing recovery manager
20:46:44,843 INFO [TransactionManagerService] Recovery manager configured
20:46:44,843 INFO [TransactionManagerService] Binding TransactionManager JNDI Reference
20:46:44,875 INFO [TransactionManagerService] Starting transaction recovery manager
20:46:45,453 INFO [Http11Protocol] Initializing Coyote HTTP/1.1 on http-127.0.0.1-8080
20:46:45,453 INFO [AjpProtocol] Initializing Coyote AJP/1.3 on ajp-127.0.0.1-8218
20:46:45,453 INFO [StandardService] Starting service jboss.web
20:46:45,453 INFO [StandardEngine] Starting Servlet Engine: JBoss Web/2.1.2.GA
20:46:45,531 INFO [Catalina] Server startup in 146 ms
20:46:45,562 INFO [TomcatDeployment] deploy, ctxPath=/invoker
20:46:46,156 INFO [TomcatDeployment] deploy, ctxPath=/jbossws
20:46:46,203 INFO [TomcatDeployment] deploy, ctxPath=/web-console
20:46:46,265 INFO [[/web-console]] SystemFolder: Failed to init plugin, Resource not found: SystemFolder.bsh
20:46:46,296 INFO [[/web-console]] J2EEFolder: Failed to init plugin, Resource not found: J2EEFolder.bsh
20:46:46,421 INFO [RARDeployment] Required license terms exist, view vfszip:/D:/Jboss/jboss-5.0.1.GA/server/default/deploy/jboss-local-jdbc.rar/META-INF/ra.xml
20:46:46,531 INFO [RARDeployment] Required license terms exist, view vfszip:/D:/Jboss/jboss-5.0.1.GA/server/default/deploy/jboss-xa-jdbc.rar/META-INF/ra.xml
20:46:46,578 INFO [RARDeployment] Required license terms exist, view vfszip:/D:/Jboss/jboss-5.0.1.GA/server/default/deploy/jms-ra.rar/META-INF/ra.xml
20:46:46,609 INFO [RARDeployment] Required license terms exist, view vfszip:/D:/Jboss/jboss-5.0.1.GA/server/default/deploy/mail-ra.rar/META-INF/ra.xml
20:46:46,640 INFO [RARDeployment] Required license terms exist, view vfszip:/D:/Jboss/jboss-5.0.1.GA/server/default/deploy/quartz-ra.rar/META-INF/ra.xml
20:46:46,796 INFO [SimpleThreadPool] Job execution threads will use class loader of thread: main
20:46:46,843 INFO [QuartzScheduler] Quartz Scheduler v.1.5.2 created.
20:46:46,843 INFO [RAMJobStore] RAMJobStore initialized.
20:46:46,843 INFO [StdSchedulerFactory] Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
20:46:46,843 INFO [StdSchedulerFactory] Quartz scheduler version: 1.5.2
20:46:46,843 INFO [QuartzScheduler] Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
20:46:47,531 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=DefaultDS' to JNDI name 'java:DefaultDS'
20:46:48,062 INFO [ServerPeer] JBoss Messaging 1.4.1.GA server [0] started
20:46:48,296 INFO [ConnectionFactory] Connector bisocket://127.0.0.1:4657 has leasing enabled, lease period 10000 milliseconds
20:46:48,296 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@3dc250 started
20:46:48,359 INFO [ConnectionFactory] Connector bisocket://127.0.0.1:4657 has leasing enabled, lease period 10000 milliseconds
20:46:48,359 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@188ef97 started
20:46:48,406 INFO [QueueService] Queue[/queue/A] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,406 INFO [QueueService] Queue[/queue/ex] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,406 INFO [QueueService] Queue[/queue/ExpiryQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,406 INFO [QueueService] Queue[/queue/B] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,406 INFO [QueueService] Queue[/queue/C] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,421 WARN [ConnectionFactoryJNDIMapper] supportsFailover attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support failover
20:46:48,421 WARN [ConnectionFactoryJNDIMapper] supportsLoadBalancing attribute is true on connection factory: jboss.messaging.connectionfactory:service=ClusteredConnectionFactory but post office is non clustered. So connection factory will *not* support load balancing
20:46:48,421 INFO [ConnectionFactory] Connector bisocket://127.0.0.1:4657 has leasing enabled, lease period 10000 milliseconds
20:46:48,421 INFO [ConnectionFactory] org.jboss.jms.server.connectionfactory.ConnectionFactory@2de670 started
20:46:48,421 INFO [QueueService] Queue[/queue/D] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,421 INFO [QueueService] Queue[/queue/MailQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,421 INFO [TopicService] Topic[/topic/testDurableTopic] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,421 INFO [QueueService] Queue[/queue/DLQ] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,437 INFO [TopicService] Topic[/topic/securedTopic] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,437 INFO [QueueService] Queue[/queue/testQueue] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,437 INFO [TopicService] Topic[/topic/testTopic] started, fullSize=200000, pageSize=2000, downCacheSize=2000
20:46:48,640 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=ConnectionFactoryBinding,name=JmsXA' to JNDI name 'java:JmsXA'
20:46:48,703 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=PostgresDSDYPK' to JNDI name 'java:PostgresDSDYPK'
20:46:48,734 INFO [TomcatDeployment] deploy, ctxPath=/
20:46:50,312 INFO [TomcatDeployment] deploy, ctxPath=/ap
20:46:50,406 WARN [JAXWSDeployerHookPreJSE] Cannot load servlet class: org.jboss.jmx.adaptor.html.HtmlAdaptorServlet
20:46:50,406 WARN [JAXWSDeployerHookPreJSE] Cannot load servlet class: org.jboss.jmx.adaptor.html.ClusteredConsoleServlet
20:46:50,421 INFO [TomcatDeployment] deploy, ctxPath=/jmx-conso1e
20:46:50,500 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080
20:46:50,531 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8218
20:46:50,546 INFO [ServerImpl] JBoss (Microcontainer) [5.0.1.GA (build: SVNTag=JBoss_5_0_1_GA date=200902231221)] Started in 33s:859ms
You will also have to specify the binding address for the JBoss server. For JBoss 5.x, the command looks like:
D:\Jboss\jboss-5.0.1.GA\bin\run.bat -b 0.0.0.0
Check your port number under /deploy/jbossweb.sar/server.xml; I think the problem is right there.
It should look like this:
<!-- A HTTP/1.1 Connector on port 8080 -->
<Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
maxThreads="250" strategy="ms" maxHttpHeaderSize="8192"
emptySessionPath="true"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true"/>
Check the port number!
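As an extra check (an addition beyond the original answer), you can confirm on Windows that something is actually listening on the connector's port; netstat and findstr are standard Windows tools, and the port is the one from the Connector element above:
REM list the process bound to port 8080 (the owning PID is in the last column)
netstat -ano | findstr :8080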
