How can I get logs collected on console using Flume NG?

I'm testing Flume NG (1.2.0) for collecting logs. It's a simple test in which Flume collects a log file, flume_test.log, and prints the collected lines to the console on stdout. conf/flume.conf is:
agent.sources = tail
agent.channels = memoryChannel
agent.sinks = loggerSink
agent.sources.tail.type = exec
agent.sources.tail.command = tail -f /Users/pj/work/flume_test.log
agent.sources.tail.channels = memoryChannel
agent.sinks.loggerSink.channel = memoryChannel
agent.sinks.loggerSink.type = logger
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
And I ran Flume as follows:
$ $FLUME_HOME/bin/flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/flume.conf --name agent1 -Dflume.root.logger=DEBUG,console
After starting Flume, the console logs are:
Info: Sourcing environment configuration script /usr/local/lib/flume-ng/conf/flume-env.sh
+ exec /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/bin/java -Xmx20m -Dflume.root.logger=DEBUG,console -cp '/usr/local/lib/flume-ng/conf:/usr/local/lib/flume-ng/lib/*' -Djava.library.path= org.apache.flume.node.Application --conf-file /usr/local/lib/flume-ng/conf/flume.conf --name agent1
2012-09-12 18:23:52,049 (main) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 1
2012-09-12 18:23:52,052 (main) [INFO - org.apache.flume.node.FlumeNode.start(FlumeNode.java:54)] Flume node starting - agent1
2012-09-12 18:23:52,054 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:187)] Node manager starting
2012-09-12 18:23:52,056 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.start(LifecycleSupervisor.java:67)] Starting lifecycle supervisor 9
2012-09-12 18:23:52,054 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:67)] Configuration provider starting
2012-09-12 18:23:52,056 (lifecycleSupervisor-1-0) [DEBUG - org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.start(DefaultLogicalNodeManager.java:191)] Node manager started
2012-09-12 18:23:52,057 (lifecycleSupervisor-1-1) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider.start(AbstractFileConfigurationProvider.java:86)] Configuration provider started
2012-09-12 18:23:52,058 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:188)] Checking file:/usr/local/lib/flume-ng/conf/flume.conf for changes
2012-09-12 18:23:52,058 (conf-file-poller-0) [INFO - org.apache.flume.conf.file.AbstractFileConfigurationProvider$FileWatcherRunnable.run(AbstractFileConfigurationProvider.java:195)] Reloading configuration file:/usr/local/lib/flume-ng/conf/flume.conf
2012-09-12 18:23:52,063 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:902)] Added sinks: loggerSink Agent: agent
2012-09-12 18:23:52,063 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:loggerSink
2012-09-12 18:23:52,063 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:992)] Created context for loggerSink: type
2012-09-12 18:23:52,063 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:988)] Processing:loggerSink
2012-09-12 18:23:52,063 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.isValid(FlumeConfiguration.java:295)] Starting validation of configuration for agent: agent, initial-configuration: AgentConfiguration[agent]
SOURCES: {tail={ parameters:{command=tail -f /Users/pj/work/flume_test.log, channels=memoryChannel, type=exec} }}
CHANNELS: {memoryChannel={ parameters:{capacity=100, type=memory} }}
SINKS: {loggerSink={ parameters:{type=logger, channel=memoryChannel} }}
2012-09-12 18:23:52,068 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateChannels(FlumeConfiguration.java:450)] Created channel memoryChannel
2012-09-12 18:23:52,082 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSinks(FlumeConfiguration.java:649)] Creating sink: loggerSink using LOGGER
2012-09-12 18:23:52,085 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.isValid(FlumeConfiguration.java:353)] Post validation configuration for agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[agent]
SOURCES: {tail={ parameters:{command=tail -f /Users/pj/work/flume_test.log, channels=memoryChannel, type=exec} }}
CHANNELS: {memoryChannel={ parameters:{capacity=100, type=memory} }}
AgentConfiguration created with Configuration stubs for which full validation was performed[agent]
SINKS: {loggerSink=ComponentConfiguration[loggerSink]
CONFIG:
CHANNEL:memoryChannel
}
2012-09-12 18:23:52,085 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:117)] Channels:memoryChannel
2012-09-12 18:23:52,085 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:118)] Sinks loggerSink
2012-09-12 18:23:52,085 (conf-file-poller-0) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:119)] Sources tail
2012-09-12 18:23:52,085 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:122)] Post-validation flume configuration contains configuration for agents: [agent]
2012-09-12 18:23:52,085 (conf-file-poller-0) [WARN - org.apache.flume.conf.properties.PropertiesFileConfigurationProvider.load(PropertiesFileConfigurationProvider.java:227)] No configuration found for this host:agent1
I think Flume started normally, so I continuously appended a bunch of lines to flume_test.log. But the appended lines are never printed to the console.
What is the problem with this test? Thanks for any comments and corrections.

The problem was a name mismatch between the agent name in flume.conf (agent) and the agent name passed after --name (agent1) in the startup command; note the warning No configuration found for this host:agent1 at the end of the log.
After changing the name option from --name agent1 to --name agent, the problem was solved.
Thanks to my colleague Lenny.
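For reference, the corrected invocation differs only in the --name argument, which must match the agent. prefix used in flume.conf:
$ $FLUME_HOME/bin/flume-ng agent --conf $FLUME_HOME/conf --conf-file $FLUME_HOME/conf/flume.conf --name agent -Dflume.root.logger=DEBUG,console
Alternatively, the agent.* keys in flume.conf could be renamed to agent1.* and --name agent1 kept as-is.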

Related

Why does starting Artifactory fail?

I want to install Artifactory on an Ubuntu Docker image manually, without using the Artifactory image from Docker Hub.
What I have done so far:
Got an Ubuntu image with JDK 11 installed.
Installed Artifactory using apt-get.
But when starting the Artifactory service with service artifactory start, I get the following logs with errors:
root@f01a31f43dc0:/# service artifactory start
2021-12-15T23:57:37.545Z [shell] [INFO ] [] [artifactory:81 ] [main] - Starting Artifactory tomcat as user artifactory...
2021-12-15T23:57:37.590Z [shell] [INFO ] [] [installerCommon.sh:1519 ] [main] - Checking open files and processes limits
2021-12-15T23:57:37.637Z [shell] [INFO ] [] [installerCommon.sh:1522 ] [main] - Current max open files is 1048576
2021-12-15T23:57:37.694Z [shell] [INFO ] [] [installerCommon.sh:1533 ] [main] - Current max open processes is unlimited
.shared.security value is of wrong data type. Correct type should be !!map
.shared.node value is of wrong data type. Correct type should be !!map
.shared.database value is of wrong data type. Correct type should be !!map
yaml validation failed
2021-12-15T23:57:37.798Z [shell] [WARN ] [] [installerCommon.sh:721 ] [main] - System.yaml validation failed
Database connection check failed Could not determine database type
2021-12-15T23:57:38.172Z [shell] [INFO ] [] [installerCommon.sh:3381 ] [main] - Setting JF_SHARED_NODE_ID to f01a31f43dc0
2021-12-15T23:57:38.424Z [shell] [INFO ] [] [installerCommon.sh:3381 ] [main] - Setting JF_SHARED_NODE_IP to 172.17.0.2
2021-12-15T23:57:38.652Z [shell] [INFO ] [] [installerCommon.sh:3381 ] [main] - Setting JF_SHARED_NODE_NAME to f01a31f43dc0
2021-12-15T23:57:39.348Z [shell] [INFO ] [] [artifactoryCommon.sh:186 ] [main] - Using Tomcat template to generate : /opt/jfrog/artifactory/app/artifactory/tomcat/conf/server.xml
2021-12-15T23:57:39.711Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.port||8081} to default value : 8081
2021-12-15T23:57:39.959Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.connector.sendReasonPhrase||false} to default value : false
2021-12-15T23:57:40.244Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.connector.maxThreads||200} to default value : 200
2021-12-15T23:57:40.705Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.port||8091} to default value : 8091
2021-12-15T23:57:40.997Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.maxThreads||5} to default value : 5
2021-12-15T23:57:41.278Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${artifactory.tomcat.maintenanceConnector.acceptCount||5} to default value : 5
2021-12-15T23:57:41.751Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${access.http.port||8040} to default value : 8040
2021-12-15T23:57:42.041Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${access.tomcat.connector.sendReasonPhrase||false} to default value : false
2021-12-15T23:57:42.341Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${access.tomcat.connector.maxThreads||50} to default value : 50
2021-12-15T23:57:42.906Z [shell] [INFO ] [] [systemYamlHelper.sh:527 ] [main] - Resolved JF_PRODUCT_HOME (/opt/jfrog/artifactory) from environment variable
2021-12-15T23:57:43.320Z [shell] [INFO ] [] [artifactoryCommon.sh:1008 ] [main] - Resolved ${shared.tomcat.workDir||/opt/jfrog/artifactory/var/work/artifactory/tomcat} to default value : /opt/jfrog/artifactory/var/work/artifactory/tomcat
========================
JF Environment variables
========================
JF_SHARED_NODE_ID : f01a31f43dc0
JF_SHARED_NODE_IP : 172.17.0.2
JF_ARTIFACTORY_PID : /var/run/artifactory.pid
JF_SYSTEM_YAML : /opt/jfrog/artifactory/var/etc/system.yaml
JF_PRODUCT_HOME : /opt/jfrog/artifactory
JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES : jfrt,jfac,jfmd,jffe,jfob
JF_SHARED_NODE_NAME : f01a31f43dc0
2021-12-15T23:57:45.827Z [shell] [ERROR] [] [installerCommon.sh:3267 ] [main] - ##############################################################################
2021-12-15T23:57:45.890Z [shell] [ERROR] [] [installerCommon.sh:3268 ] [main] - Ownership mismatch. You can try executing following instruction and do a restart
2021-12-15T23:57:45.959Z [shell] [ERROR] [] [installerCommon.sh:3269 ] [main] - Command : chown -R artifactory:artifactory /opt/jfrog/artifactory/var/log
2021-12-15T23:57:46.029Z [shell] [ERROR] [] [installerCommon.sh:3270 ] [main] - ##############################################################################
I'm not sure what I'm missing in this installation process.
The error is clear: there is a permission issue on the /opt/jfrog/artifactory/var/log folder, and you should run the chown -R artifactory:artifactory /opt/jfrog/artifactory/var/log command to solve it.
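A minimal sketch of the fix, using exactly the command the error message itself suggests, followed by a restart:
chown -R artifactory:artifactory /opt/jfrog/artifactory/var/log
service artifactory restart
Note that the earlier yaml validation failed warnings suggest system.yaml may also need attention, but the ownership command addresses the ERROR lines shown above.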

Management page won't load when using RabbitMQ docker container

I'm running RabbitMQ locally using:
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Some log:
narley@brittes ~ $ docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: list of feature flags found:
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] drop_unroutable_metric
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] empty_basic_get_metric
2020-01-08 22:31:52.079 [info] <0.8.0> Feature flags: [ ] implicit_default_bindings
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: [ ] quorum_queue
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: [ ] virtual_host_metadata
2020-01-08 22:31:52.080 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
2020-01-08 22:31:52.160 [info] <0.268.0> ra: meta data store initialised. 0 record(s) recovered
2020-01-08 22:31:52.162 [info] <0.273.0> WAL: recovering []
2020-01-08 22:31:52.164 [info] <0.277.0>
Starting RabbitMQ 3.8.2 on Erlang 22.2.1
Copyright (c) 2007-2019 Pivotal Software, Inc.
Licensed under the MPL 1.1. Website: https://rabbitmq.com
  ##  ##      RabbitMQ 3.8.2
  ##  ##
  ##########  Copyright (c) 2007-2019 Pivotal Software, Inc.
  ######  ##
  ##########  Licensed under the MPL 1.1. Website: https://rabbitmq.com
Doc guides: https://rabbitmq.com/documentation.html
Support: https://rabbitmq.com/contact.html
Tutorials: https://rabbitmq.com/getstarted.html
Monitoring: https://rabbitmq.com/monitoring.html
Logs: <stdout>
Config file(s): /etc/rabbitmq/rabbitmq.conf
Starting broker...2020-01-08 22:31:52.166 [info] <0.277.0>
node : rabbit@1586b4698736
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.conf
cookie hash : bwlnCFiUchzEkgAOsZwQ1w==
log(s) : <stdout>
database dir : /var/lib/rabbitmq/mnesia/rabbit@1586b4698736
2020-01-08 22:31:52.210 [info] <0.277.0> Running boot step pre_boot defined by app rabbit
...
...
...
2020-01-08 22:31:53.817 [info] <0.277.0> Setting up a table for connection tracking on this node: tracked_connection_on_node_rabbit@1586b4698736
2020-01-08 22:31:53.827 [info] <0.277.0> Setting up a table for per-vhost connection counting on this node: tracked_connection_per_vhost_on_node_rabbit@1586b4698736
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step routing_ready defined by app rabbit
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step pre_flight defined by app rabbit
2020-01-08 22:31:53.828 [info] <0.277.0> Running boot step notify_cluster defined by app rabbit
2020-01-08 22:31:53.829 [info] <0.277.0> Running boot step networking defined by app rabbit
2020-01-08 22:31:53.833 [info] <0.624.0> started TCP listener on [::]:5672
2020-01-08 22:31:53.833 [info] <0.277.0> Running boot step cluster_name defined by app rabbit
2020-01-08 22:31:53.833 [info] <0.277.0> Running boot step direct_client defined by app rabbit
2020-01-08 22:31:53.922 [info] <0.674.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2020-01-08 22:31:53.922 [info] <0.780.0> Statistics database started.
2020-01-08 22:31:53.923 [info] <0.779.0> Starting worker pool 'management_worker_pool' with 3 processes in it
completed with 3 plugins.
2020-01-08 22:31:54.316 [info] <0.8.0> Server startup complete; 3 plugins started.
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
Then I go to http://localhost:15672 and the page doesn't load. No error is displayed.
The interesting thing is that it worked the last time I used it (about 3 weeks ago).
Can anyone give me some help?
Cheers!
Try this:
Step 1: go into the Docker container:
docker exec -it rabbitmq bash
Step 2: run this inside the container:
rabbitmq-plugins enable rabbitmq_management
It worked for me.
I got it working by simply upgrading Docker.
I was running Docker 18.09.7 and upgraded to 19.03.5.
In my case, clearing the cookies fixed this issue instantly.
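Whichever of these applies, a quick way to check that the management listener is reachable from the host (a sketch, assuming curl is installed) is:
curl -i http://localhost:15672
An HTTP 200 response means the listener and the port mapping are fine and the problem is on the browser side; a connection refused error points back at the container.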

Docker run tomcat errors

I ran into a problem when trying to run this command:
docker run -d -t -p 203:22 -p 7003:8080 -v /home/test/webapps:/usr/local/tomcat8/webapps/ --name tomcat3 tomcat
The command executes correctly, but the Tomcat server inside the container stops like this:
07-Mar-2017 10:10:24.341 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat8/webapps/examples
07-Mar-2017 10:10:25.011 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat8/webapps/examples has finished in 669 ms
07-Mar-2017 10:10:25.011 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat8/webapps/host-manager
07-Mar-2017 10:10:25.069 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat8/webapps/host-manager has finished in 58 ms
07-Mar-2017 10:10:25.069 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat8/webapps/docs
07-Mar-2017 10:10:25.116 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat8/webapps/docs has finished in 47 ms
07-Mar-2017 10:10:25.117 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory /usr/local/tomcat8/webapps/ROOT
07-Mar-2017 10:10:25.147 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory /usr/local/tomcat8/webapps/ROOT has finished in 30 ms
07-Mar-2017 10:10:25.159 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
07-Mar-2017 10:10:25.171 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
07-Mar-2017 10:10:25.171 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 1757 ms
07-Mar-2017 10:11:31.418 INFO [main] org.apache.catalina.core.StandardServer.await A valid shutdown command was received via the shutdown port. Stopping the Server instance.
07-Mar-2017 10:11:31.420 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-nio-8080"]
07-Mar-2017 10:11:31.425 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["ajp-nio-8009"]
07-Mar-2017 10:11:31.426 INFO [main] org.apache.catalina.core.StandardService.stopInternal Stopping service Catalina
07-Mar-2017 10:11:31.495 INFO [main] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["http-nio-8080"]
07-Mar-2017 10:11:31.499 INFO [main] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["ajp-nio-8009"]
07-Mar-2017 10:11:31.499 INFO [main] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8080"]
07-Mar-2017 10:11:31.500 INFO [main] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["ajp-nio-8009"]
But that is not the case with the following command:
docker run -d -t -p 201:22 -p 7001:8080 --name tomcat1 tomcat
This command executes exactly as expected. The only difference between them, in my opinion, is the -v flag. Here are my Dockerfile and supervisord config, with a debugging sketch after the Dockerfile:
[supervisord]
nodaemon=true
[program:tomcat]
command=/usr/local/tomcat8/bin/catalina.sh run
environment=JAVA_HOME="/usr/local/java/jdk8/",JAVA_BIN="/usr/local/java/jdk8/bin"
autostart = true
autorestart=true
[program:sshd]
command=/usr/sbin/sshd -D
#Dockerfile
FROM tomcat
EXPOSE 22 8080
CMD ["/usr/bin/supervisord"]

Docker image of SonarQube is not running with MySQL DB configuration

I am trying to run the SonarQube Docker image with a MySQL DB, using the Docker command below:
sudo docker run -d --name hg-sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME='sonar' \
-e SONARQUBE_JDBC_PASSWORD='sonar' \
-e SONARQUBE_JDBC_URL='jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance' \
sonarqube
But the container is not running, due to this error:
2016.12.28 11:20:11 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
2016.12.28 11:20:11 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:108)
The MySQL service is running and the sonar database exists. I used the following commands to create the database and grant privileges on Ubuntu 14.04:
echo "GRANT ALL PRIVILEGES ON *.* TO 'root'#'%' IDENTIFIED BY 'welcome123'; flush privileges;" | mysql -u root -pwelcome123
echo "CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci; CREATE USER 'sonar' IDENTIFIED BY 'sonar';GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'#'%' IDENTIFIED BY 'sonar'; GRANT ALL ON sonar.* TO 'sonar'#'localhost' IDENTIFIED BY 'sonar'; flush privileges;" | mysql -u root -pwelcome123
Full Log file:
2016.12.28 11:19:58 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2016.12.28 11:19:58 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process5713024831851311243properties
2016.12.28 11:19:59 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2016.12.28 11:19:59 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2016.12.28 11:19:59 INFO es[][o.elasticsearch.node] [sonarqube] version[2.3.5], pid[18], build[90f439f/2016-07-27T10:36:52Z]
2016.12.28 11:19:59 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2016.12.28 11:19:59 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2016.12.28 11:19:59 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/sda1)]], net usable_space [24.2gb], net total_space [28.8gb], spins? [possibly], types [ext4]
2016.12.28 11:19:59 INFO es[][o.elasticsearch.env] [sonarqube] heap size [1007.3mb], compressed ordinary object pointers [true]
2016.12.28 11:20:03 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2016.12.28 11:20:03 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2016.12.28 11:20:03 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2016.12.28 11:20:03 INFO es[][o.e.discovery] [sonarqube] sonarqube/CPgnfx6NTe2aO07d6fR0Bg
2016.12.28 11:20:06 INFO es[][o.e.cluster.service] [sonarqube] new_master {sonarqube}{CPgnfx6NTe2aO07d6fR0Bg}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2016.12.28 11:20:06 INFO es[][o.elasticsearch.node] [sonarqube] started
2016.12.28 11:20:06 INFO es[][o.e.gateway] [sonarqube] recovered [0] indices into cluster_state
2016.12.28 11:20:06 INFO app[][o.s.p.m.Monitor] Process[es] is up
2016.12.28 11:20:06 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/mysql/mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process6242669754365841464properties
2016.12.28 11:20:08 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2016.12.28 11:20:08 INFO web[][o.s.s.a.TomcatContexts] Webapp directory: /opt/sonarqube/web
2016.12.28 11:20:08 INFO web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.28 11:20:08 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2016.12.28 11:20:09 INFO web[][o.e.plugins] [Bushwacker] modules [], plugins [], sites []
2016.12.28 11:20:11 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2016.12.28 11:20:11 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.2 / 4a28f29f95254b58f3cf0a0871bc632e998403f5
2016.12.28 11:20:11 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
2016.12.28 11:20:11 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:108)
at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89)
at org.sonar.core.platform.ComponentContainer$1.start(ComponentContainer.java:320)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:141)
at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:88)
at org.sonar.server.platform.Platform.start(Platform.java:216)
at org.sonar.server.platform.Platform.startLevel1Container(Platform.java:175)
at org.sonar.server.platform.Platform.init(Platform.java:90)
at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:44)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.sonar.db.profiling.NullConnectionInterceptor.getConnection(NullConnectionInterceptor.java:31)
at org.sonar.db.profiling.ProfiledDataSource.getConnection(ProfiledDataSource.java:323)
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:106)
... 30 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
This might be helpful for those who are still facing this issue.
You might be running MySQL in another container. You can use the docker-compose file below to put both services on the same network:
# Use sonar/sonar as user/password credentials
version: '3.1'
services:
  sonarqube:
    image: sonarqube:5.1.1
    networks:
      - sonarqube-network
    ports:
      - "9000:9000"
    environment:
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
      # 'db' is the MySQL service name on the shared network below,
      # reachable from inside the sonarqube container (localhost is not)
      - SONARQUBE_JDBC_URL=jdbc:mysql://db:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
  db:
    image: mysql
    networks:
      - sonarqube-network
    ports:
      # published so MySQL is also reachable from the host
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=sonar
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=sonar
networks:
  sonarqube-network:
Save the file as docker-compose.yml and run docker-compose up
Please note this entry:
- "3306:3306"
It publishes MySQL on the host, so after that you can try:
mysql -u sonar -h localhost -p
to connect to MySQL.
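To double-check the database side of the compose setup above (a sketch using its service names and credentials), you can also open a MySQL shell inside the db container:
docker-compose exec db mysql -u sonar -psonar sonar
If that works but SonarQube still cannot connect, check the JDBC URL host: inside the sonarqube container the database is reachable as db, not localhost.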

flume Command [tail -F] exited with 1

Trying to run the following command in Linux:
bin/flume-ng agent -n a1 -c conf -f conf/flume-tail.properties -Dflume.root.logger=INFO,console
However, the processing stops at:
2016-06-26 09:03:44,610 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{r1=EventDrivenSourceRunner: { source:org.apache.flume.source.ExecSource{name:r1,state:IDLE} }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3244eabe counterGroup:{ name:null counters:{} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
2016-06-26 09:03:44,676 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel c1
2016-06-26 09:03:44,744 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: c1: Successfully registered new MBean.
2016-06-26 09:03:44,746 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: c1 started
2016-06-26 09:03:44,747 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink k1
2016-06-26 09:03:44,747 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:184)] Starting Source r1
2016-06-26 09:03:44,748 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.source.ExecSource.start(ExecSource.java:169)] Exec source starting with command:tail -F
2016-06-26 09:03:44,766 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
2016-06-26 09:03:44,766 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: r1 started
2016-06-26 09:03:44,785 (pool-3-thread-1) [INFO - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:376)] Command [tail -F] exited with 1
Could anyone help me address this issue?
Check whether the user has the permissions to run tail -f on your target directory.
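Note also that the startup log shows the exec source command as plain tail -F with no file argument (Exec source starting with command:tail -F), which by itself makes GNU tail exit with status 1, since it cannot follow stdin by name. A typical exec source entry in flume-tail.properties looks like this (a sketch; the path is hypothetical):
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /path/to/your.log
a1.sources.r1.channels = c1
With a real, readable file after tail -F, the source should keep running instead of exiting with 1.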
