Location of HHVM Config File - hhvm

I have built HHVM from source, but I cannot configure hhvm-fastcgi because I could not find the ini file for configuration.
The HHVM binary is located at /usr/local/bin. I want to run HHVM with nginx; my goal is to add a custom PHP extension to HHVM. The source code is at /home/vagrant/dev/hhvm. My operating system is Ubuntu 14.04.

Here is my default ini file; try creating it.
The file is /etc/hhvm/server.ini:
; php options
pid = /var/run/hhvm/pid
; hhvm specific
hhvm.server.port = 9001
hhvm.server.type = fastcgi
hhvm.server.default_document = index.php
hhvm.log.level = Warning
hhvm.log.always_log_unhandled_exceptions = true
hhvm.log.runtime_error_reporting_level = 8191
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
hhvm.mysql.typed_results = false
hhvm.debug.full_backtrace = true
hhvm.debug.server_stack_trace = true
hhvm.debug.server_error_message = true
hhvm.debug.translate_source = true
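To run HHVM as a FastCGI server against this ini file, you can start the binary in server mode, for example: hhvm --mode server -c /etc/hhvm/server.ini (exact flags may differ slightly between HHVM versions). A minimal nginx server block that forwards PHP requests to the port configured above could then look like the sketch below; the document root is an assumption, adjust it to your setup.
server {
    listen 80;
    root /var/www/html;
    index index.php;
    location ~ \.php$ {
        # forward PHP requests to HHVM listening on hhvm.server.port
        fastcgi_pass 127.0.0.1:9001;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}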

Related

Apache Karaf 4.2.3 - separate log file for each bundle

How can I create a separate log file for each bundle deployed in Karaf 4.2.3 using Pax Logging, which has log4j2 native-style config?
I've tried the routing appender, but got no results.
I expect to write each bundle's logs to a separate log file for easier debugging.
I don't know of any way to do this automatically, but what you could do is create a separate logger configuration for each module, based on its root package name:
log4j2.logger.xy.name = com.company.module.xy
log4j2.logger.xy.level = INFO
log4j2.logger.xy.additivity = false
log4j2.logger.xy.appenderRef.inovel.ref = XyFile
log4j2.logger.zz.name = com.company.module.zz
log4j2.logger.zz.level = INFO
log4j2.logger.zz.additivity = false
log4j2.logger.zz.appenderRef.inovel.ref = ZzFile
log4j2.logger.keycloak.name = org.keycloak
log4j2.logger.keycloak.level = INFO
log4j2.logger.keycloak.additivity = false
log4j2.logger.keycloak.appenderRef.keycloak.ref = KeycloakFile
And a referenced appender could look like this:
# keycloak file appender
log4j2.appender.keycloak.type = RollingRandomAccessFile
log4j2.appender.keycloak.name = KeycloakFile
log4j2.appender.keycloak.fileName = ${karaf.data}/log/keycloak.log
log4j2.appender.keycloak.filePattern = ${karaf.data}/log/keycloak.log.%i
log4j2.appender.keycloak.append = true
log4j2.appender.keycloak.layout.type = PatternLayout
log4j2.appender.keycloak.layout.pattern = %d{ISO8601}
log4j2.appender.keycloak.policies.type = Policies
log4j2.appender.keycloak.policies.size.type = SizeBasedTriggeringPolicy
log4j2.appender.keycloak.policies.size.size = 8MB
log4j2.appender.keycloak.strategy.type = DefaultRolloverStrategy
log4j2.appender.keycloak.strategy.max = 10
This is a lot of manual work, so maybe someone will come up with an automatic configuration.
Sincerely
Just have a look at the official Log4j 2.x configuration that ships with every Karaf distribution, in particular the commented "Routing" section.
E.g. I've used the following in one of my projects:
# Root logger
log4j2.rootLogger.level = INFO
log4j2.rootLogger.appenderRef.RollingFile.ref = RollingFile
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.RollingFile.filter.threshold.level = WARN
log4j2.rootLogger.appenderRef.PaxOsgi.ref = PaxOsgi
log4j2.rootLogger.appenderRef.Console.ref = Console
log4j2.rootLogger.appenderRef.Console.filter.threshold.type = ThresholdFilter
log4j2.rootLogger.appenderRef.Console.filter.threshold.level = ${karaf.log.console:-OFF}
# Enable log routing...
log4j2.rootLogger.appenderRef.Routing.ref = Routing
# Loggers configuration
...
# Configure the routing (pay close attention to the escapes)...
log4j2.appender.routing.type = Routing
log4j2.appender.routing.name = Routing
log4j2.appender.routing.routes.type = Routes
log4j2.appender.routing.routes.pattern = \$\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.type = Route
log4j2.appender.routing.routes.bundle.appender.type = RollingRandomAccessFile
log4j2.appender.routing.routes.bundle.appender.name = Bundle-\$\\\{ctx:bundle.name\}
log4j2.appender.routing.routes.bundle.appender.fileName = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log
log4j2.appender.routing.routes.bundle.appender.filePattern = ${karaf.data}/log/bundle-\$\\\{ctx:bundle.name\}.log.%d{yyyy-MM-dd}
log4j2.appender.routing.routes.bundle.appender.append = true
log4j2.appender.routing.routes.bundle.appender.layout.type = PatternLayout
log4j2.appender.routing.routes.bundle.appender.layout.pattern = ${log4j2.pattern}
log4j2.appender.routing.routes.bundle.appender.policies.type = Policies
log4j2.appender.routing.routes.bundle.appender.policies.time.type = TimeBasedTriggeringPolicy
log4j2.appender.routing.routes.bundle.appender.strategy.type = DefaultRolloverStrategy
log4j2.appender.routing.routes.bundle.appender.strategy.max = 31
That clearly worked for me. I wouldn't even think about a static configuration in OSGi! ;-)
The commented Log4j configuration section at the link below
https://github.com/apache/karaf/blob/master/assemblies/features/base/src/main/resources/resources/etc/org.ops4j.pax.logging.cfg
will log messages for each bundle to a separate file, but Karaf comes with multiple bundles by default, so this results in one log file per bundle and a large number of log files being generated.
How can this be done only for the specific bundles the user has deployed in the deploy folder?

Flume Multiplexing not working

I have configured my Flume agent as shown below. Somehow, the agent doesn't run properly; it keeps hanging without any errors. Is there any problem with the configuration below?
FYI: I have a file named "country" with a hard-coded header named state.
#Define sources, sink and channels
foo.sources = s1
foo.channels = chn-az chn-oth
foo.sinks = sink-az sink-oth
#
### # # Define a source on agent and connect to channel memory-channel.
foo.sources.s1.type = exec
foo.sources.s1.command = cat /home/hadoop/flume/country.txt
foo.sources.s1.batchSize = 1
foo.sources.s1.channels = chn-ca chn-oth
#selector configuration
foo.sources.s1.selector.type = multiplexing
foo.sources.s1.selector.header = state
foo.sources.s1.selector.mapping.AZ = chn-az
foo.sources.s1.selector.default = chn-oth
#
#
### Define a memory channel on agent called memory-channel.
foo.channels.chn-az.type = memory
foo.channels.chn-oth.type = memory
#
#
##Define sinks that outputs to hdfs.
foo.sinks.sink-az.channel = chn-az
foo.sinks.sink-az.type = hdfs
foo.sinks.sink-az.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-az.hdfs.filePrefix = statefilter
foo.sinks.sink-az.hdfs.fileType = DataStream
foo.sinks.sink-az.hdfs.writeFormat = Text
foo.sinks.sink-az.batchSize = 1
foo.sinks.sink-az.rollInterval = 0
#
foo.sinks.sink-oth.channel = chn-oth
foo.sinks.sink-oth.type = hdfs
foo.sinks.sink-oth.hdfs.path = hdfs://master:9099/user/hadoop/flume
foo.sinks.sink-oth.hdfs.filePrefix = statefilter
foo.sinks.sink-oth.hdfs.fileType = DataStream
foo.sinks.sink-oth.batchSize = 1
foo.sinks.sink-oth.rollInterval = 0
Thanks,
Vinoth
Regarding the channels list configured at the source:
foo.sources.s1.channels = chn-ca chn-oth
I think chn-ca should be chn-az.
Nevertheless, I think such a configuration will never work since the "state" header used by the selector is not created by any Flume component. You must introduce an interceptor for that, typically the Regex Extractor Interceptor.
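For example, a Regex Extractor Interceptor that populates the state header from the event body could be sketched as follows; the regex is only an illustration and assumes each line of country.txt starts with a two-letter state code such as AZ, so adjust it to the actual line format.
# hypothetical interceptor: copy the leading two-letter code of each line into the "state" header
foo.sources.s1.interceptors = i1
foo.sources.s1.interceptors.i1.type = regex_extractor
foo.sources.s1.interceptors.i1.regex = ^([A-Z]{2})
foo.sources.s1.interceptors.i1.serializers = ser1
foo.sources.s1.interceptors.i1.serializers.ser1.name = state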

Using grails datasources in quartz plugin

I want to create Quartz jobs that use a JDBC job store, as described in the clustering section of the docs, in Burt's example.
The example shows how to configure Quartz using a quartz.properties file.
Now, I'd like my JDBC store to use the same database as my Grails application, so that I have fewer settings to duplicate.
So, assuming I manually create the required tables in my database, is it possible to use the default dataSource configured in DataSource.groovy with the Quartz plugin?
I'm using Grails 2.4.4 and the Quartz plugin 1.0.2.
In other words, can I add my settings to QuartzConfig.groovy rather than creating a new quartz.properties file? At least then I could benefit from the per-environment settings.
Would something like this be valid in QuartzConfig.groovy?
quartz {
    autoStartup = true
    jdbcStore = true
    waitForJobsToCompleteOnShutdown = true
    exposeSchedulerInRepository = true
    props {
        scheduler.skipUpdateCheck = true
        threadPool.class = 'org.quartz.simpl.SimpleThreadPool'
        threadPool.threadCount = 50
        threadPool.threadPriority = 9
        jobStore.misfireThreshold = 60000
        jobStore.class = 'impl.jdbcjobstore.JobStoreTX'
        jobStore.driverDelegateClass = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
        jobStore.useProperties = false
        jobStore.tablePrefix = 'QRTZ_'
        jobStore.isClustered = true
        jobStore.clusterCheckinInterval = 5000
        plugin.shutdownhook.class = 'org.quartz.plugins.management.ShutdownHookPlugin'
        plugin.shutdownhook.cleanShutdown = true
        jobStore.dataSource = 'myDS'
        // [...]
    }
I managed to tweak all my settings in QuartzConfig.groovy. The only things I had to remove to make it work were the database-specific options.
Also, I had to add the property scheduler.idleWaitTime = 1000, as advised on page 12 of http://www.quartz-scheduler.org/generated/2.2.1/pdf/Quartz_Scheduler_Configuration_Guide.pdf, because despite my job being called as MyJob.triggerNow(paramsMap), there was a 20 to 30 second delay before it actually started.
With scheduler.idleWaitTime set to 1 second, the job indeed triggers 1 second after it has been submitted.
QuartzConfig.groovy actually accepts all the properties described in the Quartz configuration docs (e.g. http://quartz-scheduler.org/documentation/quartz-2.1.x/configuration/ConfigJobStoreTX). Just put them inside the props {...} block and remove the org.quartz prefix.
Here is my final config, as an example:
quartz {
    autoStartup = true
    jdbcStore = true
    waitForJobsToCompleteOnShutdown = true
    // Allows monitoring in Java Melody (if you have the java melody plugin installed in your grails app)
    exposeSchedulerInRepository = true
    props {
        scheduler.skipUpdateCheck = true
        scheduler.instanceName = 'my_reporting_quartz'
        scheduler.instanceId = 'AUTO'
        scheduler.idleWaitTime = 1000
        threadPool.'class' = 'org.quartz.simpl.SimpleThreadPool'
        threadPool.threadCount = 10
        threadPool.threadPriority = 7
        jobStore.misfireThreshold = 60000
        jobStore.'class' = 'org.quartz.impl.jdbcjobstore.JobStoreTX'
        jobStore.driverDelegateClass = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
        jobStore.useProperties = false
        jobStore.tablePrefix = 'QRTZ_'
        jobStore.isClustered = true
        jobStore.clusterCheckinInterval = 5000
        plugin.shutdownhook.'class' = 'org.quartz.plugins.management.ShutdownHookPlugin'
        plugin.shutdownhook.cleanShutdown = true
    }
}
Don't forget to create the SQL tables with the appropriate script, which is located under /path/to/your/project/target/work/plugins/quartz-1.0.2/src/templates/sql/...

Graphite carbon-relay not working

I have two Graphite setups and I am trying to relay traffic between the two, but somehow carbon-relay is not working.
My cache runs on ports 2003/2004 and the relay on 2013/2014.
Following are the configurations:
#carbon file
[cache:b]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004
CACHE_QUERY_PORT = 7012
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
#relay-rules file
[default]
default = true
destinations = 127.0.0.1:2003:a, aa.bb.cc.dd:2003:b
Any pointers will be helpful
As part of a recent project at work, I've figured out that the carbon daemons use the pickle protocol when sending data to their destinations.
So the destination of carbon-relay should be carbon-cache's pickle receiver port instead.
#carbon.conf
....
[relay]
LINE_RECEIVER_INTERFACE = 0.0.0.0
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_INTERFACE = 0.0.0.0
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = rules
REPLICATION_FACTOR = 1
DESTINATIONS = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b
Also modify relay-rules.conf with the same destinations as specified in carbon.conf:
#relay-rules.conf
.....
[default]
default = true
destinations = 127.0.0.1:2004:a, aa.bb.cc.dd:2004:b

How to improve Apache Flume performance when writing data to HBase

I am using Apache Flume 1.4.0 with HBase 0.94.10 and Hadoop 1.1.2.
The Flume agent has a spool directory as its source, HBase as its sink, and a file channel. It runs successfully but is very slow. What should I do to improve HBase write performance?
The Flume agent conf is as below:
agent1.sources = spool
agent1.channels = fileChannel
agent1.sinks = sink
agent1.sources.spool.type = spooldir
agent1.sources.spool.spoolDir = /opt/spoolTest/
agent1.sources.spool.fileSuffix = .completed
agent1.sources.spool.channels = fileChannel
#agent1.sources.spool.deletePolicy = immediate
agent1.sinks.sink.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sink.channel = fileChannel
agent1.sinks.sink.table = test
agent1.sinks.sink.columnFamily = log
agent1.sinks.sink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sink.serializer.regex = (.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)^C(.*)
agent1.sinks.sink.serializer.colNames = id,no_fill_reason,adInfo,locationInfo,handsetInfo,siteInfo,reportDate,ipaddress,headerContent,userParaContent,reqParaContent,otherPara,others,others1
agent1.sinks.sink1.batchSize = 100
agent1.channels.fileChannel.type = file
agent1.channels.fileChannel.checkpointDir = /usr/flumeFileChannel/chkpointFlume
agent1.channels.fileChannel.dataDirs = /usr/flumeFileChannel/dataFlume
agent1.channels.fileChannel.capacity = 10000000
agent1.channels.fileChannel.transactionCapacity = 100000
What should the capacity and transaction capacity of the file channel and the batch size of the sink be?
Please help me.
Thanks in advance.
