Kafka 0.8.1.1 (kafka_2.8.0-0.8.1.1.tgz)
I am using jmxtrans to do JMX monitoring of a Kafka instance (which is running in Docker). Unfortunately, Kafka metrics are not being returned.
I have tried a few things to debug this and know that Kafka is running correctly (I can produce/consume messages successfully). I have also concluded that jmxtrans does return JMX metrics (for example, java.lang:type=Memory, attribute=HeapMemoryUsage returns correct data), so the general Kafka and JMX capability seems to be working. Also, I can access the metrics when I use jconsole; the metrics seem to be captured with data in all relevant fields.
When I try jmxtrans using the following configuration, unfortunately I do not get any information back (no data at all, in fact). I believe the metrics are supposed to be captured, based upon the Kafka documentation ("kafka.server:type=BrokerTopicMetrics", attribute="MessagesInPerSec").
The following is the jmxtrans configuration that I used:
{
  "servers" : [ {
    "port" : "9999",
    "host" : "10.0.1.201",
    "queries" : [ {
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.StdOutWriter",
        "settings" : { }
      } ],
      "obj" : "kafka.server:type=BrokerTopicMetrics",
      "attr" : [ "MessagesInPerSec" ]
    } ],
    "numQueryThreads" : 2
  } ]
}
I am not sure why no data is returned. Maybe I set up an invalid jmxtrans configuration, or perhaps I am specifying the metric improperly.
Any help is appreciated.
After a lot of experimentation, I have now resolved the issue. For completeness, below is how I resolved the problem.
It appears that I specified the "obj" value incorrectly.
The CORRECT obj value (an example) is as follows:
"obj": "\"kafka.server\":type=\"BrokerTopicMetrics\",name \"AllTopicsLogBytesAppendedPerSec\"",
"attr": [ "Count" ]
Note that the "obj" value requires additional quotes. This seems unusual and different from the normal (unquoted) pattern I have seen for other JMX obj values.
jmxtrans provided valid output after I put the correct (quoted) values in the obj string.
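For reference, here is a sketch of the full corrected configuration with the quoted ObjectName (everything except the obj/attr values is unchanged from the configuration above; verify the exact MBean names in jconsole first, as they vary across Kafka versions):
{
  "servers" : [ {
    "port" : "9999",
    "host" : "10.0.1.201",
    "queries" : [ {
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.StdOutWriter",
        "settings" : { }
      } ],
      "obj" : "\"kafka.server\":type=\"BrokerTopicMetrics\",name=\"AllTopicsLogBytesAppendedPerSec\"",
      "attr" : [ "Count" ]
    } ],
    "numQueryThreads" : 2
  } ]
}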
As you can find out in ./bin/jmxtrans.sh, the stdout/log file defaults to /dev/null:
LOG_FILE=${LOG_FILE:-"/dev/null"}
That's why it's important to set the env var to something you can use to see the output:
LOG_FILE=log.txt ./bin/jmxtrans.sh start kafka.json
I'm using the following kafka.json configuration file:
{
  "servers" : [ {
    "port" : "10101",
    "host" : "localhost",
    "queries" : [ {
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.StdOutWriter",
        "settings" : { }
      } ],
      "obj" : "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=testowo",
      "attr" : [ "Count" ]
    } ],
    "numQueryThreads" : 2
  } ]
}
When you start jmxtrans, it will query the broker over JMX on localhost:10101 for the Count attribute of the testowo topic. It will print the result to the file LOG_FILE every 60 seconds (you can change the interval using the SECONDS_BETWEEN_RUNS env var), e.g.
LOG_FILE=log.txt SECONDS_BETWEEN_RUNS=5 ./bin/jmxtrans.sh start kafka.json
You may want to use one of jmxtrans's other writers so the output is not intermingled, e.g.
{
  "servers" : [ {
    "port" : "10101",
    "host" : "localhost",
    "queries" : [ {
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.KeyOutWriter",
        "settings" : {
          "outputFile" : "testowo-counts.txt",
          "maxLogFileSize" : "10MB",
          "maxLogBackupFiles" : 200,
          "delimiter" : "\t",
          "debug" : true
        }
      } ],
      "obj" : "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=testowo",
      "attr" : [ "Count" ]
    } ],
    "numQueryThreads" : 2
  } ]
}
Last but not least, to set the JMX port to a known value, use the JMX_PORT env var when starting a Kafka broker with ./bin/kafka-server-start.sh, e.g.
JMX_PORT=10101 ./bin/kafka-server-start.sh config/server.properties
I'm trying to integrate Amplify into my project, but I'm having some issues with the configuration.
The backend sends the S3 Storage configuration to my project, so I have to configure Amplify with the data received.
I tried to configure the storage following this test, but it's failing with the following error:
PluginError: Unable to decode configuration
Recovery suggestion: Make sure the plugin configuration is JSONValue
▿ pluginConfigurationError : 3 elements
- .0 : "Unable to decode configuration"
- .1 : "Make sure the plugin configuration is JSONValue"
- .2 : nil
This is my code:
func amplifyConfigure() {
    do {
        Amplify.Logging.logLevel = .verbose
        try Amplify.add(plugin: AWSCognitoAuthPlugin())
        try Amplify.add(plugin: AWSS3StoragePlugin())

        let storageConfiguration = StorageCategoryConfiguration(
            plugins: [
                "awsS3StoragePlugin": [
                    "bucket": "bucket",
                    "region": "us-west-2",
                    "defaultAccessLevel": "protected"
                ]
            ]
        )

        let amplifyConfiguration = AmplifyConfiguration(storage: storageConfiguration)
        try Amplify.configure(amplifyConfiguration)
        // LOG success.
    } catch {
        // LOG Error.
    }
}
Can someone help me with this custom configuration?
Thanks!
It seems that the config cannot be declared directly in one go for some reason, possibly type-related. For me it works if I declare it in multiple steps. Try replacing this:
let storageConfiguration = StorageCategoryConfiguration(
    plugins: [
        "awsS3StoragePlugin": [
            "bucket": "bucket",
            "region": "us-west-2",
            "defaultAccessLevel": "protected"
        ]
    ]
)
with this:
var storageConfigurationJson: [String: JSONValue] = ["awsS3StoragePlugin": []]
storageConfigurationJson["awsS3StoragePlugin"] = [
    "bucket": "bucket",
    "region": "us-west-2",
    "defaultAccessLevel": "protected"
]
let storageConfiguration = StorageCategoryConfiguration(plugins: storageConfigurationJson)
I've only used Amplify config with AuthCategoryConfiguration, so in case StorageCategoryConfiguration has a different syntax you may need to adjust my suggested code accordingly.
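Putting it together, the full configure method from the question, using the step-wise JSONValue dictionary, would look roughly like this (same Amplify calls as in the original code; only the dictionary construction changes):
func amplifyConfigure() {
    do {
        Amplify.Logging.logLevel = .verbose
        try Amplify.add(plugin: AWSCognitoAuthPlugin())
        try Amplify.add(plugin: AWSS3StoragePlugin())

        // Build the plugin configuration step by step so the values
        // are typed as JSONValue rather than a plain Swift dictionary.
        var storageConfigurationJson: [String: JSONValue] = [:]
        storageConfigurationJson["awsS3StoragePlugin"] = [
            "bucket": "bucket",
            "region": "us-west-2",
            "defaultAccessLevel": "protected"
        ]

        let storageConfiguration = StorageCategoryConfiguration(plugins: storageConfigurationJson)
        try Amplify.configure(AmplifyConfiguration(storage: storageConfiguration))
        // Log success.
    } catch {
        // Log error.
    }
}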
I am trying to execute remote WebDriver tests using Selenium Grid. My node is a Windows 8 system, and I have an Insider version of Edge installed on it. When I try to execute my tests, the node starts the driver service, but it is not able to find the Microsoft Edge binary on the node.
Machine: Windows 8
Edge version: 77.0.235.9
Binary path: C:\Program Files (x86)\Microsoft\Edge Beta\Application\msedge.exe
Selenium version: 3.141.59
I tried to append the path to the Edge executable "msedge.exe" to the node's Path variable -> did not work.
I tried to specify the edge_binary path in nodeconfig.json as well, but the executable was still not found.
else if (browsername.equalsIgnoreCase("edge")) {
    EdgeOptions options = new EdgeOptions();
    driver = new RemoteWebDriver(new URL("http://192.168.1.107:8889/wd/hub"), options);
    return driver;
}
nodeconfig.json :
{
  "capabilities" : [
    {
      "browserName" : "chrome",
      "maxInstances" : 5,
      "seleniumProtocol" : "WebDriver"
    },
    {
      "browserName" : "firefox",
      "maxInstances" : 5,
      "firefox_binary" : "C:\\Program Files\\Mozilla Firefox\\firefox.exe",
      "seleniumProtocol" : "WebDriver",
      "acceptInsecureCerts" : true,
      "acceptSslCerts" : true
    },
    {
      "browserName" : "internet explorer",
      "version" : "11",
      "maxInstances" : 5,
      "seleniumProtocol" : "WebDriver"
    },
    {
      "browserName" : "MicrosoftEdge",
      "platform" : "WINDOWS",
      "edge_binary" : "C:\\Program Files (x86)\\Microsoft\\Edge Beta\\Application\\msedge.exe",
      "version" : "77.0.235.9",
      "maxInstances" : 5,
      "seleniumProtocol" : "WebDriver"
    }
  ],
  "proxy" : "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
  "maxSession" : 10,
  "port" : 5555,
  "register" : true,
  "registerCycle" : 5000,
  "hub" : "http://192.168.1.107:8889",
  "nodeStatusCheckTimeout" : 5000,
  "nodePolling" : 5000,
  "role" : "node",
  "unregisterIfStillDownAfter" : 1000,
  "downPollingLimit" : 2,
  "debug" : false,
  "servlets" : [],
  "withoutServlets" : [],
  "custom" : {
    "webdriver.ie.driver" : "drivers/ie/win/IEDriverServer.exe",
    "webdriver.gecko.driver" : "drivers/firefox/win/geckodriver.exe",
    "webdriver.chrome.driver" : "drivers/chrome/win/chromedriver.exe",
    "webdriver.edge.driver" : "drivers/edge/win/msedgedriver.exe"
  }
}
How can I make the executable visible to the client trying to automate it? EdgeOptions does not allow setting a binary path either.
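For what it's worth, a hedged workaround sketch (not verified on this exact setup): EdgeOptions in Selenium 3.141.59 has no setBinary(), but it inherits setCapability() from MutableCapabilities, so the Chromium-style ms:edgeOptions capability can carry the binary path to msedgedriver. Whether the node's msedgedriver honours it depends on the driver version. Adapted to the snippet above (requires java.util.Map and java.util.HashMap imports):
else if (browsername.equalsIgnoreCase("edge")) {
    // Pass the Edge binary location through the Chromium-style
    // "ms:edgeOptions" capability instead of a setBinary() call.
    Map<String, Object> edgeChromiumOptions = new HashMap<>();
    edgeChromiumOptions.put("binary",
            "C:\\Program Files (x86)\\Microsoft\\Edge Beta\\Application\\msedge.exe");

    EdgeOptions options = new EdgeOptions();
    options.setCapability("ms:edgeOptions", edgeChromiumOptions);

    driver = new RemoteWebDriver(new URL("http://192.168.1.107:8889/wd/hub"), options);
    return driver;
}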
I am trying to get the Kafka metrics (version 0.8.2) into my Grafana server. Unfortunately I can only get java.lang metrics, but no Kafka metrics. Connecting with jmxtrans and jconsole is no problem, and I can see the MBeans of Kafka.
Configuration of jmxtrans:
{
  "servers" : [ {
    "port" : "9393",
    "host" : "localhost",
    "queries" : [ {
      "obj" : "kafka.network:type=RequestMetrics,name=LocalTimeMs,request=ConsumerMetadata",
      "attr" : [ "Count" ],
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings" : {
          "port" : 2003,
          "host" : "localhost"
        }
      } ]
    }, {
      "obj" : "kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=FetchConsumer",
      "attr" : [ "Count" ],
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings" : {
          "port" : 2003,
          "host" : "localhost"
        }
      } ]
    }, {
      "obj" : "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=OffsetFetch",
      "attr" : [ "Count" ],
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings" : {
          "port" : 2003,
          "host" : "localhost"
        }
      } ]
    }, {
      "obj" : "kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets",
      "attr" : [ "Count" ],
      "outputWriters" : [ {
        "#class" : "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings" : {
          "port" : 2003,
          "host" : "localhost"
        }
      } ]
    } ],
    "numQueryThreads" : 2
  } ]
}
Log of jmxtrans:
15:52:11.564 [ServerScheduler_Worker-10] DEBUG org.quartz.core.JobRunShell - Calling execute on job ServerJob.localhost:9393-1443714611563-0428747962
15:52:11.564 [ServerScheduler_Worker-10] DEBUG c.googlecode.jmxtrans.jobs.ServerJob - +++++ Started server job: Server [host=localhost, port=9393, url=null, cronExpression=null, numQueryThreads=2]
15:52:11.569 [ServerScheduler_Worker-10] DEBUG com.googlecode.jmxtrans.jmx.JmxUtils - ----- Creating 4 query threads
15:52:11.638 [pool-60-thread-2] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Executing queryName [kafka.network:name=RequestQueueTimeMs,request=FetchConsumer,type=RequestMetrics] from query [Query [obj=kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=FetchConsumer, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]]
15:52:11.638 [pool-60-thread-2] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Finished running outputWriters for query: Query [obj=kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request=FetchConsumer, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]
15:52:11.640 [pool-60-thread-2] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Executing queryName [kafka.network:name=TotalTimeMs,request=OffsetFetch,type=RequestMetrics] from query [Query [obj=kafka.network:type=RequestMetrics,name=TotalTimeMs,request=OffsetFetch, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]]
15:52:11.640 [pool-60-thread-2] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Finished running outputWriters for query: Query [obj=kafka.network:type=RequestMetrics,name=TotalTimeMs,request=OffsetFetch, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]
15:52:11.641 [pool-60-thread-2] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Executing queryName [kafka.network:name=TotalTimeMs,request=Offsets,type=RequestMetrics] from query [Query [obj=kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]]
15:52:11.641 [pool-60-thread-2] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Finished running outputWriters for query: Query [obj=kafka.network:type=RequestMetrics,name=TotalTimeMs,request=Offsets, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]
15:52:11.642 [pool-60-thread-1] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Executing queryName [kafka.network:name=LocalTimeMs,request=ConsumerMetadata,type=RequestMetrics] from query [Query [obj=kafka.network:type=RequestMetrics,name=LocalTimeMs,request=ConsumerMetadata, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]]
15:52:11.642 [pool-60-thread-1] DEBUG c.g.jmxtrans.jmx.JmxQueryProcessor - Finished running outputWriters for query: Query [obj=kafka.network:type=RequestMetrics,name=LocalTimeMs,request=ConsumerMetadata, useObjDomainAsKey:false, resultAlias=null, attr=[Count]]
15:52:11.643 [ServerScheduler_Worker-10] DEBUG c.googlecode.jmxtrans.jobs.ServerJob - +++++ Finished server job: Server [host=localhost, port=9393, url=null, cronExpression=null, numQueryThreads=2]
To me it looks like jmxtrans can connect without a problem and tries to get the data, but it does not get any data at all from the Kafka metrics.
JMX Options of Kafka:
-Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9393 -Djava.rmi.server.hostname=example.com -Dcom.sun.management.jmxremote.rmi.port=9011
Any idea why the Kafka metrics are not available?
Thanks.
Some versions of Kafka had quoted JMX elements (you could see the attributes having quotes in jconsole). This has changed / is changing in later versions of Kafka. Check out this post (How to monitor Kafka broker using jmxtrans?) for details, but if you use quoted strings it should work (see the example below):
"obj": "\"kafka.server\":type=\"BrokerTopicMetrics\",name \"AllTopicsLogBytesAppendedPerSec\"", "attr": [ "Count" ]
Note that the "obj" value requires additional quotes. When I tried this, JMXTRANS provided valid output after putting the correct (quoted) values in the obj string...
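Applied to one of the queries above, a sketch of the quoted form might look like this. Caveat: the exact name format (including how the request type is encoded into the name) differs between Kafka versions, so copy the ObjectName character for character from jconsole rather than trusting this guess:
{
  "obj" : "\"kafka.network\":type=\"RequestMetrics\",name=\"ConsumerMetadata-LocalTimeMs\"",
  "attr" : [ "Count" ],
  "outputWriters" : [ {
    "#class" : "com.googlecode.jmxtrans.model.output.GraphiteWriter",
    "settings" : {
      "port" : 2003,
      "host" : "localhost"
    }
  } ]
}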
The following are the contents of config/default_mapping.json:
{
  "_default_" : [
    {
      "int_template" : {
        "match" : "*",
        "match_mapping_type" : "int",
        "mapping" : {
          "type" : "string"
        }
      }
    }
  ]
}
What I want ES to do is to pick out all numbers from my logs and map them as strings.
Use case:
After clearing all indexes (curl -XDELETE 'http://localhost:9200/_all'), I run this to send the following to ES (through fluentd's tail plugin):
echo "{\"this\" : 134}" >> /home/user/logs/program-data/logs/tiger/tiger.log
Elasticsearch happily creates the initial indexes. Now, to test whether my default_mapping works, I send a string as the value where I previously sent an int:
echo "{\"this\" : \"ABC\"}" >> /home/user/logs/program-data/logs/tiger/tiger.log
Exception caught by ES:
org.elasticsearch.index.mapper.MapperParsingException: failed to parse [this]
at org.elasticsearch.index.mapper.core.AbstractFieldMapper.parse(AbstractFieldMapper.java:398)
at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:618)
at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:471)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:513)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:457)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:342)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:401)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:155)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)
Caused by: java.lang.NumberFormatException: For input string: "ABC"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:438)
at java.lang.Long.parseLong(Long.java:478)
at org.elasticsearch.common.xcontent.support.AbstractXContentParser.longValue(AbstractXContentParser.java:89)
What could be wrong here?
Update:
My default_mapping.json now looks like this:
{
  "_default_": {
    "dynamic_templates": [
      {
        "string_template": {
          "match": "*",
          "mapping": {
            "type": "string"
          }
        }
      }
    ]
  }
}
First of all, I'd suggest not using file-system-based configuration or mappings. Just do it via the API.
Your mapping is malformed: you have the type name (_default_), but you don't specify that what you are submitting is a dynamic template.
As for the content, I'd remove that match_mapping_type if you want to map everything as a string.
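For instance, a sketch of how the same _default_ mapping could be submitted via the API when creating an index (the index name test is just a placeholder):
curl -XPUT 'http://localhost:9200/test' -d '{
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "string_template": {
            "match": "*",
            "mapping": { "type": "string" }
          }
        }
      ]
    }
  }
}'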
When a page is called, I want to check the path and, depending on it, redirect the user to the front page with some parameters. These parameters I will use in a block to show extra information to the visitor.
Which hook should I use so that Drupal has to do the least unnecessary work?
1) The template_preprocess_page function is the appropriate hook here.
2) An alternative option is to use the Rules module.
Event: Drupal is initializing (using hook_init)
Condition: Execute custom PHP code (check the path arguments)
Actions: Page redirect, other Rules actions (e.g. a message)
I would suggest showing a Drupal message to the user instead of a block, unless the user is logged in and the parameters shown in the block exist in the database, in which case you can use the Views module to create that block.
Here is an export of a rule that redirects if the taxonomy term page display belongs to vocabulary '4'. Import it into your Rules to see the results.
{ "rules_taxonomy_redirect_business" : {
"LABEL" : "Taxonomy redirect - Business",
"PLUGIN" : "reaction rule",
"TAGS" : [ "redirect", "taxonomy" ],
"REQUIRES" : [ "php", "rules" ],
"ON" : [ "init" ],
"IF" : [
{ "php_eval" : { "code" : "$check1 = (arg(0)==\u0027taxonomy\u0027)\u0026\u0026(arg(1)==\u0027term\u0027);\r\n$check2 = (arg(3)!=\u0027edit\u0027);\r\n\r\nif (arg(2)) {\r\n$tid = arg(2);\r\n$vid = db_query(\u0027SELECT vid FROM {taxonomy_term_data} WHERE tid = :tid\u0027, array(\u0027:tid\u0027 =\u003E $tid))-\u003EfetchField();\r\n$check3 = ($vid == \u00274\u0027);\r\n}\r\n\r\nreturn ($check1)\u0026\u0026($check2)\u0026\u0026($check3);" } }
],
"DO" : [
{ "redirect" : { "url" : "\u003C?php\r\n$tid = arg(2);\r\nreturn \u0027business?cat%5B%5D=\u0027 . $tid;\r\n?\u003E" } }
]
}
}
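For completeness, the same redirect can also be implemented directly in a small custom module instead of Rules. A hedged Drupal 7 sketch of the logic in the export above, where "mymodule" is a placeholder module name:
<?php
/**
 * Implements hook_init().
 *
 * Sketch only: mirrors the rule's PHP condition and redirect action.
 */
function mymodule_init() {
  // Only act on taxonomy term pages, and not on their edit form.
  if (arg(0) == 'taxonomy' && arg(1) == 'term' && arg(2) && arg(3) != 'edit') {
    $tid = arg(2);
    // Look up the vocabulary of the term, as the rule's condition does.
    $vid = db_query('SELECT vid FROM {taxonomy_term_data} WHERE tid = :tid',
      array(':tid' => $tid))->fetchField();
    if ($vid == 4) {
      // Redirect to the "business" page with the term id as a parameter.
      drupal_goto('business', array('query' => array('cat' => array($tid))));
    }
  }
}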