Weird Kibana error - invalid code -- missing end-of-block - elasticsearch-5

I just started seeing this error on my Kibana server:
read err { Error: invalid code -- missing end-of-block at InflateRaw.zlibOnError (zlib.js:153:15) errno: -3, code: 'Z_DATA_ERROR' }
There is no helpful information in the corresponding logs:
{"type":"log","@timestamp":"2019-01-22T13:46:34Z","tags":["license","info","xpack"],"pid":17310,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
However, in the browser there is an error: Kibana server is not ready yet.
I have no idea how to tackle this!
UPDATE
I have seen an additional error in the Elasticsearch logs that might suggest the cause of the failure:
[2019-01-24T11:15:47,216][INFO ][o.e.c.m.MetaDataIndexTemplateService] [cloudraid01] adding template [.management-beats] for index patterns [.management-beats]
This seems to be related to Metricbeat.

Related

RabbitMQ can't start due to an error in the log formatter config

I'm using RabbitMQ 3.8.5-management with the following config:
log.file = rabbit.log
log.dir = /var/log/rabbitmq
log.file.level = info
log.file.formatter = json
log.file.rotation.date = $D0
I get the following error:
12:45:12.131 [error] You've tried to set log.file.formatter, but there is no setting with that name.
12:45:12.134 [error] Did you mean one of these?
12:45:12.182 [error] log.file.level
12:45:12.182 [error] log.file
12:45:12.182 [error] log.file.rotation.date
12:45:12.182 [error] Error preparing configuration in phase transform_datatypes:
12:45:12.183 [error] - Conf file attempted to set unknown variable: log.file.formatter
According to the documentation, log.file.formatter should work - what is wrong?
I have checked the documentation on RabbitMQ.
I have checked other SO posts.
I entered the container and removed the config - it works without it.
It looks like JSON logging and the log.file.formatter setting were added in the RabbitMQ 3.9.0 release.
Try upgrading if possible.
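For illustration, a minimal sketch of carrying the same config onto a newer image (the 3.9-management tag, the host paths, and the /etc/rabbitmq/rabbitmq.conf mount point are assumptions to adapt to your setup):
# run a 3.9+ image so log.file.formatter = json is recognized; paths are placeholders
docker run -d --name rabbitmq \
  -v /path/to/rabbitmq.conf:/etc/rabbitmq/rabbitmq.conf:ro \
  -v /var/log/rabbitmq:/var/log/rabbitmq \
  rabbitmq:3.9-management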

Spinnaker Kayenta service is failing to identify the spinnaker details

I am testing Spinnaker for a pipeline implementation. During the canary analysis process, Spinnaker is unable to read the metrics from Datadog and throws the error below.
Request GET:https://app.datadoghq.com/api/v1/query?from=&to=&query=avg%3Asystem.mem.total%7B is missing [X-SPINNAKER-USER, X-SPINNAKER-ACCOUNTS] authentication headers and will be treated as anonymous.
Request from: com.netflix.spinnaker.okhttp.MetricsInterceptor.doIntercept(MetricsInterceptor.java:98)
2022-10-10 06:02:41.583 INFO 1 --- [ handlers-20] c.n.k.d.service.DatadogRemoteService : <--- HTTP 403 https://app.datadoghq.com/api/v1/query?from=&to=&query=avg%3Asystem.mem.total%7 (1106ms)
2022-10-10 06:02:41.587 ERROR 1 --- [ handlers-20] c.n.s.orca.q.handler.RunTaskHandler : Error running DatadogFetchTask for pipeline[01GF07SZ02KCXHCD2SWZ69XY38]
It would be really helpful if someone could help me with this.
It is looking for the X-SPINNAKER-USER and X-SPINNAKER-ACCOUNTS headers.

Failed to start the VM error when starting a Dataflow SQL job

Getting the following error when I try to launch a Dataflow SQL job:
Failed to start the VM, launcher-____, used for launching because of status code: INVALID_ARGUMENT, reason: Error: Message: Invalid value for field 'resource.networkInterfaces[0].network': 'global/networks/default'. The referenced network resource cannot be found. HTTP Code: 400.
This issue just started today.
Adding the default network solved the issue.
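For reference, a sketch of how the default network can be recreated with the gcloud CLI if it has been deleted from the project (assumes an auto-mode VPC named default is what the launcher VM expects):
# recreate the auto-mode default VPC network referenced by the launcher VM
gcloud compute networks create default --subnet-mode=auto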

ECONNRESET when opening a large number of connections in a small time period

I have a situation where I want to create a large number of entities on Orion. I am using the Docker version of Orion and Mongo with this docker-compose:
version: "3"
services:
  mongo:
    image: mongo:3.4
    volumes:
      - /data/docker-mongo/db:/data/db
      - /data/docker-mongo/log/mongodb.log:/var/log/mongodb/mongod.log
    command: --nojournal
  orion:
    image: fiware/orion
    volumes:
      - /data/docker-mongo/log/contextBroker.log:/tmp/contextBroker.log
    links:
      - mongo
    ports:
      - "1026:1026"
    command: -dbhost mongo
Now, the problem happens when I want to upload 2000 entities (opening a new connection for each; I know it can be done differently, but for now this is the requirement). I successfully create no more than 600 of them (or fewer, never an exact number); the rest fail to create with this error:
"error": {
"errno": "ECONNRESET",
"code": "ECONNRESET",
"syscall": "read"
},
So I assume this issue has something to do with the maxConnections, reqPoolSize, etc. settings in Orion. But in Docker I failed to locate the Orion config file, and I have no way of knowing, when I type commands like contextBroker -maxConnections 123456, whether that setting is being accepted by Orion and the Docker container.
Also, the Orion log is empty, and I cannot determine what is causing this issue when Orion is running in Docker.
So, my main questions:
Can Orion running in Docker be used in the same manner as Orion running on a VM (are there any caveats)?
And how do I investigate this problem when Orion is running in Docker? I have read a lot of docs/issues but had no luck (or I missed something).
If you have any advice/solution it would really help.
Thanks
Orion version info:
{
  "orion" : {
    "version" : "1.13.0-next",
    "uptime" : "2 d, 15 h, 46 m, 34 s",
    "git_hash" : "ae72acf9e8eeaacaf4eb138f7de37bfee4514c6b",
    "compile_time" : "Fri May 4 10:12:18 UTC 2018",
    "compiled_by" : "root",
    "compiled_in" : "1901fd6bb51a",
    "release_date" : "Fri May 4 10:12:18 UTC 2018",
    "doc" : "https://fiware-orion.readthedocs.org/en/master/"
  }
}
{ Error: socket hang up
at createHangUpError (_http_client.js:313:15)
at Socket.socketOnEnd (_http_client.js:416:23)
at Socket.emit (events.js:187:15)
at endReadableNT (_stream_readable.js:1090:12)
at process._tickCallback (internal/process/next_tick.js:63:19) code: 'ECONNRESET' }
error:
{ Error: connect ECONNREFUSED ipofvirtualm:1026
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'read',
address: 'ipofvm',
port: 1026 },
options:
{ method: 'POST',
uri: 'http://ip:1026/v2/entities?options=keyValues',
headers:
{ 'Fiware-Service': 'some service',
'Fiware-ServicePath': 'some servicepath' },
body:
{ id: 'F0B935',
type: 'Transaction',
refEmitter: 'F0B935',
refReceiver: '7501JXG',
refCapturer: 'testtdata',
date: '12/12/2017 13:25',
refTransferredResources: 'testtdata',
transferredLoad: 92 },
json: true,
callback: [Function: RP$callback],
transform: undefined,
simple: true,
resolveWithFullResponse: false,
transform2xxOnly: false },
I am using the request-promise library for making calls; I tried others and they had the same issue. Since I cannot send you all 2000 responses, I will try to describe the behavior: when I start sending, it creates around 30 entities, then the next few (or more) return an ECONNRESET response, then it starts creating again, and so on.
What confuses me is that it is not failing completely; it works, but not as intended. Also, it seems that Orion closes or hangs up the socket for some period, then it is open again and creates entities as normal, and so on. If you need any more info, ask, and thanks for the quick answer.
Instead of opening a new connection per entity, why don't you use
POST /v2/op/update
and create all entities in just one batch, or a couple of batches?
See some code at
https://github.com/Fiware/dataModels/blob/master/Weather/WeatherObserved/harvest/spain_weather_observed_harvest.py#L235
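As an illustration, a minimal sketch of such a batch request with curl (the host, the service headers, and the single-entity batch are taken from the question or are placeholders to adapt):
curl -X POST 'http://localhost:1026/v2/op/update?options=keyValues' \
  -H 'Content-Type: application/json' \
  -H 'Fiware-Service: some service' \
  -H 'Fiware-ServicePath: some servicepath' \
  -d '{
    "actionType": "append",
    "entities": [
      {
        "id": "F0B935",
        "type": "Transaction",
        "refEmitter": "F0B935",
        "refReceiver": "7501JXG",
        "date": "12/12/2017 13:25",
        "transferredLoad": 92
      }
    ]
  }'
Putting a few hundred entities into the entities array per request would cut the number of connections dramatically.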
With regards to CLI argument passing to the CB running inside Docker, use the command line in the docker-compose file, e.g.:
command: -dbhost mongo -maxConnections 123456
However, I'm not sure that would help to solve the problem, as Orion should handle your use case without any special customization. Looking at the error message (which seems to be about some problem at the TCP layer), I wonder if the Docker networking layer is acting as a bottleneck in some way...
In addition, the suggestion by Jose Manuel Cantera about using POST /v2/op/update is a good idea. It would reduce connection stress at the network layer and may help to alleviate the problem.
If you cannot change your update strategy, maybe using an inter-request delay (100-200ms) could also help.
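A rough sketch of what such a delay could look like from the shell, assuming each entity payload has already been written to its own JSON file (the file names and the ~150 ms pause are placeholders):
# send entities one by one with a short pause to avoid bursting connections
for f in entities/*.json; do
  curl -s -X POST 'http://localhost:1026/v2/entities?options=keyValues' \
    -H 'Content-Type: application/json' \
    -H 'Fiware-Service: some service' \
    -H 'Fiware-ServicePath: some servicepath' \
    -d @"$f"
  sleep 0.15
done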

Flume agentSink "Unable to load output format plugin class"

I'm getting the following error and I have no idea why. If I change the sink to "console", it works fine. I'm just trying to recreate an example from the Flume documentation, except across two different nodes. This is using CDH3.
2011-10-20 17:41:13,046 [main] WARN text.FormatFactory: Unable to load output format plugin class - Class not found
2011-10-20 17:41:13,065 [main] INFO agent.FlumeNode: Loading spec from command line: 'foo:console|agentSink("somehost",35853);'
2011-10-20 17:41:13,228 [main] WARN agent.FlumeNode: Caught exception loading node:null
I'm trying to run flume as such:
flume node_nowatch -1 -s -n foo -c 'foo:console|agentSink("somehost",35853);'
Thanks in advance.
