Elasticsearch / Kibana - wrong time values - Docker

I set up an Elasticsearch and Kibana service via Docker, using these GitHub sources: deviantony/docker-elk.
After importing JSON data with a script, Kibana shows wrong time values: every timestamp is shifted forward by exactly two hours. It might be a GMT/UTC problem, but I'm not sure. Note: I work from the Europe/Berlin timezone (UTC+2 during daylight saving time).
I verified the JSON data; the time values are correct there. The system datetimes of Elasticsearch and Kibana are correct, too.
Unfortunately, I haven't found any helpful links for solving this problem.

Elasticsearch stores all dates in UTC; by default, Kibana adjusts those timestamps to the browser's timezone for display, which explains the two-hour shift in a Europe/Berlin browser. This can be disabled in Kibana's advanced settings.
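A minimal sketch of the fix, assuming a Kibana version where the relevant advanced setting is named dateFormat:tz (under Management > Advanced Settings):

dateFormat:tz: UTC

The default value is Browser. With it set to UTC, a document stored as 2019-04-01T12:00:00Z is displayed as 12:00 rather than 14:00 in a Europe/Berlin browser.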

Related

Solr7 and zookeeper behavior leading to deleted data directories, how to research/prevent

During testing, I came across the following situation:
I had set up 3 VMs, all Ubuntu 18.04.
The first 2 machines had a solr7 instance. All 3 machines had a zookeeper. All of these are in Docker containers, the entire config deployed via Ansible.
Solr 7.5, Zookeeper 3.14.3
There's a frontend that acts as an interface for inserting data.
The zookeeper machines were set up to create an ensemble, which they properly did. They all had their id, a leader was elected, solr7 instances could connect and received their settings properly.
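(Side note for anyone reproducing this: the ensemble state can be checked with ZooKeeper's four-letter-word commands; host and port here are the ZooKeeper defaults, assumed rather than taken from my setup:

echo srvr | nc localhost 2181

Each node reports its Mode: as leader or follower once the ensemble has formed.)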
Inserting a bunch of data all worked fine.
Then I took down 2 of the VMs, leaving one with both a Solr and a ZooKeeper instance, and redeployed the new config without a ZooKeeper ensemble.
This did not work: the interface refused to come up. It all took too long, so I decided to go back to 3 VMs.
While I could once again connect, I noticed all data was gone.
Even worse, when looking at the location of the solr data directories, those were all gone. Every single collection/core was gone.
I've been trying to google this issue, but there seems to be no documentation of anything like this.
My current working theory is that Solr started and asked the ZooKeeper ensemble for its configuration. ZooKeeper either was out of sync or had lost its settings, and sent an empty reply or no reply at all, whereupon Solr removed the existing data folders because the config it received specified nothing.
That's just guesswork, though. I'm at a complete loss even finding information about this; I'm not sure what to search for. All the results I get are "how to delete solr cores" or "how to remove collections".
Any help or pointing in the right direction would be appreciated.
EDIT: After asking about it on the Solr mailing list, a bug ticket was made for this: https://issues.apache.org/jira/browse/SOLR-13396
So I'm answering my own question so this can be closed.

Can we set the timezone for InfluxDB/Chronograf?

I'm currently using Chronograf to view my point data in InfluxDB.
At first the queried results in Chronograf seemed abnormal to me, but I later worked out that the issue was timezone differences.
InfluxDB only stores data in UTC, while Chronograf uses my local machine's timezone to display it.
Example:
In InfluxDB I have a point sitting at 7 PM on a particular day, but when I look up the same point in Chronograf, its timestamp shows as 5 PM.
Question:
Is there a way to set the default timezone for Chronograf, so that it does not alter my data and shows the original UTC timestamps?
Short answer: it is not possible to display data in UTC in Chronograf 1.3 yet.
By default, Chronograf offsets InfluxDB's UTC data to whatever your local browser's timezone is.
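To confirm that the stored values really are UTC, the raw points can be inspected with the influx CLI; the database and measurement names here are made up for illustration:

influx -precision rfc3339
> USE mydb
> SELECT * FROM my_measurement LIMIT 5

With rfc3339 precision, timestamps print as UTC strings such as 2017-08-01T17:00:00Z, which makes the browser-side offset easy to spot.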
I have raised a GitHub issue with the Chronograf team, and hopefully it will support displaying data in UTC soon.
See: https://github.com/influxdata/chronograf/issues/1960
Reference:
https://community.influxdata.com/t/chronograf-set-to-use-local-or-set-timezone/947
https://github.com/influxdata/chronograf/issues/1960
As a workaround, you can shift the browser's timezone itself. Install this add-on/extension for Chrome/Chromium:
https://chrome.google.com/webstore/detail/change-timezone-time-shif/nbofeaabhknfdcpoddmfckpokmncimpj?hl=en
Install this add-on/extension for Firefox:
https://addons.mozilla.org/en-US/firefox/addon/change-timezone-time-shift/

Prometheus cAdvisor docker monitoring

I've set up a Docker monitoring stack using Prometheus, Grafana and cAdvisor, and I use this query to count running containers:
count_scalar(container_last_seen{name=~"container1|container2"})
It picks up the containers all right; as soon as I launch a new container it is picked up right away. The problem is that when a container is stopped or removed, the query does not notice: it still shows the container as running.
From the cAdvisor/metrics endpoint it is removed as soon as the container stops.
Is there something wrong with the query?
(this is what i used for the stack: https://github.com/vegasbrianc/prometheus)
It seems to be related to how long cAdvisor keeps data in memory.
While cAdvisor keeps the data in memory, the container_last_seen metric still carries a valid timestamp, so the count_scalar expression still 'sees' the container.
In my test setup, cAdvisor keeps the data for 5 minutes. After this duration, I get the right information out of your formula, because the container_last_seen metric has disappeared.
You can change this cAdvisor configuration with the --storage_duration flag.
--storage_duration=2m0s: How long to store data.
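A sketch of wiring that flag in, assuming cAdvisor runs as a container the way the vegasbrianc/prometheus stack starts it; the image tag, port mapping and one-minute value are illustrative:

docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest --storage_duration=1m0s

With a shorter storage duration, container_last_seen drops out sooner after a container stops, so the count falls back faster.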
As an alternative, if you want quicker alerting, you could also run a query that compares the last-seen date with the current date, counting only containers seen within the last 60 seconds:
count_scalar(time()-container_last_seen{name=~"container1|container2"}<=60)

How to configure mysql for perfino?

After an H2 database corruption, I'm considering migrating to MySQL. But in my first attempt, I'm losing lots of data (with H2 it was a nice continuous curve):
I found the following note in perfino.properties:
Please note that it is possible to configure MySQL in such a way that it
will not work with the perfino connection pool, including (but not
limited to) setting low values for max_allowed_packet or max_connections.
Unfortunately, I couldn't find any documentation on the recommended MySQL configuration. I have fiddled a bit with my.cnf (see the sketch below), but either I have yet to find the optimal configuration, or MySQL is not an appropriate persistence solution for perfino.
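For concreteness, raising the two settings the note mentions looks like this in my.cnf; the values are illustrative guesses, not numbers published by perfino:

[mysqld]
# Illustrative values; perfino does not document recommended numbers.
max_allowed_packet = 64M
max_connections    = 200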
Does anyone have any suggestions?
Edit
I find the difference in the "JDBC Average Statement Execution Time" telemetry for perfino remarkable:
Using H2: about 25 µs.
Using MySQL: about 5 ms.
The corresponding comparison:
I'm attaching another screenshot, just to make the problem more evident:

google cloud sql redmine mysql not responding

I've been trying to set up Redmine on Google Compute Engine with a MySQL 5.5 database hosted on Google Cloud SQL (D1, 512 MB of RAM, always-on, Europe, package-billed).
Unfortunately, Redmine stops responding to requests after a few minutes (really stops: I set the timeout to 1 hour and nothing happens). Using New Relic I found out that it's database-related: ActiveRecord seems to have some problems with the database.
To find out whether the problems are really related to the Cloud SQL database, I set up a new database on my own server, and it has been working fine ever since. So there definitely is an issue between the Cloud SQL database and Redmine/Ruby.
Does anyone have an idea what I can try to solve the problem?
Best,
Jan
GCE idle connections are closed automatically after 10 minutes, as explained in [1]. As you are connecting to Cloud SQL from a GCE instance, this is most likely the cause of your issue.
Additionally, take into account that Cloud SQL instances can go down and come back at any time due to maintenance, so connections must be managed accordingly. Checking the Cloud SQL instance operation list would confirm this. Hope this helps.
[1] https://cloud.google.com/sql/docs/gce-access
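One common mitigation is to enable TCP keepalives on the GCE instance so that idle database connections are not silently dropped; the 60-second value below is an assumption, not something from the linked doc:

# Probe idle TCP connections after 60s so they are not closed as idle
# (run on the GCE instance; the second line persists it across reboots).
sudo /sbin/sysctl -w net.ipv4.tcp_keepalive_time=60
echo 'net.ipv4.tcp_keepalive_time = 60' | sudo tee -a /etc/sysctl.conf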
