I have the timestamp plugin enabled in Jenkins. I am trying to jump to a particular timestamp by using the elapsed timestamp in the URL, as in http://localhost:8080/job/frog/11/consoleFull?elapsed=14:43:11
Is it possible to jump to a timestamp in the build log, or is there an error in the URL?
Please help me on this.
You cannot jump to timestamps directly, but it is possible to skip an arbitrary number of leading bytes. E.g., to skip the first million bytes, you'd use
http://localhost:8080/job/frog/11/logText/progressiveText?start=1000000
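If you want to poll a long (or still-running) log programmatically, here is a minimal Java sketch. It assumes anonymous read access to the Jenkins instance; the X-Text-Size response header is how progressiveText reports the current log size, so it can be fed back as the next start value:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FetchConsoleTail {
    public static void main(String[] args) throws Exception {
        // Skip the first million bytes of the console log.
        URL url = new URL("http://localhost:8080/job/frog/11/logText/progressiveText?start=1000000");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Offset to pass as ?start= on the next poll of a still-running build.
        System.out.println("next start: " + conn.getHeaderField("X-Text-Size"));
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}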
The official documentation states that one can use the x-stream-offset consumer argument to specify where to start reading the stream from, but it doesn't say where to get these values from.
For example, what does timestamp mean here? The point in time when the chunk was inserted into the stream? If one is reading messages from the Stream, is there a way to get this timestamp to store it so that when the process is restarted it can continue from where it left off?
Same with the numerical offset value... how does a consumer know the current offset when reading messages from the stream?
The offset for each message can be retrieved from its headers.
Usually:
message.properties.headers["x-stream-offset"]
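A minimal sketch with the RabbitMQ Java client (the queue name my-stream and the starting offset are assumptions; note that stream queues require manual acks and a prefetch):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Collections;
import java.util.Map;

public class StreamConsumer {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();
        channel.basicQos(100); // stream queues require a prefetch with manual acks
        long lastOffset = 0L;  // in practice, load this from your own durable storage
        Map<String, Object> consumeArgs = Collections.singletonMap("x-stream-offset", lastOffset);
        channel.basicConsume("my-stream", false, consumeArgs,
                (consumerTag, delivery) -> {
                    // Each delivery carries its offset in the headers.
                    long offset = (Long) delivery.getProperties().getHeaders().get("x-stream-offset");
                    // Persist `offset` somewhere durable so a restarted consumer
                    // can resume from where it left off.
                    channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                },
                consumerTag -> { });
    }
}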
I have an elasticsearch instance that receives logs from multiple backup routines. I'd like to query ES for these logs from Grafana and set up a panel that shows the time of the last run for each of the different backups. Ideally I would also like to be able to show this in color if the time is longer than a certain threshold.
Basically the idea is to have a display that shows, for instance, green if a certain backup has been completed in the last 24 hours, and red if it hasn't.
How would I do this in Grafana with ES as the datasource?
The exact implementation depends on the panel used.
Example for singlestat: write the ES query and then select Stat: Time of last point; you may need to select a suitable unit/format.
Unfortunately, Grafana doesn't understand thresholds in your requested time format (older than 24 hours). You will need to return it as a metric (for example, the age of the last backup in seconds), which means you will need to write a query for that. You will then have 2 stats to show (last time + age), so you won't be able to use singlestat. A table panel will probably be better - you can use thresholding based on the age metric there.
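If you end up computing the age yourself (for example, in a small job that pushes it back as a metric), the underlying query is just a max aggregation on the timestamp field. A sketch using only the JDK's HTTP client; the index pattern backups-*, the @timestamp field, and the backup.name filter are all assumptions about your log schema:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BackupAge {
    public static void main(String[] args) throws Exception {
        // Latest event for one backup job: max aggregation over @timestamp.
        String query = "{\"size\":0,"
                + "\"query\":{\"term\":{\"backup.name\":\"db-nightly\"}},"
                + "\"aggs\":{\"last_run\":{\"max\":{\"field\":\"@timestamp\"}}}}";
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:9200/backups-*/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // aggregations.last_run.value in the response is the epoch millis of the
        // newest entry; age in seconds = (System.currentTimeMillis() - value) / 1000.
        System.out.println(response.body());
    }
}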
In addition to the great answer by Jan Garaj, it looks like there is work being done to make this type of thing much easier in the future.
Check out this issue to track progress.
How do I use Prometheus + Grafana to tell how many times an event occurred during a given time period?
I have a Prometheus counter that I increment every time this event happens. I would like to display it in a Singlestat number. It seems like this should be as simple as:
sum(increase(some_event_happened{application="example-app"}[$__range]))
And the display set to "Current" value.
However, this gives numbers that are much higher than the actual number of events in the given range. Also, it seems to vary based on how much I offset the range, and how large the range is.
More importantly, it crashes our Prometheus server with an out of memory error when I have three or four of these on a single dashboard.
I've tried setting up a recording rule to address the crashes, but I haven't figured out the right way to slice up the recording rule so that I can still display the Grafana range.
So in summary, I want a Singlestat displaying the number of times an event happened in the current time range set in the Grafana dashboard. It seems like this is a very basic thing for a monitoring system. Am I just using the wrong approach?
I've encountered similar issues and they appear to be due to discrepancies between the query interval (in Prometheus) and the min step (in Grafana). Try using this global, built-in variable for your interval, which will make sure Prometheus is always in sync with the Grafana step: $__interval.
sum(increase(some_event_happened{application="example-app"}[$__interval]))
http://docs.grafana.org/reference/templating/
https://www.stroppykitten.com/technical/prometheus-grafana-statistics
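If the increase() queries over the raw counter are what's exhausting the server's memory, one option is to pre-aggregate with a recording rule and integrate the recorded rate in Grafana. A sketch (the recorded metric name is made up); in the Prometheus rules file:

record: job:some_event_happened:rate5m
expr: sum(rate(some_event_happened{application="example-app"}[5m]))

and in the Grafana panel:

avg_over_time(job:some_event_happened:rate5m[$__range]) * $__range_s

The second query estimates the event count as average rate times range length in seconds ($__range_s is another Grafana built-in variable); it trades a little precision for never scanning the raw series at dashboard time.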
I have an existing Beam pipeline that handles data ingested (from a Google Pubsub topic) via 2 routes. The 'hot' path does some basic transformation and stores the data in Datastore, while the 'cold' path performs fixed hourly windowing for deeper analysis before storage.
So far the pipeline had been running fine until I started doing some local buffering on the data before publishing to Pubsub (so data may arrive at Pubsub a few hours 'late'). The error that gets thrown is as below:
java.lang.IllegalArgumentException: Cannot output with timestamp 2018-06-19T14:00:56.862Z. Output timestamps must be no earlier than the timestamp of the current input (2018-06-19T14:01:01.862Z) minus the allowed skew (0 milliseconds). See the DoFn#getAllowedTimestampSkew() Javadoc for details on changing the allowed skew.
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.checkTimestamp(SimpleDoFnRunner.java:463)
at org.apache.beam.runners.core.SimpleDoFnRunner$DoFnProcessContext.outputWithTimestamp(SimpleDoFnRunner.java:429)
at org.apache.beam.sdk.transforms.WithTimestamps$AddTimestampsDoFn.processElement(WithTimestamps.java:138)
It seems to be referencing the section of my code (the WithTimestamps call) that performs the hourly windowing, shown below:
Window<KV<String, Data>> window = Window.<KV<String, Data>>into(FixedWindows.of(Duration.standardHours(1)))
    .triggering(Repeatedly.forever(pastEndOfWindow()))
    .withAllowedLateness(Duration.standardSeconds(10))
    .discardingFiredPanes();

PCollection<KV<String, List<Data>>> keyToDataList = eData
    .apply("Add Event Timestamp", WithTimestamps.of(new EventTimestampFunction()))
    .apply("Windowing", window)
    .apply("Group by Key", GroupByKey.create())
    .apply("Sort by date", ParDo.of(new SortDataFn()));
I'm not sure I understand exactly what I've done wrong here. Is it because the data is arriving late that the error is thrown? As I understand it, if data arrives past the allowed lateness, it should be discarded and not throw an error like the one I'm seeing.
Would setting an unlimited timestampSkew resolve this? The data that's late can be exempt from analysis; I just need to ensure that errors don't get thrown that would choke the pipeline. There's also nowhere else where I'm adding/changing the timestamps for the data, so I'm not sure why the errors are thrown.
It looks like your DoFn is using outputWithTimestamp and you are trying to set a timestamp which is older than the input element's timestamp. Typically, timestamps of output elements are derived from inputs; this is important to ensure the correctness of the watermark computation.
You may be able to work around this by increasing both the timestamp skew and the windowing allowed lateness (see the sketch below); however, some data may be lost, and it is for you to determine whether such loss is acceptable in your scenario.
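For the skew part, the transform already in use accepts it directly. A sketch (note that in later Beam releases the allowed-timestamp-skew knob is deprecated, precisely because elements behind the watermark may be dropped silently instead of rejected loudly):

// Effectively unlimited skew: outputWithTimestamp stops throwing, but
// elements far behind the watermark may be silently dropped downstream.
PCollection<KV<String, Data>> stamped = eData.apply("Add Event Timestamp",
    WithTimestamps.of(new EventTimestampFunction())
        .withAllowedTimestampSkew(Duration.millis(Long.MAX_VALUE)));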
Another alternative is not to use outputWithTimestamp, and instead use the Pubsub message timestamp to process each message. Then output each element as a KV, where the RealTimestamp is computed the same way you are currently processing the timestamp (just don't use it in WithTimestamps), GroupByKey, and write the KVs to Datastore.
Other questions you can ask yourself are:
Why are the input elements associated with a more recent timestamp than the output elements?
Do you really need to buffer that much data before publishing to Pubsub?
How can I get current date / time in Lua embedded in Redis?
I need to have it in the following format - YYYY-MM-DD, HH:MM:SS
I tried os.date(), but Redis does not recognize it.
Redis' Lua sandbox has only a handful of libraries, and os isn't one of them.
You can call the Redis TIME command from Lua like so:
local t = redis.call('TIME')
However, you'll need to find a way to convert the epoch to the desired format, and also note that calling TIME will stop your script from performing any writes (as it is a non-deterministic command).
Update: as of Redis v3.2, there is a new replication mode for scripts that is effect-based (rather than code-based). When using this mode you can actually call all the random, non-deterministic commands. More information is on EVAL's documentation page.
This was already discussed in the comments, but the question deserves a proper answer:
The current time is non-deterministic, i.e. it returns different values on repeated calls. This hurts replication. For this reason, the current time should be passed into your Lua script as a parameter.
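For example, with the Jedis client (a sketch; the key name, script, and connection details are all made up), format the time on the application side and hand it to the script as ARGV[1]:

import java.text.SimpleDateFormat;
import java.util.Collections;
import java.util.Date;
import redis.clients.jedis.Jedis;

public class EvalWithTime {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Format the current time on the client, where os.date would normally run.
            String now = new SimpleDateFormat("yyyy-MM-dd, HH:mm:ss").format(new Date());
            // The script only sees a pre-formatted string, so it stays deterministic.
            String script = "redis.call('SET', KEYS[1], ARGV[1]) return redis.call('GET', KEYS[1])";
            Object reply = jedis.eval(script,
                    Collections.singletonList("last:run"),
                    Collections.singletonList(now));
            System.out.println(reply); // e.g. 2018-06-19, 14:01:01
        }
    }
}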