I want to subtract a number from a duration, but I am not sure how to do it.
A1 : 137:47:00 (formatted as duration)
A2 : 126 (formatted as number)
When I subtract, it shows an unexpected value:
=(A1-A2) = -120.26
I was expecting something close to 11.
Subtracting a dimensionless number from a duration does not really make a lot of sense, but if 137:47:00 represents 137 hours and 47 minutes, then subtracting 126 hours from it would, and would give a result between 11 and 12 hours. To compare like with like, the duration can be represented as a number by exploiting the fact that Google Sheets treats 24 hours as the number 1. So multiply 137:47:00 (if it represents hours, minutes and seconds) by 24 to get a number from which another number can be subtracted to give a meaningful result (i.e. 11.7833333, representing 11 hours 47 minutes, when subtracting 126 hours from 137 hours and 47 minutes). Therefore:
=24*A1-A2
might suit.
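To check the arithmetic: 137:47:00 is 137 + 47/60 = 137.7833 hours, which Sheets stores internally as 137.7833 / 24 ≈ 5.7410 (days). Multiplying by 24 recovers 137.7833, and 137.7833 - 126 = 11.7833, i.e. 11 hours 47 minutes.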
Calculating time worked per day on Web Applications addresses a vaguely similar issue.
I have the following temperature values stored in Prometheus (one sample per minute):
4
7
11
52
97
19
95
89
43
19
. . .
Now I would like to get the average temperature in each 5-minute interval. When I run:
/api/v1/query_range?query=avg_over_time(current_temp[5m])&start=1475483802.739&end=1475498202.739&step=300&_=1475493021942
I get the following data back:
"values":[[1475488602.739,"4"],[1475488902.739,"37.2"],[1475489202.739,"51"],[1475489502.739,"79.6"] . . .
I really cannot relate these values (4, 37.2, 51, 79.6, ...) to any averages of my data. Can someone help me with this?
Thanks
Here are two examples from the Prometheus graphing tool:
Let me answer my own question. With the query I gave here:
/api/v1/query_range?query=avg_over_time(current_temp[5m])&start=1475483802.739&end=1475498202.739&step=300&_=1475493021942
the following happens:
Every 300 seconds (the step parameter), Prometheus takes the samples from the five minutes before that point (every data point you have in that window) and calculates their average. It does this over the time span between 1475483802.739 and 1475498202.739.
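For example, if one of those 5-minute windows happens to contain the samples 7, 11, 52, 97 and 19 from my list above, the value returned for that step is (7 + 11 + 52 + 97 + 19) / 5 = 37.2, which is the second value in the result.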
More information here https://github.com/prometheus/prometheus/issues/2051
I have a list of sporting matches, ordered by time, with result and margin. I want Tableau to keep a running count of the number of matches since the last x (say, since the last draw, where margin = 0).
This means that on every record the running count increases by one, unless that match is a draw, in which case it drops back to zero.
I have not found a method of achieving this. The only way I can see to restart counts is via dates (e.g. a new year).
As an aside, I can easily achieve this by creating a running count tally OUTSIDE of Tableau.
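For reference, this is roughly what that outside-of-Tableau tally looks like; a minimal Python sketch following the rule described above (the margins list is just illustrative):
# Illustrative margins in match order; 0 marks a draw.
margins = [54, 12, 0, 17, 23, 9]

running = []
count = 0
for margin in margins:
    if margin == 0:
        count = 0      # a draw resets the tally to zero
    else:
        count += 1     # otherwise one more match since the last draw
    running.append(count)

print(running)  # [1, 2, 0, 1, 2, 3]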
The interesting thing is that Tableau then doesn't deal with this well when there is more than one result on the same day.
For example, if the structure is:
GameID Date Margin Running count
...
48 01-01-15 54 122
49 08-01-15 12 123
50 08-01-15 0 124
51 08-01-15 17 0
52 08-01-15 23 1
53 15-01-15 9 2
...
Then when trying to plot running count against date, Tableau rearranges the data to show:
GameID Date Margin Running count
...
48 01-01-15 54 122
51 08-01-15 17 0
52 08-01-15 23 1
49 08-01-15 12 123
50 08-01-15 0 124
53 15-01-15 9 2
...
I assume it is doing this because by default it sorts the running count data in ascending order when dates are identical.
I'm working on a thing that turns a number, e.g. 900, into a human-readable date.
I've got it turning 365 into 1 year, 0 months & 0 days.
But how do I turn 365 into 20/3/15?
The Lua standard library os provides the functions time and date for such things, but you can use other libraries as well, e.g. wxLua.
First you need the current time:
local currentTimeInSeconds = os.time()
Then you need to go back in time. Remember 2016 is a leap year! So instead of 365 you have to go 366 days back.
local timeAgo = 366 * 24 * 60 * 60
Then call os.date() to convert the time in seconds to a date
print(os.date("%d/%m/%y", currentTimeInSeconds - timeAgo))
Which will give you the output
20/03/15
Please refer to the Lua 5.0 PIL for more info
local t = os.date("*t", os.time())
t.day = t.day - 900
local ago = os.time(t)
ago is the timestamp of the moment 900 days ago. You can then format the date however you want:
print(os.date("%d/%m/%y", ago))
So it seems, according to this answer, that the OpenCV VideoWriter is not really smart about handling frames (or, well, maybe not suited for the purpose I would like to use it for). According to the answer to that question, you have to time your frames manually, so creating a two-hour-long video will take two hours.
If you want to check, the following script creates a 100 fps VideoWriter and writes 1500 frames to it, which should be exactly 15 seconds long, but ends up being 26 seconds or so.
EDIT: The code was edited to create six videos, with three frame rates intended to be 15 and 30 seconds long. The table at the end of the question was made using this.
import numpy as np
import cv2
import time

# First batch: three videos intended to be 15 seconds long.
for fps in [20, 50, 100]:
    vWriter = cv2.VideoWriter("test" + str(fps) + ".avi", cv2.VideoWriter_fourcc('P', 'I', 'M', '1'), fps, (500, 500), True)
    y = 0
    for x in range(15 * fps):
        img = np.zeros((500, 500, 3)).astype(np.uint8)   # blank 500x500 frame
        cv2.circle(img, (250, int(y)), 5, (255, 255, 255), -1, cv2.LINE_AA)  # moving dot
        y += 500 / 15 / fps
        vWriter.write(img)
    vWriter.release()

# Second batch: three videos intended to be 30 seconds long.
for fps in [20, 50, 100]:
    vWriter = cv2.VideoWriter("test2_" + str(fps) + ".avi", cv2.VideoWriter_fourcc('P', 'I', 'M', '1'), fps, (500, 500), True)
    y = 0
    ts = time.time()
    for x in range(30 * fps):
        img = np.zeros((500, 500, 3)).astype(np.uint8)
        cv2.circle(img, (250, int(y)), 5, (255, 255, 255), -1, cv2.LINE_AA)
        y += 500 / 30 / fps
        vWriter.write(img)
    vWriter.release()
Is there any workaround for this? This manual timing of frames seems really cumbersome. Or if there are no workarounds, any other cross-platform video creation method that you can recommend, that does not suffer from this problem?
I made a little test with different lengths and frame rates: I checked 20, 50 and 100 fps with 15- and 30-second-long videos (intended length, so I generated 15 or 30 times fps frames).
FPS  intended length (s)  actual length (s)
20 15 12
50 15 15
100 15 25
20 30 25
50 30 30
100 30 50
It looks like 50 fps is the one it gets right, but why?
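For anyone reproducing the table, one way to cross-check the actual length of a written file is to read it back with OpenCV; a minimal sketch (the file name is just one of the outputs generated above):
import cv2

# Open one of the generated files and derive its nominal duration
# from the frame rate and frame count stored in the container.
cap = cv2.VideoCapture("test2_100.avi")
fps = cap.get(cv2.CAP_PROP_FPS)
frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
cap.release()

print("fps:", fps, "frames:", frames, "length (s):", frames / fps)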
I have data from an experiment that samples responses at between 59 and 60 Hz. There is no way to predict the drop in sampling rate throughout the experiment, which runs for 18 minutes.
Each of the sampled responses is numbered from 1 to N (the total number of rows), showing the relative passage of time, stored in the variable 'frame'. I also have a Unix timestamp marking absolute time, stored in 'unixtime'. But unixtime is reported as whole integers, not in fractional units. For example:
1376925380 may be repeated 59 times;
1376925381 may be repeated 60 times in the data file.
I would like to create a new variable in SPSS that tracks each consecutive frame (or sampled response) from 1 to 60, or from 1 to 59 as the case may be, for each given unixtime stamp. See the desired re-arrangement below. Any help with the appropriate SPSS syntax is appreciated!
unixtime newframe
1376925380 1
1376925380 2
1376925380 3
1376925380 4
1376925380 5
1376925380 6
....
1376925380 58
1376925380 59
1376925381 1
1376925381 2
1376925381 3
1376925381 4
.... ....
1376925381 60
1376925382 1
1376925382 2
....
If I understand correctly, you can use LAG to figure out your counter between the time stamps. Example below.
*fake data.
set seed 10.
input program.
loop #i = 1 to 100.
loop #j = 1 to TRUNC(RV.UNIFORM(59,61)).
compute unixtime = 1376925379 + #i.
end case.
end loop.
end loop.
end file.
end input program.
*Using lag to calculate newframe variable.
DO IF ($casenum = 1) OR (unixtime <> lag(unixtime)).
compute newframe = 1.
ELSE.
compute newframe = lag(newframe) + 1.
END IF.
exe.
See the related discussion of using LAG at Using sequential case processing for data management in SPSS.