Hello fellows,
I'm new to InfluxDB and I'd like to get the maximum value across several fields of a point.
I think it must be very simple, but since InfluxDB's MAX() function does not take multiple fields, I don't know how to get the result I want.
My measurement looks like this:
{
    time: <timestamp>,
    tags: {
        station: <station-code>,
    },
    fields: {
        e-component: <value>,
        n-component: <value>,
        z-component: <value>,
    }
}
Let's say these are my data:
time                   station   e-component   n-component   z-component
------------------------------------------------------------
2019-08-14T00:20:00Z A -10 10.9 -11.8
2019-08-14T00:10:00Z A -9.9 10.8 -10.8
2019-08-14T00:00:00Z A -9.8 10.7 -10.8
2019-08-13T23:50:00Z A -9.8 10.7 -10.8
2019-08-13T23:40:00Z A -9.7 10.6 -10.4
2019-08-13T23:30:00Z A -9.7 10.5 -10.4
2019-08-13T23:20:00Z A -9.5 10.5 -10.3
2019-08-13T23:10:00Z A -9.4 10.4 -10.3
And I'd like to get a result like this:
time                   station   MAX(ABS(e-component), ABS(n-component), ABS(z-component))
-------------------------------------------
2019-08-14T00:20:00Z A 11.8
2019-08-14T00:10:00Z A 10.8
2019-08-14T00:00:00Z A 10.8
2019-08-13T23:50:00Z A 10.8
2019-08-13T23:40:00Z A 10.6
2019-08-13T23:30:00Z A 10.5
2019-08-13T23:20:00Z A 10.5
2019-08-13T23:10:00Z A 10.4
I've tried several queries, but I couldn't get the desired result.
Any comments are welcome.
SELECT max("$FIELD_NAME") FROM "$DB" WHERE time >= now() - 6h GROUP BY "$TAG"
The query above was copied from Grafana; you could use Grafana for better monitoring.
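For what it's worth, InfluxQL has no row-wise maximum across fields, but if you are on InfluxDB 1.8+ or 2.x you can express this in Flux. A minimal sketch, assuming the measurement is called geomag and the database/retention policy is mydb/autogen (both placeholders for your own names):

import "math"

from(bucket: "mydb/autogen")
  |> range(start: -6h)
  |> filter(fn: (r) => r._measurement == "geomag" and r.station == "A")
  |> pivot(rowKey: ["_time"], columnKey: ["_field"], valueColumn: "_value")
  |> map(fn: (r) => ({ r with max_abs:
        math.mMax(
          x: math.mMax(x: math.abs(x: r["e-component"]), y: math.abs(x: r["n-component"])),
          y: math.abs(x: r["z-component"])) }))
  |> keep(columns: ["_time", "station", "max_abs"])

pivot turns the three fields into columns of one row per timestamp, and math.mMax/math.abs then compute the row-wise maximum of the absolute values.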
I have 1 million items to check with PowerShell. To improve performance, I want to use ForEach-Object -Parallel. For this reason, I have deployed a very powerful VM.
Docs:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/foreach-object?view=powershell-7.2#description
The runspace pool size is specified by the ThrottleLimit parameter.
The default runspace pool size is 5. You can still create a new
runspace for each iteration using the UseNewRunspace switch.
I notice that no matter how high I set the limit, the CPU usage does not increase. Is there a technical maximum limit for this parameter? I have the impression that above a certain value the limit is simply capped.
$oneMillionItems | ForEach-Object -Parallel {
    # Do something ...
} -ThrottleLimit 300
PS > $PSVersionTable
Name Value
---- -----
PSVersion 7.2.1
PSEdition Core
GitCommitId 7.2.1
OS Microsoft Windows 10.0.20348
Platform Win32NT
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0…}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
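I don't know of a documented hard cap on ThrottleLimit, but you can test whether the value is honored by benchmarking a sleep-only workload at different limits; if the limit were silently capped, wall-clock time would stop improving beyond the cap. A minimal sketch (iteration counts and limits chosen arbitrarily):

# Each iteration sleeps 100 ms; with N truly parallel runspaces,
# 1000 iterations should take roughly (1000 / N) * 0.1 seconds.
foreach ($limit in 5, 50, 300) {
    $t = Measure-Command {
        1..1000 | ForEach-Object -Parallel { Start-Sleep -Milliseconds 100 } -ThrottleLimit $limit
    }
    '{0,4} -> {1:n1}s' -f $limit, $t.TotalSeconds
}

Note that sleep- or IO-bound iterations keep CPU usage low no matter how many runspaces run concurrently, which may be why raising the limit does not move the CPU graph.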
Within my current research I'm trying to find out how big the impact of ad-hoc sentiment on daily stock returns is.
The calculations worked quite well and the results are plausible.
The calculations so far, using the quantmod package and Yahoo financial data, look like this:
getSymbols(c("^CDAXX", Symbols), env = myenviron, src = "yahoo",
           from = as.Date("2007-01-02"), to = as.Date("2016-12-30"))
Returns <- eapply(myenviron, function(s) ROC(Ad(s), type="discrete"))
ReturnsDF <- as.data.table(do.call(merge.xts, Returns))
# adjust column names
colnames(ReturnsDF) <- gsub(".Adjusted","",colnames(ReturnsDF))
ReturnsDF <- as.data.table(ReturnsDF)
However, to make it more robust against the noisy influence of penny-stock data, I wonder how it's possible to exclude stocks that at any point in the period fall below a certain value x, let's say 1€.
I guess the best thing would be to exclude them before calculating the returns and merging the xts objects, or even better, before downloading them with the getSymbols command.
Does anybody have an idea how this could work best? Thanks in advance.
Try this:
build a price frame of the Adj. closing prices of your symbols
(I use the PF function of the quantmod add-on package qmao, which has lots of other useful functions for this type of analysis: install.packages("qmao", repos="http://R-Forge.R-project.org"))
check by column if any price is below your minimum trigger price
select only columns which have no closings below the trigger price
To stay more flexible, I would suggest taking a sub-period - let's say no price below 5 during the last 21 trading days. The toy example below may illustrate my point.
I use AAPL, FB and MSFT as the symbol universe.
> symbols <- c('AAPL','MSFT','FB')
> getSymbols(symbols, from='2018-02-01')
[1] "AAPL" "MSFT" "FB"
> prices <- PF(symbols, silent = TRUE)
> prices
AAPL MSFT FB
2018-02-01 167.0987 93.81929 193.09
2018-02-02 159.8483 91.35088 190.28
2018-02-05 155.8546 87.58855 181.26
2018-02-06 162.3680 90.90299 185.31
2018-02-07 158.8922 89.19102 180.18
2018-02-08 154.5200 84.61253 171.58
2018-02-09 156.4100 87.76771 176.11
2018-02-12 162.7100 88.71327 176.41
2018-02-13 164.3400 89.41000 173.15
2018-02-14 167.3700 90.81000 179.52
2018-02-15 172.9900 92.66000 179.96
2018-02-16 172.4300 92.00000 177.36
2018-02-20 171.8500 92.72000 176.01
2018-02-21 171.0700 91.49000 177.91
2018-02-22 172.5000 91.73000 178.99
2018-02-23 175.5000 94.06000 183.29
2018-02-26 178.9700 95.42000 184.93
2018-02-27 178.3900 94.20000 181.46
2018-02-28 178.1200 93.77000 178.32
2018-03-01 175.0000 92.85000 175.94
2018-03-02 176.2100 93.05000 176.62
Let's assume you would like any instrument that traded below 175.40 during the last 6 trading days to be excluded from your analysis :-).
As you can see, that will exclude AAPL and MSFT.
apply and the base function any, applied(!) to a 6-day subset of prices, will give us exactly what we want. Here are the last 3 days of prices, excluding the instruments which did not meet our condition:
> tail(prices[,apply(tail(prices),2, function(x) any(x < 175.4)) == FALSE],3)
FB
2018-02-28 178.32
2018-03-01 175.94
2018-03-02 176.62
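Applied back to the original question, a minimal sketch without qmao (assuming myenviron has already been populated by the getSymbols call above) that drops any symbol whose adjusted close ever falls below 1 before the returns are computed:

library(quantmod)

# one xts column of adjusted closes per symbol
prices <- do.call(merge.xts, eapply(myenviron, Ad))

# keep only symbols that never closed below the trigger price
keep <- apply(prices, 2, function(x) all(x >= 1, na.rm = TRUE))

Returns <- ROC(prices[, keep], type = "discrete")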
I wrote a python program to enter historical data into influxdb.
Everything seems to be OK, but I am not sure whether the time field is correct. The time is supposed to be YYYY,MM,DD,HH,MM as integers.
This is an example of the JSON that I am sending to InfluxDB:
[{'fields': {'High': 72.06, 'Close': 72.01, 'Volume': 6348, 'Open': 72.01, 'Low': 72.01}, 'tags': {'country': 'US', 'symbol': 'AAXJ', 'type': 'ETF', 'exchange': 'NASDAQ'}, 'time': datetime.datetime(2017, 9, 7, 15, 35), 'measurement': 'quote'}]
However, when I query the data, I get a strange number for the time like this:
time Close High Low Open Volume country exchange symbol type
---- ----- ---- --- ---- ------ ------- -------- ------ ----
1504798500000000000 144.46 144.47 144.06 144.1 112200 US NYSE IBM STOCK
It seems like either the JSON time format is wrong, or the number displayed by the query is an encoded date representation?
I found the answer here.
Format the output by entering the following command in the CLI:
precision rfc3339
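For context, that "strange number" is not an error: it is the timestamp expressed as nanoseconds since the Unix epoch, which is the CLI's default display precision. You can verify it client-side:

from datetime import datetime, timezone

ns = 1504798500000000000  # the time value from the query output above
print(datetime.fromtimestamp(ns // 10**9, tz=timezone.utc))
# 2017-09-07 15:35:00+00:00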
I'm new to F# and I'm coding little challenges to learn the nitty-gritty details about the language. I think I have a problem because of immutability.
Scenario:
I have to read eight lines from the console; each line contains one integer. That integer represents the size of a mountain.
After reading the input I need to write the line number of the highest mountain.
If the index given is that of the highest mountain, then its size is set to zero; otherwise I lose.
Repeat the scenario until all mountains have their size set to zero.
Here is the code I wrote:
open System

type Mountain = {Id:int; Height:int}

let readlineInt() = int(Console.In.ReadLine())

let readMountainData id = {Id = id; Height = readlineInt()}

let readAllMountainsData = [ for a in 0 .. 7 do yield readMountainData a ]

let rec mainLoop () =
    let mountains = readAllMountainsData
    let highestMountain = mountains |> List.maxBy (fun x -> x.Height)
    printfn "%i" highestMountain.Id
    mainLoop()

mainLoop()
This code goes into an infinite loop. I believe it's because the
let readlineInt() = int(Console.In.ReadLine())
is immutable, so the value is set once and the program never stops to read a line again. I tried to put the 'mutable' keyword on
let mutable readAllMountainsData = [ for a in 0 .. 7 do yield readMountainData a ]
But it didn't change a thing.
Do you have any idea?
Edit:
I know that this code is going into an infinite loop, because I added logging into the main loop as follows:
let rec mainLoop () =
    let mountains = readAllMountainsData
    Console.Error.WriteLine("Mountain Count:{0} ", mountains.Length)
    mountains |> List.iter (fun x -> Console.Error.WriteLine("Mountain Id:{0} Height:{1}", x.Id, x.Height))
    let highestMountain = mountains |> List.maxBy (fun x -> x.Height)
    printfn "%i" highestMountain.Id
    mainLoop()
Then I have this in the output:
Standard Error Stream:
Mountain Count:8
Mountain Id:0 Height:9
Mountain Id:1 Height:8
Mountain Id:2 Height:7
Mountain Id:3 Height:6
Mountain Id:4 Height:5
Mountain Id:5 Height:4
Mountain Id:6 Height:3
Mountain Id:7 Height:2
Mountain Count:8
Mountain Id:0 Height:9
Mountain Id:1 Height:8
Mountain Id:2 Height:7
Mountain Id:3 Height:6
Mountain Id:4 Height:5
Mountain Id:5 Height:4
Mountain Id:6 Height:3
Mountain Id:7 Height:2
Mountain Count:8
Mountain Id:0 Height:9
Mountain Id:1 Height:8
Mountain Id:2 Height:7
etc...
Why do I want to re-read the values? Because the values are provided by an external source. The workflow is as follows:
Loop one:
I read 8 values for the height of the mountains in the console
I output the value of the highest mountain
Loop two:
I read 8 values for the height of the mountains in the console
I output the value of the highest mountain
Loop three:
I read 8 values for the height of the mountains in the console
I output the value of the highest mountain
etc
let readlineInt () = ... defines a function. Its body will be executed every time you call it, and in this case the body has a side effect (reading from stdin) that runs on every call. So that's not your problem.
readAllMountainsData is defined to be a list containing the data of eight mountains. Each of those mountains will have its own height (because readlineInt() is called once for each mountain). This list is calculated once and does not change after that. It is not re-calculated every time you use readAllMountainsData, because it is a variable, not a function (even though the name might suggest otherwise). That seems perfectly sensible, as re-reading the mountain data every time would make no sense.
Adding the mutable keyword to the definition allows you to re-assign the variable. That is, it allows you to write readAllMountainsData <- someNewValue later in the program to change the variable's value. Since you never actually do that, nothing changes.
The reason that your program loops infinitely is that mainLoop always calls itself again. It has no exit condition. So to fix that you should decide how often you want to loop / under which condition you want to exit, and then implement that logic accordingly.
In your edit you clarified that you do want to re-read your values, so you simply need to make readAllMountainsData a function by giving it a parameter list (let readAllMountainsData () = ...) and then call it as a function. This way you'll get new data on each iteration, but the loop will still be infinite unless you add an exit condition.
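Putting that together, a minimal sketch of the fix (the loop is still infinite, exactly as in the original; add your own exit condition for a real program):

open System

type Mountain = {Id:int; Height:int}

let readlineInt() = int(Console.In.ReadLine())

let readMountainData id = {Id = id; Height = readlineInt()}

// now a function: the eight lines are re-read on every call
let readAllMountainsData () = [ for a in 0 .. 7 -> readMountainData a ]

let rec mainLoop () =
    let mountains = readAllMountainsData ()
    let highestMountain = mountains |> List.maxBy (fun x -> x.Height)
    printfn "%i" highestMountain.Id
    mainLoop()

mainLoop()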
I have a "raw" data set that I´m trying to clean. The data set consists of individuals with the variable age between year 2000 and 2010. There are around 20000 individuals in the data set with the same problem.
The variable age is not increasing in the years 2004-2006. For example, for one individual it looks like this:
2000: 16,
2001: 17,
2002: 18,
2003: 19,
2004: 19,
2005: 19,
2006: 19,
2007: 23,
2008: 24,
2009: 25,
2010: 26,
So far I have tried to generate variables for the max age and max year:
bysort id: egen last_year=max(year)
bysort id: egen last_age=max(age)
and then use foreach combined with lags to replace the age variable in decreasing order, so that the new variable last_age (which is now 26 in all years) instead looks like this:
2010: 26
2009: 25 (26-1)
2008: 24 (26-2), and so on.
However, I am having trouble finding the correct code for this.
Assuming that for each individual the first value of age is not missing and is correct, something like this might work
bysort id (year): replace age = age[1]+(year-year[1])
Alternatively, if the last value of age is assumed to always be accurate,
bysort id (year): replace age = age[_N]-(year[_N]-year)
Or, just fix the ages where there is no observation-to-observation change in age
bysort id (year): replace age = age[_n-1]+(year-year[_n-1]) if _n>1 & age==age[_n-1]
In the absence of sample data none of these have been tested.
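For what it's worth, the first approach checks out against the example data from the question. A quick sketch (the id value 1 is made up):

clear
input id year age
1 2000 16
1 2001 17
1 2002 18
1 2003 19
1 2004 19
1 2005 19
1 2006 19
1 2007 23
1 2008 24
1 2009 25
1 2010 26
end

bysort id (year): replace age = age[1] + (year - year[1])
list year age, noobs   // ages now rise by 1 per year, from 16 to 26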
William's code is very much to the point, but a few extra remarks won't fit easily into a comment.
Suppose we have age already and generate two other estimates going forward and backward as he suggests:
bysort id (year): gen age2 = age[1] + (year - year[1])
bysort id (year): gen age3 = age[_N] - (year[_N] - year)
Now if all three agree, we are good, and if two out of three agree, we will probably use the majority vote. Either way, that is the median; for 3 values, the median is the sum MINUS the maximum MINUS the minimum.
gen median = (age + age2 + age3) - max(age, age2, age3) - min(age, age2, age3)
If we get three different estimates, we should look more carefully.
edit age* if max(age, age2, age3) > median & median > min(age, age2, age3)
A final test is whether medians increase in the same way as years:
bysort id (year) : assert (median - median[_n-1]) == (year - year[_n-1]) if _n > 1