From what I can gather, the DebuggerDisplayAttribute cannot be applied to individual cases of a discriminated union, only to the type as a whole.
The corresponding documentation suggests overriding the ToString() method as an alternative.
Take the following example:
type Target =
    | Output of int
    | Bot of int

    override this.ToString () =
        match this with
        | Output idx -> $"output #{idx}"
        | Bot idx -> $"bot #{idx}"

[<EntryPoint>]
let main _ =
    let test = Bot 15
    0
When breaking on the return from main and placing a watch on test, the VS2019 debugger is showing Bot 15 rather than bot #15.
The documentation also suggests that:
Whether the debugger evaluates this implicit ToString() call depends
on a user setting in the Tools / Options / Debugging dialog box.
I cannot figure out what user setting it is referring to.
Is this not available in VS2019 or am I just missing the point?
The main problem here is that the F# compiler silently emits a DebuggerDisplay attribute to override the default behavior described in the documentation you're looking at. So overriding ToString alone is not going to change what the debugger displays when debugging an F# program.
F# uses this attribute to implement its own plain text formatting. You can control this format by using StructuredFormatDisplay to call ToString instead:
[<StructuredFormatDisplay("{DisplayString}")>]
type Target =
    | Output of int
    | Bot of int

    override this.ToString () =
        match this with
        | Output idx -> $"output #{idx}"
        | Bot idx -> $"bot #{idx}"

    member this.DisplayString = this.ToString()
If you do this, the Visual Studio debugger will display "bot #15", as you desire.
Another option is to explicitly use DebuggerDisplay yourself at the top level, as you mentioned:
[<System.Diagnostics.DebuggerDisplay("{ToString()}")>]
type Target =
    | Output of int
    | Bot of int

    override this.ToString () =
        match this with
        | Output idx -> $"output #{idx}"
        | Bot idx -> $"bot #{idx}"
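With either approach, you can sanity-check the ToString override itself outside the debugger; a minimal sketch:

let test = Bot 15
printfn "%s" (test.ToString())   // prints: bot #15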
FWIW, I think the direct answer to your question about the Tools / Options / Debugging setting is "Show raw structure of objects in variables windows". However, this setting isn't really relevant to the problem you're trying to solve.
I have a set of logs like:
Log10:[requestId=2][taskId=C][message='End']
Log9: [requestId=2][taskId=C][message='Start']
Log8: [requestId=2][taskId=B][message='End']
Log7: [requestId=1][taskId=B][message='End']
Log6: [requestId=1][taskId=B][message='Start']
Log5: [requestId=1][taskId=A][message='End']
Log4: [requestId=2][taskId=B][message='Start']
Log3: [requestId=2][taskId=A][message='End']
Log2: [requestId=2][taskId=A][message='Start']
Log1: [requestId=1][taskId=A][message='Start']
First, I wanted to calculate the average time each task takes to complete. I was able to do that with transactionize:
* | concat(requestId,":",taskId) as transactionKey | transactionize transactionKey avg(_group_duration) group by taskId
Now I'd like to know how much time (on average) passes between one task finishing and the next one starting within the same request.
In this concrete example, my desired output would be:
((Log9 - Log8) + (Log4 - Log3) + (Log6 - Log5)) / 3
Any clue is appreciated.
Thanks to @chadoliver, who pointed me to the diff operator.
* | keyvalue auto | diff _messagetime by requestId | where message = "End" | avg(_diff) | ceil(_avg)
You may use regex, avg and group by functions to get aggregate results.
_sourceCategory="dev/test-app"
and "[Error]"
and "Error occurred"
| formatDate(_receiptTime, "yyyy-MM-dd") as date
| parse regex field=_raw "Error occurred. Exception:(?<message> \w.*)" nodrop
| replace(message,/my custom error message: ([0-9A-Fa-f\-]{36})/,"my custom error message") as replaceMessage
| parse regex field=_raw "\[Error](?<otherMessage> \w.*)" nodrop
| if (replaceMessage = "", otherMessage, replaceMessage ) as consolidatedMessage
| if (length(consolidatedMessage)> 150,substring(consolidatedMessage,0, 150),consolidatedMessage) as finalMessage
| count date, finalMessage
| transpose row data column finalMessage
https://www.youtube.com/watch?v=Nxzp7G-rUh8
Is there a good way to write a query in influxdb that will show you a change of state from the previous value? I am looking to query my database for times of where a server has turned off.
For example if I had the following database:
Time | Server_1_ON | Server_2_ON
-------------------------------------------------
2019-08-18T14:43:00Z | True | True
2019-08-18T14:43:05Z | True | True
2019-08-18T14:43:10Z | True | False
2019-08-18T14:43:15Z | True | False
2019-08-18T14:43:20Z | True | False
2019-08-18T14:43:25Z | True | True
2019-08-18T14:43:30Z | True | True
2019-08-18T14:43:35Z | True | False
I would want to be able to detect that server 2 had turned off twice, and return the two rows
2019-08-18T14:43:10Z | True | False
2019-08-18T14:43:35Z | True | False
I could achieve the same results by writing a query to
SELECT * WHERE "Server_2_ON" = False
and then filtering out duplicate results. But this is a multi-step process.
If this is not easily possible in influxdb, is there another database that is more suited to this style of query?
If your measurements were integer (1 to represent ON / 0 for OFF) instead of boolean, you could use the difference function.
To select any change in either measurement:
WHERE (DIFFERENCE(Server_1_ON) != 0
OR DIFFERENCE(Server_2_ON) != 0)
To select a change from on to off in either measurement:
WHERE (DIFFERENCE(Server_1_ON) = -1
OR DIFFERENCE(Server_2_ON) = -1)
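Since DIFFERENCE() is an InfluxQL function and belongs in the SELECT clause rather than in WHERE, a complete query would use a subquery. A minimal sketch, assuming the values are stored as integers in a hypothetical measurement named server_status:

-- -1 marks a transition from on (1) to off (0); "server_status" is a hypothetical measurement name
SELECT * FROM (
    SELECT DIFFERENCE("Server_2_ON") AS "s2_diff" FROM "server_status"
) WHERE "s2_diff" = -1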
Note that in InfluxDB v1.x it is not possible to cast from Boolean to Integer, so for this to work you will need to change the stored data type to int. See "Can I change a field's data type?" in the InfluxDB FAQ:
"There is no way to cast a float or integer to a string or Boolean (or vice versa). The simplest workaround is to begin writing the new data type to a different field in the same series."
In InfluxDB v2.0 (still in alpha) it is possible to cast Boolean to Int (see the int() function in Flux).
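For instance, a rough Flux sketch of the whole query (the bucket name "servers" and the time range are assumptions for illustration):

// int(v:) turns true/false into 1/0, so difference() can detect the on-to-off edge
from(bucket: "servers")
    |> range(start: -1h)
    |> filter(fn: (r) => r._field == "Server_2_ON")
    |> map(fn: (r) => ({ r with _value: int(v: r._value) }))
    |> difference()
    |> filter(fn: (r) => r._value == -1)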
(I have just started to investigate InfluxDB. I don't like it so far, but it seems that's just me: according to this article on DZone it's currently the No. 1 time series database.)
I've inherited a binary file format with the following specification:
| F | E | D | C | B | A | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
0:| Status bit | ------ 15-bit unsigned integer -----------
1:| Status bit | ---- uint:10 ---- | ---- uint:5 ----
Bit matching in Erlang is awesome. So I'd love to do something like this:
<<StatBit1:1, ValA:15/unsigned>> = <<2#1000000000101010:16>>.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = <<2#0000001010100111:16>>.
The problem is that the file I need to process is saved in little-endian convention (least-significant byte first). So the very first 8 bits of the file in the example above would be 00101010, then 10000000, etc.
{ok, S} = file:open("datafile", [read, binary, raw]).
{ok, <<Byte1:8, Byte2:8, Byte3:8, Byte4:8>>} = file:read(S,4).
io:format(
    " ~8.2.0B | ~8.2.0B | ~8.2.0B | ~8.2.0B ~n ",
    [Byte1, Byte2, Byte3, Byte4]).
# 00101010 | 10000000 | 10100111 | 00000010
# ok
So I resort to reading and swapping the bytes:
<<StatBit1:1, ValA:15/unsigned>> = <<Byte2:8, Byte1:8>>.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = <<Byte4:8, Byte3:8>>.
Alternatively I can read 16 bit little-endian and then "parse" it:
{ok, S} = file:open("datafile", [read, binary, raw]).
{ok, <<DW1:16/little, DW2:16/little>>} = file:read(S,4).
<<StatBit1:1, ValA:15/unsigned>> = <<DW1:16>>.
<<StatBit2:1, ValB:10/unsigned, ValC:5/unsigned>> = <<DW2:16>>.
Both solutions make me equally frustrated. I still suspect that there is a nicer way of dealing with this type of situation. Is there?
I'd first look into changing the application generating these files to write the data in network (big-endian) order. If that's not possible, then you're stuck with byte swapping like you're already doing. You could wrap the swapping into a function to keep it out of your decoding logic:
byteswap16(F) ->
    case file:read(F, 2) of
        {ok, <<B1:8,B2:8>>} -> {ok, <<B2:8,B1:8>>};
        Else -> Else
    end.
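For example, decoding the first record through this wrapper could look like the following sketch (read_first_record/1 is a hypothetical helper; S is the raw file handle from your example):

%% sketch: read one byte-swapped 16-bit word and match it as big-endian
read_first_record(S) ->
    {ok, Word} = byteswap16(S),
    <<StatBit1:1, ValA:15/unsigned>> = Word,
    {StatBit1, ValA}.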
Alternatively, perhaps you could preprocess the file. You mentioned in your comment that the files are huge, so maybe this isn't practical for your case, but if each file fits comfortably in memory you could use file:read_file/1 to read the whole file and then preprocess the contents using a binary comprehension:
byteswap16(Filename) ->
    {ok, Bin} = file:read_file(Filename),
    << <<B2:8,B1:8>> || <<B1:8,B2:8>> <= Bin >>.
Both these solutions assume the entire file is written in 16-bit little endian format.
As an explanation of why the binary syntax (as it is) can't solve your problem, consider that the bits in your file really are in order 7, ..., 0, F, E, ..., 8. The status bit is in F, but if you say "the next field is 15 bits long, and is a little-endian unsigned integer", you'll get bits 7, ..., 0, F, E, ..., 9 (the next 15 bits), which will then be interpreted as little-endian. You can't express the fact that you'd like to skip bit F and use E-8 instead, and then go back and pick up bit F for the status. If you could byte swap the file first, e.g. with "dd if=infile of=outfile conv=swab", you'd make your life a whole lot easier.
Did you try something like:
[edit] made some corrections, but I can't test this on my tablet.
decode(<<A:8, 1:1, B:7>>) -> {status1, B*256+A};
decode(<<A:3, C:5, 0:1, B:7>>) -> {status2, B*8+A, C}.
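For instance, calling these clauses with the two example words from the question, written in file (little-endian) byte order, should give:

%% 2#1000000000101010 is stored as bytes 00101010, 10000000
decode(<<2#00101010, 2#10000000>>).   %% -> {status1, 42}
%% 2#0000001010100111 is stored as bytes 10100111, 00000010
decode(<<2#10100111, 2#00000010>>).   %% -> {status2, 21, 7}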
I've been writing some F# now for about 6 months and I've come across some behavior that I can't explain. I have some boiled down code below. (value names have been changed to protect the innocent!)
I have a hierarchy defined using record types rec1 and rec2, and also a discriminated union type with cases Case1 and Case2. I'm calling a function ('mynewfunc') that takes a du_rec option. Internally this function defines a recursive function that processes the hierarchy.
I'm kicking off the processing by passing the None option value to represent the root of the hierarchy (in reality, this function is deserializing the hierarchy from a file).
When I run the code below I hit the failwith "invalid parent" line of code. I cannot understand why, because the None value that is passed down should match the outer pattern match's None case.
The code works if I delete either of the sets of comments. This is not a showstopper for me - I just feel a bit uncomfortable not knowing why it is happening (I thought I understood F#).
Thanks in advance for any replies
James
type rec2 =
    {
        name : string
        child : rec1 option
    }
and rec1 =
    {
        name : string ;
        child : rec2 option
    }
and du_rec =
    | Case1 of rec1
    | Case2 of rec2

let mynewfunc (arg:du_rec option) =
    let rec funca (parent:du_rec option) =
        match parent with
        | Some(node) ->
            match node with
            | Case2(nd) ->
                printfn "hello"
            (* | Case1(nd) ->
                printfn "bye bye" *)
            | _ ->
                failwith "invalid parent"
        | None ->
            // printfn "case3"
            ()
        funcb( None )
    and funcb (parent: du_rec option) =
        printfn "this made no difference"
    let node = funca(arg)
    ()

let rootAnnot = mynewfunc(None)
Based on the comments, this is just a bad experience in the debugger (where the highlighting suggests that the control flow is going places it is not); the code does what you expect.
(There are a number of places where the F# compiler could improve its sequence-points generated into the pdbs, to improve the debugging experience; I think we'll be looking at this in a future release.)
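A quick way to convince yourself without the debugger (a minimal sketch, reusing the definitions above) is to rely on the exception itself rather than on the highlighting:

// if the wildcard branch really executed, this call would throw;
// instead it prints "this made no difference" and carries on
mynewfunc None
printfn "no exception was raised"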