Is there a way to filter based on historical data?
For example: "Show me all objects who had "Attribute_X" == True on 01/01/2013"
As Steve stated, this would require an advanced DXL script.
I'm not sure about creating a filter for this, but I might be able to help with identifying the objects you are looking for. Having recently solved a similar task, I recommend starting with Tony Goodman's really excellent Smart History Viewer (this code could serve as a DXL tutorial!), which has almost all the code you need. You just need to find and understand it.
Let me elaborate. Besides other nifty stuff, the history viewer basically does the following:
For all (selected) baselines, explicitly including the un-baselined current version, it gathers all module changes and puts them into two-dimensional Skip lists, one each for module, object, and session changes. Focus on the object changes.
There is an unused function printObjectHistory in the code which helps in understanding the data structures. Have a look at the inner loop:
for hist in skipHistory do
Inside this loop, consider only changes which happened before 01/01/2013 (check hist->HIST_DATE to obtain this information). The history viewer code has already classified the detected changes, so you want to watch out for changes which contain the string "Modify Attribute: Attribute_X". Assign the new value to a buffer. Outside this loop, check whether the buffer contains "True". If so, this is one of the objects you wanted to find.
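To make that concrete, here is the selection logic as a minimal sketch, written in Java purely for readability (real DXL looks different; ChangeRecord and all of its field names are hypothetical stand-ins for the Smart History Viewer's skip-list entries):

import java.time.LocalDate;
import java.util.List;

// Hypothetical stand-in for one entry of the history viewer's skip list.
class ChangeRecord {
    LocalDate date;       // corresponds to hist->HIST_DATE
    String description;   // e.g. "Modify Attribute: Attribute_X"
    String newValue;      // the value the attribute was changed to
}

class HistoryFilter {
    // Assumes the history entries are in chronological order.
    static boolean hadAttributeTrueOn(List<ChangeRecord> history, LocalDate cutoff) {
        String buffer = null;
        for (ChangeRecord hist : history) {         // mirrors "for hist in skipHistory do"
            if (!hist.date.isBefore(cutoff)) {
                continue;                           // only changes before 01/01/2013
            }
            if (hist.description.contains("Modify Attribute: Attribute_X")) {
                buffer = hist.newValue;             // keep the most recent value
            }
        }
        return "True".equals(buffer);               // check the buffer after the loop
    }
}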
I'm trying to make a game on Scratch that will use a feature to generate a special code; when that code is input into a certain area, it will load the stats that were there when the code was generated. I've run into a problem, however: I don't know how to make it, and I couldn't find a clear-cut answer for how to do so.
I would prefer that the solution be:
Able to save information for as long as needed (from 1 second to however long until it's input again).
Doesn't take too many blocks to make, so that the project won't take forever to load.
Of course, I'm willing to take any solution in order to get my game up and running; those are just preferences.
You can put all of the programs in a custom block with "Run without screen refresh" on so that the program runs instantly.
If you save the stats using variables, you could combine those variable values into one string, separated by "/"s, i.e. join ([highscore]) (join (/) (join ([kills]) (/))), so a highscore of 120 and 7 kills becomes "120/7/".
NOTE: Don't add any "/" in your stats, you can probably guess why.
Now "bear" (pun) with me, this is going to take a while to read
Then you need the variables:
[read] for reading the inputted code
[input] for storing the numbers
Then you could make another function that reads the code like so: letter ([read]) of (code), and stores that information in the [input] variable like this: set [input] to (letter ([read]) of (code)). Then change [read] by (1) so the function can read the next character of the code. Once letter ([read]) of (code) equals "/", this tells the program to set [*stat variable*] to (input) (in our example, this would be [highscore], since it was the first variable we saved) and set [input] to (0), and repeat until all of the stat variables are filled (in this case, it repeats 2 times because we saved two variables: [highscore] and [kills]).
This is the least amount of code that it takes; jumbling it up takes more. I will later edit this answer with a screenshot showcasing what I just described, hopefully clearing up the mess of words above.
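In the meantime, here is the same encode/decode algorithm as a minimal Java sketch (Scratch has no text syntax, so this is only an illustration of the logic; the names mirror the variables above, and it assumes the code is well-formed):

class SaveCode {
    // Encode: join each stat followed by "/", e.g. 120 and 7 -> "120/7/".
    static String encode(int highscore, int kills) {
        return highscore + "/" + kills + "/";
    }

    // Decode: read character by character into "input" until a "/",
    // then assign the collected token to the next stat variable.
    static int[] decode(String code) {
        int[] stats = new int[2];              // [0] = highscore, [1] = kills
        int statIndex = 0;
        StringBuilder input = new StringBuilder();
        for (int read = 0; read < code.length(); read++) {  // mirrors the [read] variable
            char c = code.charAt(read);
            if (c == '/') {
                stats[statIndex++] = Integer.parseInt(input.toString());
                input.setLength(0);            // set [input] to (0)
            } else {
                input.append(c);
            }
        }
        return stats;
    }
}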
The technique you mentioned is used in many Scratch games, but there are two options for you when making the save/load system. You can either do it the simpler way, which makes the code SUPER long (not joking), or the way most Scratchers use: encoding the data into a string as short as possible so it's easy to transfer.
If you want to go the second way, have a look at griffpatch's video on the Mario platformer remake, where he used an encode system to save levels: https://www.youtube.com/watch?v=IRtlrBnX-dY. The tip is to encode your data (maybe score / item names / progress) into numbers and letters, for example converting repeated letters to a shorter string which the game can still decode and read without errors.
If you are worried it will take too long to load, I am pretty sure it won't be a problem unless you save a really big load of data. The common compression methods used by everyone work pretty well. If you want more data stored, you may have to think of some other method; there is no single best way, as different data works best with different encodings. Good luck.
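For illustration, the "repeated letters" trick mentioned above is essentially run-length encoding; here is a minimal Java sketch of the idea (the method names are made up, and it assumes the encoded text contains no digits):

class RunLength {
    // "aaab" -> "a3b1"
    static String encode(String s) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < s.length(); ) {
            char c = s.charAt(i);
            int run = 0;
            while (i < s.length() && s.charAt(i) == c) { run++; i++; }
            out.append(c).append(run);
        }
        return out.toString();
    }

    // "a3b1" -> "aaab"
    static String decode(String s) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < s.length()) {
            char c = s.charAt(i++);
            int run = 0;
            while (i < s.length() && Character.isDigit(s.charAt(i))) {
                run = run * 10 + (s.charAt(i++) - '0');
            }
            for (int k = 0; k < run; k++) out.append(c);
        }
        return out.toString();
    }
}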
Apologies: in trying to be concise and clear, my previous description of my question turned into a special case of the general case I'm trying to solve.
New Description
I'm trying to compare the last emitted value of an aggregation function (let's say Sum()) with each element that I aggregate over in the current window.
Worth noting that the ideal (I think) solution would include:
The T2(from t-1) element used at time = t is the one that was created during the previous window.
I've been playing with several ideas/experiments, but I'm struggling to find a way to accomplish this that is elegant and "empathetic" to Beam's compute model (which I'm still trying to fully grok after many an article/blog/doc and book :)
Side inputs seem unwieldy because it looks like I have to shift the emitted 5M#T-1 aggregation's timestamp into 5M#T's window in order to align it with the current 5M window.
In attempting this with side inputs (as I understand them), I ended up with some nasty code that was quite "circularly referential", but not in an elegant recursive way :)
Any help in the right direction would be much appreciated.
Edit:
Modified diagram and improved description to more clearly show:
the intent of using emitted T2(from t-1) to calculate T2 at t
that the desired T2(from t-1) used to calculate T2 is the one with the correct key
Instead of modifying the timestamp of records that are materialized so that they appear in the current window, you should supply a window mapping fn which just maps the current window onto the past one.
You'll want to create a custom WindowFn which implements the window mapping behavior that you want, paying special attention to overriding the getDefaultWindowMappingFn function.
Your pipeline would look like:
PCollection<T> mySource = /* data */;
PCollectionView<SumT> view = mySource
    .apply(Window.into(myCustomWindowFnWithNewWindowMappingFn))
    .apply(Combine.globally(myCombiner).asSingletonView());
mySource.apply(ParDo.of(/* DoFn that consumes side input */).withSideInputs(view));
Pay special attention to the default value the combiner will produce since this will be the default value when the view has had no data emitted to it.
Also, the easiest way to write your own custom window function is to copy an existing one.
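For example, here is a rough, untested sketch of such a custom WindowFn, made by copying the shape of FixedWindows as suggested (the class name and all details are my assumptions, not code shipped with Beam):

import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.transforms.windowing.IntervalWindow;
import org.apache.beam.sdk.transforms.windowing.PartitioningWindowFn;
import org.apache.beam.sdk.transforms.windowing.WindowFn;
import org.apache.beam.sdk.transforms.windowing.WindowMappingFn;
import org.joda.time.Duration;
import org.joda.time.Instant;

// Fixed windows whose side-input lookup resolves to the window
// immediately before the main input's window.
public class PreviousFixedWindows extends PartitioningWindowFn<Object, IntervalWindow> {
  private final Duration size;

  public PreviousFixedWindows(Duration size) {
    this.size = size;
  }

  @Override
  public IntervalWindow assignWindow(Instant timestamp) {
    // Same bucketing as FixedWindows (no offset support in this sketch).
    long start = timestamp.getMillis() - Math.floorMod(timestamp.getMillis(), size.getMillis());
    return new IntervalWindow(new Instant(start), size);
  }

  @Override
  public boolean isCompatible(WindowFn<?, ?> other) {
    return other instanceof PreviousFixedWindows
        && ((PreviousFixedWindows) other).size.equals(size);
  }

  @Override
  public Coder<IntervalWindow> windowCoder() {
    return IntervalWindow.getCoder();
  }

  @Override
  public WindowMappingFn<IntervalWindow> getDefaultWindowMappingFn() {
    return new WindowMappingFn<IntervalWindow>() {
      @Override
      public IntervalWindow getSideInputWindow(BoundedWindow mainWindow) {
        IntervalWindow w = (IntervalWindow) mainWindow;
        // Map window [t, t+size) onto [t-size, t): the previous window's aggregate.
        return new IntervalWindow(w.start().minus(size), w.start());
      }
    };
  }
}

On the side-input branch, Window.into(new PreviousFixedWindows(Duration.standardMinutes(5))) would then make each 5M window's side-input lookup resolve to the previous window's aggregate.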
I am using the ELKI MiniGUI to run LOF. I have found out how to normalize the data before running via -dbc.filter, but I would like to look at the original data records, not the normalized ones, in the output.
It seems that there is some flag called -normUndo, which can be set if using the command-line, but I cannot figure out how to use it in the MiniGUI.
This functionality used to exist in ELKI, but has effectively been removed (for now), for several reasons:
Only a few normalizations ever supported it; most would fail.
There is no longer a well-defined "end" of the pipeline with the visualization: some users will want to visualize the normalized data, others not.
It requires carrying the normalization information along, which makes the data structures more complex (although the hierarchical approach we have now would allow this again).
Due to the numerical imprecision of floating-point math, you would frequently not get out exactly the same values you put in.
Keeping the original data in memory may be too expensive for some use cases, so we would need to add another parameter, "keep non-normalized data"; furthermore, you would need to choose which version (normalized or non-normalized) to use for analysis, and which for visualization. This would not be hard with a full-blown GUI, but you are looking at a command-line interface. (This would be easy to do in Java, too...)
We would of course appreciate patches that contribute such functionality to ELKI.
The easiest way is this: add a (non-numerical) label column, and you can identify the original objects in your original data by this label.
I'm trying to use a Butterworth filter. The input data comes from an "Index Array" function (the data is acquired through DAQ, and I want to process the voltage signal, which is in an array of waveforms). When I use this filter in a case structure, it doesn't work; yet when I use the filters in the "waveform conditioning" section, there is no problem. What exactly is the difference between these two types of filters?
A little add-on to my problem: the second picture is from when I tried to reassemble the initial combination, and the error happened.
You are comparing offline filtering to online filtering.
In LabVIEW, the PtByPt VIs are intended to be used in an online setting, that is, iteratively.
For each new sample that is obtained, it would be input directly into the VI. The VI stores the states of the previous iterations to perform the filtering.
The "normal" filter VIs are intended for offline analysis and expects an array containing the full data of the signal.
The following whitepaper explains the Point-by-Point VIs. Note that this paper is quite old, so it should explain the concepts but might be otherwise outdated:
http://www.ni.com/pdf/manuals/370152b.pdf
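LabVIEW is graphical, but the same distinction can be sketched in text. Here is a minimal Java illustration, with a simple one-pole low-pass standing in for the Butterworth filter (all names and coefficients are made up):

class OnlineLowPass {
    private final double alpha;   // smoothing coefficient, 0 < alpha <= 1
    private double state = 0.0;   // kept between calls, like a PtByPt VI

    OnlineLowPass(double alpha) { this.alpha = alpha; }

    // Called once per new sample as it arrives from the acquisition loop.
    double next(double sample) {
        state += alpha * (sample - state);
        return state;
    }
}

class OfflineLowPass {
    // Called once, with the complete signal, like the "normal" filter VIs.
    static double[] filter(double[] signal, double alpha) {
        double[] out = new double[signal.length];
        double state = 0.0;
        for (int i = 0; i < signal.length; i++) {
            state += alpha * (signal[i] - state);
            out[i] = state;
        }
        return out;
    }
}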
If VoltageBuf is an array of consecutive values of the same signal (the one that you want to filter), you only need to connect VoltageBuf directly to the filter.
I'm pretty sure this is a silly newbie question but I didn't know it so I had to ask...
Why do we use data structures, like Linked List, Binary Search Tree, etc? (when no dynamic allocation is needed)
I mean: wouldn't it be faster if we kept a single variable for a single object? Wouldn't that speed up access time? E.g., a BST possibly has to run through some pointers first before it gets to the actual data.
Except for when dynamic allocation is needed, is there a reason to use them?
Eg: using linked list/ BST / std::vector in a situation where a simple (non-dynamic) array could be used.
Each thing you are storing is kept in its own variable (or storage location). Data structures apply organization to your data. Imagine if you had 10,000 things you were trying to track. You could store them in 10,000 separate variables. If you did that, then you'd always be limited to 10,000 different things. If you wanted more, you'd have to modify your program and recompile it each time you wanted to increase the number. You might also have to modify the code to change the way in which the calculations are done if the order of the items changes because a new one is introduced in the middle.
Using data structures, from simple arrays to more complex trees, hash tables, or custom data structures, allows your code to be both more organized and more extensible. Using an array, which can either be created to hold the required number of elements or extended to hold more after it is first created, keeps you from having to rewrite your code each time the number of data items changes. Using an appropriate data structure allows you to design algorithms based on the relationships between the data elements rather than some fixed ordering, giving you more flexibility.
A simple analogy might help. You could, for example, organize all of your important papers by putting each of them into a separate filing cabinet. If you did that, you'd have to memorize (i.e., hard-code) the cabinet in which each item can be found in order to use them effectively. Alternatively, you could store them all in the same filing cabinet (like a generic array). This is better in that they're all in one place, but still not optimal, since you have to search through them all each time you want to find one. Better yet would be to organize them by subject, putting like subjects in the same file folder (separate arrays, different structures). That way you can look for the file folder for the correct subject, then find the item you're looking for in it. Depending on your needs, you can use different filing methods (data structures/algorithms) to better organize your information for its intended use.
I'll also note that there are times when it does make sense to use individual variables for each data item. Frequently there is a mixture of individual variables and more complex structures, with the appropriate method chosen depending on how each particular item is used. For example, you might store the sum of a collection of integers in a variable while the integers themselves are stored in an array. A program would need to be pretty simple, though, for the introduction of data structures to be inappropriate.
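A tiny sketch of that mixture, in Java, purely as an illustration:

class Totals {
    public static void main(String[] args) {
        int[] values = {4, 8, 15, 16, 23, 42};   // the collection lives in an array
        int sum = 0;                             // the derived value lives in a variable
        for (int v : values) {
            sum += v;
        }
        System.out.println("sum = " + sum);
    }
}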
Sorry, but you didn't just find a great new way of doing things ;) There are several huge problems with this approach.
How could this be done without requiring programmers to massively (and nontrivially) rewrite tons of code as soon as the number of allowed items changes? Even when you have to fix your data structure sizes at compile time (e.g., arrays in C), you can use a constant. Then changing a single constant and recompiling is sufficient for changes to that size (if the code was written with this in mind). With your approach, we'd have to type hundreds or even thousands of lines every time some size changes. Not to mention that all this code would be incredibly hard to read, write, maintain, and verify. The old truism "more lines of code = more space for bugs" is taken up to eleven in such a setting.
Then there's the fact that the number is almost never set in stone. Even when it is a compile time constant, changes are still likely. Writing hundreds of lines of code for a minor (if it exists at all) performance gain is hardly ever worth it. This goes thrice if you'd have to do the same amount of work again every time you want to change something. Not to mention that it isn't possible at all once there is any remotely dynamic component in the size of the data structures. That is to say, it's very rarely possible.
Also consider the concept of implicit and succinct data structures. If you use a set of hard-coded variables instead of abstracting over the size, you still got a data structure. You merely made it implicit, unrolled the algorithms operating on it, and set its size in stone. Philosophically, you changed nothing.
But surely it has a performance benefit? Well, possibly, although it would be tiny. And it isn't guaranteed to be there. You'd save some space on data, but code size would explode. And as everyone informed about inlining should know, small code size is very useful for performance, as it allows the code to stay in the cache. Also, argument passing would result in excessive copying unless you figured out a trick to derive the location of most variables from a few pointers. Needless to say, this would be nonportable, very tricky to get right even on a single platform, and liable to being broken by any change to the code or the compiler invocation.
Finally, note that a weaker form is sometimes done. The Wikipedia page on implicit and succinct data structures has some examples. On a smaller scale, some data structures store much data in one place, such that it can be accessed with less pointer chasing and is more likely to be in the cache (e.g. cache-aware and cache-oblivious data structures). It's just not viable for 99% of all code and taking it to the extreme adds only a tiny, if any, benefit.
The main benefit of data structures, in my opinion, is that they group your data relationally. For instance, instead of having 10 separate variables of class MyClass, you can have a data structure that groups them all. This grouping allows certain operations to be performed because the items are structured together.
Not to mention, data structures can potentially enforce type safety, which is powerful and necessary in many cases.
And last but not least, what would you rather do?
string string1 = "string1";
string string2 = "string2";
string string3 = "string3";
string string4 = "string4";
string string5 = "string5";
Console.WriteLine(string1);
Console.WriteLine(string2);
Console.WriteLine(string3);
Console.WriteLine(string4);
Console.WriteLine(string5);
Or...
List<string> myStringList = new List<string>() { "string1", "string2", "string3", "string4", "string5" };
foreach (string s in myStringList)
Console.WriteLine(s);