I recently needed to modify someone's code that used multiple continue cases in a for each. The addition was a new control loop inside the for each, which promptly broke the continue logic. Is there a good way to get the next list item in such a loop without rewriting all of the continue cases?
// Additional control loops within the member function which cannot be
// turned into functions due to native C++ data types.
{
    for each (KeyValuePair<String^, String^> kvp in ListOfItems)
    {
        do { // new condition-testing code
            // a bunch of code that includes several chances to continue
        } while (!reachedCondition);
    }
}
continue and break apply to the innermost control loop. I've used an explicit iterator within a loop instead, so depending on what your ListOfItems is (e.g. SortedList or Dictionary), you may be able to advance the iterator yourself instead of relying on continue.
using namespace System;
using namespace System::Collections::Generic;

Dictionary<String^, String^>^ d = gcnew Dictionary<String^, String^>();

// Drive the enumerator yourself: call MoveNext() wherever you would
// otherwise have used continue, then carry on with the next item.
IEnumerator<KeyValuePair<String^, String^>>^ e = d->GetEnumerator();
while (e->MoveNext()) {
    KeyValuePair<String^, String^> kvp = e->Current;
    // do stuff with kvp
}
It would also be helpful if you could tell me how I can make the Arduino display the name of the detected object.
Can you tell me what I need to add to do this:
#include <Pixy2.h>

// This is the main Pixy object
Pixy2 pixy;

void setup()
{
  Serial.begin(115200);
  Serial.print("Starting...\n");
  pixy.init();
}

void loop()
{
  int i;
  // grab blocks!
  pixy.ccc.getBlocks();
  // If there are detected blocks, print them!
  if (pixy.ccc.numBlocks)
  {
    Serial.print("Detected ");
    Serial.println(pixy.ccc.numBlocks);
    for (i = 0; i < pixy.ccc.numBlocks; i++)
    {
      Serial.print("  block ");
      Serial.print(i);
      Serial.print(": ");
      pixy.ccc.blocks[i].print();
    }
  }
}
I'm not sure if I get your question correctly, but as far as I remember getBlocks() returns the number of recognised objects. Given that a known object has been detected, this number should be positive.
Since you already print those blocks, what keeps you from calling your new functionality from within that loop?
For the second question, on how to display the names, I'm not exactly sure what you're looking for. You can take the "signature" of a block and use it as a name, and of course you can map your own names to certain signatures. If you want to print them like all the other values, you can just use Serial.print() as well. If you want to print them differently, e.g. to an LC display, then we first need to know your intentions.
Maybe check out this tutorial to get a better grasp of the interface: https://www.open-electronics.org/pixy-camera-detect-the-colour-of-the-objects-and-track-their-position/
I am trying to migrate from RxJava 1 to RxJava 2. I am replacing all code parts where I previously had Observable<Void> with Completable. However, I ran into one problem with the order of stream calls. When I was previously dealing with Observables and using map and flatMap, the code worked 'as expected'. However, the andThen() operator seems to work a little bit differently. Here is a sample to simplify the problem itself.
public Single<String> getString() {
    Log.d("Starting flow..");
    return getCompletable().andThen(getSingle());
}

public Completable getCompletable() {
    Log.d("calling getCompletable");
    return Completable.create(e -> {
        Log.d("doing actual completable work");
        e.onComplete();
    });
}

public Single<String> getSingle() {
    Log.d("calling getSingle");
    if (conditionBasedOnActualCompletableWork) {
        return getSingleA();
    } else {
        return getSingleB();
    }
}
What I see in the logs in the end is:
1 -> Log.d("Starting flow..");
2 -> Log.d("calling getCompletable");
3 -> Log.d("calling getSingle");
4 -> Log.d("doing actual completable work");
As you can probably figure out, I would expect line 4 to be logged before line 3 (after all, the name of the andThen() operator suggests that the code would be called 'after' the Completable finishes its job). Previously I was creating the Observable<Void> using the Async.toAsync() operator, and the method which is now called getSingle was in a flatMap in the stream; it worked as I expected, so log 4 would appear before log 3.

I have tried changing the way the Completable is created, e.g. using fromAction or fromCallable, but it behaves the same. I also couldn't find any other operator to replace andThen(). To underline: the method must be a Completable since it doesn't have anything meaningful to return; it changes the app preferences and other settings (and is used like that globally, mostly working 'as expected'), and those changes are needed later in the stream. I also tried to wrap getSingle() to somehow create a Single and move the if statement inside the create block, but I don't know how to use the getSingleA/B() methods inside there, and I need to use them since they have their own complexity and it doesn't make sense to duplicate the code.

Does anyone have any idea how to modify this in RxJava 2 so it behaves the same? There are multiple places where I rely on a Completable job finishing before moving forward with the stream (like refreshing a session token, updating the DB, preferences, etc.) - no problem in RxJava 1 using flatMap.
You can use defer:
getCompletable().andThen(Single.defer(() -> getSingle()))
That way, you don't execute the contents of getSingle() immediately but only when the Completable completes and andThen switches to the Single.
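For illustration, here is a minimal, self-contained sketch of the difference (plain RxJava 2 with System.out instead of Android's Log; the class name DeferDemo and the field conditionBasedOnActualCompletableWork are just stand-ins mirroring the question):

import io.reactivex.Completable;
import io.reactivex.Single;

public class DeferDemo {
    static boolean conditionBasedOnActualCompletableWork = false;

    static Completable getCompletable() {
        System.out.println("calling getCompletable");
        return Completable.fromAction(() -> {
            System.out.println("doing actual completable work");
            conditionBasedOnActualCompletableWork = true;
        });
    }

    static Single<String> getSingle() {
        System.out.println("calling getSingle");
        return Single.just(conditionBasedOnActualCompletableWork ? "A" : "B");
    }

    public static void main(String[] args) {
        // Without defer, getSingle() would run while the chain is being assembled,
        // i.e. before the Completable's work, and the result would be "B".
        // With defer, getSingle() is only invoked once the Completable completes:
        getCompletable()
                .andThen(Single.defer(DeferDemo::getSingle))
                .subscribe(System.out::println); // prints "A"
    }
}

Swapping the defer-wrapped Single for a plain getSingle() call in this sketch reproduces the ordering from the question's logs.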
I know that using Reflux.__keep.createdActions I get a list of all actions created. Is there a way to know the names of these actions?
Is there a way to define a preEmit hook for all actions?
Important note: Reflux.__keep was originally created to support another feature that never materialized. However, it was also creating memory leaks in some programs, so it was recently changed to NOT store anything by default. From v5.0.2 of Reflux onward you have to call Reflux.__keep.useKeep() for Reflux.__keep to store anything (this applies to the latest versions of reflux and reflux-core). Reflux.__keep is not a documented part of the API, and as such changes to it do not necessarily follow semantic versioning.
On to the question though:
1) In Reflux.__keep there is a createdActions property, which is an Array holding all created actions so far (if you did the useKeep() thing, of course). Every action should have an actionName property on it, telling you the name you supplied when you created it:
Reflux.__keep.useKeep()
Reflux.createActions(['firstAction', 'secondAction']);
console.log(Reflux.__keep.createdActions[0].actionName) // <-- firstAction
console.log(Reflux.__keep.createdActions[1].actionName) // <-- secondAction
2) preEmit hooks can be assigned to actions after-the-fact, so assigning them to actions within Reflux.__keep.createdActions would be a simple matter of a loop:
Reflux.__keep.useKeep()
var Actions = Reflux.createActions(['firstAction', 'secondAction']);
var total = Reflux.__keep.createdActions.length;
for (var i=0; i<total; i++) {
Reflux.__keep.createdActions[i].preEmit = function(arg) { console.log(arg); };
}
Actions.firstAction('Hello'); // <- preEmit outputs "Hello"
Actions.secondAction('World!'); // <- preEmit outputs "World!"
I have an application which is written entirely using the FRP paradigm and I think I am having performance issues due to the way that I am creating the streams. It is written in Haxe but the problem is not language specific.
For example, I have this function which returns a stream that resolves every time the config file is updated for that specific section, like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, promhx, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using it rather than with the library.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance; it takes up over 50% of the CPU time according to profiling.
As a fix I have done something like the following using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams before hand and store them in fields that can be accessed by consumers. I don't like this approach because I don't want to make a field for every config section, I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); // 1
        cached(1);
        trace(sideeffects); // 1
        cached(3);
        trace(sideeffects); // 2
        cached(3);
        trace(sideeffects); // 2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return function (input:In)
            return switch m[input] {
                case null: m[input] = f(input);
                case output: output;
            }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
I'm using ANTLR4 to create a parse tree for my grammar, what I want to do is modify certain nodes in the tree. This will include removing certain nodes and inserting new ones. The purpose behind this is optimization for the language I am writing. I have yet to find a solution to this problem. What would be the best way to go about this?
While there is currently no real support or tools for tree rewriting, it is very possible to do. It's not even that painful.
The ParseTreeListener or your MyBaseListener can be used with a ParseTreeWalker to walk your parse tree.
From here, you can remove nodes with ParserRuleContext.removeLastChild(); however, when doing this, you have to watch out for ParseTreeWalker.walk:
public void walk(ParseTreeListener listener, ParseTree t) {
    if ( t instanceof ErrorNode ) {
        listener.visitErrorNode((ErrorNode)t);
        return;
    }
    else if ( t instanceof TerminalNode ) {
        listener.visitTerminal((TerminalNode)t);
        return;
    }
    RuleNode r = (RuleNode)t;
    enterRule(listener, r);
    int n = r.getChildCount();
    for (int i = 0; i < n; i++) {
        walk(listener, r.getChild(i));
    }
    exitRule(listener, r);
}
You must replace removed nodes with something if the walker has already visited the parents of those nodes; I usually pick empty ParserRuleContext objects (this is because of the cached value of n in the method above). This prevents the ParseTreeWalker from throwing an NPE.
When adding nodes, make sure to set the mutable parent on the ParserRuleContext to the new parent. Also, because of the cached n in the method above, a good strategy is to detect where the changes need to be before the walk reaches where you want your changes to go, so the ParseTreeWalker will walk over them in the same pass (otherwise you might need multiple passes...).
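For example, a minimal sketch of that placeholder trick (assuming the standard ANTLR 4 Java runtime; TreeEdits and blankOutChild are made-up names, and children/parent are the public fields on ParserRuleContext/RuleContext):

import org.antlr.v4.runtime.ParserRuleContext;

class TreeEdits {
    // Swap the child at the given index for an empty ParserRuleContext so the
    // walker's cached child count still matches and getChild(i) never returns null.
    static void blankOutChild(ParserRuleContext parent, int index) {
        ParserRuleContext placeholder = new ParserRuleContext();
        placeholder.parent = parent;               // keep the parent pointer consistent
        parent.children.set(index, placeholder);   // same child count, empty content
    }
}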
Your pseudo code should look like this:
public void enterRewriteTarget(@NotNull MyParser.RewriteTargetContext ctx) {
    if (shouldRewrite(ctx)) {
        ArrayList<ParseTree> nodesReplaced = replaceNodes(ctx);
        addChildTo(ctx, createNewParentFor(nodesReplaced));
    }
}
I've used this method to write a transpiler that compiled a synchronous internal language into asynchronous javascript. It was pretty painful.
Another approach would be to write a ParseTreeVisitor that converts the tree back to a string. (This can be trivial in some cases, because you are only calling TerminalNode.getText() and concatenate in aggregateResult(..).)
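As a rough sketch of what such a visitor could look like (assuming the standard ANTLR 4 Java runtime; TreeToTextVisitor is a made-up name and the whitespace handling is deliberately naive):

import org.antlr.v4.runtime.tree.AbstractParseTreeVisitor;
import org.antlr.v4.runtime.tree.TerminalNode;

class TreeToTextVisitor extends AbstractParseTreeVisitor<String> {
    // Every terminal contributes its text; aggregateResult(..) concatenates the pieces.
    // Overriding the generated visit methods for the rules you want to change is where
    // the actual modifications would go.
    @Override
    public String visitTerminal(TerminalNode node) {
        return node.getText() + " ";
    }

    @Override
    protected String defaultResult() {
        return "";
    }

    @Override
    protected String aggregateResult(String aggregate, String nextResult) {
        return aggregate + nextResult;
    }
}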
You then add the modifications to this visitor so that the resulting string representation contains the modifications you try to achieve.
Then parse the string and you get a parse tree with the desired modifications.
This is certainly hackish in some ways, since you parse the string twice. On the other hand the solution does not rely on antlr implementation details.
I needed something similar for simple transformations. I ended up using a ParseTreeWalker and a custom ...BaseListener where I overrode the enter... methods. Inside these methods the ParserRuleContext.children list is available and can be manipulated.
class MyListener extends ...BaseListener {
    @Override
    public void enter...(...Context ctx) {
        super.enter...(ctx);
        ctx.children.add(...);
    }
}
new ParseTreeWalker().walk(new MyListener(), parseTree);