How do we verify that an initial guess is feasible?
There should be some API call, like prog.CheckInitialGuessFeasible or something like that, so we don't need to add a lot of extra code to do this.
We do have a function CheckSatisfiedAtInitialGuess. You could call it through
prog.CheckSatisfiedAtInitialGuess(prog.GetAllConstraints());
to check whether all constraints are satisfied.
If you want to return the infeasible constraints, you can try
std::vector<Binding<Constraint>> failed_constraints;
for (const auto& constraint : prog.GetAllConstraints()) {
  // Evaluate each constraint at the initial guess and record the violations.
  if (!prog.CheckSatisfied(constraint, prog.initial_guess())) {
    failed_constraints.push_back(constraint);
  }
}
I wonder if there's any language sugar or SDK utility function in Dart that can protect a certain piece of code from running more than once?
E.g.
void onUserLogin() {
  ...
  runOnce(() {
    handleInitialMessage();
  });
  ...
}
I know I can add a global or class static boolean flag to check but it would be accessible in other functions of the same scope with a risk of accidental mixup in the future.
In C++ I could e.g. use a local static bool for this.
There is no built-in functionality to prevent code from running more than once. You need some kind of external state to know whether it actually did run.
You can't just remember whether the function itself has been seen before, because you use a function expression ("lambda") here, and every evaluation of that creates a new function object which is not even equal to other function objects created by the same expression.
So, you need something to represent the location of the call.
I guess you could hack something up using stack traces, but I would not recommend that (very expensive for very little advantage).
So, I'd recommend something like:
class RunOnce {
  bool _hasRun = false;
  void call(void Function() function) {
    if (_hasRun) return;
    // Set after calling if you don't want a throw to count as a run.
    _hasRun = true;
    function();
  }
}
...
static final _runOnce = RunOnce();
void onUserLogin() {
  _runOnce(handleInitialMessage);
}
It's still just a static global that can be accidentally reused.
Is there a way to enforce the order of execution for a broadcast stream with multiple listeners when that order matters?
StreamSubscription<T> listen(void onData(T event)?,
    {Function? onError, void onDone()?, bool? cancelOnError});
The abstract definition doesn't seem to support it. I was looking for perhaps something like a 'priority' parameter to specify the order of operation.
For example, right now I have a UserController that notifies its listeners to do something when the user changes. However, some of the listeners need to be prioritised, but they need to stay in their own separate classes. Example code:
class UserController {
  Stream user;
}

class IndependentControllerA {
  //...
  userController.user.listen((user) {
    // This needs to be carried out first before everything else
  });
  //...
}

class IndependentControllerB {
  userController.user.listen((user) {
    // This needs to be carried out before A
  });
}
What I have thought of to overcome this is for UserController to instead register a list of its own Future callbacks that can be awaited in order. See example:
class UserController {
  final List<Future Function()> callbacks = [];

  Future<void> changeUser() async {
    // Await each callback in registration order.
    for (final callback in callbacks) {
      await callback();
    }
  }
}

class IndependentControllerA {
  //...
  userController.callbacks.add(() async => print('Do first thing'));
  //...
}

class IndependentControllerB {
  //...
  userController.callbacks.add(() async => print('Do second thing'));
  //...
}
However, this doesn't feel very elegant to me if there is already a better built-in way to do it with streams. Is there?
The order in which listeners are notified is definitely not something Dart promises. In practice it's likely to follow the order the listeners were added, but that's not a guarantee, and it might change at any time. (Not that it really will: there's definitely badly written code out there which depends on the ordering and would break if it changed, but that only means there'd have to be a good reason for the change, not that it can't happen.)
I'd write my own "prioritizer" if I had such a specific need. Something like what you have started here. Knowing the specific requirements you have might make it much simpler than making a completely general solution.
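For example, a minimal sketch of such a prioritizer (the PrioritizedNotifier name and API below are made up for illustration; it just keeps async callbacks sorted by a priority number and awaits them one after another):

// Hypothetical helper: callbacks register with a priority and are awaited
// in ascending priority order when notify() is called.
class PrioritizedNotifier<T> {
  final List<MapEntry<int, Future<void> Function(T)>> _listeners = [];

  void add(int priority, Future<void> Function(T) callback) {
    _listeners.add(MapEntry(priority, callback));
    _listeners.sort((a, b) => a.key.compareTo(b.key));
  }

  Future<void> notify(T event) async {
    for (final entry in _listeners) {
      await entry.value(event);
    }
  }
}

UserController would then expose a PrioritizedNotifier instead of a broadcast stream, and IndependentControllerB could register with a lower priority number than IndependentControllerA.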
I am making a Discord bot that needs to read a list of arguments and, based on the first argument given, determine which branch to run.
Something kinda like this.
Mono.just(stringList)
    .ifSelectmap(conditional1, branch1)
    .ifSelectmap(conditional2, branch2)
    .ifSelectmap(conditional3, branch3)
    // non branch logic here
The only way I can figure out to do anything like this would just cause several deeply nested switchIfEmpty statements, which would be hard to manage.
If the conditional logic doesn't involve latency-heavy operations (i.e. performing IO), then there is nothing wrong with passing a more fleshed-out Function to map/flatMap.
I'm going to assume your "branches" are actually asynchronous operations represented as a Mono<R> or Flux<R> (that is, all the branches share the same return type R), so we're talking flatMap:
Flux<V> source; //...
Flux<R> result = source.flatMap(v -> {
    if (conditional1) return branch1(v);
    if (conditional2) return branch2(v);
    if (conditional3) return branch3(v);
    return Mono.empty(); //no conditional match == ignore
    //you might want a default processing instead for the above
});
I have an application which is written entirely using the FRP paradigm and I think I am having performance issues due to the way that I am creating the streams. It is written in Haxe but the problem is not language specific.
For example, I have this function which returns a stream that resolves every time the config file is updated for that specific section, like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, promhx, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using the library rather than with the library itself.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance and is taking up over 50% of the CPU time according to profiling.
As a fix I have done something like the following using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream like a cache function.
Another way I could do it is to create the streams before hand and store them in fields that can be accessed by consumers. I don't like this approach because I don't want to make a field for every config section, I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); //1
        cached(1);
        trace(sideeffects); //1
        cached(3);
        trace(sideeffects); //2
        cached(3);
        trace(sideeffects); //2
    }

    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return function (input:In)
            return switch m[input] {
                case null: m[input] = f(input);
                case output: output;
            }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling two states in the file actually becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves the most valuable.
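For example, assuming parseYaml takes the raw file contents as a String and getConfigSection is the function from the question, the wiring might look roughly like this (an untested sketch, not the library's own API):

// Hypothetical wiring: parse results are cached per file contents, so
// repeated updates that toggle between two known states stay cheap.
var parseYamlCached = memoize(parseYaml);

function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYamlCached);
}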
I receive the "Recursion depth exceeded allowed limit." error when I build a Breeze where clause with more than 100 conditions.
My code is
$(list).each(function () {
    if (pred === undefined) {
        pred = entity_ODL.create("id", "==", this.id());
    }
    else {
        pred = pred.or("id", "==", this.id());
    }
});
More than 100 conditions on a query? That sets off warning bells for me. If I were you, I'd take a good look at what needs to be accomplished and whether the current method is really the correct way of doing things.
The limit doesn't especially surprise me. So I think that your best bet would be to create and execute multiple queries, each with fewer than 100 conditions, and then concatenate the results. See the Q.all method for combining multiple async calls into a single callback.
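For example, a rough sketch of that idea (ids, manager and baseQuery below are placeholders for your list of ids, your EntityManager and your EntityQuery; the chunk size of 50 is arbitrary):

// Split the ids into chunks, run one query per chunk, then merge the results.
var chunkSize = 50;
var promises = [];
for (var i = 0; i < ids.length; i += chunkSize) {
    var chunk = ids.slice(i, i + chunkSize);
    var pred = breeze.Predicate.create("id", "==", chunk[0]);
    for (var j = 1; j < chunk.length; j++) {
        pred = pred.or("id", "==", chunk[j]);
    }
    promises.push(manager.executeQuery(baseQuery.where(pred)));
}
Q.all(promises).then(function (responses) {
    var results = [];
    responses.forEach(function (data) {
        results = results.concat(data.results);
    });
    // results now holds the concatenated entities from all chunks
});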
You didn't specifically say, but I believe the error is actually occurring on the server side and is not a Breeze-specific problem.
You can fix it by adding/changing the attribute of the method in your ApiController.
i.e.
[HttpGet]
[BreezeQueryable(MaxNodeCount = 10000)]
public IQueryable<EquipmentSearchView> EquipmentSearchView()
{
    ...
}