I receive the "Recursion depth exceeded allowed limit." error when I build a Breeze where clause with more than 100 conditions.
My code is:
$(list).each(function () {
    if (pred === undefined) {
        pred = entity_ODL.create("id", "==", this.id());
    }
    else {
        pred = pred.or("id", "==", this.id());
    }
});
More than 100 conditions on a query? That sets off warning bells for me. If I were you, I'd take a good look at what needs to be accomplished and whether the current method is really the right way of doing things.
The limit doesn't especially surprise me, so I think your best bet would be to create and execute multiple queries, each with fewer than 100 conditions, and then concatenate the results. See the Q.all method for combining multiple async calls into a single callback.
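For illustration, a rough sketch of that chunking approach (getByIds is a hypothetical helper wrapping the where-clause query from the question, not part of Breeze):
// Sketch only: getByIds(ids) is assumed to return a promise for the
// entities whose ids are in the given array.
function loadInChunks(ids, chunkSize) {
    var queries = [];
    for (var i = 0; i < ids.length; i += chunkSize) {
        queries.push(getByIds(ids.slice(i, i + chunkSize)));
    }
    // Q.all resolves once every chunked query has completed.
    return Q.all(queries).then(function (resultsPerChunk) {
        // Flatten the per-chunk results into a single array.
        return [].concat.apply([], resultsPerChunk);
    });
}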
You didn't specifically say, but I believe the error is actually occurring on the server side and is not a Breeze-specific problem.
You can fix it by adding or changing the attribute on the action method in your ApiController, i.e.:
[HttpGet]
[BreezeQueryable(MaxNodeCount = 10000)]
public IQueryable<EquipmentSearchView> EquipmentSearchView()
{
...
I am making a Discord bot that needs to read a list of arguments and, based on the first argument given, determine which branch to run.
Something kind of like this:
Mono.just(stringList)
    .ifSelectmap(conditional1, branch1)
    .ifSelectmap(conditional2, branch2)
    .ifSelectmap(conditional3, branch3)
    // non branch logic here
The only way I can figure out to do anything like this would result in several deeply nested switchIfEmpty statements, which would be hard to manage.
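Roughly the shape I'm trying to avoid (just a sketch; conditional1..3 are placeholder Predicates and branch1..3 placeholder functions returning a Mono):
// Sketch of the nested switchIfEmpty approach; everything here is a placeholder.
Mono<R> result = Mono.just(stringList)
    .filter(conditional1)
    .flatMap(branch1)
    .switchIfEmpty(Mono.just(stringList)
        .filter(conditional2)
        .flatMap(branch2)
        .switchIfEmpty(Mono.just(stringList)
            .filter(conditional3)
            .flatMap(branch3)));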
If the conditional logic doesn't involve latency-heavy operations (i.e. performing I/O), then there is nothing wrong with passing a more fleshed-out Function to map/flatMap.
I'm going to assume your "branches" are actually asynchronous operations represented as a Mono<R> or Flux<R> (that is, all the branches share the same return type R), so we're talking flatMap:
Flux<V> source; //...
Flux<R> result = source.flatMap(v -> {
    if (conditional1) return branch1(v);
    if (conditional2) return branch2(v);
    if (conditional3) return branch3(v);
    return Mono.empty(); // no conditional match == ignore
    // you might want a default processing step instead of the above
});
I have an application which is written entirely using the FRP paradigm, and I think I am having performance issues due to the way I am creating the streams. It is written in Haxe, but the problem is not language-specific.
For example, I have a function which returns a stream that resolves every time the config file is updated for a specific section, like the following:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYaml);
}
In the reactive programming library I am using, called promhx, each step of the chain should remember its last resolved value, but I think every time I call this function I am recreating the stream and reprocessing each step. This is a problem with the way I am using it rather than with the library.
Since this function is called everywhere, parsing the YAML every time it is needed is killing performance and takes up over 50% of the CPU time according to profiling.
As a fix, I have done something like the following, using a Map stored as an instance variable that caches the streams:
function getConfigSection(section:String) : Stream<Map<String, String>> {
    var cachedStream = this._streamCache.get(section);
    if (cachedStream != null) {
        return cachedStream;
    }
    var stream = configFileUpdated()
        .filter(sectionFilter(section))
        .then(readFile)
        .then(parseYaml);
    this._streamCache.set(section, stream);
    return stream;
}
This might be a good solution to the problem, but it doesn't feel right to me. I am wondering if anyone can think of a cleaner solution that maybe uses a more functional approach (closures etc.) or even an extension I can add to the stream, like a cache function.
Another way I could do it is to create the streams beforehand and store them in fields that can be accessed by consumers. I don't like this approach because I don't want to make a field for every config section; I like being able to call a function with a specific section and get a stream back.
I'd love any ideas that could give me a fresh perspective!
Well, I think one answer is to just abstract away the caching like so:
class Test {
    static function main() {
        var sideeffects = 0;
        var cached = memoize(function (x) return x + sideeffects++);
        cached(1);
        trace(sideeffects); // 1
        cached(1);
        trace(sideeffects); // 1
        cached(3);
        trace(sideeffects); // 2
        cached(3);
        trace(sideeffects); // 2
    }
    @:generic static function memoize<In, Out>(f:In->Out):In->Out {
        var m = new Map<In, Out>();
        return
            function (input:In)
                return switch m[input] {
                    case null: m[input] = f(input);
                    case output: output;
                }
    }
}
You may be able to find a more "functional" implementation for memoize down the road. But the important thing is that it is a separate thing now and you can use it at will.
You may choose to memoize(parseYaml) so that toggling between two states of the file becomes very cheap after both have been parsed once. You can also tweak memoize to manage the cache size according to whatever strategy proves most valuable.
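For example, a minimal sketch of memoizing just the parse step, assuming parseYaml takes the raw file contents as a String and the memoize helper above is in scope:
// Sketch only: assumes parseYaml:String->Map<String, String> operates on the
// raw file contents, and that the memoize helper above is accessible here.
var parseYamlCached = memoize(parseYaml);

function getConfigSection(section:String) : Stream<Map<String, String>> {
    return configFileUpdated()
        .then(filterForSectionChanged(section))
        .then(readFile)
        .then(parseYamlCached); // previously seen contents are never re-parsed
}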
I'm using the HotTowel SPA template which makes use of Durandal. In my Durandal ViewModels I am using Breeze to get some data from the database.
I have a datacontext class that I put all my Breeze queries in, and the queries all follow a pattern like the following:
getAthletes: function (queryCompleted) {
    var query = breeze.EntityQuery.from("Athletes");
    return manager
        .executeQuery(query)
        .then(queryCompleted)
        .fail(queryFailed);
}
Since I'm doing an asynchronous call in the activate method of the view model, I have to return the promise that comes back from these calls in the activate method.
Using a single query works great like this:
function activate() {
    datacontext.getAthlete(loadAthlete);
}
However, if I need to perform two queries I run into problems, but only in the release version of my application. I have tried doing this with the following syntax:
function activate() {
    datacontext.getAthlete(loadAthlete).then(datacontext.getOtherData(loadOtherData));
}
This will work fine in debug mode, but when I deploy it to the server and my scripts get bundled, I get an exception which isn't very clear:
t is not a function
I've also tried chaining them together in my datacontext class like below, but I still get the same error.
getAthleteAndEfforts: function (athleteId, athleteQueryCompleted, effortsQueryCompleted) {
    var athleteQuery = breeze.EntityQuery.from("Athletes").where("id", "==", athleteId);
    var effortsQuery = breeze.EntityQuery.from("BestEfforts").where("athleteId", "==", athleteId);
    return manager.executeQuery(athleteQuery).then(athleteQueryCompleted)
        .then(manager.executeQuery(effortsQuery).then(effortsQueryCompleted))
        .fail(queryFailed);
}
So I'm assuming I just don't understand the Q.defer() enough to use it properly or there is something else going on.
What is the correct syntax to accomplish this?
Ok, thanks to RainerAtSpirit for pointing me in the right direction to find this. I looked at John Papa's jumpstarter examples and he has a datacontext that does this under the primeData function.
So using the syntax he used there I was able to get it to work correctly like this:
getAthleteAndEfforts: function (athleteId, athleteQueryCompleted, effortsQueryCompleted) {
    return Q.all([
        datacontext.getAthlete(athleteId, athleteQueryCompleted),
        datacontext.getAthleteEfforts(athleteId, effortsQueryCompleted)
    ]);
}
I had seen the Q.all in the Q documentation but wasn't sure how to use it, but this example helped. I tested this and it works both in debug and release modes.
Not sure why the first version is working at all, but you should return a promise when datacontext is making async calls.
function activate() {
    return datacontext.getAthlete(loadAthlete);
}
or
function activate() {
    return datacontext.getAthlete(loadAthlete).then(function () {
        return datacontext.getOtherData(loadOtherData);
    });
}
Check John Papa's jumpstarter for more examples: https://github.com/johnpapa/PluralsightSpaJumpStartFinal/search?q=activate
My ApiController is supposed to return data:
// GET api/profile
public IEnumerable<HubBasicProfile> GetProjectProfiles()
{
    IEnumerable<HubBasicProfile> res = _bpss.GetAllBasicProfiles();
    return res;
}
When I debug and inspect res before it is returned, it has data for 91 HubBasicProfile objects.
However, when the data is returned, I see 91 {}, empty objects. No data at all.
Anybody any clue why this might be?
Thanks
Eric
Call the ToList() method so that deferred execution won't happen.
public IEnumerable<HubBasicProfile> GetProjectProfiles()
{
    IEnumerable<HubBasicProfile> res = _bpss.GetAllBasicProfiles();
    return res.ToList();
}
Deferred execution means that the evaluation of an expression is delayed until its realized value is actually required.
EDIT: As per the comment,
If you are serializing these items, you need to make sure your class is marked as serializable or has the [DataContract]/[DataMember] attributes.
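For example, a minimal sketch (the property names here are placeholders, not the actual HubBasicProfile definition):
// Sketch only: Id and DisplayName are placeholder properties.
using System.Runtime.Serialization;

[DataContract]
public class HubBasicProfile
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string DisplayName { get; set; }

    // Without [DataMember], a property is skipped by the serializer
    // once the class is marked with [DataContract].
    public string InternalNotes { get; set; }
}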
I would imagine this is because IEnumerable will use lazy evaluation and isn't being asked to enumerate over its collection.
When you're debugging, you're telling the debugger to enumerate over 'res', so you see the results.
If you do a .ToList() or similar before returning, do you see the results?
I have these interfaces:
public interface IBaseInterface
{
    function Method():void;
}
public interface IExtendedInterface extends IBaseInterface
{
    function MethodTwo():void;
}
...and a vector of type "IBaseInterface" I need to iterate through:
var myVector:Vector.<IBaseInterface> = new Vector.<IBaseInterface>();
I need to perform an operation on objects that use IExtendedInterface. Which is the preferred option?
for each (var obj:IBaseInterface in myVector)
{
    // Option 1:
    var tmp:IExtendedInterface = obj as IExtendedInterface;
    if (tmp != null)
        tmp.MethodTwo();

    // Option 2:
    if (obj is IExtendedInterface)
        IExtendedInterface(obj).MethodTwo();
}
I'm sure the info I'm looking for is out there; it's just hard to search for "is" and "as". Thanks in advance!
I tried a little test to find out which is faster, expecting the "as" variant to be slightly better because a variable assignment and a check for null seem less complex than a type comparison (both options include a type cast), and I was proven right.
The difference is minimal, though: at 100,000 iterations each, Option 1 was consistently only about 3 milliseconds(!) faster.
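For reference, a sketch of the kind of micro-benchmark described (the actual test harness wasn't shown; flash.utils.getTimer() is used here for timing, and myVector/MethodTwo are the ones from the question):
// Sketch only: assumes myVector is already populated with a mix of
// IBaseInterface and IExtendedInterface instances.
import flash.utils.getTimer;

var start:int = getTimer();
for (var i:int = 0; i < 100000; i++) {
    for each (var a:IBaseInterface in myVector) {
        // Option 1: "as" cast plus null check
        var tmp:IExtendedInterface = a as IExtendedInterface;
        if (tmp != null)
            tmp.MethodTwo();
    }
}
trace("as variant:", getTimer() - start, "ms");

start = getTimer();
for (i = 0; i < 100000; i++) {
    for each (var b:IBaseInterface in myVector) {
        // Option 2: "is" check plus explicit cast
        if (b is IExtendedInterface)
            IExtendedInterface(b).MethodTwo();
    }
}
trace("is variant:", getTimer() - start, "ms");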
As weltraumpirat says, Option 1 is faster.
If you are working on your own code and no one else is ever going to touch it, then go for it.
But if it's a team collaboration and you are not going to run the operation 100,000 times with mission-critical, to-the-millisecond timings, then Option 2 is much easier for someone looking through your code to read.
This is especially important if you are handing your code over to a client for further development.