I've got the following working A* code in C#:
static bool AStar(
IGraphNode start,
Func<IGraphNode, bool> check,
out List<IGraphNode> path)
{
// Closed list. HashSet because lookups are O(1).
var closed =
new HashSet<IGraphNode>();
// Binary heap which accepts multiple equivalent items.
var frontier =
new MultiHeap<IGraphNode>(
(a, b) =>
{ return Math.Sign(a.TotalDistance - b.TotalDistance); }
);
// Some way to know how many multiple equivalent items there are.
var references =
new Dictionary<IGraphNode, int>();
// Some way to know which parent a graph node has.
var parents =
new Dictionary<IGraphNode, IGraphNode>();
// One new graph node in the frontier,
frontier.Insert(start);
// Count the reference.
references[start] = 1;
IGraphNode current = start;
do
{
do
{
frontier.Get(out current);
// If it's in the closed list or
// there's other instances of it in the frontier,
// and there's still nodes left in the frontier,
// then that's not the best node.
} while (
(closed.Contains(current) ||
(--references[current]) > 0) &&
frontier.Count > 0
);
// If we have run out of options,
if (closed.Contains(current) && frontier.Count == 0)
{
// then there's no path.
path = null;
return false;
}
closed.Add(current);
foreach (var edge in current.Edges)
{
// If there's a chance of a better path
// to this node,
if (!closed.Contains(edge.End))
{
int count;
// If the frontier doesn't contain this node,
if (!references.TryGetValue(edge.End, out count) ||
count == 0)
{
// Initialize it and insert it.
edge.End.PathDistance =
current.PathDistance + edge.Distance;
edge.End.EstimatedDistance = CalcDistance(edge.End);
parents[edge.End] = current;
frontier.Insert(edge.End);
references[edge.End] = 1;
}
else
{
// If this path is better than the existing path,
if (current.PathDistance + edge.Distance <
edge.End.PathDistance)
{
// Use this path.
edge.End.PathDistance = current.PathDistance +
edge.Distance;
parents[edge.End] = current;
frontier.Insert(edge.End);
// Keeping track of multiple equivalent items.
++references[edge.End];
}
}
}
}
} while (!check(current) && frontier.Count > 0);
if (check(current))
{
path = new List<IGraphNode>();
path.Add(current);
while (current.PathDistance != 0)
{
current = parents[current];
path.Add(current);
}
path.Reverse();
return true;
}
// Yep, no path.
path = null;
return false;
}
How do I make it faster? No code samples, please; that's a challenge I've set myself.
Edit: To clarify, I'm looking for any advice, suggestions, links, etc. that apply to A* in general. The code is just an example. I asked for no code samples because they make it too easy to implement the technique(s) being described.
Thanks.
Have you looked at this page or this page yet? They have plenty of helpful optimization tips as well as some great information on A* in general.
Switch to a Randomized Meldable Queue for the heap structure. Since you wanted a programming challenge, I won't show you how I changed the recursive Meld method to be iterative; that's the trick to getting speed out of that structure. More info in Gambin's paper "Randomized Meldable Priority Queues" (search the web for that).
This issue is an extension of Multiple GLTF loading and Merging on server side.
I am trying to merge multiple GLTF files that have some common nodes.
The answer there helped me with merging the files; I combined the scenes with the following code and it rendered perfectly:
const scenes = root.listScenes()
const scene0 = scenes[0]
root.setDefaultScene(scene0);
if (scenes.length > 1) {
for (let i = 1; i < scenes.length; i++) {
let scene = scenes[i];
let nodes = scene.listChildren()
for (let j = 0; j < nodes.length; j++) {
scene0.addChild(nodes[j]);
}
}
}
root.listScenes().forEach((b, index) => index > 0 ? b.dispose() : null);
My issue is that all the data that was common across the GLTFs is duplicated, and this will create issues in animation when the root bones need to change. Is there a way to merge so that the common nodes are not duplicated? I am also trying a custom merge:
const gltfLoader = () => {
const document = new Document();
const root = document.getRoot();
document.merge(io.read(filePaths[0]));
let model;
for (let i = 1; i < filePaths.length; i++) {
const inDoc = new Document();
inDoc.merge(io.read(filePaths[i]));
model = inDoc.getRoot().listScenes()[0];
model.listChildren().forEach((child) => {
mergeStructure(root.listScenes()[0], child);
});
}
io.write('output.gltf', document);
}
const mergeStructure = (parent, childToMerge) => {
let contains = false;
parent.listChildren().forEach((child) => {
if (child.t === childToMerge.t && !contains && child.getName() === childToMerge.getName()) {
childToMerge.listChildren().forEach((subChild) => {
mergeStructure(child, subChild);
});
contains = true;
}
});
if (!contains) {
console.log("Adding " + childToMerge.getName() + " to " + parent.getName() + " as child")
parent.addChild(childToMerge);
}
}
But this merge is not working due to the error: Cannot link disconnected graphs/documents.
I am a newbie to 3D modelling. Some direction would be great.
Thanks!
The error you're seeing above occurs because the code attempts to move individual resources (e.g. Nodes) from one glTF Document to another. This isn't possible; each Document manages its resource graph internally. An equivalent workflow would be:
1. Load N files and merge into one document (with N scenes):
import { Document, NodeIO } from '@gltf-transform/core';
const io = new NodeIO();
const document = new Document();
for (const path of paths) {
document.merge(io.read(path));
}
2. Iterate over all of the scenes, moving their children to some common scene:
const root = document.getRoot();
const mainScene = root.listScenes()[0];
for (const scene of root.listScenes()) {
if (scene === mainScene) continue;
for (const child of scene.listChildren()) {
// If conditions are met, append child to `mainScene`.
// Doing so will automatically detach it from the
// previous scene.
}
scene.dispose();
}
3. Clean up any remaining unmerged resources:
import { prune } from '@gltf-transform/functions';
await document.transform(prune());
In OpenLayers 2.8 there was a threshold associated with the cluster strategy, as per ticket https://trac.osgeo.org/openlayers/ticket/1815.
In OpenLayers 3 there is no mention of it anywhere (and the strategy paradigm seems to be gone as well).
http://openlayers.org/en/master/apidoc/ol.source.Cluster.html
Does anyone know if there exists a ticket for this feature?
The paradigm has changed considerably. In OpenLayers 3 you create a new layer with a cluster style, and the "threshold" is set as a maxResolution or minResolution in the layer's options.
Similar to:
var clusterLayer = new ol.layer.Vector({
visible: true,
zIndex: insightMap.totalServcies - element.SortOrder,
id: Id,
serviceId: element.Id,
minResolution: clusteringThreshold,
cluster: true,
});
You can also use minZoom and maxZoom according to the documentation, but I've encountered issues with them behaving consistently.
Update
It's actually possible to have a proper cluster threshold without recompiling the library. You need to use the geometry of each feature (from the features property of the cluster) in the style function.
const noClusterStyles = [];
vectorLayer.setStyle(feature => {
const features = feature.get('features');
if (features.length > 5) {
return clusterStyle;
} else {
for (let i = 0, ii = features.length; i < ii; ++i) {
const clone = noClusterStyles[i] ? noClusterStyles[i] : noClusterStyle.clone();
clone.setGeometry(features[i].getGeometry());
noClusterStyles[i] = clone;
}
noClusterStyles.length = features.length;
return noClusterStyles;
}
});
Thank you to @ahocevar for the code snippet.
Another solution would require modifying the OL3 library itself.
The ol.source.Cluster.prototype.cluster_ function has the following code snippet :
var neighbors = this.source_.getFeaturesInExtent(extent);
ol.DEBUG && console.assert(neighbors.length >= 1, 'at least one neighbor found');
neighbors = neighbors.filter(function(neighbor) {
var uid = ol.getUid(neighbor).toString();
if (!(uid in clustered)) {
clustered[uid] = true;
return true;
} else {
return false;
}
});
// Add the following
// If an element has too many neighbors, cluster them together;
// otherwise, register each neighbor as a cluster of one.
// Size-based styling should be handled separately, in the layer style function
let THRESHOLD = 3;
if(neighbors.length > THRESHOLD) {
this.features_.push(this.createCluster_(neighbors));
} else {
for(var j = 0 ; j < neighbors.length ; j++) {
this.features_.push(this.createCluster_([neighbors[j]]));
}
}
Here's my pagination/infinite scrolling scenario:
1. Load the initial N with startAt().limit(N).once('value'). Populate a list items.
2. On scroll, load the next N. (I pass a priority to startAt() but that's tangential.)
3. When a new item is added, I'd like to pop it to the top of items.
If I use a .onChildAdded listener for step 3, it finds all the items, including those I've already pulled in, thus creating duplicates. Is there a better way?
Another method would be to use the .onChildAdded listener for the initial N in step 1 instead of .once. When the initial N items come in, I do items.add(item), since they arrive already in order; but for a new item that comes in after the fact, I need to somehow know it's new so I can do items.insert(0, item) to force it to the top of the list. I'm not sure how to set this up, or if I'm off the mark here.
EDIT: Still in flux, see: https://groups.google.com/forum/#!topic/firebase-talk/GyYF7hfmlEM
Here's a working solution I came up with:
class FeedViewModel extends Observable {
int pageSize = 20;
#observable bool reloadingContent = false;
#observable bool reachedEnd = false;
var snapshotPriority = null;
bool isFirstRun = true;
FeedViewModel(this.app) {
loadItemsByPage();
}
/**
* Load more items pageSize at a time.
*/
loadItemsByPage() {
reloadingContent = true;
var itemsRef = f.child('/items_by_community/' + app.community.alias)
.startAt(priority: (snapshotPriority == null) ? null : snapshotPriority).limit(pageSize+1);
int count = 0;
// Get the list of items, and listen for new ones.
itemsRef.once('value').then((snapshot) {
snapshot.forEach((itemSnapshot) {
count++;
// Don't process the extra item we tacked onto pageSize in the limit() above.
print("count: $count, pageSize: $pageSize");
// Track the snapshot's priority so we can paginate from the last one.
snapshotPriority = itemSnapshot.getPriority();
if (count > pageSize) return;
// Insert each new item into the list.
// TODO: This seems weird. I do it so I can separate out the method for adding to the list.
items.add(toObservable(processItem(itemSnapshot)));
// If this is the first item loaded, start listening for new items.
// By using the item's priority, we can listen only to newer items.
if (isFirstRun == true) {
listenForNewItems(snapshotPriority);
isFirstRun = false;
}
});
// If we received less than we tried to load, we've reached the end.
if (count <= pageSize) reachedEnd = true;
reloadingContent = false;
});
// When an item changes, let's update it.
// TODO: Does pagination mean we have multiple listeners for each page? Revisit.
itemsRef.onChildChanged.listen((e) {
Map currentData = items.firstWhere((i) => i['id'] == e.snapshot.name);
Map newData = e.snapshot.val();
newData.forEach((k, v) {
if (k == "createdDate" || k == "updatedDate") v = DateTime.parse(v);
if (k == "star_count") v = (v != null) ? v : 0;
if (k == "like_count") v = (v != null) ? v : 0;
currentData[k] = v;
});
});
}
listenForNewItems(endAtPriority) {
// If this is the first item loaded, start listening for new items.
var itemsRef = f.child('/items').endAt(priority: endAtPriority);
itemsRef.onChildAdded.listen((e) {
print(e.snapshot.getPriority());
print(endAtPriority);
if (e.snapshot.getPriority() != endAtPriority) {
print(e.snapshot.val());
// Insert new items at the top of the list.
items.insert(0, toObservable(processItem(e.snapshot)));
}
});
}
void paginate() {
if (reloadingContent == false && reachedEnd == false) loadItemsByPage();
}
}
1. Load the initial N with startAt().limit(N).once('value'). Populate a list items.
2. On the first run, note the first item's priority, then start an onChildAdded listener that has an endAt() with that priority. This means it'll only listen to stuff from there and above.
3. In that listener, ignore the first event (which is the topmost item we already have), and for everything else, add that to the top of the list.
4. Of course, on scroll, load the next N.
EDIT: Updated w/ some fixes, and including the listener for changes.
Outlook rules are putting all Facebook-originating mail into a Facebook folder, and an external process is running, as detailed here, to separate the contents of that folder in a way that was not feasible through the Outlook rules process. Originally I had this process running in VBA in Outlook, but it was a pig, choking Outlook's resources, so I decided to move it out to an external program; and as I want to improve my C# skill set, this would be a conversion exercise at the same time. Anyway, the mail processing is working as it should and items are going to the correct sub-folders, but for some reason the temporary constraint to exit after i iterations is not doing as it should. If there are 800 mails in the Facebook folder (I am a member of many groups), it only runs through 400 iterations; if there are 30, it only processes 15, etc.
I can't for the life of me see why. Can anyone put me right?
Thanks
private void PassFBMail()
{
//do something
// result = MsgBox("Are you sure you wish to run the 'Allocate to FB Recipient' process", vbOKCancel, "Hold up")
//If result = Cancel Then Exit Sub
var result = MessageBox.Show("Are you sure you wish to run the 'Allocate to SubFolders' process", "Sure?", MessageBoxButtons.OKCancel, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2);
if (result == DialogResult.Cancel)
{
return;
}
try
{
OutLook._Application outlookObj = new OutLook.Application();
OutLook.MAPIFolder inbox = (OutLook.MAPIFolder)
outlookObj.Session.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderInbox);
OutLook.MAPIFolder fdr = inbox.Folders["facebook"];
OutLook.MAPIFolder fdrForDeletion = inbox.Folders["_ForDeletion"];
// foreach (OutLook.MAPIFolder fdr in inbox.Folders)
// {
// if (fdr.Name == "facebook")
// {
// break;
// }
// }
//openFacebookFolder Loop through mail
//LOOPING THROUGH MAIL ITEMS IN THAT FOLDER.
Redemption.SafeMailItem sMailItem = new Redemption.SafeMailItem();
int i = 0;
foreach ( Microsoft.Office.Interop.Outlook._MailItem mailItem in fdr.Items.Restrict("[MessageClass] = 'IPM.Note'"))
{
//temp only process 500 mails
i++;
if (i == 501)
{
break;
}
// eml.Item = em
// If eml.To <> "" And eml.ReceivedByName <> "" Then
// strNewFolder = DeriveMailFolder(eml.To, eml.ReceivedByName)
// End If
sMailItem.Item = mailItem;
string strTgtFdr = null;
if (sMailItem.To != null && sMailItem.ReceivedByName != null)
{
strTgtFdr = GetTargetFolder(sMailItem.To, sMailItem.ReceivedByName );
}
// If fdr.Name <> strNewFolder Then
// If dDebug Then DebugPrint "c", "fdr.Name <> strNewFolder"
// eml.Move myInbox.Folders(strNewFolder)
// If dDebug Then DebugPrint "w", "myInbox.Folders(strNewFolder) = " & myInbox.Folders(strNewFolder)
// Else
// eml.Move myInbox.Folders("_ForDeletion")
// End If
if (fdr.Name != strTgtFdr)
{
OutLook.MAPIFolder destFolder = inbox.Folders[strTgtFdr];
mailItem.Move(destFolder);
}
else
{
mailItem.Move(fdrForDeletion);
}
}
//allocate to subfolders
//Process othersGroups
//Likes Max 3 per day per user, max 20% of group posts
//Comments one per day per user, max 10% of group posts
//Shares one per day per user, max 10% of group posts
}
catch(System.Exception crap)
{
OutCrap(crap);
MessageBox.Show("MailCamp experienced an issues while processing the run request and aborted - please review the error log","Errors during the process",MessageBoxButtons.OK,MessageBoxIcon.Error,MessageBoxDefaultButton.Button1);
}
}
Do not use a foreach loop if you are modifying the number of items in the collection.
Loop from MAPIFolder.Items.Count down to 1 instead, so that moving an item does not shift the indices of the items you haven't processed yet.
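For example, here is a minimal sketch of that down-counting pattern, reusing the fdr, inbox, fdrForDeletion and GetTargetFolder names from the code above (the Redemption wrapper is omitted for brevity, so treat this as an illustration rather than a drop-in replacement):
// Iterate backwards: moving item i only shifts the positions of items
// above i, which have already been visited.
OutLook.Items items = fdr.Items.Restrict("[MessageClass] = 'IPM.Note'");
for (int i = items.Count; i >= 1; i--)   // Outlook collections are 1-based
{
    var mailItem = items[i] as OutLook.MailItem;
    if (mailItem == null) continue;      // skip anything that isn't a mail item

    string strTgtFdr = null;
    if (mailItem.To != null && mailItem.ReceivedByName != null)
    {
        strTgtFdr = GetTargetFolder(mailItem.To, mailItem.ReceivedByName);
    }
    if (strTgtFdr == null) continue;     // nothing to route; leave it where it is

    if (fdr.Name != strTgtFdr)
    {
        mailItem.Move(inbox.Folders[strTgtFdr]);
    }
    else
    {
        mailItem.Move(fdrForDeletion);
    }
}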
I am working in ASP.Net MVC 1.0 and SQL Server 2000 using Entity Framework.
My faulty controller code is given below:
int checkUser = id ?? 0;
string userNameFromNew, userNameToNew;
if (checkUser == 1)
{
userNameFromNew = "U" + Request.Form["usernameFrom"];
userNameToNew = "U" + Request.Form["usernameTo"];
}
else
{
userNameFromNew = "C" + Request.Form["usernameFrom"];
userNameToNew = "C" + Request.Form["usernameTo"];
}
var rMatrix = from Datas in repository.GetTotalRightData()
where Datas.UserName == userNameFromNew.Trim()
select Datas;
Right_Matrix RM = new Right_Matrix();
foreach(var Data in rMatrix)
{
RM.Column_Id = Data.Column_Id;
RM.ProMonSys_Right = Data.ProMonSys_Right;
RM.UserName = userNameToNew;
UpdateModel(RM);
this.repository.AddRightTransfer(RM);
}
return RedirectToAction("RightTransfer");
My faulty model code is given below:
public void AddRightTransfer(Right_Matrix RM)
{
context.AddObject("Right_Matrix", RM);
context.SaveChanges();
}
My code throws an error once it reaches the model code, stating that the DataReader is already open and I need to close it first.
Please suggest a workaround.
Try moving the AddRightTransfer loop out of the LINQ foreach, and into a separate one. I think LINQ is executing the first add before the database result set is closed. By moving the call to AddRightTransfer into a second foreach, you should avoid the problem, if that's what is happening.
Here's an example:
List<Right_Matrix> matrixes = new List<Right_Matrix>();
foreach (var Data in rMatrix)
{
Right_Matrix rm = new Right_Matrix();
rm.Column_Id = Data.Column_Id;
rm.ProMonSys_Right = Data.ProMonSys_Right;
rm.UserName = userNameToNew;
UpdateModel(rm);
matrixes.Add(rm);
}
foreach (var rm in matrixes)
{
this.repository.AddRightTransfer(rm);
}
The problem is that you are modifying the database while still iterating over it. Either finish the query before modifying, like this:
foreach(var Data in rMatrix.ToArray())
Or don't modify the database in your loop, like this:
foreach(var Data in rMatrix)
{
// Create a new entity per row; re-adding the same instance would throw.
var RM = new Right_Matrix();
RM.Column_Id = Data.Column_Id;
RM.ProMonSys_Right = Data.ProMonSys_Right;
RM.UserName = userNameToNew;
UpdateModel(RM);
context.AddObject("Right_Matrix", RM);
}
context.SaveChanges();
Obviously, those two context. calls would have to be made into methods on your repository, but you get the picture.
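For instance, a rough sketch of what those repository methods might look like (the method names AddRightTransferDeferred and Save are made up here, purely to illustrate splitting the attach from the commit):
// Queue an entity on the context without touching the database yet.
public void AddRightTransferDeferred(Right_Matrix RM)
{
    context.AddObject("Right_Matrix", RM);
}

// Flush all pending inserts in one go, after the query loop has finished
// and its DataReader has been closed.
public void Save()
{
    context.SaveChanges();
}
The controller would then call repository.AddRightTransferDeferred(RM) inside the loop and repository.Save() once after it.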