I have an app whose function is to run in the background collecting location data (a 'RunKeeper'-style app). It could potentially run for hours and collect thousands of points.
These 'runs' are listed in a table view, and on selection the app redraws that run on the map. I'm also coloring these polylines, so to get multiple colors on a seemingly single line, I connect a bunch of different polylines. When I go to add an NSArray containing (say) 700 lines, and use
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    lineArray = [self polylinesFromSession];
    dispatch_async(dispatch_get_main_queue(), ^{
        [map addOverlays:lineArray]; // lineArray.count = ~700
    });
});
it really bogs the app down for 10-15 seconds. I can't call addOverlays on any thread other than main, so I don't see many options here. Is it possible to join a bunch of lines into a single overlay, THEN add it to the map? Or any ideas for a better way to do this?
Thanks!
If your data points are contiguous, instead of adding hundreds of different lines, try to combine them into a single line with many points.
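A minimal sketch of that idea, assuming the run's points are already available as an array of CLLocationCoordinate2D (the function name here is illustrative):

```swift
import MapKit

// Build one MKPolyline from all contiguous points instead of ~700
// separate overlays; MKPolyline copies the coordinates internally.
func combinedPolyline(from coordinates: [CLLocationCoordinate2D]) -> MKPolyline {
    MKPolyline(coordinates: coordinates, count: coordinates.count)
}

// A single call then replaces the ~700-element addOverlays: array:
// map.addOverlay(combinedPolyline(from: runCoordinates))
```

Note this trades away per-segment colors; for multi-color runs you would still need one polyline per contiguous color run, but that is typically far fewer than one per point.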
You might try creating raster tiles (with alpha transparency) of your lines on the fly and adding those as an MKTileOverlay to your map. For each point in a line, you can figure out in "tile space" what point this corresponds to, and use Core Graphics to draw in that tile. You can also skip points that would be over or under previous lines' points (unless you are plotting in a different color or want to layer the lines in a specific way).
The math is a little out of the scope of an answer here, but Spherical Mercator is relatively easy to grasp: the world is a large square continuously tiled into smaller squares, and the projection math is relatively straightforward trigonometry.
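For reference, the standard Spherical Mercator tile math can be sketched in a few lines (this is the common web-map tiling scheme; MKTileOverlay uses the same x/y/zoom tile addressing):

```swift
import Foundation

// Convert a latitude/longitude to the x/y tile indices at a given zoom.
// At zoom z the world is a 2^z by 2^z grid of square tiles.
func tileIndex(latitude: Double, longitude: Double, zoom: Int) -> (x: Int, y: Int) {
    let n = pow(2.0, Double(zoom))                  // tiles per axis at this zoom
    let x = (longitude + 180.0) / 360.0 * n
    let latRad = latitude * .pi / 180.0
    let y = (1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / .pi) / 2.0 * n
    return (Int(floor(x)), Int(floor(y)))
}
```

The fractional parts of x and y (before the floor) tell you where inside the tile a point lands, which is what you need when drawing into the tile with Core Graphics.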
But you will likely find higher performance out of rasterizing this way, as long as you don't need to interact with the various line annotations individually in the app, but just show them.
Related
Over in the Apple documentation, it claims you can make an infinitely sized SKTileMap:
Generating procedural game-world maps resembling natural terrain. You can create game world of infinite size by using procedural noise as its underlying representation, and manage storage and memory efficiently by creating noise maps (and their visual representations) only for the area around a player’s current position. (See the SKTileMap class.)
I can generate realistic terrain with GKNoise, like the Apple documentation claims you can.
I cannot, however, make one giant infinitely sized SKTileMapNode; it would be too intensive to run on a device.
The Apple documentation says to make an SKTileMapNode only around the player's current position (like chunks in Minecraft).
How can I achieve this in Swift? My RPG needs to be infinitely sized to achieve everything I want to do with this game.
I need the "chunks" to be SKTileMapNodes because I need trees, stone, water, etc. to be added to the map so the player can interact with them.
The solution to your problem begins in making the GKNoise tileable.
You are probably using GKNoiseMap to generate them.
When you use the initializer:
GKNoiseMap(_ noise: GKNoise, size: vector_double2, origin: vector_double2,
           sampleCount: vector_int2, seamless: Bool)
Important: Don't forget to set the seamless parameter to true.
That way you get a tileable map.
It looks better when you make them larger than the screen.
Let's say the tileable part of the map that makes up your realistic terrain is going to be 2048 by 2048 points. One map may cover 128x128 tiles, for example; in this case each tile would be 16x16 points.
You make an SKTileMapNode with 128x128 tiles.
Now the SKTileMapNode needs a background image, or tile definitions (in this case, the GKNoiseMap that you generated).
Now you can just use the same GKNoiseMap and place another 128x128-tile map next to the first one, in any direction.
Your map is now 256x128 tiles. When the user scrolls, they can't tell where one image ends and another begins, so the whole map can be as large as you want, by repeating the same exercise.
It works well when you generate a GKNoiseMap bigger than the screen, so you have to scroll a couple of times before the next GKNoiseMap starts; that way it doesn't get visually repetitive.
The area around "the player's position" can be one map, and then when you scroll, the map can repeat itself, saving you from loading anything else besides the map you already generated. That answers the " and manage storage and memory efficiently by creating noise maps" part of your question.
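A sketch of the chunk bookkeeping, assuming the 2048-point maps and 16-point tiles from the example above (all names are illustrative):

```swift
import Foundation

// Each chunk covers 128 tiles of 16 points: 2048 points per chunk.
let tileSize = 16.0
let tilesPerChunk = 128.0
let chunkSize = tilesPerChunk * tileSize

// Map a player coordinate (in points) to the index of the chunk it is in;
// floor() makes negative positions fall into chunks -1, -2, ... correctly.
func chunkIndex(forPosition position: Double) -> Int {
    Int(floor(position / chunkSize))
}

// Only the player's chunk and its neighbours need live SKTileMapNodes;
// far-away chunks can be discarded and regenerated later from the same
// seamless noise, because GKNoise output is deterministic for a given seed.
```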
You should also be careful with data storage. If every tile needs to store state beyond what the GKNoiseMap gives you, infinite maps can get expensive.
I need to be able to interact with a representation of a cylinder that has many different parts in it. When the user taps on one of the small rectangles, I need to display a popover related to that specific piece (form).
The next image demonstrates a realistic 3D approach. But, I repeat, I need to solve the problem; the 3D is NOT required (it would be really cool though). A representation that meets the functional needs will suffice.
The info about the parts needed to make the drawing comes from an API (size, position, etc.).
I don't really need it to be realistic. The simplest approximation would be to show the cylinder in a 2D representation, like a rectangle made out of small interactable rectangles.
So, as I see it, there are two opposite approaches: realistic or simplified.
Is there a way to achieve a nice solution in the middle? What libraries, components, frameworks that I should look into?
My research has led me to SceneKit, but I still don't know if I will be able to interact with it. Interaction is a very important part, as I need to display a popover when the user taps on any small rectangle on the cylinder.
Thanks
You don't need any special frameworks to achieve an interaction like this. The effect can be achieved with standard UIKit, UIView, and a little trigonometry. You can actually draw exactly your example image using 2D math and drawing. My answer is not an exact formula, but it involves thinking about how the shapes are defined and breaking the problem down into manageable steps.
A cylinder can be defined by two offset circles representing the end pieces, connected at their radii. I will use an orthographic projection, meaning the cylinder doesn't appear smaller as the depth extends into the background (but you could adapt this to a perspective projection if needed). You could draw this with Core Graphics in a UIView's drawRect:.
A square slice represents an angular piece of the circle, offset by an amount smaller than the length of the cylinder, but in the same direction, as in the following diagram (sorry for the imprecise drawing).
This square slice you are interested in is the area outlined in solid red, outside the radius of the first circle, and inside the radius of the imaginary second circle (which is just offset from the first circle by whatever length you want the slice).
To draw this area you simply need to draw a path of the outline of each arc and connect the endpoints.
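The corner points of that outline reduce to plain trigonometry. A sketch, treating the slice as the band between the front circle and an imaginary second circle of the same radius whose center is offset along the cylinder (all names are illustrative):

```swift
import Foundation

// A point on a circle of a given radius at a given angle (radians).
func pointOnCircle(center: CGPoint, radius: Double, angle: Double) -> CGPoint {
    CGPoint(x: center.x + CGFloat(radius * cos(angle)),
            y: center.y + CGFloat(radius * sin(angle)))
}

// The four corners of one slice: walk the front arc from startAngle to
// endAngle, jump to the offset circle, and walk back. Connecting
// 0 -> 1 (front arc), 1 -> 2 (line), 2 -> 3 (back arc), 3 -> 0 (line)
// closes the outline.
func sliceCorners(frontCenter: CGPoint, backCenter: CGPoint, radius: Double,
                  startAngle: Double, endAngle: Double) -> [CGPoint] {
    [pointOnCircle(center: frontCenter, radius: radius, angle: startAngle),
     pointOnCircle(center: frontCenter, radius: radius, angle: endAngle),
     pointOnCircle(center: backCenter, radius: radius, angle: endAngle),
     pointOnCircle(center: backCenter, radius: radius, angle: startAngle)]
}
```

In drawRect: you would feed these into a path, drawing arcs between corners 0-1 and 2-3 rather than straight lines.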
To check if a touch is inside one of these square slices:
Check if the touch point's angle from the origin is between the slice's two edge angles.
Check if the touch point is outside the radius of the inside circle.
Check if the touch point is inside the radius of the outside circle. (Note what this means if the circles are more than a radius apart.)
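Those three checks can be sketched directly in code, assuming the two end circles share a radius and their centers are offset by the slice length (angles in radians; all names are illustrative):

```swift
import Foundation

// Angle of a point around a center, in -pi ... pi.
func angleAround(_ p: CGPoint, center: CGPoint) -> Double {
    atan2(Double(p.y - center.y), Double(p.x - center.x))
}

func distance(_ p: CGPoint, _ c: CGPoint) -> Double {
    let dx = Double(p.x - c.x), dy = Double(p.y - c.y)
    return (dx * dx + dy * dy).squareRoot()
}

// The three checks from the list above, in order.
func sliceContains(_ touch: CGPoint, frontCenter: CGPoint, backCenter: CGPoint,
                   radius: Double, startAngle: Double, endAngle: Double) -> Bool {
    let a = angleAround(touch, center: frontCenter)
    return a >= startAngle && a <= endAngle          // between the edge angles
        && distance(touch, frontCenter) > radius     // outside the inside circle
        && distance(touch, backCenter) < radius      // inside the outside circle
}
```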
To find a point at which to display the popover, you could average the end points of the slice, or find the middle angle between the two edges and offset by half the distance.
Theoretically, doing this in SceneKit with either SpriteKit or UIKit popovers is ideal.
However, SceneKit (and SpriteKit) seem to be in a state of flux, wherein nobody from Apple is communicating with users about the raft of issues folks are currently having with both. Going from a relatively stable and performant SpriteKit in iOS 8.4 to a lot of lost performance in iOS 9 seems common. SceneKit simply doesn't seem finished, and the documentation and community are both nearly non-existent as a result.
That being said... the theory is this:
Material IDs are what's used in traditional 3D apps to define areas of an object that have different materials. Somehow these Material IDs are called "elements" in SceneKit. I haven't been able to find much more about this.
It should be possible to detect the "element" that's underneath a touch on an object and respond accordingly. You should even be able to change the state/nature of the material on that element to indicate it's the currently selected one.
When wanting a smooth, well rounded cylinder as per your example, start with a cylinder that's made of only enough segments to describe/define the material IDs you need for your "rectangular" sections to be touched.
Later you can add a smoothing operation to the cylinder to make it round, and all the extra smoothing geometry in each quadrant of unique material ID should be responsive, regardless of how you add this extra detail to smooth the presentation of the cylinder.
Idea for the "Simplified" version:
If this representation is okay, you can use a UICollectionView.
Each cell can have a defined size thanks to:
collectionView:layout:sizeForItemAtIndexPath:
Then each cell of the collection can be a small rectangle representing a touchable part of the cylinder.
And use:
collectionView:(UICollectionView *)collectionView didSelectItemAtIndexPath:(NSIndexPath *)indexPath
to get the touch.
This will help you to display the popover at the right place:
CGRect rect = [collectionView layoutAttributesForItemAtIndexPath:indexPath].frame;
Finally, you can choose the appropriate popover (if the app has to work on iPhone) here:
https://www.cocoacontrols.com/search?q=popover
Not perfect, but I think this is efficient!
Yes, SceneKit.
When the user performs a touch event, you already know the 2D coordinate on screen, so your only decision is whether or not to show a popover, even if no 3D model exists.
First, we can logically split the requirement into two pieces: determining the touched segment, and showing the right "color" on each segment.
I think the use of the 3D model is to determine which piece of data to show, if I understand you correctly. In that case, SCNView's hit-test method will do most of the work for you. What you should do is perform a hit test, take the hit node and the hit's local 3D coordinate on that node; you can then calculate which segment was hit by the touch and make the decision.
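Once you have the hit's local coordinate, deriving the segment is simple angle math. A sketch, assuming the cylinder's axis is its local Y axis and the surface is split into equal angular segments (names are illustrative):

```swift
import Foundation

// Map a local hit coordinate (its x/z components, around the Y axis)
// to the index of the touched segment out of segmentCount equal slices.
func segmentIndex(localX: Double, localZ: Double, segmentCount: Int) -> Int {
    var angle = atan2(localZ, localX)       // -pi ... pi around the axis
    if angle < 0 { angle += 2 * .pi }       // normalize to 0 ... 2*pi
    let index = Int(angle / (2 * .pi) * Double(segmentCount))
    return min(index, segmentCount - 1)     // guard the angle == 2*pi edge
}
```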
Now, how to draw the surface of the cylinder is the only question left, right? There are various ways to do it: for example, paint each image you need programmatically and attach it to the cylinder's material, or keep your image files on disk and use them as the material for the cylinder...
With that, the problem is basically solved.
I'm currently trying to find a neat way of storing separate "branches" in a binary image. This little animation explains it:
As I go along the branches I need to collect the pixel indices that makes up a single-pixel wide branch. When I hit a junction point it should split up and store the new branches.
One way of going about it is maybe to create a 3x3 subregion, find out if there are white pixels inside it, move it accordingly, and create a junction point if there are more than two. Always store the previous subregion so you can make sure you don't move to regions you have already scanned.
It's a bit tricky to figure out how I would go about it though.
I basically need to reorder the pixels based on a "line/curve" hierarchy. Another part of the application will then redraw the figures, which internally works by creating lines between points hence the need to have them "ordered".
I don't know if you can apply it in your case, but you should take a look at cv::findContours.
It will give you an ordered vector of points.
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/find_contours/find_contours.html
We are developing a drawing app. When we draw crossed lines, the intersection point clears the previously drawn pixels where the two lines intersect each other. We are using setNeedsDisplayInRect: to refresh the drawing data.
How can we overcome this issue?
tl;dr: You need to store the previous line segments and redraw them when draw in the same rectangle again.
We are using setNeedsDisplayInRect: to refresh the drawing data
That is a good thing. Can you see any side effects of doing that? If not, try passing the entire rectangle and see what happens. You will see that only the very last segment is drawn.
Now you know that you need to store and possibly redraw previous line segments (or just their image).
Naive approach
The first and simplest solution would be to store all the lines in an array and redraw them all. You would notice that this slows your app down a lot, especially after having drawn for a while. It also doesn't go very well with only drawing what you need.
Only drawing lines inside the refreshed rectangle
You could speed up the above implementation by filtering all the lines in the array to only redraw those that intersect the refreshed rect. This could for example be done by getting the bounding box for the line segment (using CGPathGetBoundingBox(path)) and checking if it intersects the refreshed rectangle (using CGRectIntersectsRect(refreshRect, boundingBox)).
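A sketch of that filter, assuming each stored line keeps (or can compute) its bounding box (the Line type here is illustrative):

```swift
import Foundation

// A stored line segment with a precomputed bounding box; with CGPath-based
// lines you would call CGPathGetBoundingBox(path) instead.
struct Line {
    var boundingBox: CGRect
}

// Keep only the segments whose bounds intersect the refreshed rect,
// so drawRect: redraws just what setNeedsDisplayInRect: invalidated.
func visibleLines(in all: [Line], refreshRect: CGRect) -> [Line] {
    all.filter { $0.boundingBox.intersects(refreshRect) }
}
```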
That would reduce some of the drawing but you would still end up with a very long array of lines and see performance problems after a while.
Rasterize old line segments
One good way of not having to store all previous lines is to draw them into a bitmap (a separate image context; see UIGraphicsBeginImageContextWithOptions(...)) and draw that image before drawing the new line segment. However, that would almost double the drawing you have to do, so it should not be done for every frame.
One thing you could do is store only the last 100 line segments (or the last 1000, or whatever your own performance investigation shows; you should investigate these things yourself) and draw the rest into an image. Every time you have 100 new lines, you add them to the image context, by first drawing the image and then drawing the 100 new lines, and save that as the new image.
Another thing you could do is take all the new lines and draw them into the image every time the user lifts their finger. This can work well in combination with the above suggestion.
Depending on the complexity of your app you may need one or more of these suggestions to keep your app responsive (which is very important for a drawing app) but investigate first and don't make your solution overly complex if you don't have to.
I have a task where I need to track a series of objects across several frames, and compose the background from the image. The issue arises because one of the objects does not move until near the end, so I'm forced to take a shoddy average of the image. However, if I can blur out the objects, I think I'll be able to improve the background average.
I can identify a subsection of the image where the object is: an m-by-m array. I just need the ability to blur this section with a filter. However, imfilter takes a full-sized array (image) as its input, so I cannot simply move along this array pixel by pixel in a for loop. And if I extract the subsection as a separate image, I cannot put it back without using another for loop, which would be computationally expensive.
Is there a method of mapping a blur to a subsection of an image using MATLAB? Can this be done without using two for loops?
Try this...
sub_image = original_image(ii:jj, mm:nn);          % extract the subsection
blurred_sub_image = imfilter(sub_image, h);        % h = your filter kernel, e.g. from fspecial
original_image(ii:jj, mm:nn) = blurred_sub_image;  % write it back in place
In short, you don't need to use a for loop to address a subsection of an image. You can do it directly, both for reading and writing.