I have chart images that are conical projections. Whenever the chart position moves, I need to rotate the chart. Imagine viewing a globe from above. The change in rotation is generally less than 1 degree for a full screen of scrolling.
I am currently doing this in an image view within a scroll view, redrawing the images when the position changes. Any small change in position requires a complete redraw.
Is there a way to have iOS rotate the images within the layer without having to redraw them?
I am imagining something like the Maps app, where you can scroll and rotate along your course.
Another way of looking at it would be to have something like a scroll view that can scroll up/down or rotate its contents around an axis that may be way off the top of the screen.
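For what it's worth, here is a minimal sketch (Swift; the names and the pivot distance are illustrative) of what "rotating within the layer" could look like: composing a translate-rotate-translate transform turns the view's already-rendered layer around a pivot far above the screen, without triggering a redraw.

import UIKit

// Rotate `chartView` by `degrees` around a pivot `pivotDistance` points above
// the view's center. Only the layer transform changes; drawRect is not called.
func rotateChart(_ chartView: UIView, byDegrees degrees: CGFloat, pivotDistance: CGFloat) {
    let radians = degrees * .pi / 180
    var transform = CGAffineTransform(translationX: 0, y: -pivotDistance)
    transform = transform.rotated(by: radians)
    transform = transform.translatedBy(x: 0, y: pivotDistance)
    chartView.transform = transform
}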
Related
I just bought the Apple Watch and I want to create a super simple game for it.
It is simple enough to use only a monochrome color scheme, but advanced enough to have an object moving in real time.
I am trying to figure out how to position an object on my Apple Watch with watchOS 2. I want to place my object somewhere on the screen (anywhere I'd like to), but as far as I can tell there is absolutely no way to do that.
But, in the following library: https://github.com/shu223/watchOS-2-Sampler
the developer can actually animate the alignment of an image, which I guess suggests it should be possible to somehow specify a point at which to position an object.
And the animation is pretty smooth as well.
I have tried to generate frames on the fly with CGContext (Quartz 2D), but it's way too slow and the app on my Apple Watch just crashes.
I don't necessarily think that the watch itself is too weak; some clever programming should solve my problem, but I just can't figure out how to do it.
As you already said, there is no way to directly position a WKInterfaceObject on your watch. You can change the alignment of an object, but that leaves exactly three positions: left, center, and right. Probably not enough for your game.
What you could do: you can set a background image on a WKInterfaceGroup. So you could draw your game objects into a UIImage and set that as the background image of that group. That background image can also be animated, so maybe you can draw the movement of your game object into several UIImages and then set those as an animated background image.
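A hedged sketch of that suggestion (Swift; the controller, `gameGroup`, `renderFrame`, and the canvas size are illustrative, not from the original post): draw each frame into a UIImage, then hand the frames to the group as an animated background image.

import WatchKit
import UIKit

class GameInterfaceController: WKInterfaceController {

    @IBOutlet var gameGroup: WKInterfaceGroup!   // full-screen group in the storyboard

    // Draw one frame of the game state into a UIImage.
    func renderFrame(objectPosition: CGPoint, canvasSize: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(canvasSize, true, 0)
        defer { UIGraphicsEndImageContext() }
        guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
        ctx.setFillColor(UIColor.black.cgColor)
        ctx.fill(CGRect(origin: .zero, size: canvasSize))
        ctx.setFillColor(UIColor.white.cgColor)
        ctx.fillEllipse(in: CGRect(x: objectPosition.x, y: objectPosition.y, width: 10, height: 10))
        return UIGraphicsGetImageFromCurrentImageContext()
    }

    // Pre-render the movement into several frames and play them back as the
    // group's animated background image.
    func showMovement(along positions: [CGPoint]) {
        let size = CGSize(width: 156, height: 195)   // roughly a 42 mm screen
        let frames = positions.compactMap { renderFrame(objectPosition: $0, canvasSize: size) }
        gameGroup.setBackgroundImage(UIImage.animatedImage(with: frames, duration: 1.0))
        gameGroup.startAnimating()
    }
}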
One way that I have found to have a little more control over where a WKInterfaceObject is on the screen is to add it to a parent WKInterfaceGroup and center your item (vertically and/or horizontally, depending on how you want to move it). Then you can tweak the group's relative height/width (setRelativeHeight and setRelativeWidth) and its alignment (setVerticalAlignment and setHorizontalAlignment) to get your item positioned where you want.
This doesn't give you the ability to tweak frames or give the item a particular coordinate, but it does give you the ability to programmatically move a WKInterfaceObject anywhere on the screen.
For example, if you want your object in the exact vertical center of the screen, you leave the group's relative height to its container at 1, with your item centered. To get the object 40% from the top, you change the group's relative height to 0.8 and its alignment to .Top. If you want it 60% from the top, the relative height is still 0.8, but the group's vertical alignment should be .Bottom, etc.
I know this is kind of a confusing way to have to accomplish this, but this worked for me in getting items positioned exactly how I wanted.
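Here is a small sketch of that math (Swift, modern spelling of the alignment cases; `itemGroup` is a hypothetical group that wraps the centered item and normally fills its container):

import WatchKit
import CoreGraphics

// Position the centered item at `verticalFraction` of the container
// (0 = top edge, 0.5 = center, 1 = bottom edge).
func position(itemGroup: WKInterfaceGroup, verticalFraction: CGFloat) {
    if verticalFraction <= 0.5 {
        // e.g. 40% from the top: relative height 0.8 aligned to the top puts
        // the group's center (and the item) at 0.4 of the container.
        itemGroup.setRelativeHeight(verticalFraction * 2, withAdjustment: 0)
        itemGroup.setVerticalAlignment(.top)
    } else {
        // e.g. 60% from the top: relative height 0.8 aligned to the bottom.
        itemGroup.setRelativeHeight((1 - verticalFraction) * 2, withAdjustment: 0)
        itemGroup.setVerticalAlignment(.bottom)
    }
}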
I want to draw a rectangle on a chart and have it move with the rest of the chart. However, it stays where it was originally rendered in pixel coordinates.
I would have thought there would be a way to add annotations such as this in world coordinates rather than screen coordinates, but I was not able to find a way. Is such a thing possible?
Assuming I am stuck with annotating in screen coordinates, what event(s) do I need to handle to redraw the rectangle when the user zooms or scrolls?
Simple example:
chart.renderer.rect(chart.xAxis[0].toPixels(1432897200000), chart.yAxis[0].toPixels(1.09674), 14, 14, 0)
http://jsfiddle.net/bz3t79p5/
Also, I have had no luck using axis.toPixels() to set the rectangle's width and height; the values I got were way out of whack. Is there a trick to converting a width and height from world coordinates to pixels?
Does an affine transform reposition the content within the view or transform the view's output?
In my experiments, it appears to be the latter. If I apply a translation that moves the content off the view to the right, I do not appear to be able to draw in the area within the view to the left.
As requested: I would like to be able to create a scrolling view that rotates. Imagine viewing part of a globe along a line parallel to its axis. The content has to rotate with the scrolling.
If I use an affine transform, it appears that the view's display is transformed, not the content within the view. If I scroll the content to the right using an affine transform, I get a black area to the left, visible in the view, that I cannot draw in.
I am trying to confirm whether my interpretation of what is happening is correct. If so, I need to try a different approach.
I'm drawing a few circles, each filled with an image. When the user pans, I'd like to scale/resize the circles. So I called drawRect again and again, redrawing every CGRect until the gesture completed - of course the animation was very choppy. In my case a UIScrollView doesn't fit the needs, because I don't want to scroll but to scale the circles while the user is panning.
Is there any way except using OpenGL ES to implement this functionality?
Do you really need custom drawing for this? You can easily clip an image to a circle without drawRect.
Without -drawRect:
Using Core Animation you can set the corner radius of a layer. If all you want is to show an image inside a circle, then you can put the image in an image view with a square frame and set the corner radius of the image view's layer to half the width of the frame.
Now each time the user drags, you can change the bounds and the corner radius of the image view's layer. This will make it look like the circle becomes bigger/smaller.
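A minimal sketch of that approach, assuming each circle is its own UIImageView (the helper names are illustrative):

import UIKit

func makeCircleView(image: UIImage, diameter: CGFloat) -> UIImageView {
    let view = UIImageView(image: image)
    view.frame = CGRect(x: 0, y: 0, width: diameter, height: diameter)   // square frame
    view.contentMode = .scaleAspectFill
    view.clipsToBounds = true
    view.layer.cornerRadius = diameter / 2   // half the width makes it a circle
    return view
}

// Called repeatedly while the user pans: only bounds and cornerRadius change,
// so there is no drawRect and no re-decoding of the image.
func resizeCircle(_ circleView: UIImageView, toDiameter diameter: CGFloat) {
    circleView.bounds.size = CGSize(width: diameter, height: diameter)
    circleView.layer.cornerRadius = diameter / 2
}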
If you require custom drawing
Maybe you are doing some custom shadows or blending that can only be done with Core Graphics. If so, you could apply a scale transform and stretch the image while the user is dragging their finger and only redraw once the finger lifts from the screen. That will be much, much cheaper and is also very easy to implement. Just create a scale transform (CGAffineTransformMakeScale(xScale, yScale);) and set it as the transform on the view with the circle (this will only work if each circle is its own view).
Note: You can still use the same trick (scaling while dragging and then redrawing) if you use the corner radius approach if you require the extra performance.
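A hedged sketch of the scale-then-redraw trick, wired here to a pinch recognizer for brevity (the question uses a pan, but the pattern is the same); `circleView` stands for the view doing the expensive custom drawing:

import UIKit

final class CircleScaler: NSObject {
    let circleView: UIView   // the view with the expensive drawRect

    init(circleView: UIView) {
        self.circleView = circleView
        super.init()
        circleView.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        switch gesture.state {
        case .changed:
            // Cheap path while the finger is down: stretch the rendered content.
            circleView.transform = CGAffineTransform(scaleX: gesture.scale, y: gesture.scale)
        case .ended, .cancelled:
            // Expensive path, exactly once: drop the transform, resize, redraw.
            let scale = gesture.scale
            circleView.transform = .identity
            circleView.bounds.size = CGSize(width: circleView.bounds.width * scale,
                                            height: circleView.bounds.height * scale)
            circleView.setNeedsDisplay()   // drawRect runs a single time here
        default:
            break
        }
    }
}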
I'm working on yet another drawing app with a canvas that is many times bigger than the screen.
I need some advice/direction on how to do that.
Basically, what I want is to scroll around this big canvas, drawing only in the visible region.
I was thinking of two approaches:
Have 64x64 (or whatever) "tiles" to draw on, and then on scroll just load new tiles.
Record all user strokes (points), and on scroll calculate which fall within the visible region and draw them, using only a screen-size canvas.
If this matters, I'm using cocos2d for the prototype.
Forget the 2048x2048 limitation; I have an open-source project that draws 18000 x 18000 NASA images.
I suggest you break this task into two parts. First, scrolling. As was suggested by CodaFi, when you scroll you will provide CATiledLayers. Each of those tiles will be a CGImageRef that you create - a sub-image of your really huge canvas. You can then easily support zooming in and out.
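A sketch of the CATiledLayer half (Swift), under the assumption that a hypothetical `subimage(for:)` helper can produce the CGImage covering one tile of the huge canvas, e.g. by cropping pre-cut files on disk:

import UIKit

final class TiledCanvasView: UIView {

    // Back the view with a CATiledLayer so only visible tiles are rendered.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiled = layer as! CATiledLayer
        tiled.tileSize = CGSize(width: 256, height: 256)
        tiled.levelsOfDetail = 4       // supports zooming out
        tiled.levelsOfDetailBias = 2   // and zooming in
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    override func draw(_ rect: CGRect) {
        // `rect` is one tile of the huge canvas, not the whole view.
        guard let tile = subimage(for: rect) else { return }
        UIImage(cgImage: tile).draw(in: rect)
    }

    // Hypothetical helper: build the CGImage for one tile region.
    func subimage(for rect: CGRect) -> CGImage? {
        return nil   // crop from the big canvas or load a pre-cut tile
    }
}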
The second part is interacting with the user to draw or otherwise affect the canvas. When the user stops scrolling, you create an opaque UIView subclass, which you add as a subview to your main view, overlaying the view hosting the CATiledLayers. At the moment you need to show this view, you populate it with the proper information so it can draw that portion of your larger canvas properly (say, a circle at such and such a point in such and such a color, etc.).
You would do your drawing in the drawRect: method of this overlay view. So as the user takes actions that change the view, you call setNeedsDisplayInRect: as needed to force iOS to call your drawRect:.
When the user decides to scroll, you need to update your large canvas model with whatever changes the user has made, then remove the opaque overlay and let the CATiledLayers draw the proper portions of the large image. This transition is probably the trickiest part of the process in terms of avoiding visual glitches.
Suppose you have a large array of object definitions for your canvas. When you need to create a CGImageRef for a tile, you scan through it looking for overlap between each object's frame and the tile's frame, and draw only those items that are required for that tile.
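A combined sketch of the overlay view and that overlap test, with a hypothetical `CanvasObject` standing in for whatever lives on the large canvas:

import UIKit

struct CanvasObject {
    var frame: CGRect      // in canvas coordinates
    var color: UIColor
}

final class DrawingOverlayView: UIView {

    var objects: [CanvasObject] = []      // the portion of the model under this overlay
    var canvasOffset: CGPoint = .zero     // where the overlay sits on the big canvas

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        for object in objects {
            // Convert from canvas coordinates to the overlay's coordinates.
            let local = object.frame.offsetBy(dx: -canvasOffset.x, dy: -canvasOffset.y)
            // Only draw what actually overlaps the rect being redrawn.
            guard local.intersects(rect) else { continue }
            ctx.setFillColor(object.color.cgColor)
            ctx.fillEllipse(in: local)
        }
    }

    // When the user changes something, invalidate just the affected area;
    // iOS then calls draw(_:) with that rect.
    func objectChanged(_ object: CanvasObject) {
        setNeedsDisplay(object.frame.offsetBy(dx: -canvasOffset.x, dy: -canvasOffset.y))
    }
}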
Many mobile devices don't support textures over 2048x2048. So I would recommend:
make your big surface out of large 2048x2048 tiles
draw only the visible part of the currently visible tile to the screen
you will need to draw up to four tiles per frame when the user has scrolled to a corner where four tiles meet, but make sure you don't draw anything extra when only one tile is visible (see the sketch below).
This is probably the most efficient way. 64x64 tiles are really too small, and will be inefficient since there will be a large repeated overhead for the "draw tile" calls.
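A sketch of that bookkeeping, assuming 2048-point square tiles with the origin at the top-left of the big surface (the function name is illustrative):

import CoreGraphics

let tileSize: CGFloat = 2048

// Which (column, row) tiles intersect the visible rect? At most 4 when the
// user is at a corner where four tiles meet, and just 1 when fully inside a tile.
func visibleTiles(for visibleRect: CGRect) -> [(col: Int, row: Int)] {
    guard visibleRect.width > 0, visibleRect.height > 0 else { return [] }
    let firstCol = Int((visibleRect.minX / tileSize).rounded(.down))
    let lastCol  = Int((visibleRect.maxX / tileSize).rounded(.up)) - 1
    let firstRow = Int((visibleRect.minY / tileSize).rounded(.down))
    let lastRow  = Int((visibleRect.maxY / tileSize).rounded(.up)) - 1
    var tiles: [(col: Int, row: Int)] = []
    for row in firstRow...lastRow {
        for col in firstCol...lastCol {
            tiles.append((col, row))   // draw only these; skip everything else
        }
    }
    return tiles
}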
There is a tiling example in Apple's ScrollViewSuite. It doesn't have anything to do with the drawing part, but it might give you some ideas about how to manage the tiling side of things.
You can use CATiledLayer.
See WWDC2010 session 104
But for cocos2d, it might not work.