MonoTouch MapKit image overlay - iOS

I have been looking for a good tutorial on adding an image overlay for MapKit in C# MonoTouch.
I have found many overlay examples for coloured circles or polygons, but I want to load a PNG over the top of my maps. I am coming from MonoAndroid and have done it there, but need to transfer my program across to iOS.
Even an Objective-C example would help, but Mono would be better.

I ended up downloading some native Objective-C code and pretty much just converting it into C#. The function names are very similar, and the Xamarin API reference documentation is very helpful.
There were some tricky bumps I ran into around the app delegate and how it is handled differently in C# than in Objective-C.
Here are the two hardest functions to convert and my solution:
1) The Draw functions in the map overlay class
public override void DrawMapRect (MKMapRect mapRect, float zoomScale, CGContext ctx)
{
    InvokeOnMainThread(
        () =>
        {
            // Load the overlay image and draw it (no rotation here).
            UIImage image = UIImage.FromFile(@"indigo_eiffel_blog.png");
            DrawImgRotated(image, 0, ctx);
        }
    );
}
public void DrawImgRotated(UIImage image, float rotDegree, CGContext c)
{
    c.SaveState();

    CGImage imageRef = image.CGImage;
    // loading and setting the image
    MKMapRect theMapRect = ((MapOverlay)this.Overlay).BoundingMapRect; // MKMapRect theMapRect = [self.overlay boundingMapRect];
    RectangleF theRect = RectForMapRect(theMapRect);

    // we need to flip and reposition the image
    c.ScaleCTM(1.0f, -1.0f);
    c.TranslateCTM(-theRect.Width / 8, -theRect.Height);

    // Proper rotation about a point
    var m = CGAffineTransform.MakeTranslation(-theRect.Width / 2, -theRect.Height / 2);
    m.Multiply(CGAffineTransform.MakeRotation(DegreesToRadians(rotDegree)));
    m.Multiply(CGAffineTransform.MakeTranslation(theRect.Width / 2, theRect.Height / 2));
    c.ConcatCTM(m);

    c.DrawImage(theRect, imageRef);
    c.RestoreState();
}
and 2) the BoundingMapRect property in my MapOverlay class overriding MKOverlay.
Yes, the position is hardcoded; I am working on unit conversion at the moment (a sketch of that conversion follows the property below), but those are the correct coordinates to draw the image, the same as in the sample Objective-C code I used.
public MKMapRect BoundingMapRect
{
    [Export("boundingMapRect")]
    get
    {
        var bounds = new MKMapRect(1.35928e+08, 9.23456e+07, 17890.57, 26860.05);
        return bounds;
    }
}
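For the unit conversion mentioned above, the idea is to convert the overlay's corner coordinates into MapKit map points and take the differences as the rect's width and height. Here is a minimal sketch in native (Swift) MapKit, not the MonoTouch binding (which exposes the same MapKit calls under very similar names); the corner coordinates are placeholders, not the real ones for the sample image.

import MapKit

// Build an overlay's bounding MKMapRect from two geographic corner coordinates
// instead of hardcoding map-point values.
func boundingMapRect(topLeft: CLLocationCoordinate2D,
                     bottomRight: CLLocationCoordinate2D) -> MKMapRect {
    // Convert both corners from latitude/longitude into MapKit's map-point space.
    let origin = MKMapPoint(topLeft)
    let corner = MKMapPoint(bottomRight)

    // Width and height are the differences in map points
    // (map-point y grows towards the south, so the top-left corner has the smaller y).
    return MKMapRect(x: origin.x,
                     y: origin.y,
                     width: corner.x - origin.x,
                     height: corner.y - origin.y)
}

// Placeholder example: a small area around the Eiffel Tower.
let rect = boundingMapRect(
    topLeft: CLLocationCoordinate2D(latitude: 48.8600, longitude: 2.2920),
    bottomRight: CLLocationCoordinate2D(latitude: 48.8570, longitude: 2.2970))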
The source code for the Objective-C project I converted is here: https://github.com/indigotech/Blog-MKMapOverlayView
Xamarin API reference documentation: http://iosapi.xamarin.com/

The way you want to go about this will depend on the kind of image you want to overlay. If it's pretty small, you should be able to get away with using a single image. However, if it covers a larger area that users expect to be able to zoom in on, you may have to split it up into separate tiles for better performance.
Here are some other questions on Stack Overflow that might answer it for you:
How do I create an image overlay and add to MKMapView?
iPhone - Image overlay MapKit framework?
See Apple’s WWDC2010 sample code TileMap
https://github.com/klokantech/Apple-WWDC10-TileMap (posted to GitHub by someone)
None of this has anything to do with Mono, but you should be able to convert it…

Related

Adding custom view to ARKit

I just started looking at Apple's ARKit example and I am still studying it. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (such as a UIView or UILabel) to the surface?
Edit
I saw an example that adds a line. I still need to find out how to add an additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)

if button!.isHighlighted {
    if let previousPoint = previousPoint {
        let line = lineFrom(vector: previousPoint, toVector: currentPosition)
        let lineNode = SCNNode(geometry: line)
        lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
        sceneView.scene.rootNode.addChildNode(lineNode)
    }
}
I think code like this should be able to add a custom image, but I need to find the whole sample.
func updateRenderer(_ frame: ARFrame) {
    drawCameraImage(withPixelBuffer: frame.capturedImage)
    let viewMatrix = simd_inverse(frame.camera.transform)
    let projectionMatrix = frame.camera.projectionMatrix
    updateCamera(viewMatrix, projectionMatrix)
    updateLighting(frame.lightEstimate?.ambientIntensity)
    drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate: follow the sample code's example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
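A minimal sketch of that direction (not from the sample code; the delegate class name and the makeTextImage helper are my own illustrative assumptions): render the text to a UIImage, then use it as the diffuse contents of an SCNPlane sized to the detected ARPlaneAnchor.

import ARKit
import SceneKit
import UIKit

class PlaneLabelRenderer: NSObject, ARSCNViewDelegate {

    // Render a string into a UIImage so SceneKit can use it as a texture.
    // (Illustrative helper, not part of ARKit or the sample.)
    func makeTextImage(_ text: String, size: CGSize = CGSize(width: 256, height: 128)) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            (text as NSString).draw(in: CGRect(origin: .zero, size: size),
                                    withAttributes: [.font: UIFont.systemFont(ofSize: 40),
                                                     .foregroundColor: UIColor.white])
        }
    }

    // Called by ARSCNView when a detected plane anchor gets a node.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // An SCNPlane sized to the detected extent, textured with the label image.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = makeTextImage("Detected surface")

        let planeNode = SCNNode(geometry: plane)
        planeNode.position = SCNVector3(planeAnchor.center.x, 0, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2   // SCNPlane is vertical by default; lay it flat on the surface
        node.addChildNode(planeNode)
    }
}

The same idea works for any 2D content: render it to an image first, then hand that image to a material's diffuse contents.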
Yes, you can add a custom view to an ARKit scene.
Just make an image of your view and add it wherever you want.
You can use the following code to get an image from a UIView:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }

    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
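A hedged usage sketch (assuming an ARSCNView named sceneView, as in the question's snippet, and arbitrary sizes in metres): render a view to an image with the function above, then show that image on a small plane in the scene.

let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 60))
label.text = "Hello ARKit"

if let snapshot = image(with: label) {
    let plane = SCNPlane(width: 0.2, height: 0.06)   // metres
    plane.firstMaterial?.diffuse.contents = snapshot

    let node = SCNNode(geometry: plane)
    node.position = SCNVector3(0, 0, -0.5)           // half a metre in front of the world origin
    sceneView.scene.rootNode.addChildNode(node)
}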

GPUImage: How to determine an average color within the hexagon?

I'm doing video processing with GPUImage2. When the app starts, I create a hexagonal grid and add it to my cameraView. The grid is fullscreen and consists of about 100 hexagons.
In general, what I'm trying to achieve is:
For each frame I want to find an average color (in RGB, or even better, HSV) within each cell of the grid.
When the color is determined, I want to draw something in the center of each hexagon depending on its average color.
I have an array with the hexagons; each of them knows its vertices' coordinates and center.
I also have an array of UIBezierPaths which contain the bounds of these hexagons (just in case).
So my code looks like this:
class ViewController: UIViewController {

    var hexagons = [HKHexagon]()
    var hexagonsBounds = [UIBezierPath]()

    let averageColorExtractor = AverageColorExtractor()

    override func viewDidLoad() {
        super.viewDidLoad()

        do {
            camera = try Camera(sessionPreset: AVCaptureSessionPreset1920x1080)
            camera.delegate = self
            cameraView.orientation = .landscapeLeft
            camera --> cameraView
            camera.startCapture()

            drawGrid()
        } catch {
            fatalError("Could not initialize rendering pipeline: \(error)")
        }
    }
}

extension ViewController: CameraDelegate {
    func didCaptureBuffer(_ sampleBuffer: CMSampleBuffer) {
        for hexagon in hexagons {

        }
    }
}
I guess didCaptureBuffer() should be the place to apply the averageColorExtractor to each hexagon, but I don't have an idea what to do next.
I am new to iOS development, and it's the first time I've used GPUImage2. Please guide me in the right direction.
I'm not coding for your platform at all, but the GPU architecture allows you to do it like this:
1. Pass the image as a texture.
2. Render only the center points, as points.
3. In the fragment shader, compute the average color of the hexagon around the current position.
This is the hardest and most performance-demanding part. If you compute just the inscribed circle it is easy, but for a hexagon you need to compute which texel is inside and which is not. For axis-aligned hexagons you can divide the hexagon into regions (2x rectangle, 4x triangle); for rotated hexagons you need to add a transformation matrix. (A sketch of this inside test follows below.)
4. Compute/render the output inside the center point.
I do not know what your framework can do for you from this. If the stuff you render is bigger than just the center point, then you either need another pass in your render, or a bigger primitive than points in step 2, but that means you will compute the average color for each rendered pixel, which can slow things down a lot.
Take a look at a GLSL shader that uses this technique (for an entirely different task, but the technique is the same):
How to implement 2D raycasting light effect in GLSL
If this is not adaptable to your platform then ignore this answer ...
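For the "which texel is inside the hexagon" test in step 3, here is a CPU-side Swift sketch (my own illustration, not the answerer's shader), assuming flat-topped, axis-aligned hexagons given by a center and circumradius:

import CoreGraphics

// True if point p lies inside a flat-topped, axis-aligned hexagon
// with the given center and circumradius.
func isInsideHexagon(_ p: CGPoint, center: CGPoint, radius: CGFloat) -> Bool {
    // Fold into the first quadrant; the hexagon is symmetric about both axes.
    let x = abs(p.x - center.x)
    let y = abs(p.y - center.y)
    let h = radius * CGFloat(3).squareRoot() / 2   // inradius (half the hexagon's height)

    // Inside iff below the flat top edge (y <= h) and below the slanted edge
    // that runs from (radius / 2, h) down to (radius, 0).
    return y <= h && y * radius <= 2 * h * (radius - x)
}

With a test like this (or its shader equivalent) you iterate over the texels in a hexagon's bounding box, accumulate the colors of the texels that pass, and divide by the count to get that cell's average color.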

Issue with Pixi.js lineTo() using WebGL

I'm having an issue using Pixi.js where the lineTo method seems to draw lines that aren't specified. The bad lines aren't of uniform width (they seem to taper off towards the ends) and are much longer than they should be. Example jsfiddle showing the problem here:
http://jsfiddle.net/b1e48upd/1/
var stage, renderer;

function init() {
    stage = new PIXI.Stage(0x001531, true);
    renderer = new PIXI.WebGLRenderer(800, 600);
    // renderer = new PIXI.CanvasRenderer(400, 300);
    document.body.appendChild(renderer.view);

    requestAnimFrame(animate);

    graphics = new PIXI.Graphics();
    stage.addChild(graphics);

    graphics.beginFill(0xFF0000);
    graphics.lineStyle(3, 0xFF0000);
    graphics.moveTo(200, 200);
    graphics.lineTo(192, 192);
    graphics.lineTo(198, 183);
    graphics.lineTo(189, 197);
}

function animate() {
    requestAnimFrame(animate);
    renderer.render(stage);
}

init();
Using the canvas renderer gives correct results.
Trying to search for this problem, I've gathered that the WebGL renderer may have an issue with non-integer values (shown in the question here: Pixi.js lines are rendered outside dedicated area), and I've also seen that sending consecutive lineTo commands to the same coordinates will cause issues, but my example doesn't have either of those.

How to draw images to viewport in Max SDK

I want to be able to draw images to the viewport in my 3ds Max plugin.
The GraphicsWindow class has functions for drawing 3D objects in the viewport, but these drawing calls are limited by the current viewport and graphics render limits.
This is undesirable, as the image I want to draw should always be drawn no matter what graphics mode 3ds Max is in and/or what hardware is used. Further, I am only drawing 2D images, so there is no need to draw them in a 3D context.
I have managed to get the HWND of the viewport, and the Max SDK has the function
DrawIconButton();
I have tried using this function, but it does not work properly: the image flickers randomly with user interaction and disappears when there is no interactivity.
I have implemented this call in the RedrawViewsCallback function; however, DrawIconButton() is not documented and I am not sure if this is the correct way to use it.
Here is the code I am using to draw the image:
void Sketch_RedrawViewsCallback::proc(Interface* ip)
{
    Interface10* ip10 = GetCOREInterface10();
    ViewExp* viewExp = ip10->GetActiveViewport();
    ViewExp10* currentViewport;

    if (viewExp != NULL) {
        currentViewport = reinterpret_cast<ViewExp10*>(viewExp->Execute(ViewExp::kEXECUTE_GET_VIEWEXP_10));
    } else {
        return;
    }

    GraphicsWindow* gw = currentViewport->getGW();
    HWND ViewportWindow = gw->getHWnd();
    HDC hdc = GetDC(ViewportWindow);

    HBITMAP bitmapImage = LoadBitmap(hInstance, MAKEINTRESOURCE(IDB_BITMAP1));
    Rect rbox(IPoint2(0, 0), IPoint2(48, 48));

    DrawIconButton(hdc, bitmapImage, rbox, rbox, true);

    ReleaseDC(ViewportWindow, hdc);
    ip->ReleaseViewport(currentViewport);
}
I could not find a way to draw directly to the viewport window; however, I have solved the problem by using a transparent modeless dialog box.
Maybe a complete redraw will solve the issue: ForceCompleteRedraw

How can I fill an actionscript 3 polygon with a solid color?

I'm building a map editor for a project and need to draw a hexagon and fill it with a solid color. I have the shape correct but for the life of me can't figure out how to fill it. I suspect it may be due to whether the thing is a Shape, Sprite or UIComponent. Here is what I have for the polygon itself:
import com.Polygon;
import mx.core.UIComponent;

public class greenFillOne extends UIComponent {

    public var hexWidth:Number = 64;
    public var hexLength:Number = 73;

    public function greenFillOne() {
        var hexPoly:Polygon = new Polygon;
        hexPoly.drawPolygon(40, 6, 27 + (hexWidth * .25), 37, 0x499b0e, 1, 30);
        addChild(hexPoly);
    }
}
The Polygon class isn't part of the standard Adobe library, so I don't know the specifics. However, assuming that it uses the standard Flash drawing API, it should be no problem to add some code to extend the function. You just need to make sure you call graphics.beginFill before the graphics.moveTo / graphics.lineTo calls, and then finish with graphics.endFill.
e.g.,
var g:Graphics = someShape.graphics;
g.beginFill(0xFF0000,.4); // red, .4 opacity
g.moveTo(x1,y1);
g.lineTo(x2,y2);
g.lineTo(x3,y3);
g.lineTo(x1,y1);
g.endFill();
This will draw a triangle filled with .4 red.
I'll put this here because answering it as a comment to Glenn goes past the character limit. My ActionScript file extends UIComponent. When I created a variable hexPoly:Polygon = new Polygon, it would render the outline of the hex but would not fill it, no matter what I did. I examined polygon.as and duplicated the methods, but as a Sprite, and it worked. So I need to figure out how to wrap the polygon as a Sprite, or just leave it as is.
var hexPoly:Sprite = new Sprite;
hexPoly.graphics.beginFill(0x4ea50f,1);
hexPoly.graphics.moveTo(xCenter+(hexWidth*.25)+Math.sin(radians(330))*radius,offset+(radius-Math.cos(radians(330))*radius));
hexPoly.graphics.lineTo(xCenter+(hexWidth*.25)+Math.sin(radians(30))*radius,offset+(radius-Math.cos(radians(30))*radius));
hexPoly.graphics.lineTo(xCenter+(hexWidth*.25)+Math.sin(radians(90))*radius,offset+(radius-Math.cos(radians(90))*radius));
hexPoly.graphics.lineTo(xCenter+(hexWidth*.25)+Math.sin(radians(150))*radius,offset+(radius-Math.cos(radians(150))*radius));
hexPoly.graphics.lineTo(xCenter+(hexWidth*.25)+Math.sin(radians(210))*radius,offset+(radius-Math.cos(radians(210))*radius));
hexPoly.graphics.lineTo(xCenter+(hexWidth*.25)+Math.sin(radians(270))*radius,offset+(radius-Math.cos(radians(270))*radius));
hexPoly.graphics.endFill();
addChild(hexPoly);
