Making a position indicator 'map' of position in a scroll view (iOS)

I have a view containing a scroll view the size of a large image, and I'd like to add a position indicator for where you are in the scroll view, like a minimap in games.
I've created a smaller view on top of the scroll view with a smaller version of the image, and I'm wondering how I could add a rectangle to my smaller view, sized and positioned according to the scroll view's zoom scale, to indicate which part of the larger image you're currently looking at. I'm targeting iOS 5+ using ARC.
Here's what I have so far. It seems to set the origin correctly, but when I zoom, the rectangle gets smaller when it should get bigger and vice versa:
int scrollxint = self.scrollView.bounds.origin.x;
int scrollyint = self.scrollView.bounds.origin.y;
int scrollxlength = self.scrollView.contentSize.width;
int scrollylength = self.scrollView.contentSize.height;
int reducedscrollxint = (scrollxint*0.05);
int reducedscrollyint = (scrollyint*0.05);
int reducedscrollxlength = (scrollxlength*0.05);
int reducedscrollylength = (scrollylength*0.05);
areaFrame.frame = CGRectMake(reducedscrollxint, reducedscrollyint, reducedscrollxlength, reducedscrollylength);
[self.mapView addSubview:areaFrame];
Any help would be much appreciated.

I had to use the zoom scale relative to the original size of the area indicator to make this work, but this code is working for me:
int scrollxint = self.scrollView.bounds.origin.x;   // current content offset
int scrollyint = self.scrollView.bounds.origin.y;
float scale = self.scrollView.zoomScale;
int reducedscrollxint = (scrollxint * 0.05);         // 0.05 maps scroll-view coordinates onto the small map view
int reducedscrollyint = (scrollyint * 0.05);
// 10 and 6.5 are the indicator's width and height at a zoom scale of 1.0
areaFrame.frame = CGRectMake(reducedscrollxint, reducedscrollyint, (10 / scale), (6.5 / scale));
[self.mapView addSubview:areaFrame];
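As an aside (this is my own generalisation, not part of the answer above): you can avoid the hard-coded 0.05 factor and the 10 x 6.5 size by deriving the indicator frame from the fraction of the content that is currently visible, which stays proportional at any zoom scale:
// Sketch: map the visible region of the scroll view onto the thumbnail.
CGRect visible = self.scrollView.bounds;        // visible area in (zoomed) content coordinates
CGSize content = self.scrollView.contentSize;   // full zoomed content size
CGSize mapSize = self.mapView.bounds.size;      // size of the small map view

CGFloat x = visible.origin.x / content.width  * mapSize.width;
CGFloat y = visible.origin.y / content.height * mapSize.height;
CGFloat w = visible.size.width  / content.width  * mapSize.width;
CGFloat h = visible.size.height / content.height * mapSize.height;

areaFrame.frame = CGRectMake(x, y, w, h);
if (areaFrame.superview != self.mapView) {
    [self.mapView addSubview:areaFrame];        // add once; afterwards just update the frame
}
Calling this from scrollViewDidScroll: and scrollViewDidZoom: keeps the indicator in sync as the user pans and zooms.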

Related

Get x,y from Top, Left, Width and Height for Corona Objects

Good day
I just started using Corona and I'm a bit confused by the x and y properties. Is it possible to get the x and y values from the Top, Left, Width and Height properties if these are provided? For example, I want an object to be at Left=10, Top=0, Width=40 and Height=40. Can someone please advise how I can do this? This could be for images, text, text fields, etc.
Of course. There are several methods to do this.
Example 1:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.anchorX = 0; myImage.anchorY = 0
myImage.x = 10 -- Left gap
myImage.y = 0 -- Top gap
localGroup:insert(myImage)
Here, setting the anchor points to (0,0) moves the image's reference point from its geometric center to its top-left corner.
Example 2:
local myImage = display.newImageRect( "homeBg.png", 40, 40)
myImage.x = (myImage.contentWidth/2) + 10
myImage.y = (myImage.contentHeight/2)
localGroup:insert(myImage)
Here, the center-X position of the image is calculated by adding the Left gap to half the image's width, and the center-Y position is calculated by adding the Top gap to half the image's height.
You can position objects with any of these methods. If you are a beginner in Corona, the following topics will be useful for learning more about displaying objects with a specific size, position, etc.:
Corona SDK : display.newImageRect()
Tutorial: Anchor Points in Graphics 2.0
Corona uses something like a Cartesian coordinate system, but the origin (0,0) is at the top left. You can read more here:
https://docs.coronalabs.com/guide/graphics/group.html#coordinates
You can also position an image against the screen edges based on its width and height using code like this (note that you should substitute your own image, size and screen variables):
-- Common screen-size helpers (adjust to your own setup)
local screenW, screenH = display.contentWidth, display.contentHeight
local screenOffsetW = display.contentWidth - display.viewableContentWidth

local image = display.newImageRect("images/yourImage.png", width, height)
-- TOP:
image.y = math.floor(display.screenOriginY + image.height * 0.5)
-- BOTTOM:
image.y = math.floor(screenH - display.screenOriginY) - image.height * 0.5
-- LEFT:
image.x = (screenOffsetW * 0.5) + image.width * 0.5
-- RIGHT:
image.x = math.floor(screenW - screenOffsetW * 0.5) - image.width * 0.5
Corona SDK display objects have attributes that can be read or set:
X = myObject.x -- gets the current center (by default) of myObject
width = myObject.width
You can set these values too:
myObject.x = 100 -- centers the object 100px from the left of the content area
By default, Corona SDK display objects are positioned by their center, unless you change the anchor point:
myObject.anchorX = 0
myObject.anchorY = 0
myObject.x = 100
myObject.y = 100
By setting the anchors to 0, .x and .y refer to the top left of the object.

How to use scanCrop property on a custom sized ZBarReaderView with an overlay

I have a ZBarReaderView embedded in a view controller and I would like to limit the area of the scan to a square in the middle of the view. I have set the resolution of the camera to 1280x720. Assuming the device is in portrait mode, I calculate the normalized coordinates using only the cameraViewBounds, which is the readerView's bounds, and the overlayViewFrame, which is the orange box's frame, as seen in this screenshot: http://i.imgur.com/xzUDHIh.png. Here is what I have so far:
self.cameraView.session.sessionPreset = AVCaptureSessionPreset1280x720;
//Set CameraView's Frame to fit the SideController's Width
CGPoint origin = self.cameraView.frame.origin;
float cameraWidth = self.view.frame.size.width;
self.cameraView.frame = CGRectMake(origin.x, origin.y, cameraWidth, cameraWidth);
//Set CameraView's Cropped Scanning Rect
CGFloat x,y,width,height;
CGRect cameraViewBounds = self.cameraView.bounds;
CGRect overlayViewFrame = self.overlay.frame;
y = overlayViewFrame.origin.y / cameraViewBounds.size.height;
x = overlayViewFrame.origin.x / cameraViewBounds.size.width;
height = overlayViewFrame.size.height / cameraViewBounds.size.height;
width = overlayViewFrame.size.width / cameraViewBounds.size.width;
self.cameraView.scanCrop = CGRectMake(x, y, width, height);
NSLog(@"\n\nScan Crop:\n%@", NSStringFromCGRect(self.cameraView.scanCrop));
As you can see in the screenshot, the blue box is the scanCrop rect, what I want to be able to do is have that blue box match the orange box. Do I need to factor in the resolution of the image or the image size when calculating the normalized values for the scanCrop?
I cannot figure out what spadix is explaining in this comment on sourceforge:
"So, assuming your sample rectangle represents screen points in portrait orientation and using the default 640x480 camera resolution, your scanCrop rectangle would be {75/w, 38/320., 128/w, 244/320.}, where w=480/(320*640.) is the major dimension of the image in screen points." and here:
"assuming your coordinates refer to points in portrait orientation, using the default camera image size, located at the default location and taking into account that the camera image is rotated:
scanCrop = CGRectMake(50/426., 1-(20+250)/320., 150/426., 250/320.)"
He uses the values 426 and 320 which I am assuming have to do with the image size but in one of his comments he mentions that the resolution is 640x480. How do I factor in the image size to calculate the correct rect for scanCrop?
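For what it's worth, here is one way to read those comments as code. This is an interpretation on my part, not a verified answer: the assumption is that the landscape camera image is rotated 90° relative to the portrait screen, so the overlay's y values are normalized against the long side of the displayed image in screen points (preview width × 1280/720 here, or 426.67 pt for the default 640x480 image on a 320 pt wide view) and its x values are normalized, flipped, against the short side. Plugging an overlay frame of {20, 50, 250, 150} and the default 640x480 image into this reproduces the second quoted rect.
// Unverified sketch: interpret the overlay frame (screen points, portrait)
// in the rotated, normalized image space that scanCrop expects.
CGRect overlayViewFrame = self.overlay.frame;
CGFloat previewWidth = self.cameraView.bounds.size.width;          // short side of the displayed image, in points
CGFloat displayedImageHeight = previewWidth * (1280.0 / 720.0);    // long side, assuming the 1280x720 preset

// Swap the axes (the image is rotated) and flip the one that runs backwards.
CGFloat cropX = overlayViewFrame.origin.y / displayedImageHeight;
CGFloat cropY = 1.0 - CGRectGetMaxX(overlayViewFrame) / previewWidth;
CGFloat cropW = overlayViewFrame.size.height / displayedImageHeight;
CGFloat cropH = overlayViewFrame.size.width / previewWidth;
self.cameraView.scanCrop = CGRectMake(cropX, cropY, cropW, cropH);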

position view relative to tab indicator

I've been given a UI that uses a UITabBarController, and in one of the tabbed view controllers a view (an arrow as a UIImageView) needs to be positioned relative to the tab indicator (the arrow needs to point down at the indicator, centered along the x-axis). I used basic math to emulate the position based on known variables, and it works properly for iPhone/iPod touch in both landscape and portrait orientations, but fails on iPad (it's a little too far right in both orientations; it seems worse in landscape, but that might just be my perception).
Here's what I used:
int visibleWidth = self.view.frame.size.width;
int tabBarSize = self.tabBarController.tabBar.frame.size.width;
int tabSize = tabBarSize / [self.tabBarController.viewControllers count];
int tabPosition = 2; // hint should point at the 3rd tab
int arrowIconWidth = hintIcon.image.size.width;
int arrowIconHeight = hintIcon.image.size.height;
int tabEmulationPosition = visibleWidth / 2 - tabBarSize / 2;
int tabOffsetPosition = tabEmulationPosition + ( tabSize * tabPosition );
int iconPosition = tabOffsetPosition + ( tabSize / 2 - arrowIconWidth / 2 );
Is there a better way than trying to "fake" the math (maybe a getBoundingClientRect-type method)? Or is the math approach the best bet, assuming there's a correctable flaw in what I've got?
TYIA
Your math seems all right, except for one caveat: Core Graphics distances are all of type CGFloat, not int. Remember that integer division works differently in C than you might expect:
int x = 5 / 2; // x == 2, not 2.5
Thus, replace your ints with CGFloats and divide by 2.0 rather than 2.
Also, on the iPad the tab buttons are not necessarily spaced evenly across the whole bar but have a certain padding to the left and right. You would have to fiddle to find a suitable constant, or experimentally find a reliable way to calculate it based on the number of view controllers.
Assuming a fixed tab bar button width, you could calculate an offset to add to the x of your image:
CGFloat tabSize = FIXED_TAB_BUTTON_WIDTH * self.tabBarController.viewControllers.count;
CGFloat offset = (tabBarSize - tabSize) / 2.0;
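Folded back into the question's calculation, that might look something like the following (a sketch only; FIXED_TAB_BUTTON_WIDTH is a constant you would have to determine for the iPad):
// Use the fixed per-button width plus the computed padding offset instead of
// dividing the whole bar width by the number of tabs.
CGFloat tabOffsetPosition = tabEmulationPosition + offset
                          + FIXED_TAB_BUTTON_WIDTH * tabPosition;
CGFloat iconPosition = tabOffsetPosition
                     + (FIXED_TAB_BUTTON_WIDTH / 2.0 - arrowIconWidth / 2.0);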

Scaling sprites (not textures) for target viewport size/device in MonoGame

When you have to display a series of visual components (sprites) within a game, each with a literal height and width that needs to be relative to the height and width of the viewport (not necessarily the aspect ratio) of the target device:
Is there a scaling class to help come up with scaling ratio in a dynamic fashion based on current device viewport size?
Will I need to roll my own scaling ratio algorithm?
Any cross platform issues I should be aware of?
This is not a question about loading assets based on the target device, nor about how to perform the scaling of the sprite (which is described here: http://msdn.microsoft.com/en-us/library/bb194913.aspx), but rather about how to determine the scale of sprites based on viewport size.
You can always create your own implementation of scaling.
For example, the default target viewport dimensions are:
const int defaultWidth = 1280, defaultHeight = 720;
And your current screen dimensions are 800×600, which gives you (let's use a Vector2 instead of two floats):
int currentWidth = GraphicsDevice.Viewport.Width,
    currentHeight = GraphicsDevice.Viewport.Height;
// Cast to float so the division isn't integer division (which would yield 0).
Vector2 scale = new Vector2((float)currentWidth / defaultWidth,
                            (float)currentHeight / defaultHeight);
This gives you {0.625; 0.83333}. You can now use this in a handy SpriteBatch.Draw() overload that takes a Vector2 scaling parameter:
public void Draw (
Texture2D texture,
Vector2 position,
Nullable<Rectangle> sourceRectangle,
Color color,
float rotation,
Vector2 origin,
Vector2 scale,
SpriteEffects effects,
float layerDepth
)
Alternatively, you can draw all your stuff to a RenderTarget2D and copy the resulting image from there to a stretched texture on the main screen, but that will still require the above SpriteBatch.Draw() overload, though it might save you time if you have lots of draw calls.
Another Option to generate the scale would be to leverage:
var scaleMatrix = Matrix.CreateScale(
    (float)GraphicsDevice.Viewport.Width / View.Width,
    (float)GraphicsDevice.Viewport.Height / View.Height, 1f);
http://msdn.microsoft.com/en-gb/library/bb195692.aspx
But this did not meet my needs, as I would then have to roll my own transform to map touch input location to the 'transformed' sprites (which respond to user touch input by knowing their own position and size).
In the end I used a percentage based approach.
I basically got the viewport height and width...
GraphicsDevice.Viewport.Width
GraphicsDevice.Viewport.Height
...then calculated the height and width of my sprites (note: as mentioned in the question, they take a literal height and width) based on their size relative to the screen, using percentages.
// I want the button's height and width to be 20% of the viewport
var x = GraphicsDevice.Viewport.Width * 0.2f; // 20% of screen width
var y = GraphicsDevice.Viewport.Width * 0.2f; // same value, so the button stays square
var btnSize = new Vector2(x, y);
var button = new GameButton(btnSize);
Then, once I have the size of the button, I can calculate the position on screen at which to render it, based on the size of the button and the available viewport size, again working with relative positions based on percentages.

mapRectThatFits: What does it do?

I don't understand mapRectThatFits in the slightest. Here is a simple line of code:
MKMapRect zoomRectNorm = [mapView mapRectThatFits:zoomRect];
// BREAKPOINT HERE
Now let's look at the debugger.
Print zoomRect:
(lldb) p zoomRect
(MKMapRect) $1 = {
  (MKMapPoint) origin = {
    (double) x = 4.2997e+07
    (double) y = 9.36865e+07
  }
  (MKMapSize) size = {
    (double) width = 26493.1
    (double) height = 148685
  }
}
Print zoomRectNorm:
(lldb) p zoomRectNorm
(MKMapRect) $2 = {
  (MKMapPoint) origin = {
    (double) x = 4.29283e+07
    (double) y = 9.36379e+07
  }
  (MKMapSize) size = {
    (double) width = 163840
    (double) height = 245760
  }
}
So it adjusted the aspect ratio to 2:3 but it did not maintain the width, the height, or the origin!?
According to the documentation it should return:
A map rectangle that is still centered on the same point of the map
but whose width and height are adjusted to fit in the map view’s
frame.
What's the deal? I would expect it to maintain the origin (as stated in the docs) and at least one of the width/height?
mapRectThatFits will zoom out until it hits a zoom level that can contain your region, so that the tiles are displayed at their native resolution.
It gives you back the map rect that you would get if you used setVisibleMapRect: on the map view. The center should be the same; the origin probably won't be. You'll have to think about the difference between origin and center to understand why. The other thing to understand is that, although you ask for a specific map rect to be set, the map view will always set its own idea of what is best, namely the one that allows it to display tiles without zooming in or out.
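To see the "same center" claim in the numbers above, you can compare the midpoints of the two rects; here is a quick sketch using MapKit's MKMapRectGetMidX/MKMapRectGetMidY helpers, and with the values printed above both midpoints come out around (4.301e+07, 9.376e+07) even though origin and size differ:
// Compare the centers of the requested rect and the fitted rect.
MKMapPoint centerBefore = MKMapPointMake(MKMapRectGetMidX(zoomRect),
                                         MKMapRectGetMidY(zoomRect));
MKMapPoint centerAfter  = MKMapPointMake(MKMapRectGetMidX(zoomRectNorm),
                                         MKMapRectGetMidY(zoomRectNorm));
NSLog(@"center before: %@  after: %@",
      MKStringFromMapPoint(centerBefore), MKStringFromMapPoint(centerAfter));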
