How to detect if an image is square? [closed] - image-processing

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
I am making a desktop image editing software using Processing. It will allow the user to select the image to edit. The area in which the user can do the editing is a fixed 640 x 480 screen.
This means that I will have to scale the input image to fit the screen. That is easy to do with rectangular images. The problem arises when dealing with square images.
Programmatically, 2500 x 2501 is not a square image. For all practical purposes, it is.
How do I make sure that I properly scale these images?

Calculate the aspect ratio (width / height, or vice versa). I suggest dividing whichever dimension is smaller by the other one, so you always get a number no greater than one.
Then define a threshold between 0 and 1. If the division gives a result smaller than the threshold, you can consider the image non-square.

Something along these lines...
// (here using the inverse convention: larger / smaller, so ratio >= 1)
float ratio = 1;
if (height > width) {
    ratio = (float) height / width;   // cast avoids integer division
} else {
    ratio = (float) width / height;
}
float thresholdVal = 0.1; // allow up to 10% deviation from square
if ((ratio - 1) > thresholdVal) {
    // Not square enough -- treat as rectangular.
}
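The same ratio test can also drive the scaling step from the original question. A minimal Python sketch (function names are my own, not from the question), assuming a 640 x 480 editing area:

```python
def is_roughly_square(w, h, tolerance=0.1):
    """True if the longer side exceeds the shorter by at most `tolerance`.

    2500 x 2501 counts as square; 640 x 480 does not.
    """
    ratio = max(w, h) / min(w, h)
    return (ratio - 1) <= tolerance

def fit_scale(w, h, max_w=640, max_h=480):
    """Uniform scale factor that fits a w x h image inside max_w x max_h
    while preserving the aspect ratio."""
    return min(max_w / w, max_h / h)
```

Because `fit_scale` takes the minimum of the two per-axis scales, a near-square image like 2500 x 2501 ends up scaled by its height (480 / 2501) and simply letterboxes a sliver on each side; no special-casing of "square" images is actually needed for correct scaling.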

Related

get subImage from image [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 4 years ago.
I have the following image,
and from it I need to use separate images like the ones below.
I don't know what kind of functionality would work here.
I don't want to just crop the image in Photoshop or anything like that. I know there is some way to achieve this, but I don't know how to get the sub-image.
I searched for this somewhere long ago, but now I can't find it again.
Could you please help me with this?
I've already visited here.
Swift 4
If I understood correctly you would want something like this:
[YOUR_FIRST_IMAGE]
let image = UIImage(named: "[YOUR_FIRST_IMAGE]")
let fromRect = CGRect(x:[OFFSET_HERE], y:0,width:[WIDTH_OF_EACH_ICON],height:[HEIGHT_OF_EACH_ICON])
let croppedImageFromRect = image?.cgImage!.cropping(to: fromRect)
let dottedCircleGreenImage = UIImage(cgImage: croppedImageFromRect!)
imageView.image = dottedCircleGreenImage
To select a different sub-image from this collection ([YOUR_FIRST_IMAGE]), you have to offset the x value (in the CGRect) by the width of each icon multiplied by (the position of the image you want minus 1).
For example, to select the fifth one: get the width of a single icon and multiply it by 4 (position 5 minus 1) to get the x offset.
Notes:
imageView in the above example is some outlet or other reference to a UIImageView in a view.
Force-unwrapping variables is not good practice; it is done here only for the example.
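Since the icons sit in a single row, the crop rectangle is pure arithmetic. A language-agnostic sketch of the offset calculation (written in Python; `icon_rect` is a hypothetical helper, not part of UIKit):

```python
def icon_rect(position, icon_w, icon_h):
    """Crop rectangle (x, y, width, height) for the 1-based
    `position`-th icon in a horizontal strip of equal-size icons."""
    x = icon_w * (position - 1)  # skip the icons before this one
    return (x, 0, icon_w, icon_h)

# e.g. for 32 x 32 icons, the fifth icon starts at x = 128
print(icon_rect(5, 32, 32))  # -> (128, 0, 32, 32)
```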

iOS SpriteKit Get All nodes in a current scene and make the application universal. [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Hi, I am new to the Sprite Kit framework and I just wanted to know two things:
1) How do I get all the SKSpriteNode objects in the current scene?
2) How do I make the application universal, i.e. so it will run on the 3.5" iPhone, the 4" iPhone, and the iPad?
Thanks in advance.
In Scene.m, you can use self.children to get all of the scene's child nodes (which includes the SKSpriteNode objects).
http://www.raywenderlich.com/49695/sprite-kit-tutorial-making-a-universal-app-part-1
While the second part of your question is too broad, I can understand where you're coming from; it's a bit overwhelming trying to figure out how to support all of the different screen sizes. For other new developers who come across a similarly broad question: if you are simply looking to have your scene fit the different screen resolutions, and for the time being want to ignore multiple image resolutions and asset catalogs, one place to start is with the size of your scene and its scale mode.
In your view controller you could do something like this. It's a bit heavy-handed, but designed to show the distinctions:
//we'll set what we want the actual resolution of our app to be (in points)
//remember that points are not pixels, necessarily
CGSize sceneSize = CGSizeMake(320, 568);
//check if this is an ipad, this means the screen has 2x the points
//it also means that the resolution is different than the iphone 5
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
sceneSize = CGSizeMake(sceneSize.width * 2, sceneSize.height * 2);
}
SKView *skView = (SKView*)self.view;
self.mainPrimaryScene = [[PrimaryScene alloc] initWithSize: sceneSize forSKView:skView];
//by setting the scale mode to fit, it takes the size of the scene
//and ensures that the entire scene is rendered on screen, in the correct ratio
self.mainPrimaryScene.scaleMode = SKSceneScaleModeAspectFit;
if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad)
{
//you can check out all the other scale modes if you want to fill, etc
self.mainPrimaryScene.scaleMode = SKSceneScaleModeResizeFill;
}
[skView presentScene:self.mainPrimaryScene];
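For reference, the letterboxing that SKSceneScaleModeAspectFit performs boils down to picking the smaller of the two per-axis scales. A rough sketch of that math (the function name is mine, not a SpriteKit API):

```python
def aspect_fit_scale(scene_w, scene_h, view_w, view_h):
    """Scale applied by an aspect-fit mode: the whole scene stays
    visible, letterboxed on the axis whose ratio doesn't match."""
    return min(view_w / scene_w, view_h / scene_h)

# A 320 x 568 (points) scene on a 640 x 1136 (pixels) view scales 2x
print(aspect_fit_scale(320, 568, 640, 1136))  # -> 2.0
```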

Adding two images and make one [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I am new to programming iOS apps, and I've run into this problem: I have two images in the app that I want to post to Twitter, but Twitter only accepts one picture per upload. So I came up with the idea of combining Image A and Image B into a single image. I already have the posting and resizing done; I just need help merging Image A and Image B into one image. Can anyone help? Thank you.
Here is some code you can use...
Assumptions:
You have your two images already initialized
The two images have the same dimension. If your images have different sizes you'll have to play around with the dimensions yourself.
UIImage *girl1 = INITIALIZE_HERE;
UIImage *girl2 = INITIALIZE_HERE;
// Get the size of your images (assumed to be the same)
CGSize size = [girl1 size];
// Create a graphic context that you can draw to. You can change the size of the
// graphics context (your 'canvas') by modifying the parameters inside the
// CGSizeMake function.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(size.width*2, size.height), NO, 0);
// Draw the first image to your newly created graphic context
[girl1 drawAtPoint:CGPointMake(0,0)];
// Draw your second image to your graphic context
[girl2 drawAtPoint:CGPointMake(size.width,0)];
// Get the new image from the graphic context
UIImage *theOneImage = UIGraphicsGetImageFromCurrentImageContext();
// Get rid of the graphic context
UIGraphicsEndImageContext();
Hope this helps!
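The same side-by-side composition can be sketched without UIKit by treating each image as rows of pixels. This toy Python version (my own illustration, not the answer's code) mirrors drawing the second image at x equal to the first image's width:

```python
def stitch_horizontal(img_a, img_b):
    """Join two images of equal height side by side.

    Each image is a list of rows; each row is a list of pixel values.
    Concatenating each pair of rows is the grid equivalent of drawing
    img_b at x = width of img_a on a double-width canvas.
    """
    if len(img_a) != len(img_b):
        raise ValueError("images must have the same height")
    return [row_a + row_b for row_a, row_b in zip(img_a, img_b)]
```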

Detect end of image [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I have a bunch of UIImages that consist of black and white book pages. I want to be able to "crop" or "cut" the image based on where the page ends (where the white space begins). To illustrate what I mean, look at the image below. I want to crop the image programmatically right after the word "Sie".
I am not sure how to go about this problem. I have given it some thought; perhaps I could detect where the black pixels stop, since the pages will always be black and white, but I'm not sure how to properly do this. Can anyone offer any insight or tell me how this may be done?
Thank you so much!
Here's some python code I wrote using the Python Imaging Library. It finds the lowest black pixel, and then crops the image five pixels down from that y value.
from PIL import Image  # modern Pillow import; the old "import Image" no longer works

img = Image.open("yourimage.fileformat")
width, _ = img.size
lowestblacky = 0
for i, px in enumerate(img.getdata()):
    if px[:3] == (0, 0, 0):
        y = i // width  # integer row index of this pixel
        if y > lowestblacky:
            lowestblacky = y
img.crop((0, 0, width, lowestblacky + 5)).save("yourimagecropped.fileformat")
Python is available on nearly all operating systems, so I hope you'll be able to use this. If you want to crop the image right after the last black pixel, simply remove the "+5" from the last line, or change the value to your liking.
Convert the image into pixel values. Start with the bottom line of pixels and look for black pixels. If any are found, that line is the end of the page content; if there is no black pixel, continue to the next line up.
Actually, looking for a pure "black" pixel is probably too simplistic. A good solution would use a grayscale limit (every image has some noise), or better, a change in pixel contrast. You can also apply an image filter (e.g. edge detection or sharpen) before starting the detection algorithm.
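The bottom-up scan described above can be sketched like this, in pure Python over a grayscale pixel grid (the threshold value of 128 is my own assumption, chosen to absorb scanner noise):

```python
def content_bottom(pixels, threshold=128):
    """Index of the lowest row containing a 'dark' pixel, or -1 if the
    page is entirely white.

    `pixels` is a list of rows of grayscale values (0 = black,
    255 = white). Scanning from the bottom means we stop at the first
    line of text we meet, which is the crop boundary.
    """
    for y in range(len(pixels) - 1, -1, -1):
        if any(v < threshold for v in pixels[y]):
            return y
    return -1
```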

How to mimic iOS Maps application for an iPad based survey/diagram application where instead of a map I have a image (building or site plan) [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I am starting a development for an iPad application to help surveyors perform building/site surveys. The basic scenario is that the application will present a plan of a selected building or site (which can be manipulated in the usual iPad way - zoom, rotate, pan). The surveyor (user) will then be able to select pins to drop on the plan which will have pop-outs for entering survey details against the pin. Think iOS Maps app but with a plan/diagram instead of a map and (I guess) using the images co-ordinates rather than geo info.
I have the plan loading into a UIImageView and the various gestures are all working smoothly, and I'm now at the big issue of how I'm going to have the pins 'attached' or locked to the plan as it gets zoomed/rotated/panned (and saved/restored). I have played around with a few ideas, such as adding pins as UIImageView objects to the base plan image with addSubview. This works in that the pins are locked into position on the base image, but they are then also scaled and rotated along with the base image when gestures are used. The kinds of questions I have with this approach are: How do I track the pin locations relative to the base image (plan/diagram) while the base image is zoomed and rotated? How do I keep the pins from scaling (always the same size)? How do I retain the pins' orientation? I am concerned that I may go down a track that ends up with performance or display issues when dealing with many pins (hundreds).
Again, with iOS Maps application (or google maps) in mind, I am seeking some guidance as to best approach for this at higher level but obviously the more detail and specifics the better!
Many thanks in advance.
Michael.
This problem was solved in another question I subsequently asked which can be found here... iOS - Math help - base image zooms with pinch gesture need overlaid images adjust X/Y coords relative
Basically, I used a scrollView instead and used the following calculations for positioning x, y...
newX = (pin.baseX * scrollView.zoomScale) +
(((pin.frame.size.width * scrollView.zoomScale) -
pin.frame.size.width) / 2);
newY = (pin.baseY * scrollView.zoomScale) +
((pin.frame.size.height * scrollView.zoomScale) -
pin.frame.size.height);
Note: 'pin' is a custom object that inherits from UIImageView which has properties 'baseX' and 'baseY' where I store the original x/y coordinates of the pin at zoomScale 1.0. See the link above for sample of my full implementation. Thanks.
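In plain arithmetic, the formula above looks like this (a Python transcription of the answer's calculation, not UIKit code):

```python
def pin_position(base_x, base_y, pin_w, pin_h, zoom_scale):
    """Screen position of a pin whose unscaled coordinates are
    (base_x, base_y).

    The correction terms compensate for the pin keeping its own size
    while the plan scales: centred horizontally, anchored at its
    bottom edge vertically (so the pin's tip stays on its spot).
    """
    new_x = base_x * zoom_scale + ((pin_w * zoom_scale) - pin_w) / 2
    new_y = base_y * zoom_scale + ((pin_h * zoom_scale) - pin_h)
    return (new_x, new_y)
```

At zoom scale 1.0 the correction terms vanish and the pin sits exactly at its stored base coordinates, which is a quick sanity check on the formula.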
You have to work out the coordinates for the pins mathematically rather than using pixels. For example, if your image were 100 x 100 and you wanted to put the pin at the centre point of the map, rather than setting the x and y coordinates to (50, 50), you would set them to (image.size.width/2, image.size.height/2), because if you then scaled the view down, the pin would remain at the centre point.
Obviously this is a fairly basic example but the same logic can be applied wherever you put the pin on the image.
Hope this helps.
OR
You could work out your new coordinates from how much the image is being scaled. If the image was again 100 x 100 and you scaled it down to 50 x 50, and you wanted your pin to remain in the same place, you would do (x / (100/50), y / (100/50)).
The formula would be:
x = x / (original width / new width)
y = y / (original height / new height)
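As a quick sketch of that formula (the function name is mine):

```python
def rescale_point(x, y, orig_w, orig_h, new_w, new_h):
    """Map a point on an orig_w x orig_h image to the equivalent
    point on the same image resized to new_w x new_h."""
    return (x / (orig_w / new_w), y / (orig_h / new_h))

# centre of a 100 x 100 image stays the centre after scaling to 50 x 50
print(rescale_point(50, 50, 100, 100, 50, 50))  # -> (25.0, 25.0)
```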
