I have an image that can be photographed in different orientations. After the photo is taken, I have to rotate the image to an upright position. My initial plan was to put a marker in the upper-left corner of the image, find its location, and rotate accordingly. However, with this approach I would have to study object/feature extraction. Is there a simpler solution to this problem? By the way, I need to make an iOS app out of it.
Thanks in advance!
I have an app where users can scale and position images in a number of ways. They can drag an entire layer of images around, scale that layer, drag individual images around inside the layer, and scale those individual images.
For some unrelated functionality, I need to generate the image coordinates that a user is pointing to on a given image (i.e. (0,0) for the top left and (width,height) for the bottom right), independent of how much it has been moved around and scaled. Is there a built-in method for transforming an absolute mouse position to its relative position on an image (and vice versa) that takes any scaling/panning into account? I have started building my own methods for this transformation, but before I get too deep I wanted to check whether it is already built in somewhere that I'm not seeing.
Konva doesn't have such methods yet. You have to implement them manually.
You can subscribe to this related issue: https://github.com/konvajs/konva/issues/303
I have been searching the internet for the last two days and have checked many source codes, but none of them produced the result I want.
The rotated image should have perspective, but the heights of the left and right sides of the image should not change.
I want to set an image inside the laptop screen.
Please help me out. Thanks.
So you want a 2D perspective drawing of a laptop screen (on an iOS device?) with a 2D image placed on that screen, transformed so its perspective looks correct on the laptop screen, right?
What you need to do is add an image view on top of your laptop image view. Let's call it laptopScreenImageView.
Then apply a CATransform3D to laptopScreenImageView's layer.
The trick to getting 3D perspective out of a CALayer is to modify the .m34 value of the transform. Typically you set .m34 to a very small negative number, somewhere around -1/200 to -1/500. The denominator of that fraction is the z coordinate of the "eye position" for viewing the perspective image, in pixels, i.e. how many pixels "above" the image the viewer's eye should seem to be. I don't fully understand it, to be honest; I fiddle with the .m34 value until I get something that looks right.
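For example, a minimal sketch (requires QuartzCore; the 30-degree angle and the -1/500 eye distance are illustrative values to tune by eye):
CATransform3D transform = CATransform3DIdentity;
transform.m34 = -1.0 / 500.0; // perspective: the eye sits roughly 500 points "above" the layer
// Rotate around the y-axis so the image recedes like a tilted laptop screen.
transform = CATransform3DRotate(transform, 30.0 * M_PI / 180.0, 0.0, 1.0, 0.0);
laptopScreenImageView.layer.transform = transform;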
Alternately you could try adding a CATransformLayer to your laptop image view's layer, and then adding a CALayer containing your image as a sublayer of the CATransformLayer. I haven't used CATransformLayers before, but the docs say they are supposed to support layers with 3D perspective, giving you the same effect as modifying the .m34 component of a layer's transform.
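A rough, untested sketch of that alternative (layer and image names are illustrative):
CATransformLayer *container = [CATransformLayer layer]; // preserves sublayers' 3D transforms instead of flattening them
container.frame = laptopImageView.bounds;
CALayer *screenLayer = [CALayer layer];
screenLayer.frame = container.bounds;
screenLayer.contents = (__bridge id)screenImage.CGImage;
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0 / 500.0; // same perspective trick as above
screenLayer.transform = CATransform3DRotate(t, 30.0 * M_PI / 180.0, 0.0, 1.0, 0.0);
[container addSublayer:screenLayer];
[laptopImageView.layer addSublayer:container];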
Is there a way to use the overlays in UIImagePickerController to show the square picture a user might take, with a toggle button in there somewhere to switch on the fly?
Currently the iOS 7 camera has this capability, but UIImagePickerController does not (thanks Apple), so is there a way to add this functionality?
Would using AVCaptureSession be necessary? It seems like serious overkill, and I'd have to program flash/focus/etc all over again, I think.
Failing that, is there a customizable class that already exists that I could just implement?
I'm banging my head against the wall trying to figure out the best course of action here, so any help would be appreciated.
You can adjust the camera view by applying a scale transform:
CGAffineTransform transform = CGAffineTransformMakeScale(1.0, 0.5);
imagePickerController.cameraViewTransform = transform;
with the arguments being the scale factors for the x- and y-axes of the camera view. You could add a button that re-presents the camera view within the view controller at the top of your hierarchy. It may not perfectly recapitulate the native Camera app, but it will be a pretty good approximation.
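A hedged sketch of that toggle (the 4:3 preview aspect ratio and the offset math are assumptions you may need to adjust per device):
// Square mode: shift the 4:3 preview so a centered square region fills the view.
CGFloat screenWidth = CGRectGetWidth([UIScreen mainScreen].bounds);
CGFloat previewHeight = screenWidth * 4.0 / 3.0; // assumed native 4:3 camera preview
CGFloat verticalOffset = (previewHeight - screenWidth) / 2.0;
imagePickerController.cameraViewTransform = CGAffineTransformMakeTranslation(0.0, -verticalOffset);
// Full mode: reset with imagePickerController.cameraViewTransform = CGAffineTransformIdentity;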
I would probably do this with a custom cameraOverlayView that letterboxes the viewfinder. Once the picture has been taken, you'll have to crop the resulting image.
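A rough sketch of that approach (the bar geometry and the centered square crop are illustrative assumptions):
// Overlay two black bars to frame a centered square viewfinder.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
UIView *overlay = [[UIView alloc] initWithFrame:picker.view.bounds];
overlay.userInteractionEnabled = NO; // let touches reach the camera controls
CGFloat side = CGRectGetWidth(overlay.bounds);
CGFloat barHeight = (CGRectGetHeight(overlay.bounds) - side) / 2.0;
UIView *topBar = [[UIView alloc] initWithFrame:CGRectMake(0.0, 0.0, side, barHeight)];
UIView *bottomBar = [[UIView alloc] initWithFrame:CGRectMake(0.0, CGRectGetHeight(overlay.bounds) - barHeight, side, barHeight)];
topBar.backgroundColor = [UIColor blackColor];
bottomBar.backgroundColor = [UIColor blackColor];
[overlay addSubview:topBar];
[overlay addSubview:bottomBar];
picker.cameraOverlayView = overlay;
Then, in the delegate, crop the centered square out of the captured image (orientation handling omitted for brevity):
CGFloat cropSide = MIN(image.size.width, image.size.height);
CGRect cropRect = CGRectMake((image.size.width - cropSide) / 2.0, (image.size.height - cropSide) / 2.0, cropSide, cropSide);
CGImageRef cropped = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *squareImage = [UIImage imageWithCGImage:cropped scale:image.scale orientation:image.imageOrientation];
CGImageRelease(cropped);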
I was playing around with CAEmitterLayer and discovered something really weird.
I set up the CAEmitterLayer at the lower-left corner, positioned at a 45-degree angle (pointing toward the top-right corner), and tried to shoot some arrows toward the top-right corner.
Everything worked except the image that I set via the contents property of the cell.
Here is the original image on an iOS 7 device:
When run on iOS 6, it becomes like this:
Has anyone experienced this, and do you know why it happens? Keeping two sets of images and checking whether the device is running iOS 6 or iOS 7 to set the image accordingly is not a problem for me, but my curiosity urges me to find out why. Thanks in advance.
I am using Xcode 5.
This is normal behaviour for CAEmitterLayer. It uses a different coordinate system than the rest of iOS: as a technology derived from Mac OS X, its origin (0,0) is located at the bottom left, while in iOS the origin is located at the top left. When the picture gets drawn, this causes the image to be flipped. CAEmitterLayer was not really designed to use images like that; it was mostly made for particle systems that do not require a specific orientation.
The simplest solution is to flip the image yourself, so that when CAEmitterLayer flips it again it appears the way you want. This may have changed in iOS 7, so you would have to do a version check and apply the correct image.
You could also flip it in code if you wanted. Here is a short snippet that does it:
UIImage *flippedPicture = [UIImage imageWithCGImage:picture.CGImage scale:1.0 orientation:UIImageOrientationLeftMirrored];
Source: http://www.vigorouscoding.com/2013/02/particle-image-gets-mirrored-by-uikit-particle-system/
To best illustrate the issue I'm having, I created a short screen grab. Watch it here: http://cl.ly/1o3p3x2e2J1a1d3d2N1Q
Basically, as the stars are animated across the screen from right to left, they dim and brighten on their own. I'm not intending for this to happen. When you zoom in, the issue disappears.
My hunch is that this has to do with the size of the objects being drawn and pixel boundaries. Is this correct? What is the best way to go about fixing this issue?
Thanks!
---Edit---
Here's how I'm loading the texture: http://pastebin.com/RDc8x7Te
And, here's how I'm setting up OpenGL ES: http://pastebin.com/SpvAqPqA
You use nearest and linear filtering for scaling textures, neither of which is very accurate when sprites shrink and slide across pixel boundaries. You might want to use linear filtering for both minification and magnification, or better, build mipmaps. Also, if you use an orthographic projection, try aligning your geometry to pixel boundaries.
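A minimal sketch of that texture setup using the OpenGL ES 2.0 C API (starTexture is an assumed, already-created texture name; note that GLES 2.0 requires power-of-two dimensions for mipmapped textures):
glBindTexture(GL_TEXTURE_2D, starTexture);
// Build the mipmap chain so minified stars sample from prefiltered levels.
glGenerateMipmap(GL_TEXTURE_2D);
// Trilinear filtering: blend between mipmap levels when shrinking.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);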