I have an application where an image moves towards the eye (along the z axis).
When the image is enlarged, I would like it to be resized to 635 px, where the initial size was 220 px. The image's starting position on the z axis is 0. I want to calculate the distance from the starting position to the point where the image reaches the resized size.
I have already calculated the distance by hand, but when I tried to put it into Flash, the result is not what I wanted. I am sure that the value I calculated was correct.
I know it may be hard to understand my problem. Please help.
Do you have to use 3D? To make an image larger, you merely need to change its scaleX and scaleY properties.
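If you do want to keep the 3D motion and compute the z distance, here is a rough sketch of the arithmetic, assuming Flash's default PerspectiveProjection, where the on-screen scale is approximately focalLength / (focalLength + z):

scale = targetWidth / initialWidth = 635 / 220 ≈ 2.886
z = focalLength / scale - focalLength = focalLength * (220 / 635 - 1) ≈ -0.65 * focalLength

A negative z moves the object towards the viewer, which matches the enlargement; the exact value depends on the focalLength (or fieldOfView) your projection uses.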
I have to paint some pictures for a small display on a microcontroller. The display has a resolution of 128x64 pixels, but the pixels aren't square: they have a width of 0.5 mm and a height of 0.75 mm. All my nicely drawn images from GIMP look ugly on this display.
Can I change the aspect ratio of the drawn pixels in GIMP so I can see the image the same way as on my microcontroller screen? Is there a setting for this, or do I need to use my imagination?
I've looked around in the settings menu but found nothing...
Thanks in advance.
PS: Wrong network?
Use Image>Print Size to set a different definition for the vertical and horizontal axes (don't forget to "unlink" the two entry fields, otherwise changing one will change the other).
Then untick View>Dot for Dot so that GIMP no longer maps image pixels to screen pixels and instead displays the image with its intended definition (and, in your case, aspect ratio).
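For example, with the 0.5 mm x 0.75 mm pixels described above, the arithmetic (just the conversion, not a GIMP setting name) works out to roughly:

horizontal: 1 px / 0.5 mm = 2 px/mm ≈ 50.8 pixels/inch
vertical: 1 px / 0.75 mm ≈ 1.33 px/mm ≈ 33.9 pixels/inch

Enter something like those values as the X and Y resolution in Image>Print Size and the on-screen preview should match the display's proportions.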
Is there a method on UIImageView that tells me the position of its image within its bounds? Say I have an image of a car, like this:
This image is 600x243, and, where the rear wheel should be, there's a hole which is 118,144,74,74 (x,y,w,h).
I want to let the user see different rear wheel options, and I have a few wheel images to choose from (all square, so they are easily scaled to match the hole in the car).
I wanted to place the car image in a UIImageView whose size is arbitrary based on layout, and I wanted to see the whole car at the natural aspect ratio. So I set the image view's content mode to UIViewContentModeScaleAspectFit, and that worked great.
For example, here's the car in an imageView that is 267x200:
I think doing this scaled the image from w=600 to w=267, i.e. by a factor of 267/600 ≈ 0.445, and (I think) that means the height changed from 243 to 243*0.445 ≈ 108. And I think it's true that the hole was scaled by that factor, too.
But I want to add a UIImageView subview to show the wheel, and this is where I get confused. I know the image size, I know the image view size, and I know the hole's frame in terms of the original image size. How do I get the hole's frame after the image is scaled?
I've tried something like this:
determine the position of the car image in its UIImageView. That's something like:
CGFloat ratio = carImageView.frame.size.width / carImage.size.width; // 267/600 ≈ 0.445
CGFloat yPos = (carImageView.frame.size.height - carImage.size.height * ratio) / 2; // vertical letterbox offset (there should be a method for this?)
determine the scaled frame of the hole:
CGFloat holeX = ratio * 118;           // scaled x of the hole
CGFloat holeY = yPos + ratio * 144;    // letterbox offset plus scaled y
CGFloat holeEdge = ratio * 74;         // scaled width/height (the hole is square)
CGRect holeRect = CGRectMake(holeX, holeY, holeEdge, holeEdge);
But there must be a better way. These calculations (if they are right) only work for a car image view that is taller than the car; the code needs to be different if the image view is relatively wider.
I think I can work out the logic for the wider case, but it still might be wrong. For example, that yPos calculation: do the docs say that, for contentMode = UIViewContentModeScaleAspectFit, the image is centered along the larger dimension? I don't see that anywhere.
Please tell me there's a better way, or, if not, whether my approach here will work for arbitrarily sized images, image views, and holes.
Thanks.
The easiest solution (by far) is to simply use the same sizes for both the car image and the wheel option images.
Just give the wheel options a transparent padding (easy to do in nearly every graphics editing program), and overlay them over the car with the same frame.
You may increase your asset sizes by a minuscule amount, but it'll save you one hell of a headache trying to work out positions and sizes, especially as you're scaling the car image.
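If you do need the general calculation for arbitrary image and view sizes, a sketch along these lines should work (Swift, using AVFoundation's AVMakeRect to compute the aspect-fit rect; holeFrame is just a hypothetical helper name, not an existing API):

import AVFoundation
import UIKit

// Hypothetical helper: converts a rect expressed in the image's own pixel
// coordinates into the coordinate space of an aspect-fit UIImageView.
func holeFrame(in imageView: UIImageView, imageRect: CGRect) -> CGRect? {
    guard let image = imageView.image else { return nil }
    // The rect the image actually occupies inside the view under aspect-fit
    let fitted = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    let scale = fitted.width / image.size.width
    return CGRect(x: fitted.minX + imageRect.minX * scale,
                  y: fitted.minY + imageRect.minY * scale,
                  width: imageRect.width * scale,
                  height: imageRect.height * scale)
}

// e.g. the wheel hole from the question:
// let wheelFrame = holeFrame(in: carImageView,
//                            imageRect: CGRect(x: 118, y: 144, width: 74, height: 74))

This should handle both the taller and the wider case, since AVMakeRect centers the fitted rect along whichever axis has leftover space.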
I've created a simple 7-second clip which uses a standard FCPX plugin: the "Bold Fin" title.
While I am editing this clip, everything fits the screen:
Everything is also fine while I'm exporting the clip to the master file:
But when the actual file is ready, it appears to be cropped:
Could somebody please help me find the reason why my output is cropped, and how to fix this issue?
Judging by the image you provided, you have non-square pixels in your footage or in the project settings (or the footage's aspect ratio isn't 16:9). I took a screenshot of the image inside FCP's canvas and found that you have rectangular pixels stretched along the X axis instead of square ones.
Seemingly, FCPX, trying to compensate for the pixel aspect ratio in a Full HD export (PAR = 1.0, AR = 16:9), stretched the pixels along the Y axis, which led to the cropping.
I want to use PDF vector images in my app, but I don't totally understand how they work. I understand that a PDF file can be resized to any size and it will retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say to thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's centre.
The result of this can be pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may end up on a non-integral value, such as x = 10.5, whereas it will display correctly if x = 10.
Rendering views positioned a fraction of a pixel off the grid will result in jagged lines; I think it's related to aliasing.
Therefore wrap the CGRect of the frame with CGRectIntegral() to convert your frame's origin and size values to integers.
Example (Swift):
imageView?.frame = CGRectIntegral(CGRectMake(10, 10, 100, 100))
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
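For what it's worth, in more recent Swift versions the same rounding can be written with CGRect's integral property:
imageView?.frame = CGRect(x: 10, y: 10, width: 100, height: 100).integral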
I have a nested video like this:
Live camera feed
When the user takes a photo, the image is offset along the y axis
Captured Still image
I do want to capture the WHOLE image and let the user scroll up and down. They can currently do this, but I want the starting scroll position of the image to be centered so that it matches the camera feed preview. So if they take a picture, the image lines up with the frame the video feed was showing.
The problem is that, because the preview layer's video gravity is set to AVLayerVideoGravityResizeAspectFill, it's doing some 'cropping' to fit the image into the live preview. Since the height is much bigger than the width, there are top and bottom parts captured in the image that are NOT showing up in the live feed (naturally).
What I don't know, however, is how much is being cropped from the top, so I can offset the previewed image to match.
So my question is: do you know how to calculate how much is being cropped from the top of the camera image when the video gravity is set to AVLayerVideoGravityResizeAspectFill? (Objective-C and Swift answers welcome!)
The solution I came up with is this:
func getVerticalOffsetAdjustment() -> CGFloat
{
    // Returns the cropped aspect-ratio box (normalized 0...1) so you can use its offset position
    let cropRect: CGRect = _videoPreviewLayer.metadataOutputRectOfInterestForRect(_videoPreviewLayer.bounds)

    // Because the camera is rotated by 90 degrees, you need to use .x for the actual y value when in portrait mode
    return cropRect.origin.x / cropRect.width * frame.height
}
It's confusing, I admit, but because the camera is rotated 90 degrees when in portrait mode, you need to use the width and x values. The cropRect will return a value like (0.125, 0, 0.75, 1.0) (your exact values will be different).
What this tells me is that the y value the live video feed shows is shifted down by 12.5% of the total height, and that the visible height of the video feed is only 75% of the total height.
So I take 12.5% and divide by 75% to get the value normalized (to my UIWindow), and then apply that amount to the scroll view offset.
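For example (scrollView here is just a hypothetical property name for whatever scroll view shows the captured still):

let offsetY = getVerticalOffsetAdjustment()
scrollView.setContentOffset(CGPoint(x: 0, y: offsetY), animated: false)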
WHEW!!!