I've created a simple 7-second clip that uses a standard FCPX title plugin: "Bold Fin".
While I am editing this clip, everything fits on the screen:
Everything is also fine when I start exporting the clip to a master file:
But when the actual file is ready, it appears to be cropped:
Could somebody please help me find the reason why my output is cropped, and how to fix this issue?
Judging by the image you provided, you have non-square pixels in your footage or in your project settings (or the footage's aspect ratio isn't 16:9). I took a screenshot of the image inside FCPX's canvas and found that you have rectangular pixels stretched along the X axis instead of square ones.
Seemingly FCPX, trying to compensate for the pixel aspect ratio in the Full HD export (PAR = 1.0, AR = 16:9), stretched the pixels along the Y axis, which led to the cropping.
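To make the mechanism concrete, here is a minimal sketch of the PAR arithmetic (Swift, with illustrative numbers, not values taken from your project):

import CoreGraphics

// Display width = storage width * PAR; the height is unchanged.
func displaySize(storage: CGSize, par: CGFloat) -> CGSize {
    CGSize(width: storage.width * par, height: storage.height)
}

// Square-pixel Full HD: 1920x1080 at PAR 1.0 already displays as 16:9.
let fullHD = displaySize(storage: CGSize(width: 1920, height: 1080), par: 1.0)

// Anamorphic HD: 1440x1080 at PAR 4/3 also displays as 1920x1080 (16:9).
let anamorphic = displaySize(storage: CGSize(width: 1440, height: 1080), par: 4.0 / 3.0)

// If the exporter assumes the wrong PAR, it has to rescale one axis to reach
// 16:9, and content pushed past the frame edges by that rescale gets cropped.

So the fix is usually to make sure the footage is interpreted with the PAR it was actually shot with, and that the project itself is set to square-pixel 1920x1080.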
I am running the sample project for ARKit and I am trying to add my own image. There is some weird behaviour: the image is loaded at a different size than the one set in the asset inspector:
As can be seen, the image is (almost) A5 size (14.8 x 19.7 cm), but it is loaded by ARKit at 14.8 x 11.1 cm. The metadata of the picture also has a height greater than the width: 3024 x 4032. The image is a JPG.
Note that no changes were made to the code of Apple's sample project, which I am running as-is.
Another note: the app detects the image successfully, but the detected area on the piece of paper is clipped to the dimensions printed in the console.
A third note: I observed that the 11.1 cm dimension would be obtained if the image were rotated 90 degrees (i.e. if the height were taken as 14.8 cm, the width would come out as 11.1 cm). This is odd, since nothing in the setup points to such a rotation. I also double-checked my image, but there seems to be nothing wrong with it.
The question is: is there anything I am missing, or anything I have to consider or do with my sample image? Is this a known issue, or is there a workaround to make ARKit load my image with the expected dimensions?
Update
The source of the Apple's sample project: https://developer.apple.com/documentation/arkit/detecting_images_in_an_ar_experience
An extra side note: I have only experienced this issue with a single image file. It was an image taken by an iPhone and converted from HEIC to JPG using an online converter (which I cannot find anymore). In any case, the image file did not seem to have any anomalies.
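Since the mismatch corresponds exactly to a 90-degree rotation, the EXIF orientation written by the converter is a plausible suspect. One possible workaround (a sketch, not part of Apple's sample project; "photo" is a hypothetical asset name) is to build the reference image in code, where both the physical width and the orientation are explicit:

import ARKit
import UIKit

// Build the detection image programmatically instead of relying on the
// asset catalog entry, so physical size and orientation are explicit.
guard let cgImage = UIImage(named: "photo")?.cgImage else {
    fatalError("missing image")
}

// physicalWidth is in meters; 0.148 m matches the 14.8 cm sheet.
let referenceImage = ARReferenceImage(cgImage,
                                      orientation: .up,
                                      physicalWidth: 0.148)
referenceImage.name = "photo"

let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = [referenceImage]
// sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])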
I have to paint some pictures for a small microcontroller display. The display has a resolution of 128x64 pixels, but the pixels aren't square: each is 0.5 mm wide and 0.75 mm tall. All my nicely drawn images in GIMP look ugly on this display.
Can I change the aspect ratio of the displayed pixels in GIMP so that I see the image the same way as on my microcontroller screen? Is there a setting for this, or do I need to use my imagination?
I've looked around in the settings menu but found nothing...
Thanks in advance.
PS: Wrong network?
Use Image>Print Size to set a different definition for the vertical and horizontal axes (don't forget to "unlink" the two entry fields, otherwise changing one will change the other).
Then untick View>Dot for Dot so that GIMP no longer maps image pixels to screen pixels and instead displays the image at its intended definition (and, in your case, aspect ratio).
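For your display the numbers work out as follows (resolution is pixels per inch, and 1 inch = 25.4 mm): horizontally 25.4 / 0.5 = 50.8 ppi, vertically 25.4 / 0.75 ≈ 33.87 ppi. Entering those two values in Image>Print Size makes each previewed pixel 1.5 times as tall as it is wide, matching your 0.5 x 0.75 mm hardware pixels.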
We have a background image for our app that needs to be full screen on each device we run the app on. Our problem is that the background image is tiling on our iPhone 6S+ (Display Zoom off).
I have drawn in red lines to highlight where the tiling is occurring...
We have created 3 background images of the following sizes...
So, designing for 1x (which is the recommended way to go), our base-level 1x background image is 320 pixels wide, our 2x is 640 pixels, and our 3x is 960 pixels.
The problem is that the iPhone 6S+ is 1080 pixels wide, and according to this chart you need to start with a 3x image that is 1242 pixels wide. This is where I don't understand how this is supposed to work.
from https://www.paintcodeapp.com/news/ultimate-guide-to-iphone-resolutions
With the above chart in mind, it seems you need a separate image for each resolution highlighted with a red square in the above image. Is this correct? And if yes, how do you label each individual image so that at runtime the correct one is picked?
Three images, named background.png, background@2x.png, and background@3x.png (as you already have them), are all you need; UIImage(named:) picks the right one for the device's scale at runtime.
Now let's talk about image views. They display their image using a content mode. The key thing is to pick the correct mode. Aspect Fill is what you probably want here, because it will fill the image view without distorting the image.
One procedure, then, is to use a bigger image than what you have, and configure the image view that shows the image to use an appropriate content mode such as Aspect Fill, so that it sizes the image down to fit (or, to save memory, at runtime you can size it down yourself).
The other possibility would be to leave your image as it is, and solve the issue on the Plus machines by telling the image view to size the image up to fit, again possibly by using Aspect Fill. That might or might not look acceptable; you'd have to try it and see what you think.
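For illustration, a minimal sketch of that configuration (assuming the three files are in the app bundle as background.png, background@2x.png, and background@3x.png):

import UIKit

class BackgroundViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // UIImage(named:) picks the 1x/2x/3x file matching the device scale.
        let imageView = UIImageView(image: UIImage(named: "background"))

        // Fill the whole view without distorting the image; overflow is clipped.
        imageView.contentMode = .scaleAspectFill
        imageView.clipsToBounds = true

        imageView.frame = view.bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.insertSubview(imageView, at: 0)
    }
}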
I want to use PDF vector images in my app, but I don't totally understand how it works. I understand that a PDF file can be resized to any size and retain quality. I have a very large PDF image (a cartoon/sticker for a chat app) and it looks perfectly smooth at a medium size on screen. If I start to go smaller though, say thumbnail size, the black outline starts to look jagged. Why does this happen? I thought the images could be resized without quality loss. Any help would be appreciated.
Thanks
I had a similar issue when programmatically changing the UIImageView's centre.
This can lead to pixel misalignment of your view, i.e. the x or y of the frame's origin (or the width or height of the frame's size) may lie on a non-integral value, such as x = 10.5, whereas it displays correctly at x = 10.
Rendering views positioned a fraction of a pixel off the pixel grid results in jagged edges; I think it's related to aliasing.
Therefore, wrap the frame's CGRect with CGRectIntegral() (the .integral property in modern Swift) to round the frame's origin and size to integral values.
Example (Swift):
imageView?.frame = CGRect(x: 10, y: 10, width: 100, height: 100).integral
// (CGRectIntegral(CGRectMake(10, 10, 100, 100)) in older Swift; .integral rounds
// the origin down and the size up to the smallest enclosing integral rectangle.)
See the Apple documentation https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CGGeometry/#//apple_ref/c/func/CGRectIntegral
I have an application where an image moves towards the eye (the viewer).
As the image enlarges, I would like it to be resized to 635 px, where the initial size was 220 px. The image's starting position is 0 on the z axis. I want to calculate the distance from the starting position to the resized image.
I have already calculated the distance by hand, but when I tried to put it into Flash the result is not what I wanted. I am sure that the value I calculated was correct.
I know it may be hard to understand my problem. Please help.
Do you have to use 3D? To make an image larger, you merely need to change its scaleX and scaleY properties.
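If you do want to keep the 3D approach, the relevant relation (a sketch under the usual pinhole projection; f is the focal length, which in Flash you can read from root.transform.perspectiveProjection.focalLength) is apparentSize = originalSize * f / (f + z). Solving for the z at which 220 px appears as 635 px gives z = f * (220 / 635 - 1) ≈ -0.654 * f, i.e. the image has to travel about 65% of the focal length towards the viewer. With plain 2D scaling you would instead set scaleX = scaleY = 635 / 220 ≈ 2.886.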