I have a large number of small images that I am swapping between in an SKSpriteNode. The problem is that the node seems to establish a set container size, and other images stretch to fit that container. Is there a way to reset this each time, or do I have to modify all the images to be the same size? I was hoping it could just display each image at its true size; keeping them centered is not a problem, since they are all already centered.
I basically just update the texture each time. Is there a flag or property I am not setting correctly?
When an SKSpriteNode is first created, its texture/image size is set. I suggest you take the largest x and y values in your animation sequence and pad all the images to that same (largest) size. You can use a transparent background for each image.
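Alternatively, if you'd rather not touch the art, you can reset the node's size to the new texture's native size every time you swap. A minimal sketch using SpriteKit's standard texture/size APIs (the texture names here are made up):

    import SpriteKit

    // Hypothetical texture names for illustration.
    let textures = ["frame1", "frame2", "frame3"].map(SKTexture.init(imageNamed:))
    let sprite = SKSpriteNode(texture: textures[0])

    func show(_ index: Int) {
        sprite.texture = textures[index]
        // Reset to the texture's own size so the new image
        // is not stretched to the old container size.
        sprite.size = textures[index].size()
    }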
So in my scenario, I have a square that is (for understanding's sake) 100x100 and need to display an image that is 300x800 inside of it.
What I want to do is be able to have the image scale just as it would with UIViewContentMode.ScaleAspectFill so that the width scales properly to 100.
However, after that, I would like to "move" the image up to the top of the image view instead of having it centered, basically what UIViewContentMode.Top does. However, that mode doesn't scale the image first.
Is there any way to get this type of behavior with the built-in tools? Any way to combine multiple contentModes?
I already had a helper function that scales an image to a given size, so I wrote a function that calculates the scaled size that would fill the smaller square (the same result AspectFill would produce) and then crops the result to the target rectangle anchored at (0,0).
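For reference, here is a minimal sketch of that scale-then-crop approach (the function name and signature are illustrative, not my original helper):

    import UIKit

    // Scale the image as AspectFill would, then keep the crop anchored
    // at (0, 0) so the top of the image is preserved.
    func aspectFillTopCropped(_ image: UIImage, to target: CGSize) -> UIImage {
        // Fill scale: large enough to cover the target in both dimensions.
        let scale = max(target.width / image.size.width,
                        target.height / image.size.height)
        let scaledSize = CGSize(width: image.size.width * scale,
                                height: image.size.height * scale)

        // Drawing the enlarged image into a target-sized context clips
        // the overflow; anchoring at .zero keeps the top edge visible.
        let renderer = UIGraphicsImageRenderer(size: target)
        return renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: scaledSize))
        }
    }

For the 100x100 square and a 300x800 image, the scale comes out to 1/3, the scaled image is 100x266.7, and the renderer keeps the top 100 points.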
I'm making a volume meter. The meter is basically an image: as the sound changes, the visible portion of the image increases or decreases. But when I only change the frame size, the image rescales to fit the new frame, when what I actually want is to keep the image the same size and just show the content that falls inside the frame, without stretching my image.
There may be better ways of doing this, using layers, masks, or something similar; I'm open to suggestions.
thx
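One way to get that behavior (a sketch of the clipping idea hinted at above, with a hypothetical asset name) is to keep the image view at its full size inside a container that clips to its bounds, and resize only the container:

    import UIKit

    let meterImage = UIImage(named: "volumeMeter")!  // hypothetical asset
    let imageView = UIImageView(image: meterImage)
    imageView.frame = CGRect(origin: .zero, size: meterImage.size)

    let container = UIView(frame: imageView.frame)
    container.clipsToBounds = true  // show only what falls inside the frame
    container.addSubview(imageView)

    // As the volume changes, resize the container, not the image view;
    // the image keeps its size and is simply revealed or hidden.
    func setVolume(_ level: CGFloat) {  // level in 0...1
        container.frame.size.height = meterImage.size.height * level
    }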
I just started learning Obj-C the other day and I'm putting together a crappy game as practice. I have an image as the main character of the game, and every time this character hits certain obstacles, I want the size of the image to change (decrease or increase depending on the circumstance).
I used Photoshop to change the size of the images to the appropriate dimensions, but for some reason, when I run the game, the images change when I want them to, but they are way smaller than the size I set them to in Photoshop...
Any ideas?
I don't think it's necessary to post my code for something as simple as this, right? It's just a simple "if" statement followed by the instance and the UIImage imageNamed: call with the image name...
I assume you display the image in a UIImageView? Or do you use SpriteKit? If you use a UIImageView, the image is automatically scaled to the size of the UIImageView.
Therefore you would just change the size of the UIImageView (you can of course also change the image inside the UIImageView).
If you are using SpriteKit, remember that you are probably testing on a 'retina' device, so the image's width and height in points are half its pixel dimensions (the real resolution of the current iPhone 5/5s is 1136x640, not 568x320!).
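To illustrate the point/pixel distinction (a sketch with a hypothetical asset name): UIKit and SpriteKit sizes are in points, so a file named with the @2x suffix reports half its pixel dimensions on a retina device:

    import UIKit

    // A 200x200-pixel "character@2x.png" loads with scale 2,
    // so image.size reports 100x100 points.
    let image = UIImage(named: "character")!
    let imageView = UIImageView(image: image)
    imageView.frame.size = image.size  // display at natural point size
    // imageView.sizeToFit() has the same effect here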
I have 2 relatively small pngs that will be images inside UIButtons.
Once our app is finished, we might want to resize the buttons and make them smaller.
Now, we can easily do this by resizing the button frame; the system automatically resizes the images smaller.
Would the system's autoresize cause the image to look ugly after shrinking the image? (i.e., would it clip pixels and make it look less smooth than if I were to shrink it in a photo editor myself?)
Or would it be better to make the images the sizes they are intended to be?
It is always best to make the images the correct size from the beginning. Any resize function will have a negative impact on the end result. Scaling up to a larger image makes a big difference, but even scaling down to a smaller one usually creates visible noise in the image. Say you have a one-pixel-wide line in your image: scale the image down to 90% of its original size and that line now covers only 90% of a pixel's width, so neighboring parts of the image influence the colors of those same pixels.
In my iOS app, I am putting several UIImages on top of one background UIImage and then saving the composite, with all subimages added to it, by taking a screenshot programmatically.
Now I want to change the positions of the subview UIImages within that saved image, so I want to know how to detect each subview image's position, given that I captured the whole thing as a screenshot.
Record each subview's frame converted to window coordinates. The pixel coordinates in the image are the same as the frame origin on a non-retina screen, or double on retina. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has some convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
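A sketch of that conversion (assuming the screenshot covers the whole window):

    import UIKit

    // Returns a subview's frame in screenshot pixel coordinates,
    // assuming the screenshot covers the whole window.
    func pixelRect(of view: UIView) -> CGRect {
        // Passing nil as the destination converts to window coordinates.
        let frameInWindow = view.superview!.convert(view.frame, to: nil)
        // Points -> pixels: multiply by the screen scale (2.0 on retina).
        let s = UIScreen.main.scale
        return CGRect(x: frameInWindow.origin.x * s,
                      y: frameInWindow.origin.y * s,
                      width: frameInWindow.width * s,
                      height: frameInWindow.height * s)
    }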
EDIT: to deal with content fit, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Comparing the aspect ratios tells you in which dimension the image completely fills the view, and then you can compute the other dimension (which will be smaller than the imageView's frame). Divide the difference between the view dimension and the image dimension by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it is displayed in the view.
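Here is the same math in code (a sketch, assuming the view uses .scaleAspectFit):

    import UIKit

    // The rect the image actually occupies inside an aspect-fit image view.
    func displayedImageRect(in imageView: UIImageView) -> CGRect {
        guard let image = imageView.image else { return .zero }
        let viewSize = imageView.bounds.size
        // Fit scale: small enough for the image to fit both dimensions.
        let scale = min(viewSize.width / image.size.width,
                        viewSize.height / image.size.height)
        let fitted = CGSize(width: image.size.width * scale,
                            height: image.size.height * scale)
        // Half the leftover space on each axis is the image's offset.
        let origin = CGPoint(x: (viewSize.width - fitted.width) / 2,
                             y: (viewSize.height - fitted.height) / 2)
        return CGRect(origin: origin, size: fitted)
    }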