I'm trying to develop a simple app that shows a stack of images, each image on top of the other. When you swipe or tap a button, the first image disappears and the image below it resizes to the same size as the one that was on top (as in the image below, but without the rotation effect).
I've tried fetching the images and creating each image view's frame on every iteration of a for loop:
// Each iteration offsets and shrinks the frame slightly so the stack is visible
UIImageView *picture = [[UIImageView alloc] initWithFrame:CGRectMake(40 + 10 * i, 101 - 10 * i, 240 - 20 * i, 200)];
UIImage *image = [UIImage imageWithData:imageData];
picture.image = image;
[dView addSubview:picture];
and it worked. But I still can't figure out how to make the second image the same size as the one that was on top of it.
CGRect nextImageFrame = nextImage.frame;
nextImageFrame.size.width = 240;
nextImageFrame.size.height = 200;
nextImage.frame = nextImageFrame;
assuming that nextImage is the image view you want to show next.
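As a sketch of how that could be wired up (assuming the stacked views are kept in a hypothetical NSMutableArray called imageViews, ordered top to bottom, which is not in the original code):

// Sketch only: self.imageViews is a hypothetical NSMutableArray of the stacked
// UIImageViews, ordered top to bottom.
- (void)showNextImage
{
    if (self.imageViews.count < 2) return;

    UIImageView *topImage = self.imageViews.firstObject;
    UIImageView *nextImage = self.imageViews[1];

    [UIView animateWithDuration:0.3 animations:^{
        topImage.alpha = 0.0;
        // Give the next image the frame the top one occupied (240 x 200 here).
        nextImage.frame = topImage.frame;
    } completion:^(BOOL finished) {
        [topImage removeFromSuperview];
        [self.imageViews removeObjectAtIndex:0];
    }];
}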
In my application I want to repeat (tile) an image in an image view, but when I repeat it the image shows up as lots of small tiles. How can I resolve this?
I tried the code below:
self.bgimg.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"pattern-dots"]];
That happens because your image is smaller than the view, so the pattern is tiled to fill it. Provide an image at least as large as self.view. You can check with the code fragment below.
UIImage *image = [UIImage imageNamed:@"pattern-dots"]; // or @"oval"
if (self.view.frame.size.width <= image.size.width) {
    self.view.backgroundColor = [UIColor colorWithPatternImage:image];
} else {
    NSLog(@"Your image is too small to fit.");
}
NSLog(@"view size: %@", NSStringFromCGSize(self.view.frame.size));
NSLog(@"image size: %@", NSStringFromCGSize(image.size));
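If you cannot supply a larger asset, one possible workaround (an assumption on my part, not part of the answer above) is to scale the image up to the view's size before creating the pattern color, so a single stretched copy covers the background instead of many tiles:

// Sketch: stretch the pattern image to the view's size before using it as a pattern,
// assuming you want one copy covering the background rather than tiling.
UIImage *image = [UIImage imageNamed:@"pattern-dots"];
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0);
[image drawInRect:self.view.bounds];
UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.bgimg.backgroundColor = [UIColor colorWithPatternImage:scaled];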
I want to show progress in the form of an image getting filled in. A simple example: when we install an app from the App Store, the icon fills up as the download completes. I know the total bytes to be downloaded, and I want to show progress by having the icon image fill in according to the bytes completed. Has anybody done that? Please help.
You need an image sequence going from an empty image through partially filled images to a fully filled one. Animate that sequence and the output will look like the image is filling up.
Sample code would look like this:
UIImageView *animationView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)]; // specify your size here
animationView.contentMode = UIViewContentModeCenter;
animationView.animationImages = [NSArray arrayWithObjects:
                                 [UIImage imageNamed:@"1.png"],
                                 [UIImage imageNamed:@"2.png"],
                                 [UIImage imageNamed:@"3.png"],
                                 [UIImage imageNamed:@"4.png"],
                                 [UIImage imageNamed:@"5.png"],
                                 [UIImage imageNamed:@"6.png"],
                                 [UIImage imageNamed:@"7.png"],
                                 [UIImage imageNamed:@"8.png"],
                                 [UIImage imageNamed:@"9.png"],
                                 [UIImage imageNamed:@"10.png"], nil];
animationView.animationDuration = 1.5f;
animationView.animationRepeatCount = 0;
[animationView startAnimating];
[self.view addSubview:animationView];
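Note that animationImages just loops on a timer; it does not reflect the actual bytes downloaded. If you want the icon to track real progress, a minimal sketch (assuming the same ten images, 1.png through 10.png, and a hypothetical helper name) is to pick the frame from the completed fraction instead:

// Sketch: choose the frame from the download fraction (bytes received / total bytes).
// Assumes ten images named 1.png ... 10.png; the method name is hypothetical.
- (void)showProgress:(CGFloat)fraction inImageView:(UIImageView *)imageView
{
    NSInteger totalFrames = 10;
    NSInteger frameIndex = (NSInteger)(fraction * totalFrames);
    frameIndex = MAX(1, MIN(totalFrames, frameIndex));
    imageView.image = [UIImage imageNamed:[NSString stringWithFormat:@"%ld.png", (long)frameIndex]];
}

Call it from your download delegate (for example NSURLSession's didWriteData callback) with totalBytesWritten divided by totalBytesExpectedToWrite.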
If you're not downloading the image itself, why not use a CAShapeLayer as a mask over your image? In other words, your image is there the whole time, but more of it becomes visible as your mask changes.
If your mask were a single slice of a circle, you could use the NSURLSession/NSURLConnection delegate to apply a transform around the centre point of the circle and recalculate the Bézier path of the mask. Apply the new mask before the delegate returns and you end up with a circular progress view where more of your image becomes visible with each call to the delegate.
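A rough sketch of that idea (assuming a CAShapeLayer property named progressMask already installed as imageView.layer.mask and QuartzCore imported; the names are my own, not from the answer):

// Sketch: rebuild the pie-slice mask whenever the download delegate reports progress.
// self.progressMask is a hypothetical CAShapeLayer set as self.imageView.layer.mask.
- (void)updateProgress:(CGFloat)fraction
{
    CGRect bounds = self.imageView.bounds;
    CGPoint center = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
    CGFloat radius = MAX(bounds.size.width, bounds.size.height); // large enough to cover the view

    // Filled slice starting at 12 o'clock, sweeping `fraction` of a full circle.
    UIBezierPath *slice = [UIBezierPath bezierPath];
    [slice moveToPoint:center];
    [slice addArcWithCenter:center
                     radius:radius
                 startAngle:-M_PI_2
                   endAngle:-M_PI_2 + fraction * 2.0 * M_PI
                  clockwise:YES];
    [slice closePath];

    self.progressMask.path = slice.CGPath;
}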
I have a small problem with a UIImageView. I'm trying to place an image dynamically, so I start like this:
self.picture_view = UIImageView.alloc.initWithFrame(CGRectMake(0, 16, 45, 46))
but I want the frame to take the size of each image I pass to it, so I do this:
picture_frame = picture_view.frame;
picture_frame.size = picture_view.size;
picture_view.frame = picture_frame;
but when I do this
NSLog (picture_frame.size.inspect)
it gives me 45, 46 for each image. So how do I get the image's size and override the frame so it shows the correct size? Thank you in advance.
PS: I do set picture_view.image = UIImage.imageNamed(my_picture)
You are actually setting the image view's frame to the frame of itself. You need to base it on the actual UIImage. You can do this automatically by initializing the view with the image.
Ex.
UIImageView *pictureView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"whatever.png"]];
// The size is already set correctly, but if you want to change the x or y origin, do this:
pictureView.frame = CGRectMake(0, 16, pictureView.frame.size.width, pictureView.frame.size.height);
Edit: This answer is written in Objective-C, but the core idea should translate to RubyMotion very easily.
If you are trying to create a UIImageView and then resize it to fit the UIImage you assign to it, you should try something like this:
Create an imageView with an image:
UIImage *image = [UIImage imageNamed:@"myPicture.png"];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
Change the image, and resize the imageView to fit:
UIImage *anotherImage = [UIImage imageNamed:@"anotherPicture.png"];
imageView.image = anotherImage;
CGRect imageViewFrame = imageView.frame;
imageViewFrame.size = anotherImage.size;
imageView.frame = imageViewFrame;
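As an alternative sketch (not from the answer above), sizeToFit on a UIImageView resizes the view to its image's size while keeping the origin, which achieves the same thing in one call:

UIImage *anotherImage = [UIImage imageNamed:@"anotherPicture.png"];
imageView.image = anotherImage;
[imageView sizeToFit]; // resizes the view to match anotherImage.size; the origin stays put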
I have a UIImageView that can be moved/scaled (self.imageForEditing). On top of this image view I have an overlay with a hole cut out, which is static and can't be moved. I need to save just the part of the underlying image that is visible through the hole at the time a button is pressed. My current attempt:
- (IBAction)saveImage
{
    UIImage *image = self.imageForEditing.image;
    CGImageRef originalMask = [UIImage imageNamed:@"picOverlay"].CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(originalMask),
                                        CGImageGetHeight(originalMask),
                                        CGImageGetBitsPerComponent(originalMask),
                                        CGImageGetBitsPerPixel(originalMask),
                                        CGImageGetBytesPerRow(originalMask),
                                        CGImageGetDataProvider(originalMask), NULL, YES);
    CGImageRef maskedImageRef = CGImageCreateWithMask(image.CGImage, mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);

    UIImageView *test = [[UIImageView alloc] initWithImage:maskedImage];
    [self.view addSubview:test];
}
As a test I'm just trying to add the newly created image to the top left of the screen. Theoretically it should be a small round image (the part that was visible through the overlay). But I'm just getting the whole image created again. What am I doing wrong? And how can I account for the fact that self.imageForEditing can be moved around?
CGImageCreateWithMask returns an image the same size as the original.
That is why you get the original image (I assume) with the mask applied.
You can apply the mask and then remove the invisible border. Use the advice from this question: iOS: How to trim an image to the useful parts (remove transparent border).
Find the bounds of the non-transparent part of the image and redraw it into a new image.
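Another possible direction (my own sketch, not from the linked question) is to skip the mask entirely and crop the visible region directly, assuming you know the hole's rect in the superview's coordinates (holeRectInView is a hypothetical name) and the image view's content fills its bounds; a rotated image view is only approximated by the converted rect:

// Sketch with hypothetical names: holeRectInView is the hole's rect in self.view
// coordinates. Assumes the image view's content fills its bounds (scale-to-fill),
// so view points map linearly onto image pixels.
- (UIImage *)croppedImageForHole:(CGRect)holeRectInView
{
    UIImage *image = self.imageForEditing.image;
    CGRect holeInImageView = [self.view convertRect:holeRectInView toView:self.imageForEditing];

    CGFloat scaleX = image.size.width  / self.imageForEditing.bounds.size.width;
    CGFloat scaleY = image.size.height / self.imageForEditing.bounds.size.height;
    CGRect cropRect = CGRectMake(holeInImageView.origin.x * scaleX * image.scale,
                                 holeInImageView.origin.y * scaleY * image.scale,
                                 holeInImageView.size.width  * scaleX * image.scale,
                                 holeInImageView.size.height * scaleY * image.scale);

    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}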
I have a view with two UIImageViews. The first image view contains a picture, which can be changed but is always at the same position. The second image view contains a UIImage of a logo; this image view can be resized, panned and rotated.
Both image views are set to Aspect Fit, by the way.
When I'm done dragging and scaling the second image view, I want to take these two UIImages and draw one single combined UIImage. In the new UIImage they need to be positioned exactly as they were after I finished dragging and resizing the second image view.
I'm pushing this combined UIImage over to a new UIViewController called PreviewViewController. There I want to show the UIImage in an image view and save it if the user presses yes.
I almost have it working, but the x and y positions are confusing me.
And there is another problem: when drawing the second image view's image onto the new image, it comes out looking like Scale to Fill, and it looks ugly.
Here's my code:
- (UIImage *)combineImages {
    UIImage *tshirt = self.tskjorteTemplateView.image;
    UIImage *logo = self.bildeView.image;

    UIGraphicsBeginImageContext(self.tskjorteTemplateView.image.size);
    UIGraphicsBeginImageContextWithOptions(self.tskjorteTemplateView.frame.size, NO, 0.0);

    [tshirt drawInRect:CGRectMake(0, 0, self.tskjorteTemplateView.frame.size.width, self.tskjorteTemplateView.frame.size.height)];
    [logo drawInRect:CGRectMake(bildeView.center.x - (bildeView.frame.size.width / 2),
                                bildeView.center.y - (bildeView.frame.size.height / 2),
                                self.bildeView.frame.size.width,
                                self.bildeView.frame.size.height)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
and it's called here:
fullPreviewImage = [self combineImages];
WantToSaveViewController *save = (WantToSaveViewController *)segue.destinationViewController;
save.delegate = [self.navigationController.viewControllers objectAtIndex:0];
save.previewImage = fullPreviewImage;
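One possible direction for the position and scaling issues (a sketch of my own, not something from the question, and it does not handle rotation): draw in the template view's coordinate space and reproduce the logo view's aspect-fit rect by hand.

// Sketch: convert the logo view's frame into the template view's coordinates and
// reproduce aspect-fit manually, so the drawn logo is not stretched. Rotation of
// bildeView is not handled here.
- (UIImage *)combineImagesAspectFit
{
    UIImage *tshirt = self.tskjorteTemplateView.image;
    UIImage *logo = self.bildeView.image;

    CGSize canvasSize = self.tskjorteTemplateView.bounds.size;
    UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0.0);

    [tshirt drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];

    // Put both layers in the same coordinate space.
    CGRect logoFrame = [self.bildeView.superview convertRect:self.bildeView.frame
                                                      toView:self.tskjorteTemplateView];

    // Aspect-fit: fit the logo image inside logoFrame without stretching it.
    CGFloat scale = MIN(logoFrame.size.width / logo.size.width,
                        logoFrame.size.height / logo.size.height);
    CGSize fittedSize = CGSizeMake(logo.size.width * scale, logo.size.height * scale);
    CGRect fittedRect = CGRectMake(CGRectGetMidX(logoFrame) - fittedSize.width / 2.0,
                                   CGRectGetMidY(logoFrame) - fittedSize.height / 2.0,
                                   fittedSize.width, fittedSize.height);
    [logo drawInRect:fittedRect];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}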