Tiled Background Image Moving Diagonally In View - ios

I've seen many solutions out there, but they're not exactly what I'm looking for. What I'm attempting to do is have the view's background tile an image multiple times and move it slowly and diagonally, rather than horizontally or vertically.
From a web design perspective, this can be achieved with a tiled bg.gif, where the animated image gives the effect of movement without any additional programming.
Example:
However, I'm not sure whether this is achievable in Swift. I'm sure there are alternative solutions, such as having a single image (a smiley face) tiled to fill the view's frame and moving it diagonally by incrementing its x and y coordinates.
I'm new to iOS development and Swift, but I'm learning a lot! Any help will be appreciated.

Refer to these two links; they will help you achieve what you want: click here and click here.
Update, as advised in the comments: download SwiftGif, add Gif.swift to your project, and do the following:
// Returns an animated UIImage
let myGif = UIImage.gifWithName("imggif")
// Use the UIImage in your UIImageView
let imageView = UIImageView(image: myGif)
Here, imggif is the GIF image shown in the question; add it to your project.
Hope this helps :)
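If you'd rather not ship a GIF, the alternative the question itself suggests (tiling one image and shifting it diagonally) can also be done directly. Below is a minimal Swift sketch under a few assumptions: the tile asset is called "smiley" and a 2-second loop is an acceptable speed. It fills an oversized view with a pattern color and animates it diagonally by exactly one tile, so the repeating loop appears seamless:

import UIKit

class DiagonalTileViewController: UIViewController {

    // Hypothetical tile asset; replace with your own image.
    private let tileImage = UIImage(named: "smiley")!

    override func viewDidLoad() {
        super.viewDidLoad()

        let tileSize = tileImage.size

        // Make the tiled view one tile larger than the screen on every side
        // so its edges never become visible while it moves.
        let tiledView = UIView(frame: CGRect(x: -tileSize.width,
                                             y: -tileSize.height,
                                             width: view.bounds.width + 2 * tileSize.width,
                                             height: view.bounds.height + 2 * tileSize.height))
        tiledView.backgroundColor = UIColor(patternImage: tileImage)
        view.addSubview(tiledView)

        // Shift the whole pattern diagonally by exactly one tile and repeat.
        // Because the pattern repeats every tile, the restart is invisible.
        UIView.animate(withDuration: 2.0,
                       delay: 0,
                       options: [.repeat, .curveLinear],
                       animations: {
                           tiledView.frame = tiledView.frame.offsetBy(dx: tileSize.width,
                                                                      dy: tileSize.height)
                       },
                       completion: nil)
    }
}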

Related

How can I tile the background with UIImageViews with code efficiently?

I'm working in Xcode 6 on tiling the iPhone background with many UIImageViews and I'd like to know if this is the most efficient solution.
I know one simple solution would be to create image views in the storyboard and cover the entire screen with them manually. I'd like to do it with code. Here's the code I have currently (5x5 is an okay size since I can scale it up or down to fill the screen with bigger or smaller images):
CGRect tiles[5][5];
UIImage *tileImages[5][5];
UIImageView *tileViews[5][5];
for (int i = 0; i < 5; i++)
{
    for (int j = 0; j < 5; j++)
    {
        tiles[i][j] = CGRectMake(50 * i, 50 * j, 50, 50);
        tileImages[i][j] = [UIImage imageNamed:@"tile.png"];
        tileViews[i][j] = [[UIImageView alloc] initWithFrame:tiles[i][j]];
        tileViews[i][j].image = tileImages[i][j];
        [self.view addSubview:tileViews[i][j]];
    }
}
Currently all the images are the same, but in the long haul I'm going to make them dependent on various factors.
I have read around and I know that UIImageViews are finicky. Is this the proper and memory-efficient way to tile a background with UIImageViews? Is there a better way to do this? Can I manually go in after the tiles are initialized, change the image one of them is displaying, and have it update in real time with just this?
tileViews[1][2].image = [UIImage imageNamed:@"anotherTile.png"];
Thanks in advance, I just finished a basic 6-week course in iOS programming at my college, so I still find myself trying to appease the Objective-C gods occasionally.
I guess my question would be why you need them to be image views at all. Drawing an image in a view or layer is easy, and arranging views or layers in a grid is easy; what are the image views for, when you are not really using or needing any of the power of image views?
I have several apps that display tiled images - one shows 99 bottles in a grid, one shows a grid of tile "pieces" that the user taps in matched pairs to dismiss them, one shows a grid of rectangular puzzle pieces that the user slides to swap them and get them into the right order, and one shows a grid of "cards" that the user taps in triplets to match them - and in none of those cases do I use image views. In one, they are CALayers; in the other cases they are custom UIView subclasses; but in no case do I use UIImageViews.
For something as simple as a grid of images, using UIImageViews seems, as you yourself imply, like overkill. If the reason you have resorted to UIImageViews is that you don't know how to make a UIView or a CALayer draw its content, I'd say stop and learn how to do that before you go any further.
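To make that concrete, here is a rough sketch of the layer-based approach (in Swift rather than Objective-C for brevity; the 5x5 grid, 50-point tiles, and "tile" asset name simply mirror the question's assumptions). Each cell is a plain CALayer whose contents is the tile image, and swapping a tile later is just an assignment:

import UIKit

class TileGridViewController: UIViewController {

    // Keep references so individual tiles can be swapped later.
    private var tileLayers = [[CALayer]]()

    override func viewDidLoad() {
        super.viewDidLoad()

        let tileSize: CGFloat = 50
        let tileImage = UIImage(named: "tile")?.cgImage

        for i in 0..<5 {
            var row = [CALayer]()
            for j in 0..<5 {
                let layer = CALayer()
                layer.frame = CGRect(x: tileSize * CGFloat(i),
                                     y: tileSize * CGFloat(j),
                                     width: tileSize,
                                     height: tileSize)
                // A CALayer draws an image simply by having it assigned to contents.
                layer.contents = tileImage
                view.layer.addSublayer(layer)
                row.append(layer)
            }
            tileLayers.append(row)
        }
    }

    // Changing one tile later is a single assignment; the layer redraws itself.
    func setTile(_ image: UIImage, atColumn column: Int, row: Int) {
        tileLayers[column][row].contents = image.cgImage
    }
}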

How to dim/blur everything outside given rect in iOS?

I'm currently developing an iOS app that is using OCR.
Currently I'm using AVFoundation to preview the video from the camera (using Apple's AVCam sample).
For a good user experience I want to lay out a rectangle in the preview layer. The image inside this rectangle will be the image parsed by the OCR engine. My problem is that I would also like to "dim" everything outside this rectangle, and I'm currently out of ideas on how to solve this. Does anybody know how to do this?
Edit
This is what I would like to accomplish (image taken from the app Horizon):
http://i.imgur.com/MuuJNS9.png
You can use two black images covering the top and bottom areas that you want to "dim", and set the alpha of those images to a value like 0.5.
Why not add a subview that covers the entire screen and set its background color to a semi-transparent gray (your gray overlay)?
Then add the image parsed by the OCR engine as a subview of this gray overlay, centered inside it.
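Neither answer spells out how to leave the OCR rectangle itself clear. One common way (a sketch of the general technique, not necessarily the answerers' exact code; the 0.5 alpha is an assumption) is a single semi-transparent overlay whose layer is masked with an even-odd path, so everything outside the cutout is dimmed and the rectangle stays untouched:

import UIKit

/// Adds a dimming overlay over `containerView`, leaving `clearRect` fully visible.
func addDimmingOverlay(to containerView: UIView, leaving clearRect: CGRect) -> UIView {
    let overlay = UIView(frame: containerView.bounds)
    overlay.backgroundColor = UIColor(white: 0, alpha: 0.5)   // assumed dimming strength
    overlay.isUserInteractionEnabled = false

    // A path covering the whole overlay plus the cutout rect; with the
    // even-odd rule the cutout is excluded from the mask, so the overlay
    // is invisible there and the camera preview shows through.
    let path = UIBezierPath(rect: overlay.bounds)
    path.append(UIBezierPath(rect: clearRect))

    let mask = CAShapeLayer()
    mask.path = path.cgPath
    mask.fillRule = .evenOdd
    overlay.layer.mask = mask

    containerView.addSubview(overlay)
    return overlay
}

Called with the same rectangle you pass to the OCR engine, on top of the view hosting the AVFoundation preview layer, this dims everything outside it.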

Center-aligned masked UIImageView / Imitating Twitter & Facebook app's image views

The post title can be a little weird, but here's what I'm looking to do:
Look at the image in the middle: it's displaying only part of the original image. It's still high definition, not really cropped, just masked, with the center of the image at the center of the view.
So basically they put a bigger image behind a smaller view (I've done that to get circular image views in the past). But how exactly can I achieve that?
Is there a CocoaPod or something that does this, or should I get started building it myself? Any suggestions on how to structure this in code?
The main goal here is to keep a static space to display images so they're always the same width/height. This effect seems like a good way of achieving that.
EDIT: Here's a little sketch of an idea I just had to mimic that behavior:
Thanks a lot and have a nice day.
If you're asking what I think you're asking, you don't have to look far to find this functionality. Use UIView's built-in contentMode property, specifically in this case UIViewContentModeScaleAspectFill.
[imageView setContentMode:UIViewContentModeScaleAspectFill];
Then to crop of the parts of the image extending out of the frame, be sure to use clipsToBounds:
[imageView setClipsToBounds:YES];
Here is another solution, but make sure your image's width and height are greater than the image view's:
imageView.contentMode = UIViewContentModeCenter;
imageView.clipsToBounds = YES;
imageView.clearsContextBeforeDrawing = NO;
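For completeness, a minimal Swift version of the first answer's approach (the frame and asset name here are placeholder assumptions):

import UIKit

// A fixed-size slot for the image, as the question describes.
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 300, height: 150))
imageView.image = UIImage(named: "photo")          // hypothetical asset name

// Scale the image until it fills the frame, keeping its aspect ratio,
// then clip whatever extends beyond the frame.
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true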

Detect touch on irregular shaped images that are closely placed inside a UIView

I have a situation where a large number of images are placed close to one another inside a view; each image has its own image view. The images are high-resolution PNGs with irregular shapes, such as the shape of a country. The problem is that I wish to do something unique when an image is touched. However, the frames of the image views are all rectangles and overlap neighboring images, so correct detection is not possible.
I would really appreciate any guidance in this regard. Please let me know if I have not clearly explained my problem.
Regards
Check this question (Detect touches only on non-transparent pixels of UIImageView, efficiently)
On GitHub, you can find a project by Ole Begemann which extends UIButton so that it only detects touches where the button's image is not transparent.
Since UIButton is a subclass of UIView, adapting it to UIImageView should be straightforward.
Hope this helps.
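The essence of that project is to override hit testing and inspect the alpha of the touched pixel. A rough Swift adaptation for UIImageView is sketched below; it assumes the image is stretched to fill the view's bounds exactly, and the alpha threshold of 10 is arbitrary:

import UIKit

/// Image view that only registers touches on non-transparent pixels of its image.
class AlphaHitTestImageView: UIImageView {

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard let cgImage = image?.cgImage else { return false }

        let width = CGFloat(cgImage.width)
        let height = CGFloat(cgImage.height)

        // Map the touch from view coordinates to image coordinates.
        let ix = point.x / bounds.width * width
        let iy = point.y / bounds.height * height
        guard ix >= 0, iy >= 0, ix < width, iy < height else { return false }

        // Render just the touched pixel into a 1x1 RGBA buffer and read its alpha.
        var pixel = [UInt8](repeating: 0, count: 4)
        return pixel.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1,
                                          height: 1,
                                          bitsPerComponent: 8,
                                          bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return false }

            context.setBlendMode(.copy)
            // Core Graphics' y axis points up, hence the flipped translation.
            context.translateBy(x: -ix, y: iy - height)
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

            return buffer[3] > 10   // arbitrary "non-transparent" threshold
        }
    }
}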

How can I make a custom overlay filter?

I have a view controller with a UIImageView. The image view is to be loaded with a different random picture from a given array when the view controller is displayed. Above the UIImageView I would like to implement a filter similar to one I found in Photoshop, but with my own custom modification: a clear window to the image below. Basically, what I am looking to do is display a random image behind a blur filter, but I would like part of the blur filter to have a custom-shaped window to the image below it, where the image can be seen clearly. The rest of the image would still be blurred out. I have read Apple's documentation on applying filters to images, but none of the examples suit my needs. I'm pretty new to development and haven't written any code for this feature yet. I'm mostly looking to see if it can be done, and if so, could you point me in the direction of where I can research to find the answers I'm looking for? Cheers!
I would recommend that you take the input image, pass it through a CIGaussianBlur, and then draw the result applying an image mask (using CIBlendWithMask or a CGPath).
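A minimal sketch of that pipeline with Core Image is below. The blur radius and the idea of supplying a ready-made mask image are assumptions; in practice the mask would be white where you want blur and black over the custom-shaped window you want to keep sharp (such a mask could be rendered from a CGPath):

import UIKit
import CoreImage

/// Blurs `image` everywhere the mask is white and keeps it sharp where the mask is black.
func blurredOverlay(image: UIImage, maskImage: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let mask = CIImage(image: maskImage) else { return nil }

    // 1. Blur the whole image.
    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(input, forKey: kCIInputImageKey)
    blur.setValue(8.0, forKey: kCIInputRadiusKey)          // assumed radius
    guard let blurred = blur.outputImage else { return nil }

    // 2. Blend the blurred and the sharp image through the mask:
    //    white mask areas show the blurred image, black areas the original.
    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(blurred, forKey: kCIInputImageKey)
    blend.setValue(input, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    guard let output = blend.outputImage else { return nil }

    // Render back to a UIImage, cropping to the original extent
    // (the blur grows the image's extent slightly).
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}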
