I am creating a comprehensive filtering tweak for Instagram's iOS app to remove unwanted posts. I am trying to add a feature to store all viewed thumbnails and use image diffing to determine whether any new thumbnails are reposts.
What is the most efficient way to obtain a % difference between two images in iOS? Since this is a tweak, I would like to avoid the use of third-party libraries, if possible.
Thanks.
Related
I am building a flash card iOS app in SwiftUI for reviewing the Japanese I am learning.
The problem is how to store and update my images (more than 500 of them). Please help me; any suggestion is appreciated, and thanks for reading my post.
I think you're asking about how to manage 500+ images in an Xcode project. You could just add all the images to your project and load them as you would any image. You could use asset catalogs, which have the advantage that they let you store different versions of a resource for use on different devices, and only the ones needed for the device the app runs on will actually be installed on the device. See How Many Images Can/Should you Store in xcassets in Xcode? and Asset Catalog Format Reference for more information about asset catalogs. But any way you slice it, managing 500+ images is going to be cumbersome. There's probably a better way...
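For instance, if the images live in an asset catalog, displaying one in SwiftUI is nearly a one-liner. A minimal sketch (the asset name "card_001" is made up for the example):

```swift
import SwiftUI

// Hypothetical: asset names like "card_001" bundled in an asset catalog.
struct FlashCardImage: View {
    let imageName: String   // e.g. "card_001"

    var body: some View {
        Image(imageName)    // loads the image from the asset catalog
            .resizable()
            .scaledToFit()
    }
}
```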
Managing all those images in your app isn't just a problem for you as the developer; building them into the app will also create problems for the user. Even if each image is relatively small, having hundreds of them in the app will probably make the app huge. That means it'll take a long time to install, and the app will use a lot of storage on the device. Every time you release a new version of the app, with more words, or even just to fix a few small bugs, the user will have to download all that data all over again.
Instead, you should consider building an app that can fetch the data it needs from a server. Ideally, you could apply that approach to all your app's data, not just the images. Maybe you'll organize your flash cards into sets of a few dozen, so that you can fetch a set of cards and the associated images pretty quickly, and sets that the user hasn't used for a while can be removed to free up space on the device. You'll be able to update a set of flash cards without having to update the app, and when you do update the app your users won't have to download all the data all over again.
You've said that you're a beginner, so this approach might seem very difficult. That's OK; you can start with a simpler approach and then improve as you go along. For example, you might just put all the images on a server and fetch them one at a time as you need them. Your flash card data file could contain just a dictionary with words and the URLs associated with those words. There are lots of examples of loading an image from a URL here on SO and elsewhere, and a minimal sketch follows below. The earlier you start thinking about how to design your app so that it can scale as you add more and more words, the easier it will be to maintain the app later.
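A minimal sketch of that idea, assuming a plain URLSession fetch (the word-to-URL dictionary and the lookup word are illustrative):

```swift
import UIKit

// Fetch one image by URL; deliver the result on the main queue so the
// caller can assign it to UI directly.
func fetchImage(from url: URL, completion: @escaping (UIImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        DispatchQueue.main.async { completion(UIImage(data: data)) }
    }.resume()
}

// Usage: look up the URL for a word in your flash card data, then fetch it.
// let urlsByWord: [String: URL] = ...   // loaded from your data file
// if let url = urlsByWord["犬"] { fetchImage(from: url) { image in /* show it */ } }
```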
500 images can add up to a huge size. Applications published on the App Store have a size limit, and Apple does not recommend making big apps.
Store them on a server and load the needed images on the fly. You will also gain the ability to update your images, remove old ones, and add new ones.
If you don't have a backend, you can use something easy and free (Firebase Storage, for example) or something on AWS that requires writing minimal code.
If you need to keep them on the device, store them as files in the Documents directory or another app folder; do not use Core Data for the images themselves (keep only the list of names/URLs in the database), as in the sketch below.
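A minimal sketch of the file route (the file-name scheme is illustrative; only the names would live in your database):

```swift
import UIKit

// Build a file URL inside the app's Documents directory for a given name.
func documentsURL(for name: String) -> URL {
    FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent(name)
}

// Write the image to disk as PNG data.
func saveImage(_ image: UIImage, named name: String) throws {
    guard let data = image.pngData() else { return }
    try data.write(to: documentsURL(for: name), options: .atomic)
}

// Read it back later by name.
func loadImage(named name: String) -> UIImage? {
    UIImage(contentsOfFile: documentsURL(for: name).path)
}
```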
After loading the image to be displayed to the user, you can prefetch the next batch of images.
Use Alamofire or SDWebImage to load images from the network (I prefer the latter; a short sketch follows the list below). These frameworks can do many useful things with images.
To load images:
you can have a list of your images (just a list of the names and URLs)
or
you can know only the path and name pattern and generate links dynamically (like https://myserver/imageXXX.png).
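As a rough illustration of the SDWebImage route (assuming SDWebImage is already added to the project; the URL pattern, index, and placeholder asset name are made up for the example):

```swift
import SDWebImage
import UIKit

// SDWebImage downloads asynchronously and caches to memory and disk for you.
let imageView = UIImageView()
let index = 42
if let url = URL(string: "https://myserver/image\(String(format: "%03d", index)).png") {
    imageView.sd_setImage(with: url, placeholderImage: UIImage(named: "placeholder"))
}
```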
I've been wrestling with finding a way to add keyword metadata to images in iOS. I've scoured the internet and stackoverflow but haven't found anything specific to adding keywords to images in iOS. I've seen that different applications like Photoshop and Aperture allow you to do this type of thing so, the capability for images is there in general, but is there a way to achieve this in iOS?
Specifically, is it possible to create a new key/value pair and nest it within the image metadata?
Look in the CGImageProperties reference. As you can see, you are allowed to add EXIF dictionary information to an image. This is exactly how Photoshop and Aperture do it. The ImageIO framework will give you the means to write into the EXIF dictionary.
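For illustration, a rough sketch of that flow with ImageIO follows; the URLs and comment string are placeholders, and for true keyword lists the IPTC dictionary (kCGImagePropertyIPTCKeywords) may be the better home than EXIF:

```swift
import ImageIO
import MobileCoreServices  // for kUTTypeJPEG
import Foundation

// Rewrite an image to a new file, merging an extra EXIF entry into its metadata.
func addUserComment(_ comment: String, from sourceURL: URL, to destinationURL: URL) {
    guard let source = CGImageSourceCreateWithURL(sourceURL as CFURL, nil),
          let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL,
                                                            kUTTypeJPEG, 1, nil)
    else { return }

    // Copy the existing properties and merge the new EXIF entry into them.
    var properties = (CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
                      as? [CFString: Any]) ?? [:]
    var exif = (properties[kCGImagePropertyExifDictionary] as? [CFString: Any]) ?? [:]
    exif[kCGImagePropertyExifUserComment] = comment
    properties[kCGImagePropertyExifDictionary] = exif

    CGImageDestinationAddImageFromSource(destination, source, 0,
                                         properties as CFDictionary)
    CGImageDestinationFinalize(destination)
}
```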
Is it possible to do very basic image recognition to compare an image against a database of images (resource-folder images, or images from a web server if we have them) and determine which image in the database is the best match? I don't need to do any processing of any of the images, but simply differentiate between a finite list of images.
Is there any open source code available?
I would recommend using OpenCV if you simply want to compare images (i.e. decide if two images are the same).
Here is a similar question on SO:
iOS image comparison
I would also read a little about what Core Image (the built-in iOS image framework) has to offer before reaching for OpenCV or another third party; for a simple same/different check, a difference blend plus an area average goes a long way, as sketched below.
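A rough sketch of that Core Image route (a sketch under the assumption that both images have the same dimensions; the filter names are standard Core Image built-ins):

```swift
import CoreImage
import UIKit

// Blend the two images with a difference filter, then average the result
// down to a single pixel. A near-zero average means near-identical images.
func averageDifference(_ a: UIImage, _ b: UIImage) -> CGFloat? {
    guard let ciA = CIImage(image: a), let ciB = CIImage(image: b) else { return nil }

    guard let diff = CIFilter(name: "CIDifferenceBlendMode") else { return nil }
    diff.setValue(ciA, forKey: kCIInputImageKey)
    diff.setValue(ciB, forKey: kCIInputBackgroundImageKey)
    guard let diffImage = diff.outputImage else { return nil }

    guard let average = CIFilter(name: "CIAreaAverage") else { return nil }
    average.setValue(diffImage, forKey: kCIInputImageKey)
    average.setValue(CIVector(cgRect: ciA.extent), forKey: kCIInputExtentKey)
    guard let output = average.outputImage else { return nil }

    // Render the 1x1 averaged pixel and read its components.
    var pixel = [UInt8](repeating: 0, count: 4)
    let context = CIContext()
    context.render(output,
                   toBitmap: &pixel,
                   rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8,
                   colorSpace: CGColorSpaceCreateDeviceRGB())

    // Collapse the RGB channels into a 0-1 difference score.
    return CGFloat(Int(pixel[0]) + Int(pixel[1]) + Int(pixel[2])) / (3 * 255)
}
```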
I hope this helps.
There are a lot of iOS automated test frameworks out there, but I'm looking for one that allows comparison of images with previous images at that location. Specifically, the best method would be for me to be able to take an element that contains an image, such as a UIImageView, and test to see whether the image in it matches a previously taken image during that point of the testing process.
It's unclear to me which of the many frameworks I've looked at allow this.
You're looking for Zucchini!
It allows you to take screenshots at different points in the app testing process and compare them against previous versions. There is some help available, such as this video and this tutorial.
For comparing specific parts of a UI, you can use the masks feature they support to only compare relevant parts of the UI.
You can also check out the demo project.
I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB tuple (each component in the 0-255 range) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want even more detailed how-to, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title, as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, and then read the RGBA values back out of the context's pixel buffer, as sketched below.
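A minimal sketch of that approach (a sketch assuming an 8-bit-per-channel RGBA layout with premultiplied alpha):

```swift
import UIKit

// Draw a CGImage into a bitmap context with a known pixel layout and
// return the raw bytes so individual pixels can be inspected.
func pixelRGBA(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    var data = [UInt8](repeating: 0, count: width * height * 4)

    let drawn = data.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    // data[(y * width + x) * 4 + 0...3] holds R, G, B, A for pixel (x, y).
    return drawn ? data : nil
}
```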
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework recently brought to iOS called the Accelerate framework, which has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system, look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which lets you apply filters to images, as sketched below.
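As a small illustration of Core Image ("CISepiaTone" is a standard built-in filter; the intensity value here is arbitrary):

```swift
import CoreImage
import UIKit

// Apply a built-in Core Image filter to a UIImage and render the result.
func applySepia(to input: UIImage, intensity: Double = 0.8) -> UIImage? {
    guard let ciInput = CIImage(image: input),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }
    filter.setValue(ciInput, forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```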
EDIT: If you have any interest in really fast GPU-accelerated code that performs some sophisticated filtering, check out Brad Larson's GPUImage project on GitHub.