Comparing images for regression testing - iOS

I have a drawing application that renders images (using Quartz2D). I want to be able to run regression tests to check whether a file is rendered consistently, so I can determine if anything broke following code changes. Are there any APIs which allow me to compare screenshots (or image files) and get some similarity score?

I am not sure if this will suit your needs (it doesn't return a score), but this tool lets you compare images and decide which parts to ignore:
Visual CI
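If you end up scoring the diff yourself, a small offline script is often enough for regression runs. Below is a minimal sketch (Python with Pillow and NumPy; the file names are placeholders) that loads two rendered images and reports a 0-1 similarity score based on mean absolute pixel difference, failing the test below a chosen threshold:

```python
# Minimal sketch: compare two rendered PNGs and report a similarity score.
# File names (baseline.png, current.png) are placeholders for your own output.
import numpy as np
from PIL import Image

def similarity(path_a, path_b):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    if a.shape != b.shape:
        return 0.0  # different sizes: treat as a failed match
    # Mean absolute difference, normalised to 0..1, inverted so 1.0 == identical
    return 1.0 - float(np.abs(a - b).mean()) / 255.0

score = similarity("baseline.png", "current.png")
print(f"similarity: {score:.4f}")
assert score > 0.99, "rendering drifted from the baseline"
```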

Related

Is there an annotation tool for instance segmentation on iPad?

Is there an annotation tool to produce multi-layer (for overlapping objects within the image) and pixel-exact image annotations on iPad?
Background to my question:
There are a lot of annotation tools for Linux and Windows
(e.g. the ones listed here: https://www.v7labs.com/blog/best-image-annotation-tools
or here: https://humansintheloop.org/10-of-the-best-open-source-annotation-tools-for-computer-vision-2021/)
I haven't tried all of them, but none of them seem to be available for the iPad.
I am using the iPad to make image annotations because it is faster for me to annotate with a stylus than with a mouse on the PC (I can also do annotations when I am not in the office). Further, most annotation tools feel clunky and overloaded with bureaucracy (this is only my subjective opinion).
I am currently using Adobe Fresco (which sucks only because it's not open source and a little expensive), which works well in combination with a small script that I wrote to convert the .psd files into torch tensors.
My workflow with Fresco is fast and the annotations are very precise. However, I was bashed by a reviewer when submitting a paper mentioning that the annotations were produced with Fresco. The paper was rejected because the reviewer thought annotating images with Fresco was ridiculous and that there are supposedly much better alternatives (which he did not mention)... and which I am still too dumb to find. Any suggestions?
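For what it's worth, the PSD-to-tensor step mentioned above can be done in a few lines. This is only a sketch of the general idea, not the poster's actual script; it assumes the psd-tools and torch packages and a hypothetical annotations.psd with one annotation class per layer:

```python
# Rough sketch of converting .psd annotation layers into torch tensors.
# Assumes psd-tools and torch; layer/file names are placeholders.
import numpy as np
import torch
from psd_tools import PSDImage

def psd_to_masks(path):
    psd = PSDImage.open(path)
    masks = {}
    for layer in psd:
        # Composite each layer onto the full canvas so all masks share one size
        img = layer.composite(viewport=psd.viewbox)
        if img is None:
            continue
        alpha = np.asarray(img.convert("RGBA"), dtype=np.float32)[..., 3] / 255.0
        masks[layer.name] = torch.from_numpy(alpha > 0.5)  # boolean mask per layer
    return masks

masks = psd_to_masks("annotations.psd")
for name, m in masks.items():
    print(name, tuple(m.shape), int(m.sum()), "annotated pixels")
```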

Error - WebApp Implementation with Principal Component Analysis (PCA) - Azure ML Studio

I applied Principal Component Analysis (PCA) to my data set in order to achieve better model accuracy, reducing the original 13 feature dimensions to 10. Everything is fine up to this point.
After deploying the model as a web app, it builds and seems fine in the studio.
In the testing phase of model prediction, instead of displaying 10 features as input, the UI is showing the original 13 features, and the output is showing the 10 newly generated features without any feature names. Also, prediction is not working at all after executing it.
Attached are the screenshots; please refer to them.
Could you please also show the diagram of your experiment? This kind of issue happens when you are not setting the input of the model correctly according to your requirement. Please double-check how you define your experiment.
One thing I want to highlight: in the portal/quick test page, all the input data will be the same as your original imported data, according to the documentation:
https://learn.microsoft.com/en-us/azure/machine-learning/classic/tutorial-part3-credit-risk-deploy#deploy-as-a-new-web-service
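For intuition, here is the same situation reproduced outside the Studio with scikit-learn (dummy data, hypothetical pipeline): when PCA lives inside the trained pipeline, callers of the deployed model still supply the 13 original features, and the 10 unnamed components only exist internally, which matches what the quick-test page shows.

```python
# Illustration (scikit-learn, not Azure ML Studio): the deployed pipeline still takes
# the 13 raw features as input; the 10 PCA components only exist inside the pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X = np.random.rand(200, 13)          # 13 original feature columns (dummy data)
y = np.random.randint(0, 2, 200)     # dummy labels

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Callers (and the web service front end) supply the original 13 features...
sample = np.random.rand(1, 13)
print(model.predict(sample))
# ...while the 10 unnamed components are an internal, derived representation.
print(model.named_steps["pca"].transform(sample).shape)  # (1, 10)
```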

Temporary tiles cache for Mapserver

I searched Google and StackOverflow to see if anyone has a solution for my problem, but didn't find anyone with the same problem.
Currently I'm running a Debian machine with Mapserver installed on it. The server also runs a webserver for displaying map data in the browser. Map generation is dynamic: based on the layer definitions in the database, I build a mapfile in PHP, and based on that generated mapfile the map is shown to the user. The data is defined in the database and as SHP files (both combined in a single mapfile).
It is fully dynamic. What I mean by that is that the user can enable/disable any of the layers or click inside a polygon (select some points on the map), which colors the selection (generates a new mapfile based on the selection and re-generates the tiles).
The execution of all that code, from selecting some area to coloring the selected items, sometimes takes too much time for a good user experience.
As a solution I'd like to use some kind of temporary tile cache that can be used for a single user, whose contents can be deleted when the user selects some items on the map or enables/disables one of the layers.
P.S. I have already done all the optimizations suggested in the Mapserver documentation.
Thanks for any help.
It sounds to me like your problem is not going to be helped by server-side caching. If all of the tiles depend on user selections, then you're going to be generating a bunch of new tiles every time there's an interaction.
I've been using MapCache to solve a similar problem, where I am rendering a tileset in response to a user query. But I've broken my tiles up into multiple logical layers, and I do the compositing on the browser side. This lets me cache the tiles for various queries server-side, and it sped up performance immensely. I did seed the cache down to zoom level 12, and I needed to use the BerkeleyDB cache type to keep from running out of inodes.
I'm using Leaflet.js for the browser-side rendering, but you should also consider OpenLayers.
After looking at the source code, I have some other ideas.
It looks like you're drawing each layer the same way each time. Is that right? That is, the style and predicate of a particular layer never change: each user sees the image for that layer the same way, if they have selected the layer. But the combination of layers you show does change, based on an OpenLayers control? If that's the case, you don't need per-user caching on the server. Instead, use per-layer caching, and let the user's browser handle the client-side caching.
A quick technique for finding slow layers is to turn them all off, then re-enable them one by one to find the culprit. Invoke Mapserver from the command line and time the runs, for greater precision than you'll get by running it from your webserver.
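As a rough illustration of that command-line timing (just a helper script, not MapServer tooling), something like the following renders each layer on its own and times the runs. The mapfile path and layer names are placeholders, and the shp2img utility is named map2img in newer MapServer releases:

```python
# Rough timing harness: render each layer separately from the command line and time it.
# Assumes MapServer's shp2img utility; mapfile and layer names are placeholders.
import subprocess
import tempfile
import time

MAPFILE = "/var/www/maps/project.map"
LAYERS = ["parcels", "roads", "dof_raster"]      # hypothetical layer names

for layer in LAYERS:
    with tempfile.NamedTemporaryFile(suffix=".png") as out:
        start = time.perf_counter()
        subprocess.run(["shp2img", "-m", MAPFILE, "-l", layer, "-o", out.name],
                       check=False)
        print(f"{layer}: {time.perf_counter() - start:.2f}s")
```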
You mentioned you're serving the images in Google Web Mercator (EPSG:3857) while the layers are in Gauss-Krüger (EPSG:3912). Reprojecting on the fly is expensive; reprojecting rasters on the fly is very expensive. If you can, you should reproject them ahead of time and store them in 3857 (add an additional geometry column).
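If the vector data lives in PostGIS, that one-off reprojection can look like the sketch below (psycopg2; the table, column, and connection details are made up for illustration):

```python
# One-off reprojection sketch (psycopg2 + PostGIS): add an EPSG:3857 geometry column
# so MapServer doesn't reproject on every request. Table/column/credentials are made up.
import psycopg2

conn = psycopg2.connect(dbname="gis", user="mapuser", password="secret", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("ALTER TABLE parcels "
                "ADD COLUMN IF NOT EXISTS geom_3857 geometry(Geometry, 3857);")
    # Assumes the source geometry column already has its SRID set
    cur.execute("UPDATE parcels SET geom_3857 = ST_Transform(geom, 3857) "
                "WHERE geom_3857 IS NULL;")
    cur.execute("CREATE INDEX IF NOT EXISTS parcels_geom_3857_idx "
                "ON parcels USING GIST (geom_3857);")
conn.close()
```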
I don't know what a DOF file is--maybe Digital Obstacle File? Perhaps preload the DOF file into PostGIS too? That would eliminate the two pieces you think are problematic.
Take a look at the SQL queries that PostGIS is performing, and make sure they are using indexes.
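One way to check, again with placeholder table and bounding-box values, is to run the kind of bounding-box filter MapServer issues through EXPLAIN ANALYZE and look for an index scan rather than a sequential scan:

```python
# Quick check (psycopg2) that the spatial filter actually hits an index.
# Table name, credentials, and bounding box are placeholders.
import psycopg2

conn = psycopg2.connect(dbname="gis", user="mapuser", password="secret", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        EXPLAIN ANALYZE
        SELECT gid, geom_3857 FROM parcels
        WHERE geom_3857 && ST_MakeEnvelope(1650000, 5800000, 1660000, 5810000, 3857);
    """)
    for (line,) in cur.fetchall():
        print(line)   # look for "Index Scan using ... gist" rather than "Seq Scan"
conn.close()
```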
In any case, these individual layers should go into MapCache, in my opinion. Here is a video of a September 2014 talk by the MapCache project leader.

iOS automated test framework that allows image comparison

There are a lot of iOS automated test frameworks out there, but I'm looking for one that allows comparison of images with previous images at that location. Specifically, the best method would be for me to be able to take an element that contains an image, such as a UIImageView, and test to see whether the image in it matches a previously taken image during that point of the testing process.
It's unclear to me which of the many frameworks I've looked at allow this.
You're looking for Zucchini!
It allows you to take screenshots at different points in the app testing process and compare them against previous versions. There is some help available, such as this video and this tutorial.
For comparing specific parts of the UI, you can use the masks feature to restrict the comparison to the relevant regions.
You can also check out the demo project.
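If you only need the general mask idea rather than Zucchini itself, the comparison boils down to something like this sketch (Python/NumPy, not Zucchini's own code; the file names and rectangle are placeholders):

```python
# Generic illustration of mask-based screenshot comparison: ignore regions that
# legitimately change (clock, carrier bar) before diffing against a reference.
import numpy as np
from PIL import Image

def masked_diff(reference_path, screenshot_path, ignore_boxes):
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.int16)
    cur = np.asarray(Image.open(screenshot_path).convert("RGB"), dtype=np.int16)
    assert ref.shape == cur.shape, "screenshots must be the same size"
    keep = np.ones(ref.shape[:2], dtype=bool)
    for left, top, right, bottom in ignore_boxes:   # pixel rectangles to ignore
        keep[top:bottom, left:right] = False
    diff = np.abs(ref - cur).max(axis=2)            # per-pixel channel-max difference
    return float((diff[keep] > 10).mean())          # fraction of kept pixels that changed

# e.g. mask out a 40px status bar at the top of a 750px-wide screenshot
changed = masked_diff("reference.png", "screenshot.png", [(0, 0, 750, 40)])
print(f"{changed:.2%} of compared pixels differ")
```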

How can I process a -dynamic- videostream and find the (relative) location of a "match" in that videostream?

As the question states: how is it possible to process a dynamic video stream? By "dynamic" I actually mean I would like to just process what is on my screen, so the image array should be some sort of "continuous screenshot".
I'd like to process the video / images based on certain patterns. How would I go about this?
It would be perfect if there already were existing components (and there probably are). I need to be able to use the location of the matches (or partial matches). A .NET component for the different requirements could also be useful, I guess...
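The question mentions .NET, but to make the "continuous screenshot plus pattern location" idea concrete, here is a rough Python sketch using mss for screen capture and OpenCV template matching; the pattern file, monitor index, and threshold are placeholders:

```python
# Sketch: grab the screen repeatedly and locate a template image, reporting its
# absolute and relative position. pattern.png is a placeholder file.
import cv2
import numpy as np
from mss import mss

template = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)   # pattern to find

with mss() as grabber:
    monitor = grabber.monitors[1]                  # primary monitor
    for _ in range(100):                           # sample 100 frames; loop as needed
        frame = np.asarray(grabber.grab(monitor))  # BGRA screenshot
        gray = cv2.cvtColor(frame, cv2.COLOR_BGRA2GRAY)
        result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(result)
        if score > 0.8:                            # confidence threshold
            print(f"match at ({x}, {y}), "
                  f"relative ({x / gray.shape[1]:.2f}, {y / gray.shape[0]:.2f})")
```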
You will probably need to read up on computer vision before you attempt this. There is nothing really special about video that separates it from still images. The process you might want to look at is (a rough sketch follows the list below):
Acquire the data
Split the data into individual frames
Remove noise (Use a Gaussian filter)
Segment the image into the sections you want
Extract the connected components of the image
Find a way to quantize the image for comparison
Store/match the components to a database of previously found components
With this database/datastore you'll have information on previously found matches available later. Do what you like with it.
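Here is the sketch referred to above: a minimal OpenCV version of the blur / segment / connected-components / quantize steps, with an in-memory dict standing in for the component database (the input file name is a placeholder):

```python
# Minimal OpenCV sketch of the steps above: blur, segment, extract connected
# components, and keep a crude signature of each for later matching.
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)        # one extracted frame

blurred = cv2.GaussianBlur(frame, (5, 5), 0)                 # remove noise
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU) # segment (Otsu threshold)

count, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

component_db = {}                                            # stand-in for a real datastore
for label in range(1, count):                                # label 0 is the background
    x, y, w, h, area = stats[label]
    if area < 50:                                            # drop tiny specks
        continue
    patch = mask[y:y + h, x:x + w]
    # Very crude "quantized" signature: downsample the patch to an 8x8 binary grid
    signature = (cv2.resize(patch, (8, 8)) > 127).astype(np.uint8)
    component_db[label] = {"bbox": (x, y, w, h), "signature": signature}

print(f"stored {len(component_db)} components")
```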
As far as software goes:
Most of these algorithms are not too difficult. You can write them yourself. They do take a bit of work though.
OpenCV does a lot of the basic stuff, but it won't do everything for you
Java: JAI, JHLabs [for filters], various other third-party libraries
C#: AForge.NET
