I'm currently trying to implement UI tests in my iOS app. An important feature of my app is the ability to let users scan QR codes so they can quickly retrieve the IDs of certain objects. I'm struggling to write tests for this, however. What I would like to achieve is to supply mock images to the camera during UI testing so I can essentially simulate the scanning action. So far I haven't been able to find anything that mentions whether it is even possible to supply image data to the camera, let alone how this would be implemented.
So my question is, is this even possible?
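For context, the rough shape of the test hook I have in mind is sketched below. `QRScannerService`, `StubQRScanner`, and the `-UITestScannedCode` launch argument are all names I'm making up for illustration, not existing API:

```swift
import Foundation

// Hypothetical seam: the scanning screen talks to this protocol instead of
// using AVCaptureSession directly, so tests can substitute a fake.
protocol QRScannerService {
    func startScanning(onCode: @escaping (String) -> Void)
}

// The real implementation would wrap AVCaptureMetadataOutput; omitted here.
struct CameraQRScanner: QRScannerService {
    func startScanning(onCode: @escaping (String) -> Void) {
        // AVFoundation camera setup goes here.
    }
}

// Stub used only when the UI test passes a known launch argument.
struct StubQRScanner: QRScannerService {
    let code: String
    func startScanning(onCode: @escaping (String) -> Void) {
        onCode(code) // pretend the camera saw a QR code immediately
    }
}

// Factory the app calls when building the scanning screen.
func makeScanner() -> QRScannerService {
    let args = ProcessInfo.processInfo.arguments
    if let i = args.firstIndex(of: "-UITestScannedCode"), i + 1 < args.count {
        return StubQRScanner(code: args[i + 1])
    }
    return CameraQRScanner()
}
```

The UI test would then just set `app.launchArguments += ["-UITestScannedCode", "OBJECT-1234"]` before `app.launch()`. The part I can't figure out is whether I can avoid a seam like this entirely and feed actual image data to the camera instead.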
We're looking to share AR experiences (ARWorldMap) over the web (not necessarily with devices nearby; I'm referring to data that can be stored on some server and then retrieved by another user).
Right now we're looking into ARWorldMap, which is pretty awesome, but I think it only works on the same device AND with devices nearby. We want to remove those constraints (and therefore save the experience over the web on some server) so that everyone else can automatically see the 3D objects with their devices (not necessarily at the same time) exactly where they were placed.
Do you know if it's possible to send the archived data (ARWorldMap) to some server in some kind of format so that another user can later retrieve that data and load the AR experience on their device?
The ARWorldMap contains the feature points in the environment around the user, for example the exact position and size of a table, along with all the other points the camera has found. You can visualize those with the debugOptions.
It does not make sense to share them with a user who is not in the same room.
What you want to do is share the interactions between the users, e.g. when an object is placed or rotated.
This is something you would need to implement yourself anyway, since ARWorldMap only contains anchors and feature points.
I can recommend the "Inside SwiftShot" session from last year's WWDC as a starting point.
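To make that concrete, here is a minimal sketch of the sort of interaction payload you would define yourself. `PlacementEvent` and its field layout are just illustrative, not ARKit API:

```swift
import Foundation

// Hypothetical payload describing a user interaction (placing/rotating an object).
// Each client applies these events to its own local anchors instead of
// downloading someone else's raw world map.
struct PlacementEvent: Codable {
    let objectID: String
    let position: SIMD3<Float>     // offset relative to a shared reference anchor
    let eulerAngles: SIMD3<Float>  // orientation of the placed object
    let timestamp: Date
}

let event = PlacementEvent(objectID: "chair-01",
                           position: SIMD3<Float>(0.2, 0.0, -0.5),
                           eulerAngles: SIMD3<Float>(0, .pi / 2, 0),
                           timestamp: Date())

if let json = try? JSONEncoder().encode(event) {
    // Send `json` to your server or over your networking layer.
    print(String(data: json, encoding: .utf8) ?? "")
}
```

This way only a few bytes travel over the network per interaction, instead of a multi-megabyte world map.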
Yep, technically it's possible, according to the docs here. You will have to use the NSSecureCoding protocol, and you might have to extend it yourself if you aren't using a Swift backend, though. It's still possible, since ARKit anchors are defined by a mix of point maps, rotational data, and image algorithms. Just implement the relevant portions of Codable to produce JSON and voilà, it's viable for most backends. Of course, this is easier said than done.
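For the archiving step specifically, here is a minimal sketch; the upload/download transport is left out, and only the NSKeyedArchiver round trip is shown, which works because ARWorldMap conforms to NSSecureCoding:

```swift
import ARKit

// Archive the current world map into Data that can be stored on a server
// (upload the bytes as-is, or base64-encode them into a JSON field).
func archive(worldMap: ARWorldMap) throws -> Data {
    // requiringSecureCoding is allowed because ARWorldMap adopts NSSecureCoding.
    return try NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                            requiringSecureCoding: true)
}

// Restore a world map from Data downloaded from the server.
func unarchiveWorldMap(from data: Data) throws -> ARWorldMap {
    guard let map = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                           from: data) else {
        throw CocoaError(.coderReadCorrupt)
    }
    return map
}

// Usage: fetch the map from the running session, archive it, then upload it
// however you like, e.g.
// sceneView.session.getCurrentWorldMap { worldMap, error in ... }
```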
We are working with cloud recognition. We have to restrict recognition of a particular image target to no more than 2 recognitions on each device.
We know we have to use the VWS API for that. But our question is how we can restrict recognition of the image target on a particular device once it reaches that limit, while it still gets recognized on other devices that have not exceeded 2 recognitions.
How can we achieve this?
I thought this was impossible, but after updating to Vuforia 4 I noticed that their prefab scripts use the RequireComponent attribute, which has a lot of interesting applications.
Vuforia basically uses it to make sure the device has a camera, so in their prefab scripts you can see RequireComponent(typeof(Camera)).
With respect to your problem, you could do something like RequireComponent(iPhone), because while playing with it I noticed that was one of the options offered for the brackets.
Check it out and let us all know. I haven't been able to try it out, so I can't confirm this works.
I thought it would be fun to experiment with algorithms that can play the game 2048. I'm familiar with Objective-C development and iOS. Is there any way, perhaps through scripting in the iPhone simulator, automated testing frameworks, or some other method, to get the state of the board and then simulate the appropriate touches in the simulator or on the device?
This question doesn't concern the algorithms determining the correct move, just the ability to ascertain the board state, and then execute whatever move the algorithm determines to be best.
Basically I'm looking for a method that allows you to automate user actions on an app that you do not have the source code for.
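For the touch-simulation half, the closest I've come up with is driving coordinates from an XCUITest runner, roughly like the sketch below. The bundle identifier is a placeholder, I haven't verified that the runner is allowed to attach to an app I didn't build, and reading the board state isn't covered at all:

```swift
import XCTest
import CoreGraphics

final class Game2048SwipeTests: XCTestCase {
    func testSwipeUp() {
        // Placeholder bundle identifier; use the one of the installed 2048 app.
        let app = XCUIApplication(bundleIdentifier: "com.example.2048")
        app.activate()

        // Simulate a swipe by dragging between two normalized screen coordinates.
        let start = app.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.7))
        let end = app.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.3))
        start.press(forDuration: 0.05, thenDragTo: end)
    }
}
```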
I hope this is not a duplicate, but I couldn't find what I'm interested in.
I'm trying to build some automated tests that will use an iPhone. The best tool I can find for this seems to be UI Automation, but from what I can tell so far, I need to run it against a specific app. My tests need to be more general. For example, I want to be able to answer an incoming call in the default dialer.
I would like to be able to automate the iPhone itself, not a specific app. My tests might involve switching between apps or making calls. The main features I need are to be able to take a screenshot and touch certain coordinates on the screen, regardless of what is on the screen and running at the time.
Can anyone suggest the proper way to set this up? I'd like to use the supported UI Automation tools, but would like to use them in a more general way.
Thanks
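For what it's worth, the closest I can sketch with the supported XCTest UI testing APIs looks like the following: attaching to Springboard by bundle identifier, grabbing a full-screen screenshot, and tapping a normalized coordinate. I don't know whether answering a call is reachable this way, so treat it purely as an illustration of the screenshot/tap part:

```swift
import XCTest
import CoreGraphics

final class DeviceLevelTests: XCTestCase {
    func testScreenshotAndTap() {
        // Attach to the home screen instead of a specific app under test.
        let springboard = XCUIApplication(bundleIdentifier: "com.apple.springboard")
        springboard.activate()

        // Take a full-screen screenshot, regardless of which app is frontmost,
        // and keep it in the test results.
        let screenshot = XCUIScreen.main.screenshot()
        let attachment = XCTAttachment(screenshot: screenshot)
        attachment.lifetime = .keepAlways
        add(attachment)

        // Tap a position on screen via normalized coordinates.
        springboard.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.9)).tap()
    }
}
```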
There are a lot of iOS automated test frameworks out there, but I'm looking for one that allows comparison of images with previous images at that location. Specifically, the best method would be for me to be able to take an element that contains an image, such as a UIImageView, and test to see whether the image in it matches a previously taken image during that point of the testing process.
It's unclear to me which of the many frameworks I've looked at allow this.
You're looking for Zucchini!
It allows you to take screenshots at different points in the app testing process and compare them against previous versions. There is some help available, such as this video and this tutorial.
For comparing specific parts of the UI, you can use the masks feature they support, so only the relevant regions of the screen are compared.
You can also check out the demo project.
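If you just want the core idea without adopting Zucchini, a bare-bones version can be done directly in an XCTest UI test: capture a screenshot of the element and compare it against a reference image you recorded earlier. This is not Zucchini's API, just a rough illustration of the comparison step; "logoImage" and reference.png are placeholders, and exact byte comparison is far more brittle than a proper tolerance-based image diff:

```swift
import XCTest

final class ImageComparisonTests: XCTestCase {
    func testLogoMatchesReference() throws {
        let app = XCUIApplication()
        app.launch()

        // "logoImage" is a placeholder accessibility identifier on the UIImageView.
        let element = app.images["logoImage"]
        let current = element.screenshot().pngRepresentation

        // reference.png is a previously recorded screenshot bundled with the test target.
        let url = try XCTUnwrap(Bundle(for: ImageComparisonTests.self)
            .url(forResource: "reference", withExtension: "png"))
        let reference = try Data(contentsOf: url)

        // Exact byte comparison; a real setup would allow a small pixel tolerance.
        XCTAssertEqual(current, reference)
    }
}
```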