Having read Google's paper from 2009 [1] about CAPTCHAs based on image rotation, I wonder whether they have been implemented anywhere. Has anybody seen an actual system that uses them? (I haven't created a Google account in a while, but I believe they still use "normal" CAPTCHAs.)
http://research.google.com/pubs/archive/35157.pdf
Every time you create an ASA anchor, no matter how long you 'scan' the environment, only the few seconds just before you call 'save' actually count. But it's hard to visualize just that portion of the point cloud, and you never know which points the anchor actually relies on. Simply visualizing the entire point cloud doesn't help.
So how do we visualize the point cloud an anchor actually relies on? Perhaps ASA should provide a 'point-cloud visualizer' for this. It might not sound like a big deal, but it's an important UX feature: it would give users proper feedback while creating an anchor. Currently it's very hard to make the anchor-creation experience feel polished.
As of December 2020, such functionality does not exist in the Azure Spatial Anchors SDK. There is a feature request for this on the Azure Spatial Anchors feedback site already. The team uses the feedback site to help prioritize its work.
(It looks like the question's author, Cliff, created this feature request.)
I see that ImageMogr2 is some kind of tool used by qiniu.com (a Chinese hosting provider). Could someone help me understand what it is, and what similar technology is available from other hosting providers?
Yes.
A very similar service from Tencent Cloud has exactly the same name.
It's an image-processing utility that can scale, crop, and rotate images on the fly using "URI programming": you define the processing command and its parameters in the request URI, and the service returns the processed image derived from the original you uploaded earlier.
Their documentation and some simple examples are available on their website, e.g. https://developer.qiniu.com/dora/api/1270/the-advanced-treatment-of-images-imagemogr2 (though it may require reading Chinese).
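As a concrete illustration of the URI style, here is a small sketch of how such a processing URL is assembled. The pipeline segments (`thumbnail`, `rotate`) follow the parameter names in qiniu's imageMogr2 documentation, but the host and image key below are made up:

```javascript
// Build an imageMogr2-style processing URI by appending pipeline
// segments (command/value pairs) to the original image's URL.
function imageMogr2Url(baseUrl, ops) {
  const pipeline = Object.entries(ops)
    .map(([cmd, value]) => `${cmd}/${encodeURIComponent(value)}`)
    .join('/');
  return `${baseUrl}?imageMogr2/${pipeline}`;
}

// Scale to fit within 300x300, then rotate 90 degrees clockwise.
const url = imageMogr2Url('https://example.com/photo.jpg', {
  thumbnail: '300x300',
  rotate: 90,
});
console.log(url);
// → https://example.com/photo.jpg?imageMogr2/thumbnail/300x300/rotate/90
```

Requesting that URL would return the cropped/rotated derivative rather than the original, with no image-processing code on your side.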
There are similar solutions from US companies, e.g.
https://cloudinary.com/
I'm trying to write an app for detecting "where you are" in a building using ARCore. I'd like to use previously learnt and saved feature points to provide the initial sync position, and then to help continuously update the position accurately. But this feature does not currently appear to be supported in ARCore.
Currently I'm using tracked images as a way to do an initial sync. It works, but not brilliantly - alignment is often a few degrees off and you have to approach the image pretty slowly and deliberately. And then once synced there is drift... Yes, loop closing works pretty well when it gets back to somewhere it recognises, but it needs to build up that map every time you start the session.
So, the obvious question: are there any plans for Google to implement "Area Learning" as it existed back in Google Tango? Cloud Anchors look like some attempt at this, but that data is all hosted by Google, and there are strict limits on how long it is stored, so currently it's just not a viable solution. OTOH, Apple's ARKit now seems to provide just what is needed:
https://developer.apple.com/documentation/arkit/saving_and_loading_world_data
Does this mean that Apple / ARKit is the only way to go for the app? Hope not...
You might want to check out persistent Cloud Anchors, which are still in development.
From documentation:
Note: We’re currently developing persistent Cloud Anchors, which can be resolved for much longer. Before making the feature broadly available, we’re looking for more developers to help us explore and test persistent Cloud Anchors in real world apps at scale. See here if you’re interested.
I need to find a way to implement face detection and recognition completely offline, in a browser. A trained model specific to each user may be loaded initially; we only need to recognize one face per device. What is the best way to implement this?
I tried tracking.js for face detection, and it works, but I couldn't find a way to implement recognition with it. I also tried face-recognition.js, but it needs a Node server.
Take a look at face-api.js: it can both detect and recognize faces in real time, completely in the browser! It's made by Vincent Mühler, the creator of face-recognition.js.
(Face-api.js Github)
Things to note:
It's real-time; my machine gets ~50 ms (using the MTCNN model)
It's JavaScript, but it uses WebGL GPU acceleration under the hood, which is why it performs so well
It can also work on mobile! (tested on my S8+)
I recommend looking at the included examples as well, these helped me a lot
I have used the package to create a working project, and it was surprisingly easier than I expected, coming from a student who has just started web development. (I used it in a ReactJS app.)
Just like you, I searched around and tried things such as tracking.js, but to be honest they didn't work well.
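For the recognition step, face-api.js boils each face down to a descriptor vector and compares descriptors by Euclidean distance (a distance below roughly 0.6 is commonly treated as a match). A minimal sketch of that matching logic, using short made-up descriptors in place of the 128-dimensional ones you'd get from the library:

```javascript
// Euclidean distance between two descriptor vectors.
function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Return the best-matching enrolled label, or 'unknown' if nothing
// is close enough to the query descriptor.
function bestMatch(queryDescriptor, labeledDescriptors, threshold = 0.6) {
  let best = { label: 'unknown', distance: Infinity };
  for (const { label, descriptor } of labeledDescriptors) {
    const d = euclideanDistance(queryDescriptor, descriptor);
    if (d < best.distance) best = { label, distance: d };
  }
  return best.distance <= threshold ? best.label : 'unknown';
}

// Toy 3-D descriptors for illustration; real face-api.js descriptors
// are 128-D Float32Arrays extracted from face images.
const enrolled = [
  { label: 'alice', descriptor: [0.1, 0.2, 0.3] },
  { label: 'bob',   descriptor: [0.9, 0.8, 0.7] },
];
console.log(bestMatch([0.12, 0.21, 0.33], enrolled)); // → alice
```

For the one-face-per-device case in the question, `enrolled` would hold a single descriptor computed once during enrollment and stored locally, so matching stays entirely offline.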
I've recently submitted my iOS Quiz app to Apple but noticed that the file size for the app is pretty big (about 150 MB). Users would need to be connected to wifi in order to download it per Apple's rules. My quiz app is set up so users are given 4 choices and shown an image and must guess the correct answer from the image shown to them. How would I minimize the file size for my app so that it isn't so large? Is there a way I can host the images on a server without losing the functionality of my app? I heard of something like Backend Services but know nothing about it. If anyone can guide me in the right direction that would be awesome, thanks!
You can check out a free backend service like Parse; it could do the trick for you, especially because you don't have a lot (besides images, I guess) that'll live on the server side.
This also helped me start with using it.
Good luck :)
I'm assuming you have all the quiz data (questions and images) within your app bundle?
You can shrink it to next to nothing by moving all your questions and images to a backend server and serving the questions and image links using a simple JSON structure.
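For example, a payload for one question might look like the following (a hypothetical schema; the field names and URL are made up, so design your own):

```javascript
// What the server might return for one quiz question: the image stays
// on the server and only its URL ships over the wire.
const payload = JSON.stringify({
  id: 42,
  imageUrl: 'https://example.com/images/q42.jpg',
  choices: ['Paris', 'London', 'Rome', 'Berlin'],
  answerIndex: 0,
});

// Client side: parse the JSON, show the four choices, and download
// the picture from imageUrl only when this question is displayed.
const question = JSON.parse(payload);
console.log(question.choices.length); // → 4
console.log(question.imageUrl);
```

Because images are fetched on demand (and can be cached), the app bundle only contains code and UI assets, which is what brings it under the cellular download limit.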
You can build your own backend (Java/PHP/etc..) or look into using Parse.
Use JPEG images whenever possible; PNGs cost more space. Do not place JPEGs in xcassets, since they will be converted to PNGs there. If your pictures need transparency, it is better to use the WebP or JPNG format.
You can use CloudKit to host your data in a public database, and you won't need any backend knowledge to do that. This tutorial will help you understand the basics, and the WWDC videos cover some more; I suggest watching WWDC 2014's "Introducing CloudKit" and WWDC 2015's "CloudKit Tips and Tricks".