Orbit Books has started creating AR book covers using Google Lens. For example, open up Lens and look at this image: https://www.hachettebookgroup.com/wp-content/uploads/2022/05/9780316368865-1.jpg?fit=436%2C675
I've been trying to figure out how it was done, but I can't find anything. Sure, I can create a custom app that finds the image and overlays an AR environment on it, but I can't find any documentation on how to integrate with Google Lens itself.
Has anyone seen this before, or does anyone know the steps to, for example, add a similar experience to my own books? (I assume it's a closed system, since they wouldn't want just anyone creating custom triggers on anything.)
Any advice would be great. I have a group of authors that would love to add a similar experience to their books.
I am trying to make a photo editing application for iOS, but am not sure where to start looking. I have attached an image made in Word... that hopefully simply depicts what I am trying to achieve. It will involve manipulating individual pixels of a shape/image and masking/clipping. How should I start, and what resources are available to me other than the developer docs?
Cheers
If you are not new to programming, I would suggest a trial-and-error kind of approach. If it were me, I would follow an approach like this:
Figuring out what to do / what not to do
Do I need to develop the tech I want from scratch, or can I use some pods?
What are the good reads and example apps? (Try this)
Development approach
Build a photo gallery to pick images from
Build an EDIT mode screen
Get a set of template overlay images
Figure out how to overlay them on top of each other (see the sketch after this list)
Export the final picture as one picture
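For the last two steps, a minimal UIKit sketch could look like the following (the function and parameter names are placeholders of my own, not from any particular tutorial):

```swift
import UIKit

// Minimal sketch: composite a template overlay onto a base photo and
// export the result as a single UIImage (steps 4 and 5 above).
// Assumes both images are already loaded and share the same orientation.
func composite(baseImage: UIImage, overlayImage: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: baseImage.size)
    return renderer.image { _ in
        // Draw the photo first, then the overlay stretched over it.
        baseImage.draw(in: CGRect(origin: .zero, size: baseImage.size))
        overlayImage.draw(in: CGRect(origin: .zero, size: baseImage.size),
                          blendMode: .normal, alpha: 1.0)
    }
}
```

The returned UIImage can then be written to the photo library or shared as one flattened picture.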
The developer documentation is essential when it comes to learning new APIs, but sometimes it can be a little overwhelming. You can try reading the raywenderlich.com tutorials on Core Image first to get an idea (link here), or find a book on computer graphics. It is essential to understand at least the underlying techniques to write efficient image-processing code. In many cases you'll find there is a more elegant technique than just looping over pixels and modifying them one by one.
Then you can continue with reading on image compositing using Core Image, for example.
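To give a taste of that, here is a minimal Core Image sketch (my own example, not taken from those tutorials) that applies a built-in filter without any per-pixel loop:

```swift
import CoreImage
import UIKit

// Minimal sketch: apply a built-in sepia filter with Core Image instead
// of looping over pixels. The filter and key names are standard Core Image.
func sepia(_ input: UIImage, intensity: Double = 0.8) -> UIImage? {
    guard let ciInput = CIImage(image: input),
          let filter = CIFilter(name: "CISepiaTone") else { return nil }
    filter.setValue(ciInput, forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```

Core Image builds a filter graph and runs it on the GPU where possible, which is exactly the kind of "more elegant technique" meant above.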
I am planning to detect an image in a newspaper and play the video relevant to it. I have seen several newspaper-reading AR apps that include this feature, but I couldn't find out how it is done. How can I do it?
I don't expect any code, but I would like to know what steps I should follow to do this. Thank you.
You need to browse through the available marker-based AR SDKs. Such SDKs let you define in advance the database of images you would like to detect and respond to; once any of these images is detected at runtime, you get some kind of event with data on the detected image.
Vuforia is considered a good one and it has good samples, so it is supposed to be easier to start with. You should also check out Kudan, and there are more.
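Vuforia's image database is configured in its own Target Manager rather than in code, but to make the "register images, get an event on detection" flow concrete, here is a minimal sketch of the same marker-based idea using Apple's ARKit image tracking instead (the resource group name "NewspaperPages" is a placeholder; wiring up the actual video overlay is left out):

```swift
import ARKit
import SceneKit

// Minimal sketch of marker-based detection with ARKit image tracking:
// register reference images up front, then react when one is detected.
class NewspaperARDelegate: NSObject, ARSCNViewDelegate {

    // The "image database": reference images bundled in an asset catalog group.
    func makeConfiguration() -> ARImageTrackingConfiguration {
        let config = ARImageTrackingConfiguration()
        if let markers = ARReferenceImage.referenceImages(inGroupNamed: "NewspaperPages",
                                                          bundle: nil) {
            config.trackingImages = markers
        }
        return config
    }

    // Called when one of the registered images is detected in the camera feed.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        print("Detected marker: \(imageAnchor.referenceImage.name ?? "unknown")")
        // Attach a video plane (e.g. an SKVideoNode backed by AVPlayer) to `node` here.
    }
}
```

Whichever SDK you pick, the steps are the same: build the image database, start the tracker, and respond to the detection event with your video overlay.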
I'm taking a beginning mobile development class, and my professor wants me to jump right in and help him with an app of his written in Objective-C, and I have 3 months. I have taken a few other CS classes so far, but know next to nothing about mobile app development.
This app is basically a songbook that holds many PDF files of music scores. The first (of multiple things) that he wants me to add is the ability for a user to annotate the music score with highlighter, pen, and eraser. Since there are many music scores, I would need to have the app save these annotations for each score, and allow editing by the user later if needed (i.e. allow the user to go back and erase stuff and add more annotations to a given score).
I'm in the planning phase and I'm trying to figure out the best way to do this. I was thinking of having the annotations occur on a second view layer, and then saving that layer as an image so that it can be overlaid back onto the music notes sheet at any time (for the user to view). My concern is, would the user be able to re-annotate this layer once it has been saved as an image (i.e. erase and add more annotations, then save it again)?
Or what's the best way to go about this? I would really appreciate any advice because I am in over my head!
Well, this is a very broad question, but let me help you with some links; you will need to go through them.
They should help you turn your requirements into an app.
There are many third-party frameworks for PDF annotation:
PSPDFKit (Paid)
FastPDFKit
Poppler (OpenSource)
There are also some SO question links that may help with PDF annotation:
Add Annotation to PDF
Annotation on a PDF
Programmatically add annotations on PDF
Some GitHub links:
LazyPDF
Note: LazyPDFKit is no longer maintained; use the source code to fix the bugs.
Hope this helps you in your research.
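One more option, not in the list above but relevant to the editability concern in the question: Apple's own PDFKit (iOS 11+) stores drawings as real PDF annotations rather than a flattened image, so strokes stay individually removable later. A minimal sketch in Swift, assuming the user's stroke has already been captured as a UIBezierPath:

```swift
import PDFKit
import UIKit

// Minimal sketch: save a drawn stroke as an editable ink annotation on a
// PDF page, rather than flattening it into an image.
func addInkAnnotation(to page: PDFPage, path: UIBezierPath) {
    let annotation = PDFAnnotation(bounds: path.bounds,
                                   forType: .ink,
                                   withProperties: nil)
    annotation.color = .yellow   // highlighter-style color
    annotation.add(path)         // attach the stroke to the annotation
    page.addAnnotation(annotation)
    // Later, page.removeAnnotation(annotation) supports the eraser case.
}
```

Saving the document afterwards with PDFDocument's write(to:) keeps the annotations attached to each score.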
Nowadays, I want to do some research on augmented reality technology. In particular, I would like to match a 2D image to a 3D model, so that scanning the 2D image brings up the 3D model. I know that there are a lot of SDKs (like Metaio and Wikitude) and software that can realize this in a mobile app. However, what I want to do is realize this on a website. I hope that people who use it won't need to download a particular mobile app, but can just open a website and then scan a picture.
So, I'd like to know: as the title asks, can AR be realized on a website? If yes, how can I do it, and is there any software like Metaio Creator to do this? If not, why?
Thank you to anyone who would like to answer my naive question.
May I recommend our completely web-based AR & VR tool holobuilder.com by bitstars.com?
It supports 360-degree photospheres that can be enhanced with custom 3D models and then embedded directly into your website as an iframe; it has native support for stereoscopic view mode and much more.
For your use case you could have a look at the lower part of this blog post, where you will find information and an embedded example presentation with photosphere imagery containing 3D elements:
http://heyholo.com/google-pushes-vr-great-for-tools-like-holobuilder/
If you want to start creating, I recommend the beginner's guide:
https://medium.com/@maxspeicher/the-definite-guide-to-holobuilder-3b62a54d303e
The CV feature tracking you asked about cannot yet be realized on the web without an app or a special browser. What you can do, however, is display perspectively correct 3D elements over the camera image and move them using the device sensors. That should be about as performant as within the player app.
We hope this can somehow help push your research forward, and we would love to read your feedback. In case of any questions, please do not hesitate to ask, here or through any other contact channel!
I need help developing a small application using augmented reality. I have spent almost a week trying, but with no proper solution. I tried some sample code but am still not successful.
I have seen many videos and want to develop something like them.
For example, my code should detect only a square, or any particular shape. Then, after detecting the square, another image should appear on the screen.
Please help me out.
This stuff is hard, but most new cool things are until they are no longer cool or new.
You can play with ARToolKit until you are familiar with the functionality, then attempt to dive into the settings and mess with those, and maybe then look at the source.
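ARToolKit drives detection from its own marker configuration, but if the goal is just "detect a square, then show an image" on iOS, here is a minimal alternative sketch using Apple's Vision framework (the camera capture and the overlay drawing are left out; `cgImage` stands in for a captured frame):

```swift
import Vision
import UIKit

// Minimal sketch: find square-ish rectangles in a frame. Each observation
// carries normalized corner points you can use to position an overlay image.
func detectSquares(in cgImage: CGImage,
                   completion: @escaping ([VNRectangleObservation]) -> Void) {
    let request = VNDetectRectanglesRequest { request, _ in
        completion((request.results as? [VNRectangleObservation]) ?? [])
    }
    // Aspect ratio is shorter side over longer side, so 1.0 is a perfect square.
    request.minimumAspectRatio = 0.8
    request.maximumAspectRatio = 1.0
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Once a square is detected, you can convert its normalized corners to view coordinates and place your image view over them.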