Codename One augmented reality

I'd like to port a small Android app which uses the camera and draws some info on the screen while pointing at a specific location. My intention is to port it to iOS. I've found that Codename One could be suitable, but I don't know if it supports the use of the camera. Could anyone help me? Thanks.

Codename One still doesn't support some of the more elaborate AR APIs introduced by Google/Apple, but we do support placing a camera viewfinder into your app with a new cn1lib: https://github.com/codenameone/CameraKitCodenameOne
Original answer below for reference:
It supports the use of the camera but currently doesn't support AR use cases where you draw on the camera viewfinder. You can do that with native code support, but since I understand that this is the main feature of the app, it's probably not the ideal solution.
If it's just a small feature within a larger app, then native interfaces make sense.

Related

A-Frame: FOSS Options for widely supported, markerless AR?

A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library. Example: this demo from Zapworks using their A-Frame SDK. https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution that uses only free/open-source components for doing this, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
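For reference, a minimal sketch of that webcam-display part using only standard browser APIs (the element ID is just a placeholder):

```typescript
// Minimal full-screen camera background using only browser APIs.
// Assumes an HTML page with: <video id="camera-feed" autoplay playsinline></video>
async function startCameraBackground(): Promise<void> {
  const video = document.getElementById("camera-feed") as HTMLVideoElement;

  // Ask for the rear ("environment") camera where available; browsers fall
  // back to whatever camera they have.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" },
    audio: false,
  });

  video.srcObject = stream;
  await video.play();

  // Stretch the video behind the page content (e.g. behind an A-Frame scene).
  Object.assign(video.style, {
    position: "fixed",
    top: "0",
    left: "0",
    width: "100%",
    height: "100%",
    objectFit: "cover",
    zIndex: "-1",
  });
}

startCameraBackground().catch((err) => console.error("Camera unavailable:", err));
```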
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, and even using the readings with gravity filtered out I wasn't able to get a high enough level of accuracy. (As it happened, this particular project did not need it.)
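For completeness, this is roughly the dead-reckoning approach I tried for position, sketched here with the raw devicemotion event instead of FULLTILT; the double integration drifts within seconds, which is why it wasn't accurate enough:

```typescript
// Rough position dead-reckoning from the accelerometer (gravity already
// removed by the browser via event.acceleration). This drifts badly and is
// shown only to illustrate the approach that turned out to be too inaccurate.
const velocity = { x: 0, y: 0, z: 0 };
const position = { x: 0, y: 0, z: 0 };
let lastTimestamp: number | null = null;

window.addEventListener("devicemotion", (event: DeviceMotionEvent) => {
  const acc = event.acceleration; // linear acceleration, gravity filtered out
  if (!acc || acc.x === null || acc.y === null || acc.z === null) return;

  const now = event.timeStamp;
  if (lastTimestamp !== null) {
    const dt = (now - lastTimestamp) / 1000; // seconds since last sample

    // First integration: acceleration -> velocity.
    velocity.x += acc.x * dt;
    velocity.y += acc.y * dt;
    velocity.z += acc.z * dt;

    // Second integration: velocity -> position. Sensor noise accumulates
    // here, which is what made the result unusable for tracking.
    position.x += velocity.x * dt;
    position.y += velocity.y * dt;
    position.z += velocity.z * dt;
  }
  lastTimestamp = now;
});
```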

Fix to get my wishes?

I have a problem when I drive 🚘: I forget to slow down for speed cameras 🎥 on highways. So I thought about whether it is possible to make an iOS app to fix this problem. I explain my idea in the image, but I can't convert it into code:
1. When I am about 100 meters before a speed camera 🎥, the app alerts me ("There is a speed camera, please slow down.").
2. Please just post code; I only have basic programming knowledge in Swift.
I'm not sure if I got you right, so please correct me if I'm wrong.
You want an iOS app that tells you if there is a speed camera on the road you're driving, right?
So you have a few options to achieve that:
have a look at the App Store; there are lots of such apps (e.g. TomTom) (easiest way)
if you want to build your own app, you can make use of the navigation SDK provided by Mapbox: https://www.mapbox.com/help/ios-navigation-sdk/ (some programming skills needed)
build your own app from scratch (much work and advanced programming skills)
If you want to build your app with Mapbox or on your own, you'll need the GPS locations of speed cameras, like those provided here: https://www.scdb.info/
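Whichever option you pick, the core alerting logic is just a distance check between the current position and the nearest known camera. A rough sketch of that check (shown here in TypeScript with the browser geolocation API and placeholder coordinates; in a native Swift app the same logic would use CoreLocation):

```typescript
// Alert when the driver comes within 100 m of a known speed camera.
// The camera coordinates below are placeholders; in a real app they would
// come from a database such as SCDB.
interface Coordinate { lat: number; lon: number; }

const speedCameras: Coordinate[] = [
  { lat: 52.5200, lon: 13.4050 }, // placeholder entries
  { lat: 48.1351, lon: 11.5820 },
];

// Haversine great-circle distance in meters.
function distanceMeters(a: Coordinate, b: Coordinate): number {
  const R = 6371000; // Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Watch the current position and warn when a camera is within 100 m.
navigator.geolocation.watchPosition(
  (pos) => {
    const here: Coordinate = { lat: pos.coords.latitude, lon: pos.coords.longitude };
    const nearby = speedCameras.some((cam) => distanceMeters(here, cam) <= 100);
    if (nearby) {
      console.warn("There is a speed camera ahead, please slow down.");
    }
  },
  (err) => console.error("Location unavailable:", err),
  { enableHighAccuracy: true },
);
```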

How to build a skype-like small video chat window (multiscreen)?

I am trying to build a small video chat window which can co-exist on the screen with other applications, such as an internet browser, like what is shown in this picture.
My main problem is not with WebRTC but with how to make two applications co-exist on the screen as Skype does.
I have some experience with Unity, Rails, and Node.js. Is there any chance I can achieve my goal with the above frameworks?
Or do I have to learn something new like Qt?
Please give me some advice, thanks a lot.
P.S. Do I have to build a desktop app for this feature, or would a website be able to do the trick?
Unity has decent webcam support, and it claims to support WebGL (HTML5).
You might want to check the links below.
https://docs.unity3d.com/ScriptReference/WebCamDevice.html
WebGL webcam update
How to set it up (Unity forum question)
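Regarding the P.S.: a website can get fairly close on its own. As a sketch outside the Unity route, the browser's Picture-in-Picture API can pop the video into a small floating window that stays on top of other applications on most desktop browsers (element IDs are placeholders):

```typescript
// Show the local webcam in a small floating window that stays on top of
// other applications (desktop browsers that support Picture-in-Picture).
// Assumes an HTML page with: <video id="self-view" autoplay muted playsinline></video>
async function openFloatingSelfView(): Promise<void> {
  const video = document.getElementById("self-view") as HTMLVideoElement;

  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  video.srcObject = stream;
  await video.play();

  // Pop the video out into a system-level floating window.
  // Most browsers require this to be triggered by a user gesture.
  await video.requestPictureInPicture();
}

document.getElementById("start-call-button")?.addEventListener("click", () => {
  openFloatingSelfView().catch((err) => console.error("Could not open self view:", err));
});
```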

Can augmented reality be realized in a website?

I want to do some research on augmented reality technology. In particular, I would like to match a 2D image to a 3D model, so that when the 2D image is scanned, the 3D model is displayed. I know that there are a lot of SDKs (like Metaio and Wikitude) and software that can do this in a mobile app. However, what I want to do is realize this on a website. I hope the people who use this don't need to download a particular mobile app, but can just open a website and then scan a picture.
So, I'd like to know, as the title asks, can AR be realized on a website? If yes, how can I do it, or is there any software like Metaio Creator to do this? If no, why not?
Thank you to anyone who would like to answer my naive question.
May I recommend our completely web-based AR & VR tool holobuilder.com by bitstars.com?
It supports 360-degree photospheres that can be enhanced with custom 3D models and then embedded directly into your website as an iframe. It has native support for a stereoscopic view mode and much more.
For your use case, you could have a look at the lower part of this blog post, where you will find information and an embedded example presentation with photosphere imagery containing 3D elements:
http://heyholo.com/google-pushes-vr-great-for-tools-like-holobuilder/
If you want to start creating I recommend the beginners guide:
https://medium.com/@maxspeicher/the-definite-guide-to-holobuilder-3b62a54d303e
The CV feature tracking you requested cannot yet be realized without an app or a special browser. But what you can do is display 3D elements perspectively correctly over the camera image and move them with the sensors. It should be as performant as within the player app.
We hope this can help push your research forward, and we would love to read your feedback. In case of any questions, please do not hesitate to ask, here or on any other contact channel!

iOS Augmented Reality Framework and some guidance

I have done some iOS development before, but never anything that handled the camera (which is what I think I need to do). Could you point me in the right direction?
What I was asked to do is basically to have a QR code reader that can display certain information (images/video/text), and then take it up a notch by adding augmented reality (not necessarily using said QR codes, but within the same app).
I've looked at some augmented reality frameworks for iOS and found SimpleGeo https://github.com/simplegeo/SGAREnvironment, but it's more of a location-based AR framework. Any others? Or should I not even include the QR reader and do everything with the AR framework?
A few hints:
OpenCV
OpenCV iOS port
ZXing - barcode library (supports QR codes)
LibDecodeQR (uses OpenCV)
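For illustration, ZXing also has a browser/TypeScript port; a minimal continuous-scanning sketch assuming the @zxing/browser package and a placeholder <video> element (a native iOS app would use one of the libraries above instead):

```typescript
// Continuous QR scanning with ZXing's browser port.
// Assumes: npm install @zxing/browser, and a <video id="preview"></video> element.
import { BrowserQRCodeReader } from "@zxing/browser";

async function startQrScanner(): Promise<void> {
  const reader = new BrowserQRCodeReader();

  // An undefined device ID lets the library pick a default camera.
  await reader.decodeFromVideoDevice(undefined, "preview", (result, error) => {
    if (result) {
      // Decoded payload of the QR code, e.g. a URL pointing at the
      // image/video/text content to display.
      console.log("QR code content:", result.getText());
    }
    // "error" fires for frames without a detectable code; it can be ignored.
  });
}

startQrScanner().catch((err) => console.error("Could not start scanner:", err));
```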

Resources