Does anyone know of any good tutorials or demos for the ARToolkit Mobile Library?
Thanks
With the ARToolkit Mobile come four well-documented examples covering OpenGL, Wavefront, OSG, and video formats.
Other options are the ARToolworks support wiki and the support forum.
Cheers,
Mark
I want to use a C++ library that makes it easy to build high-performance audio apps
https://github.com/google/oboe
Google Oboe seems to be Android-only.
Can I somehow use it for iOS too, or is there a similar alternative for iOS?
I don't want to use Superpowered because of its licence terms!
There are no plans to release an iOS version of Oboe at present. You could look at FMOD or JUCE.
If I remember correctly (from videos of demos at events), the development of this library came from people heavily involved with Google's Android infrastructure, so Oboe is tailored specifically to tackle Android's low-latency shortcomings.
That being said, Google is unlikely to devote the resources to such an intensive and complicated problem on a completely different platform, and (unfortunately) it wouldn't be in their best interest competitively speaking.
I have heard of others using Superpowered, but honestly I haven't found much concrete information about it; the marketing is mostly fluff. I used Oboe myself because I needed a dedicated native library.
As for iOS, I found a decent blog page that might be worth checking out: https://exceed7.com/native-audio/
That page suggests using OpenAL from Objective-C/Swift. OpenAL looks like the closest counterpart to OpenSL ES, the API that Oboe is partially based on. Unity also seems to use a library called FMOD (I'm not familiar with that one myself), and DonTurner mentioned JUCE above.
So looking into those would be a good start, although I'd assume working with OpenAL directly involves some fairly low-level development, so ready your thinking cap!
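If you do go down the OpenAL road from Swift, the core flow is device → context → buffer → source. Here's a rough sketch of playing a generated tone, assuming the system OpenAL framework is importable as a Swift module (Apple has deprecated OpenAL but it still ships; error checking omitted):

```swift
import OpenAL
import Foundation

// Sketch only: assumes the system OpenAL framework; error checking omitted.

// 1. Open the default device and create a context.
let device = alcOpenDevice(nil)
let context = alcCreateContext(device, nil)
alcMakeContextCurrent(context)

// 2. Generate one second of a 440 Hz sine wave as 16-bit mono PCM.
let sampleRate: Int32 = 44_100
var samples = [Int16](repeating: 0, count: Int(sampleRate))
for i in 0..<samples.count {
    let phase = 2.0 * Double.pi * 440.0 * Double(i) / Double(sampleRate)
    samples[i] = Int16(sin(phase) * Double(Int16.max) * 0.5)
}

// 3. Upload the PCM data into an OpenAL buffer.
var buffer: ALuint = 0
alGenBuffers(1, &buffer)
samples.withUnsafeBytes { raw in
    alBufferData(buffer, AL_FORMAT_MONO16, raw.baseAddress,
                 ALsizei(raw.count), sampleRate)
}

// 4. Attach the buffer to a source and play it.
var source: ALuint = 0
alGenSources(1, &source)
alSourcei(source, AL_BUFFER, ALint(bitPattern: buffer))
alSourcePlay(source)
```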
Best of luck on your project!
Maybe you are looking for AudioKit
https://github.com/AudioKit/AudioKit
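For context, a minimal AudioKit sketch, assuming the AudioKit 4.x API (the static AudioKit.output / AudioKit.start() calls were reorganised in later releases, so check the version you install):

```swift
import AudioKit

// Minimal AudioKit 4.x sketch: route a sine oscillator to the output and start it.
// Assumes AudioKit 4.x; AudioKit 5 replaced the static AudioKit class with an AudioEngine instance.
let oscillator = AKOscillator(waveform: AKTable(.sine))
oscillator.frequency = 440
oscillator.amplitude = 0.5

AudioKit.output = oscillator       // connect the oscillator to the engine output
do {
    try AudioKit.start()           // starts the underlying AVAudioEngine
    oscillator.start()
} catch {
    print("AudioKit failed to start: \(error)")
}
```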
I just recently got started with the Google Cardboard SDK for iOS, and I'm looking to create a simple app in Swift that displays a 3D (stereoscopic) VR video.
First, I adapted the VideoWidgetDemo sample in the SDK (https://github.com/googlevr/gvr-ios-sdk/tree/master/Samples/VideoWidgetDemo) from its original Objective-C to Swift 4, and it performs well. It uses GVRKit to create a GVRSceneRenderer with a GVRVideoRenderer.
But then I came across a blog post on the Ray Wenderlich site (https://www.raywenderlich.com/136692/introduction-google-cardboard-ios) that uses GVRSDK's GVRVideoView instead, which feels simpler and easier to use. However, there is a very noticeable performance difference. The video displayed by this app stutters/jitters much more than the GVRKit version.
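For reference, my GVRVideoView setup is basically the Swift version of that tutorial, roughly like this (a sketch from memory; GVRVideoView comes in via the Objective-C bridging header, the URL is a placeholder, and the bridged method names may differ slightly between SDK versions):

```swift
import UIKit
// GVRVideoView is exposed through the Objective-C bridging header:
//   #import "GVRVideoView.h"

class VideoViewController: UIViewController {

    // GVRVideoView handles decoding, stereo rendering and head tracking itself.
    private let videoView = GVRVideoView()

    override func viewDidLoad() {
        super.viewDidLoad()

        videoView.frame = view.bounds
        videoView.enableFullscreenButton = true
        videoView.enableCardboardButton = true
        view.addSubview(videoView)

        // Placeholder URL for a 360 video; the real app loads a stereo source.
        if let url = URL(string: "https://example.com/video360.mp4") {
            videoView.load(from: url)   // bridged from -loadFromUrl:
        }
    }
}
```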
I'm puzzled by the fact that the official Google VR reference documentation site (https://developers.google.com/vr/ios/reference/) doesn't even mention GVRKit, even though all the official samples from the GitHub repo use it instead of GVRSDK. So the samples follow one approach and the reference docs cover a different one.
I haven't been able to find any guidance for when to use one or the other (or even both together if it makes sense), so I'm hoping that someone on StackOverflow can shed some light on this choice.
I'm also curious about the performance difference I'm seeing between the two approaches. It would be great if there were a way to achieve the same level of performance with GVRVideoView as with GVRVideoRenderer.
Thanks in advance for your insights and suggestions.
It seems that the SDK is deprecated.
I posted an issue about GVRSDK on GitHub, and they said that the SDK is deprecated and developers have to switch to GVRKit.
Here is the GitHub issue:
https://github.com/googlevr/gvr-ios-sdk/issues/298
If your goal is to display 360 video in a simple app, check out this Cordova plugin:
https://codecanyon.net/item/cordova-ionic-vr-plugin-photo-360-video-360-player-with-cardboard/20392357
It seems that Google came up with a new SDK because Daydream is now deprecated.
The Google Cardboard SDK offers a streamlined API, improved device compatibility, and built-in viewer profile QR code scanning.
Quickstart: https://developers.google.com/cardboard/develop/ios/quickstart
GitHub: https://github.com/googlevr/cardboard
Is it possible, or a good choice, to use the Google Cardboard SDK to realise AR? I have only found VR-related things with that SDK. What is the best framework for AR? Is Vuforia a good way to go? I'm trying to write an AR app (for Android) which detects/scans room numbers at my university and shows the schedule of that room (which class / time / which prof...).
Thanks for the help!
The Google Cardboard SDK is made for VR, hence it's not the best option for AR. There are SDKs built specifically for AR; check out this comparison.
There is an alpha version of DroidScript (a JavaScript IDE for Android) available which supports augmented reality for Google Cardboard. There is also a sample on the DroidScript forum that demonstrates ArUco marker detection (augmented reality). So I guess you could hack something together quite easily if you ask the developers for the latest version.
The latest version of Vuforia can be integrated with Google Cardboard.
Check this link:
https://developer.vuforia.com/library/articles/Solution/Integrating-Cardboard-SDK-050
Recently I've been getting more and more into mobile development. I am currently working with the iPhone and Android based devices.
Palm's new WebOS looks interesting.
Are there any good online tutorials for quickly getting up to speed on developing for the Palm WebOS?
The Palm Developer Network has some basic overviews: http://developer.palm.com/
They also have a section up there: Palm webOS: Developing Applications in JavaScript Using the Palm Mojo Framework. This may be a good start.
Palm webOS: Developing Applications in JavaScript Using the Palm Mojo Framework is a book in the making, currently available through O'Reilly's Rough Cuts program.
You can easily read the first chapter.
That's currently the closest you can get from official sources, unless you apply to their SDK early access program (sdkapplication.palm.com/sdkapplication) and they let you in (you can apply until the SDK is officially released to the public).
Of course, another thing we can do until the SDK is out is catch up on the technologies that programming for Palm's webOS will require: JavaScript, HTML5, CSS... and there's a ton of material about these online. Many websites dedicated to the Palm Pre and webOS have sprung up recently; the most programming-oriented one I know of is webOShelp.net: take a look at their Getting Started with webOS guide (www.weboshelp.net/getting-started-with-webos).
P.S. Sorry about the non-clickable links; the site won't let me post more than one link since I'm new here. ;)
Now that the device is out, people are actively playing with it. The best site I have found so far is (no affiliation) http://predev.wikidot.com
Also, if you root the device, you can look at the source for the shipped apps in /usr/palm/applications
I have additional notes at http://friendfeed.com/
The site www.weboshelp.net has quite a few good tutorials.
This blog has some good tutorials:
http://kmdarshan.com/blog/category/webos/
Are there any good components, free or commercial, available for Delphi (I use Delphi 2009) that will allow me to easily implement face detection and tagging of the faces in photos (i.e. graphics/images)?
I need to do something similar to what Google Picasa's Web Albums can do, but from within my application.
Did you see the SDKs listed in the answers to Face recognition Library?
The one from Neurotechnology has an ActiveX component that you could use.
Here is what you wanted:
http://delphimagic.blogspot.com/2011/08/reconocimiento-de-caras-con-delphi.html