I'm currently developing a computer-vision application for blind people, and we've now decided to move our work to mobile apps.
Since OpenCV 2.4, building it for iOS has been quite simple thanks to the two provided build scripts.
The main problem I'm facing is processing time. I've read that most image processing on the iPhone is done with OpenGL, so I was wondering whether it's possible to build OpenCV with OpenGL support when building for iOS.
The processing time for even very simple OpenCV operations on the iPhone is too long for real-time apps, especially ones aimed at blind people, who need rapid feedback about their surroundings.
Could someone help me?
It is not possible to build OpenCV with OpenGL support for iOS (or OS X); it is unconditionally disabled by OpenCV's build scripts.
In any case, OpenGL is not used in OpenCV for acceleration, so you would not get any speedup even if you edited the sources to enable it.
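What does help in practice is shrinking the per-frame workload before running any analysis. A minimal sketch, assuming BGRA camera frames (the half-size factor is illustrative, not something from your setup):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Downscale and convert to grayscale before any heavier analysis.
// A quarter of the pixels in a single channel cuts per-frame work
// dramatically, which is often the difference between a slideshow
// and interactive rates on older iPhone hardware.
cv::Mat preprocessFrame(const cv::Mat& bgraFrame)
{
    cv::Mat gray;
    cv::cvtColor(bgraFrame, gray, CV_BGRA2GRAY); // iOS camera frames arrive as BGRA

    cv::Mat small;
    cv::resize(gray, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA); // half size per dimension
    return small;
}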
Real-time face recognition is a relatively non-trivial process, and as a result most people use one of a range of popular libraries, such as OpenCV. However, there seem to be few options for targeting Windows 10 with Universal Applications.
Support now seems to be available for OpenCV in Universal Apps, but I have had quite a bit of difficulty getting this set up, and I really only require the face-recognition features.
What libraries currently provide support for developing real-time face recognition applications for Windows 10 and Universal Applications?
I would go down the OpenCV route; it's the most popular option, so there is a lot of support available if your problem isn't platform-specific.
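For what it's worth, the detection half of the pipeline is only a few lines with OpenCV's cascade classifier. A minimal sketch (the image and cascade file names are placeholders; recognition of who a face belongs to is a separate step run on the detected crops):

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    // Load one of the pretrained cascades that ships with OpenCV.
    cv::CascadeClassifier faceCascade;
    faceCascade.load("haarcascade_frontalface_default.xml");

    cv::Mat frame = cv::imread("test.jpg");
    cv::Mat gray;
    cv::cvtColor(frame, gray, CV_BGR2GRAY);
    cv::equalizeHist(gray, gray); // helps detection under uneven lighting

    std::vector<cv::Rect> faces;
    faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(30, 30));
    // `faces` now holds one bounding box per detected face.
    return 0;
}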
If there's no requirement for offline use, check out Project Oxford; it's really cool.
Alternatively, if JavaScript is an option, there are a whole lot of libraries out there. I've used headtrackr, which worked well with Windows 8/WinJS 1.0.
I'm new to developing image-processing apps on iOS. I have lots of experience with OpenCV, but everything on iOS (and even OS X) is new to me.
I've found that the Core Image framework and the GPUImage library are the main options around for ordinary image-processing work. I'm interested in knowing which one I should choose as a newcomer to the iOS platform. I have seen some tests done on iOS 8 on an iPhone 6, and it appears Core Image is now faster than GPUImage on GPUImage's own benchmark.
I'm actually looking for a whole solution for image-processing development:
What language? Swift, Objective-C, or plain C/C++?
What library? GPUImage, Core Image, OpenCV, or GEGL?
Is there an example app?
My goal is to develop some advanced colour-correction functions. I wish to make them as fast as possible, so that in the future I can move from image processing to video processing without much trouble.
Thanks
I'm the author of GPUImage, so you might weigh my words appropriately. I provide a lengthy description of my design thoughts on this framework vs. Core Image in my answer here, but I can restate that.
Basically, I designed GPUImage to be a convenient wrapper around OpenGL / OpenGL ES image processing. It was built at a time when Core Image didn't exist on iOS, and even when Core Image launched there it lacked custom kernels and had some performance shortcomings.
In the meantime, the Core Image team has done impressive work on performance, leading to Core Image slightly outperforming GPUImage in several areas now. I still beat them in others, but it's way closer than it used to be.
I think the decision comes down to what you value for your application. The entire source code for GPUImage is available to you, so you can customize or fix any part of it that you want. You can look behind the curtain and see how any operation runs. The flexibility in pipeline design lets me experiment with complex operations that can't currently be done in Core Image.
Core Image comes standard with iOS and OS X. It is widely used (plenty of code available), performant, easy to set up, and (as of the latest iOS versions) extensible via custom kernels. It can do CPU-side processing in addition to GPU-accelerated processing, which lets you do things like process images in a background process (although you should be able to do limited OpenGL ES work in the background in iOS 8). I used Core Image all the time before I wrote GPUImage.
For sample applications, download the GPUImage source code and look in the examples/ directory. You'll find examples of every aspect of the framework for both Mac and iOS, as well as both Objective-C and Swift. I particularly recommend building and running the FilterShowcase example on your iOS device, as it demonstrates every filter from the framework on live video. It's a fun thing to try.
Regarding language choice: if performance is what you're after for video/image processing, language makes little difference. Your performance bottlenecks will not be due to language, but will be in shader performance on the GPU and the speed at which images and video can be uploaded to / downloaded from the GPU.
GPUImage is written in Objective-C, but it can still process video frames at 60 FPS on even the oldest iOS devices it supports. Profiling the code finds very few places where message sending overhead or memory allocation (the slowest areas in this language compared with C or C++) is even noticeable. If these operations were done on the CPU, this would be a slightly different story, but this is all GPU-driven.
Use whatever language is most appropriate and easiest for your development needs. Core Image and GPUImage are both compatible with Swift, Objective-C++, or Objective-C. OpenCV might require a shim to be used from Swift, but if you're talking performance OpenCV might not be a great choice. It will be much slower than either Core Image or GPUImage.
Personally, for ease of use it can be hard to argue with Swift, since I can write an entire video filtering application using GPUImage in only 23 lines of non-whitespace code.
I have just open-sourced VideoShader, which allows you to describe a video-processing pipeline in a JSON-based scripting language.
https://github.com/snakajima/videoshader
For example, a "cartoon filter" can be described in 12 lines:
{
  "title":"Cartoon I",
  "pipeline":[
    { "filter":"boxblur", "ui":{ "primary":["radius"] }, "attr":{"radius":2.0} },
    { "control":"fork" },
    { "filter":"boxblur", "attr":{"radius":2.0} },
    { "filter":"toone", "ui":{ "hidden":["weight"] } },
    { "control":"swap" },
    { "filter":"sobel" },
    { "filter":"canny_edge", "attr":{ "threshold":0.19, "thin":0.50 } },
    { "filter":"anti_alias" },
    { "blender":"alpha" }
  ]
}
It compiles this script into GLSL (OpenGL's shading language for the GPU) at runtime, and all the pixel operations are done on the GPU.
Well, if you are doing advanced image-processing work, then I suggest going with OpenGL ES (I assume I don't need to cover the benefits of OpenGL over UIKit or Core Graphics). You can start with the tutorials below:
http://www.raywenderlich.com/70208/opengl-es-pixel-shaders-tutorial
https://developer.apple.com/library/ios/samplecode/GLImageProcessing/Introduction/Intro.html
With the push towards multimedia-enabled mobile devices, this seems like a logical way to boost performance on these platforms while keeping general-purpose software power-efficient. I've been interested in the iPad hardware as a development platform for UI and data display/entry usage, but am curious how much processing capability the device itself offers. OpenCL would make it a juicy hardware platform to develop on, even though the licensing seems like it kind of stinks.
OpenCL is not yet part of iOS.
However, the newer iPhones, iPod touches, and the iPad all have GPUs that support OpenGL ES 2.0, which lets you write your own programmable shaders to run on the GPU. That would let you do high-performance parallel calculations; while not as elegant as OpenCL, it can solve many of the same problems.
Additionally, iOS 4.0 brought with it the Accelerate framework, which gives you access to many common vector-based operations for high-performance computing on the CPU. See Session 202 - The Accelerate framework for iPhone OS in the WWDC 2010 videos for more on this.
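To give a flavour of Accelerate, here is a minimal sketch of a vectorized add through vDSP (the wrapper function and its name are mine, for illustration):

#include <Accelerate/Accelerate.h>

// Element-wise c[i] = a[i] + b[i], executed through the CPU's vector
// units via vDSP rather than a scalar loop.
void vectorAdd(const float* a, const float* b, float* c, vDSP_Length n)
{
    vDSP_vadd(a, 1, b, 1, c, 1, n); // strides of 1 = contiguous arrays
}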
Caution! This question ranks as the 2nd result on Google, but most answers here (including mine) are out of date. People interested in OpenCL on iOS should visit more up-to-date entries like this one: https://stackoverflow.com/a/18847804/443016.
http://www.macrumors.com/2011/01/14/ios-4-3-beta-hints-at-opencl-capable-sgx543-gpu-in-future-devices/
The iPad 2's GPU, the PowerVR SGX543, is capable of OpenCL.
Let's wait and see which iOS release brings OpenCL APIs to us. :)
Following from nacho4d:
There is indeed an OpenCL.framework in iOS 5's private frameworks directory, so I would suppose iOS 6 is the one to watch for OpenCL.
Actually, I've seen it in OpenGL-related crash logs for my iPad 1, although that could just be running on the CPU (implementing parts of the graphics stack, perhaps, as on OS X).
You can compile and run OpenCL code on iOS using the private OpenCL framework, but you probably won't get such a project into the App Store (Apple doesn't want you using private frameworks).
Here is how to do it:
https://github.com/linusyang/opencl-test-ios
OpenCL? Not yet.
A good way of guessing the next public frameworks in iOS is to look at the private frameworks directory.
If you see what you are looking for there, then there's a chance.
If not, wait for the next release and look through the private stuff again.
I guess Core Image is coming first, because OpenCL is too low-level ;)
Anyway, this is just a guess.
I am really interested in using Unity3d to develop an app.
I like the fact that I can develop once and port the app to multiple platforms (Mac/Windows/iPhone/Android), and the performance on my Mac seems to be quite good.
This will be the first time I write an app for iPhone, and I am curious about performance issues down the road. I think I will definitely use Unity3d on iPhone for a prototype, but am wondering if building an iPhone Unity3d app will use the iPhone resources as efficiently as a native app written in Objective-C.
The Unity3d site seems to suggest that Unity3d algorithms are optimized, and I thought that if I asked that question in the Unity3d forums, that would be the kind of response I would get. Ideally, I'd be interested in hearing from someone who has built an app in Unity3d and Objective-C and can compare the two.
The discussion that got me thinking about this was Andrew and Peter Mortensen's response to a question about iOS development cost, which begins "There is a much easier way to develop iPhone apps than learning Cocoa."
Unity ships resources that help with mobile development, including assets, shaders, etc. that are specifically designed with mobile in mind.
You certainly won't want to take 'unoptimized' PC-quality assets, drop them into a Unity project, and export that for the iOS platform, as that will guarantee poor and unreliable performance. What you want to do is start building out a scene using assets of similar quality to those you want for your game, and then see what the performance is on a real device. This will give you a feel for the level of performance you can expect from your game in production.
Remember that the performance of an iPhone, iPad, iPad 2, etc. will vary wildly depending on what you're doing and which features you're touching. While Unity3D has been heavily optimized for a variety of scenarios, you can certainly do things like fogging, which pushes the fill rate (a known limitation of the platform), and end up with horrendous performance.
Could you get more performance out of building your application purely in Objective-C? If you have the right skill set in engine development to design a specific implementation for your specific requirements, certainly.
You just need to decide whether you want to spend your time writing technology or building product. Most people choose Unity because they get an exceptionally good engine whose performance they could not beat themselves (try building your own landscape engine), while at the same time getting exceptional time to market... and really, it's time to market that matters in most cases.
This is an old post, but I figured I'd answer it because no one here has really got it quite right.
It's important to keep in mind that the internal core workings of Unity are entirely native. The physics engine (and consequently everything dealing with collision), the occlusion system (Umbra), and the entire rendering core are all written in C/C++ and run at full native speed on any platform. What AmitApollo says is not correct: Unreal Engine 3 is not more directly 'native' than Unity. In both Unity and Unreal Engine 3, as well as in any other 3D engine like Ogre or cocos3d, the core rendering system is written in C/C++. Some of these engines may implement certain internal rendering algorithms better than others, and may thus produce better performance, but that has nothing to do with whether they are 'native', because the internal core rendering system is native in all of them.
The internal workings of the physics engine are written in C/C++ as well, so the physics engines in UE3 and Unity both run at full native speed.
The Epic Citadel demo also does not show greater technical prowess or performance than Unity on iOS. Much of the 'visual impact' of the Citadel demo comes simply from the fact that it is really good artwork. The Citadel demo is not pushing a higher vertex count than Unity could handle on iOS, and it is not demonstrating any more advanced shader or lighting techniques than Unity can manage there. In fact, there are more examples of Unity showing off advanced mobile rendering techniques than Unreal Engine 3 has demonstrated. Look at games like Shadowgun or BladeSlinger, made in Unity; both demonstrate more advanced mobile rendering techniques than Unreal Engine 3 has shown: light probes, mobile BRDF shaders with translucency and normal mapping, and well-implemented dynamic mobile shadows, to name a few. The vast majority of the most successful 3D games in the App Store are Unity games, and Unity has accordingly put a lot of R&D into its mobile rendering performance and capabilities.
Now, Unity is scripted in C# on Mono, which does run slower than native code, about 50% slower on iOS by most estimates. But keep in mind that you only write game logic in this code. You do not write any C#/Mono code that deals with the internals of the rendering system, nor with the internals of the physics system. You only write game logic in C#, which interfaces with the rendering and physics cores, which then execute at full native speed. Mono C# does execute slower than native C++, but if you program intelligently I think you will find this is hardly a hindrance, because game logic is not necessarily CPU-heavy. In my experience, it is really quite difficult to make an iPad 2 drop below 60 fps purely on game logic written in Mono C#. I have never actually been hindered by this at all.
If we are to compare with Unreal Engine 3, keep in mind that UE3 also has its game logic programmed in a non-native language, UnrealScript. UnrealScript is a language much like Mono C# or Java: it is compiled down to byte code and then interpreted at runtime. Meaning that, just like in Unity, game logic is not 'native' in UE3.
Now if you look here:
http://lua-users.org/wiki/LuaVersusUnrealScript
That is a benchmark comparing UnrealScript to C++ on simple arithmetic operations using ints. It shows that UnrealScript is 1/4 to 1/20th the speed of C++.
Then have a look here:
http://www.codeproject.com/Articles/212856/Head-to-head-benchmark-Csharp-vs-NET
If you scroll down to the C# vs. C++ simple-arithmetic benchmark, it shows Mono C# at 3/4 the speed of C++ on simple int arithmetic, and about 1/2 the speed on simple float arithmetic. The int64 and double benchmarks don't mean much for us, because you'll typically never use those types in performance-critical iOS game-logic code.
Other benchmarks there do show Mono C# performing as badly as 1/20th the speed of C++, but those come from very specific tests; the best apples-to-apples comparisons I could find are the simple arithmetic tests.
So really, since Unity's scripting runs on Mono C# and UE3's runs on UnrealScript, Unity is actually the engine that will offer you radically better performance in game-logic code.
The notion that UE3 is more advanced, or offers more performance or greater graphical capability than Unity on iOS, is simply not true. Quite the contrary.
Now, it is true that with something like cocos3d you could potentially get better performance, because your game logic could be written natively in C++ as well. But the benefits of working with a scripting language like C# for game logic far outweigh, I think, a performance loss that is generally never an issue. Namely, a scripting language gives you faster design iterations, which is really critical in games given how quirky things can be and how frequently you have to recompile and test.
However, in Unity, it is really easy to write native code plugins with the Pro version. So if you ever do have a piece of performance critical code that needs to run at native speed, you can write it in C++, compile it to a native library, then call that from Mono C#.
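For example, here is a hedged sketch of that plugin pattern (the function and file names are hypothetical; the mechanism itself, C-linkage code called via DllImport("__Internal") on iOS, is Unity's standard native-plugin route):

// NativeMath.cpp -- compiled into the generated Xcode project
// (iOS links plugins statically). extern "C" prevents C++ name
// mangling so Mono's P/Invoke can find the symbol.
extern "C" float SumSquares(const float* values, int count)
{
    float total = 0.0f;
    for (int i = 0; i < count; ++i)
        total += values[i] * values[i];
    return total;
}

// Matching C# declaration, shown as a comment to keep this block in one language:
//   [DllImport("__Internal")]
//   private static extern float SumSquares(float[] values, int count);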
Also keep in mind that if you are targeting all iOS devices, heavy GPU graphics means drastic performance discrepancies from the iPhone 3GS to the 4, and again from the 4/4S to the iPad 2 and 3; even certain games on the newer iPhone 5 or iPad 4 can run at a higher FPS than on their predecessors. Keep poly counts low and, of course, keep terrain resolutions low; even something as subtle as pixel error can drastically affect performance. Fog will always be a strain. Textures larger than 512x512 may cause problems, as can multiple light sources. It's much faster to use no dynamic lighting and bake the shadows and highlights instead. I also found that running at native resolution, as opposed to the best-performance setting, may hinder performance (Unity 4). Billboarding and occlusion culling are also topics you'll want to look up. There is a fine line between looking good and running slowly.
If performance is an issue for you, you may want a different engine altogether. A more directly 'native' engine like Unreal Engine 3 is amazing in its capabilities, and it can deliver them without much overhead; case in point, the Epic Citadel demo app running on an iPhone 4 or 3GS. Something comparable in Unity would be slow and wouldn't look quite as sexy.
Perhaps it's a good idea to take a look at other games made with Unity, see where yours fits in, and get a sense of what kind of performance you can expect.
http://www.youtube.com/watch?v=7aVttB25oPo
http://unity3d.com/gallery/game-list/
One asset that is helpful for increasing performance on iOS is KGFSkyBox.
We found out that Unity3D skyboxes use up to 6 draw calls! This is quite a problem on devices with a limit of around 30 draw calls!
We solved this by implementing KGFSkyBox, which reduces the draw calls to 1 if you have terrain (it hides the bottom sky hemisphere). If you do not use terrain, KGFSkyBox renders using 2 draw calls, which is still better than 6!
Check it out here:
http://u3d.as/4Wg
If you have any questions or suggestions just contact us here: support#kolmich.at
Platform: amd_64
Operating System: Ubuntu 8.10
Problem:
The current release of OpenCV (2.1 at the time of writing) and libdc1394 don't properly interface with the new USB-interface PointGrey High-Res FireFlyMV Color camera.
Does anyone have this camera working with OpenCV on Ubuntu?
Currently, I'm working on writing my own frame-grabber using PointGrey's FlyCapture2 SDK, which works well with the camera. I'd like to interface this with OpenCV, by converting each image I grab into an IplImage object. When I write OpenCV programs, I use CMake. The example code for the FlyCapture2 SDK uses fairly simple makefiles. Does anyone know how I can take the information from the simple FlyCapture2 makefile so I can include the appropriate lines in CMakeLists.txt for my CMake build routine?
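For illustration, I imagine the additions would look something like the fragment below; the flycapture include path and library name are my guesses from the SDK's default Linux install, not something I've verified:

# Hypothetical CMakeLists.txt fragment for an OpenCV + FlyCapture2 build.
cmake_minimum_required(VERSION 2.6)
project(grabber)

find_package(OpenCV REQUIRED)

include_directories(/usr/include/flycapture)   # assumed location of FlyCapture2.h
link_directories(/usr/lib)                     # assumed location of libflycapture.so

add_executable(grabber main.cpp)
target_link_libraries(grabber ${OpenCV_LIBS} flycapture)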
Not a simple answer (sorry), but:
Generally you don't want to use cvCaptureFromCAM() with high-performance cameras beyond an initial test that they work. Even for standard interfaces like FireWire, it is very limited in which camera features it can control, it doesn't handle threading well, and the performance is poor, especially at high data rates.
The more common way is to control the camera with the maker's own SDK and output frames in a form (cv::Mat / IplImage) that OpenCV can process. All OpenCV image types are flexible about sharing data with a camera API, and let you specify padding/row stride etc., so you should be able to design it so there is no unnecessary copying.
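As a concrete (hedged) sketch of that zero-copy handoff using the FlyCapture2 API, assuming an 8-bit mono pixel format and with error handling elided:

#include <FlyCapture2.h>
#include <opencv2/core/core.hpp>

// Grab one frame with the PointGrey SDK and wrap its buffer in a cv::Mat.
// This Mat constructor does NOT copy pixel data -- it just points at the
// SDK's buffer, so the Mat is only valid until the next RetrieveBuffer().
cv::Mat grabFrame(FlyCapture2::Camera& camera)
{
    FlyCapture2::Image raw;
    camera.RetrieveBuffer(&raw);

    return cv::Mat(raw.GetRows(), raw.GetCols(), CV_8UC1,
                   raw.GetData(), raw.GetStride());
}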