I checked Torch7 for iOS: https://github.com/clementfarabet/torch-ios
I could not find any mention of whether Torch7 for iOS uses the GPU. I checked the example projects XOR_Example... It includes the Accelerate framework, which I guess uses the CPU only. I could not see the Metal framework being used. Does this mean Torch7 does not use the GPU on iOS? If so, what are other options for deploying a NN on iOS (for inference) that use the GPU?
Related
We have a model trained using Keras, using the MobileNetV2 architecture.
We can use CoreMLTools to convert the .h5 file to a CoreML .mlmodel.
However, with the latest CoreMLTools (5.x) the resulting model only runs on iOS 13 and later, but our app supports iOS 11.
Is there a way to generate an iOS 11/12-compatible model with the latest CoreMLTools?
We thought of trying to install an older CoreMLTools (like 2.x), but for other reasons we have had dependency issues with that. It feels like there should be a way to specify the CoreML version when converting the model?
I would highly recommend working through your version dependencies and getting an older version of coremltools working. I understand the difficulty there, but I promise you all other paths will be more difficult.
Now the good news. CoreML models are just protocol buffers that you can easily load and manipulate yourself without coremltools. I keep a compiled version of their protocol spec in a library just for these kinds of tasks. You can get the PB spec here: https://github.com/apple/coremltools/tree/f19052c7f113740069bfac7b0291c5c6c9571ca6/mlmodel/format
Load up your iOS 11 model in a PB viewer, load up the iOS 13 version, and remove everything in the 13 version that’s not in 11. :-)
Thankfully, CoreML models are pretty simple, and my guess is that there's just one version flag that you need to reset.
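As a concrete sketch of that edit using the compiled protocol spec in C++ (assuming you have run protoc over the .proto files in the mlmodel/format directory linked above; the file names and target version below are illustrative):

#include <fstream>
#include "Model.pb.h"  // generated by protoc from mlmodel/format/Model.proto

int main() {
    CoreML::Specification::Model model;

    std::ifstream in("MobileNetV2.mlmodel", std::ios::binary);  // hypothetical input model
    if (!model.ParseFromIstream(&in)) return 1;

    // The iOS-13-only model from the question presumably carries specificationVersion 4;
    // version 3 corresponds to iOS 12 and version 1 to iOS 11. Lowering it only helps
    // if every layer in the network already existed in the older spec version.
    model.set_specificationversion(3);

    std::ofstream out("MobileNetV2_ios12.mlmodel", std::ios::binary);
    return model.SerializeToOstream(&out) ? 0 : 1;
}

If the converter emitted layer types that only exist in the newer spec, the CoreML compiler should reject the downgraded model, which is exactly the "remove everything in the 13 version that's not in 11" step above.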
I can't figure out whether or not SIMD groups are supported on iOS.
At the time of writing, the Metal Shading Language Specification states in section 4.4.1 (page 59):
iOS: No support for SIMD-groups.
However, in Table 6.11, "SIMD-group functions in the Metal standard library", some SIMD-group functions are listed as supported on iOS. This is one of the ones I'd like to use:
T simd_shuffle_down(T data, ushort delta)
macOS: Since Metal 2.0.
iOS: Since Metal 2.2.
Similarly, Table 5.7, "Attributes for kernel function input arguments", states that some attributes are available:
threads_per_simdgroup
macOS: Since Metal 2.0.
iOS: Since Metal 2.2.
So it's not clear from the documentation whether any SIMD group functionality is supposed to be supported. Using a function argument with the threads_per_simdgroup attribute in a compute kernel currently causes the run-time Metal compiler to crash on iPhone 7 and 8 (but not 11):
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
All devices tested with iOS 13.3. Metal language version was 2.2. Xcode version 11.3.
I think that the claim that SIMD-groups are unsupported on iOS is either inaccurate, or not specific enough.
If you consult the Metal Feature Set Tables for Metal 2.2, you'll note that "SIMD-scoped permute operations" (simd_broadcast, simd_shuffle, simd_shuffle_up, etc.) are supported on MTLGPUFamilyApple6, which includes devices with A13 processors. That is why this works on iPhone 11.
The fact that using this attribute on unsupported devices causes a compiler crash is a bug, and I'd recommend that you file feedback.
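For reference, here is a minimal Metal Shading Language sketch (kernel name and buffer layout are illustrative) of the kind of thing these functions enable, a per-SIMD-group sum reduction with simd_shuffle_down; it needs MSL 2.2 and, per the feature tables, should only be compiled and dispatched after checking [device supportsFamily:MTLGPUFamilyApple6] on the host:

#include <metal_stdlib>
using namespace metal;

// Each SIMD-group collapses its values into a single partial sum held by lane 0.
kernel void simd_group_sum(device const float* src  [[buffer(0)]],
                           device float* partials   [[buffer(1)]],
                           uint   gid               [[thread_position_in_grid]],
                           ushort lane              [[thread_index_in_simdgroup]],
                           ushort simd_size         [[threads_per_simdgroup]])
{
    float v = src[gid];
    // After each step the lowest `offset` lanes hold valid partial sums,
    // so lane 0 ends up with the sum over the whole SIMD-group.
    for (ushort offset = simd_size / 2; offset > 0; offset /= 2)
        v += simd_shuffle_down(v, offset);
    if (lane == 0)
        partials[gid / simd_size] = v;  // one partial sum per SIMD-group
}

On GPU families without SIMD-scoped permute operations you would fall back to a reduction through threadgroup memory instead.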
How can I obtain an EGLNativeWindowType object on iOS, or achieve the equivalent of the following Android code?
To provide a bit more insight, I am currently porting a native Android app to iOS; both apps share a single core C library, and the iOS project itself is written in Objective-C. The project is also using EGL and not EAGL.
The existing source code is standard C but uses Android's NDK; an EGLSurface object is created with EGLAPI EGLSurface EGLAPIENTRY eglCreateWindowSurface(EGLDisplay dpy, EGLConfig config, EGLNativeWindowType win, const EGLint *attrib_list):
EGLNativeWindowType win = AndroidMainGetAndroidActivity()->app->window;
EGLSurface eglSurface = eglCreateWindowSurface(e_eglDisplay, config, win, NULL);
I haven't found any documentation relating to EGLNativeWindowType and iOS.
iOS uses EAGL as the interface between OpenGL ES and the underlying windowing system. EAGL plays the same role on iOS that EGL plays on Android for drawing via OpenGL ES, so you cannot use the EGL API on iOS.
The differences between them, and how they work, are described very well in an article.
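If the goal is to keep the shared C core untouched, one option is to hide surface creation behind a small platform interface. This is only a sketch with hypothetical names: the Android implementation would wrap eglCreateWindowSurface as in the snippet above, while the iOS implementation would be an Objective-C file using an EAGLContext with a CAEAGLLayer-backed renderbuffer.

/* platform_surface.h -- hypothetical abstraction the shared C core calls
 * instead of using EGL or EAGL directly. */
typedef struct PlatformSurface PlatformSurface;  /* opaque handle */

/* native_window is an ANativeWindow* on Android and a CAEAGLLayer* on iOS. */
PlatformSurface *platform_surface_create(void *native_window);
void platform_surface_make_current(PlatformSurface *surface);
void platform_surface_present(PlatformSurface *surface);
void platform_surface_destroy(PlatformSurface *surface);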
I am developing an image processing application on CentOS with OpenCV, coding in C/C++. My intention is to have a single development platform for Linux and iOS (iPad).
So if I start development in a Linux environment with OpenCV installed (in C/C++), can I use the same code on iOS without going for Objective-C? I don't want to put in dual effort for iOS and Linux, so how can I achieve this?
It looks like it's possible. Compiling and running C/C++ on iOS is no problem, but you'll need some Objective-C for the UI. When you pay some attention to the layering/abstraction of your modules, you should be able to share most/all core code between the platforms.
See my detailed answer to this question:
iOS:Retrieve rectangle shaped image from the background image
Basically you can keep most of your C++ code portable between platforms if you keep your user interface code separate. On iOS all of the UI should be pure Objective-C, while your OpenCV image processing can be pure C++ (which would be exactly the same on Linux). On iOS you would make a thin Objective-C++ wrapper class that mediates between the Objective-C side and the C++ side. All it really does is translate image formats between them and send data in and out of C++ for processing.
I have a couple of simple examples on GitHub you might want to take a look at: OpenCVSquares and OpenCVStitch. These are based on C++ samples distributed with OpenCV. You should compare the C++ in those projects with the original samples to see how much alteration was required (hint: not much).
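As a tiny sketch of that split (the function name is illustrative), the portable part is plain OpenCV C++ that builds unchanged on Linux and iOS; the only iOS-specific piece is the Objective-C++ (.mm) wrapper that converts UIImage to and from cv::Mat around a call like this:

// shared_core.cpp -- pure C++/OpenCV, compiled as-is for both Linux and iOS
#include <opencv2/imgproc.hpp>

cv::Mat detectEdges(const cv::Mat& bgr)
{
    cv::Mat gray, edges;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);  // identical call on both platforms
    cv::Canny(gray, edges, 50.0, 150.0);          // thresholds are arbitrary for the example
    return edges;
}

Newer OpenCV releases also ship UIImageToMat and MatToUIImage helpers (opencv2/imgcodecs/ios.h) that the wrapper can use for the format translation.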
Can you please tell me if there is a Weka (machine learning toolkit) port for iOS?
If yes, then please provide me with a download link.
The iOS Developer Program License Agreement says:
"3.3.2 — An Application may not itself install or launch other executable code by any means, including without limitation through the use of a plug-in architecture, calling other frameworks, other APIs or otherwise. No interpreted code may be downloaded or used in an Application except for code that is interpreted and run by Apple’s Documented APIs and built-in interpreter(s)."
So you cannot launch a Java interpreter to use the Weka libraries.
BUT... Google released a "Java to iOS Objective-C translator" a few days ago. And Weka is an open-source project. So, maybe, you could try to download Weka's (Java) code and translate it from Java to Objective-C in order to run Weka's algorithms on iOS.
If you get it working, please let me know ;-)
Weka is written in Java. This means the likelihood of it being adapted to iOS is quite small.