How to include C library (VLFeat 0.9.21) in iOS Project? - ios

What are the necessary steps I need to take to compile VLFeat (an open source C library) and include it in my iOS project? Here is what I found out so far:
I am not too familiar with the compilation and linking procedures, but from what I currently understand, I need to compile VLFeat for the arm64 architecture in order for it to run on the iPhone.
There is a severely outdated guide on how to include VLFeat in Xcode:
http://www.vlfeat.org/xcode.html
But when I follow these steps I get the following error:
Undefined symbols for architecture arm64:
"_vl_get_printf_func", referenced from:
-[OpenCVWrapper createMatrixFromImage:] in OpenCVWrapper.o
ld: symbol(s) not found for architecture arm64
I suspect this is because the library is actually built for OSX (hence for a different architecture). I have not been able to figure out how to build this for iOS (arm architecture) and get it running on a physical iPhone.
When I open the Xcode project included in the VLFeat download, go to the build settings for one of the targets, change the Base SDK and Supported Platforms to iOS and Valid Architectures to arm64, and then try to build, a number of errors come up in the VLFeat source code, such as:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/9.0.0/include/mmintrin.h:64:12: Invalid conversion between vector type '__m64' (vector of 1 'long long' value) and integer type 'int' of different size
and
"Compiling with SSE2 enabled, but no __SSE2__ defined"
These errors make me suspect that it is not actually possible to build this library for ARM and that, short of porting the library over (modifying its code), it is an impossible task. I am not sure whether these conclusions are correct, so any input on this would be helpful.

The VLFeat library does not currently (January 2023) support ARM architectures, and seems to no longer be maintained, as the last commit was in January 2018.
However, this commit to a copy of the library updates the host.h and host.c files to add basic support for the Clang compiler and ARM architectures. This has enabled me to compile binaries for the Apple M1 Pro processor running MacOS, and these perform as expected.
Performance will not be as good as on Intel CPUs because the SIMD Neon instruction set for ARM is not yet supported, whereas SSE2 (on Intel) is.


Xcode 14 deprecates bitcode - but why?

Xcode 14 Beta release notes are out, all thanks to the annual WWDC.
And alas, Bitcode is now deprecated, and you'll get a warning message if you attempt to enable it.
And I was wondering, why has this happened? Was there any downside to using Bitcode? Was it somehow painful for Apple to maintain it? And how will per-iPhone-model compilation operate now?
Bitcode is actually just the LLVM intermediate language. When you compile source code using the LLVM toolchain, source code is translated into an intermediate language, named Bitcode. This Bitcode is then analyzed, optimized and finally translated to CPU instructions for the desired target CPU.
The advantage of doing it that way is that all LLVM based frontends (like clang) only need to translate source code to Bitcode; from there on it works the same regardless of the source language, as the LLVM toolchain doesn't care whether the Bitcode was generated from C, C++, Obj-C, Rust, Swift or any other source language. Once there is Bitcode, the rest of the workflow is always the same.
One benefit of Bitcode is that you can later on generate instructions for another CPU without having to re-compile the original source code. E.g. I may compile C code to Bitcode and have LLVM generate a running binary for x86 CPUs in the end. If I save the Bitcode, however, I can later on tell LLVM to also create a running binary for an ARM CPU from that Bitcode, without having to compile anything and without access to the original C code. And the generated ARM code will be as good as if I had compiled to ARM from the very start.
Without the Bitcode, I would have to convert x86 code to ARM code and such a translation produces way worse code as the original intent of the code is often lost in the final compilation step to CPU code, which also involves CPU specific optimizations that make no sense for other CPUs, whereas Bitcode retains the original intent pretty well and only performs optimization that all CPUs will benefit from.
Having the Bitcode of all apps allowed Apple to re-compile that Bitcode for a specific CPU, either to make an app compatible with a different kind of CPU or an entirely different architecture, or just to benefit from better optimizations of newer compiler versions. E.g. if Apple had tomorrow shipped an iPhone that uses a RISC-V instead of an ARM CPU, all apps with Bitcode could have been re-compiled to RISC-V and would natively support that new CPU architecture despite the author of the app having never even heard of RISC-V.
I think that was the idea why Apple wanted all Apps in Bitcode format. But that approach had issues to begin with. One issue is that Bitcode is not a frozen format, LLVM updates it with every release and they do not guarantee full backward compatibility. Bitcode has never been intended to be a stable representation for permanent storage or archival. Another problem is that you cannot use assembly code as no Bitcode is emitted for assembly code. Also you cannot use pre-built third party libraries that come without Bitcode.
And last but not least: AFAIK Apple has never used any of the Bitcode advantages so far. Despite requiring all apps to contain Bitcode in the past, the apps also had to contain pre-built fat binaries for all supported CPUs, and Apple would always just ship that pre-built code. E.g. for iPhones you once had a 32-bit ARMv7 and a 64-bit ARM64 version, as well as the Bitcode, and during app thinning Apple would remove either the 32-bit or the 64-bit version, as well as the Bitcode, and then ship what's left over. Fine, but they could have done that even if no Bitcode was there. Bitcode is not required to thin out architectures of a fat binary!
Bitcode would be required to re-build for a different architecture, but Apple has never done that. No 32-bit app magically became 64-bit by Apple re-compiling the Bitcode. And no 64-bit-only app was magically made available for 32-bit systems by Apple re-compiling the Bitcode on demand. As a developer, I can assure you, the iOS App Store always delivered exactly the binary code that you have built and signed yourself, and never any code that Apple has themselves created from the Bitcode, so nothing was server-side optimized. Even when Apple switched from Intel to M1, no macOS app magically got converted to native ARM, even though that would have been possible for all x86 apps in the App Store, since Apple had their Bitcode. Instead Apple still shipped the x86 version and let it run in Rosetta 2.
So imposing various disadvantages onto developers by forcing all code to be available as Bitcode and then not using any of the advantages Bitcode would give you kinda makes the whole thing pointless. And now that all platforms migrated to ARM64 and in a couple of years there won't even be fat binaries anymore (once x86 support for Mac has been dropped), what's the point of continuing with that stuff? I guess Apple took the chance to bury that idea once and for all. Even if they one day add RISC-V to their platforms, developers can still ship fat binaries containing ARM64 and RISC-V code at the same time. This concept works well enough, is way simpler, and has no downsides other than "bigger binaries" and that's something server side app thinning can fix, as during download only the code for the current platform needs to be included.
Apple Watch Series 3 was the last device to not support 64-bit. (i.e. i386 or armv7)
Apple has now stopped supporting the Apple Watch Series 3. [1] They would have been happy to drop support for bitcode.
[1] https://www.xda-developers.com/watchos-9-not-coming-apple-watch-series-3
Xcode has removed support for the armv7/armv7s/i386 targets. Bitcode was used to build for different CPU targets, but now all devices are arm64, and no developers use this technology anymore, so deprecating it is probably a wise choice.
Bitcode was always pointless, as even if you compiled Bitcode to another architecture, there's a high chance it won't actually work because the ABI is different. For example, when you compile a C program, the libc headers are actually different for every architecture. I'm glad they are finally getting rid of it, as it caused more problems than it solved. The most they could have done is re-optimize the binary for the same architecture, or a similar enough architecture. There is also the problem of unwanted symbols leaking in Bitcode builds, so you either have to rename/obfuscate those or get hit by collisions (a big problem if you are a library/framework vendor).

I can't build dlib library on xcode

I'm trying to use dlib in iOS so I can run an application using face recognition.
I'm following this link to build dlib for ios then the error below shows up.
Undefined symbols for architecture arm64:
"_USER_ERROR__missing_dlib_all_source_cpp_file__OR__inconsistent_use_of_DEBUG_or_ENABLE_ASSERTS_preprocessor_directives_", referenced from:
_dlib_check_consistent_assert_usage in DlibWrapper.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
My environment
Mac (OSX) Catalina 10.15.4
SwiftUI (working with)
XCode 11.4
iPhone 6S
IOS 13.4.1
I'm afraid your title is a little misleading. You're not trying to build a library - you've downloaded a precompiled library and are trying to use it in a way it wasn't meant to be used. The library in question was not built for Intel architecture, so it won't run on the simulator. You have a number of options on how to proceed:
You can look for binaries with the appropriate architecture
Find the source code to the library (perhaps in a git project) and compile the libraries yourself
or look into Apple's Machine Learning Libraries and Technologies
https://developer.apple.com/videos/play/wwdc2019/209/
https://developer.apple.com/videos/play/wwdc2018/703/
Check out these and other WWDC videos on machine learning and ARKit as a starter.
I recommend you download Apple's own Developer app from the App Store.
https://apps.apple.com/us/app/apple-developer/id640199958
You can use it to find many videos on available resources.
There are a number of very powerful tools available. It helps if you know python, since that's where a lot of the development work is happening.
Maybe it would help if you were to look into how things are done in the iOS environment so you can better understand how it relates to other platforms.
There are many helpful articles out there, this is the first one I found:
https://towardsdatascience.com/core-machine-learning-for-ios-developers-7f2a4b19ec08
The key is to not get discouraged! There's a lot of useful information out there and it's important to look for alternatives when you've gone down a dead end.
Good Luck! 🍀

How does 32-bit architecture differ from 64-bit architecture, mainly in terms of app speed and memory management?

As far as I know, a new OS architecture is generally meant to speed up the OS and add new features with better memory management, but in iOS I am a little confused regarding the architectures we generally set in our app, shown below:
Architectures: Standard Architectures (armv7, arm64)
Valid Architectures: armv7, arm64, armv7s
Due to this, we are getting many warnings related to datatype sizes and conversions, because 64-bit architecture means using processors that have datapath widths, integer sizes, and memory address widths of 64 bits.
So my question is: I want to understand what mechanism works behind the scenes when I generate an IPA file for a 32-bit-supported architecture or a 64-bit architecture. (I know that after Xcode 6 we only build our app for 64-bit architectures, with Bitcode enabled for thinning our app size.)
Can anyone help me understand the architecture mechanism, especially in iOS?
Your help will be appreciated.
There are two architecture settings in an iOS project:
Architectures
Valid Architectures
The list of Valid Architectures constrains the possible values in the Architectures list.
When building for debugging on a device, Xcode will only build for the architecture of the target device (which may be x86, for the simulator). If the target device is a 32bit architecture, you'll get a 32-bit build.
When building for some kind of release (ad-hoc or App Store), Xcode will build for all the architectures listed in the build setting's Architecture list. The app binary, along with any dynamically-loaded frameworks will have a slice for each architecture.
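Expressed as build settings (an .xcconfig sketch; the setting names are standard Xcode build-setting identifiers, and the architecture lists here are illustrative):

```
// Debug: build only the architecture of the connected device/simulator
ONLY_ACTIVE_ARCH = YES

// Release / archive: build every architecture in ARCHS;
// ARCHS is filtered against VALID_ARCHS
ARCHS = armv7 arm64
VALID_ARCHS = armv7 armv7s arm64
```

ONLY_ACTIVE_ARCH is what produces the debug-time behavior described above; it defaults to YES for Debug and NO for Release configurations.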
Original Link: http://iossupportmatrix.com/
To add on to what Avi said, I hope this picture will give you a better understanding of how OS are evolving. The more right you go, the more the OS can handle information (it can handle more RAM) and some application require more RAM to run. I wish I could give you more specific information about this but I wouldn't want to say something wrong.

Apple Mach-O linker warning

I'm fairly new to ios programming. I've got an app that I think is almost ready for the appstore. However, I have one remaining warning and I'm not sure how to track down where the problem lies.
Here's the warning:
(null): Ignoring file /Users/maclehose/Library/Developer/Xcode/DerivedData/epidemiologyCalculator-esumevitgvkrmsfrmqeerqjyjfoh/Build/Products/Debug-iphoneos/epidemiologyCalculator.app/epidemiologyCalculator, missing required architecture arm64 in file /Users/maclehose/Library/Developer/Xcode/DerivedData/epidemiologyCalculator-esumevitgvkrmsfrmqeerqjyjfoh/Build/Products/Debug-iphoneos/epidemiologyCalculator.app/epidemiologyCalculator (2 slices)
Can anyone give me any specific advice on how to track down the source of the warning?
Thanks,
Rich
Go to your target's Build Settings and set the architectures to (armv7, armv7s). This way you can silence the warning.
But if you change it like this, it means that your app doesn't get compiled for the 64-bit architecture (which the iPhone 5S uses).
You don't need the 64-bit setting if performance is not an issue. However, if you really need it, you'll have to compile and build the CorePlot library from source by adding it as a dependent project inside your app.

Link error adding library built with Clang to iOS app built with GCC

I'm trying to add the Dropbox Sync API (v1.1.2) to an iOS app built with Marmalade (v6.3). I'm getting the following link error:
Undefined symbols for architecture armv7:
"___udivmodsi4", referenced from:
_sqlite3BitvecSet in libDropbox.a(sqlite3.o)
_sqlite3BitvecClear in libDropbox.a(sqlite3.o)
_sqlite3BitvecTest in libDropbox.a(sqlite3.o)
ld: symbol(s) not found for architecture armv7
Googling for pertinent parts of that error message finds a number of users of a certain SQLCipher library experiencing the same issue, and suggestions that the problem is caused by an inconsistency of the compilers used to build that library and the various projects using it.
As the build system for our project is set up by the Marmalade toolset, changing the compiler (currently a version of GCC 4.4 supplied by Marmalade, I believe) is not an option, I think.
Can anyone tell me more precisely what is going wrong? Are there any other workarounds to this problem?
On processors like ARM, with relatively simple instruction sets, some more complex operations are mapped onto function calls rather than sequences of instructions. When you link using the "correct" compiler, the implementations of these helpers are pulled in and it just works; you won't normally see them.
The obvious solution here would be to use marmalade to compile the dropbox library, then it will use a compatible compiler.
The question then, I guess, is whether there is a reason you are not doing this to start with. Current Marmalade compilers don't support ARC, so if the Dropbox library uses ARC, that would be a reason not to compile it with Marmalade.
The other answer is correct. However, for this specific problem (if these are the only linker errors you are getting) I see two workarounds:
Grab the source from sqlite3 that includes sqlite3BitvecSet and compile those functions in your own project to override the library. They will pick up whatever divmod support is offered by your own compiler.
Implement your own udivmodsi4. You don't have to implement division bit-by-bit (although you can grab the basic C implementation from the GCC source). You just have to implement it in native operations and let your compiler call whatever internal support it needs.
This is untested, but should give you the basic idea. You may need more underscores on the name to match the behavior/naming of the other build environment:
unsigned long
udivmodsi4(unsigned long num, unsigned long den, int modwanted)
{
    if (modwanted)
        return num % den;
    else
        return num / den;
}
