How to build tensorflow lite with C API support? - bazel

In the official GitHub repo and documentation, there appears to be no predefined Bazel target for TensorFlow Lite with C API support.
I'm not very familiar with Bazel, but it seems like there should be a way to do it.

You can find an experimental C library for TensorFlow Lite in lite/experimental/c/BUILD. There's also a shared library target that you can build with bazel, e.g.,
bazel build -c opt --cxxopt=--std=c++11 \
//tensorflow/lite/experimental/c:libtensorflowlite_c.so
If you wish to build for Android, you'll use something like:
bazel build -c opt --cxxopt=--std=c++11 --config=android_arm64 \
//tensorflow/lite/experimental/c:libtensorflowlite_c.so
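Once the shared library is built, you can link a C program against it. A minimal sketch, assuming a hypothetical my_app.c that includes the C API header from tensorflow/lite/experimental/c and that you compile from the repository root:
gcc my_app.c -I. \
    -Lbazel-bin/tensorflow/lite/experimental/c -ltensorflowlite_c \
    -o my_app
At runtime you will also need libtensorflowlite_c.so on the loader path (for example via LD_LIBRARY_PATH, or by installing the library system-wide).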

Related

How to package a Bazel project when it depends on @bazel_tools?

I have an example Bazel project that I want to package as a tar archive (to further distribute through Homebrew, etc.). The project consists of some Python files and a run shell script.
When trying to use rules_pkg, I run into the following problem: the run script, b.sh, depends on @bazel_tools/tools/bash/runfiles/runfiles.bash to resolve the location of the binary package, and @bazel_tools is not provided by default when using rules_pkg.
What would be the correct approach for creating a distribution of the project above?

How to use meson to build glib

I need to upgrade glib for a specific project. It currently uses glib 2.28.8. I have three problems.
I've never used meson and ninja before, so I checked glib's INSTALL.in and it just said to run meson _build followed by ninja -C _build. So I ran meson _build and got the following output:
$ meson _build
The Meson build system
Version: 0.47.2
Source dir: /srv/devel/build/glib-2.65.0
Build dir: /srv/devel/build/glib-2.65.0/_build
Build type: native build
meson.build:227: WARNING: Identifier 'in' will become a reserved keyword in a future release. Please rename it.
meson.build:227:14: ERROR: Expecting eol got id.
if vs_crt_opt in ['mdd', 'mtd']
So the basic build doesn't work. Why?
For our purposes, we use the following configure command:
PKG_CONFIG_PATH=$(OUTPUT_DIR)/lib/pkgconfig ./configure --prefix=$(OUTPUT_DIR) --disable-dtrace --disable-selinux ac_cv_path_MSGFMT=/bin/true CPPFLAGS="-fPIC -I$(OUTPUT_DIR)/include" LDFLAGS="-L$(OUTPUT_DIR)/lib" --enable-static --disable-shared
How do I specify that in meson?
I will also need to build in Windows. Any gotchas there?
Thanks!
EDIT: I tried older versions of glib, going back to 2.62.0, and when I run meson _build I get the error meson.build:1:0: ERROR: Meson version is 0.47.2 but project requires >= 0.49.2. So that's probably a big part of the problem for question (1). This is running on CentOS 6 & 7, so I'll probably have to get and install a current meson package.
So the basic build doesn't work. Why?
You correctly figured this out in your edit: GLib 2.64 requires Meson 0.49.2, and it seems that Meson 0.47.2 is so old as to not be able to correctly parse GLib’s meson.build.
It looks from your build output that you’re trying to build GLib 2.65.0. Note that 2.65 is an unstable release series. Even minor versions of GLib (2.62.x, 2.64.x, etc.) are stable; odd ones are unstable. Using an unstable release is fine, as long as you know what you’ve signed up for: it may contain bugs, and new APIs introduced in that unstable series may change or be removed before the first stable release (in the case of 2.65.x, the corresponding first stable release will be 2.66.0).
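If your distribution's Meson packages are too old (as they will be on CentOS 6 and 7), one common workaround, though not something the GLib documentation mandates, is to install a newer Meson and Ninja from PyPI (assuming a working Python 3 and pip3):
pip3 install --user meson ninja
export PATH="$HOME/.local/bin:$PATH"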
For our purposes, we use the following configure command:
You’ll want something like:
meson --prefix "$(OUTPUT_DIR)" -Dselinux=disabled -Ddefault_library=static _build
You can see from the b_staticpic option’s default value that -fPIC is the default for static libraries, so (I believe) it doesn’t need to be explicitly specified.
There should be no need to disable dtrace support since it’s disabled by default. If you did need to disable it, you’d do that with -Ddtrace=false.
The custom -L and -I arguments should be covered by use of --prefix.
Overriding the msgfmt tool to disable internationalisation is not a supported way of building GLib and you’re on your own with that one.
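Putting those pieces together, a rough Meson equivalent of the configure line above might look something like this (a sketch, assuming OUTPUT_DIR is set in your environment; check your GLib version's meson_options.txt for the exact option names):
PKG_CONFIG_PATH="$OUTPUT_DIR/lib/pkgconfig" meson _build \
    --prefix "$OUTPUT_DIR" \
    -Dselinux=disabled \
    -Ddefault_library=static
ninja -C _build
ninja -C _build install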
There is some good documentation on the built-in options in Meson here and here.
I will also need to build in Windows. Any gotchas there?
That’s too broad a question to be answered on StackOverflow.

Darwin not supported yet for MKL build / configure for Tensorflow

I am trying to use tensorflow-1.3.0 in my iOS app, following this tutorial: http://jeffxtang.github.io/deep/learning,/tensorflow,/mobile,/ai/2016/09/23/mobile-tensorflow.html
In Step 4 of the tutorial, I need to run the ./configure script before I can run bazel build for the retrain script:
bazel build tensorflow/examples/image_retraining:retrain
bazel-bin/tensorflow/examples/image_retraining/retrain \
--model_dir=/tf_files/inception-v3 \
--output_graph=/tf_files/retrained_models/dog_retrained.pb \
--output_labels=/tf_files/retrained_models/dog_retrained_labels.txt \
--image_dir ~/Downloads/dog_images \
--bottleneck_dir=/tf_files/dogs_bottleneck
But the ./configure step fails with the following message:
MobioApps-Mac-mini:tensorflow-1.3.0 mobioapp$ ./configure
You have bazel 0.5.3-homebrew installed.
Please specify the location of python. [Default is /Users/mobioapp/anaconda/bin/python]: /Users/mobioapp/anaconda/bin/python
Found possible Python library paths:
/Users/mobioapp/anaconda/lib/python2.7/site-packages
Please input the desired Python library path to use. Default is [/Users/mobioapp/anaconda/lib/python2.7/site-packages]
/Users/mobioapp/anaconda/lib/python2.7/site-packages
Do you wish to build TensorFlow with MKL support? [y/N] y
MKL support will be enabled for TensorFlow
Do you wish to download MKL LIB from the web? [Y/n] Y
Darwin is unsupported yet
Problem: Darwin is unsupported yet
What is the solution to this issue? Please help me out; I have been searching for a long time.
To overcome this problem, skip step 4 from this link: http://jeffxtang.github.io/deep/learning,/tensorflow,/mobile,/ai/2016/09/23/mobile-tensorflow.html. I just followed steps 1, 2, 3 and 5 from the tutorial. This solved my problem, and I was finally able to run my own trained model in my iOS app and get successful prediction results.
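If you do need to run ./configure, another option (just a suggestion, not part of the tutorial) is to answer "n" at the MKL prompt: the "Darwin is unsupported yet" message only appears when configure tries to download MKL, which is not available for macOS. The relevant part of the prompt, with the other answers unchanged from the transcript above:
Do you wish to build TensorFlow with MKL support? [y/N] n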

Yocto SDK with cmake toolchain file

I provide a Yocto SDK to cross-build an application for an embedded target. The application itself is built using CMake. The SDK setup script provides many necessary environment variables (like location of the cross-compiler, sysroot, etc.), which so far was enough to build the application.
However, the application recently gained a dependency on the Boost library (through the command find_package(Boost REQUIRED) in the CMakeLists.txt). Now CMake complains that it cannot find the library, even though it is installed in the SDK sysroot. If I build the application directly in Yocto, however, it works fine.
After some research it turned out that Yocto generates a toolchain.cmake file which is added to the cmake call. In this file, the variable CMAKE_FIND_ROOT_PATH is set, which CMake needs to find libraries. Using such a toolchain file, I can also build using the SDK.
Now I'm wondering if Yocto provides any mechanism to export such a toolchain file with the SDK. Or alternatively if the SDK provides a script or something to automatically create a toolchain file directly on the SDK build host.
Or shall I just tell the users of the SDK to manually create a toolchain file and add it to their cmake call?
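For reference, "add it to their cmake call" here means something along the lines of the following, with a placeholder path to the generated toolchain file:
cmake -DCMAKE_TOOLCHAIN_FILE=/path/to/toolchain.cmake /path/to/source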
Assuming that you're using the image-based SDK, i.e. building it with bitbake <image> -c populate_sdk, adding the following to your image recipe (image.bb) should fix it:
TOOLCHAIN_HOST_TASK += "nativesdk-cmake"
That should give you an OEToolchainConfig.cmake file in the SDK. After sourcing the SDK environment file, cmake will be an alias for cmake -DCMAKE_TOOLCHAIN_FILE=$OECORE_NATIVE_SYSROOT/usr/share/cmake/OEToolchainConfig.cmake, to further help your developers.
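In practice, a developer using the SDK would then do something like this (a sketch; the installer path and environment-setup script name depend on your distro, version and target machine):
. /opt/<distro>/<version>/environment-setup-<target>
mkdir build && cd build
cmake ..        # the cmake alias injects -DCMAKE_TOOLCHAIN_FILE=.../OEToolchainConfig.cmake
make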
I'd like to add to Anders' answer that while it worked great for me to add nativesdk-cmake this way, it did not work when I tried to add nativesdk-python3-numpy. After some googling I found this, suggesting that TOOLCHAIN_HOST_TASK has to be extended using _append instead of +=.
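In other words, something like the following in the image recipe (or local.conf), noting that the leading space inside the quotes matters because _append does not insert one for you:
TOOLCHAIN_HOST_TASK_append = " nativesdk-python3-numpy"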

How to build opencv_contrib module for iOS

I want to use some functions in the newly introduced opencv_contrib modules on iOS. How can I build an iOS framework with those extra modules? Thanks in advance.
I am answering this (old) question for the benefit of other developers who would like to try this on newer OpenCV versions.
It is possible to build opencv_contrib modules together with the iOS framework, in version 4 (current at the time of answering).
Set the path to the Xcode command line tools:
sudo ln -s /Applications/Xcode.app/Contents/Developer Developer
cd to the path above the opencv directory
cd ~/
Build the framework with the --contrib option:
python opencv/platforms/ios/build_framework.py --contrib <'relative_path_to_opencv_contrib'>/opencv_contrib/ ios
If an individual module does not get built, you should check the CMakeLists.txt of that module to see if it has been disabled for iOS.
I just tested this before answering, so feel free to drop a comment or a question if there are issues.
See the official document tutorial_ios_install!
This works well.
The official document does not cover building the iOS framework with opencv_contrib.
But inferring from the cmake file, you can copy the module you want (ximgproc in your case) into opencv/modules and then run build_framework.py as usual.
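For example, assuming opencv and opencv_contrib are cloned side by side and you only need ximgproc, something like the following should work (a sketch, not tested against every OpenCV version):
cp -r opencv_contrib/modules/ximgproc opencv/modules/
python opencv/platforms/ios/build_framework.py ios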
You can check out this post.
