How to use a TensorFlow-trained Machine Learning Model in iOS

I am trying to recognise images with a model that has been trained using TensorFlow. I followed these steps
and finally succeeded in training my own dataset, and it gives good prediction results. All of the code is in Python. Now I am trying to use this trained model in my iOS project. I am following this tutorial to do that, but when I followed those steps I got an error in my Mac terminal window:
"ERROR: /tensorflow/tensorflow/core/kernels/BUILD:2235:1: C++ compilation of rule '//tensorflow/core/kernels:self_adjoint_eig_v2_op' failed: gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG ... (remaining 124 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 4. gcc: internal compiler error: Killed (program cc1plus)".
What would be the solution for this issue, or is there any way to convert this TensorFlow model to an iOS-supported Core ML model? I'm sharing a screenshot of that error here. Please help me out. Thanks.

Most likely this is due to insufficient memory inside the Docker container for Bazel to run with its default parameters. Try rerunning the build (after bazel clean) with these extra flags on the build command: --local_resources 2048,2.0,1.0 -j 1. You can also try giving it a little more resources: --local_resources 4096,2.0,1.0 -j 1.
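For instance, the retried build might look like this (a sketch; //your/build:target is a placeholder for whatever target the tutorial has you build):
bazel clean
# //your/build:target is a placeholder for the tutorial's build target
bazel build --local_resources 2048,2.0,1.0 -j 1 //your/build:target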
Here are some links for this issue: https://github.com/tensorflow/tensorflow/issues/1530
Tensorflow Serving Compile Error Using Docker on OSX

Related

How to use the installed (pre-compiled) drake as external with bazel?

I am working on a C++ project with Drake, using Bazel as the build system. Previously, I used the Drake source code as the external, following the drake_bazel_external example, and everything worked fine.
Since I want to use the SNOPT solver in Drake, I want to switch to the pre-compiled Drake. I followed the drake_bazel_installed example. However, I got the following errors.
Compiling kuka/diffIK_controller.cc failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 27 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox
In file included from bazel-out/k8-opt/bin/external/drake/_virtual_includes/.drake_headers/drake/common/default_scalars.h:3,
from bazel-out/k8-opt/bin/external/drake/_virtual_includes/.drake_headers/drake/systems/framework/leaf_system.h:14,
from ./kuka/diffIK_controller.h:3,
from kuka/diffIK_controller.cc:3:
bazel-out/k8-opt/bin/external/drake/_virtual_includes/.drake_headers/drake/common/autodiff.h:12:10: fatal error: Eigen/Core: No such file or directory
12 | #include <Eigen/Core>
| ^~~~~~~~~~~~
compilation terminated.
I also found that the apps in drake_bazel_external cannot be compiled successfully with the drake_bazel_installed setup. The error message is
ERROR: error loading package 'app': Label '@drake//tools/skylark:py.bzl' is invalid because 'tools/skylark' is not a package; perhaps you meant to put the colon here: '@drake//:tools/skylark/py.bzl'?
Update
The bug can be reproduced with both the http_archive-fetched Drake and the apt-installed Drake (the latest stable Drake, I think, since I just installed it yesterday). I have isolated the relevant code to reproduce the bug in a GitHub repo. Currently, I can get the original apps in drake_bazel_installed to work.
Update
By adding
# solve the eigen not found bug
build --cxxopt=-I/usr/include/eigen3
to the .bazelrc file, I can solve the above problem. However, when I try to build a program that uses iiwa_status_receiver.h, I encounter a new problem.
ERROR: /home/chenwang/repro_drake_bazel_external/drake_bazel_installed/apps/BUILD.bazel:102:10: Compiling apps/connection_test.cc failed: (Exit 1): gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections ... (remaining 32 arguments skipped)
Use --sandbox_debug to see verbose messages from the sandbox and retain the sandbox build root for debugging
In file included from apps/connection_test.cc:10:
bazel-out/k8-opt/bin/external/drake/_virtual_includes/.drake_headers/drake/manipulation/kuka_iiwa/iiwa_status_receiver.h:6:10: fatal error: drake/lcmt_iiwa_status.hpp: No such file or directory
6 | #include "drake/lcmt_iiwa_status.hpp"
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
INFO: Elapsed time: 2.967s, Critical Path: 0.24s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
This problem is also a missing-header-file problem. I have updated the GitHub repo to reproduce this problem.
This is a bug in Drake (filed as https://github.com/RobotLocomotion/drake/issues/17965 now).
To work around it, pass --cxxopt=-I/usr/include/eigen3 on all of your Bazel commands, e.g., by adding this line to your project's .bazelrc file:
build --cxxopt=-I/usr/include/eigen3
Edit: The nightly builds of the apt packages as of 20220923 should have this fixed as well.
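Equivalently, the flag can be passed directly on the command line for a one-off build (the target name here is taken from the error output above, so treat it as illustrative):
bazel build --cxxopt=-I/usr/include/eigen3 //apps:connection_test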

Exhausted Virtual Memory Installing SyntaxNet Using Docker Toolbox

I exhausted my virtual memory when trying to install SyntaxNet from this Dockerfile using Docker Toolbox. I received this message when building the Dockerfile:
ERROR: /root/.cache/bazel/_bazel_root/5b21cea144c0077ae150bf0330ff61a0/external/org_tensorflow/tensorflow/core/kernels/BUILD:1921:1: C++ compilation of rule '@org_tensorflow//tensorflow/core/kernels:svd_op' failed: gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE '-D_FORTIFY_SOURCE=1' -fstack-protector -Wall -Wl,-z,-relro,-z,now -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-canonical-system-headers ... (remaining 115 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 1. virtual memory exhausted: Cannot allocate memory
Building complete.
Elapsed time: 8548.364s, Critical Path: 8051.91s
I have a feeling this could be resolved by changing Bazel's default jobs limit with (for example) --jobs=1; however, I'm not sure where I would put that in the Dockerfile.
There are two possibilities: you could either modify the Dockerfile so that it creates a ~/.bazelrc containing the following text:
build --jobs=1
Note that this works, even though the Dockerfile runs bazel test (as opposed to bazel build), because build flags in the .bazelrc also apply to Bazel's test command.
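A minimal sketch of such a Dockerfile line, assuming the build runs as root inside the container:
# append the jobs limit to root's .bazelrc before the bazel test step
RUN echo "build --jobs=1" >> /root/.bazelrc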
The other possibility would be to modify the RUN command in the Dockerfile to include the --jobs=1 parameter, e.g. RUN [...] && bazel test --jobs=1 --genrule_strategy=standalone [...].
Bazel should then spawn no more than a single child process during the build. You can verify this by running ps axuf on your host and looking at the process tree of your container. If you modified the RUN command, you should also see the --jobs=1 parameter on Bazel's command line.

ERROR while generating a code coverage report using lcov

I am trying to run coverage on my project after updating to Ubuntu 16.04.
I get
Deleted 665 files
Writing data to coverage.info.cleaned
lcov: ERROR: cannot write to coverage.info.cleaned!
CMakeFiles/coverage.dir/build.make:57: recipe for target 'CMakeFiles/coverage' failed
make[3]: *** [CMakeFiles/coverage] Error 13
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/coverage.dir/all' failed
make[2]: *** [CMakeFiles/coverage.dir/all] Error 2
CMakeFiles/Makefile2:74: recipe for target 'CMakeFiles/coverage.dir/rule' failed
make[1]: *** [CMakeFiles/coverage.dir/rule] Error 2
Makefile:129: recipe for target 'coverage' failed
make: *** [coverage] Error 2
Before the update, I had no problem running coverage.
Does it help if you use absolute paths instead of relative paths when passing files to lcov?
I ran into a similar problem where lcov also failed to write the file.
I am not sure whether it is a bug in lcov, but the problem was that it got confused by relative paths:
lcov -a test_fast_cxxtest_gcov__base.info -a test_fast_cxxtest_gcov__test.info \
-o test_fast_cxxtest_gcov__total.info
Combining tracefiles.
Reading tracefile test_fast_cxxtest_gcov__base.info
Reading tracefile test_fast_cxxtest_gcov__test.info
lcov: WARNING: function data mismatch at /home/phil/ghost/constants.h:1862
Writing data to test_fast_cxxtest_gcov__total.info
lcov: ERROR: cannot write to test_fast_cxxtest_gcov__total.info!
Running it with strace revealed that it executes chdir("/") in several places, which changes the working directory to /. That explains why it cannot write the file.
One workaround is to use absolute paths. For instance, if you are using GNU make, you can use the abspath command:
lcov -a $(abspath test_fast_cxxtest_gcov__base.info) \
-a $(abspath test_fast_cxxtest_gcov__test.info) \
-o $(abspath test_fast_cxxtest_gcov__total.info)
After that change, it was finally able to write the file.
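Outside of a Makefile, the same effect can be achieved in a shell with realpath (a sketch, assuming GNU coreutils; realpath -m tolerates the output file not existing yet):
lcov -a "$(realpath test_fast_cxxtest_gcov__base.info)" \
     -a "$(realpath test_fast_cxxtest_gcov__test.info)" \
     -o "$(realpath -m test_fast_cxxtest_gcov__total.info)"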
(Other options, like trying to set the directories using the --base-directory or --directory option, did not have an effect, as far as I saw.
The version of lcov that I tested with is 1.12.)
The problem is not limited to Ubuntu, as I ran into it on Arch Linux. It could be a regression introduced in 1.12, however, so I reported it (see issue #77630).
Update: Lcov is not part of GCC, so my original bug report was closed, but I got an answer from the lcov mailing list. The problem is already fixed in commit 632c25. Users of Arch Linux-based distros can try the latest snapshot with aur/lcov-git.

OpenCV with CMake version 3.5.2 vs CMake 2.X.X

I am at a loss to solve a particular issue I have: I cannot pinpoint the culprit.
System: Jetson TX1, arm64 kernel, 32-bit userspace, opencv4tegra
Situation: Have been building projects using:
find_package( OpenCV )
And this has worked and compiled fine.
Fault: I built CMake 3.5.2 from source and installed it. Now I can no longer build any projects that depend on OpenCV. I get linker errors saying the linker cannot find:
opencv_dep_cudart
I am assuming the issues are caused in OpenCVConfig.cmake, around this point:
# Import target "opencv_core" for configuration "Release"
set_property(TARGET opencv_core APPEND PROPERTY IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(opencv_core PROPERTIES
  IMPORTED_LINK_INTERFACE_LIBRARIES_RELEASE "opencv_dep_cudart;opencv_dep_nppc;opencv_dep_nppi;opencv_dep_npps;dl;m;pthread;rt;tbb"
  IMPORTED_LOCATION_RELEASE "${_IMPORT_PREFIX}/lib/libopencv_core.so.2.4.12"
  IMPORTED_SONAME_RELEASE "libopencv_core.so.2.4"
)
This is from the file /usr/share/OpenCV/OpenCVModules-release.cmake.
However, this file doesn't change between CMake versions, as it is an OpenCV file, so the difference must be in how it is processed.
Reverting my CMake back to 2.8.12.2 manually allowed me to build again. Here is an example of a make command that uses OpenCV. Note the correct CUDA libs:
Linking CXX executable DuoInterfaceTest
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/DuoInterfaceTest.dir/link.txt --verbose=1
/usr/bin/c++ -O2 -g -DNDEBUG -std=gnu++11 CMakeFiles/DuoInterfaceTest.dir/src/mainTest.cpp.o -o DuoInterfaceTest -L/home/ubuntu/catkin_ws/duointerface/lib/linux/arm -rdynamic libDuoInterface.a /usr/lib/libopencv_vstab.so.2.4.12 /usr/lib/libopencv_tegra.so.2.4.12 /usr/lib/libopencv_imuvstab.so.2.4.12 /usr/lib/libopencv_facedetect.so.2.4.12 /usr/lib/libopencv_esm_panorama.so.2.4.12 /usr/lib/libopencv_detection_based_tracker.so.2.4.12 /usr/lib/libopencv_videostab.so.2.4.12 /usr/lib/libopencv_video.so.2.4.12 /usr/lib/libopencv_ts.a /usr/lib/libopencv_superres.so.2.4.12 /usr/lib/libopencv_stitching.so.2.4.12 /usr/lib/libopencv_photo.so.2.4.12 /usr/lib/libopencv_objdetect.so.2.4.12 /usr/lib/libopencv_ml.so.2.4.12 /usr/lib/libopencv_legacy.so.2.4.12 /usr/lib/libopencv_imgproc.so.2.4.12 /usr/lib/libopencv_highgui.so.2.4.12 /usr/lib/libopencv_gpu.so.2.4.12 /usr/lib/libopencv_flann.so.2.4.12 /usr/lib/libopencv_features2d.so.2.4.12 /usr/lib/libopencv_core.so.2.4.12 /usr/lib/libopencv_contrib.so.2.4.12 /usr/lib/libopencv_calib3d.so.2.4.12 /usr/lib/libopencv_tegra.so.2.4.12 /usr/lib/libopencv_stitching.so.2.4.12 /usr/lib/libopencv_gpu.so.2.4.12 /usr/lib/libopencv_photo.so.2.4.12 /usr/lib/libopencv_legacy.so.2.4.12 /usr/local/cuda-7.0/lib/libcufft.so /usr/lib/libopencv_video.so.2.4.12 /usr/lib/libopencv_objdetect.so.2.4.12 /usr/lib/libopencv_ml.so.2.4.12 /usr/lib/libopencv_calib3d.so.2.4.12 /usr/lib/libopencv_features2d.so.2.4.12 /usr/lib/libopencv_highgui.so.2.4.12 /usr/lib/libopencv_imgproc.so.2.4.12 /usr/lib/libopencv_flann.so.2.4.12 /usr/lib/libopencv_core.so.2.4.12 /usr/local/cuda-7.0/lib/libcudart.so /usr/local/cuda-7.0/lib/libnppc.so /usr/local/cuda-7.0/lib/libnppi.so /usr/local/cuda-7.0/lib/libnpps.so -ldl -lm -lpthread -lrt -ltbb -lDUO -Wl,-rpath,/home/ubuntu/catkin_ws/duointerface/lib/linux/arm:/usr/local/cuda-7.0/lib
Thoughts? I would like to be able to keep the newer CMake on my system but don't understand enough to fix the problem. If you think this is too system-unique I will withdraw the question.
As noted by Michael Mairegger, you have to run cmake in the build directory with
sudo cmake .. -DCUDA_USE_STATIC_CUDA_RUNTIME=false
One extra thing I noticed: if I try to make afterwards, it won't work unless I run the cmake command twice.
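Putting it together, the whole sequence might look like this (a sketch, assuming a fresh out-of-source build directory next to the top-level CMakeLists.txt):
mkdir -p build && cd build
# run the configure step twice, as noted above
sudo cmake .. -DCUDA_USE_STATIC_CUDA_RUNTIME=false
sudo cmake .. -DCUDA_USE_STATIC_CUDA_RUNTIME=false
make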

Why does my erlang build fail with a core dump on Solaris Sparc?

(I have the answer to this already; I'm going to answer my own question so that I can share what I've learned and save someone else this trouble in the future)
When I attempt to build Erlang on Solaris 10 Sparcv9, the build fails partway through:
cd lib && \
ERL_TOP=/var/tmp/pkgbuild-0/erlang/sparcv9/erlang-otp-73b4221 PATH=/var/tmp/pkgbuild-0/erlang/sparcv9/erlang-otp-73b4221/bootstrap/bin:${PATH} \
make opt SECONDARY_BOOTSTRAP=true
make[1]: Entering directory `/var/tmp/pkgbuild-0/erlang/sparcv9/erlang-otp-73b4221/lib'
make[2]: Entering directory `/var/tmp/pkgbuild-0/erlang/sparcv9/erlang-otp-73b4221/lib/hipe'
=== Entering application hipe
make[3]: Entering directory `/var/tmp/pkgbuild-0/erlang/sparcv9/erlang-otp-73b4221/lib/hipe/misc'
erlc -W +debug_info +warn_exported_vars +warn_missing_spec +warn_untyped_record -o../ebin hipe_consttab.erl
make[3]: *** [../ebin/hipe_consttab.beam] Bus Error (core dumped)
make[3]: Leaving directory `/var/tmp/pkgbuild-0/erlang/sparcv9/erlang-otp-73b4221/lib/hipe/misc'
Why is this and what can I do to complete my Erlang build?
The reason the build fails is a broken build environment.
In this specific case, the Sun GCC build is being used. This particular version of GCC was compiled to use a mixture of the GNU assembler and the Sun linker.
The Sparc platform is highly sensitive to the alignment of code, and it will fault (for example, with a bus error) if unaligned code is executed.
The GNU assembler used by the stock GCC build on Sparc Solaris 10 does not try very hard to automatically align functions generated by the compiler, leading to unaligned code.
The solution is to build your own GCC and make sure that you use the system assembler and linker; you can achieve this by using the following options to GCC's configure script:
--with-as=/usr/ccs/bin/as \
--without-gnu-as \
--without-gnu-ld \
--with-ld=/usr/ccs/bin/ld \
The resultant GCC build will generate properly aligned code and allow you to build Erlang successfully.
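A sketch of the corresponding configure invocation (the prefix and directory layout are placeholders; only the four options above are taken from the fix):
cd gcc-build
../gcc-src/configure --prefix=/opt/gcc \
    --with-as=/usr/ccs/bin/as \
    --without-gnu-as \
    --without-gnu-ld \
    --with-ld=/usr/ccs/bin/ld
make && make install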
