Hadoop fails to load OpenCV native library

I am trying to run an image processing example in Hadoop.
Hadoop Version: Hadoop 2.0.0-cdh4.2.1
Hipi Version: hipi-2.1.0
OpenCV Version: opencv-2.4.11
opencv-2411.jar and hipi-2.1.0.jar are on the Hadoop classpath.
I have put libopencv_java2411.so in the directory /etc/opencv/lib.
I set JAVA_LIBRARY_PATH in the /usr/lib/hadoop/libexec/hadoop-config.sh file to point to the OpenCV native library, as below:
JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:/etc/opencv/lib
When I submit the job, I get the following error message.
attempt_201804241646_0001_m_000000_0: Native code library failed to load.
attempt_201804241646_0001_m_000000_0: java.lang.UnsatisfiedLinkError: no opencv_java2411 in java.library.path
18/04/24 17:05:05 INFO mapred.JobClient: Task Id : attempt_201804241646_0001_m_000000_1, Status : FAILED
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237)
Why does it fail to load the native library? Please help.
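One likely explanation (an assumption; the thread does not confirm it): hadoop-config.sh only affects the JVM of the client that submits the job, while the failing task attempts run in child JVMs spawned on the tasktrackers, and those children do not inherit JAVA_LIBRARY_PATH. A minimal sketch of pointing the child JVMs at the library instead, assuming /etc/opencv/lib exists on every node:

<!-- mapred-site.xml: make map/reduce child JVMs search /etc/opencv/lib for native libraries -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Djava.library.path=/etc/opencv/lib</value>
</property>

The same flag can also be set per job on the hadoop jar command line with -Dmapred.child.java.opts="-Djava.library.path=/etc/opencv/lib".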

Related

How to generate an SDK for the ARM AM5728 EVM module?

I am building an SDK for my AM5728 board, and for that I am following this guide.
When I tried to run MACHINE=am57xx-evm bitbake arago-base-tisdk-image
I got the following error:
ERROR: Unable to start bitbake server (None)
ERROR: Server log for this session (/home/jenexpc/tisdk/build/bitbake-cookerdaemon.log):
--- Starting bitbake server pid 44756 at 2022-03-28 14:11:56.538965 ---
ERROR: ParseError at /home/jenexpc/tisdk/sources/meta-ros/meta-ros2-eloquent/conf/ros-distro/include/eloquent/ros-distro.inc:112: unparsed line: 'ROS_BUILD_TYPE:pn-ros-workspace = "ament_cmake"'
Any idea how to solve this error?
Thank you.
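For what it's worth, the colon in ROS_BUILD_TYPE:pn-ros-workspace is the new override syntax introduced in BitBake's Honister release; a pre-Honister BitBake, such as the one bundled with older Arago/TI SDK branches, cannot parse it, which is exactly what the ParseError reports. If upgrading BitBake (or checking out a meta-ros branch that matches your BitBake version) is not an option, a hedged workaround is to rewrite the assignment in the pre-Honister underscore spelling:

# ros-distro.inc line 112, rewritten in old-style override syntax for pre-Honister BitBake
ROS_BUILD_TYPE_pn-ros-workspace = "ament_cmake"

The cleaner fix is usually to pin meta-ros to the branch that matches the rest of your Arago layers rather than patching the .inc file.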

OpenCV build error while compiling with IE (Inference Engine)

OpenCV Version: 4.4.0 (latest)
OpenVINO Version: 2020.4 (latest)
The error happens when trying to build the project which is generated by CMake 3.18.0 on Windows 10.
I have tried building from scratch many times (cleared all caches, updated the source code, reinstalled the OpenVINO toolkit, and ran all of its demos successfully), but the problem still exists.
Here are the VS 2019 build error logs:
46>Done building project "opencv_dnn.vcxproj" -- FAILED.
73>LINK : fatal error LNK1181: cannot open input file '..\..\lib\Release\opencv_dnn440.lib'
75>LINK : fatal error LNK1181: cannot open input file '..\..\lib\Release\opencv_dnn440.lib'
73>Done building project "opencv_text.vcxproj" -- FAILED.
81>------ Build started: Project: opencv_datasets, Configuration: Release x64 ------
82>------ Build started: Project: opencv_videostab, Configuration: Release x64 ------
75>Done building project "opencv_mcc.vcxproj" -- FAILED.
81>Done building project "opencv_datasets.vcxproj" -- FAILED.
88>Done building project "opencv_dnn_objdetect.vcxproj" -- FAILED.
87>LINK : fatal error LNK1181: cannot open input file '..\..\lib\Release\opencv_datasets440.lib'
87>Done building project "opencv_tracking.vcxproj" -- FAILED.
94>LINK : fatal error LNK1181: cannot open input file '..\..\lib\Release\opencv_tracking440.lib'
94>Done building project "opencv_stereo.vcxproj" -- FAILED.
95>cv2.cpp
95>D:\GitHub\opencv\opencv_contrib\modules\saliency\include\opencv2/saliency/saliencySpecializedClasses.hpp(1,1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss
95>D:\GitHub\opencv\opencv_contrib\modules\datasets\include\opencv2/datasets/dataset.hpp(1,1): warning C4819: The file contains a character that cannot be represented in the current code page (936). Save the file in Unicode format to prevent data loss
93>LINK : fatal error LNK1181: cannot open input file '..\..\..\lib\Release\opencv_tracking440.lib'
93>Done building project "opencv_java.vcxproj" -- FAILED.
95>LINK : fatal error LNK1181: cannot open input file '..\..\lib\Release\opencv_dnn_superres440.lib'
95>Done building project "opencv_python3.vcxproj" -- FAILED.
96>------ Build started: Project: ALL_BUILD, Configuration: Release x64 ------
96>Building Custom Rule D:/GitHub/opencv/opencv/CMakeLists.txt
In the end, 10 projects failed:
========== Build: 86 succeeded, 10 failed, 0 up-to-date, 0 skipped ==========
NOTE: Previously I was able to build OpenCV with IE without any errors, but with the new release there are many errors while compiling and building.
Any solution?
I am ready to provide more logs and info if needed.
Thanks!!
The problem was solved by configuring CMake with the ngraph flag enabled and pointing it at the directory containing ngraph's CMake files.
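For reference, the configure step would look something like this (a sketch; the OpenVINO install path below is illustrative, adjust it to your machine):

:: point ngraph_DIR at the cmake folder inside your OpenVINO install
cmake -G "Visual Studio 16 2019" -A x64 ^
  -DWITH_INF_ENGINE=ON ^
  -DWITH_NGRAPH=ON ^
  -Dngraph_DIR="C:/Program Files (x86)/IntelSWTools/openvino_2020.4/deployment_tools/ngraph/cmake" ^
  -DOPENCV_EXTRA_MODULES_PATH=D:/GitHub/opencv/opencv_contrib/modules ^
  D:/GitHub/opencv/opencv

This matches the logs above: opencv_dnn fails first, and every module that links against opencv_dnn440.lib then fails with LNK1181 because that library was never produced.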

s2i haskell stack failed to build cardano-sl

I'm trying to build my Cardano full-node application.
I am having an issue with the stack.yaml. I am using the current stack-2.1.1 and I installed all the dependencies for cardano-sl:
https://github.com/input-output-hk/cardano-sl/blob/develop/docs/how-to/build-cardano-sl-and-daedalus-from-source-code.md
---> Building application from source...
Going to build: cardano-sl-networking cardano-sl-binary cardano-sl-util cardano-sl-crypto cardano-sl-core cardano-sl-db cardano-sl-chain cardano-sl-infra cardano-sl cardano-sl-node cardano-sl-client cardano-sl-generator cardano-sl-auxx cardano-sl-tools cardano-sl-explorer cardano-sl-wallet
Building cardano-sl-networking
stack build --ghc-options=" -Wwarn" --test --no-haddock-deps --bench --jobs=1 --no-run-tests --no-run-benchmarks --dependencies-only cardano-sl-networking
Could not parse '/opt/app-root/src/stack.yaml':
Aeson exception:
Error in $.packages[32]: failed to parse field 'packages': expected Text, encountered Object
See http://docs.haskellstack.org/en/stable/yaml_configuration/
Build failed
ERROR: An error occurred: non-zero (13) exit code from mycardano-s2i
My stack.yaml:
resolver: lts-12.17
flags:
  ether:
    disable-tup-instances: true
extra-package-dbs: []
packages:
- util
- util/test
- networking
- binary
- binary/test
- crypto
- crypto/test
- core
- core/test
- db
- db/test
- infra
- infra/test
- chain
- chain/test
- lib
- generator
- client
- auxx
- script-runner
- explorer
- node
- tools
- tools/post-mortem
- utxo
- wallet
- node-ipc
- faucet
- acid-state-exts
- x509
- cluster
- mnemonic
I have read about a workaround where you have to update stack, but at the moment I am already using the latest release, 2.1.1.
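No answer is recorded here, but one hedged observation: the error points at packages[32], the 33rd entry, while the listing above contains only 32 plain paths, so the file on disk may have one more entry in the old stack 1.x object form. stack 2.x requires every packages entry to be a plain path (Text, hence "expected Text, encountered Object"); remote dependencies move to extra-deps. An illustrative sketch (the git URL is hypothetical):

# stack 2.x: packages entries must be plain relative paths
packages:
- wallet
# old stack 1.x object form, no longer accepted under packages:
# - location:
#     git: https://github.com/example/some-dep
#     commit: abc123
# remote dependencies belong under extra-deps instead:
extra-deps:
- git: https://github.com/example/some-dep
  commit: abc123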

Tensorflow Hub Build from Source Failing

I'm having a problem building TF Hub from source. Can anyone please help me out? I've been following the steps given in https://github.com/tensorflow/hub/blob/master/tensorflow_hub/pip_package/PIP.md
I've installed bazel 0.24.1.
The error I'm getting:
ERROR: /home/tf_hub/hub/WORKSPACE:17:1: name 'git_repository' is not defined
ERROR: /home/tf_hub/hub/WORKSPACE:40:1: name 'http_archive' is not defined
ERROR: /home/tf_hub/hub/WORKSPACE:47:1: name 'new_http_archive' is not defined
ERROR: Error evaluating WORKSPACE file
ERROR: error loading package '': Encountered error while reading extension file 'tools/build_defs/repo/http.bzl': no such package '@bazel_tools//tools/build_defs/repo': error loading package 'external': Could not load //external package
INFO: Elapsed time: 2.552s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
Bazel Version: 0.24.1
Python Version: 3
Tensorflow Version: 2.0.0a
Commands to Reproduce:
(env)~/tf_hub$ git clone https://github.com/tensorflow/hub
(env)~/tf_hub$ cd hub && bazel build tensorflow_hub/pip_package:build_pip_package
Expected output: no error, build successful.
The latest versions of Bazel no longer support git_repository out of the box (which is still used by tensorflow_hub), so uninstalling Bazel 0.24.1 and installing Bazel 0.18.1 worked.
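An alternative to downgrading (a sketch, untested against this exact tensorflow_hub revision): newer Bazel still ships these repository rules, but they must be loaded explicitly at the top of the WORKSPACE file, and new_http_archive is gone entirely (http_archive with a build_file attribute replaces it):

# WORKSPACE: explicit loads required by newer Bazel releases
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")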

Error Launching Cloud Dataflow Java App Using Cloud Composer

I am a GCP newbie, facing an error when trying to run a Cloud Dataflow app for the BeamTutorial using GCP Cloud Composer's DataflowJavaOperator. Airflow picks up the pipeline but fails with the error below.
gcp_dataflow_hook.py:115} INFO - Running command: java -cp /tmp/dataflow13ec2a50-BeamTutorial-0.0.1-SNAPSHOT.jar org.apache.beam.examples.tutorial.game.solution.Exercise2 --runner=DataflowRunner --project=..... --region=us-central1 --labels={"airflow-version":"v1-9-0-composer"} --jobName=run-beam-data-flow-java-1449a1da --outputPrefix=gs://..../ex2-spark/out
gcp_dataflow_hook.py:127} WARNING - Error: A JNI error has occurred, please check your installation and try again
[2018-10-18 09:35:00,316] {base_task_runner.py:98} INFO - Subtask: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/beam/sdk/options/PipelineOptions
This BeamTutorial-0.0.1-SNAPSHOT.jar is not a fat jar, yet it runs the job successfully in Dataflow when submitted manually from the GCP Cloud Shell as below:
mvn compile exec:java -Dexec.mainClass="org.apache.beam.examples.tutorial.game.solution.Exercise2" -Dexec.args="--runner=dataflow --project=<project-name> --outputPrefix=gs://..../beam-tutorial/ex2-spark/out" -Pdataflow-runner
I'd appreciate any help in fixing this error. Thank you.
When using the DataFlowJavaOperator you need to follow instructions here on how to create your ".jar" file:
Add the dependency and plugin from the link
Run mvn package to create your ".jar" file
Once you do that, I'd advise making sure the ".jar" file actually runs correctly before trying to run it inside Composer. So in this case, following the tutorial, running:
java -jar target/BeamTutorial-0.0.1-SNAPSHOT.jar --runner=DataflowRunner --project=<my-project> --tempLocation=<my-bucket>
I also get:
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/beam/sdk/options/PipelineOptions
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: org.apache.beam.sdk.options.PipelineOptions
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
So the issue looks more Java-related: either the pom is configured in a way that does not create a valid .jar file, or it is expecting some additional parameters. In any case, you should troubleshoot the ".jar"/pom before going further.
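If the root cause is indeed a thin jar, the usual fix (a sketch of the common pattern, not the exact content behind the link above) is to build a fat jar with the maven-shade-plugin in the build/plugins section of the pom:

<!-- pom.xml: bundle all dependencies (including the Beam SDK that
     provides PipelineOptions) into a single runnable fat jar -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>org.apache.beam.examples.tutorial.game.solution.Exercise2</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>

After mvn package, the java -jar command above should then start without the NoClassDefFoundError.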
For some other pipelines of mine, I ran them successfully using the DataflowJavaOperator and a valid ".jar" file.
