I am trying to run the provided ARIMA model example (the spark-ts library for Spark) against the ARIMA test data, using the Java API of the 0.4.0 jar. I am calling "ARIMA.autoFit(ts, 1, 1, 1);" to fit the model.
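In context, the call looks roughly like this (a minimal sketch; the toy data stands in for the spark-ts ARIMA test data, and the class name is a placeholder):

import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import com.cloudera.sparkts.models.ARIMA;
import com.cloudera.sparkts.models.ARIMAModel;

public class ArimaAutoFitExample {
    public static void main(String[] args) {
        // Toy series standing in for the ARIMA test data shipped with spark-ts.
        double[] values = {10.1, 10.4, 10.2, 10.9, 11.3, 11.0, 11.6, 12.1, 11.9, 12.4};
        Vector ts = Vectors.dense(values);
        // Fit with maximum orders p=1, d=1, q=1; this is the call after which execution halts.
        ARIMAModel model = ARIMA.autoFit(ts, 1, 1, 1);
        System.out.println("Coefficients: " + java.util.Arrays.toString(model.coefficients()));
    }
}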
However, I get the two warnings below, after which execution halts without any further progress or errors:
WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
I tried Google and installed "libgfortran3", but to no avail.
Any suggestions?
Thanks
I fixed the issue by building the jar from source with Maven rather than using the pre-built jar. I also built it on the same Ubuntu machine where Spark runs.
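For reference, the build was along these lines (a sketch; the repository URL and skipping tests are my assumptions):

git clone https://github.com/sryza/spark-timeseries.git
cd spark-timeseries
mvn clean package -DskipTests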
Issue Summary:
Hi,
I am using Avro version 1.11.0 for parsing an Avro file and decoding it. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this with Dataflow, a dependency issue arises because avro-python3 version 1.8.2 is already available. The issue is with the class TimestampMillisSchema, which is not present in avro-python3; it fails stating "Attribute TimestampMillisSchema not found in avro.schema". I then tried passing a requirements file with avro==1.11.0, but then Dataflow was not able to start, giving the error "Error syncing pod", which seems to be caused by dependency conflicts.
To solve the issue, we set an experiment flag (--experiments=no_use_multiple_sdk_containers), which ran fine.
I want to know a better solution for my issue, and also whether the above flag will affect pipeline performance.
Please try the Dataflow run command with:
--prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2
This would use Cloud Build to build the container with your extra dependencies and then use it within the Dataflow run.
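For example, the full submission could look like this (a sketch; the script name, project, region, and bucket are placeholders, not from the original post):

python my_pipeline.py --runner=DataflowRunner --project=my-project --region=us-central1 --temp_location=gs://my-bucket/temp --requirements_file=requirements.txt --prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2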
I am exploring the new EMR 6.0.0 with Docker support in order to decide whether we want to use it. One of our projects is written in Scala 2.11, but EMR 6.0.0 ships Spark built with Scala 2.12. So I switched to trying 6.0.0-beta, which has Spark 2.4.3 built with Scala 2.11. If it works on 6.0.0-beta, we will upgrade our code to Scala 2.12 and use 6.0.0.
I ran into a few issues when trying to run my Scala Spark job:
When it tried to read Parquet from S3, I got the error: java.lang.RuntimeException: Cannot create temp dirs: [/mnt/s3]
When I tried to make an API call over https, I got the error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.
When it tried to read files from S3, I got the error: Class com.amazon.ws.emr.hadoop.fs.EmrFileSystem not found. I was able to hack around this one by passing the jar path via --jars. Maybe not the best solution.
I am guessing there must be something I need to set either during bootstrap or in the Docker file.
Can someone please help? Thanks!
I figured out the S3 issue. In the beta version, /mnt/s3 is not mounted into the container with read and write permissions.
So I need to add the "docker.allowed.rw-mounts" to the container-executor configuration like below:
docker.allowed.rw-mounts=/etc/passwd,/mnt/s3
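In full, the relevant part of /etc/hadoop/conf/container-executor.cfg would look something like this (a sketch; the [docker] section header and the module.enabled entry are my assumptions based on the standard Hadoop container-executor format, not from the original post):

[docker]
  module.enabled=true
  docker.allowed.rw-mounts=/etc/passwd,/mnt/s3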
I am just a newbie. I have a problem when serving a TensorFlow model in this case:
I. Using http://opennmt.net/OpenNMT-tf/quickstart.html to train the model.
II. Serving the model with the following steps:
Create the Docker image:
docker build --pull -t $USER/tensorflow-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .
Run the Docker container:
docker run --name=tf_container -it $USER/tensorflow-serving-devel
Serve the model:
tensorflow_model_server --port=9000 --model_name=model_name --model_base_path=/model_file &> result_log &
III. The result_log file contents:
2019-10-21 02:46:12.840258: I tensorflow_serving/core/loader_harness.cc:155] Encountered an error for servable version {name: ente version: 1569320347}: Not found: Op type not registered 'GatherTree' in binary running on 1b79e5fb3ac4. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2019-10-21 02:46:12.840280: E tensorflow_serving/core/aspired_versions_manager.cc:359] Servable {name: ente version: 1569320347} cannot be loaded: Not found: Op type not registered 'GatherTree' in binary running on 1b79e5fb3ac4. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2019-10-21 02:46:13.664569: I tensorflow_serving/core/basic_manager.cc:280] Unload all remaining servables in the manager.
Failed to start server. Error: Unknown: 1 servable(s) did not become available: {{{name: ente version: 1569320347} due to error: Not found: Op type not registered 'GatherTree' in binary running on 1b79e5fb3ac4. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.}, }
I have searched Google and tried to update some services, but the problem is still here. Does anyone have any ideas, please?
Thanks so much for any suggestions!
With the transition to TensorFlow 2.0, the GatherTree op that is used in beam search is currently not available in TensorFlow Serving.
If you exported your model with OpenNMT-tf 1.x, it uses the op GatherTree from tf.contrib which was removed in recent versions of TensorFlow Serving. You should use a previous version of TensorFlow Serving such as 1.15.0.
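In that case, pulling the older release is enough, e.g. with the official Docker Hub image (a sketch; your deployment may differ):

docker pull tensorflow/serving:1.15.0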
If you exported your model with OpenNMT-tf 2.x, it uses the op Addons>GatherTree from TensorFlow Addons, which is presently not integrated in TensorFlow Serving. This is a work in progress. There are currently two workarounds:
use opennmt/tensorflow-serving:2.1.0, which is a custom Serving build that includes this op (see the example run command after this list).
disable beam search in OpenNMT-tf by exporting your model with this configuration:
params:
  beam_width: 1
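For the first workaround, the run command could look like this (a sketch; the port, model name, and mount path are placeholders, and I am assuming the image accepts the standard tensorflow_model_server flags):

docker run -p 9000:9000 -v $PWD/model_file:/model_file opennmt/tensorflow-serving:2.1.0 --port=9000 --model_name=model_name --model_base_path=/model_file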
I am new to Digital Image Processing and want to do it using OpenCV in Eclipse. I just want to know how I can start doing it and how I can configure OpenCV and Eclipse using CMake. Please suggest some good tutorials. Also, please help me with adding the OpenCV include files and library in Eclipse.
I am using Eclipse Juno on Windows 7.
Thanks.
Your best bet is to begin with the OpenCV documentation. Their Getting Started tutorial should be your first stop. They have another strategy with a custom FindOpenCV.cmake file, as documented here, but I would suggest sticking with the strategy outlined in Getting Started.
As for Eclipse, CMake generates IDE-related metadata for you, and Kitware does provide an Eclipse CDT generator, documented here. Two important things to keep in mind: first, CMake actually generates the Eclipse metadata, which you then import as an existing project; second, the example they give is intended to work with Unix makefiles:
cmake -G"Eclipse CDT4 - Unix Makefiles" -D CMAKE_BUILD_TYPE=Debug ../certi_src
If you are using Windows, you'll want to choose an appropriate generator instead of "Unix Makefiles".
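For example, with the MinGW toolchain the invocation would look like this (a sketch; MinGW is an assumption, and another generator would work similarly):

cmake -G"Eclipse CDT4 - MinGW Makefiles" -D CMAKE_BUILD_TYPE=Debug ../certi_src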