I am struggling to build a simple Kafka Streams project as a native image. Maybe some build settings are missing or not well documented; I'd appreciate any comments.
This happens only when building Kafka Streams; building other Kafka clients (producer/consumer) works perfectly.
Thank you
Framework: Quarkus 14.x
OS: macOS (Big Sur)
Error: Classes that should be initialized at run time got initialized during image building:
io.netty.buffer.PooledByteBufAllocator the class was requested to be initialized at run time (from 'META-INF/native-image/io.netty/netty-buffer/native-image.properties' in 'file:///project/lib/io.netty.netty-buffer-4.1.86.Final.jar' with 'io.netty.buffer.PooledByteBufAllocator'). To see why io.netty.buffer.PooledByteBufAllocator got initialized use --trace-class-initialization=io.netty.buffer.PooledByteBufAllocator
To see how the classes got initialized, use --trace-class-initialization=io.netty.buffer.PooledByteBufAllocator
com.oracle.svm.core.util.UserError$UserException: Classes that should be initialized at run time got initialized during image building:
io.netty.buffer.PooledByteBufAllocator the class was requested to be initialized at run time (from 'META-INF/native-image/io.netty/netty-buffer/native-image.properties' in 'file:///project/lib/io.netty.netty-buffer-4.1.86.Final.jar' with 'io.netty.buffer.PooledByteBufAllocator'). To see why io.netty.buffer.PooledByteBufAllocator got initialized use --trace-class-initialization=io.netty.buffer.PooledByteBufAllocator
To see how the classes got initialized, use --trace-class-initialization=io.netty.buffer.PooledByteBufAllocator
at org.graalvm.nativeimage.builder/com.oracle.svm.core.util.UserError.abort(UserError.java:73)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.classinitialization.ProvenSafeClassInitializationSupport.checkDelayedInitialization(ProvenSafeClassInitializationSupport.java:273)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.classinitialization.ClassInitializationFeature.duringAnalysis(ClassInitializationFeature.java:164)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGenerator.lambda$runPointsToAnalysis$10(NativeImageGenerator.java:748)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.FeatureHandler.forEachFeature(FeatureHandler.java:85)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGenerator.lambda$runPointsToAnalysis$11(NativeImageGenerator.java:748)
at org.graalvm.nativeimage.pointsto/com.oracle.graal.pointsto.AbstractAnalysisEngine.runAnalysis(AbstractAnalysisEngine.java:162)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGenerator.runPointsToAnalysis(NativeImageGenerator.java:745)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGenerator.doRun(NativeImageGenerator.java:578)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGenerator.run(NativeImageGenerator.java:535)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGeneratorRunner.buildImage(NativeImageGeneratorRunner.java:403)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGeneratorRunner.build(NativeImageGeneratorRunner.java:580)
at org.graalvm.nativeimage.builder/com.oracle.svm.hosted.NativeImageGeneratorRunner.main(NativeImageGeneratorRunner.java:128)
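For reference, the tracing flag suggested by the error output can be forwarded to native-image through Quarkus configuration, e.g. in application.properties (a sketch using the standard quarkus.native.additional-build-args property):

# application.properties - forward the flag the build output suggests
quarkus.native.additional-build-args=--trace-class-initialization=io.netty.buffer.PooledByteBufAllocator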
------------------------------------------------------------------------------------------------------------------------
I am trying to display docs stored in a repository created by a Backstage component on the Backstage /docs page UI, but when I try to access the docs I get the following error:
Building a newer version of this documentation failed. Error: "Failed to generate docs from C:\\Users\\Admin\\AppData\\Local\\Temp\\backstage-enprxk into C:\\Users\\Admin\\AppData\\Local\\Temp\\techdocs-tmp-W6iVab; caused by Error: Docker container returned a non-zero exit code (1)"
Files in my repository:
a docs folder containing only index.md
and mkdocs.yml containing:
nav:
  - Home: index.md
I was getting similar issues working on a local POC of Backstage. The biggest problem was that I needed to install pip, Python, mkdocs, and mkdocs-techdocs-core (i.e. pip3 install mkdocs-techdocs-core). If you have done that and then followed everything in this documentation, it should start working. Hope that helps; I spent a couple of days trying to get past these types of errors.
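Concretely, that means something like the following on a local setup (a sketch; only the mkdocs-techdocs-core install is taken from the docs, the version check is just a sanity check):

# install the TechDocs generator dependencies locally
pip3 install mkdocs mkdocs-techdocs-core
# verify mkdocs is on PATH
mkdocs --version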
For me the above issue was fixed by the change below, as the Docker-based generation was not working inside my container in Kubernetes.
I changed app-config.yaml:
techdocs:
  builder: 'local' # Alternatives - 'external'
  generator:
    runIn: 'local' # changed from 'docker' to 'local'
Issue Summary:
Hi,
I am using Avro version 1.11.0 for parsing an Avro file and decoding it. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this with Dataflow, a dependency conflict arises because avro-python3 version 1.8.2 is already installed. The problem is the class TimestampMillisSchema, which is not present in avro-python3: it fails stating "Attribute TimestampMillisSchema not found in avro.schema". I then tried passing a requirements file with avro==1.11.0, but then Dataflow was not able to start, giving the error "Error syncing pod", which seems to be caused by dependency conflicts.
To solve the issue, we set an experiment flag (--experiments=no_use_multiple_sdk_containers), which ran fine.
I want to know a better solution for my issue, and also whether the above flag will affect pipeline performance.
Please try the Dataflow run command with:
--prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2
This will use Cloud Build to build a container with your extra dependencies and then use it in the Dataflow run.
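Putting it together, a minimal sketch of a full run command (the pipeline file, project, region, bucket, and registry values are placeholder assumptions; --docker_registry_push_url gives Cloud Build somewhere to push the prebuilt image):

python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-gcp-project \
  --region=us-central1 \
  --temp_location=gs://my-bucket/tmp \
  --requirements_file=requirements.txt \
  --prebuild_sdk_container_engine=cloud_build \
  --docker_registry_push_url=gcr.io/my-gcp-project/prebuilt \
  --experiments=use_runner_v2

with requirements.txt containing avro==1.11.0 as in the question.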
I've used the Vitis Docker tool container using only the CPU, and Conda worked fine; however, when I use the GPU version of the image, I get the error below. I tried building the environment twice and each time it failed to import the right libraries.
(vitis-ai-caffe) sam@Itec:~/cf_resnet50$ vai_q_caffe quantize -model float/trainval.prototxt -weights float/trainval.caffemodel
vai_q_caffe: error while loading shared libraries: libprotobuf.so.21: cannot open shared object file: No such file or directory
How might I resolve this error? I would be happy to receive any help.
With version 1.0.0 there is an issue in the Docker image with the protobuf library version. It can be resolved by patching conda_requirements.txt: open conda_requirements.txt in a text editor, add the following two lines, then build the GPU docker:
libprotobuf==3.10.1
protobuf==3.10.1
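After rebuilding, you can sanity-check that the quantizer now resolves the library (a generic diagnostic sketch, not a Vitis-specific procedure):

# inside the GPU container: check which protobuf shared library the binary links against
ldd $(which vai_q_caffe) | grep protobuf
# and confirm the Python protobuf version matches the pinned one
python -c "import google.protobuf; print(google.protobuf.__version__)"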
I started off from this guide:
https://guides.micronaut.io/micronaut-function-graalvm-aws-lambda-gateway/guide/index.html
which works and creates an API deployable in a local SAM instance.
In my real project I need access to JPA and a database, so I add the dependencies in build.gradle:
implementation group: 'com.oracle.ojdbc', name: 'ojdbc8', version: '19.3.0.0'
implementation("io.micronaut.configuration:micronaut-jdbc-hikari")
implementation( "io.micronaut.data:micronaut-data-hibernate-jpa:1.0.0.M5")
annotationProcessor("io.micronaut.data:micronaut-data-processor:1.0.0.M5")
I also add the TypeHint required for CRUD and the reflection info required for ojdbc:
https://github.com/oracle/graal/issues/1748#issuecomment-542353582
https://micronaut-projects.github.io/micronaut-data/latest/guide/#graalJPA
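For reference, the kind of hint those two links describe looks roughly like this (a sketch; the exact classes to register depend on your entities, dialect, and driver, so the list below is an assumption):

import io.micronaut.core.annotation.TypeHint;
import io.micronaut.runtime.Micronaut;

// Registers classes for reflection so the native image keeps them.
@TypeHint({oracle.jdbc.OracleDriver.class})
public class Application {
    public static void main(String[] args) {
        Micronaut.run(Application.class, args);
    }
}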
At the write stage of the GraalVM native-image build I get these errors:
error: Classes that should be initialized at run time got initialized during image building:
org.jboss.logging.Logger was unintentionally initialized at build time ...
org.hibernate.internal.CoreMessageLogger_$logger was unintentionally initialized at build time...
and so on with multiple loggers that Hibernate tries to instantiate.
This is a minimal example; the same error occurs in my real project, where the database connection and CRUD are implemented. To keep the reproduction small, I have not added those here.
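For context, the usual native-image mechanism for resolving this kind of conflict is to declare the offending classes build-time initialized, e.g. via a native-image.properties bundled with the application (a sketch; whether build-time initialization is actually safe for these logger classes is an assumption, and the resource path may vary by project):

# src/main/resources/META-INF/native-image/native-image.properties
Args = --initialize-at-build-time=org.jboss.logging.Logger,org.hibernate.internal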
I am trying to generate a new genesis block in Hyperledger Iroha as suggested in
https://iroha.readthedocs.io/en/latest/getting_started/index.html#starting-iroha-node
and
https://hyperledger.github.io/iroha-api/#create-genesis-block
but unfortunately I can't do it, because I always get the same error message.
$ cat peer.list
localhost:10001
$ ./iroha-cli --genesis_block --peers_address peer.list
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::out_of_range> >'
what(): bimap<>: invalid key
Aborted (core dumped)
I am receiving this error both on my local machine, where I compiled Iroha from scratch using the source code, and within an Iroha container.
I think I have the correct dependencies; otherwise I would not have been able to build Iroha from scratch. Also, note that I can start irohad correctly by using the configuration example from https://iroha.readthedocs.io/en/latest/getting_started/index.html#launching-iroha-daemon.
Any help or suggestion is greatly appreciated.
There was, indeed, a bug affecting the permissions needed to generate a block. It is fixed now and should not occur: https://github.com/hyperledger/iroha/pull/1351
This is a known issue in the development of Hyperledger Iroha; see here: https://github.com/hyperledger/iroha/issues/1362.
It arises when Iroha is compiled with the Ansible playbook.
Try uninstalling Ansible from your system and recompiling Iroha; you shouldn't encounter the same error.
Obviously this is just a workaround, and you won't be able to take advantage of the Ansible capabilities.