Recently I've been getting this error when running Dataflow jobs written in Python. The thing is, it used to work and no code has changed, so I'm thinking it has something to do with the environment.
Error syncing pod d557f64660a131e09d2acb9478fad42f (""), skipping:
failed to "StartContainer" for "python" with CrashLoopBackOff:
"Back-off 20s restarting failed container=python pod=dataflow-)
Can anyone help me with this?
In my case, I was using Apache Beam SDK version 2.9.0 and had the same problem.
I used a setup.py in which the “install_requires” field was filled dynamically by loading the contents of a requirements.txt file. That's fine if you're using DirectRunner, but DataflowRunner is too sensitive to dependencies on local files, so abandoning that technique and hard-coding the dependencies from requirements.txt into “install_requires” solved the issue for me.
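For illustration, a minimal setup.py sketch of that approach; the package name and the pinned versions here are placeholders, not my actual values:

# setup.py -- dependencies hard-coded instead of loaded from requirements.txt,
# since DataflowRunner stages setup.py without the local requirements file.
import setuptools

setuptools.setup(
    name="my-dataflow-pipeline",  # hypothetical package name
    version="0.1.0",
    packages=setuptools.find_packages(),
    install_requires=[
        "apache-beam[gcp]==2.9.0",
        # ...the rest of your pinned dependencies, copied from requirements.txt
    ],
)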
If you're stuck on that, try to investigate your dependencies and minimize them as much as you can. Please refer to the Managing Python Pipeline Dependencies documentation topic for help. Avoid complex or nested code structures and dependencies on the local filesystem.
Neri, thanks for your pointer to the SDK. I noticed that my requirements file was using an older version of the SDK (2.4.0). I've now changed everything to 2.6.0 and it's no longer stuck.
I am trying to display docs stored in a repository created by a Backstage component on the Backstage /docs page UI, but when I try to access the docs I get the following error:
Building a newer version of this documentation failed. Error: "Failed to generate docs from C:\\Users\\Admin\\AppData\\Local\\Temp\\backstage-enprxk into C:\\Users\\Admin\\AppData\\Local\\Temp\\techdocs-tmp-W6iVab; caused by Error: Docker container returned a non-zero exit code (1)"
Files in my repository: the docs folder has only index.md, and mkdocs.yml has

nav:
  - Home: index.md
I was getting similar issues working on a local POC of Backstage. The biggest problem was that I needed to install pip, Python, mkdocs, and mkdocs-techdocs-core (e.g. pip3 install mkdocs-techdocs-core). If you have done that and then followed everything in this documentation, it should start working. Hope that helps. I spent a couple of days trying to get past these types of errors.
For me, the above issue was fixed by the change below, since the Docker-based generator was not working inside my container in Kubernetes.
I changed app-config.yaml:

techdocs:
  builder: 'local' # Alternatives - 'external'
  generator:
    runIn: 'local' # changed from 'docker' to 'local' here
Issue Summary:
Hi,
I am using avro version 1.11.0 for parsing an avro file and decoding it. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this with Dataflow, a dependency conflict arises because avro-python3 version 1.8.2 is already available. The issue is with the class TimestampMillisSchema, which is not present in avro-python3; it fails stating "Attribute TimestampMillisSchema not found in avro.schema". I then tried passing a requirements file with avro==1.11.0, but then Dataflow was not able to start, giving the error "Error syncing pod", which seems to be caused by dependency conflicts.
To work around the issue, we set an experiment flag (--experiments=no_use_multiple_sdk_containers), which ran fine.
I want to know a better solution to my issue, and also whether the above flag will affect pipeline performance.
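For context, direct use of the avro package (the kind of usage that requires avro==1.11.0) looks roughly like this minimal sketch; the file name is a placeholder and the actual custom decoding is omitted:

# Read an Avro container file with the avro package directly instead of
# ReadFromAvro. "records.avro" is a hypothetical file name.
from avro.datafile import DataFileReader
from avro.io import DatumReader

with open("records.avro", "rb") as f:
    reader = DataFileReader(f, DatumReader())
    for record in reader:   # each record is a dict decoded by DatumReader
        ...                 # custom decoding logic would go here
    reader.close()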
Please try running the Dataflow job with these options:
--prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2
This would use Cloud Build to build the container with your extra dependencies and then use that container within the Dataflow run.
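For example, a minimal sketch of passing these options from Python; the project, region, and bucket values are placeholders:

# Sketch only: the prebuild / Runner v2 flags from the answer, passed as
# pipeline options. Project, region, and bucket are hypothetical values.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
    "--requirements_file=requirements.txt",  # pins avro==1.11.0
    "--prebuild_sdk_container_engine=cloud_build",
    "--experiments=use_runner_v2",
])

with beam.Pipeline(options=options) as p:
    ...  # pipeline transforms go here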
I am trying to update my existing Beam version to 2.11.0, but I am getting the below error at run time when running my Dataflow job.
java.lang.IncompatibleClassChangeError: Class org.apache.beam.model.pipeline.v1.RunnerApi$StandardPTransforms$Primitives does not implement the requested interface com.google.protobuf.ProtocolMessageEnum
    at org.apache.beam.runners.core.construction.BeamUrns.getUrn(BeamUrns.java:27)
    at org.apache.beam.runners.core.construction.PTransformTranslation.(PTransformTranslation.java:58)
    at org.apache.beam.runners.core.construction.UnconsumedReads$1.visitValue(UnconsumedReads.java:49)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:666)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
    at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
    at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
    at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
    at org.apache.beam.runners.core.construction.UnconsumedReads.ensureAllReadsConsumed(UnconsumedReads.java:40)
    at org.apache.beam.runners.dataflow.DataflowRunner.replaceTransforms(DataflowRunner.java:868)
    at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:660)
    at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:173)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
This usually means that the version of com.google.protobuf:protobuf-java that Beam was built with does not match the version at runtime. Does your pipeline code also depend on protocol buffers?
UPDATE: I have filed https://issues.apache.org/jira/browse/BEAM-6839 to track this. It is not expected.
I don't have enough rep to leave a comment, but I ran into this issue and later figured out that my problem was that I had different Beam versions in my pom.xml: some had 2.19 and some had 2.20.
I would do a quick search of the versions in your pom or Gradle file to make sure they are all the same.
This may be caused by incompatible dependencies. I successfully upgraded Beam from 2.2.0 to 2.20.0 by upgrading the dependencies at the same time:
beam.version: 2.20.0
guava.version: 29.0-jre
bigquery.version: v2-rev20191211-1.30.9
google-api-client.version: 1.30.9
google-http-client.version: 1.34.0
pubsub.version: v1-rev20200312-1.30.9
I am trying to generate a new genesis block in Hyperledger Iroha as suggested in
https://iroha.readthedocs.io/en/latest/getting_started/index.html#starting-iroha-node
and
https://hyperledger.github.io/iroha-api/#create-genesis-block
but unfortunately I can't do it because I always get the same error message:
$ cat peer.list
localhost:10001
$ ./iroha-cli --genesis_block --peers_address peer.list
terminate called after throwing an instance of 'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<std::out_of_range> >'
what(): bimap<>: invalid key
Aborted (core dumped)
I am receiving this error both on my local machine, where I compiled Iroha from scratch from the source code, and within an Iroha container.
I think I have the correct dependencies, otherwise I would not have been able to build Iroha from scratch. Also, note that I can start irohad correctly by using the configuration example from https://iroha.readthedocs.io/en/latest/getting_started/index.html#launching-iroha-daemon.
Any help or suggestion is greatly appreciated.
There was, indeed, a bug affecting the permissions needed to generate a block. It is fixed now and should no longer occur: https://github.com/hyperledger/iroha/pull/1351
This is a known issue in the development of Hyperledger Iroha; see here: https://github.com/hyperledger/iroha/issues/1362.
It arises when Iroha is compiled with the Ansible Playbook.
Try uninstalling Ansible from your system and recompiling Iroha, and you shouldn't encounter the same error.
Obviously this is just a workaround, and you won't be able to take advantage of the Ansible capabilities.