Unable to use Kubernetes and Jackson 2 API plugins simultaneously in Jenkins - jenkins

I am trying to deploy my web application using Kubernetes from Jenkins, but the manifest.yml is not being read properly, as I constantly get the following error logs:
ERROR: Can't construct a java object for tag:yaml.org,2002:io.kubernetes.client.openapi.models.V1Deployment; exception=Class not found: io.kubernetes.client.openapi.models.V1Deployment
in 'reader', line 1, column 1:
apiVersion: v1
^
From what I've read in other posts, this seems to be an issue with the latest versions of the Jackson 2 API plugin. Because of this, I've tried to downgrade that plugin, but this does not seem to be possible, as I have several plugins installed that require the latest Jackson 2 API plugin version, which is why I get an error when trying to downgrade.
Therefore, I'd like to know whether there is any alternative way to fix this problem when parsing the manifest.yml, rather than downgrading every dependent plugin one by one.
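The question does not say which deploy step is in use; as an assumption, if it is the kubernetesDeploy step from the Kubernetes Continuous Deploy plugin (which uses the io.kubernetes.client models named in the error), the failing stage would look roughly like this sketch, where the credentials ID and manifest path are placeholders:
// Hypothetical Jenkinsfile fragment; step choice, credential ID and manifest path are assumptions.
stage('Deploy to Kubernetes') {
    steps {
        kubernetesDeploy(
            kubeconfigId: 'my-kubeconfig',   // placeholder kubeconfig credentials ID
            configs: 'manifest.yml'          // the manifest that fails to parse
        )
    }
}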

Related

Use of experiments=no_use_multiple_sdk_containers in Google Cloud Dataflow

Issue Summary:
Hi,
I am using avro version 1.11.0 for parsing an Avro file and decoding it. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this with Dataflow, a dependency issue arises because avro-python3 version 1.82 is already available. The issue is with the class TimestampMillisSchema, which is not present in avro-python3; it fails stating "Attribute TimestampMillisSchema not found in avro.schema". I then tried passing a requirements file with avro==1.11.0, but then Dataflow was not able to start, giving the error "Error syncing pod", which seems to be caused by dependency conflicts.
To solve the issue, we set an experiment flag (--experiments=no_use_multiple_sdk_containers), which ran fine.
I want to know whether there is a better solution to my issue, and also whether the above flag will affect pipeline performance.
Please try the Dataflow run command with:
--prebuild_sdk_container_engine=cloud_build --experiments=use_runner_v2
This would use Cloud Build to build the container with your extra dependencies and then use it within the Dataflow run.
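As a rough illustration of how those flags might be passed when launching a Python pipeline (the script name, project, region, and bucket below are placeholders, not taken from the question):
python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-project \
  --region=us-central1 \
  --temp_location=gs://my-bucket/temp \
  --requirements_file=requirements.txt \
  --prebuild_sdk_container_engine=cloud_build \
  --experiments=use_runner_v2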

How to update a Python library that's already present in GCP Dataflow

I am using avro version 1.11.0 for parsing an Avro file and decoding it. We have a custom requirement, so I am not able to use ReadFromAvro. When trying this with Dataflow, a dependency issue arises because avro-python3 version 1.82 is already available. The issue is with the class TimestampMillisSchema, which is not present in avro-python3; it fails stating "Attribute TimestampMillisSchema not found in avro.schema".
I then tried passing a requirements file with avro==1.11.0, but then Dataflow was not able to start, giving the error "Error syncing pod", which seems to be caused by dependency conflicts.
Any idea/help on how this should be resolved?
Thanks

Deploying Cloud Run via YAML gives Revision named 'yourservicename-00001-soj' with different configuration already exists

When using the following command to deploy a new Cloud Run revision,
gcloud run services replace service.yaml
the deployment fails with this error:
ERROR: (gcloud.run.services.replace) ALREADY_EXISTS: Revision named 'yourservicename-00001-soj' with different configuration already exists.
This occurs when you have followed Google's documentation, which instructs you to pull the current service's YAML description down into a file, make edits, and then redeploy it.
This is because the documentation is wrong, or Google's service has regressed since it was authored.
Edit the YAML, remove spec.template.metadata.name, and try again.
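For illustration, the exported service YAML typically contains a block like the following (the names, image and annotation shown here are placeholders); it is the revision name under spec.template.metadata that needs to be deleted before redeploying:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: yourservicename
spec:
  template:
    metadata:
      # name: yourservicename-00001-soj   <- remove this revision name line
      annotations:
        autoscaling.knative.dev/maxScale: '100'
    spec:
      containers:
      - image: gcr.io/my-project/my-image   # placeholder image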

EMR 6 Beta with Docker Support has S3 Access Issue

I am exploring the new EMR 6.0.0 with Docker support in order to decide whether we want to use it. One of our projects is written in Scala 2.11, but EMR 6.0.0 comes with Spark built against Scala 2.12. So I switched to trying 6.0.0-beta, which has Spark 2.4.3 built against Scala 2.11. If it works on 6.0.0-beta, then we will upgrade our code to Scala 2.12 and use 6.0.0.
A few issues I am having when I try to run my Scala Spark job:
When it tried to read Parquet from S3, I got the error: java.lang.RuntimeException: Cannot create temp dirs: [/mnt/s3]
When I tried to make an API call over HTTPS, I got the error: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.
When it tried to read files from S3, I got the error: Class com.amazon.ws.emr.hadoop.fs.EmrFileSystem not found. I was able to hack around this one by passing the path via --jars. Maybe not the best solution.
I am guessing there must be something I need to set either during bootstrap or in the Dockerfile.
Can someone please help? Thanks!
I figured out the S3 issue. In the beta version, /mnt/s3 is not mounted with read and write permission for the container.
So I needed to add "docker.allowed.rw-mounts" to the container-executor configuration, like below:
docker.allowed.rw-mounts=/etc/passwd,/mnt/s3
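For context, a hedged sketch of where that setting lives: in Hadoop 3 the Docker options sit under the [docker] section of container-executor.cfg, and the file path below is only an assumption about where EMR places it on the cluster nodes:
# /etc/hadoop/conf/container-executor.cfg (assumed location on EMR nodes)
[docker]
  module.enabled=true
  docker.allowed.rw-mounts=/etc/passwd,/mnt/s3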

Grails JMS Plugin - Unable to resolve classes

I'm pretty new to Grails, so it's possible that I've missed something obvious, but I am trying to utilise the JMS plugin. I've included the following within the plugins section of my BuildConfig.groovy:
compile ":jms:1.2"
However when I compile the app I get lots of "unable to resolve class" exceptions for imports within the jms plugin (40 in total, javax.jms.* and org.springframework.jms.* mostly).
e.g.
| Error Compilation error: startup failed:
C:\dev\prj\grails\tApp\target\work\plugins\jms-1.2\grails-app\utils\DefaultJmsBeans.groovy: 16: unable to resolve class org.springframework.jms.listener.DefaultMessageListenerContainer
# line 16, column 1.
import org.springframework.jms.listener.DefaultMessageListenerContainer
^
C:\dev\prj\grails\tApp\target\work\plugins\jms-1.2\grails-app\services\grails\plugin\jms\JmsService.groovy: 22: unable to resolve class javax.jms.Message
# line 22, column 1.
import javax.jms.Message
Is anyone able to point me in the right direction? The issue can be reproduced just by adding the plugin, as mentioned above, to the BuildConfig.groovy of a new Grails project.
Grails version 2.3.3
Many thanks
Tom
While doing a Grails 2.2 -> 2.3.4 upgrade I ran into a similar issue and was able to get things working by manually adding spring-jms to my dependencies in BuildConfig.groovy:
compile 'org.springframework:spring-jms:3.2.5.RELEASE'
It's odd that this would stop working now, of course, since the JMS plugin hasn't changed in a very long time. My guess is that it depends on the spring-jms library but didn't have it listed as a dependency, instead relying on Grails to bring it in. According to the 2.3.x upgrade guide, there have been changes to what Grails brings in now, so perhaps spring-jms stopped getting a free ride.
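For context, a minimal sketch of where that compile line would sit in BuildConfig.groovy (the surrounding entries are placeholders, and the spring-jms version should match the Spring version your Grails release uses):
grails.project.dependency.resolution = {
    // ... repositories and other settings ...
    dependencies {
        // assumed placement: spring-jms alongside your other compile-scope dependencies
        compile 'org.springframework:spring-jms:3.2.5.RELEASE'
    }
    plugins {
        compile ":jms:1.2"
    }
}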
The Grails MX website has a write-up that might help; it's built using 2.3.4:
http://grails.org.mx/2013/12/20/quickstart-jms-en-grails/
It was pretty helpful to me in getting a sample application up and running. It's in Spanish though, so you may need to have Google translate it for you.
Have you tried executing the command grails refresh-dependencies before running grails run-app?
I wrote a blog post on installing a Grails plugin if you need more details.
