MetricsSnapshotReporterFactory warning in samza job - apache-samza

I get the following warning in samza job:
[main] WARN o.a.s.m.r.MetricsSnapshotReporterFactory.warn(66) - Unable
to find implementation version in jar's meta info. Defaulting to
0.0.1.
How can I fix it? What am I missing?

This seems like a benign warning. It means that your StreamTask implementation (the class pointed to by "task.class") does not declare a version in its jar.
You can ignore it.
If you would like to include versioning information within the jar, you can add a Manifest.txt to your jar as per guidelines here. This should get rid of the warning.
I can think of a scenario where you want to distinguish metrics published from one version of your Samza job to another. In such cases, adding versioning information to your Manifest file helps. HTH!
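If you build the jar with the plain jar tool, a minimal sketch looks like this (the title and version values are illustrative, and this assumes Samza reads the standard Implementation-Version attribute from META-INF/MANIFEST.MF; Maven and Gradle have equivalent manifest settings):

Manifest.txt:

    Implementation-Title: my-samza-job
    Implementation-Version: 1.0.0

Then include it when packaging:

    jar cfm my-samza-job.jar Manifest.txt -C build/classes .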

Related

Jenkins [Groovy] Annotation Grab cannot be used in the sandbox

Following the remarkably terse docs here:
https://www.jenkins.io/doc/book/pipeline/shared-libraries/#using-third-party-libraries
I am trying to use @Grab to access a third party library from Jenkins.
@Grab(group='org.jsoup', module='jsoup', version='1.13.1')
is in a Groovy class in a library. The library is specified in Jenkins/configure
under Global Pipeline Libraries
Sharable libraries available to any Pipeline jobs running on this
system. These libraries will be trusted, meaning they run without
“sandbox” restrictions and may use @Grab.
I am not using the default branch of the library, but I am not sure whether that is relevant to my problem.
When running the pipeline, I get the following error:
java.lang.SecurityException: Annotation Grab cannot be used in the
sandbox. at
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.RejectASTTransformsCustomizer
Any hints would be deeply appreciated
This article in Russian describes the details: https://habr.com/ru/post/338032/
Summary: you need to create a separate library in SCM and use @Grab there.
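As a rough sketch of that approach (the step name jsoupFetch is made up; the coordinates are the ones from the question), a step in the trusted library could look like this, since code in a Global Pipeline Library configured as trusted runs outside the sandbox:

    // vars/jsoupFetch.groovy in the trusted shared library
    @Grab(group='org.jsoup', module='jsoup', version='1.13.1')
    import org.jsoup.Jsoup

    // fetch a page and return its <title>, using the Grape-resolved jsoup
    def call(String url) {
        return Jsoup.connect(url).get().title()
    }

A Jenkinsfile that has the library loaded can then call echo jsoupFetch('https://example.com').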

I have not changed requirements in my Divio project, so why does the build fail with a dependency conflict?

The last time I deployed the project, the build worked perfectly.
In the meantime I have changed nothing that would affect the pip requirements, yet I get an error when building:
Could not find a version that matches Django<1.10,<1.10.999,<1.11,
<1.12,<1.9.999,<2,<2.0,==1.9.13,>1.3,>=1.11,>=1.3,>=1.4,>=1.4.10,
>=1.4.2,>=1.5,>=1.6,>=1.7,>=1.8
I get the same error when building the project locally with docker-compose build web.
What could be the problem?
The problem here is that although you may not have modified any requirements, the dependencies of a project can sometimes change on their own.
You may even have pinned all of your own requirements (which is generally a good idea) but that still won't help if one of them itself has an unpinned dependency.
Anywhere an unpinned dependency exists, you can run into this.
Here's an example. Suppose your requirements.in contains super-django==1.2.4. That's better than simply specifying super-django, as you won't be taken by surprise if a new, incompatible version of the Super Django package is released.
But suppose that in turn Super Django 1.2.4, in its requirements, lists:
Django==1.11
django-super-admin
If a new version of Django Super Admin is released that requires, say, Django>=2.0, your next build will fail because of the mutually exclusive requirements.
To track down the culprit when you run into such a failure, you need to examine the build logs. You'll see there something like:
Could not find a version that matches Django==1.11,>=2.0 [etc].
So now you know to look back through the logs to find what wants to install Django>=2.0, and you'll find:
adding Django>=2.0
from django-super-admin==1.7.0
So now you know that it's django-super-admin==1.7.0 that is the key. Since you can't trust super-django to pin the correct version of django-super-admin, you'll have to do it yourself, by adding django-super-admin<1.7.0 to the requirements.in of your project.
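Using the illustrative package names from this example, the project's requirements.in would then contain something like:

    super-django==1.2.4
    # pin the transitive dependency ourselves: super-django 1.2.4 wants Django==1.11,
    # and django-super-admin 1.7.0 is the release that started requiring Django>=2.0
    django-super-admin<1.7.0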
There's more information about this at How to identify and resolve a dependency conflict.
You can also Pin all of your project’s Python dependencies to ensure this never happens again with any other dependency, though you sacrifice some flexibility for the guarantee.
Note: I am a member of the Divio team. This question is one that we see quite regularly via our support channels.

How do I set up a Jenkins Pipeline global library using perforce as the SCM?

I've spent many hours on this without any success at all. According to this I should be able to use any available SCM, but I don't know how to map the paths, where (if anywhere) to insert the ${library.RegLib.version}, or what workspace name to use.
I have a library set up as per the abovementioned docs:
<root>/src/org/somelib/MyLib.groovy
which contains:
package org.registration;
def doTest() {
    echo "test running..."
}
I've tried many different things but nothing works. I've also tried restarting Jenkins, as mentioned here. No change.
My build reports:
Loading library MyLib##1
java.lang.ArrayIndexOutOfBoundsException: 1
at org.jenkinsci.plugins.p4.tasks.AbstractTask.setEnvironment(AbstractTask.java:106)
at org.jenkinsci.plugins.p4.PerforceScm.checkout(PerforceScm.java:391)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:109)
at org.jenkinsci.plugins.workflow.libs.SCMSourceRetriever.doRetrieve(SCMSourceRetriever.java:107)
at org.jenkinsci.plugins.workflow.libs.SCMRetriever.retrieve(SCMRetriever.java:63)
at org.jenkinsci.plugins.workflow.libs.LibraryAdder.retrieve(LibraryAdder.java:150)
at org.jenkinsci.plugins.workflow.libs.LibraryAdder.add(LibraryAdder.java:131)
at org.jenkinsci.plugins.workflow.libs.LibraryDecorator$1.call(LibraryDecorator.java:99)
at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1053)
at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:591)
at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:569)
at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:546)
at groovy.lang.GroovyClassLoader.doParseClass(GroovyClassLoader.java:298)
at groovy.lang.GroovyClassLoader.parseClass(GroovyClassLoader.java:268)
at groovy.lang.GroovyShell.parseClass(GroovyShell.java:688)
at groovy.lang.GroovyShell.parse(GroovyShell.java:700)
at org.jenkinsci.plugins.workflow.cps.CpsGroovyShell.reparse(CpsGroovyShell.java:67)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.parseScript(CpsFlowExecution.java:429)
at org.jenkinsci.plugins.workflow.cps.CpsFlowExecution.start(CpsFlowExecution.java:392)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:221)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:404)
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: Loading libraries failed
"Default version" is set to 1 because there's only been one commit. I've also tried #1. I don't know whether to map specific files or the top-level directory. If I remove the default version the build fails and complains that I haven't set a version. It's supposed to be optional but clearly isn't.
I've also tried using the vars directory
<root>/vars/doTest.groovy
which contains:
def call(msg) {
    echo msg
}
but I presume that also requires the library to be loaded. The docs are unclear about that.
So...
Will this work with perforce?
How do I map the paths to make it work?
How do I make the code in vars accessible? Is that loaded as part of the overall library?
Is there an error somewhere in my code?
Many thanks.
Install the Pipeline Shared Libraries Plugin.
The configuration is in Manage Jenkins -> Global Pipeline Libraries.
Set the retrieval method to legacy mode and add your repository.
Tick Load implicitly to load the scripts in every build.
Put the groovy files in vars/yourGroovy.groovy and call it from Jenkinsfile:
yourGroovy()
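As a minimal sketch (yourGroovy is just the placeholder name used above):

    // vars/yourGroovy.groovy in the shared library
    def call(String msg = 'hello from the shared library') {
        // any pipeline step is available here, e.g. echo
        echo msg
    }

With Load implicitly ticked, a Jenkinsfile can call yourGroovy('building...') directly, without an explicit @Library annotation.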
This seems to be an open issue with the p4 plugin, related to it being unable to deal with Perforce checkouts at locations other than the workspace root:
https://issues.jenkins-ci.org/browse/JENKINS-40055
https://issues.jenkins-ci.org/browse/JENKINS-36243
Edit: You may be able to get this to work using older plugin versions, according to the reporter of the first issue:
The crash is not present in version 2.4 of workflow-cps-global-lib, it
started to happen in version 2.5 only.
This is really late, but I was wondering if you found a solution.
In Amityo's answer you commented that your Perforce source path is //<prod>/trunk/src/apps/jenkinslib#${library.RegLib.version}/..., where ${library.RegLib.version} = 1 if no other version is explicitly specified in pipeline.
I think Jenkins will literally look for a folder named jenkinslib#1, which it won't find since your folder is just named jenkinslib.
I don't know how you would set up your structure to support different versions, but maybe having just //<prod>/trunk/src/apps/jenkinslib/... as your source path in the map might work, even though the config page tells you to add library.RegLib.version.
I would've commented all this on Amityo's post instead but I don't have enough reputation to do so.
In reply to @HS10: I did, and I've been meaning to update this for the benefit of others for ages, but everything else in life seems to become a higher priority. Since you've asked, here's what I did.
In Jenkins/Configuration, under Global Pipeline Libraries I set the following:
Specifically, provide a Name and set Default version to head. Set the Retrieval method to Legacy SCM; Perforce doesn't have Modern SCM support yet. Under Source Code Management select Perforce Software. Note that this is the p4 plugin, not the old one, which is listed as Perforce. I suspect that it's important to use the version written by Perforce themselves. Select a Credential that you have configured and provide a matching workspace name and mapping. I may have had that wrong earlier, I don't know. Other settings should be at your discretion. The library directory structure is as per the docs. I did think for a while that the workspace name had to be _global_lib but recent experiments appear to have disproved that.
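For illustration only (the workspace name jenkins-global-lib is made up, and the depot path is the one from the original question), the workspace mapping I mean is along these lines:

    //<prod>/trunk/src/apps/jenkinslib/... //jenkins-global-lib/...

The important part is that the mapping puts the library's src/ and vars/ directories at the root of the checkout, because that is where Jenkins looks for them.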
In your pipeline, import the library like this:
@Library('plib') _
// do something
You should now have a working library.
I think I had this wrong earlier, as well. Note that the underscore is important. See the Global Library docs for more details. Getting this working caused me a lot of pain, so I hope this saves someone from having a similar experience.

Deleting files when creating a Grails plugin

One of the tips Burt Beckwith provides when creating plugins is to delete files you don't use.
So if you don't use UrlMappings.groovy - delete it.
I was wondering about directories. If you have no controllers, should you delete the controller directory?
Thanks
The short answer is "Yes, you should." Looking at some of the other plugins you can see this is pretty standard practice. For example the Redis plugin on GitHub.
You can delete directories, but they'll get re-created after running various scripts, in particular package-plugin. I tend to remove them as source folders in GGTS so they're not distracting - I like to only see directories that are being used. I used to use an Ant script to do various build tasks for plugins, but at this point all I use them for is the post-package-cleanup task that deletes unused folders, e.g. https://github.com/grails-plugins/grails-spring-security-core/blob/master/build.xml.
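A rough sketch of such a cleanup target, in the spirit of the linked build.xml (the directory names are just examples of folders a given plugin might not use):

    <target name="post-package-cleanup" description="remove unused plugin directories">
        <delete dir="grails-app/controllers" failonerror="false"/>
        <delete dir="grails-app/domain" failonerror="false"/>
        <delete dir="web-app" failonerror="false"/>
    </target>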
It turns out that only three plugin files are required - all of the rest can be deleted if they're not used. These are the plugin descriptor, application.properties (although this is only used to specify the Grails version), and BuildConfig.groovy. BuildConfig.groovy might be optional too if you don't need to publish the plugin to a repo and have no dependencies. At a minimum it's needed to specify the release plugin, but if you don't need that then you can probably get by with just two files :)

What's the purpose of the Maven 3 ciManagement configuration?

I've read the section in the Maven documentation, but I haven't really seen any evidence of plugins using the <ciManagement> tag in a project POM.
My question is:
Are there CI systems that will use this information in some intelligent way?
Other than project self-documentation, is there any other benefit to specifying and maintaining this information in my project POM?
Does Maven itself (or any common plugins) use this information?
For a documentation sample, see http://maven.apache.org/plugins/maven-compiler-plugin/integration.html
Historically this element was used by Continuum; for more details see http://maven.apache.org/ref/3.0.4/maven-model/maven.html#class_ciManagement. It was possible to define notifiers (mail, IRC) in the POM and where to send notifications.
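For reference, a sketch of the element in a POM (the system, URL and address values are illustrative):

    <ciManagement>
      <system>continuum</system>
      <url>https://ci.example.com/continuum</url>
      <notifiers>
        <notifier>
          <type>mail</type>
          <sendOnFailure>true</sendOnFailure>
          <configuration>
            <address>dev@example.com</address>
          </configuration>
        </notifier>
      </notifiers>
    </ciManagement>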
SonarQube uses the <ciManagement> element of the pom to define the value of its own sonar.links.ci configuration property. This property is mentioned in the SonarQube documentation, but there's no explanation of what it is used for. It turns out it's one more thing to keep up-to-date. I would try to remove it from the pom to see if any visible issue comes up.
You can look here to find data about what types of <ciManagement/> are valid.
There's probably a better way than pointing to that hash, but the way the maven-plugins repo works is odd.
