My motivation: our codebase is scattered across at least 20 git repos. I want to consolidate everything into a single git repo with a single build system. We currently use SBT, but we suspect a consolidated build would take too long, so I am examining the possibility of using Bazel instead.
Most of our codebase uses Scala 2.12, some of our codebase uses Scala 2.11, and the rest needs to build under both Scala 2.11 and Scala 2.12.
I'm trying to use bazelbuild/rules_scala.
With the following call to scala_repositories in my WORKSPACE, I can build using Scala 2.12:
scala_repositories(("2.12.6", {
"scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
"scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
"scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa"
}))
If I have the following call instead, I can build using Scala 2.11:
scala_repositories(("2.11.12", {
"scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
"scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
"scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04"
}))
However, it is not possible to specify in my BUILD files, at the package level, which version(s) of Scala to build with; the version is fixed globally in my WORKSPACE.
To work around this, my plan is to set up configurable attributes, so that I can pass --define scala=2.11 to build with Scala 2.11 and --define scala=2.12 to build with Scala 2.12.
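For example, the intended invocations would look like this (the target label is hypothetical):

bazel build //myservice:myservice --define scala=2.11
bazel build //myservice:myservice --define scala=2.12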
First I tried putting this code in my WORKSPACE:
config_setting(
    name = "scala-2.11",
    define_values = {
        "scala": "2.11"
    }
)

config_setting(
    name = "scala-2.12",
    define_values = {
        "scala": "2.12"
    }
)
scala_repositories(
    select({
        "scala-2.11": "2.11.12",
        "scala-2.12": "2.12.6"
    }),
    select({
        "scala-2.11": {
            "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
            "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
            "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
        },
        "scala-2.12": {
            "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
            "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
            "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa"
        }
    })
)
But this gave me the error config_setting cannot be in the WORKSPACE file.
So then I tried moving the code into a Starlark file.
In tools/build_rules/scala.bzl:
config_setting(
    name = "scala-2.11",
    define_values = {
        "scala": "2.11"
    }
)

config_setting(
    name = "scala-2.12",
    define_values = {
        "scala": "2.12"
    }
)

def scala_version():
    return select({
        "scala-2.11": "2.11.12",
        "scala-2.12": "2.12.6"
    })

def scala_machinery():
    return select({
        "scala-2.11": {
            "scala_compiler": "3e892546b72ab547cb77de4d840bcfd05c853e73390fed7370a8f19acb0735a0",
            "scala_library": "0b3d6fd42958ee98715ba2ec5fe221f4ca1e694d7c981b0ae0cd68e97baf6dce",
            "scala_reflect": "6ba385b450a6311a15c918cf8688b9af9327c6104f0ecbd35933cfcd3095fe04",
        },
        "scala-2.12": {
            "scala_compiler": "3023b07cc02f2b0217b2c04f8e636b396130b3a8544a8dfad498a19c3e57a863",
            "scala_library": "f81d7144f0ce1b8123335b72ba39003c4be2870767aca15dd0888ba3dab65e98",
            "scala_reflect": "ffa70d522fc9f9deec14358aa674e6dd75c9dfa39d4668ef15bb52f002ce99fa"
        }
    })
And back in my WORKSPACE:
load("//tools/build_rules:scala.bzl", "scala_version", "scala_machinery")
scala_repositories(scala_version(), scala_machinery())
But now I get this error:
tools/build_rules/scala.bzl:1:1: name 'config_setting' is not defined
This confuses me, because I thought config_setting() was built in, and I can't find where I am supposed to load it from.
So, my questions:
How do I load config_setting() into my .bzl file?
Or, is there a better way of controlling from the command line which arguments get passed to scala_repositories()?
Or, is this just not possible?
$ bazel version
Build label: 0.17.2-homebrew
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Fri Sep 28 10:42:37 2018 (1538131357)
Build timestamp: 1538131357
Build timestamp as int: 1538131357
If you call a native rule from a .bzl file, you must use the native. prefix, so in this case you would call native.config_setting.
However, this leads to the same error: config_setting is a BUILD rule, not a WORKSPACE rule.
If you want to change the build tool used for a particular target, the supported mechanism is to change the toolchain, and rules_scala appears to support this via scala_toolchain.
And I believe you can select the toolchain based on the configuration.
I'm unfamiliar with what scala_repositories does. I hope it defines the toolchain under a properly versioned name, so that you can reference the toolchain you want, and that it can be invoked twice in the same workspace; otherwise I think there is no solution.
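For illustration, here is the split that does work: config_setting and select() belong in BUILD files and are resolved at analysis time, while WORKSPACE macros such as scala_repositories run earlier, at loading time, before any configuration exists. A minimal sketch, in which the package layout, target names, and dependency labels are hypothetical:

# tools/build_rules/BUILD
config_setting(
    name = "scala-2.11",
    define_values = {"scala": "2.11"},
)

config_setting(
    name = "scala-2.12",
    define_values = {"scala": "2.12"},
)

# myservice/BUILD
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_library")

scala_library(
    name = "myservice",
    srcs = glob(["src/main/scala/**/*.scala"]),
    # resolved per build against the config_settings above
    deps = select({
        "//tools/build_rules:scala-2.11": ["//third_party:somelib_2_11"],
        "//tools/build_rules:scala-2.12": ["//third_party:somelib_2_12"],
    }),
)

This lets --define scala=2.11 switch dependencies per target, but it cannot feed different arguments to scala_repositories, which has already run by the time the configuration is known.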
Related
Can somebody explain to me what the syntax of a Premake script means? A Premake script is a valid Lua script, so what are solution, configurations, and project in the code below? Variables? Keywords?
-- A solution contains projects, and defines the available configurations
solution "MyApplication"
    configurations { "Debug", "Release" }

    -- A project defines one build target
    project "MyApplication"
        kind "ConsoleApp"
        language "C++"
        files { "**.h", "**.cpp" }

        configuration "Debug"
            defines { "DEBUG" }
            flags { "Symbols" }

        configuration "Release"
            defines { "NDEBUG" }
            flags { "Optimize" }
Edit: They are function calls. Then how is this part
configuration "Debug"
defines { "DEBUG" }
flags { "Symbols" }
configuration "Release"
defines { "NDEBUG" }
flags { "Optimize" }
executed? Are the defines and flags calls applied according to the context set by the preceding configuration call?
Functions
If a function takes only one argument, and that argument is a string literal or a table constructor, the parentheses can be omitted. Refer to 3.4.10 - Function Calls in the Lua reference manual.
Additionally, the indentation in your example is arbitrary. You could write:
project("MyApplication")
kind("ConsoleApp")
language("C++")
files({"**.h", "**.cpp"})
And it would be as good as the original.
Regarding the second matter: most likely configuration and the related defines and flags operate on some hidden local state. When you call configuration, it changes this local state to refer to e.g. the "Debug" configuration, and so all following calls also refer to that state. As in:
do
    local state
    function set_state (name)
        state = name
    end
    function print_with_suffix (suffix)
        print(state, suffix)
    end
end
set_state("hello")
print_with_suffix("world") --> hello world
I am trying to implement an Active Choices reactive parameter.
In the reactive parameter I am basically hard-coding the different options based on the value of the parameter it references.
Below is the sample code:
if (Target_Environment.equals("Dev01")) {
    return ["test_DEV"]
} else if (Target_Environment.equals("Dev02")) {
    return ["test3_DEV02", "test2_DEV02"]
} else if (Target_Environment.equals("Dev03")) {
    return ["test3_DEV03"]
} else if (Target_Environment.equals("Sit03")) {
    return ["test3_SIT03"]
} else if (Target_Environment.equals("PPTE")) {
    return ["test3_PPTE"]
} else {
    return ["Please Select Target Environment"]
}
Instead of hard-coding the choices, I want to read a file in the Jenkins workspace and show its contents as the choices. What would be an ideal way to do that?
readFile does not work inside the script that returns the choices.
I am also trying the Extended Choice Parameter plugin, where I pass a property file by filename, but how can I choose the property file based on an if/else condition?
I'm not 100% sure if this is accurate, but I would expect that you can't read from the job workspace in a parameter like that, because a workspace is only created once a build runs (see the error message below).
So if you create the job with no previous builds, there will be no workspace file to read from.
Whenever I have seen that parameter type used in the past, it usually read from a file on the server instead of the workspace.
Error message from Jenkins jobs that have no previous builds:
Error: no workspace
A project won't have any workspace until at least one build is performed.
Run a build to have Jenkins create a workspace.
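As a rough sketch of that approach (the file location below is an assumption, not something the plugin prescribes), the reactive script could read per-environment choices from a file that lives on the Jenkins server itself:

// Sketch: read choices from a file on the Jenkins server, not the workspace.
// The directory layout is hypothetical; point it wherever you keep such lists.
def choicesFile = new File("/var/lib/jenkins/choices/${Target_Environment}.txt")
if (choicesFile.exists()) {
    // one choice per line, blank lines ignored
    return choicesFile.readLines().findAll { it.trim() }
}
return ["Please Select Target Environment"]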
First, I came from a .NET background, so please excuse my lack of Groovy lingo. Back when I was in a .NET shop, we were using TypeScript with C# to build web apps. In our controllers, we would always receive/respond with DTOs (data transfer objects). This got to be quite the headache: every time you created or modified a DTO, you had to update the TypeScript interface (the d.ts file) that corresponded to it.
So we created a little app (a simple exe) that loaded the web app's DLL, reflected over it to find the DTOs (filtering by specific namespaces), parsed out each class name, its properties, and the properties' data types, generated that information into a string, and finally saved it as a d.ts file.
This app was configured to run on every build of the website. That way, when you went to run/debug/build the website, it would update your d.ts files automatically, which made working with TypeScript that much easier.
Long story short, how could I achieve this with a Grails website if I were to write a simple Groovy app to generate the d.ts files that I want?
-- OR --
How do I get the IDE (e.g. IntelliJ) to run a Groovy file (that is part of the app) that does this generation post-build?
I did find this but still need a way to run on compile:
Groovy property iteration
class Foo {
    def feck = "fe"
    def arse = "ar"
    def drink = "dr"
}

class Foo2 {
    def feck = "fe2"
    def arse = "ar2"
    def drink = "dr2"
}

def f = new Foo()
def f2 = new Foo2()

f2.properties.each { prop, val ->
    if (prop in ["metaClass", "class"]) return
    if (f.hasProperty(prop)) f[prop] = val
}
assert f.feck == "fe2"
assert f.arse == "ar2"
assert f.drink == "dr2"
I've been able to extract the Domain Objects and their persistent fields via the following Gant script:
In scripts/Props.groovy:
import static groovy.json.JsonOutput.*

includeTargets << grailsScript("_GrailsBootstrap")

target(props: "Lists persistent properties for each domain class") {
    depends(loadApp)

    def propMap = [:].withDefault { [] }

    grailsApp.domainClasses.each {
        it?.persistentProperties?.each { prop ->
            if (prop.hasProperty('name') && prop.name) {
                propMap[it.clazz.name] << ["${prop.name}": "${prop.getType()?.name}"]
            }
        }
    }

    // do any necessary file I/O here (just printing it now as an example)
    println prettyPrint(toJson(propMap))
}

setDefaultTarget(props)
This can be run via the command line like so:
grails props
Which produces output like the following:
{
    "com.mycompany.User": [
        { "type": "java.lang.String" },
        { "username": "java.lang.String" },
        { "password": "java.lang.String" }
    ],
    "com.mycompany.Person": [
        { "name": "java.lang.String" },
        { "alive": "java.lang.Boolean" }
    ]
}
A couple of drawbacks to this approach are that we don't get any transient properties, and I'm not exactly sure how to hook this into the _Events.groovy eventCompileEnd event.
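As a rough, untested sketch of that hook (assuming the Props target can be included and invoked from an event handler), scripts/_Events.groovy might look like:

// scripts/_Events.groovy (sketch, untested)
// Grails fires eventCompileEnd after each compilation; include the Props
// script and call its target so the dump is regenerated on every compile.
eventCompileEnd = { msg ->
    includeTargets << new File("${basedir}/scripts/Props.groovy")
    props()
}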
Thanks Kevin! Just wanted to mention a few steps I had to take in my case to get this to run, which I thought I would share:
-> Open up the Grails BuildConfig.groovy
-> Change tomcat from build to compile, like this:

plugins {
    compile ":tomcat:[version]"
}
-> Drop your Props.groovy into the scripts folder on the root (noting the path to the grails-app folder for reference)
[application root]/scripts/Props.groovy
[application root]/grails-app
-> Open Terminal
gvm use grails [version]
grails compile
grails Props
Note: I was using Grails 2.3.11 for the project I was running this on.
That gets everything in the script running successfully for me. Now to modify the println portion to generate TypeScript interfaces; a rough sketch of that step follows below.
Will post a github link when it is ready so be sure to check back.
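For what it's worth, here is a minimal, hedged sketch of that generation step. It assumes the propMap produced by Props.groovy above; the Java-to-TypeScript type mapping and the output path are my own assumptions:

// Sketch: turn propMap (class name -> list of [property: javaType] maps)
// into TypeScript interface declarations. The type mapping is a starting point.
def tsType = { String javaType ->
    switch (javaType) {
        case ~/java\.lang\.(String|Character)/:                     return 'string'
        case ~/java\.lang\.(Integer|Long|Double|Float|Short|Byte)/: return 'number'
        case 'java.lang.Boolean':                                   return 'boolean'
        default:                                                    return 'any'
    }
}

def sb = new StringBuilder()
propMap.each { className, props ->
    def simpleName = className.tokenize('.').last()
    sb << "interface ${simpleName} {\n"
    props.each { p ->
        p.each { name, type -> sb << "    ${name}: ${tsType(type)};\n" }
    }
    sb << "}\n\n"
}

// hypothetical output location; point this wherever your front end expects it
new File('web-app/js/dtos.d.ts').text = sb.toString()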
Jenkins has 600+ plugins, and in a real system we tend to install a lot of them.
Sometimes we want to remove some plugins to keep the system clean, or to replace one with a more mature plugin (with a different name).
This requires making sure that no job uses those plugins, or else notifying their owners.
Is there any way, in the configuration or elsewhere in the Jenkins system, to know whether a plugin is used by any job?
UPDATE 2013
Based on the answer below, I maintain a simple "plugin:keyword" mapping, like
plugin_keys = {
    "git": 'scm class="hudson.plugins.git.GitSCM"',
    "copyartifact": "hudson.plugins.copyartifact.CopyArtifact",
    # and more
}
Then I search for each plugin's keyword in the jobs' config.xml files; all the information (plugins, jobs, config) can be fetched via the Jenkins remote API.
It works for me.
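A minimal sketch of that idea (not the full jenkins-stats.py linked below; the server URL is a placeholder and authentication is omitted):

# Sketch: list jobs via the Jenkins remote API, fetch each job's config.xml,
# and search it for known plugin keywords.
import json
import urllib.request

JENKINS = "http://jenkins.example.com"  # placeholder

plugin_keys = {
    "git": 'scm class="hudson.plugins.git.GitSCM"',
    "copyartifact": "hudson.plugins.copyartifact.CopyArtifact",
}

jobs = json.load(urllib.request.urlopen(JENKINS + "/api/json?tree=jobs[name,url]"))["jobs"]
for job in jobs:
    config = urllib.request.urlopen(job["url"] + "config.xml").read().decode("utf-8")
    used = [name for name, keyword in plugin_keys.items() if keyword in config]
    if used:
        print(job["name"], "->", ", ".join(used))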
UPDATE 2014.04.26
In later Jenkins versions, it seems config.xml was changed to include the plugin name directly, like:
<com.coravy.hudson.plugins.github.GithubProjectProperty plugin="github#1.4">
<hudson.plugins.throttleconcurrents.ThrottleJobProperty plugin="throttle-concurrents#1.7.2">
<hudson.plugins.disk__usage.DiskUsageProperty plugin="disk-usage#0.18"/>
<scm class="hudson.plugins.git.GitSCM" plugin="git#1.4.1-SNAPSHOT">
Therefore I now just check for plugin="<plugin name>" in config.xml, and it works again.
UPDATE 2014.05.05
See complete script in gist jenkins-stats.py
UPDATE 2018.6.7
There is the Plugin Usage Plugin that supports this (no REST API yet).
Here are 2 ways to find that information.
The easiest is probably to grep the job config files:
E.g. when you know the class name (or package name) of your plugin (e.g. org.jenkinsci.plugins.unity3d.Unity3dBuilder):
find $JENKINS_HOME/jobs/ -maxdepth 2 -name config.xml | xargs grep Unity3dBuilder
Another is to use something like the scriptler plugin, but then you need more information about where the plugin is used in the build.
import hudson.model.*
import hudson.maven.*
import hudson.tasks.*

for (item in Hudson.instance.items) {
    //println("JOB : " + item.name);
    for (builder in item.builders) {
        if (builder instanceof org.jenkinsci.plugins.unity3d.Unity3dBuilder) {
            println(">>" + item.name.padRight(50, " ") + "\t UNITY3D BUILDER with " + builder.unity3dName);
        }
    }
}
Update: here's a small Scriptler script that might help you find the relevant class names. It can certainly be improved:
import jenkins.model.*;
import hudson.ExtensionFinder;

List<ExtensionFinder> finders = Jenkins.instance.getExtensionList(ExtensionFinder.class);

for (finder in finders) {
    println(">>> " + finder);
    if (finder instanceof hudson.ExtensionFinder.GuiceFinder) {
        println(finder.annotations.size());
        for (key in finder.annotations.keySet()) {
            println(key);
        }
    } else if (finder instanceof ruby.RubyExtensionFinder) {
        println(finder.parsedPlugins.size());
        for (plugin in finder.parsedPlugins) {
            for (extension in plugin.extensions) {
                println("ruby wrapper for " + extension.instance.clazz);
            }
        }
    } else if (finder instanceof hudson.cli.declarative.CLIRegisterer) {
        println(finder.discover(Jenkins.instance));
        for (extension in finder.discover(Jenkins.instance)) {
            println("CLI wrapper for " + extension.instance.class);
            // not sure what to do with those
        }
    } else {
        println("UNKNOWN FINDER TYPE");
    }
}
(inlined Scriptler script from my original listJenkinsExtensions submission to http://scriptlerweb.appspot.com, which seems to be down)
Don't forget to back up!
As of early 2018 there is a "Plugin Usage Plugin" that gives you a nice list of the plugins and where they are used. We've noticed that, depending on the system, it sometimes doesn't seem to catch all the plugins, but it gives a really lovely list of the plugins and all jobs related to a specific plugin in an expandable list.
https://plugins.jenkins.io/plugin-usage-plugin
Plugins used in pipeline scripts would normally not be listed as used by jobs, because they are invoked dynamically from Jenkinsfiles.
I can't comment because I don't have enough reputation, but if I could, I would point out that the broken link provided by coffeebreaks for the small scriplet script mentioned in the accepted answer can be found on the Internet Archive, at this link:
https://web.archive.org/web/20131103111754/http://scriptlerweb.appspot.com/script/show/97001
In case that link breaks: the script content is identical to the Scriptler script already inlined in the answer above.
I wrote this parameterized Jenkins job that searches the config files. You only need to know what tags the plugin generates in the config file, and pass that tag's name as the needle parameter:
cd "$JENKINS_HOME/jobs"
echo "searching for $needle"
# the extra /dev/null argument makes grep print the matching file's name
find . -name config.xml -type f -exec grep "$needle" /dev/null {} \;
My requirement is to invoke some processing from a Jenkins build server, to determine whether the domain model has changed since the last build. I've come to the conclusion that the way forward is to write a script that will invoke a sequence of existing scripts from the db-migration plugin. Then I can invoke it in the step that calls test-app and war.
I've looked in the Grails docs and at some of the db-migration scripts, and I find I'm stuck: I have no idea where to start trying things. I'd be really grateful if someone could point me at any suitable sources. BTW, I'm a bit rusty in Grails; I started teaching myself two years ago via a proof-of-concept project that lasted 6 months, then it was back to Eclipse rich-client work. That might be part of my problem, though I never got involved in scripts.
One thing I need in the Jenkins evt is to get hold of the current SVN revision number being used for the build. Suggestions welcome.
Regards, John
Create a new script by running grails create-script scriptname. The database-migration plugin's scripts are designed to be easily reused. There is a lot of shared code in _DatabaseMigrationCommon.groovy, and each script defines one target with a unique name, so you can import either the shared script or any standalone script (or multiple scripts) and call the targets as if they were methods.
By default the script generated by create-script "imports" the _GrailsInit script via includeTargets << grailsScript("_GrailsInit") and you can do the same, taking advantage of the magic variables that point at installed plugins' directories:
includeTargets << new File("$databaseMigrationPluginDir/scripts/DbmGenerateChangelog.groovy")
If you do this you can remove the include of _GrailsInit since it's already included, but if you don't that's fine since Grails only includes files once.
Then you can define your target and call any of the plugin's targets. The targets cannot accept parameters, but you can add data to the argsMap (this is a map Grails creates from the parsed commandline arguments) to simulate user-specified args. Note that any args passed to your script will be seen by the database-migration plugin's scripts since they use the same argsMap.
Here's an example script that just does the same thing as dbm-generate-changelog but adds a before and after message:
includeTargets << new File("$databaseMigrationPluginDir/scripts/DbmGenerateChangelog.groovy")

target(foo: "Just calls dbmGenerateChangelog") {
    println 'before'
    dbmGenerateChangelog()
    println 'after'
}

setDefaultTarget foo
Note that I renamed the target from main to foo so it's unique, in case you want to call this from another script.
As an example of working with args, here's a modified version that specifies a default changelog name if none is provided:
println 'before'
if (!argsMap.params) {
    argsMap.params = ['foo2.groovy']
}
dbmGenerateChangelog()
println 'after'
Edit: Here's a fuller example that captures the output of dbm-gorm-diff to a string:
includeTargets << new File("$databaseMigrationPluginDir/scripts/_DatabaseMigrationCommon.groovy")

target(foo: "foo") {
    depends dbmInit

    def configuredSchema = config.grails.plugin.databasemigration.schema
    String argSchema = argsMap.schema
    String effectiveSchema = argSchema ?: configuredSchema ?: defaultSchema
    def realDatabase
    boolean add = false // booleanArg('add')
    String filename = null // argsList[0]

    try {
        printMessage "Starting $hyphenatedScriptName"

        ByteArrayOutputStream baos = new ByteArrayOutputStream()
        def baosOut = new PrintStream(baos)

        ScriptUtils.executeAndWrite filename, add, dsName, { PrintStream out ->
            MigrationUtils.executeInSession(dsName) {
                realDatabase = MigrationUtils.getDatabase(effectiveSchema, dsName)
                def gormDatabase = ScriptUtils.createGormDatabase(dataSourceSuffix, config, appCtx, realDatabase, effectiveSchema)
                ScriptUtils.createAndPrintFixedDiff(gormDatabase, realDatabase, realDatabase, appCtx, diffTypes, baosOut)
            }
        }

        String xml = new String(baos.toString('UTF-8'))

        def ChangelogXml2Groovy = classLoader.loadClass('grails.plugin.databasemigration.ChangelogXml2Groovy')
        String groovy = ChangelogXml2Groovy.convert(xml)

        // do something with the groovy or xml here

        printMessage "Finished $hyphenatedScriptName"
    }
    catch (e) {
        ScriptUtils.printStackTrace e
        exit 1
    }
    finally {
        ScriptUtils.closeConnection realDatabase
    }
}

setDefaultTarget foo