How to require two or more labels for a jenkins job? - jenkins

Somehow the option named "Restrict where this project can be run" in Jenkins seems to allow only a single value inside the "Label Expression" field.
I tried lots of combinations to add more than one label, but couldn't find any way to put two.
I should mention that I need AND between these labels.
The irony is that this option even has an Info button that loads some documentation, which fails to say what an expression is supposed to look like. Another small nail in the Jenkins UX coffin. Google didn't help on this one either.

There is a Jenkins bug causing the help text not to be shown. It has been present since 1.585 and was fixed in 1.621 (or 1.609.3, respectively).
Here is the help text:
If you want to always run this project on a specific node/slave, just specify its name. This works well when you have a small number of nodes.
As the size of the cluster grows, it becomes useful not to tie projects to specific slaves, as it hurts resource utilization when slaves may come and go. For such situation, assign labels to slaves to classify their capabilities and characteristics, and specify a boolean expression over those labels to decide where to run.
Valid Operators
The following operators are supported, in the order of precedence.
(expr) parenthesis
!expr negation
expr&&expr and
expr||expr or
a -> b "implies" operator. Equivalent to !a|b. For example, windows->x64 could be thought of as "if run on a Windows slave, that slave must be 64bit." It still allows Jenkins to run this build on linux.
a <-> b "if and only if" operator. Equivalent to a&&b || !a&&!b. For example, windows<->sfbay could be thought of as "if run on a Windows slave, that slave must be in the SF bay area, but if not on Windows, it must not be in the bay area."
All operators are left-associative (i.e., a->b->c <-> (a->b)->c ) An expression can contain whitespace for better readability, and it'll be ignored.
Label names or slave names can be quoted if they contain unsafe characters. For example, "jenkins-solaris (Solaris)" || "Windows 2008"
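So, to answer the original question: requiring a node to carry two labels at once is just the && operator in the Label Expression field. For example (the label names here are made up):

```
linux && docker
```

This restricts the job to nodes that have both the linux and docker labels.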

Related

Doubts regarding config_setting with Bazel

I'm trying to understand config_setting for detecting the underlying platform and had some doubts. Could you help me clarify them?
What is the difference between the x64_windows and x64_windows_(msvc|msys) cpus? If I create config_settings for all of them, will only one of them trigger? Should I just ignore x64_windows?
To detect Windows, what is the recommended way? Currently I'm doing:
config_setting(
    name = "windows",
    values = {"crosstool_top": "//crosstools/windows"},
)

config_setting(
    name = "windows_msvc",
    values = {
        "crosstool_top": "//crosstools/windows",
        "cpu": "x64_windows_msvc",
    },
)

config_setting(
    name = "windows_msys",
    values = {
        "crosstool_top": "//crosstools/windows",
        "cpu": "x64_windows_msys",
    },
)
By using this I want to use :windows to match all Windows versions and :windows_msvc, for example, to match only MSVC. Is this the best way to do it?
What is the difference between the darwin and darwin_x86_64 cpus? I know they match macOS, but do I always need to specify both when selecting something for macOS? If not, is there a better way to detect macOS with only one config_setting? Like using //crosstools, as with Windows?
How do I detect Linux? I know you can detect the operating systems you care about first and then use //conditions:default, but it'd be nice to have a way to detect Linux specifically and not leave it as the default.
What are k8, piii, etc? Is there any documentation somewhere describing all the possible cpu values and what they mean?
If I wanted to use //crosstools to detect each platform, is there somewhere I can look up all available crosstools?
Thanks!
Great questions, all. Let me tackle them one by one:
--cpu=x64_windows_msys triggers the C++ toolchain that relies on MSYS/Cygwin. --cpu=x64_windows_msvc triggers the Windows-native (MSVC) toolchain. --cpu=x64_windows triggers the default, which is still MSYS but is being converted to MSVC.
Which ones you want to support is up to you, but it's probably safest to support all for generality (and if one is just an alias for the other it doesn't require very complicated logic).
Only one config_setting can trigger at a time.
Unless you're using a custom --crosstool_top= flag to specify Windows builds, you'll probably want to trigger on --cpu, e.g.:
config_setting(
    name = "windows",
    values = {"cpu": "x64_windows"},
)
There's no great way right now to define "all Windows". This is a current deficiency in Bazel's ability to recognize platforms: settings like --cpu and --crosstool_top don't quite model it the right way. Ongoing work to create a first-class concept of platform will provide the best solution to what you want, but for now --cpu is probably your best option.
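For completeness, here is a sketch of how such a config_setting is consumed from a BUILD file with select(); the library and file names are made up for illustration:

```
cc_library(
    name = "mylib",
    srcs = ["mylib.cc"] + select({
        # The config_setting defined above.
        ":windows": ["mylib_windows.cc"],
        "//conditions:default": ["mylib_posix.cc"],
    }),
)
```

Only one branch of the select() is chosen per build, which is consistent with only one config_setting triggering at a time.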
This would basically be the same story as Windows. But to my knowledge there's only darwin for default crosstools, no darwin_x86_64.
For the time being it's probably best to use the //conditions:default approach you'd rather not do. Once first-class platforms are available that'll give you the fidelity you want.
k8 and piii are pseudonyms for x86 64-bit and 32-bit CPUs, respectively. They also tend to be associated with "Linux" by convention, although this is not a guaranteed 1:1 match.
There is no definitive set of "all possible CPU values". Basically, --cpu is just a string that gets resolved in CROSSTOOL files to toolchains with identifiers that match that string. This allows you to write new CROSSTOOL files for new CPU types you want to encode yourself. So the exact set of available CPUs depends on who's using Bazel and how they have their workspace set up.
For the same reasons as 5., there is no definitive list. See the tools/ directory in Bazel's GitHub repository for references to the defaults.

Is there a solution for transpiling Lua labels to ECMAScript3?

I'm re-building a Lua-to-ES3 transpiler (a tool for converting Lua to cross-browser JavaScript). Before I spend more ideas on this transpiler, I want to ask whether it's possible to convert Lua labels to ECMAScript 3. For example:
goto label;
:: label ::
print "skipped";
My first idea was to separate each body of statements into parts, e.g., when there's a label, its following statements must be stored as an entire next part:
some body
label (& statements)
other label (& statements)
and so on. Every statement that has a body (or the program chunk) gets a list of parts like this. Each part of a label should have its name stored somewhere (e.g., in its own part object, inside a property).
Each part would be a function, or would store a function, to be executed sequentially in relation to the others.
A goto statement would look up its specific label, run its statements, and invoke an ES return statement to stop the current statement execution.
The limitation of separating the body statements this way is accessing the variables and functions defined in different parts... So, is there an idea or answer for this? Is it impossible to have stable labels when converting them to ECMAScript?
I can't quite follow your idea, but it seems someone already solved the problem: JavaScript allows labelled continues, which, combined with dummy while loops, permit emulating goto within a function. (And unless I forgot something, that should be all you need for Lua.)
Compare pages 72-74 of the ECMAScript spec ed. #3 of 2000-03-24 to see that it should work in ES3, or just look at e.g. this answer to a question about goto in JS. As usual on the 'net, the URLs referenced there are dead but you can get summerofgoto.com [archived] at the awesome Internet Archive. (Outgoing GitHub link is also dead, but the scripts are also archived: parseScripts.js, goto.min.js or goto.js.)
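To make the trick concrete, here is a minimal ES3-compatible sketch (function and label names are made up) showing both directions: a labelled break out of a dummy do/while emulates a forward goto, and a labelled continue emulates a backward one:

```javascript
// Forward goto: `break` out of a labelled dummy block skips the code
// between the Lua `goto label` and its `::label::`.
function forwardGoto() {
  var out = [];
  label: do {
    out.push("before goto");
    break label;           // goto label
    out.push("skipped");   // statements between goto and ::label::, never run
  } while (false);
  out.push("after label"); // ::label:: -- execution resumes here
  return out;
}

// Backward goto: `continue` on a labelled loop jumps back to its top.
function backwardGoto() {
  var i = 0;
  var out = [];
  label: while (true) {    // ::label:: sits at the top of the loop
    out.push(i);
    i = i + 1;
    if (i < 3) { continue label; } // goto label (jumping backwards)
    break;
  }
  return out;
}
```

forwardGoto() yields ["before goto", "after label"] and backwardGoto() yields [0, 1, 2], matching what the equivalent Lua gotos would do.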
I hope that's enough to get things running, good luck!

Create a user assistant using NLP

I am following a course titled Natural Language Processing on Coursera, and while the course is informative, I wonder if its contents cater to what I am looking for. Basically I want to implement a textual version of Cortana or Siri for now as a project, i.e. where the user can enter commands for the computer in natural language and they will be processed and translated into appropriate OS commands. My questions are:
What is the general sequence of steps in the above applications, after processing the speech? Do they tag the text and then parse it, or do they have some other approach?
Which application of NLP does this fall under? Can someone cite some good resources on it? My only doubt is whether what I'm following now will serve any important part of my goal or not.
What you want to create can be thought of as a carefully constrained chat-bot, except you are not attempting to hold a general conversation with the user, but to process specific natural language input and map it to specific commands or actions.
In essence, you need a tool that can pattern match various user input, with the extraction or at least recognition of various important topic or subject elements, and then decide what to do with that data.
Rather than get into an abstract discussion of natural language processing, I'm going to make a recommendation instead. Use ChatScript. It is a free open source tool for creating chat-bots that recently took first place in the Loebner chat-bot competition, as it has done so several times in the past:
http://chatscript.sourceforge.net/
The tool is written in C++, but you don't need to touch the source code to create NLP apps; just use the scripting language provided by the tool. Although initially written for chat-bots, it has expanded into an extremely programmer friendly tool for doing any kind of NLP app.
Most importantly, you are not boxed in by the philosophy of the tool or limited by the framework it provides. It has all the power of most scripting languages, so you won't find yourself going most of the distance towards completing your app, only to find some crushing limitation during the last mile that defeats your app or at least cripples it severely.
It also includes a large number of ontologies that can significantly jump-start your development efforts, and its built-in pre-processor does part-of-speech parsing, input conformance, and many other tasks crucial to writing script that can easily be generalized to handle large variations in user input. It also has a full interface to the WordNet synset database. There are many other important features in ChatScript that make NLP development much easier, too many to list here. It can run on Linux or Windows as a server that can be accessed using a TCP/IP socket connection.
Here's a tiny and overly simplistic example of some ChatScript script code:
# Define the list of available devices in the user's household.
concept: ~available_devices ( green_kitchen_lamp stove radio )

#! Turn on the green kitchen lamp.
#! Turn off that damn radio!
u: ( turn _[ on off ] *~2 _~available_devices )
    # Save off the desired action found in the user's input: ON or OFF.
    $action = _0
    # Save off the name of the device the user wants to turn on or off.
    $target_device = _1
    # Launch the utility that turns devices on and off.
    ^system( devicemanager $action $target_device )
Above is a typical ChatScript rule. Your app will have many such rules. This rule is looking for commands from the user to turn various devices in the house on and off. The # character indicates a line is a comment. Here's a breakdown of the rule's head:
It consists of the prefix u:. This tells ChatScript that the rule accepts user input in statement or question format.
It consists of the match pattern, which is the content between the parentheses. This match pattern looks for the word turn anywhere in the sentence. Next it looks for the desired user action. The square brackets tell ChatScript to match the word on or the word off. The underscore preceding the square brackets tells ChatScript to capture the matched text, the same way parentheses do in a regular expression. The *~2 token is a range-restricted wildcard. It tells ChatScript to allow up to 2 intervening words between the word turn and the concept set named ~available_devices.
~available_devices is a concept set. It is defined above the rule and contains the set of known devices the user can turn on and off. The underscore preceding the concept set name tells ChatScript to capture the name of the device the user specified in their input.
If the rule pattern matches the current user input, it "fires" and then the rule's body executes. The contents of this rule's body is fairly obvious, and the comments above each line should help you understand what the rule does if fired. It saves off the desired action and the desired target device captured from the user's input to variables. (ChatScript variable names are preceded by a single or double dollar-sign.) Then it shells to the operating system to execute a program named devicemanager that will actually turn on or off the desired device.
I wanted to point out one of ChatScript's many features that make it a robust and industrial strength NLP tool. If you look above the rule you will see two sentences prefixed by a string consisting of the characters #!. These are not comments but are validation sentences instead. You can run ChatScript in verify mode. In verify mode it will find all the validation sentences in your scripts. It will then apply each validation sentence to the rule immediately following it/them. If the rule pattern does not match the validation sentence, an error message will be written to a log file. This makes each validation sentence a tiny, easy to implement unit test. So later when you make changes to your script, you can run ChatScript in verify mode and see if you broke anything.

TFS drop, exclude obj folder using minimatch pattern

I'm setting up TFS 2015 on-prem and I'm having an issue on my last build step, Publish Build Artifacts. For some reason, the build agent appears to be archiving old binaries and I'm left with a huge filepath:
E:\TFSBuildAgent\_work\1a4e9e55\workspace\application\Development\project\WCF\WCF\obj\Debug\Package\Archive\Content\E_C\TFSBuildAgent\_work\1a4e9e55\workspace\application\Development\project\WCF\WCF\obj\Debug\Package\PackageTmp\bin
I'm copying the files using the example minimatch pattern to begin with:
**\bin
I'm only testing at the moment so this is not a permanent solution but how can I copy all binaries that are in a bin folder but not a descendant of obj?
From research I think that this should work, but it doesn't (It doesn't match anything):
**!(obj)**\bin
I'm using www.globtester.com to test. Any suggestions?
On a separate note, I'll look into the archiving issue later but if anyone has any pointers on it, feel free to comment. Thanks
In VSTS there are two kinds of pattern matching syntax built into the SDKs. Most tasks nowadays use the Minimatch pattern as described in Matt's answer. However, some use the pattern format that was used by the 1.x agent's Powershell SDK. That format is still available in the 2.x agent's Powershell SDK, by the way.
So that means there are 5 kinds of tasks:
1.x agent - Powershell SDK
2.x agent - Node SDK
2.x agent - Powershell 1 Backwards compatibility
2.x agent - Powershell 3 SDK - Using find-files
2.x agent - Powershell 3 SDK - Using find-match
The Powershell-based ones (other than find-match) don't use Minimatch, but the format documented in the VSTS-Task-SDK's find-files method.
The original question was posted in 2015, at which point in time the 2.x agent wasn't yet around. In that case, the pattern would, in all likelihood, be:
**\bin\$(BuildConfiguration)\**\*;-:**\obj\**
The -: prefix excludes matching items from the results of the patterns before it.
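Applied to the original question (every bin folder except those under obj), that legacy format would look something like the following — an untested sketch using the same syntax as the pattern above:

```
**\bin\**;-:**\obj\**
```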
According to Microsoft's documentation, here is a list of
file matching patterns you can use. The most important rules are:
Match with ?
? matches any single character within a file or directory name; ?(pattern) matches the pattern zero or one time.
Match with * or +
* matches zero or more characters within a file or directory name; *(pattern) matches zero or more occurrences of the pattern, and +(pattern) matches one or more.
Match with @ sign
@(pattern) matches the pattern exactly once.
Match with Brackets (, ) and |
If you're using brackets with |, it is treated as a logical OR, e.g. *(hello|world) means "zero or more occurrences of hello or world".
Match with Double-asterisk **
** recursive wildcard. For example, /hello/**/* matches all descendants of /hello.
Exclude patterns with !
Leading ! changes the meaning of an include pattern to exclude. Interleaved exclude patterns are supported.
Character sets with [ and ]
[] matches a set or range of characters within a file or directory name.
Comments with #
Patterns that begin with # are treated as comments.
Escaping
Wrapping special characters in [] can be used to escape literal glob characters in a file name. For example the literal file name hello[a-z] can be escaped as hello[[]a-z].
Example
The following expressions can be used in the Contents field of the "Copy Files" build step to create a deployment package for a web project:
**\?(*.config|*.dll|*.sitemap)
**\?(*.exe|*.dll|*.pdb|*.xml|*.resx)
**\?(*.js|*.css|*.html|*.aspx|*.ascx|*.asax|*.Master|*.cshtml|*.map)
**\?(*.gif|*.png|*.jpg|*.ico|*.pdf)
Note: You might need to add more extensions, depending on the needs of your project.

Naming nodes in Erlang

I'm playing with the distributed programming tutorial from the 5.4 documentation, and have run into a problem with node names.
My MacBook's default name (jamess-macbook) does not play well with Erlang's node-naming scheme, due to the dash:
(salt@jamess-macbook)4> {my_process, pepper@jamess-macbook} ! start
** exception error: bad argument in an arithmetic expression
     in operator  -/2
        called as pepper@jamess - macbook
I'm sure there's an easy way to get around this, short of renaming all the machines I want to run Erlang on, but I can't see it in the documentation.
Any suggestions?
You just need to quote the atom properly. 'pepper@jamess-macbook' (with the single quotes) is the name of the node.
An atom should be enclosed in single quotes (') if it does not begin with a lower-case letter or if it contains other characters than alphanumeric characters, underscore (_), or @.
-- Erlang Reference Manual
Using short node names (-sname) has various other consequences (limited interoperability with long node name nodes, doesn't load dns information into inet_db and so on).
Start the Erlang interpreter with:
$ erl -sname node_name
where node_name is the name you want to use to refer to the machine.
You can even simulate a distributed system on a single machine by starting several instances of the interpreter, each with a different node name.
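For instance, a hypothetical session on the asker's machine might look like this, assuming a second node was started with erl -sname pepper and has registered a process under the name my_process:

```
$ erl -sname salt
(salt@jamess-macbook)1> {my_process, 'pepper@jamess-macbook'} ! start.
start
```

Note the single quotes around the node name, which are what make the dash in the hostname safe.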
