In my model, I have three agents:
a "Demand" agent,
an "EnergyProducer1" agent,
an "EnergyProducer2" agent.
When my hourly energy demands are created in the Main agent by a function, the priority for satisfying this demand belongs to the "EnergyProducer1" agent. In this agent, I have a function that calculates energy production based on certain situations. Part of the inside of this function is the following:
**" if (statechartA.isStateActive(Operating.busy)) && ( main.heatLoadDemandPerHour >= heatPowerNominal) {
producedHeatPower = heatPowerNominal;
naturalGasConsumptionA = naturalGasConsumptionNominal;
send("boilerWorking",boiler);
} else ..... "**
Here my question is related to the 4th line of the code. If my agent1 fails to satisfy the hourly demand, I have to tell agent2 to satisfy the rest of the demand. If I send this message to agent2, its statechart will become active and the function of agent2 will run. My question is: will all of this be realized within the same hour? If not, is directly accessing the variables and parameters of agent2 a more appropriate way?
I hope I could explain my problem.
Thanks for your help in advance...
As a general comment on your question: within the AnyLogic environment, sending messages is always preferable to directly accessing the variables and parameters of another agent.
Specifically, in the example presented, the send() function will schedule message delivery as a separate event right after the completion of the current function, i.e. still at the same model time (so within the same hour in your model).
Update: A message in AnyLogic can be any Java class. Sending strings such as the "boilerWorking" used in the example is good for general control; however, if more information needs to be shared (such as a double value), then it is good practice to create a new Java class (let's call it ModelMessage and follow these instructions) with at least two properties, msgStr and msgVal. With this new class, sending a message changes from this:
...
send("boilerWorking", boiler);
...
to this:
...
send(new ModelMessage("boilerWorking",42.0), boiler);
...
and firing transitions in the statechart has to be changed to use "if expression is true", with the expression being msg.msgStr.equals("boilerWorking") (in Java, string contents should be compared with equals(), not ==).
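For reference, a minimal sketch of what such a class could look like (the class name and the msgStr/msgVal fields come from the description above; the constructor and the Serializable marker are assumptions):

public class ModelMessage implements java.io.Serializable {
    public String msgStr; // control string, e.g. "boilerWorking"
    public double msgVal; // numeric payload, e.g. the remaining demand to satisfy

    public ModelMessage(String msgStr, double msgVal) {
        this.msgStr = msgStr;
        this.msgVal = msgVal;
    }
}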
More information about Agent communication is available here.
Is Beam State shared across different DoFns?
Let's say I have 2 DoFns:
StatefulDoFn1: { myState.write(1)}
StatefulDoFn2: { myState.read() ; do something ... output}
And then the pipeline in pseudocode:
pipeline = readInput.........applyDoFn(StatefulDoFn1)......map{do something else}.......applyDoFn(StatefulDoFn2)
If I annotate myState identically in both StatefulDoFns, will what I write in StatefulDoFn1 be visible to StatefulDoFn2? We implemented a pipeline with the assumption that the answer is yes, but it seems to be no.
No, state is local to each stateful DoFn, and it is also actually local to each key (and window, if you are using a window) inside that DoFn.
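To illustrate, here is a sketch using the Beam Java SDK (class and variable names are illustrative): even though both DoFns declare a state cell with the same @StateId string, each DoFn gets its own independent cell, scoped to the current key and window.

import org.apache.beam.sdk.coders.VarIntCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;

class StatefulDoFn1 extends DoFn<KV<String, Integer>, KV<String, Integer>> {
    // This "myState" cell belongs to StatefulDoFn1 only.
    @StateId("myState")
    private final StateSpec<ValueState<Integer>> myStateSpec = StateSpecs.value(VarIntCoder.of());

    @ProcessElement
    public void process(ProcessContext ctx, @StateId("myState") ValueState<Integer> myState) {
        myState.write(1); // visible only within this DoFn, per key and window
        ctx.output(ctx.element());
    }
}

class StatefulDoFn2 extends DoFn<KV<String, Integer>, KV<String, Integer>> {
    // Same @StateId string, but a completely separate cell.
    @StateId("myState")
    private final StateSpec<ValueState<Integer>> myStateSpec = StateSpecs.value(VarIntCoder.of());

    @ProcessElement
    public void process(ProcessContext ctx, @StateId("myState") ValueState<Integer> myState) {
        Integer value = myState.read(); // null: the write in StatefulDoFn1 is not visible here
        ctx.output(ctx.element());
    }
}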
Original situation:
I have a job in Jenkins that runs an Ant script. I easily managed to test this Ant script on more than one software version using a "Multi-configuration project".
This type of project is really cool because it allows me to specify all the versions of the two pieces of software that I need (in my case Java and Matlab), and it will run my Ant script with all the combinations of my parameters.
Those parameters are then used as strings to be concatenated into the definition of the location of the executable to be used by my Ant script.
Example: env.MATLAB_EXE=/usr/local/MATLAB/${MATLAB_VERSION}/bin/matlab
This is working perfectly, but now I am migrating this script to a pipeline version of it.
Pipeline migration:
I managed to implement the same script in a pipeline fashion using the Parametrized pipelines plugin. With this I reached the point where I can manually select which version of my software is going to be used if I trigger the build manually, and I also found a way to execute this periodically, selecting the parameters I want at each run.
This solution seems to work fairly well, but it is not really satisfying.
My multi-config project had some features that this one does not:
1. With more than one parameter, I can have them interpolated and execute each combination.
2. The executions are clearly separated, and in the build history/build details it is easy to recognize which settings have been used.
3. Just adding a new "possible" value to a parameter spawns the desired executions.
Request
So I wonder if there is a better solution to my problem that also satisfies the points above.
Long story short: is there a way to implement a multi-configuration project in Jenkins using the pipeline technology?
I've seen this and similar questions asked a lot lately, so it seemed that it would be a fun exercise to work this out...
A matrix/multi-config job, visualized in code, would really just be a few nested for loops, one for each axis of parameters.
You could build something fairly simple with some hard coded for loops to loop over a few lists. Or you can get more complicated and do some recursive looping so you don't have to hard code the specific loops.
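For illustration, the simple hard-coded variant might look something like this (a sketch; the agent labels and tool names are hypothetical):

def tasks = [:]
for (os in ["linux", "windows"]) {
    for (jdk in ["jdk8", "jdk11"]) {
        // Copy the loop variables into locals declared inside the loop body,
        // so each closure captures its own values (see the note about
        // comboEntry.collect() further down).
        def myOs = os
        def myJdk = jdk
        def name = "${myOs}-${myJdk}"
        tasks[name] = {
            node(myOs) {
                def javaHome = tool myJdk
                echo "Running ${name} with Java at ${javaHome}"
            }
        }
    }
}
parallel tasks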
DISCLAIMER: I do ops much more than I write code. I am also very new to groovy, so this can probably be done more cleanly, and there are probably a lot of groovier things that could be done, but this gets the job done, anyway.
With a little work, this matrixBuilder could be wrapped up in a class so you could pass in a task closure and the axis list and get the task map back. Stick it in a shared library and use it anywhere. It should be pretty easy to add some of the other features from the multiconfiguration jobs, such as filters.
This attempt uses a recursive matrixBuilder function to work through any number of parameter axes and build all the combinations. Then it executes them in parallel (obviously depending on node availability).
/*
  All the config axes are defined here.
  Add as many lists of axes to the axisList as you need.
  All combinations will be built.
*/
def axisList = [
    ["ubuntu", "rhel", "windows", "osx"],      // agents
    ["jdk6", "jdk7", "jdk8"],                  // tools
    ["banana", "apple", "orange", "pineapple"] // fruit
]

def tasks = [:]
def comboBuilder
def comboEntry = []

def task = {
    // Builds and returns the task for each combination

    /* Map the entries back to a more readable format;
       the index corresponds to the position of this axis in axisList[] */
    def myAgent = it[0]
    def myJdk = it[1]
    def myFruit = it[2]
    // Capture the combination name here: the nested closure below has its
    // own implicit `it`, so it cannot refer to this closure's parameter.
    def comboName = it.join('-')

    return {
        // This is where the important work happens for each combination
        node(myAgent) {
            println "Executing combination ${comboName}"
            def javaHome = tool myJdk
            println "Node=${env.NODE_NAME}"
            println "Java=${javaHome}"
        }
        // We won't declare a specific agent for this part
        node {
            println "fruit=${myFruit}"
        }
    }
}

/*
  This is where the magic happens:
  recursively work through the axisList and build all combinations
*/
comboBuilder = { def axes, int level ->
    for (entry in axes[0]) {
        comboEntry[level] = entry
        if (axes.size() > 1) {
            comboBuilder(axes[1..-1], level + 1)
        }
        else {
            // Pass a copy of comboEntry, not the reference (see the note below)
            tasks[comboEntry.join("-")] = task(comboEntry.collect())
        }
    }
}

stage ("Setup") {
    node {
        println "Initial Setup"
    }
}

stage ("Setup Combinations") {
    node {
        comboBuilder(axisList, 0)
    }
}

stage ("Multiconfiguration Parallel Tasks") {
    // Run the tasks in parallel
    parallel tasks
}

stage ("The End") {
    node {
        echo "That's all folks"
    }
}
You can see a more detailed flow of the job at http://localhost:8080/job/multi-configPipeline/[build]/flowGraphTable/ (available under the Pipeline Steps link on the build page).
EDIT:
You can move the stage down into the "task" creation and then see the details of each stage more clearly, but not in a neat matrix like the multi-config job.
...
    // (inside the task closure, after the myAgent/myJdk/myFruit/comboName definitions)
    return {
        // This is where the important work happens for each combination
        stage ("${comboName}--build") {
            node(myAgent) {
                println "Executing combination ${comboName}"
                def javaHome = tool myJdk
                println "Node=${env.NODE_NAME}"
                println "Java=${javaHome}"
            }
            // Node irrelevant for this part
            node {
                println "fruit=${myFruit}"
            }
        }
    }
...
Or you could wrap each node with their own stage for even more detail.
As I did this, I noticed a bug in my previous code (fixed above now). I was passing the comboEntry reference to the task. I should have sent a copy, because, while the names of the stages were correct, when it actually executed them, the values were, of course, all the last entry encountered. So I changed it to tasks[comboEntry.join("-")] = task(comboEntry.collect()).
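In other words (illustrative):

def comboEntry = ["ubuntu", "jdk6"]
def byReference = comboEntry      // same list object; later mutations show through
def byCopy = comboEntry.collect() // shallow copy; safe to hand to the task closure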
I noticed that you can leave the original stage ("Multiconfiguration Parallel Tasks") {} around the execution of the parallel tasks. Technically now you have nested stages. I'm not sure how Jenkins is supposed to handle that, but it doesn't complain. However, the 'parent' stage timing is not inclusive of the parallel stages timing.
I also noticed that when a new build starts to run, on the "Stage View" of the job, all the previous builds disappear, presumably because the stage names don't all match up. But after the build finishes running, they all match again and the old builds show up again.
And finally, Blue Ocean doesn't seem to visualize this the same way. It doesn't recognize the "stages" in the parallel processes, only the enclosing stage (if it is present), or "Parallel" if it isn't. And then it only shows the individual parallel processes, not the stages within.
Points 1 and 3 are not completely clear to me, but I suspect you just want to use “scripted” rather than “Declarative” Pipeline syntax, in which case you can make your job do whatever you like—anything permitted by matrix project axes and axis filters and much more, including parallel execution. Declarative syntax trades off syntactic simplicity (and friendliness to “round-trip” editing tools and “linters”) for flexibility.
Point 2 is about visualization of the result, rather than execution per se. While this is a complex topic, the usual concrete request which is not already supported by existing visualizations like Blue Ocean is to be able to see test results distinguished by axis combination. This is tracked by JENKINS-27395 and some related issues, with design in progress.
I am trying to get the user ID of whoever approved an "input" step in a Jenkins workflow Groovy script. Below is a sample script:
node('node1') {
    stage "test"
    input message: 'test'
}
In the workflow UI, if a person hits "thumbs up", I want to print their user ID in the log. I don't see any option to do it.
def cause = currentBuild.rawBuild.getCause(Cause.UserIdCause)
cause.userId
will print the person who started the build. I have googled this for days but am not finding anything. Any help here will be greatly appreciated :)
This Jira issue describes how this is likely to work going forward, however it is still open.
In the meantime, the approach of getting the latest ApproverAction via the build actions API was suggested on #Jenkins IRC recently and should work. Note that it's not sandbox-safe.
Something along the lines of the below for getting the most recent approver:
@NonCPS
def getLatestApprover() {
    def latest = null
    // this returns a CopyOnWriteArrayList, safe for iteration
    def acts = currentBuild.rawBuild.getAllActions()
    for (act in acts) {
        if (act instanceof org.jenkinsci.plugins.workflow.support.steps.input.ApproverAction) {
            latest = act.userId
        }
    }
    return latest
}
The JIRA issue referenced by u-phoria has been resolved and the fix released.
By setting the submitterParameter to a value, the variable specified by submitterParameter will be populated with the Jenkins user ID that responded to the input field.
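For example, something along these lines should work (a sketch; the parameter name APPROVER is arbitrary, and since no other parameters are defined here, the step should return the submitter's user ID directly):

node('node1') {
    stage('test') {
        // submitterParameter makes the input step report who approved it
        def approver = input(message: 'test', submitterParameter: 'APPROVER')
        echo "Approved by: ${approver}"
    }
}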
2 Jenkins jobs: A and B.
A triggers B as a blocking build step ("Block until the triggered projects finish their builds"). Is there a way to include B's console output in A's console output?
Motivation: for browser use of Jenkins, A's console output contains a link to B's console output, which is fine. But when using Jenkins via command-line tools (jenkins-cli), there's no quick and easy way to see B's console output.
Any ideas?
Interesting. I'd try something like this.
From http://jenkinsurl/job/jobname/lastBuild/api/
Accessing Progressive Console Output
You can retrieve in-progress console output by making repeated GET requests with a parameter. You'll basically send a GET request to .../logText/progressiveText (or .../logText/progressiveHtml if you want HTML that can be put into a <pre> tag). The start parameter controls the byte offset of where you start.
The response will contain a chunk of the console output, as well as the X-Text-Size header that represents the byte offset (of the raw log file). This is the number you want to use as the start parameter for the next call.
If the response also contains the X-More-Data: true header, the server is indicating that the build is in progress, and you need to repeat the request after some delay. The Jenkins UI waits 5 seconds before making the next call. When this header is not present, you know that you've retrieved all the data and the build is complete.
So you can trigger a downstream job, but don't "block until downstream completes". Instead, add an extra step (execute shell, probably) and write a script that will read the console output of the other job as indicated above and display it in the console output of the current job. You will have to detect when the child job has finished by noticing when the X-More-Data: true header is no longer present, as detailed above.
I know this is an old question, but I had to do this myself recently. I figured this would help someone else looking to do the same. Here's a Groovy script that will read a given job's progressiveText URL. The code is written in such a way that it should be plug and play. Make sure to set jenkinsBase and jobName first. The approach is no different from what has already been mentioned.
Here's a short set of instructions on how to use this:
1. Configure the downstream job so that anonymous users have Read and ViewStatus rights.
2. In the upstream job, create a "Trigger/call builds on other projects" step that will call the downstream job.
3. Do not check "Block until the triggered projects finish their builds".
4. Right after that step, create an "Execute Groovy script" step and paste the following code:
def jenkinsBase = // Set to the Jenkins base URL here
def jobName = // Set to the Jenkins job name
def jobNumber = 'lastBuild' // Tail the last build
def address = null
def response = null
def start = 0 // Start at offset 0
def cont = true // This flag mirrors the value of the X-More-Data header

try {
    while (cont) { // Loop while the X-More-Data header value is "true"
        address = "${jenkinsBase}/job/${jobName}/${jobNumber}/logText/progressiveText?start=${start}"
        def urlInfo = address.toURL()
        response = urlInfo.openConnection()
        if (response.getResponseCode() != 200) {
            throw new Exception("Unable to connect to " + address) // Throw an exception to get out of the loop if the response is anything but 200
        }
        if (start != response.getHeaderField('X-Text-Size')) { // Print content if the starting offset does not equal the X-Text-Size header value
            response.getInputStream().getText().eachLine { line ->
                println(line)
            }
        }
        start = response.getHeaderField('X-Text-Size') // Set the new start offset to the next byte
        cont = response.getHeaderField('X-More-Data') == 'true' // The header is a string, so compare against 'true'; anything else ends the loop
        sleep(3000) // Wait for 3 seconds
    }
}
catch (Exception ex) {
    println(ex.getMessage())
}
This script can be further improved by programmatically getting the downstream job number.
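For instance, something like this (a sketch; the /lastBuild/buildNumber endpoint returns the number as plain text) would pin the script to a concrete build before the loop starts, instead of tailing whatever 'lastBuild' happens to point at:

// Resolve a concrete build number up front (assumes the same jenkinsBase/jobName as above)
def jobNumber = "${jenkinsBase}/job/${jobName}/lastBuild/buildNumber".toURL().text.trim()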
There is also a Python version of this approach here.
I am implementing a CTI application that will monitor all agent events. Currently I am having trouble getting aux-code events. By checking the agent state I can get the aux codes, but I want an event for aux-code changes so that I can get the aux codes immediately.
You may extract the Avaya extension of Agent from the AgentEvent and then get the AgentStateInfo from it:
Agent agent = agentTerminalEvent.getAgent();
// Cast to the Avaya (Lucent) extension to access the extended state info
LucentV5AgentStateInfo lasi = (LucentV5AgentStateInfo) ((LucentAgent) agent).getStateInfo();
int state = lasi.state;   // the agent's current state
int rc = lasi.reasonCode; // the reason (aux) code
int wm = lasi.workMode;   // the work mode
(if this is what you are looking for)
EDIT:
It seems that you can monitor full agent activity by monitoring the ACDAddress with an ACDAddressListener.
ae-services-jtapi-programmers-guide-6_3_1.pdf, Appendix A, page 60:
To completely monitor agent activity, please use an
ACDAddressListener
OLD (may be outdated):
BUT: AgentTerminalEvents or ACDAddressEvents other than Logon and Logoff are not produced if the change of the agent's state is not made through JTAPI itself.
That means that if an agent changes his state to NOT_READY using his phone, you will not receive an AgentTerminalEvent.
If that state change is done by your program (Agent.setState...), then you will receive an event.