Does modifying the RetryOptions passed to CallActivityWithRetryAsync require versioning the orchestration function? - azure-durable-functions

I've always been hesitant about making changes to orchestrator code, because the specifics of what is a breaking vs. a non-breaking change have never been 100% clear to me despite reading the documentation over and over. I worry that I will break in-flight orchestrations, so I typically err on the side of caution.
In this case I really would prefer not to have to version, but I'm not sure if this qualifies as a breaking change. I am currently calling CallActivityWithRetryAsync in my orchestration and would like to set BackoffCoefficient to 2.0.
My question is whether or not setting this value would break in-flight orchestrations and require versioning my orchestration function so the two can work side by side.
Current:
var retryOptions = new RetryOptions(TimeSpan.FromMinutes(1), 5);
await context.CallActivityWithRetryAsync("MyActivity", retryOptions, null);
Desired:
var retryOptions = new RetryOptions(TimeSpan.FromMinutes(1), 5);
retryOptions.BackoffCoefficient = 2.0;
await context.CallActivityWithRetryAsync("MyActivity", retryOptions, null);

Ran this by Chris Gillum from the Durable Functions team and the answer is: it depends.
Changing the retry options settings could change the history that gets generated by your orchestrator function. For example, if making a retry policy change results in more or fewer retries for existing instances, they may fail with a non-determinism error.
In the case of the specific example asked by the OP, a version change should not be required, as it simply changes the backoff coefficient and would not result in any change to the maximum number of attempts.
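If you do decide to err on the side of caution anyway, the usual side-by-side approach from the Durable Functions versioning guidance is to deploy the changed orchestrator under a new name, so in-flight instances keep replaying against the old code while new instances start on the new one. A minimal sketch (class and function names invented for illustration):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class MyOrchestrations
{
    // V1 keeps its name and retry policy, so in-flight instances
    // continue to replay against unchanged code.
    [FunctionName("MyOrchestrator")]
    public static Task RunV1(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var retryOptions = new RetryOptions(TimeSpan.FromMinutes(1), 5);
        return context.CallActivityWithRetryAsync("MyActivity", retryOptions, null);
    }

    // New instances are started against the V2 name with the changed policy.
    [FunctionName("MyOrchestrator_V2")]
    public static Task RunV2(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var retryOptions = new RetryOptions(TimeSpan.FromMinutes(1), 5)
        {
            BackoffCoefficient = 2.0
        };
        return context.CallActivityWithRetryAsync("MyActivity", retryOptions, null);
    }
}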

Related

Vaadin: after upgrading to v23.0.1 (from 22.0.2): Error with Binder opening a Form

After upgrading to Vaadin 23.0.x (from 22.0.2) I now keep getting the following error when opening a certain dialog:
2022-08-01 18:56:25,977 ERROR [http-nio-8085-exec-5] net.mmo.utils.kism.ui.views.nodes.NodeView: java.lang.IllegalStateException: All bindings created with forField must be completed before calling readBean
at com.vaadin.flow.data.binder.Binder.checkBindingsCompleted(Binder.java:3070)
at com.vaadin.flow.data.binder.Binder.readBean(Binder.java:2110)
at net.mmo.utils.kism.ui.views.nodes.NodeForm.readBean(NodeForm.java:487)
at net.mmo.utils.kism.ui.views.nodes.NodeForm.setNode(NodeForm.java:211)
This dialog has worked perfectly fine since I wrote it (using version 18.0.x, about two years ago) and up to v22.0.2. I can't make sense of that error message and I don't understand what the issue could be here. I verified the issue by going back and forth, and the only difference really is the Vaadin version upgrade: before it the dialog works just fine, and after it I get the above exception when opening it.
I also can't quite believe what I think the message is stating: if it is indeed checking that I define or complete bindings AFTER calling Binder.readBean(), how could it know that already at that very moment, i.e. when the code calls readBean(), as indicated by the stack trace?
If any bindings were indeed defined afterwards, IMHO it could only find that out AFTER said readBean() call, i.e. when the additional bindings were actually defined, couldn't it?
So, could someone please try to "translate" or explain that issue or the background behind it to me?
The error basically states the problem: in the process of binding a field to a property (or a getter/setter in general), the finishing step of actually binding was never performed. That is, the process was started with .forField() but never finished with .bind().
Since the error message currently only states the fact, but not the culprit, a developer needs a debugger to inspect the private state of the Binder, where the map incompleteBindings holds the Binder's current state. The contents of this map may help to find the culprit: e.g. if it holds only one entry, inspecting the flow of the program so far will point to the binding attempt that was never completed, or the field types it contains may give it away.
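For illustration, a minimal sketch of the failure mode (the Person bean and the fields are hypothetical):

import com.vaadin.flow.component.textfield.TextField;
import com.vaadin.flow.data.binder.Binder;

public class BinderDemo {
    public void populate(Person person) {
        Binder<Person> binder = new Binder<>(Person.class);
        TextField nameField = new TextField("Name");
        TextField emailField = new TextField("E-mail");

        // Completed binding: the forField(...) chain ends in bind(...).
        binder.forField(nameField).bind(Person::getName, Person::setName);

        // Incomplete binding: forField(...) is started but .bind(...) never
        // follows; Binder records it in its private incompleteBindings map.
        binder.forField(emailField);

        // Vaadin 23 checks for incomplete bindings here and throws:
        // IllegalStateException: All bindings created with forField must be
        // completed before calling readBean
        binder.readBean(person);
    }
}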
Other than plain bugs by the developer, there are some potential reasons why this suddenly happens after an update, and places to look for it:
multiple (re-)binding was recently added (e.g. to first bind "automatically" and then hand-tune the result); it is possible that older versions of the code simply kept the initial binding and ignored the dangling second process.
the binding process uses a builder pattern, and each builder step must build on the result of the previous one. In imperative code there is therefore a chance that a chained call misses reassigning the build step, e.g.:
var b = binder.forField(field);
if (predicate)
    b.asRequired(); // XXX: should be `b = b.asRequired();`
b.bind(...);
(this may or may not be the source of this kind of problem, but it is worth pointing out here, since the Binder builder implementation actually switches, or at least has switched in the past, the builder instance)

Side effects within xsl:accumulator

It appears that xsl:message does not work (i.e. no output is generated to the message list) within an accumulator rule. However, I don't see anything in the spec that disallows this.
<xsl:accumulator name="acc1" streamable="yes" initial-value="1">
  <xsl:accumulator-rule match="cdf:ContestSelection">
    <xsl:message>Output</xsl:message>
  </xsl:accumulator-rule>
</xsl:accumulator>
There are all sorts of possible reasons for this: the accumulator is never used, the rule never fires, the optimizer optimizes the call on xsl:message away, etc. One would need a complete repro to see what is actually going on.
Note that pretty well everything about xsl:message is implementation-defined, and one reason for that is to give the processor maximum freedom for optimization.
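As a starting point for such a repro, one could declare the accumulator on a streamable mode and actually read its value, so the rule has to fire and the processor has less room to optimize it away. A sketch (untested; the cdf namespace binding and the surrounding xsl:stylesheet wrapper from the question are assumed):

<xsl:mode streamable="yes" use-accumulators="acc1"/>

<xsl:accumulator name="acc1" streamable="yes" initial-value="1">
  <xsl:accumulator-rule match="cdf:ContestSelection">
    <xsl:message>Output</xsl:message>
    <xsl:sequence select="$value + 1"/>
  </xsl:accumulator-rule>
</xsl:accumulator>

<xsl:template match="/">
  <!-- Reading the final value forces the accumulator to be evaluated. -->
  <xsl:value-of select="accumulator-after('acc1')"/>
</xsl:template>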

My code gives an "attempt to call a nil value" on the computer controlled seed analyzer in Minecraft

So I have spent a few hours looking for documentation on the item "Computer Controlled Seed Analyzer" with no current information that is useful. My goal is to set up a seed analyzer that will check for a plant next to the analyzer and analyze it.
My code:
local sides = require("sides")
if hasPlant(sides.left) and isAnalyzed() == false then
analyze(side.left)
end
From my logic, I believe the outcome should be that the seed gets analyzed, but instead it gives an attempt to call a nil value (global hasPlant). From my research, sides was not defined at the time, therefore I added the local line. What else would I be missing?
Two problems here:
The mods involved are currently buggy, so OpenComputers integration doesn't work at all. I opened pull requests #1260 for AgriCraft and #31 for InfinityLib that will fix it. Until it's fixed, there's nothing you can do in-game to make it work. If you don't want to wait for official releases with the fixes, you can use my unofficial builds of AgriCraft and of InfinityLib, which I used to test my PRs and the code below.
The Lua code you're writing is wrong. I'm not sure where you got it from, but here's how you make it work:
if component.agricraft_peripheral.hasPlant("EAST") and component.agricraft_peripheral.isAnalyzed() == false then
component.agricraft_peripheral.analyze("EAST")
end
Of note (a combined sketch follows these notes):
The AgriCraft API takes the strings DOWN, UP, NORTH, SOUTH, WEST, and EAST, rather than the numeric constants from sides.
The functions provided by components in OpenComputers aren't globals; they're nested inside of component.
You may need local component = require("component"), so add it to the top if you get an error about it missing. (It works for me without it, but a bunch of documentation says you need it.)
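Putting these notes together, a combined version of the corrected program might look like this (untested sketch; "EAST" assumes the plant sits on that side of the analyzer):

local component = require("component") -- see the last note: may be optional

local analyzer = component.agricraft_peripheral

if analyzer.hasPlant("EAST") and not analyzer.isAnalyzed() then
  analyzer.analyze("EAST")
end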

Why does VkAccessFlagBits include both read bits and write bits?

In vulkan.h, every instance of VkAccessFlagBits appears in a pair that contains a srcAccessMask and a dstAccessMask:
VkAccessFlags srcAccessMask;
VkAccessFlags dstAccessMask;
In every case, according to my understanding, the purpose of these masks is to help designate two sets of operations, such that results of operations in the first set will be visible to operations in the second set. For instance, write operations occurring prior to a barrier should not get hung up in caches but should instead propagate all the way to locations from which they can be read after the barrier. Or something like that.
The access flags come in both READ and WRITE forms:
/* ... */
VK_ACCESS_SHADER_READ_BIT = 0x00000020,
VK_ACCESS_SHADER_WRITE_BIT = 0x00000040,
VK_ACCESS_COLOR_ATTACHMENT_READ_BIT = 0x00000080,
VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT = 0x00000100,
/* ... */
But it seems to me that srcAccessMask should probably always be some sort of VK_ACCESS_*_WRITE_BIT combination, while dstAccessMask should always be a combination of VK_ACCESS_*_READ_BIT values. If that is true, then the READ/WRITE distinction is identical to and implicit in the src/dst distinction, and so it should be good enough to just have VK_ACCESS_SHADER_BIT etc., without READ_ or WRITE_ variants.
Why are there READ_ and WRITE_ variants, then? Is it ever useful to specify that some read operations must fully complete before some other operations have begun? Note that all operations using VkAccessFlagBits produce (I think) execution dependencies as well as memory dependencies. It seems to me that the execution dependencies should be good enough to prevent earlier reads from receiving values written by later writes.
While writing this question I encountered a statement in the Vulkan specification that provides at least part of an answer:
Memory dependencies are used to solve data hazards, e.g. to ensure that write operations are visible to subsequent read operations (read-after-write hazard), as well as write-after-write hazards. Write-after-read and read-after-read hazards only require execution dependencies to synchronize.
This is from the section 6.4. Execution And Memory Dependencies. Also, from earlier in that section:
The application must use memory dependencies to make writes visible before subsequent reads can rely on them, and before subsequent writes can overwrite them. Failure to do so causes the result of the reads to be undefined, and the order of writes to be undefined.
From this I surmise that, yes, the execution dependencies produced by the Vulkan commands that involve these access flags probably do free you from ever having to put a VK_ACCESS_*_READ_BIT into a srcAccessMask field, but that you might in fact want READ_ flags, WRITE_ flags, or both in some of your dstAccessMask fields, because apparently it's possible to use an explicit dependency to prevent read-after-write hazards in such a way that write-after-write hazards are NOT prevented. (And maybe vice versa?)
Like, maybe your Vulkan implementation will sometimes decide that a write does not actually need to be propagated all the way through a particular cache to its final destination for the sake of a subsequent read, IF it happens to know that the read will simply hit that same cache, saving some time? But then a second write might happen and go to a different cache, leaving two caches in a race (with the winner undefined) to send their two values to the same spot. Or something? Maybe my mental model of these caches is entirely wrong.
It is fairly solidly established, at least, that memory barriers are confusing.
Let's go over all the possibilities:
read–read — well, yeah, that one is pretty useless. Khronos seems to agree (#131): it is a pointless value in src (basically equivalent to 0).
read–write — an execution dependency should be sufficient to synchronize this without any access flags. Khronos seems to agree (#131): it is a pointless value in src (basically equivalent to 0).
write–read — that's the obvious and most common one (see the sketch after this list).
write–write — similar reasoning to write–read above: without it, the order of the writes would be undefined. It is a bit pointless in most situations to overwrite something you haven't even read in between, but hey, now you have a way to synchronize it.
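For the common write–read case, a typical barrier might look like the following sketch (assuming a compute dispatch that writes a buffer, followed by a compute dispatch that reads it; cmd is a command buffer in the recording state):

#include <stddef.h>
#include <vulkan/vulkan.h>

/* Make prior compute-shader writes available to later compute-shader reads:
 * src gets the WRITE bit, dst gets the READ bit. */
void barrierWriteThenRead(VkCommandBuffer cmd)
{
    VkMemoryBarrier barrier = {
        .sType         = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
        .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
        .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    };

    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* source stage mask      */
        VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,  /* destination stage mask */
        0,                                     /* dependency flags       */
        1, &barrier,                           /* global memory barriers */
        0, NULL,                               /* buffer memory barriers */
        0, NULL);                              /* image memory barriers  */
}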
You can provide a bitmask combining several of these flags in both src and dst, in which case it makes sense to have both masks so that the driver can sort the dependencies out for you. (I don't expect performance overhead from this at the API level, so it is allowed as a convenience.)
From an API design perspective, separating them would mean adding a different enum type for srcAccessMask. But perhaps the _READ variants could simply have been forbidden in srcAccessMask through "Valid Usage" rules, which makes this argument weak. The READ variants might have been kept in src because they are benign.

Sleep from within an Informix SPL procedure

What's the best way to do the semantic equivalent of the traditional sleep() system call from within an Informix SPL routine? In other words, simply "pause" for N seconds (or milliseconds or whatever, but seconds are fine). I'm looking for a solution that does not involve linking some new (perhaps written by me) C code or other library into the Informix server. This has to be something I can do purely from SPL. A solution for IDS 10 or 11 would be fine.
#RET - The "obvious" answer wasn't obvious to me! I didn't know about the SYSTEM command. Thank you! (And yes, I'm the guy you think I am.)
Yes, it's for debugging purposes only. Unfortunately, CURRENT within an SPL will always return the same value, set at the entry to the call:
"any call to CURRENT from inside the SPL function that an EXECUTE FUNCTION (or EXECUTE PROCEDURE) statement invokes returns the value of the system clock when the SPL function starts."
—IBM Informix Guide to SQL
Wrapping CURRENT in its own subroutine does not help. You do get a different answer on the first call to your wrapper (provided you're using YEAR TO FRACTION(5) or some other type with high enough resolution to show the difference), but then you get that same value back on every single subsequent call, which ensures that any sort of loop will never terminate.
There must be some good reason you're not wanting the obvious answer:
SYSTEM "sleep 5" (sketched as a full procedure after the list below). If all you want is for the SPL to pause while you check various values etc., here are a couple of thoughts (all of which are utter hacks, of course):
Make the TRACE FILE a named pipe (assuming Unix back-end), so it blocks until you choose to read from it, or
Create another table that your SPL polls for a particular entry from a WHILE loop, and insert said row from elsewhere (horribly inefficient)
Make SET LOCK MODE your friend: execute "SET LOCK MODE TO WAIT n" and deliberately requery a table you're already holding a cursor open on. You'll need to wrap this in an EXCEPTION handler, of course.
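For completeness, here is the obvious answer as a full procedure; a minimal sketch (procedure name invented, Unix back-end assumed):

CREATE PROCEDURE sleep_demo()
    -- Shell out to the OS; the session blocks for roughly five seconds.
    SYSTEM "sleep 5";
END PROCEDURE;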
Hope that is some help (and if you're the same JS of Ars and Rose::DB fame, it's the least I could do ;-)
I'm aware that this answer is late. However, I've recently encountered the same problem and this site shows up as the first result, so it may be beneficial for other people to place a new answer here.
The perfect solution was found by Eric Herber and published in April 2012 here: How to sleep (or yield) for a fixed time in a stored procedure. Unfortunately, that site is down.
His solution is to use the following function:
integer sysadmin:yieldn( integer nseconds )
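Presumably (untested; based only on the signature above) it would be invoked like this:

EXECUTE FUNCTION sysadmin:yieldn(5);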
I assume that you want this "pause" for debugging purposes; otherwise, think about it: you'll always have better tasks for your server to do than sleep...
A suggestion: maybe you could get CURRENT, add a few seconds to it (LET mytimestamp = ...), and then select CURRENT in a WHILE loop while CURRENT <= mytimestamp. I have no Informix setup around my desk to try it, so you'll have to figure out the correct syntax. Again, do not put such a hack on a production server. You've been warned :D
Then you'll have to wrap CURRENT in another function that you'll call from the first (but this is a hack on top of the previous hack...).