SPOP is not allowed to be executed in a Lua script. And if you run a non-deterministic command first, you are then not allowed to execute write commands. This seems confusing to me, so why does Redis have this limitation?
It's explained fairly well in the Redis docs here.
Scripts are replicated to slaves by sending the script itself over and running it on the slave, so a script needs to produce the same result every time it runs, or the data on the slave will diverge from the data on the master.
You could try the new 'script effects replication' described at the same link if you need to perform non-deterministic operations in a script.
You need to call this "replicate commands" function before any data-changing command:
redis.replicate_commands()
It's explained here.
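For example, a minimal sketch of a script that switches to effects replication before mixing a non-deterministic read with a write (the key name is just a placeholder passed via KEYS):

-- Switch to single-command (effects) replication first.
redis.replicate_commands()
-- Non-deterministic read: TIME differs between master and replica.
local now = redis.call('time')[1]
-- This write is now allowed, because only its effect is replicated.
redis.call('set', KEYS[1], now)
return now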
On a single Redis instance I cannot think of any negative effect.
But say you're running a master-slave setup: the result of a Lua script that calls e.g. TIME would not be the same on the master and on the slave.
You can do what is proposed at the end of the error description:
Additional information: ERR Error running script (call to f_082d853d36ea323f47b6b00d36b7db66dac0bebd): @user_script:10: @user_script: 10: Write commands not allowed after non deterministic commands. Call redis.replicate_commands() at the start of your script in order to switch to single commands replication mode.
Call redis.replicate_commands() at the start of your script in order to switch to single-command replication mode. Note that this is necessary here because the redis.call('time') call is non-deterministic.
local function ticks(key, ticks)
    -- Must be called before any write command, because the script
    -- uses the non-deterministic TIME command below.
    redis.replicate_commands()
    local t = ticks
    local time = redis.call('time')
    local last = redis.call('get', key .. '_last')
    if last == t then
        local inc = redis.call('incr', key .. 'cnt')
        t = t + inc
    else
        redis.call('set', key .. 'cnt', 0)
    end
    redis.call('set', key .. '_last', t)
    return t
end

return ticks(#key, #ticks)
We are setting up a federated scenario with Server and Client on different physical machines.
On the server, we have used the Docker container to kickstart:
The above has been borrowed from the Kubernetes tutorial. We believe this creates a 'local executor' [Ref 1], which helps create a gRPC server [Ref 2].
Ref 1:
Ref 2:
Next, on client 1, we are calling tff.framework.RemoteExecutor, which connects to the gRPC server.
Our understanding based on the above is that the Remote Executor runs on the client which connects to the gRPC server.
Assuming the above is correct, how can we send a tff.tf_computation from the server to the client and print the output on the client side to ensure the whole setup works well?
Your understanding is definitely correct.
If you construct an ExecutorFactory directly, as seems to be the case in the code above, passing it to tff.framework.set_default_context will install your remote stack as the default mechanism for executing computations in the TFF runtime. You should additionally be able to pass the appropriate channels to tff.backends.native.set_remote_execution_context to handle the remote executor construction and context installation if desired, but the way you are doing it certainly works, and allows for greater customization.
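For reference, installing the remote context through that helper might look roughly like the following; the worker address is a hypothetical placeholder, and the exact signature can vary between TFF releases:

import grpc
import tensorflow_federated as tff

# Hypothetical worker endpoint(s); point these at your gRPC server(s).
channels = [grpc.insecure_channel('10.0.0.1:8000')]
tff.backends.native.set_remote_execution_context(channels)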
Once you have set this up, running an example end-to-end should be fairly simple. We will set up a computation which takes a set of federated integers, prints on the clients, and sums the integers up. Let:
@tff.tf_computation(tf.int32)
def print_and_return(x):
    # We must use tf.print here, as this logic will be
    # serialized and run on the clients as TensorFlow.
    tf.print('hello world')
    return x

@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def print_and_sum(federated_arg):
    same_ints = tff.federated_map(print_and_return, federated_arg)
    return tff.federated_sum(same_ints)
Suppose we have N clients; we simply instantiate the set of federated integers, and invoke our computation.
federated_ints = [1] * N
total = print_and_sum(federated_ints)
assert total == N
This should cause the tf.prints defined above to run on the remote machine; as long as tf.print is directed to an output stream which you can monitor, you should be able to see it.
PS: you may note that the federated sum above is unnecessary; it certainly is. The same effect can be had by simply mapping the identity function with the serialized print.
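A sketch of that simpler variant, reusing print_and_return and federated_ints from above, might be:

@tff.federated_computation(tff.FederatedType(tf.int32, tff.CLIENTS))
def print_only(federated_arg):
    # Mapping the printing identity function is enough to trigger
    # tf.print on every client; no aggregation is required.
    return tff.federated_map(print_and_return, federated_arg)

print_only(federated_ints)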
I am trying to use the Golang SDK of Docker in order to maintain a slice variable with currently running containers on the local Docker instance. This slice is exported from a package and I want to use it to feed a web page.
I am not really used to goroutines and channels and that's why I am wondering if I have spotted a good solution for my problem.
I have a docker package as follows.
https://play.golang.org/p/eMmqkMezXZn
It has a Running variable containing the current state of running containers.
var Running []types.Container
I use a reload function to load the running containers in the Running variable.
// Reload the list of running containers
func reload() error {
    ...
    Running, err = cli.ContainerList(context.Background(), types.ContainerListOptions{
        All: false,
    })
    ...
}
And then I start a goroutine from the init function to listen to Docker events and trigger the reload function accordingly.
func init() {
    ...
    // Listen for docker events
    go listen()
    ...
}
// Listen for docker events
func listen() {
    filter := filters.NewArgs()
    filter.Add("type", "container")
    filter.Add("event", "start")
    filter.Add("event", "die")

    msg, errChan := cli.Events(context.Background(), types.EventsOptions{
        Filters: filter,
    })

    for {
        select {
        case err := <-errChan:
            panic(err)
        case <-msg:
            fmt.Println("reloading")
            reload()
        }
    }
}
My question is, is it proper to update a variable from inside a goroutine (in terms of sync)? Maybe there is a cleaner way to achieve what I am trying to build?
Update
My concern here is not really about caching. It is more about hiding the "complexity" of the process of listening and update from the Docker SDK. I wanted to provide something like an index to easily let the end user loop and display currently running containers.
I was aware of data-race problems in threaded programs, but I did not realize I was actually in a concurrent context here (I have never written concurrent programs in Go before).
I clearly need to rethink the solution to make it more idiomatic. As far as I can see, I have two options here: either protect the variable with a mutex, or rethink the design to use channels.
What matters most to me is to hide or encapsulate the synchronization method, so that package users need not be concerned with how the shared state is protected.
Would you have any recommendations?
Thanks a lot for your help,
Loric
No, it is not idiomatic Go to share the Running variable between two goroutines. You are doing this by sharing it between the goroutine that runs your main function and the listen function, which is started with go and therefore runs in another goroutine.
The reason is that it goes against:
Do not communicate by sharing memory; instead, share memory by communicating. ¹
So the design of the API needs to change in order to be idiomatic; you need to remove the Running variable and replace it with what? It depends on what you are trying to achieve. If you are trying to cache the cli.ContainerList because you need to call it often, and it might be expensive, you should implement a cache which is invalidated on each cli.Events.
What is your motivation?
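If a cached list is indeed what you need, one possible shape for the package is sketched below. This is a rough sketch, not a drop-in implementation: the docker client setup and the event loop are elided, and the exported Running variable is replaced by an accessor function, in line with the advice above.

package docker

import (
    "context"
    "sync"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

var (
    mu      sync.RWMutex
    running []types.Container
    cli     *client.Client // assumed to be initialised elsewhere, e.g. in init()
)

// Running returns a snapshot of the cached container list, so callers
// never observe a partially updated slice.
func Running() []types.Container {
    mu.RLock()
    defer mu.RUnlock()
    out := make([]types.Container, len(running))
    copy(out, running)
    return out
}

// reload refreshes the cache; call it from the event-listening goroutine.
func reload() error {
    list, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: false})
    if err != nil {
        return err
    }
    mu.Lock()
    running = list
    mu.Unlock()
    return nil
}

The listen goroutine then calls reload exactly as in the question, and package users only ever go through the Running() accessor, so the synchronization stays hidden inside the package.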
I have 3 Ubuntu machines (CPU only). My Dask scheduler and client are both on the same machine, whereas the two Dask workers are running on the other two machines. When I launch the first task, it gets scheduled on the first worker, but when I then launch a second task while the first one is still executing, it does not get scheduled on the second worker. Here is the sample client code that I tried.
### client.py
from dask.distributed import Client
import time, sys, os, random
def my_task(arg):
    print("doing something in my_task")
    time.sleep(2)
    print("inside my task..", arg)
    print("again doing something in my_task")
    time.sleep(2)
    print("return some random value")
    value = random.randint(1, 100)
    print("value::", value)
    return value
client = Client("172.25.49.226:8786")
print("client::", client)
future = client.submit(my_task, "hi")
print("future result::", future.result())
print("closing the client..")
client.close()
I am running "python client.py" two times almost at the same time from two different terminal/machines. both the client seems to be executing, but it results in exactly the same output which it should not because the return type of the my_task() is a random value. I tested this on ubuntu machines.
However a month back, I was able to run same tasks in parallel on CentOs machines. And now if check back and ran same two tasks from those CentOs machines, the problem persist. This is strange. it did not run in parallel. Not able to figure out this behavior by dask. Am I missing any OS level settings or something else.?
Run the below almost at the same time,
python client.py # from one machine/terminal
python client.py # from another machine/terminal
These two tasks should run in parallel, each on a different worker (we have two free workers available), but this is not happening. I can't see any log on the second worker's console, nor on the scheduler, while the first task continues to execute. In the end I noticed that both tasks finish at exactly the same time with exactly the same output.
However, the above client code runs in parallel just fine on Windows, with each task launched from a separate terminal, but I would like to run it on the Ubuntu machines.
By default, if you call the same function on the same inputs, Dask will assume that it produces the same value and will only compute it once. You can override this behavior with the pure=False keyword:
future = client.submit(func, *args, pure=False)
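Applied to the script in the question, the change would be a single line (same my_task and client as above):

future = client.submit(my_task, "hi", pure=False)
print("future result::", future.result())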
I need to reboot the system from a Lua script. I need to print some string before the reboot happens, and print another string from the Lua script once the reboot is done.
Example :
print("Before Reboot System")
Reboot the System through Lua script
print("After Reboot System")
How would I accomplish this?
You can use os.execute to issue system commands. On Windows it's shutdown -r; on POSIX systems it's just reboot. Your Lua code can therefore call one or the other, as shown in the example below.
Be aware that part of the reboot command is stopping active programs, like your Lua script. That means that any data stored in RAM will be lost. You need to write any data you want to keep to disk, using, for instance, table serialization.
Unfortunately, without more knowledge of your environment, I can't tell you how to call the script again. You might be able to append a call to your script to the end of ~/.bashrc or similar.
Make sure that loading this data and starting at a point after you call your reboot function is the first thing that you do when you come back! You don't want to get stuck in an endless reboot loop where the first thing your computer does when it turns on is to turn itself off. Something like this should work:
local function is_rebooted()
    -- Presence of the file indicates reboot status
    local f = io.open("Rebooted.txt", "r")
    if f then
        f:close()
        os.remove("Rebooted.txt")
        return true
    else
        return false
    end
end

local function reboot_system()
    local f = assert(io.open("Rebooted.txt", "w"))
    f:write("Restarted! Call On_Reboot()")
    f:close()
    -- Do something to make sure the script is called upon reboot here
    -- First line of package.config is the directory separator;
    -- assume that '\' means it's Windows
    local is_windows = string.find(_G.package.config:sub(1, 1), "\\")
    if is_windows then
        os.execute("shutdown -r")
    else
        os.execute("reboot")
    end
end

local function before_reboot()
    print("Before Reboot System")
    reboot_system()
end

local function after_reboot()
    print("After Reboot System")
end

-- Execution begins here!
if not is_rebooted() then
    before_reboot()
else
    after_reboot()
end
(Warning - untested code. I didn't feel like rebooting. :)
There is no way to do what you are asking in pure Lua. You may be able to do it using os.execute, depending on your system and setup, but Lua's standard libraries only include what is possible with the standard C libraries, which do not include operating-system-specific functionality like restarting.
I wrote the following two functions, and call the second ("callAndWait") from JavaScript running inside Windows Script Host. My overall intent is to call one command line program from another. That is, I'm running the initial scripting using cscript, and then trying to run something else (Ant) from that script.
function readAllFromAny(oExec)
{
    if (!oExec.StdOut.AtEndOfStream)
        return oExec.StdOut.ReadLine();

    if (!oExec.StdErr.AtEndOfStream)
        return "STDERR: " + oExec.StdErr.ReadLine();

    return -1;
}
// Execute a command line function....
function callAndWait(execStr) {
    var oExec = WshShell.Exec(execStr);

    while (oExec.Status == 0)
    {
        WScript.Sleep(100);
        var output;
        while ((output = readAllFromAny(oExec)) != -1) {
            WScript.StdOut.WriteLine(output);
        }
    }
}
Unfortunately, when I run my program, I don't get immediate feedback about what the called program is doing. Instead, the output seems to come in fits and starts, sometimes waiting until the original program has finished, and sometimes it appears to have deadlocked. What I really want to do is have the spawned process actually share the same StdOut as the calling process, but I don't see a way to do that. Just setting oExec.StdOut = WScript.StdOut doesn't work.
Is there an alternate way to spawn processes that will share the StdOut and StdErr of the launching process? I tried using WshShell.Run(), but that gives me a "permission denied" error. That's problematic, because I don't want to have to tell my clients to change how their Windows environment is configured just to run my program.
What can I do?
You cannot read from StdErr and StdOut in the script engine in this way, as there is no non-blocking IO as Code Master Bob says. If the called process fills up the buffer (about 4KB) on StdErr while you are attempting to read from StdOut, or vice-versa, then you will deadlock/hang. You will starve while waiting for StdOut and it will block waiting for you to read from StdErr.
The practical solution is to redirect StdErr to StdOut like this:
sCommandLine = """c:\Path\To\prog.exe"" Argument1 argument2"
Dim oExec
Set oExec = WshShell.Exec("CMD /S /C "" " & sCommandLine & " 2>&1 """)
In other words, what gets passed to CreateProcess is this:
CMD /S /C " "c:\Path\To\prog.exe" Argument1 argument2 2>&1 "
This invokes CMD.EXE, which interprets the command line. /S /C invokes a special parsing rule so that the first and last quote are stripped off, and the remainder used as-is and executed by CMD.EXE. So CMD.EXE executes this:
"c:\Path\To\prog.exe" Argument1 argument2 2>&1
The incantation 2>&1 redirects prog.exe's StdErr to StdOut. CMD.EXE will propagate the exit code.
You can now succeed by reading from StdOut and ignoring StdErr.
The downside is that the StdErr and StdOut output get mixed together. As long as they are recognisable you can probably work with this.
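Adapted to the JScript in the question, callAndWait might then look roughly like this. It is a sketch only; it assumes execStr is already quoted correctly for CMD:

// Let CMD.EXE perform the 2>&1 redirection, then read StdOut only.
function callAndWait(execStr) {
    var oExec = WshShell.Exec('CMD /S /C " ' + execStr + ' 2>&1 "');

    while (oExec.Status == 0) {
        WScript.Sleep(100);
        while (!oExec.StdOut.AtEndOfStream) {
            WScript.StdOut.WriteLine(oExec.StdOut.ReadLine());
        }
    }

    // Drain whatever is left after the child has exited.
    while (!oExec.StdOut.AtEndOfStream) {
        WScript.StdOut.WriteLine(oExec.StdOut.ReadLine());
    }
}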
Another technique which might help in this situation is to redirect the standard error stream of the command to accompany the standard output.
Do this by adding "%comspec% /c" to the front and "2>&1" to the end of the execStr string.
That is, change the command you run from:
zzz
to:
%comspec% /c zzz 2>&1
The "2>&1" is a redirect instruction which causes the StdErr output (file descriptor 2) to be written to the StdOut stream (file descriptor 1).
You need to include the "%comspec% /c" part because it is the command interpreter which understands about the command line redirect. See http://technet.microsoft.com/en-us/library/ee156605.aspx
Using "%comspec%" instead of "cmd" gives portability to a wider range of Windows versions.
If your command contains quoted string arguments, it may be tricky to get them right:
the specification for how cmd handles quotes after "/c" seems to be incomplete.
With this, your script needs only to read the StdOut stream, and will receive both standard output and standard error.
I used this with "net stop wuauserv", which writes to StdOut on success (if the service is running)
and StdErr on failure (if the service is already stopped).
First, your loop is broken in that it always tries to read from oExec.StdOut first. If there is no actual output, it will hang until there is. You won't see any StdErr output until StdOut.AtEndOfStream becomes true (probably when the child terminates). Unfortunately, there is no concept of non-blocking I/O in the script engine, that is, calling read and having it return immediately if there is no data in the buffer. Thus there is probably no way to get this loop to work as you want.

Second, WshShell.Run does not provide any properties or methods to access the standard I/O of the child process. It creates the child in a separate window, totally isolated from the parent except for the return code. However, if all you want is to be able to SEE the output from the child, then this might be acceptable. You will also be able to interact with the child (input), but only through the new window (see SendKeys).
As for using ReadAll(), this would be even worse, since it collects all the input from the stream before returning, so you wouldn't see anything at all until the stream was closed. I have no idea why the example places the ReadAll in a loop which builds a string; a single if (!WScript.StdIn.AtEndOfStream) should be sufficient to avoid exceptions.
Another alternative might be to use the process creation methods in WMI. How standard I/O is handled is not clear and there doesn't appear to be any way to allocate specific streams as StdIn/Out/Err. The only hope would be that the child would inherit these from the parent but that's what you want, isn't it? (This comment based upon an idea and a little bit of research but no actual testing.)
Basically, the scripting system is not designed for complicated interprocess communication/synchronisation.
Note: Tests confirming the above were performed on Windows XP Sp2 using Script version 5.6. Reference to current (5.8) manuals suggests no change.
Yes, the Exec function seems to be broken when it comes to terminal output.
I have been using a similar function, shown below, that I call in a loop similar to yours. Not sure if checking for EOF and reading line by line is better or worse.

function ConsumeStd(e) {
    WScript.StdOut.Write(e.StdOut.ReadAll());
    WScript.StdErr.Write(e.StdErr.ReadAll());
}
You might have hit the deadlock issue described on this Microsoft Support site.
One suggestion is to always read both from stdout and stderr.
You could change readAllFromAny to:
function readAllFromAny(oExec)
{
    var output = "";

    if (!oExec.StdOut.AtEndOfStream)
        output = output + oExec.StdOut.ReadLine();

    if (!oExec.StdErr.AtEndOfStream)
        output = output + "STDERR: " + oExec.StdErr.ReadLine();

    return output ? output : -1;
}