Detecting when ChucK child shred has finished - chuck

Is it possible to determine when a ChucK child shred has finished executing if you have a reference to the child shred? For example, in this code:
// define function go()
fun void go()
{
    // insert code
}

// spork another, store reference to new shred in offspring
spork ~ go() @=> Shred @ offspring;
Is it possible to determine when offspring is done executing?

I'd say so; let me quote from the "VERSIONS" file of the latest release:
- (added) int Shred.done() // is the shred done?
          int Shred.running() // is the shred running?
I'm not 100% sure what "running" is supposed to refer to (perhaps I misunderstand it?) but "done" seems to suit your needs;
================== 8<================
fun void foo()
{
    second => now;
}
spork ~ foo() @=> Shred @ bar;
<<<bar.done()>>>;
<<<bar.running()>>>; // why is this 0? Bug?
2::second => now;
<<<bar.done()>>>;
<<<bar.running()>>>;
==========8<======================
Please note that calling these on a Shred object with no shred process attached to it will return more or less random numbers, which is probably a bug.
---Answer from Kassen on chuck-users mailing list.
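For completeness, here is a minimal sketch (my own, not from the mailing list) of waiting for the child shred by polling done(), assuming current ChucK syntax:

// child shred that does some timed work
fun void go()
{
    1::second => now;
}

// spork it and keep a reference to the child
spork ~ go() @=> Shred @ offspring;

// poll until the child reports that it is done
while( !offspring.done() )
{
    10::ms => now;
}
<<< "offspring has finished" >>>;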

Related

Why does the `set` method defined on `Cell<T>` explicitly drop the old value? (Rust)

I'm interested in why the set method defined on Cell explicitly drops the old value on the last line.
Shouldn't it be dropped implicitly (and its memory freed) anyway when the function returns?
use std::mem;
use std::cell::UnsafeCell;
pub struct Cell<T> {
    value: UnsafeCell<T>
}

impl<T> Cell<T> {
    pub fn set(&self, val: T) {
        let old = self.replace(val);
        drop(old); // Is this needed?
    } // old would drop here anyways?

    pub fn replace(&self, val: T) -> T {
        mem::replace(unsafe { &mut *self.value.get() }, val)
    }
}
So why not have set do only this:
pub fn set(&self, val: T) {
    self.replace(val);
}
Or does std::ptr::read do something I don't understand?
It is not needed, but calling drop explicitly can help make the code easier to read in some cases. If we only wrote it as a call to replace, it would look like a wrapper function for replace, and a reader might lose the context that it does an additional action on top of calling the replace method (dropping the previous value). At the end of the day, though, which version to use is somewhat subjective, and it makes no functional difference.
That being said, the real reason is that it did not always drop the previous value when set. Cell<T> previously implemented set to overwrite the existing value via unsafe pointer operations. It was later modified in rust-lang/rust#39264: Extend Cell to non-Copy types so that the previous value would always be dropped. The writer (wesleywiser) likely wanted to more explicitly show that the previous value was being dropped when a new value is written to the cell so the pull request would be easier to review.
Personally, I think this is a good usage of drop since it helps to convey what we intend to do with the result of the replace method.
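As a quick illustration that the explicit drop makes no functional difference, here is a small sketch using the standard Cell; the Noisy type is made up purely to print when a value is dropped:

use std::cell::Cell;

// Hypothetical type that announces when it is dropped (for illustration only).
struct Noisy(i32);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping Noisy({})", self.0);
    }
}

fn main() {
    let cell = Cell::new(Noisy(1));
    // The old Noisy(1) is dropped inside set(), before set() returns,
    // regardless of whether the drop is written explicitly or left implicit.
    cell.set(Noisy(2));
    println!("set() has returned");
    // Noisy(2) is dropped when `cell` goes out of scope here.
}

Running this prints "dropping Noisy(1)" before "set() has returned", which is the behaviour introduced by the pull request mentioned above.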

Retrieving/returning gamestate from erlang's message passing construct

I was looking at some games made using Erlang and I found a simple tic-tac-toe game here. I understood the game, but I have a simple question: the author uses io:format() to show the game state. So when I make a move like
gameclient:make_move(Player1, ChallengedPlayer, Message),
all I get in return is
{make_move,"player1",a3}
What I want to know is how I can retrieve the current game state when calling the function make_move/3.
I don't think using mnesia is a good option here.
Can anyone suggest a way to retrieve/return the game state rather than just printing it using io:format?
You can use an ETS table, for example.
Create the table at startup:
ets:new(tik_tak_tab, [public, {read_concurrency, true}, ordered_set, named_table]).
Store data into the table:
loop(Name) ->
    receive
        { msg, Message } ->
            ets:insert(tik_tak_tab, {state, Message}),
            loop(Name)
    end.
Add a new function to retrieve the state:
some_func() ->
    case ets:lookup(tik_tak_tab, state) of
        [{state, Message}] -> Message;
        _ -> error
    end.
There is also a cheap way: using records.
You may check the details here.
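As a rough sketch of that record-based idea (the record and message names here are made up), you can keep the game state in the server loop and send it back on request instead of printing it:

-record(gamestate, {board = [], turn = player1}).

loop(State) ->
    receive
        {make_move, Player, Square} ->
            NewState = State#gamestate{board = [{Player, Square} | State#gamestate.board]},
            loop(NewState);
        {get_state, From} ->
            From ! {gamestate, State},
            loop(State)
    end.

A caller can then do GamePid ! {get_state, self()} and receive {gamestate, State} to obtain the current state.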

Why does Rust not have a return value in the main function, and how to return a value anyway?

In Rust the main function is defined like this:
fn main() {
}
This function does not allow for a return value though. Why would a language not allow for a return value and is there a way to return something anyway? Would I be able to safely use the C exit(int) function, or will this cause leaks and whatnot?
As of Rust 1.26, main can return a Result:
use std::fs::File;

fn main() -> Result<(), std::io::Error> {
    let f = File::open("bar.txt")?;
    Ok(())
}
In this case the process exits with error code 1 when an error occurs. If you use File::open("bar.txt").expect("file not found"); instead, the program panics and exits with code 101 (at least on my machine).
Also, if you want to return a more generic error, use:
use std::error::Error;
...
fn main() -> Result<(), Box<dyn Error>> {
    ...
}
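For reference, a complete, runnable version of that sketch (reusing the bar.txt example from above) looks like this:

use std::error::Error;
use std::fs::File;

fn main() -> Result<(), Box<dyn Error>> {
    // Any error type implementing std::error::Error can be propagated here;
    // the ? operator converts it into a Box<dyn Error> automatically.
    let _f = File::open("bar.txt")?;
    Ok(())
}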
std::process::exit(code: i32) is the way to exit with a code.
Rust does it this way so that there is a consistent explicit interface for returning a value from a program, wherever it is set from. If main starts a series of tasks then any of these can set the return value, even if main has exited.
Rust does have a way to write a main function that returns a value, however it is normally abstracted within stdlib. See the documentation on writing an executable without stdlib for details.
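As a minimal sketch, exiting with an explicit status code looks like this (note that std::process::exit terminates the process immediately and does not run pending destructors):

use std::process;

fn main() {
    // ... do some work ...

    // Exit the whole process with status code 2.
    process::exit(2);
}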
As was noted by others, std::process::exit(code: i32) is the way to go here
More information about why is given in RFC 1011: Process Exit. Discussion about the RFC is in the pull request of the RFC.
The reddit thread on this has a "why" explanation:
Rust certainly could be designed to do this. It used to, in fact.
But because of the task model Rust uses, the fn main task could start a bunch of other tasks and then exit! But one of those other tasks may want to set the OS exit code after main has gone away.
Calling set_exit_status is explicit, easy, and doesn't require you to always put a 0 at the bottom of main when you otherwise don't care.
Try:
use std::process::ExitCode;

fn main() -> ExitCode {
    ExitCode::from(2)
}
Take a look at the docs.
or:
use std::process::{ExitCode, Termination};

pub enum LinuxExitCode { E_OK, E_ERR(u8) }

impl Termination for LinuxExitCode {
    fn report(self) -> ExitCode {
        match self {
            LinuxExitCode::E_OK => ExitCode::SUCCESS,
            LinuxExitCode::E_ERR(v) => ExitCode::from(v)
        }
    }
}

fn main() -> LinuxExitCode {
    LinuxExitCode::E_ERR(3)
}
You can set the return value with std::os::set_exit_status (an unstable API that has since been removed from the standard library).

why does dart create closures when referencing a method?

void main() {
  A one = new A(1);
  A two = new A(2);
  var fnRef = one.getMyId; // A closure created here
  var anotherFnRef = two.getMyId; // Another closure created here
}
class A {
  int _id;
  A(this._id);
  int getMyId() {
    return _id;
  }
}
According to the Dart language tour, referencing methods like this creates a new closure each time. Does anyone know why it does this? I can understand creating closures when defining a method body, since the body can use variables from an outer scope. But when just referencing a method like the above, why create a closure? The method body isn't changing, so it can't use any of the variables available in that scope, can it?

I noticed in a previous question I asked that referencing methods like this effectively binds them to the object they were referenced from. So in the above example, if we call fnRef() it will behave like one.getMyId(). Is the closure used just for binding the calling context? ... I'm confused :S
UPDATE
In response to Ladicek. So does that mean that:
void main() {
  var fnRef = useLotsOfMemory();
  // did the closure created in the return statement close on just 'aVeryLargeObj',
  // or did it close on all of the 'veryLargeObjects', thus keeping them all in memory
  // at this point where they aren't needed?
}

useLotsOfMemory() {
  // create lots of 'veryLarge' objects
  return aVeryLargeObj.doStuff;
}
Ladicek is right: accessing a method as a getter will automatically bind the method.
In response to the updated question:
No. It shouldn't keep the scope alive. Binding closures are normally implemented as if you invoked a getter of the same name:
class A {
  int _id;
  A(this._id);
  int getMyId() => _id;

  // The implicit getter for getMyId. This is not valid
  // code, but it explains how dart2js implements it. The VM
  // probably has a similar mechanism.
  Function get getMyId { return () => this.getMyId(); }
}
When implemented this way you will not capture any variable that is alive in your useLotsOfMemory function.
Even if it really was allocating the closure inside the useLotsOfMemory function, it wouldn't be clear if it kept lots of memory alive.
Dart does not specify how much (or how little) is captured when a closure is created. Clearly it needs to capture at least the free variables of itself. This is the minimum. The question is thus: "how much more does it capture"?
The general consensus seems to be to capture every variable that is free in some closure. All local variables that are captured by some closure are moved into a context object and every closure that is created will just store a link to that object.
Example:
foo() {
  var x = new List(1000);
  var y = new List(100);
  var z = new List(10);
  var f = () => y; // y is free here.
  // The variables y and z are free in some closure.
  // The returned closure will keep both alive.
  // The local x will be garbage collected.
  return () => z; // z is free here.
}
I have seen Scheme implementations that only captured their own free variables (splitting the context object into independent pieces), so less is possible. However, in Dart this is not a requirement, and I wouldn't rely on it. For safety I would always assume that all captured variables (independent of who captures them) are kept alive. I would also assume that bound closures are implemented similarly to what I showed above and that they keep a strict minimum of memory alive.
That's exactly right -- the closure captures the object on which the method will be invoked.

C++ equivalent of .NET's Task.Delay?

I'm writing a C++/CX component to be consumed by Windows Store apps. I'm looking for a way to accomplish what Task.Delay(1000) does in C#.
Old Question, but still unanswered.
You can use
#include <chrono>
#include <thread>
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
This will need C++11, which shouldn't be a problem when using C++/CX.
After one year of using C++/CX, I have a general and reasonably correct answer to this question.
This link (from the Visual C++ Parallel Patterns Library documentation) includes a snippet for a function called complete_after(). That function creates a task that will complete after the specified number of milliseconds. You can then define a continuation task that will execute afterwards:
void MyFunction()
{
    // ... Do a first thing ...
    complete_after(1000)
        .then([]()
        {
            // Do the next thing, on the same thread.
        }, concurrency::task_continuation_context::use_current());
}
Or better yet, if you use Visual C++'s coroutines capabilities simply type:
concurrency::task<void> MyFunctionAsync()
{
// ... Do a first thing ...
co_await complete_after(1000);
// Do the next thing.
// Warning: if not on the UI thread (e.g., on a threadpool thread), this may resume on a different thread.
}
You could create a concurrency::task, wait for 1000 time units inside it, and then call the task's .then method. This ensures that at least 1000 time units pass between the time you create the task and the time the continuation gets executed.
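Here is a minimal sketch of that approach (my own illustration; the delay task simply blocks a threadpool thread with sleep_for, and the function name is made up):

#include <ppltasks.h>
#include <chrono>
#include <thread>

void DoWorkAfterDelay()
{
    // Create a task whose only job is to wait roughly 1000 ms on a threadpool thread...
    concurrency::create_task([]()
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
    })
    // ...then chain the real work as a continuation via .then.
    .then([]()
    {
        // Do the delayed work here (runs on a threadpool thread by default).
    });
}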
I'm not going to claim to be a wizard - I'm still fairly new to UWP and C++/CX, but what I'm using is the following:
public ref class MyClass sealed {
public:
    MyClass()
    {
        m_timer = ref new Windows::UI::Xaml::DispatcherTimer;
        m_timer->Tick += ref new Windows::Foundation::EventHandler<Platform::Object^>(this, &MyClass::PostDelay);
    }

    void StartDelay()
    {
        // Set the interval via a TimeSpan value; assigning directly to
        // m_timer->Interval.Duration only modifies a temporary copy.
        Windows::Foundation::TimeSpan delay;
        delay.Duration = 200 * 10000; // 200 ms expressed in 100-nanosecond units
        m_timer->Interval = delay;
        m_timer->Start();
    }

    void PostDelay(Platform::Object^ sender, Platform::Object^ args)
    {
        m_timer->Stop();
        // Do some stuff after the delay
    }

private:
    Windows::UI::Xaml::DispatcherTimer^ m_timer;
};
The main advantages over other approaches are that:
- it's non-blocking
- you're guaranteed to be called back on the XAML UI thread
