I want to collect operating-system parameters in my MBean so that, after registering it, I can see those values in JConsole. I have collected some parameters, but I can't get values for ProcessCpuTime or SystemCpuLoad. I tried the OperatingSystemMXBean interface, but it doesn't work, and I have read that those methods need APIs which are not supported on Windows. So is there another way to calculate those values mathematically? Please help me.
On the HotSpot JVM, the bean returned by ManagementFactory.getOperatingSystemMXBean() also implements the richer com.sun.management.OperatingSystemMXBean interface, which declares getProcessCpuTime() and (since Java 7) getSystemCpuLoad(). You can reach those methods by casting:

import java.lang.management.ManagementFactory;
...
com.sun.management.OperatingSystemMXBean os =
        (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
There's some variability between the Java 6 and Java 7 versions of this class, which can be observed with this Groovy script:
import java.lang.management.*;
os = ManagementFactory.getOperatingSystemMXBean();
println os.getClass().getName();
try { println "Process CPU Load:${os.getProcessCpuLoad()}"; } catch (e) {}
try { println "Process CPU Time:${os.getProcessCpuTime()}"; } catch (e) {}
try { println "System CPU Load:${os.getSystemCpuLoad()}"; } catch (e) {}
Java 6 Output:
com.sun.management.OperatingSystem
Process CPU Time:79810111600
Java 7 Output:
com.sun.management.OperatingSystem
Process CPU Load:-1.0
Process CPU Time:1840811800
System CPU Load:0.4902940980431365
So it's not as useful as the Linux (et al.) variant, but you may get what you're looking for.
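Where getSystemCpuLoad() isn't available, you can approximate a per-process CPU load yourself by sampling getProcessCpuTime() twice and dividing the CPU-time delta by the elapsed wall time multiplied by the processor count. A minimal sketch (the class name CpuLoadSample and the 200 ms interval are my own choices, not anything from the JDK):

```java
import java.lang.management.ManagementFactory;

public class CpuLoadSample {
    // Approximates this process's CPU load as:
    //   (CPU time used in the interval) / (wall time elapsed * processor count)
    public static double sampleProcessCpuLoad(long intervalMillis) throws InterruptedException {
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long cpuBefore = os.getProcessCpuTime();   // nanoseconds of CPU time used so far
        long wallBefore = System.nanoTime();
        Thread.sleep(intervalMillis);
        long cpuDelta = os.getProcessCpuTime() - cpuBefore;
        long wallDelta = System.nanoTime() - wallBefore;
        // Fraction of the whole machine's capacity this process used during the interval.
        return (double) cpuDelta / ((double) wallDelta * os.getAvailableProcessors());
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Approx. process CPU load: " + sampleProcessCpuLoad(200));
    }
}
```

Note that getProcessCpuTime() returns -1 on platforms where the JVM can't provide it, so real code should guard for that before trusting the result.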
I am using dask in a Jupyter notebook on a Linux server to run Python functions on multiple CPUs. The functions contain standard print statements, and I would like the printed output to appear in the notebook right below the cell. However, the output all appears in the console. Can anyone explain why this happens, and how to make the print output go to the notebook, or to both the console and the notebook?
The following is a simplified version of the problem:
import dask
import functools
from dask import compute, delayed

iter_list = [0, 1]

def iFunc(item):
    # Calling this function directly prints to the notebook
    # below the cell, as desired.
    print('Meme', item)

with dask.config.set(scheduler='processes', num_workers=2):
    func1 = functools.partial(iFunc)
    ret = compute([delayed(func1)(item) for item in iter_list])
    # Surprisingly, "Meme 0" and "Meme 1" print only to the console,
    # not the notebook. Not desired, and hard to debug. Any clue?
The whole point of dask is leveraging multiple threads, processes, or nodes/machines to distribute work. The workers you create are therefore not on the same thread as your client, and may not be on the same process, or even the same machine (or like, in the same country) as your client, depending on how you set up your cluster.
If you start a LocalCluster from your jupyter notebook, whether you're using threads or processes, you should see printed output appearing as output in the cells which execute jobs on the workers:
In [1]: import dask.distributed as dd
In [2]: client = dd.Client(n_workers=4)
In [3]: def job():
...: print("hello from a worker!")
In [4]: client.submit(job).result()
hello from a worker!
However, if a different process is spinning up your workers, it is up to that process to decide how to handle stdout. So if you're spinning up workers using the jupyterlab terminal, stdout will appear there. If you're spinning up workers in a kubernetes pod, stdout will appear in the worker logs. Dask doesn't actively manage standard out, so it's up to you to handle this. Note that this also applies to logging - neither stdout nor logs are captured by dask. This is actually a really important design feature - many distributed systems have their own systems for managing the standard out & logging of nodes, and dask does not want to impose its own parallel/conflicting system for handling output. The main focus of dask is executing the tasks, not managing a distributed logging system.
That said, dask does have the infrastructure for passing around messages, and this is something the package could support. There is an open issue and pull request attempting to add this ability as a feature, but it looks like there are a lot of open design questions that would need to be resolved before this could be added. Many of them revolve around the issues I raised above - how to add a clean distributed logging feature without overburdening the scheduler, complicating the already complex set of configuration options, or overriding the important, existing logging systems users rely on. The dask core team seems to agree that this is a good idea, if the tough design questions can be resolved.
You certainly always have the option of returning messages. For example, the following would work:
In [10]: def job():
    ...:     import time  # needed for the timestamps below
    ...:     return_blob = {"diagnostics": {}, "messages": [], "return_val": None}
    ...:     start = time.time()
    ...:     return_blob["diagnostics"]["start"] = start
    ...:
    ...:     try:
    ...:         return_blob["messages"].append("raising error")
    ...:         # this causes a ZeroDivisionError
    ...:         return_blob["return_val"] = 1 / 0
    ...:     except Exception as e:
    ...:         return_blob["diagnostics"]["error"] = e
    ...:
    ...:     return_blob["diagnostics"]["end"] = time.time()
    ...:     return return_blob
    ...:
In [11]: client.submit(job).result()
Out[11]:
{'diagnostics': {'start': 1644091274.738912,
'error': ZeroDivisionError('division by zero'),
'end': 1644091274.7389162},
'messages': ['raising error'],
'return_val': None}
I recently set up a CentOS 7 VM to play around with GraalVM. I downloaded graalvm-1.0.0-rc1, installed NetBeans 8.2, and downloaded the FastR extension (via gu). I then wrote a simple Java program to test some of the various supported languages. Below is the code I wrote:
package javatest;

import org.graalvm.polyglot.*;
import java.io.PrintStream;
import java.util.Set;

public class JavaTest {
    public static void main(String[] args) {
        PrintStream output = System.out;
        Context context = Context.create();
        Set<String> languages = context.getEngine().getLanguages().keySet();
        output.println("Current Languages available in GraalVM: " + languages);
        // TODO code application logic here
        System.out.println("Java: Hello World");
        context.eval("js", "print('JavaScript: Hello World')");
        context.eval("R", "print('R: Hello World');");
    }
}
Output is as follows:
run:
Current Languages available in GraalVM: [R, js, llvm]
Java: Hello World
JavaScript: Hello World
FastR unexpected failure: error loading libR from: /usr/local/graalvm-1.0.0-rc1/jre/languages/R/lib/libR.so.
If running on NFI backend, did you provide location of libtrufflenfi.so as value of system property 'truffle.nfi.library'?
The current value is '/usr/local/graalvm-1.0.0-rc1/jre/lib/amd64/libtrufflenfi.so'.
Details: Access to native code is not allowed by the host environment.
Exception in thread "main" org.graalvm.polyglot.PolyglotException
at org.graalvm.polyglot.Context.eval(Context.java:336)
at javatest.JavaTest.main(JavaTest.java:32)
As you can see from the initial call listing the supported languages, it recognizes that R is installed, but once I call eval for that language it fails. The libtrufflenfi.so file is there and accessible, and I have even defined it as a run parameter (even though I shouldn't need to).
I can find nothing on why "access to native code is not allowed by the host environment" is being displayed and am at a loss. Any ideas on what I'm doing wrong? Note: I also tried the same test with Python and Ruby and got the same result, but removed those for the simplest test case.
This is a security feature of polyglot contexts created with the GraalVM polyglot API. By default, every language is isolated from the host environment, so it is not allowed to access Java classes, native code, or files in your filesystem. Currently, with GraalVM 1.0.0-RC1, the languages Ruby and R need native access to boot their environments; JavaScript and Python do not.
If you want to create a context with all access you can create the context like this:
Context.newBuilder().allowAllAccess(true).build();
You can also just selectively allow access to native code:
Context.newBuilder().allowNativeAccess(true).build();
Here is your example fixed:
package javatest;

import org.graalvm.polyglot.*;
import java.io.PrintStream;
import java.util.Set;

public class JavaTest {
    public static void main(String[] args) {
        PrintStream output = System.out;
        Context context = Context.newBuilder().allowAllAccess(true).build();
        Set<String> languages = context.getEngine().getLanguages().keySet();
        output.println("Current Languages available in GraalVM: " + languages);
        // TODO code application logic here
        System.out.println("Java: Hello World");
        context.eval("js", "print('JavaScript: Hello World')");
        context.eval("R", "print('R: Hello World');");
    }
}
Here are some more examples that use all-access contexts for Ruby and R:
http://www.graalvm.org/docs/graalvm-as-a-platform/embed/
I want to know how Erlang's VM preempts running code and switches contexts. How can this be done in a language such as C?
The trick is that the Erlang runtime controls the VM, so it can - entirely in userspace - keep track of how many VM instructions it has already executed (or, better yet, an estimate of the actual physical computation those instructions require, known as "reductions" in Erlang VM parlance) and, once that number exceeds some threshold, immediately swap process structs and resume the execution loop with the next process.
Think of it as something like this (in kind of a pseudo-C that may or may not actually be C, but I wouldn't know because I ain't a C programmer; you asked how you'd go about it in C, though, so I'll try my darndest):
void proc_execute(Proc* proc)
{
    /* I don't recall if Erlang's VM supports different
       reduction limits for different processes, but if it
       did, it'd be a rather intuitive way to define process
       priorities, i.e. making sure higher-priority processes
       get more reductions to spend */
    int rds = proc->max_reductions;
    for (; rds > 0; rds--) {
        /* Different virtual instructions might execute different numbers of
           physical instructions, so vm_execute_next_instruction will return
           however many reductions are left after executing that virtual
           instruction. */
        rds = vm_execute_next_instruction(proc, rds);
        if (proc->exited) break;
    }
}

void vm_loop(Scheduler* sched)
{
    Proc *proc;
    for (;;) {
        proc = sched_next_in_queue(sched);
        /* we'll assume that the proc will be null if the
           scheduler doesn't have any processes left in its
           list */
        if (!proc) break;
        proc_execute(proc);
    }
}

Proc* sched_next_in_queue(Scheduler* sched)
{
    if (!sched->current_proc->exited) {
        /* If the process hasn't exited yet, re-add it to the
           end of the queue so we can resume running it
           later */
        shift(sched->queue, sched->current_proc);
    }
    sched->current_proc = pop(sched->queue);
    return sched->current_proc;
}
This is obviously quite simplified (notably excluding/eliding a lot of important stuff like how VM instructions are implemented and how messages get passed), but hopefully it illustrates how (if I'm understanding right, at least) Erlang's preemptive scheduler and process model works on a basic level.
All Erlang code compiles to operation codes for Erlang's VM, and the VM executes those opcodes on OS threads created at VM startup.
Erlang code runs on virtual CPUs controlled by the VM, and the VM treats IO as an interrupt of those virtual CPUs, so the VM implements a machine and a scheduler much like an OS does. Because of the opcode layer and non-blocking IO, preemption can be implemented inside the VM itself in C.
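The reduction-budget preemption both answers describe can be made concrete with a runnable toy, written here in Java since the C above is avowedly pseudocode. Runnables stand in for virtual instructions, and the two-reduction budget is an invented number purely for illustration (the real BEAM budget is on the order of a few thousand reductions):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy cooperative scheduler: each "process" is a queue of opcodes (Runnables here),
// and the scheduler preempts a process once it has spent its reduction budget.
public class ToyScheduler {
    static final int MAX_REDUCTIONS = 2;

    static class Proc {
        final String name;
        final Deque<Runnable> ops;
        Proc(String name, List<Runnable> ops) { this.name = name; this.ops = new ArrayDeque<>(ops); }
        boolean exited() { return ops.isEmpty(); }
    }

    static void run(Deque<Proc> queue, StringBuilder trace) {
        while (!queue.isEmpty()) {
            Proc p = queue.pop();
            for (int rds = MAX_REDUCTIONS; rds > 0 && !p.exited(); rds--) {
                trace.append(p.name);      // record which process ran this step
                p.ops.pop().run();         // execute one "virtual instruction"
            }
            if (!p.exited()) queue.add(p); // preempted: back to the end of the queue
        }
    }

    public static void main(String[] args) {
        Runnable nop = () -> {};
        StringBuilder trace = new StringBuilder();
        Deque<Proc> queue = new ArrayDeque<>();
        queue.add(new Proc("A", List.of(nop, nop, nop)));
        queue.add(new Proc("B", List.of(nop, nop, nop)));
        run(queue, trace);
        System.out.println(trace); // AABBAB - the processes interleave
    }
}
```

The trace AABBAB shows the point: each process is forcibly requeued after two instructions, so neither can hog the scheduler even though nothing in the "process code" ever yields voluntarily.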
This is probably not of major importance, but I have noticed during testing that print and stdout both perform much faster in the Dart Editor than from the command line. From the command line, print takes around 36% longer than stdout. Running within the editor, however, stdout takes around 900% longer than print, although both are considerably faster than from the command line; i.e., print from a program running in the editor takes around 2.65% of the time it takes from the command line.
Some relative timings based on average performance from my tests:

Running program from command line (5000 iterations):
print  1700 milliseconds
stdout 1245 milliseconds

Running program within Dart-Editor (5000 iterations):
print  45 milliseconds
stdout 447 milliseconds
Can someone explain to me the reason for these differences – in particular why performance in the Dart-Editor is so much faster? Also, is it acceptable practice to use stdout and what are the pros and cons versus using print?
Why is the Dart Editor faster?
Because the output handling by the command-line console is just really slow; this blocks the output stream, and subsequently the call to print/stdout.
You can test this yourself - run the following Java program (with your own paths, of course):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public static void main(String[] args) {
    try {
        // the dart file does print and stdout in a loop
        Process p = Runtime.getRuntime().exec("C:\\eclipse\\dart-sdk\\bin\\dart.exe D:\\DEVELOP\\Dart\\Console_Playground\\bin\\console_playground.dart");
        BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
        StringBuilder buf = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            buf.append(line).append("\r\n");
        }
        System.out.print(buf.toString());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
On my machine, this is even slightly faster than the Dart Editor (which probably does something like buffering the input and rendering it periodically, but I don't really know).
You will also see that adding a Thread.sleep(1); into the loop will severely impact the performance of the dart program, because the stream is blocked.
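This blocking effect can be reproduced without Dart at all: push bytes through a pipe whose reader is deliberately slow, and the writer stalls as soon as the pipe's buffer fills. A sketch (the class name SlowReaderDemo, the 16-byte buffer, and the 1 ms sleep are arbitrary choices for the demo):

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// Demonstrates how a slow consumer back-pressures a producer through a pipe,
// analogous to a console that is slow to render stdout.
public class SlowReaderDemo {
    // Returns how long it took to push `count` bytes through a pipe
    // whose reader sleeps 1 ms per byte.
    public static long slowConsumerMillis(int count) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 16); // tiny buffer
        Thread reader = new Thread(() -> {
            try {
                while (in.read() != -1) Thread.sleep(1); // deliberately slow consumer
            } catch (Exception ignored) {}
        });
        reader.start();
        long start = System.nanoTime();
        for (int i = 0; i < count; i++) out.write('x'); // blocks once the buffer fills
        out.close();
        reader.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("writing 64 bytes took ~" + slowConsumerMillis(64) + " ms");
    }
}
```

On a typical machine the 64 writes take tens of milliseconds instead of microseconds, purely because the consumer is slow - the same back-pressure a slow console applies to print/stdout.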
Should stdout be used?
I think that's highly subjective. I, for one, do whatever lets me write code more quickly. When I just want to dump a variable, I use print(myvar);. But with stdout, you can do neat stuff like this: stdout.addStream(new File(r"D:\test.csv").openRead());. Of course, if performance is an issue, it depends on how your application will be used - for example, called by another program (where print is faster) vs. the command line (where stdout is faster, for some reason).
Why is stdout faster in command line?
I have no idea, sorry. It's the only environment I tested where print() is slower, so I'd guess it has something to do with how the console handles incoming data.
Is there a way to detect the platform (Windows / Linux) on which the website is running from Groovy / Grails?
System.properties['os.name']
will return the name of the OS, e.g. "Windows XP". So if you want to figure out whether you're running on Windows or not, you could do something like:
if (System.properties['os.name'].toLowerCase().contains('windows')) {
println "it's Windows"
} else {
println "it's not Windows"
}
Alternatively, org.apache.commons.lang.SystemUtils (from the Apache commons-lang project) exposes some boolean constants that provide the same information as the code above, e.g.
SystemUtils.IS_OS_MAC
SystemUtils.IS_OS_WINDOWS
SystemUtils.IS_OS_UNIX
More specific constants such as these are also available:
SystemUtils.IS_OS_WINDOWS_2000
SystemUtils.IS_OS_SOLARIS
SystemUtils.IS_OS_MAC_OSX
Or, for short:
if (System.env['OS']?.contains('Windows')) { println "it's Windows" }
since Groovy provides map-style access via getAt/putAt. Note that the OS environment variable is normally set only on Windows (typically to Windows_NT), so the safe-navigation operator ?. is needed to avoid a NullPointerException on other platforms.
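For comparison, the more portable os.name check translates directly to plain Java as well (OsCheck is just a made-up name for this sketch):

```java
public class OsCheck {
    // Same check as the Groovy snippet above, using the standard os.name property.
    public static boolean isWindows() {
        return System.getProperty("os.name").toLowerCase().contains("windows");
    }

    public static void main(String[] args) {
        System.out.println(isWindows() ? "it's Windows" : "it's not Windows");
    }
}
```

Unlike the OS environment variable, the os.name system property is defined on every JVM, so no null check is needed.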