I need to recreate the functionality of the jps tool programmatically. I need to find all running Java processes along with their ids so I can attach to them (similar to what JConsole does).
I thought the VirtualMachine API would help, but I did not get the expected result when I ran the following:
import com.sun.tools.attach.VirtualMachine;
import com.sun.tools.attach.VirtualMachineDescriptor;

import java.util.List;

public class ProcessList {
    public static void main(String[] args) {
        List<VirtualMachineDescriptor> vms = VirtualMachine.list();
        for (VirtualMachineDescriptor vm : vms) {
            System.out.println(vm.id());
        }
    }
}
When I run the code above, it returns just one ID, but when I run jps on the same machine I see several other processes.
jps uses an internal class of the Oracle/Sun JRE, sun.jvmstat.monitor.MonitoredHost. Its activeVms() method is used to obtain the list of all active VMs on a host. You can refer to the source of the sun.tools.jps.Jps class in OpenJDK to find out how the jps tool works under the hood.
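For illustration, a minimal sketch of that approach (sun.jvmstat.monitor is an internal, unsupported API; it lives in tools.jar on JDK 8, and on JDK 9+ the jdk.internal.jvmstat module must be exported to your code, e.g. with --add-exports):

import sun.jvmstat.monitor.MonitoredHost;

import java.util.Set;

public class JpsLike {
    public static void main(String[] args) throws Exception {
        // Ask the jstat monitoring layer for the ids of all locally running VMs
        MonitoredHost host = MonitoredHost.getMonitoredHost("localhost");
        Set<Integer> pids = host.activeVms();
        for (Integer pid : pids) {
            System.out.println(pid);
        }
    }
}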
This is the correct API; ultimately MonitoredHost#activeVms() and VirtualMachine.list() use the same discovery code via the jstat technology. Do you run jps on the command line as a different user? In that case you would see different JVMs.
See here for how jps is implemented.
Alternatively, you can do the following (steps 2 and 3 are sketched below):
1) Create platform-specific script files (.bat for Windows, .sh for Linux, etc.).
2) Use "wmic process" (Windows), "ps -ef" (Linux), etc., in those scripts to list the processes (pipe the result to filter out the Java processes).
3) Launch the above scripts using the Runtime API and read the output.
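A minimal sketch of steps 2-3 on Linux, folding the pipeline into a single shell invocation (the shell path and grep pattern are illustrative, not prescriptive):

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PsBasedList {
    public static void main(String[] args) throws Exception {
        // List all processes and keep the lines mentioning java;
        // the [j] trick stops grep from matching its own command line
        Process ps = Runtime.getRuntime()
                .exec(new String[]{"/bin/sh", "-c", "ps -ef | grep [j]ava"});
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(ps.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}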
I'm trying to create a GraalVM native image using the Maven plugin, but I'm having some issues.
Here is the config for the Maven plugin.
I'm using the GraalVM JDK (installed through SDKMAN):
$ java -version
openjdk version "16.0.1" 2021-04-20
OpenJDK Runtime Environment GraalVM CE 21.1.0 (build 16.0.1+9-jvmci-21.1-b05)
OpenJDK 64-Bit Server VM GraalVM CE 21.1.0 (build 16.0.1+9-jvmci-21.1-b05, mixed mode, sharing)
I have a simple main class like:
package it.r;

public class Main {
    public static void main(String[] args) {
        System.out.println("********");
        System.out.println(Main.class.getConstructors().length);
        System.out.println("********");
    }
}
When executing it using mvn exec:java -Dexec.mainClass=it.r.Main, I get this result:
********
1
********
But after running mvn package and executing the created native executable, the result is:
********
0
********
Why is this happening?
Here is the git repo to reproduce the issue.
This issue also seems to affect Jackson deserialization: in another example, Jackson fails to deserialize a YAML file because it can't find the constructors for my class.
When GraalVM native image builds your application into a native binary, it statically analyzes it.
Because the analysis is static, several dynamic features your application might use require explicit configuration, for example:
reflection
serialization
method handles
using resources (like ClassLoader.getResource())
JNI
This explicit configuration is provided as JSON configuration files (for example, reflect-config.json).
You can provide the config files manually, but you can also run your application with a tracing Java agent, which will record usages of the features requiring configuration.
In a nutshell, you run your application like this:
java -agentlib:native-image-agent=config-output-dir=/path/to/config-dir/
and exercise the code paths that use the features you want configured. This is important because the tracing agent can only record config for the code it actually saw running.
Then the output directory will contain a JSON file, for example looking like this:
[
  {
    "name": "StringCapitalizer",
    "methods": [{"name": "capitalize", "parameterTypes": ["java.lang.String"]}]
  },
  {
    "name": "StringReverser",
    "methods": [{"name": "reverse", "parameterTypes": ["java.lang.String"]}]
  }
]
This file lists the classes that need to be included in the analysis and in the resulting binary, along with the members that need to be accessible.
Creating it manually is fairly straightforward but a bit tedious, which is why the agent approach is preferred.
There’s also a programmatic way to configure classes and members be registered for reflection, but using it means you need to include a dependency on the GraalVM code into your app.
Classes accessed via reflection need to be registered in order to be included in the built native image; more info in the docs.
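Applied to the example above, a minimal reflect-config.json along these lines (placed under META-INF/native-image/ in the jar, or passed with -H:ReflectionConfigurationFiles=...) should make the constructors visible again:

[
  {
    "name": "it.r.Main",
    "allDeclaredConstructors": true
  }
]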
I have a .NET Core web API and need to connect to a C++ shared library (libCppAppOutput.so). For this I am using DllImport inside the controller of the web API, as in the code below:
[DllImport("libCppAppOutput.so", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi, EntryPoint = "methodNameinCppProject")]
public static extern int MethodNameInWebAPI(string param1, string param2);
When I am debugging it in Visual Studio in a Windows environment, I get an exception like:
An attempt was made to load a program with an incorrect format.
(0x8007000B).
This may be because of trying to load a Linux binary in a Windows environment; if I use a CppAppOutput.dll in place of libCppAppOutput.so, I can successfully call the methods inside the DLL. But my actual requirement is to run the entire application in a Linux environment using the .so file only, so I have created a Docker image with Linux as the target OS and tried to call the methods inside it by running the container with the below command:
docker run -p 8081:80 name_of_docker_image:tagname
Then I am getting an exception like:
Unable to load shared library 'libCppAppOutput.so' or one of its dependencies. In order to help diagnose loading problems, consider
setting the LD_DEBUG environment variable: libCppAppOutput.so: cannot
open shared object file: No such file or directory.
But I can see that the file libCppAppOutput.so is present inside the Docker image, using the following commands:
docker create --name name_of_container name_of_docker_image:tagname
docker start name_of_container
docker exec -ti name_of_container /bin/bash
root@xyzabc:/app# ls
I am using the below URL to test the application:
http://localhost:8081/launchUrl/MethodNameInWebAPI/param1/param2
Please help me in this regard: how can I call a .so file from a Linux Docker container?
If this issue needs to be elaborated further, kindly comment and I will do so.
Thanks in advance.
I was able to solve this issue with help from my colleagues and friends.
There was a dependency of libCppAppOutput.so (an XML parser). Even though that dependency package was installed in the container, it had not been configured when libCppAppOutput.so was created.
This was identified by executing the ldd libCppAppOutput.so command in bash.
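For anyone diagnosing the same error, a quick check along these lines (run inside the container, in the library's directory) shows which dependencies the dynamic loader cannot resolve:

# List the shared-object dependencies of the library and
# keep only the ones the loader cannot find
ldd libCppAppOutput.so | grep "not found"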
Thank you all for spending time reading and analyzing the question!
Have many more great coding days!
I'd like to import some RDF files into my local Virtuoso instance from a .NET application (i.e., automate Virtuoso's import function) by using some commands in the iSQL workspace.
The commands in PowerShell are:
>ISQL
>ld_dir_all('PATH', '*.rdf', 'ONTOLOGY');
>set isolation='uncommitted';
>rdf_loader_run();
That is, each command must finish before I run the next one.
I've tried to use PowerShellInstance in .NET, but when I combine all commands in this form: PowerShellInstance.AddScript("ISQL;ld_dir_all('PATH', '*.rdf', 'ONTOLOGY');set isolation='uncommitted';rdf_loader_run();"); there is an error.
So I need your help or ideas for automating the import of data into Virtuoso from a .NET application. (Maybe there is a solution in dotNetRDF!)
Your combined command is telling PowerShell to execute four commands on the OS command line. Instead, you need to tell PowerShell to run iSQL, which must then be told to execute the three statements.
Note that none of these statements is iSQL-specific; you're just executing them through iSQL. You can actually execute these statements through any data connection to Virtuoso, including ADO.NET (or ODBC, JDBC, etc.). I do not have specific advice for doing so with dotNetRDF.
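As a rough sketch of the ADO.NET/ODBC route (this assumes the System.Data.Odbc package and a Virtuoso ODBC DSN named VirtuosoDSN; the DSN name and credentials are placeholders):

using System.Data.Odbc;

class VirtuosoLoader
{
    static void Main()
    {
        // Placeholder connection details; point these at your Virtuoso instance
        using (var conn = new OdbcConnection("DSN=VirtuosoDSN;UID=dba;PWD=dba"))
        {
            conn.Open();
            // The same statements you would type into iSQL, run one at a time
            string[] statements =
            {
                "ld_dir_all('PATH', '*.rdf', 'ONTOLOGY')",
                "set isolation='uncommitted'",
                "rdf_loader_run()"
            };
            foreach (var sql in statements)
            {
                using (var cmd = new OdbcCommand(sql, conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}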
The problem
Most of the articles and books about Erlang I could find focus on creating long-running server-like applications, leaving the creation of command-line tools uncovered.
I have a multi-app rebar3 project consisting of 3 applications:
myweb - a Cowboy-based web service;
mycli - a command line tool to prepare assets for myweb;
mylib - a library used by both myweb and mycli, depends on a NIF.
As a result of the build I want to get the following artifacts:
an executable for the web part that is going to serve HTTP requests;
an executable command line tool for the assets preparation;
a set of libraries used by the above.
Requirements
CLI should behave like a sane non-interactive command line tool: handle arguments, deal with stdin/stdout, return non-zero exit code on error, etc;
both server and CLI should be able to use NIFs;
it should be easy to package the artifacts as a set of deb/rpm packages, so both server and CLI should reuse common dependencies.
Things tried so far
Building an escript
One of the ways I've seen in the wild is to create a self-contained escript file. At least rebar3 and relx do so. So I gave it a try.
Pros:
has support for command line arguments;
in case of errors, it returns non-zero exit code.
Cons:
embeds all the dependencies in a single file making it impossible to reuse mylib;
since *.so files get embedded into the resulting escript file, they cannot be loaded at runtime, thus NIFs don't work (see erlang rebar escriptize & nifs);
rebar3 escriptize doesn't handle dependencies well (see bug 1139).
Unknowns:
Should the CLI app become a proper OTP application?
Should it have a supervision tree?
Should it be started at all?
If so, how do I stop it when the assets have been processed?
Building a release
Another way to build a command-line tool is described in the "How I start: Erlang" article by Fred Hebert.
Pros:
Each of the dependency applications gets its own directory, making it easy to share and package them.
Cons:
there's no defined entry point like escript's main/1;
as a consequence both command line arguments and exit code must be handled manually.
Unknowns:
How to model the CLI OTP app in a non-interactive way?
How to stop the app when the assets have been processed?
Neither of the approaches above seems to work for me.
It would be nice to get the best of both worlds: get the infrastructure that is provided by escript such as main/1 entry point, command line parameters, and exit code handling while still having a nice directory structure that is easy to package and which doesn't hinder the use of NIFs.
Regardless of whether you are starting a long-running daemon-like application in Erlang or a CLI command, you always need the following:
erts application - the VM and kernel in a particular version
Erlang OTP applications
Your applications' dependencies
CLI entry point
Then, in either case, the CLI entry point has to start the Erlang VM and execute the code it is supposed to execute in the given situation. Afterwards it will either exit or continue running - the latter for a long-running application.
The CLI entry point can be anything that starts an Erlang VM, e.g. an escript script, sh, bash, etc. The obvious advantage of escript over generic shell is that escript is already being executed in the context of an Erlang VM, so no need to handle starting/stopping the VM.
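For example, a minimal escript entry point for mycli might look like this (mylib:process_assets/1 is a hypothetical function standing in for your real asset-processing code, and the ebin path is a placeholder):

#!/usr/bin/env escript
%% Minimal sketch of a non-interactive CLI entry point:
%% parse arguments, call into regular modules, exit with a status code.
main([InputPath]) ->
    %% make mylib's compiled modules visible to the code server
    true = code:add_path("lib/mylib/ebin"),
    case mylib:process_assets(InputPath) of
        ok ->
            halt(0);
        {error, Reason} ->
            io:format(standard_error, "error: ~p~n", [Reason]),
            halt(1)
    end;
main(_) ->
    io:format(standard_error, "usage: mycli <path>~n", []),
    halt(64).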
You can start Erlang VM in two ways:
Use system-wide Erlang VM
Use an embedded Erlang release
In the first case you supply neither erts nor any OTP applications with your package; you only make a particular Erlang version a dependency of your application. In the second case you supply erts and all required OTP applications, along with your application's dependencies, in your package.
In the second case you also need to handle setting the code root properly when starting the VM. But this is quite easy, see the erl script that Erlang uses to start the system-wide VM:
# location: /usr/local/lib/erlang/bin/erl
ROOTDIR="/usr/local/lib/erlang"
BINDIR=$ROOTDIR/erts-7.2.1/bin
EMU=beam
PROGNAME=`echo $0 | sed 's/.*\///'`
export EMU
export ROOTDIR
export BINDIR
export PROGNAME
exec "$BINDIR/erlexec" ${1+"$#"}
This can be handled by scripts, for example the node_package tool that Basho uses to package their Riak database for all major operating systems. I am maintaining my own fork of it, which I am using with my own build tool called builderl. I just say that so you know that if I managed to customize it, you will be able to do so as well :)
Once the Erlang VM is started, your application should be able to load and start any application, either supplied with Erlang or with your application (and that includes the mylib library that you mentioned). Here are some examples of how this could be achieved:
escript example
See this builderl.esh example of how I handle loading other Erlang applications from builderl. That escript script assumes that the Erlang installation is relative to the folder from which it's executed. When it's a part of another application, like for example humbundee, the load_builderl.hrl include file compiles and loads bld_load, which in turn loads all remaining modules with bld_load:boot/3. Notice how I can use standard OTP applications without specifying where they are - builderl is being executed by escript, so all the applications are loaded from where they were installed (/usr/local/lib/erlang/lib/ on my system). If libraries used by your application, e.g. mylib, are installed somewhere else, all you need to do is add that location to the Erlang path, e.g. with code:add_path. Erlang will automatically load modules used in the code from folders added to the code path list.
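Something as small as this is enough (the absolute path is a placeholder for wherever mylib is installed):

%% Prepend mylib's ebin to the code path; modules are then loaded on demand
true = code:add_patha("/opt/myapp/lib/mylib/ebin"),
{module, mylib} = code:ensure_loaded(mylib).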
embedded Erlang
However, the same would hold if the application were a proper OTP release installed independently of the system-wide Erlang installation. That's because in that case the script is executed by the escript belonging to that embedded Erlang release rather than the system-wide version (even if one is installed). So it knows the location of all applications belonging to that release (including your applications). For example, riak does exactly that - in their package they supply an embedded Erlang release that contains its own erts and all dependent Erlang applications. That way riak can be started without Erlang even being installed on the host operating system. This is an excerpt from a riak package on FreeBSD:
% tar -tf riak2-2.1.1_1.txz
/usr/local/sbin/riak
/usr/local/lib/riak/releases/start_erl.data
/usr/local/lib/riak/releases/2.1.0/riak.rel
/usr/local/lib/riak/releases/RELEASES
/usr/local/lib/riak/erts-5.10.3/bin/erl
/usr/local/lib/riak/erts-5.10.3/bin/beam
/usr/local/lib/riak/erts-5.10.3/bin/erlc
/usr/local/lib/riak/lib/stdlib-1.19.3/ebin/re.beam
/usr/local/lib/riak/lib/ssl-5.3.1/ebin/tls_v1.beam
/usr/local/lib/riak/lib/crypto-3.1/ebin/crypto.beam
/usr/local/lib/riak/lib/inets-5.9.6/ebin/inets.beam
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.app
/usr/local/lib/riak/lib/bitcask-1.7.0/ebin/bitcask.beam
(...)
sh/bash
This doesn't differ much in principle from the above, apart from having to explicitly call the function that you want to execute when starting the Erlang VM (the entry point, or the main function as you called it).
Consider this script that builderl generates to start an Erlang application just to execute a specified task (generate the RELEASES file), after which the node shuts down:
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee -noshell -noinput -eval \"{ok, Cwd} = file:get_cwd(), release_handler:create_RELEASES(Cwd, \\\"releases\\\", \\\"releases/$APP_VSN/humbundee.rel\\\", []), init:stop()\""
This is a similar script but doesn't start any specific code or application. Instead, it starts a proper OTP release, so which applications are started and in what order depends on the release (specified by the -boot option).
#!/bin/sh
START_ERL=`cat releases/start_erl.data`
APP_VSN=${START_ERL#* }
run_erl -daemon ../hbd/shell/ ../hbd/log "exec erl ../hbd releases releases/start_erl.data -config releases/$APP_VSN/hbd.config -args_file ../hbd/etc/vm.args -boot releases/$APP_VSN/humbundee"
In the vm.args file you can provide additional paths to your applications if required, e.g.:
-pa lib/humbundee/ebin lib/yolf/ebin deps/goldrush/ebin deps/lager/ebin deps/yajler/ebin
In this example these are relative, but they could be absolute if your application is installed into a standard, well-known location. Also, this would only be required if you are using the system-wide Erlang installation and need to add the additional paths to locate your Erlang applications, or if your Erlang applications are located in a non-standard location (e.g. not in the lib folder, as Erlang/OTP requires). In a proper embedded Erlang release, where the applications are located in the code root/lib folder, Erlang is able to load those applications without specifying any additional paths.
Summing up and other considerations
The deployment of Erlang applications doesn't differ much from that of projects written in scripting languages, e.g. Ruby or Python projects. All those projects have to deal with similar issues, and I believe each operating system's package management deals with them one way or another:
Get to know how your operating system deals with packaging projects that have run-time dependencies.
See how other Erlang applications are packaged for your operating system, there are plenty of them that are usually distributed by all major systems: RabbitMQ, Ejabberd, Riak among others. Just download the package and unpack it to a folder, then you will see where all the files are placed.
EDIT - reference the requirements
Coming back to your requirements, you have the following choices:
Install Erlang as an OTP release system-wide, as an embedded Erlang, or as a bag with applications in some random folders (sorry Rebar)
You can have multiple entry points in the form of sh or escript scripts executing a selection of applications from the installed release. Both will work as long as you configure the code root and paths to those applications correctly (as outlined above).
Then each of your applications, myweb and mycli, would need to be executed in its own new context, e.g. start a new VM instance and execute the required application (from the same Erlang release). In the case of myweb the entry point can be an sh script that starts a new node according to the release (similar to Riak). In the case of mycli the entry point can be an escript, which finishes executing once the task is completed.
But it's entirely possible to create a short-running task that exits the VM even if it's started from sh - see the example above. In that case mycli would require separate release files - the script and boot files to boot the VM. And of course it's also possible to start a long-running Erlang VM from an escript.
I provided an example project that uses all these methods at once, humbundee. Once it's compiled it provides three access points:
The cmd release.
The humbundee release.
The builder.esh escript.
The first one is used to start the node for installation and then shut it down. The second is used to start the long-running Erlang application. The third is a build tool to install/configure the node. This is how the project looks once the release has been created:
$:~/work/humbundee/tmp/rel % ls | tr " " "\n"
bin
erts-7.3
etc
lib
releases
$:~/work/humbundee/tmp/rel % ls bin | tr " " "\n"
builderl.esh
cmd.boot
humbundee.boot
epmd
erl
escript
run_erl
to_erl
(...)
$:~/work/humbundee/tmp/rel % ls lib | tr " " "\n"
builderl-0.2.7
compiler-6.0.3
deploy-0.0.1
goldrush-0.1.7
humbundee-0.0.1
kernel-4.2
lager-3.0.1
mnesia-4.13.3
sasl-2.7
stdlib-2.8
syntax_tools-1.7
yajler-0.0.1
yolf-0.1.1
$:~/work/humbundee/tmp/rel % ls releases/hbd-0.0.1 | tr " " "\n"
builderl.config
cmd.boot
cmd.rel
cmd.script
humbundee.boot
humbundee.rel
humbundee.script
sys.config.src
The cmd entry point will use the applications deploy-0.0.1 and builderl-0.2.7, as well as the release files cmd.boot and cmd.script, and some OTP applications. The standard humbundee entry point will use all applications apart from builderl and deploy. The builderl.esh escript will use the applications deploy-0.0.1 and builderl-0.2.7. All come from the same embedded Erlang/OTP installation.
A small escript that then calls into code from 'conventional' modules could be a solution.
As an example, Concuerror is expected to be used as a command-line tool and uses an escript as its entry point. It handles command-line arguments via getopt. All the main code is in regular Erlang modules, which are included in the path with simple arguments to the escript.
As far as I understand, NIFs can then be loaded with regular -on_load attributes (Concuerror does not use NIFs).
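For completeness, the conventional -on_load pattern looks roughly like this (module and function names are illustrative; the .so is expected under the application's priv directory):

-module(mylib_nif).
-export([reverse/1]).
-on_load(init/0).

%% Load the NIF shared object from mylib's priv/ directory at module load time
init() ->
    SoName = filename:join(code:priv_dir(mylib), "mylib_nif"),
    erlang:load_nif(SoName, 0).

%% Stub that the NIF implementation replaces once loaded
reverse(_Bin) ->
    erlang:nif_error(nif_not_loaded).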
I have created a small test application, and when on a Linux machine I would like the messages that appear in the Erlang shell to appear in the Linux shell as well. I am not sure how it could be done, but is it possible?
I am running the application in detached mode (Erlang shell).
EDIT:
I meant passing the messages to the Linux shell.
Example:
When I start my application using
test application <start/stop>
I would like to get a message on the Linux shell: Test Application has successfully started.
But if I was like copying files, I would like the progress of the file copy to be displayed on the Linux shell like,
Copy successful. Number of files copied : 1
Copy successful. Number of files copied : 2
....
So is it possible to do the above by passing messages from Erlang to the Linux shell?
I believe you can use a logger for this purpose. For example, lager can redirect logs to several facilities at the same time, e.g. a file and the Erlang console.
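For instance, a sys.config fragment along these lines (the handler syntax varies between lager versions; this matches older 2.x/3.0-era lager) duplicates log output to both the console and a file:

%% sys.config (sketch)
[
 {lager, [
   {handlers, [
     {lager_console_backend, info},
     {lager_file_backend, [{file, "log/console.log"}, {level, info}]}
   ]}
 ]}
].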
Another option is to use run_erl, a utility which comes with the Erlang distribution and allows you to "redirect the standard input and standard output streams so that all output can be logged".
Try erl -man run_erl, or you can generate a "release" with rebar; it will generate startup scripts which use run_erl.
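A rough sketch of that approach (paths and the node name are placeholders):

# Create pipe and log directories, then start a detached node whose
# standard output run_erl captures to log files
mkdir -p /tmp/mynode/log
run_erl -daemon /tmp/mynode/ /tmp/mynode/log "exec erl -sname mynode -noshell"

# Follow the captured output (io:format messages included) from the Linux shell
tail -f /tmp/mynode/log/erlang.log.1

# Or attach to the node's shell interactively
to_erl /tmp/mynode/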