Dump execution data at run time - code-coverage

I'm using JaCoCo to generate code coverage reports, and I have a number of scenarios for which I need to generate separate reports. The problem is that the program is extremely large and takes around 2 minutes to start and load all the class files.
I want to fetch the execution data at run time as soon as one of those scenarios is completed and then start with the next scenario, instead of restarting the server for each scenario.
Is there a way to do so?

Everything below is taken from the official JaCoCo documentation at http://www.jacoco.org/jacoco/trunk/doc/
The Java Agent described at http://www.jacoco.org/jacoco/trunk/doc/agent.html has the option output:
file: At VM termination execution data is written to the file specified in the destfile attribute.
tcpserver: The agent listens for incoming connections on the TCP port specified by the address and port attribute. Execution data is written to this TCP connection.
tcpclient: At startup the agent connects to the TCP port specified by the address and port attribute. Execution data is written to this TCP connection.
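For example, starting the application under test with the agent in tcpserver mode could look roughly like the line below (the agent path, address and port are placeholders, not values from the question); adding jmx=true to the same option list additionally enables the JMX interface described next:
java -javaagent:/path/to/jacocoagent.jar=output=tcpserver,address=127.0.0.1,port=6300 -jar application.jar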
The agent also has the option jmx:
If set to true the agent exposes functionality via JMX
The functionality exposed via JMX, as described in the JavaDoc, provides among others the following three methods:
byte[] getExecutionData(boolean reset)
Returns current execution data.
void dump(boolean reset)
Triggers a dump of the current execution data through the configured output.
void reset()
Resets all coverage information.
Again from the documentation, there is also the Ant task dump -
http://www.jacoco.org/jacoco/trunk/doc/ant.html:
This task allows to remotely collect execution data from another JVM without stopping it.
Remote dumps are useful for long running Java processes like application servers.
the dump command of the Command Line Interface -
http://www.jacoco.org/jacoco/trunk/doc/cli.html
and the dump goal of the jacoco-maven-plugin - http://www.jacoco.org/jacoco/trunk/doc/dump-mojo.html
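For example, if the agent was started in tcpserver mode as above, a dump can be requested from the command line roughly like this (the jacococli.jar path, address, port and destination file are placeholders); running it after each scenario, optionally with --reset, yields one .exec file per scenario without restarting the server:
java -jar jacococli.jar dump --address 127.0.0.1 --port 6300 --destfile scenario1.exec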
API usage examples include:
MBeanClient.java - This example connects to a coverage agent to collect execution data over JMX.
ExecutionDataClient.java - This example connects to a coverage agent to collect execution data over the remote protocol.
ExecutionDataServer.java - This example starts a socket server to collect execution data from agents over the remote protocol.
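A minimal sketch along the lines of the MBeanClient.java example, assuming the agent was started with jmx=true and the target JVM has remote JMX enabled on port 9999 (the JMX URL, port and output file name below are assumptions, not values from the question):
import java.io.FileOutputStream;
import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CoverageDumper {

    // Subset of the agent's JMX interface (see the MBeanClient.java example).
    interface IProxy {
        byte[] getExecutionData(boolean reset);
        void dump(boolean reset);
        void reset();
    }

    public static void main(String[] args) throws Exception {
        // Connect to the JVM that runs with the JaCoCo agent (jmx=true).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            IProxy proxy = MBeanServerInvocationHandler.newProxyInstance(
                    connection, new ObjectName("org.jacoco:type=Runtime"),
                    IProxy.class, false);

            // Fetch the execution data collected so far and reset the counters,
            // so the next scenario starts from a clean state.
            byte[] data = proxy.getExecutionData(true);
            try (FileOutputStream out = new FileOutputStream("scenario1.exec")) {
                out.write(data);
            }
        } finally {
            connector.close();
        }
    }
}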

Related

How can I access the real-time driver log (not with a lag of 5 min) other than through the Azure Databricks Spark UI?

I am trying to integrate the driver logs with the Control-M scheduler.
How can I access the real-time driver log (not with a lag of 5 min) other than through the Azure Databricks Spark UI, either using some API or by accessing the location where the logs are written in real time?
I am also planning to do Elastic analysis on top of it.
Such things (real-time collection of metrics or logs) are usually done via the installation of some agent (for example, Filebeat) through init scripts (global or cluster-level init scripts).
The actual script content heavily depends on the type of agent used, but Databricks' documentation contains some examples of that:
Blog post on setting up the Datadog integration
Notebook that shows how to set up an init script for Datadog

How to monitor gRPC Server?

Does the gRPC server spawn a separate thread for each incoming request?
I think Prometheus helps monitor incoming and outgoing traffic. But how do I monitor the gRPC server itself: threads (idle/active), memory usage (heap), IO, sessions, etc.?
Finally, any documentation on gRPC server internals would help.
By default the server utilizes a cached thread pool, but we can provide another one while building the server instance:
ServerBuilder<?> builder = ServerBuilder.forPort(port)
        .executor(Executors.newFixedThreadPool(10))
        // ...
        ;
From the javadoc of the "executor" method:
/**
 * Provides a custom executor. It's an optional parameter. If the user has not
 * provided an executor when the server is built, the builder will use a static
 * cached thread pool. The server won't take ownership of the given executor.
 * It's caller's responsibility to shut down the executor when it's desired.
 *
 * @return this
 * @since 1.0.0
 */
public abstract T executor(@Nullable Executor executor);
You can give the threads in your pool a recognizable name and try to monitor their activity with VisualVM.
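As a rough sketch (not from the original question; the port, pool size and reporting interval are arbitrary), a named fixed-size pool can be passed to the builder and its statistics polled, logged, or exported to a monitoring system:
import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MonitoredGrpcServer {
    public static void main(String[] args) throws Exception {
        // Named threads are easy to spot in VisualVM or a thread dump.
        ThreadFactory namedFactory = new ThreadFactory() {
            private final AtomicInteger count = new AtomicInteger();
            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, "grpc-worker-" + count.incrementAndGet());
            }
        };
        // A fixed pool exposes active/total thread counts and queue size.
        ThreadPoolExecutor executor =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(10, namedFactory);

        Server server = ServerBuilder.forPort(50051)
                .executor(executor)
                // .addService(new MyServiceImpl())  // hypothetical service
                .build()
                .start();

        // Periodically report pool statistics (could also be exported via JMX
        // or Prometheus instead of printing).
        ScheduledExecutorService reporter = Executors.newSingleThreadScheduledExecutor();
        reporter.scheduleAtFixedRate(() -> System.out.printf(
                "active=%d poolSize=%d queued=%d%n",
                executor.getActiveCount(), executor.getPoolSize(),
                executor.getQueue().size()), 5, 5, TimeUnit.SECONDS);

        server.awaitTermination();
    }
}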

Erlang ssh_sftp start_channel function call is failing

We have a distributed system built on Erlang with one server node and hundreds of client nodes (the systems are distributed over the intranet). We have a requirement that all the client nodes connect to the server node and try to download some file (mostly the same file will be accessed by all client nodes) simultaneously using SFTP. The steps we follow for downloading the file are:
Establish an SSH SFTP connection between the server node and the client node with the function call:
ssh_sftp:start_channel/2
Then read the file with the function call:
ssh_sftp:read_file/2
The problem we are facing is that when the number of clients grows, a few client nodes fail to establish a connection to the server node, i.e. the ssh_sftp:start_channel/2 function call fails.
Can somebody please explain:
Is there any limitation on the number of SFTP sessions we can establish on a single system?
What are the possible reasons for the connection request to fail?
Is there anything wrong with this approach?
Is there any better solution that guarantees all client nodes will be able to connect to the server and download the file?
Observation: We tried to connect 25 client nodes to the server; during the first try only 2 nodes failed to connect, and on the second try 5 nodes failed to connect. Why this random behavior?
I think I can answer some of the questions below (correct me if I am wrong):
Erlang is really strong at this kind of concurrency, so your issue here is more likely the capacity of your physical hardware (the server).
I don't really know what the issue is in your project, but my telecom project can easily handle a thousand calls, with 2 processes handling each call: one master process handles the session (connection), and the other watches the master process and handles its errors, so that the connection does not fail.

How to ensure the receiving application is up and running in client-server communication?

I am presently working on a client-server solution to transfer files to another machine via a socket network connection. Since I intend to do some evaluation on the receiving end as well, I am assuming that I will need to have some kind of client or server programme running there, too.
I am fairly new to the whole client-server thing and therefore have the following elementary question:
My present understanding is that client and server will be two independent programmes running on two different machines. How would one typically ensure that the communication partner (i.e., the server when sending from a client and the client when sending from a server) is actually up and running on the remote machine that I want to transfer a file to?
So far, I have been looking into the following options:
In the sending programme, include SSH access to the remote machine and start an instance of the receiving programme on the remote machine.
Have the receiving programme run as a daemon process on the remote machine. This would mean that the receiving programme should always be running on the remote machine. However, how would I know whether the process has crashed or has been shut down for some reason, and how would one recover from that without option 1) above?
So, my main question is: Are there any additional options that might be worth considering?
Thanks for your view on this!
Depending on how your client-server messages are set up, a ping message (I don't mean ICMP ping, but the basic idea), to which the server responds with "I am alive", would help. This way at least you know the server end is running.
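A minimal sketch of such a check on the sending side, assuming the receiving programme answers a plain-text PING line with PONG (the host, port, timeout and message format are all assumptions, not part of the question):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetSocketAddress;
import java.net.Socket;

public class HealthCheck {
    // Returns true if the receiver accepts a connection and answers the ping.
    public static boolean isReceiverAlive(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 2000); // 2 s connect timeout
            socket.setSoTimeout(2000);                               // 2 s read timeout
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println("PING");
            return "PONG".equals(in.readLine());
        } catch (IOException e) {
            return false; // unreachable or not answering: treat as down
        }
    }
}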
It is not uncommon in production environments for monitoring systems to be put in place for this. Another option worth considering is xinetd: services that get started on incoming connections.
There are probably newer ways to achieve the automatic start/restart, or start on connection, with systemd/systemctl, but I am not familiar enough with them to give you the specifics.
A somewhat crude but effective means may be a cron job that periodically runs a script to keep the service up.

How to do 2-way communication between user mode and kernel mode

I have written a driver that extracts a value from the IRP buffer. Based on this keyword I have to pass or discard the IRP. For that I need to communicate with a database, which is not easy from a kernel-mode driver, so I am using an application (an exe) that returns true or false, based on which I will pass or discard the IRP.
I want to link the driver with the application so that I get the data in the client application.
I thought about using a temp file that could act as a pipe.
Please suggest something.
I would go with IOCTLs.
The application communicating with the database starts by sending one or more IOCTLs to the driver. Let's call IOCTLs of this type IOCTL-1.
The completion of IOCTL-1 indicates a request from the driver to the database. The request details can be passed in the IOCTL's output buffer.
The application detects the completion of IOCTL-1, retrieves the request details, runs the query and passes the results to the driver using a different IOCTL (IOCTL-2). Then it re-sends IOCTL-1 so that the driver can issue another request.
