Does a gRPC server spawn a separate thread for each incoming request?
I think Prometheus helps monitor incoming and outgoing traffic. But how do I monitor gRPC server internals such as threads (idle/active), memory usage (heap), I/O, sessions, etc.?
Finally, any documentation on gRPC server internals would help.
By default the server uses a cached thread pool, but we can provide another one while building the server instance.
ServerBuilder<?> builder = ServerBuilder.forPort(port)
        .executor(Executors.newFixedThreadPool(10))
        // ...
        ;
From the javadoc of the "executor" method:
/**
 * Provides a custom executor. It's an optional parameter. If the user
 * has not provided an executor when the server is built, the builder
 * will use a static cached thread pool.
 *
 * The server won't take ownership of the given executor. It's the
 * caller's responsibility to shut down the executor when it's desired.
 *
 * @return this
 * @since 1.0.0
 */
public abstract T executor(@Nullable Executor executor);
You can give your pool a recognizable thread name and monitor its activity with VisualVM.
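For example, a minimal snippet-style sketch of naming the pool threads via a custom ThreadFactory so they are easy to find in VisualVM's Threads tab; the "grpc-worker" prefix and the pool size of 10 are arbitrary choices, not gRPC conventions:

import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

ThreadFactory namedFactory = new ThreadFactory() {
    private final AtomicInteger count = new AtomicInteger();
    @Override
    public Thread newThread(Runnable r) {
        // Named threads show up clearly in VisualVM and thread dumps
        return new Thread(r, "grpc-worker-" + count.incrementAndGet());
    }
};

ExecutorService pool = Executors.newFixedThreadPool(10, namedFactory);

Server server = ServerBuilder.forPort(port)
        .executor(pool) // remember: the server won't shut this pool down for us
        .build();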
I'm trying to create a reasonable setup for client-to-client communication for our existing infrastructure. I've been reading the docs for Spring, WebSocket, STOMP, SockJS and ActiveMQ for over a week now and I'm not sure whether what I am trying to do is feasible or sensible. The Spring server and the JavaScript client were up and running relatively quickly, and sending messages between clients just works (direct connection from JS client to Spring server). This setup won't suffice for our needs, so we decided to put dedicated brokers in between. Configuring ActiveMQ is a nightmare, probably because I don't really know where to start. I have not worked with a dedicated broker so far.
Environment
170 independent servers (Tomcat, Spring, SockJS, STOMP)
2 ActiveMQ (Artemis) brokers (load balance, failure safety)
a few thousand clients (JavaScript/.NET, SockJS, STOMP)
Requirement
I need every client to be able to talk to every other client. Every message has to be curated by one of the servers. I'd like the clients to connect to one of the ActiveMQ brokers, and the ActiveMQ brokers would hold a single connection to every single server. The point is to avoid all my clients having to open 170 WebSocket connections to all the servers. The servers do not need to talk to each other (yet/necessarily) since they are independent, with different responsibilities.
Question
Is ActiveMQ or any other dedicated broker viable as a transparent proxy/relay, i.e. can it handle this situation, and are there ways to dynamically decide the correct recipients? Or should I go another route, like rolling my own Spring-based relay?
In a traditional messaging use-case (e.g. using ActiveMQ Artemis with STOMP) the broker manages "destinations" and any messages sent to those destinations. Messages are only dispatched to clients if the client specifically creates a consumer on a destination.
In your use-case all of your 170 "servers" would actually be messaging clients. They would need to create a consumer on the broker in order to receive messages. To be clear, once the consumer is created, the broker will dispatch messages to it as soon as they arrive.
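For illustration, a minimal sketch of one of those servers acting as a messaging client against Artemis, assuming the javax.jms API is on the classpath; the broker URL and destination name are placeholders:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://broker-host:61616");
Connection conn = cf.createConnection();
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination inbox = session.createQueue("server-42.inbox"); // placeholder name
MessageConsumer consumer = session.createConsumer(inbox);
// From this point on the broker dispatches arriving messages to this consumer
consumer.setMessageListener(msg -> {
    // curate the message here, then forward it to the target client
});
conn.start();
// keep conn open for the lifetime of the server; close it on shutdown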
I'm not sure what exactly you mean by "transparent," but if it means that the process(es) receiving the message(s) don't have to do anything then no message broker will handle your use-case.
We are implementing an MQ/IIB architecture where we will have one QM and one broker each on 2 RHEL servers, load-balanced with each other to divide incoming traffic.
We have consumer applications which connect to our servers through a JMS bindings file. We also have IIB applications running on both of them.
Now, since one bindings file can hold only one QMGR name when creating a connection factory, it's not recommended to keep different QM/broker names on the two servers. Since this bindings file is shared with consumers, it has to contain a single, unique QM name.
But if we have the same QM/broker names on both servers, all logs in the IIB record and replay tool will show one broker name (from both servers), which again makes it difficult to identify which server actually served the incoming request.
Could you please suggest the best possible approach in such a scenario?
Or else suggest how the above approach can be modified to achieve our goal.
In general it is not a good practice to have two queue managers with the same name. The same would be true for IIB brokers for the reasons you stated.
In the Binding file you can leave QMANAGER blank (null). This will allow the application to connect to any queue manager listening on the HOSTNAME and PORT that you specify.
If the queue managers on the 2 RHEL servers use the same port you could even set hostname to localhost and use the same binding file on both servers.
An example is below, assuming both queue managers listen on the same port:
DEFINE CF(CF_NAME) QMANAGER() TRANSPORT(CLIENT) CHANNEL(MY.SVRCONN) HOSTNAME(localhost) PORT(1414)
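On the application side, looking up that connection factory from the .bindings file then works identically on both servers. A minimal sketch, assuming the file-system JNDI context shipped with the MQ JMS client; the directory path and the CF name are placeholders:

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY,
        "com.sun.jndi.fscontext.RefFSContextFactory");
env.put(Context.PROVIDER_URL, "file:///opt/mq/jndi"); // directory holding .bindings
Context ctx = new InitialContext(env);
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("CF_NAME");
// With QMANAGER() left blank, this connects to whichever queue manager
// is listening on the HOSTNAME/PORT defined in the CF
Connection conn = cf.createConnection();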
I'm using JaCoCo to generate code coverage reports, and I have a number of scenarios to generate separate reports for. The problem is that the program is extremely huge and takes around 2 minutes to start and load all the class files.
I want to fetch the execution data at run time as soon as one of those scenarios is completed and then start with the next scenario, instead of restarting the server for each scenario.
Is there a way to do so?
Everything below is taken from the official JaCoCo documentation at http://www.jacoco.org/jacoco/trunk/doc/
The Java agent described at http://www.jacoco.org/jacoco/trunk/doc/agent.html has the option output:
file: At VM termination execution data is written to the file specified in the destfile attribute.
tcpserver: The agent listens for incoming connections on the TCP port specified by the address and port attribute. Execution data is written to this TCP connection.
tcpclient: At startup the agent connects to the TCP port specified by the address and port attribute. Execution data is written to this TCP connection.
and the option jmx:
If set to true the agent exposes functionality via JMX
The functionality exposed via JMX, as described in the JavaDoc, provides among others the following three methods:
byte[] getExecutionData(boolean reset)
Returns current execution data.
void dump(boolean reset)
Triggers a dump of the current execution data through the configured output.
void reset()
Resets all coverage information.
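For example, a minimal sketch of pulling execution data over JMX between scenarios, modeled on the MBeanClient example; the agent must run with jmx=true, and the JMX port (9999), the lack of authentication, and the output file name are assumptions:

import java.io.FileOutputStream;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DumpCoverage {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName jacoco = new ObjectName("org.jacoco:type=Runtime");
            // Fetch the current execution data and reset it, so the next
            // scenario starts from a clean slate
            byte[] execData = (byte[]) mbs.invoke(jacoco, "getExecutionData",
                    new Object[] { true }, new String[] { "boolean" });
            try (FileOutputStream out = new FileOutputStream("scenario.exec")) {
                out.write(execData);
            }
        }
    }
}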
Again from the documentation, there is also the Ant task dump -
http://www.jacoco.org/jacoco/trunk/doc/ant.html:
This task allows one to remotely collect execution data from another JVM without stopping it.
Remote dumps are useful for long-running Java processes like application servers.
dump command in Command Line Interface -
http://www.jacoco.org/jacoco/trunk/doc/cli.html
dump goal in jacoco-maven-plugin - http://www.jacoco.org/jacoco/trunk/doc/dump-mojo.html
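For instance, with the agent running in output=tcpserver mode on its default port (6300), a between-scenario dump via the CLI could look like this; the file name is a placeholder:

java -jar jacococli.jar dump --address localhost --port 6300 --destfile scenario1.exec --reset

The --reset flag clears the agent's counters, so each scenario's data stays isolated from the previous one.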
API usage examples include:
MBeanClient.java - This example connects to a coverage agent to collect execution data over JMX.
ExecutionDataClient.java - This example connects to a coverage agent to collect execution data over the remote protocol.
ExecutionDataServer.java - This example starts a socket server to collect execution data from agents over the remote protocol.
I have put together an architecture that at a high level is best described below:
Five-node Docker swarm cluster
Say 5 instances of my dockerized microservice, one copy running on each of the swarm nodes
The service offers functionality via REST endpoints
One such functionality is downloads, and they work perfectly. I wrote some code in the Scala/Play framework, dockerized the service, and deployed it.
I also know that since I use swarm mode, it internally load-balances each request for me.
I have some questions about WebSockets and about how the load balancer manages not to ruin things during a download.
I start a 5 GB file download and it works. I am using HTTP streaming or chunked transfer, but I guess it does not matter. Now my question: once my REST endpoint for the download is hit, the TCP connection remains open until the server closes it. Is it because the connection stays open that the swarm load balancing does not interfere? In short, each time a client makes an HTTP call, swarm load-balances it, but once the TCP socket is established, as in this download example, the request is served by a single node because the connection is not re-established during the download process?
If a client opens a WebSocket, it will hit one of the swarm nodes where the service is running, and since the WebSocket connection stays open, the same service instance will push the notifications?
If for some reason the WebSocket dies, a new connection might be established by the client, but the request might end up on some other service instance and will remain there until a new connection is established again?
Are the above 3 points correct in my understanding? Is there some reading material/blogs where I can find more elaboration on this?
Maybe use nginx as a proxy/LB with ip_hash mode:
Specifies that a group should use a load balancing method where requests are distributed between servers based on client IP addresses. The first three octets of the client IPv4 address, or the entire IPv6 address, are used as a hashing key. The method ensures that requests from the same client will always be passed to the same server except when this server is unavailable. In the latter case client requests will be passed to another server. Most probably, it will always be the same server as well.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
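For illustration, a minimal nginx sketch of such a setup; the upstream host names, the ports, and the WebSocket upgrade headers are assumptions, not taken from the question:

upstream swarm_nodes {
    ip_hash;                  # pin each client IP to one backend node
    server node1:8080;
    server node2:8080;
    server node3:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://swarm_nodes;
        # needed so WebSocket upgrades pass through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}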
I have a client-server application (TCP) designed with Indy in Delphi.
I want to have a queue and flow control in my server-side application.
My server should not lose any client data when server traffic is full.
For example, I want to set the server's maximum bandwidth to 10 Mbps, and when that bandwidth is saturated, the other clients should wait in a queue until bandwidth becomes free.
So I want to know: how can I design this with Delphi?
Thanks, best regards.
The client should not send the message directly to the server. Put the message in a local store (for instance an SQLite DB), and in a thread read the first message from the local store and try to send it to the server.
If the message was delivered to the server (no exception raised), delete the message from the local store and process the next "first" message in the local store. A sketch of this loop follows.
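A minimal sketch of that loop, written in Java since the idea is language-agnostic (the original is Delphi); the in-memory queue stands in for the SQLite store, and send() is a placeholder:

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class StoreAndForward implements Runnable {
    private final BlockingQueue<String> localStore = new LinkedBlockingQueue<>();

    // Client code enqueues instead of sending directly to the server
    public void enqueue(String message) {
        localStore.add(message);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String msg = localStore.peek();   // read, don't remove yet
                if (msg == null) {
                    Thread.sleep(100);            // nothing queued, poll again
                    continue;
                }
                send(msg);                        // may throw on failure
                localStore.poll();                // delete only after success
            } catch (IOException e) {
                // keep the message in the store and retry later
                try { Thread.sleep(1000); } catch (InterruptedException ie) { return; }
            } catch (InterruptedException ie) {
                return;
            }
        }
    }

    private void send(String msg) throws IOException {
        // placeholder: the actual TCP send to the server goes here
    }
}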
Within the TIdTCPServer.OnExecute method which receives the client data, it is possible to 'delay' processing of the incoming request with a simple Sleep call. The client data will stay in the TCP socket buffer until the Sleep finishes.
If your server keeps track of the current 'global' bandwidth usage for all clients, it is possible to set the Sleep time dynamically. You could even set different priorities for different clients.
So you would need a simple but thread-safe bandwidth usage monitor, an algorithm which calculates sensible Sleep time values, and a way to assign this Sleep time to the individual client connection contexts.
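One standard shape for that algorithm is a token bucket (see the links below). Here is a minimal thread-safe sketch in Java, since the idea ports directly to Delphi; the capacity and rate values are illustrative:

public final class TokenBucket {
    private final long capacity;          // maximum burst size, in bytes
    private final double refillPerNano;   // bytes that accrue per nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacityBytes, long bytesPerSecond) {
        this.capacity = capacityBytes;
        this.refillPerNano = bytesPerSecond / 1e9;
        this.tokens = capacityBytes;
        this.lastRefill = System.nanoTime();
    }

    // Blocks the calling connection until enough bandwidth is available,
    // which plays the role of the dynamic Sleep described above
    public synchronized void take(long bytes) throws InterruptedException {
        while (true) {
            long now = System.nanoTime();
            tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
            lastRefill = now;
            if (tokens >= bytes) {
                tokens -= bytes;
                return;
            }
            // not enough tokens yet: wait roughly until enough have accrued
            long waitMillis = (long) ((bytes - tokens) / refillPerNano / 1_000_000);
            wait(Math.max(1, waitMillis));
        }
    }
}

Each client connection context would call take(chunkSize) before writing a chunk, all sharing one bucket sized at the server's 10 Mbps limit.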
See also:
https://en.wikipedia.org/wiki/Token_bucket
https://github.com/bandwidth-throttle/token-bucket for an example implementation in PHP
http://www.nurkiewicz.com/2011/03/tenfold-increase-in-server-throughput.html