Netty client acting as a service

I am currently working on a client-server application using Netty. Some of the clients are not going to be doing anything until they receive a message. I have read the API and can't find a way to do this. I mean, I could have "in.readLine()" in main so the program won't end, but it doesn't feel right. I could also use endless loops, but I don't think that's the right way either.
The question here is: is there a way to keep the socket bound and listening for incoming messages, just like the server does, while letting the main method end?
public void run() {
    EventLoopGroup group = new NioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChatClientInitializer());
        Channel channel = bootstrap.connect(host, port).sync().channel();
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter your name");
        String nombre = in.readLine();
        MyClientChannel canal = new MyClientChannel(channel, nombre);
        canal.write("SM", nombre);
        in.readLine();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        group.shutdownGracefully();
    }
}
See that at the end I had to write "in.readLine()" so the program wouldn't end and the handler would still be up for incoming messages.

The easiest thing to do would be to replace:
in.readLine();
With:
channel.closeFuture().await();
When the connection to the server is disconnected, the client will terminate.
You will also want to spend some time defining your client's life-cycle, so that the channel's state doesn't affect when your application is running and when it's not.
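Putting it together, a minimal sketch of that ending, continuing from the bootstrap code in the question (ChatClientInitializer, host and port are the question's own names):

Channel channel = bootstrap.connect(host, port).sync().channel();
try {
    // ... interact with the server here ...
    // Block until the server closes the connection (or the channel is closed locally).
    channel.closeFuture().await();
} finally {
    // Release the event loop threads so the JVM can exit cleanly.
    group.shutdownGracefully();
}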

Related

How to properly configure SQS without using SNS topics in MassTransit?

I'm having some issues configuring MassTransit with SQS. My goal is to have N consumers which create N queues and each of them accept a different message type. Since I always have a 1 to 1 consumer to message mapping, I'm not interested in having any sort of fan-out behaviour. So publishing a message of type T should publish it directly to that queue. How exactly would I configure that? This is what I have so far:
services.AddMassTransit(x =>
{
    x.AddConsumers(Assembly.GetEntryAssembly());
    x.UsingAmazonSqs((context, cfg) =>
    {
        cfg.Host("aws", h =>
        {
            h.AccessKey(mtSettings.AccessKey);
            h.SecretKey(mtSettings.SecretKey);
            h.Scope($"{mtSettings.Environment}", true);

            var sqsConfig = new AmazonSQSConfig() { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(sqsConfig);

            var snsConfig = new AmazonSimpleNotificationServiceConfig()
                { RegionEndpoint = RegionEndpoint.GetBySystemName(mtSettings.Region) };
            h.Config(snsConfig);
        });

        cfg.ConfigureEndpoints(context, new BusEnvironmentNameFormatter(mtSettings.Environment));
    });
});
The BusEnvironmentNameFormatter class overrides KebabCaseEndpointNameFormatter and adds the environment as a prefix, and the effect is that all the queues start with 'dev', while the h.Scope($"{mtSettings.Environment}", true) line does the same for topics.
I've tried to get this working without configuring topics at all, but I couldn't get it to run without errors. What am I missing?
The SQS docs are a bit thin, but is it actually possible to do a bus.Publish() without using SNS topics, or are they necessary? If it's not possible, how would I use bus.Send() without hardcoding queue names in the call?
Cheers!
Publish requires the use of topics, which in the case of SQS uses SNS.
If you want to configure the endpoints yourself, and prevent the use of topics, you'd need to:
Set ConfigureConsumeTopology = false – this prevents topics from being created and connected to the receive endpoint queue.
Set PublishFaults = false – this prevents fault topics from being created when a consumer throws an exception.
Don't call Publish, because that will obviously create a topic.
If you want to establish a convention for your receive endpoint names that aligns with your ability to send messages, you could create your own endpoint name formatter based on message types, and then use those same names to call GetSendEndpoint with the queue:name short-address syntax to Send messages directly to those queues.

Grails controller service - src/groovy poll controller for property value

I'm basically trying to display the progress percentage of a running task to the user on the screen in an overlay template.
I have a service that is calculating the process percentage:
def progressCalculation(requestsToSend, requestsSent, requestsFailed, progressPercentage) {
    progressPercentage = 100 / requestsToSend * (requestsSent + requestsFailed)
    progressPercentage = Math.round(progressPercentage * 1) / 1
    MyController upCont = new MyController()
    upCont.progress(progressReport.progressPercentage)
}
this continues to send progressReport.progressPercentage to the controller:
def progress(progressData) {
    int statusToView = progressData
    if (statusToView % 5 == 0) {
        [statusToView: statusToView]
    }
}
I have created a src/groovy file that uses websockets, following https://github.com/vahidhedayati/grails-websocket-example/blob/master/README.md
The connection is working, but I need to show the percentage on the view through that websocket.
@OnMessage
public String handleMessage(String message) {
    message = MyController.progressPercentage
    String replyMessage = "echo " + message
    return replyMessage
}
Now what I'm trying to do here is return the progressPercentage value from the controller to the src/groovy file, so that my view can be continually updated with the latest value while the task is completing.
MyController upCont = new MyController() seriously?
It is a good idea to move the code that holds and modifies the progressPercentage variable into the service layer and access it through a service rather than a controller:
myService.progressPercentage rather than MyController.progressPercentage
Also, you must inject myService, not instantiate it as myService = new MyService(). Services are singletons managed by the Spring container; you cannot instantiate them like this.
Actually if you do MyController upCont = new MyController()
and you try to access a property of upCont you will get this beautiful error message:
java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.
I put those instructions together, so if I can help you in any way, do let me know.
Websockets require as much frontend work as backend work. To get data back via websockets you need to expand the JavaScript on the frontend as well as the backend websocket code that sends that information to the JavaScript.
So if you had a button on the frontend gsp that rather than was a typical
You can take a look at some of my plugins that already do this. There is a ping/pong that happens discreetly in https://github.com/vahidhedayati/jssh: if the user defines it within the taglib, the websocket connection triggers a pong which the frontend JavaScript receives and answers with a ping, and they keep doing this.
Here is another example which is what you probably need to use:
This is the result coming back from the websocket:
https://github.com/vahidhedayati/grails-jenkins-plugin/blob/master/grails-app/views/jen/_process.gsp#L411
which, when received, updates this span or div id:
https://github.com/vahidhedayati/grails-jenkins-plugin/blob/master/grails-app/views/jen/_process.gsp#L213
So you need to get your websocket to send the data back in some JSON format; the frontend JavaScript picks up the JSON and, if it follows a known convention, looks for the value and updates a div on the frontend.
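To make that shape concrete, here is a minimal sketch in plain Java of an @OnMessage endpoint (the same annotation style as in the question) that reads the value from a shared service and replies with JSON. ProgressService and the JSON field names are hypothetical stand-ins for your own service:

import javax.websocket.OnMessage;
import javax.websocket.server.ServerEndpoint;

// Hypothetical singleton that the calculation task updates; in Grails this would be
// an injected service rather than a hand-rolled singleton.
class ProgressService {
    private static final ProgressService INSTANCE = new ProgressService();
    private volatile int progressPercentage;

    static ProgressService getInstance() { return INSTANCE; }
    void setProgressPercentage(int value) { progressPercentage = value; }
    int getProgressPercentage() { return progressPercentage; }
}

@ServerEndpoint("/progress")
public class ProgressEndpoint {

    @OnMessage
    public String handleMessage(String message) {
        int percentage = ProgressService.getInstance().getProgressPercentage();
        // Reply with JSON the frontend JavaScript can parse to update a div, e.g. {"type":"progress","value":42}
        return "{\"type\":\"progress\",\"value\":" + percentage + "}";
    }
}

The frontend JavaScript would then parse this JSON in socket.onmessage and write the value into the progress div.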
There is a good video I made on wschat which shows updating the frontend through a websocket client/server; it may help you understand this better:
https://www.youtube.com/watch?v=xagMYM9n3l0 or https://www.youtube.com/watch?v=zAySkzNid3E
(I'm unsure which of the two it was in.)
E2A: it will need to be a service:
https://github.com/vahidhedayati/grails-wschat-plugin/blob/master/src/groovy/grails/plugin/wschat/WsChatEndpoint.groovy#L63 shows this, and the few lines after it register those services in the websocket endpoint. Going back through the history of the code, or if you follow onMessage to verifyAction, you will need to send something from the frontend, or send a message to the frontend when a connection is made: https://github.com/vahidhedayati/grails-wschat-plugin/blob/75590bf10ea040c18548377dedc716fdab2aa820/src/groovy/grails/plugin/wschat/WsChatEndpoint.groovy#L148. You can use userSession to message the person making the socket connection directly. On the web page, use JavaScript to parse the JSON and update the div as mentioned above.

Executable can not send an email when called by Task Scheduler

I have written a C# .NET executable that sends an email through an Outlook Exchange server. Everything works fine when I run it manually, but when I use a scheduled task to call the executable it doesn't send the email. Everything else works, but the email doesn't get sent. I set the scheduled task to run as my user account, and while the task is running I can see in Task Manager that the executable is running under my username, which rules out any obvious permissions issues.
While debugging I made the program output some text to a file on a network share on the same machine on which Exchange is running. This file outputs fine, so I know that the program can connect to that machine.
Can anyone help?
OK, as you can see above I was trying to send mail through a running instance of Outlook. Although I wasn't able to post code in a comment box without pulling my hair out, @amitapollo gave me the clue to use the System.Net.Mail namespace. At the end of the day I got it to work. Here's my code:
System.Net.Mail.SmtpClient smtpClient = new System.Net.Mail.SmtpClient("myExchangeServerIPAddress");
smtpClient.UseDefaultCredentials = false;
smtpClient.Credentials = new System.Net.NetworkCredential("myDomain\\myUsername", "myPassword");
smtpClient.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network;
smtpClient.EnableSsl = true;

System.Security.Cryptography.X509Certificates.X509Store xStore = new System.Security.Cryptography.X509Certificates.X509Store();
System.Security.Cryptography.X509Certificates.OpenFlags xFlag = System.Security.Cryptography.X509Certificates.OpenFlags.ReadOnly;
xStore.Open(xFlag);
System.Security.Cryptography.X509Certificates.X509Certificate2Collection xCertCollection = xStore.Certificates;
System.Security.Cryptography.X509Certificates.X509Certificate xCert = new System.Security.Cryptography.X509Certificates.X509Certificate();

// Pick the certificate whose subject matches my address.
foreach (System.Security.Cryptography.X509Certificates.X509Certificate _Cert in xCertCollection)
{
    if (_Cert.Subject.Contains("myUsername@myDomain.com"))
    {
        xCert = _Cert;
    }
}
smtpClient.ClientCertificates.Add(xCert);

// I was having problems with the remote certificate not being validated, so I had to override all security settings with this line of code...
System.Net.ServicePointManager.ServerCertificateValidationCallback = delegate(object s, System.Security.Cryptography.X509Certificates.X509Certificate certificate, System.Security.Cryptography.X509Certificates.X509Chain chain, System.Net.Security.SslPolicyErrors sslPolicyErrors) { return true; };

smtpClient.Send("myUsername@myDomain.com", "myUsername@myDomain.com", "mySubject", "myBody");

Any idea why requests to vertx embedded in grails are synchronously queued up

Environment: Mac OS X Lion
Grails version: 2.1.0
Java: 1.7.0_08-ea
If I start up vertx in embedded mode within Bootstrap.groovy and try to hit the same websocket endpoint through multiple browsers, the requests get queued up.
So, depending on the timing of the requests, the next request only gets into the handler after the previous one has finished executing.
I've tried this with both websockets and SockJS and noticed the same behavior with both.
BootStrap.groovy (SockJs):
def vertx = Vertx.newVertx()
def server = vertx.createHttpServer()
def sockJSServer = vertx.createSockJSServer(server)
def config = ["prefix": "/eventbus"]
sockJSServer.installApp(config) { sock ->
    sleep(10000)
}
server.listen(8088)
javascript:
<script>
    function initializeSocket(message) {
        console.log('initializing web socket');
        var socket = new SockJS("http://localhost:8088/eventbus");
        socket.onmessage = function(event) {
            console.log("received message");
        }
        socket.onopen = function() {
            console.log("start socket");
            socket.send(message);
        }
        socket.onclose = function() {
            console.log("closing socket");
        }
    }
</script>
OR
BootStrap.groovy (Websockets):
def vertx = Vertx.newVertx()
def server = vertx.createHttpServer()
server.setAcceptBacklog(10000);
server.websocketHandler { ws ->
    println('**received websocket request')
    sleep(10000)
}.listen(8088)
javascript
socket = new WebSocket("ws://localhost:8088/ffff");
socket.onmessage = function(event) {
    console.log("message received");
}
socket.onopen = function() {
    console.log("socket opened")
    socket.send(message);
}
socket.onclose = function() {
    console.log("closing socket")
}
From the helpful folks at vertx:
def server = vertx.createHttpServer() is actually a verticle, and a verticle is a single-threaded process.
As bluesman says, each verticle goes in its own thread. You can spread your verticles across the cores in your hardware, even clustering them with more machines, but this adds capacity to accept simultaneous requests.
When programming realtime apps, we should try to build the response as soon as possible to avoid blocking. If you think your operation can be time-intensive, consider this model:
Make a request.
Pass the task to a worker verticle, assign the task a UUID (for example), and put that UUID into the response. The caller now knows the work is in progress and receives the response quickly.
When the worker finishes the task, it puts a notification on the event bus using the assigned UUID.
The caller checks the event bus for the task result.
This is typically done in a web application via websockets, SockJS, etc.
This way you can accept thousands of requests without blocking, and clients will receive the result without blocking the UI.
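A simplified sketch of that hand-off in plain Java, written against the Vert.x 3.x API (which differs from the embedded 1.x API used in the question) and omitting the UUID/correlation step for brevity; the event-bus address, port and ten-second job are made up:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class NonBlockingWebsocketExample {

    // Worker verticle: runs on the worker pool, so blocking here does not stall the event loop.
    public static class SlowTaskVerticle extends AbstractVerticle {
        @Override
        public void start() {
            vertx.eventBus().consumer("slow.task", message -> {
                try {
                    Thread.sleep(10000);               // the long-running job
                } catch (InterruptedException ignored) {
                    Thread.currentThread().interrupt();
                }
                message.reply("done: " + message.body());
            });
        }
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new SlowTaskVerticle(), new DeploymentOptions().setWorker(true));

        vertx.createHttpServer().websocketHandler(ws ->
            ws.handler(buffer ->
                // Hand the work off and answer on the same websocket when it completes;
                // the event loop stays free to accept other connections in the meantime.
                vertx.eventBus().send("slow.task", buffer.toString(),
                        reply -> ws.writeTextMessage(reply.result().body().toString()))
            )
        ).listen(8088);
    }
}

The important part is that the sleep happens on a worker thread, so a second browser connecting while the first job is still running is accepted immediately.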
Vert.x uses the JVM to create a so-called "multi-reactor pattern", which is a reactor pattern modified to perform better.
As far as I understood, it is not true that each verticle has its own thread: the fact is that each verticle is always served by the same event loop, but several verticles can be bound to the same event loop and there can be multiple event loops. An event loop is basically a thread, so a few threads can serve many verticles.
I didn't use vert.x in embedded mode (and I don't know if the main concept changes), but you should perform much better by instantiating many verticles for the job.
Regards,
Carlo
As mentioned before, the Vert.x concept is based on the reactor pattern, which means a single instance has at least one single-threaded event loop and processes events sequentially. A request may consist of several events; the point is to serve the request and each event with non-blocking routines.
E.g. when you wait for a WebSocket message, the request should be suspended and woken back up when the message arrives. Whatever you do with the message should also be non-blocking and thus asynchronous, like any file I/O, network I/O, or DB access. Vert.x provides the basic elements you should use to build such an async flow: Buffers, Pumps, Timers, and the EventBus.
To wrap it up: just never block. The use of sleep(10000) kills the concept. If you really need to delay the execution, use Vert.x's timers instead.
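For instance, a minimal sketch of swapping the sleep for a timer, again in Vert.x 3.x style Java rather than the Groovy of the question (the reply text is made up):

// Inside the websocket handler, instead of sleep(10000):
server.websocketHandler(ws ->
    // Schedule the follow-up instead of sleeping; the handler returns immediately
    // and the event loop stays free for other connections.
    vertx.setTimer(10000, timerId -> ws.writeTextMessage("10 seconds elapsed"))
).listen(8088);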

MBeanServerConnection.invoke hangs forever

We have an app that invokes various remote methods on MBeans using MBeanServerConnection.invoke.
Occasionally one of these methods hangs.
Is there any way to put a timeout on the call, so that it returns with an exception if it takes too long?
Or do I have to move all those calls into separate threads so they don't lock up the UI and require killing the app?
See http://weblogs.java.net/blog/emcmanus/archive/2007/05/making_a_jmx_co.html
===== Update =====
I was thinking about this stuff when I first responded, but I was on my mobile and I can't type worth a damn on it.....
This is really an RMI problem, and unless you use a different protocol, there's not much you can do, except, as you say, move all those calls into separate threads so they don't lock up the UI.
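As an illustration of that separate-thread approach, here is a rough sketch that bounds the wait with an executor; the operation name and empty argument arrays are placeholders, and note that the hung RMI call itself keeps running in the background even after the timeout:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class TimedInvoke {
    private static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    // Runs the invoke on a background thread and gives up waiting after timeoutMillis.
    public static Object invokeWithTimeout(MBeanServerConnection conn, ObjectName name,
                                           String operation, long timeoutMillis) throws Exception {
        Callable<Object> task = () -> conn.invoke(name, operation, new Object[0], new String[0]);
        Future<Object> future = EXECUTOR.submit(task);
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // The remote call may still be stuck, but the caller (e.g. the UI thread) is released.
            future.cancel(true);
            throw e;
        }
    }
}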
But.... if you have the option of fiddling with the target server and you can customize the connecting client, you have at least one option, which is to customize the JMXConnectorServer on your target servers.
The standard JMXConnectorServer implementation is the RMIConnectorServer. Part of its specification is that when you create a new instance using any of the constructors (like RMIConnectorServer(JMXServiceURL url, Map environment)), the environment map can contain a key/value pair where the key is RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE and the value is an RMIClientSocketFactory. Therefore, you can specify a socket factory like this:
RMIClientSocketFactory clientSocketFactory = new RMIClientSocketFactory() {
    public Socket createSocket(String host, int port) throws IOException {
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);
        return s;
    }
};
This factory creates a Socket and then sets its SO_TIMEOUT using setSoTimeout, so when the client connects using this socket, operations will time out after 3000 ms.
You could also check out the JMXMP connector and server in the jmx-optional package of the OpenDMK (the links are to my mavenized fork on GitHub). No built-in solution, mind you, but they're super easy to extend, and JMXMP is based on plain TCP sockets rather than RMI, so this type of customization would be trivial.
Cheers.
@Nicholas: The above code is not working. I mean the request is not timing out after 3000 ms.
map.put(RMIConnectorServer.RMI_CLIENT_SOCKET_FACTORY_ATTRIBUTE, new RMIClientSocketFactory() {
    @Override
    public Socket createSocket(String host, int port) throws IOException {
        if (logger.isInfoEnabled()) {
            logger.info("JMXManager inside createSocket..." + host + ": port :" + port);
        }
        Socket s = new Socket(host, port);
        s.setSoTimeout(3000);
        return s;
    }
});
cs = JMXConnectorServerFactory.newJMXConnectorServer(url, map, mbeanServer);
As I answered on How to set request timeout for JMX Connector, the RMI properties can help you. All the properties are on the Oracle documentation site:
http://docs.oracle.com/javase/7/docs/technotes/guides/rmi/sunrmiproperties.html
For example, -Dsun.rmi.transport.tcp.responseTimeout=60000 is a client-side TCP response timeout. There are also properties for connect timeouts and for server-side connections.
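If you cannot change the JVM's startup flags, the same client-side property can, as far as I know, also be set programmatically, as long as it happens before the first JMX/RMI connection is created:

// Equivalent to passing -Dsun.rmi.transport.tcp.responseTimeout=60000 on the command line.
System.setProperty("sun.rmi.transport.tcp.responseTimeout", "60000");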
I am also not happy with how the JMX/RMI/TCP stack hides important settings of the lower-level protocols and does not make them available per connection.
