How do I monitor my connection to a Naming Service? - corba

I have an application that binds object references to multiple Naming Services. If any of these Naming Services is restarted, I'd like to detect that and rebind my references to it. Right now, the only way I can think of doing that is to periodically poll the Naming Service context object with something like the following (using omniORBpy):
from omniORB import CORBA

def check_connection(context):
    # Treat a nil or non-existent context, or any CORBA exception
    # (TRANSIENT, COMM_FAILURE, ...), as a lost connection.
    try:
        if CORBA.is_nil(context):
            return False
        if context._non_existent():
            return False
    except CORBA.Exception:
        return False
    else:
        return True
I know that _non_existent() isn't intended to be used as a "ping" operation, but I can't think of any other way to do it. It would be nice if there was a way to be notified with a callback when the connection is lost without having to constantly poll the service. Any CORBA experts out there have any ideas?
Note: The network architecture and Naming Service implementations are out of my control. So switching to a persistent Naming Service isn't an option unfortunately.

If you can't use a persistent Naming Service, then I think your only option is to poll. But I would probably just try to rebind the reference rather than check it or call _non_existent().
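Something along these lines might work with omniORBpy (an untested sketch; the corbaname URI, binding name, and poll interval below are placeholders, so adapt them to however you currently obtain the root naming context):

import time
from omniORB import CORBA
import CosNaming

POLL_INTERVAL = 30  # seconds (placeholder)

def try_rebind(orb, naming_uri, name, obj):
    # Attempt the (re)bind on every poll; any CORBA system exception
    # (TRANSIENT, COMM_FAILURE, OBJECT_NOT_EXIST, ...) just means the
    # Naming Service is unreachable right now, so try again next time.
    try:
        root = orb.string_to_object(naming_uri)._narrow(CosNaming.NamingContext)
        if root is None:
            return False
        root.rebind([CosNaming.NameComponent(name, "")], obj)
        return True
    except CORBA.SystemException:
        return False

# e.g. in a background thread:
# orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
# while True:
#     try_rebind(orb, "corbaname::nshost:2809", "MyObject", my_object_ref)
#     time.sleep(POLL_INTERVAL)

If the rebind succeeds, you know the reference is registered; if it fails, you simply retry on the next poll, so there is no separate "ping" step at all.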

Related

SignalR connection (hub proxy) lifetime

What is the best practice for connecting clients to a SignalR hub? On the client, is it better to keep the connection (hub proxy) around somewhere, or to create a new connection (hub proxy) for each hub method call?
Per https://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-server#multiplehubs
There is no performance difference for multiple Hubs compared to defining all Hub functionality in a single class.
Whether or not you use multiple hubs is simply a matter of deciding how you want to logically organize your code. Standard OOP practices apply here.
Later in the same documentation...
If you need to use the context multiple times in a long-lived object, get the reference once and save it rather than getting it again each time. Getting the context once ensures that SignalR sends messages to clients in the same sequence in which your Hub methods make client method invocations. For a tutorial that shows how to use the SignalR context for a Hub, see Server Broadcast with ASP.NET SignalR.
...not sure if that last bit is relevant to what you're asking, but it's good to know as you plan your SignalR architecture.
The optimal way to go is to keep a single connection for all method calls. Every new connection you open wastes network resources and processing, since SignalR has to keep a live connection to the server for each one. That means battery drain on mobile devices and more server workload.
[UPDATE]
After reading alex-dresko's answer I realized my answer needs some clarification.
It doesn't matter how many proxies you create on the same connection; it won't change performance:
hubConnection = new HubConnection(BASE_ADDRESS);
var chatProxy = hubConnection.CreateHubProxy("chatHub");
var otherProxy = hubConnection.CreateHubProxy("otherHub");
var nProxy = hubConnection.CreateHubProxy("nHub");
However, you are asking whether
it is better to keep the connection (hub proxy) somewhere.
Well, the connection (HubConnection) is one thing and the proxy is another.
Each new connection opens a new bridge between the client and the server, so creating and persisting a single connection globally in your app makes sense. You can then reuse that same connection to create as many proxies as you want.
You can easily test this scenario: create a console app that opens one connection with two hub proxies, then two connections with one proxy each, and compare the SignalR logs...

What is a Grails "transactional" service?

I'm reading the Grails docs on services, which make numerous mentions of transactions/transactionality but without really defining what a transactional service method is.
Given the nature of services, they frequently require transactional behaviour.
What exactly does this mean? Are transactional methods only those that use JPA/JDBC to communicate with a relational DB, or do they apply to anything covered by JTA?
Is there any reason why I just wouldn't make a service class @Transactional in case it evolves to some day use a transaction? In other words, are there performance concerns with making all service methods transactional?
Grails services are transactional by default - if you don't want a service to be transactional, you need to remove all @Transactional annotations (both Grails' @grails.transaction.Transactional and Spring's @org.springframework.transaction.annotation.Transactional) and add
static transactional = false
If you haven't disabled transactions with the transactional property and have no annotations, the service works the same as if it were annotated with Spring's annotation. That is, at runtime Spring creates a CGLIB proxy of your class and registers an instance of the proxy as the Spring bean, and that proxy delegates to an instance of your actual class to do the database access and your business logic. This lets the proxy intercept all public method calls and join an existing transaction, start a new one, and so on.
The newer Grails annotation has all of the same settings as the Spring annotation, but it works a bit differently. Instead of triggering the creation of a single proxy, each method is rewritten by an AST transform during compilation, essentially creating a mini proxy per method (this is obviously a simplification). This is better because the database access and transaction semantics are the same, but if you call one annotated method from another that is annotated with different settings, the second method's settings are respected. With a proxy, that inner call is a direct call inside the delegate instance, so the proxy is bypassed; and since the proxy holds all of the logic for starting a new transaction or applying different settings, both methods end up running with the first method's settings. With the Grails annotation, every method works as expected.
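To make the self-invocation point concrete, here is a toy illustration in plain Python (nothing Grails- or Spring-specific, and all names are invented): the wrapping proxy intercepts calls arriving from outside, but an internal self call never reaches it.

class ReportService:
    def monthly_report(self):
        # This inner call goes through plain `self`, NOT through the proxy
        # below, so any per-method settings on daily_report are never seen.
        return [self.daily_report(day) for day in range(1, 31)]

    def daily_report(self, day):
        return "report for day %d" % day

class TransactionalProxy:
    # Wraps a target object and intercepts its public method calls,
    # standing in for the CGLIB proxy that Spring generates.
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        method = getattr(self._target, name)
        def wrapper(*args, **kwargs):
            print("begin transaction for", name)
            try:
                return method(*args, **kwargs)
            finally:
                print("commit transaction for", name)
        return wrapper

service = TransactionalProxy(ReportService())
service.monthly_report()  # prints begin/commit once, for monthly_report only

The compile-time transform used by the Grails annotation avoids exactly this trap, because the transaction bookkeeping lives inside each rewritten method rather than in a wrapper that can be skipped.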
There is a small performance hit for transactional methods, and it can add up if there are a lot of calls and/or a lot of traffic. Before your code runs, a transaction is started (assuming one isn't already active); to do this, a connection must be retrieved from the pool (DataSource), autocommit must be turned off, and the various transaction settings (isolation, timeout, read-only, etc.) have to be applied. But the Grails DataSource is actually a smart wrapper around the "real" one. It doesn't get a real JDBC Connection until you start a query, so all of the configuration settings are cached until then and "replayed" on the real connection. If the method doesn't do any database work (either because it never does, or because it exits early based on some condition before the db access code fires), then there's basically no database cost. But if it does, then things work as expected.
Don't rely on this DataSource proxying logic though - it's best to be explicit about which services are transactional and which aren't, and within each service which methods are transactional and which aren't. The best way to do this is by annotating methods as needed, or adding a single annotation at the class level if all methods use the same settings.
You can get more info in this talk I did about transactions in Grails.
First, if your performance concerns are due to the fact that your services are transactional, then you have reached nirvana. I say this because there are going to be plenty of other bottlenecks in your application long before this is a major (or even minor) concern. So, don't fret about that.
Typically in Grails a transaction relates to the transactional state of a database connection or Hibernate session, though it could be anything managed by JTA with the proper Spring configuration.
In simple terms, it usually means (by default) a database transaction.

Using private fields in the grails controller

I know that services in Grails are singletons by default.
Is it bad practice to use private fields in controller/service? Could anyone explain, why?
Controllers are not singletons by default; they are created for each request. Services are, by default, singletons. It's not bad practice to use private fields in services. It's fairly common for services to have private fields that hold configuration state at runtime.
I suspect your concern is about using private fields as a means of storing state for a particular request within a service, which is obviously bad considering there could be N requests being handled by the service at once. As long as you use private fields to control the service from an application perspective and not a request perspective, you will be fine.
Edit (further information)
As stated, services can and often do have private members. However, you should never use them to store information about the current request being processed; since the service is a singleton, that would cause interleaving issues. Only use private members for information that applies across all requests, typically configuration settings of the service itself.
It's best to make your services stateless with regard to the requests they are processing, as sketched below. Any state you need should be carried in the parameters and return values of your service methods. Services should act on data, not the other way around.
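As a language-agnostic illustration of that difference (plain Python with invented names, nothing Grails-specific):

# Anti-pattern: per-request state stored on a shared singleton.
class ReportService:
    def __init__(self):
        self.current_user = None       # shared by every concurrent request!

    def run(self, user):
        self.current_user = user       # request B can overwrite request A here
        return "report for %s" % self.current_user

# Stateless version: request data flows through parameters and return values,
# and the only fields are application-level configuration.
class StatelessReportService:
    def __init__(self, page_size=50):
        self.page_size = page_size     # same for all requests

    def run(self, user):
        return "report for %s (%d rows per page)" % (user, self.page_size)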

CXF client loads wsdl for both service and port?

In a Java web app, I need to call a remote SOAP service, and I'm trying to use a CXF 2.5.0-generated client. The SOAP service is provided by a particular ERP vendor, and its WSDL is monstrous: thousands of types, dozens of XSD imports, etc. wsdl2java generates the client OK, thanks to the -autoNameResolution flag. But at runtime it retrieves the remote WSDL twice: once when I create the service object, and again when I create a port object.
MyService_Service myService = new MyService_Service(giantWsdlUrl); // fetches giantWsdl
MyService myPort = myService.getMyServicePort(); // fetches giantWsdl again
Why is that? I can understand retrieving it when creating myService: you want to see that it matches the client I'm currently using, let a runtime WSDL location dictate the endpoint address, etc. But I don't understand why asking for the port would reload everything it just went out on the wire for. Am I missing something?
Since this is in a web application and I can't be sure that myPort is thread-safe, I'd have to create a port for each thread, except that's way too slow (6 to 8 seconds, thanks to the monstrous WSDL). Or add my own pooling: create a bunch in advance and do check-outs and check-ins. Yuck.
For the record, the JaxWsProxyFactoryBean creation route does not ever fetch the wsdl, and that's good for my situation. It still takes a long time on the first create(), then about a quarter second on subsequent create()s, and even that's less than desirable. And I dunno... it sorta feels like I'm under the hood hotwiring the thing rather than turning the key. :)
Well, you have actually answered the question yourself. Each time you invoke service.getPort(), the WSDL is loaded from the remote site and parsed. JaxWsProxyFactoryBean goes the same way, but once the proxy is obtained it is reused for further invocations. That is why the first run is slow (because of "warming up") but subsequent ones are fast.
And yes, JaxWsProxyFactoryBean is not thread-safe. Pooling client proxies is an option, but unfortunately it will eat a lot of memory, as the JAX-WS runtime model is not shared among client proxies; synchronization is perhaps the better way to go.

Logout clients from XMPP

I have an XMPP/ejabberd app that uses an external service to provide eventing features, but when this service becomes unavailable, I want to disconnect/log out all of my clients. Is this possible? How?
I got it working the way I needed. In fact, I didn't find any simple way to make my own server log out all connected users in a given situation, so I dug into ejabberd's code and figured out a way to do it myself.
In the ejabberd_c2s.erl module, when a client logs out or its socket is dropped for some reason, the FSM is terminated, doing all the necessary cleanup to keep ejabberd consistent.
What I had to do was just create an exported function shutdown/1 in this module that calls gen_fsm:send_all_state_event/2 to send a signal telling the FSM to terminate.
Since there is one c2s process per connection, I need to call this function once per user.
---UPDATING---
Actually there's no need to create this shutdown function, as ejabberd_c2s already has the ability to process the 'closed' signal, which does the same thing. So, instead of creating the shutdown function, simply doing gen_fsm:send_event(C2SPid, closed) might be enough.
---UPDATING---
To discover the user's c2s process PID I just use ejabberd_sm:get_session_pid/1 or ejabberd_sm:dirty_get_sessions_list/0 (for all sessions).
This worked fine for me, but if anyone has a better idea, please add here.
Thanks
I don't know the ejabberd specifics, but you could write a custom XMPP component which polls the external service (or listens for presence events, if it's another XMPP component), then logs out users when the service becomes unavailable.
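For what it's worth, if your ejabberd version exposes the standard admin commands, the polling part could live in a small external script rather than inside ejabberd itself. This is only a rough sketch: it assumes ejabberdctl with the connected_users and kick_session commands is available, and service_is_up() is a placeholder for whatever health check your eventing service supports.

import subprocess
import time

POLL_INTERVAL = 30  # seconds (placeholder)

def service_is_up():
    # Placeholder: replace with a real health check of the external service.
    return True

def kick_all_sessions(reason):
    # `connected_users` prints one session per line as user@host/resource.
    out = subprocess.run(["ejabberdctl", "connected_users"],
                         capture_output=True, text=True, check=True).stdout
    for session in out.splitlines():
        if not session.strip():
            continue
        user_host, _, resource = session.partition("/")
        user, _, host = user_host.partition("@")
        subprocess.run(["ejabberdctl", "kick_session",
                        user, host, resource, reason], check=False)

while True:
    if not service_is_up():
        kick_all_sessions("eventing service unavailable")
    time.sleep(POLL_INTERVAL)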
