Remoting (server side) - C# 2.0

I'm relatively new to remoting (C# 2.0). Is there any way to lock the server-side object/instance to one client?
I have up to 10 clients that will connect to the server. The server offers 3 different tasks/operations/classes, and if one client makes a request and the server is not already working on it, I'd like to lock that operation to that client. The reason for this is that the requests work with hardware that can only handle one task at a time. I hope you understand what I'd like to do.
EDIT:
I'll try to explain my problem again...
I have 3 classes that each have X number of methods/operations (operations that trigger external hardware to do some measuring). When a client "connects" to one class (at a time) and requests a measurement to be performed, I want to lock that class to the client; hence, the client will own this class and be able to execute all of its methods. No other client shall be able/allowed to access this class while the first client has control. The other two classes should be open for requests from other clients, but the same principle/rules shall apply to those classes. As soon as a client requests a lock it shall have it for as long as it requires it. I will have an interface that all clients must follow: call a method named Lock() to acquire control over the class and Unlock() to release it. I/We will develop all the clients and the server!
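A rough sketch of the kind of interface I have in mind (the names here are just illustrative, not final):

public interface IMeasurementTask
{
    bool Lock();    // acquire exclusive control of this class; returns false if another client owns it
    void Unlock();  // release control so other clients can use the class
    // ...plus the X measurement methods
}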
Thanks for all the help, so far!
Regards
/Anders

You have to lock the task to ensure only one thread runs it at a time. Look into the Semaphore and Mutex classes.
Edit:
There are many ways to do this, from a simple lock to more complex semaphores. Here are two samples:
This one uses a simple lock to ensure that only one execution happens at a time:
private static object lockObject = new object();

public void Test()
{
    lock (lockObject)
    {
        // your code here
    }
}
This one uses a Mutex and waits until it is released, but with a timeout; if the timeout expires it returns false so the client knows the method could not be executed.
private static Mutex mutex = new Mutex();

public bool Test2()
{
    if (!mutex.WaitOne(500))
    {
        return false;
    }
    try
    {
        // your code here
    }
    finally
    {
        mutex.ReleaseMutex();
    }
    return true;
}

OK, now I see the point.
You can use the CAO approach instead: create a factory (it can be a singleton) that gives you a CAO (Client Activated Object) only if nobody else owns an instance.
CAO is good for this because it ensures that if the client dies, the CAO will be released.
Explaining CAO fully is too much for a simple answer, but in short: a CAO is a class inherited from MarshalByRefObject that you create from your factory and return from one of its methods (i.e., your Lock method); the object lives on the server and the client receives only a proxy. The object stays alive on the server while its lease is being refreshed by the client (done automatically while the object is referenced and the client is alive).
You may take a look at Ingo Rammer's articles and books on remoting.
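A minimal sketch of the factory + CAO idea, with hypothetical names (HardwareTaskFactory, HardwareTask) rather than your actual classes:

public class HardwareTask : MarshalByRefObject
{
    // Lives on the server; the client only ever holds a proxy to it.
    public void Measure()
    {
        // talk to the hardware here
    }
}

public class HardwareTaskFactory : MarshalByRefObject
{
    private HardwareTask current; // null while nobody owns the task

    // Returns the CAO if it is free, otherwise null so the caller can retry later.
    public HardwareTask Lock()
    {
        lock (this)
        {
            if (current != null)
            {
                return null;
            }
            current = new HardwareTask();
            return current;
        }
    }

    public void Unlock()
    {
        lock (this)
        {
            current = null;
        }
    }
}

The factory would be registered as a well-known singleton so all clients share it; in a real implementation it would also clear current when the CAO's lease expires, so a dead client releases the hardware automatically.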

jmservera, thanks for all your help.
I have now found a solution that will work for me... I'm using the proxy pattern combined with the factory pattern. I use WellKnownObjectMode.Singleton so I can control how many active instances I have on my server.
And by doing it this way, I don't need to share my code with the client, only the interface (as you said before).
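For reference, a minimal sketch of that kind of server-side registration (the channel, port and type names here are placeholders, not my actual code):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

static void Main()
{
    // One factory instance serves all clients (Singleton); it hands out the
    // task objects, and clients compile against the shared interface only.
    TcpChannel channel = new TcpChannel(8080);
    ChannelServices.RegisterChannel(channel, false);

    RemotingConfiguration.RegisterWellKnownServiceType(
        typeof(MeasurementFactory),
        "MeasurementFactory.rem",
        WellKnownObjectMode.Singleton);

    Console.ReadLine(); // keep the server alive
}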
Regards
/Anders

Related

How to set up logback MDC in Apache Beam and Dataflow?

We are using Apache Beam and would like to set up the logback MDC. Logback MDC is a great resource: when a request comes in you store, let's say, a userId (in our case it's custId, fileId, requestId), and then any time a developer logs, it automatically stamps that information onto the log. The developer no longer has to remember to add it to every log statement.
I am starting with an end-to-end integration-type test with the Apache Beam direct runner embedded in our microservice for testing (in production, the microservice calls Dataflow). Currently, I am seeing that the MDC is good up until after the expand() methods are called. Once the processElement methods are called, the context is of course gone since I am on another thread.
So, I am trying to fix this piece first. Where should I put this context so that I can restore it at the beginning of that thread?
As an example, if I have an Executor.execute(runnable), then I simply transfer the context using that runnable like so:
public class MDCContextRunnable implements Runnable {
    private final Map<String, String> mdcSnapshot;
    private final Runnable runnable;

    public MDCContextRunnable(Runnable runnable) {
        this.runnable = runnable;
        mdcSnapshot = MDC.getCopyOfContextMap();
    }

    @Override
    public void run() {
        try {
            MDC.setContextMap(mdcSnapshot);
            runnable.run();
        } catch (Exception e) {
            // Must log errors before the MDC is cleared
            log.error("message", e); // logs the error along with the MDC
        } finally {
            MDC.clear();
        }
    }
}
So I basically need to do the same with Apache Beam. I need to:
Have a point to capture the MDC
Have a point to restore the MDC
Have a point to clear out the MDC to prevent it leaking into another request (really in case I missed something, which seems to happen now and then)
Any ideas on how to do this?
Oh, bonus points if the MDC can be there when any exceptions are logged by the framework! (i.e. ideally, frameworks are supposed to do this for you, but Apache Beam does not seem to; most web frameworks have it built in).
thanks,
Dean
Based on the context and examples you gave, it sounds like you want to use MDC to automatically capture more information for your own DoFns. Your best bet for this is, depending on the lifetime you need your context available for, to use either the StartBundle/FinishBundle or Setup/Teardown methods on your DoFns to create your MDC context (see this answer for an explanation of the differences between the two). The important thing is that these methods are executed for each instance of a DoFn, meaning they will be called on the new threads created to execute these DoFns.
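As a rough illustration (a minimal sketch, not tested against a specific Beam version; the mdcSnapshot field and its contents are assumptions), the bundle hooks could look like this:

import java.util.Map;
import org.apache.beam.sdk.transforms.DoFn;
import org.slf4j.MDC;

public class MdcAwareFn extends DoFn<String, String> {
    // Captured at pipeline-construction time and serialized with the DoFn.
    private final Map<String, String> mdcSnapshot = MDC.getCopyOfContextMap();

    @StartBundle
    public void startBundle() {
        // Runs on the worker thread that will execute processElement.
        if (mdcSnapshot != null) {
            MDC.setContextMap(mdcSnapshot);
        }
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
        // Logging here now carries the restored MDC values.
        c.output(c.element());
    }

    @FinishBundle
    public void finishBundle() {
        // Clear so the context does not leak into another bundle.
        MDC.clear();
    }
}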
Under the Hood
I should explain what's happening here and how this approach differs from your original goal. The way Apache Beam executes is that your written pipeline runs on your own machine and performs pipeline construction (which is where all the expand calls occur). However, once a pipeline is constructed, it is sent to a runner, which often executes as a separate application (unless it's the Direct Runner), and the runner either directly executes your user code or runs it in a Docker environment.
In your original approach it makes sense that you would successfully apply MDC to all logs until execution begins, because execution might not only be occurring in a different thread, but potentially also a different application or machine. However, the methods described above are executed as part of your user code, so setting up your MDC there will allow it to function on whatever thread/application/machine is executing transforms.
Just keep in mind that those methods get called for every DoFn instance, and you will often have multiple DoFns per thread, which is something you may need to be wary of depending on how MDC works.

how to check performance improvement by ConcurrentConsumers in spring-amqp

I am pretty new to the Spring AMQP module. I was able to create a simple project which produces and consumes messages. What I don't understand is the following:
If there is only one listener and more than one concurrent consumer is configured in SimpleMessageListenerContainer, how will it improve performance? As I understand it, as long as I have a single listener which processes the message, it does not matter how many consumers (threads) pick up messages from the queue.
Here's my code for your reference:
@Bean
public SimpleMessageListenerContainer messageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
    container.setQueues(someQueue());
    container.setMessageListener(messageListenerAdapter());
    container.setConcurrentConsumers(3);
    return container;
}
I am sure I am missing something in my understanding. Can someone shed some light, please? Thanks in advance.
The container will create 3 threads; each of which is registered as a consumer. It is equivalent to creating 3 separate containers with 1 consumer each.
Each consumer will call your single listener, so it must be thread-safe: either have no shared data/fields, or synchronize access to any such data.
It's generally best, however, to use a state-less bean for your listener so you don't have to worry about concurrency.
If you can't make your listener thread-safe, you must create 3 separate containers and provide each one with its own instance of your listener.
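For illustration, a minimal sketch of a stateless listener that is safe to share across the container's three consumer threads (the class name and message handling here are assumptions, not your code):

public class OrderMessageHandler {

    // No instance fields are read or written, so concurrent calls from the
    // three consumer threads cannot interfere with each other.
    public void handleMessage(String payload) {
        // parse and process the message using only local variables
        System.out.println(Thread.currentThread().getName() + " processing: " + payload);
    }
}

handleMessage is the default method name that a MessageListenerAdapter delegates to when no explicit listener method is configured.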

Correct client-server framework architecture approach (different server versions)

I am trying to design and implement a framework to communicate with a server (it's an iOS framework written in Swift). The challenge I am facing is the architecture: there are two ways of communicating with the server and I have to implement both (different versions). I really want to have a stateless client, with methods such as Client.authenticate() or Client.downloadFile(). The problem is that with two implementations I would end up with methods in my Client class like this one:
public class func authenticate(state: State) {
    if (state.type == 1) {
        Client1.authenticate(state)
    } else {
        Client2.authenticate(state)
    }
}
Repeated for every single method...
I initially wanted to keep the client like this - stateless and static - and have only state objects that hold the actual state, as there could be many connections to the server with various states. By that I wanted to avoid having the client be an object that both holds the state and performs the calls to the server. The problem is that this approach is just... dirty, I guess. What would be a more DRY, readable and sustainable way of doing this?
I don't fully understand your intention without more code samples, but the patterns I will present will surely clear things up for you.
If your Client class always uses either Client1 or Client2 (or, more specifically, if each client object's state variable doesn't change throughout its lifetime) you should use Dependency Injection.
You create a protocol (let's call it RemoteClient) with an authenticate method (and every other method that the server client should implement) and make Client1 and Client2 conform to that protocol.
Now you make your Client class accept a RemoteClient in its initializer.
Now whatever creates the Client object can decide what to inject into the initializer: a Client1 instance or a Client2 instance.
There's a lot of articles about Dependency Injection so I won't cover it in much detail.
Example article
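A minimal sketch of that setup (the State struct and the method bodies are placeholder assumptions, not your actual types):

struct State {
    let type: Int
}

protocol RemoteClient {
    func authenticate(state: State)
    func downloadFile(state: State)
}

class Client1: RemoteClient {
    func authenticate(state: State) { /* version 1 of the protocol */ }
    func downloadFile(state: State) { /* version 1 of the protocol */ }
}

class Client2: RemoteClient {
    func authenticate(state: State) { /* version 2 of the protocol */ }
    func downloadFile(state: State) { /* version 2 of the protocol */ }
}

class Client {
    private let remote: RemoteClient

    // Whoever creates the Client decides which implementation to inject.
    init(remote: RemoteClient) {
        self.remote = remote
    }

    func authenticate(state: State) {
        remote.authenticate(state: state) // no per-method if/else anymore
    }

    func downloadFile(state: State) {
        remote.downloadFile(state: state)
    }
}

// Usage (choose the implementation once, at creation time):
// let client = Client(remote: Client1())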
You can also use the Strategy design pattern, which is very similiar but kind of different in intent:
Strategy design pattern
Difference between DI and Strategy
EDIT
After you've clarified what you want to do in the comments below:
In that case, you can use reflection/metadata and a dictionary/map to look up the client you want.
(pseudocode)
enum ServerType
{
    client1,
    client2
}

Dictionary serversDictionary; // key = ServerType, value = a class type conforming to RemoteClient

static init
{
    serversDictionary[client1] = Client1.self // using Swift class metadata
    serversDictionary[client2] = Client2.self // using Swift class metadata
}

static authenticate(ServerType type)
{
    let locationToSendAuthTo = serversDictionary[type]
    locationToSendAuthTo.authenticate(type)
}
I'm not sure if Swift works that way because I've just started using it; in particular, I'm not sure you can call a static method on a class type. The docs are pretty thin on that.
More here:
Swift class introspection & generics

Is this MVC Fire and Forget approach bad Design?

I have a controller action that performs a task and, at the end, sends a confirmation e-mail to the user. The e-mail part is not very important, so I do not want my action to break if sending the e-mail throws an exception, and I don't want my HTTP response to wait for the e-mail to be sent either. I want this to be a fire-and-forget thing.
In a nutshell, this is how I approached it:
public async Task<ActionResult> MyAction()
{
    // Do stuff
    await DoStuff();

    Thread sendEmailThread = new Thread(SendEmail);
    sendEmailThread.Start();

    return result;
}

private async void SendEmail()
{
    await smtpClient.SendMessageAsync();
}
Is this approach proper?
It is not a good idea to start a new Thread whenever a new email arrives.
Alternative Approach (especially for Email)
We normally run a background scheduling system behind the application, for example Quartz.NET.
Then we put the email into a queue (or database) and let the background thread pick it up from the queue (or database) and perform the send.
By doing so, we can re-send emails if SMTP has an error.
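A minimal sketch of the queue + background worker idea (not tied to Quartz.NET specifically; EmailQueue and the retry behaviour here are illustrative assumptions):

using System.Collections.Concurrent;
using System.Net.Mail;

public static class EmailQueue
{
    private static readonly ConcurrentQueue<MailMessage> queue = new ConcurrentQueue<MailMessage>();

    // The controller action just enqueues and returns immediately.
    public static void Enqueue(MailMessage message)
    {
        queue.Enqueue(message);
    }

    // Called periodically by the scheduler (e.g. from a Quartz.NET job).
    public static void ProcessPending(SmtpClient smtpClient)
    {
        MailMessage message;
        while (queue.TryDequeue(out message))
        {
            try
            {
                smtpClient.Send(message);
            }
            catch (SmtpException)
            {
                // SMTP error: put the message back so the next run retries it.
                queue.Enqueue(message);
                break;
            }
        }
    }
}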
It's unnecessary to start a new thread for sending the email, as that method will return as soon as the async operation is kicked off and the thread will end before the operation is complete.
Async operations do not use a thread, so you are best off just returning a Task from that method and awaiting it. Async void methods are a bad idea as well, since no exceptions propagate out of them and you can't tell when the operation has completed. See Best Practices in Async Programming for more details.
If you really want to do a fire and forget task, see Stephen Cleary's blog on the subject.
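For illustration, here is a minimal sketch of the Task-returning version (result, smtpClient and message are placeholders carried over from the question, and the exact mail API call is an assumption):

public async Task<ActionResult> MyAction()
{
    await DoStuff();

    // Awaiting does not block a thread; the response is sent once the e-mail
    // send finishes, and any exception propagates normally to MVC.
    await SendEmailAsync();

    return result;
}

// Return Task instead of async void so the caller can await it and observe exceptions.
private Task SendEmailAsync()
{
    return smtpClient.SendMailAsync(message);
}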
I can't give you a direct solution, but the fire-and-forget thing is implemented by an open-source ASP.NET eCommerce solution called nopCommerce. I really love their solution, and I just wanted to share it with you.
Here is the CodePlex code:
Go to Src -> Libraries -> Nop.Services -> Tasks
https://nopcommerce.codeplex.com/SourceControl/latest#src/Libraries/Nop.Services/Tasks/TaskManager.cs
Now have a look at the TaskManager class. You can check the demo online here.
Go to Admin Panel -> System Menu -> Schedule Tasks
Explanation
They use this class to queue emails, keep the app alive, clear caches, auto-update exchange rates, and many other things. And it works exactly as you wanted: if any exception occurs, it will just retry; it won't stop or break the app. You can check the demo.

how to synchronize methods in actionscript?

The question is: how can I stop a method from being called twice, where the first call has not "completed" because its handler is still waiting for a URL to load, for example?
Here is the situation:
I have written a Flash client which interfaces with a Java server using a binary encrypted protocol (I would have loved not to re-invent the whole client/server object communication stack, but I had to encrypt the data in such a way that simple tools like Tamper Data and Charles Proxy could not pick it up, as they could if I were just using SSL).
The API presents itself to Flash as an ActionScript SWF file, and the API itself is a singleton.
The API exposes some simple methods, including:
login()
getBalance()
startGame()
endGame()
Each method will call my HttpCommunicator class.
HttpCommunicator.as (with error handling and stuff removed):
public class HttpCommunicator {
    private var _externalHandler:Function;

    public function communicate(data:String, externalHandler:Function):void {
        // do encryption
        // add message numbers etc. to data
        this._externalHandler = externalHandler;
        request.data = encrypt(addMessageNumbers(data));
        loader.addEventListener(Event.COMPLETE, handleComplete);
        loader.load(request);
    }

    private function handleComplete(event:Event):void {
        var loader:URLLoader = URLLoader(event.target);
        var data:String = decrypt(loader.data);
        // check message numbers match etc.
        _externalHandler(data);
    }
}
The problem with this is that I can't protect the same HttpCommunicator object from being called twice before the first call has handled the complete event, unless:
I create a new HttpCommunicator object every single time I want to send a message. I also want to avoid creating a URLLoader each time, but this is not my code so it will be more problematic to know how it behaves.
I do something like synchronizing on communicate. This would effectively block, but that is better than corrupting the data transmission. In theory, the Flash client should not call the same API function twice in a row, but I bet it will happen.
I implement a queue of messages. However, this also needs synchronization around the push and pop methods, which I can't find out how to do.
Will option 1 even work? If I have a singleton with a method, say getBalance, and the getBalance method has:
// class is instantiated through a factory as a singleton
public class API {
    var balanceCommunicator:HttpCommunicator = new HttpCommunicator(); // create one for all future calls

    public function getBalance(playerId:uint, handler:Function):Number {
        balanceCommunicator.communicate(...); // this doesn't block
        // do other stuff
    }
}
Will the second call trounce the first call's communicator variable? I.e., will it behave as if it were static, since there is only one copy of the API object?
Say there was a button on the GUI labelled "update balance" and the user kept clicking on it, at the same time as, say, a URLLoader complete event handler being called which also calls the API's getBalance() function (i.e. Flash being multithreaded).
Well, first off, with the exception of the networking APIs, Flash is not multithreaded. All ActionScript runs in the same one thread.
You could fairly easily create a semaphore-like system where each call to communicate passed in a "key" as well as the arguments you already specified. That "key" would just be a string that represented the type of call you're doing (getBalance, login, etc). The "key" would be a property in a generic object (Object or Dictionary) and would reference an array (it would have to be created if it didn't exist).
If the array was empty then the call would happen as normal. If not then the information about the call would be placed into an object and pushed into the array. Your complete handler would then have to just check, after it finished a call, if there were more requests in the queue and if so dequeue one of them and run that request.
One thing about this system would be that it would still allow different types of requests to happen in parallel - but you would have to have a new URLLoader per request (which is perfectly reasonable as long as you clean it up after each request is done).
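A minimal sketch of that per-key queue (RequestQueue and its member names are placeholders, not part of your API):

public class RequestQueue {
    // key ("getBalance", "login", ...) -> Array of queued request descriptors
    private var queues:Object = {};

    public function enqueue(key:String, data:String, handler:Function):void {
        if (queues[key] == null) {
            queues[key] = [];
        }
        var queue:Array = queues[key];
        queue.push({data: data, handler: handler});
        // If this is the only entry, nothing is in flight for this key: start it now.
        if (queue.length == 1) {
            send(key, data, handler);
        }
    }

    // Call this from the COMPLETE handler once the request for that key finishes.
    public function onRequestDone(key:String):void {
        var queue:Array = queues[key];
        queue.shift(); // drop the request that just completed
        if (queue.length > 0) {
            var next:Object = queue[0];
            send(key, next.data, next.handler);
        }
    }

    private function send(key:String, data:String, handler:Function):void {
        // create a fresh URLLoader / HttpCommunicator here and fire the request
    }
}

Because ActionScript is single-threaded, no locking is needed around the push/shift calls; the queue simply serializes requests that share a key.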
