I wrote a server program which is planned to be used in a multiplayer game. I need to generate multiple instances of my server, one for every client/player, but I am not able to do so.
I put my server in a separate module, and I call the server's exported function from another module inside a new process for each client whenever an external client tries to connect.
Could anyone give me a solution other than a gen_tcp server?
I guess you need to rethink your architecture and identify the layers in your system. The first would be the connection layer - OK, it seems you have that. Then you want to present some logic to the user; moreover, you want to give each user a unique instance of it. Just spawn it. If it is separate from the rest of the functionality, it is going to be trivial. If I were you, I would read one of the Erlang/OTP books (Joe Armstrong, Thompson & Cesarini, or Logan most recently) to understand the nature of Erlang systems and managing processes.
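For illustration only, a minimal sketch of the spawn-per-client pattern this answer describes. It happens to use gen_tcp because that is the simplest thing to show; the transport doesn't matter, the point is that one acceptor process spawns a dedicated session process for every client. All module and function names here are made up.

```erlang
-module(game_listener).
-export([start/1]).

%% One acceptor process: accept a connection, hand it to a freshly spawned
%% session process, and immediately go back to accepting.
start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, 0}, {active, false}]),
    accept_loop(Listen).

accept_loop(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    Pid = spawn(fun() -> session_loop(Socket) end),
    ok = gen_tcp:controlling_process(Socket, Pid),
    accept_loop(Listen).

%% Each player gets their own instance of this loop, i.e. their own state.
session_loop(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Data} ->
            gen_tcp:send(Socket, Data),   %% echo for now; the game logic goes here
            session_loop(Socket);
        {error, closed} ->
            ok
    end.
```

In a real OTP system you would start the session processes under a simple_one_for_one supervisor rather than with a bare spawn, which is exactly the kind of structure the books mentioned above cover.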
I am building a framework for realtime web applications. I started doing it in Elixir, because it is a modern way to develop applications for the Erlang VM. Erlang should be good if you need concurrent, fault-tolerant, scalable apps (something like a web server, etc.). That is exactly what I need.
Question: A realtime framework always needs to keep information about who is interested in what. This will be accomplished using the publish/subscribe pattern. So I will have 1000 clients subscribing to the topic "newest-message". I need to save those clients (the pid of the process representing each client) somewhere, so that I can access them later when content for the topic "newest-message" appears.
This is where I am unsure whether Erlang is really a good fit for my framework.
ETS is probably the only option for storing shared data, but ETS always copies everything when you save or access records. That means copying 1000 pids every time I need to access them (instead of just iterating over some list, as I would do in, for instance, C/Java/Python).
This will probably be a serious bottleneck if I keep copying more and more records from ETS (many clients, many subscriptions, etc.). Am I right?
Sharing state may be a sign of bad design. You can, for example, have a process for each queue/topic, and it will store its own list of subscribers. You send a message to that topic process, and it in turn sends the message to the clients. This way, you don't copy the entire subscriber list.
If you need to process them in parallel, you can split the subscriber list across several processes.
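As a rough sketch of that idea (illustrative names, bare processes instead of the gen_server you would normally use; in Elixir the same structure would typically be a GenServer):

```erlang
-module(topic).
-export([start/0, subscribe/2, publish/2]).

%% One process per topic; the subscriber list lives in that process's own
%% state, so publishing never copies the list out of a shared table.
start() ->
    spawn(fun() -> loop([]) end).

subscribe(Topic, Pid) -> Topic ! {subscribe, Pid}.
publish(Topic, Msg)   -> Topic ! {publish, Msg}.

loop(Subscribers) ->
    receive
        {subscribe, Pid} ->
            loop([Pid | Subscribers]);
        {publish, Msg} ->
            [Pid ! {topic_msg, Msg} || Pid <- Subscribers],
            loop(Subscribers)
    end.
```

A production version would also monitor the subscriber pids and remove them when they exit, so crashed clients don't accumulate in the list.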
Erlang's fault tolerance is achieved precisely because it doesn't let you share state; you have to put more thought into a design that avoids state sharing but is still efficient. This pays off in the long run, so Erlang/Elixir is definitely a good language for this kind of app. Just look at RabbitMQ.
In my opinion, if you plan to store state like "who is interested in what", Erlang alone may not be a good idea. Of course, it is sometimes very convenient to pass everything around in messages (as you would in Erlang), but when there is a lot of content to store, the lack of shared state in Erlang starts to hinder you rather than help.
On the other hand, you can keep much of Erlang's convenience and use it with a Java application, for example. Erlang's interface for Java (Jinterface) lets you connect the two technologies quite easily, so you can use a Java app to store information for you (and persist it somewhere when necessary) and Erlang for the whole concurrent, real-time signalling part. Even better: you can still build an OTP architecture like that, so you end up with quite a lightweight application (because the real-time logic is handled by Erlang for you) that can still access stored data easily (because Java helps you there).
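To make that concrete, here is a hedged sketch of what the Erlang side of such a setup could look like. The node name 'java@host', the registered mailbox name storage, and the message shapes are all assumptions; the Java side would be built with Jinterface (OtpNode/OtpMbox) and register a mailbox under that name.

```erlang
-module(java_store).
-export([store/2, lookup/1]).

%% Sending to {RegisteredName, Node} works the same whether the remote node
%% is another Erlang VM or a Jinterface node running inside a JVM.
store(Key, Value) ->
    {storage, 'java@host'} ! {store, self(), Key, Value},
    ok.

lookup(Key) ->
    {storage, 'java@host'} ! {lookup, self(), Key},
    receive
        {lookup_reply, Key, Value} -> {ok, Value}
    after 5000 ->
        {error, timeout}
    end.
```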
I just finished Erlang in Practice screencasts (code here), and have some questions about distribution.
Here's the overall architecture:
Here is what the supervision tree looks like:
Reading Distributed Applications leads me to believe that one of the primary motivations is for failover/takeover.
However, is it possible, for example, for the Message Router supervisor and its workers to be on one node, and the rest of the system on another, without many changes to the code?
Or should there be 3 different OTP applications?
Also, how can this system be made to scale horizontally? For example, if I realize now that my system can handle 100 users, and I've identified the Message Router as the main bottleneck, how can I 'just add another node' so that it can now handle 200 users?
I've developed Erlang apps only during my studies, but generally we had many small processes, each doing only one thing and sending messages to other processes. The beauty of Erlang is that it doesn't matter whether you send a message within the same Erlang VM, within the same computer, on the same LAN, or over the Internet: the call and the reference to the other process always look the same to the developer.
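A tiny illustration of that location transparency (chat_worker and the node name 'chat@host2' are made-up names; the nodes just need to be connected and share a cookie):

```erlang
%% Run in a shell on the first node.
Local  = spawn(chat_worker, init, []),
Remote = spawn('chat@host2', chat_worker, init, []),
%% The send looks exactly the same, wherever the process actually lives:
Local  ! {deliver, <<"hello">>},
Remote ! {deliver, <<"hello">>}.
```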
So you really want to have one application for every small part of the system.
That being said, it doesn't make it any simpler to construct an application that can scale out. A rule of thumb says that if you want an application to run on ten times as many nodes, you need to rewrite it, since otherwise the messaging overhead would be too large. And obviously, when you go from one node to two, you also need to consider it.
So if you have found a bottleneck, the application that is particularly slow when handling too many clients, you want to run a second instance of it, and then you need some additional load balancing in place before you start that second instance.
Let's assume the supervisor checks messages for inappropriate content and is therefore slow. In this case, the node everyone talks to would be a simple router application that forwards the messages to the different instances of the supervisor application in a round-robin manner. If one or two instances are not enough, you could write the router in such a way that you can change the number of instances by sending it control messages.
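For illustration only (made-up names, not guaranteed-correct production code), a rough sketch of such a router: it forwards each message to the next instance in round-robin order, and control messages add or remove instances at runtime.

```erlang
-module(rr_router).
-export([start/1]).

start(Workers) ->
    spawn(fun() -> loop(Workers, Workers) end).

%% First argument: all known instances; second: those left in the current round.
loop([], _Pending) ->
    %% No instances at all: wait until one is added.
    receive
        {add_worker, Pid} -> loop([Pid], [Pid])
    end;
loop(Workers, []) ->
    loop(Workers, Workers);
loop(Workers, [Next | Rest]) ->
    receive
        {add_worker, Pid} ->
            loop(Workers ++ [Pid], [Next | Rest]);
        {remove_worker, Pid} ->
            loop(lists:delete(Pid, Workers), lists:delete(Pid, [Next | Rest]));
        Msg ->
            Next ! Msg,              %% forward, then move on to the next instance
            loop(Workers, Rest)
    end.
```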
However, for this to work automatically, you would need another process monitoring the servers and detecting when they are overloaded or underutilized.
I know that dynamically adding and removing resources always sounds great when you hear about it, but as you can see it is a lot of work: you need a messaging system built to allow it, as well as a monitoring system that can detect the need.
Hope this gives you some idea of how it could be done. Unfortunately it's been over a year since I wrote my last Erlang application, and I didn't want to provide code that might be wrong.
I don't know if it sounds crazy, but here's the scenario -
I need to print a document over the internet. My PC, ClientX, initiates the process using a web browser to access ServerY on the internet, and the printer is connected to ClientZ (which may be yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server, etc.
3. I have the specific details of ClientZ, IP, Port, etc.
4. It'll be entirely a server-side application (with no client-side component on ClientZ), built with ASP.NET & C#.
So, is it possible? If yes, please give me some clue. Thanks in advance.
This is kind of too big a question for SO, but basically what you need to do is:
1. Upload files to the server -- trivial.
2. Do some stuff to figure out whether they are allowed to print the document -- trivial to hard, depending on scope.
3. Add items to a queue for printing and associate them with a user/session -- easy.
4. Render and print the document -- trivial to hard, depending on scope.
5. Notify the user that the document has been printed.
6. Handle errors.
The big unknown here is scope: if this is for a school project, you probably don't have to worry about billing or queue priority in step 2. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty of step 4 depends directly on what formats you are going to support, as many formats will require document-specific libraries or applications. There are also security considerations here if this is a commercial product, since it isn't safe to try to render all types of files.
Notifications can be easy or hard depending on how you want to do them. You can post back to the HTML page, but depending on how long a job takes to complete, it might be nice to have an email option as well.
You also need to think about errors. What is going to happen when paper or toner runs out or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
Sorry this is so general, but you are asking a pretty general and difficult-to-answer question.
I'm using MailboxProcessor classes in order to keep separate agents that do their own thing. Normally agents can communicate with one another in the same process, but I want agents to talk to one another when they are on separate processes or even different machines. What kind of mechanism is best for implementing communication between them? Is there some standard solution?
Please note that I'm using Ubuntu instances to run the agents.
I think you're going to have to write your own routines to serialize messages, pass them across the process boundaries, and then dispatch them on the other side. This will also require implementing an ID system in which each mailbox has an ID, and processes send messages to IDs instead of just calling Mailbox.Send. This is not easy, because local mailboxes can access local memory, but remote mailboxes cannot.
I would look at something like RPyC (http://rpyc.wikidot.com/), as it provides a protocol somewhat like the one you are looking for.
Basically the answer is 'no': there isn't really a good way to do this.
I'm in the planning phase of our new site - it's an extension of some mobile apps we've built. We want to provide our users with a central point for communication and also provide features for users who don't want to/can't use the mobile apps. One of the features we're looking at adding is a reputation system similar in nature to the SO badge system. We're designing the system to use SOA.
I don't want to have to code all of this logic into the main app as discrete chunks. I'm thinking of creating a means to accomplish this which will allow us to define new thresholds and rules for gaining reputation and have them injected into some service. The two ways I've thought of doing this so far are:
To look for certain traits in a user's actions and respond; this would mean having a service running that can go through the 'plugged in' award definitions, check for thresholds that have been met, and respond appropriately.
To fire events when the user performs actions, then listen for those events and respond appropriately. Because the services carrying out these actions run in separate app domains, potentially on separate servers, the only way I can see to have a central message bus listening and responding to these events is to use something like MassTransit, nServiceBus or Rhino.Esb.
I know that a service bus can very easily be designed into an application that simply doesn't need it, and most of the time - unless you're integrating disparate, heterogeneous systems - you most likely won't need one when designing a new system, but I'm a bit lost for options as to the best way to do this. I don't like the idea of having a service hammering the DB all the time in the background, but it does sound like it might be a lot simpler early on - later on, I dread to think!
Has anyone here designed a system like this? How did you accomplish this? We're designing for high throughput as we expect there will be times when the system will need to be able to cope with bursts of users.
I've designed a system that had similar requirements. To achieve this the key elements were:
Plugins
Event messaging - using Emesary
The basic concept is that the core is not aware of exactly which module will perform any given task.
The messages are defined, and at certain points within the system they are dispatched. The sender is not aware whether anyone requires the message. This effectively decouples vast chunks of the system.
So, to perform a job, some code is plugged in that registers with the event messaging bus and will receive messages. When it receives a message that it needs to process, it processes it.
The Emesary code is extremely small and efficient. In the first instance I've called it Emesary, and you're free to use it; it is also available from Emesary on CodePlex.
As the system becomes more complex, it is possible that there will be lots of events flying about. In case you get more than 20k a second, it was always in my design to add filtering and routing (implemented by extending the recipient interface to allow a recipient to specify, during registration, which messages it wants to receive). I've never needed to add this filtering, because Emesary is efficient enough that it is the processing of the messages that takes the time.
I've built a version of Emesary that bridges two Notifiers across disparate systems using WCF, CORBA and TCP/IP. I investigated using RabbitMQ and decided it would be possible to use it underneath Emesary if needed.
Base Class Diagram
Scalable server.
This is a fairly complex example, but it shows where Emesary fits in. In this diagram, anything with a drop shadow can have multiple instances; this is managed outside of what I'm trying to explain here.