I was wondering if someone had already used these three technologies together. I know Erlang and Comet are widely used, but I can't find anything relating Comet + Membase or Erlang + Membase. Is combining them a bad idea for some reason?
I'm doing research using FreeMind to map out ideas about these three technologies (Erlang, Membase and Comet). As I'm new to all three, I am not certain whether they are a good combination.
Basically, the application I have in mind will have many clients ("clients A") sending small amounts of data to the server. The server needs to save this data as fast as possible, and send it on request to another set of clients ("clients B"), where the B clients are far fewer than the A clients.
This application is just an idea I have had for a while (nothing new, it's been done already), but I would like to experiment with Erlang and Comet and they seem to fit.
If anyone can provide me with some hints I would appreciate it very much. This is my first question on this site, and it may be too open-ended. If so, please let me know and I will post it somewhere else.
Thank you!
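For concreteness, here is a minimal Erlang sketch of the fan-in/fan-out shape I have in mind (module and function names are my own invention, and Comet and Membase are left out entirely):

    %% Minimal sketch: many A clients push data in, B clients drain it out.
    -module(hub).
    -export([start/0, submit/1, fetch/0, loop/1]).

    start() ->
        register(?MODULE, spawn(?MODULE, loop, [[]])).

    %% Clients A: fire-and-forget submission of a small datum.
    submit(Data) ->
        ?MODULE ! {submit, Data},
        ok.

    %% Clients B: synchronous request for everything buffered so far.
    fetch() ->
        ?MODULE ! {fetch, self()},
        receive {batch, Batch} -> Batch end.

    loop(Buffer) ->
        receive
            {submit, Data} ->
                loop([Data | Buffer]);
            {fetch, From} ->
                From ! {batch, lists:reverse(Buffer)},
                loop([])
        end.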
For membase, you just need to use any of several memcached clients. It should work quite well for what you're describing.
Give it a try. Describe what doesn't work for you. :)
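If you'd rather smoke-test Membase before picking a client library: it speaks the plain memcached text protocol (port 11211 by default), so a raw gen_tcp session is enough. A rough sketch, with the host, port and key as assumptions:

    %% Talks the memcached text protocol to a local Membase node.
    -module(membase_demo).
    -export([run/0]).

    run() ->
        {ok, Sock} = gen_tcp:connect("localhost", 11211,
                                     [binary, {active, false}]),
        Value = <<"hello">>,
        Bytes = integer_to_list(byte_size(Value)),
        %% set <key> <flags> <exptime> <bytes>\r\n<data>\r\n -> STORED\r\n
        ok = gen_tcp:send(Sock, ["set demo_key 0 0 ", Bytes, "\r\n",
                                 Value, "\r\n"]),
        {ok, SetReply} = gen_tcp:recv(Sock, 0),
        io:format("set: ~s", [SetReply]),
        %% get <key>\r\n -> VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n
        ok = gen_tcp:send(Sock, "get demo_key\r\n"),
        {ok, GetReply} = gen_tcp:recv(Sock, 0),
        io:format("get: ~s", [GetReply]),
        gen_tcp:close(Sock).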
Thank you all for reading my question!
Before asking my (2-3) questions, I will briefly explain what I am trying to do. I am trying to build a turn-based multiplayer (1 vs 1) game. I have a little knowledge of Swift/iOS development, MySQL, HTML, and JSP. I am planning to learn PHP, since many people say the combination of MySQL and PHP is good for iOS back-end programming.
Here are the actual questions:
Do I need to get a droplet from a provider like DigitalOcean and deploy my own server? Or is it possible to use shared web hosting such as Bluehost for this purpose?
This is the part I have no clue about... How do I actually develop a multiplayer game? I can guess that this has something to do with TCP/IP socket programming. My past experience with TCP/IP programming did not require MySQL or anything similar, if I remember correctly. In that vein, what's the role of a database like MySQL when it comes to developing a multiplayer game? Of course it comes in handy for saving scores and achievements; however, I do not know why it would be required for finding a match and establishing a connection. Wouldn't a private server be needed instead for receiving and sending sockets? Does this mean I have to lease a dedicated server and set up everything by myself?
It seems like there are more than three questions... sorry for asking so many. I am a noob programmer and need a lot of advice from experts!
Is it necessary for me to deploy a web application to handle all the back-end stuff? By the way, I am trying to avoid having users create an account. So, if I have to manage each user, maybe keeping each user's unique phone ID can help me distinguish users?
I have looked into both Google's and Apple's free game-matching systems... It is very tempting to use them, but my friends designed a custom match screen and a separate ranking system, so I need to come up with a way to solve this conundrum. Please help me!!! Thank you all for reading my question again and helping a poor soul here...
I am thinking about the technology stack for my project, and I am considering ejabberd. The project will look like a classic multi-user dungeon RPG where players move across the world from one location to another (locations look exactly like chat rooms), and they will also fight each other, as well as AI-controlled creatures, in turn-based mode.
I have never used ejabberd, but I have some experience writing server applications in Erlang.
Is ejabberd overkill for this kind of game? It has a lot of features that I won't ever need. However, it is well known to Erlang developers and is also very stable and mature. Is ejabberd worth using as a kind of transport layer for my online game, or should I reinvent the wheel and build something tiny and simple myself?
I have several years of commercial experience, using ejabberd for things like this. So, my take:
Pros:
It is certainly technically capable.
It is quite easy to grok.
It is very easy to extend and modify.
If you update it regularly, it solves two really important problems for you: A. network security (an extremely important thing for me); and B. properly done authentication. These two alone are reason enough to use it.
It is surprisingly fast.
It gives you chat, presence and friends list for free.
It gives you MUC (rooms) for free. With all the things like permissions solved quite well.
Cons:
Don't really expect to find any usable documentation. Source is mostly your only friend.
Don't really expect to find a community. It's a lonely path. There is a room - ejabberd@conference.jabber.ru - but it's very quiet (and almost empty). Most of the people there are not developers, just ejabberd users. The mailing list is a bit better, but usually not enough to find the answer you seek.
The source code per se is not the best example of an Erlang project. If you want to learn how to write big, modular, distributed Erlang software, have a look at something like Riak instead.
The internal APIs are not very stable (they change quite a lot between releases). Because of this, I recommend writing your software as a separate Erlang application, connecting to ejabberd as an external XMPP component. That way you are guaranteed to be communicating over a stable protocol (XMPP). Of course, you cannot escape having to write some internal stuff as well; the authentication and roster (friend list) modules are the first that come to mind. This combination is quite hard to maintain and update, especially if you need hot code loading, but it's still the best solution for me. Try to keep the "in-ejabberd" code to the viable minimum.
That being said, there is only one (to my knowledge) usable XMPP Erlang library. It's called exmpp, and it is developed by the same company as ejabberd (ProcessOne). It's not yet considered stable. I have been using it for quite some time, and so far there have been no problems, but you never know. It is also mostly undocumented (or was, when I was learning it).
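To give a feel for what "connecting as an external XMPP component" means on the wire, here is a bare-bones XEP-0114 handshake sketched in Erlang - no exmpp, no real XML parsing, and the domain, port and shared secret are placeholders:

    %% Crude XEP-0114 component handshake; it string-bashes the XML,
    %% which is only acceptable in a sketch. Real code should parse
    %% the stream properly.
    -module(component_demo).
    -export([connect/0]).

    connect() ->
        Domain = "game.example.org", Secret = "secret",
        {ok, Sock} = gen_tcp:connect("localhost", 5347,
                                     [binary, {active, false}]),
        Open = ["<stream:stream xmlns='jabber:component:accept' "
                "xmlns:stream='http://etherx.jabber.org/streams' to='",
                Domain, "'>"],
        ok = gen_tcp:send(Sock, Open),
        {ok, Header} = gen_tcp:recv(Sock, 0),
        StreamId = extract_id(Header),
        %% Handshake value: lowercase hex SHA-1 of StreamId ++ Secret.
        Digest = hex(crypto:hash(sha, [StreamId, Secret])),
        ok = gen_tcp:send(Sock, ["<handshake>", Digest, "</handshake>"]),
        {ok, Ack} = gen_tcp:recv(Sock, 0),   % expect <handshake/>
        io:format("server replied: ~s~n", [Ack]),
        Sock.

    extract_id(Bin) ->
        %% Pull id='...' out of the stream header (crude but enough here).
        {match, [Id]} = re:run(Bin, "id='([^']+)'",
                               [{capture, [1], binary}]),
        Id.

    hex(Bin) ->
        lists:flatten([io_lib:format("~2.16.0b", [B]) || <<B>> <= Bin]).

Once the server answers with an empty <handshake/>, every stanza addressed to anything at that component domain lands on this socket.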
I am looking to start developing a relatively simple web application that will pull data from various sources and normalize it. A user can also enter data directly into the site. I anticipate hitting scale if it's successful. Is it worth putting in the time now to use scalable or distributed technologies or just start with a LAMP stack? Framework or not? Any thoughts, suggestions, or comments would help.
Apologies for the vague description of the idea; I'd love to share more once I get further along.
Later. I can't remember who said it (might have been SO's Jeff Atwood) but it rings true: your first problem is getting other people to care about your work. Worry about scale when they do.
Definitely go with a well structured framework for your own sanity though. Even if it doesn't end up with thousands of users, you'll want to add features as time goes on. Maintaining an expanding codebase without good structure quickly becomes fairly horrible (been there, done that, lost the client).
btw, if you're tempted to write your own framework, be aware that it is a lot of work. My company has an in-house one we're quite proud of, but it's taken 3-4 years to mature.
Is it worth putting in the time now to use scalable or distributed technologies or just start with a LAMP stack?
A LAMP stack is scalable. Apache alone offers many, many alternatives (threaded and forked MPMs, caching modules, reverse proxying) when you need them.
Framework or not?
Always use the highest-powered framework you can find. Write as little code as possible. Get something in front of people as soon as you can.
Focus on what's important: Get something to work.
If you don't have something that works, scalability doesn't matter, does it?
Then read up on optimization. http://c2.com/cgi/wiki?RulesOfOptimization is very helpful.
Rule 1. Don't.
Rule 2. Don't yet.
Rule 3. Profile before Optimizing.
Until you have a working application, you don't know which specific thing limits your scalability.
Don't assume. Measure.
That means build something that people actually use. Scale comes later.
Absolutely do it later. Scaling pains are a good problem to have; they mean people like your project enough to stress the hardware it's running on.
The last company I worked at started fairly small with PHP and the very very first versions of CakePHP that came out (when it was still in beta). Some of the code was dirty, the admin tool was a mess (code-wise), and sure it could have been done better from the start. But do you know what? They got it out the door before their competitors did, and became extremely successful.
When I came on board they were starting to hit the limits of their current potential scalability, and that is when they decided to start looking at CDN's, lighttpd caching techniques, and other ways to clean up the code and make things run smoother when under heavy load. I don't work for them anymore but it was a good experience in growing an architecture beyond what it was originally scoped at.
I can tell you right now, if they had tried to do the scalability work and optimizations before selling content and getting the website live, they would never have grown to the size they are now. The company is www.beatport.com if you're interested in who I'm talking about. (To reiterate, I'm not trying to advertise them, as I am no longer affiliated with them, but it stands as a good case study, and it's easier for people to understand what I'm talking about when they see the website.)
Personally, after working with Ruby and Rails (and understanding the separation!) for a couple of years, and having experience with PHP at Beatport - I can confidently say that I never want to work with PHP code again =p
Funny to ask "scale now or later?" and label it "ruby on rails".
Actually, Ruby on Rails was created by David Heinemeier Hansson, who has a whole chapter in his book labeled "Scale later" :))
http://gettingreal.37signals.com/ch04_Scale_Later.php
I agree with the earlier respondents: make it useful, make it work, and get people motivated to use it first. I also agree that you should pick off-the-shelf components (of which there are many) rather than roll your own, as much as possible. At the same time, make sure that you choose components for your infrastructure that you know to be scalable, so that you can go there when you need to without having to rewrite major chunks of your application.
As the Product Manager for Berkeley DB, I've seen countless cases of developers who decided "Oh, we'll just write that to a flat file" or "I can write my own simple B-tree function" or "Database XYZ is 'good enough', I don't have to worry about concurrency or scalability until later". The problem with that approach is that a) you're re-inventing the wheel (and forgoing what others have already learned the hard way) and b) you're ignoring the fact that you'll have to deal with scalability at some point, and the 'good enough' solution won't be good enough by then.
Good luck in your implementation.
I have to finalize the details of my graduation project soon.
I have set a goal for myself: the project should have value (maybe as an open-source project or a tool that can be used by others).
Can you suggest some ideas or projects pertaining to one of:
web architecture, social media, Ruby, RoR, testing?
Thanks!:D
First choose something that both interests you and is in the scope of your abilities.
After you have made such a choice, formalize the decision, perform research, and build requirements; at this stage you can still decide how big a bite you can chew. Most professors I have dealt with are understanding of partial implementations as long as the expectations have been established beforehand.
Finally, decide on the tools/language and implementation approach that best fit the requirements and your resources (this includes your time, desired level of effort versus payout, and ability).
I personally find web work absolutely dull, but if I were to write something new, by choice, that was "web-related" and "social" it would be a multi-user interactive whiteboard which is in turn an extension of a real-time collaborative document. (I actually used this as one of my own projects, albeit I focused on a specific protocol implementation.)
I was in this situation a while ago and really needed help with the same problem.
I had a couple of ideas; I've already used one of them, so I'll suggest the other:
It's a network monitoring system based on the SNMP protocol. It gets its data from the SNMP agent on the monitored machine (which can be a computer, a router, a printer, or anything else connected to the network) and alerts the administrator when something is wrong (too many open ports, a denial-of-service attack, a flood of TCP packets that might indicate a ping attack, ...) by whatever means you like (email, SMS, a live Ajax warning, ...).
Sorry if that sounds messy; basically it would be like Cacti or OpenNMS (just Google them). It builds on a lot of technologies: Ruby, MySQL (to log actions and hold the user DB), Linux (I would use Debian), SNMP agents, cron (to schedule the polling), SSH/telnet (to react to harmful activity), and PHP or Ruby on Rails for a web interface over the same database.
I know it sounds like a big thing to build, but it's not that hard. I can provide more details if you want, because I worked out a rough specification for it. The core loop is sketched below.
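Whatever the stack, the heart of such a monitor is a simple poll-compare-alert loop. In Erlang (with the SNMP query stubbed out, and all names and thresholds invented) it might look like:

    %% Generic poller shape: sample a metric on an interval, alert past
    %% a threshold. check_metric/0 stands in for a real SNMP GET.
    -module(poller).
    -export([start/2]).

    start(IntervalMs, Threshold) ->
        spawn(fun() -> loop(IntervalMs, Threshold) end).

    loop(IntervalMs, Threshold) ->
        Value = check_metric(),
        case Value > Threshold of
            true  -> alert(Value);
            false -> ok
        end,
        timer:sleep(IntervalMs),
        loop(IntervalMs, Threshold).

    check_metric() ->
        rand:uniform(100).   % placeholder measurement

    alert(Value) ->
        %% Stand-in for email/SMS/Ajax notification.
        io:format("ALERT: metric at ~p~n", [Value]).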
When I was in college, I used to look into a lot of programming contests (which involved 3-4 months of project work). I recently came across https://tgmc.in/project_scenario.php. It's quite possible you'll get some ideas after reading these project descriptions!
I need to deploy a Delphi app in an environment that needs a centralized data and file storage system (for document imaging) but has multiple branch offices with relatively poor interconnectivity. I believe a 3-tier database application is the best way to go, so I can provide a rich desktop experience with relatively lightweight data-transfer needs. So far I have looked briefly at Delphi DataSnap, kbmMW and the RemObjects SDK. It seems that kbmMW and the RemObjects SDK use the least bandwidth. Does anyone have experience deploying any of these technologies in a challenging environment with a significant number of users (I need to support 700+)? Thanks!
It depends on whether you are tied to remote datasets. If you aren't dataset-bound, then SOAP would likely be a good choice. Or do what I've done and write your own protocol, similar to SOAP in nature. I did this before SOAP was a standard, and I'm glad I did: it gives you the ability to control more of the flow of data. It's a given that if you have poor connectivity, you will be spending time supporting it, and it's very nice if it's your own code you are supporting rather than having to wait on a vendor. (Although kbmMW and RemObjects are known to be pretty good vendors.)
Personal note: 700 users in a document imaging application over poor connectivity sounds like a mess. Spend the money on upgrading connectivity, as it'll be cheaper in the long run.
Both kbmMW and the RO SDK offer a binary format, which is more compact than the SOAP format, especially when you are working with documents.
The RO SDK seems to offer more GUI tools to help you build your services.
Also give the RealThinClient SDK a look; it's a lightweight remoting framework.
But whatever framework you go with, your design will make it fast or slow. I have some applications working over slow 128 kb lines, and they work perfectly without any user complaints, but I don't do large file transfers.
One thing to remember: it's not the number of users but the number of them using the resources at the same time that will be the issue. Develop your application "server-stateless" if at all possible; this will allow greater flexibility in the long term if you find you have to add more servers to the pool to support your customer base. The hardest thing about n-tier is scaling beyond the first server, so plan for that from the start. Each request should not know anything about a prior request, or at the very least the request should have a way of passing the context so the server can look it up in a session table or something; see the sketch below.
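The pattern is language-agnostic; here is what the session-table idea can look like, sketched in Erlang (every name here is invented, and a real system would also expire stale tokens):

    %% Stateless request handling: the client sends a token with every
    %% request; the server looks the context up rather than remembering it.
    -module(session_demo).
    -export([init/0, handle_request/2]).

    init() ->
        ets:new(sessions, [named_table, public, set]).

    handle_request(Token, Request) ->
        Context = case ets:lookup(sessions, Token) of
                      [{Token, Ctx}] -> Ctx;
                      []             -> #{}   % unknown token: fresh context
                  end,
        {Reply, NewContext} = do_work(Request, Context),
        ets:insert(sessions, {Token, NewContext}),
        Reply.

    do_work(Request, Context) ->
        %% Placeholder business logic: count requests per session.
        N = maps:get(count, Context, 0) + 1,
        {{ok, Request, N}, Context#{count => N}}.

In a multi-server pool the session table would live in a shared store rather than node-local ETS, but the request flow stays the same: any server can serve any request.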
Personally, I would recommend RemObjects. I have used it with good results.
I don't know if it's the very best / most efficient (glad you asked this question!), but I've had good results with the RemObjects SDK + DataAbstract. The latter took care of many of the plumbing details, which was helpful. Still implementing, but so far so good.
If you really want to go low-bandwidth, use the BSD sockets API directly: that will give you full control over what's being sent, and you can send as little information as you want. Of course, then you'll have to implement all the tiers yourself, but hey, that's still an option :D
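To illustrate how small "as little as you want" can be: with raw sockets you define the wire format yourself. A sketch using Erlang's bit syntax, with a 4-byte message layout invented purely for illustration:

    %% Invented wire format: 16-bit document id, 8-bit opcode, 8-bit flags.
    %% Four bytes per message; every bit on the wire is your decision.
    -module(wire_demo).
    -export([encode/3, decode/1]).

    encode(DocId, Op, Flags) ->
        <<DocId:16, Op:8, Flags:8>>.

    decode(<<DocId:16, Op:8, Flags:8>>) ->
        {DocId, Op, Flags}.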