How much difference will using HTTP or TCP make?

I am making a game server for an iPad/iPhone application.
It is a two-player card game but there could be multiple games going on between any two players.
After going through a lot of forums and blogs, I decided to use a Node.js and MongoDB combination.
Now I am new to both but I have time to learn these things and I have a decent amount of experience in JS.
What I am not sure about is: given that my client side is an iOS/Objective-C stack, what would be the best approach among TCP, HTTP with REST, and WebSockets, considering:
Reliable libraries available.
complexity level
performance
In case you feel that I should not be using Node.js in the first place, please point me in the right direction, as I am yet to start.

If you're considering using iOS, WebSockets are a no-go -- I'm sure you don't want to make your whole game out of a single big UIWebView.
TCP: well, that's an interesting question. Plain TCP generally has less overhead than HTTP because no extra headers/etc. are required, but implementing your own protocol is a much greater challenge than should be necessary for writing a game, and you'll end up with the same pitfalls as HTTP with regard to speed/performance. Also, the BSD sockets API with which you do TCP networking on Unix is not obvious to use at first glance. However, if you decide to use TCP, here's my OO wrapper for the API: http://github.com/H2CO3/TCPHelper
HTTP: you should probably choose it. It has a great history, it's a very mature protocol, and there are quite a few high-quality C and Objective-C libraries out there for it. Cocoa (Touch) has the NSURL* family of Objective-C classes, and you also have libCURL for C.
On the server side, you might also want to prefer HTTP, as modern servers support it implicitly and automagically; you don't have to mess with the protocol to send a message, instead you simply say
<?php echo "Hello World"; ?>
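Since you mentioned Node.js for the server, the equivalent there is only slightly longer. Here's a minimal sketch using Node's built-in http module; the /move route and the JSON payload shape are invented purely for illustration:
import * as http from "http";
// Minimal HTTP endpoint sketch. The /move route and payload fields are
// placeholders for illustration, not part of any real API.
const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/move") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const move = JSON.parse(body); // e.g. { gameId, playerId, card }
      // ...validate the move and update the game state in MongoDB here...
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true, received: move }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});
server.listen(3000);
On the iOS side, the NSURL* classes can POST to an endpoint like that without any third-party dependencies.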
Again, if you want to dig deeper, you can use WebSockets on the server side if you decide to use plain TCP.
I hope this will help.

Related

How do I compile an existing C++ library using sockets to WASM?

I have some existing server code written in C++, Node.js and Python that I would like to port to WASM. This code makes heavy use of UDP/TCP sockets. Ideally with little to no modification of the code.
I know this type of question has been asked a number of times, and the answer is to convert them to websockets and use a bridge. However, this adds another moving part that I'd like to avoid if at all possible.
The code runs as a service on desktop, not in the browser. I know that there's a WASI proposal for sockets that is implemented by WasmEdge.
I'm quite new to WASM and still a little hazy on how it all fits together. Any advice on how to solve this would be greatly appreciated.
Thanks in advance!
I've tried two approaches and am unsure how best to proceed:
#1 Use WasmEdge - but I don't quite understand how to compile to WASM in a way that will take advantage of WasmEdge's socket support. For example, using Emscripten will always convert the sockets to WebSockets.
#2 Create a Node.js wrapper that handles the sockets directly and passes the contents of the buffers back and forth with the embedded WASM module. This isn't ideal as I'd have to modify the source code or at the least create a shim of some kind to replace networking.
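For reference, the kind of wrapper I have in mind for #2 looks roughly like this; the alloc and handle_packet exports are hypothetical placeholders for whatever interface the module would actually expose:
import * as dgram from "dgram";
import * as fs from "fs";
// Rough sketch of approach #2: a Node.js wrapper owns the UDP socket and shuttles
// datagrams in and out of an embedded WASM module. alloc/handle_packet are hypothetical.
async function main() {
  const wasmBytes = fs.readFileSync("./service.wasm");
  const { instance } = await WebAssembly.instantiate(wasmBytes, {});
  const { memory, alloc, handle_packet } = instance.exports as any;
  const socket = dgram.createSocket("udp4");
  socket.on("message", (msg, rinfo) => {
    // Copy the incoming datagram into WASM linear memory...
    const ptr = alloc(msg.length);
    new Uint8Array(memory.buffer, ptr, msg.length).set(msg);
    // ...let the module process it, then send its (hypothetical) reply back out.
    const replyLen = handle_packet(ptr, msg.length);
    const reply = Buffer.from(memory.buffer, ptr, replyLen);
    socket.send(reply, rinfo.port, rinfo.address);
  });
  socket.bind(41234);
}
main();
Even in this toy form you can see the shim I'd rather avoid: every socket interaction has to be mirrored on the JavaScript side.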

HTTP libraries in Elixir

This question is more about getting information on the different HTTP libraries in Elixir.
I have been using HTTPoison, which is a wrapper around Hackney, for a long time in all my Elixir projects. I read some stuff at http://big-elephants.com/2019-05/gun/ about Hackney:
Occasionally hackney could get stuck, so all the calls to HTTPoison would be hanging and blocking caller processes.
Let me ask with an example: I was using Poison in my project and I considered it to be the best JSON parser, then I saw Jason.
The primary reason is performance. Jason is about twice as fast and uses half the memory while being almost 100% functionally compatible with Poison (it covers all the features used directly in Phoenix).
I know the reason why I want to switch to Jason instead of Poison.
For HTTP, I can see there are Tesla, Mint and Gun. I just want to understand, in the same way, what the case is for switching from the famous option to, shall I say, the more accurate one.

Design a chat client in a website. Should I use XMPP?

I am going to design a website using Ruby on Rails, and one of the features I want to implement is chat functionality, where users can chat with other users/members of the website. What should I be using, or in other words start learning, in order to design something like that?
Is XMPP the answer? If so, I would be glad if someone could be a bit more descriptive about where to go from there and/or suggest some books. Thanks!
I mention XMPP because I know Facebook uses it and I plan to create something similar.
Protocols can become a pretty hairy topic; implementing a well-working protocol yourself can be pretty daunting without prior experience. Especially if it has to do with (near) real-time communication between several parties. If this is supposed to scale to any significant number of visitors, implementing this correctly can be pretty tough.
XMPP is a protocol that is already well established, has been shaken down, and already has many stable implementations. So when using it, you do not need to worry about designing or implementing the protocol anymore. For that reason, I'd really recommend it. It's also a rather easy protocol to understand, even if you will have to spend some time reading up on the basics in the beginning. Look no further than http://xmpp.org for documentation.
Setting up an XMPP server can be done in minutes, depending on your OS and the server you choose. The caveat is that if you want to customize the server at all, you will have to learn about the innards of it as well to some degree, which may or may not take some time.
The bottom line is: choosing XMPP and existing XMPP libraries and servers, you get 90% of the functionality for free and can concentrate on implementing your client. The question is: how much will you have to dig into the details of XMPP and the server, will this take longer than rolling your own protocol, and will your own protocol suit your needs in the long term as well as XMPP would?
You always have to think about how much you want to spend on implementing this.
If you go with XMPP, you will be able to run a standard XMPP chat server (outside of Rails) and should be able to use a JavaScript client with an XMPP-to-HTTP bridge.
A project that a quick Google search brings up for doing this is Strophe.
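To give you a feel for it, a Strophe connection over a BOSH (XMPP-over-HTTP) bridge looks roughly like this; the bind URL, JIDs and password are placeholders, and I'm assuming the strophe.js npm build that exports Strophe and $msg:
import { Strophe, $msg } from "strophe.js";
// Rough sketch: connect over BOSH and send a one-to-one chat message.
// The /http-bind URL, JIDs and password below are placeholders.
const connection = new Strophe.Connection("https://chat.example.com/http-bind");
connection.connect("alice@example.com", "secret", (status: number) => {
  if (status === Strophe.Status.CONNECTED) {
    connection.send(
      $msg({ to: "bob@example.com", type: "chat" }).c("body").t("Hello from the website!")
    );
  }
});
The server stays a stock XMPP server; the bridge is just an HTTP endpoint it exposes.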
But I'd argue that you should think long and hard about if XMPP really suits your needs and if you really want to go through all that trouble for a Chat.
Implementing your own is also not straightforward, especially when you are writing all the long-polling and signaling stuff yourself.
But it's not impossible and should give you a simple working solution in a couple of days.
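To illustrate the long-polling part (sketched here in Node.js rather than Rails, purely because it's short): a poll request is simply parked until the next message arrives.
import * as http from "http";
// Toy long-polling sketch: GET /poll is held open until someone POSTs to /send.
// Routes and payloads are invented; real code needs timeouts, per-room queues, etc.
let waiting: http.ServerResponse[] = [];
const server = http.createServer((req, res) => {
  if (req.method === "GET" && req.url === "/poll") {
    waiting.push(res); // park the response until a message shows up
  } else if (req.method === "POST" && req.url === "/send") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      for (const pending of waiting) {
        pending.writeHead(200, { "Content-Type": "text/plain" });
        pending.end(body); // deliver the new message to every parked poller
      }
      waiting = [];
      res.writeHead(204);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});
server.listen(3000);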
Doing the chat yourself in Rails will, however, require you to use an alternative database, since Rails can't hold data in memory between requests, and persisting chat data via ActiveRecord does not seem like a very scalable or good idea.
Using XMPP obviously has the benefit of your users being able to connect to your chat service using iChat, Jabber or any other XMPP client.

Should I make and implement a network protocol by hand or use a middleware (if so which)?

I have some data that I need to share between multiple services on multiple machines. Stuffing the data into a database or shuffling it over http won't work in this situation and ideally the different pieces of software will need to communicate with each other directly (or through one central coordinator that can send and receive).
Is it recommended to create and implement a network protocol or use some tool to do the communication?
If I did go the route of creating a protocol myself, it wouldn't have to be very complex: under 10 different message types, but it would have to be re-implemented in a few different languages for this project, and support Unicode. I have read plenty (and done some work) on handling sockets, but don't have much knowledge of implementing a protocol of my own. Are there any good resources on this?
There are also things like ICE and RPC that look interesting. The limit of my experience is using ICE and XMLRPC for a few days each. Is this the better route to go? If so, what tools are out there?
Recently I've been using Google Protocol Buffers for encoding and shipping data between different machines running software written in different languages. It is quite easy to do, and takes away a lot of the hassle of designing a custom protocol.
Without knowing what technologies and platforms you are dealing with, it's difficult to give you a very specific answer - so I'll try to give you some general feedback.
If the system(s) you are wishing to connect span more than a single platform and/or technology, you are probably better off using an existing transport mechanism and protocol to maximize the chance your base platform will already have a library (or several) to interact over it. Also, integrating security and other features in a stack with known behaviors is more likely to be documented (with examples floating around). RPC (and ICE, though I have less familiarity with it) has some useful capabilities, but it also requires a lot of control over the environment, and security can be convoluted (particularly if you are passing objects between different languages).
With regard to avoiding polling, this is a performance-related issue; there are design patterns which can help you handle such things if you understand how you need the system to work (e.g. the observer pattern - kind of a don't-call-us-we'll-call-you approach). The network environment you are playing in will dictate which options are actually viable (e.g. a local LAN will have different considerations from something which runs over a WAN or the internet). Factors like firewall tunneling, VPN traversal, etc. should play a part in your final selected technology profile.
The only other major consideration (that I can think of just now... ;-)) would be the type of data you need to pass around. Is it just text, or do you need to stream binary objects? Would an encoding format (like XML, JSON or BSON) do the trick? You mention "less than ten message types" as part of the question, but is that the only information which would ever need to be communicated by the system?
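To make the encoding question concrete, about the smallest hand-rolled protocol you could get away with is a 4-byte length prefix followed by a UTF-8 JSON body; here's a rough sketch in Node.js, with an invented message shape:
import * as net from "net";
// Minimal framing sketch: 4-byte big-endian length prefix + UTF-8 JSON body.
// The { type, payload } message shape is made up for illustration.
function encodeMessage(msg: object): Buffer {
  const body = Buffer.from(JSON.stringify(msg), "utf8"); // UTF-8 covers the Unicode requirement
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  return Buffer.concat([header, body]);
}
const server = net.createServer((socket) => {
  let pending = Buffer.alloc(0);
  socket.on("data", (chunk) => {
    pending = Buffer.concat([pending, chunk]);
    // Peel off complete frames; a partial frame stays buffered for the next chunk.
    while (pending.length >= 4) {
      const len = pending.readUInt32BE(0);
      if (pending.length < 4 + len) break;
      const msg = JSON.parse(pending.subarray(4, 4 + len).toString("utf8"));
      console.log("received", msg);
      pending = pending.subarray(4 + len);
    }
  });
});
server.listen(9000, () => {
  const client = net.connect(9000, "127.0.0.1", () => {
    client.end(encodeMessage({ type: "status", payload: "héllo" }));
  });
});
Even this toy version shows where the effort goes: framing, partial reads, and keeping every language's implementation in sync - which is exactly what Protocol Buffers and friends take off your hands.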
Either way, unless the overhead of existing protocols is unacceptable, you're better off leveraging established work 99% of the time. Creativity is great - but commercial projects usually benefit from well-known behaviors, even if they're not the coolest or slickest (kind of the "as long as it works..." approach).
hth!

What weaknesses can be found in using Erlang?

I am considering Erlang as a potential for my upcoming project. I need a "highly scalable, highly reliable" (duh, what project doesn't?) web server to accept HTTP requests, but not really serve up HTML. We have thousands of distributed clients (other systems, not users) that will be submitting binary data to a central cluster of servers for offline processing. Responses would be very short: success, fail, error code, minimal data. We want to use HTTP since it is our best chance of traversing firewalls.
Given this limited information about the project, can you provide any weaknesses that might pop up using a technology like Erlang? For instance, I understand Erlang's text processing capabilities might leave something to be desired.
Your comments are appreciated.
Thanks.
This sounds like a perfect candidate for a language like Erlang. The scaling properties of the language are very good, but if you're worried about the data processing abilities, you shouldn't be. It's a very powerful language, with many libraries available for developers. It's an old language, and it's been heavily used/tested in the past, so everything you want to do has probably already been done to some degree.
Make sure you use Erlang version R11B5 or newer! Earlier versions of Erlang did not provide the ability to time out TCP sends. This results in stalled or malicious clients being able to execute a DoS attack on your application by refusing to recv data you send them, thus locking up the sending process.
See issue OTP-6684 from R11B5's release notes.
With Erlang, the scalability and reliability are there, but your project definition doesn't outline what type of text processing you will need.
I think Erlang's main limitation might be finding experienced developers in your area. Do some research on the availability of Erlang architects and coders.
If you are going to teach yourself or have your developers learn it on the job keep in mind that it is a very different way of coding and that while the core documentation is good a lot of people do wish there were more examples. Of course the very active community easily makes up for that.
I understand Erlang's text processing capabilities might leave something to be desired.
The Starling project already provides basic Unicode support, and there is an EEP (Erlang Enhancement Proposal) currently in draft to bring it into mainstream Erlang/OTP support.
I encountered some problems with Redis read performance from Erlang. Here is my question. I tend to think the reason is the Erlang-written module, which has trouble processing tons of strings during communication with Redis.
