What are the disadvantages of FastCGI for high-performance websites?

I have read comments that FastCGI is not a good choice for high-performance websites:
nginx, fastcgi and open sockets
What specifically makes it a poor choice? What other solutions (the thread above suggests shared memory, or building HTTP support directly into the application) might be superior, and why?

What is better: Using multiple channels or Having multiple conditions in single channel?

I am using Rails ActionCable.
I mainly have two options. One is to use multiple channels for different functionalities; the other is to use the same channel with multiple conditions to create the same functionality.
Which one is better while scaling up? What are the disadvantages of relying too much on websockets (Actioncable) while building applications?
Can someone refer me to a good article that explains WebSockets, Redis caching, and their effect as the application scales up?
Thanking you in anticipation of a positive response.
Although I think the question is a duplicate of "Multiple websocket channels, single ws object?", I will add a few specific ActionCable considerations just to clarify.
Which one is better while scaling up?
A single WebSocket connection is (usually) better when scaling up.
Servers have a limit on the number of connections they can handle, so every additional WebSocket connection per client consumes a scarce server resource.
For example, if each client requires 2 WebSocket connections instead of 1, the server's client capacity is effectively cut in half.
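To keep everything on a single connection, several functionalities can be multiplexed through one channel. Here is a minimal sketch, assuming a standard ActionCable setup where the connection identifies current_user; the channel name, stream key, and action names are hypothetical:

    # app/channels/app_channel.rb
    # One channel carrying several features over a single WebSocket
    # connection; the client picks a feature by performing a named action.
    class AppChannel < ApplicationCable::Channel
      def subscribed
        stream_from "app_#{current_user.id}"
      end

      # Called from the client as: subscription.perform("chat", { body: "hi" })
      def chat(data)
        ActionCable.server.broadcast("app_#{current_user.id}",
                                     event: "chat", body: data["body"])
      end

      # Same connection, different functionality.
      def notify(data)
        ActionCable.server.broadcast("app_#{current_user.id}",
                                     event: "notification", payload: data["payload"])
      end
    end

On the client side, subscribers switch on the event field of each broadcast, so one connection serves every feature and the per-client connection count stays at 1.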
What are the disadvantages of relying too much on websockets (Actioncable) while building applications?
Some machines run older browsers that don't support WebSockets. Also, WebSocket applications and clients are often harder to code, which translates to higher maintenance costs.
Having said that, WebSockets are a wonderful solution to issues that plagued web applications for ages and are superior to polling techniques.
All in all, I would argue that the disadvantages should be ignored since the advantages far outweigh the costs.
However, note that the current ActionCable implementation is quite slow.
In fact, one might argue that the implementation is so slow that polling would be better.
Comparing ActionCable to AnyCable, or to a server-side iodine WebSocket + pub/sub solution, immediately highlights that ActionCable should be replaced by other solutions until it is fixed.
Further reading:
I just started reading this article about Ruby WebSockets, Push and Pub/Sub, which seems very well written.
I also wrote an article about the main issues concerning Ruby implementations for WebSockets and how a server-side WebSocket solution could solve these issues. You can read it here.

Is Suave production-ready for web application development with traffic from millions of users?

We are a startup currently evaluating Suave with F# as our web application development framework. I am very enthusiastic about using the Suave framework for developing my applications.
I just want to know whether Suave is production-ready, whether any performance benchmarking has been done on it compared to OWIN for concurrent users, and how much user traffic the web server can handle.
Although this thread is now 8 months old, I wanted to share my experience using Suave as a web server.
First, measuring performance based on simple benchmarks won't tell you the truth about the overall performance of a more complicated system.
However, when using Suave, it's unlikely that it will be the bottleneck in your application.
It depends a lot more on the entire architecture, the sum of the mechanics between request and response, and implementation details (e.g. random access on F# lists is rather slow).
I have used Suave in 3 projects now, always with great success.
All of them made heavy use of parallelization and multi-threading.
Two of them were run directly by Suave behind an Nginx proxy; one used IIS.
Running under IIS did not have any measurable influence on the performance.
When I came across performance issues, Suave was never the place to look for them.
When utilizing the awesome concurrency and parallelization features of F#, your application will benefit from vertical scaling.
For example, I built an image processing service which performed rather badly on AWS, but great on a notebook with a quad-core Pentium processor.
But again, this has nothing to do with Suave.
Actually, it pretty much stays out of your way.
Suave itself is a great and solid choice. In about 2 years, I did not run into edge cases where Suave was the cause of trouble.
I have to mention that my experiences are based on simple web servers and services.
Suave was used for a fairly flat web layer serving RPC or REST APIs.
Other tasks, like streaming or soft real-time applications, might require another approach and might not be well suited to Suave.

How is Apache Thrift scalable?

On their website, Apache Thrift is introduced as a
software framework, for scalable cross-language services development...
but I couldn't find what makes it scalable. So my question is: what makes it scalable? Does just using Thrift make your application scalable? If not, how do I use Thrift in a scalable way?
"Scalability", in this context, means the ability to partition the application in as many or few pieces, using as few or as as many different processors, as necessary. The same app can be "built out" simply by adding hardware.
From the Thrift white paper:
https://thrift.apache.org/static/files/thrift-20070401.pdf
Thrift has enabled Facebook to build scalable backend services efficiently by enabling engineers to divide and conquer. Application developers can focus on application code without worrying about the sockets layer. We avoid duplicated work by writing buffering and I/O logic in one place, rather than interspersing it in each application.
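To make that "divide and conquer" split concrete, here is a minimal Ruby sketch using the thrift gem. It assumes a hypothetical user.thrift IDL containing service UserService { string get_name(1: i64 id) } has been compiled with thrift --gen rb; the handler is the only application code, while the socket and buffering layers come from the Thrift runtime:

    require "thrift"
    # Hypothetical generated code from `thrift --gen rb user.thrift`.
    require_relative "gen-rb/user_service"

    # The only part an application developer writes: the business logic.
    class UserServiceHandler
      def get_name(id)
        "user-#{id}"   # stand-in for a real lookup
      end
    end

    processor = UserService::Processor.new(UserServiceHandler.new)
    transport = Thrift::ServerSocket.new(9090)
    factory   = Thrift::BufferedTransportFactory.new
    # Sockets, buffering, and wire-format I/O are handled by the runtime.
    Thrift::SimpleServer.new(processor, transport, factory).serve

Because the handler sits behind a language-neutral interface, the same service can later move onto its own machines and be called from clients written in other languages, which is the "build out by adding hardware" part of the story.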

What is Erlang's secret to scalability?

Erlang is getting a reputation for being untouchable at handling a large volume of messages and requests. I haven't had time to download and try to get inside Mr. Erlang's understanding of switching theory... so I'm wondering if someone can teach me (or point to a good instructional site.)
Say as a thought-experiment I wanted to port the Erlang ejabberd to a combination of Python and C, in a way that gave me the same speed and scalability. What structures or patterns would I have to understand and implement? (Does Python's Twisted already do this?)
How/why do functional languages (specifically Erlang) scale well? (for discussion of why)
http://erlang.org/course/course.html (for a tutorial chain)
As far as porting to other languages goes, a message passing system would be easy to build in most modern languages. The functional style can be achieved in Python easily enough, although you wouldn't get Erlang's internal dispatching features "for free". Stackless Python can replicate much of Erlang's concurrency features, although I can't speak to the details as I haven't used it much. It does appear to be much more "explicit" (in that it requires you to define the concurrency in code, in places where Erlang's design lets concurrency happen internally).
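As an illustration of how little a toy message-passing layer takes, here is a Ruby sketch (class and method names are made up) that pairs a thread with a Queue as its mailbox. It mimics the messaging pattern only; it has none of Erlang's cheap process spawning, isolation, or supervision:

    # A toy Erlang-style "process": a thread draining a thread-safe mailbox.
    class Actor
      def initialize(&handler)
        @mailbox = Queue.new        # thread-safe FIFO; pop blocks when empty
        @thread = Thread.new do
          while (msg = @mailbox.pop) != :stop
            handler.call(msg)
          end
        end
      end

      def tell(msg)
        @mailbox << msg             # asynchronous send, like Erlang's !
      end

      def stop
        @mailbox << :stop
        @thread.join
      end
    end

    echo = Actor.new { |msg| puts "received: #{msg.inspect}" }
    echo.tell(hello: "world")
    echo.stop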
Erlang is not only about scalability; it is mostly about:
reliability
soft real-time characteristics (enabled by a soft real-time GC, which is possible because of immutability [no cycles], share-nothing design, and so on)
performance in concurrent tasks (cheap task switching, cheap process spawning, the actor model, ...)
scalability - debatable in its current state, but rapidly evolving (it handles about 32 cores well, which is better than most competitors, and it should get better in the near future)
Another Erlang feature that has an impact on scalability is its lightweight, cheap processes. Since processes have so little overhead, Erlang can spawn far more of them than most other languages. You get more bang for your buck with Erlang processes than many other languages give you.
I think the best fit for Erlang is network-bound applications: it makes communication between nodes much simpler, and things like heartbeat monitoring and automatic restart via supervisors are built into OTP.

Are there benchmarks comparing the respective memory usage of django, rails and PHP frameworks?

I have to run a Web server with many services on an embedded server with limited RAM (1 GB, no swap). There will be a maximum of 100 users. I will have services such as a forum, little games (javascript or flash), etc.
My team knows Ruby on Rails very well, but I am a bit worried about Rails' memory usage. I really do not want to start a troll here, but I am wondering if there are any serious (i.e. documented) benchmarks comparing Rails, Django, CakePHP or any other PHP framework?
Could you please point to benchmarks or give me your opinion about Rails' memory usage? Please please please no troll.
In terms of memory usage it's generally going to be Python > Ruby > PHP, which of course leads to Django > Rails > CakePHP. Not just memory but that also tends to hold for raw performance. EDIT: Also worth noting that there are, of course, no absolutes here. There are plenty of usage scenarios in which Ruby will beat Python, hands down. I think we can all agree that Ruby and Python will always beat PHP, though :)
Here's a straightforward 3-way benchmark (with Symfony on the PHP side of things) that bears out the above: http://wiki.rubyonrails.com/rails/pages/Framework+Performance. Though of course it's easy to find stats to support your own viewpoint :)
That said, it's still very easy to make a crappy, slow, and inefficient Django application and a lean, fast, and efficient Rails application, or vice-versa. Skill, knowledge, and expertise with the system you are using will do far more for its memory and performance footprint than just the framework itself. Database optimizations, server choices and architectures (Apache vs. proxy setups using nginx/lighttpd, etc.), and fundamental design decisions are likely going to overwhelm the framework's inherent characteristics pretty quickly.
So I guess what I'm saying is if your team knows Rails, and your expertise lies in Rails, I would stick with Rails.
I just stumbled upon this benchmark, which looks pretty good. It gives data about Rails' memory usage (and performance), but it only partially answers the question because it does not compare Rails with other frameworks.
http://www.rubyenterpriseedition.com/comparisons.html
My own experience is that Rails' memory usage can be high, especially on 64-bit machines (the minimum is around 95-100 MB with Thin as the web front-end). PHP tends to be used with different patterns, so it is a bit difficult to compare directly.
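For anyone who wants to check their own numbers rather than trust old benchmarks, the resident set size is easy to read from inside Ruby. A minimal sketch for Linux (it parses /proc, so it won't work on macOS or Windows):

    # Report this Ruby process's resident memory (Linux only).
    def rss_mb
      status = File.read("/proc/self/status")
      kb = status[/^VmRSS:\s+(\d+)\s+kB/, 1].to_f   # VmRSS is reported in kB
      kb / 1024.0
    end

    puts format("current RSS: %.1f MB", rss_mb)

Running it in a Rails console versus a bare irb session gives a rough picture of how much memory the loaded framework itself adds.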
