top showing multiple processes with the same name - htop

I'm trying to figure out why top (on CentOS) shows several programs with many processes of the same name rather than a single instance. I'd like to understand this clearly, and I hope someone here can give a good explanation.
Some programs I've seen with many processes in top: nscd, php-fpm, httpd, nginx.
Thank you,
Alex.

This is because multiple instances of the same binary/process are running in parallel. For the server programs you mention this is almost certainly deliberate. It can usually be reconfigured per service, but then you lose the advantages it brings, e.g. faster response times through concurrent handling of requests.
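For a concrete example, php-fpm sets its worker count per pool. The directives below are the standard ones from a stock pool config; the file path and values are only illustrative:

    ; /etc/php-fpm.d/www.conf (values illustrative)
    pm = dynamic                ; manage a dynamic pool of worker processes
    pm.max_children = 50        ; upper bound on simultaneous workers
    pm.start_servers = 5        ; workers forked at startup
    pm.min_spare_servers = 5    ; keep at least this many idle workers
    pm.max_spare_servers = 10   ; fork no more than this many idle workers

nginx and httpd have analogous settings (worker_processes and the MPM directives, respectively), which is why each of them also shows up as several identically named processes.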

Related

Can't generate more than ~8000 RPM from Locust

I'm using Locust to load test my web servers. I'm running Locust in distributed mode. The worker nodes are written in Java and use the Locust/Java port, locust4j. The master node and the worker nodes are containerized, and our orchestrator is Kubernetes. When I want to spin up more workers, I do it from there.
The problem that I'm running into is that no matter how many users I add, or worker nodes I add, I can't seem to generate more than ~8000 RPM. This is confirmed by the Locust web frontend, as well as the metrics I'm collecting from my web server.
Does anyone have any ideas why this is happening?
I've attached an image of the timings I've collected. The snapshots are from running the load test for 60 seconds; I'm timing it with a stopwatch.
The usual culprit in these kinds of situations is that your servers can't handle more than that. In my experience, the behavior you'll see client side as the servers get overwhelmed is a slow but steady increase in response times. This is one big reason why Locust includes those in the metrics it shows you.
Based on what I'm seeing in your screenshots, this is most likely the case for you. You have some very low minimum times, but your average, median, and 90th percentiles are a lot higher than your minimums, and your maximums are very significantly higher than those. Without seeing your charts I can't know for sure, but that's a big red flag.
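To put rough numbers on it (these figures are hypothetical, not taken from your screenshots): throughput is roughly concurrent users divided by average response time (Little's law). With, say, 100 simulated users and an average response time of 750 ms, you top out around 133 requests per second, i.e. about 8000 RPM, and adding more users or workers just lengthens the queue and pushes response times up instead of raising the throughput.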
For more things to look out for, check out this question in the FAQ (especially see the list of server stats to investigate):
https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps

Is Docker a possible solution to the Ruby GIL limitation in Rails?

This is just an idea, let me know if I'm missing anything or if it could be a good one.
It's common to have N Rails processes running on a single server/VM, but they can't perform at their best due to the GIL (Global Interpreter Lock).
Instead of running N processes inside a single server, I could run N containers, each with a single Rails process (each one running on a different port).
This way I should be able to execute more Rails processes in parallel, right?
I guess containers add overhead, but it could probably make sense anyway.
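For concreteness, the kind of layout being proposed might look roughly like this; the service names, image name, and ports are made up, and this assumes an image that already contains the app:

    # docker-compose.yml (illustrative only)
    version: "2"
    services:
      app1:
        image: my-rails-app                          # hypothetical pre-built image
        command: bundle exec rails server -p 3000
        ports:
          - "3001:3000"                              # each container published on its own host port
      app2:
        image: my-rails-app
        command: bundle exec rails server -p 3000
        ports:
          - "3002:3000"
    # ...and so on, one service per Rails process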
Any opinions?
Thank you
This would be far less efficient than running N processes. The simple reason here is that most process managers for Ruby on Rails use the "pre-fork" model where a large amount of code is loaded in before the processes are split off.
A forked process uses very little additional memory: the child inherits a near-exact copy of the parent. Anything it changes will incur more memory overhead, but things like the Rails library and other gems are not changed, so they come along essentially for free.
If you had multiple processes that are independent, each would need to load, parse, and initialize every Ruby class necessary to run Rails.
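A minimal sketch of that pre-fork idea in plain Ruby (this is not Unicorn or Puma themselves; the worker count and the library loaded are only stand-ins): the parent loads the expensive code once, then forks workers that share those memory pages copy-on-write.

    # prefork_sketch.rb -- illustrative only
    require "json"   # stand-in for loading Rails and all of its gems once, in the parent

    WORKERS = 4

    pids = WORKERS.times.map do |i|
      Process.fork do
        # Each child starts as a near-exact copy of the parent; memory pages are
        # shared with the parent until the child writes to them (copy-on-write).
        puts "worker #{i} (pid #{Process.pid}) would start accepting requests here"
        sleep 1      # stand-in for serving requests
      end
    end

    pids.each { |pid| Process.wait(pid) }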
This isn't to say the containerized approach is without merit, but it may necessitate a hybrid approach: N containers with M processes each.
Remember, if you're really having trouble with the GIL, just use JRuby, which doesn't have one.
This won't improve concurrency at all: the GIL applies to threads within a single process. Multiple processes can already execute concurrently - the pattern of having multiple rails processes arises because of the GIL.
As tadman says, you'll also potentially increase memory usage. You might be able to estimate it (assuming you deploy using Passenger), as the passenger-memory-stats tool lets you see RSS as well as private dirty RSS (i.e. memory that is resident but not shared with the parent process). If the non-shared memory is almost none, then you wouldn't waste any by following a non-fork model.
Containers are wonderful things, but what you've outlined isn't a reason to use them.

Erlang supervisor processes

I have been learning Erlang intensively, and after finishing 'Programming Erlang' by Joe Armstrong, there is one thing that I keep coming back to.
In my mind, a supervisor spawns one process per child handler, so each declared gen_server-type handler will run as a separate process.
What happens if you are building a tiny web server and you want each request to be its own process? Do you still conform to OTP principles and use a gen_server somehow (how?), or do you create your own behaviour?
How does Cowboy handle this, for example? Does it still use gen_server?
tl;dr: I find that trying to figure out the "correct" supervision structure at the beginning of a project is a form of premature optimization.
The "right" way to design your supervision tree depends on what the worker parts of your application are doing. In the case of a web server I would probably first explore something along the lines of:
top supervisor (singular)
    data service supervisor (one per service type)
        worker pool (all workers under the service sup)
    client connection supervisor (one)
        connection worker pool (or one per connection, have to play with it to decide)
    logical supervisor (as appropriate -- massive variance here, depending on problem domain)
        workers or supervisors (as appropriate -- have to explore/know the problem domain to have any idea how this should be structured)
So that's several workers per supervisor type at the lower level. I haven't used Cowboy so I don't know how it is organized. The point I'm trying to make is that while the mechanics of handling data services serving web pages are relatively trivial, the part of the system that actually does the core problem-solving work might not be and this is going to dictate everything interesting about the system.
It is a bad thing to have your problem-solving bits mixed in the same module as your web-displaying or connection handling bits. Ideally you should be able to use the same logic units in a native application, a web application and a networked service without any changes.
Ultimately the answer to whether you should have 1:1 supervisors to workers or 1:n depends on what you're doing and what restart strategy gives you the best balance among recovery to a known consistent state, latency felt by the user, and resource usage.
One of my favorite things about Erlang is that I can start with a naive supervisor structure like the one above, play with it until I see where it's not so good, and rather easily switch things around and experiment with alternatives without fundamentally altering my system much. (The same goes for playing with alternative data representations if you write proper abstractions around them.) So first, get something that works in testing. Then load it up and see if you can break it. Then start worrying about the details, after you understand where the problems actually are.
It is a common pattern in Erlang to spawn one server per client. You then use a supervisor with the simple_one_for_one strategy for the child servers. This lets you ask the supervisor to start a child on demand. Generally this is used when you don't know how many processes you will need, and when the processes are independent (a crash of one process should not impact the others).
There is very good information on the site learnyousomeerlang.com (the LYSE supervisors chapter); the whole site is worth reading.

How to improve performance of multiple Rails applications with nginx running on a single-core 378 MB VPS?

I have a VPS which is hosting (currently) 5 different rails applications, all with different domains. To make them work I've added one server {} listener per app in my nginx config file. I've left everything else as default, for instance there's only one nginx worker process.
Concurrently, I also have 2 rails workers for one of the apps.
Now, this works as is, but performance is low, in particular speed. How could I make my apps faster while keeping within these constraints?
Thanks!
Your problem is that you are deep into swap. The slowness you're experiencing when switching apps is the system loading the requested app back from swap into physical memory.
To address this, you can observe who is hogging the memory (also using 'top'), and address that. It's possible you'll find some things to tune, but also quite possible you'll find that you are near the physical limits of what's possible without significant architectural changes.
If your time is worth much, your best course of action will be to upgrade to an instance with at least 1 GB of memory, because you are already using nearly that much.
The nginx "worker_processes" should be set to the number of cores that you have available to work with. You mentioned you had it set to 1. Do you have more cores than that?
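If it turns out there is more than one core available, the change is a one-line directive in nginx.conf (the directive name is standard nginx; the value is illustrative):

    # nginx.conf
    worker_processes 2;    # match the number of CPU cores; newer nginx versions also accept 'auto'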

Are you using AWSDBProxy? Is there a performance hit when scaling out?

It seems that the only tutorials out there talking about using Amazon's SimpleDB in a rails site are using AWSDBProxy... Personally, I find this counter-intuitive to scaling out, considering the server layout of a typical Rails site below (using AWSDBProxy):
Plugin here: http://agilewebdevelopment.com/plugins/aws_sdb_proxy
Image here: http://www.freeimagehosting.net/uploads/91be4e0617.png
As you can see, even if we add more mongrels, we have two problems.
We have a single point of failure far less stable than our load balancer
We have to force all our information through this one WEBrick server
The solution is, of course, to add more AWSDBProxies... but why not then just use the following code in, say, a class, skipping the proxy altogether?
    require 'logger'
    require 'aws_sdb'   # assuming the aws-sdb gem, which provides AwsSdb::Service

    # Talk to SimpleDB directly instead of going through the WEBrick proxy
    service = AwsSdb::Service.new(Logger.new(nil),
                                  CONFIG['aws_access_key_id'],
                                  CONFIG['aws_secret_access_key'])
    service.query(domain, query)
So what I'm getting at is: if you are using AWSDBProxy, what are your justifications for it? And if you are indeed using it, what is your performance like? If you have hard numbers, that would be even more appreciated!
I'm not using it, nor have I ever heard of it, but these are what I would think are reasonable reasons:
You're running your main app server on EC2, so the chance of Internet FAIL doesn't really affect you more than once.
You run one proxy on each of your app servers, so its connection going down is no worse than its connection(s) to the database going down.
Because it can be done. This is as good a reason as any in an open source project. Sometimes it takes building a thing before you know whether said thing is a good/bad idea.
You don't have the traffic levels to need a load balancer. Then your diagram squashes down to a line, if not a single machine.
