I use Rails for small applications, but I'm not at all an expert. I'm hosting them on a DigitalOcean server with 512MB of RAM, which seems to be insufficient.
I was wondering what the Ruby on Rails server requirements (in terms of RAM) are for a single app.
Besides that, how can I measure whether my server is able to support the number of applications running on it?
Many thanks
It depends on how much traffic you need to handle. We have two machines (32GB RAM each; usage below) with 32 Unicorn workers to serve one app with loads of traffic, and we have one machine with lots of two-worker apps that get very little traffic.
We also have to consider the database (which needs the most RAM by far in our case, due to the big caches we granted it). And on top of all that, *nix caches the filesystem in otherwise unused RAM.
Conclusion: it is very hard to say without knowing what sort of traffic you expect.
Our memory usage on one of the two servers for the big app: https://gist.github.com/2called-chaos/bc2710744374f6e4a8e9b2d8c45b91cf
The output is from a little ruby script I made called unistat: https://gist.github.com/2called-chaos/50fc5412b34aea335fe9
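To get a rough answer to "can this box hold another app?", I usually just sum the resident memory of one app's workers and compare it against total RAM. A minimal sketch in Ruby (assuming Unicorn on a Linux host; the process name match is an assumption you would adapt to your setup):

    # Rough sketch: sum the resident set size (RSS) of all Unicorn workers for one app.
    lines    = `ps -eo rss,command`.lines.select { |l| l.include?('unicorn worker') }
    total_kb = lines.map { |l| l.split.first.to_i }.inject(0, :+)
    puts "#{lines.size} workers, ~#{total_kb / 1024} MB RSS in total"

Multiply that per-app figure by the number of apps, leave headroom for the database and the OS, and you get a feel for whether 512MB is enough.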
Related
I have a single-page Angular app that makes requests to a Rails API service. Both are running on a t2.2xlarge Ubuntu instance. I am using a Postgres database.
We had an increase in traffic, and my Rails API became slow. Sometimes I get an error saying the Passenger queue is full for the Rails application.
Auto scaling on the server is working; three more instances are created. But I cannot trace this issue. I need root access to upgrade, which I do not have. Please help me with this.
You mentioned that you are using a t2.2xlarge instance. First, I want to point out that you should not use the T2 instance family for a production environment, because T2 instances run on CPU credits. Let's take a look at what the AWS documentation says:
What happens if I use all of my credits?
If your instance uses all of its CPU credit balance, performance remains at the baseline performance level. If your instance is running low on credits, your instance's CPU credit consumption (and therefore CPU performance) is gradually lowered to the base performance level over a 15-minute interval, so you will not experience a sharp performance drop-off when your CPU credits are depleted. If your instance consistently uses all of its CPU credit balance, we recommend a larger T2 size or a fixed performance instance type such as M3 or C3.
I'm not sure you are actually running out of CPU credits, since you are using a 2xlarge size, but I would still consider one of the fixed-performance instance types. So the instance's performance may be one part of your problem. Use CloudWatch to monitor two metrics, CPUCreditUsage and CPUCreditBalance, to confirm whether this is the issue.
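If you prefer to pull those metrics from code rather than the CloudWatch console, here is a minimal sketch using the aws-sdk-cloudwatch gem; the region and instance id are placeholders:

    require 'aws-sdk-cloudwatch'   # gem install aws-sdk-cloudwatch

    cw = Aws::CloudWatch::Client.new(region: 'us-east-1')   # placeholder region
    resp = cw.get_metric_statistics(
      namespace:   'AWS/EC2',
      metric_name: 'CPUCreditBalance',                      # also check CPUCreditUsage
      dimensions:  [{ name: 'InstanceId', value: 'i-0123456789abcdef0' }],  # placeholder id
      start_time:  Time.now - 3600,
      end_time:    Time.now,
      period:      300,
      statistics:  ['Average']
    )
    resp.datapoints.sort_by(&:timestamp).each do |dp|
      puts "#{dp.timestamp}  average credit balance: #{dp.average}"
    end

A balance that keeps trending toward zero under normal load is a good sign you need a fixed-performance instance type.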
Secondly, how about your Auto Scaling group? After the scale-out, did your service become stable? If so, I don't think you need to worry about this problem any more, because the ASG did its job.
Please check the following:
If you are opening a database connection manually, make sure you close it (see the sketch after this list).
If you are using jQuery, Bootstrap, DataTables, or other CSS/JS libraries, use CDN links such as
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bootstrap-select/1.12.4/css/bootstrap-select.min.css">
This will take a great deal of load off your server. Do not copy jQuery or other external libraries onto your own server when you can fetch them directly from a CDN.
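For the first point, the usual culprit in a Rails app is work done outside the normal request cycle (threads, rake tasks, scripts) that checks out connections and never returns them. A rough sketch of the safe pattern with ActiveRecord (the query is just a placeholder):

    # Connections checked out inside the block are returned to the pool automatically.
    Thread.new do
      ActiveRecord::Base.connection_pool.with_connection do |conn|
        conn.execute('SELECT 1')   # placeholder query
      end
    end

    # Or, after ad-hoc work outside a request:
    ActiveRecord::Base.clear_active_connections!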
There are a number of factors that can cause an EC2 instance (or any system) to appear to run slowly.
CPU Usage. The higher the CPU usage, the longer it takes to process new threads and processes.
Free Memory. Your system needs free memory to process threads, create new processes, etc. How much free memory do you have?
Free Disk Space. Operating systems tend to thrash when the file systems on system drives run low on free disk space. How much free disk space do you have? (A quick way to check both is sketched after this list.)
Network Bandwidth. What is the average bytes in / out for your instance?
Database. Monitor connections, free memory, disk bandwidth, etc.
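For a quick look at the free-memory and free-disk questions above, something like this works on a Linux instance (a rough sketch that reads /proc/meminfo and shells out to df):

    mem_kb = File.read('/proc/meminfo')[/MemAvailable:\s+(\d+)/, 1].to_i
    root   = `df -h /`.lines.last.split
    puts "Available memory: #{mem_kb / 1024} MB"
    puts "Root filesystem:  #{root[3]} free of #{root[1]}"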
Amazon has CloudWatch which can provide you with monitoring for everything except for free disk space (you can add an agent to your instance for this metric). This will also help you quickly see what is happening with your instances.
Monitor your EC2 instances and your database.
You mention T2 instances. These are burstable-CPU instances, which means that if you have consistently high CPU usage, you will want to switch to fixed performance EC2 instances. CloudWatch should help you figure out what you need (CPU, memory, disk, or network performance).
This is largely independent of the AWS server. It looks like your software needs more resources (RAM, storage I/O, network) than a single machine can provide. You need to evaluate the metrics using CloudWatch and adjust what you provision based on what the software actually requires.
Memory leaks or runaway processes could also lead to this. You may need to create a cluster or server farm to handle the load.
Hope it helps.
I'm building a system that manages home HVAC and uses a Raspberry Pi running Ruby on Rails 3.2 with Ruby 2.0. Nearly all web content is dynamic. Scalability is not important, as it's only used within a home network and by only one or two simultaneous users. What's important is using minimal memory. Reliability is also very important. Fast is good. Does it make sense to just use the default Webrick server in production, or is there value in fronting it with say Apache or Cherokee and/or replacing it with Passenger or Puma or something else?
Don't use Webrick in production environments. It lacks support for handling concurrent requests and is optimized for development. Putting a full-blown web server like Apache in front of it would not help you reduce memory usage.
I recommend using Thin in your scenario. It's a fast and lightweight web server written in Ruby.
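A minimal way to try it, as far as I recall the gem and CLI (double-check against the Thin docs):

    # Gemfile
    gem 'thin'

    # then, from the app root:
    #   bundle install
    #   bundle exec thin start -e production -p 3000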
I have a problem with Membase being very slow in my environment.
I am running several production servers (Passenger) on Rails 2.3.10 / Ruby 1.8.7.
Those servers communicate with two Membase machines in a cluster.
The Membase machines each have 64GB of memory, a 100GB EBS volume attached, and 1GB of swap.
My problem is that Membase response times are VERY slow; it is actually the slowest part of the whole application lifecycle right now.
My question is: why?
The Rails gem I am using is memcache-northscale.
The Membase server is 1.7.1 (latest).
The cluster is doing between 2K and 7K ops per second.
The response time from Membase (based on NewRelic) is 250ms on average, which is HUGE and unreasonable.
Does anybody know why this is happening?
What can I do in order to improve this time?
It's hard to immediately say with the data at hand, but I think I have a few things you may wish to dig into to narrow down where the issue may be.
First of all, do your stats with membase show a significant number of background fetches? This is in the Web UI statistics for "disk reads per second". If so, that's the likely culprit for the higher latencies.
You can read more about the statistics and sizing in the manual, particularly the sections on statistics and cluster design considerations.
Second, you're reporting 250ms on average. Is this a sliding average, or overall? Do you have something like 90th or 99th percentile latencies? Some outlying disk fetches can give you a large average, when most requests (for example, those served from RAM that don't need disk fetches) are actually quite speedy.
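If you don't have percentile numbers handy, you can get a crude picture from the client side. A rough sketch against the memcached-compatible interface (the require, host, and key are assumptions/placeholders):

    require 'memcache'                                  # memcache-client style gem (assumption)

    cache   = MemCache.new(['membase-host:11211'])      # placeholder host
    samples = Array.new(1000) do
      start = Time.now
      cache.get('some_existing_key')                    # placeholder key
      (Time.now - start) * 1000.0                       # milliseconds
    end.sort

    avg = samples.inject(:+) / samples.size
    puts format('avg %.2f ms, p99 %.2f ms', avg, samples[(samples.size * 0.99).floor])

If the p99 is far above the average, outlying disk fetches are the likely cause; if the average itself is uniformly high, look at the network path between clients and servers.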
Are your systems spread throughout availability zones? What kind of instances are you using? Are the clients and servers in the same Amazon AWS region? I suspect the answer may be "yes" to the first, which means about 1.5ms overhead when using xlarge instances from recent measurements. This can matter if you're doing a lot of fetches synchronously and in serial in a given method.
I expect it's all in one region, but it's worth double checking since those latencies sound like WAN latencies.
Finally, there is an updated Ruby gem, backwards compatible with Fauna; Couchbase, Inc. has been working to contribute its changes back to Fauna upstream. If possible, you may want to try the gem referenced here:
http://www.couchbase.org/code/couchbase/ruby/2.0.0
You will also want to look at running Moxi on the client side. To access Membase, you need to go through a proxy (called Moxi). By default, it's installed on the server, which means you might make a request to a server that doesn't actually have the key; Moxi will go and get it... but then you're doubling the network traffic.
Installing Moxi on the client-side will eliminate this extra network traffic: http://www.couchbase.org/wiki/display/membase/Moxi
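With Moxi running locally on each app server, you would then point the client at the local proxy instead of the Membase nodes directly. Roughly, in Rails 2.3 terms (the port is Moxi's memcached-compatible default as an assumption):

    # config/environments/production.rb
    config.cache_store = :mem_cache_store, '127.0.0.1:11211'   # local Moxi, not the remote nodes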
Perry
Can I set up a replica set in MongoDB 1.8 using servers with different amounts of RAM?
server1: 5gb
server2: 2gb
server3: 4gb
If yes, what are the pros and cons?
No, you do not need equal RAM. (Yes, you could set up a replica set as described.)
MongoDB uses memory-mapped files for all caching, which means that cache paging is handled by the operating system. The replicas with more memory will keep more of the database in memory; those with less will page more to disk.
MongoDB will eventually bring the entire database into memory if it can. If you're using two replicas for reads and one for writes, you might want to use the 5gb and 4gb machines for reads, so they are more likely to be hitting RAM.
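If you go that route, the Ruby driver lets you express a read preference so reads land on secondaries. A rough sketch with the modern mongo gem (hostnames, set name, and database are placeholders; targeting specific members would additionally need tag sets, which I'm omitting):

    require 'mongo'

    client = Mongo::Client.new(
      ['server1:27017', 'server2:27017', 'server3:27017'],   # placeholder hosts
      replica_set: 'rs0',                                    # placeholder set name
      database:    'mydb',                                   # placeholder database
      read:        { mode: :secondary_preferred }            # prefer secondaries for reads
    )
    client[:articles].find(published: true).first            # placeholder query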
Yes, you can configure a replica set this way.
If yes, what are the pros and cons?
Here's a doc explaining the major features of replica sets. Let's take a look at these in light of the RAM differences.
Pros:
More computers means better data redundancy. Having that 2GB node at least means that you have one more copy of the data.
Having a full 3 nodes on a replica set makes it easier to take one down for maintenance.
Cons:
Having servers of different sizes isn't great for automated failover. Let's say that your 5GB server is the primary. What happens when it goes down and the 2GB server wins the election? You still have automated fail-over, but your performance has probably dropped dramatically.
Read scaling may not work very well. Depending on your read patterns, sending reads to the 2GB server may result in lots of extra disk hits and slower performance.
So the big problem here is really one of performance. If you're just doing this for a dev setup, then it will basically work. But in production you run the risk of completely tanking your app. If your app is used to living on 4GB+ of RAM and then suddenly drops to 2GB, it may become unusable.
Most production setups want to fail over to another "equally-powered" computer.
I have been working on my [first] startup for a month now, and while it's probably at least one more month away from an alpha release, I want to know how to deploy it the right way. The site will place a high initial load (network + CPU) on the server for each new user, so I am thinking of having a separate server/queue for this initial process, so that it doesn't slow down the site for existing users.
Based on my research so far, I am currently leaning towards nginx + haproxy + unicorn/thin + memcached + mysql, and deploying on Linode. However, I have no prior experience in any of the above; hence I am hoping to tap the community's experience.
Does the above architecture seem reasonable? Any suggestions/articles/books that you would recommend?
Is Linode a good choice? Heroku/EY seem too expensive for me (at least until I have enough revenue), but am I missing some other better option? MediaTemple?
Any good suggestions on the load balancing architecture? I am still reading up on this.
Is it better to have 2 separate Rails server instances on 2 separate Linodes, or to run 1 instance on a Linode of twice the capacity (in terms of RAM/storage/bandwidth)? How many Linodes should I start with?
Which Linux distribution should I choose? (Linode offers 8 different ones - http://www.linode.com/faq.cfm) Are there any relative advantages/disadvantages between them for a Rails site?
I apologize if any of my questions are stupid or contradictory; please attribute it to my inexperience.
Architecture
You're on the right track. I personally prefer Passenger over thin/unicorn (having run nginx to thin backends for a long while) just for the convenience, but your proposed setup is fairly standard. If you're on Ruby 1.8.7, I'd recommend that you consider REE + Passenger for framework memory savings, though.
Hosting & Load Balancing
Linode is fantastic, and I use them for just about everything I can, but you will need to be aware of RAM limits. Each Rails process uses a nontrivial amount of RAM, and you'll want to avoid getting the machine into swap. Plan on running enough Rails instances per machine so that your memory allocation is about 90% of the memory on the Linode. You'll likely want another Linode dedicated to your database, though you can start with them both on the same machine; just be prepared to split off MySQL as you grow. You can set up communications between Linodes in the same data center on private IPs, which don't count against your bandwidth quota.
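To make the "about 90% of the Linode's memory" rule concrete with the unicorn option you listed, the worker count lives in Unicorn's Ruby config file. A rough sketch (the numbers are assumptions; size them against your measured per-process RSS):

    # config/unicorn.rb
    # e.g. a 512MB Linode with ~100MB per Rails process has room for roughly 4 workers
    # once you leave headroom for the OS (and MySQL, if it shares the box).
    worker_processes 4
    preload_app true                        # share framework memory via copy-on-write
    listen '/tmp/unicorn.sock', backlog: 64
    timeout 30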
Your scaling strategy should be as horizontal as possible, so I'd recommend just getting a second Linode and adding it to your haproxy pool when you need more horsepower. Linode charges you $20 for 512MB more RAM, or you can get a whole 'nother Linode (with its own CPU, RAM, HDD, and bandwidth quota) for that same $20 - seems like a no-brainer! In Rails' case, an instance is an instance is an instance, so it really doesn't matter whether it's on the same VM or not, as long as the time to connect to your database machine and so on is more or less the same. You could be running 10 Linodes each with 10 Rails processes apiece without much of an issue. Linode also offers IP failover, so that if your primary Linode (with haproxy) goes down, it can fail over automatically to a secondary Linode, which you would have haproxy running on, ready to act in the same capacity as the first.
Distribution
Honestly, this is up to you! Many folks go with Ubuntu or Redhat (CentOS/Fedora) distros - I like CentOS myself - but it's really just about what you feel most comfortable with. If you don't have a favorite distro, I would recommend trying Ubuntu/CentOS, as they tend to be quite friendly to the beginner, and have extremely robust community support.
You will probably want to pick a 32-bit distro unless you have a compelling reason to pick a 64-bit distro; 64-bit executables require more RAM than their 32-bit counterparts, and since RAM is likely to be your most precious resource, it makes sense to save it where you can.