Server runs out of RAM - How to make it more efficient?

My website seems to go down for up to an hour almost every day. I don't know much about servers so would appreciate advice on what to do next.
The website (3dsbuzz.com) has a WordPress section, a wiki, a gallery, and a vBulletin forum, each with its own custom code and plug-ins, so there are lots of potential places that could be inefficiently coded. It runs on a cloud server with 2GB of RAM. The MySQL database is 370MB. We get around 30,000 page views per day.
What would be the best way to reduce the amount of downtime? Should I upgrade my server (I can't really afford to), or is 2GB reasonable? I have plenty of errors in my error log, but they don't always occur around the same times as the server going down, so I'm not sure how relevant they are.

One approach is to disable each component one by one until the server no longer crashes.
Then you know where to look and can investigate further.
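To narrow it down faster, it also helps to log memory usage over time so you can line the snapshots up with the outage windows. A rough Ruby sketch, assuming a Linux host with `free` and `ps` available (the log path is arbitrary):

    # memlog.rb - append a memory snapshot and the top memory consumers
    # every minute; compare the timestamps against your downtime windows.
    loop do
      File.open("/var/log/memlog.txt", "a") do |f|
        f.puts "---- #{Time.now} ----"
        f.puts `free -m`                          # overall RAM and swap usage
        f.puts `ps aux --sort=-%mem | head -n 6`  # top 5 processes by memory
      end
      sleep 60
    end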

Related

How do I know if I need more instances?

At the moment my setup is the following: I have a WordPress website running on a LAMP stack on a micro instance, with CloudFront in front. Recently this website has been getting quite a lot of traffic; think 100-150k page views a day, with 150-200 simultaneous visitors at peak times.
At the moment both the database and all the static files are on the server itself, but that's really not an issue as they're served through CloudFront anyway. I could move the database to an RDS instance and the static files to S3, and then set up a scaling stack. But before going into this I would like to know whether this is a solution to anything, or even whether I have a problem at all. The website seems slightly sluggish, but I'm not even sure where that's coming from.
So my question is: how do I determine if I even have a problem? Is the "sluggishness" just my imagination? If I do have a problem, it's not something very obvious, but when the number of visitors becomes large, even a seemingly imperceptible slowdown might hurt your metrics.
In my experience, first you need to monitor your web application's performance with a tool such as New Relic. Then add a feedback option to your site, so your users can tell you when the site feels slow to them.
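To make that first step concrete, here is a minimal Ruby sketch that replaces gut feeling with numbers: it times repeated requests to one of your pages (the URL is a placeholder) and prints the median and worst case.

    require "net/http"
    require "uri"

    uri = URI("https://example.com/")  # swap in a real page on your site
    samples = 20.times.map do
      started = Time.now
      Net::HTTP.get_response(uri)     # full round trip, uncached by the client
      Time.now - started
    end.sort

    puts "median: #{(samples[samples.size / 2] * 1000).round} ms"
    puts "worst:  #{(samples.last * 1000).round} ms"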

Scaling to support a massive amount of traffic in a short period of time

Until now, our site has had a modest amount of traffic. None of our developers are big ops guys, but we've stayed ahead of it and kept the site up and running fast. That said, our dev team is stretched, we've accumulated some technical debt, and there's plenty of opportunity to optimize.
Without getting into specifics, we just found out that we'll be expecting a massive amount of traffic in the near future, in a very short period of time. On the order of several million hits in a few hours. Scaling is one thing, but this is several orders of magnitude greater than what we're seeing now.
We're a Rails app hosted on S3 using ELB, and Postgresql.
I wanted to field some recommendations for broad starting points for scaling and load testing given this situation.
Update: Sorry, EC2, late night :)
Pretty interesting question; let me answer you in detail. I hope you are talking about an e-commerce application, since enterprise and B2B apps don't usually see spikes like that. Since you mentioned that you've hosted your Rails app on S3, let me make a couple of things clear.
1) You can't host a Rails app on S3. S3 is the Simple Storage Service, where you can only store files.
2) I guess you've actually hosted your Rails app on AWS EC2, with an Elastic Load Balancer attached in front of the EC2 instances, which is pretty good.
3) You have a self-managed PostgreSQL database deployed on an EC2 instance.
If you are running on AWS you are halfway safe, and you can easily scale up and scale down.
I can see one problem in your present model: your DB. AWS has a DB-as-a-service offering called the Relational Database Service (RDS), which supports MySQL, Oracle, and MS SQL Server.
RDS comes with a lot of features, such as automatic backups of your database and high IOPS.
But it doesn't support your PostgreSQL. You'll need to run and manage PostgreSQL on a self-managed EC2 instance; just make sure it's fail-safe and that you have a proper backup and restore system in place.
AWS provides an auto scaling API and command line tools, which are pretty easy to use.
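For illustration, a hedged sketch of bumping a group's capacity from Ruby with the aws-sdk gem (the group name and region are placeholders, and the exact client class depends on your SDK version):

    require "aws-sdk-autoscaling"

    autoscaling = Aws::AutoScaling::Client.new(region: "us-east-1")

    # Scale out ahead of the expected spike instead of waiting for alarms.
    autoscaling.set_desired_capacity(
      auto_scaling_group_name: "web-asg",  # your Auto Scaling group
      desired_capacity: 10,
      honor_cooldown: false
    )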
You don't have to worry about bandwidth issues and the like, though I agree with Angelo's answer too.
You can use ElastiCache for caching your app, and a CDN if you need to speed it up. RDS can manage up to 30,000 IOPS; it's a monster and will do a lot of work for you.
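On the ElastiCache point, Rails can aim its cache store at a memcached-compatible endpoint. A minimal sketch (the endpoint is a placeholder for your cluster's address; depending on your Rails version you may need the dalli gem and its :dalli_store instead):

    # config/environments/production.rb
    config.cache_store = :mem_cache_store,
      "my-cluster.abc123.cfg.use1.cache.amazonaws.com:11211",
      { namespace: "myapp", expires_in: 1.hour, compress: true }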
Feel free to ask me if you need any kind of help.
(Disclaimer: I am a senior DevOps engineer working for an e-commerce company that uses Ruby on Rails.)
Congratulations and I hope your expectation pans out!!
This is such a difficult question to answer comprehensively given the available information. For example, is your site heavy on DB reads, writes, or both (and is your sharding/replication strategy in line with your DB strain)? Is bandwidth an issue? And so on. The obvious points would be making sure you have access to the appropriate hardware, and that your recipes for whatever you use to provision/deploy your hardware are up to date and good to go. You can often throw hardware at a sudden spike in traffic until you can get to the root of whatever bottlenecks you discover (and yes, you will discover them at inconvenient times!).
Regarding scaling your app, you should at least:
1) Cache whatever you can. Pay attention to cache expiration, etc.
2) Be sure your DB has appropriate indexes set up (essentially, you should have an index on any field you're searching on).
3) Watch your logs closely to identify potential long queries, N+1 queries, long view renders, etc. (points 1-3 are illustrated in the sketch after this list).
4) Do things like what Shopify outlines in this post: http://www.shopify.com/technology/7535298-what-does-your-webserver-do-when-a-user-hits-refresh#axzz2O0gJDotV
5) Set up a good monitoring system (Monit, God, etc) for each layer of your stack - sudden spikes in traffic can quickly bottleneck your application in unexpected places and lead to more issues. The cascade can happen quickly.
6) Set up cron to automate all those little tasks you currently do manually...that you will probably forget about doing once you're dealing with traffic spikes.
7) Google scaling rails and you'll see tons of good info.
8) etc, etc, etc...
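A minimal Rails sketch of points 1-3 (the Post/Comment models and column names are hypothetical):

    # 2) Index the columns you search and join on, via a migration
    #    (newer Rails versions want ActiveRecord::Migration[x.y]).
    class AddScalingIndexes < ActiveRecord::Migration
      def change
        add_index :comments, :post_id    # foreign key used in joins
        add_index :posts, :published_at  # column used in WHERE / ORDER BY
      end
    end

    # 3) Avoid N+1 queries by eager-loading associations.
    # Post.all.each { |p| p.comments.size }   # bad: 1 + N queries
    posts = Post.includes(:comments).where("published_at > ?", 1.week.ago)

    # 1) Cache expensive values, with an explicit expiry.
    stats = Rails.cache.fetch("homepage/stats", expires_in: 10.minutes) do
      { posts: Post.count, comments: Comment.count }  # recomputed on cache miss
    end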
You can use profiling tools (rubyperf, or something like New Relic, etc.), but whatever numbers you get from them are best treated as a rough baseline. The simple reason is that profiling depends on your hardware stack, which will certainly change depending on actual traffic patterns. Predicting behavior is pretty easy if you have a site with one page of static content, and incredibly difficult if you have a CMS site with a growing DB and growing traffic.
Good luck!!!

Can't figure out what is causing my performance bottleneck in my rails app

My Rails app, according to my Heroku logs, is serving requests in about 1700 to 2500 milliseconds on average (that's the entire round trip). I used New Relic to profile my app, and it seems that the majority of each request is spent not in my database but rather in the "Web Transaction" section of New Relic. It seems like the "Controller" category tends to be the slowest among requests, followed by the "SQL - SELECT" segment in the "Database" category.
I'm not quite sure what could be causing the performance bottleneck in my controllers, nor do I think I can dive deeper into New Relic without paying for the premium version. I recently added indexes to the foreign keys of my application, although I do not think this made much of a difference in terms of database response times.
I know this is not enough information to figure out what is causing these bottlenecks, but I do not even know where to start or what info to give. If people could tell me what info is needed to diagnose these issues, then that would be helpful to me.
New Relic for Ruby includes a free, standalone developer mode. When running in RAILS_ENV=development, the New Relic gem adds a route that will show you a detailed profile for each request. Go to http://localhost:3000/newrelic after you hit your app a few times.
The profile includes time for each SQL query, as well as for components of your code. You can use custom instrumentation to break down big chunks of code into smaller segments (or individual methods) that get timed separately. This feature is a lot like the transaction traces you get in the paid Pro version, one major difference being that you wouldn't want to run the free dev mode in production.
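For reference, a minimal sketch of that custom instrumentation with the newrelic_rpm gem (SlowReport and #build are hypothetical names standing in for your own code):

    require "new_relic/agent/method_tracer"

    class SlowReport
      include ::NewRelic::Agent::MethodTracer

      def build
        # ...the expensive controller-time work you want timed separately...
      end

      # Appears as its own segment in the request profile.
      add_method_tracer :build, "Custom/SlowReport/build"
    end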
(Full disclosure: I work for NR. Not many people know about the free dev mode, though, so I thought it was worth mentioning.)
You could potentially make JavaScript loading appear even faster with something like head.js, which will load your JS files asynchronously and in parallel.
Take a look at this slide show:
http://www.slideshare.net/drhenner/optimize-the-obvious-7636674
Might not be enough but it goes through some common faults.
Digging a little deeper, take a look at this video: http://windycityrails.org/videos2011/#2
It is longer, but gives a lot of places to look.
On a different note: do you use a CDN?

Getting low rails mongrel requests per second (8-15 per second)

So I have tried this out on multiple computers with multiple setups (servers/apps), and I seem to consistently get Rails completing 8-15 requests per second, even for doing selects on empty tables with one field. I think I'm doing something wrong here, because I've read a lot of stats online where people are getting 60-200 with Mongrel. So being down at 8 seems just awful. The first app I tested this on was a little more involved and had 2 queries in 1 controller, but they were just selecting a few rows; not a big deal.
Is there some trick to this I don't realize? Ruby.exe is taking up nearly 50% of my CPU cycles, but still, this is pretty bad. I feel like I tried this when messing with Rails last year and got something like 50 requests per second. Is it possible that routing is screwed up somehow?
Any advice would be greatly appreciated. Even info as far as profiling tools go so I could at least figure out WHERE the problem is occurring.
Thanks ahead of time.
If you're on Windows, then that seems about right. Rails runs terribly slowly on Windows. Try running it on a Linux box, or a Mac if you have one. You could also try Heroku; they have a free starter plan you can use for development.
If you must run in a Windows environment, you could try JRuby for some extra speed.
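To compare setups running the same app, a rough Ruby sketch that measures requests per second against a local action (the URL is a placeholder; sequential requests are fine here since Mongrel handles one request at a time anyway):

    require "net/http"

    uri = URI("http://localhost:3000/items")  # your test action
    count = 100

    started = Time.now
    count.times { Net::HTTP.get_response(uri) }
    elapsed = Time.now - started

    puts format("%d requests in %.1fs => %.1f req/s", count, elapsed, count / elapsed)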

Web App Performance Problem

I have a website that is hanging every 5 or 10 requests. When it works, it works fast, but if you leave the browser sit for a couple minutes and then click a link, it just hangs without responding. The user has to push refresh a few times in the browser and then it runs fast again.
I'm running .NET 3.5, ASP.NET MVC 1.0 on IIS 7.0 (Windows Server 2008). The web app connects to a SQLServer 2005 DB that is running locally on the same instance. The DB has about 300 Megs of RAM and the rest is free for web requests I presume.
It's hosted on GoGrid's cloud servers, and this instance has 1GB of RAM and 1 Core. I realize that's not much, but currently I'm the only one using the site, and I still receive these hangs.
I know it's a difficult thing to troubleshoot, but I was hoping that someone could point me in the right direction as to possible IIS configuration problems, or what the "rough" average hardware requirements would be using these technologies per 1000 users, etc. Maybe for a web server the minimum I should have is 2 cores, so that if it's busy you still get a response. Or maybe the Slashdot people are right and I'm an idiot for using Windows, period, lol. In my experience, though, it's usually MY algorithm/configuration error and not the underlying technology's fault.
Any insights are appreciated.
What diagnostics are available to you? Can you tell what happens when the user first hits the button? Does your application see that request and then take ages to process it, or is there a delay and then your app gets going and works as quickly as ever? Or does that first request just get lost completely?
My guess is that there's some kind of paging going on; I believe that Windows tends to have a habit of putting non-recently-used apps out of the way and then paging them back in. Is that happening to your app, or the DB, or both?
As an experiment, what happens if you add a sneaky little "howAreYou" page to your app? It does the tiniest possible amount of work, such as getting a use count from the DB and displaying it. Have a little monitor client hit that page every minute or so (a sketch of such a client follows below). Measure performance over time. Spikes? Consistency? Does the very presence of activity maintain your application's presence and prevent paging?
Another idea: do you rely on any caching? Do you have any kind of aging on that cache?
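A sketch of that monitor client in Ruby (the /howAreYou endpoint and hostname are the hypothetical ones from the suggestion above); spikes in the logged times after idle periods would point at paging or process recycling:

    require "net/http"

    uri = URI("http://yoursite.example/howAreYou")
    loop do
      started = Time.now
      code = Net::HTTP.get_response(uri).code rescue "ERR"
      puts "#{Time.now}  status=#{code}  #{((Time.now - started) * 1000).round} ms"
      sleep 60
    end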
Your application pool may be shutting down because of inactivity. There is an Idle Time-out setting per pool, in minutes (it's under the pool's Advanced Settings - Process Model). It will take some time for the application to start again once it shuts down.
Of course, it might just be the virtualization like others suggested, but this is worth a shot.
Is the site getting significant traffic? If so I'd look for poorly-optimized queries or queries that are being looped.
Your configuration sounds fine assuming your overall traffic is relatively low.
Too many database connections that never get released?
A connection to some service/component that is causing a timeout?
Bad resource release?
Network traffic?
Looping queries, or loops in code logic?