I’ve set a client up with Heroku for their Ruby on Rails application and have had a great deal of trouble over the years with the application not running well, regardless of how much money we spend on additional resources. I also find Heroku's documentation and terminology highly confusing. We are constantly getting "H12" and "R14" errors, and memory usage and dyno load are constantly spiking, yet this is a small-to-medium-sized business without a massive amount of traffic. I'm wondering if anybody out there who does understand the ins and outs of Heroku can look this configuration over and tell me if it makes sense:
DB_POOL: 10
MALLOC_ARENA_MAX: 2
RAILS_MAX_THREADS: 5
WEB_CONCURRENCY: 4
Ruby 2.7
Rails 6.0
Puma
8 2x web dynos
5 1x worker dynos
$50 Postgres Standard-0 database
$15 Memcachier
$10 Rediscloud
...etc addons
Your WEB_CONCURRENCY is too high for your Standard-2x dynos. The recommended default is 2: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#recommended-default-puma-process-and-thread-configuration
This is likely contributing to your R14 errors, since higher web concurrency means more memory usage. So you need to either lower your web concurrency (which may mean you also need to increase the # of dynos to compensate) or use bigger dynos.
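For reference, here is a minimal sketch of the config/puma.rb that Heroku's guide describes, driven by env vars so you can experiment with heroku config:set; the exact numbers are starting assumptions you'd tune after watching memory per worker:
# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))           # 2 processes per Standard-2X dyno
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count                        # min and max threads per process
preload_app!                                                # lets workers share memory via copy-on-write
port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")
With that in place, lowering concurrency is just heroku config:set WEB_CONCURRENCY=2 and a restart.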
You already have MALLOC_ARENA_MAX=2, but I'm not sure whether you're also using jemalloc. You might want to try that too.
Of course, you may also have other memory issues in your app - check out some tips here. I also recommend adding a monitoring tool like AppSignal as it's capable of tracking memory allocations per transaction.
For mitigating H12s:
Ensure you have installed something like the rack-timeout gem, which aborts a long-running request inside the dyno and thus avoids the H12 error (you get a Rack::Timeout exception instead). Set the timeout to 15s so that it is well under Heroku's 30s router timeout that triggers H12 (see the sketch after this list).
Investigate your slow transactions. A monitoring tool is key here, e.g. New Relic (start with the lowest-priced paid plan; the free plan does not allow transaction tracing). Here is their blog post on how to trace transactions.
When you've identified the problem - fix it!
if the bottleneck is external:
check for external API limits and throttling
add timeouts and make app resilient to slow external responses
if the bottleneck is due to the database:
optimize slow queries
check cache hit rates
check for the # of waiting connections and db locks -> if the number of waiting connections is consistently above 0 for X minutes, that indicates you have some long locks that you'll need to investigate. Waiting connections are easiest to track over time with Librato (the free plan should do fine)
if the bottleneck is other app code:
add more custom instrumentation to get more insights, e.g. New Relic's custom instrumentation instructions
address app code issues
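As a sketch of the rack-timeout setup mentioned above (15s is the value suggested in this answer, not a universal constant):
# Gemfile
gem "rack-timeout"
# rack-timeout reads its limits from env vars, so on Heroku:
# heroku config:set RACK_TIMEOUT_SERVICE_TIMEOUT=15
A request that blows past the limit then raises inside your app, where your error tracker can catch it, instead of silently dying at the router as an H12.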
I want to stress the importance of monitoring tools for diagnosing issues and determining optimal resource usage. Figuring out the correct concurrency configs, or the correct size and # of dynos to run, is virtually impossible without proper monitoring. Hopefully some of the add-ons you didn't list already cover this, but if not, I'll summarize my recommendations and mention a couple of other tips:
To get more metrics info, ensure you have enabled log-runtime-metrics (see the commands sketched after this list)
Also enable Ruby language metrics
Add a monitoring tool that can track Ruby memory allocations like AppSignal. Scout APM can do this too but I think their plans capable of this are more expensive (requires Scout Insights feature)
Add the lowest-priced paid plan of New Relic. This is my go-to tool for transaction tracing. AppSignal can do this too if you don't want to pay for another tool, but I find it easier with New Relic.
Add Librato. It offers some great charts out of the box, including a set of Postgres charts in its own dashboard.
Set alerts in your monitoring apps to warn you about things like response times so you can look into them!
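A hedged sketch of the metrics setup from the list above; your-app is a placeholder, and labs flag names change, so double-check the current Dev Center articles:
# per-dyno memory and load figures in your logs:
# heroku labs:enable log-runtime-metrics -a your-app && heroku restart -a your-app
# Ruby language metrics (GC stats etc.) report via the barnes gem:
# Gemfile: gem "barnes"
# config/puma.rb:
before_fork do
  Barnes.start   # starts the metrics reporter in the Puma master process
end
You also need the corresponding Heroku labs feature for language runtime metrics enabled on the app; the Dev Center article on Ruby language metrics has the current flag name.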
And of course, make all your changes in staging first AND load test them to see the impacts of your changes before attempting in production!
Update: I also just noticed that you said you are using Standard-0 Postgres, which means it has a 120-connection limit. So if you end up lowering your WEB_CONCURRENCY and increasing the # of dynos, watch out for your total connections to that database. Beyond just the fact that there is a limit, more connections also mean more overhead for your db, so if you are close to the connection limit you are more likely to see db performance suffer. You may want to upgrade to a plan with a higher connection limit, or use pgbouncer as your connection pooler to stay under the limit.
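To make that concrete, here is a rough connection-ceiling estimate. It assumes each Puma worker process keeps a pool of RAILS_MAX_THREADS (5) connections; if your database.yml pool actually follows DB_POOL (10), double these numbers. The 12-dyno figure is purely illustrative:
current:  8 web dynos x 4 Puma workers x 5 threads = 160 potential connections (already above the 120 cap)
proposed: 12 web dynos x 2 Puma workers x 5 threads = 120, exactly at the Standard-0 cap
plus whatever your 5 worker dynos hold open for background jobs
Connections are opened lazily, so you won't necessarily hit the ceiling in practice, but this is the worst case the limit has to absorb.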
Related
I have a Rails application which has 100% CPU utilization most of the time.
I am not able to figure out why there is so much load on the server. I am using the Puma web server with a default configuration, and am running multiple background jobs using the sucker_punch gem. There are 7 files which use sucker_punch jobs with 5 workers each:
include SuckerPunch::Job
workers 5
I ran the top -i command and found the following processes running on the server. I can see multiple Ruby processes on the server. Can someone tell me whether this is normal behavior on a server, or if something is wrong?
Some Ways to Reduce Resource Contention
Your user space load is high (~48%), so you'll probably want to reduce the number of workers in your web application, increase the number of CPUs available on your instance, move to a version of Ruby that has better concurrency and real multi-core support (e.g. Rubinius or JRuby), or some combination of these options. Depending on what your code is actually doing, you may also need to re-architect your application to offload expensive I/O from the application server.
In addition, your steal time is quite high (~41%), so your EC2 instance is probably overloaded. Simply moving your application to a less-loaded instance may free up sufficient resources to reduce application wait times.
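On the "reduce the number of workers" point: with 7 job classes each declaring 5 workers, sucker_punch alone can spin up around 35 job threads inside the same process that serves web traffic. A minimal sketch of dialing that back (the class name and numbers are illustrative, not from your app):
class HypotheticalImportJob
  include SuckerPunch::Job
  workers 2   # was 5; fewer in-process job threads means less CPU contention with web requests

  def perform(record_id)
    # the actual import work goes here
  end
end
If the jobs are genuinely CPU-heavy, moving them out of the web process entirely (a separate worker box running something like Sidekiq) will help more than tuning thread counts.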
We have developed a Rails app on Heroku with around 3 web dynos and 2-3 worker dynos. We have some exporting and importing functionality that heavily uses our worker dynos; when that happens, everything crashes and we get an App Error on the website.
Sentry tells us that it is due to a timeout. We are trying to find out which part of our software is taking so much worker time. The problem is that it affects all of our users, even those only using web-layer functionality.
But I was wondering: is there a way to isolate our worker dyno problems from the web dynos' work? I mean, is there a way to keep our site from crashing when one user exports a large amount of data and saturates the workers?
Thanks in advance!
Regards,
Gonzalo
Thank you for the answers, let me give you some related info:
- We use delayed_job for the workers, which is async.
- Our previous DB plan supported 120 connections, and we never saw it completely busy. The current AWS RDS instance only reached 24% utilization, and we only saw 28 concurrent connections on the day of the crash.
- New Relic did not indicate delay in the DB.
- The web dynos start to generate timeouts in many functionalities; if the crash is not related to the workers, it may be caused by some functionality that is not running in jobs.
Update:
- We have set some limits on our exports; even though those run in jobs, they were affecting our web dynos and causing App Errors. When we set the limits, the App Errors were dramatically reduced.
- We are still searching for any other unoptimized functionality.
My database is very CPU constrained, and I can't find the root cause of the issue. I currently have two application servers, each with a Rails API connecting to PostgreSQL via the ruby-pg gem. Both application servers also have Sidekiq running background jobs, and I have a handful of support servers processing new posts from a national feed via Sidekiq. If I were running out of memory, the solution would seemingly be straightforward. Any general ideas why I am CPU constrained?
Database Specs:
Rackspace 8GB Performance Tier cloud VM (8GB RAM, 8x Core CPU, SSD)
Debian 7 Wheezy Linux OS
PostgreSQL 9.1 with PostGIS extension
Possible Problems:
PostgreSQL 9.1 is bad at indexes
The database has nearly 10GB of indexes. I am going to upgrade my database to PostgreSQL version >= 9.2, which introduced index-only scans.
Too many connections
In postgresql.conf, I have set max_connections to 500. Usually throughout the day only about 175 connections are in use, but during peak times Sidekiq tasks will push the count up to 350. How many connections are recommended for an 8GB server instance?
Idle Connections
When I take a look at pg_stat_activity in the psql console, I see that Sidekiq is leaving a lot of IDLE connections. Could these connections be inflating CPU usage? Does the fix belong in the API or in Sidekiq?
Need a more powerful server
Maybe there is no bug and I simply need a bigger server instance. Again, this would make more sense if I were memory bound. However, both app servers and 3 of the support Sidekiq servers are 4GB performance-tier instances, so the servers that interact with the database have, combined, more than double the resources of the database itself. Should this even matter?
Additional questions:
What tools/techniques should I employ to troubleshoot the issue?
Any basic settings in the postgresql.conf related to cpu usage?
Are there any known issues related to Rails, Sidekiq, or the pg gem that could be a contributing factor? (I haven't seen any open issues.)
Are there any general PostgreSQL guidelines for CPU usage?
Any other ideas or thoughts that might help my search?
You are using massively too many concurrent connections. PostgreSQL will be wasting lots of its time on housekeeping and juggling concurrent queries. All the concurrent work will be fighting for CPU and buffer space, there'll be heavy contention on spinlocks, and it'll all generally be a mess.
On an 8 core machine, you should probably not have more than 20 actively working connections if you're mostly CPU constrained. If you're I/O limited, you can go higher, but 350 is just ridiculous.
If possible, put a PgBouncer in transaction pooling mode in front of your PostgreSQL instance, so queries get queued up and executed rapidly in series instead of slowly in parallel.
See number of database connections (Pg wiki).
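A minimal pgbouncer.ini sketch for that transaction-pooling setup; the host, database name, paths, and pool sizes are placeholders to adapt:
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction    ; hand a server connection to a client only for the duration of a transaction
default_pool_size = 20     ; roughly the "20 actively working connections" guideline above
max_client_conn = 500      ; the app side can keep its many connections; Postgres only sees the small pool
The application then points at port 6432 instead of 5432. Note that with transaction pooling you generally need prepared_statements: false in your Rails database.yml, since prepared statements are per-session.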
Additionally, PostGIS can be very CPU-heavy. It sometimes needs to do very complex calculations. I suggest using the auto_explain module to record long running queries, and using pg_stat_statements / pg_stat_plans to record what's taking up resources. Examine these queries to see if they need improvement.
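The relevant postgresql.conf lines look roughly like this; the 500ms threshold is illustrative, and changing shared_preload_libraries requires a server restart:
shared_preload_libraries = 'pg_stat_statements, auto_explain'
auto_explain.log_min_duration = '500ms'   # log the plan of any query slower than this
pg_stat_statements.track = all            # also track statements nested inside functions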
Your idle-in-transaction sessions must be dealt with, too. Depending on why they're idle and whether they have a transaction ID or not, they might be causing serious table bloat. They also create unnecessary signalling overhead within PostgreSQL, as it has to do more co-ordination with backends that are actively doing things. Finally, the number of open transactions itself increases the cost of some internal housekeeping operations.
So. Your DB will probably perform better if you reduce the connection counts, put a PgBouncer in transaction pooling mode in front, and fix those idle connections.
Most likely you are CPU constrained because your work needs a lot of CPU. :)
9.1 is not generally bad at indexes. There may be some specific issues, as any version might have, and exactly what they are can change from version to version.
Index-only-scans are mostly a benefit when you are IO constrained. I wouldn't hold out much hope for that being a magic bullet for you.
350 connections are certainly not helpful, but probably are not very harmful, either. But when they are harmful, it can be downright catastrophic. The correct value is more determined by the number of cores, not the amount of RAM. If it is easy to throttle down the sidekiq connections, do it even if you can't prove that it helps.
If the connections are just IDLE, not IDLE in transaction, then they probably aren't very harmful, but again there are a few cases where they can be. That is pretty much the same issue as the number of connections.
The connection you showed from top was idle in transaction. That status shouldn't be taking up much CPU, so it probably means the backend is rapidly cycling through statements and top just happens to catch it between them. But you didn't say how many similar lines there were in top; if it is just that one, it suggests your code is not running concurrently and 7 of your 8 CPUs are wasted.
Regarding the db server versus the other servers, if the database is fundamentally the limit, beating on it with a bigger hammer is not going to help. Often there is some flexibility about where computation is done. If you can get the app servers to do more computation that is currently done on the db and let the db focus on ACID issues, that would be good. But no one but you can know if that is possible or feasible.
My first stop would be to use pg_stat_statements to see what SQL statements are taking the most time. Maybe just adding an index to the slowest/most frequent query would make the problem magically go away.
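A sketch of that first stop, assuming pg_stat_statements is in shared_preload_libraries as described above (these column names match the 9.2-era extension; much newer PostgreSQL versions renamed total_time to total_exec_time):
-- one-time setup in the target database
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- which normalized statements are eating the most total time?
SELECT calls,
       round(total_time::numeric, 1)           AS total_ms,
       round((total_time / calls)::numeric, 1) AS avg_ms,
       query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;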
Recently our Ruby on Rails application was upgraded to Rails 2.3.8. We also replaced Mongrel/Mongrel Cluster with Phusion Passenger during the upgrade.
Whenever we deploy our application, it seems to respond faster initially, but response time gradually increases. We also noticed CPU usage on the database box spiking to 400% and a lot of requests waiting in the global queue. This seems to happen only in our production environment.
Can anyone let me know how I should go about debugging this issue?
Is there anyway we can restrict the number of connections between passenger and DB?
Also is there a way we can setup connection pooling in passenger?
Thanks,
Sivakumar.
I don't think the issue is with Passenger. If you are having a large spike on your DB box CPU, your issue probably lies there. Without more information about your database it's hard to give you specifics, but here are a few things you could try:
Run top and check how much of your total memory the mysqld or other database process is consuming. If this amount is not high then you probably need to tune MySQL settings to take advantage of the RAM on your DB box.
Analyze your running database queries with the mytop command. You might have some queries that are hogging all of your system resources or causing lots of swapping.
Look through your MySQL slow query log and see if you have queries that are taking over 1 second to run (see the my.cnf sketch after this list).
Check your database engines. Are you using MyISAM and/or InnoDB? You might need to tune your database differently so that it can allocate the proper amount of memory and resources to each engine.
Consult a DBA. They'll be able to take a look at your usage and application and tell you more definitively if the issue lies with your database or application.
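If the slow query log isn't already on, a minimal my.cnf sketch looks like this; the path and the 1-second threshold are placeholders, and on older MySQL versions the setting was called log_slow_queries:
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 1   # seconds; anything slower gets logged
mysqldumpslow (bundled with MySQL) can then summarize the log by query pattern.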
P.S.: I would recommend upgrading to Passenger 3; it performs better than Passenger 2.
I'm developing a very sensitive application for a client that needs to have 99.9999999999% uptime guarantee.
It's a Rails application with a MySQL database. I am thinking of hosting it on EngineYard due to the low maintenance requirements and ease of operation.
Heroku does not seem to be the perfect solution due to uptime problems.
EC2 can also be a good solution but maybe it requires too much work to install and maintain.
My question is: how do I make a redundant system using EngineYard, Heroku, EC2, or any other Rails hosting that you propose? Do I need to have 2 instances replicated in different parts of the world? Please advise the best way.
Regards.
Everyone wants 100% uptime, but achieving it is pretty much impossible. Since downtime can be caused by any of the links in the chain, and there are usually dozens, achieving such a high standard means buying gold-plated everything; essentially, you'll have to spend a fortune. The difference between 99% uptime, which means your site is unavailable for around 88 hours a year, and 99.9% uptime, where it's less than ten hours, is considerable; going from there to 99.99% is a bigger jump still, with a tolerance of under an hour for a full year.
Going beyond 99.99% is simply impractical. Nobody will sign a guarantee like this unless they're being dishonest, the agreement is so loaded down with caveats as to be unenforceable, or they don't mind dishing out heavy credits all the time. Amazon EC2's SLA is 99.99%, for instance.
The metrics I've seen collected for a provider like Linode show uptimes of about 99.97% to 99.99%. Occasionally you will see datacenters advertising 100% uptime, but that is network-level only and doesn't take into account intermittent internal glitches that may knock your server offline.
Choosing a managed hosting provider like Engine Yard might be the solution for you, because it can minimize your exposure to random events, but it won't get you such a high uptime in and of itself. They're very good at maintaining the system layer, but their ability to fix or work-around bugs in your application is very limited, and they are subject to the same intermittent networking issues with EC2 as anyone else.
There are two kinds of reliability you should be concerning yourself with. One is availability, which is purely a measure of how likely a client is to be able to use the application. The other is data integrity, which is a measure of how likely data is to be retained given any number of disaster scenarios.
Most people will accept that an application might be down every so often for brief periods of time, but people refuse to accept that data may go missing every now and then.
It isn't hard to get a "99.9999999999%" data retention rate, but you will need to plan out your backup, replication, and recovery strategy in detail and will have to exercise your systems regularly to ensure they are working as designed.
While you have almost no control over the often patchy routing on the internet, the defect rate of your server's hardware, the power in your data center, and so on, you do have a huge amount of control over your backup strategy.
EY uses a company called Terremark for their hosting, which is some pretty serious hosting infrastructure. Out of the 3 you listed, I would go with them.
For uptime, you want to look at master/slave replication of your data and automatic failover, and you want to build redundancy wherever you can. High availability is a fairly involved topic and has more to do with IT than dev; I would recommend asking where to start over at serverfault.com.