Rails 7 and AWS Aurora (MySQL) - ruby-on-rails

I have a Rails 7 application using an AWS RDS instance. Unfortunately, a MySQL RDS instance cannot be automatically scaled, so I am thinking about moving to Aurora with MySQL.
I am, however, not sure how to prepare my Rails 7 application for the switch, and have two main questions:
Do I still use the mysql gem?
What changes do I need to make to ActiveStorage to address the Aurora architecture, which has a minimum of two instances: one for reading and one for writing?
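For anyone evaluating this: Aurora's separate writer and reader endpoints map onto the multiple-databases support built into Rails 6.1+. Below is a minimal database.yml sketch; the cluster endpoints, database name, and credentials are placeholders, not real values:

```yaml
# config/database.yml -- a minimal sketch; hosts, names, and credentials
# below are placeholders for your own Aurora cluster.
production:
  primary:
    adapter: mysql2    # the mysql2 gem keeps working; Aurora MySQL is wire-compatible
    host: my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com     # writer endpoint
    database: myapp_production
    username: myapp
    password: <%= ENV["DB_PASSWORD"] %>
    pool: 5
  primary_replica:
    adapter: mysql2
    host: my-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com  # reader endpoint
    database: myapp_production
    username: myapp
    password: <%= ENV["DB_PASSWORD"] %>
    pool: 5
    replica: true      # tells Active Record this connection is read-only
```

Note that the read/write split is handled by Active Record (via `connects_to` and automatic role switching), not by Active Storage; Active Storage just persists its metadata through the same Active Record models.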

Related

How To Manage AWS RDS Database Connections?

I'm fairly new when it comes to building and managing the back-end architecture of an application.
I'm hosting a Ruby on Rails application through AWS and one of the services I'm using is AWS RDS.
I've recently come across an issue where I reached the limit on the number of database connections my DB instance allows (seemingly because Elastic Beanstalk deployments connect to my DB when running the DB migrations and don't appear to close the connections afterwards), and I don't know how best to address and manage it.
For anyone who has experience using Amazon RDS with a PostgreSQL DB: what resources/services do I need to set up in order to make sure I manage my database connections correctly (so that I avoid hitting the limit as much as possible)?
I have heard of PgBouncer for managing database connections, but I was wondering whether there are other resources/services anyone can share so that I can make a more informed decision on what to use.
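As a back-of-envelope check before reaching for PgBouncer, it helps to add up what the app can open at peak: each Rails process holds up to `pool` connections. All the numbers below are hypothetical:

```ruby
# Back-of-envelope connection budgeting: every Rails process holds up to
# `pool` connections, so the app's worst case is processes x pool size.
web_dynos      = 2  # hypothetical fleet size
worker_dynos   = 1
procs_per_dyno = 2  # e.g. Puma workers per dyno
pool_size      = 5  # Rails' default pool size

total_processes     = (web_dynos + worker_dynos) * procs_per_dyno
max_app_connections = total_processes * pool_size
puts max_app_connections  # 30
```

If that worst case exceeds the RDS instance's connection limit (and remember that migrations and console sessions open their own connections on top), a pooler in front of the database is the usual fix.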
Had a similar issue myself a while back. You can also look into Rails' connection reaper to see if it suits your purposes, but it was PgBouncer that ended up fixing my issue.
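For reference, the connection reaper mentioned above is tuned from database.yml; a sketch with illustrative values:

```yaml
# config/database.yml -- illustrative values, tune to your own limits
production:
  adapter: postgresql
  pool: 5                 # hard cap on connections held per Rails process
  reaping_frequency: 10   # seconds between sweeps that reclaim dead connections
  idle_timeout: 300       # seconds an unused connection may sit before removal
```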

Setting up MongoDB via AWS OpsWorks

I am trying to set up a Rails stack on AWS OpsWorks and I want to use MongoDB as the database.
I think you set this up by creating a new custom layer and adding your Chef recipes to the relevant lifecycle hooks, but I am unsure as to which recipes to put where.
Can anyone help with how to add MongoDB to AWS OpsWorks via Chef?
I have seen there is a community MongoDB cookbook, but from what I can see it's not compatible with OpsWorks.
Does anyone have any experience setting this up?
Please, can anyone help with this?
Thanks a lot,
Rick
I tried setting up a MongoDB 3-node replica set in OpsWorks a few months back. I will share a bit of my experience:
1) How to install a single MongoDB:
It is possible and easy to install a single MongoDB instance using the EDelight Chef MongoDB Cookbook. Just add it as a submodule of your custom OpsWorks Chef repository.
To get it to work, create a custom layer, call it MongoDB, and execute the following recipes:
SETUP: mongodb::10gen_repo
CONFIGURE: mongodb::default
This will install the latest version of MongoDB.
NOTE: I used Ubuntu instances.
2) MongoDB Best Practices
If you talk to MongoDB engineers or customer service reps, they will all tell you that the recommended setup for MongoDB is a 3-node replica set. This means one primary and two secondaries (which can serve reads), hopefully in different availability zones. An ideal setup will also have lots of RAM; to give you an example, the smallest instance recommended for MongoDB that you can find in the AWS Marketplace is a Standard Large.
You also have to consider using EBS in RAID 10, and maybe provisioned IOPS...
See the white paper MongoDB on AWS for more info.
3) Security Considerations
Ideally you want only application instances to access the DB instances. In AWS you could create a security group with custom rules and assign EC2 instances to the group you just created... It is not quite like this in OpsWorks, which forces you to use default security groups with very lax restrictions; since security group rules are additive, the lax permissions win out over stricter ones.
4) Time and Money Considerations
If the recommended setup is a 3-node replica set using large instances, you are looking at at least $600/month (on demand) for the DB, and that doesn't include provisioned IOPS, EBS, and so on. Automating this setup is possible yet not simple; it will take time, or an expert in the subject, to get you going. If you have the resources and personnel to deal with this, go for it. If you are part of a small development team that wants to code more and do less operations, read on.
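The $600 figure is easy to sanity-check. The hourly rate below is an assumed on-demand price for a large instance, not a quote:

```ruby
# Rough arithmetic behind a "$600/month" replica set; the hourly rate is
# an assumed on-demand price, not an actual AWS quote.
nodes           = 3      # one primary + two secondaries
hourly_rate     = 0.28   # USD/hour, assumed
hours_per_month = 24 * 30

monthly_cost = nodes * hourly_rate * hours_per_month
puts monthly_cost.round  # 605
```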
5) Find a reliable Managed Solution
At first I was reluctant about the idea of using a third-party company that offered MongoDB as a service. After much evaluation of the different options (managed, AWS Marketplace, OpsWorks, direct EC2 installation), I concluded that for our small team the best thing to do was to use either MongoLab or MongoHQ. They host and manage MongoDB instances of all sizes and prices. They even let you choose the hosting provider (AWS, Rackspace, etc.), region, and AZ. Price-wise it will be more expensive if you look at the hardware alone, but like I mentioned before, you have to consider not only the price but the operational time MongoDB will require.
I have been there, done that, and ended up not using OpsWorks to host MongoDB. Hopefully this will save you some time and headaches.
I just tested this repo on GitHub; it works for MongoDB on OpsWorks. Go to OpsWorks > Layers, and in the "Custom Chef Recipes" section reference this GitHub link: https://github.com/Cyclic/cookbooks. Then add "yum::default" to the Setup lifecycle event, and add "mongodb::10gen_repo", "mongodb::default", and "mongodb::10gen_remrepo" to the Setup lifecycle event as well.

Rails + Heroku + Amazon RDS - Production vs Development

It is my first time working with a Rails app on Heroku. It uses a MySQL DB hosted on Amazon RDS. At some point I want to move it from development to production, but I want to keep developing and adding features. What is the best way to accomplish this?
I see Heroku has some kind of staging app feature. Is that the best option for keeping a separate app to test? And what about the database? I'm guessing I need to create separate DBs on Amazon RDS for development and production?
I am on a budget, so I don't want to have to pay for two apps on Heroku and two DBs on Amazon. Can I create both on the fly each time I do development work and then destroy them when I'm done, or is that too much? If so, can I then copy the production data over to the development DB? I would do local testing, but I feel like I need to make sure it's working on Heroku as well.
I'm just trying to get a general idea of what workflow is best practice or most commonly used. Any comments are appreciated.
Unfortunately, since you're on RDS, you're going to have to pay for two. If you were using Heroku Postgres you'd be able to get a small PG database for free.
Regarding applications: Heroku apps are free if you use less than 750 'dyno' hours a month (which is a little bit more than one dyno running for an entire month), which is normally fine for staging small apps as long as you don't need masses of workers.
You certainly could create the staging env whenever you need it, but only you know how complex this application is and what sort of overhead that would provide.
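The 750-hour figure works out because even the longest month comes in under it:

```ruby
# Why 750 free dyno-hours covers one always-on dyno: the longest month
# has 31 days, i.e. 744 hours.
hours_in_longest_month = 24 * 31
puts hours_in_longest_month        # 744
puts 750 - hours_in_longest_month  # 6 hours of slack
```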

Moving from a VPS host to EC2 with as little downtime as possible

We have a rather large server stack (more than 30 machines), and we can't go on using our current VPS provider. We must move, and Amazon's EC2 seems to be our solution.
We use Rails, MySQL, Mongo, Redis, and other stuff, and we need to move all of these with as little downtime as possible and with no data loss.
Has anyone here done such a task? Anyone with tips on how to do it?
First, both MySQL and MongoDB support master/slave replication: keep writing to the current environment and let a replica in the new EC2 boxes catch up from it.
Second, your Rails code does not need to change: you can run the app on the new nodes against the DB in the old environment at first, then switch the DB over to the new box smoothly.

Amazon SimpleDB with aws-sdb-proxy suitable for high traffic production app?

I am using Amazon SimpleDB with the aws_sdb gem and the aws-sdb-proxy, as outlined in a piece of documentation from Amazon, with Ruby on Rails and a local AWS proxy running on WEBrick (providing a bridge via ActiveResource).
See http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1242
I am wondering whether the aws-sdb-proxy (WEBrick!) is suitable for a high traffic load, since WEBrick is supposed to be a development server. Does anyone have comments or experiences?
I've tried Rails with simple_record, and I can tell you it's much slower compared to MySQL. You will also have to do quite a bit of work to adapt your code to it.
Therefore, if you have any high-traffic tables that update frequently, I'd say pass on it. Use MySQL or a different solution. SimpleDB is only good for storing metadata that doesn't update very often, and if that gets a lot of traffic you should definitely put some memcached servers in front of it.
Check this out for some numbers (disregard the Dynamo part of it; I'm now on SDB and moving back to either RDS or Dynamo tonight): Moving MySQL table to AWS DynamoDB - how to set it up?
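The memcached suggestion above boils down to a read-through cache. Here is a runnable sketch of the pattern, with a plain Hash and a lambda standing in for memcached and SimpleDB so it runs anywhere:

```ruby
# Read-through cache pattern: check the cache first, only hit the slow
# backing store on a miss, and remember the result for next time.
class ReadThroughCache
  def initialize(backing_store)
    @cache  = {}             # stand-in for a memcached client
    @store  = backing_store  # stand-in for a SimpleDB lookup
    @misses = 0
  end

  def fetch(key)
    return @cache[key] if @cache.key?(key)
    @misses += 1
    @cache[key] = @store.call(key)  # slow path, taken once per key
  end

  attr_reader :misses
end

slow_store = ->(key) { "value-for-#{key}" }  # pretend SimpleDB query
cache = ReadThroughCache.new(slow_store)
cache.fetch("user:1")
cache.fetch("user:1")
puts cache.misses  # 1 -- the second read was served from the cache
```

In production the Hash would be replaced by a real memcached client (e.g. the dalli gem), but the control flow stays the same.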
