I have a Rails app hosted on a dedicated server. Today something happened: the app doesn't respond, I have no SSH access, restarting doesn't help, and I am waiting for tech support to respond. But that is not my question; I just need this app to stay online even if the server fails. What is the easiest option? Can I create a second server on different hosting and serve from there in case of failure? If so, how do I sync the database and files? The application is not heavily loaded; I just need it to be available.
This is a difficult problem to solve. There's no one proven way to make it happen, but in general you need "No Single Point of Failure".
There's an entire science devoted to reliability in web applications -- no way can you get that fully answered in an SO question.
You can take frequent backups of your database and store them on S3 (and/or somewhere else) -- a rough sketch of that backup step follows after this list. You can then:
keep an image of your application server at your host
spin it up when your server dies
restore the database
have the new application server take over responsibility (easiest way: assume the old server's IP address)
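As a minimal sketch of the backup step, assuming a Postgres database (the question doesn't say) and made-up bucket/path names, a small Ruby script run from cron could dump the database and push it to S3:

require "aws-sdk-s3"

# backup_to_s3.rb -- hypothetical names and paths; adjust to your own setup.
dump_path = "/tmp/app-#{Time.now.strftime('%Y%m%d%H%M')}.dump"

# pg_dump accepts a connection URI as the database argument; -Fc produces a
# compressed custom-format dump that pg_restore can read on the standby box.
system("pg_dump", "-Fc", ENV.fetch("DATABASE_URL"), "-f", dump_path) or abort("pg_dump failed")

s3 = Aws::S3::Resource.new(region: "us-east-1")
s3.bucket("my-app-backups").object(File.basename(dump_path)).upload_file(dump_path)

Restoring on the standby server is then a matter of downloading the latest dump and running pg_restore against the new database before switching traffic over.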
Related
I have developed an application with the ASP.NET MVC 4.5 framework, hosted on an application server. I am using two DB servers: one for development and one for production.
Whenever my application is linked to the production DB server, it uploads files very slowly; when it is linked to the development DB server, it uploads files quickly.
Note: the application is hosted on a different server than the databases.
Please suggest what I should do.
You either have a network problem or the production server is slow.
Since your local machine's web application can reach the database, I guess you can copy files to that server too. Try copying a similar file through Windows Explorer and see whether it is fast or not. If it is slow, you know more.
Get hold of an ops person who can help you sniff the network; two people looking at a problem is nicer, and often smarter, than sitting alone in your own chamber.
If the network is fast, ask the same ops person to check the load on the database server, and whether it is using too much CPU or memory.
I am currently using an AWS micro instance as a web server for a website that allows users to upload photos. Two questions:
1) Looking at my CloudWatch metrics, I have recently noticed CPU spikes. The website receives very little traffic at the moment, but it becomes utterly unusable during these spikes, which can last several hours; restarting the server does not eliminate them.
2) Although seemingly unrelated, whenever I post a link to my website on Twitter, the server crashes (i.e., "Error Establishing a Database Connection"). Once I restart Apache and MySQL, the website returns to normal functionality.
My only guess is that the issue is somehow the result of deficiencies of the micro instance. Unfortunately, when I upgraded to a small instance, the site was actually slower, due to the fact that micro instances can burst to two EC2 compute units.
Any suggestions?
If you want to stay in the free tier of AWS (micro instance), you should offload as much work as possible away from your EC2 instance.
I would suggest uploading the images directly to S3 instead of going through your web server (see an example here: http://aws.amazon.com/articles/1434); a rough sketch of the idea follows below.
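As an illustrative sketch (not from the linked article), with the aws-sdk-s3 gem you can hand the browser a short-lived presigned URL so the upload bytes go straight to S3; the bucket and key names below are made up:

require "aws-sdk-s3"
require "securerandom"

# Hypothetical bucket/key; credentials come from the environment or an IAM role.
bucket = Aws::S3::Resource.new(region: "us-east-1").bucket("my-photo-uploads")
object = bucket.object("uploads/#{SecureRandom.uuid}.jpg")

# The browser PUTs the file directly to this URL, so the micro instance never sees the bytes.
upload_url = object.presigned_url(:put, expires_in: 900, content_type: "image/jpeg")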
S3 can also be used to serve most of your static content (images, JS, CSS...) instead of your weak web server. You can also use these S3 files as the origin of an Amazon CloudFront (CDN) distribution to improve your application's performance further.
Another service that can help you offload work is SQS (Simple Queue Service). Instead of doing everything inside the online request from the user, you can send some work (an "upload done" event, for example) as a message to SQS and have a reader process these messages at its own pace, as sketched below. This is a good way to handle the momentary load caused by several users working with your service simultaneously.
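A minimal sketch of that pattern with the aws-sdk-sqs gem (the queue URL and message contents are made up for illustration):

require "aws-sdk-sqs"
require "json"

sqs = Aws::SQS::Client.new(region: "us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/upload-events" # hypothetical

# Web tier: enqueue the event and return to the user immediately.
sqs.send_message(queue_url: queue_url,
                 message_body: { event: "upload_done", key: "uploads/abc.jpg" }.to_json)

# Worker: long-poll and process messages at its own pace.
sqs.receive_message(queue_url: queue_url, max_number_of_messages: 10,
                    wait_time_seconds: 20).messages.each do |msg|
  payload = JSON.parse(msg.body)
  # ... resize the photo, update the DB, etc. ...
  sqs.delete_message(queue_url: queue_url, receipt_handle: msg.receipt_handle)
end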
Another service is DynamoDB (a managed NoSQL DB service). You can move much of your current MySQL data and queries onto DynamoDB; see the short example below. Amazon DynamoDB also has a free tier that you can enjoy.
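For example, reads and writes through the aws-sdk-dynamodb gem look roughly like this (the "photos" table and its attributes are hypothetical):

require "aws-sdk-dynamodb"

dynamodb = Aws::DynamoDB::Client.new(region: "us-east-1")

# Hypothetical "photos" table with a string hash key "id".
dynamodb.put_item(table_name: "photos",
                  item: { "id" => "abc123", "user" => "alice", "views" => 0 })

resp = dynamodb.get_item(table_name: "photos", key: { "id" => "abc123" })
resp.item # => hash of the stored attributes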
With the combination of the above, you can have your micro instance handle the few remaining dynamic pages until you need to scale your service with your growing success.
Wait… I'm sorry. Did you say you were running both Apache and MySQL Server on a micro instance?
First of all, that's never a good idea. Secondly, as documented, micros have low I/O and can only burst to 2 ECUs.
If you want to continue using a resource-constrained micro instance, you need to (a) put MySQL somewhere else, and (b) use something like Nginx instead of Apache as it requires far fewer resources to run. Otherwise, you should seriously consider sizing up to something larger.
I had the same issue. As far as I understand it, the problem is that AWS throttles a micro instance once you exceed a predefined amount of usage: they allow a small burst, but after that things become horribly slow.
You can test this by logging in and doing something: if you use the CPU for more than a couple of seconds, the whole box becomes extremely slow, and then you have to wait, without doing anything at all, for things to get back to "normal".
That was the main reason I went for a VPS instead of AWS.
I got the above error message running Heroku Postgres Basic (as per this question) and have been trying to diagnose the problem.
One of the suggestions is to use connection pooling, but it seems Rails has this built in. Another suggestion is that the app is configured improperly and opens too many connections.
My app manages all its connections through Active Record, and I had one direct connection to the database from Navicat (or at least I thought I had).
How would I debug this?
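One way to start, as a rough sketch from a Rails console on the dyno (connection_pool.stat exists on newer Rails versions; the raw pg_stat_activity query works regardless):

# heroku run rails console
# What ActiveRecord's own pool thinks is happening (size, busy, idle, waiting).
ActiveRecord::Base.connection_pool.stat

# What the Postgres server actually sees, grouped by client and state.
ActiveRecord::Base.connection.select_rows(<<~SQL)
  SELECT application_name, state, count(*)
  FROM pg_stat_activity
  GROUP BY application_name, state
  ORDER BY count(*) DESC
SQL

Comparing the two counts (and the pool size in database.yml) against your plan's connection limit usually shows whether the app, stray console sessions, or an external tool like Navicat is holding the extra connections.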
RESOLUTION
Turns out it was a Heroku issue. From Heroku support:
We've detected an issue on the server running your Basic database. While we pinpoint this and address it, we would recommend you provision a new Basic database and migrate over with PGBackups as detailed here: https://devcenter.heroku.com/articles/upgrade-heroku-postgres-with-pgbackups. That should put your database on a new server. I apologize for this disruption – we're working to fix this issue and prevent it from occurring in the future.
This has happened a few times on my app -- somehow there is a connection leak, and then all of a sudden the database is getting ten times as many connections as it should. If you are being swamped by a leak like this rather than by real traffic, try running this:
heroku pg:killall
That will terminate all connections to the database, so be careful if cutting off running queries would be dangerous in your situation. I just have a Rails app, and if it goes down, losing a couple of queries is not a big deal, because the browser requests will have long since timed out anyway.
You might be able to find out why you have so many connections by inspecting the pg_stat_activity view:
SELECT * FROM pg_stat_activity
Most likely, you have some stray loop that opens new connections without closing them.
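To make that failure mode concrete, here is a hedged illustration (the collection, table, and column names are made up): the first loop opens a fresh connection on every pass and never closes it, so the server's connection count climbs until Postgres refuses new clients; the second reuses ActiveRecord's pool.

require "pg"

# Leaky: a brand-new connection per iteration, never closed.
records.each do |record|
  conn = PG.connect(ENV["DATABASE_URL"])
  conn.exec_params("UPDATE photos SET views = views + 1 WHERE id = $1", [record.id])
  # conn.close is missing, so the connection lingers on the server
end

# Better: check a connection out of ActiveRecord's pool and return it automatically.
records.each do |record|
  ActiveRecord::Base.connection_pool.with_connection do |conn|
    conn.execute("UPDATE photos SET views = views + 1 WHERE id = #{record.id.to_i}")
  end
end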
To save you the support call, here's the response I got from Heroku Support for a similar issue:
Hello,
One of the limitations of the hobby tier databases is unannounced maintenance. Many hobby databases run on a single shared server, and we will occasionally need to restart that server for hardware maintenance purposes, or migrate databases to another server for load balancing. When that happens, you'll see an error in your logs or have problems connecting. If the server is restarting, it might take 15 minutes or more for the database to come back online.
Most apps that maintain a connection pool (like ActiveRecord in Rails) can just open a new connection to the database. However, in some cases an app won't be able to reconnect. If that happens, you can heroku restart your app to bring it back online.
This is one of the reasons we recommend against running hobby databases for critical production applications. Standard and Premium databases include notifications for downtime events, and are much more performant and stable in general. You can use pg:copy to migrate to a standard or premium plan.
If this continues, you can try provisioning a new database (on a different server) with heroku addons:add, then use pg:copy to move the data. Keep in mind that hobby tier rules apply to the $9 basic plan as well as the free database.
Thanks,
Bradley
I'm thinking of deploying a small Rails app on Heroku. In an effort to save money, I'd like my app to use an external database (to which I have free access), rather than a Heroku-hosted database. The trouble is that the free database only accepts local connections. To access it from Heroku, I'd need to do so via an SSH tunnel.
Is it possible for a Heroku app to persist its data in an external DB accessed via SSH? If so, how?
(For bonus points, here's a second question: is this a good idea? On the one hand, this scheme would save me from paying for a Heroku database. On the other hand, it means having to encrypt all my database traffic. I imagine that this would massively slow down my web dynos, and reduce the number of requests they can serve. Would the money I save on the database get used up paying for more dynos? Am I likely to come out ahead by doing this?)
Yes, you can.
It is possible to set up a tunnel on Heroku to an external database.
You don't want to do it for the reason the OP mentions (to avoid paying for a Heroku database), because of the reasons #sgrif mentions (it would be painfully slow and probably wouldn't really save anything).
But there are legitimate reasons for wanting to tunnel to an external database, for example if data is residing in a legacy system that you need to analyze.
Rather than simply repeat myself (it's long), here's a link to the recipe that worked for me: SSH tunneling from Heroku
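Very roughly, the idea is to open the tunnel from inside the dyno before the app connects. A bare-bones sketch with the net-ssh-gateway gem (all host names, users, key paths, and ports below are placeholders; the full recipe also handles key management and reconnects):

require "net/ssh/gateway"

# Placeholder bastion and database hosts; the bastion must accept key-based
# logins from the dyno.
gateway = Net::SSH::Gateway.new("bastion.example.com", "deploy",
                                keys: ["config/deploy_key"])

# Forward 127.0.0.1:5433 on the dyno to port 5432 on the database host behind the bastion.
gateway.open("db.internal.example.com", 5432, 5433)

# The app then points ActiveRecord at 127.0.0.1:5433 while the tunnel stays up.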
No, and even if it were an option it would be a really bad idea, as you'd be adding massive latency to every request, since for all intents and purposes you'd have to open a new tunnel for every request.
Your best option is likely to use Heroku's development or starter tiers. The free development tier will work if your database is less than 10,000 rows. Their $15/mo starter tier works for up to 1,000,000 rows.
Our company has started developing its own systems in-house. We already have a couple of developers who will be responsible for writing code in Ruby/RoR.
We are currently discussing infrastructure, and I would like to ask: should we develop everything on local machines, then put it on a test server and later on production, or develop everything on a development/test server, then publish it for testing and later to production?
Just an update to the description above: by "local machines" I mean developers' desktops, and the test/development server is a machine in our office.
It's a valid question, and as such there's a trade-off to consider.
Generally: work locally. Web app development has a natural flow that leads developers to save and refresh their browsers many times an hour. All the time you save on network latency will actually add up and be less frustrating for the developers.
There are downsides to working locally, however: you'll need to make sure that your setup is EXACTLY as it will be on the testing/production servers. That means everything down to your kernel version, Apache version, and Ruby/Rails version (a small sketch of pinning versions follows below). DNS is easy, but again it must mimic the live situation perfectly in order for AJAX calls etc. to work seamlessly.
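One small, concrete piece of that parity (the version numbers here are made up): pin the interpreter and framework in the Gemfile so a developer's desktop can't silently drift from production.

# Gemfile
source "https://rubygems.org"

ruby "3.2.2"            # hypothetical; match whatever production actually runs
gem "rails", "7.1.3"    # exact version rather than a loose "~>" constraint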
Even if you ensure all of the above, you will likely have to make a few minor changes when you move the app to a live server; there just always seems to be something, in my experience.
Also, running on a live server isn't SO painful for a developer. Saving a source file from a text editor/IDE via FTP should take less than a second even over the internet, and refreshing a remote browser session will give your UI designers a better feel for the real user experience and flow. If you use SVN rather than FTP, much the same applies.
Security isn't much of a concern: lock down FTP and SSH to the office IP, but have a backdoor available in case a developer needs to edit a source file from somewhere else, so they can temporarily open the firewall to their own IP.
I have developed PHP and Rails apps on a remote test server, on an in-office server, and on a local machine. After many years of doing each, I can say that as a developer, I don't mind any of them much.
As a developer, my suggestion is to do all development work locally first. After testing, send it to the client's server to make it live.
I'm working as a Ruby on Rails web developer at andolasoft.com, and we follow the same procedure. Hope you got the idea.
Thanks