Coding on multiple machines - ruby-on-rails

What methods would you use to securely use multiple machines to work on code in active development?
My Ideal Situation
Sharing development code securely among multiple machines (at least two)
Automatic synchronization (think Google docs whereby any user's changes update all the others immediately). The reason for this is that I'd like to be able to use these computers interchangeably without having to commit / clone every time I switch. My understanding is that automatic synchronization would make it possible to switch machines seamlessly without having to commit a bunch of files each time.
The location of the development code is such that it can be accessed by a local Rails server and rendered on localhost:3000.
The solution works for Apple machines (both my computers are Apple).
I'm not sure if this question is a 'reasonable' question in terms of its specificity but it's the best first attempt I have. Thanks!

If you are the only person working on this project, then a service like Dropbox would work and provide you with the automatic synchronisation you're after.
However, if you're working with someone else on this project, or you're likely to do so in the future, then it's worth learning the basics of Git (or some other distributed version control system). It's probably not as hard as you expect:
You can get by with a few basic commands (see Everyday Git with 20 commands or so).
You can simplify things even further with git-up (this isn't perfect, but in most cases it makes fetching remote changes into a single command).
There are various OS X GUIs available to help you, including GitHub for Mac and GitX.
You can get private repository hosting from GitHub (for a small fee) or from Bitbucket (for free).
Syncing with Git won't be automatic, but it does give you a lot of flexibility.
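For example, a minimal day-to-day flow between two machines might look like this (the repository URL and branch name are placeholders):
git clone git@github.com:you/yourapp.git   # one-time setup on each machine
git add -A && git commit -m "WIP: switching machines" && git push origin master   # before leaving machine A
git pull origin master   # when picking up on machine B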

Use any version control system (I'd guess you already do), and if you really need automatic synchronisation, put something like this in your crontab (assuming Subversion and Growl for notifications):
* * * * * cd /path/to/checkout && svn update && svn commit -m 'auto-sync' || (echo "sync failed" | growlnotify)
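If you use Git rather than Subversion, a roughly equivalent (and equally blunt) crontab entry might be:
* * * * * cd /path/to/repo && git add -A && (git commit -m 'auto-sync' --quiet || true) && git pull --rebase --quiet && git push --quiet || echo "sync failed" | growlnotify
Be aware that blind auto-commits like this will happily commit broken or half-finished code.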
I think version control is the safest best practice for code sharing.
If it is only you working on the code, you can certainly set up another mode of syncing, like an auto-syncing Dropbox folder.
Also, if you always have your laptop accessible when working on the desktop, you could just share the laptop's code location as a network folder on your desktop, so no syncing is needed :)

Dropbox is the best free and easy solution; I use it for my personal projects as well. Just beware of running Dropbox on two computers at once: if one PC is still sending changes while the other is only running Dropbox, you can lose your work (that was my case).

Related

Developing FrontEnd App without installing BackEnd

As I prepare for my team to grow, I have spent months looking for advice and good practices on welcoming a FrontEnd developer.
We run a Rails API on the BackEnd and Angular on the FrontEnd. Right now we use two separate git repos, one for the front and one for the back.
Both are hosted on Heroku.
But our developers work against a local BackEnd, and as we welcome new FrontEnd developers we don't want them to have to install the whole Rails stack and its configuration.
I have looked at different solutions and don't know which is possible or best:
A BackEnd deployed on Heroku, with a tunnel for the FE dev to access it (what about CORS?)
Deploying a Vagrant box or RailsBox? (we tried Docker and it didn't work at all)
Moving the front repo into the back repo (yes, I know that doesn't fix the main issue, but I'm wondering whether the repo split could be an obstacle to our goal?)
I would definitely merge the repos. This will allow you to deploy a changeset (frontend + backend) across the stack. Otherwise, deployment becomes complicated at a stage where you don't need that complexity.
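One way to do the merge while preserving the frontend's history is git subtree; a rough sketch, with made-up repository and directory names:
git subtree add --prefix=frontend git@github.com:yourorg/frontend.git master
git subtree pull --prefix=frontend git@github.com:yourorg/frontend.git master
Run these inside the backend repo; the first command imports the frontend into a frontend/ directory, the second pulls in later frontend commits.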
I think the best way is to have a one-liner for your frontend developers to install the backend environment (see the sketch after the list below). At uberVU, we used Vagrant for this purpose, but anything works IMO as long as it's a one-liner and works across operating systems.
You have to keep a few facts in mind:
make the configuration tying the frontend to the backend as dumb as possible and have good defaults in place. You don't want frontend people wasting valuable time figuring out how to link the two after getting them running together
make sure that whatever solution you use, it updates the running backend automatically when they pull in new code. One very frequent mistake in our case was frontend people updating the code and then not seeing API endpoints working correctly, etc. Something that watches for filesystem changes and restarts the backend daemons should work. Be careful about whether that works correctly with shared filesystems between the host machine and the virtualization solution you choose
make sure that the virtualization solution you choose runs on Windows and MacOS. While backend devs tend to use MacOS and Linux, frontend developers are also big fans of Windows, and the latest versions are more and more pleasant to work with.
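As a sketch of what that one-liner could look like for a new frontend developer, assuming the backend repo ships a Vagrantfile and provisioning scripts (the repository name here is illustrative):
git clone git@github.com:yourorg/backend.git && cd backend && vagrant up
The Vagrantfile would forward the API port so the frontend can talk to http://localhost:3000 with no further configuration.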

Ruby/RoR development: locally or server

Our company has started developing its own systems "in-house". We already have a couple of developers who will be responsible for writing code in Ruby/RoR.
We are currently discussing infrastructure and I would like to ask: should we develop everything on local machines, then put it on a test server and later into production, or develop everything on a development/test server, then publish it for testing and later to production?
Just an update to the description above: by "local machines" I mean developers' desktops, and the test/development server is a machine in our office.
It's a valid question, and there's a trade-off to consider.
Generally: work locally. Web app development has a natural flow that leads developers to be saving and refreshing browsers many times an hour. All the time you save on network latency will really add up, and be less frustrating for the developers.
There are downsides to working locally, however: you'll need to make sure that your set-up is EXACTLY as it will be on the testing/production servers. That means everything down to your kernel version, Apache version, and Ruby/Rails version. DNS is easy, but again it must mimic the live situation perfectly in order for AJAX calls etc. to work seamlessly.
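One low-tech way to keep an eye on that parity is to print the relevant versions in each environment and diff the output; a rough sketch (extend the list with whatever your stack depends on):
uname -r        # kernel version
ruby -v         # Ruby version
rails -v        # Rails version
apachectl -v 2>/dev/null || nginx -v    # web server version
Run it locally and on the test/production servers, and compare.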
Even if you ensure all of the above, you will likely have to make a few minor changes when you move the app to a live server; there just always seems to be something, in my experience.
Also, running on a live server isn't SO painful for a developer. Saving a source file from a text editor/IDE via FTP should take less than a second even over the internet, and refreshing a remote browser session will give your UI designers a better feel for the real user experience and flow. If you use SVN rather than FTP, much the same applies.
Security isn't much of a concern: lock down FTP and SSH to the office IP, but have a backdoor available in case a developer needs to edit a source file from somewhere else, so they can temporarily open the firewall to their own IP.
I have developed PHP and Rails apps on a remote test server, on an in-office server and on a local machine. After many years doing each, I can say that as a developer I don't mind any of them that much.
As a developer, my suggestion is to do all development work on your local machine first. After testing, send it to the client to make it live.
I'm working as a Ruby on Rails web developer at andolasoft.com, and we follow the same procedure. Hope you got the idea.
Thanks

What is Plan B for when heroku goes down on your production app?

This question is inspired by this recent outage:
https://status.heroku.com/incident/212
There doesn't seem to be much I can do here. I can't push at all, and pushing seemed to be what broke it in the first place. AFAIK, I can't switch over to a new server deployed on AWS or elsewhere without fiddling with the DNS records. What should I do?
When you use an "all-in-one" service like Heroku, you accept and understand that, in case of this kind of issue, you're in their hands and there's nothing you can do.
You can keep a backup system configured elsewhere but, from my point of view, this is a waste of time and resources because:
it requires you to configure and clone all Heroku settings and features
it's double the work
in case of issues, the only way to redirect the traffic to your app is to change DNS settings, and DNS changes take time to propagate
if you can clone Heroku features, you might not want to use Heroku at all
It's a good idea to have an off-site backup of your application, database and features. But on the other hand, these issues are the trade-off of using this kind of service.
The only real thing you could do would be to not rely on a single service provider for your application. This means that you would need to break out the DNS from the hosting platform so that you can re-point to a different platform (such as AWS).
Depending on your hosting platform, there are different options, but in a nutshell, the key is to reduce single points of failure and have plans in place to switch over when things do fail.

How to prepare for data loss in a production website?

I am building an app that is fast moving into production and I am concerned about the possibility that due to hacking, some silly personal error (like running rake db:schema:load or rake db:rollback) or other circumstance we may suffer data loss in one database table or even across the system.
While I don't find it likely that the above will happen, I would be remiss in not being prepared in case it ever does.
I am using Heroku's PG Backups (which is to be replaced with something else this month), and I also run automated daily backups to S3: http://trevorturk.com/2010/04/14/automated-heroku-backups/, successfully generating .dump files.
What is the correct way to deal with data loss on a production app?
How would I restore the .dump file in case I need to? Can I do a selective restore if a small part of the system is hit?
In case a selective restore is not possible: assume one table loses data 4 hours after the last backup. Result => would fixing the lost table require rolling back 4 hours of users' activity? Any good solution to this?
What is the best way to support users through the inconvenience if something like this happens?
A full DR (disaster recovery) solution requires the following:
Multisite. If a fire, flood, Osama Bin Laden or what have you strikes the Amazon (or is it Salesforce?) data center that Heroku uses, you want to be sure that your data is safe elsewhere.
On-going replication of the data to a separate site (or sites). That means that every transaction that's written to your database on one site, is replicated within seconds to the mirror database on the other site. Most RDBMS's have mechanisms to let you do a master-slave replication like that.
The same goes for anything you put on a filesystem outside of the database, such as images, XML configuration files etc. S3 is a good solution here - they replicate everything to multiple data centers for you.
It won't hurt to create periodic (daily or so) dumps of the database and store them separately (e.g. on S3); a concrete cron sketch follows below. This helps you recover from data corruption that propagates to the slave DBs.
Automate the process of data recovery. You want this to just work when you need it.
Test everything. Ideally, you want to automate the test process and run it periodically to ensure that your backups can restore. Netflix Chaos Monkey is an extreme example of this.
I'm not sure how you'd implement all this on Heroku. A complete solution is still priced out of reach for most companies - we're running this across our own data centers (one in the US, one in the EU) and it costs many millions. Work according to the 80-20 rule - on-going backup to a separate site, plus a well-tested recovery plan (continuously test your ability to recover from backups) covers 80% of what you need.
As for supporting users, the best solution is simply to communicate timely and truthfully when trouble happens and make sure you don't lose any data. If your users are paying for your service (i.e. you're not ad-supported), then you should probably have an SLA in place.
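As a concrete example of the periodic off-site dump mentioned above, a nightly cron entry along these lines covers a lot of ground; the database name, bucket and AWS CLI usage are assumptions, not something Heroku gives you out of the box:
0 3 * * * pg_dump --format=custom myapp_production | aws s3 cp - s3://my-backup-bucket/db/myapp-$(date +\%F).dump
The custom format is what pg_restore expects, and the escaped \% keeps cron from treating the date format as a line break.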
About backups, you can never be 100 percent sure that no data will be lost. The best thing is to test them on another server. You must have at least two types of backup:
A database backup, like a pg_dump. A dump is just SQL commands, so you can use it to recreate the whole database, just a table, or just a few rows. You lose the data added in the meantime.
A code backup, for example a git repository.
In addition to Hartator's answer:
use replication if your DB offers it, e.g. at least master/slave replication with one slave
do database backups on a slave DB server and store them externally (e.g. scp or rsync them out of your server)
use a good version control system for your source code, e.g. Git
use a solid deploy mechanism, such as Capistrano and write your custom tasks, so nobody needs to do DB migrations by hand
have somebody you trust check your firewall setup and the security of your system in general
The DB dumps contain SQL commands to recreate all tables and all data... if you were to restore only one table, you could extract that portion from a copy of the dump file, (very carefully) edit it, and then restore with the modified dump file (for one table).
Always restore to an independent machine first and check whether the data looks right. E.g. you could use one slave server, take it offline, restore there locally and check the data. It's good if you have two slaves in your system: then the remaining system still has one master and one slave while you restore to the second slave.
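If the .dump file is a PostgreSQL custom-format dump (as Heroku's are), you usually don't need to hand-edit SQL at all: pg_restore can restore a single table into a scratch database, where you can inspect it before copying rows back. A sketch, with made-up names:
createdb restore_check
pg_restore --dbname=restore_check --table=users latest.dump
psql restore_check -c "SELECT count(*) FROM users;"
Once the data looks right, copy the rows you need back into production by hand.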
To simulate a fairly simple "total disaster recovery" on Heroku, create another Heroku project and replicate your production application completely (except use a different custom domain name).
You can add multiple remote git targets to a single git repository so you can use your current production code base. You can push your database backups to the replicated project, and then you should be good to go.
The only step missing from this exercise versus a real disaster recovery is assigning your production domain to the replicated Heroku project.
If you can afford to run two copies of your application in parallel, you could automate this exercise and have it replicate itself on a regular basis (e.g. hourly, daily) based on your data loss tolerance.
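A sketch of the mechanics, assuming a second Heroku app named myapp-dr (the app name is made up, and the exact pg:backups commands depend on your Heroku CLI version):
git remote add dr https://git.heroku.com/myapp-dr.git
git push dr master
heroku pg:backups:capture --app myapp
heroku pg:backups:restore "$(heroku pg:backups:url --app myapp)" DATABASE_URL --app myapp-dr
Automating those last three steps on a schedule gives you the regular self-replication mentioned above.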

Tools to assist managing the application promotion process in an enterprise environment

I am curious on how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is stuck halfway between some custom online-forms functionality and paper-based processes for submitting documents and gathering approvals and reviews.
All this is left in the project managers' hands: tracking what has been submitted, passed review and been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find one that's good via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. Team Track is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and Cruise Control for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features, with the added bonus of SVN, which is a very nice code version manager.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code freeze date that stops work on new features; the test environment gets the code that has been tagged/labelled/archived, which then gets built, copied onto the machines, and the tests are run. This is also usually the least detailed of any push.
Test -> Prod: This brings the minor complication that production has to go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so that the promotion happens and none of the customers experience any downtime, since the ones on the older server move over once their session ends.
To elaborate on that switch idea: the set-up is to have two potentially live servers, with the load balancer sending all the traffic to just one machine; the switch is flipped when the other server has the updated code and is ready to go live.
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used where generally there are schedules and timelines potentially have to be changed or additional resources brought in to make a hard date like if a conference is on a particular weekend that such and such is ready for that.
Of course in a few places there has been the, "Oh, was that broken? Let me see..." and a few minutes later, "No, see it isn't broken for me," where someone changed things without asking permission or anything where a company still has what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files that isn't really complicated but isn't exactly trivial.
3) Medium - Where going from one environment to another requires changing a bunch of files, and there are usually scripts to move them.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new including how this would be used. I don't think I've ever seen one of this size but I'd imagine Microsoft or Google would have releases of this size.
Most releases fall somewhere in that spectrum, so how much planning and preparation they need can vary quite a bit, and let's not forget that regulatory compliance can be its own pain in getting some things done.
