How to deploy web apps in the agile way [closed] - ruby-on-rails

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
What is the best set of practices and tools that could support me in the continuous deployment of a web application?
We should be able to deploy effortlessly several times a day.
It is a Ruby on Rails 3 app. We use Git and GitHub.

What is the best set of practices
I am assuming you are trying to be agile here. Trying to derive a set of best practices for deployment is a little scary. For that matter, any list of best practices for an agile team is scary. If you study agile carefully, you will realize that it requires the team to inspect, adapt, and continuously improve; the moment you think your team has found the "best practices", you implicitly agree that you can stop improving, and hence stop inspecting and adapting. Mike Cohn, author of Agile Estimating and Planning, suggests that an agile team should not settle on a set of best practices; instead it should continuously improve by inspecting and adapting.
To give you some constructive feedback, here are some of the practices our Scrum team followed, which we ourselves figured out by inspecting and adapting our own deployment process. I will include our source-control check-in practices along with deployment.
Every time a developer checked in code, Hudson CI used an SCM poll trigger to automatically build and deploy the code to a development environment. It sent notifications of success or failure via email.
Hudson also triggered an automatic nightly build in the development environment.
After the features were ready and preliminarily tested in the dev environment, the QA engineer on the team triggered a Hudson build and deploy to the integration server, where the features could be integration-tested. The integration environment was an exact replica of the production environment.
Production deployments were usually done with Hudson as well, based on the release plan.
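To make this concrete, here is a minimal sketch of the kind of build script a Hudson/Jenkins job could invoke on each SCM poll, written in Ruby/Rake since this is a Rails app. The task names, test layout, the DEPLOY_ENV variable, and the assumption that Capistrano handles the actual deploy are illustrative, not a description of the team's real setup.

    # Rakefile -- a sketch of what a CI job could run on every poll/commit.
    require "rake/testtask"

    Rake::TestTask.new(:test) do |t|
      t.libs << "test"
      t.pattern = "test/**/*_test.rb"   # run the whole automated suite
    end

    desc "Deploy to the stage named in DEPLOY_ENV (defaults to development)"
    task :deploy do
      stage = ENV.fetch("DEPLOY_ENV", "development")
      # Assumes Capistrano multistage is set up with a stage of this name.
      sh "bundle exec cap #{stage} deploy"
    end

    desc "What the CI job invokes: test first, deploy only if the tests pass"
    task ci: [:test, :deploy]

With something like this, the Hudson job's build step is just "rake ci" with DEPLOY_ENV set per job, so the same script can drive the dev, integration, and production jobs.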
and tools that could support me in the continuous deployment of a web application?
There are several CI tools out there. My favorite of the lot is Hudson CI. Others are Continuum and CruiseControl. I think Hudson is the most versatile and easy-to-use tool, and because it has community-driven plugins it will be easy for you to find plugins for Git and for Ruby on Rails apps.

IMVU is the poster child for continuous deployment, and they got there by following the rule "if we are sure we didn't break anything, we deploy immediately." They now have very impressive automation around their process, but it started with that rule.
I think some of the ingredients that help with continuous deployment include:
Always have a working build. This means continuous integration running automated unit tests on every commit, and responding immediately to any failure. At IMVU they go as far as automatically reverting commits that break the build.
Extensive functional tests. This is what gives you confidence that you haven't broken anything. These tests tend to be slow, so you'll need a strategy to keep the test time down, such as running tests in parallel across many machines or using a service like SauceLabs.
Automated deployments (see the sketch below). Never deploy manually. Never change configuration manually. Deploy to all environments using the same technology.
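As one hedged illustration of "never deploy manually": a minimal Capistrano 3 configuration, so that every environment is deployed with the same one-line command. The application name, repository URL, and server address below are placeholders, not anyone's real setup.

    # config/deploy.rb -- shared deployment settings (values are placeholders)
    lock "~> 3.17"

    set :application, "myapp"
    set :repo_url,    "git@github.com:example/myapp.git"
    set :deploy_to,   "/var/www/myapp"
    set :keep_releases, 5

    # config/deploy/staging.rb and config/deploy/production.rb each just list
    # their own servers, for example:
    #   server "app1.example.com", user: "deploy", roles: %w[app web db]

Deployment to any environment is then the same push-button command (bundle exec cap staging deploy, bundle exec cap production deploy), which is also exactly what a CI job can run.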
When you say continuous deployment most people think of going automatically out to production without human intervention. You can stop short of that -- push-button production deployments -- and still get a lot of value. We (urbancode, makers of AnthillPro) help a lot of customers put these kinds of elements in place. Few people do the automatic production deploy, but automated deployment is helpful for everyone.
Jtf

Related

Best practices for iOS back-end versioning [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 10 months ago.
I've never worked at a tech company. I built an app from scratch with a NodeJS API and MongoDB hosted on AWS.
I will start hiring back-end developers to work with me.
What is the best way to make changes to the backend? All the .js files are stored in a repository, but what's the best way to test changes to the API before shipping?
The app is live on the app store. Should I put a duplicate build on TestFlight and connect it to version_2.js files and test it like that?
Or is there a better way to do it? Curious about what industry folks typically do.
There are a number of considerations:
Environments
Most teams have multiple “environments”. Frequently, you would have at least “development”, “test”, and “production” environments. Many organizations add a fourth, a “staging” or “pre-production” environment: something that mirrors production as closely as possible and is the last test platform before going to production.
For each of those environments, you would then have corresponding builds of your app, one that targets each of those environments. Each of those would be a separate TestFlight target so that testers can download and simultaneously run dev, test, and production builds of the app, which would go against the relevant backend.
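To illustrate the per-environment idea, here is a tiny sketch of mapping a build's target environment to its backend, so each build only ever talks to its own stack. It is written in Ruby to match the rest of this page (in an iOS app this mapping would typically live in build configurations, and in a Node backend in environment variables); the environment names and URLs are placeholders.

    # environments.rb -- each build targets exactly one backend stack
    BACKENDS = {
      "development" => "https://api-dev.example.com",
      "test"        => "https://api-test.example.com",
      "staging"     => "https://api-staging.example.com",
      "production"  => "https://api.example.com"
    }.freeze

    def backend_url(environment = ENV.fetch("APP_ENV", "development"))
      BACKENDS.fetch(environment) do
        raise ArgumentError, "unknown environment: #{environment}"
      end
    end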
Testing
There are a variety of types of tests that you should have. At a minimum, you should be thinking about:
Unit testing
Integration testing
UI testing
The building of these tests should be part of your development methodology. You should be automating these tests, doing regression testing with each new release, reviewing code coverage, etc.
Clearly, there are other types of tests that should be run (e.g., stress/load testing, etc.), but the above should get you going.
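If it helps to see what an automated test looks like in practice, here is a minimal sketch of an integration test hitting a backend endpoint. It is written in Ruby/minitest for illustration (a Node backend would use its own test runner), and the endpoint path and response fields are assumptions, not your actual API.

    # test/integration/status_test.rb -- hypothetical endpoint and JSON shape
    require "minitest/autorun"
    require "net/http"
    require "json"
    require "uri"

    class StatusEndpointTest < Minitest::Test
      BASE_URL = ENV.fetch("API_BASE_URL", "http://localhost:3000")

      def test_status_endpoint_reports_ok
        response = Net::HTTP.get_response(URI("#{BASE_URL}/api/v1/status"))
        assert_equal "200", response.code
        body = JSON.parse(response.body)
        assert_equal "ok", body["status"]   # hypothetical response field
      end
    end

Tests like this are run by the CI server against the test or staging environment on every merge, which turns "regression testing with each new release" into a routine step rather than a manual effort.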
Source control
You should have all of your code under source control, with branches that correspond to your environments (outlined in the first point above), so you know what code is in which of the environments.
Project management
You will want a project management system that dictates what is under development. Depending upon your development methodology, this will consist of “stories” that describe the work being done (which should correlate to the commits in your source control).
You will often have a kanban board that tracks these stories through the development/testing/deployment process and shows which environment each is in. This is where developers will indicate that code can be migrated to one of the test environments, where reviewers can either send a PR back for further development or advance it to acceptance testing, where acceptance testers can indicate whether something is ready for deployment, and so on.
Semantic versioning
I'd suggest a semantic versioning system. E.g., you might have the server communicate the API version, so that the client knows whether a newer version of the app is required or recommended.
You might also consider namespacing your endpoints. For example, you might be working on v1 endpoints, but when you introduce some breaking changes, that might be v2 endpoints that will be stood up in parallel (at least temporarily) with v1 endpoints. Etc.
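As a hedged illustration of namespaced, versioned endpoints (shown with Ruby on Rails routing to match the rest of this page; an Express app would do the equivalent with routers), where the resource names and the version-header name are placeholders:

    # config/routes.rb -- v1 and v2 served side by side while clients migrate
    Rails.application.routes.draw do
      namespace :api do
        namespace :v1 do
          resources :users, only: [:index, :show]
        end
        namespace :v2 do
          resources :users, only: [:index, :show]   # breaking changes live here
        end
      end
    end

    # In a base API controller, advertise the current API version so the client
    # can decide whether an app update is required or merely recommended:
    #   response.set_header("X-API-Version", "2.0.0")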

A basic question about continuous integration

This is not a programming question, but I don't know of a more active forum, and besides, programmers are the best people to answer my question.
I am trying to understand the rationale behind continuous integration. On one hand, I understand that it is good practice to commit your code daily before heading home, whether or not the coding and testing are complete. On the other hand, there is the continuous integration concept, where the minute something is committed it triggers a build and all the test cases are run. Aren't the two contradictory? If we commit whatever code is done each day, it will cause daily failed builds. Why don't we manually trigger builds once the coding and testing are complete?
Usually, committing your code daily is about making sure your work is not lost.
Continuous Integration (CI), on the other hand, is about testing whether what you produced is OK. In the majority of projects, CI isn't applied to individual branches (feature, bugfix); it's applied to the major branches (master, develop, releases, etc.). Those branches aren't updated daily, since they need a pull request to be updated and someone to approve that pull request.
The use case for running CI on individual branches (feature, bugfix) is to check, before a pull request is merged into a major branch, that the tests pass and the code builds.
So, to sum up: yes, you should commit your code daily, but you don't need to apply CI to it daily.
I suggest you look at the Gitflow workflow: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow
The answer is obvious.
1. Committing code: In general, code is committed only after it has been tested locally against the developer's environment.
Consider Developer_A working on Component_A; they may commit with only minimal verification, since their scope is limited to developing Component_A.
Now imagine a complex system with 50 developers developing Component_B...Component_Z++.
If someone commits code without that minimal testing, it will most probably give you a failed build.
Alternatively, a developer might commit to a development branch; that altogether depends on the SCM strategy adopted in the project.
2. Continuous integration test scope:
On the other hand, the integrator principally collects and combines the different pieces of code (software components) into one container and performs different tests.
Most importantly, the integrator needs to ensure that all the components developed by the different developers fit together and that, in the end, the software works as expected. To ensure that, the integrator has acceptance criteria, and to proactively catch anything that can go wrong, it is important to automate those criteria with the help of continuous integration.
Above all, it is important to give the developers feedback on the quality of the software. It is in the project's best interest (economically) to learn about bugs earlier; hence continuous integration and DevOps.
In a complex system it is worth having an automated watcher to catch the mistakes that sneak past developers.
3. Tools and automation:
To create a system that does not depend on humans, automation tools like Jenkins are helpful.
Based on the testing strategy, different levels of testing can be performed with the help of automation tools.

Purposes and differences between Magallanes, Travis, Jenkins, Ansible [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm trying to get myself up to date on workflow/deployment automation tools, but the quantity is overwhelming and I cannot discern the different purposes of the many tools I'm finding.
So far the ones that I found interesting are:
Magallanes:
What I understood so far: It's a deployment tool. Its purpose is to automate deployment so you can get rid of most human error and the time it takes to deploy.
Travis:
What I understood so far: A continuous integration tool. It's used to automate running the tests on commit/deployment. But... can it automate the deployment as well? Should I integrate it with Magallanes so Travis can manage the deployment through Magallanes?
Jenkins:
What I understood so far: Same as Travis, but a tool you install and configure yourself rather than a hosted service. Same doubts: can I automate deployments, or just run the tests?
Ansible:
What I understood so far: Automation of multiple tasks: deployment, service configuration management... I guess I could get rid of Magallanes and use Ansible, is that right? Can I integrate Ansible with Travis, or does Travis also do the deployment work Ansible would do (deployment is the only automation I'm interested in at the moment)?
As you can see I'm lost here.
Wow: Already a close vote, where should I put this? It's a programming-related question, and they're programming-related tools.
Edit: The thing is that I need to implement a deployment tool with the team and the projects I'm working on.
The doubts I have are about which tool I should be using (or which tools I should integrate together). For example: I know Travis is for test automation, but can I use it for deployment? As I said, should I use it together with a more deployment-oriented tool (Magallanes or Ansible)... or maybe directly with Git?
The team was using FileZilla to upload things to production and SVN as a code-sharing tool (no branches)... I was thinking of using Git (server side, bye bye FileZilla) with hooks and a proper branching system, but I know there are better ways and more complete deployment flows.
Travis and Jenkins are both continuous integration tools. Their primary purpose is to run your test suite on all commits, but some tools in this category can also trigger automatic deployments when the build is passing. People writing code that needs to be compiled will sometimes talk about build artifacts, which are the things that can actually be deployed, but if you're using PHP, you're probably just doing a git pull or dropping a tarball on the server, so you don't need to be concerned with this aspect of CI tools.
I haven't heard of Magallanes before now, but yes, it appears to be a deployment tool. Many companies create their own deploy tools for their specific situation, sometimes based on a tool like Capistrano or Fabric.
Ansible is a configuration management tool. Primarily this is for managing configuration of your servers, but as a side benefit, since it knows about all your servers, it can also handle deploying new code to them. Other popular tools in this category are Puppet, Chef, and Salt.
These tools are all about automation of pre-existing processes. So, as you find a step that you're doing over and over again, go research what tool can be used to solve that problem; I find this to be a much better approach than to find the tools first and try to determine what problem of yours they can solve.

Considering Porting App from .NET to Erlang - need advice [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I am looking at Erlang for a future version of a distributed soft-real-time hosted web-based telephony app (i.e. Erlang looks like absolutely the perfect choice for this kind of app). I come from a .NET background and the current version of this app uses a combination of C#, WCF and JQuery to deliver the service. I now need Erlang to allow me to add extra 9s to my up-time and to allow me to get more bang for my server bucks.
Previously I'd set up a development process here combining VS.NET, GIT, TeamCity and auto-deployment of MSI files to the various environments we maintain. It's not perfect, but we're all now pretty comfortable with it. I'm wondering whether a process like we have is even appropriate for such a radically different technology stack (LYME)?
I'm confident that all of the programming challenges we previously solved using .NET can be better solved in less code with Erlang, so I'm completely sold on the language choice. What I don't yet understand from reading the Pragmatic and O'Reilly books on Erlang is how I should adapt my software engineering and application life-cycle management (ALM) processes to suit the new platform. I see that in-place code updates could make my (and my testing and ops team's) life much easier (compared to the god-awful misery of trying to deploy MSI files across a Windows network), but I am not sure how things should change when I use Erlang.
How would you:
do continuous integration in Erlang (is it commonly used?)
use it during a QA cycle (we often run concurrent topic branches using GIT, that get their own mini-QA cycle, so they all get deployed into a test environment)
build and distribute your code to DEV, TEST, UAT, STAGING, and PROD environments
integrate code generation phases into your build cycle (we currently use MSBUILD + T4 templates)
centralize logging for a bunch of different servers (we currently use Log4Net, MSMQ, etc)
do alerting with tools like SCOM
determine whether someone/something has misconfigured your production servers
allow production hot-fixes only after adequate QA (only by authorized personnel)
profile the performance (computation and communication) of your apps
interact with windows-based active directory servers
I guess I need to know what worked for you and why! What tools and frameworks did you use? What did you try that failed? What would you do differently if you could start over, knowing what you know now?
Whoa, what a long post. First, you should be aware that the 99.9%-and-better kool-aid is a bit dangerous to drink while blind. Yes, you can get some astounding stability figures, but you need to write your program in a way that facilitates this. It does not come for free, and it does not happen by magic either. Your application must be designed so that when one subsystem fails, the other subsystems recover. OTP will help you a lot - but it still takes time to learn.
Continuous integration: Easily done. If you can call rebar or make through your build-bot you are probably set here already. Look into eunit, cover and Erlang QuickCheck (the mini variant is free for starters) - all can be run from rebar.
QA cycle: I have not had any problems here. Again, if using rebar you can build embedded releases that are minimized Erlang VMs you can copy anywhere and run (they are self-contained). You can even hot-deploy fixes to such a system pretty easily by altering the code path a bit so you have an overlay of newer fixes. Your options are numerous. Git already helps you a lot here.
Environmentalization: Easily done.
Logging centralization: Look into SASL and the error_logger. You can do anything you want here.
Alerting: The system can be probed for all you need (introspection is strong in Erlang). But you might have to code a bit to hook it up to the system of your choice.
Misconfiguration: Configuration files are Erlang terms. If it can be computed, it can be done.
Security: Limit who has access. It is a people problem, not a technical one in my opinion.
Profiling: cprof, cover, eprof, fprof, instrument + a couple of distributed systems for doing the same. Random sampling is also easy (introspection is strong in Erlang).
Windows interaction: Dunno. (Bias: last time I used windows professionally was in 1998 or so).
Some personal observations:
Your largest problem might end up being that you try to cram Erlang into your existing process and it resists. It is a new environment, so new approaches will be needed in places, and you should expect to adapt and work around limitations you find along the way. The general consensus is that it can work (it is working for several big sites).
It looks like you have a well-established and strict process. How much is that process allowed to be sacrificed to give way to a new kind of thinking?
Are your programmers willing to throw out almost all of their OO knowledge? If not, you will end up with a social problem rather than a technical one. If they are like me, however, they will cheer, clap their hands, and get a constant high from working with an interesting language solving an interesting problem in a new way.
How many Erlang-experienced programmers do you have? If you have rather few, then better cut your teeth on some smaller subsystems first and then work towards the larger goal. Getting the full benefit of the system takes months if not years. Getting partial benefit can be had in weeks though.

How to release often with Lean/Kanban? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I am quite new to Lean/Kanban, but have pored over online resources over the last few weeks and have come up with a question that I haven't found a good answer for. Lean/Kanban otherwise seems such a good fit for our company, which is already using Scrum but has reached some limitations within that methodology. I hope someone here can give me a good idea.
As I see it, one of the biggest advantages of Scrum over Waterfall is the use of sprints. By having everything ready every 14 days you get short feedback cycles and can release often. However, as I have understood from reading about Lean, there are some costs associated with this (for example, time spent in sprint planning meetings, team commitment meetings & some problems with finding something useful for everyone at the end of the sprints).
Lean/Kanban will remove these wastes, but only at the cost of not being able to release every 14 days. Or have I missed an important point? After all, in Kanban, how can you work on new development tasks and release at the same time? How do you make sure you don't ship something that is only halfway done? And how can you test it properly?
My best "solutions/ideas" so far are:
Don't release often and allow the waste associated with running out of new development tasks. Not really a solution to the question asked though.
Develop in branches and then merge into the main trunk. This forces you to continuously maintain at least two branches internally.
Use some smart automatic labelling system to automatically build only certain finished tasks and not others.
As a summary, my question is: When you use Lean/Kanban, can you release often without introducing waste? Or is release often not part of Lean/Kanban?
Additional info specific to my company:
We use Team Foundation System & Source Control and have previously had some bad experiences in regards to branching and merging. Could this be solved simply by bringing in some expertise in this area?
The problem you describe seems to be more about source control -- how to separate finished features from features in progress -- than about Kanban. You seem to put a heavy penalty on running many branches, which is the case for source control systems not built around the idea of multiple branches. In distributed version control systems, such as Git and Mercurial, everything is a branch, and creating and working with branches is lightweight.
I assume you read this blog about Kanban vs SCRUM, and the associated practical guide?
And, in answer to your question, yes, you can release often with Kanban.
You need to understand pull systems, which is what Kanban is designed to manage.
A customer (or product owner or similar) request for a feature in the running system is what triggers the process.
The request is a signal that goes to deployment. Deployment looks for a tested item with properties that match the request. If none is there, you write the tests and look to development for a free development slot that can be used to implement something that fulfils the tests. When development is done (perhaps after a suitable analysis first, and so on), testing tests, and deployment deploys.
The requests going backwards through the system are permissions to start working. As soon as the request has arrived, this triggers a lot of activity, where each activity should be completed as quickly as possible. There you have your turbo deployment.
Just like a request for a car goes to the dealer, who checks the stock, which signals the car factory, which signals the suppliers.
Kanban is not about pushing requests through a system. It is about pulling functionality out of the system in exchange for a request that enters via the last step.
The team I manage uses Kanban and we release around every two weeks. If you're strict about what gets integrated into your mainline code branch (tests passing, customer approved, etc.), Kanban allows you to release whenever you want. You need to make sure that the stories moving through your system aren't co-dependent in order to do this, but on my team that's usually not a problem - a large part of our work involves maintenance, which consists of several unrelated bug fixes / features per release.
The way we handled weekly releases on a sustained engineering project that used Kanban was to implement a branching strategy. The devs worked in a sandbox branch, and made one checkin per work item. Our testers would test the work item in the sandbox; if it passed the regression tests the checkin would be migrated to our release branch. We locked the release branch from noon Monday until the release went out (usually by Wednesday, occasionally by Thursday, the drop dead date was Friday), and re-ran the regression tests for all migrated checkins as well as integration tests for the product, dropping a release once all of the tests passed.
This strategy let devs continually be working on issues without being frozen out of their branch during the release process. It also let them work on issues that took more than a week to resolve; if it wasn't checked in and tested/approved it didn't get migrated.
If I were running Kanban for a new version of a project, I'd use a similar strategy but group all related checkins as a 'feature', migrating a feature en masse to the release branch once the feature was done and then performing additional unit/integration/acceptance/regression testing in the release branch before dropping a release with that feature. Note that a key concept of Kanban is limiting work in progress, so I might restrict my team to work on one feature at a time (this would probably be several work items/user stories).
There's more to this than just source control, but your choice of TFS is going to limit you. When the Burton project was conceived back in 2004, Microsoft wasn't paying attention to Agile, much less Lean. It's going to be your weakest mechanical link for some time. Your hackles should have been raised by CodePlex's own adoption of Mercurial after having been offered to the Microsoft community as the poster child of TFS implementation.
A more salient issue here is Work Design. It encompasses the order that you choose to implement features (work schedule), as well as prioritization and cost of delay, and the shape and size of work items.
Scrum is commonly interpreted to say that non-technical "Product Owners" can determine the work schedule based solely on their own concerns. If you follow this path, you're going to incur a lot of waste by missing opportunities to do work together that belongs together. Work that belongs together can't be determined by Product Owner wishes alone; technical and workforce (skills) opportunities must also be taken into consideration.
For work to be done in the most productive way, the work itself has to be designed that way. This means that in a Lean Product Development team, decisions are made not by a non-technical worker, but by what Toyota calls someone of "Towering Technical Competence" who is close to the product, close to the customers, and close to the team.
This role is a stark contrast to Scrum's proposition. A Chief Engineer on a Lean team is himself (or herself) the voice of the customer, and the role of Product Owner is unnecessary.
Scrum's "Product Owner" is a recognition of an under-developed role in software development organizations, but it's far from a sustainable solution that consistently avoids waste. The role of "Software Architect" is often insufficient as well, as in some developer sub-cultures, the architect has become far too removed from the work.
Your issues of continuous deployment are only partially addressed with technology and tools. Look also to organizational issues, and perhaps give some thought to Scrum's purpose as a transitional approach from waterfall rather than one that can serve your organization indefinitely.
For source control I'd highly recommend Perforce. It makes branching and integrating changes from other branches relatively straightforward, and provides the best interface for source control that I've seen so far.
Continuous integration helps as well - i.e. lots of small, more than daily commits, instead of huge and potentially challenging merges. Tools like CruiseControl can help highlight when the source gets broken by a bad commit. Also, if everyone makes many small changes then conflicting changes will be rare.
I'd also advise not trying to follow things like Lean, Scrum, Kanban and co. too closely. Just solve the problems yourself, looking to these ideas for guidance rather than instruction. The specifics of your problems will more than likely require some flexibility to manage well.
How we do it:
We have a pipeline with the following stages
Backlog
TODO
In progress (Develop and quick testing)
Code review
Test (Rigorous testing)
Integration test and general acceptance tests
Deploy
Each story is developed as a branch based on the latest version to leave the Deploy stage. They are then integrated as part of preparing the integration test.
QA pulls from the code review stage and can prepare releases at whatever pace they want. I think we currently release roughly once a week.
By removing the "master" branch from git and not doing any merge before the code review stage we've made sure that there is no possibility to "sneak" code into releases. Which, as an interesting by-product, has forced us to visualize a lot of the work that used to be hidden.

Resources