I'm trying to get up to date on workflow/deployment automation tools, but the quantity is overwhelming and I can't work out how the purposes of the many tools I'm finding differ.
So far the ones that I found interesting are:
Magallanes:
What I understood so far: It's a deployment tool. Its purpose is to automate deployment so you can eliminate most human error and reduce the time a deployment takes.
Travis:
What I understood so far: Continuous integration tool. It's used to automate running the tests on each commit/deployment. But... can it automate the deployment as well? Should I integrate it with Magallanes so Travis can manage the deployment through Magallanes?
Jenkins:
What I understood so far: Same as Travis, but a tool you install and configure yourself rather than a hosted service. Same doubts: can I automate deployments, or just test runs?
Ansible:
What I understood so far: Automation of multiple tasks: deployment, service configuration management... I guess I can drop Magallanes and use Ansible instead, is that right? Can I integrate Ansible with Travis, or does Travis also do the deployment work Ansible would handle (deployment is the only automation I'm interested in at the moment)?
As you can see I'm lost here.
Edit: The thing is that I need to set up a deployment workflow with the team for the projects I'm working on.
The doubts I have are about which tool I should be using (or which tools I should integrate together). For example: I know Travis is for test automation, but can I use it for deployment? As I said, should I use it together with a more deployment-oriented tool (Magallanes, or Ansible)... or maybe directly with Git?
The team has been using FileZilla to upload things to production and SVN as a code-sharing tool (no branches)... I was thinking of using Git (server side, bye bye FileZilla) with hooks and a proper branching system, but I know there are better ways and more complete deployment flows.
Travis and Jenkins are both continuous integration tools. Their primary purpose is to run your test suite on all commits, but some tools in this category can also trigger automatic deployments when the build is passing. People writing code that needs to be compiled will sometimes talk about build artifacts, which are the things that can actually be deployed, but if you're using PHP, you're probably just doing a git pull or dropping a tarball on the server, so you don't need to be concerned with this aspect of CI tools.
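To make the "can also trigger automatic deployments" part concrete, here is a minimal .travis.yml sketch. It is only an illustration under assumptions: the PHP version, the phpunit test command, and the ./deploy.sh script are placeholders I made up, not details from the question.
# Minimal .travis.yml sketch; the deploy script is hypothetical.
language: php
php:
  - "7.2"
script:
  - vendor/bin/phpunit        # run the test suite on every push
deploy:
  provider: script            # run an arbitrary command as the deploy step
  script: ./deploy.sh         # hypothetical deployment script
  skip_cleanup: true          # keep the build output for the deploy step
  on:
    branch: master            # deploy only when master is green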
I haven't heard of Magallanes before now, but yes, it appears to be a deployment tool. Many companies create their own deploy tools for their specific situation, sometimes based on a tool like Capistrano or Fabric.
Ansible is a configuration management tool. Primarily this is for managing configuration of your servers, but as a side benefit, since it knows about all your servers, it can also handle deploying new code to them. Other popular tools in this category are Puppet, Chef, and Salt.
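As a rough sketch of how Ansible can handle a deploy (the host group, repository URL, paths, and service name below are placeholders, not details from the question), a small playbook might pull the latest code and restart the application service:
# Minimal Ansible playbook sketch with placeholder values.
- hosts: webservers
  become: true
  tasks:
    - name: Pull the latest application code
      git:
        repo: https://example.com/your/app.git   # placeholder repository
        dest: /var/www/app                        # placeholder path
        version: master
    - name: Restart the application service
      service:
        name: app                                 # placeholder service name
        state: restarted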
These tools are all about automating pre-existing processes. So, as you find a step that you're doing over and over again, go research what tool can be used to solve that problem; I find this to be a much better approach than finding the tools first and trying to determine which of your problems they can solve.
I've never worked at a tech company. I built an app from scratch with a NodeJS API and MongoDB hosted on AWS.
I will start hiring back-end developers to work with me.
What is the best way to make changes to the backend? All the .js files are stored in a repository, but what's the best way to test changes to the API before shipping?
The app is live on the app store. Should I put a duplicate build on TestFlight and connect it to version_2.js files and test it like that?
Or is there a better way to do it? Curious about what industry folks typically do.
There are a number of considerations:
Environments
Many would have multiple “environments”. Frequently, you would have at least “development”, “test”, and “production” environments. Many organizations have a fourth environment, adding a “staging” or “pre-production” environment to that mix, something that mirrors production as closely as possible and is the last test platform before going to production.
For each of those environments, you would then have corresponding builds of your app, one that targets each of those environments. Each of those would be a separate TestFlight target so that testers can download and simultaneously run dev, test, and production builds of the app, which would go against the relevant backend.
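As a purely illustrative sketch (the file name and URLs are hypothetical, not from the question), the environments could be captured in a small configuration file, with each build of the app reading the entry for the environment it targets:
# Hypothetical environments.yml; names and URLs are illustrative only.
development:
  api_base_url: https://api-dev.example.com
test:
  api_base_url: https://api-test.example.com
staging:
  api_base_url: https://api-staging.example.com
production:
  api_base_url: https://api.example.com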
Testing
There are a variety of types of tests that you should have. At a minimum, you should be thinking about:
Unit testing
Integration testing
UI testing
The building of these tests should be part of your development methodology. You should be automating these tests, doing regression testing with each new release, reviewing code coverage, etc.
Clearly, there are other types of tests that should be run (e.g., stress/load testing, etc.), but the above should get you going.
Source control
You should have all of your code under source control, with branches that correspond to your environments (outlined in the first point above), so you know what code is deployed in which environment.
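One way to wire branches to environments, sketched here with Travis CI purely as an example (Travis is not part of this answer, and the branch names and deploy scripts are placeholders), is to give the CI configuration one deploy entry per branch:
# Sketch of a .travis.yml deploy section mapping branches to environments.
deploy:
  - provider: script
    script: ./deploy.sh staging      # hypothetical script and argument
    on:
      branch: develop                # develop branch -> staging environment
  - provider: script
    script: ./deploy.sh production
    on:
      branch: master                 # master branch -> production environment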
Project management
You will want a project management system that dictates what is under development. Depending upon your development methodology, this will consist of “stories” that describe the work being done (which should correlate to the commits in your source control).
You will often have a kanban board that tracks these stories through the development/testing/deployment process and shows which environment each one is in. This is where developers will indicate that code can be migrated to one of the test environments, where reviewers can either send a PR back for further development or advance it to acceptance testing, where acceptance testers can indicate whether something is ready for deployment, etc.
Semantic versioning
I'd suggest a semantic versioning system. E.g., you might have the server communicate the API version, so that the client knows whether a newer version of the app is required or recommended.
You might also consider namespacing your endpoints. For example, you might be working on v1 endpoints, but when you introduce some breaking changes, that might be v2 endpoints that will be stood up in parallel (at least temporarily) with v1 endpoints. Etc.
I've been learning Travis CI and I want to use it to help automate tests on a MEAN application, then deploy it. However, there are several ways to go about this.
After reading, I learned I can create two separate repositories, thus maintaining two separate applications: a client application and a backend application. Since they are separate repositories, I can have separate .travis.yml files in each and perform continuous integration on the client application and the backend application. However, I need advice on this approach because I have questions:
For the client app, I have to write tests. Since I'll be using Angular, I want to test responsiveness and whether components work as intended. The client application also has to communicate with the backend application, and I want to check that it gets the correct results (for example, clicking a button triggers a GET request and I verify the response body). Since the client app is in a separate repository, when I build it on Travis CI, how will I connect the client application to the backend application that lives in another repository?
I read around and learned I can use Git submodules. The client application and the backend application could thus be submodules of a 'master repository'. How will the trigger in Travis CI work then? Will I have separate .travis.yml files in each submodule, or will I have to have one in the 'master repository'?
If I were to get everything to work properly and have the client application and backend application both successfully deploy and the two are hosted on different servers, how will I fix the cross-domain issue?
The other approach is to take the static files produced by ng build --prod and have the Node backend application serve them. When Travis CI is triggered, I can first build the Node backend application and run its tests, and then run the tests on the Angular client application. After all of the tests pass, where do I deploy? I know I have to deploy the Node application since it will serve the static files, so how exactly will I deploy the backend application from Travis CI?
I know this is going to push it, but I'll ask it anyway. In the future, I want to learn how to implement microservices, and I want to use Nginx for the purpose of load balancing. How will I go about that? Docker can help me create a production environment where I can see if the Nginx server and node application are working well, but how do I include that in Travis CI?
If my question is a bit vague, please let me know which parts are vague so I can edit it and make clearer what I'm asking for. Thank you, and I look forward to an answer :)
The question is ultra-broad. You should solve one problem at a time, because by the time you solve 1 and 2 I doubt that 3 will be your only concern, and all of these issues are not really related.
Try spending a bit of time reading the Travis CI documentation, but also learn how to write tests and what the different types of tests will do for you. Your issue is less about Travis than about what unit tests are vs. what integration tests are. So write simple standalone tests for your frontend, simple standalone tests for your backend, maybe run integration tests manually for a while, and when it becomes a real issue you will have a better understanding of how everything works together and you will find a way. Long story short: there is no single best way to run integration tests, and it mostly depends on many, many things in your app (how it runs, what type of DB you use, etc.).
You should read about submodules. Perhaps you need them, perhaps not; there is no way to tell. You can use submodules with Travis CI, but you can also do without them. It depends on what you want to achieve. Focus on the end goal for your architecture, not on what Travis CI needs! (See the configuration sketch at the end of this answer.)
What cross-domain issue? Again, this is a very different problem, and probably not the most prominent one you will face. Since I have no idea what server technology you will use, there is no way I can answer that question properly. If you use Express, this may be what you are looking for: https://expressjs.com/en/resources/middleware/cors.html
A general bit of advice: all of your questions boil down to experience. Try solving one problem at a time; get started with your project, and when you hit a specific issue it's much, much easier to solve than asking about "microservices". There are many ways to do microservices right, each solving a different issue. Without any knowledge of what your application is about and what issues you want to solve, microservices may or may not be what you are looking for, and there are also many other components that can affect your stack. Just get started and don't think about all of this too much for now - it's better to have something soon that you can test and learn from than to think for weeks about something you will never build because it is only theory.
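Regarding the submodules point above: if you do end up using them, note that Travis CI clones Git submodules by default, and you can turn that off from the repository's .travis.yml if you would rather fetch them yourself. A minimal fragment (this single setting is all that is shown; the rest of the build stays as you define it):
# .travis.yml fragment: Travis CI fetches git submodules by default;
# set this to false to check them out yourself in a script step instead.
git:
  submodules: false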
What is the best set of practices and tools that could support me in the continuous deployment of a web application?
We should be able to deploy effortlessly several times a day.
It is a Ruby on Rails 3 app. We use Git and Github.
What is the best set of practices
I am assuming you are trying to be agile here. Trying to derive a fixed set of best practices for deployment is a little scary; for that matter, any sort of list of best practices for an agile team is scary. If you carefully study agile, you will realize that it requires the team to inspect, adapt, and continuously improve; the moment you think your team has found the "best practices", you are in effect agreeing that you can stop improving, and hence stop inspecting and adapting. Mike Cohn, author of Agile Estimating and Planning, suggests that an agile team should not come up with a set of best practices; instead it should continuously improve by inspecting and adapting.
To give you some constructive feedback, here are some of the practices our Scrum team followed, which we figured out ourselves by inspecting and adapting our own deployment process. I will include our source code check-in practices along with deployment.
Every time a developer checked in code, Hudson CI used an SCM poll trigger to automatically build and deploy the code to a development environment. It sent appropriate notifications of success or failure via email.
There was also a nightly build in the development environment, triggered automatically by Hudson CI.
After the features were ready and preliminarily tested in the dev environment, the QA on the team triggered a Hudson CI build and deploy to the integration server, where the features could be integration-tested. The integration environment was an exact replica of the production environment.
Production deployments were usually done using Hudson again, based on the release plan.
and tools that could support me in the continuous deployment of a web application?
There are several CI tools out there. My favorite of the lot is Hudson CI. Others are Continuum and CruiseControl. But I think Hudson is the most versatile and easiest-to-use tool, and because it has community-driven plugins it will be very easy for you to find plugins for Git and for Ruby on Rails apps.
IMVU is the poster child for continuous deployment, and they got there by following the rule of "if we are sure we didn't break anything, we need to deploy immediately." They now have very impressive automation around their process, but it started with that rule.
I think some of the ingredients that help with continuous deployment include:
Always have a working build. This means continuous integration running automated unit tests on every commit, and responding immediately to any failures. At IMVU they go as far as automatically reverting commits that break the build.
Extensive functional tests. This is what gives you confidence that you haven't broken anything. These tests tend to be slow, so you'll need a strategy to keep the test time down, such as running tests in parallel across many machines or using a service like SauceLabs.
Automated deployments. Never deploy manually. Never change configuration manually. Deploy to all environments using the same technology (see the sketch after this list).
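As one purely illustrative way to "deploy to all environments using the same technology": keep every environment in a single inventory so the same playbook and process are used everywhere. The sketch below uses an Ansible-style YAML inventory (Ansible comes up earlier in this thread but is not part of this answer; the host names are placeholders):
# Hypothetical Ansible YAML inventory: one deployment process, many environments.
all:
  children:
    staging:
      hosts:
        staging-web-1.example.com:
    production:
      hosts:
        prod-web-1.example.com:
        prod-web-2.example.com: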
When you say continuous deployment most people think of going automatically out to production without human intervention. You can stop short of that -- push-button production deployments -- and still get a lot of value. We (urbancode, makers of AnthillPro) help a lot of customers put these kinds of elements in place. Few people do the automatic production deploy, but automated deployment is helpful for everyone.
I am looking for a comparison between IBM Build Forge (Rational) and Hudson CI.
At work we have full licenses for BuildForge but recently we started using Hudson for doing continuous integration and automating other tasks.
I used BuildForge very little and I would like to see if there are any special advantages of BuildForge over Hudson.
Also it would be very helpful to see a list of specific advantages of Hudson over BuildForge.
I'm not sure if it's important or not, but I found it interesting that Build Forge is not listed among the continuous integration tools on Wikipedia.
Thanks for bringing attention to the fact that it was not on the Wikipedia list of continuous integration applications. I have now added it. Build Forge has been a leader in providing continuous integration capabilities through its SCM adapters for many, many years. Build Forge's strength is supporting many platforms through its use of agents. These agents can run on Windows, Linux, AIX, Solaris, System z, and many more; they even give you the source code for the agents for free so you can compile it on just about any platform. The interface allows you to easily automate tasks that run sequentially or in parallel on one or multiple boxes. Selectors allow you to pick a specific build server by host name, or by criteria such as "any Windows machine with 2 GB of RAM", from a pool of available agents. The entire process is fully auditable, uses role-based permissions, and is stored in a central enterprise database such as DB2, Oracle, SQL Server, and others.
One of the most compelling reasons to use Build Forge is its Rational Automation Framework for WebSphere. It allows full integration into WebSphere environments to automate deployments and configuration of WebSphere through out-of-the-box libraries. The full installation, patching, deployment of apps, and configuration of WAS and Portal can be performed using these libraries. To find out more, it is best to contact your IBM Rational representative.
You can use RAFW (IBM Rational Automation Framework for WebSphere) with BuildForge. It does not make sense to use RAFW with other CI servers, since RAFW requires BuildForge.
You have support for BuildForge, and it integrates with other IBM software like ClearCase. Theoretically you only have to deal with one vendor if something in the chain does not work, but IBM has different support teams for their products and you might become their ping-pong ball. :(
Hudson is open source (if you like that), which means you can get the source and modify it to serve you better. But the release cycle is very short (about one week; agile development). There is now a more stable version with support available (for cash, of course) from the company of Hudson's main author.
Hudson is currently mainstream and is actively developed. I don't know how usable BuildForge is, but Hudson is good (not always perfect). Hudson's plugin concept is a great plus; I'm not sure whether BuildForge has one as well.
Currently, we are using Hudson, but BuildForge was not looked at in detail.
You need to define what you would need continuous integration for (e.g. building, testing). Having used Hudson, I can vouch for its usefulness and effectiveness. There are many plugins that extend Hudson that can suit various needs. And you can't beat the price point (free).
You need to inquire as to why a BuildForge license was obtained at your place of employment. Perhaps someone on your team knows why this was done. If it isn't necessary for your needs, don't renew your BuildForge license and simply continue using Hudson.
Being a BuildForge/RAFW user, I have to object to one point stated above: it is perfectly possible to use RAFW without BuildForge. It is driven by a command-line script, and you could, for example, use Hudson and RAFW together just fine.
A sample command would look like:
rafw.sh -e env -c cell -t was_common_configure_start_dmgr
The primary differentiators IMO:
Hudson/Jenkins is more readily extensible thanks to its many existing plugins. It has a large, active community and plenty of information and documentation.
BuildForge can be configured with agents running on multiple machines and tasks can be assigned to run on a target agent. Reliable vendor support.
I am looking for a free alternative to TFS. What would be the best alternative stack (source control, bug tracking, project management/planning, wiki, automated builds (CI))? Keep in mind that it would be nice if they all integrated well.
For example, it would be nice to be able to link bugs to source control, and then be able to link to a project plan and then be able to automate building.
I do not have issues with using Microsoft Project to manage project planning.
I know I would like to use these:
SVN
TeamCity
NUnit
But I am struggling to find a good wiki/project planning/bug tracking tool that would integrate well.
If you have any questions, let me know.
Bug tracking / project management / planning
JIRA (not free but $10 for 10 users)
There are various modules you can add to this. I use GreenHopper, which is an agile process add-on. YMMV. If you like Gantt chart planning, there is an MS Project plugin (I've not used it).
Also check out YouTrack. It integrates with TeamCity as a bug tracking tool. It's quite cheap, but I didn't get on with it in the beta. YMMV.
There are loads of bug tracking systems. This is where you will spend most of your time, so I would pick one that suits the way you work, rather than one that integrates with your source control (unless you have a very strong requirement for change control/auditability).
CI / automated build
All CI servers will integrate with your source control. I would hazard a guess that all CI servers can be made to work with all source control systems so in this sense they are all integrated.
I used TeamCity for a while, which is awesome. I stopped because I hit the 20-project limit, and we couldn't afford to upgrade :( I would recommend starting with TeamCity as it is very easy to set up. It's actually quite easy to change your CI server as long as you keep the actual build in a script.
Also available is Hudson. This is free. There are add-ons to integrate with SVN + JIRA (in fact the add-on scene in Hudson is a real strength). This means that SVN commits containing links to issues generate an HTML link between the Hudson build, JIRA issue, and SVN commit, making it easier to match code changes to features. Hudson works, but to me it seems like a beta product. This could change rapidly, but I don't have the confidence in it that I had in TeamCity. However, I shouldn't complain because it's free and I haven't even contributed a bug report :)
The other main CI server is CruiseControl which I've not used.
Build scripts
NAnt / MSBuild / Rake / others. I use MSBuild, which, with a bit of head scratching, can be made to do the business. I've not used the others.
Test frameworks
In .NET land I would guess that NUnit is probably the best supported and most widely known. There are others, especially those that encourage BDD.
Wikis
Not sure what you are after here (I've not used TFS). There are loads of free wiki engines to choose from. Full disclosure: I've not used any!
Best I could come up with.
SVN (Source Control)
Team City (Continuous Integration)
NUnit
Trac (Project Management/Bug Tracking/Wiki)
It's funny, though: as a result you now have to install Java (TeamCity) and Python (Trac) to get your free alternative.