Create a local end-to-end development environment - docker

I am beginning to use Terraform to control staging and production environments on various cloud providers (AWS, for example). Is there a way to use Terraform configuration files to create a local development environment for, say, a multi-tier application, or do I have to maintain a separate configuration via, say, Vagrant for my development needs?
This may not be too difficult to do with two tools since most components are dockerized, but it would be nice to have a single configuration.

The problem with a cross-platform orchestration tool is that it ends up catering to the lowest common denominator of features available across all clouds. Terraform instead just describes the infrastructure using the resources available from the chosen provider.
So, long story short, you'll need separate configurations; but if you're deploying to a cloud, there is nothing stopping you from using that same cloud for QA or acceptance-test environments.
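If you do want to stay inside a single tool for local experiments, Terraform also has a community Docker provider that can drive local containers instead of cloud resources. A minimal sketch, assuming the kreuzwerker/docker provider and a local Docker daemon (the image name, container name and ports are placeholders, and attribute names vary slightly between provider versions):

```
# write a throwaway Terraform config that targets the local Docker daemon
cat > main.tf <<'EOF'
terraform {
  required_providers {
    docker = { source = "kreuzwerker/docker" }
  }
}

provider "docker" {}                 # local Docker daemon

resource "docker_image" "app" {
  name = "nginx:latest"              # stand-in for your application image
}

resource "docker_container" "app" {
  name  = "local-app"
  image = docker_image.app.image_id
  ports {
    internal = 80
    external = 8080
  }
}
EOF

terraform init && terraform apply    # brings the local "environment" up
```

It is still a separate configuration from your AWS one, which is the point made above, but at least the tool and the workflow stay the same.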

Related

How to set up multiple environments (Dev, Stage, Prod) using Jenkins?

I have a scenario where I am given an AWS instance (production, staging, testing) and Docker for development, and I need to work out an infrastructure workflow. We also need to take care of continuous integration and deployment using Jenkins.
Can you please help me figure out a solid workflow to create an environment with the above tech stack?
With these tools it is possible to implement a lot of different CI and CD strategies. I recommend adding Git and Jenkins to your deck of development tools.
Start simple: first build your application, then create new Docker images and deliver them to a dev server, and only then think about how to deliver to the other environments.
After that, store these images in a private registry such as Nexus, and think about a hierarchy of images (base images for the app and ready-to-run images).
It all depends on your goals.
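As a rough sketch of the "build, deliver to a dev server, then push to a private registry" flow described above (the image name, registry host and port are made up for illustration):

```
# build the application image from the repo's Dockerfile
docker build -t myapp:1.0 .

# smoke-test it on the dev server before promoting it further
docker run -d --name myapp-dev -p 8080:8080 myapp:1.0

# tag and push it to a private registry (e.g. a Docker registry hosted in Nexus)
docker tag myapp:1.0 nexus.example.com:8082/myapp:1.0
docker push nexus.example.com:8082/myapp:1.0
```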

How to build and deploy Azure Cloud Service with multiple configurations in VSTS Release management?

We are using Team Services to maintain our web projects and Azure for hosting. Right now there are several Web Roles (ASP.NET MVC) and Worker Roles hosted as Cloud Services. We are going to set up Continuous Integration and Delivery for them.
As you know, Team Services build definitions suggest using the Azure Cloud Service template for building and the Azure Cloud Service Deployment task to deploy. We've tried it for a single cloud service and it works.
In our case, there are a web project (web role) and a scheduler (worker role) as separate Cloud Services, and they should be deployed together (in sequence); call this the DEV environment. But we have many more environments: dev, qa, ta, demo, preview, production, etc. Furthermore, each of them has a slightly different web.config, ServiceDefinition.csdef and ServiceConfiguration.cscfg. So this became a much more complicated task than just deploying one Cloud Service.
Questions are:
Should we build dozens of Cloud Service packages (artifacts) and later decide which of them to deploy or not? Could you suggest how to do this properly? (In most cases it will be only the Dev environment, and we would waste time and resources building the other artifacts.)
Would it be better to build one common artifact and later replace all the configuration for the specific environment? (This is a more complicated task, because the Cloud Service package is zipped with a preconfigured ServiceDefinition and ServiceConfiguration.)
What is the best way to replace configuration tokens (web.config, ServiceConfiguration, etc.) at deployment time, or should this be done while the projects are being built?
I would be grateful if you suggest any best practices.
For an Azure cloud project, it's better to apply the environment-specific changes to the project before the build, so you can build the project during the release process.
To deploy to the corresponding environment, you can configure an artifact filter based on a build tag.
For example:
1. Add a file (e.g. JSON, XML or TXT) to the project that is used to determine which environments the release should deploy to.
2. Add a PowerShell task to the build definition that reads the data from that file (step 1) and adds build tag(s) through the logging command (Write-Host "##vso[build.addbuildtag]build tag"); see the sketch after this list.
3. Add a Publish Build Artifacts task to upload the source files.
4. Create a release definition, link the artifacts, and add multiple environments.
5. Configure pre-deployment conditions for each environment: Enable artifact filters => Select artifact => Specify build tags.
6. Add tasks and variables (e.g. Visual Studio Build) for each environment so that each deploys to its corresponding environment.
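A rough sketch of step 2: the logging command is read from whatever the task writes to standard output, so a plain shell script works as well as PowerShell here (the file name deploy-environments.txt and the tag values are made up for illustration):

```
# read the target environments from a file checked into the repo
# and turn each one into a build tag the release can filter on
while read -r env; do
  echo "##vso[build.addbuildtag]$env"   # e.g. tags the build with "Dev", "QA", ...
done < deploy-environments.txt
```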
On the other hand, regarding replacing values, there are many options, such as the Replace Tokens task or an XDT transform.

DevOps vs Docker

I am wondering how exactly Docker fits into CI/CD.
I understand that with the help of containers you can focus on code rather than on dependencies and environments. But once you check in your code, you expect tools like TeamCity, Jenkins or Bamboo to take care of the integration build, integration/unit tests and deployment to the target servers (after approvals), where you expect the same Docker container image to run the built code.
However, in all of the above, Docker is nowhere in the CI/CD cycle, though it comes into play when execution happens on the server. So why do I see articles listing it as one of the key things for DevOps?
I could be wrong, as I am not a DevOps guru; please enlighten me!
Docker is just another tool available to DevOps engineers, DevOps practitioners, or whatever you want to call them. What Docker does is encapsulate code and its dependencies in a single unit (a container) that can run anywhere the Docker engine is installed. Why is this useful? For multiple reasons, but in terms of CI/CD it can help engineers separate configuration from code, cut down the time spent on dependency management, and help with scaling (with the help of some other tools, of course). The list goes on.
For example: If I had a single code repository, in my build script I could pull in environment specific dependencies to create a Container that functionally behaves the same in each environment, as I'm building from the same source repository, but it can contain a set of environment specific certificates and configuration files etc.
Another example: If you have multiple build servers, you can create a bunch of utility Docker containers that can be used in your CI/CD Pipeline to do a certain operation by pulling down a Container to do something during a stage. The only dependency on your build server now becomes Docker Engine. And you can change, add, modify, these utility containers independent of any other operation performed by another utility container.
Having said all of that, there really is a great deal you can do to utilize Docker in your CI/CD pipelines. I think an understanding of what Docker is and what Docker can do is more important than a "how to use Docker in your CI/CD" guide. While there are some common patterns out there, it all comes down to the problem(s) you are trying to solve, and certain patterns may not apply to a certain use case.
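As an illustration of the "utility container" idea (the image and command are only an example; any containerized tool works the same way):

```
# run the test stage in a throwaway container instead of installing Node.js on the build server;
# the only dependency left on the build agent is the Docker engine itself
docker run --rm -v "$PWD":/src -w /src node:18 npm test
```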
Docker facilitates the notion of "configuration as code". I can write a Dockerfile that specifies a particular base image that has all the frameworks I need, along with the custom configuration files that are checked into my repository. I can then build that image using the Dockerfile, push it to my docker registry, then tell my target host to pull the latest image, and then run the image. I can do all of this automatically, using target hosts that have nothing but Linux installed on them.
This is a simple scenario that illustrates how Docker can contribute to CI/CD.
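A minimal sketch of that flow, assuming a hypothetical registry at registry.example.com, an image called myapp, and a target host reachable over SSH (the base image, paths and names are placeholders):

```
# Dockerfile: a base image with the needed framework plus config files from the repo
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
COPY app/ /app/
COPY config/production.conf /etc/myapp/app.conf
CMD ["python", "/app/main.py"]
EOF

# build and push the image, then pull and run it on a host that has nothing but Docker
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
ssh deploy@target-host "docker pull registry.example.com/myapp:latest && \
                        docker run -d --name myapp registry.example.com/myapp:latest"
```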
Docker is also useful for building your applications. If you have multiple applications with different dependencies, you can avoid piling up dependencies and conflicts on your CI machine by building everything in Docker containers that carry the necessary dependencies. If you need to scale in the future, all you need is another machine running your CI tool (like a Jenkins slave) and an installation of Docker.
When using microservices this is very important. One application can depend on an old version of a framework while another needs a newer version. With containers that's not a problem.
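For example, two services that need conflicting framework versions can each be built inside a matching container, so the CI machine itself only needs Docker (the image tags and project layout are illustrative):

```
# service A still builds against the old JDK, service B against the new one;
# neither JDK has to be installed on the Jenkins machine itself
docker run --rm -v "$PWD/service-a":/src -w /src maven:3.8-openjdk-8 mvn package
docker run --rm -v "$PWD/service-b":/src -w /src maven:3.9-eclipse-temurin-17 mvn package
```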
Docker is a DevOps Enabler, Not DevOps Itself: Using Docker, developers can support new development, enhancement, and production support tasks easily. Docker containers define the exact versions of software in use, this means we can decouple a developer’s environment from the application that needs to be serviced or enhanced.
Without Pervasive Automation, Docker Won’t Do Much for You : You can’t achieve DevOps with bad code. You must first ensure that the code being delivered is of the highest quality by automating all developer code delivery tasks, such as Unit testing, Integration testing, Automated acceptance testing (AAT), Static code analysis, code review sign offs & pull request workflow, and security analysis.
Leapfrogging to Docker without Virtualization Know-How Won’t Work : Leapfrogging as an IT strategy rarely works. More often than not new technologies bring about abstractions over existing technologies. It is true that such abstractions increase productivity, but they are not an excuse to skip the part where we must understand how a piece of technology works.
Docker is a First-Class Citizen on All Computing Platforms : This is the right time to jump on the Docker bandwagon. For the first time ever, Docker is supported on all major computing platforms in the world. There are two kinds of servers: Linux servers and Windows servers. Native Docker support for Linux has existed from day one, and since then Linux support has been optimized to the point of running even on pint-sized devices.
Agile is a Must to Achieve DevOps : and DevOps is a must to achieve Agile. The point of Agile is to add and demonstrate value iteratively to all stakeholders; without DevOps, you likely won't be able to demonstrate that value in a timely manner. So why is Agile also a must to achieve DevOps? It takes a lot of discipline to create a stream of continuous improvement, and an Agile framework like Scrum defines the fundamental qualities that a team must possess to begin delivering iteratively.
Docker saves your organization capital and resources by containerizing your applications. Containers on a single host are isolated from each other yet share the same OS resources, which frees up RAM, CPU, storage and so on. Docker makes it easy to package an application along with all its required dependencies in an image. For most applications there are readily available base images, and you can create a customized base image as well. We build our own custom image by writing a simple Dockerfile. We can ship this image to a central registry, from which we can pull it to deploy into various environments such as QA, STAGE and PROD. All of these activities can be automated by CI tools like Jenkins.
In a CI/CD pipeline you can expect Docker to come into the picture once the build is ready. Initially, the CI server (Jenkins) checks out the code from SCM into a temporary workspace, where the application is built. Once the build artifact is ready, you can package it as an image together with its dependencies. Jenkins does this by executing simple docker build commands.
Docker removes what we all know as the "matrix from hell" problem, making environments independent with its container technology. The open source Docker project changed the game by simplifying container workflows, and this has resulted in a lot of excitement around using containers in all stages of the software delivery lifecycle, from development to production.
It is not just about containers: it also involves building Docker images, managing your images and dependencies in a Docker registry, deploying to an orchestration platform, etc., and all of it comes under the CI/CD process.
DevOps is a culture, a methodology, a set of practices for delivering development work very fast. Docker is one of the tools in the DevOps toolbox; it deploys applications as containers, a technology that uses fewer resources to run our applications.
Docker simply packages the developer's environment so it can run on other systems, so the developer does not need to worry about code that works on their machine but fails in production due to differences in environment and operating system.
It just makes the code portable to other environments.

Create environment with multiple servers - TFS Release Management

Is there a way (or some plugin/add-on) to add servers to an environment in TFS Release Management 2015?
I came from a team that used Octopus Deploy for DevOps. One thing that was extremely helpful was the ability to add multiple servers to an environment. Then, when you execute deployment steps on an environment, it applies those actions to all the servers that are part of the environment -- making deployments super easy. I have yet to find similar functionality in TFS Release Management and it's quite sad. They have a concept of an environment, but it's more like a "stage" than a logical/physical group of servers. To deploy the same step to multiple servers in an environment, you have to re-create the step multiple times or specifically write the names of all the servers in each step. Sad!
There isn't a built-in feature that executes deployment steps on an environment and applies them to all the servers in that environment.
But with web-based release management, you can provide a comma-separated list of machine IP addresses or FQDNs (along with ports) for many remote-deployment steps/tasks, such as PowerShell on Target Machines, IIS Web Deployment and so on.
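For example, the machines field of those tasks accepts a list along these lines (host names are hypothetical; 5986 is the default WinRM HTTPS port):

```
webserver1.example.com:5986,webserver2.example.com:5986,192.168.10.50:5986
```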
There is an article that may benefit you: Environments in Release Management
Regarding server- and client-based release management, an environment there can include multiple servers, but you need to add the steps multiple times, once per server. I recommend that you use web-based release management.

Jenkins - infrastructure provisioning

I've just finished setting up my Jenkins server but there seems to be one glaring problem that I can't find an answer to.
Shouldn't the Jenkins environment be an exact clone of the production environment to make the tests meaningful? None of the tutorials I've read mention this at all. So how do people deal with it, especially if they are using a single Jenkins instance for multiple projects that may be running on different infrastructure?
I can imagine Puppet or Chef might help here, but wouldn't you be constantly re-provisioning the Jenkins server for different projects? That sounds pretty dangerous to me.
The best solution to me seems to be not to run the tests on the Jenkins server itself but to spin up a clone of the production environment and run the tests on that. But I can't find a single solitary tutorial on how this could be done, on EC2 for example.
Sorry if this is a bit of a rambling question. So how does everyone else deal with ensuring an exact replica of the production environment for Jenkins to run tests on? This includes database migrations as well now that I think about it.
Thanks.
UPDATE:
A few of the answers given seem to concern compiled languages like Java or Objective-C. In those situations I can see the logic of having the tests be platform-agnostic. I should have been more specific and mentioned that I use Jenkins for LAMP-stack testing, so in that situation the infrastructure is as much a component that needs testing as anything else. Having PHP 5.3 on one machine is enough to break a project that requires PHP 5.4, for example.
There is another approach you can consider.
With Vagrant, you can create a completely virtual environment that simulates your production setup. This is especially useful when you want to test many environments (production, a single-node environment, different OSes, different DBs) but you don't have enough bare-metal machines.
You define the appropriate Vagrant environment; then, in the Jenkins test job, you set up the required machines and execute the tests against the created Vagrant environment.
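A minimal sketch of that setup, assuming a LAMP-style project and the ubuntu/focal64 box (the box name, packages and test command are placeholders for whatever matches your production stack):

```
# describe a VM that mirrors the production stack
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"    # pick a box matching the production OS
  config.vm.provision "shell",
    inline: "apt-get update && apt-get install -y apache2 php mysql-server"
end
EOF

vagrant up                                        # boot the throwaway environment
vagrant ssh -c "cd /vagrant && ./run-tests.sh"    # run the test suite inside it
vagrant destroy -f                                # tear it down when the Jenkins job ends
```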
Jenkins also supports a Master/Slave system (see here for details).
The Jenkins slave itself needs very little configuration and could run on a replica of your production system; since it is so lightweight, it should not influence your production clone significantly.
Your master would then just delegate the jobs (like your acceptance tests) to the slave. That way you could also cover different production environments (different OSes, different DBs, etc.) by setting up a Jenkins slave for every configuration you need.
You are probably using Jenkins to check the quality of your code, compile it, run unit tests, package it, deploy it and maybe run some integration tests.
Given the nature of all those tasks, it is probably best that your Jenkins environment is NOT like your production environment. For example, you don't want your production environment to have your compiler installed (for many reasons, security to name one).
So Jenkins is a development environment, and the fact that it doesn't exactly match your production environment should not be a concern to you.
However, I understand that perhaps you want to deploy your packages to a production-like or even production-clone environment to run some specific tests or tasks of your software lifecycle, but in my opinion that issue is beyond Jenkins and concerns only the "deployment" phase of your lifecycle (i.e. it's not a Jenkins issue, but an infrastructure issue that you should think about with a more general approach, and then simply tell Jenkins where to deploy).
