I am not a developer, but I am reading about CI/CD at the moment, and I am wondering about good practices for automated code deployment. So far I have read a lot about deploying code to a pre-existing environment.
My question is whether it is also good practice to use, for example, a Jenkins workflow to deploy an environment from scratch whenever a new build is created: spin up a fresh environment to test the new build, then delete the environment again after testing.
I know that there are various plugins to interact with AWS, Azure, etc. that could be used to develop a job for deploying a virtual machine.
There are also plugins to trigger Puppet to deploy infrastructure as code, and plugins to invoke an infrastructure orchestration tool.
So everything needed to deploy the infrastructure and middleware before deploying code is available (with some extra effort, of course).
Is this something that is used in real life? How is it done?
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
My question now is whether it is also good practice to use e.g. a Jenkins workflow to deploy an environment from scratch when a new build is created.
Yes, it is good practice to deploy an environment from scratch. As you say, Jenkins and Jenkins pipelines can certainly help with kicking off and orchestrating that process, depending on your specific requirements. Deploying a full environment from scratch is one of the hardest things to automate, and if that is automated, it implies that a lot of other things are also automated: infrastructure provisioning, application deployments, application configuration, and so on.
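To make that concrete, here is the kind of provisioning script a Jenkins stage might call: create a throwaway environment from an infrastructure template, run the tests against it, and always tear the environment down afterwards. This is a minimal sketch only, assuming AWS with boto3 and a CloudFormation template; the stack name and template file are placeholders, not a prescription.

```python
# Sketch: spin up a disposable test environment, then destroy it.
# Assumes AWS credentials are available and boto3 is installed;
# "test-env.yml" and all names are hypothetical placeholders.
import boto3

def create_environment(stack_name: str, template_path: str) -> None:
    cf = boto3.client("cloudformation")
    with open(template_path) as f:
        cf.create_stack(StackName=stack_name, TemplateBody=f.read())
    # Block until the network, VMs, and middleware are fully provisioned.
    cf.get_waiter("stack_create_complete").wait(StackName=stack_name)

def destroy_environment(stack_name: str) -> None:
    cf = boto3.client("cloudformation")
    cf.delete_stack(StackName=stack_name)
    cf.get_waiter("stack_delete_complete").wait(StackName=stack_name)

if __name__ == "__main__":
    stack = "test-env-build-42"          # e.g. derived from the build number
    create_environment(stack, "test-env.yml")
    try:
        pass  # deploy the new build and run the test suite here
    finally:
        destroy_environment(stack)       # never leave idle machines behind
```

A Jenkins pipeline would typically run the create step in one stage, the tests in the next, and the teardown in a post/cleanup step so the environment is removed even when the tests fail.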
Is this something that is used in real life?
Yes, definitely. A lot of shops do this. The simpler your environments, the easier it is; a startup with one backend app would have relatively little trouble achieving this Valhalla state. But even the creation of the most complex environments, with hundreds of interdependent applications, can be fully automated; it just takes more time and effort.
The background of my question is my interest in full automation of development with as few clicks as possible, and cost saving in a pay-per-use model by not having idle machines.
Yes, definitely. The "spin up and destroy" strategy benefits all hosting models (since, after full automation, no one ever has to wait for someone to manually provision an environment), but those using public clouds see even larger benefits in terms of cost (vs always leaving AWS environments running, for example).
I appreciate your thoughts.
Not a problem. I will note that this question doesn't fit Stack Overflow's question-and-answer sweet spot very well, since it is quite general. In the future, I would recommend chatting with your developers, finding folks who are excited about this sort of thing, and formulating more specific questions when you all get stuck in the weeds on something. Welcome to Stack Overflow!
All of these are used in various combinations; the objective is to deliver continuous value to the end user. My two cents:
Build & Release
It depends on what you are using. I personally recommend using what is available with your toolset. For example, VSTS (Visual Studio Team Services) offers a complete CI/CD pipeline. But if you have a unique need that can only be served by Jenkins, then use that; VSTS offers Jenkins integration out of the box.
IaC (Infrastructure as Code)
In addition to Puppet etc., you can take advantage of Azure ARM (Azure Resource Manager) templates to build and destroy an environment. Again, see what is available out of the box with the toolset you have.
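As a rough illustration of that build-and-destroy cycle, a pipeline step could shell out to the Azure CLI along these lines. This is a sketch under stated assumptions (the az CLI is installed and logged in); the resource group name, location, and template file are placeholders.

```python
# Sketch: create an environment from an ARM template, later delete it.
# Assumes the Azure CLI ("az") is installed and authenticated; the
# resource group and template file names are hypothetical placeholders.
import subprocess

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def create_environment(resource_group: str, template_file: str) -> None:
    run("az", "group", "create", "--name", resource_group, "--location", "westeurope")
    run("az", "deployment", "group", "create",
        "--resource-group", resource_group,
        "--template-file", template_file)

def destroy_environment(resource_group: str) -> None:
    # Deleting the resource group removes everything the template created.
    run("az", "group", "delete", "--name", resource_group, "--yes", "--no-wait")

if __name__ == "__main__":
    create_environment("uat-build-42", "environment.json")
    # ... deploy the application and run tests here ...
    destroy_environment("uat-build-42")
```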
Pay-per-use
What I have personally used is Azure DevTest Labs, with code deployed to it via a CI/CD pipeline. Then set up an auto-shutdown policy on the VM so it will auto-start and auto-shutdown at the times you provide. This is a great feature that lets you save cost on the resources being used and replicate environments.
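For example, a shutdown schedule can also be applied from a script. The snippet below is a hedged sketch using the generic Azure CLI auto-shutdown command rather than the DevTest Labs portal settings; the resource group, VM name, and time are placeholders.

```python
# Sketch: set a daily auto-shutdown time on a VM via the Azure CLI.
# Assumes "az" is installed and authenticated; the resource group and
# VM name are hypothetical placeholders.
import subprocess

subprocess.run(
    ["az", "vm", "auto-shutdown",
     "--resource-group", "devtest-rg",
     "--name", "uat-vm-01",
     "--time", "1900"],          # shut the VM down at 19:00 UTC every day
    check=True,
)
```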
For example, a UAT environment might not be needed until QA has signed off. But using IaC you can quickly spin up the environment automatically and then have a one-click deployment set up to deploy code to UAT.
Related
Where I work we've been adding microservices for different purposes, and the local development environment is becoming difficult to set up. Services have too many environment variables to configure, and usually there's not enough memory available to run them.
We plan to fix these issues. I understand it's mostly a matter of architecture and DevOps. One idea we've had is to create a proper service registry that allows easier setup and opens the door to, for example, having some services running locally and others in the cloud, all wired together through the service registry.
Another option could be to stub some of the dependencies with something like https://wiremock.org/, but it seems too limited and difficult (?).
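For context, my understanding is that stubbing a downstream service with WireMock would mean registering mappings like the one below (a hypothetical sketch posted to a locally running WireMock instance over its admin API; the endpoint and payload are made up).

```python
# Sketch: register a stub for one downstream service in a WireMock
# instance running locally on its default port. The endpoint and
# payload are made-up examples.
import requests

stub = {
    "request": {"method": "GET", "urlPath": "/api/users"},
    "response": {
        "status": 200,
        "headers": {"Content-Type": "application/json"},
        "jsonBody": [{"id": 1, "name": "Test User"}],
    },
}
requests.post("http://localhost:8080/__admin/mappings", json=stub).raise_for_status()
```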
I wanted to ask, what other strategies are there to manage growing development environments?
Goal:
I have an application that runs inside a Docker container. I want to be able to use continuous integration for it, with push to deploy, using Bitbucket Pipelines to Google Cloud. I need access to an SQL database (MariaDB preferably) and some kind of caching system (be it Memcached, Redis, or something else).
Problem:
I'm entirely unsure which services I need from said cloud provider to facilitate this as simply as possible while still being cost effective. I looked into using Google App Engine, but (unless I'm doing something weird or odd) 1 vCPU with 1 GB of RAM and 10 GB of storage came to $55/month USD, which is far more than I really want to pay, and I'm not even sure it is what I need. I don't need much power (these are very small apps, used by a very small number of people). Also, again, unless I'm doing something wrong, I was unable to find a caching solution with Google that wasn't insanely expensive (Memorystore). Basically, I'm completely overwhelmed by the number of options, and am just looking for a cost-effective, simple solution for continuous delivery of a Docker application to Google Cloud.
Using the tools discussed here, you can set up an end-to-end continuous delivery pipeline covering the code, build, test, deploy, and monitor phases of software development across multi-cloud, hybrid, or on-premises environments. This is likely a way to start.
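One shape the deploy step could take, assuming App Engine flexible with a custom Docker image as mentioned in the question, is sketched below; the project ID and image name are placeholders, and the same commands could just as well live directly in bitbucket-pipelines.yml rather than a script.

```python
# Sketch: commands a Bitbucket Pipelines step might run to build a
# Docker image, push it to Google Container Registry, and deploy it to
# App Engine flexible. Assumes docker and gcloud are installed and
# already authenticated; project ID and image name are placeholders.
import subprocess

PROJECT = "my-gcp-project"
IMAGE = f"gcr.io/{PROJECT}/my-app:latest"

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

run("docker", "build", "-t", IMAGE, ".")
run("docker", "push", IMAGE)
run("gcloud", "app", "deploy", "--project", PROJECT,
    "--image-url", IMAGE, "--quiet")
```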
I work on a large open source project based on Ruby on Rails. We use GitHub, Travis CI, Code Climate, and others. Our test suite takes a long time to run, and we have many pull requests opened and updated through the day, which creates a large backlog. We even implemented a build killer in our bot to prevent unnecessary builds, yet we still have a backlog. Is it possible for us to host our own runner to increase the number of workers?
There's Travis CI Enterprise (https://enterprise.travis-ci.com/), which lets people host their own runners, but that's probably only for paying customers. Have you switched over to the container-based builds? That might speed things up a bit. What's the project?
I have an ASP.NET MVC 3 application, WouldBeBetter.com, currently hosted on Windows Azure. I have an Introductory Special subscription package that was free for several months, but I was surprised at how expensive it has turned out to be (€150 per month on average!) now that I have started paying for it. That is just way too much money for a site that is not going to generate money any time soon, so I've decided to move to a regular hosting provider (DiscountASP.NET).
One of the things I'll truly miss, though, is the separate Staging and Production environments Azure provides, along with the zero-downtime environment swap.
My question is, how could I go about "simulating" a staging environment while hosting on a traditional provider? And what is my best shot at minimizing downtime on new deployments?
Thanks.
UPDATE: I chose the answer I chose not because I consider it the best method, but because it is what makes the most sense for me at this point.
Before abandoning Windows Azure, there are several cost-saving things you can do to lower your monthly bill. For instance:
If you have both a Web role and a Worker role, merge the two. Take your background processing, queue processing, etc. and run them in your Web role (do your time-consuming startup in OnStart(), then just add a Run() override to call the queue processing, etc.).
Consider the new Extra Small instance, which costs just under half of a Small instance.
Delete your Staging deployment once you're confident your production code is running OK. Keep the cspkg handy in blob storage, though, so that you can always re-deploy it.
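As a small illustration of that last tip, the package could be stashed in blob storage from a script roughly like this. It uses the current azure-storage-blob Python SDK purely as a sketch; the connection string, container, and file names are placeholders.

```python
# Sketch: keep the deployment package (.cspkg) in blob storage so the
# Staging deployment can be deleted now and re-created later if needed.
# The connection string, container, and blob names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="packages", blob="wouldbebetter-v1.cspkg")
with open("wouldbebetter-v1.cspkg", "rb") as package:
    blob.upload_blob(package, overwrite=True)
```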
I use DiscountASP myself. It's pretty basic hosting for sure, a little behind the times. But I have found that just creating a subdirectory and publishing my beta/test/whatever versions there works pretty well. It's not fancy or pretty, but it does get the job done.
To do this you need to create the subdirectory first, then go into the control panel and tell DiscountASP that the directory is an application. Keep in mind that the subdirectory's web.config is going to be a combination of its own and the parent's. You also have to consider robots.txt for this subdirectory, and protecting it in general from nosy people.
You could probably pull this off with subdomains too, depending on how your domain is set up.
Another option: AppHarbor? They have a free plan. If you can stay within the confines of their free plan, it might work well (I've never used them, though I'm currently interested in trying them).
1) Get an automated deployment tool. There are plenty of free/open-source ones that million/billion dollar companies actually use for their production environments.
2) Get a second hosting package identical to the first. Use it as your staging, then just redeploy to production when staging passes.
I have started at a new site that is using .NET applications for the first time. As a developer I am used to VSS, but that product is dying a death, so we are using TFS (Basic) instead.
I have been using TFS for source control up until now. But now we are having new servers installed for a live environment.
Now I am not sure what I should be doing. There are no books on TFS 2010 that I can find and I am wondering what tips you can give me. Does TFS need to be installed again, or should I use the existing installation? I am thinking I ought to set up a daily build for a test server. I have not been using TDD up until now, but for the next project this may change.
What must I absolutely get right, and what pitfalls should I avoid?
Without being there in your environment, it's hard to make appropriate recommendations. I've made some assumptions about your installation based on what you said, but these may be wildly wrong.
You say you're using TFS (Basic); I'm not sure what you mean by that, but if you are using TFS installed on one of the developers' workstations, and you're starting to move towards a more robust development environment, I would recommend that you get a separate server (or servers) for your TFS installation.
It sounds like you're relatively small, so having your application tier and your data tier on the same machine shouldn't be that much of an issue. Just make sure that you have enough RAM on the machine to support both processes, and that you have enough disk space allocated for the growth of the database.
You talk about Test Driven Development (TDD), but what I think you're actually talking about is Continuous Integration (CI). When you have a CI environment set up, builds happen automatically, either on a schedule or triggered by check-ins. Having this set up is never a bad idea, and I would recommend that you get into the rhythm of CI builds as soon as possible.
If you're looking for a build server, you are probably going to be ok hosting the build agent on the combined application/data tier. If you find that you're getting performance hits when you do builds, you can move your builds to a different server without much effort.
You will also want to look at migrating your source code repository from your current environment to your future environment. The TFS installation wizard might be able to help you with that. If not, there are other options available, such as moving the database files to the new machine, or using the CodePlex-based TFS Integration Platform.