How to upgrade to a new helm chart version without pain? - helm3

It's very painful to keep track of and upgrade chart versions when you have lots of them, for instance 30+.
Keep a local repo clone (because of client requirements, for safety, etc.), take each chart, go through its changelog, adjust your values, consider schema changes, test in non-prod environments, then deploy to production and hope nothing was missed, and so on.
The more complex a chart is, the more time it takes, and by the time you finish the upgrade you have to start over again because new chart releases have been delivered; you cannot afford to do this every month.
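For reference, the per-chart loop described above looks roughly like this (the repo, chart, release and version names are made up, and helm diff is a separate plugin, not part of helm3 itself):

    helm repo update
    helm search repo myrepo/mychart --versions | head          # see what is newer than the deployed version
    helm diff upgrade myrelease myrepo/mychart --version 2.3.4 -f values.yaml   # requires the helm-diff plugin
    helm upgrade myrelease myrepo/mychart --version 2.3.4 -f values.yaml --namespace staging --dry-run

Multiply that by 30+ charts and it adds up quickly.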
What is the best way to deal with chart version upgrades?

Related

Jenkins to deploy on Gitlab merge into development, but ignore updates

I have Jenkins set up, connected with Gitlab, to do some deployment stuff whenever a merge request gets accepted into the development branch. It pulls the code, runs some incantations, increments the version number, and commits and pushes that new version number in. The only issue, which I just discovered, is that updating old, already merged MRs, even in trivial ways such as updating the milestone or title, will trigger the build. Because of this, when I just updated multiple old merge requests to have the milestone for this release, the version number got incremented ten times or so, which is not ideal.
My Gitlab triggers inside Jenkins are quite locked down (please note I'm not on EE).
My Gitlab integration settings only fire on MR events (and I'm aware there's no way to say only on merging in, rather than on editing).
I also have some custom build scripts that do bits and bobs, but the main part that bumps the version number checks if [ "$GIT_COMMIT" != "$GIT_PREVIOUS_SUCCESSFUL_COMMIT" ]; then. However, even though this seemed logical at the time, it became apparent recently that this check is still true for old, stale merge requests when compared against the development HEAD.
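A rough sketch of a stricter guard (all names here are illustrative, not from my actual scripts): only run the bump when development has actually gained commits since the last bump commit, so re-triggered builds on an unchanged HEAD do nothing.

    # assumes the bump commits carry a recognisable message such as "version bump"
    LAST_BUMP=$(git log -1 --grep='version bump' --format=%H)
    if [ -z "$LAST_BUMP" ] || [ -n "$(git rev-list "${LAST_BUMP}..HEAD")" ]; then
        ./bump-version.sh    # hypothetical helper that increments and commits the version
    fi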
There are many different aspects to this that could be the cause of the trouble, and there may be a very easy solution, but I'm not sure what that would be just yet. Any help or suggestions greatly appreciated.

Migrating EF Code First in multiple instances

I have an MVC app that uses EF6 Code First. I want to deploy this app to multiple datacenters. On deployments that have migrations, I can write a script to migrate them all as near-simultaneously as possible, but if one datacenter is slower, then the calls could all be rejected since the schema no longer matches. A script that tried to coordinate would also make rolling upgrades impossible.
Is there a way to make EF at least attempt to run the query even though the schemas don't match? Is there a different way I can/should approach this?
UPDATE:
Let's see if I can word this better. I want to have my MVC app in multiple datacenters. Let's assume that I deploy the app to each datacenter individually.
Option 1
Deploy to DC A
Code first migration runs on centralized DB
Requests made to DC A succeed, but requests to DC B fail
Option 2
Deploy to DC A
Do not automatically run migration
Requests made to DC A fail and requests to DC B continue to succeed
How do I develop a deployment strategy that will make it so that requests to either DC will work?
BTW: I am using Azure Web Sites, if a platform-specific solution is needed.
In your post, it seemed like you were concerned with how it would behave during the actual upgrade. Nothing about testing. But in comments you are asking about doing a partial deployment then doing testing. So on one hand you'd want to deploy as quickly as possible to minimize downtime. On the other hand, it sounds like you want to deploy to one site, test, and have the other sites continue to function while you are verifying the first deployment?
Verifying a deployment is reasonable, but fairly complex. I'm not sure you will find much in the way of automation for this. I think you should test thoroughly prior to production deployment, and then simply deploy as quickly as possible in production. If there were an issue you found only when deploying to production, you'd be in a bad situation, because now your site is down until you can fix it. Even if you could get the other instance to work with the new database, that is risky, as it would be modifying things against a schema it doesn't completely understand. Additionally, if you do need to roll back the DDL then you will almost certainly lose any data that was modified since the deployment. So it is really best that all instances running the old schema fail until they are upgraded, to prevent them from modifying data that is at risk of being lost.
Usually you should have done a deployment to a staging environment that is as close to your production as possible to test the database migration process. This is called pre-production testing, and sometimes involves restoring the most recent backup from production into staging to ensure new constraints/structures are valid for existing data. By deploying to this staging environment, you should have a very high level of confidence that production deployment will go successfully.
You can additionally safeguard yourself against production deployment issues by taking backups prior to deployment so that you can roll back as necessary (although this is a worst-case scenario, as it might mean throwing out important data that came in between the backup/deployment and the realization that there is an issue). I imagine EF migrations uses a transaction to run the DDL scripts, so they should roll back all-or-nothing if there is an issue.
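To make that staging rehearsal concrete, a rough sketch using EF6's command-line migrator (the server, database, and path names are placeholders): restore the latest production backup into the staging database, then run the pending migrations against it before any production deployment.

    # migrate.exe ships in the EF6 NuGet package's tools folder; all names below are placeholders
    migrate.exe MyApp.dll /startUpConfigurationFile="..\MyApp\Web.config" /connectionString="Server=staging-sql;Database=MyAppStaging;Integrated Security=True" /connectionProviderName="System.Data.SqlClient" /verbose

If that run succeeds against real production data, you have much higher confidence the production migration will too.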

Zero downtime on Heroku

Is it possible to do something like the Github zero downtime deploy on Heroku using Unicorn on the Cedar stack?
I'm not entirely sure how the restart works on Heroku and what control we have over restarting processes, but I like the possibility of zero-downtime deploys, and up until now, from what I've read, it hasn't been possible.
There are a few things that would be required for this to work.
First off, we'd need backwards compatible migrations. I leave that up to our team to figure out.
Secondly, we'd want to migrate the db right after a push, but before the restart (assuming our migrations are fully backwards compatible, this should not affect anything)
Thirdly, we'd want to instruct Unicorn to launch a new master process and fork some workers, then swap the PIDs and gracefully shut down the old process/workers
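For reference, outside of Heroku that swap is normally driven with signals, roughly like this (the pid file paths are illustrative):

    kill -USR2 "$(cat /tmp/unicorn.pid)"         # old master re-execs a new master, which forks fresh workers
    sleep 10                                     # give the new workers time to boot
    kill -QUIT "$(cat /tmp/unicorn.pid.oldbin)"  # gracefully shut down the old master and its workers

The problem is that Heroku's dyno model doesn't give us a way to send those signals ourselves.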
I've scoured the docs but I can't find anything that would indicate this is possible on Heroku. Any thoughts?
I can't address migrations, but the part about restarting processes and avoiding wait time:
There is a beta feature for Heroku called preboot. After a deploy, it boots your new dynos first and waits a while before switching traffic and killing the old ones:
https://devcenter.heroku.com/articles/labs-preboot/
I also wrote a blog post that has some measurements on my app's performance improvements using this feature:
http://ylan.segal-family.com/blog/2012/08/27/deploy-to-heroku-with-near-zero-downtime/
You might be interested in their feature called preboot.
Taken from their documentation:
This feature provides seamless deploys by booting web dynos with new code before killing existing web dynos.
Some apps take a long time to boot up, and this can cause unacceptable delays in serving HTTP requests during deployment.
There are a few caveats:
You must have at least two web dynos to use this feature. If you have your web process type scaled to 1 or 0, preboot will be disabled.
Whoever is doing the deployment will have to wait a few minutes before the new code starts serving user requests; this happens later than it would without preboot (but in the meanwhile, user requests are still served promptly by old dynos).
There will be a short period (a minute or two) where heroku ps shows the status of the new code, but user requests are still being served by old code.
There is much more information about it, so refer to their documentation.
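At the time of writing, enabling it looks roughly like this (the app name is a placeholder):

    heroku labs:enable preboot --app your-app-name   # one-time opt-in per app
    git push heroku master                           # new dynos boot alongside the old ones
    heroku ps --app your-app-name                    # may show new code while old dynos still serve traffic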
It is possible, but requires a fair amount of forward planning. As of Rails 3.1 there are three tasks that need carrying out:
Upload the new code
Run any database migrations
Sync the assets
Uploading code and restarting is fairly straightforward; the main problem lies with the other two, but the way around them is pretty much the same.
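For the record, the first two steps on Cedar are roughly as follows (asset precompilation normally happens during slug compilation on push):

    git push heroku master        # upload the new code; Heroku builds the slug and restarts the dynos
    heroku run rake db:migrate    # run the migrations in a one-off dyno
    heroku restart                # restart so every dyno sees the migrated schema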
Essentially you need to:
Make the code compatible with the migration you need to run
Run the migration, and remove any code written specifically for it
For instance, if you want to remove a column, you'll need to deploy a patch telling ActiveRecord to ignore it first. Only then can you deploy the migration and clean up that patch.
In short, you need to consider database and code compatibility and work around them so that the two can overlap in terms of versioning.
An alternative to this method might be to have two versions of the application running on Heroku at the same time. When you deploy, switch the domain to the other version, do the deploy, and switch it back again. This will help in most instances, but again, database compatibility is an issue.
Personally, I would say that if your deployments are significant enough to require this sort of consideration, taking parts of the application offline is probably the safest answer. Breaking an application up into several smaller applications can help mitigate this and is a mechanism that I use regularly.
No - this is currently not possible using Unicorn on Heroku cedar. I've been bugging Heroku about this for weeks.
Here was Heroku Support's reply to my email on March 8, 2012:
Hi, you could enable maintenance mode when doing a deploy, at least your users would see a maintenance page instead of an error, and also request queue wouldn't build up.
We're definitely aware this is a pain and we're working to offer rolling / zero-downtime deploys in the future. We have no ETA to announce, though.

TFS 2010 - How to set up for a new application

I have started at a new site that is using .Net applications for the first time. As a developer I am used to VSS but this product is dying a death so we are using TFS (BASIC) instead.
I have been using TFS for source control up until now. But now we are having new servers installed for a live environment.
Now I am not sure what I should be doing. There are no books on TFS 2010 that I can find and I am wondering what tips you can give me. Does TFS need to be installed again, or should I use the existing installation? I am thinking I ought to set up a daily build for a test server. I have not been using TDD up until now, but for the next project this may change.
What must I absolutely get right, and what pitfalls should I avoid?
Without being there in your environment, it's hard to make appropriate recommendations. I've made some assumptions about your installation based on what you said, but these may be wildly wrong.
You say you're using TFS (BASIC)-- I'm not sure what you mean by that, but if you are using TFS installed on one of the developers workstations, and you're starting to move towards a more robust development environment, I would recommend that you get a separate server (or servers) for your TFS installation.
It sounds like you're relatively small, so having your application tier and your data tier on the same machine shouldn't be that much of an issue. Just make sure that you have enough RAM on the machine to support both processes, and that you have enough disk space allocated for the growth of the database.
You talk about Test Driven Development (TDD), but what I think you're actually talking about is Continuous Integration (CI). When you have a CI environment set up, builds happen automatically, either on a schedule or triggered by check-ins. Having this set up is never a bad idea, and I would recommend that you get into the rhythm of CI builds as soon as possible.
If you're looking for a build server, you are probably going to be ok hosting the build agent on the combined application/data tier. If you find that you're getting performance hits when you do builds, you can move your builds to a different server without much effort.
You will also want to look at migrating your source code repository from your current environment to your future environment. The TFS installation wizard might be able to help you with that. If not, there are other options available, such as moving the database files to the new machine, or using the codeplex-based TFS Integration Platform.

Tools to assist managing the application promotion process in an enterprise environment

I am curious how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is half stuck between some custom online-forms functionality and paper-based processes for submitting documents and gathering approvals and reviews.
All of this is left in the project managers' hands: tracking what has been submitted, what has passed review and been approved, and advising management of any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find one that's good via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. Team Track is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotional groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and Cruise Control for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features, with the added bonus of SVN, which is a very nice version control system.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test : There is usually a code freeze date that stops work on new features; the code that has been tagged/labelled/archived gets built, copied onto the test machines, and the tests are run. This is also usually the least detailed of any push.
Test -> Prod : This usually requires production to go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so that the promotion happens and none of the customers experience any downtime, as the ones on the older server move over once their session ends.
To elaborate on that switch idea, the setup is to have two potentially live servers with only one taking requests; the load balancer sends all traffic to that machine and is switched over once the other server has the updated code ready to go live.
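An illustrative version of that switch, assuming an nginx front end (nginx and the file paths are assumptions, not something from the original setup): repoint the upstream at the box that has the new code, then reload gracefully.

    sed -i 's/app-old\.internal/app-new.internal/' /etc/nginx/conf.d/upstream.conf   # placeholder host names
    nginx -t && nginx -s reload                                                      # validate, then reload without dropping connections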
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used. Generally there are schedules, and timelines may have to be changed or additional resources brought in to hit a hard date, for example when a conference is on a particular weekend and such and such has to be ready for it.
Of course, in a few places there has been the "Oh, was that broken? Let me see..." and, a few minutes later, "No, see, it isn't broken for me," where someone changed things without asking permission or anything, at a company that still has what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files that isn't really complicated but isn't exactly trivial.
3) Medium - Where going from one environment to another requires changing a bunch of files and usually has scripts to move.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new including how this would be used. I don't think I've ever seen one of this size but I'd imagine Microsoft or Google would have releases of this size.
Most releases fall somewhere in that spectrum, so how much planning and preparation is needed can vary quite a bit, and let's not forget that regulatory compliance can be its own pain in getting some things done.
