How does helm upgrade handle the deployment update?

Let's say we have a helm chart for an app, and we want to upgrade that app over time by changing the deployment's image version and running: helm upgrade --install my-app-release
The question is: does helm use the rolling update strategy defined in the deployment manifest or does it handle upgrading differently?

It uses the strategy defined in the deployment manifest.
Technically the update strategy defined in the deployment manifest is applied every time the PodSpec changes, no matter whether it changes through helm or kubectl or something else. And only if the PodSpec changes.
It's instructive to see how kubectl rollout restart (i.e. the kubectl command to manually trigger the update strategy) works: How to rolling restart pods without changing deployment yaml in kubernetes?
Note that changing an annotation on the Deployment itself won't trigger a rollout; the changed annotation has to be on the Pod template (spec.template.metadata.annotations).
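To make that concrete, here is a minimal Deployment sketch (names and versions are made up) showing the strategy block that Helm leaves in charge of the rollout; bumping the image tag changes the PodSpec and so triggers it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.2.0   # changing this tag changes the PodSpec and triggers the strategy above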

Related

Yarn 2.0 zero installs production vs development deployment?

So we are in the process of moving from yarn 1.x to yarn 2 (yarn 3.1.1), and I'm getting a little confused about how to configure yarn in my CI/CD config. As of right now our pipeline does the following to deploy to our Kubernetes cluster:
On branch PR:
Obtain branch repo in gitlab runner
Lint
Run jest
Build with environment variables, dependencies, and devdependencies
Publish image to container registry with tag test
a. If successful, allow merge to main
Kubernetes watches for updates to test and deploys a test pod to cluster
On merge to main:
Obtain main repo in gitlab runner
Lint
Run jest
Build with environment variables and dependencies
Publish image to container registry with tag latest
Kubernetes watches for updates to latest and deploys a staging pod to cluster
(NOTE: For full-blown production releases we will be using the release feature to manually deploy releases to the production server)
The issue is that we are using yarn 2 with zero-installs, and in the past we have been able to prevent the production environment from using any dev dependencies by running yarn install --production. In yarn 2 this command is deprecated.
Is there any ideal solution to prevent dev dependencies from being installed in production? I've seen some posts mention using workspaces, but that seems more tailored towards monorepos, where there is more than one application.
Thanks in advance for any help!
I had the same question and came to the same conclusion as you: I could not find an easy way to perform a production build on Yarn 2. Yarn Workspaces comes closest, but I did find the paragraph below in the documentation:
Note that this command is only very moderately useful when using zero-installs, since the cache will contain all the packages anyway - meaning that the only difference between a full install and a focused install would just be a few extra lines in the .pnp.cjs file, at the cost of introducing an extra complexity.
From: https://yarnpkg.com/cli/workspaces/focus#options-production
Does that mean that there essentially is no production install? It would be nice if that was officially addressed somewhere but this was the closest I could find.
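For reference, the focused install those docs are describing would look roughly like this, assuming the workspace-tools plugin is installed and a workspace named my-app (both names are illustrative):

yarn plugin import workspace-tools
yarn workspaces focus my-app --production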
Personally, I am using NextJS and upgraded my project to Yarn 2. The features of Yarn 2 seem to work (no node_modules folder) but I can still use yarn build from NextJS to create a production build with output in the .next folder.

Changes not reflecting after pod deployment

I am modifying the code in my project and deploying it in a Kubernetes pod. The pods are deployed successfully, but the latest changes I have made in my code are not reflected after deployment. Any ideas?
When you change the code of your application, which will be built into your container image, you have to do the following:
Rebuild your image with the latest changes (e.g. via docker build ...)
Push the image to your registry (e.g. via docker push ...)
Update your manifests with the new image version (for a Deployment, the field spec.template.spec.containers[].image), or if it keeps the same version/tag, make sure you have the imagePullPolicy set to Always.
In case your manifests did not change (e.g. because the tag remains the same), you need to trigger a rollout of your Deployment or delete your Pods manually to ensure the image is pulled again and the latest changes are picked up (see the command sketch after this list).
If you're running a Webapp, you might need to clean your browser cache and reload all resources (Shift + F5)
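As a rough sketch, the cycle above could look like this (image and deployment names are placeholders):

docker build -t registry.example.com/my-app:1.0.1 .
docker push registry.example.com/my-app:1.0.1
kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0.1
# or, if the tag stayed the same and imagePullPolicy is Always:
kubectl rollout restart deployment/my-app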

Jenkins on Kubernetes node is complaining its plug-ins need newer version of Jenkins, but don't want to lose data

Jenkins (on a Kubernetes node) is complaining it requires a newer version of Jenkins to run some of my plug-ins.
SEVERE: Failed Loading plugin Matrix Authorization Strategy Plugin
v2.4.2 (matrix-auth) java.io.IOException: Matrix Authorization
Strategy Plugin v2.4.2 failed to load.
- You must update Jenkins from v2.121.2 to v2.138.3 or later to run this plugin.
The same log file also complains farther down that it can't read my config file... I'm hoping this is just because of the version issue above, but I'm including it here in case it is a sign of deeper issues:
SEVERE: Failed Loading global config
java.io.IOException: Unable to read /var/jenkins_home/config.xml
I'd either like to disable the plug-ins that are causing the issue so I can see the Jenkins UI and manage the plug-ins from there, or I'd like to update Jenkins in a way that DOES NOT DELETE MY USER DATA AND JOB CONFIG DATA.
So far, I tried disabling ALL the plug-ins by adding .disabled files to the Jenkins plug-ins folder. That got rid of most of the errors, but it still complained about the plug-in above. So I removed the .disabled file for that, and now it's complaining about Jenkins not being a new enough version again (the error above).
Note: this installation of Jenkins is using a persistent storage volume, mounted with EFS. So that will probably help alleviate some of the restrictions around upgrading Jenkins, if that's what we need to do.
Finally, whatever we do with the plug-ins and Jenkins version, I need to make sure the change is going to persist if Kubernetes re-starts the node in the future. Unfortunately, I am pretty unfamiliar with Kubernetes, and I haven't discovered yet where these changes need to be made. I'm guessing the file that controls the Kubernetes deployment configuration?
This project is using Helm, in case that matters. But again, I hardly know anything about Helm, so I don't know what files you might need to see to make this question solvable. Please comment so I know what to include here to help provide the needed information.
We faced the same problem with our cluster. We have a basic explanation for it, though we're not certain about it; in any case, the following fix works.
That error comes from having installed Jenkins via Helm but its plugins through the Jenkins UI. That works as long as you never restart the pod, but if one day Jenkins has to initialize itself again, you will face that error.
Jenkins tries to load plugins from JENKINS_PLUGINS_DIR, which is empty, so the pod dies.
To fix the current error, you should specify your plugins in the master.installPlugins parameter.
If you followed a normal install, just run the following against your cluster:
helm get values jenkins_release_name
So you may have something like that:
master:
  enableRawHtmlMarkupFormatter: true
  installPlugins:
    - kubernetes:1.16.0
    - workflow-job:2.32
By default, some values are "embedded" by the chart to make sure Jenkins works; see here for more details: Github Helm Charts Jenkins
So, just copy that output into a file with the same syntax and add your plugins with their versions (see the example after the command below). Then use the helm upgrade command with your file on your release:
helm upgrade [RELEASE] [CHART] -f your_file.yaml
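For example, your_file.yaml might end up looking like this (the matrix-auth version is taken from the error message above; adjust the rest to your own values):

master:
  enableRawHtmlMarkupFormatter: true
  installPlugins:
    - kubernetes:1.16.0
    - workflow-job:2.32
    - matrix-auth:2.4.2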
Good luck!

How do you put your source code into Kubernetes?

I am new to Kubernetes, so I'm wondering what the best practices are when it comes to putting your app's source code into a container run in Kubernetes or a similar environment.
My app is PHP, so I have PHP (FPM) and Nginx containers (running on Google Container Engine).
At first I had a git volume, but there was no way of changing app versions like this, so I switched to emptyDir, keeping my source code in a zip archive in one of the images, which would unzip it into this volume upon start. Now I have the source code in both images via git, with a separate git directory, so I have /app and /app-git.
This is good because I do not need to share or configure volumes (fewer resources and less configuration), the app's layer is reused in both images so there is no impact on space, and since it is git, the "base" is built in, so I can simply adjust my Dockerfile command at the end and switch to a different branch or tag easily.
I wanted to download an archive with the source code directly from the repository by providing credentials as arguments during the build process, but that did not work because my repo host, Bitbucket, creates archives with the last commit id appended to the directory name, so there was no way of knowing what unpacking the archive would produce. So I got stuck with git itself.
What are your ways of handling the source code?
Ideally, you would use continuous delivery patterns, which means using Travis CI, Bitbucket Pipelines, or Jenkins to build the image on code change.
That is, every time your code changes, your automated build is triggered and builds a new Docker image containing your source code. Then you can trigger a Deployment rolling update to update the Pods with the new image.
If you have dynamic content, you likely put it on persistent storage, which will be re-mounted on Pod update.
What we've done traditionally with PHP is an overlay at runtime. Basically the container has a volume mounted to it with deploy keys to your git repo, which allows you to perform git pull operations.
The more buttoned-up approach is to have custom, tagged images of your code extended from fpm or whatever base image you're using. That way you would run version 1.3 of YourImage, where YourImage contains code version 1.3 of your application.
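A minimal Dockerfile sketch of that pattern, assuming the official php:fpm base image and application code in ./src (paths and tags are illustrative):

# Dockerfile for YourImage, tagged with the application version
FROM php:7.4-fpm
COPY ./src /var/www/html
# build and tag per release, e.g.: docker build -t your-registry/your-app:1.3 .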
Try to leverage continuous integration and continuous deployment. You can use Jenkins as a CI/CD server and create jobs for building, pushing, and deploying the image.
I recommend putting your source code into the Docker image instead of pulling it from the git repo at runtime. You can also extract configuration files from the Docker image: Kubernetes v1.2 introduced the ConfigMap feature, so we can put configuration files in a ConfigMap. When running a pod, the configuration files are mounted automatically. It's very convenient.
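A minimal sketch of that ConfigMap pattern (all names are illustrative): the config file lives in a ConfigMap and gets mounted into the container at a known path:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  app.ini: |
    debug = false
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: your-registry/your-app:1.3
      volumeMounts:
        - name: config
          mountPath: /etc/app   # app.ini appears as /etc/app/app.ini
  volumes:
    - name: config
      configMap:
        name: app-config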

Will `pod update` overwrite my code changes when a new version of the pod is available?

I've added MKStoreKit version 4.99 to my project using cocoapods. My Podfile consists of:
platform :ios, '6.0'
pod 'MKStoreKit', '~> 4.99'
MKStoreKit has a configuration file called MKStoreKitConfigs.h that needs to be modified on a per-project basis, and I've modified the file appropriately. What will happen when MKStoreKit releases a new version, say 5.0, and I execute pod update? Will my changes be overwritten? Could you describe why yes or why no?
Yes, pod update will overwrite your changes. What you could do is fork the project on GitHub, make the changes in your fork, and point CocoaPods to the fork. See Use a fork of Restkit on github via cocoaPod? for how to do that.
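For example, a Podfile entry pointing at such a fork could look like this (URL and tag are placeholders):

pod 'MKStoreKit', :git => 'https://github.com/yourname/MKStoreKit.git', :tag => 'v4.99-myfork'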
As I understand it, this is a known problem, and as one commenter said: "this is kind of bad practice to configure 3rd party lib in header file".
So at first you can take a look at this commit. IMO this is a better way to configure it.
Also you can add your fork as a Pod using:
pod 'MKStoreKit.MyFork', :path => 'MKStoreKit.MyFork.podspec'
EDIT:
Thanks to rounak for noticing: :local is now :path. From the CocoaPods docs:
Using this option (:path) CocoaPods will assume the given folder to be the
root of the Pod and will link the files directly from there in the
Pods project. This means that your edits will persist between
CocoaPods installations. The referenced folder can be a checkout of
your favourite SCM or even a git submodule of the current repo.
This is an old post, but I have a fairly simple workaround for keeping changes you make to Pods.
As mentioned, pod update will overwrite any changes you made. However, if you're using git, what I like to do is commit all my changes except for my pod changes.
Once the only changes I have on my branch are the Pods changes, I stash the pod changes by running git stash save "Custom Cocoapod changes, apply after every pod update". You can give it any message you'd like by changing the text between the "".
This command has the side effect of resetting your working directory to the previous HEAD, so if you want to reapply the stash you can just run git stash apply to get those changes back in, and then commit them to save them.
Don't use git stash pop as this will delete the stash after applying it.
Now, at some undetermined time in the future, when you update your pods and it's time to apply the stash again, run git stash list. This will return a list of all the stashes you've made, zero-indexed with the most recent first. You'll probably see something like this:
stash@{0}: On featureFooBar: foo bar
stash@{1}: On Master: Custom Cocoapod changes, apply after every pod update
...
If the custom CocoaPods changes stash is stash@{0}, then perfect: you can just run git stash apply again and you'll get those changes in your working directory. Otherwise, once you find which stash number your pod changes are under, you can apply that stash by running git stash apply stash@{1}
Applying stashes is easiest when you have a clean working directory on the same branch, but that's not required. This page gives a good description of git stash and how to use it otherwise.
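Putting it together, one update cycle might look like this (the stash message is the one suggested above):

git stash save "Custom Cocoapod changes, apply after every pod update"
pod update
git stash list                # find the entry holding your Pod changes
git stash apply stash@{1}     # apply without deleting it from the stash list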
