Cloud Build does not trigger a new pod deployment, "No resources found in namespace" on GKE - docker

I've been playing around with GCP Cloud Build triggers to deploy a new pod every time a push is made to a GitHub repo. Everything is set up: the Docker image is pushed to GCP Container Registry and the trigger completes successfully without any errors. I use the $SHORT_SHA tag generated by the build pipeline as my image tag. However, the new pod deployment does not work. I'm not sure what the issue is, because I also modify the codebase with every push just to test the deployment. I've followed a couple of Google tutorials on triggers, but I can't work out what exactly the issue is and why the newly pushed image does not get deployed.
cloudbuild.yaml
steps:
- name: maven:3-jdk-8
  id: Maven Compile
  entrypoint: mvn
  args: ["package", "-Dmaven.test.skip=true"]
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args:
  - 'build'
  - '-t'
  - 'us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA'
  - '.'
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args:
  - 'push'
  - 'us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA'
- name: 'gcr.io/cloud-builders/gcloud'
  id: Generate manifest
  entrypoint: /bin/sh
  args:
  - '-c'
  - |
    sed "s/GOOGLE_CLOUD_PROJECT/$SHORT_SHA/g" kubernetes.yaml
- name: "gcr.io/cloud-builders/gke-deploy"
  args:
  - run
  - --filename=kubernetes.yaml
  - --image=us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA
  - --location=us-central1-c
  - --cluster=cluster-1
kubernetes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deployment_name>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <container_label>
  template:
    metadata:
      labels:
        app: <container_label>
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      containers:
      - name: <container_name>
        image: us.gcr.io/<project_id>/<image_name>:GOOGLE_CLOUD_PROJECT
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
spec:
  selector:
    app: <selector_name>
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer

I would recommend a few changes to get your Cloud Build pipeline to deploy the application to your GKE cluster.
cloudbuild.yaml
In the Build and Push stages, change the image argument from us.gcr.io/$PROJECT_ID/<image_name>:$SHORT_SHA to gcr.io/$PROJECT_ID/sample-image:latest.
Generate manifest stage - you can skip/delete this stage.
gke-deploy stage - remove the --image argument.
kubernetes.yaml
In the spec, reference the image as gcr.io/$PROJECT_ID/sample-image:latest so that every deployment always pulls the latest image.
Everything else looks good; a sketch of the revised cloudbuild.yaml is shown below.
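For reference, here is a minimal sketch of what the revised cloudbuild.yaml could look like with those changes applied. It is only an illustration of the suggestions above: the sample-image name is a placeholder, and the cluster name and zone are carried over from the original pipeline.
steps:
- name: maven:3-jdk-8
  id: Maven Compile
  entrypoint: mvn
  args: ["package", "-Dmaven.test.skip=true"]
- name: 'gcr.io/cloud-builders/docker'
  id: Build
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/sample-image:latest', '.']
- name: 'gcr.io/cloud-builders/docker'
  id: Push
  args: ['push', 'gcr.io/$PROJECT_ID/sample-image:latest']
- name: 'gcr.io/cloud-builders/gke-deploy'
  id: Deploy
  args:
  - run
  - --filename=kubernetes.yaml
  - --location=us-central1-c
  - --cluster=cluster-1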

Related

AKS, pulling docker image failed with error: manifest tagged by "latest" is not found

I'm trying to deploy a Spring Boot app to an Azure Kubernetes cluster using pipelines set up in an Azure DevOps Git repo, but the AKS deployment is failing with the following error:
Failed to pull image "sapcemission.azurecr.io/spaceship": [rpc error:
code = Unknown desc = Error response from daemon: manifest for
sapcemission.azurecr.io/spaceship:latest not found: manifest unknown:
manifest tagged by "latest" is not found, rpc error: code = Unknown
desc = Error response from daemon: Get
https://sapcemission.azurecr.io/v2/spaceship/manifests/latest:
unauthorized: authentication required, visit
https://aka.ms/acr/authorization for more information.]
My project is a Spring Boot multi-module project containing two modules, and I'm trying to deploy them both. What am I doing wrong here?
azure-pipelines.yml
# Deploy to Azure Kubernetes Service
# Build and push image to Azure Container Registry; Deploy to Azure Kubernetes Service
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
- master
resources:
- repo: self
variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'd5d300b9-b22f-4b38-a5c8-35526548a630'
  imageRepositoryCommandCenter: 'commandcenter'
  imageRepositorySpaceShip: 'spaceship'
  containerRegistry: 'sapcemissipion.azurecr.io'
  dockerfilePathCommandCenter: '**/command-center/Dockerfile'
  dockerfilePathSpaceShip: '**/space-ship/Dockerfile'
  tag: '$(Build.BuildId)'
  imagePullSecret: 'sapcemission13564b3d-auth'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Maven@3
      inputs:
        mavenPomFile: 'pom.xml'
        publishJUnitResults: true
        testResultsFiles: '**/surefire-reports/TEST-*.xml'
        javaHomeOption: 'JDKVersion'
        mavenVersionOption: 'Default'
        mavenAuthenticateFeed: false
        effectivePomSkip: false
        sonarQubeRunAnalysis: false
    - task: Docker@2
      displayName: Build and push an command center image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepositoryCommandCenter)
        dockerfile: $(dockerfilePathCommandCenter)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: Docker@2
      displayName: Build and push an space ship image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepositorySpaceShip)
        dockerfile: $(dockerfilePathSpaceShip)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - upload: manifests
      artifact: manifests
- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'spacemission-1550.kube-system'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/service.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepositoryCommandCenter):$(tag)
                $(containerRegistry)/$(imageRepositorySpaceShip):$(tag)
deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: commandcenter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: commandcenter
    spec:
      containers:
      - name: commandcenter
        image: sapcemission.azurecr.io/commandcenter
        ports:
        - containerPort: 8080
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: spaceship
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spaceship
    spec:
      containers:
      - name: spaceship
        image: sapcemission.azurecr.io/spaceship
        ports:
        - containerPort: 8081
service.yml
apiVersion: v1
kind: Service
metadata:
  name: commandcenter
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: commandcenter
---
apiVersion: v1
kind: Service
metadata:
  name: spaceship
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: spaceship
As the error shows, authentication is required. You already create the imagePullSecret in the pipeline, so you just need to reference it in the pod spec of each Deployment in your YAML file, like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: commandcenter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: commandcenter
    spec:
      containers:
      - name: commandcenter
        image: sapcemission.azurecr.io/commandcenter
        ports:
        - containerPort: 8080
      imagePullSecrets: # here
      - name: "sapcemission13564b3d-auth"
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: spaceship
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spaceship
    spec:
      containers:
      - name: spaceship
        image: sapcemission.azurecr.io/spaceship
        ports:
        - containerPort: 8081
      imagePullSecrets: # here
      - name: "sapcemission13564b3d-auth"
The error also shows that the manifest tagged "latest" was not found, so you also need to check whether that tag really exists in the registry. Note that the deployment manifests reference the images without a tag, which defaults to latest, while the pipeline pushes images tagged with $(Build.BuildId).
Following the docs here: Authenticate with Azure Container Registry from Azure Kubernetes Service
When you're using Azure Container Registry (ACR) with Azure Kubernetes Service (AKS), an authentication mechanism needs to be established. This operation is implemented as part of the CLI and Portal experience by granting the required permissions to your ACR.
The easiest way would be:
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acrName>
but this requires:
Owner or Azure account administrator role on the Azure subscription
Azure CLI version 2.7.0 or later
To avoid needing an Owner or Azure account administrator role, you can configure a service principal manually or use an existing service principal to authenticate ACR from AKS. For more information, see ACR authentication with service principals or Authenticate from Kubernetes with a pull secret.
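If you go the pull-secret route instead, a minimal sketch of creating one manually with kubectl follows; the secret name and service-principal credentials below are placeholders, not values from your setup.
kubectl create secret docker-registry acr-pull-secret \
  --docker-server=sapcemission.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>
You would then reference acr-pull-secret under imagePullSecrets in the pod spec, exactly as shown above.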

Copy files into kubernetes pod with deployment.yaml

I have a containerized microservice built with Java. This application uses the default /config-volume directory when it searches for property files.
Previously I manually deployed via Dockerfile, and now I'm looking to automate this process with Kubernetes.
The container image starts the microservice immediately, so the properties need to be present in the /config-volume folder right from startup. I accomplished this in Docker with this simple Dockerfile:
FROM ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
RUN mkdir /config-volume
COPY path/to/my.properties /config-volume
I'm trying to replicate this type of behavior in a kubernetes deployment.yaml but I have found no way to do it.
I've tried performing a kubectl cp command immediately after applying the deployment, and it sometimes works, but it can result in a race condition which causes the microservice to fail at startup.
(I've redacted unnecessary parts)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  template:
    spec:
      containers:
      - env:
        image: ########.amazon.ecr.url.us-north-1.amazonaws.com/company/image-name:1.0.0
        name: my-service
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /config-volume
          name: config-volume
      volumes:
      - name: config-volume
        emptyDir: {}
status: {}
Is there a way to copy files into a volume inside the deployment.yaml?
You are trying to emulate a ConfigMap using volumes. Instead, put your configuration into a ConfigMap and mount that into your deployments. The documentation is here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Once you have your configuration as a ConfigMap, mount it using something like this:
...
containers:
- name: mycontainer
  volumeMounts:
  - name: config-volume
    mountPath: /config-volume
volumes:
- name: config-volume
  configMap:
    name: nameOfConfigMap
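To get the properties file into that ConfigMap in the first place, one option is to create it directly from the file; a minimal sketch, where the ConfigMap name my-service-config is just an illustrative placeholder:
kubectl create configmap my-service-config --from-file=path/to/my.properties
The configMap name in the volume above would then reference my-service-config, and Kubernetes will present my.properties as a file under /config-volume before the container starts.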

Deploying a specific image tag in OpenShift Origin from image stream

I have configured my GitLab CI pipelines so that they build an OCI image with Docker-in-Docker and upload it to GitLab's own registry.
Now I want to deploy the images built in my CI pipelines to OpenShift Origin. All images in the registry are tagged with $CI_COMMIT_SHORT_SHA (i.e. I do not use "latest").
How can I do that?
This is what I have tried so far:
before_script:
- oc login --server="$OPENSHIFT_SERVER" --token="$OPENSHIFT_TOKEN"
- oc project myproject
script:
- oc tag registry.gitlab.com/myproject/backend:$CI_COMMIT_SHORT_SHA backend:$CI_COMMIT_SHORT_SHA
- oc import-image backend:$CI_COMMIT_SHORT_SHA
- oc set image dc/backend backend=myproject/backend:$CI_COMMIT_SHORT_SHA
- oc rollout latest backend
Everything seems to work fine until oc set image. I would expect it to change the deployment configuration to use the specified image tag ($CI_COMMIT_SHORT_SHA), but it seems the configuration is not really modified, so the rollout still deploys the old (previous) image.
What am I missing? Is there a better way to deploy a specific tag from a private registry?
Update
Here is my deployment configuration:
kind: DeploymentConfig
apiVersion: apps.openshift.io/v1
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  selfLink: /apis/apps.openshift.io/v1/namespaces/myproject/deploymentconfigs/backend
  resourceVersion: '38635053'
  name: backend
  uid: 02809a3d-...
  creationTimestamp: '2019-10-14T23:04:43Z'
  generation: 7
  namespace: myproject
  labels:
    app: backend
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
    activeDeadlineSeconds: 21600
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - backend
        from:
          kind: ImageStreamTag
          namespace: myproject
          name: 'backend:094971ea'
        lastTriggeredImage: >-
          registry.gitlab.com/myproject/backend@sha256:ebce...
  replicas: 1
  revisionHistoryLimit: 10
  test: false
  selector:
    app: backend
    deploymentconfig: backend
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend
        deploymentconfig: backend
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
    spec:
      containers:
        - name: backend
          image: >-
            registry.gitlab.com/myproject/backend@sha256:ebce...
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
status:
  observedGeneration: 7
  details:
    message: image change
    causes:
      - type: ImageChange
        imageTrigger:
          from:
            kind: DockerImage
            name: >-
              registry.gitlab.com/myproject/backend@sha256:ebce...
  availableReplicas: 1
  unavailableReplicas: 0
  latestVersion: 4
  updatedReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2019-10-14T23:57:51Z'
      lastTransitionTime: '2019-10-14T23:57:51Z'
      message: Deployment config has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2019-10-16T20:09:20Z'
      lastTransitionTime: '2019-10-16T20:09:17Z'
      reason: NewReplicationControllerAvailable
      message: replication controller "backend-4" successfully rolled out
  replicas: 1
  readyReplicas: 1
One way to "solve" this is to have the ImageChange trigger listen to something other than a specific commit id: some logical name that does not exist as a tag in your Docker registry, say "default".
If you do that, then the only thing your script needs to do is:
- oc tag registry.gitlab.com/myproject/backend:$CI_COMMIT_SHORT_SHA backend:default
OpenShift will then take care of updating the image in the DeploymentConfig and rolling out a new deploy for you.
The OP asked why not to use latest. The latest tag is kind of "magical": if you push an image to a registry without a tag, it will be tagged latest, which makes it very easy to overwrite by accident.
So let's say you use "latest" as the tag that you listen to in the ImageStream. What happens if somebody imports the ImageStream? It will fetch the latest tag and overwrite what you have manually tagged.
If you want this kind of control in your pipeline, use an ImageStreamTag name that does not exist in your Docker registry, like I said above.
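Concretely, the ImageChange trigger in the DeploymentConfig above would then point at the logical tag instead of a commit id; a sketch of just that section, based on the config shown in the question:
triggers:
- type: ConfigChange
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - backend
    from:
      kind: ImageStreamTag
      namespace: myproject
      name: 'backend:default'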

How does Kubernetes invoke a Docker image?

I am attempting to run a Flask app via uWSGI in a Kubernetes deployment. When I run the Docker container locally, everything appears to be working fine. However, when I create the Kubernetes deployment on Google Kubernetes Engine, the pods go into CrashLoopBackOff because uWSGI complains:
uwsgi: unrecognized option '--http 127.0.0.1:8080'.
The image definitely has the http option because:
a. uWSGI was installed via pip3 which includes the http plugin.
b. When I run the deployment with --list-plugins, the http plugin is listed.
c. The http option is recognized correctly when run locally.
I am running the Docker image locally with:
$: docker run <image_name> uwsgi --http 127.0.0.1:8080
The container Kubernetes YAML config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: launch-service-example
  name: launch-service-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: launch-service-example
    spec:
      containers:
      - name: launch-service-example
        image: <image_name>
        command: ["uwsgi"]
        args:
        - "--http 127.0.0.1:8080"
        - "--module code.experimental.launch_service_example.__main__"
        - "--callable APP"
        - "--master"
        - "--processes=2"
        - "--enable-threads"
        - "--pyargv --test1=3--test2=abc--test3=true"
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: launch-service-example-service
spec:
  selector:
    app: launch-service-example
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
The container is exactly the same, which leads me to believe that the way the container is invoked by Kubernetes may be causing the issue. As a side note, I have tried passing everything via the command list with no args, which leads to the same result. Any help would be greatly appreciated.
This happens because of the difference in how arguments are processed on the command line versus in the YAML configuration: each item in the args list is passed to the container as a single argv entry, so "--http 127.0.0.1:8080" arrives as one argument instead of two.
To fix it, just split your args like this:
args:
- "--http"
- "127.0.0.1:8080"
- "--module"
- "code.experimental.launch_service_example.__main__"
- "--callable"
- "APP"
- "--master"
- "--processes=2"
- "--enable-threads"
- "--pyargv"
- "--test1=3--test2=abc--test3=true"

Is there any definitive guide on how to pass all the arguments to Docker containers while starting a container through kubernetes?

I want to start a Docker container with Kubernetes with the parameter --oom-score-adj.
My kubernetes deployment script looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxx
spec:
  template:
    metadata:
      labels:
        app: xxx
    spec:
      volumes:
      - name: some-name
        hostPath:
          path: /some-path
      containers:
      - name: xxx-container
        image: xxx-image
        imagePullPolicy: "IfNotPresent"
        securityContext:
          privileged: true
        command:
        - /bin/sh
        - -c
        args:
        - ./rsome-command.sh
        volumeMounts:
        - name: some-name
          mountPath: /some-path
When I inspect the created container, I find --oom-score-adj is set to 1000, and I want to set it to 0. Can anyone shed any light on how I can do it? Is there any definitive guide to passing such arguments?
You can't do this yet; it's one of the frustrating things still unresolved with Kubernetes.
There's a similar issue here around logging drivers. Unfortunately, you'll have to set the value on the Docker daemon.
