I am trying to do blue-green deployments to AKS using Jenkins. I am currently following this document:
https://learn.microsoft.com/en-us/azure/developer/jenkins/deploy-to-aks-using-blue-green-deployment-pattern
When I run the pipeline, I am getting the following error:
kubectl --kubeconfig=kubeconfig delete deployment todoapp-deployment-blue
Error from server (NotFound): deployments.apps "todoapp-deployment-blue" not found
What exactly does this error mean, and how do I resolve it?
This error means there is no deployment named todoapp-deployment-blue in the Kubernetes namespace your kubeconfig points at, so there is nothing for kubectl to delete. The deployment has to exist, i.e. have been created by an earlier deploy, before the delete step can succeed.
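If you want the pipeline to tolerate the deployment not existing yet (for example on the very first run, before blue has ever been created), kubectl delete accepts an --ignore-not-found flag. A minimal sketch, reusing the same kubeconfig and deployment name from the error above:

# Exits 0 even when the deployment does not exist
kubectl --kubeconfig=kubeconfig delete deployment todoapp-deployment-blue --ignore-not-found=true

# Or guard the delete explicitly and only run it when the deployment is present
if kubectl --kubeconfig=kubeconfig get deployment todoapp-deployment-blue >/dev/null 2>&1; then
  kubectl --kubeconfig=kubeconfig delete deployment todoapp-deployment-blue
fi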
Related
While deploying to GKE from Jenkins, we are getting the below error in the Jenkins console.
Failed to verify apps/v1/Deployment: app-test
java.io.IOException: Failed to launch command args: [kubectl, --kubeconfig, /var/lib/jenkins/workspace/sample_dev_gke_deploy#tmp/.kube14897892684774139622config, get, deployment, app-test, -o, json], status: 1. Logs: Error from server (NotFound): deployments.apps "app-test" not found
This seems to be the same issue as discussed in Similar issue. Any idea if this issue has been resolved? Please let us know if any further details are required.
Thanks in advance
This only happens when deploying to GKE from Jenkins; a manual GKE deployment works fine.
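One way to narrow this down is to run the exact command from the log yourself on the Jenkins agent, with the same kubeconfig the plugin uses. A sketch, assuming a kubeconfig at ~/.kube/config on the agent (the plugin's temporary #tmp path above will differ per build):

# Run as the jenkins user on the agent to reproduce what the plugin sees
kubectl --kubeconfig ~/.kube/config get deployment app-test -o json

# Check whether app-test actually lives in a different namespace than the one the plugin targets
kubectl --kubeconfig ~/.kube/config get deployments --all-namespaces | grep app-test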
I have deployed several microservices in the cluster and am trying to run a log shipper as a sidecar in one of the services.
When I deploy the microservice, all the services come up, but one service's pod gets stuck in CrashLoopBackOff.
The pod contains two containers: one for the service itself and the other for the logshipper sidecar.
The error message from the logshipper is as follows:
ERROR (catatonit:6): failed to exec pid1: No such file or directory
and kubectl describe on the pod shows:
Backoff 40s restarting failed container=logshipper
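That catatonit message usually means the init process could not exec the container's real entrypoint because the binary it points at does not exist in the image, so the mismatch is between the sidecar's image and the command/args in the pod spec. Two checks worth running, assuming the pod is named my-service-pod and the sidecar image is logshipper:latest (substitute your actual names):

# Show any command override the pod spec passes to the logshipper container
kubectl get pod my-service-pod -o jsonpath='{.spec.containers[?(@.name=="logshipper")].command}'

# Inspect the image itself to confirm its entrypoint/cmd point at a binary that exists
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' logshipper:latest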
I am trying to deploy my first stream app via the Spring Cloud Data Flow dashboard, but I keep getting a "Failed to create stream" error in the UI. Can someone help me investigate what might be wrong?
I am running SCDF on Kubernetes, and my deployment consists of the following components:
scdf-server
skipper
mariadb
rabbitmq
My stream is the simple time | log example
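For reference, what I am creating in the dashboard is the equivalent of this SCDF shell definition (ticktock is just the name the docs use for this example; mine may differ):

stream create --name ticktock --definition "time | log"
stream deploy --name ticktock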
Try using kubectl on the scdf-server pod to see if it provides any information. I've seen that error occur if an app I deployed was not accessible - in my case, I'd referenced it by an incorrect filepath which didn't get caught by the server until it tried to deploy the stream.
It could be failing at any point in the deploy. To gain some insight, you can view the events and logs on each pod with the following commands:
kubectl describe pods/<pod-name>
kubectl logs pods/<pod-name>
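In an SCDF-on-Kubernetes setup like yours, it is the skipper pod that actually performs the stream deployment, so its log is often where the real error shows up. A sketch, assuming the pods carry app=scdf-server and app=skipper labels (adjust the selectors to whatever your install uses):

# Tail the server and skipper logs; stream-deployment failures usually surface here
kubectl logs -l app=scdf-server --tail=100
kubectl logs -l app=skipper --tail=100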
I am deploying a Docker image from ACR to a Windows-based App Service using an Azure DevOps release pipeline (with the Azure Web App on Container Deploy task), but I am getting the following error:
"Error: Failed to patch App Service '[App Service Name]' configuration. Error: BadRequest - The parameter DOCKER_REGISTRY_SERVER_URL has an invalid value. Unexpected error when connecting to the registry. Cannot find available registry. https://[ACR Name].azurecr.io (CODE: 400) Error: Failed to update deployment history. Error: Ip Forbidden (CODE: 403)"
Both the App Service and the ACR are using private endpoints. We are using a self-hosted agent for our pipeline.
Please let me know how to fix this issue.
Here is some troubleshooting advice:
Please check the value of DOCKER_REGISTRY_SERVER_URL in your ARM template or config file.
What's more, if you are using an ARM template, try adding "reserved": true to its properties.
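If the setting itself is wrong, you can also inspect and correct it outside the release pipeline with the Azure CLI. A sketch, assuming a resource group my-rg, an app my-app, and a registry myacr (substitute your real names):

# Show the current registry-related app settings
az webapp config appsettings list --resource-group my-rg --name my-app

# Set the registry URL explicitly; it must be a full https:// URL
az webapp config appsettings set --resource-group my-rg --name my-app \
  --settings DOCKER_REGISTRY_SERVER_URL=https://myacr.azurecr.io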
I am testing an OpenShift v3 Starter cluster (ca-central-1) and created a project from a custom Docker image stream (from GitHub). It was running fine, but after I changed a config map, scaled the deployment down to 0 pods, and scaled it back up to 1 pod, OpenShift can no longer start any pods.
The error in the web interface (in the Events tab) is:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container
for pod "hass-19-98vws": Error response from daemon: grpc: the connection is unavailable.
Pod sandbox changed, it will be killed and re-created.
These messages appear in an endless loop. I tried to create a new deployment, but it gives the same logs.
What am I doing wrong?
OK, it seems that I was affected by a cluster upgrade. The issue resolved itself after two days.
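For anyone who hits the same loop: before waiting it out, it can be worth confirming whether the problem is on the cluster side rather than in your project. A couple of read-only checks, assuming the oc CLI is logged in to the affected cluster:

# Recent events, newest last; node/runtime problems usually show up here
oc get events --sort-by='.lastTimestamp'

# Overall project status, including deployments stuck in a retry loop
oc status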