SCDF server pod is not coming up with containerd - spring-cloud-dataflow

Our k8s environment was recently upgraded to use containerd as the default container runtime, and with that change the SCDF server pods fail with the error below:
Error: failed to create containerd container: create container failed validation: containers.Labels: label key and value greater than maximum size (4096 bytes), key: io.buildpa: invalid argument
Do we have any solution or workaround at this time? The same image worked perfectly fine when Docker was the default container engine.

Which version of containerd are you using?
I think the issue has been fixed in containerd 1.4.2.
With older versions, you can apply the following workaround in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd]
disable_snapshot_annotations = true
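Put together, the workaround section of /etc/containerd/config.toml might look like this (a sketch to merge into your existing config, not a complete file):

```toml
# /etc/containerd/config.toml -- workaround sketch for containerd < 1.4.2.
# Disabling snapshot annotations avoids the oversized-label validation error
# triggered by buildpack-built images such as the SCDF server image.
[plugins."io.containerd.grpc.v1.cri".containerd]
  disable_snapshot_annotations = true
```

After editing, restart containerd (e.g. systemctl restart containerd) for the change to take effect.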

Related

docker: Error response from daemon: manifest for gcr.io/google_containers/hyperkube-amd64:v1.24.2 not found

Following this guide:
https://jamesdefabia.github.io/docs/getting-started-guides/docker/
and both
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
and
export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)
fail at the docker run stage with a not found error, e.g.:
docker: Error response from daemon: manifest for gcr.io/google_containers/hyperkube-amd64:v1.24.2 not found: manifest unknown: Failed to fetch "v1.24.2" from request "/v2/google_containers/hyperkube-amd64/manifests/v1.24.2".
Any suggestions?
Check the repo of hyperkube and use an available tag:
https://console.cloud.google.com/gcr/images/google-containers/global/hyperkube-amd64
As mentioned by #zerkms and #vladtkachuk, the Google hyperkube image is no longer available. As mentioned in the documentation:
Hyperkube, an all-in-one binary for Kubernetes components, is now deprecated and will not be built by the Kubernetes project going forward. Several older beta API versions are deprecated in 1.19 and will be removed in version 1.22. We will provide a follow-on update since this means 1.22 will likely end up being a breaking release for many end users.
Setting up a local Kubernetes environment as your development environment is the recommended option, whatever your situation, because this setup creates a safe and agile application-deployment process.
Fortunately, there are multiple platforms that you can try out to run Kubernetes locally, and they are all open source and available under the Apache 2.0 license.
Minikube's primary goals are to be the best tool for local Kubernetes application development and to support all Kubernetes features that fit.
kind runs local Kubernetes clusters using Docker container "nodes."

invalid capacity 0 on image filesystem, Lens IDE, Kubernetes

I am creating a k8s cluster on DigitalOcean, but every time I get the same warning after I create the cluster and open it in the Lens IDE.
Here is the screenshot of the warning:
I tried every solution I could find, but still can't remove the warning.
Check first if k3s-io/k3s issue 1857 could help:
I was getting the same error when I installed a Kubernetes cluster via kubeadm.
After reading all the comments on the subject, I thought the problem might be caused by containerd, and the following two commands solved my problem; maybe they can help:
systemctl restart containerd
systemctl restart kubelet
And:
This will need to be fixed upstream. I suspect it will be fixed when we upgrade to containerd v1.6 with the cri-api v1 changes.
So checking the containerd version can be a clue.
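To act on that clue, one way is to compare the installed containerd version against 1.6. A minimal sketch using sort -V; the version string here is a hard-coded example, not read from a live system (on a real node you would take it from containerd --version):

```shell
# Sketch: compare a containerd version string against 1.6.0 using sort -V.
# On a real node: ver=$(containerd --version | awk '{print $3}' | tr -d 'v')
ver="1.5.9"   # example value, assumed for illustration
if [ "$(printf '%s\n' "1.6.0" "$ver" | sort -V | head -n 1)" = "1.6.0" ]; then
  echo "containerd >= 1.6"
else
  echo "containerd < 1.6 - the upstream fix may not be in yet"
fi
```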

kubectl version showing the wrong version number

I have downloaded the latest Kubernetes version from the official Kubernetes site and referenced it in the PATH above the reference to Docker, but kubectl is still showing the version installed with Docker Desktop.
I understand that Docker comes with Kubernetes installed out of the box, but the version it ships ('1.15.5') doesn't work correctly with my Minikube version ('v1.9.2'), which is causing me problems.
Any suggestions on how to fix this issue? Should I remove the Kubernetes binary from C:\Program Files\Docker\Docker\resources\bin? I don't think that would be a good idea.
Can someone help me tackle this issue, along with some explanation of how the versions work with each other? Thanks
This is happening because Windows always gives you the first command found in the PATH; both kubectl versions (Docker's and yours) are in the PATH, but Docker's PATH entry is referenced before yours.
How to solve this really depends on what you need. If you are not using Docker's Kubernetes, you have two alternatives:
1 - Fix your PATH and make sure your kubectl's directory is referenced before Docker's.
2 - Replace Docker's kubectl with yours.
Either way, make sure you restart your PC after these changes; the configuration will then point to the newer kubectl version the next time you use the minikube start command with a correct --kubernetes-version.
If you are using both from time to time, I would suggest you to create a script that will change your PATH according to your needs.
According to the documentation, you must use a kubectl version that is within one minor version of your cluster. For example, a v1.2 client should work with v1.1, v1.2, and v1.3 masters. Using the latest version of kubectl helps avoid unforeseen issues.
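The PATH-precedence behaviour described above can be illustrated with a small shell experiment (all names and paths here are hypothetical; on Windows the equivalent diagnostic is where kubectl):

```shell
# Demo: the shell resolves a command from the first PATH entry containing it,
# just as Windows resolves kubectl from whichever directory comes first.
mkdir -p /tmp/pathdemo/mine /tmp/pathdemo/docker
printf '#!/bin/sh\necho mine\n'   > /tmp/pathdemo/mine/kubectl-demo
printf '#!/bin/sh\necho docker\n' > /tmp/pathdemo/docker/kubectl-demo
chmod +x /tmp/pathdemo/mine/kubectl-demo /tmp/pathdemo/docker/kubectl-demo
# "mine" is listed first, so it wins -- analogous to fixing the PATH order.
resolved=$(PATH="/tmp/pathdemo/mine:/tmp/pathdemo/docker:$PATH" sh -c 'kubectl-demo')
echo "$resolved"   # prints "mine"
```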

Mysterious Filebeat 7 X-Pack issue using Docker image

I've also posted this as a question on the official Elastic forum, but that doesn't seem super frequented.
https://discuss.elastic.co/t/x-pack-check-on-oss-docker-image/198521
At any rate, here's the query:
We're running a managed AWS Elasticsearch cluster — not ideal, but that's life — and run most of the rest of our stuff with Kubernetes. We recently upgraded our cluster to Elasticsearch 7, so I wanted to upgrade the Filebeat service we have running on the Kubernetes nodes to capture logs.
I've specified image: docker.elastic.co/beats/filebeat-oss:7.3.1 in my daemon configuration, but I still see
Connection marked as failed because the onConnect callback failed:
request checking for ILM availability failed:
401 Unauthorized: {"Message":"Your request: '/_xpack' is not allowed."}
in the logs. Same thing when I've tried other 7.x images. A bug? Or something that's new in v7?
The license file is the Apache License, and filebeat version inside the container reports build a4be71b90ce3e3b8213b616adfcd9e455513da45.
It turns out that starting in one of the 7.x versions, index lifecycle management (ILM) checks were turned on by default. ILM is an X-Pack feature, so turning it on by default means that Filebeat will do an X-Pack check by default.
This can be fixed by adding setup.ilm.enabled: false to the Filebeat configuration. So it's not a bug per se in the OSS Docker build.
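In filebeat.yml the fix is a single line; a sketch is below, with the output section included only for context (the endpoint is a placeholder, not from the original post):

```yaml
# filebeat.yml -- disable the ILM check so the OSS build skips the X-Pack probe.
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["https://your-aws-es-endpoint:443"]   # placeholder endpoint
```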

Docker download layers sequentially

Is there a way to make Docker download the layers of an image sequentially instead of in parallel? I need this because our repository is very strict (or dodgy) about networking issues. I get a lot of EOF errors like:
time="2016-06-14T13:15:52.936846635Z" level=debug msg="Error contacting registry http://repo.server/v1/: Get http://repo.server/v1/images/b6...be/layer: EOF"
time="2016-06-14T13:15:52.936924310Z" level=error msg="Download failed: Server error: Status 0 while fetching image layer (b6...be)"
This is when running Docker 1.11.2 on Windows, but on a CentOS 7 VM it all works fine with the default 1.9.1.
I noticed one difference: 1.9.1 does the downloads sequentially. So I tried to install 1.9.1 on Windows, but the quick start terminal automatically downloaded and installed the 1.11.2 version of the boot2docker ISO.
So is there some argument, config option, or environment variable I can set to make Docker download the layers one at a time?
Or am I jumping to the wrong conclusion in assuming the concurrent downloads are causing my network errors?
Thanks
It seems that a max-concurrent-downloads option was recently added to the configuration of the docker daemon. Here is the link to the docs, although I have not had a chance to test it myself yet.
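On Linux the option typically goes in /etc/docker/daemon.json; a minimal sketch, assuming a daemon version that supports the setting:

```json
{
  "max-concurrent-downloads": 1
}
```

Restart the Docker daemon afterwards so the setting takes effect; a value of 1 makes layer downloads effectively sequential.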
