Still pretty new to this, so forgive me if I get anything wrong.
This is my code:
stages:
  - runSAST

run-sast-job:
  stage: runSAST
  image: maven:3.8.6-openjdk-11-sliim
  script:
    - mvn verify package sonar:sonar -Dsonar.host.url=https://sonarcloud.io/ -Dsonar.organization=myorganization -Dsonar.projectKey=myprojectkey -Dsonar.login=mytoken
The pipeline fails, and when I check the log it says:
Running with gitlab-runner 15.3.0~beta.42.gdb7789ca (db7789ca)
on blue-1.shared.runners-manager.gitlab.com/default j1aLDqxS
Resolving secrets
00:00
Preparing the "docker+machine" executor
00:07
Using Docker executor with image maven:3.8.6-openjdk-11-sliim ...
Pulling docker image maven:3.8.6-openjdk-11-sliim ...
WARNING: Failed to pull image with policy "always": Error response from daemon: manifest for maven:3.8.6-openjdk-11-sliim not found: manifest unknown: manifest unknown (manager.go:235:0s)
ERROR: Job failed: failed to pull image "maven:3.8.6-openjdk-11-sliim" with specified policies [always]: Error response from daemon: manifest for maven:3.8.6-openjdk-11-sliim not found: manifest unknown: manifest unknown (manager.go:235:0s)
I figured it might be the Maven or OpenJDK version I'm trying to install, but those are the latest versions. Any suggestions?
You have a typo in the image name; it's:
image: maven:3.8.6-openjdk-11-slim
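A quick way to catch a tag typo like this before committing is to check it locally (a sketch; assumes a reasonably recent Docker CLI is available on your workstation):

docker manifest inspect maven:3.8.6-openjdk-11-slim   # succeeds for a valid tag
docker manifest inspect maven:3.8.6-openjdk-11-sliim  # fails with "manifest unknown", just like the runner did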
I am trying to load a Docker image tarball onto my VM (RHEL 8.4) using Podman with this command:
podman image load -i /tmp/pgsql
I cannot load the image because of this error:
DEBU[0004] No compression detected
DEBU[0004] Using original blob without modification
Copying config 5gr88732911 done
Writing manifest to image destination
Storing signatures
DEBU[0004] Applying tar in /var/lib/containers/storage/overlay/9eb82f04c782ay3f5ik25911e60d75e221ce0fe82e49f0dmmf02c81a3161d1300/diff
DEBU[0005] Error pulling image ref /tmp/pgsql: Error committing the finished image: error adding layer with blob "sha256: 9eb82f04c782ay3f5ik25911e60d75e221ce0fe82e49f0dmmf02c81a3161d1300": Error processing tar file(exit status 1): open /etc/group: permission denied
**Error processing tar file(exit status 1): open /etc/group: permission denied**
DEBU[0005] Error deleting temporary directory: <nil>
DEBU[0005] parsed reference into "[overlay#/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]localhost/tmp/pgsql:latest"
DEBU[0005] Using blob info cache at /var/lib/containers/cache/blob-info-cache-v1.boltdb
DEBU[0005] Error pulling image ref /tmp/pgsql: Error determining manifest MIME type for dir:/tmp/pgsql: open /tmp/pgsql/manifest.json: not a directory
open /tmp/pgsql/manifest.json: not a directory
DEBU[0005] parsed reference into "[overlay#/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]localhost/tmp/pgsql:latest"
DEBU[0005] Error pulling image ref /tmp/pgsql:: Error initializing source oci:/tmp/pgsql:: open /tmp/pgsql/index.json: not a directory
open /tmp/pgsql/index.json: not a directory
Loaded image(s):
It seems that I cannot modify /etc/group. The VM is very hardened, so we do not have the rights to write to /etc/group, but I do have the rights to read it:
ls -al /etc/group
-rw-r--r--. 1 root root 1262 Apr 22 08:31 /etc/group
I've tried with another image and it isn't working either.
Does anyone have an idea how I can resolve this? I would be very grateful.
Thank you
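One standard first check on a hardened RHEL box is whether SELinux is blocking the layer extraction (a sketch; both commands only read state):

getenforce
sudo ausearch -m avc -ts recent   # lists recent SELinux denials, if any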
I am trying to install Kubernetes on my CentOS machine. When I initialize the cluster, I get the following error.
I should mention that I am behind a corporate proxy. I have already configured it for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf, and Docker works fine.
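For reference, that drop-in typically looks like the following (a sketch; the proxy URL and NO_PROXY entries are placeholders to adapt, and editing it must be followed by systemctl daemon-reload and systemctl restart docker):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxyxxxxx.xxxx.xxx:xxxx/"
Environment="HTTPS_PROXY=http://proxyxxxxx.xxxx.xxx:xxxx/"
Environment="NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.0.0/16"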
No matter how hard I look, I can't find a solution to this problem.
Thank you for your help.
# kubeadm init
W1006 14:29:38.432071 7560 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 14:29:38.432147 7560 version.go:103] falling back to the local client version: v1.19.2
W1006 14:29:38.432367 7560 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING HTTPProxy]: Connection to "https://192.168.XXX.XXX" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
# kubeadm config images pull
W1006 17:33:41.362395 80605 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 17:33:41.362454 80605 version.go:103] falling back to the local client version: v1.19.2
W1006 17:33:41.362685 80605 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
failed to pull image "k8s.gcr.io/kube-apiserver:v1.19.2": output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
Maybe the root certificates on your machine are outdated, so it does not consider the certificate of k8s.gcr.io valid. The message x509: certificate signed by unknown authority hints at this.
Try updating them: yum update ca-certificates || yum reinstall ca-certificates
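To see which certificate chain is actually being presented (and whether the corporate proxy is intercepting TLS), something like this can help (a sketch using openssl):

openssl s_client -connect k8s.gcr.io:443 -servername k8s.gcr.io </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject

If the issuer shown is your company's internal CA rather than a public one, the proxy is re-signing the traffic, and that CA certificate needs to be added to the system trust store.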
I just did a dig on k8s.gcr.io and added the IP it returned to /etc/hosts.
# dig k8s.gcr.io
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.2 <<>> k8s.gcr.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44303
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;k8s.gcr.io. IN A
;; ANSWER SECTION:
k8s.gcr.io. 21599 IN CNAME googlecode.l.googleusercontent.com.
googlecode.l.googleusercontent.com. 299 IN A 64.233.168.82
;; Query time: 72 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Nov 24 11:45:37 CST 2020
;; MSG SIZE rcvd: 103
# cat /etc/hosts
64.233.168.82 k8s.gcr.io
And now it works!
# kubeadm config images pull
W1124 11:46:41.297352 50730 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.4
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns:1.7.0
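For the record, the two manual steps above can be combined into one line (a sketch; assumes the A record is the last line dig +short prints, as in the output above):

echo "$(dig +short k8s.gcr.io | tail -n1) k8s.gcr.io" >> /etc/hosts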
I was also working with v1.19.2 and got the same error.
It seems to be related to the issue mentioned here (and, I think, also here).
I re-installed kubeadm on the node and ran the kubeadm init workflow again; it is now working with v1.19.3 and the errors are gone.
All master node images are pulled successfully.
Also verified with:
sudo kubeadm config images pull
(*) You can run kubeadm init with --kubernetes-version=X.Y.Z (1.19.3 in our case).
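For example, to pin the version explicitly:

sudo kubeadm init --kubernetes-version=1.19.3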
I had the same error. Maybe, as others say, it's because of an outdated certificate. I believe it's not required to delete anything.
The simple solution was running one of these two commands, which will re-authenticate to the container registries:
podman login
docker login
Source: podman-login
I had this issue on version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2"} when I tried joining a second control plane.
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.9.3: output: E0923 04:47:51.763983 1598 remote_image.go:242] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"k8s.gcr.io/coredns:v1.9.3\": failed to resolve reference \"k8s.gcr.io/coredns:v1.9.3\": k8s.gcr.io/coredns:v1.9.3: not found" image="k8s.gcr.io/coredns:v1.9.3"
time="2022-09-23T04:47:51Z"...
See #99321: it's now k8s.gcr.io/coredns/coredns:v1.9.3 instead of
k8s.gcr.io/coredns:v1.9.3, and I don't know why.
by kluevandrew
Reference: https://github.com/kubernetes/kubernetes/issues/112131
This worked for me; I am using containerd:
crictl pull k8s.gcr.io/coredns/coredns:v1.9.3
ctr --namespace=k8s.io image tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
Docker solution:
docker pull k8s.gcr.io/coredns/coredns:v1.9.3
docker tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
Check imageRepository in the kubeadm-config ConfigMap (or in your kubeadm config file, if you run something like kubeadm init --config=/tmp/kubeadm-config.yml).
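To inspect the current value on a running cluster (a sketch):

kubectl -n kube-system get configmap kubeadm-config -o yaml | grep imageRepository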
We have successfully deployed to OpenShift from a Dockerfile and can verify that it exists via:
oc get is -n my-project
my-image-a image-registry.openshift-image-registry.svc:5000/my-project/my-image-a
We would like to reference this from another Dockerfile like:
FROM my-image-a
This results in:
Pulling image my-image- ...
Warning: Pull failed, retrying in 5s ...
Warning: Pull failed, retrying in 5s ...
Warning: Pull failed, retrying in 5s ...
error: build error: failed to pull image: After retrying 2 times, Pull image still failed due to
error: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
How do we authenticate? We have no issue pushing the image, but pulling it does not work.
You can authenticate using this command:
docker login -u $(oc whoami) -p $(oc whoami -t) registry-openshift-image-registry.apps.<your-cluster-host>
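Once authenticated, the second Dockerfile can also reference the image by the full internal-registry path shown by oc get is above, rather than by the bare name (a sketch using the repository from the question):

FROM image-registry.openshift-image-registry.svc:5000/my-project/my-image-a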
I've started k3d with k3d create && k3d start.
All pods fail to start with the following error:
Warning FailedCreatePodSandBox 14s (x2 over 31s) kubelet,
k3d-k3s-default-server Failed to create pod sandbox: rpc error: code
= Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image
"docker.io/rancher/pause:3.1": failed to pull and unpack image
"docker.io/rancher/pause:3.1": failed to resolve reference
"docker.io/rancher/pause:3.1": failed to do request: Head
https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp:
lookup registry-1.docker.io: Try again
As recommended by a k3d contributor, I've exec'ed into the k3d server container and attempted to pull the image manually:
$ docker exec -it k3d-k3s-default-server sh
/ # ctr image pull docker.io/rancher/pause:3.1
docker.io/rancher/pause:3.1: resolving |--------------------------------------|
elapsed: 4.9 s total: 0.0 B (0.0 B/s)
ctr: failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: dial tcp: lookup registry-1.docker.io: Try again
In the host environment, docker pull docker.io/rancher/pause:3.1 works just fine.
I've seen a number of people resolve the issue by tweaking various DNS settings. But none described how they arrived at their particular solution.
Solving this issue would make me happy. Discovering a general diagnosis strategy would make me even happier.
What hasn't worked
From here:
I got the issue. I had one entry in
/etc/systemd/network/en0.networking. Deleted that file, and everything
is fine.
I have no files in /etc/systemd/network/.
I had the same issue with k3s not being able to pull images, and solved it by making /etc/resolv.conf on the host machine a symlink to /run/systemd/resolve/stub-resolv.conf with
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
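A general diagnosis step for this class of failure is to compare the DNS configuration inside the node container with the host's (a sketch; the container name is taken from the question):

docker exec k3d-k3s-default-server cat /etc/resolv.conf
cat /etc/resolv.conf

If the container's copy points at a loopback stub resolver (e.g. 127.0.0.53) that only exists on the host, lookups such as registry-1.docker.io will fail inside the container, which matches the error above.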