I'm trying to set up a GitHub workflow that builds an image and pushes it to a registry using the redhat-actions actions:
workflow.yaml
name: build-maven-runner
on:
  workflow_dispatch:
jobs:
  build-test-push:
    outputs:
      image-url: ${{ steps.push-to-artifactory.outputs.registry-path }}
      image-digest: ${{ steps.push-to-artifactory.outputs.digest }}
    name: build-job
    env:
      runner_memorylimit: 2Gi
      runner_cpulimit: 2
    runs-on: [ linux ]
    steps:
      - name: Clone
        uses: actions/checkout@v2
      - name: Pre-Login
        # podman-login: requires docker config repo auths
        # Error: TypeError: Cannot set property 'some.repo.com' of undefined
        run: |
          mkdir /home/runner/.docker/
          cat <<EOT >> /home/runner/.docker/config.json
          {
            "auths": {
              "some.repo.com": {}
            }
          }
          EOT
      - name: Login
        uses: redhat-actions/podman-login@v1
        with:
          registry: some.repo.com
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          auth_file_path: /tmp/podman-run-1000/containers/auth.json
      - name: Build
        id: build-image
        uses: redhat-actions/buildah-build@v2
        with:
          image: some-image
          tags: latest
          containerfiles: ./config/Dockerfile
          tls-verify: false
      - name: Push
        id: push-to-artifactory
        uses: redhat-actions/push-to-registry@v2
        with:
          image: ${{ steps.build-image.outputs.image }}
          tags: latest
          registry: some.other.repo.com/project
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          tls-verify: false
./config/Dockerfile
FROM .../openshift/origin-cli:4.10
USER root
RUN sudo yum update -y
RUN sudo yum install -y maven
RUN mvn -version
RUN oc version
But the Build step is failing with:
/usr/bin/buildah version
Version: 1.22.3
Go Version: go1.15.2
Image Spec: 1.0.1-dev
Runtime Spec: 1.0.2-dev
CNI Spec: 0.4.0
libcni Version:
image Version: 5.15.2
Git Commit:
Built: Thu Jan 1 00:00:00 1970
OS/Arch: linux/amd64
Overriding storage mount_program with "fuse-overlayfs" in environment
Performing build from Containerfile
/usr/bin/buildah bud -f /runner/_work/some-project/some-project/config/Dockerfile --format docker --tls-verify=false -t some-image:latest /runner/_work/some-project/some-project
chown /home/runner/.local/share/containers/storage/overlay/l: operation not permitted
time="2022-12-12T16:13:52Z" level=warning msg="failed to shutdown storage: \"chown /home/runner/.local/share/containers/storage/overlay/l: operation not permitted\""
time="2022-12-12T16:13:52Z" level=error msg="exit status 125"
Error: Error: buildah exited with code 125
I'm fairly out of ideas at this point. I was wondering whether it has to do with storage.conf, as mentioned here, but even after overriding storage.conf I still get the same error. This is what storage.conf originally looks like:
[storage]
driver = "overlay"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"
[storage.options]
additionalimagestores = [
]
[storage.options.overlay]
mountopt = "nodev,metacopy=on"
[storage.options.thinpool]
Does the problem lie deeper, perhaps in the Dockerfile base image openshift/origin-cli?
Any help would be appreciated.
I ran into this issue today while doing some tests locally; typically your CI/CD should give the correct permissions to your containers (or to the workers running your jobs). I fixed it by adding the --privileged flag when running my container. I do not recommend using that mode in production unless you are really sure of what you are doing. Perhaps not exactly your issue, but I'm dropping it here in case it helps someone else.
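For reference, a rough sketch of what that looks like when starting the build container by hand; the image name my-ci-runner-image:latest is just a placeholder, not something from the setups above:
# run the container with extended privileges (use with care, never blindly in production)
docker run --rm -it --privileged my-ci-runner-image:latest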
I have a problem running Docker in a Kubernetes runner.
I've installed the Kubernetes runner with Helm and set privileged mode to true:
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        namespace = "{{.Release.Namespace}}"
        image = "ubuntu:20.04"
        privileged = true
        allow_privilege_escalation = true
I've created a simple .gitlab-ci.yaml for testing:
stages:
  - docker_test
services:
  - docker:dind
docker_test:
  stage: docker_test
  image: docker:latest
  variables:
    DOCKER_HOST: "tcp://docker:2375"
  script:
    - docker version
But when I run this pipeline I get an error:
Running with gitlab-runner 14.6.0 (5316d4ac)
on gitlab-runner-gitlab-runner-5cc654bdf7-gjfvm augRojS5
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image docker:latest ...
Using attach strategy to execute scripts...
Preparing environment
00:06
Waiting for pod gitlab-runner/runner-augrojs5-project-30333904-concurrent-0k66kk to be running, status is Pending
Waiting for pod gitlab-runner/runner-augrojs5-project-30333904-concurrent-0k66kk to be running, status is Pending
ContainersNotReady: "containers with unready status: [build helper svc-0]"
ContainersNotReady: "containers with unready status: [build helper svc-0]"
Running on runner-augrojs5-project-30333904-concurrent-0k66kk via gitlab-runner-gitlab-runner-5cc654bdf7-gjfvm...
Getting source from Git repository
00:03
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/gurita/gurita-core/.git/
Created fresh repository.
Checking out fe720f2f as main...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:00
$ docker version
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.16.12
Git commit: e91ed57
Built: Mon Dec 13 11:40:57 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
I tried it without the variable, but in that case there is no /var/run/docker.sock.
You need to mount the host's docker socket:
[runners.kubernetes]
  image = "ubuntu:18.04"
  privileged = true
  [[runners.kubernetes.volumes.host_path]]
    name = "docker-socket"
    mount_path = "/var/run/docker.sock"
    read_only = false
    host_path = "/var/run/docker.sock"
(NOTE: this is from one of my old GitLab installations; I haven't tested it against the latest release.)
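With the host socket mounted like that, the job talks straight to the node's Docker daemon, so the docker:dind service and the DOCKER_HOST variable shouldn't be needed any more. A minimal sketch of the job side, reusing the stage from the question (adjust as needed):
docker_test:
  stage: docker_test
  image: docker:latest
  script:
    # the client finds the host daemon via the mounted /var/run/docker.sock
    - docker version
Keep in mind that all jobs then share the node's Docker daemon, so builds are no longer isolated from each other.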
Here's my full runner block. You can try swapping my config in for yours (make a backup of your old config first) and see if it works. Obviously change things as needed -- for example, I use a specific node pool, hence the node_selector and node_tolerations sections.
## Installation & configuration of gitlab/gitlab-runner
## See requirements.yaml for current version
gitlab-runner:
  install: true
  rbac:
    create: true
  runners:
    locked: false
    privileged: true
    cache:
      secretName: google-application-credentials
    config: |
      [[runners]]
        [runners.feature_flags]
          FF_GITLAB_REGISTRY_HELPER_IMAGE = true
          FF_SKIP_DOCKER_MACHINE_PROVISION_ON_CREATION_FAILURE = true
        [runners.kubernetes]
          image = "ubuntu:18.04"
          privileged = true
          [[runners.kubernetes.volumes.host_path]]
            name = "docker-socket"
            mount_path = "/var/run/docker.sock"
            read_only = false
            host_path = "/var/run/docker.sock"
          [runners.kubernetes.node_selector]
            "cloud.google.com/gke-nodepool" = "gitlab-runners"
          [runners.kubernetes.node_tolerations]
            "appName=gitlab" = "NoExecute"
      {{- if .Values.global.minio.enabled }}
      [runners.cache]
        Type = "gcs"
        Path = "gitlab-runner"
        Shared = true
        [runners.cache.gcs]
          BucketName = "runner-cache"
      {{ end }}
  podAnnotations:
    gitlab.com/prometheus_scrape: "true"
    gitlab.com/prometheus_port: 9252
Thank you for your hint about mounting docker.sock.
This worked for me:
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        image = "ubuntu:20.04"
        privileged = true
        [[runners.kubernetes.volumes.empty_dir]]
          name = "docker-emptydir"
          mount_path = "/var/run"
          medium = "Memory"
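For anyone finding this later: with /var/run shared via the emptyDir volume, the docker:dind service's unix socket becomes visible to the job container, so a job like the one in the question should work without the DOCKER_HOST variable. A sketch, assuming the same docker:dind service:
docker_test:
  stage: docker_test
  image: docker:latest
  services:
    - docker:dind
  script:
    # dockerd from the dind service writes its socket into the shared /var/run
    - docker version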
Thanks again
Hey, I'm trying to install Jenkins on a GKE cluster with this command:
helm install stable/jenkins -f test_values.yaml --name myjenkins
My versions of helm and kubectl, in case it matters:
helm version
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
The values were downloaded with helm inspect values stable/jenkins > test_values.yaml and then modified:
cat test_values.yaml
Master:
  adminPassword: 34LbGfq5LWEUgw  # local testing
  resources:
    limits:
      cpu: '500m'
      memory: '1024'
  podLabels:
  nodePort: 32323
  serviceType: ClusterIp
Persistence:
  storageClass: 'managed-nfs-storage'
  size: 5Gi
rbac:
  create: true
and I get a weird new error after updating:
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$ helm install stable/jekins --name myjenkins -f test_values.yaml
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
As I can see, you're trying to install stable/jekins, which isn't in the Helm repo, instead of stable/jenkins. Please update your question if it's just a misspelling and I'll update my answer, but I've tried your command:
$helm install stable/jekins --name myjenkins -f test_values.yaml
and got the same error:
Error: failed to download "stable/jekins" (hint: running `helm repo update` may help)
EDIT: To solve subsequent errors like:
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:258:11: executing "jenkins/templates/deprecation.yaml" at <fail "Master.* values have been renamed, please check the documentation">: error calling fail: Master.* values have been renamed, please check the documentation
and
Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:354:10: executing "jenkins/templates/deprecation.yaml" at <fail "Persistence.* values have been renamed, please check the documentation">: error calling fail: Persistence.* values have been renamed, please check the documentation
and so on, you also need to edit test_values.yaml:
master:
  adminPassword: 34LbGfq5LWEUgw
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
  podLabels:
  nodePort: 32323
  serviceType: ClusterIP
persistence:
  storageClass: 'managed-nfs-storage'
  size: 5Gi
rbac:
  create: true
And after that it's deployed successfully:
$helm install stable/jenkins --name myjenkins -f test_values.yaml
NAME: myjenkins
LAST DEPLOYED: Wed Jan 8 15:14:51 2020
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME AGE
myjenkins 1s
myjenkins-tests 1s
==> v1/Deployment
NAME AGE
myjenkins 0s
==> v1/PersistentVolumeClaim
NAME AGE
myjenkins 1s
==> v1/Pod(related)
NAME AGE
myjenkins-6c68c46b57-pm5gq 0s
==> v1/Role
NAME AGE
myjenkins-schedule-agents 1s
==> v1/RoleBinding
NAME AGE
myjenkins-schedule-agents 0s
==> v1/Secret
NAME AGE
myjenkins 1s
==> v1/Service
NAME AGE
myjenkins 0s
myjenkins-agent 0s
==> v1/ServiceAccount
NAME AGE
myjenkins 1s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default myjenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=myjenkins" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8080
kubectl --namespace default port-forward $POD_NAME 8080:8080
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
The stable repo is going to be deprecated very soon and is not being updated. I suggest using the jenkins chart from the Helm Hub instead.
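For example, something along these lines pulls the chart from the official Jenkins repository instead (the release and values-file names are just the ones used above; Helm 3 drops the --name flag and takes the release name as the first argument):
# add the official Jenkins chart repository and install from it
helm repo add jenkins https://charts.jenkins.io
helm repo update
# Helm 2 syntax, matching the commands above:
helm install jenkins/jenkins --name myjenkins -f test_values.yaml
# Helm 3 syntax:
helm install myjenkins jenkins/jenkins -f test_values.yaml
Note that newer versions of that chart rename some values again (e.g. master becomes controller), so check its README before reusing test_values.yaml as-is.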
I'm trying to set up Concourse CI locally by following the documentation. Everything goes great until I try to run a sample hello-world pipeline. The job results in this error...
runc create: exit status 1: container_linux.go:264: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:56: mounting \\\"/worker-state/3.8.0/assets/bin/init\\\" to rootfs \\\"/worker-state/volumes/live/a8d3b403-19e7-4f16-4a8a-40409a9b017f/volume/rootfs\\\" at \\\"/worker-state/volumes/live/a8d3b403-19e7-4f16-4a8a-40409a9b017f/volume/rootfs/tmp/garden-init\\\" caused \\\"open /worker-state/volumes/live/a8d3b403-19e7-4f16-4a8a-40409a9b017f/volume/rootfs/tmp/garden-init: permission denied\\\"\""
Looks like I'm getting a permissions error but I've double-checked that the container is running in privileged mode.
$ docker inspect --format='{{.HostConfig.Privileged}}' concourse_concourse_1
true
Docker Compose File
version: '3'
services:
  concourse-db:
    image: postgres
    environment:
      - POSTGRES_DB=concourse
      - POSTGRES_PASSWORD=concourse_pass
      - POSTGRES_USER=concourse_user
      - PGDATA=/database
  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
      - CONCOURSE_POSTGRES_HOST=concourse-db
      - CONCOURSE_POSTGRES_USER=concourse_user
      - CONCOURSE_POSTGRES_PASSWORD=concourse_pass
      - CONCOURSE_POSTGRES_DATABASE=concourse
      - CONCOURSE_EXTERNAL_URL
      - CONCOURSE_BASIC_AUTH_USERNAME
      - CONCOURSE_BASIC_AUTH_PASSWORD
      - CONCOURSE_NO_REALLY_I_DONT_WANT_ANY_AUTH=true
      - CONCOURSE_WORKER_GARDEN_NETWORK
Pipeline
---
jobs:
  - name: job-hello-world
    public: true
    plan:
      - task: hello-world
        config:
          platform: linux
          image_resource:
            type: docker-image
            source: {repository: busybox}
          run:
            path: echo
            args: [hello world]
Concourse Version
$ curl http://192.168.99.100:8080/api/v1/info
{"version":"3.12.0","worker_version":"2.0"}
Other Versions
$ docker --version
Docker version 18.04.0-ce, build 3d479c0
$ docker-machine --version
docker-machine version 0.14.0, build 89b8332
$ docker-compose --version
docker-compose version 1.21.0, build unknown
$ system_profiler SPSoftwareDataType
Software:
System Software Overview:
System Version: macOS 10.13.1 (17B1003)
Kernel Version: Darwin 17.2.0
Boot Volume: OSX
While everything on the surface may look up to date, it's important to note that docker-machine runs Docker inside a VM, which can get stale if you're not updating it on a regular basis, and Concourse needs kernel 3.19 or higher.
Running docker info can shed some light on the situation from the Docker server's perspective.
What worked for me was...
$ docker-compose down
$ docker-machine upgrade
$ docker-compose up -d
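If you want to confirm the stale-kernel theory before upgrading, something like this should show it (assuming the default docker-machine VM name, default):
# kernel running inside the docker-machine VM (Concourse needs 3.19 or newer)
docker-machine ssh default uname -r
# the Docker server also reports it under "Kernel Version"
docker info | grep -i 'kernel version'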
I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want on your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann only posted the above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image:
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to the local registry:
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your YAML file, set imagePullPolicy to IfNotPresent:
...
spec:
  containers:
    - name: <name>
      image: localhost:5000/<local-image-name>
      imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster, and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
5 (optional). Allow connections to the insecure registry on the worker nodes:
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
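After editing daemon.json, the Docker daemon on that node needs a restart for the change to take effect; on a systemd-based host that is typically:
# restart the daemon so it picks up the insecure-registries setting
sudo systemctl restart docker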
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
    - name: <name>
      image: <local-image-name>
      imagePullPolicy: Never
The easiest way to further analyze ErrImagePull problems is to ssh into the node and try to pull the image manually with docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case, your YAML file should have imagePullPolicy: Never; see below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
found this here
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it; minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps you should see a bunch of minikube containers already running.
The KVM utils are probably already on your machine, but they can be installed like this on CentOS/RHEL:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually "docker-desktop" (i.e. not a remote cluster).
(Right-click on the Docker icon, then select "Kubernetes" in the menu.)
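The same check can be done from a terminal, for example:
# list contexts (the active one is marked with *) and switch to the local cluster
kubectl config get-contexts
kubectl config use-context docker-desktop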
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, do a suitable docker tag, and create the manifest.
Kubernetes doesn't always go straight to the registry: it first looks for the image in local storage and only then pulls from the Docker registry.
Pull latest nginx image
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to the Docker registry to pull the image; it will bring up the pod instantly.
If the image is not present on the local machine, it will try to pull from the Docker registry and fail with an ErrImagePull error.
Also, if you set imagePullPolicy: Never, it will never look to the registry for the image and will fail with ErrImageNeverPull if the image is not found locally.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
        - image: test:test8970
          name: test
          imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
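With that configuration in place, the image (including the latest tag) is produced by the fabric8 plugin's build goal; roughly:
# package the jar and let the docker-maven-plugin build and tag the image
mvn clean package docker:build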
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl image. However, it didn't show up in crictl images.
Thanks to a comment on GitHub, I found out that the actual problem is that containerd uses different namespaces.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
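A full round trip might look like this (app-image:latest is just a placeholder for whatever your build produced):
# export the image from the default namespace...
ctr -n default images export exported-app-image.tar app-image:latest
# ...and import it into the namespace the kubelet actually looks at
ctr -n k8s.io images import exported-app-image.tar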
I was facing a similar issue. The image was present locally, but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
Then I rebuilt the image, redeployed the deployment YAML, and it worked.
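In command form, that sequence is roughly the following (reusing the image and manifest names from the question above):
# point the local docker client at minikube's Docker daemon
eval $(minikube -p minikube docker-env)
# remove the stale image and rebuild it inside minikube's daemon
docker rmi my/nginx:latest
docker build -t my/nginx:latest .
# redeploy the manifest
kubectl apply -f pod-yumserver.yaml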