Updating an ImageStream from the Docker Hub registry in OpenShift - docker

I have a template that defines an "ImageStream" object:
{
  "apiVersion": "v1",
  "kind": "ImageStream",
  "metadata": {
    "name": "${APPLICATION_NAME}-img",
    "labels": {
      "app": "${APPLICATION_NAME}"
    }
  },
  "spec": {
    "tags": [
      {
        "name": "latest",
        "from": {
          "kind": "DockerImage",
          "name": "rlanhellas/${APPLICATION_NAME}"
        }
      }
    ]
  }
}
So, after creating it I got the image inside the OpenShift registry; the oc get is command returns this:
$ oc get is
NAME                   IMAGE REPOSITORY                                                                                                    TAGS     UPDATED
safepark-netcore-img   default-route-openshift-image-registry.apps.us-east-2.starter.openshift-online.com/safepark/safepark-netcore-img   latest   About an hour ago
My original image lives on Docker Hub, and my pipeline tool always updates the latest tag there. But the ImageStream in OpenShift is not updated, so I always get an old version of my image in OpenShift, and a new build is never triggered because the OpenShift image is not updated.
How can I "link" the ImageStream in OpenShift to my Docker Hub image and ensure that an updated image in Docker Hub updates the image in OpenShift?
Important: I'm using OpenShift Online with the Free plan.

If you want an image to automatically sync from an external registry into your OpenShift registry, you can use importPolicy to achieve this.
The OpenShift 3.11 documentation explains the importPolicy functionality.
Set importPolicy.scheduled to true to automatically sync the image:
apiVersion: v1
kind: ImageStream
metadata:
  name: ruby
spec:
  tags:
    - from:
        kind: DockerImage
        name: openshift/ruby-20-centos7
      name: latest
      importPolicy:
        scheduled: true
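As a usage note, the same thing can be done on an existing image stream from the oc CLI. A minimal sketch, assuming the application behind safepark-netcore-img is named safepark-netcore (adjust the source image to your actual Docker Hub repository):

# Point the image stream tag at Docker Hub and enable periodic re-import.
# The source image name is an assumption based on the question's template.
oc tag docker.io/rlanhellas/safepark-netcore:latest safepark-netcore-img:latest --scheduled=true

# Trigger a one-off re-import immediately instead of waiting for the schedule.
oc import-image safepark-netcore-img:latest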

Related

ArgoCD External helm values issues from gitlab url

I am currently having trouble deploying my applications with Helm on Argo CD.
I use an Application resource (and will move to an ApplicationSet next), which I have copied below; in it I must reference a values.yml from another repository in my GitLab.
I tried to put the link to the repository directly, but it does not work.
I haven't found any other solution for using values files from another GitLab repository.
Can you help me?
Thanks in advance!
My code:
My Application resource file:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: react-docker-app
  namespace: argocd
spec:
  syncPolicy:
    automated:
      selfHeal: true
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  source:
    repoURL: https://gitlab.com/api/v4/projects/40489526/packages/helm/stable
    targetRevision: 0.8.0
    chart: react-chart
    helm:
      valueFiles:
        - https://gitlab.com/maxdev42-gitops-projects/reactdockerapp2/-/blob/master/deployment/valtues.yaml
My values.yml from another repository:
image:
  repository: registry.gitlab.com/maxdev42/react-docker-app
  tag: "appv8"
I'm trying to use values files from other GitLab repositories to deploy my application on Argo CD with Helm.
The word you are looking for is OTS (off-the-shelf).
Here you have an example: https://github.com/argoproj/argocd-example-apps/tree/master/helm-dependency
In short, you have to define a new chart in your repo with a custom values.yaml, referring to the chart from https://gitlab.com/api/v4/projects/40489526/packages/helm/stable as a dependency.
values.yaml should be changed to:
react-chart:
  image:
    repository: registry.gitlab.com/maxdev42/react-docker-app
    tag: "appv7"

buildx call failed with: error: tag is needed when pushing to registry - why are tags in metadata-action not being read by build-push-action?

I am trying to create a GitHub Actions workflow, pytorch_error.yml, to automatically push Docker images to Docker Hub:
# This is a basic workflow to help you get started with Actions
name: Building and pushing Docker images to Docker hub

# Controls when the workflow will run
on:
  workflow_dispatch:
    branches: [main]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build_push_pytorch_docker_image:
    name: Build and push apex-pytorch-image image to Docker Hub
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Checkout Github repo
        uses: actions/checkout@v2
      - name: Log into Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Get Metadata (tags,labels) for Docker images
        id: meta_pytorch
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: kusur/apex-pytorch-image
      - name: Build and push Docker image to Docker Hub
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: .
          file: ./dockerfile-pytorch
          push: true
          tags: ${{ steps.meta_pytorch.ouputs.tags }}
          labels: ${{ steps.meta_pytorch.outputs.labels }}
Whenever I execute this code, I get the following error:
error: tag is needed when pushing to registry
Error: buildx call failed with: error: tag is needed when pushing to registry
Looking at the logs, I see that the tag is being generated at the previous step, i.e. "Get Metadata (tags,labels) for Docker images":
with:
  images: ***/apex-pytorch-image
  github-token: ***
Context info
eventName: workflow_dispatch
sha: 046137ce5ae09aac18ba44083cd061ac3a37e48a
ref: refs/heads/main
workflow: Building and pushing Docker images to Docker hub
action: meta_pytorch
actor: ***
runNumber: 2
runId: 1090239471
Processing tags input
type=schedule,pattern=nightly,enable=true,priority=1000
type=ref,event=branch,enable=true,priority=600
type=ref,event=tag,enable=true,priority=600
type=ref,event=pr,prefix=pr-,enable=true,priority=600
Processing flavor input
latest=auto
prefix=
suffix=
Docker image version
main
Docker tags
***/apex-pytorch-image:main
Docker labels
org.opencontainers.image.title=learning-audio-processing
org.opencontainers.image.description=Learning Audio Processing
org.opencontainers.image.url=https://github.com/***/learning-audio-processing
org.opencontainers.image.source=https://github.com/***/learning-audio-processing
org.opencontainers.image.version=main
org.opencontainers.image.created=2021-08-02T12:39:20.636Z
org.opencontainers.image.revision=046137ce5ae09aac18ba44083cd061ac3a37e48a
org.opencontainers.image.licenses=Unlicense
JSON output
{
  "tags": [
    "***/apex-pytorch-image:main"
  ],
  "labels": {
    "org.opencontainers.image.title": "learning-audio-processing",
    "org.opencontainers.image.description": "Learning Audio Processing",
    "org.opencontainers.image.url": "https://github.com/***/learning-audio-processing",
    "org.opencontainers.image.source": "https://github.com/***/learning-audio-processing",
    "org.opencontainers.image.version": "main",
    "org.opencontainers.image.created": "2021-08-02T12:39:20.636Z",
    "org.opencontainers.image.revision": "046137ce5ae09aac18ba44083cd061ac3a37e48a",
    "org.opencontainers.image.licenses": "Unlicense"
  }
}
Bake definition file
{
  "target": {
    "docker-metadata-action": {
      "tags": [
        "***/apex-pytorch-image:main"
      ],
      "labels": {
        "org.opencontainers.image.title": "learning-audio-processing",
        "org.opencontainers.image.description": "Learning Audio Processing",
        "org.opencontainers.image.url": "https://github.com/***/learning-audio-processing",
        "org.opencontainers.image.source": "https://github.com/***/learning-audio-processing",
        "org.opencontainers.image.version": "main",
        "org.opencontainers.image.created": "2021-08-02T12:39:20.636Z",
        "org.opencontainers.image.revision": "046137ce5ae09aac18ba44083cd061ac3a37e48a",
        "org.opencontainers.image.licenses": "Unlicense"
      },
      "args": {
        "DOCKER_META_IMAGES": "***/apex-pytorch-image",
        "DOCKER_META_VERSION": "main"
      }
    }
  }
}
but it is not being read by the build-push-action. This code is copied from Publishing Docker Images. Another file created from the same reference, pytorch_image.yml, executes without any issue, but the code in question keeps breaking. I am not able to make out any difference between pytorch_image.yml and pytorch_error.yml. Any help?

Upgrade Helm Error: the object has been modified; please apply your changes to the latest version and try again

I have successfully installed my Helm chart and was trying to change the value of my image version by running:
helm upgrade --set myAppVersion=1.0.7 myApp . --atomic --reuse-values
The upgrade fails with this error:
Error: UPGRADE FAILED: an error occurred while rolling back the release. original upgrade error: cannot patch "myappsecret" with kind Secret: Operation cannot be fulfilled on secrets "myappsecret": the object has been modified; please apply your changes to the latest version and try again: cannot patch "myappsecret" with kind Secret: Operation cannot be fulfilled on secrets "diffgramsecret": the object has been modified; please apply your changes to the latest version and try again
It's somehow related to a secret I have in my deployment.
This is the yaml file of the secret:
apiVersion: v1
data:
  .dockerconfigjson: {{ .Values.imagePullCredentials.gcrCredentials }}
kind: Secret
metadata:
  creationTimestamp: "2021-01-20T22:54:29Z"
  managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:.dockerconfigjson: {}
        f:type: {}
      manager: kubectl
      operation: Update
      time: "2021-01-20T22:54:29Z"
  name: myappsecret
  namespace: default
  resourceVersion: "2073"
  uid: 7c99cb08-5576-4fa3-b6f9-d8d11d76d32c
type: kubernetes.io/dockerconfigjson
This secret is used in my deployments to fetch the Docker image from our GCR registry.
I'm not sure why this is causing problems, because the only value I'm changing is the Docker image tag.
Can anybody help me with this?

Kubernetes imagePullSecrets not working; getting "image not found"

I have an off-the-shelf Kubernetes cluster running on AWS, installed with the kube-up script. I would like to run some containers that are in a private Docker Hub repository. But I keep getting a "not found" error:
> kubectl get pod
NAME                     READY   STATUS                                        RESTARTS   AGE
maestro-kubetest-d37hr   0/1     Error: image csats/maestro:latest not found   0          22m
I've created a secret containing a .dockercfg file. I've confirmed it works by running the script posted here:
> kubectl get secrets docker-hub-csatsinternal -o yaml | grep dockercfg: | cut -f 2 -d : | base64 -D > ~/.dockercfg
> docker pull csats/maestro
latest: Pulling from csats/maestro
I've confirmed I'm not using the new format of .dockercfg; mine looks like this:
> cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"REDACTED BASE64 STRING HERE","email":"eng@csats.com"}}
I've tried running the Base64 encode on Debian instead of OS X, no luck there. (It produces the same string, as might be expected.)
Here's the YAML for my Replication Controller:
---
kind: "ReplicationController"
apiVersion: "v1"
metadata:
  name: "maestro-kubetest"
spec:
  replicas: 1
  selector:
    app: "maestro"
    ecosystem: "kubetest"
    version: "1"
  template:
    metadata:
      labels:
        app: "maestro"
        ecosystem: "kubetest"
        version: "1"
    spec:
      imagePullSecrets:
        - name: "docker-hub-csatsinternal"
      containers:
        - name: "maestro"
          image: "csats/maestro"
          imagePullPolicy: "Always"
      restartPolicy: "Always"
      dnsPolicy: "ClusterFirst"
kubectl version:
Client Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0", GitVersion:"v1.0.3", GitCommit:"61c6ac5f350253a4dc002aee97b7db7ff01ee4ca", GitTreeState:"clean"}
Any ideas?
Another possible reason why you might see "image not found" is if the namespace of your secret doesn't match the namespace of the container.
For example, if your Deployment yaml looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mydeployment
  namespace: kube-system
Then you must make sure the Secret yaml uses a matching namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: kube-system
data:
  .dockerconfigjson: ****
type: kubernetes.io/dockerconfigjson
If you don't specify a namespace for your secret, it will end up in the default namespace and won't get used. There is no warning message. I just spent hours on this issue so I thought I'd share it here in the hope I can save somebody else the time.
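As a sketch of the fix, create the pull secret directly in the deployment's namespace (names are taken from the example above; the credentials are placeholders):

# Create the registry credential in the same namespace as the Deployment,
# so the pod's imagePullSecrets reference can resolve it.
kubectl create secret docker-registry mysecret \
  --namespace=kube-system \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL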
Docker generates a config.json file in ~/.docker/. It looks like:
{
  "auths": {
    "index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
what you actually want is:
{"https://index.docker.io/v1/": {"auth": "XXXXXXXXXXXXXX", "email": "email#company.com"}}
Note 3 things:
1) there is no auths wrapping
2) there is https:// in front of the URL
3) it's one line
Then you base64 encode that and use it as the data for the .dockercfg key:
apiVersion: v1
kind: Secret
metadata:
  name: registry
data:
  .dockercfg: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX==
type: kubernetes.io/dockercfg
Note again the .dockercfg line is one line (base64 tends to generate a multi-line string)
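A small sketch of that encoding step, assuming the one-line JSON above is saved in a file called dockercfg.json (a hypothetical name):

# GNU base64: -w0 disables line wrapping so the output stays on one line.
base64 -w0 < dockercfg.json
# On macOS, base64 has no -w flag; strip newlines instead:
base64 < dockercfg.json | tr -d '\n'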
Another reason you might see this error is due to using a kubectl version different than the cluster version (e.g. using kubectl 1.9.x against a 1.8.x cluster).
The format of the secret generated by the kubectl create secret docker-registry command has changed between versions.
A 1.8.x cluster expects a secret with the format:
{
  "https://registry.gitlab.com": {
    "username": "...",
    "password": "...",
    "email": "...",
    "auth": "..."
  }
}
But the secret generated by the 1.9.x kubectl has this format:
{
  "auths": {
    "https://registry.gitlab.com": {
      "username": "...",
      "password": "...",
      "email": "...",
      "auth": "..."
    }
  }
}
So, double check the value of the .dockercfg data of your secret and verify that it matches the format expected by your kubernetes cluster version.
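One way to double-check is to decode the secret's data back to JSON (a sketch; substitute your own secret name, and use base64 -D on macOS):

# Print the decoded .dockercfg payload; the backslash escapes the dot in the key name.
kubectl get secret docker-hub-csatsinternal -o jsonpath='{.data.\.dockercfg}' | base64 -d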
I've been experiencing the same problem. What I did notice is that in the example (https://kubernetes.io/docs/user-guide/images/#specifying-imagepullsecrets-on-a-pod) .dockercfg has the following format:
{
  "https://index.docker.io/v1/": {
    "auth": "ZmFrZXBhc3N3b3JkMTIK",
    "email": "jdoe@example.com"
  }
}
While the one generated by Docker on my machine looks something like this:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "ZmFrZXBhc3N3b3JkMTIK",
      "email": "email@company.com"
    }
  }
}
By checking the source code, I found that there is actually a test for this use case (https://github.com/kubernetes/kubernetes/blob/6def707f9c8c6ead44d82ac8293f0115f0e47262/pkg/kubelet/dockertools/docker_test.go#L280).
I can confirm that if you take just the contents of "auths" and encode that, as in the example, it will work for you.
Probably the documentation should be updated. I will raise a ticket on GitHub.
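If you have jq available, here is a one-liner sketch of that stripping-and-encoding step (-w0 is the GNU base64 flag; on macOS pipe through tr -d '\n' instead):

# Extract the object inside "auths" and base64-encode it on a single line.
jq -c '.auths' ~/.docker/config.json | base64 -w0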

Pulling images from private registry in Kubernetes

I have built a 4-node Kubernetes cluster running multi-container pods, all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like to be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core.
I have also done the above with .docker/config.json.
I have added the secret to the kube master and added imagePullSecrets: - name: docker.io to the pod configuration file.
When I create the pod I get the error message:
Error: image <user/image>:latest not found
If I log in and run docker pull, it will pull the image. I have tried this using docker.io and quay.io.
To add to what @rob said: as of Docker 1.7, the use of .dockercfg has been deprecated and Docker now uses a ~/.docker/config.json file. There is support for this type of secret in Kubernetes 1.1, but you must create it using different keys and a different type in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME                  TYPE                                  DATA
default-token-olob7   kubernetes.io/service-account-token   2
registrypullsecret    kubernetes.io/dockerconfigjson        1
Then, in your pod's yaml (or in the pod template of a replication controller), reference registrypullsecret:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: yourusername/privateimage:version
  imagePullSecrets:
    - name: registrypullsecret
If you need to pull an image from a private Docker Hub repository, you can use the following.
Create your secret key:
kubectl create secret docker-registry myregistrykey \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
secret "myregistrykey" created.
Then add the newly created key to your Kubernetes service account.
Retrieve the current service account:
kubectl get serviceaccounts default -o yaml > ./sa.yaml
Edit sa.yaml and add the imagePullSecrets entry after secrets:
imagePullSecrets:
- name: myregistrykey
Update the service account:
kubectl replace serviceaccount default -f ./sa.yaml
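Alternatively, the same service account change can be applied in one step with kubectl patch (a sketch using the secret name created above):

# Attach the pull secret to the default service account without editing YAML by hand.
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "myregistrykey"}]}'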
I can confirm that setting imagePullSecrets on the deployment did not work for me, but you can:
kubectl create secret docker-registry myregistrykey \
  --docker-server=DOCKER_REGISTRY_SERVER \
  --docker-username=DOCKER_USER \
  --docker-password=DOCKER_PASSWORD \
  --docker-email=DOCKER_EMAIL
kubectl edit serviceaccounts default
Add
imagePullSecrets:
- name: myregistrykey
to the end, after secrets; save and exit.
And it works. Tested with Kubernetes 1.6.7.
Kubernetes supports a special type of secret that you can create that will be used to fetch images for your pods. More details here.
For CentOS 7, the Docker config file is under /root/.dockercfg:
echo $(cat /root/.dockercfg) | base64 -w 0
Copy and paste the result into a secret YAML based on the old format:
apiVersion: v1
kind: Secret
metadata:
  name: docker-secret
type: kubernetes.io/dockercfg
data:
  .dockercfg: <YOUR_BASE64_JSON_HERE>
And it worked for me; I hope it helps you too.
The easiest way to create the secret with the same credentials as your Docker configuration is:
kubectl create secret generic myregistry --from-file=.dockerconfigjson=$HOME/.docker/config.json
This already encodes the data in base64.
If you can download the images with docker, then kubernetes should be able to download them too. But it is required to add this to your kubernetes objects:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: myregistry
      containers:
        # ...
Where myregistry is the name given in the previous command.
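For context, a minimal sketch of where that fragment sits in a full Deployment (all names and the image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
        - name: myregistry   # the secret created with kubectl create secret generic
      containers:
        - name: myapp
          image: registry.example.com/myapp:latest   # placeholder private image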
Go the easy way, but do not forget to define --type and add the secret to the proper namespace:
kubectl create secret generic YOURS-SECRET-NAME \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
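To verify what was stored, you can decode the secret back (a sketch; the backslash escapes the leading dot in the key, and base64 -d is the GNU flag, -D on macOS):

kubectl get secret YOURS-SECRET-NAME -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d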
