Deploying a modified Jenkins image in OpenShift fails

I deployed a "jenkins-1-centos7" image in my OpenShift cluster to run projects on Jenkins.
It worked successfully, and after many configurations I committed a new image from this Jenkins container.
Now I want to use this image as a base for further development, but deploying a pod from this image fails with the error "ErrImagePull".
During my investigation, I found that OpenShift needs the image to be present in a docker registry in order to deploy pods successfully.
I deployed a docker registry as another app, but when I try to push my updated image into this registry, it fails with the message "authentication required".
I've given admin privileges to my user.
docker push <local-ip>:5000/openshift/<new-updated-image>
The push refers to a repository [<local-ip>:5000/openshift/<new-updated-image>] (len: 1)
c014669e27a0: Preparing
unauthorized: authentication required
How can I make sure that the modified image gets deployed successfully?

This answer will probably need edits, because your issue can be caused by a lot of things. (I assume you are using OpenShift Origin (open source), since I see the CentOS 7 image for Jenkins.)
First of all, you need to deploy the OpenShift registry in the default project:
$ oc project default
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \
--service-account=registry
A registry pod will be deployed. On top of the registry a service will be created (a sort of endpoint that acts as a load balancer in front of your pods).
This service has an IP inside the 172.30 range.
You can check this IP in the web console, or run (assuming you're still in the default project):
$ oc get svc
NAME              CLUSTER-IP     EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.22.11   <none>        5000/TCP                  8d
kubernetes        172.30.32.13   <none>        443/TCP,53/UDP,53/TCP     9d
router            172.30.42.42   <none>        80/TCP,443/TCP,1936/TCP   9d
So you'll need to use the service IP of your docker-registry to authenticate. You'll also need a token:
$ oc whoami -t
D_OPnWLdgEbiKJzvG1fm9dYdX..
Now you're able to perform the login and push the image:
$ docker login -u admin -e any@mail.com \
-p D_OPnWLdgEbiKJzvG1fm9dYdX 172.30.22.11:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
$ docker tag myimage:latest 172.30.22.11:5000/my-proj/myimage:latest
$ docker push 172.30.22.11:5000/my-proj/myimage:latest
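Note the shape of the reference the internal registry expects: `<registry-ip>:<port>/<project>/<image>:<tag>`. A minimal sketch using the hypothetical values from this answer (substitute your own registry IP, project, and image):

```shell
# Hypothetical values; substitute your own registry IP, project, and image.
REGISTRY=172.30.22.11:5000
PROJECT=my-proj
IMAGE=myimage
TAG=latest

# The internal registry only accepts pushes to <registry>/<project>/<image>:<tag>,
# where <project> is an OpenShift project your user can push to.
REF="${REGISTRY}/${PROJECT}/${IMAGE}:${TAG}"
echo "$REF"   # 172.30.22.11:5000/my-proj/myimage:latest
```

If the port is missing from the tag, docker contacts a different endpoint than the one you logged in to, which can surface as an error even though the login itself succeeded.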
Hope this helps. You can give some feedback on this answer and tell whether it works for you or which new issues you're facing.

Everything is fine; only the last line gives an authentication error:
docker push 172.30.22.11/my-proj/myimage:latest
😢

Related

Microk8s trouble launching local docker image

I'm using the latest versions of microk8s and docker on the same VM. The microk8s registry is enabled.
I retagged my image argus:
$ docker images
REPOSITORY              TAG        IMAGE ID       CREATED       SIZE
argus                   0.1        6d72b6be9981   3 hours ago   164MB
localhost:32000/argus   registry   6d72b6be9981   3 hours ago   164MB
then I pushed it
$ docker push localhost:32000/argus:registry
The push refers to repository [localhost:32000/argus]
c8a05c6fda3e: Pushed
5836f564d6a0: Pushed
9e3dd069b4a1: Pushed
6935b1ceeced: Pushed
d02e8e9f8523: Pushed
c5129c726314: Pushed
0f299cdf8fbc: Pushed
edaf6f6a5ef5: Pushed
9eb034f85642: Pushed
043895432150: Pushed
a26398ad6d10: Pushed
0dee9b20d8f0: Pushed
f68ef921efae: Pushed
registry: digest: sha256:0a0ac9e076e3249b8e144943026bc7c24ec47ce6559a4e087546e3ff3fef5c14 size: 3052
All of this seemingly works fine, but when I try to deploy a pod with:
$ microk8s kubectl create deployment argus --image=argus
deployment.apps/argus created
$ microk8s kubectl get pods
NAME                     READY   STATUS         RESTARTS   AGE
argus-84c8dcc968-27nlz   0/1     ErrImagePull   0          9s
$ microk8s kubectl logs argus-84c8dcc968-27nlz
Error from server (BadRequest): container "argus" in pod "argus-84c8dcc968-27nlz" is waiting to start: trying and failing to pull image
The image cannot be pulled. I tried microk8s ctr images ls, but this does not tell me anything.
So what is it that I'm doing wrong here?
Update:
A bit of an update here; when I try:
$ microk8s ctr image pull localhost:32000/argus:registry
ctr: failed to resolve reference "localhost:32000/argus:registry": failed to do request: Head "https://localhost:32000/v2/argus/manifests/registry": http: server gave HTTP response to HTTPS client
So it seems that it does not like getting an HTTP response from my local repository. I looked into the config at /var/snap/microk8s/current/args/containerd-template.toml, and there the localhost repository is correctly configured:
[plugins."io.containerd.grpc.v1.cri".registry]
  # 'plugins."io.containerd.grpc.v1.cri".registry.mirrors' are namespace to mirror mapping for all namespaces.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io", ]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:32000"]
      endpoint = ["http://localhost:32000"]
I'm running all of this on a CentOS 8 VM. When I installed docker,
I needed to do it with sudo dnf install docker-ce --nobest, because otherwise there was some kind of conflict with containerd. Maybe it has something to do with this?
Okay, there were multiple issues at play here, and I think I solved them all. First of all, I made a mistake with the Docker image: it was a test image, but it should have contained something that runs continuously, because once PID 1 exits the container gets restarted; microk8s/Kubernetes assumes there is a problem. That's why there was a crash loop.
Second, to check which repositories are present in the local registry, it's easiest to curl the registry's REST API with:
$ curl http://host:32000/v2/_catalog
to get a list of all images, and:
$ curl http://host:32000/v2/{repositoryName}/tags/list
to get all tags for a given repo.
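The catalog endpoint returns a small JSON document. As a sketch, with a hypothetical response, you can extract the repository names like this (using python3 in case jq is not installed):

```shell
# Hypothetical response shape from http://host:32000/v2/_catalog
CATALOG='{"repositories":["argus","myimage"]}'

# Print one repository name per line
echo "$CATALOG" | python3 -c 'import json, sys; print("\n".join(json.load(sys.stdin)["repositories"]))'
```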
Lastly, to pull from the registry to the cluster manually without getting the https error, it's necessary to add the --plain-http option like this:
$ microk8s ctr image pull --plain-http localhost:32000/repo:tag
You can use kubectl describe to check the pod.
I guess it tries to pull "argus" from docker.io.
Have you tried adding localhost:32000 to the image parameter?
microk8s kubectl create deployment argus --image=localhost:32000/argus:registry

Unable to PULL image into minikube from insecure private registry - http: server gave HTTP response to HTTPS client

On Ubuntu 18, I installed Docker (19.03.12) from these instructions
https://docs.docker.com/engine/install/ubuntu/
And then went through these steps
manage docker as non-root user
https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
start on boot using systemd
https://docs.docker.com/engine/install/linux-postinstall/#configure-docker-to-start-on-boot
and set up a private docker registry using this
docker run -d -p 5000:5000 -e REGISTRY_DELETE_ENABLED=true --restart=always --name registry registry:2
I also added this to the daemon.json file
{ "insecure-registries" : ["my.registrydomain.lan:5000"] }
And restarted the docker daemon
sudo /etc/init.d/docker restart
I checked docker info to make sure the insecure-registry setting was applied, and I saw this at the end, so it seems OK:
Insecure Registries:
my.registrydomain.lan:5000
127.0.0.0/8
On the same machine I start minikube (1.12.3) with this command
minikube start --driver=docker --memory=3000 --insecure-registry=my.registrydomain.lan:5000
So everything is running fine, and I proceed to apply my deployments using kubectl, except when I get to the pod that needs to pull the container from the local registry I get an ErrImagePull status. Here is part of my deployment:
spec:
  containers:
  - name: my-container
    image: my.registrydomain.lan:5000/name:1.0.0.9
    imagePullPolicy: IfNotPresent
When I describe the pod that failed using
kubectl describe pod mypod-8474577f6f-bpmp2
I see this message
Failed to pull image "my.registrydomain.lan:5000/name:1.0.0.9": rpc error: code = Unknown desc = Error response from daemon: Get https://my.registrydomain.lan:5000/v2/: http: server gave HTTP response to HTTPS client
EDIT: I forgot to mention that I am able to PUSH my images into the registry without any issues from a separate machine over HTTP (the machine is Windows 10, and I set the insecure-registry option in the daemon config).
I tried to reproduce your issue with the exact same settings that you provided, and it works just fine. The image is pulled without any problem. I tested this on my Debian 9 and on a fresh Ubuntu installation with these settings:
minikube version: v1.12.3
docker version: v19.03.12
k8s version: v1.18.3
ubuntu version: v18
What I've done that is not described in the question is to place an entry in the minikube container's hosts file:
root@minikube:/# cat /etc/hosts
...
10.128.5.6 my.registrydomain.lan
...
And the tag/push commands:
docker tag 4e2eef94cd6b my.registrydomain.lan:5000/name:1.0.0.9
docker push my.registrydomain.lan:5000/name:1.0.0.9
Here's the describe output from the pod:
Normal Pulling 8m19s (x5 over 10m) kubelet, minikube Pulling image "my.registrydomain.lan:5000/name:1.0.0.9"
As suggested in the comments already, you may want to check this GitHub case. It goes through a couple of solutions to your problem:
First, check your hosts file and update it correctly if you are hosting your repository on another node. The second solution is related to pushing images into the repository: it turned out for that user that both the insecure-registries entry and the docker push command are case-sensitive. The third one is to use systemd to control the docker daemon.
Lastly, if those don't help, I would clear all settings: uninstall docker, clear the docker configuration, and start again from scratch.
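On the case-sensitivity point: docker repository references must be lowercase, and in the case above the insecure-registries entry had to match the pushed reference exactly. A small illustration with a hypothetical mixed-case host, normalized portably with tr:

```shell
# Hypothetical mixed-case host; docker repository references must be lowercase,
# and the daemon.json insecure-registries entry should match the pushed reference.
HOST="My.RegistryDomain.lan:5000"
LOWER=$(echo "$HOST" | tr '[:upper:]' '[:lower:]')
echo "$LOWER"   # my.registrydomain.lan:5000
```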

GCP: Unable to pull docker images from our GCP private container registry on ubuntu/debian VM instances

I am trying to pull a docker container from our private GCP container registry on a regular VM instance (i.e. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which outputs a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image)
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the vm instance and works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and it also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull docker images from my private container registry on an Ubuntu or Debian VM, even though docker works perfectly well pulling images from other repositories (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set the variables for your platform:
VERSION=2.0.0
OS=linux  # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64  # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then extract the binary and install it:
tar xzvf "./docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz" --to-stdout docker-credential-gcr | sudo tee /usr/bin/docker-credential-gcloud > /dev/null && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
Note the default policy for compute instances:
VM instances, including those in Google Kubernetes Engine clusters,
must have the correct storage access scopes configured to push or pull
images. By default, VMs can pull images when Container Registry is in
the same project.
If you run gcloud auth configure-docker, the auth information is saved under your personal home directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, docker looks for auth info under root's home directory and doesn't find anything there.
You should run both commands either with or without sudo.
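In other words, docker resolves its credential store relative to the invoking user's home directory. A sketch of the two paths involved (assuming default docker config locations):

```shell
# Without sudo, docker reads credentials from the invoking user's config:
PLAIN_CONFIG="$HOME/.docker/config.json"
# Under sudo, docker runs as root and reads root's config instead:
SUDO_CONFIG="/root/.docker/config.json"

echo "plain docker uses: $PLAIN_CONFIG"
echo "sudo docker uses:  $SUDO_CONFIG"
```

So either run gcloud auth configure-docker with sudo as well, or run docker pull without sudo.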

Openshift & docker - which registry can I use for Minishift?

It is easy to work with OpenShift as a Container-as-a-Service; see the detailed steps. So, via the docker client I can work with OpenShift.
I would like to work on my laptop with Minishift. That's the local version of Openshift on your laptop.
Which docker registry should I use in combination with Minishift? Minishift doesn't have its own registry, I guess.
So, I would like to do:
$ mvn clean install -- building the application
$ oc login to your minishift environment
$ docker build -t myproject/mynewapplication:latest .
$ docker tag -- ?? normally to an openshift docker registry entry
$ docker push -- ?? to a local docker registry?
$ on 1st time: $ oc new-app mynewapplication
$ on updates: $ oc rollout latest dc/mynewapplication -n myproject
I just use docker and oc cluster up, which is very similar. The internal registry that is deployed has an address in the 172.30.0.0/16 space (i.e. the default service network).
$ oc login -u system:admin
$ oc get svc -n default | grep registry
docker-registry   ClusterIP   172.30.1.1   <none>   5000/TCP   14m
Now, this service IP is internal to the cluster, but it can be exposed on the router:
$ oc expose svc docker-registry -n default
$ oc get route -n default | grep registry
docker-registry   docker-registry-default.127.0.0.1.nip.io   docker-registry   5000-tcp   None
In my example, the route was docker-registry-default.127.0.0.1.nip.io
With this route, you can log in with your developer account and your token
$ oc login -u developer
$ docker login docker-registry-default.127.0.0.1.nip.io -p $(oc whoami -t) -u developer
Login Succeeded
Note: oc cluster up is ephemeral by default; the docs can provide instructions on how to make this setup persistent.
One additional note: if you want OpenShift to try to use some of its native builders, you can simply run oc new-app . --name <appname> from within your source code directory.
$ cat Dockerfile
FROM centos:latest
$ oc new-app . --name=app1
--> Found Docker image 49f7960 (5 days old) from Docker Hub for "centos:latest"
* An image stream will be created as "centos:latest" that will track the source image
* A Docker build using binary input will be created
* The resulting image will be pushed to image stream "app1:latest"
* A binary build was created, use 'start-build --from-dir' to trigger a new build
* This image will be deployed in deployment config "app1"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/app1 --port=[port]' later
* WARNING: Image "centos:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "centos" created
imagestream "app1" created
buildconfig "app1" created
deploymentconfig "app1" created
--> Success
Build scheduled, use 'oc logs -f bc/app1' to track its progress.
Run 'oc status' to view your app.
There is an internal image registry. You log in to it and push images just as you suggest. You just need to know the address and what credentials you need. For details see:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html

Unable to push docker image to Openshift Origin Docker registry

I was trying to deploy a docker image I had created via OpenShift. I followed the instructions in: http://www.opensourcerers.org/importing-an-external-docker-image-into-red-hat-openshift-v3/
However, when I tried to push my docker image to the OpenShift registry, it did not succeed, as shown below:
[root@mymachine ~]# docker push 172.30.155.111:5000/default/mycostumedaemon
The push refers to a repository [172.30.155.111:5000/default/mycostumedaemon]
0a4a35d557a6: Preparing
025eba1692ec: Preparing
5332a889b228: Preparing
e7b287e8074b: Waiting
149636c85012: Waiting
f96222d75c55: Waiting
no basic auth credentials
Following are the docker and OpenShift versions:
[root@mymachine ~]# docker --version
Docker version 1.11.0, build 4dc5990
[root@mymachine ~]# oc version
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5
Could someone help me out with this? I'm not sure what "no basic auth credentials" means, since the OpenShift user and server user are root users with all privileges.
After performing oc login to authenticate on your cluster, you have to switch to the default project:
$ oc project default
Check the service ip of your registry:
$ oc get svc
NAME              CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.xx.220   <none>        5000/TCP                  76d
kubernetes        172.30.0.1      <none>        443/TCP,53/UDP,53/TCP     76d
router            172.30.xx.xx    <none>        80/TCP,443/TCP,1936/TCP   76d
Check your token:
$ oc whoami -t
trSZhNVi8F_N3Pxxx
Now you can authenticate on your registry:
docker login -u test -e any@mail.com -p trSZhNVi8F_N3Pxxx 172.30.xx.220:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
One-stroke login:
docker login -u developer -p $(oc whoami -t) $(oc registry info)
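Building on the one-stroke login, a sketch of the full tag-and-push sequence; the image and project names here are hypothetical, and in a live cluster you would take the address from oc registry info instead of hard-coding it:

```shell
# Hypothetical registry address; in a live cluster use: REGISTRY=$(oc registry info)
REGISTRY=172.30.1.1:5000
PROJECT=myproject
IMAGE=myimage

# The commands you would run (printed here as a dry run):
echo "docker tag ${IMAGE}:latest ${REGISTRY}/${PROJECT}/${IMAGE}:latest"
echo "docker push ${REGISTRY}/${PROJECT}/${IMAGE}:latest"
```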
