It is easy to work with OpenShift as a Container-as-a-Service; see the detailed steps. So, via the Docker client I can work with OpenShift.
I would like to work on my laptop with Minishift, the local version of OpenShift.
Which Docker registry should I use in combination with Minishift? Minishift doesn't have its own registry, I guess.
So, I would like to do:
$ mvn clean install -- build the application
$ oc login -- to your Minishift environment
$ docker build -t myproject/mynewapplication:latest .
$ docker tag -- ?? normally to an OpenShift docker registry entry
$ docker push -- ?? to a local docker registry?
$ on 1st time: $ oc new-app mynewapplication
$ on updates: $ oc rollout latest dc/mynewapplication -n myproject
I just use docker and oc cluster up, which is very similar. The internal registry that is deployed has an address in the 172.30.0.0/16 range (i.e. the default service network).
$ oc login -u system:admin
$ oc get svc -n default | grep registry
docker-registry ClusterIP 172.30.1.1 <none> 5000/TCP 14m
Now, this service IP is internal to the cluster, but it can be exposed on the router:
$ oc expose svc docker-registry -n default
$ oc get route -n default | grep registry
docker-registry docker-registry-default.127.0.0.1.nip.io docker-registry 5000-tcp None
In my example, the route was docker-registry-default.127.0.0.1.nip.io
With this route, you can log in with your developer account and your token:
$ oc login -u developer
$ docker login docker-registry-default.127.0.0.1.nip.io -p $(oc whoami -t) -u developer
Login Succeeded
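Once logged in against the route, you can retag and push an image through that hostname. A minimal sketch, assuming the project name myproject and the image name from the question (if the route is not TLS-terminated, the docker daemon may also need this host listed as an insecure registry):
$ docker tag myproject/mynewapplication:latest docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest
$ docker push docker-registry-default.127.0.0.1.nip.io/myproject/mynewapplication:latest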
Note: oc cluster up is ephemeral by default; the docs provide instructions on how to make this setup persistent.
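For reference, on older oc 3.x clients a persistent setup was typically requested with host-directory flags; treat the exact flag names as an assumption and check oc cluster up --help for your version:
$ oc cluster up --host-data-dir=/var/lib/origin/openshift.local.data --use-existing-config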
One additional note: if you want OpenShift to try to use some of its native builders, you can simply run oc new-app . --name <appname> from within your source code directory.
$ cat Dockerfile
FROM centos:latest
$ oc new-app . --name=app1
--> Found Docker image 49f7960 (5 days old) from Docker Hub for "centos:latest"
* An image stream will be created as "centos:latest" that will track the source image
* A Docker build using binary input will be created
* The resulting image will be pushed to image stream "app1:latest"
* A binary build was created, use 'start-build --from-dir' to trigger a new build
* This image will be deployed in deployment config "app1"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/app1 --port=[port]' later
* WARNING: Image "centos:latest" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "centos" created
imagestream "app1" created
buildconfig "app1" created
deploymentconfig "app1" created
--> Success
Build scheduled, use 'oc logs -f bc/app1' to track its progress.
Run 'oc status' to view your app.
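As the output says, the binary build is then triggered from the source directory, for example:
$ oc start-build app1 --from-dir=. --follow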
There is an internal image registry. You log in to it and push images just as you suggest; you just need to know the address and which credentials to use. For details see:
http://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-push-an-image-to-the-internal-image-registry.html
Related
I would like to pull a Docker image that was built inside an OpenShift Container Platform 3.9 cluster out of that cluster. To this end I try the following:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u $username -p $api_token my-cluster:443
image=$(oc get is/my-is -o jsonpath='{.status.tags[0].items[0].dockerImageReference}')
docker pull $image
Now docker login works, but docker pull produces the error message
lookup docker-registry.default.svc on 1.2.3.4: no such host
where 1.2.3.4 is a placeholder for my local nameserver according to /etc/resolv.conf and $image is of the form docker-registry.default.svc:5000/registry/my-is@sha256:my-id.
Am I doing something wrong or could it be that the cluster administrator must first expose the registry (but should it not be exposed by default)? If I try oc get svc -n default as suggested here I get this error message:
User "my-user" cannot list services in project "default"
So what steps are needed (preferably without intervention by the cluster's administrator) for me successfully pulling out that image? Would the situation change if the pull occurred in a container also executing inside the OpenShift cluster?
The lead provided in a comment was the right one (thanks!): pull through the externally reachable registry address used for docker login, rather than the internal docker-registry.default.svc name, which only resolves inside the cluster. The following script now works; no intervention by a cluster admin was required:
username=$(oc whoami)
api_token=$(oc whoami -t)
docker login -u $username -p $api_token my-cluster:443
docker pull my-cluster:443/my-project/my-is
docker images
I am trying to run a Nix-built Docker image in tarball form. With docker, docker load -i <path> followed by a docker run works fine. Now I've uploaded the tarball to Artifactory and am trying to run the image on K8s with something like:
$ kubectl run foo-service --image=<internal Artifactory>/foo-service/foo-service-latest.tar.gz
However all I see is:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
foo-service 1 1 1 0 2h
Is it possible to load an image from a (remote) tarball in K8s? If yes, what is the command to do so?
There is no way to do that directly in Kubernetes.
You can docker load the tarball and then docker push the image to a registry (you can host a private registry in Kubernetes or use a public one), and after that run kubectl run; see the sketch below.
Minikube also has a registry addon for local development.
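A minimal sketch of that workflow, assuming a reachable registry at registry.example.com:5000 (a hypothetical address) and that the tarball contains an image tagged foo-service:latest:
$ docker load -i foo-service-latest.tar.gz
$ docker tag foo-service:latest registry.example.com:5000/foo-service:latest   # registry address is hypothetical
$ docker push registry.example.com:5000/foo-service:latest
$ kubectl run foo-service --image=registry.example.com:5000/foo-service:latest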
I was trying to deploy a Docker image I had created via OpenShift. I followed the instructions in: http://www.opensourcerers.org/importing-an-external-docker-image-into-red-hat-openshift-v3/
However, when I tried to push my Docker image to the OpenShift registry, it did not succeed, as shown below:
[root@mymachine ~]# docker push 172.30.155.111:5000/default/mycostumedaemon
The push refers to a repository [172.30.155.111:5000/default/mycostumedaemon]
0a4a35d557a6: Preparing
025eba1692ec: Preparing
5332a889b228: Preparing
e7b287e8074b: Waiting
149636c85012: Waiting
f96222d75c55: Waiting
no basic auth credentials
Following are the Docker and OpenShift versions:
[root@mymachine ~]# docker --version
Docker version 1.11.0, build 4dc5990
[root@mymachine ~]# oc version
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5
Could someone help me out with this? I am not sure what "no basic auth credentials" means, since the OpenShift user and server user are root users with all privileges.
After performing oc login to authenticate on your cluster, you have to switch to the default project:
$ oc project default
Check the service IP of your registry:
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.xx.220 <none> 5000/TCP 76d
kubernetes 172.30.0.1 <none> 443/TCP,53/UDP,53/TCP 76d
router 172.30.xx.xx <none> 80/TCP,443/TCP,1936/TCP 76d
Check your token:
$ oc whoami -t
trSZhNVi8F_N3Pxxx
Now you can authenticate on your registry:
docker login -u test -e any@mail.com -p trSZhNVi8F_N3Pxxx 172.30.xx.220:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
A one-line login:
docker login -u developer -p $(oc whoami -t) $(oc registry info)
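To complete the push from the question, a minimal sketch using the registry service IP shown above (this assumes your user is allowed to push to the target project, here default):
$ docker tag mycostumedaemon 172.30.xx.220:5000/default/mycostumedaemon
$ docker push 172.30.xx.220:5000/default/mycostumedaemon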
I used the "jenkins-1-centos7" image to deploy Jenkins in my OpenShift cluster and run projects on it.
It worked successfully, and after many configuration changes I created a new image from this Jenkins container.
Now I want to use this image as a base for further development, but deploying a pod from this image fails with the error "ErrImagePull".
From my investigation, I found that OpenShift needs the image to be present in the Docker registry in order to deploy pods successfully.
I deployed another app for the Docker registry, but now when I try to push my updated image into this registry it fails with the message "authentication required".
I've given admin privileges to my user.
docker push <local-ip>:5000/openshift/<new-updated-image>
The push refers to a repository [<local-ip>:5000/openshift/<new-updated-image>] (len: 1)
c014669e27a0: Preparing
unauthorized: authentication required
How can I make sure that the modified image gets deployed successfully?
This answer will probably need edits because your issue can be caused by a lot of things. (I assume you are using OpenShift Origin, the open-source version, since I see the CentOS 7 image for Jenkins.)
First of all, you need to deploy the OpenShift registry in the default project:
$ oc project default
$ oadm registry --config=/etc/origin/master/admin.kubeconfig \
--service-account=registry
A registry pod will be deployed. On top of the registry a service is created (a sort of endpoint that acts as a load balancer in front of your pods).
This service has an IP inside the 172.30.0.0/16 range.
You can check this IP in the web console, or run (assuming you're still in the default project):
$ oc get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry 172.30.22.11 <none> 5000/TCP 8d
kubernetes 172.30.32.13 <none> 443/TCP,53/UDP,53/TCP 9d
router 172.30.42.42 <none> 80/TCP,443/TCP,1936/TCP 9d
So you'll need to use the service IP of your docker-registry to authenticate. You'll also need a token:
$ oc whoami -t
D_OPnWLdgEbiKJzvG1fm9dYdX..
Now you're able to perform the login and push the image:
$ docker login -u admin -e any@mail.com \
  -p D_OPnWLdgEbiKJzvG1fm9dYdX 172.30.22.11:5000
WARNING: login credentials saved in /root/.docker/config.json
Login Succeeded
$ docker tag myimage:latest 172.30.22.11:5000/my-proj/myimage:latest
$ docker push 172.30.22.11:5000/my-proj/myimage:latest
Hope this helps. Please give some feedback on this answer and tell me whether it works for you or which new issues you're facing.
Everything is fine, only the last line is giving an authentication error:
docker push 172.30.22.11/my-proj/myimage:latest
😢
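A hedged suggestion rather than a confirmed fix: check that the tag and push include the :5000 port you logged in to, that the token has not expired, and that your user may push to the project, for example:
$ docker login -u admin -p $(oc whoami -t) 172.30.22.11:5000
$ oc policy add-role-to-user system:image-builder admin -n my-proj   # grants push rights; assumes the user is named admin
$ docker tag myimage:latest 172.30.22.11:5000/my-proj/myimage:latest
$ docker push 172.30.22.11:5000/my-proj/myimage:latest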
I have created an image locally (a Docker image),
and when I run the image with oc run AA --image=(docker image name),
it runs and crashes after a few seconds. There are no logs in oc or docker.
The error shown by oc describe is CrashLoopBackOff.
Try prepending the minishift/openshift internal docker registry to the docker image name:
# e.g. get the registry address
$ minishift openshift registry
172.30.1.1:5000
$ oc run AA -i -t --image=172.30.1.1:5000/<project>/<docker image name>:<tag>
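Note that oc run can only pull the image if it has been pushed into that registry (or imported as an image stream) first. A minimal sketch, assuming the image was built against the Minishift docker daemon (eval $(minishift docker-env)); otherwise push via the exposed registry route described earlier:
$ eval $(minishift docker-env)
$ docker login -u developer -p $(oc whoami -t) 172.30.1.1:5000
$ docker tag <docker image name>:<tag> 172.30.1.1:5000/<project>/<docker image name>:<tag>
$ docker push 172.30.1.1:5000/<project>/<docker image name>:<tag>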