Import an image from an archive - docker

I need to deploy the selenium/standalone-chrome image to Docker.
The problem is that I use a corporate OpenShift with a private registry. There is no way to upload an image to the registry or to load it through Docker (the Docker service is not exposed).
I managed to export a tar file from my local machine using 'docker save -o'. I uploaded this archive to Artifactory as an artifact and can now download it.
Question: how can I create or import an image from a binary archive containing its layers?
Thanks in advance.

Even though you're using OpenShift, you can still push with Docker, since the registry is exposed by default: you need your username (oc whoami) along with the token (oc whoami --show-token).
Before proceeding, make sure you have an ImageStream, since one is required in order to push images.
Once you have these, log in from your host:
docker login -u `oc whoami` -p `oc whoami --show-token` registry.your.openshift.fqdn.tld:443
Now, you just need to build your image
docker build . -t registry.your.openshift.fqdn.tld:443/your-image-stream/image-name:version
Finally, push it!
docker push registry.your.openshift.fqdn.tld:443/your-image-stream/image-name:version
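Since the question starts from a tar archive produced by docker save rather than a Dockerfile, a minimal sketch of the full flow could look like this (the archive name is just an example, and docker load is assumed to restore the original selenium/standalone-chrome tag):
# load the layers from the archive into the local Docker engine
docker load -i selenium-standalone-chrome.tar
# retag the loaded image for the OpenShift registry and the target image stream
docker tag selenium/standalone-chrome registry.your.openshift.fqdn.tld:443/your-image-stream/standalone-chrome:latest
# push it to the exposed registry
docker push registry.your.openshift.fqdn.tld:443/your-image-stream/standalone-chrome:latest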

Related

Can I query an OpenShift Docker repository whether a given image _exists_ without pulling it?

I have a situation where I need to wait for an image to show up on a given docker registry (this is an OpenShift external registry) before continuing my script with additional oc-commands.
Before that happens, the external docker registry has no knowledge whatsoever of this image. Afterwards it is available as :latest.
Can this be done programmatically? (Preferably without trying to download the image.)
My order of preference:
oc command
docker command
A REST api (using oc or docker credentials)
Assuming the OpenShift registry works similarly to Docker Hub, you can do this:
curl --silent -f -lSL https://index.docker.io/v1/repositories/$1/tags/$2 > /dev/null
source
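If the registry in question is the OpenShift integrated registry, a similar check works against the standard Docker Registry v2 API using your OpenShift token; a minimal sketch, where REGISTRY, PROJECT, IMAGE and TAG are placeholders for your own values:
TOKEN=$(oc whoami --show-token)
# -f makes curl exit non-zero when the manifest (and therefore the tag) does not exist
curl --silent -f -H "Authorization: Bearer $TOKEN" https://REGISTRY/v2/PROJECT/IMAGE/manifests/TAG > /dev/null && echo "image exists"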

How to access the locally built docker-image on the docker-swarm manager?

While trying to create a service with docker-machine, I got an "image doesn't exist" error on the swarm manager node. When I ran docker images on the manager node, no image was there, as expected. But on the host Docker side I do have those images. I want to access these images on the manager node. I've read a few articles which mentioned that I may have to upload the image to Docker Hub and then pull it from there, but I want to access it locally. Is there any way to do this? I'm a newbie to Docker.
This is the command what I tried on my manager machine:
docker#manager:~$ docker service create --name "api-client" -p 4200:4200 api_client
This is my docker images output:
REPOSITORY TAG IMAGE ID CREATED SIZE
api_client latest 097b19c4deb8 27 hours ago 1.15GB
But in my docker#manager terminal, the list of Docker images is empty.
The problem is that there is no repository to hold the image. The image needs to be pulled from a repository onto each node in the Swarm before it can run. In general you need to do the following:
Set up a repository. If you want a local repository there is a guide here, but it will be some hassle to get it up and running in an "insecure http" configuration. An easier way is to get yourself a free Docker Hub account and put your image there.
Tag your local image with the repository name. How to do this is shown in the guide above.
docker tag <local image> <repository>/<image:tag>
Log in to the repository (if it is hosted in the cloud) and push your image to it:
docker login
docker push <repository>/<image>:<tag>
To run the image (your command):
docker service create --name "api-client" -p 4200:4200 <repository>/<image>:<tag>
You can also try to pull an image into the local cache of a node using:
docker pull <repository>/<image>:<tag>
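If you would rather keep everything local than use Docker Hub, a minimal sketch of running a plain registry as a swarm service (the "insecure http" caveat from the guide still applies; the port and names are just examples):
# start a registry service inside the swarm, published on port 5000
docker service create --name registry --publish published=5000,target=5000 registry:2
# tag and push the locally built image to it
docker tag api_client 127.0.0.1:5000/api_client:latest
docker push 127.0.0.1:5000/api_client:latest
# the service now references an image that every node can pull
docker service create --name "api-client" -p 4200:4200 127.0.0.1:5000/api_client:latest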

Docker: How to get docker image to GitHub

I built a docker image like this:
sudo docker image build -t docker_image_gotk3 .
If I execute
sudo docker images I can see the line:
REPOSITORY TAG IMAGE ID CREATED SIZE
docker_image_gotk3 latest c13f7fcdb11d 14 minutes ago 20.4MB
I searched the complete file system and I didn't find a single file named docker_image_gotk3. How do I actually get it?
You have to export the Docker image to a file in order to push it to GitHub:
docker save -o docker_image_gotk3.tar docker_image_gotk3
ls -sh docker_image_gotk3.tar
20.4M docker_image_gotk3.tar
GitHub doesn't appear to have a Docker registry service as of now.
Maybe you could try tracking your image in Docker Hub as an alternative to what Tibi02 proposes?
Just create an account at https://hub.docker.com/ if you don't have one already, and do the following:
docker login in your terminal to authenticate in Docker Hub
docker image push your_username/docker_image_gotk3:latest to upload your image to the registry
Then you should be able to see it at https://cloud.docker.com/repository/docker/your_username/docker_image_gotk3, and download your image with docker image pull your_username/docker_image_gotk3:latest
Hope this helps!
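Note that docker image push expects the image to already be tagged with your Docker Hub namespace; a minimal sketch, with your_username standing in for your account:
docker tag docker_image_gotk3 your_username/docker_image_gotk3:latest
docker image push your_username/docker_image_gotk3:latest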

How to pull docker images from public registry and push it to private openshift?

I need to pull all the images referenced in an OpenShift template file; in my case it's OpenWhisk.
I'm trying to deploy this project on a private network, so I don't have access to Docker's official registry from there and thus need to push the images myself.
I was hoping there is a script/tool to automate this process.
There is no such tool/script available, but you can write a small shell script to do it.
If the public Docker Hub registry is not allowed, then either use a separate private registry
or
pull the image on your local laptop, then tag it and push it to the OpenShift registry.
After pushing all the images to OpenShift, import your OpenShift template to deploy your application.
Below are the steps for a single image; you can define a list of images and loop over it (a sketch follows the commands).
docker pull imagename
oc login https://127.0.0.1:8443 --token=<hidden_token> #copy from https://your_openshift_server:port/console/command-line
oc project test
oc create imagestream imagename
docker login -u `oc whoami` -p `oc whoami --show-token` your_openshift_server:port
docker tag imagename your_openshift_server:port/openshift_projectname/imagename:tag
docker push your_openshift_server:port/openshift_projectname/imagename:tag
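Wrapped in a loop, a minimal sketch could look like the following, assuming images.txt lists one image per line and the server/project values are the same placeholders as above:
oc login https://127.0.0.1:8443 --token=<hidden_token>
oc project test
docker login -u `oc whoami` -p `oc whoami --show-token` your_openshift_server:port
while read -r img; do
  name="${img##*/}"   # drop any registry/namespace prefix
  name="${name%%:*}"  # drop any tag to get the image stream name
  docker pull "$img"
  oc create imagestream "$name"
  docker tag "$img" your_openshift_server:port/openshift_projectname/"$name":latest
  docker push your_openshift_server:port/openshift_projectname/"$name":latest
done < images.txt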
You can get more details on the page suggested by Graham Dumpleton.
Graham Dumpleton's book talks about this. You create a list (JSON) of all the images used and import that into the openshift namespace. Since your OpenShift is offline/disconnected, you'll also change any remote registry to the URL of the internal, hosted registry.
Example that imports all JBoss images: https://github.com/projectatomic/adb-utils/blob/master/services/openshift/templates/common/jboss-image-streams.json
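If the list is already in image stream JSON form like the example above, importing it is a single command; a minimal sketch, assuming the file has been downloaded locally and you are allowed to create resources in the shared openshift namespace:
oc create -f jboss-image-streams.json -n openshift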

Awaiting gcloud docker -- push

I'm building a deployment script in Node.js; one part calls the gcloud CLI through require('child_process').spawn(...) to push the already built Docker images. I execute the following command:
gcloud docker -- push myImage
This all works great and the image gets uploaded. But the problem is that gcloud docker opens a new process to push my image, and the process I spawned closes before the pushing of the image is done.
The problem is that I want to delete the built images locally directly afterwards.
I've been looking in the gcloud docker documentation but I don't see any argument for this.
Is there a way to know that the process of uploading the images was completed?
Edit:
I did find a way to do it using only docker, but I'd like a universal solution (working on both Windows and Linux environments).
After some more research in the Google documentation, I found this authentication page.
It tells you to create a service account and use the JSON private key you get as the password for docker login. This way you don't need an OAuth token for your automated services; you can use this JSON key instead.
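A minimal sketch of that login, assuming keyfile.json is the downloaded service account key and gcr.io is your registry host:
# the special username _json_key tells the registry that the password is a service account key
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json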
You can check all the images by running this command:
sudo docker images
Take a note of the "IMAGE ID"; it will be used when tagging and deleting the image.
When you build a Docker image, tag it first by running this command:
docker tag "IMAGE ID" gcr.io/{the Google Container Registry path}:{version}
You can push any built image by running this command:
gcloud docker -- push gcr.io/{the google container registry path}:{version}
When pushing, you will notice that a list of layers is pushed to your Google Container Registry; see the example below:
$ sudo gcloud docker -- push gcr.io/{the google container registry path}:{version}
The push refers to repository [gcr.io/{the google container registry path}]
43d35f91f441: =================> Pushed
3b93beb428bf: Layer already exists
629fa6a1373d: =================> Pushed
0f82335d5733: Layer already exists
c216b39a9ab6: Layer already exists
ccbd0c2af699: Layer already exists
38788b6810d3: Layer already exists
cd7100a72410: Layer already exists
v1: digest: sha256:**************************************************************** size: 1992
You can check all the images by running this command:
sudo docker images
Take a note of the "IMAGE ID" of the image you need to delete.
Run the command:
sudo docker rmi "IMAGE ID"
If the image can't be deleted, you have to stop the container that is still using it and prune the stopped containers:
sudo docker container stop "the container ID"
sudo docker container prune
Then you can delete the image.
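As for waiting on the push itself: when run from a shell, gcloud docker -- push does not normally return until the underlying docker push has finished, so chaining the cleanup on its exit status is one option; a minimal sketch, assuming IMAGE holds the full gcr.io path:
# docker rmi only runs if the push exited successfully
gcloud docker -- push "$IMAGE" && docker rmi "$IMAGE"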
