OpenShift v3.1: Create app from an image from a remote repository - docker-registry

How do you create an application from an image that has been pulled from a remote repository?
So I have an image pulled from the repo ec2.xxx:5000.
docker pull ec2.xxx:5000/myimage
The pull was successful.
When I run docker images I can see the pulled image.
But I'm unable to run the following:
oc new-project myproject
oc new-app ec2.xxx:5000/myimage
Then I get:
The 'new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
Can someone explain how to create an application from such an image?
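A minimal sketch of one invocation to try, assuming the registry at ec2.xxx:5000 is reachable from the cluster nodes and this OpenShift version supports the --docker-image flag on oc new-app:
oc new-project myproject
# explicitly tell new-app that the argument is a Docker image on a remote registry
oc new-app --docker-image=ec2.xxx:5000/myimage --name=myimage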

Related

The push refers to repository [docker.io/chatapp/monorepo] An image does not exist locally with the tag: chatapp/monorepo

I'm trying to push a Bitbucket repository to a private repository on Docker Hub as a Docker image. The build succeeds up to the point where I get this error:
docker push chatapp/monorepo
+ docker push chatapp/monorepo
The push refers to repository [docker.io/chatapp/monorepo]
An image does not exist locally with the tag: chatapp/monorepo
Does this have anything to do with how the Dockerfile inside the Bitbucket repository is written? Or are there scripts missing in the bitbucket-pipeline.yml file?
I'm new to Docker and I can't seem to figure this out.
The error means that the image you're trying to push doesn't exist locally on the machine you're pushing from. Run docker images and check whether it's there: if it isn't, there's a problem with the pipeline step that is supposed to create it; if it's there under a different name, fix the name before pushing, as in the sketch below.
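A minimal sketch of what the pipeline step needs to run before the push can succeed, assuming the Dockerfile sits at the repository root:
docker build -t chatapp/monorepo .   # create the image locally under the exact name that will be pushed
docker images                        # confirm chatapp/monorepo now appears in the local image list
docker push chatapp/monorepo         # push only once the tag exists locally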

remote Docker Repository behaviour in Artifactory

I have a remote Docker repository configured in Artifactory (pointing to Docker Hub). To test it, I've created Docker image A and pushed it to Docker Hub.
The image is user-name/image:latest.
Now I can pull it from artifactory using artifactory-url/docker/user-name/image:latest.
Now I've updated image A to image B and pushed it to Docker Hub. When I remove my local images and pull the image again from Artifactory, I still get image A (so it seems the cache is used). When I set the Metadata Retrieval Cache Period setting to zero, I pull the updated image B.
All fine. Now I increase the Metadata Retrieval Cache Period setting again. I've deleted the image from Docker Hub and try to pull it again through Artifactory. This fails, although I was hoping it would just serve the image from the Artifactory cache.
I also cannot pull it from the cache directly: docker pull artifactory-url/docker-cache/user-name/image:latest.
Is there a way to use a docker image from artifactory which is deleted in the remote repository?
The first part you describe is OK and is the expected behavior. The second part is also expected behavior, and here is why: when you use a virtual repository as your Artifactory Docker registry, it always searches for artifacts in the local repositories first, then in the remote-cache, and only then in the remote itself. However, if Artifactory finds the package in a local or remote-cache repository, it also checks the remote for newer versions. This means that cached images that have been deleted from the remote can no longer be downloaded from the remote-cache in Artifactory, because Artifactory receives a 404 error from the remote repository. You can fix this by moving the image to a local repository; then you will be able to pull it, as in the sketch below.
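A hedged sketch of that move using Artifactory's REST move API; the repository keys and path layout below are assumptions based on the names in the question (docker-cache as the remote cache, docker-local as a local repository) and may differ in your setup:
# move the cached image path into a local repository so it survives deletion from the remote
curl -u admin:password -X POST \
  "https://artifactory-url/artifactory/api/move/docker-cache/user-name/image/latest?to=/docker-local/user-name/image/latest"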

dockerhub automated build from single repo with single dockerfile building multiple images

I have a single git repository on github with:
a Dockerfile that builds multiple images meant to be used together (a Maven build produces a WAR file and SQL files by downloading them from artifact repositories; a multi-stage build then creates a slim Tomcat image with the WAR and a slim MySQL image with the SQL data preloaded).
a docker-compose.yml file that uses the "target" instruction to build and run containers on the images from the multi-stage build.
This works well during development, but it forces users to build the images on their own computers. I want users to be able to just download the images. The setup should also use Docker Hub's automated builds to keep the images up to date.
How can I set this up? What command(s) or file(s) do I give the users so they can download the images and run the containers? If it is not possible, what can I do to make it possible (split the repo? copy-paste the Dockerfile? publish intermediate images to Docker Hub and ensure a correct build order? not use Docker Hub's automated build?)
To use dockerhub's automated builds you would need to build one image per Dockerfile and have one Dockerfile per repo. The image name comes from the source repository name (with the github org/user name as the docker image user name and the github repo name as the docker image name). Multistage builds work in automated builds but only one image is published per Dockerfile (the final image of the build).
You could build the images in your CI or even on your local machine and then push to dockerhub. You'd just need to have an account on dockerhub and be logged in to that account when you use the docker push command. When doing this push there doesn't have to be any mapping to GitHub repos but your image names should start with <dockerhub_user>/ as a kind of prefix (explained at https://docs.docker.com/docker-hub/repos/). It's ok if they are built under a different name as you could rename by retagging before pushing. This way you can also build the images however you like.
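A minimal sketch of that rename-and-push flow; the image names here are placeholders:
docker build -t monorepo-tomcat .                                 # build under any convenient local name
docker tag monorepo-tomcat <dockerhub_user>/monorepo-tomcat:1.0   # retag with your Docker Hub user as the prefix
docker login                                                      # log in to the Docker Hub account you push to
docker push <dockerhub_user>/monorepo-tomcat:1.0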
When you have images in dockerhub you can just refer to them in the docker-compose file using the form image: <dockerhub_user>/<dockerhub_image_name>:<tag>. The images will automatically be pulled when the user does docker-compose up.
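For example, a minimal docker-compose.yml fragment along those lines (service and image names are placeholders):
version: "3"
services:
  tomcat:
    image: <dockerhub_user>/monorepo-tomcat:latest   # pulled from Docker Hub, no local build needed
  mysql:
    image: <dockerhub_user>/monorepo-mysql:latest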
Here are some tips and links that should help your situation:
Automated builds are a convenient way to deploy your images.
This part is pretty easy. You'll need accounts with Docker Hub and Github. You can register these accounts for free.
When you create a repository on Docker Hub you can link it to your GitHub repository to automate the build.
Recommendations:
Split your services into separate Dockerfiles. Ideally you should use separate repositories: Docker Compose will pull them together at the end. A division of services will also help if anyone wants to implement e.g. a cloud database backend for their deployment.
Don't store database files inside a container; containers should be ephemeral.
For a robust design, test your builds.
Docker Hub automated builds are very flexible with the use of build hooks.
This part is a little tricky because I haven't found the best documentation. It also might not be necessary if you split your Dockerfile.
I've successfully created automated builds with multiple tags and targets using a hook at hooks/build but after reading the documentation it looks like you should also be able to use hooks/post_build.
Your hook could simply build the correct target and push the tag to Docker Hub.
For your repository that should look like:
#!/usr/bin/env bash
docker build --target lutece-mysql -t lutece/mysql .
docker push lutece/mysql
If you end up using hooks/build you might need to build the final target as the last step.
Recommendations:
If you need multiple tags for an image, use a hook at hooks/post_push to add the additional tags. That way each tag should point users to the same image, e.g.
#!/usr/bin/env bash
docker tag lutece/tomcat:master lutece/tomcat:latest
docker push lutece/tomcat:latest
Additionally, you can use build hooks to label your image with things like the build date and git commit, as in the sketch below.
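A hedged sketch of such a hooks/build script; IMAGE_NAME and SOURCE_COMMIT are environment variables Docker Hub provides to build hooks, and the label keys here are only illustrative:
#!/usr/bin/env bash
# add build metadata to the image as labels during the automated build
docker build \
  --label build-date="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --label vcs-ref="$SOURCE_COMMIT" \
  -t "$IMAGE_NAME" .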
Deployment with Docker Compose
Unfortunately I haven't done this part so I can't confirm how to get this to work.
With your repositories in Docker Hub and a working docker-compose.yml your clients may only need to run docker-compose up in the directory with your docker-compose.yml file. Docker Compose should pull in the images from Docker Hub.
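Assuming the compose file only references published images (no build: sections left in it), that boils down to:
docker-compose pull    # fetch the images from Docker Hub
docker-compose up -d   # start the containers in the background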

Docker: updating image and registry

What is the right workflow for updating and storing images?
For example:
I download source code from GitHub (project with Docker files, docker-compose.yml)
I run "docker build"
And I push the new image to Docker Hub (or AWS ECR)
I make some changes in the source code
Push the changes to GitHub
And what should I do now to update the registry (Docker Hub)?
A) Should I run "docker build" again and then push the new image (with a new tag) to the registry?
B) Should I somehow commit the changes to the existing image and update the existing image on Docker Hub?
This will depend on what you will use your Docker image for and what release policy you adopt.
My recommendation is that you sync the tags you keep on Docker Hub with the releases/tags you have in GitHub, and automate your build process as much as you can with a continuous integration tool like Jenkins and GitHub webhooks.
Then your flow becomes:
You make your code modifications and integrate them in GitHub, ideally using a pull-request scheme. This means your changes get merged into your master branch.
Jenkins is configured so that when master changes it builds against your Dockerfile and pushes the image to Docker Hub. This overwrites your "latest" tag and makes sure the latest tag on Docker Hub is always in sync with your master release on GitHub.
If you need to keep additional tags, typically because of different branches or releases of your software, you do the same as above, with the tag hooked up through Jenkins and GitHub webhooks to a non-master branch. For this, take a look at how the official libraries are organized on GitHub (for example the Postgres or MySQL images).
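A minimal sketch of the steps such a CI job runs on each change to master (image name and version are placeholders):
docker build -t myuser/myapp:latest .      # rebuild the image from the updated source
docker push myuser/myapp:latest            # replaces the previous "latest" on Docker Hub
# for a release, push an additional immutable versioned tag alongside "latest"
docker tag myuser/myapp:latest myuser/myapp:1.4.0
docker push myuser/myapp:1.4.0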

Trying to create an automated docker build

I am trying to learn Docker and create an automated build. I forked a git repo with a Dockerfile. I then added this repo to my Docker Hub account as described here:
When I tried to pull it, I got:
$ docker pull sukottokun/docker-drupal-env
Using default tag: latest
Pulling repository docker.io/sukottokun/docker-drupal-env
Tag latest not found in repository docker.io/sukottokun/docker-drupal-env
OK, so there is no "latest" tag, fine. After figuring out that this is different from a git tag (I think; I tried and failed to clone after pushing a git tag), I am now trying to add one. Since I can't pull the image, how can I add a tag?
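For reference, a minimal sketch of how a tag could be created and pushed from a local checkout of the forked repo; this bypasses the automated build, so it is only a workaround, and the image name is copied from the pull attempt above:
docker build -t sukottokun/docker-drupal-env:latest .   # build and tag in one step from the Dockerfile
docker push sukottokun/docker-drupal-env:latest         # the Docker Hub repository then has a "latest" tag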
