I want to pull all images from Nexus and push them to Harbor. I tried to do that with:
docker login -u <user> -p <password> https://harbor.domaine.com/
docker tag nexus.domaine.com/image:tag harbor.domaine.com/project_name/image:tag
But the problem is that I have a lot of images, so this way I'd need to write one line for every image. I want something like a loop to pull and push all images from Nexus. Any help?
You can use a bash script, for example:
#!/bin/bash
docker login -u <user> -p <password> https://harbor.domaine.com/
# Retag every local nexus.domaine.com image for Harbor and push it
for image_name in $(docker images --format="{{.Repository}}:{{.Tag}}" | grep nexus.domaine.com); do
  new_image_name=$(echo "$image_name" | sed 's#nexus.domaine.com#harbor.domaine.com/project_name#')
  docker tag "$image_name" "$new_image_name"
  docker push "$new_image_name"
done
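Note that the loop above only retags images that are already pulled locally. If you first need to pull everything from Nexus, here is a minimal sketch using the standard Docker registry v2 API (assuming your Nexus Docker repository exposes it, that jq is installed, and that user:pass are your registry credentials):
#!/bin/bash
# Enumerate every repository and tag in the Nexus registry via the v2 API,
# then pull each one so the tag/push loop above can pick it up.
registry=nexus.domaine.com
for repo in $(curl -s -u user:pass "https://${registry}/v2/_catalog" | jq -r '.repositories[]'); do
  for tag in $(curl -s -u user:pass "https://${registry}/v2/${repo}/tags/list" | jq -r '.tags[]'); do
    docker pull "${registry}/${repo}:${tag}"
  done
done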
I've been developing regsync to do exactly this. For a quick start, there's a workshop I recently gave at the Docker all-hands, which covers not only the copy but also the cleanup steps, or there's the quick start in the project itself.
To implement, create a regsync.yml:
version: 1
creds:
  - registry: nexus.domaine.com
    # credentials here
  - registry: harbor.domaine.com
    # credentials here
defaults:
  parallel: 2
  interval: 60m
sync:
  - source: nexus.domaine.com/image
    target: harbor.domaine.com/project_name/image
    type: repository
And then run regsync:
docker container run -it --rm \
-v "$(pwd)/regsync.yml:/home/appuser/regsync.yml:ro" \
regclient/regsync:latest -c /home/appuser/regsync.yml once
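If you want the copy to repeat on the interval from the defaults section instead of running a single pass, regsync also has a server mode; a sketch, dropping once and running detached:
docker container run -d --restart=unless-stopped \
  -v "$(pwd)/regsync.yml:/home/appuser/regsync.yml:ro" \
  regclient/regsync:latest -c /home/appuser/regsync.yml server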
Where are the images in registry:2?
If I exec into the container running the registry:
docker exec -it kind-registry sh
Do you know where I can see a list of images, so that I can list and delete them?
You can use the API to curl the private registry. Further reading at the Docker Forum.
curl http://my.registry.com/v2/_catalog
Should work.
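If you also need per-repository tags and deletion, the same v2 API covers it. A sketch, assuming the registry was started with REGISTRY_STORAGE_DELETE_ENABLED=true (the repository name imagename is a placeholder):
# List the tags of one repository
curl http://my.registry.com/v2/imagename/tags/list
# Grab the manifest digest from the Docker-Content-Digest response header,
# then delete the manifest by digest
digest=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  http://my.registry.com/v2/imagename/manifests/latest \
  | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')
curl -X DELETE "http://my.registry.com/v2/imagename/manifests/${digest}"
After a delete, the registry's garbage collector still has to run before disk space is reclaimed.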
I am trying to implement Watchtower, which automatically updates a running container when new changes are found in its Docker image.
These are the commands I used to set it up:
git clone https://github.com/linuxacademy/content-express-demo-app.git watchtower
cd watchtower/
git checkout dockerfile
docker login -u "MYDOCKERREPO"
docker image build -t MYDOCKERREPO/my-express .
docker image push MYDOCKERREPO/my-express
docker container run -d --name watched-app -p 80:3000 --restart always MYDOCKERREPO/my-express
docker container run -d --name watchtower \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower -i 15
vi .dockerignore
Dockerfile
.git
.gitignore
Then I added a comment in app.js and created a sample.js file.
docker image build -t MYDOCKERREPO/my-express --no-cache .
docker image push MYDOCKERREPO/my-express
I waited for many hours but no changes came. Also, while pushing the updated Docker image, it didn't show a single "Pushed"; every layer said "Layer already exists".
Can someone please help?
EDIT:
Dockerfile:
FROM node
RUN mkdir -p /var/node
ADD . /var/node/
WORKDIR /var/node
RUN npm install
CMD ./bin/www
I waited for many hours but no changes came. Also, while pushing the updated Docker image, it didn't show a single "Pushed"; every layer said "Layer already exists".
This means that none of the layers (changesets) you pushed differed from the ones already in the registry, so no new image digest was produced. Watchtower only detects and updates a container when the image has actually changed.
docker container run -d --name watchtower --restart always \
-v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 15
The image you're using is more than a year old at this point and might not be (and likely isn't) compatible with current Docker versions. The latest release of the Watchtower image is available at containrrr/watchtower:latest.
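To confirm whether a push actually produced something new for Watchtower to find, compare the image digest before and after pushing; a quick check, using the repository name from the question:
# Prints the repo digest recorded for the image after a push;
# if it is unchanged, Watchtower has nothing new to pull.
docker image inspect --format '{{index .RepoDigests 0}}' MYDOCKERREPO/my-express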
I want to create a docker image, start it as a container (to configure database credentials etc.), commit those changes, tag it and push it to the container registry:
from .gitlab-ci.yml:
configure_db_image:
  stage: docker_build
  tags:
    - docker-in-docker
  script:
    - docker login <gitlab-CI-CR> -u gitlab-ci-token -p $CI_JOB_TOKEN
    - docker pull <gitlab-CI-CR>/db-template/db-template-image:latest
    - docker tag <gitlab-CI-CR>/db-template/db-template-image:latest <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
    # Remove the container if it exists already
    - docker rm -f test-db-image-container || true
    - docker create -i -p 5432:5432 --name test-db-image-container --env 'CREATE_ONLY_ON_FIRST_RUN=yes' --env 'DB_USER=user' --env 'DB_PASS=pass' --env 'DB_NAME=dbname' <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
    - docker start -i test-db-image-container
    - docker stop test-db-image-container
    - docker commit test-db-image-container test-db-image
    - docker tag test-db-image <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
    - docker push <gitlab-CI-CR>/my-project/my-repo/test-db-image:latest
I don't see why, but despite the docker push, the image I pull from the registry isn't configured. Where am I going wrong?
This works as described. The issue is with the Dockerfile in the parent image, which declares the path where the database changes happen as a Docker volume. Data written to a volume is not captured by docker commit, so the changes never make it into the pushed image.
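You can verify this by inspecting the parent image; any path listed under Volumes is written to a volume at runtime and is therefore discarded by docker commit:
# Prints the paths the image declares as volumes
docker image inspect --format '{{json .Config.Volumes}}' <gitlab-CI-CR>/db-template/db-template-image:latest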
Does the google/docker-registry container exist solely to push/pull images from Google Cloud Storage? I am currently following their instructions on Git and have the docker-registry container running, but can't seem to pull from my bucket.
I started it with:
sudo docker run -d -e GCS_BUCKET=mybucket -p 5000:5000 google/docker-registry
I have a .tar Docker image stored in Cloud Storage, at mybucket/imagename.tar. However, when I execute:
sudo docker pull localhost:5000/imagename.tar
It results in:
2014/07/10 19:15:50 HTTP code: 404
Am I doing this wrong?
You need to docker push to the registry instead of copying your image tar manually.
From where your image is:
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
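To confirm the push actually landed in the bucket, you can list its contents with gsutil (assuming the Cloud SDK on your machine is authenticated):
# The registry stores layers and metadata as objects in the bucket
gsutil ls gs://bucketname/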
Then, from the place where you want to run the image (e.g. GCE):
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
The google/docker-registry image is preconfigured to use Google Cloud Storage buckets.
It should work with any storage backend (if the configuration is overridden), but its purpose is to be used with Google infrastructure.
The tar file of an exported image is meant for manually moving images between Docker hosts when there is no registry.
You should not upload tar files to the bucket.
To upload images, push to the docker-registry container; it will then save the image in the bucket.
The Google Compute Engine instance running the docker-registry container must be configured with read/write access to the bucket.
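One way to grant that access is to create the instance with a storage scope; a sketch (the instance name is a placeholder):
# Grants the VM read/write access to Cloud Storage via its service account
gcloud compute instances create registry-host \
  --scopes storage-rw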
I am running Boot2Docker v1.0.1 on Windows, and wish to fire up a Docker container I have created on a Google Compute Engine VM.
In order to do so, I need to save the container and upload it to Google Cloud Storage.
I issue the following command:
docker save --output=mycontainer.tar mycontainer:latest
The command completes without error. However, I cannot find the mycontainer.tar file anywhere on my hard drive.
Does anyone have any experience with this? If not, is there a better way to run containers on GCE VMs?
You can run google/docker-registry locally to push your container images to GCS.
docker run -ti --name gcloud-config google/cloud-sdk \
gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
--volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
And then run it on GCE to pull your containers from GCS.
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
I understand that you are using Boot2Docker on Windows.
On a similar setup, using OS X and Boot2Docker 1.1.0, the following works:
docker save --output mycontainer.tar mycontainer:latest
As does redirecting standard output:
docker save mycontainer:latest > mycontainer.tar
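The counterpart on the target host (e.g. the GCE VM) is docker load, which recreates the image from the archive:
# Import the image from the tar archive produced by docker save
docker load --input mycontainer.tar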
GCE now allows you to store Docker images for your projects using the gcloud command.
You can now run:
$ gcloud preview docker push gcr.io/YOUR-PROJECT/IMAGE-NAME
Source: https://cloud.google.com/tools/container-registry/#pushing_to_the_registry
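If the image isn't tagged for the registry yet, tag it first; a sketch with the placeholder names from the command above:
# Tag the local image for the Container Registry, then push it
docker tag IMAGE-NAME gcr.io/YOUR-PROJECT/IMAGE-NAME
gcloud preview docker push gcr.io/YOUR-PROJECT/IMAGE-NAME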