How to fix Docker invalid reference format?

I am trying to run Docker inside a shell script. This is what my script looks like:
#!/bin/bash
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
IMAGE=$(aws ecr describe-images --repository-name repo --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]')
echo $IMAGE
docker pull https://<account-id>.dkr.ecr.us-east-1.amazonaws.com/repo:$IMAGE
docker run -d -p 8080:8080 https://<account-id>.dkr.ecr.us-east-1.amazonaws.com/repo:$IMAGE
But when I run the script, I keep running into
docker: invalid reference format.
See 'docker run --help'.
and I'm not sure what I'm doing wrong. Any help will be appreciated.

From the docs:
The image name format should be registry/repository[:tag] to pull by tag, or registry/repository[@digest] to pull by digest.
Note that an image reference never includes a scheme such as https://, so for the pull command you should use:
$ docker pull <account-id>.dkr.ecr.us-east-1.amazonaws.com/repo:$IMAGE
then for the run command, you should use:
$ docker run -d -p 8080:8080 <account-id>.dkr.ecr.us-east-1.amazonaws.com/repo:$IMAGE

It was failing because the image tag was being returned wrapped in double quotes. I had to get the output as plain text using:
aws ecr describe-images --repository-name repo --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' --output text
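Putting the two fixes together (no https:// scheme in the image reference, and --output text to strip the quotes), the script would look something like this:
#!/bin/bash
# Log in to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com
# --output text removes the double quotes that broke the image reference
IMAGE=$(aws ecr describe-images --repository-name repo --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' --output text)
echo "$IMAGE"
# No scheme in front of the registry host
docker pull <account-id>.dkr.ecr.us-east-1.amazonaws.com/repo:"$IMAGE"
docker run -d -p 8080:8080 <account-id>.dkr.ecr.us-east-1.amazonaws.com/repo:"$IMAGE"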

Related

Script to push and pull all images from nexus to harbor

I want to pull all images from Nexus and push them to Harbor. I tried to do that with:
docker login -u -p https://harbor.domaine.com/
docker tag nexus.domaine.com/tag:number harbor.domaine.com/project_name/tag:number
but the problem is that I have a lot of images, and done this way I would need to write one line for every image. So I want something like a loop to pull and push all images from Nexus. Any help?
You can try a bash script, for example:
#!/bin/bash
# Log in to the target registry (fill in real credentials)
docker login -u <username> -p <password> harbor.domaine.com
# Loop over every local image that came from the Nexus registry
for image_name in $(docker images --format="{{.Repository}}:{{.Tag}}" | grep nexus.domaine.com)
do
    # Rewrite the registry part of the reference, then retag and push
    new_image_name=$(echo "$image_name" | sed 's/nexus.domaine.com/harbor.domaine.com\/project_name/')
    docker tag "$image_name" "$new_image_name"
    docker push "$new_image_name"
done
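Note that docker images only sees images you have already pulled locally. If the goal is to mirror everything the Nexus registry holds, one option is to enumerate repositories and tags through the standard Docker Registry v2 API first. A rough sketch, assuming the registry exposes the usual /v2/_catalog and /v2/<name>/tags/list endpoints, that jq is installed, and that you are already logged in to both registries (pass -u to curl if the API endpoints also require credentials):
#!/bin/bash
SRC=nexus.domaine.com
DST=harbor.domaine.com/project_name
# List every repository in the source registry, then every tag in each repository
for repo in $(curl -s "https://$SRC/v2/_catalog" | jq -r '.repositories[]'); do
    for tag in $(curl -s "https://$SRC/v2/$repo/tags/list" | jq -r '.tags[]'); do
        docker pull "$SRC/$repo:$tag"
        docker tag "$SRC/$repo:$tag" "$DST/$repo:$tag"
        docker push "$DST/$repo:$tag"
    done
done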
I've been developing regsync to do exactly this. For a quick start, there's a workshop I recently gave at the Docker all-hands, which covers not only the copy but also the cleanup steps, or there's the quick start in the project itself.
To implement, create a regsync.yml:
version: 1
creds:
  - registry: nexus.domaine.com
    # credentials here
  - registry: harbor.domaine.com
    # credentials here
defaults:
  parallel: 2
  interval: 60m
sync:
  - source: nexus.domaine.com/image
    target: harbor.domaine.com/project_name/image
    type: repository
And then run regsync:
docker container run -it --rm \
    -v "$(pwd)/regsync.yml:/home/appuser/regsync.yml:ro" \
    regclient/regsync:latest -c /home/appuser/regsync.yml once
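The trailing once argument performs a single synchronization pass and exits; as far as I recall, regsync can also run as a long-lived service that re-checks on the schedule configured under defaults, which is what the interval: 60m setting above is for.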

pull access denied for Amazon ECR, repository does not exist or may require 'docker login'

I have an image in an Amazon ECR repository called workshop.
I have a Dockerfile that pulls that image.
CodeBuild should build a new image from that Dockerfile.
Problem:
pull access denied for xxxxxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com/workshop, repository does not exist or may require 'docker login'
In my buildspec file, I've tried to log in with Docker, but nothing changes.
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin xxxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com
      - CODEBUILD_RESOLVED_SOURCE_VERSION="${CODEBUILD_RESOLVED_SOURCE_VERSION:-$IMAGE_TAG}"
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
Dockerfile looks like this:
FROM xxxxxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com/workshop:latest
CMD ["echo", "Hallo!"]
RUN code-server
What may cause the problem?
Try updating your AWS CLI to the latest version, because get-login is deprecated.
The new command looks like this:
aws ecr get-login-password \
    --region <region> \
| docker login \
    --username AWS \
    --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
References:
get-login-password: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecr/get-login-password.html
get-login: https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login.html
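Applied to the buildspec in the question, that means dropping the deprecated get-login line and keeping only the get-login-password one. A sketch, keeping the question's region and masked account ID:
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin xxxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)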

Jenkins can not execute docker login via ssh-agent

I created a Jenkins pipeline to deploy my app. It builds and pushes a Docker image to AWS ECR. The final step is to SSH to the deployment server (EC2) and run a Docker container based on the last built image.
This is my script:
stage('Deploy') {
    steps {
        script {
            sshagent(['ssh-cridentials']) {
                sh "ssh -o StrictHostKeyChecking=no jenkins@host sudo docker rm -f myapp"
                sh "ssh -o StrictHostKeyChecking=no jenkins@host sudo docker image prune -a -f"
                sh "ssh -o StrictHostKeyChecking=no jenkins@host \"cat /opt/aws/password.txt | sudo docker login --username AWS --password-stdin $ecrURI & sudo docker run -p 80:80 -d --name=myapp $imageURI\""
            }
        }
    }
}
However, the Jenkins build fails and I get the error:
docker: Error response from daemon: Get https://xxx: no basic auth credentials.
The command couldn't log in to ECR, yet it works successfully when I execute the same command directly on the deployment server.
It looks like something is wrong with your escape characters; try it without them (I believe you have a valid ECR URL in the variable $ecrURI):
sh "ssh -o StrictHostKeyChecking=no jenkins@host cat /opt/aws/password.txt | sudo docker login --username AWS --password-stdin $ecrURI & sudo docker run -p 80:80 -d --name=myapp $imageURI"

Docker login failing (at most 1 argument)

I am failing to log in to a remote Docker registry using a command of the form:
docker login –u my-username –p my-password registry.myclient.com
The error I get is the following:
"docker login" requires at most 1 argument.
See 'docker login --help'.
Usage: docker login [OPTIONS] [SERVER]
How can login to the remote registry?
You don't have dashes in front of your options; it's some other dash-like character. Try this instead:
docker login -u my-username -p my-password registry.myclient.com
While it looks similar, -u and -p are not the same as –u and –p.
This one worked for me when a CI environment is in play:
echo ${MVN_PASSWORD} | docker login -u ${MVN_USER} --password-stdin ${MVN_URL}
These variables need to be set up via Settings > CI/CD > Variables (GitLab CI example).
Here is what worked for me:
I saved the password in a file called my_password.txt.
Then, I ran the following command:
cat ~/my_password.txt | docker login -u AWS --password-stdin https://{YOUR_AWS_ACCOUNT_ID}.dkr.ecr.{YOUR_AWS_REGION}.amazonaws.com
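As a side note, piping the password in with --password-stdin also keeps it out of your shell history and process list; current Docker versions print a warning when you pass the password with -p on the command line for exactly that reason.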

Docker: google/docker-registry container usage

Does the google/docker-registry container exist solely to push/pull images from Google Cloud Storage? I am currently following their instructions on Git and have the docker-registry container running, but can't seem to pull from my bucket.
I started it with:
sudo docker run -d -e GCS_BUCKET=mybucket -p 5000:5000 google/docker-registry
I have a .tar Docker image stored in Cloud Storage, at mybucket/imagename.tar. However, when I execute:
sudo docker pull localhost:5000/imagename.tar
It results in:
2014/07/10 19:15:50 HTTP code: 404
Am I doing this wrong?
You need to docker push to the registry instead of copying your image tar manually.
From where your image is:
docker run -ti --name gcloud-config google/cloud-sdk \
    gcloud auth login
docker run -ti --volumes-from gcloud-config google/cloud-sdk \
    gcloud config set project <project>
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 \
    --volumes-from gcloud-config google/docker-registry
docker tag imagename localhost:5000/imagename
docker push localhost:5000/imagename
Then from the place you want to run the image from (ex: GCE):
docker run -d -e GCS_BUCKET=bucketname -p 5000:5000 google/docker-registry
docker run localhost:5000/imagename
The google/docker-registry container is preconfigured to use Google Cloud Storage buckets.
It should work with other storage backends (if the configuration is overridden), but its purpose is to be used with the Google infrastructure.
The tar file of an exported image is for manually moving images between Docker hosts when there is no registry available; you should not upload tar files to the bucket.
To upload images, push to the docker-registry container, and it will then save the image in the bucket.
The Google Compute Engine instance running the docker-registry container must be configured with read/write access to the bucket.
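For example, that access can be granted at instance creation time with a storage scope (a sketch; the instance name is made up and the other flags are left at their defaults):
gcloud compute instances create registry-host --scopes storage-rw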
