I'm trying to push Docker images to Artifactory as part of a CI Jenkins job.
I have Artifactory installed at the URL art:8080.
I installed Docker on Windows Server 2016 and built my Dockerfile.
Now I'm stuck on how to push the image built from the Dockerfile.
I tried:
docker tag microsoft/windowsservercore art:8080/imageID:latest
docker push art:8080/docker-local:latest
but I get an error stating:
Get https://art:8080/v2/: dial tcp: lookup artifactory: getaddrinfow: No such host is known.
Where is the https getting from?
How do I push to the correct local docker repo in my artifactory?
Docker requires you to use HTTPS. What I do (I use Nexus, not Artifactory) is set up a reverse proxy using nginx. Here is the JFrog doc for that: https://www.jfrog.com/confluence/display/RTF/Configuring+a+Reverse+Proxy
Alternatively, you can configure Docker not to require HTTPS (though this is not recommended).
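For example, a minimal daemon.json sketch (on Linux this lives in /etc/docker/daemon.json; Docker for Windows exposes the same setting in its daemon settings; restart Docker afterwards; art:8080 is taken from the question):
{
  "insecure-registries": ["art:8080"]
}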
Since you're asking how to pull: these steps worked for an enterprise Artifactory whose CA certificates are not trusted outside the organization.
$ sudo mkdir -p /etc/docker/certs.d/docker-<artifactory-resolverhost>
$ sudo cp /tmp/ca.crt /etc/docker/certs.d/docker-<artifactory-resolverhost>
$ sudo chown root:docker /etc/docker/certs.d/docker-<artifactory-resolverhost>/ca.crt
$ sudo chmod 740 /etc/docker/certs.d/docker-<artifactory-resolverhost>/ca.crt
Where ca.crt is the Base64-encoded chain of trusted CA certificates, and <artifactory-resolverhost> is the resolver hostname of the repository, for example repo.jfrog.org if you were using the public repository. To confirm, you can ping <artifactory-resolverhost> to make sure it is reachable from your network.
Then you should be able to pull an image with a user belonging to the docker group, for example:
docker pull docker-<artifactory-resolverhost>/<repository-name>/rhel7-tomcat:8.0.18_4
You can then view the downloaded image with the command below:
docker images
Related
I am trying to pull a Docker container from our private GCP container registry on a regular VM instance (e.g. ubuntu-1904) running on Google Cloud, but I am getting the following error:
user@test ~ $ sudo docker pull example.io/docker-dev/name:v01
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed those instructions, i.e., ran the gcloud auth configure-docker command, which output a success message.
However, when running the docker pull command again, I get the exact same error.
A couple of extra tests that might help to provide feedback:
If I pull from a different registry, it works (for example, docker run hello-world pulls and runs the hello-world image).
I tested the same command (docker pull example.io/docker-dev/name:v01) on my local computer (Mac) instead of the VM instance, and it works perfectly.
I have also created VM instances and enabled the option "Deploy a container image to this VM instance", providing the container address (example.io/docker-dev/name:v01), and that also works. However, I don't want to use this option because it automatically selects a "Container-Optimized" boot disk, which I prefer not to use due to its limitations.
Question:
Why can't I pull Docker images from my private container registry on an Ubuntu or Debian VM, even though Docker works very well pulling images from other registries (Docker Hub)?
I did this yesterday. Just run gcloud auth configure-docker, then set the following variables:
VERSION=2.0.0
OS=linux # or "darwin" for OSX, "windows" for Windows.
ARCH=amd64 # or "386" for 32-bit OSs, "arm64" for ARM 64.
After that you can download docker-credential-gcr:
wget "https://github.com/GoogleCloudPlatform/docker-credential-gcr/releases/download/v${VERSION}/docker-credential-gcr_${OS}_${ARCH}-${VERSION}.tar.gz"
Then run
tar xzf ./docker-credential-gcr_linux_amd64-2.0.0.tar.gz docker-credential-gcr && sudo mv docker-credential-gcr /usr/bin/docker-credential-gcloud && sudo chmod +x /usr/bin/docker-credential-gcloud
And finally run
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io
Now you will be able to pull your image :)
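For example, with the image name from the question:
sudo docker pull example.io/docker-dev/name:v01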
For me, on a Container-Optimized OS instance, it helped to just run:
docker-credential-gcr configure-docker
https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#starting_a_docker_container_via_cloud-config
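For reference, as far as I can tell this writes a credHelpers section like the following into ~/.docker/config.json (the exact registry list may differ on your machine):
{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "eu.gcr.io": "gcr",
    "asia.gcr.io": "gcr"
  }
}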
Note the default policy for Compute Engine instances:
"VM instances, including those in Google Kubernetes Engine clusters, must have the correct storage access scopes configured to push or pull images. By default, VMs can pull images when Container Registry is in the same project."
If you run gcloud auth configure-docker, the auth information is saved under your personal directory.
When you then run sudo docker pull example.io/docker-dev/name:v01, it looks for auth info under root's home directory and doesn't find anything there.
You should run both commands consistently: either both with sudo or both without.
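For example, to do everything as root (assuming gcloud is also authorized when run via sudo; the image name is the one from the question):
sudo gcloud auth configure-docker
sudo docker pull example.io/docker-dev/name:v01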
In an Ubuntu 18.04 VM:
I am behind a proxy and have set up Docker with the same proxy configuration.
I created an Azure container registry, and docker pull from that registry works.
But when trying to:
$docker run node:6
I get the error:
"docker: Error response from daemon: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority."
I've added the registry to /etc/docker/daemon.json:
{
"insecure-registries": ["registry-1.docker.io","myazureContainerRegistry.azurecr.io"]
}
By doing the above step, "$docker run myazureContainerRegistry.azurecr.io/myimage:tag" works, but "$docker run node:6" still gives the certificate error.
I've added the certificate for "*.docker.io" to /etc/docker/certs.d/docker.io and also to /usr/local/share/ca-certificates (sudo update-ca-certificates); still it doesn't work.
I've also tried to:
$curl -k https://registry-1.docker.io/
$wget https://registry-1.docker.io/ --no-check-certificate
Both of these steps work, but with Docker (to run/pull node:6) I still get the certificate error.
The output of "$docker --version" is: "Docker version 18.09.2"
This is what my ~/.docker/config.json looks like: [screenshot of config.json omitted]
I expect "docker run node:6" to pull the image successfully, but it actually gives the error above.
For your issue, first of all you need to have the certificate information in ~/.docker/config.json. Then you can pull the image from the registry without logging in, and docker run will pull the image for you without a separate docker pull. For you, the command looks like this:
docker run registry-1.docker.io/node:6
On my side, the config.json looks like this: [screenshot omitted]
And I can execute the command like this: [screenshot omitted]
The URI of my registry in Docker Hub is https://index.docker.io/v1/charlesjunqiang.
Update
If you use a certificate file to authenticate to the Docker registry, then you should take some steps to authenticate the client machine against it.
First:
Add the certificate file to the directory /usr/local/share/ca-certificates/docker-dev-cert/ with the name yourname.crt. Then execute the commands:
sudo update-ca-certificates
sudo service docker restart
Second:
Create a directory in /etc/docker/certs.d with the same name as the registry, for example myregistry.azurecr.io. Then add the certificate file to it with the name yourname.cert, along with the corresponding .key file that was created when you created the certificate.
Then you can log in to the registry and run the command docker run registry-1.docker.io/node:6 as you want.
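As a concrete sketch of the second step (the file and registry names are illustrative, not the exact ones from my setup):
sudo mkdir -p /etc/docker/certs.d/myregistry.azurecr.io
sudo cp yourname.cert /etc/docker/certs.d/myregistry.azurecr.io/client.cert
sudo cp yourname.key /etc/docker/certs.d/myregistry.azurecr.io/client.key
docker login myregistry.azurecr.io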
Using minikube to pull image from local Docker registry (with self-signed CA certificate)
I'd like to be able to run minikube so that it can access a local Docker registry using a self-signed CA certificate. Ideally the process should be automated, so that I can use a *deployment.yaml file to pull the required image without intervention.
At the moment I'm using a workaround as follows:
#ssh into the minikube instance
sudo minikube ssh
#create a folder for the certificate
sudo mkdir /etc/docker/certs.d/dave.local:5000
#copy the crt file from the registry computer to the minikube instance
sudo scp user@192.168.1.2:/home/dave/certs/domain.crt /etc/docker/certs.d/dave.local:5000
#then check login
docker login dave.local:5000
#then pull image so that it's already in minikube
docker pull dave.local:5000/davedockerimage
I then edit the *deployment.yaml with imagePullPolicy: Never. When I then run sudo kubectl create -f dave-deployment.yaml, it finds dave.local:5000/davedockerimage locally on minikube and uses the already pulled image.
If imagePullPolicy: Always is set, the image pull fails in minikube.
I've been through a range of tutorials/stack overflow answers and have been unable to crack this. Any help appreciated.
As an alternative to using a self-signed certificate in minikube, you can start minikube with the insecure-registry option, like below:
minikube start --insecure-registry="dave.local:5000"
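Note that, as far as I know, this flag is only applied when the cluster is first created, so an existing minikube instance has to be recreated for it to take effect:
# recreate the cluster so the flag takes effect
minikube delete
minikube start --insecure-registry="dave.local:5000"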
Background:
To set up a private Docker registry server at path c:\dkrreg on localhost on a Windows 10 (x64) system with Docker for Windows installed, I have successfully tried the following commands:
docker run --detach --publish 1005:5000 --name docker-registry --volume /c/dkrreg:/var/lib/registry registry:2
docker pull hello-world:latest
docker tag hello-world:latest localhost:1005/hello-world:latest
docker push localhost:1005/hello-world:latest
docker pull localhost:1005/hello-world:latest
Push and pull from localhost:1005/hello-world:latest via the command line succeed too.
Issue:
If I use my IP address instead, via docker pull 192.168.43.239:1005/hello-world:latest, I get the following error in the command shell:
Error response from daemon: Get https://192.168.43.239:1005/v1/_ping: http: server gave HTTP response to HTTPS client
When using the 3rd-party Docker UI manager Portainer via docker run --detach portainer:latest, it also shows a connection error:
2017/04/19 14:30:24 http: proxy error: dial tcp [::1]:1005: getsockopt: connection refused
I have tried other things as well. How can I connect to my private registry server, which is localhost:1005, from the LAN using any Docker management UI tool?
At last I found the solution to this, which was tricky:
Generated a CA private key and certificate as ca-cert-mycompany.pem and ca-cert-key-companyname.pem, and configured docker-compose.yml to mount both files read-only (:ro) in these locations: /usr/local/share/ca-certificates, /etc/ssl/certs/ and /etc/docker/certs.d/mysite.com. Copying the certificate only to /usr/local/share/ca-certificates would also have been enough, as Docker ignores duplicate CA certificates; the extra copies are just what Docker users recommended in many places. This time I did not execute update-ca-certificates in the registry container, though I had been doing so earlier, against what many suggest.
Defined in docker-compose.yml: a random number as REGISTRY_HTTP_SECRET, the server's chained certificate (with the CA certificate appended to the end of it) as REGISTRY_HTTP_TLS_CERTIFICATE, and the server's private key as REGISTRY_HTTP_TLS_KEY. HTTP authentication was disabled. I deliberately named the files the way other certificates in the container folder are named, e.g. mysite.com_server-chained-certificate.crt instead of just certificate.crt.
Very important: pushed the certificate to the trusted root store in Windows using certutil.exe -addstore root .\Keys\ca-certificate.crt, then restarted Docker for Windows from the taskbar icon and created the container using docker-compose up -d. Without this step nothing worked.
Now I can perform docker pull mysite.com:1005/my-repo:my-tag.
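For reference, a plain docker run equivalent of the TLS part of that docker-compose.yml might look roughly like this (the paths and file names are illustrative, not the exact ones I used):
docker run --detach --publish 1005:5000 --name docker-registry \
  --volume /c/dkrreg:/var/lib/registry \
  --volume /c/certs:/certs \
  -e REGISTRY_HTTP_SECRET=some-long-random-string \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/mysite.com_server-chained-certificate.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/mysite.com_server.key \
  registry:2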
You need to specify to your Docker daemon that your registry is insecure: https://docs.docker.com/registry/insecure/
Based on your OS/system, you need to change the configuration of the daemon to specify the registry address (format IP:PORT; use 192.168.43.239:1005 rather than localhost:1005).
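On Linux, for example, that is an entry in /etc/docker/daemon.json followed by a daemon restart (Docker for Windows exposes the same setting in its daemon configuration UI):
{
  "insecure-registries": ["192.168.43.239:1005"]
}
sudo systemctl restart docker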
Once you have done that, you should be able to execute the following:
docker pull 192.168.43.239:1005/hello-world:latest
You should also be able to access it via Portainer using 192.168.43.239:1005 in the registry field.
If you want to access your registry using localhost:1005 inside Portainer, you can try to run Portainer inside the host network:
docker run --detach --net host portainer:latest
Is there a way I can download a Docker image/container using, for example, Firefox, and not using the built-in docker pull?
I am blocked by the company firewall and proxy, and I can't get a hole through it.
My problem is that I cannot use Docker to get images; that is, docker save/pull and other Docker-supplied functions are blocked by the firewall.
Just an alternative: this is what I did in my organization for the couchbase image, where I was blocked by a proxy.
On my personal laptop (OS X)
~$ docker save couchbase > couchbase.tar
~$ ls -lh couchbase.tar
-rw------- 1 vikas devops 556M 12 Dec 21:15 couchbase.tar
~$ xz -9 couchbase.tar
~$ ls -lh couchbase.tar.xz
-rw-r--r-- 1 vikas staff 123M 12 Dec 22:17 couchbase.tar.xz
Then I uploaded the compressed tarball to Dropbox and downloaded it on my work machine. For some reason Dropbox access was open :)
On my work laptop (CentOS 7)
$ docker load < couchbase.tar.xz
References
https://docs.docker.com/engine/reference/commandline/save/
https://docs.docker.com/engine/reference/commandline/load/
I just had to deal with this issue myself: downloading an image onto a restricted machine that has Internet access but no Docker client, for use on another restricted machine that has the Docker client but no Internet access. I posted my question to the DevOps Stack Exchange site:
Downloading Docker Images from Docker Hub without using Docker
With help from the Docker Community I was able to find a resolution to my problem. What follows is my solution.
So it turns out that the Moby Project has a shell script on the Moby GitHub account which can download images from Docker Hub in a format that can be imported into Docker:
download-frozen-image-v2.sh
The usage syntax for the script is given by the following:
download-frozen-image-v2.sh target_dir image[:tag][#digest] ...
The image can then be imported with tar and docker load:
tar -cC 'target_dir' . | docker load
To verify that the script works as expected, I downloaded an Ubuntu image from Docker Hub and loaded it into Docker:
user@host:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@host:~$ tar -cC 'ubuntu' . | docker load
user@host:~$ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
In practice I would have to first copy the data from the Internet client (which does not have Docker installed) to the target/destination machine (which does have Docker installed):
user@nodocker:~$ bash download-frozen-image-v2.sh ubuntu ubuntu:latest
user@nodocker:~$ tar -C 'ubuntu' -cf 'ubuntu.tar' .
user@nodocker:~$ scp ubuntu.tar user@hasdocker:~
and then load and use the image on the target host:
user@hasdocker:~ docker load -i ubuntu.tar
user@hasdocker:~ docker run --rm -ti ubuntu bash
root@1dd5e62113b9:/#
I adapted a Python script to have an OS-independent solution:
docker-drag
Use it like this, and it will create a TAR archive that you will be able to import using docker load:
python docker_pull.py hello-world
python docker_pull.py alpine:3.9
python docker_pull.py kalilinux/kali-linux-docker
Use Skopeo. It is a tool made specifically for that purpose (among others).
After installing it, simply execute:
mkdir ubuntu
skopeo --insecure-policy copy docker://ubuntu dir:./ubuntu
Copy these files to the target machine and import them as you like.
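If the destination machine only has Docker and not Skopeo, you can instead (as I understand the transport syntax) copy straight into a docker load-compatible archive:
skopeo --insecure-policy copy docker://ubuntu docker-archive:ubuntu.tar:ubuntu:latest
# then, on the target machine:
docker load -i ubuntu.tar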
First, check if your Docker daemon is configured to use the proxy. With boot2docker and docker-machine, for instance, this is done at docker-machine create time, with the --engine-env option.
If this is just a certificate issue (i.e., Firefox does access Docker Hub), try and install that certificate:
openssl s_client -connect index.docker.io:443 -showcerts </dev/null | openssl x509 -outform PEM > docker.pem
sudo cp docker.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
sudo systemctl restart docker
sudo docker run hello-world
The other workaround (not a recommended solution) would be to access Docker Hub without relying on certificate validation, with --insecure-registry:
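That means starting the daemon with the flag, along these lines (a sketch; where you set daemon options depends on your init system):
dockerd --insecure-registry registry-1.docker.io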
If the firewall is actively blocking any Docker pull, to the point where you can't even access Docker Hub from Firefox, then you would need to docker save/docker load an image archive: save it on a machine where you did access Docker Hub (and where the docker pull succeeded), and load it on your corporate machine (after approval from your IT system administrators, of course).
Note: you cannot easily "just" download an image, because it is often based on top of other images which you would need to download too. That is what docker pull does for you. And that is what docker save does too (create one archive composed of all the necessary images).
The OP Ephreal adds in the comments:
[I] didn't get my corp image to work either.
But I found that I could download the Dockerfile and recreate the image myself from scratch.
This is essentially the same as downloading the image.
So, by definition, a docker pull client command actually needs to talk to a Docker daemon, because the Docker daemon assembles layers one by one for you.
Think of it as a POST request - it's causing a mutation of state, in the Docker daemon itself. You're not 'pulling' anything over HTTP when you do a pull.
You can pull all the individual layers over REST from the Docker registry, but that won't actually be the same semantics as a pull, because pull is an action that specifically tells the daemon to go and get all the layers for an image you care about.
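To illustrate, fetching a manifest for the official ubuntu image over the registry v2 REST API looks roughly like this (these are Docker Hub's documented token and registry endpoints; jq is only used here to extract the token):
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/ubuntu:pull" | jq -r .token)
curl -s -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry-1.docker.io/v2/library/ubuntu/manifests/latest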
Another possibility, if your company firewall (and policy) allows connecting to a remote SSH server, is to simply set up an SSH tunnel and route the Docker registry traffic through it.
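A minimal sketch, assuming an SSH gateway you are allowed to reach and a registry behind it (the host names are placeholders; Docker treats localhost registries as insecure by default):
ssh -N -L 5000:registry.internal.example.com:5000 user@ssh-gateway &
docker pull localhost:5000/myimage:latest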
The answer and solution to my original question is that I found I could download the Dockerfile and all the necessary support files, and recreate the image myself from scratch. This is essentially the same as downloading the image.
This solution has been in the questions and comments above; I have just pinned it here.
This is no longer an issue for me, though, since my company has changed its policy and now allows docker pull commands to work.
Thanks @Ham Co for the answer.
I adapted a Golang tool to have an OS-independent solution:
golang http pull docker image
./gopull download redis
This gets you a Docker-importable archive, redis.tar.
References:
https://github.com/NotGlop/docker-drag