When trying to push an image to a remote Docker registry, I get the following message:
vagrant#vagrant-ubuntu-trusty-64:~/helloworld$ docker push 11.22.33.44:5000/ltrojanowski/helloworld
The push refers to a repository [11.22.33.44:5000/ltrojanowski/helloworld]
unable to ping registry endpoint https://11.22.33.44:5000/v0/
v2 ping attempt failed with error: Get https://11.22.33.44:5000/v2/: EOF
v1 ping attempt failed with error: Get https://11.22.33.44:5000/v1/_ping: EOF
This is despite changing /etc/default/docker on the server to look like this:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/articles/systemd/
#
# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"
# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
DOCKER_OPTS="--insecure-registry 11.22.33.44:5000"
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
I only added the DOCKER_OPTS="--insecure-registry 11.22.33.44:5000" line, as stated in the documentation.
Many thanks for any help.
As suggested in the comments, I had misunderstood the instructions. The DOCKER_OPTS="--insecure-registry 11.22.33.44:5000" line needs to be added on the machine that is pushing the image, not on the registry machine.
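For reference, on hosts where the Docker daemon is managed by systemd (where /etc/default/docker is ignored, as the file's own comment notes), the equivalent setting normally lives in /etc/docker/daemon.json; a minimal sketch, assuming the same registry address:
{
  "insecure-registries": ["11.22.33.44:5000"]
}
Restart the daemon afterwards (for example with sudo systemctl restart docker) so the change takes effect.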
Setup
As per the Ory Kratos Docker Documentation, I run:
$ docker pull oryd/kratos:v0.7.1-alpha.1
$ docker run --rm -it oryd/kratos version
Version: v0.7.1-alpha.1
Build Commit: 4fe76af1302d45ddf4cf3c2c5949311c9cf1f8b8
Build Timestamp: 2021-07-22T17:41:40Z
Running the image in a container
What happens here is that no configuration file is specified, so it just errors out, listing the required keys that are missing.
$ docker run oryd/kratos:v0.7.1-alpha.1
The configuration contains values or keys which are invalid:
identity: <nil>
^-- one or more required properties are missing
The configuration contains values or keys which are invalid:
selfservice.default_browser_return_url: <nil>
^-- one or more required properties are missing
The configuration contains values or keys which are invalid:
courier.smtp.connection_uri: <nil>
^-- one or more required properties are missing
time=2021-07-27T17:46:47Z level=fatal msg=Unable to instantiate configuration....
Issue
When using the Docker images, Kratos does not recognize a configuration file passed with the --config flag.
Since containers run independently, I figured I'd have to use a file available on the daemon host while running the serve command, and it seems Ory Kratos has a section for this too (Ory Kratos Docker Image).
docker run --rm -it oryd/kratos serve --config /home/ory/kratos.yml
FATA[2021-07-27T18:35:41Z] Unable to instantiate configuration. audience=application error=map[message:open /home/ory/kratos.yml: no such file or directory] service_name=Ory Kratos service_version=v0.7.1-alpha.1
Relevant Files:
The configuration
message:open /home/ory/kratos.yml: no such file or directory
Your error above means the container can't find /home/ory/kratos.yml.
I figured I'd have to use a file on the Daemon
If I understand you correctly, you put kratos.yml on the Docker host's filesystem but not inside the container, which is why the container can't find the configuration file.
So you need to mount the host file into the container, with something like this:
docker run --rm -v /home/ory/kratos.yml:/home/ory/kratos.yml -it oryd/kratos serve --config /home/ory/kratos.yml
You need to use the correct path to kratos.yml on the host, of course.
For more detail, refer to this.
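If the configuration is split across several files (for example kratos.yml plus identity schemas), mounting the whole directory read-only is a common variant; the host path /path/on/host/kratos and the container path /etc/config/kratos below are placeholders, not paths from the question:
docker run --rm -it \
  -v /path/on/host/kratos:/etc/config/kratos:ro \
  oryd/kratos:v0.7.1-alpha.1 \
  serve --config /etc/config/kratos/kratos.yml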
I'm trying to deploy a docker image from Docker Hub on OpenShift.
I created an image with a simple Spring Boot REST application:
https://hub.docker.com/r/ernst1970/my-rest
After logging into OpenShift and choosing the correct project, I do:
oc new-app ernst1970/my-rest
And I get
W0509 13:17:28.781435 16244 dockerimagelookup.go:220] Docker registry lookup failed: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
error: Errors occurred while determining argument types:
ernst1970/my-rest as a local directory pointing to a Git repository: GetFileAttributesEx ernst1970/my-rest: The system cannot find the path specified.
Errors occurred during resource creation:
error: no match for "ernst1970/my-rest"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
I also tried with
oc new-app mariadb
But got the same error message.
I thought this might be a proxy problem. So I added the proxy to my .profile:
export http_proxy=http://ue73011:secret#dev-proxy.wzu.io:3128
export https_proxy=http://ue73011:secret#dev-proxy.wzu.io:3128
Unfortunately this did not change anything.
Any ideas why this is not working?
Your Docker daemon needs the proxy so it can reach Docker Hub. You can specify the proxy server by providing it as an environment variable for the Docker daemon.
Take a look at the official Docker documentation: https://docs.docker.com/config/daemon/systemd/
sudo mkdir -p /etc/systemd/system/docker.service.d
Add a file /etc/systemd/system/docker.service.d/http-proxy.conf which should contain the following:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp"
Reload your changes and restart the Docker daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker
Verify by doing a simple "docker pull ... "
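A quick way to confirm the daemon actually picked up the variables (hello-world is just an example image):
sudo systemctl show --property=Environment docker
docker pull hello-world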
I use Docker Toolbox on Windows 7 in a corporate environment. My workflow requires pulling images from one Artifactory instance and pushing them to a different one (e.g. external and internal). Each Artifactory instance requires a different proxy to access it. Is there a way to configure the Docker daemon to select a proxy based on the URL? Or, if not, what else can I do to make this work?
Since, as Pierre B. mentioned, Docker daemon does not support URL-based proxy selection, the solution is to point it to a local proxy configured to select the proper upstream proxy based on the URL.
While any HTTP(S) proxy capable of upstream selection would do (the pac4cli project is particularly interesting for its advertised ability to select the upstream via the proxy auto-discovery protocol used by most web browsers in a corporate setting), I've chosen tinyproxy as a more mature and lightweight solution. Furthermore, I've decided to run my proxy inside the docker-machine VM in order to simplify its deployment and make sure the proxy is always running when the Docker daemon needs it.
Below are the steps I used to set up my system. I'm especially grateful to phoenix for providing steps to set up Docker Toolbox on Windows behind a corporate proxy, and will borrow heavily from that answer.
From this point on I will assume either Docker Quickstart Terminal or GitBash, with docker in the PATH, as your command line console and that "username" is your Windows user name.
Step 1: Build tinyproxy on your target platform
Begin by pulling a clean Linux distribution (I used CentOS) and run bash inside it:
docker run -it --name=centos centos bash
Next, install the tools we'll need:
yum install -y make gcc
After that we pull the latest release of Tinyproxy from its GitHub repository and extract it inside root's home directory (at the time of this writing the latest release was 1.10.0):
cd
curl -L https://github.com/tinyproxy/tinyproxy/releases/download/1.10.0/tinyproxy-1.10.0.tar.gz \
| tar -xz
cd tinyproxy-1.10.0
Now let's configure and build it:
./configure --enable-upstream \
  --disable-filter \
  --disable-reverse \
  --disable-transparent \
  --disable-xtinyproxy
make
While --enable-upstream is obviously required, disabling other default features is optional but a good practice. To make sure it actually works run:
./src/tinyproxy -h
You should see something like:
Usage: tinyproxy [options]
Options are:
-d Do not daemonize (run in foreground).
-c FILE Use an alternate configuration file.
-h Display this usage information.
-v Display version information.
Features compiled in:
Upstream proxy support
For support and bug reporting instructions, please visit
<https://tinyproxy.github.io/>.
We exit the container by pressing Ctrl+D and copy the executable to a special folder location accessible from the docker-machine VM:
docker cp centos://root/tinyproxy-1.10.0/src/tinyproxy \
/c/Users/username/tinyproxy
Substitute "username" with your Windows user name. Please note that double slash — // before "root" is required to disable MINGW path conversion.
Now we can delete the container:
docker rm centos
Step 2: Point docker daemon to a local proxy port
Choose a TCP port number to run the proxy on. This can be any port that is not in use on the docker-machine VM. I will use number 8618 in this example.
First, let's delete the existing default Docker VM:
WARNING: This will permanently erase all currently stored containers and images
docker-machine rm -f default
Next, we re-create the default machine, setting the HTTP_PROXY and HTTPS_PROXY environment variables to the local host and the port we selected, and then refresh our shell environment:
docker-machine create default \
--engine-env HTTP_PROXY=http://localhost:8618 \
--engine-env HTTPS_PROXY=http://localhost:8618
eval $(docker-machine env)
Optionally, we could also set NO_PROXY environment variable to list hosts and/or wildcards (separated by ;) to which the daemon should connect directly, bypassing the proxy.
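For example, if an internal registry should be reached directly, NO_PROXY can be passed the same way (the registry host below is purely illustrative):
docker-machine create default \
    --engine-env HTTP_PROXY=http://localhost:8618 \
    --engine-env HTTPS_PROXY=http://localhost:8618 \
    --engine-env NO_PROXY=registry.corp.example.com
eval $(docker-machine env)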
Step 3: Set up tinyproxy inside docker-machine VM
First, we will create two files in the /c/Users/username directory (this is where our tinyproxy binary should reside after Step 1 above) and then we'll copy them to the VM.
The first file is tinyproxy.conf; the exact syntax is documented on the Tinyproxy website, but the example below should have all the settings you need:
# These settings can be customized to your liking,
# the port though must be the same we used in Step 2
listen 127.0.0.1
port 8618
user nobody
group nogroup
loglevel critical
syslog on
maxclients 50
startservers 2
minspareServers 2
maxspareServers 5
disableviaheader yes
# Here is the actual proxy selection, rules apply from top
# to bottom, and the last one is the default. More info on:
# https://tinyproxy.github.io/
upstream http proxy1.corp.example.com:80 ".foo.example.com"
upstream http proxy2.corp.example.com:80 ".bar.example.com"
upstream http proxy.corp.example.com:82
In the example above:
http://proxy1.corp.example.com:80 will be used to connect to URLs that end with "foo.example.com", such as http://www.foo.example.com
http://proxy2.corp.example.com:80 will be used to connect to URLs that end with "bar.example.com", such as http://www.bar.example.com, and
http://proxy.corp.example.com:82 will be used to connect to all other URLs
It is also possible to match exact host names, IP addresses, subnets and hosts without domains.
The second file is the shell script that will launch the proxy; its name must be bootlocal.sh:
#! /bin/sh
# Terminate on error
set -e
# Switch to the script directory
cd $(dirname $0)
# Launch proxy server
./tinyproxy -c tinyproxy.conf
Now, let's connect to the docker VM, get root, and switch to boot2docker directory:
docker-machine ssh
sudo -s
cd /var/lib/boot2docker
Next, we'll copy all three files over and set their permissions:
cp /c/Users/username/{tinyproxy{,.conf},bootlocal.sh} .
chmod 755 tinyproxy bootlocal.sh
chmod 644 tinyproxy.conf
Exit the VM session by pressing Ctrl+D twice and restart the VM:
docker-machine restart default
That's it! Now Docker should be able to pull and push images from different URLs, automatically selecting the right proxy server.
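As a quick sanity check after the restart, the daemon's view of its proxy settings can be inspected from the refreshed shell; the HTTP and HTTPS proxy values should point at the local tinyproxy port chosen in Step 2:
docker info | grep -i proxy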
Background:
To set up a private Docker registry server at path c:\dkrreg on localhost on a Windows 10 (x64) system with Docker for Windows installed, I have successfully tried the following commands:
docker run --detach --publish 1005:5000 --name docker-registry --volume /c/dkrreg:/var/lib/registry registry:2
docker pull hello-world:latest
docker tag hello-world:latest localhost:1005/hello-world:latest
docker push localhost:1005/hello-world:latest
docker pull localhost:1005/hello-world:latest
Pushing to and pulling from localhost:1005/hello-world:latest via the command line succeeds too.
Issue:
If I use my IP address via docker pull 192.168.43.239:1005/hello-world:latest, it gives the following error in the command shell:
Error response from daemon: Get https://192.168.43.239:1005/v1/_ping: http: server gave HTTP response to HTTPS client
When using the 3rd-party Docker UI manager Portainer via docker run --detach portainer:latest, it also shows a connection error:
2017/04/19 14:30:24 http: proxy error: dial tcp [::1]:1005: getsockopt: connection refused
I have tried other things as well. How can I connect to my private registry server, which is localhost:1005, from the LAN using any Docker management UI tool?
At last I found the solution to this, which was tricky:
I generated a CA private key and certificate as ca-cert-mycompany.pem and ca-cert-key-companyname.pem, and configured docker-compose.yml to mount both files read-only (:ro) in these locations: /usr/local/share/ca-certificates, /etc/ssl/certs/, /etc/docker/certs.d/mysite.com. Copying only the certificate to /usr/local/share/ca-certificates also turned out to be enough, as Docker ignores duplicate CA certificates; the extra copies are there only because several Docker users recommended them. This time I did not run update-ca-certificates in the registry container, though I had done so earlier, contrary to what many suggest.
In docker-compose.yml I defined a random value as REGISTRY_HTTP_SECRET, the server's chained certificate (with the CA certificate appended to the end) as REGISTRY_HTTP_TLS_CERTIFICATE, and the server's private key as REGISTRY_HTTP_TLS_KEY. I had HTTP authentication disabled. I also named the files the way other certificates in the container folder are named, e.g. mysite.com_server-chained-certificate.crt instead of just certificate.crt.
Very important: I added the certificate to the Windows trusted root store using certutil.exe -addstore root .\Keys\ca-certificate.crt, then restarted Docker for Windows from the taskbar icon and created the container with docker-compose up -d. This is the most important step; without it nothing worked.
Now I can perform docker pull mysite.com:1005/my-repo:my-tag.
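To make the description above concrete, here is a minimal sketch of the kind of docker-compose.yml involved; the environment variable names are the registry's documented ones, but the host paths, file names and secret value are illustrative placeholders rather than the exact files from this setup:
version: "3"
services:
  registry:
    image: registry:2
    ports:
      - "1005:5000"
    environment:
      REGISTRY_HTTP_SECRET: "some-long-random-string"
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/mysite.com_server-chained-certificate.crt
      REGISTRY_HTTP_TLS_KEY: /certs/mysite.com_server-key.pem
    volumes:
      - /c/dkrreg:/var/lib/registry
      - ./certs:/certs:ro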
You need to specify to your Docker daemon that your registry is insecure: https://docs.docker.com/registry/insecure/
Based on your OS/system, you need to change the configuration of the daemon to specify the registry address (in IP:PORT format; use 192.168.43.239:1005 rather than localhost:1005).
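On Docker for Windows this usually means adding the address to the daemon configuration (daemon.json, also editable through the Docker settings UI); a minimal sketch with the address from the question:
{
  "insecure-registries": ["192.168.43.239:1005"]
}
Restart Docker afterwards for the setting to apply.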
Once you have done that, you should be able to execute the following:
docker pull 192.168.43.239:1005/hello-world:latest
You should also be able to access it via Portainer using 192.168.43.239:1005 in the registry field.
If you want to access your registry using localhost:1005 inside Portainer, you can try to run it inside the host network.
docker run --detach --net host portainer:latest
I am attempting to use Minikube for local kubernetes development. I have set up my docker environment to use the docker daemon running in the provided Minikube VM (boot2docker) as suggested:
eval $(minikube docker-env)
It sets up these environment variables:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/jasonwhite/.minikube/certs"
When I attempt to pull an image from our private docker repository:
docker pull oururl.com:5000/myimage:v1
I get this error:
Error response from daemon: Get https://oururl.com:5000/v1/_ping: x509: certificate signed by unknown authority
It appears I need to add a trusted ca root certificate somehow, but have been unsuccessful so far in my attempts.
I can hit the repository fine with curl using our ca root cert:
curl --cacert /etc/ssl/ca/ca.pem https://oururl.com:5000/v1/_ping
I've been unable to find any way to get the cert into the Minikube VM. But minikube has a command-line parameter to pass in an insecure registry:
minikube start --insecure-registry=<HOST>:5000
Then to configure authentication on the registry, create a secret.
kubectl create secret docker-registry tp-registry --docker-server=<REGISTRY>:5000 --docker-username=<USERNAME> --docker-password=<PASSWORD> --docker-email=<EMAIL> --insecure-skip-tls-verify=true
Add the secret to the default service account as described in the Kubernetes docs; a sketch follows below.
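Attaching the secret to the default service account can be done with a one-line patch; a sketch reusing the tp-registry name from the command above:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "tp-registry"}]}'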
I came up with a work-around for the situation with suggestions from these sources:
https://github.com/docker/machine/issues/1799
https://github.com/docker/machine/issues/1872
I logged into the Minikube VM (minikube ssh), and edited the /usr/local/etc/ssl/certs/ca-certificates.crt file by appending my own ca cert.
I then restarted the docker daemon while still within the VM: sudo /etc/init.d/docker restart
This is not very elegant in that if I restart the Minikube VM, I need to repeat these manual steps each time.
As an alternative, I also attempted to set the --insecure-registry myurl.com:5000 option in the DOCKER_OPTS environment variable (restarted docker), but this didn't work for me.
An addon was recently added to Minikube that makes access to private container registries much easier:
minikube addons configure registry-creds
minikube addons enable registry-creds
For an HTTP registry, these steps work for me:
1) minikube ssh
2) edit /var/lib/boot2docker/profile and add --insecure-registry yourdomain.com:5000 to $EXTRA_ARGS (see the sketch after these steps)
3) restart the docker daemon sudo /etc/init.d/docker restart
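The exact contents of /var/lib/boot2docker/profile differ between minikube versions, but the idea is to append the flag inside the quoted EXTRA_ARGS value; an illustrative sketch only:
EXTRA_ARGS='
--label provider=virtualbox
--insecure-registry yourdomain.com:5000
'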
The Kubernetes documentation on this is pretty good.
Depending on where your private docker repository is hosted, the solution will look a bit different. The documentation explains how to handle each type of repository.
If you want an automated approach to handle this authentication, you will want to use a Kubernetes secret and specify imagePullSecrets for your Pod, as in the sketch below.
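A minimal sketch of a Pod that pulls from the private registry via such a secret; the Pod name and the secret name regcred are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: oururl.com:5000/myimage:v1
  imagePullSecrets:
  - name: regcred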
Sounds like your question has more to do with Docker than Kubernetes. The Docker CLI supports a number of TLS-related options. Since you already have the CA cert, something like this should work:
docker --tlsverify --tlscacert=/etc/ssl/ca/ca.pem pull oururl.com:5000/myimage:v1
You need to edit /etc/default/docker to look like so:
# Docker Upstart and SysVinit configuration file
#
# THIS FILE DOES NOT APPLY TO SYSTEMD
#
# Please see the documentation for "systemd drop-ins":
# https://docs.docker.com/engine/admin/systemd/
#
# Customize location of Docker binary (especially for development testing).
#DOCKERD="/usr/local/bin/dockerd"
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--insecure-registry oururl.com:5000"
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
# This is also a handy place to tweak where Docker's temporary files go.
#export DOCKER_TMPDIR="/mnt/bigdrive/docker-tmp"
Make sure to run sudo service docker stop and sudo service docker start to apply the changes. You should then be able to push/pull to your registry.
Log in to the minikube account
vi ~/.minikube/machines/<PROFILE_NAME>/config.json (in my case vi ~/.minikube/machines/minikube/config.json)
add the private registry to the InsecureRegistry attribute (JSON path: HostOptions.EngineOptions.InsecureRegistry)
start minikube again
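The relevant fragment of config.json then looks roughly like this (myregistry.example.com:5000 stands in for your own registry address; the surrounding fields are omitted):
"HostOptions": {
    "EngineOptions": {
        "InsecureRegistry": [
            "myregistry.example.com:5000"
        ]
    }
}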