I'm trying to deploy a Docker image from Docker Hub on OpenShift.
I created an image with a simple Spring Boot REST application:
https://hub.docker.com/r/ernst1970/my-rest
After logging into OpenShift and choosing the correct project, I do
oc new-app ernst1970/my-rest
And I get
W0509 13:17:28.781435 16244 dockerimagelookup.go:220] Docker registry lookup failed: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
error: Errors occurred while determining argument types:
ernst1970/my-rest as a local directory pointing to a Git repository: GetFileAttributesEx ernst1970/my-rest: The system cannot find the path specified.
Errors occurred during resource creation:
error: no match for "ernst1970/my-rest"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
I also tried with
oc new-app mariadb
But got the same error message.
I thought this might be a proxy problem. So I added the proxy to my .profile:
export http_proxy=http://ue73011:secret#dev-proxy.wzu.io:3128
export https_proxy=http://ue73011:secret#dev-proxy.wzu.io:3128
Unfortunately this did not change anything.
Any ideas why this is not working?
Your Docker daemon needs the proxy so it can reach Docker Hub. You can specify the proxy server by providing it as an environment variable for the Docker daemon.
Take a look at the official Docker documentation: https://docs.docker.com/config/daemon/systemd/
sudo mkdir -p /etc/systemd/system/docker.service.d
Add a file /etc/systemd/system/docker.service.d/http-proxy.conf which should contain the following:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/" "NO_PROXY=localhost,127.0.0.1,docker-registry.example.com,.corp"
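Since Docker Hub is reached over HTTPS, you will most likely also want an HTTPS_PROXY entry; systemd merges multiple Environment= lines in the same [Service] section, so you can simply add (proxy.example.com is a placeholder, substitute your own proxy):
Environment="HTTPS_PROXY=http://proxy.example.com:80/"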
Reload your changes and restart the Docker daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker
Verify by doing a simple "docker pull ... "
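For example, assuming a systemd-based host, you can first confirm the daemon actually picked up the variables and then test the pull:
sudo systemctl show --property=Environment docker
docker pull hello-world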
Related
Is there a way to build a Docker image from a Dockerfile that uses a base image from a local, insecure registry hosted in GitLab? For example, if my Dockerfile were:
FROM insecure.registry.local:/mygroup/myproject/image:latest
When I run docker build . I get the following error:
failed to solve with frontend dockerfile.v0: failed to create LLB definition:.... http: server gave HTTP response to HTTPS client
When I've been interacting with our registry and received similar HTTP/HTTPS errors, I would alter the Docker daemon configuration file to include:
...
"insecure-registries" : ["insecure.registry.local"]
...
and everything would work when executing the particular docker command that triggered it. I'm able to run docker login insecure.registry.local successfully. Is there a way, either in the Dockerfile itself, through the docker build command, or through some other alternative, to have it pull the image in the FROM statement from an insecure registry?
Depending on your version, you may need to include the scheme in the insecure registry definition. Newer versions of BuildKit should not have this issue, so an upgrade may also help.
...
"insecure-registries" : [
"insecure.registry.local",
"http://insecure.registry.local"
]
...
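Put together, a minimal /etc/docker/daemon.json might look like the sketch below (keep any other settings you already have in that file), followed by a daemon restart:
{
    "insecure-registries" : [
        "insecure.registry.local",
        "http://insecure.registry.local"
    ]
}
sudo systemctl restart docker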
Unfortunately there is not that much information about the actual error.
So here are a couple of things that may fix the issue you described:
Ensure your file is called Dockerfile (only the D is supposed to be capitalized)
Reload and restart the docker daemon after your changes (sudo systemctl daemon-reload && sudo systemctl restart docker)
Don't use Docker BuildKit (export DOCKER_BUILDKIT=0 && export COMPOSE_DOCKER_CLI_BUILD=0 && docker build .)
In an Ubuntu 18.04 VM:
I am behind a proxy, and I've set up the Docker configuration with the same proxy.
I created an Azure container registry, and docker pull from that registry works.
But when trying to:
$docker run node:6
I get the error:
"docker: Error response from daemon: Get https://registry-1.docker.io/v2/: x509: certificate signed by unknown authority."
I've added the registry to /etc/docker/daemon.json:
{
"insecure-registries": ["registry-1.docker.io","myazureContainerRegistry.azurecr.io"]
}
By doing the above step, "$docker run myazureContainerRegistry.azurecr.io/myimage:tag" works but "$docker run node:6" still gives the certificate error.
I've added the certificate for "*.docker.io" to /etc/docker/certs.d/docker.io and also to /usr/local/share/ca-certificates (sudo update-ca-certificates), but it still doesn't work.
I've also tried to:
$curl -k https://registry-1.docker.io/
$wget https://registry-1.docker.io/ --no-check-certificate
Both of these commands work, but with Docker (to run/pull node:6) I still get the certificate error.
The output of "$docker --version" is: "Docker version 18.09.2"
This is what my ~/.docker/config.json looks like (screenshot omitted).
I expect "docker run node:6" to pull the image successfully but it actually gives the error
For your issue: first of all, you need to have the credentials in ~/.docker/config.json. Then you can pull the image from the registry without logging in, and you can run the container without pulling the image first. For you, the command looks like this:
docker run registry-1.docker.io/node:6
On my side, the config.json looks similar (screenshot omitted), and I can execute the command successfully (screenshot omitted). The URI of the registry in Docker Hub is https://index.docker.io/v1/charlesjunqiang.
Update
If you use a certificate file to authenticate to the Docker registry, then you should take some steps to authenticate the client machine against it.
First:
Add the certificate file to the directory /usr/local/share/ca-certificates/docker-dev-cert/ with the name yourname.crt, then execute the commands:
sudo update-ca-certificates
sudo service docker restart
Second:
Create a directory under /etc/docker/certs.d with the same name as the registry, for example myregistry.azurecr.io. Then add the certificate file to it with the name yourname.cert, along with the .key file that was automatically created when you created the certificate.
Then you can log in to the registry and run the command docker run registry-1.docker.io/node:6 as you want.
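For reference, a sketch of the layout Docker expects under /etc/docker/certs.d (file names follow Docker's documented convention of ca.crt, client.cert, and client.key; myregistry.azurecr.io is just an example registry name):
/etc/docker/certs.d/
    myregistry.azurecr.io/
        ca.crt
        client.cert
        client.key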
I am trying to set up a Bitbucket pipeline that uses a docker run statement. But the build fails with the following error message:
docker: Error response from daemon: authorization denied
Here is the pipeline configuration:
pipelines:
  default:
    - step:
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t solc .
          # Test the solidity files in project
          - docker run solc
Question: I did not perform any operation requiring authorization. Why is the error message talking about authorization?
You are running docker commands in a shared environment. As of the time of this question, Bitbucket does not allow you to run docker run commands in that environment for security purposes. The docker commands you can run (as of the time of this question) are:
docker login
docker build
docker tag
docker pull
docker push
docker version
Docker is a client/server application. You are running the client commands, and Bitbucket has secured their environment at the dockerd daemon level.
You can see the current capabilities of their docker integration from their documentation which has been extended since this question was first answered. As of the time of this update, it filters privileged containers and mounting host volumes outside of a predefined subdirectory.
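If the checks can run inside the image itself rather than via docker run, one possible workaround is to use your image as the step image. A sketch, where myaccount/solc is a hypothetical image you have already pushed to a registry:
pipelines:
  default:
    - step:
        image: myaccount/solc
        script:
          - solc --version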
All,
I am using DCOS and the associated Jenkins.
My company is having a proxy for any external traffic.
Jenkins is running properly and can access the internal network as well as any external network.
I can get jobs to curl a URL on the internet if I set the HTTP proxy. I can pass this proxy to the mesosphere/jenkins-dind:0.3.1 container as an environment variable; however, I can't run any docker pull or docker run while in docker-in-docker mode.
I managed to reproduce the issue on one of the agent box.
sudo docker run hello-world
Hello from Docker!
This works!!
However, sudo docker run --privileged mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world" will fail with
docker: Error while pulling image: Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate is valid for FG3K6C3A13800607, not index.docker.io.
This typically shows that the Docker daemon does not have access to the proxy.
Do you know how to ensure that the dind is getting access to the proxy settings?
Antoine
This error can also manifest itself if the Docker daemon is unauthenticated against your registry but it looks like you're running against the public image, so that's not likely to be the problem.
You could try creating a new parameter on the Jenkins node (see the instructions here for an example of how to set an environment variable called DOCKER_EXTRA_OPTS: https://docs.mesosphere.com/1.8/usage/service-guides/jenkins/advanced-configuration/).
In this case, we want to do the same (with Name env) but with the contents of Value set to something like HTTP_PROXY=http://proxy.example.com:80/.
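To test the idea directly on the agent box, a rough sketch would be to pass the proxy straight into the dind container as environment variables (this assumes wrapper.sh forwards its environment to the inner Docker daemon, and proxy.example.com is a placeholder):
sudo docker run --privileged \
  -e HTTP_PROXY=http://proxy.example.com:80/ \
  -e HTTPS_PROXY=http://proxy.example.com:80/ \
  mesosphere/jenkins-dind:0.3.1 wrapper.sh "docker run hello-world"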
Is there a gcloud API or other command line interface (CLI) to access the list of published container images in the private Google Container Registry? (That is the container registry inside a Google Cloud Platform project)
gcloud container does not seem to help:
$ gcloud container
Usage: gcloud container [optional flags] <group | command>
  group may be          clusters | operations
  command may be        get-server-config

Deploy and manage clusters of machines for running containers.

flags:
  --zone ZONE, -z ZONE  The compute zone (e.g. us-central1-a) for the cluster

global flags:
  Run `gcloud -h` for a description of flags available to all commands.

command groups:
  clusters              Deploy and teardown Google Container Engine clusters.
  operations            Get and list operations for Google Container Engine clusters.

commands:
  get-server-config     Get Container Engine server config.
I also don't want to use gcloud docker to list images because this wants to connect to a particular docker daemon that I don't have. Unless there is a way to tell gcloud docker to connect to a remote public docker daemon that can read the private containers pushed to the registry through my project.
We just released a new command to list the images in your repository! You can try it out with:
gcloud alpha container images list --repository=gcr.io/$MYREPOSITORY
If you want to see the specific tags for an image you can use:
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE
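Since the original goal was to check whether a particular version is present, the standard gcloud list flags should apply here as well (a sketch; v1.0 is an example tag, and you should verify the flags against your gcloud version with --help):
gcloud alpha container images list-tags gcr.io/$MYREPOSITORY/$MYIMAGE --filter='tags:v1.0'
An empty result means no image carries that tag.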
The answer given by Robert Bailey is good for certain tasks, but might be missing what you specifically want to do. Nonetheless, your comments in reply to his answer are not so much faults of his answer as of your own understanding of what the commands that "fail" actually do.
As far as your second comment,
Using docker I get the following error (for the reasons mentioned
above; I also edited the question): Cannot connect to the Docker daemon. Is the docker daemon running on this host?
This is a result of the docker daemon not running. Check if it's running via ps aux | grep docker. You can refer to the Docker documentation to determine how to properly install and run it.
As far as your first comment,
Using curl I get: {"errors":[{"code":"DENIED","message":"Failed to read tags for repository '<my_project>/<my_image>'"}]}. I have to
authenticate somehow to access the images in a private registry. I
don't want to use docker because that means I have to have a docker
daemon available. I only want to see if a container image with a
particular version is in the Container Registry. So what I need is an
API to the Container Registry in the Google Developer Console.
You wouldn't be able to curl the image unless it was public, as mentioned in Robert's latest comment, or unless you somehow provided the right OAuth headers during the curl invocation.
You should use gcloud docker to attempt to list the images in the registry, as you would for other docker registries. The gcloud container command group is the wrong one for your desired task. You can see below an output from gcloud version 96.0.0 (latest as of this comment) for the docker command group:
$ gcloud docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker daemon [ --help | ... ]
docker [ --help | -v | --version ]
A self-sufficient runtime for containers.
Options:
--config=~/.docker Location of client config files
-D, --debug=false Enable debug mode
--disable-legacy-registry=false Do not contact legacy registries
-H, --host=[] Daemon socket(s) to connect to
-h, --help=false Print usage
-l, --log-level=info Set the logging level
--tls=false Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem Path to TLS certificate file
--tlskey=~/.docker/key.pem Path to TLS key file
--tlsverify=false Use TLS and verify the remote
-v, --version=false Print version information and quit
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on a container or image
kill Kill a running container
load Load an image from a tar archive or STDIN
login Register or log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
network Manage Docker networks
pause Pause all processes within a container
port List port mappings or a specific mapping for the CONTAINER
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart a container
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save an image(s) to a tar archive
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop a running container
tag Tag an image into a repository
top Display the running processes of a container
unpause Unpause all processes within a container
version Show the Docker version information
volume Manage Docker volumes
wait Block until a container stops, then print its exit code
Run 'docker COMMAND --help' for more information on a command.
You should use gcloud docker search gcr.io/project-id to check which images are in the repository. gcloud has your credentials, so it can talk to the private registry as long as you're authenticated as an appropriate user on the project.
Finally, as an added resource: The Cloud Platform docs have a whole article about working with Google Container Registry.
If you know the project that is hosting the images (e.g. google-containers) you can list images with
gcloud docker search gcr.io/google_containers
For an individual image (e.g. the pause image in the google-containers project), you can check the versions with
curl https://gcr.io/v2/google-containers/pause/tags/list
I've just found a far simpler way to check for specific images. Once you have authenticated gcloud, use it to generate access tokens for reading from your private registry:
curl -u "oauth2accesstoken:$(gcloud auth print-access-token)" https://gcr.io/v2/<projectName>/<imageName>/tags/list
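To check for one particular tag, you can feed the same response into a JSON tool (assuming jq is installed):
curl -s -u "oauth2accesstoken:$(gcloud auth print-access-token)" \
  https://gcr.io/v2/<projectName>/<imageName>/tags/list | jq '.tags'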
My best solution so far, without having a local Docker available and without being able to connect to a remote Docker (which would still require at least the local Docker client, though not a running local daemon), is to SSH into a Container Cluster instance that runs Docker, run the search there, and capture the result in my original script:
gcloud compute ssh <container_cluster_instance> -C "sudo gcloud docker search ..."
Of course, to avoid all the verbose output (like SSH/terminal welcome messages), I use some arguments to silence the execution a bit:
gcloud compute ssh --ssh-flag="-q" --ssh-flag="-o LogLevel=quiet" "$INSTANCE_NAME" -C "sudo gcloud docker search ..."