Cloudera quickstart Docker: unable to run/start the container

I am using a Windows 10 machine with Docker for Windows, and I pulled the cloudera/quickstart:latest image. While trying to run it, I get the error below.
Can someone please suggest a fix?
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "exec: \"/usr/bin/docker-quickstart\": stat /usr/bin/docker-quickstart: no such file or directory"
My run command:
docker run --hostname=quickstart.cloudera --privileged=true -t -i cloudera/quickstart /usr/bin/docker-quickstart

The issue was that I had downloaded the image archive separately and created the image with these commands, which is not supported for Cloudera 5.10 and above.
tar xzf cloudera-quickstart-vm-*-docker.tar.gz
docker import - cloudera/quickstart:latest < cloudera-quickstart-vm-*-docker/*.tar
So I finally removed the Docker image and then pulled it properly:
docker pull cloudera/quickstart:latest
Now Docker is properly up and running.

If you downloaded the CDH v5.13 Docker image, the issue is most likely due to the structure of the image archive; in my case I found it to be cloudera*.tar.gz > cloudera*.tar > cloudera*.tar! It seems the packaging was done incorrectly, and the official documentation doesn't capture this either :( In that case, just perform one more level of extraction to get to the correct cloudera*.tar archive. This post from the Cloudera forum helped me.
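For reference, a rough sketch of that extra extraction level (the file names below are placeholders; use whatever each step actually produces):
tar xzf cloudera-quickstart-vm-*-docker.tar.gz        # level 1: .tar.gz -> .tar
tar xf <extracted-cloudera-archive>.tar               # level 2: the unexpected extra .tar
docker import <inner-cloudera-image>.tar cloudera/quickstart:latest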

Related

Unable to run docker container docker4dotnet/nanoserver

Learning Docker following a course on Udemy. I have all the prerequisites, such as Docker Desktop, and have switched to Windows containers. While trying to run a container using
docker container run docker4dotnet/nanoserver hostname
I get the error below:
Unable to find image 'docker4dotnet/nanoserver:latest' locally
latest: Pulling from docker4dotnet/nanoserver
b5c97e1d373f: Extracting [==================================================>] 103MB/103MB
docker: failed to register layer: re-exec error: exit status 1: output: hcsshim::ProcessBaseLayer \?\C:\ProgramData\Docker\windowsfilter\90f22cdfe817e491c24b8e26f35b4ec43c6477ce0c86cdbfb95a59e2606762a5: The semaphore timeout period has expired.
I am unable to figure it out. Can someone help with this?
NOTE: I tried switching to Linux containers, but it says
Unable to find image 'docker4dotnet/nanoserver:latest' locally
latest: Pulling from docker4dotnet/nanoserver
b5c97e1d373f: Downloading
docker: image operating system "windows" cannot be used on this platform.
NOTE 2: I even tried
docker run -d -p 8090:80 docker/getting-started
and it says the following, even though Windows containers are selected:
PS C:\WINDOWS\system32> docker run -d -p 8090:80 docker/getting-started
docker: Error response from daemon: operating system on which parent image was created is not Windows.
Use these lines in cmd:
docker pull mcr.microsoft.com/windows/nanoserver:20H2
docker container run mcr.microsoft.com/windows/nanoserver:20H2 hostname
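If you want to sanity-check which mode the daemon is in before pulling a Windows image, a quick check (this assumes Docker Desktop on Windows):
docker version --format "{{.Server.Os}}"
It should print windows after switching to Windows containers, and linux otherwise.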

NVIDIA Docker - initialization error: nvml error: driver not loaded

I'm a complete newcomer to Docker, so the following questions might be a bit naive, but I'm stuck and I need help.
I'm trying to reproduce some results in research. The authors just released code along with a specification of how to build a Docker image to reproduce their results. The relevant bit is copied below:
I believe I installed Docker correctly:
$ docker --version
Docker version 19.03.13, build 4484c46d9d
$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
However, when I try to check that my nvidia-docker installation was successful, I get the following error:
$ sudo docker run --gpus all --rm nvidia/cuda:10.1-base nvidia-smi
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: nvml error: driver not loaded\\\\n\\\"\"": unknown.
It looks like the key error is:
nvidia-container-cli: initialization error: nvml error: driver not loaded
I don't have a GPU locally and I'm finding conflicting information on whether CUDA needs to be installed before NVIDIA Docker. For instance, this NVIDIA moderator says "A proper nvidia docker plugin installation starts with a proper CUDA install on the base machine."
My questions are the following:
Can I install NVIDIA Docker without having CUDA installed?
If so, what is the source of this error and how do I fix it?
If not, how do I create this Docker image to reproduce the results?
Can I install NVIDIA Docker without having CUDA installed?
Yes, you can. The README states that nvidia-docker only requires the NVIDIA GPU driver and the Docker Engine to be installed:
Note that you do not need to install the CUDA Toolkit on the host system, but the NVIDIA driver needs to be installed
If so, what is the source of this error and how do I fix it?
That's either because you don't have a GPU locally, it's not an NVIDIA GPU, or something went wrong when the drivers were installed. If you have a CUDA-capable GPU, I recommend using the NVIDIA guide to install the drivers. If you don't have a GPU locally, you can still build an image with CUDA and then move it to a machine that has a GPU.
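For that last option, a minimal sketch of building on a machine without a GPU and moving the image to one that has an NVIDIA GPU (the image and file names are placeholders):
# on the machine without a GPU: build and export the image
docker build -t my-cuda-image .
docker save my-cuda-image | gzip > my-cuda-image.tar.gz
# on the GPU machine: load the image and run it with GPU access
docker load < my-cuda-image.tar.gz
docker run --gpus all --rm my-cuda-image nvidia-smi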
If not, how do I create this Docker image to reproduce the results?
The problem is that even if you manage to get rid of CUDA in the Docker image, there is software inside that requires it. In this case fixing the Dockerfile seems unnecessary to me: you can just ignore Docker and start fixing the code to run it on the CPU.
I think you need
ENV NVIDIA_VISIBLE_DEVICES=void
then
RUN your work
finally
ENV NVIDIA_VISIBLE_DEVICES=all
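Put together, a minimal Dockerfile sketch of that pattern (the base image and the RUN step are only placeholders):
FROM nvidia/cuda:10.1-base
# hide GPUs while the image is built, so no driver is needed at build time
ENV NVIDIA_VISIBLE_DEVICES=void
# placeholder for build steps that do not need a GPU
RUN apt-get update && apt-get install -y --no-install-recommends python3
# expose all GPUs again for containers started from the finished image
ENV NVIDIA_VISIBLE_DEVICES=all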

Docker Image running correctly on one machine but failing on another

When trying to install Istio 1.2.3 on my cluster using Helm, I encountered an issue with the istio/kubectl image used in the istio-init jobs, with the following error:
container_linux.go:295: starting container process caused "exec: \"kubectl\": executable file not found in $PATH"
docker: Error response from daemon: oci runtime error: container_linux.go:295: starting container process caused "exec: \"kubectl\": executable file not found in $PATH".
Running the kubectl command in my local Docker also gives the same error; however, on another machine it works correctly:
docker run <istio/kubectl-imageid> kubectl
What could cause this issue? And what would I need to change to overcome it?
It is definitely the same Docker image, and from my understanding a Docker image should work identically in different environments, assuming the same CPU architecture.
It turns out that when I copied the image across machines, I did a
docker import istio-kubectl.1.2.3.tar
instead of a
docker load -i istio-kubectl.1.2.3.tar
The difference according to the documentation is:
docker load: Load an image from a tar archive or STDIN
docker import: Import the contents from a tarball to create a filesystem image
Loading the image instead of importing corrected the observed issue.
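For reference, a sketch of the save/load round trip that preserves the image metadata (the repository/tag below is an assumption; use whatever docker images shows on the source machine):
# on the source machine
docker save -o istio-kubectl.1.2.3.tar istio/kubectl:1.2.3
# on the destination machine
docker load -i istio-kubectl.1.2.3.tar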

Input file not found in docker command on windows

Complete Docker noob here. I installed Docker Desktop on Windows and am trying to follow the commands in this link to set up the OSRM backend on my machine. I've downloaded the dataset for India (india-latest.osm.pbf) to D:/docker
and am running the commands from that location:
docker run -t -v "${PWD}:/data" osrm/osrm-backend osrm-extract -p /opt/car.lua /data/india-latest.osm.pbf
fails with
[error] Input file /data/india-latest.osm.pbf not found!
I just don't understand WHY it doesn't work. According to the OSRM documentation of the Docker command:
The file /data/india-latest.osm.pbf inside the container is referring
to "${PWD}/india-latest.osm.pbf" on the host.
But that's not the case; I am running from D:/docker, so it should find india-latest.osm.pbf no problem. This is really confusing to me, even though it must be something basic.
It was due to a bug in Docker: https://github.com/docker/for-win/issues/1712
When you change your Windows password, commands that access the host filesystem silently fail until you reauthenticate.
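A quick way to check whether the bind mount itself is the problem (any small Linux image works; alpine is used here just as an example):
docker run --rm -v "${PWD}:/data" alpine ls /data
If this lists nothing even though D:/docker contains the file, reauthenticate the shared drive in Docker Desktop as described in the linked issue and try again.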

Docker container not running

I have created a Docker image that runs a Python script, based on a CentOS image. This image works on the host system. Then I converted that image to tar.gz format. After that, when I imported that tar.gz file into the Docker host (on an Ubuntu system), it completed properly and the docker images list shows the image there. Then I tried to run the container in interactive mode using the following command:
$ docker run -it image_name /bin/bash
It throws the following error:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"exec: \\\"/bin/bash\\\": stat /bin/bash: no such file or directory\"\n".
The docker run -it image_name /bin/bash command works for all the other images on my system, though. I have tried almost everything, but got nothing apart from this error.
docker run -it image_name /bin/sh works for me! (Some Docker images, like Alpine, do not have /bin/bash.)
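If you're unsure which shells an image actually contains, a quick check (image_name is a placeholder, as above):
docker run --rm image_name ls -l /bin/sh /bin/bash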
I've just run into the same issue after updating Docker For Windows. It seems that it corrupted some image layers.
I cleared all the cached containers and images by running:
docker ps -qa|xargs docker rm -f
docker images -q|xargs docker rmi
The last command returned a few errors (some returned images didn't exist anymore).
Then I restarted the service and everything was running again.
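On newer Docker versions, roughly the same cleanup can usually be done in one step (note that this removes all stopped containers and all unused images):
docker system prune -a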
I had the same issue, and it got resolved after following the steps described in this post:
https://www.jamescoyle.net/how-to/1512-export-and-import-a-docker-image-between-nodes
Instead of saving the Docker image (I) as a .tar and importing it, we need to commit the exited container, based on the image (I), as a new image (N).
Then save the newly committed image (N) as a .tar file, for importing into the new environment.
Hope this helps...
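A sketch of that workflow (the container ID, image names, and file name are placeholders; the image is brought in with docker load on the target so its metadata is kept):
# commit the exited container as a new image (N)
docker commit <container_id> my_image:fixed
# save the committed image as a .tar file
docker save -o my_image_fixed.tar my_image:fixed
# on the target host, load it and run
docker load -i my_image_fixed.tar
docker run -it my_image:fixed /bin/bash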
