Docker not creating directory - docker

I am trying to create a nfs-server using the below command:
docker run -d --rm --privileged --name nfs-server -v /var/folders/nfs:/var/nfs phico/nfs-server:latest
After running this command, when I check /var/folders I don't see an nfs folder.
In the logs I see the following:
Starting NFS server ...
Not starting NFS kernel daemon: no support in current kernel. ...
(warning).
NFS server started and listening on 172.17.0.2
Docker Preferences shows that docker has File Sharing Permissions for /var/folders
Can somebody help me out?

The issue on macOS is not resolved yet, though I believe a similar solution exists there. On Ubuntu I had to install nfs-server and nfs-common for it to work:
sudo apt-get install nfs-common
sudo apt-get install nfs-server
You can follow the issue on the author's GitHub page. The author was super helpful and figured out the solution.
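Once the container is up on Ubuntu, a quick sanity check is to mount the export from the host. This is a sketch: the IP comes from the container logs above, and /var/nfs is assumed to be the export path configured in the image:
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 172.17.0.2:/var/nfs /mnt/nfs-test
If the mount succeeds, files written under /mnt/nfs-test should show up in /var/folders/nfs.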

Related

RAPIDS.ai dependencies cuml and cudf not found no matter how I install

I have followed every version of the instructions on the AWS-EC2 setup for RAPIDS.ai: https://rapids.ai/cloud#AWS-EC2
I can confirm that I am using the exact instance type in the instructions, and following the steps exactly.
When I try to use the docker approach, the --gpus all flag is not accepted.
When I try to use the conda approach, the install fails with the error:
PackageNotFoundError: Packages missing in current channels:
- glibc
I have tried (many) different solutions to both of these problems, and none of them seem to work. I really just need to test some Python code with cuml and cudf imports in a notebook. I've been at this for 7 hours (after giving up on my local machine and SageMaker).
You note that the --gpus all flag is not accepted, which suggests that you do not have the NVIDIA Docker runtime installed.
I followed the instructions you linked and ran into an issue where the sudo yum install -y nvidia-docker2 command failed; I needed to disable an Amazon yum repo that was causing some conflicts, as outlined in this issue.
$ sudo yum-config-manager --disable amzn2-graphics
$ sudo yum install -y nvidia-docker2
$ sudo yum-config-manager --enable amzn2-graphics
Once I'd done that and run sudo systemctl restart docker, I was able to start the RAPIDS container.
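Before launching it, a quick smoke test confirms the NVIDIA runtime is wired up correctly (a sketch; the CUDA base image tag here is just an example, use one compatible with your driver):
$ docker run --gpus all --rm nvidia/cuda:11.2.2-base-ubuntu18.04 nvidia-smi
If that prints the GPU table, the RAPIDS container should start: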
$ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 rapidsai/rapidsai:cuda11.2-runtime-ubuntu18.04-py3.7
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.download.nvidia.com/licenses/NVIDIA_Deep_Learning_Container_License.pdf
A JupyterLab server has been started!
To access it, visit http://localhost:8888 on your host machine.
Ensure the following arguments were added to "docker run" to expose the JupyterLab server to your host machine:
-p 8888:8888 -p 8787:8787 -p 8786:8786
Make local folders visible by bind mounting to /rapids/notebooks/host
(rapids) root@be7253bb4fdb:/rapids/notebooks#
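From that prompt you can test the imports the question is actually about (a minimal check; the version numbers will vary with the image tag):
(rapids) root@be7253bb4fdb:/rapids/notebooks# python -c "import cudf, cuml; print(cudf.__version__, cuml.__version__)"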
Turns out, the first AMI suggested in the documentation is not compatible. Use the NVIDIA Deep Learning one instead.

can't run docker in ubuntu 20.04

I have tried to install Docker following the same steps described on lots of sites, but it didn't work and always gave me this error message: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? I couldn't find a solution that works.
After that I experimented a bit with snap, and was surprised to discover that I could install Docker using sudo snap install docker. It looked like it worked, but when I ran sudo docker run -dt centos bash, I got the same error.
It's not only an issue with the CentOS image; I tried this with Kali, Mint, Ubuntu, and Fedora images, and none of them worked. The error is always the same.
Thanks for any help.
You need to mount /var/run/docker.sock
sudo docker run -dt -v /var/run/docker.sock:/var/run/docker.sock centos bash
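Note that mounting the socket only helps if a daemon is actually running on the host in the first place. Since you installed via snap, it's worth checking the snap-managed service (a sketch; the service name comes from the snap package):
sudo snap services docker
sudo snap start docker
For a regular apt install, the equivalent checks are sudo systemctl status docker and sudo systemctl start docker.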

Having Trouble Downloading Hyperledger fabric docker images

Background: I just downloaded Docker, docker-compose, Node.js, npm, and the Hyperledger samples from the official documentation. When I downloaded the Hyperledger sample networks, everything seemed to be going fine until the script tried pulling the Hyperledger Fabric docker images. This is the error message:
===> Pulling fabric Images
====> hyperledger/fabric-peer:2.1.0
Got permission denied while trying to connect to the Docker daemon socket at unix:///var
/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/images
/create?fromImage=hyperledger%2Ffabric-peer&tag=2.1.0: dial unix /var/run/docker.sock:
connect: permission denied
NOTE: I am using Ubuntu 18.04.4
I'm guessing: either the Docker service is not running, or your user does not have permission to access the Docker service (more likely).
Running your command as sudo is one way to fix it. Or have a look at this question: How can I use docker without sudo? (but be careful about the security trade-offs!)
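For reference, the non-sudo route from that linked question boils down to adding your user to the docker group and starting a fresh login session (a sketch; see the linked question for the trade-offs):
sudo groupadd docker            # the group usually already exists
sudo usermod -aG docker $USER
newgrp docker                   # or log out and back in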
Add sudo to the command you use to pull the Fabric binaries and images via curl:
sudo curl -sSL fabric-binaries-link | bash -s
A temporary solution would be to change the permissions of the docker.sock file.
Go to the terminal, type the following, and press enter:
sudo chmod 775 /var/run/docker.sock
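Keep in mind this really is temporary: the socket is recreated with its default permissions the next time the daemon starts, so the change does not survive a restart. You can inspect the current permissions with:
ls -l /var/run/docker.sock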
However, it is not advised to use the root user for installing software for fabric. Instead, you can do the following:
Create a new user
sudo adduser bibek
Add the user to the sudo group:
sudo usermod -aG sudo bibek
Switch to the new user:
su - bibek
Then you can install docker and docker-compose:
sudo apt-get install docker.io docker-compose
Add the user to the docker group, then start and enable docker:
sudo usermod -a -G docker $USER
sudo systemctl start docker
sudo systemctl enable docker
You can check if the installation worked by running:
docker run hello-world
Cheers!

install/access executable for existing docker container

I want to run an executable and all of its libraries from within my container. How do I do that?
For my Ubuntu 14.04 server, I can do sudo apt-get install tetex-base tetex-bin
In this case, however, someone already set up a docker container for me, and I need to be able to run the program from within the container.
I got it working with
docker exec -it containerName apt-get install tetex-base tetex-bin
See docs.
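Note that anything installed with docker exec lives only in that container's writable layer, so it disappears if the container is deleted and recreated from the image. If you want to keep the change, one option is to snapshot the container into a new image (a sketch; containerName and the mytetex tag are placeholders):
docker exec -it containerName apt-get update
docker exec -it containerName apt-get install -y tetex-base tetex-bin
docker commit containerName mytetex:latest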

Docker client execution

I have a very basic question regarding Docker.
I have a Docker host installed on ubuntuA.
To test this from the client (ubuntuB), does Docker need to be installed on the ubuntuB machine as well?
More precisely, only the Docker client needs to be installed on ubuntuB.
On ubuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
To run docker commands, you need to talk to the daemon on ubuntuA (port 2375 is used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
See more detail at http://docs.docker.com/articles/basics/
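For this to work, the daemon on ubuntuA must also be listening on TCP, not only on the local Unix socket. A sketch of the daemon invocation; in Docker releases of that era the daemon was started with docker -d (on current releases it is dockerd):
# docker -d -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
Be aware that an unauthenticated TCP socket gives full Docker (effectively root) access to anyone who can reach the port; Docker supports TLS on port 2376 for that reason.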
Yes,
you have to install Docker on both the client and the server.
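On modern Docker versions (18.09 and later), there's also a middle ground: keep the full install on both machines, but point the client at the remote daemon over SSH instead of an open TCP port (assuming SSH access and a user in the docker group on ubuntuA):
$ docker -H ssh://user@ubuntuA images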
