CentOS 7
Docker version 20.10.6, build 370c289
I try to run the image like this:
docker run -d --name sonarqube -p 9000:9000 -v sonarqube-conf:/opt/sonarqube/conf -v sonarqube-data:/opt/sonarqube/data -v sonarqube-logs:/opt/sonarqube/logs -v sonarqube-extensions:/opt/sonarqube/extensions sonarqube
But I get this error:
docker: Error response from daemon: driver failed programming external connectivity on endpoint sonarqube (asfsfdsfdsfdsfdsfdsfds): Error starting userland proxy: listen tcp6 [::]:9000: socket: address family not supported by protocol.
This blog post discusses the problem and solution:
For some reason IPv6 (the hint is tcp6) is screwing things up. The problem is that I had disabled IPv6 from the start on this host, mainly because of some concerns regarding routing and internet accessibility (I have a formal IPv6 subnet at home).
In your case, this is the solution which worked for a container of mine where I was seeing the same issue: bind the published port explicitly to the host's LAN IPv4 address (I used 172.16.18.93 in the following snippet):
docker run -d --name sonarqube -p 172.16.18.93:9000:9000 -v sonarqube-conf:/opt/sonarqube/conf -v sonarqube-data:/opt/sonarqube/data -v sonarqube-logs:/opt/sonarqube/logs -v sonarqube-extensions:/opt/sonarqube/extensions sonarqube
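If you prefer not to hard-code a LAN address, binding the published port to the IPv4 wildcard should also avoid the tcp6 listener (same idea as above; I have not tested it on this exact setup):
docker run -d --name sonarqube -p 0.0.0.0:9000:9000 -v sonarqube-conf:/opt/sonarqube/conf -v sonarqube-data:/opt/sonarqube/data -v sonarqube-logs:/opt/sonarqube/logs -v sonarqube-extensions:/opt/sonarqube/extensions sonarqube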
I found a solution.
Install Docker version 20.10.5, e.g. from the repo:
sudo yum install docker-ce-20.10.5 docker-ce-cli-20.10.5 containerd.io
And now the problem is gone.
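If you are unsure which builds your repo offers, you can list them first and pick the 20.10.5 entry:
sudo yum list docker-ce --showduplicates | sort -r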
So for those that run into this issue, the thing that solved it for me was to update Docker.
Reviewing the documentation, just grab the latest release of Docker: https://docs.docker.com/engine/install/ubuntu/
i.e.:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
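After the upgrade, a quick sanity check that the daemon is healthy again:
sudo docker version
sudo docker run --rm hello-world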
I have followed every version of the instructions on the AWS-EC2 setup for RAPIDS.ai: https://rapids.ai/cloud#AWS-EC2
I can confirm that I am using the exact instance type in the instructions, and following the steps exactly.
When I try to use the Docker approach, the --gpus all flag is not accepted.
When I try to use the conda approach, the install fails with the error:
PackageNotFoundError: Packages missing in current channels:
- glibc
I have tried (many) different solutions provided to solve both of these problems; none of them seem to work. I really just need to test some Python code with cuml and cudf imports in a notebook. I've been at this for 7 hours (after giving up on my local machine and SageMaker).
You note that the --gpus all flag is not accepted, which suggests that you do not have the NVIDIA Docker runtime installed.
I followed the instructions you linked, and I did run into an issue where the sudo yum install -y nvidia-docker2 command failed and I needed to disable an Amazon yum repo that was causing some conflicts, as outlined in this issue.
$ sudo yum-config-manager --disable amzn2-graphics
$ sudo yum install -y nvidia-docker2
$ sudo yum-config-manager --enable amzn2-graphics
Once I'd done that and run sudo systemctl restart docker, I was able to start the RAPIDS container.
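Before pulling the full RAPIDS image, you can sanity-check the NVIDIA runtime with a minimal CUDA container (the nvidia/cuda tag below is only an example; pick one that matches your driver version):
$ docker run --gpus all --rm nvidia/cuda:11.2.2-base-ubuntu18.04 nvidia-smi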
$ docker run --gpus all --rm -it -p 8888:8888 -p 8787:8787 -p 8786:8786 rapidsai/rapidsai:cuda11.2-runtime-ubuntu18.04-py3.7
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.download.nvidia.com/licenses/NVIDIA_Deep_Learning_Container_License.pdf
A JupyterLab server has been started!
To access it, visit http://localhost:8888 on your host machine.
Ensure the following arguments were added to "docker run" to expose the JupyterLab server to your host machine:
-p 8888:8888 -p 8787:8787 -p 8786:8786
Make local folders visible by bind mounting to /rapids/notebooks/host
(rapids) root@be7253bb4fdb:/rapids/notebooks#
Turns out, the first AMI suggested in the documentation is not compatible. Use the NVIDIA Deep Learning one instead.
I am trying to create an NFS server using the below command:
docker run -d --rm --privileged --name nfs-server -v /var/folders/nfs:/var/nfs phico/nfs-server:latest
After this command, when I check /var/folders I don't see the nfs folder.
In the logs I see the following:
Starting NFS server ...
Not starting NFS kernel daemon: no support in current kernel. ... (warning).
NFS server started and listening on 172.17.0.2
Docker Preferences shows that docker has File Sharing Permissions for /var/folders
Can somebody help me out?
The issue on macOS is not resolved, but I believe a similar solution exists for Mac. On Ubuntu I had to install nfs-server and nfs-common for it to work.
sudo apt-get install nfs-common
sudo apt-get install nfs-server
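If you still see the "no support in current kernel" warning after that, it is worth checking that the NFS kernel modules can actually be loaded on the host (a quick check, assuming a standard Ubuntu kernel):
sudo modprobe nfs nfsd
lsmod | grep nfs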
You can follow the issue on the author's GitHub page. The author was super helpful and figured out the solution.
I have a simple Docker container which runs just fine on my local machine. I was hoping to find an easy checklist for how I could publish and run my Docker container on cPanel. Any help? I am using a CentOS 7 server.
(iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 80 -j DNAT --to-destination 172.17.0.2:8888 ! -i docker0: iptables: No chain/target/match by that name.
and the port cannot be defined.
Yes, you can install Docker on cPanel/WHM just like on any other CentOS server/virtual machine.
Just follow these simple steps (as root):
1) yum install -y yum-utils device-mapper-persistent-data lvm2 (these should already be installed...)
2) yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3) yum install docker-ce
4) enable docker at boot (systemctl enable docker)
5) start docker service (systemctl start docker)
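To verify the installation, a quick smoke test with the standard hello-world image:
docker info
docker run --rm hello-world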
The guide above is for CentOS 7.x. Don't expect to find any references or options related to Docker in the WHM interface; you will be able to control Docker via the command line from an SSH shell.
I have some docker containers already running on my cPanel/WHM server and I have no issues with them. I basically use them for caching, proxying and other similar stuff.
And as long as you follow these instructions, you won't mess up any of your cPanel/WHM services/settings or current cPanel accounts/settings/sites/emails etc.
See references here.
Adding onto Tiago's comment:
Docker is now installed using the docker.io package, not docker-ce.
So skip step 2 and modify step 3 accordingly.
When I installed Docker initially, it showed version 1.0.1.
Since the current version is 1.4.1, I found and executed the following instructions:
$ sudo apt-get update
$ sudo apt-get install docker.io
$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main \
> /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker
Now, when I run docker version I get 1.4.1, but docker no longer works - it gives me this error:
root@8dedd2fff58e:/# docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
FATA[0000] Get http:///var/run/docker.sock/v1.16/version: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
What can I do to fix this, but retain the most current Docker version, 1.4.1?
/var/run/docker.sock will be created when you start the docker service:
systemd:
sudo systemctl start docker
upstart:
sudo service docker start
init.d:
sudo /etc/init.d/docker start
You might also need this if you get this error:
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
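Either way, once the service is up you can confirm the socket exists and the client can reach the daemon:
ls -l /var/run/docker.sock
sudo docker info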
I had the same issue on Mac OS X. Leaving my fix here in case it helps somebody:
Run the "Docker Quick Start Terminal"
In the target-directory, run eval "$(docker-machine env default)"
This fixes the issue for me
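If it still fails, docker-machine can tell you whether the default machine is actually running and which URL the client should be using:
docker-machine ls
docker-machine env default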
I was experiencing the same problem and I was able to find the solution here: https://docs.docker.com/articles/basics/.
It's always good to go back to foundations.
The problem is that the daemon might be listening on a different port instead of the default socket (unix:///var/run/docker.sock).
If you run "ps aux | grep docker" you should see the daemon running. At the end of the docker process's line you should also see a parameter -H={IpAddress}:{Port}. You should also see the path where the certificates are stored (the --tls parameters).
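The relevant line of the ps output will look something like this (the binary path, address, port, and cert path here are purely illustrative):
$ ps aux | grep docker
root      1234  0.1  1.2 ... /usr/bin/docker -d --tls --tlscert=/etc/docker/cert.pem -H=tcp://0.0.0.0:2376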
You have to instruct docker to connect to the tcp address specified in the -H parameter.
For example:
`docker --tls -H tcp://{IpAddress}:{Port} version`
Notice the --tls parameter; this is necessary if you instructed docker to run in secure mode.
You could avoid the verbosity of the command by setting environment variables.
export DOCKER_HOST="tcp://{IpAddress}:{Port}"
export DOCKER_TLS_VERIFY="1"
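If the daemon was started with TLS, the client also needs to know where the certificates live; DOCKER_CERT_PATH covers that (the path below is the usual default, adjust if yours differs):
export DOCKER_CERT_PATH="$HOME/.docker"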
Hope this helps.
Is docker running as a daemon?
Use service docker.io status or service docker status to check.
If not, start it and play with it.
On a fresh M1 MacBook I ran into this. Amazingly, the solution was simply to log in to the app using my Docker account details. Once I did that, I re-ran the failed command and it worked.
I have a very basic question regarding Docker.
I have a Docker host installed on ubuntuA.
So, to test this from the client (UbuntuB), should Docker be installed on the UbuntuB machine also?
The more correct answer is that only the Docker client needs to be installed on UbuntuB.
On UbuntuB, install the Docker client only; it is around 17 MB:
# apt-get update && apt-get install -y curl
# curl https://get.docker.io/builds/Linux/x86_64/docker-latest -o /usr/local/bin/docker
# chmod +x /usr/local/bin/docker
In order to run docker commands, you need to talk to the daemon on ubuntuA (port 2375 has been used since Docker 1.0):
$ docker -H tcp://ubuntuA:2375 images
or
$ export DOCKER_HOST=tcp://ubuntuA:2375
$ docker images
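Note this assumes the daemon on ubuntuA was started listening on TCP as well as on the local socket, e.g. (Docker 1.x daemon syntax, run as root on ubuntuA):
# docker -d -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock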
See more detail at http://docs.docker.com/articles/basics/
Yes, you have to install Docker on both the client and the server.