How to set DOCKER_HOST? - docker

I was following the django-shop tutorial from this link: https://django-shop.readthedocs.io/en/latest/tutorial/quickstart.html. I am very new to Docker, docker-compose and Linux.
I get this error:
ERROR: Couldn't connect to Docker daemon at http://127.0.0.1:2375 - is
it running?
If it's at a non-standard location, specify the URL with the
DOCKER_HOST environment variable.
When I execute these commands...
$ git clone --depth 1 https://github.com/awesto/django-shop
$ cd django-shop
$ export DJANGO_SHOP_TUTORIAL=commodity
$ docker-compose up --build -d
I tried to follow this tutorial and it didn't work.
EDIT:
I used this command to solve the problem:
$ sudo adduser razvan docker

As a general rule, never set DOCKER_HOST.
Given your error message, it looks like it might be set (incorrectly); see if things get better after you
unset DOCKER_HOST
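If you are not sure whether it is set in the first place, a quick check in your current shell (and in the usual shell startup files) could look like this:
echo "$DOCKER_HOST"   # empty output means the variable is not set
grep DOCKER_HOST ~/.bashrc ~/.profile ~/.bash_profile 2>/dev/null   # find where it might be exported
docker info   # after unsetting it, the CLI should use the default Unix socket again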
The two prominent exceptions are VM-based Docker environments (Docker Toolbox, Docker Machine, Kubernetes' minikube). In these cases there are helper scripts that can set it to the correct value:
eval $(docker-machine env) # Docker Machine, Docker Toolbox
eval $(minikube docker-env) # Minikube

Setting DOCKER_HOST tells every docker command-line invocation to use the HTTP API instead of the default local Unix socket.
By default the HTTP API is turned off:
$ sudo cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
You can add -H tcp://127.0.0.1:2375 to turn on the HTTP API on localhost.
More often you want to turn on the API for remote clients with -H tcp://0.0.0.0:2375 (!!! do this only behind a proper firewall !!!).
So you need to change the ExecStart line in /lib/systemd/system/docker.service to:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
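After editing the unit file, systemd has to reload its configuration and Docker has to be restarted before the new listener appears, for example:
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo ss -ntlp | grep 2375   # confirm the daemon now listens on 127.0.0.1:2375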

I am using Ubuntu 16.04, so I went to the end of the /home/user/.profile file and placed the unset DOCKER_HOST command there.
Then I sourced the file (source /home/user/.profile), logged out and back in, and Docker now works normally.

Related

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab run via Docker on a remote machine, but can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
    HostName <remote ip>
    User root
    ForwardAgent yes
    LocalForward 8888 localhost:8888
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter url back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere in SO, but no luck.
Is this something about connecting over SSH? The Jupyter server thinks it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more likely related to a permission issue with your volume mount.
Please try reviewing your Docker container logs, looking for permission-related errors. You can do that using the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
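It can also help to compare the UID the notebook server runs as with the owner of the mounted directory; a rough check, reusing the same container id, could be:
docker exec <container id> id
docker exec <container id> ls -ld /home/jovyan/work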
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided under Additional tips and troubleshooting commands for permission-related errors and, as suggested there, try launching the container with your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in that documentation, check whether the volume is properly mounted using the following command:
docker inspect <container_id>
In the obtained result note the value of the RW field which indicates whether the volume is writable (true) or not (false).
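If you only want the mount details rather than the full inspect output, a --format filter narrows it down, for example:
docker inspect --format '{{ json .Mounts }}' <container_id>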

Is there any "Podman Compose"?

I recently found out about Podman (https://podman.io). Having a way to use ordinary Linux processes instead of a daemon, and not having to run as root, caught my attention.
But I'm very used to orchestrating the containers running on my machine (in production we use Kubernetes) with docker-compose. And I truly like it.
So I'm trying to replace Docker. I will try to keep docker-compose and use podman as an alias for docker, since Podman uses the same syntax as Docker:
alias docker=podman
Will it work? Can you suggest any other tool? I really intend to keep my docker-compose.yml file, if possible.
Yes, that is doable now. Check podman-compose; that is one way of doing it. Another way is to convert the docker-compose YAML file to a Kubernetes deployment using Kompose. There is a blog post from Jérôme Petazzoni (@jpetazzo) on going from docker-compose to a Kubernetes deployment.
Update 6 May 2022: Podman now supports Docker Compose v2.2 and higher (see the Podman 4.1.0 release notes)
Old answer:
Running docker-compose with Podman as a normal user (rootless)
Requirement: Podman version >= 3.2.1 (released in June 2021)
Install the executable docker-compose
curl -sL -o ~/docker-compose https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)
chmod 755 ~/docker-compose
Alternatively you could also run docker-compose in a container image (see below).
Run
systemctl --user start podman.socket
Set the environment variable DOCKER_HOST
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
Run
~/docker-compose up -d
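As an optional sanity check that Compose is really talking to the Podman socket, the containers it starts should also show up in Podman:
~/docker-compose ps
podman ps   # the same containers should be listed here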
Running docker-compose with Podman as root
Requirement: Podman version >= 3.0 (released in February 2021)
Follow the same procedure but remove the flag --user
systemctl start podman.socket
Running docker-compose in a container image
Use the container image docker.io/docker/compose to run docker-compose:
podman \
run \
--rm \
--detach \
--env DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock \
--security-opt label=disable \
--volume $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock \
--volume $(pwd):$(pwd) \
--workdir $(pwd) \
docker.io/docker/compose \
--verbose \
up -d
(the flag --verbose is optional)
The same command with short command-line options on a single line:
podman run --rm -d -e DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock --security-opt label=disable -v $XDG_RUNTIME_DIR/podman/podman.sock:$XDG_RUNTIME_DIR/podman/podman.sock -v $(pwd):$(pwd) -w $(pwd) docker.io/docker/compose --verbose up -d
Regarding SELinux: Running Podman with SELinux is preferable from a security point of view, but I didn't get it to work on a Fedora 34 computer, so I disabled SELinux labeling for the container by adding the command-line option
--security-opt label=disable
Troubleshooting tips
Test the Docker REST API
A minimal check to see that the Docker REST API is working:
$ curl -H "Content-Type: application/json" \
--unix-socket $XDG_RUNTIME_DIR/podman/podman.sock \
http://localhost/_ping
OK$
Avoid short container image names
If any of your docker-compose.yaml or Dockerfile files contain a short container image name, for instance
$ grep image: docker-compose.yaml
image: mysql:8.0.19
$
$ grep FROM Dockerfile
FROM python:3.9
$
edit the files to use the whole container image name instead
$ grep image: docker-compose.yaml
image: docker.io/library/mysql:8.0.19
$
$ grep FROM Dockerfile
FROM docker.io/library/python:3.9
$
Most often short names have been used to reference DockerHub Official Images
(a catalogue) so a good guess would be to prepend the container image name with docker.io/library/
There are currently many different container image registries, not just DockerHub (docker.io). Writing the whole container image name is thus a good practice. Podman might complain otherwise depending on how Podman is configured.
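If many files need the change, a one-off search-and-replace can speed things up; a rough sketch (GNU sed, adjust the image names to the ones you actually use):
sed -i 's|image: mysql:|image: docker.io/library/mysql:|' docker-compose.yaml
sed -i 's|FROM python:|FROM docker.io/library/python:|' Dockerfile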
Rootless users can't bind to ports below 1024
If your docker-compose file publishes a host port below 1024, for instance
$ grep -A1 ports: docker-compose.yml
ports:
- 80:80
$
edit docker-compose.yaml so that the host port number is >= 1024, for instance 8080:
$ grep -A1 ports: docker-compose.yml
ports:
- 8080:80
$
An alternative solution is to adjust net.ipv4.ip_unprivileged_port_start with sysctl (see Shortcomings of Rootless Podman)
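As a sketch of that alternative, lowering the start of the unprivileged port range so that rootless containers may bind to port 80 could look like this (the file name under /etc/sysctl.d/ is just an example):
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf   # persist across reboots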
In case Systemd is missing
Most Linux distributions use Systemd, where you would preferably start the Podman service (which provides the REST API) by starting the Podman socket
systemctl --user start podman.socket
or
systemctl start podman.socket
but in case Systemd is missing you could also start the Podman service directly
podman system service --time 0 unix:/some/path/podman.sock
Systemd gives the extra benefit that the Podman service is started on demand with Systemd socket activation and stops after some time of inactivity.
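If you prefer not to start the socket by hand after every login, it can also be enabled; for the rootless case, something like:
systemctl --user enable --now podman.socket
loginctl enable-linger $USER   # optional: keep user services running while logged out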
Caveat: Swarm functionality is missing
A difference to Docker is that the functionality relating to Swarm is not supported when using docker-compose with Podman.
References:
https://www.redhat.com/sysadmin/podman-docker-compose
https://github.com/containers/podman/discussions/10644#discussioncomment-857897
Ensure Podman is installed on your machine.
You can install Podman Compose in a terminal with the following command:
pip3 install https://github.com/containers/podman-compose/archive/devel.tar.gz
cd into the directory your docker-compose file is located in
Run podman-compose up
See the following link for a decent introduction.

Connect to remote docker host

I have following scenario.
Two machines (physical machines):
One is Windows 10 with Docker for Windows installed, and the other is Ubuntu 18.04 with docker-ce installed.
I can run commands on each of them individually and that is fine.
I want to connect to the Ubuntu Docker host from Docker on the Windows machine, so that the Docker CLI on Windows points to the daemon on the Ubuntu host.
You will need to enable the Docker remote API on the Ubuntu Docker host by adding the settings below to daemon.json or to your startup script:
[root@localhost ~]# cat /etc/docker/daemon.json
{
"hosts": [ "unix:///var/run/docker.sock", "tcp://0.0.0.0:2376" ]
}
Once you restart Docker you can connect to the Docker host locally through the socket file and remotely on the listening port (2376).
Verify the listening port of docker on Ubuntu
[root@localhost ~]# netstat -ntlp | grep 2376
tcp6 0 0 :::2376 :::* LISTEN 1169/dockerd
Now you can connect to this Docker daemon from the Windows machine by setting the DOCKER_HOST env variable in Windows like this:
PS C:\Users\YellowDog> set DOCKER_HOST=tcp://<Ubuntu-Docker_Host-IP>:2376
PS C:\Users\YellowDog> docker ps
It will list the Docker containers running on the Ubuntu Docker host.
You can also do this through additional options to the service:
Find the original ExecStart line in docker.service:
systemctl status docker | grep load | grep -oP "\/.+service"
# --> /lib/systemd/system/docker.service
cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// $DOCKER_OPTS
Create a new file to store the daemon options:
sudo mkdir /etc/systemd/system/docker.service.d/
Add the following lines, with the -H unix:// -H tcp://0.0.0.0:2375 options, to /etc/systemd/system/docker.service.d/options.conf:
cat <<EOF > /etc/systemd/system/docker.service.d/options.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// \$DOCKER_OPTS -H unix:// -H tcp://0.0.0.0:2375
EOF
Here you need to pay attention to escaping the $DOCKER_OPTS variable if it exists.
Alternatively, create the file with your favorite editor, for example vim.
Now, reload the systemd daemon and restart the docker service:
# Reload the systemd daemon.
sudo systemctl daemon-reload
# Restart Docker.
sudo systemctl restart docker
Configuring your dev box to connect to the remote Docker daemon:
If you want to set DOCKER_HOST by default so it always connects remotely you can export it in your ~/.bashrc file.
Here's an example of that as a one-liner:
echo "export DOCKER_HOST=tcp://X.X.X.X:2375" >> ~/.bashrc && source ~/.bashrc
Or use it all at once:
DOCKER_HOST=tcp://X.X.X.X:2375 docker ps

How do i expose the Docker Remote API on Centos 7?

On Ubuntu I can go into /etc/init/docker.conf and put in DOCKER_OPTS='-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock' to get the JSON data to display in my browser, but how can I do it for CentOS?
I have tried creating the file /etc/sysconfig/docker and placing other_args="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock" inside it and restarting Docker, but it doesn't do anything.
The systemd unit installed by Docker's own package hardcodes the command line used to start the docker daemon:
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
[...]
There is no support for reading a file from /etc/sysconfig or elsewhere to modify the command line. Fortunately, systemd gives us the tools we need to change this behavior.
The simplest solution is probably to create the file /etc/systemd/system/docker.service.d/docker-external.conf (the exact filename doesn't matter; it just needs to end with .conf) with the following contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
And then:
systemctl daemon-reload
systemctl restart docker
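Once the daemon has been restarted, a quick way to confirm that the API answers on the new port (using the same port as in the example above):
curl http://localhost:4243/_ping      # should print: OK
curl http://localhost:4243/version    # returns JSON with the daemon version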
This is actually documented on the Docker website in this document, which includes instructions for a more flexible solution that will allow you to use files in /etc/sysconfig to control the daemon.
Yes, you can do the configuration thing. But how about a docker solution to a docker problem?
docker run -d \
--name sherpa \
-v /var/run/docker.sock:/tmp/docker.sock \
-p 2375:4550 \
djenriquez/sherpa --allow
Proxies access to the socket through port 2375 on localhost.
1. Edit /usr/lib/systemd/system/docker.service to add two parameters in the Service section:
# vim /usr/lib/systemd/system/docker.service
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
2. Reload the configuration, then restart Docker.
# systemctl daemon-reload
# systemctl restart docker
3. To check for success, see if you get a response like the following.
# ps -ef|grep docker
root 26208 1 0 23:51 ? 00:00:00 /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
Reference: Expose the Docker Remote API on CentOS 7?

How to add variable to docker daemon in CentOS?

I want to create a registry mirror in Docker. I read this tutorial, so I want to add the variable "--registry-mirror=http://10.0.0.2:5000" to the Docker daemon when it starts.
I succeeded on Mac by adding this line to /var/lib/boot2docker/profile:
EXTRA_ARGS="--registry-mirror=http://192.168.59.103:5555"
That works on Mac, so I tried to do the same thing on CentOS. I used the commands from this question:
sudo sed -i 's|other_args=|other_args=--registry-mirror=http://<my-docker-mirror-host> |g' /etc/sysconfig/docker
sudo sed -i "s|OPTIONS='|OPTIONS='--registry-mirror=http://<my-docker-mirror-host> |g" /etc/sysconfig/docker
sudo service docker restart
This makes my /etc/sysconfig/docker on CentOS look like this:
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker -d
OPTIONS=--selinux-enabled -H fd:// -g="/opt/apps/docker"
other_args="--registry-mirror=http://10.11.150.76:5555"
Then I restarted Docker using this command:
service docker restart
But the mirror didn't work on CentOS. I checked with the command:
ps -ef
It didn't add the variable to the Docker daemon. What is wrong?
In the /etc/sysconfig/docker file, change:
OPTIONS=--selinux-enabled -H fd:// -g="/opt/apps/docker"
into:
OPTIONS=--selinux-enabled -H fd:// -g="/opt/apps/docker" --registry-mirror=http://10.11.150.76:5555
I can't help you with other_args; I don't know that option.
If you installed Docker with yum install docker, you may run into a problem with the Docker service config file.
In that case, check your systemd service config file to see whether it uses other_args as a parameter to start Docker. By default the service config file should be placed at /usr/lib/systemd/system/docker.service. Edit it with any editor, check the ExecStart part, and add $other_args to it.
For example, ExecStart=/usr/bin/docker -d --selinux-enabled $other_args
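Once the daemon picks up the flag, you can verify it by checking the daemon command line again, or (on reasonably recent Docker versions) the docker info output:
ps -ef | grep -- --registry-mirror       # the flag should appear on the daemon command line
docker info | grep -i 'registry mirror'  # newer versions list configured mirrors here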
