nvidia-docker can't talk to http://localhost:3476/docker/cli/json

nvidia-docker can't talk to http://localhost:3476/docker/cli/json
Traceback (most recent call last):
File "/usr/local/bin/nvidia-docker-compose", line 43, in <module>
resp = request.urlopen('http://{0}/docker/cli/json'.format(args.nvidia_docker_host)).read().decode()
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 429, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 447, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1228, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1198, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [Errno 111] Connection refused>

A fresh install of nvidia-docker fixed this:
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
Then test it by running nvidia-smi in a container:
nvidia-docker run --rm nvidia/cuda nvidia-smi

I encountered this too: a customer couldn't get nvidia-docker-compose to run. It turned out that even after reinstalling docker and nvidia-docker, the query that nvidia-docker-compose makes to docker on localhost:3476 was getting no response (see the nvidia-docker-compose source).
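For reference, the failing request can be reproduced in isolation. This is a diagnostic sketch in Python 3 (the host:port is the default nvidia-docker-compose uses), not part of either tool:

```python
import urllib.request
from urllib.error import URLError

def plugin_reachable(host="localhost:3476", timeout=3):
    """Return True if the nvidia-docker-plugin REST endpoint answers."""
    url = "http://{0}/docker/cli/json".format(host)
    try:
        urllib.request.urlopen(url, timeout=timeout).read()
        return True
    except (URLError, OSError):
        # "Connection refused" lands here: nothing is listening on 3476.
        return False

if __name__ == "__main__":
    print("nvidia-docker-plugin reachable:", plugin_reachable())
```

If this prints False, the plugin service itself (not docker) is down, which is why reinstalling the nvidia-docker package helps.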
I managed to solve this by writing the docker-compose file by hand, as they turn out to be quite simple. Follow this example: replace 375.66 with your nvidia driver version, and add as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):
services:
  exampleservice0:
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
    environment:
      - EXAMPLE_ENV_VARIABLE=example
    image: company/image
    volumes:
      - ./disk:/disk
      - nvidia_driver_375.66:/usr/local/nvidia:ro
version: '2'
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
Then just run this hand-made docker-compose file with a classic docker-compose command.

Related

Airflow: DockerOperator fails with Permission Denied error

I'm trying to run a docker container via Airflow but getting Permission Denied errors. I have seen a few related posts, and some people seem to have solved it via sudo chmod 777 /var/run/docker.sock, which is a questionable solution at best, but it still didn't work for me (even after restarting docker). If anyone managed to solve this problem, please let me know!
Here is my DAG:
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator

args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2020, 6, 21, 11, 45, 0),
    'retries': 1,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    "docker",
    default_args=args,
    max_active_runs=1,
    schedule_interval='* * * * *',
    catchup=False
)

hello_operator = DockerOperator(
    task_id="run_docker",
    image="alpine:latest",
    command="/bin/bash echo HI!",
    auto_remove=True,
    dag=dag
)
And here is the error that I'm getting:
[2020-06-21 14:01:36,620] {taskinstance.py:1145} ERROR - ('Connection aborted.', PermissionError(13, 'Permission denied'))
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/packages/six.py", line 734, in reraise
raise value.with_traceback(tb)
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/home/airflow/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/lib/python3.6/http/client.py", line 1262, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1308, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1257, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/usr/local/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/home/airflow/.local/lib/python3.6/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', PermissionError(13, 'Permission denied'))
Here is my setup:
Dockerfile:
FROM apache/airflow
RUN pip install --upgrade --user pip && \
    pip install --user psycopg2-binary && \
    pip install --user docker
COPY airflow/airflow.cfg /opt/airflow/
docker-compose.yaml:
version: "3"
services:
  postgres:
    image: "postgres:9.6"
    container_name: "postgres"
    environment:
      - POSTGRES_USER=airflow
      - POSTGRES_PASSWORD=airflow
      - POSTGRES_DB=airflow
    ports:
      - "5432:5432"
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  initdb:
    image: learning/airflow
    entrypoint: airflow initdb
    depends_on:
      - postgres
  webserver:
    image: learning/airflow
    restart: always
    entrypoint: airflow webserver
    environment:
      - EXECUTOR=Local
    healthcheck:
      test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-webserver.pid ]"]
      interval: 1m
      timeout: 5m
      retries: 3
    ports:
      - "8080:8080"
    depends_on:
      - postgres
    volumes:
      - ./airflow/dags:/opt/airflow/dags
      - ./airflow/plugins:/opt/airflow/plugins
      - ./data/logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
  scheduler:
    image: learning/airflow
    restart: always
    entrypoint: airflow scheduler
    healthcheck:
      test: ["CMD-SHELL", "[ -f /opt/airflow/airflow-scheduler.pid ]"]
      interval: 1m
      timeout: 5m
      retries: 3
    depends_on:
      - postgres
    volumes:
      - ./airflow/dags:/opt/airflow/dags
      - ./airflow/plugins:/opt/airflow/plugins
      - ./data/logs:/opt/airflow/logs
      - /var/run/docker.sock:/var/run/docker.sock
Even though this question is old, my answer may still help other people having this problem.
I found an elegant (and functional) solution at the following link:
https://onedevblog.com/how-to-fix-a-permission-denied-when-using-dockeroperator-in-airflow/
Quoting the article:
There is a more elegant approach which consists of “wrapping” the file around a service (accessible via TCP).
--
From the above link, the solution is to:
Add an additional docker-proxy service that exposes the local docker socket (/var/run/docker.sock) at tcp://docker-proxy:2375 using socat:
version: '3.7'
services:
  docker-proxy:
    image: bobrik/socat
    command: "TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock"
    ports:
      - "2376:2375"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Then replace the kwarg docker_url='unix://var/run/docker.sock' with docker_url='tcp://docker-proxy:2375' in all DockerOperators.
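For the curious, what the socat container does is conceptually small: accept TCP connections and splice the bytes to the unix socket in both directions. A stripped-down illustration in Python (port and path are placeholders; real socat handles many edge cases this sketch ignores):

```python
import socket
import threading

def _pipe(src, dst):
    # Copy bytes one direction until EOF, then half-close the peer.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def start_proxy(unix_path, tcp_port=0):
    """Expose a unix socket over TCP, roughly what
    `socat TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock`
    does. Returns the TCP port actually bound (0 picks a free one)."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", tcp_port))
    listener.listen(5)

    def accept_loop():
        while True:
            client, _ = listener.accept()
            backend = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            backend.connect(unix_path)
            # One pair of threads per connection, like socat's fork mode.
            threading.Thread(target=_pipe, args=(client, backend), daemon=True).start()
            threading.Thread(target=_pipe, args=(backend, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return listener.getsockname()[1]
```

The benefit for Airflow is that the container no longer needs permission on the socket file itself, only network access to the proxy service.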
If the volume is already mapped into the container, run chmod on the HOST:
chmod 777 /var/run/docker.sock
Solved for me.
I ran into this issue on Windows (dev environment), using the puckel image.
Note that the file /var/run/docker.sock does not exist in this image; I created it and changed its owner to the airflow user that already exists in the puckel image:
RUN touch /var/run/docker.sock
RUN chown -R airflow /var/run/docker.sock
You can try running your image with:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image_name
I remember having issues similar to this. What I did, on top of what you have already done, was to dynamically add a docker group inside the container, with the GID of docker.sock, in a startup script like this:
#!/usr/bin/env bash
ARGS=$*

# Check if the docker sock is mounted
if [[ -S /var/run/docker.sock ]]; then
    GROUP=$(stat -c %g /var/run/docker.sock)
    groupadd -g "$GROUP" docker
    usermod -aG docker airflow
else
    echo "Docker unix sock not found. DockerOperators will not run."
fi

su airflow -c "/usr/bin/dumb-init -- /entrypoint $ARGS"
That way you don't touch the socket's permissions and the airflow user is still able to interact with it.
Some other considerations:
I had to redeclare the default user in the Dockerfile so the container starts as root.
The startup script then runs airflow as the airflow user.
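The key line in the script above is stat -c %g, which reads the numeric group id owning the socket. The same lookup in Python, demonstrated on an ordinary temp file since /var/run/docker.sock only exists where the socket is mounted:

```python
import os
import tempfile

def owning_gid(path):
    """Group id that owns `path` -- what `stat -c %g` prints."""
    return os.stat(path).st_gid

if __name__ == "__main__":
    # /var/run/docker.sock is the real target; shown here on a temp file.
    with tempfile.NamedTemporaryFile() as f:
        print("gid:", owning_gid(f.name))
```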
First things first: we need to mount /var/run/docker.sock as a volume, because it is the file through which the Docker client and Docker server communicate; in this case, it is what lets the DockerOperator() launch a separate Docker container from inside the running Airflow container. The unix domain socket requires either root permission or Docker group membership. Since the Airflow user is not root, we need to add it to the Docker group so that it gets access to docker.sock. For that you need to do the following:
1.1. Add a Docker group and your user to it in the terminal on your host machine (following the official Docker documentation)
sudo groupadd docker
sudo usermod -aG docker <your_user>
newgrp docker
1.2. Log out and log back in on your host machine
2.1. Get the Docker group id in the terminal on your host machine
cut -d: -f3 < <(getent group docker)
2.2. Add the Airflow user to this docker group (use the GID from the line above) in the Airflow's docker-compose.yaml
group_add:
  - <docker_gid>
3.1. Get your user id in the terminal on your host machine
id -u <your_user>
3.2. Set your AIRFLOW_UID to match your user id (use the UID from the line above) on the host machine and AIRFLOW_GID to 0 in the Airflow's docker-compose.yaml
user: "${AIRFLOW_UID:-50000}:0"
4.1. If you're creating your own Dockerfile for the separate container, add your user there
ARG UID=<your_uid>
ENV USER=<your_user>
RUN useradd -u $UID -ms /bin/bash $USER
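As an aside on step 2.1: getent group docker prints an /etc/group-style line, and cut -d: -f3 takes its third colon-separated field. The same parsing sketched in Python, with a made-up sample entry (not from a real system):

```python
def gid_from_group_line(line):
    """Third field of an /etc/group-style entry: name:passwd:gid:members."""
    return int(line.split(":")[2])

if __name__ == "__main__":
    sample = "docker:x:998:alice,bob"  # illustrative entry
    print(gid_from_group_line(sample))  # -> 998
```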
Add another leading / to /var/run/docker.sock (on the source side, the part before the :) in volumes, as below:
volumes:
  - //var/run/docker.sock:/var/run/docker.sock
In my case, putting 'sudo' before the command helped. I ran
sudo docker-compose up -d --build dev
instead of
docker-compose up -d --build dev
and it worked. The issue was a lack of rights.
Try
sudo groupadd docker
sudo usermod -aG docker $USER

podman-compose failing to compose

I realize podman-compose is still under development. I'm going to be replacing my docker stack with podman once I replace Debian with CentOS8 on my Poweredge server as part of my homelab. Right now I'm just testing out/playing around with podman on my Fedora machine.
OS: Fedora 32
KERNEL:5.6.12-300.fc32.x86_64
PODMAN: 1.9.2
PODMAN-COMPOSE: 0.1.5
PROBLEM: podman-compose is failing and I'm unable to ascertain the reason why.
Here's my docker-compose.yml:
version: "2.1"
services:
  deluge:
    image: linuxserver/deluge
    container_name: deluge
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      # - UMASK_SET=022 #optional
      # - DELUGE_LOGLEVEL=error #optional
    volumes:
      - /home/mike/test/deluge:/config
      - /home/mike/Downloads:/downloads
    restart: unless-stopped
When I run podman-compose up, here is the output:
[mike@localhost test]$ podman-compose up
podman pod create --name=test --share net
ce389be26589efe4433db15d875844b2047ea655c43dc84dbe49f69ffabe867e
0
podman create --name=deluge --pod=test -l io.podman.compose.config-hash=123 -l io.podman.compose.project=test -l io.podman.compose.version=0.0.1 -l com.docker.compose.container-number=1 -l com.docker.compose.service=deluge --network host -e PUID=1000 -e PGID=1000 -e TZ=America/New_York --mount type=bind,source=/home/mike/test/deluge,destination=/config --mount type=bind,source=/home/mike/Downloads,destination=/downloads --add-host deluge:127.0.0.1 --add-host deluge:127.0.0.1 linuxserver/deluge
Trying to pull registry.fedoraproject.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull registry.access.redhat.com/linuxserver/deluge...
name unknown: Repo not found
Trying to pull registry.centos.org/linuxserver/deluge...
manifest unknown: manifest unknown
Trying to pull docker.io/linuxserver/deluge...
Getting image source signatures
Copying blob a54f3db92256 done
Copying blob c114dc480980 done
Copying blob d0d29aaded3d done
Copying blob fa1dff0a3a53 done
Copying blob 5076df76a29a done
Copying blob a40b999f3c1e done
Copying config 31fddfa799 done
Writing manifest to image destination
Storing signatures
Error: error checking path "/home/mike/test/deluge": stat /home/mike/test/deluge: no such file or directory
125
podman start -a deluge
Error: unable to find container deluge: no container with name or ID deluge found: no such container
125
Then finally, when I quit via ctrl-c:
^CTraceback (most recent call last):
File "/home/mike/.local/bin/podman-compose", line 8, in <module>
sys.exit(main())
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 1093, in main
podman_compose.run()
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 625, in run
cmd(self, args)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 782, in wrapped
return func(*args, **kw)
File "/home/mike/.local/lib/python3.8/site-packages/podman_compose.py", line 914, in compose_up
thread.join(timeout=1.0)
File "/usr/lib64/python3.8/threading.py", line 1005, in join
if not self._started.is_set():
File "/usr/lib64/python3.8/threading.py", line 513, in is_set
def is_set(self):
KeyboardInterrupt
I'm not experienced enough to read through this and figure out what the problem is, so I'm hoping to learn from you all.
Thanks!
There is an error in your path:
volumes:
  - /home/mike/test/deluge:/config
/home/mike/test/deluge: no such file or directory
Check that the folder exists on the host.
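podman refuses to bind-mount a host path that does not exist, which is exactly what the stat error says. Creating the source directories before composing avoids it; a small Python sketch, demonstrated under a temp root rather than the real /home/mike paths:

```python
import os
import tempfile

def ensure_mount_sources(*paths):
    """Create each host directory that will be bind-mounted, if missing."""
    for p in paths:
        os.makedirs(p, exist_ok=True)

if __name__ == "__main__":
    # On the real host these would be /home/mike/test/deluge
    # and /home/mike/Downloads.
    root = tempfile.mkdtemp()
    ensure_mount_sources(os.path.join(root, "test/deluge"),
                         os.path.join(root, "Downloads"))
    print(sorted(os.listdir(root)))
```

On a shell, the one-liner equivalent is mkdir -p for each source path before running podman-compose up.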

How to call docker-compose run non-interactively over SSH?

Description of the situation: I have an Ubuntu VM running on remote computer B. I can access computer B via SSH from my host (computer A).
When I enter an interactive shell via SSH on computer B, I can execute all my docker commands without any trouble. The steps are the following:
ssh user@remote_IP (enters an interactive shell on the remote machine)
docker-compose run -d --name wfirst --user 1000:1000 container (creates the docker container and starts it)
This works perfectly, my container is mounted, up and running.
However, if I try to run that command in a non-interactive way by writing in my host terminal:
ssh user@remoteIP "cd /path_to_docker_file; docker-compose run -d --name wfirst --user 1000:1000 container"
The command does not succeed, my container is not mounted and does not run. I was able to get more information by using the "--verbose" flag on the docker-compose command.
Here is the interesting part of the output of the successful method:
compose.config.config.find: Using configuration files: ./docker-compose.yml
WARNING: compose.config.environment.__getitem__: The DISPLAY variable is not set. Defaulting to a blank string.
WARNING: compose.config.environment.__getitem__: The NO_PROXY variable is not set. Defaulting to a blank string.
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
urllib3.connectionpool._new_conn: Starting new HTTP connection (1): localhost:2375
urllib3.connectionpool._make_request: http://localhost:2375 "GET /v1.25/version HTTP/1.1" 200 758
compose.cli.command.get_client: docker-compose version 1.21.0, build unknown
docker-py version: 3.4.1
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.1c 28 May 2019
compose.cli.command.get_client: Docker base_url: http://localhost:2375
...
We can see that an HTTP connection to docker is successfully established. The command then continues and is able to create the container from the docker image.
Here is the output of the failing method:
compose.config.config.find: Using configuration files: ./docker-compose.yml
compose.config.environment.__getitem__: The DISPLAY variable is not set. Defaulting to a blank string.
compose.config.environment.__getitem__: The NO_PROXY variable is not set. Defaulting to a blank string.
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 384, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 380, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1344, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 306, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 267, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 367, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 386, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 306, in _raise_timeout
raise ReadTimeoutError(self, url, "Read timed out. (read timeout=%s)" % timeout_value)
urllib3.exceptions.ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 11, in <module>
load_entry_point('docker-compose==1.21.0', 'console_scripts', 'docker-compose')()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 71, in main
command()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 124, in perform_command
project = project_from_options('.', options)
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 41, in project_from_options
compatibility=options.get('--compatibility'),
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 121, in get_project
host=host, environment=environment
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 95, in get_client
version_info = six.iteritems(client.version())
File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 198, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
We can see that the HTTP connection cannot be established.
Why do I need to send my commands in a non-interactive way? Originally, I want to send these commands from Jenkins (I added an SSH plugin in Jenkins), and I noticed that the docker commands were not working (same output as shown in this post). After a couple of tests, I realised that when Jenkins uses SSH, it sends the commands in a non-interactive way:
ssh user@remote_IP "commands_to_execute"
This non-interactive way is not an issue for simple commands, but it appears to be an issue for some docker commands, which I guess require an interactive shell?
Has anyone found a workaround to successfully execute docker commands in a non-interactive shell? Any help, hint, or alternative solution will be greatly appreciated, as I have tried a lot of things without any success so far.
Did you check whether the docker engine is listening on TCP or on a unix socket?
Maybe the right environment variables aren't being read when you log in via ssh user@host "command".
If it's running on a unix socket (the default), try:
ssh user@remoteIP "cd /path_to_docker_file; DOCKER_HOST=unix:///var/run/docker.sock docker-compose run -d --name wfirst --user 1000:1000 container"
or, if it is running on TCP, you can try:
ssh user@remoteIP "cd /path_to_docker_file; DOCKER_HOST=tcp://127.0.0.1:2375 docker-compose run -d --name wfirst --user 1000:1000 container"
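The difference between the two shells usually comes down to what DOCKER_HOST resolves to; the docker client falls back to the unix socket when the variable is unset. A sketch of that fallback logic in Python (mirroring the documented default, not the docker client's actual source):

```python
import os

DEFAULT_DOCKER_HOST = "unix:///var/run/docker.sock"

def resolve_docker_host(env=None):
    """Where the docker client will connect: DOCKER_HOST if set,
    otherwise the default unix socket."""
    env = os.environ if env is None else env
    return env.get("DOCKER_HOST", DEFAULT_DOCKER_HOST)

if __name__ == "__main__":
    print(resolve_docker_host({}))  # the unix-socket default
    print(resolve_docker_host({"DOCKER_HOST": "tcp://127.0.0.1:2375"}))
```

Comparing env | grep DOCKER in the interactive and non-interactive shells is a quick way to confirm which case you are in.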

Docker compose returns error : UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)

I am trying to create a Redis container inside a Jenkins pipeline using Docker Compose. After installing docker-compose version 1.9.0, I run the following command:
docker-compose -f ./jenkins/docker-compose.yml run -rm redis
and my compose file looks like
version: '2.1'
services:
redis:
image: "redis:alpine"
When running this, I get the following error:
docker-compose $'\342\200\223f' ./jenkins/docker-compose.yml run $'\342\200\223rm' redis
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 62, in main
File "compose/cli/main.py", line 93, in dispatch
File "compose/cli/docopt_command.py", line 31, in parse
File "compose/cli/docopt_command.py", line 42, in get_handler
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 0: ordinal not in range(128)
docker-compose returned -1
How to fix this ?
Did you copy and paste your Jenkins config by chance? \342\200\223 is the octal representation of an "en dash", which is being used where you want a hyphen. Try adjusting your Jenkins config to use hyphens instead.
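You can verify this directly: U+2013 (en dash) encodes in UTF-8 as the bytes 0xE2 0x80 0x93, which is exactly the octal \342\200\223 in the error. A short Python check that flags such characters in a command line (the dash list here is illustrative, not exhaustive):

```python
def find_fancy_dashes(cmd):
    """Return (index, name) pairs for non-ASCII dash characters that
    should probably be a plain '-' in a shell command."""
    fancy = {"\u2013": "en dash", "\u2014": "em dash", "\u2212": "minus sign"}
    return [(i, fancy[ch]) for i, ch in enumerate(cmd) if ch in fancy]

if __name__ == "__main__":
    bad = "docker-compose \u2013f docker-compose.yml run \u2013rm redis"
    print("\u2013".encode("utf-8"))   # b'\xe2\x80\x93' -> octal 342 200 223
    print(find_fancy_dashes(bad))
```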

Odoo docker, how to create database from command lines?

I want to create a database from the command line with docker. I tried these lines:
$ docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db postgres:9.5
$ docker run -p 8069:8069 --name odoo --link db:db -t odoo -- -d test
But I got these errors
2017-05-17 05:58:37,441 1 INFO ? odoo: Odoo version 10.0-20170207
2017-05-17 05:58:37,442 1 INFO ? odoo: Using configuration file at /etc/odoo/odoo.conf
2017-05-17 05:58:37,442 1 INFO ? odoo: addons paths: ['/var/lib/odoo/addons/10.0', u'/mnt/extra-addons', u'/usr/lib/python2.7/dist-packages/odoo/addons']
2017-05-17 05:58:37,442 1 INFO ? odoo: database: odoo#172.17.0.2:5432
2017-05-17 05:58:37,443 1 INFO ? odoo.sql_db: Connection to the database failed
Traceback (most recent call last):
File "/usr/bin/odoo", line 9, in <module>
odoo.cli.main()
File "/usr/lib/python2.7/dist-packages/odoo/cli/command.py", line 64, in main
o.run(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 164, in run
main(args)
File "/usr/lib/python2.7/dist-packages/odoo/cli/server.py", line 138, in main
odoo.service.db._create_empty_database(db_name)
File "/usr/lib/python2.7/dist-packages/odoo/service/db.py", line 79, in _create_empty_database
with closing(db.cursor()) as cr:
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 622, in cursor
return Cursor(self.__pool, self.dbname, self.dsn, serialized=serialized)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 164, in __init__
self._cnx = pool.borrow(dsn)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 505, in _locked
return fun(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/odoo/sql_db.py", line 573, in borrow
**connection_info)
File "/usr/lib/python2.7/dist-packages/psycopg2/__init__.py", line 164, in connect
conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "172.17.0.2" and accepting
TCP/IP connections on port 5432?
What is the problem? Are there other solutions to do this?
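A likely cause of this "Connection refused" is ordering: docker run returns as soon as the postgres container starts, but the server inside it takes a few seconds before it accepts connections, so Odoo's immediate attempt to create the database fails. If that is the case, polling the port before launching the odoo container helps; a hedged sketch (the host and port are the values from the log):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP connect to host:port succeeds or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    # e.g. wait_for_port("172.17.0.2", 5432) between the two docker run lines
    pass
```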
