I'm trying to deploy a Docker image to a remote server using docker-compose, but I'm getting an authentication issue:
Command:
docker-compose -H "ssh://root@host_ip" --tlscacert "~/.ssh/private_key" ps
Output:
sudo docker-compose -H "ssh://root@jenkins.evercam.io" --tlscacert "~/.ssh/id_jenkins" -f evercam_camera_server/docker-compose.yml ps
Traceback (most recent call last):
File "bin/docker-compose", line 3, in <module>
File "compose/cli/main.py", line 67, in main
File "compose/cli/main.py", line 123, in perform_command
File "compose/cli/command.py", line 69, in project_from_options
File "compose/cli/command.py", line 132, in get_project
File "compose/cli/docker_client.py", line 43, in get_client
File "compose/cli/docker_client.py", line 170, in docker_client
File "docker/api/client.py", line 164, in __init__
File "docker/transport/sshconn.py", line 113, in __init__
File "docker/transport/sshconn.py", line 121, in _connect
File "paramiko/client.py", line 446, in connect
File "paramiko/client.py", line 764, in _auth
File "paramiko/client.py", line 740, in _auth
File "paramiko/transport.py", line 1580, in auth_publickey
File "paramiko/auth_handler.py", line 250, in wait_for_response
paramiko.ssh_exception.AuthenticationException: Authentication failed.
[28861] Failed to execute script docker-compose
docker-compose version 1.27.4, build 40524192
docker version 20.10.5, build 55c4c88
ubuntu version 18.04
I tried a more recent version of docker-compose, 1.28.5: it is able to reach the given host, but strangely it asks for the password five times!
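For reference, --tlscacert expects a TLS CA certificate and has no effect on SSH key selection; with an ssh:// host the key is picked up through the usual SSH mechanisms (ssh-agent, ~/.ssh/config, or the default key names). A minimal sketch of the setup that normally avoids both the failure and the repeated prompts, assuming ~/.ssh/id_jenkins is the key authorized for root on the host:
eval "$(ssh-agent -s)"             # start an agent for this shell
ssh-add ~/.ssh/id_jenkins          # load the key so every connection docker-compose opens can reuse it
ssh root@host_ip true              # confirm key-based login works before involving docker-compose
docker-compose -H "ssh://root@host_ip" -f docker-compose.yml ps
Note that running docker-compose under sudo switches to root's SSH environment, so the agent (or an equivalent Host entry with an IdentityFile line in ~/.ssh/config) has to be set up for whichever user actually executes the command. The five password prompts seen with 1.28.5 are consistent with docker-compose opening several SSH connections, each of which has to authenticate on its own when no agent or key is available.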
Given this Dockerfile:
FROM python:3.8-slim-buster
EXPOSE 5000
ADD . /data/sample.csv
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
WORKDIR /app
COPY . /app
I can run this Flask app on my local machine with no errors. Pandas grabs the data from the CSV and displays it in the return statement of the function below:
from flask import Flask
import pandas as pd
app = Flask(__name__)
df = pd.read_csv("data\sample.csv")
lst_one = df['col1'].to_list()
lst_two = df['col2'].to_list()
lst_three = df['col3'].to_list()
lang = "World!"
@app.route('/')
def hello_world():
return f'Hello, {lang} {lst_one}, {lst_three}'
However, after creating the Dockerfile and requirements.txt and building the image, this error appears when running the container. (Notice how the error message has the file path slashes going in the opposite direction from those in my code???)
[2021-04-30 01:34:04 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2021-04-30 01:34:04 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2021-04-30 01:34:04 +0000] [1] [INFO] Using worker: sync
[2021-04-30 01:34:04 +0000] [8] [INFO] Booting worker with pid: 8
[2021-04-30 01:34:05 +0000] [8] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 119, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/app/app.py", line 5, in <module>
df = pd.read_csv("data\sample.csv")
File "/usr/local/lib/python3.8/site-packages/pandas/io/parsers.py", line 610, in read_csv
return _read(filepath_or_buffer, kwds)
File "/usr/local/lib/python3.8/site-packages/pandas/io/parsers.py", line 462, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/usr/local/lib/python3.8/site-packages/pandas/io/parsers.py", line 819, in __init__
self._engine = self._make_engine(self.engine)
File "/usr/local/lib/python3.8/site-packages/pandas/io/parsers.py", line 1050, in _make_engine
return mapping[engine](self.f, **self.options) # type: ignore[call-arg]
File "/usr/local/lib/python3.8/site-packages/pandas/io/parsers.py", line 1867, in __init__
self._open_handles(src, kwds)
File "/usr/local/lib/python3.8/site-packages/pandas/io/parsers.py", line 1362, in _open_handles
self.handles = get_handle(
File "/usr/local/lib/python3.8/site-packages/pandas/io/common.py", line 642, in get_handle
handle = open(
FileNotFoundError: [Errno 2] No such file or directory: 'data\\sample.csv'
[2021-04-30 01:34:05 +0000] [8] [INFO] Worker exiting (pid: 8)
[2021-04-30 01:34:05 +0000] [1] [INFO] Shutting down: Master
[2021-04-30 01:34:05 +0000] [1] [INFO] Reason: Worker failed to boot
I tried a different path, an absolute path, everything, and Docker just cannot recognize that this directory/file exists. The actual app I need to containerize eventually has to access data outside the container, but as a first step I cannot even get Docker to recognize this data WITHIN the app. So, all you Docker experts, please let me know what presumably simple thing I am missing here. Thank you so much.
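In case it helps: the backslash in "data\sample.csv" is a Windows-style separator, so on the Linux filesystem inside the container it becomes part of a single (non-existent) filename, and ADD . /data/sample.csv copies the whole build context to a path the app never opens; a forward-slash path relative to WORKDIR (or os.path.join / pathlib) is the usual fix. A quick way to inspect what actually ended up in the image, and to feed it data that lives outside the container as mentioned above (the image name and paths here are assumptions):
docker build -t myflaskapp .
docker run --rm --entrypoint ls myflaskapp -lR /app /data              # see what COPY and ADD actually produced
docker run --rm -p 5000:5000 -v "$(pwd)/data:/app/data" myflaskapp     # bind-mount host data over /app/data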
I'm learning the Symfony framework. I started the example given in the documentation's book. However, I have a problem with the command
symfony new --version=5.0-6 --book guestbook
Checking the book requirements succeeds, but the step that stops the Docker containers didn't work.
I got:
[WEB] Stopping Docker Containers: [ KO ]
[14654] Failed to execute script docker-compose
Traceback (most recent call last):
File "urllib3/connectionpool.py", line 677, in urlopen
File "urllib3/connectionpool.py", line 392, in _make_request
File "http/client.py", line 1252, in request
File "http/client.py", line 1298, in _send_request
File "http/client.py", line 1247, in endheaders
File "http/client.py", line 1026, in _send_output
File "http/client.py", line 966, in send
File "docker/transport/unixconn.py", line 43, in connect
PermissionError: [Errno 13] Permission denied
And a general exit status of 255.
I think this is a permissions problem with the script and docker-compose; could you enlighten me?
Thank you very much.
PS: I'm on Ubuntu 20.04
Finally, it was a permission issue.
On the Linux command line:
sudo groupadd docker
sudo usermod -aG docker $USER
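One follow-up worth noting: the new group membership only takes effect for new login sessions, so either log out and back in, or activate it in the current shell and verify:
newgrp docker      # start a shell with the docker group active
docker ps          # should now talk to /var/run/docker.sock without sudo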
I have a REST endpoint that is deployed in a docker container using
Dockerfile snippet
ENTRYPOINT service nginx start | tensorflow_model_server --rest_api_port=8501 \
--model_name= <modelname> \
--model_base_path=/<Saved model path>
The REST endpoint is for a Tensorflow model.
I am using NGINX to accept the requests and route them to the REST endpoint
nginx.conf snippet
listen 8080 deferred;
client_max_body_size 500M;
location /invocations {
proxy_pass http://localhost:8501/v1/models/<modelname>:predict;
}
location /ping {
return 200 "OK";
}
Docker desktop version: 2.1.0.5
OS: macOS High Sierra 10.13.4
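For context, a request routed through this proxy would look roughly like the following (the host, port, and payload shape are assumptions based on the snippets above; the actual input depends on the model's signature):
curl http://localhost:8080/ping                                                                       # answered directly by NGINX
curl -X POST http://localhost:8080/invocations -H "Content-Type: application/json" -d '{"instances": [[1.0, 2.0, 3.0]]}'   # proxied to TensorFlow Serving's :predict endpoint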
Invoking this service worked fine all this while, but starting yesterday it throws the Broken Pipe error below. Can someone please help?
Using TensorFlow backend.
Traceback (most recent call last):
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/site-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 1065, in _send_output
self.send(chunk)
File "/Users/ls_spiderman/.pyenv/versions/3.6.8/lib/python3.6/http/client.py", line 986, in send
self.sock.sendall(data)
BrokenPipeError: [Errno 32] Broken pipe
Description of the situation: I have an Ubuntu VM running on a remote computer B. I can access computer B via SSH from my host (computer A).
When I enter an interactive shell via SSH on computer B, I can execute all my docker commands without any trouble; the steps are the following:
ssh user@remote_IP (enters an interactive shell on the remote machine)
docker-compose run -d --name wfirst --user 1000:1000 container (creates a Docker container and starts it)
This works perfectly: my container is created, up and running.
However, if I try to run that command in a non-interactive way by writing in my host terminal:
ssh user@remoteIP "cd /path_to_docker_file;docker-compose run -d --name wfirst --user 1000:1000 container"
The command does not succeed: my container is not created and does not run. I was able to get more information by using the "--verbose" flag on the docker-compose command.
Here the interesting part of the output of the successful method :
compose.config.config.find: Using configuration files: ./docker-compose.yml
WARNING: compose.config.environment.__getitem__: The DISPLAY variable is not set. Defaulting to a blank string.
WARNING: compose.config.environment.__getitem__: The NO_PROXY variable is not set. Defaulting to a blank string.
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
urllib3.connectionpool._new_conn: Starting new HTTP connection (1): localhost:2375
urllib3.connectionpool._make_request: http://localhost:2375 "GET /v1.25/version HTTP/1.1" 200 758
compose.cli.command.get_client: docker-compose version 1.21.0, build unknown
docker-py version: 3.4.1
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.1c 28 May 2019
compose.cli.command.get_client: Docker base_url: http://localhost:2375
...
We can see that an HTTP connection to Docker is successfully established. The command then continues its execution and is able to create the container from the Docker image.
Here the output of the failing method :
compose.config.config.find: Using configuration files: ./docker-compose.yml
compose.config.environment.__getitem__: The DISPLAY variable is not set. Defaulting to a blank string.
compose.config.environment.__getitem__: The NO_PROXY variable is not set. Defaulting to a blank string.
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
docker.utils.config.find_config_file: Trying paths: ['/home/user/.docker/config.json', '/home/user/.dockercfg']
docker.utils.config.find_config_file: No config file found
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 384, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 380, in _make_request
httplib_response = conn.getresponse()
File "/usr/lib/python3.7/http/client.py", line 1344, in getresponse
response.begin()
File "/usr/lib/python3.7/http/client.py", line 306, in begin
version, status, reason = self._read_status()
File "/usr/lib/python3.7/http/client.py", line 267, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/usr/lib/python3.7/socket.py", line 589, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 367, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
raise value
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 386, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 306, in _raise_timeout
raise ReadTimeoutError(self, url, "Read timed out. (read timeout=%s)" % timeout_value)
urllib3.exceptions.ReadTimeoutError: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 11, in <module>
load_entry_point('docker-compose==1.21.0', 'console_scripts', 'docker-compose')()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 71, in main
command()
File "/usr/lib/python3/dist-packages/compose/cli/main.py", line 124, in perform_command
project = project_from_options('.', options)
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 41, in project_from_options
compatibility=options.get('--compatibility'),
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 121, in get_project
host=host, environment=environment
File "/usr/lib/python3/dist-packages/compose/cli/command.py", line 95, in get_client
version_info = six.iteritems(client.version())
File "/usr/lib/python3/dist-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/usr/lib/python3/dist-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/usr/lib/python3/dist-packages/docker/api/client.py", line 198, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
We can see that the HTTP connection cannot be established.
Why do I need to send my commands in a non-interactive way? Originally, I want to send those commands using Jenkins (I added an SSH plugin to Jenkins), and I noticed that the docker commands were not working (same output as shown in this post). After a couple of tests, I realised that when Jenkins uses SSH, it sends the commands in a non-interactive way:
- ssh user@remote_IP "commands_to_execute"
This non-interactive way is not an issue for simple commands, but it appears to be an issue for some docker commands which, I guess, need an interactive shell?
Has someone found a workaround to successfully execute docker commands in a non-interactive shell? Any help, hint, or alternative solution will be greatly appreciated, as I have tried a lot of things without any success so far.
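A quick way to see the difference between the two kinds of session is to compare the environment each one actually gets (a small sketch; the variable of interest here would be DOCKER_HOST):
ssh user@remote_IP 'env | grep -i docker'   # environment seen by a non-interactive command
ssh user@remote_IP                          # then, inside the resulting interactive shell, run:
env | grep -i docker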
Did you check whether the Docker engine is listening on TCP or on a Unix socket?
Maybe it isn't reading the right environment variables when you log in with ssh user@host "command".
If it's running on the Unix socket (the default), try:
ssh user@remoteIP "cd /path_to_docker_file;DOCKER_HOST=unix:///var/run/docker.sock docker-compose run -d --name wfirst --user 1000:1000 container"
Or if it is running on TCP, you can try:
ssh user@remoteIP "cd /path_to_docker_file;DOCKER_HOST=tcp://127.0.0.1:2375 docker-compose run -d --name wfirst --user 1000:1000 container"
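To confirm which case applies on the remote machine (Unix socket, TCP, or both), something along these lines should tell you (a sketch; 2375 is the conventional unencrypted TCP port):
ls -l /var/run/docker.sock     # present if the daemon serves the default Unix socket
ss -ltnp | grep 2375           # shows a listener if the daemon also serves TCP
Another workaround sometimes suggested, assuming the variable is only exported by the remote user's profile files (which a plain non-interactive ssh command never reads), is to force a login shell:
ssh user@remote_IP 'bash -lc "cd /path_to_docker_file && docker-compose run -d --name wfirst --user 1000:1000 container"'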
I successfully installed docker and nvidia-docker on Ubuntu 18.04.
I pulled this image from NVIDIA's GPU Cloud:
https://ngc.nvidia.com/catalog/containers/nvidia:caffe
and ran it with this command
nvidia-docker run -it --rm -v /home/stefan/Dropbox:/data -p 8888:8888 nvcr.io/nvidia/caffe:19.03-py2 sh
The container gives me a shell prompt and it seems to work, for example
# nvidia-smi
results in
Sat Mar 30 21:03:30 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.39 Driver Version: 418.39 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 105... On | 00000000:01:00.0 On | N/A |
| 20% 30C P8 N/A / 75W | 441MiB / 4038MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
It sees my wimpy GPU. I try to run Jupyter with this command:
#jupyter-notebook
but I get
[I 21:05:18.088 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 10, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "</usr/local/lib/python2.7/dist-packages/decorator.pyc:decorator-gen-7>", line 2, in initialize
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/notebook/notebookapp.py", line 1628, in initialize
self.init_webapp()
File "/usr/local/lib/python2.7/dist-packages/notebook/notebookapp.py", line 1407, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/local/lib/python2.7/dist-packages/tornado/tcpserver.py", line 143, in listen
sockets = bind_sockets(port, address=address)
File "/usr/local/lib/python2.7/dist-packages/tornado/netutil.py", line 168, in bind_sockets
sock.bind(sockaddr)
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 99] Cannot assign requested address
I know jupyter is installed in the container because when I type
#jupyter --version
I get
4.4.0
Typing
# jupyter
gives
usage: jupyter [-h] [--version] [--config-dir] [--data-dir] [--runtime-dir]
[--paths] [--json]
[subcommand]
jupyter: error: one of the arguments --version subcommand --config-dir --data-dir --runtime-dir --paths is required
I have several notebooks in the host directory I attached to the container
# ls
NBA.ipynb exponents.ipynb hello_deep_learning-master
but nothing seems to work
# jupyter NBA.ipynb
Error executing Jupyter command 'NBA.ipynb': [Errno 2] No such file or directory
# jupyter notebook NBA.ipynb
(same traceback as above, ending in socket.error: [Errno 99] Cannot assign requested address)
# jupyter-notebook NBA.ipynb
(same traceback again, ending in socket.error: [Errno 99] Cannot assign requested address)
I think it's a syntax issue because this works
docker run -it --rm -v ~/Dropbox:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-py3-jupyter
It starts the Jupyter server in the container, and in a browser I can open a notebook at 127.0.0.1, which shows a directory containing a folder named 'notebooks' with my Dropbox contents, just as expected, since I mounted my Dropbox folder as a volume in the command above.
But if I type this
docker run -it --rm -v ~/Dropbox:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:latest-py3-jupyter sh
I am in a shell but cannot start Jupyter. I get the same error as before with the nvcr.io/nvidia/caffe image. How can I start Jupyter AFTER I am in the running Docker container's shell?
I think I figured it out. At the shell prompt of the container, I type
jupyter notebook --ip=0.0.0.0 --allow-root
I'll leave this here in case any other noob like me has a similar problem. (Unless the moderator feels it should be edited or nuked)
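For anyone else landing here: inside the container Jupyter's default bind address apparently isn't usable, and binding to 0.0.0.0 also makes the server reachable through the port published with -p 8888:8888, while --allow-root is needed because the container runs as root. A slightly fuller invocation (the port and notebook directory are assumptions matching the docker run command above):
jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root --notebook-dir=/data
Whatever URL and token it prints can then be opened from the host at http://127.0.0.1:8888.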