I have the Google Cloud SDK Docker image configured and running on my Windows machine after following this guide:
https://hub.docker.com/r/google/cloud-sdk/
I'm trying to run this command to list an S3 bucket:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls s3://bucketname
Authentication fails because the AWS keys are not set. I presume this is because the .boto file does not have aws_access_key_id and aws_secret_access_key set, but I can't figure out how to set those variables.
I tried running the following to generate a .boto file, but the bucket was only shared with me and I don't have its access keys:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil config -a
Am I missing something, or is there another way to set these AWS credentials? Maybe with gcloud config set?
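For reference, from what I can tell the section of the .boto file that gsutil expects looks roughly like this (placeholder values, not my real keys):
[Credentials]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>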
Here is the error log:
ERROR 1202 03:16:07.326810 utils.py] Caught exception reading instance data
Traceback (most recent call last):
File "/usr/lib/python3.7/urllib/request.py", line 1324, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/lib/python3.7/http/client.py", line 1260, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1306, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1255, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1030, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 970, in send
self.connect()
File "/usr/lib/python3.7/http/client.py", line 942, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/usr/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/utils.py", line 220, in retry_url
r = opener.open(req, timeout=timeout)
File "/usr/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/lib/python3.7/urllib/request.py", line 1352, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.7/urllib/request.py", line 1326, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 111] Connection refused>
ERROR 1202 03:16:07.328018 utils.py] Unable to read instance data, giving up
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil.py", line 122, in RunMain
sys.exit(gslib.__main__.main())
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 444, in main
user_project=user_project)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 783, in _RunNamedCommandAndHandleExceptions
_HandleUnknownFailure(e)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 640, in _RunNamedCommandAndHandleExceptions
user_project=user_project)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 412, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 683, in RunCommand
listing_helper.ExpandUrlAndPrint(storage_url))
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/utils/ls_helper.py", line 372, in ExpandUrlAndPrint
print_initial_newline=False)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/utils/ls_helper.py", line 449, in _RecurseExpandUrlAndPrint
bucket_listing_fields=self.bucket_listing_fields):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 540, in IterAll
expand_top_level_buckets=expand_top_level_buckets):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 174, in __iter__
fields=bucket_listing_fields):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 447, in ListObjects
headers=headers)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 166, in list_bucket
bucket = self.get_bucket(headers=headers)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 181, in get_bucket
conn = self.connect()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 117, in connect
**connection_args)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/s3/connection.py", line 205, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/connection.py", line 573, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/auth.py", line 1032, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['S3HmacAuthV4Handler'] Check your credentials
I edited the .boto file in the legacy credentials config and got this to work:
docker restart gcloud-config
docker exec -u 0 -it <container-id-here> /bin/bash
apt-get install nano
nano /root/.config/gcloud/legacy_credentials/***/.boto
Add the following under [Credentials]:
aws_access_key_id = ***
aws_secret_access_key = ***
Then run the original command again and enjoy:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls s3://bucketname
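An alternative I have not verified: since boto can also read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment, it may be enough to pass the keys straight into the run (placeholders shown, use your own keys):
docker run --rm -ti --volumes-from gcloud-config -e AWS_ACCESS_KEY_ID=<your-access-key-id> -e AWS_SECRET_ACCESS_KEY=<your-secret-access-key> google/cloud-sdk gsutil ls s3://bucketname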
Related
I have been trying to use a custom Dockerfile for mounting DAGs and plugins, as follows:
FROM apache/airflow:2.3.0-python3.7
COPY ./dags/ /opt/airflow/dags/
COPY ./plugins/ /opt/airflow/plugins/
COPY requirements.txt .
RUN pip install -r requirements.txt
EXPOSE 5555
which I am building as:
docker build -f base.dockerfile --pull --tag lqc-airflow:0.0.1 .
minikube image load lqc-airflow:0.0.1
and then doing a helm install
helm upgrade $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE --set images.airflow.repository=lqc-airflow --set images.airflow.tag=0.0.1
which, however, makes only the airflow-worker-0 pod fail with the following error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/celery_command.py", line 130, in worker
session = celery_app.backend.ResultSession()
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 109, in ResultSession
**self.engine_options)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/session.py", line 88, in session_factory
self.prepare_models(engine)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/session.py", line 72, in prepare_models
ResultModelBase.metadata.create_all(engine)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4745, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3007, in _run_ddl_visitor
with self.begin() as conn:
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2923, in begin
conn = self.connect(close_with_result=close_with_result)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3095, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 91, in __init__
else engine.raw_connection()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3174, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3145, in _wrap_pool_connect
e, dialect, self
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2004, in _handle_dbapi_exception_noconnection
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3141, in _wrap_pool_connect
return fn()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 419, in checkout
rec = pool._do_get()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 259, in _do_get
return self._create_connection()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 583, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/airflow/.local/lib/python3.7/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: Temporary failure in name resolution
I am just following the Airflow documentation on managing DAG files: https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html
Please note that there are no such name-resolution errors if I don't use my custom Dockerfile. Kindly help!
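For completeness, this is roughly how I pulled the worker error shown above (pod name taken from the default chart; adjust for your release and namespace):
kubectl get pods -n $NAMESPACE
kubectl logs airflow-worker-0 -n $NAMESPACE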
Thanks @Hussein for your support, but I was able to solve it myself. I saw that the logs of the airflow-migrations pod complained about an Alembic revision ID not being found. That led me to https://airflow.apache.org/docs/apache-airflow/stable/migrations-ref.html
where I could see which Airflow version carries that Alembic revision.
My revision ID was ecb43d2a1842, so the changes to my Dockerfile were:
FROM apache/airflow:2.4.3
COPY ./dags/ /opt/airflow/dags/
COPY ./plugins/ /opt/airflow/plugins/
COPY requirements.txt .
RUN pip install -r requirements.txt
So 2.4.3 was the catch.
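For anyone hitting the same mismatch: one way to see which Alembic revision your metadata database is actually on (a sketch, assuming the default Postgres backend and that you can reach its pod; names in angle brackets are placeholders) is to query the alembic_version table:
kubectl exec -it <postgres-pod> -n $NAMESPACE -- psql -U <db-user> -d <airflow-db> -c "SELECT version_num FROM alembic_version;"
You can then look that revision up in the migrations reference linked above to find the matching Airflow version.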
Unable to install IPEX using a CentOS Docker image
I pulled this Docker image: docker pull sysstacks/dlrs-pytorch-centos
and ran it on my Linux machine with: docker run -it sysstacks/dlrs-pytorch-centos bash
I was trying to install IPEX inside that container (image: sysstacks/dlrs-pytorch-centos), but unfortunately I got the error below after setting the environment variables:
[root@0d96884d3a05 /]# python -m pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable
Looking in links: https://software.intel.com/ipex-whl-stable
ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 204, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 319, in run
reqs, check_supported_wheels=not options.target_dir
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 128, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 341, in resolve
name, crit = self._merge_into_criterion(r, parent=None)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _merge_into_criterion
if not criterion.candidates:
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/structs.py", line 139, in __bool__
return bool(self._sequence)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in __bool__
return any(self)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 129, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 30, in _iter_built
for version, func in infos:
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 272, in iter_index_candidate_infos
hashes=hashes,
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 879, in find_best_candidate
candidates = self.find_all_candidates(project_name)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 824, in find_all_candidates
page_candidates = list(page_candidates_it)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/sources.py", line 134, in page_candidates
yield from self._candidates_from_page(self._link)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 783, in process_project_url
html_page = self._link_collector.fetch_page(project_url)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 512, in fetch_page
return _get_html_page(location, session=self.session)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 422, in _get_html_page
resp = _get_html_response(url, session=session)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 137, in _get_html_response
"Cache-Control": "max-age=0",
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/network/session.py", line 449, in request
return super().request(method, url, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/cachecontrol/adapter.py", line 53, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connection.py", line 506, in _connect_tls_proxy
ssl_context=ssl_context,
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/util/ssl_.py", line 432, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/util/ssl_.py", line 474, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
_context=self, _session=session)
File "/usr/lib64/python3.6/ssl.py", line 732, in __init__
raise ValueError("check_hostname requires server_hostname")
NOTE:
I have tried all the methods in the question linked below, but none of them worked:
Why requests raise this exception "check_hostname requires server_hostname"?
I think the issue is with your proxy. Can you please try the commands below in your Docker container:
export HTTP_PROXY=http://x.x.x.x
export https_proxy=http://x.x.x.x
export HTTPS_PROXY=http://x.x.x.x
export http_proxy=http://x.x.x.x
Replace x.x.x.x with your proxy IP.
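If you need the proxy in a fresh container, or only for pip, two variants that may help (again, x.x.x.x is a placeholder for your proxy):
docker run -it -e http_proxy=http://x.x.x.x -e https_proxy=http://x.x.x.x sysstacks/dlrs-pytorch-centos bash
python -m pip install --proxy http://x.x.x.x torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable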
To preface this, I am running Docker in an Ubuntu 20.04 VM on VirtualBox.
I created a simple shell script to kill any process running on port 9042, then start my docker-compose file. Here is the script in question:
#!/bin/bash
# Check for and kill any processes running on port 9042
sudo kill -9 $(sudo lsof -t -i:9042)
# start docker-compose
docker-compose -f ./docker/docker-compose.yml up
Since running that, however, my Docker installation has become completely unresponsive to any sort of interaction. Any docker commands hang indefinitely until cancelled with Ctrl+C, and any other system commands that use Docker (such as sudo service docker start) also hang indefinitely.
If I try to run dockerd, it fails with the message failed to start daemon: pid file found, ensure docker is not running or delete /var/run/docker.pid. As my system reports that docker is not running, I go ahead and delete /var/run/docker.pid. If I then try to run dockerd again, I get a different error message: failed to start daemon: error while opening volume store metadata database: timeout.
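Concretely, the sequence I ran at that point was:
sudo rm /var/run/docker.pid
sudo dockerd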
At this stage, some of the docker commands start working again. docker version and docker help both work, but it is still reported that the docker daemon is not running. Attempting to run docker-compose up on a docker-compose file produces this output:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.8/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1010, in _send_output
self.send(msg)
File "/usr/lib/python3.8/http/client.py", line 950, in send
self.connect()
File "/home/david/.local/lib/python3.8/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 719, in urlopen
retries = retries.increment(
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 400, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/lib/python3/dist-packages/six.py", line 702, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 665, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.8/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.8/http/client.py", line 1010, in _send_output
self.send(msg)
File "/usr/lib/python3.8/http/client.py", line 950, in send
self.connect()
File "/home/david/.local/lib/python3.8/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/david/.local/lib/python3.8/site-packages/docker/api/client.py", line 205, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
File "/home/david/.local/lib/python3.8/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/home/david/.local/lib/python3.8/site-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/home/david/.local/lib/python3.8/site-packages/docker/api/client.py", line 228, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/david/.local/bin/docker-compose", line 8, in <module>
sys.exit(main())
File "/home/david/.local/lib/python3.8/site-packages/compose/cli/main.py", line 67, in main
command()
File "/home/david/.local/lib/python3.8/site-packages/compose/cli/main.py", line 123, in perform_command
project = project_from_options('.', options)
File "/home/david/.local/lib/python3.8/site-packages/compose/cli/command.py", line 60, in project_from_options
return get_project(
File "/home/david/.local/lib/python3.8/site-packages/compose/cli/command.py", line 131, in get_project
client = get_client(
File "/home/david/.local/lib/python3.8/site-packages/compose/cli/docker_client.py", line 41, in get_client
client = docker_client(
File "/home/david/.local/lib/python3.8/site-packages/compose/cli/docker_client.py", line 170, in docker_client
client = APIClient(**kwargs)
File "/home/david/.local/lib/python3.8/site-packages/docker/api/client.py", line 188, in __init__
self._version = self._retrieve_server_version()
File "/home/david/.local/lib/python3.8/site-packages/docker/api/client.py", line 212, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', ConnectionRefusedError(111, 'Connection refused'))
Other system commands such as sudo service docker start still hang indefinitely until killed.
I have tried every single solution in this thread (Cannot connect to the Docker daemon at unix:/var/run/docker.sock. Is the docker daemon running?) and this one (Docker commands do not respond anymore), but none of them work.
Does anyone know what could be the issue here?
EDIT: A few more points -
The docker.pid file reappears again when I restart my VM
Restarting my VM does not do anything to remedy the problem
Executing commands as the root user likewise doesn't do anything
Trying to reinstall docker using sudo apt-get install --reinstall docker-ce also hangs at the stage Preparing to unpack .../docker-ce_5%3a20.10.0~1.1.beta1-0~ubuntu-focal_amd64.deb ...
I know this is super late, but I found an answer from another similar question I asked.
Docker containers are stored in the default location at /var/lib/docker/ on Linux. I was able to identify the container that was causing the issue and deleted the actual container files. I then used the CLI to remove all other traces of the container, and docker was able to start running as normal.
Obviously doing this is risky, so make sure you take adequate steps to back up your machine first.
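If it helps anyone, the rough sequence was something like this (be very careful: deleting the wrong directory under /var/lib/docker will destroy container data, so back up first):
sudo systemctl stop docker
sudo ls /var/lib/docker/containers
sudo rm -rf /var/lib/docker/containers/<offending-container-id>
sudo systemctl start docker
docker container prune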
I am following this tutorial to run an Ethereum crawler using Docker on Windows 10.
After executing
$ MYSQL_DATA_PATH="$HOME/indexer-data/mysql" GETH_DATA_PATH="$HOME/indexer-data/geth" docker-compose up
I got this error:
409 Client Error: Conflict for url: http+docker://localnpipe/v1.25/containers/ee2b46142bae704d4963853e22a77ba896a8f841120ecf5ac97befca91847672/attach?logs=0&stdout=1&stderr=1&stream=1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "compose\cli\log_printer.py", line 233, in watch_events
File "compose\container.py", line 215, in attach_log_stream
File "compose\container.py", line 307, in attach
File "site-packages\docker\utils\decorators.py", line 19, in wrapped
File "site-packages\docker\api\container.py", line 61, in attach
File "site-packages\docker\api\client.py", line 400, in _read_from_socket
File "site-packages\docker\api\client.py", line 311, in _get_raw_response_socket
File "site-packages\docker\api\client.py", line 263, in _raise_for_status
File "site-packages\docker\errors.py", line 19, in create_api_error_from_http_exception
File "site-packages\requests\models.py", line 880, in json
File "site-packages\requests\models.py", line 828, in content
File "site-packages\requests\models.py", line 750, in generate
File "site-packages\urllib3\response.py", line 496, in stream
File "site-packages\urllib3\response.py", line 444, in read
File "http\client.py", line 449, in read
File "http\client.py", line 493, in readinto
File "site-packages\docker\transport\npipesocket.py", line 209, in readinto
File "site-packages\docker\transport\npipesocket.py", line 20, in wrapped
RuntimeError: Can not reuse socket after connection was closed.
For reference, here are all the steps I ran:
git config --global user.name "usrname"
git config --global user.password "paswd"
git clone https://github.com/usrame/eth-indexer.git
cd eth-indexer
touch .env
echo "MYSQL_DATA_PATH=~/indexer-data/mysql" > .env
echo "GETH_DATA_PATH=~/indexer-data/geth" > .env
git add -f .env
docker-compose build
mkdir -p ~/indexer-data/mysql ~/indexer-data/geth
# Create database schema
MYSQL_DATA_PATH="$HOME/indexer-data/mysql" docker-compose up idx-database idx-migration
MYSQL_DATA_PATH="$HOME/indexer-data/mysql" GETH_DATA_PATH="$HOME/indexer-data/geth" docker-compose up
I have set up a Rancher server with one host. Now I'm trying to add another host to it. Here is what I did: I installed docker-ce on the new host and then ran the following:
sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher 10.10.18.35:5000/rancher/ant http://10.10.18.35:8080/v1/scripts/9F4687F125CF02E0ACF1:1546214400000:L1JlsOJwk5weSux3NHNHNhLWkI
I get the following error (requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)):
INFO: Running Agent Registration Process, CATTLE_URL=http://10.10.18.35:8080/v1
INFO: Attempting to connect to: http://10.10.18.35:8080/v1
INFO: http://10.10.18.35:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
Traceback (most recent call last):
File "./register.py", line 11, in <module>
secret_key=os.environ['CATTLE_REGISTRATION_SECRET_KEY'])
File "/usr/local/lib/python2.7/dist-packages/cattle.py", line 45, in from_env
return gdapi.from_env(prefix=prefix, factory=Client, **kw)
File "/usr/local/lib/python2.7/dist-packages/gdapi.py", line 613, in from_env
return _from_env(prefix=prefix, factory=factory, **args)
File "/usr/local/lib/python2.7/dist-packages/gdapi.py", line 632, in _from_env
return factory(**result)
File "/usr/local/lib/python2.7/dist-packages/cattle.py", line 12, in __init__
super(Client, self).__init__(*args, **kw)
File "/usr/local/lib/python2.7/dist-packages/gdapi.py", line 197, in __init__
self._load_schemas()
File "/usr/local/lib/python2.7/dist-packages/gdapi.py", line 315, in _load_schemas
response = self._get_response(self._url)
File "/usr/local/lib/python2.7/dist-packages/gdapi.py", line 264, in _get_response
headers=self._headers)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 501, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 497, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
Any clues on how to resolve this, please?
Rancher needs to be accessed via SSL, and I see you're using HTTP instead. To generate a self-signed certificate, follow this documentation: https://rancher.com/docs/rancher/v2.x/en/installation/single-node-install/#option-a-default-self-signed-certificate
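If it helps, the default self-signed-certificate install described on that page boils down to a single docker run (check the linked docs for the exact, current form):
docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
Once Rancher is served over HTTPS, register the host against the https URL instead of http://10.10.18.35:8080.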