I am following this tutorial to run an Ethereum crawler using Docker on Windows 10. After executing:
$ MYSQL_DATA_PATH="$HOME/indexer-data/mysql" GETH_DATA_PATH="$HOME/indexer-data/geth" docker-compose up
I got this error:
409 Client Error: Conflict for url: http+docker://localnpipe/v1.25/containers/ee2b46142bae704d4963853e22a77ba896a8f841120ecf5ac97befca91847672/attach?logs=0&stdout=1&stderr=1&stream=1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "threading.py", line 916, in _bootstrap_inner
  File "threading.py", line 864, in run
  File "compose\cli\log_printer.py", line 233, in watch_events
  File "compose\container.py", line 215, in attach_log_stream
  File "compose\container.py", line 307, in attach
  File "site-packages\docker\utils\decorators.py", line 19, in wrapped
  File "site-packages\docker\api\container.py", line 61, in attach
  File "site-packages\docker\api\client.py", line 400, in _read_from_socket
  File "site-packages\docker\api\client.py", line 311, in _get_raw_response_socket
  File "site-packages\docker\api\client.py", line 263, in _raise_for_status
  File "site-packages\docker\errors.py", line 19, in create_api_error_from_http_exception
  File "site-packages\requests\models.py", line 880, in json
  File "site-packages\requests\models.py", line 828, in content
  File "site-packages\requests\models.py", line 750, in generate
  File "site-packages\urllib3\response.py", line 496, in stream
  File "site-packages\urllib3\response.py", line 444, in read
  File "http\client.py", line 449, in read
  File "http\client.py", line 493, in readinto
  File "site-packages\docker\transport\npipesocket.py", line 209, in readinto
  File "site-packages\docker\transport\npipesocket.py", line 20, in wrapped
RuntimeError: Can not reuse socket after connection was closed.
For reference, these are the steps I ran from the tutorial:
git config --global user.name "usrname"
git config --global user.password "paswd"
git clone https://github.com/usrame/eth-indexer.git
cd eth-indexer
touch .env
echo "MYSQL_DATA_PATH=~/indexer-data/mysql" > .env
echo "GETH_DATA_PATH=~/indexer-data/geth" > .env
git add -f .env
docker-compose build
mkdir -p ~/indexer-data/mysql ~/indexer-data/geth
# Create database schema
MYSQL_DATA_PATH="$HOME/indexer-data/mysql" docker-compose up idx-database idx-migration
MYSQL_DATA_PATH="$HOME/indexer-data/mysql" GETH_DATA_PATH="$HOME/indexer-data/geth" docker-compose up
I have been trying to use a custom Dockerfile for mounting DAGs and plugins, as follows:
FROM apache/airflow:2.3.0-python3.7
COPY ./dags/ /opt/airflow/dags/
COPY ./plugins/ /opt/airflow/plugins/
COPY requirements.txt .
RUN pip install -r requirements.txt
EXPOSE 5555
which I am building as:
docker build -f base.dockerfile --pull --tag lqc-airflow:0.0.1 .
minikube image load lqc-airflow:0.0.1
and then doing a helm install
helm upgrade $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE --set images.airflow.repository=lqc-airflow --set images.airflow.tag=0.0.1
which, however, causes just the airflow-worker-0 pod to fail with the following error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/celery_command.py", line 130, in worker
session = celery_app.backend.ResultSession()
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 109, in ResultSession
**self.engine_options)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/session.py", line 88, in session_factory
self.prepare_models(engine)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/session.py", line 72, in prepare_models
ResultModelBase.metadata.create_all(engine)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4745, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3007, in _run_ddl_visitor
with self.begin() as conn:
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2923, in begin
conn = self.connect(close_with_result=close_with_result)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3095, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 91, in __init__
else engine.raw_connection()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3174, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3145, in _wrap_pool_connect
e, dialect, self
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2004, in _handle_dbapi_exception_noconnection
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3141, in _wrap_pool_connect
return fn()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 419, in checkout
rec = pool._do_get()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 259, in _do_get
return self._create_connection()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 583, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/airflow/.local/lib/python3.7/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: Temporary failure in name resolution
I am just following the official Airflow guidance: https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html
Please note that there are no such name resolution errors if I don't use my custom Dockerfile. Kindly help!
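For anyone debugging the same symptom, a minimal diagnostic sketch (pod name taken from the error above, service name assumed to be "postgres", and getent assumed present in the Debian-based Airflow image):
kubectl exec -n "$NAMESPACE" airflow-worker-0 -- getent hosts postgres   # does "postgres" resolve in-cluster?
kubectl get svc -n "$NAMESPACE"                                          # what is the metadata DB service actually called?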
Thanks @Hussein for your support, but I was able to solve this myself. The logs of the airflow-migrations pod complained about an alembic revision id not being found. That led me to: https://airflow.apache.org/docs/apache-airflow/stable/migrations-ref.html
where I saw which Airflow tag carried that alembic revision.
My revision id was ecb43d2a1842, so the changes to my Dockerfile were:
FROM apache/airflow:2.4.3
COPY ./dags/ /opt/airflow/dags/
COPY ./plugins/ /opt/airflow/plugins/
COPY requirements.txt .
RUN pip install -r requirements.txt
Thus 2.4.3 was the catch: the base image must be the Airflow version that carries the alembic revision the chart's migrations expect.
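A sketch of the check that led me there, assuming the chart's default pod naming (the pod name below is a placeholder):
kubectl get pods -n "$NAMESPACE" | grep migrations    # find the run-airflow-migrations pod
kubectl logs -n "$NAMESPACE" <migrations-pod-name>    # look for "Can't locate revision identified by ..."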
I have the Google Cloud SDK Docker image configured and running on my Windows machine, after following this:
https://hub.docker.com/r/google/cloud-sdk/
I'm trying to run this command to list an S3 bucket:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls s3://bucketname
Authentication fails because the AWS keys are not set. I presume this is due to the .boto file not having aws_access_key_id and aws_secret_access_key set, but I can't figure out how to set those values.
I tried running the following to generate a .boto file, but the bucket was shared with me and I don't have the access keys.
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil config -a
Am I missing something or is there any other way to set these AWS credentials? Maybe with gcloud config set?
Here is the error log:
ERROR 1202 03:16:07.326810 utils.py] Caught exception reading instance data
Traceback (most recent call last):
File "/usr/lib/python3.7/urllib/request.py", line 1324, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/lib/python3.7/http/client.py", line 1260, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1306, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1255, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1030, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 970, in send
self.connect()
File "/usr/lib/python3.7/http/client.py", line 942, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/usr/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/utils.py", line 220, in retry_url
r = opener.open(req, timeout=timeout)
File "/usr/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/lib/python3.7/urllib/request.py", line 1352, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.7/urllib/request.py", line 1326, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 111] Connection refused>
ERROR 1202 03:16:07.328018 utils.py] Unable to read instance data, giving up
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil.py", line 122, in RunMain
sys.exit(gslib.__main__.main())
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 444, in main
user_project=user_project)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 783, in _RunNamedCommandAndHandleExceptions
_HandleUnknownFailure(e)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 640, in _RunNamedCommandAndHandleExceptions
user_project=user_project)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 412, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 683, in RunCommand
listing_helper.ExpandUrlAndPrint(storage_url))
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/utils/ls_helper.py", line 372, in ExpandUrlAndPrint
print_initial_newline=False)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/utils/ls_helper.py", line 449, in _RecurseExpandUrlAndPrint
bucket_listing_fields=self.bucket_listing_fields):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 540, in IterAll
expand_top_level_buckets=expand_top_level_buckets):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 174, in __iter__
fields=bucket_listing_fields):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 447, in ListObjects
headers=headers)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 166, in list_bucket
bucket = self.get_bucket(headers=headers)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 181, in get_bucket
conn = self.connect()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 117, in connect
**connection_args)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/s3/connection.py", line 205, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/connection.py", line 573, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/auth.py", line 1032, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['S3HmacAuthV4Handler'] Check your credentials
I edited the .boto file in the legacy config and got this to work:
docker restart gcloud-config
docker exec -u 0 -it <container-id-here> /bin/bash
apt-get update && apt-get install -y nano
nano /root/.config/gcloud/legacy_credentials/***/.boto
Add the following under [Credentials]:
aws_access_key_id = ***
aws_secret_access_key = ***
Enjoy:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls s3://bucketname
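An alternative sketch that avoids editing files inside the container, assuming the vendored boto honors the standard AWS environment variables (the *** placeholders are your keys):
# Pass the AWS credentials as environment variables instead of editing .boto.
docker run --rm -ti --volumes-from gcloud-config \
  -e AWS_ACCESS_KEY_ID=*** \
  -e AWS_SECRET_ACCESS_KEY=*** \
  google/cloud-sdk gsutil ls s3://bucketname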
I'm having some difficulty figuring out how to run conda install ... in a Dockerfile, while referencing a non-standard certificates file. In my Dockerfile, I have:
RUN REQUESTS_CA_BUNDLE=/non-standard-certificates.pem conda update -n base conda -y
which appears to run fine. But then I have:
RUN REQUESTS_CA_BUNDLE=/non-standard-certificates.pem CURL_CA_BUNDLE=/non-standard-certificates.pem conda install -n base -c defaults -c conda-forge <list-of-packages>
which ends in:
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/linux-64/current_repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https://conda.anaconda.org/conda-forge/linux-64'
Can anyone see what is incorrect here?
Update:
I have since figured out that I should probably be using RUN conda config --set client_ssl_cert ..., and that the certificate file in question had Windows carriage returns in it (which I removed with dos2unix), but now I'm getting a different error:
Step 7/29 : RUN conda update -n base conda -y
---> Running in 052e36266aef
Collecting package metadata (current_repodata.json): ...working... failed
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/conda/exceptions.py", line 1074, in __call__
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/conda/cli/main.py", line 84, in _main
exit_code = do_call(args, p)
File "/opt/conda/lib/python3.7/site-packages/conda/cli/conda_argparse.py", line 82, in do_call
exit_code = getattr(module, func_name)(args, parser)
File "/opt/conda/lib/python3.7/site-packages/conda/cli/main_update.py", line 20, in execute
install(args, parser, 'update')
File "/opt/conda/lib/python3.7/site-packages/conda/cli/install.py", line 265, in install
should_retry_solve=(_should_retry_unfrozen or repodata_fn != repodata_fns[-1]),
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 117, in solve_for_transaction
should_retry_solve)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 158, in solve_for_diff
force_remove, should_retry_solve)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 262, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "/opt/conda/lib/python3.7/site-packages/conda/common/io.py", line 88, in decorated
return f(*args, **kwds)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 415, in _collect_all_metadata
index, r = self._prepare(prepared_specs)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 1004, in _prepare
self.subdirs, prepared_specs, self._repodata_fn)
File "/opt/conda/lib/python3.7/site-packages/conda/core/index.py", line 214, in get_reduced_index
repodata_fn=repodata_fn)
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 97, in query_all
result = tuple(concat(executor.map(subdir_query, channel_urls)))
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
yield fs.pop().result()
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/conda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 90, in <lambda>
package_ref_or_match_spec))
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 102, in query
self.load()
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 166, in load
_internal_state = self._load()
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 240, in _load
repodata_fn=self.repodata_fn)
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 477, in fetch_repodata_remote_request
timeout=timeout)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 839, in _validate_conn
conn.connect()
File "/opt/conda/lib/python3.7/site-packages/urllib3/connection.py", line 344, in connect
ssl_context=context)
File "/opt/conda/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 338, in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
File "/opt/conda/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 439, in load_cert_chain
self._ctx.use_privatekey_file(keyfile or certfile)
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 990, in use_privatekey_file
self._raise_passphrase_exception()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 967, in _raise_passphrase_exception
_raise_current_error()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('PEM routines', 'get_name', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
`$ /opt/conda/bin/conda update -n base conda -y`
environment variables:
CIO_TEST=<not set>
CONDA_ROOT=/opt/conda
PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin
:/bin
REQUESTS_CA_BUNDLE=<not set>
SSL_CERT_FILE=<not set>
active environment : None
user config file : /root/.condarc
populated config files : /root/.condarc
conda version : 4.7.12
conda-build version : not installed
python version : 3.7.4.final.0
virtual packages :
base environment : /opt/conda (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /opt/conda/pkgs
/root/.conda/pkgs
envs directories : /opt/conda/envs
/root/.conda/envs
platform : linux-64
user-agent : conda/4.7.12 requests/2.22.0 CPython/3.7.4 Linux/3.10.0-1160.6.1.el7.x86_64 debian/10 glibc/2.28
UID:GID : 0:0
netrc file : None
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
Upload did not complete.
ERROR: Service 'base_image' failed to build: The command '/bin/sh -c conda update -n base conda -y' returned a non-zero code: 1
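For what it's worth, the final OpenSSL error ("no start line" from use_privatekey_file) suggests conda tried to load the CA bundle as a client certificate plus private key, which is what client_ssl_cert is for. A sketch of the setting intended for a custom CA bundle, assuming the same certificate path as in the question:
# Point conda's ssl_verify at the CA bundle rather than using client_ssl_cert.
RUN conda config --set ssl_verify /non-standard-certificates.pem \
 && conda update -n base conda -y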
I'm new to Docker and I'm trying to follow this simple "getting started" tutorial https://docs.docker.com/compose/gettingstarted/ using a fresh, first-time installation of Docker (for Windows 10) downloaded from here: https://hub.docker.com/editions/community/docker-ce-desktop-windows/.
At step 4 of this tutorial I get this error:
PS D:\composetest> docker-compose up
Building web
Traceback (most recent call last):
File "site-packages\docker\credentials\store.py", line 80, in _execute
File "subprocess.py", line 395, in check_output
File "subprocess.py", line 487, in run
subprocess.CalledProcessError: Command '['C:\\Program Files\\Docker\\Docker\\resources\\bin\\docker-credential-desktop.EXE', 'list']' returned non-zero exit status 1.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "docker-compose", line 6, in <module>
File "compose\cli\main.py", line 72, in main
File "compose\cli\main.py", line 128, in perform_command
File "compose\cli\main.py", line 1078, in up
File "compose\cli\main.py", line 1074, in up
File "compose\project.py", line 548, in up
File "compose\service.py", line 367, in ensure_image_exists
File "compose\service.py", line 1106, in build
File "site-packages\docker\api\build.py", line 261, in build
File "site-packages\docker\api\build.py", line 308, in _set_auth_headers
File "site-packages\docker\auth.py", line 302, in get_all_credentials
File "site-packages\docker\credentials\store.py", line 71, in list
File "site-packages\docker\credentials\store.py", line 93, in _execute
docker.credentials.errors.StoreError: Credentials store docker-credential-desktop exited with "error listing credentials - err: exit status 1, out: `Impossibile trovare elemento.`" (Italian for "Element not found.").
[13284] Failed to execute script docker-compose
EDIT: The accepted solution in "docker-compose unable to start", unfortunately, did not work.
What is going wrong?
I have the same issue... Try:
nano ~/.docker/config.json
In this file, change credsStore to credStore.
Now run your docker-compose again. If it still doesn't work, try sudo.
Try adding this directory to your PATH environment variable:
C:\Program Files\Docker\Docker\resources\bin
If you are using WSL2 run this command
sudo ln -s /mnt/c/Program\ Files/Docker/Docker/resources/bin/docker-credential-desktop.exe /usr/bin/docker-credential-desktop.exe
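To confirm the helper itself is at fault, you can also invoke it directly with the same subcommand the traceback shows compose running (this assumes the bin directory is on PATH, per the answer above):
# If this prints the same "error listing credentials" message, the
# credential helper, not docker-compose, is the problem.
docker-credential-desktop list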
I am trying to build the gerrit events-log plugin via a Buck build, but it is failing with this error:
root#monkey:/tmp/events-log# buck build plugin
[-] PROCESSING BUCK FILES...FINISHED 0.3s [100%]
[+] DOWNLOADING... (0.00 B/S, TOTAL: 0.00 B, 0 Artifacts)
[+] BUILDING...0.2s [19%] (3/19 JOBS, 0 UPDATED, 0.0% CACHE MISS)
|=> //lib/commons:dbcp__download_bin... 0.1s (running genrule[0.0s])
Traceback (most recent call last):
File ".bootstrap/_pex/pex.py", line 320, in execute
File ".bootstrap/_pex/pex.py", line 253, in _wrap_coverage
File ".bootstrap/_pex/pex.py", line 285, in _wrap_profiling
File ".bootstrap/_pex/pex.py", line 363, in _execute
File ".bootstrap/_pex/pex.py", line 421, in execute_entry
File ".bootstrap/_pex/pex.py", line 426, in execute_module
File "/usr/lib/python2.7/runpy.py", line 180, in run_module
fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "bucklets/tools/download_file.py", line 198, in <module>
File "/usr/lib/python2.7/subprocess.py", line 535, in check_call
retcode = call(*popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
Can somebody please tell me how to fix this?
Thanks
If you haven't already figured this out, ensure that 'zip' is installed and in your $PATH.
See also this GitHub issue for a similar failure in a different plugin.
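If zip does turn out to be missing, a minimal fix on a Debian/Ubuntu host (matching the root@monkey prompt above) would be:
apt-get update && apt-get install -y zip
which zip    # confirm it is now on $PATH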