ERROR while installing IPEX using Docker CentOS image - intel-pytorch

Unable to install IPEX using the CentOS Docker image.
I pulled this Docker image: docker pull sysstacks/dlrs-pytorch-centos
Then I ran it on my Linux machine with this command: docker run -it sysstacks/dlrs-pytorch-centos bash
I was trying to install IPEX inside this container (image name: sysstacks/dlrs-pytorch-centos); unfortunately, I got the error below, even after setting the environment variables.
[root@0d96884d3a05 /]# python -m pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable
Looking in links: https://software.intel.com/ipex-whl-stable
ERROR: Exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 180, in _main
status = self.run(options, args)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 204, in wrapper
return func(self, options, args)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 319, in run
reqs, check_supported_wheels=not options.target_dir
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 128, in resolve
requirements, max_rounds=try_to_avoid_resolution_too_deep
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 473, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 341, in resolve
name, crit = self._merge_into_criterion(r, parent=None)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 172, in _merge_into_criterion
if not criterion.candidates:
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/resolvelib/structs.py", line 139, in __bool__
return bool(self._sequence)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in __bool__
return any(self)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 129, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 30, in _iter_built
for version, func in infos:
File "/usr/local/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 272, in iter_index_candidate_infos
hashes=hashes,
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 879, in find_best_candidate
candidates = self.find_all_candidates(project_name)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 824, in find_all_candidates
page_candidates = list(page_candidates_it)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/sources.py", line 134, in page_candidates
yield from self._candidates_from_page(self._link)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/package_finder.py", line 783, in process_project_url
html_page = self._link_collector.fetch_page(project_url)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 512, in fetch_page
return _get_html_page(location, session=self.session)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 422, in _get_html_page
resp = _get_html_response(url, session=session)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/index/collector.py", line 137, in _get_html_response
"Cache-Control": "max-age=0",
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_internal/network/session.py", line 449, in request
return super().request(method, url, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/cachecontrol/adapter.py", line 53, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/connection.py", line 506, in _connect_tls_proxy
ssl_context=ssl_context,
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/util/ssl_.py", line 432, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "/usr/local/lib/python3.6/site-packages/pip/_vendor/urllib3/util/ssl_.py", line 474, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "/usr/lib64/python3.6/ssl.py", line 365, in wrap_socket
_context=self, _session=session)
File "/usr/lib64/python3.6/ssl.py", line 732, in __init__
raise ValueError("check_hostname requires server_hostname")
NOTE:
I have tried all of the methods suggested in the question below, but none of them worked:
Why do requests raise the exception "check_hostname requires server_hostname"?

I think the issue is with your proxy. Can you please try the commands below in your Docker container:
export HTTP_PROXY=http://x.x.x.x
export https_proxy=http://x.x.x.x
export HTTPS_PROXY=http://x.x.x.x
export http_proxy=http://x.x.x.x
Replace x.x.x.x with your proxy's IP address (include the port if your proxy uses one, e.g. http://x.x.x.x:8080).
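If exporting inside the container doesn't help, the same variables can be passed when the container starts, so every session (including pip's) sees them. A minimal sketch, with the proxy address as a placeholder you must replace:
docker run -it \
  -e http_proxy=http://x.x.x.x:8080 \
  -e https_proxy=http://x.x.x.x:8080 \
  -e HTTP_PROXY=http://x.x.x.x:8080 \
  -e HTTPS_PROXY=http://x.x.x.x:8080 \
  sysstacks/dlrs-pytorch-centos bash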

Related

airflow worker crashing in helm upgrade with "Temporary failure in name resolution" for postgres

I have been trying to use a custom Dockerfile to bundle dags and plugins, as follows:
FROM apache/airflow:2.3.0-python3.7
COPY ./dags/ /opt/airflow/dags/
COPY ./plugins/ /opt/airflow/plugins/
COPY requirements.txt .
RUN pip install -r requirements.txt
EXPOSE 5555
which I am building as:
docker build -f base.dockerfile --pull --tag lqc-airflow:0.0.1 .
minikube image load lqc-airflow:0.0.1
and then doing a helm install
helm upgrade $RELEASE_NAME apache-airflow/airflow --namespace $NAMESPACE --set images.airflow.repository=lqc-airflow --set images.airflow.tag=0.0.1
However, this makes just the airflow-worker-0 pod fail with the following error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/celery_command.py", line 130, in worker
session = celery_app.backend.ResultSession()
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/__init__.py", line 109, in ResultSession
**self.engine_options)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/session.py", line 88, in session_factory
self.prepare_models(engine)
File "/home/airflow/.local/lib/python3.7/site-packages/celery/backends/database/session.py", line 72, in prepare_models
ResultModelBase.metadata.create_all(engine)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4745, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3007, in _run_ddl_visitor
with self.begin() as conn:
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2923, in begin
conn = self.connect(close_with_result=close_with_result)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3095, in connect
return self._connection_cls(self, close_with_result=close_with_result)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 91, in __init__
else engine.raw_connection()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3174, in raw_connection
return self._wrap_pool_connect(self.pool.connect, _connection)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3145, in _wrap_pool_connect
e, dialect, self
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2004, in _handle_dbapi_exception_noconnection
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 3141, in _wrap_pool_connect
return fn()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 301, in connect
return _ConnectionFairy._checkout(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 755, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 419, in checkout
rec = pool._do_get()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/impl.py", line 259, in _do_get
return self._create_connection()
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 247, in _create_connection
return _ConnectionRecord(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 362, in __init__
self.__connect(first_connect_check=True)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 605, in __connect
pool.logger.debug("Error on connect(): %s", e)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 72, in __exit__
with_traceback=exc_tb,
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/pool/base.py", line 599, in __connect
connection = pool._invoke_creator(self)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/create.py", line 578, in connect
return dialect.connect(*cargs, **cparams)
File "/home/airflow/.local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 583, in connect
return self.dbapi.connect(*cargs, **cparams)
File "/home/airflow/.local/lib/python3.7/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "postgres" to address: Temporary failure in name resolution
I am just following the advice from the Airflow docs: https://airflow.apache.org/docs/helm-chart/stable/manage-dags-files.html
Please note that there are no such name resolution errors if I don't use my custom Dockerfile. Kindly help!
Thanks @Hussein for your support, but I was able to solve this myself. I saw that the logs of the airflow-migrations pod complained about an Alembic revision id not being found. That led me to https://airflow.apache.org/docs/apache-airflow/stable/migrations-ref.html, where I could see which Airflow tag carries that Alembic revision.
My revision id was ecb43d2a1842, so the changes to my Dockerfile were:
FROM apache/airflow:2.4.3
COPY ./dags/ /opt/airflow/dags/
COPY ./plugins/ /opt/airflow/plugins/
COPY requirements.txt .
RUN pip install -r requirements.txt
So moving the base image to 2.4.3 was the fix.
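For anyone hitting the same mismatch, the revision id can be read from the migrations pod's logs and matched against the migrations reference linked above. A rough sketch (the exact pod name depends on your release and is a placeholder here):
kubectl get pods -n $NAMESPACE
kubectl logs -n $NAMESPACE <airflow-migrations-pod-name>
The log should mention the missing Alembic revision (e.g. ecb43d2a1842); the Dockerfile's base image tag then needs to be an Airflow version that includes that revision.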

cannot create pipenv virtualenv in linux ubuntu

Hello everyone. I have a problem launching a venv using pipenv. I never had this problem on Windows, but I recently migrated to Linux (Ubuntu) and cannot solve it. I would be grateful if someone could help me.
Creating a virtualenv for this project...
Pipfile: /media/sasan/F/my projects/Djprojects/newprj/Pipfile
Using /usr/bin/python3 (3.10.4) to create virtualenv...
⠸ Creating virtual environment...created virtual environment CPython3.10.4.final.0-64 in 106ms
creator CPython3Posix(dest=/home/sasan/.local/share/virtualenvs/newprj-cXkKfH7p, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/sasan/.local/share/virtualenv)
added seed packages: pip==22.2.2, setuptools==65.3.0, wheel==0.37.1
activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
✔ Successfully created virtual environment!
Traceback (most recent call last):
File "/usr/bin/pipenv", line 33, in <module>
sys.exit(load_entry_point('pipenv==11.9.0', 'console_scripts', 'pipenv')())
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/cli/options.py", line 57, in main
return super().main(*args, **kwargs, windows_expand_args=False)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/cli/command.py", line 397, in shell
do_shell(
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/core.py", line 2520, in do_shell
ensure_project(
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/core.py", line 525, in ensure_project
ensure_virtualenv(
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/core.py", line 458, in ensure_virtualenv
do_create_virtualenv(
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/core.py", line 959, in do_create_virtualenv
project._environment = Environment(
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/environment.py", line 79, in __init__
self._base_paths = self.get_paths()
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/environment.py", line 390, in get_paths
c = subprocess_run(command)
File "/home/sasan/.local/lib/python3.10/site-packages/pipenv/utils/processes.py", line 75, in subprocess_run
return subprocess.run(
File "/usr/lib/python3.10/subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/lib/python3.10/subprocess.py", line 966, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1842, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/home/sasan/.local/share/virtualenvs/newprj-cXkKfH7p/bin/python'

Configure gsutil s3 in Google Cloud SDK Docker

I have the Google Cloud SDK Docker image configured and running on my Windows machine after following this:
https://hub.docker.com/r/google/cloud-sdk/
I'm trying to run this command to list an S3 bucket:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls s3://bucketname
Authentication fails because the AWS keys are not set; I presume this is due to the .boto file not having aws_access_key_id and aws_secret_access_key set. I can't figure out how to set those variables.
I tried running this to generate a .boto file, but the bucket was shared with me and I don't have the access keys:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil config -a
Am I missing something or is there any other way to set these AWS credentials? Maybe with gcloud config set?
Here is the error log:
ERROR 1202 03:16:07.326810 utils.py] Caught exception reading instance data
Traceback (most recent call last):
File "/usr/lib/python3.7/urllib/request.py", line 1324, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/lib/python3.7/http/client.py", line 1260, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1306, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1255, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.7/http/client.py", line 1030, in _send_output
self.send(msg)
File "/usr/lib/python3.7/http/client.py", line 970, in send
self.connect()
File "/usr/lib/python3.7/http/client.py", line 942, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/usr/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/utils.py", line 220, in retry_url
r = opener.open(req, timeout=timeout)
File "/usr/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/lib/python3.7/urllib/request.py", line 1352, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.7/urllib/request.py", line 1326, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 111] Connection refused>
ERROR 1202 03:16:07.328018 utils.py] Unable to read instance data, giving up
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil.py", line 122, in RunMain
sys.exit(gslib.__main__.main())
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 444, in main
user_project=user_project)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 783, in _RunNamedCommandAndHandleExceptions
_HandleUnknownFailure(e)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 640, in _RunNamedCommandAndHandleExceptions
user_project=user_project)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/command_runner.py", line 412, in RunNamedCommand
return_code = command_inst.RunCommand()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/commands/ls.py", line 683, in RunCommand
listing_helper.ExpandUrlAndPrint(storage_url))
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/utils/ls_helper.py", line 372, in ExpandUrlAndPrint
print_initial_newline=False)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/utils/ls_helper.py", line 449, in _RecurseExpandUrlAndPrint
bucket_listing_fields=self.bucket_listing_fields):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 540, in IterAll
expand_top_level_buckets=expand_top_level_buckets):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/wildcard_iterator.py", line 174, in __iter__
fields=bucket_listing_fields):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 447, in ListObjects
headers=headers)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 166, in list_bucket
bucket = self.get_bucket(headers=headers)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 181, in get_bucket
conn = self.connect()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/storage_uri.py", line 117, in connect
**connection_args)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/s3/connection.py", line 205, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/connection.py", line 573, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/auth.py", line 1032, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['S3HmacAuthV4Handler'] Check your credentials
I edited the .boto file in the legacy config and got this to work:
docker restart gcloud-config
docker exec -u 0 -it <container-id-here> /bin/bash
apt-get install nano
nano /root/.config/gcloud/legacy_credentials/***/.boto
Add the following under [Credentials]:
aws_access_key_id = ***
aws_secret_access_key = ***
Then enjoy:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls s3://bucketname
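As an untested alternative sketch: gsutil's bundled boto library generally also reads the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, so passing the credentials at run time may work without editing .boto (the *** placeholders stand for your actual key id and secret):
docker run --rm -ti --volumes-from gcloud-config \
  -e AWS_ACCESS_KEY_ID=*** \
  -e AWS_SECRET_ACCESS_KEY=*** \
  google/cloud-sdk gsutil ls s3://bucketname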

Docker Compose, Conda, and non-standard certificates

I'm having some difficulty figuring out how to run conda install ... in a Dockerfile while referencing a non-standard certificates file. In my Dockerfile, I have:
RUN REQUESTS_CA_BUNDLE=/non-standard-certificates.pem conda update -n base conda -y
which appears to run fine. But then I have:
RUN REQUESTS_CA_BUNDLE=/non-standard-certificates.pem CURL_CA_BUNDLE=/non-standard-certificates.pem conda install -n base -c defaults -c conda-forge <list-of-packages>
which ends in:
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://conda.anaconda.org/conda-forge/linux-64/current_repodata.json>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
'https://conda.anaconda.org/conda-forge/linux-64'
Can anyone see what is incorrect here?
Update:
I have since figured out that I should probably be using RUN conda config --set client_ssl_cert ..., and that the certificate file in question had Windows carriage returns in it (which I removed with dos2unix), but now I'm getting a different error:
Step 7/29 : RUN conda update -n base conda -y
---> Running in 052e36266aef
Collecting package metadata (current_repodata.json): ...working... failed
# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/conda/exceptions.py", line 1074, in __call__
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/conda/cli/main.py", line 84, in _main
exit_code = do_call(args, p)
File "/opt/conda/lib/python3.7/site-packages/conda/cli/conda_argparse.py", line 82, in do_call
exit_code = getattr(module, func_name)(args, parser)
File "/opt/conda/lib/python3.7/site-packages/conda/cli/main_update.py", line 20, in execute
install(args, parser, 'update')
File "/opt/conda/lib/python3.7/site-packages/conda/cli/install.py", line 265, in install
should_retry_solve=(_should_retry_unfrozen or repodata_fn != repodata_fns[-1]),
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 117, in solve_for_transaction
should_retry_solve)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 158, in solve_for_diff
force_remove, should_retry_solve)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 262, in solve_final_state
ssc = self._collect_all_metadata(ssc)
File "/opt/conda/lib/python3.7/site-packages/conda/common/io.py", line 88, in decorated
return f(*args, **kwds)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 415, in _collect_all_metadata
index, r = self._prepare(prepared_specs)
File "/opt/conda/lib/python3.7/site-packages/conda/core/solve.py", line 1004, in _prepare
self.subdirs, prepared_specs, self._repodata_fn)
File "/opt/conda/lib/python3.7/site-packages/conda/core/index.py", line 214, in get_reduced_index
repodata_fn=repodata_fn)
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 97, in query_all
result = tuple(concat(executor.map(subdir_query, channel_urls)))
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
yield fs.pop().result()
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/conda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 90, in <lambda>
package_ref_or_match_spec))
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 102, in query
self.load()
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 166, in load
_internal_state = self._load()
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 240, in _load
repodata_fn=self.repodata_fn)
File "/opt/conda/lib/python3.7/site-packages/conda/core/subdir_data.py", line 477, in fetch_repodata_remote_request
timeout=timeout)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 546, in get
return self.request('GET', url, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py", line 839, in _validate_conn
conn.connect()
File "/opt/conda/lib/python3.7/site-packages/urllib3/connection.py", line 344, in connect
ssl_context=context)
File "/opt/conda/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 338, in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
File "/opt/conda/lib/python3.7/site-packages/urllib3/contrib/pyopenssl.py", line 439, in load_cert_chain
self._ctx.use_privatekey_file(keyfile or certfile)
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 990, in use_privatekey_file
self._raise_passphrase_exception()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/SSL.py", line 967, in _raise_passphrase_exception
_raise_current_error()
File "/opt/conda/lib/python3.7/site-packages/OpenSSL/_util.py", line 54, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.SSL.Error: [('PEM routines', 'get_name', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
`$ /opt/conda/bin/conda update -n base conda -y`
environment variables:
CIO_TEST=<not set>
CONDA_ROOT=/opt/conda
PATH=/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin
:/bin
REQUESTS_CA_BUNDLE=<not set>
SSL_CERT_FILE=<not set>
active environment : None
user config file : /root/.condarc
populated config files : /root/.condarc
conda version : 4.7.12
conda-build version : not installed
python version : 3.7.4.final.0
virtual packages :
base environment : /opt/conda (writable)
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /opt/conda/pkgs
/root/.conda/pkgs
envs directories : /opt/conda/envs
/root/.conda/envs
platform : linux-64
user-agent : conda/4.7.12 requests/2.22.0 CPython/3.7.4 Linux/3.10.0-1160.6.1.el7.x86_64 debian/10 glibc/2.28
UID:GID : 0:0
netrc file : None
offline mode : False
An unexpected error has occurred. Conda has prepared the above report.
Upload did not complete.
ERROR: Service 'base_image' failed to build: The command '/bin/sh -c conda update -n base conda -y' returned a non-zero code: 1
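A note on the update above (an assumption on my part, not verified against this exact setup): the final OpenSSL error comes from use_privatekey_file, which suggests conda is treating the file given to client_ssl_cert as a client certificate/private key pair rather than as a CA bundle. For trusting a custom CA, conda's ssl_verify option accepts a path to a CA bundle file, so a Dockerfile sketch could look like:
RUN conda config --set ssl_verify /non-standard-certificates.pem && \
    conda update -n base conda -y
client_ssl_cert (with client_ssl_cert_key for a separate key file) is meant for client-side certificates, which does not appear to be what's needed here.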

Use Google default credentials on local docker run

I have the same problem as in this question, but the provided solution does not work for me.
Basically, I want to run my Docker image, whose entrypoint is run_query.py, locally. I have issues with credentials when I try to run a BigQuery job.
When I run
docker run -v ~/.config/:/root/.config my-image-name --param1 ...
I get this error
Traceback (most recent call last):
File "run_query.py", line 97, in <module>
query_params=params)
File "run_query.py", line 54, in create_table
query_job = client.query(query, job_config=job_config)
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/client.py", line 2467, in query
query_job._begin(retry=retry, timeout=timeout)
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/job.py", line 3156, in _begin
super(QueryJob, self)._begin(client=client, retry=retry, timeout=timeout)
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/job.py", line 638, in _begin
retry, method="POST", path=path, data=self.to_api_repr(), timeout=timeout
File "/usr/local/lib/python3.7/dist-packages/google/cloud/bigquery/client.py", line 558, in _call_api
return call()
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/usr/local/lib/python3.7/dist-packages/google/cloud/_http.py", line 419, in api_request
timeout=timeout,
File "/usr/local/lib/python3.7/dist-packages/google/cloud/_http.py", line 277, in _make_request
method, url, headers, data, target_object, timeout=timeout
File "/usr/local/lib/python3.7/dist-packages/google/cloud/_http.py", line 315, in _do_request
url=url, method=method, headers=headers, data=data, timeout=timeout
File "/usr/local/lib/python3.7/dist-packages/google/auth/transport/requests.py", line 444, in request
self.credentials.before_request(auth_request, method, url, request_headers)
File "/usr/local/lib/python3.7/dist-packages/google/auth/credentials.py", line 133, in before_request
self.refresh(request)
File "/usr/local/lib/python3.7/dist-packages/google/oauth2/credentials.py", line 198, in refresh
self._scopes,
File "/usr/local/lib/python3.7/dist-packages/google/oauth2/_client.py", line 248, in refresh_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/usr/local/lib/python3.7/dist-packages/google/oauth2/_client.py", line 124, in _token_endpoint_request
_handle_error_response(response_body)
File "/usr/local/lib/python3.7/dist-packages/google/oauth2/_client.py", line 60, in _handle_error_response
raise exceptions.RefreshError(error_details, response_body)
I also tried using -v ~/.config/gcloud/:/root/.config/gcloud, but I get the same result.
Keep in mind that using this image in a Kubeflow Pipeline works smoothly.
Did I misinterpret the solution from the previous question? What am I missing?
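One pattern worth trying (a hedged sketch, not confirmed for this exact image): instead of mounting the whole gcloud config directory, mount only the application default credentials file and point the client library at it explicitly via GOOGLE_APPLICATION_CREDENTIALS, which google-auth honors:
docker run \
  -v ~/.config/gcloud/application_default_credentials.json:/tmp/keys/adc.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/adc.json \
  my-image-name --param1 ...
The /tmp/keys/adc.json path is arbitrary; what matters is that the env var points at the mounted file.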
