ABOUT
MINIMAL WORKING EXAMPLE: https://gitlab.com/hynek.blaha/debug-docker-poetry/-/tree/master
I have been building Docker images using Poetry, with Python packages from an internal PyPI registry. As our projects are in a private GitLab repository and the internal packages are not top-secret, we store the Poetry credentials directly in the source URL in pyproject.toml.
On 2022-08-24, all our Docker builds started failing while installing an internal package:
• Installing til-bigquery (0.3.4)
HTTPError
401 Client Error: Unauthorized for url: https://gitlab.com/api/v4/projects/38869805/packages/pypi/files/7a4731d831d4b37262481002271e359f96017570e9480ef16c89489e0b41252f/til_bigquery-0.3.4-py3-none-any.whl#sha256=7a4731d831d4b37262481002271e359f96017570e9480ef16c89489e0b41252f
at /usr/local/lib/python3.9/site-packages/requests/models.py:1021 in raise_for_status
1017│ f"{self.status_code} Server Error: {reason} for url: {self.url}"
1018│ )
1019│
1020│ if http_error_msg:
→ 1021│ raise HTTPError(http_error_msg, response=self)
1022│
1023│ def close(self):
1024│
1025│ called the underlying ``raw`` object must not be accessed again.
What I found weird:
The Docker build fails even when I retry a deploy job that passed successfully a few days ago.
Considering the issue might have been caused by an unpinned minor version of the Docker base image python:3.7-slim or of Poetry, I tried older versions but got the same result.
I compared the build log of a previously successful build (build_success.log, 8/22/22, 3:00 PM) with the log of a retry of the same build (build_fail.log, 8/24/22, 6:00 AM) and found both use the same Poetry wheel, poetry-1.1.15-py2.py3-none-any.whl.
It still works as before on my machine, but fails in Docker.
It stops working on localhost when I remove the credentials from the repository URL, so I am sure the credentials are not stored anywhere else (e.g. ~/.netrc).
How to reproduce:
Localhost - OK
git clone git@gitlab.com:hynek.blaha/debug-docker-poetry.git
poetry install
Docker - FAIL
git clone git@gitlab.com:hynek.blaha/debug-docker-poetry.git
docker build .
I am able to fix the issue by explicitly providing the credentials in Dockerfile:
RUN pip install poetry --no-cache-dir && \
poetry config virtualenvs.create false && \
poetry config repositories.my_private_repo https://gitlab.com/api/v4/projects/21870843/packages/pypi/simple && \
poetry config http-basic.my_private_repo __token__ glpat-mkEPJ4Rsy2peTCrH23pG
But that doesn't explain why rebuilding the same image started failing, or why it still works as expected when running on my machine (outside of Docker).
Does anyone have an idea what might have changed? I was unable to tell, even after running diff on build_success.log and build_fail.log.
I was struggling with exactly the same problem over the last few days. Though I'm still not sure of the exact cause, I managed to work around it.
Until yesterday, I also used a repository URL with credentials embedded in pyproject.toml, like this:
[[tool.poetry.source]]
name = 'private'
url = 'https://your_username:your_password@gitlabce.example.com/api/v4/projects/<project_id>/packages/pypi/simple'
secondary = true
Though it's basically the same as your solution, you can set local Poetry configs per project by creating a poetry.toml at the project root. So instead of embedding credentials in the URL, you can specify them in poetry.toml as follows:
[http-basic]
[http-basic.private]
username = "your_username"
password = "your_password"
That way, you get the same behaviour as with embedded credentials, without the authentication error.
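As a side note, and assuming the source is named `private` as above, the same credentials can also be supplied through Poetry's environment-variable settings instead of a committed file, which is convenient in Docker builds. A minimal sketch (the values are placeholders):

```shell
# Poetry maps settings to environment variables with the POETRY_ prefix,
# so http-basic credentials for the source named "private" become:
export POETRY_HTTP_BASIC_PRIVATE_USERNAME="your_username"
export POETRY_HTTP_BASIC_PRIVATE_PASSWORD="your_password"
echo "credentials configured for source: private"
```

In a Dockerfile these would typically be passed via build args or a secret mount rather than hard-coded.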
Why does it still work outside docker?
I suspect it is due to the archive cache in your local environment. Poetry stores downloaded archives in ~/.cache/pypoetry/artifacts/ and reuses them when executing poetry install, so you never needed to contact your private PyPI server in the first place. If you manually remove the archives (the poetry cache clear command didn't work in my case, though I'm not sure why), you'll be able to reproduce the authentication error in your local environment too.
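To force that re-download locally, the manual cleanup might look like the sketch below (this assumes the default cache location; adjust if you've set POETRY_CACHE_DIR):

```shell
# Remove Poetry's downloaded-archive cache so the next `poetry install`
# has to re-download (and therefore re-authenticate) every package.
cache_dir="${POETRY_CACHE_DIR:-$HOME/.cache/pypoetry}"
rm -rf "$cache_dir/artifacts"
echo "removed: $cache_dir/artifacts"
```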
Why did the error suddenly start to occur?
Poetry uses the embedded credentials to query the private PyPI server for the list of package links.
That part works fine; however, when installing the actual packages, Poetry follows the link returned by the PyPI server, which has no credentials embedded. That's why it fails, and why the URL shown in the error message contains no credentials.
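The difference between the two requests can be illustrated by checking whether a URL's authority part embeds credentials; a small sketch (the URLs below are shortened, hypothetical examples):

```shell
# Report whether a URL embeds user:password@ credentials before the host.
has_credentials() {
  case "$1" in
    *://*@*) echo "yes" ;;
    *)       echo "no" ;;
  esac
}

# Index URL as written in pyproject.toml: credentials embedded.
has_credentials "https://your_username:your_password@gitlab.com/api/v4/projects/123/packages/pypi/simple"   # yes
# File link returned by the index: no credentials, so a bare GET gets a 401.
has_credentials "https://gitlab.com/api/v4/projects/123/packages/pypi/files/abc/til_bigquery-0.3.4-py3-none-any.whl"   # no
```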
I'm still not sure why the embedded credentials had worked until a few days ago, though. I suspect there was a behavior change on the GitLab side.
Related
I build my Symfony 4 application with composer install inside a Docker container.
Composer version 1.10.19.
But I got this error:
[ErrorException]
file_put_contents(/root/.composer/cache/repo/https---flex.symfony.com/): failed to open stream: Is a directory
If I run composer install once more without any interruption, the build succeeds.
If I delete the vendor and var/cache directories in the project directory, the error occurs again.
I tried these methods, none with success:
running composer clearcache
deleting the ~/.composer directory
chmod -R 777 ~/.composer
Some builds of the same project inside a different container succeed. My container starts with these volumes:
the project directory
the ~/.ssh directory
I searched across the net but found no solution. Please help.
This was a bug in Flex versions < 1.13.4.
In the latest version (1.13.4 as I write this) the issue is solved.
The problem was:
Writing to the cache with an empty key will fail with "failed to open stream: Is a directory", so do not try to do that.
We noticed this when Cloudflare (or the backend service) responded with a "last-modified" header for our CI servers (AWS) but not for our local system. This made the condition true, and Composer tried to write a cache file without a filename.
That problem was solved with commit d81196c3f3b5
Edit: as posted by @Tmb, a workaround is to use: composer install --no-cache ...
A little confused at the moment. I've got Docker on one of my servers, and as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The Docker site has a mkimage_yum.sh script for this purpose, but it consistently fails when it tries running:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I run yum repolist I get back 17 entries, and I have manually tried setting several repos to enabled. Yet this command still fails, and I do not understand what could be missing.
Does anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: the mkimage_yum.sh script does not handle the case where your repos are stored in /etc/yum.repos.d; it assumes everything is in /etc/yum.conf. That is not correct, and it causes one of the later yum clean operations to fail. I fixed it, but I cannot upload the change as the server has no internet access.
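For anyone hitting the same wall, the shape of the fix described above might look something like the sketch below (the paths are assumptions about how mkimage_yum.sh lays out its installroot, not the actual patch):

```shell
# Copy the host's repo definitions into the chroot that mkimage_yum.sh
# builds, so yum inside the installroot can see repos that live in
# /etc/yum.repos.d instead of /etc/yum.conf.
repos_src="${REPOS_SRC:-/etc/yum.repos.d}"
target="${TARGET:-/tmp/mkimage_target}"

mkdir -p "$target/etc/yum.repos.d"
if [ -d "$repos_src" ]; then
  cp "$repos_src"/*.repo "$target/etc/yum.repos.d/" 2>/dev/null || true
fi
echo "repo definitions staged under $target/etc/yum.repos.d"
```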
I want to create a Jenkins based image to have some plugins installed as well as npm. To do so I have the following Dockerfile:
FROM jenkins:2.60.3
RUN install-plugins.sh bitbucket
USER root
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm --version
USER jenkins
That works fine; however, when I run the image I have two problems:
The plugins I tried to install manually didn't get persisted, for some reason.
I get prompted with the list of plugins to install, but I don't want to install anything else.
Am I missing anything in the Dockerfile configuration, or is what I want to achieve simply not possible?
Without seeing the contents of install-plugins.sh, I can't comment as to why the plugins aren't persisting. It is most likely caused by an incorrect installation destination; persistence shouldn't be an issue at this stage, since the plugin installation is built into the image itself.
As for the latter issue, you should be able to skip the installation wizard altogether by adding the line ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false to your Dockerfile. Please note that this can be a security risk if the Jenkins image is exposed to the world at large, since this option disables the need for authentication.
EDIT: The default plugin directory for the Docker image is /var/jenkins_home/plugins
EDIT 2: According to the README on the Jenkins Docker repo, adding the line RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state should accomplish the same thing.
Things have changed since 2017, when the previous answer was posted, and that approach no longer works. The current way is shown in the following Dockerfile snippet:
# Prevent setup wizard from running.
# WARNING: Jenkins will start with security disabled, without any password.
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt must contain the list of plugins to be installed
# (One plugin per line, e.g. sidebar-link:1.11.0)
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
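For completeness, the plugins.txt consumed by the snippet above could look like this (the plugin versions below are placeholders; pin the ones you actually need):

```shell
# Generate a plugins.txt in the build context; install-plugins.sh reads
# one "id:version" pair per line. Versions here are hypothetical.
cat > plugins.txt <<'EOF'
bitbucket:2.2.0
sidebar-link:1.11.0
EOF
cat plugins.txt
```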
I'm trying to get the CircleCI CLI tool ( https://circleci.com/docs/2.0/local-jobs/ ) working on Ubuntu WSL on Windows 10. It appeared to install successfully -- and the file permissions appear to be correct. I have Docker for Windows installed and running, and the Linux Docker client works without issue.
But now it always errors when trying to validate a CircleCI config file.
I have tried:
circleci config validate -c .circleci/config.yml
and
circleci config validate
from the root of my repo.
But each time, it gives the error:
Error: open .circleci/config.yml: no such file or directory
Has anyone been able to get this work?
Using sudo worked for me to overcome this. However, I then got stuck on the next error.
When running npm install within CircleCI, we fetch some node packages from our GitHub repositories through package.json. This happens while building a Docker image from a Dockerfile.
This had been working great until last week, when, without any changes on our side, we started to get errors while cloning these packages. For this operation we were using Basic Authentication, providing the user credentials in the URL. For example:
https://<username>:<password>@github.com/elektron-technogoly/<repository>.git
Now, we get the following errors:
npm ERR! Command failed: git clone ...
npm ERR! fatal: unable to access 'https://<username>:<password>@github.com/elektron-technogoly/<repository>.git':
Could not resolve host: <username>
From the error message, it seems git thinks the username is the host and thus fails. I checked that the password is still valid and has not expired.
Has recently - around last week - something changed that could cause this error? Has Basic Authentication been disabled?
UPDATE: Playing around a bit, it seems that when you change the base Docker image (say, from node:4-slim to node:4), the first build works but subsequent builds don't. Unfortunately, the logs don't give me any lead; they look exactly the same, yet the error appears from the second build onwards.
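One commonly suggested alternative to embedding credentials in the clone URL (not from this thread; the values are placeholders) is to put them in ~/.netrc, which git's HTTPS transport consults:

```shell
# Write credentials to ~/.netrc instead of embedding them in the URL.
# <username> and <personal-access-token> are placeholders.
cat > "$HOME/.netrc" <<'EOF'
machine github.com
login <username>
password <personal-access-token>
EOF
chmod 600 "$HOME/.netrc"
# package.json can then reference the plain URL:
#   https://github.com/elektron-technogoly/<repository>.git
```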