A little confused at the moment. I've got Docker on one of my servers, and as it doesn't have internet access, I'm trying to build a base image for CentOS 7.4. The Docker site helpfully provides a mkimage_yum.sh script for this purpose, but it consistently fails when it runs:
yum -c /tmp/mkimage_yum.sh.gnagTv/etc/yum.conf --installroot=/tmp/mkimage_yum.sh.gnagTv -y clean all
with a "No enabled repos" error. The thing is, if I enter "yum repolist" I get back 17 entries, and I have manually tried to set several repos to enabled. Yet, this command still fails, and I do not understand what could be missing.
Anybody have some idea of what I can do so this succeeds?
Jay
I figured out why this was failing: mkimage_yum.sh does not contain the proper handling if you're storing your repos in /etc/yum.repos.d; it assumes that everything is in /etc/yum.conf. That assumption is simply not correct, and it causes one of the later yum clean operations to fail. I fixed it locally, but I cannot upload the change, as the server has no internet access.
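For anyone hitting the same wall, here is a minimal sketch of the kind of fix involved, assuming the script builds its install root in a temp directory held in $target (the variable name is illustrative, not necessarily the script's actual one):
# Copy the host's repo definitions into the install root so yum can
# find them there; without this, yum reports "No enabled repos".
mkdir -p "$target"/etc/yum.repos.d
cp /etc/yum.repos.d/*.repo "$target"/etc/yum.repos.d/
yum -c "$target"/etc/yum.conf --installroot="$target" -y clean all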
MINIMAL WORKING EXAMPLE: https://gitlab.com/hynek.blaha/debug-docker-poetry/-/tree/master
I have been building Docker images using Poetry, with Python packages from an internal PyPI registry. As our projects are in a private GitLab repository and the internal packages are not top-secret, we store the Poetry credentials directly in the Poetry source URL in pyproject.toml.
On 2022-08-24, all our Docker builds started failing while installing an internal package:
• Installing til-bigquery (0.3.4)
HTTPError
401 Client Error: Unauthorized for url: https://gitlab.com/api/v4/projects/38869805/packages/pypi/files/7a4731d831d4b37262481002271e359f96017570e9480ef16c89489e0b41252f/til_bigquery-0.3.4-py3-none-any.whl#sha256=7a4731d831d4b37262481002271e359f96017570e9480ef16c89489e0b41252f
at /usr/local/lib/python3.9/site-packages/requests/models.py:1021 in raise_for_status
1017│ f"{self.status_code} Server Error: {reason} for url: {self.url}"
1018│ )
1019│
1020│ if http_error_msg:
→ 1021│ raise HTTPError(http_error_msg, response=self)
1022│
1023│ def close(self):
1024│ """Releases the connection back to the pool. Once this method has been
1025│ called the underlying ``raw`` object must not be accessed again.
What I found weird:
The Docker build fails even when I retry a deploy job that passed successfully a few days ago.
Considering the issue might have been caused by an unpinned minor version of the Docker base image python:3.7-slim or of Poetry, I tried older versions, but got the same result.
I compared the build log of a previously successful build, build_success.log (8/22/22, 3:00 PM), with that of the retry of the same build, build_fail.log (8/24/22, 6:00 AM), and found both use the same Poetry wheel, poetry-1.1.15-py2.py3-none-any.whl.
It still works as before on my machine, but fails in Docker.
It stops working on localhost when I remove the credentials from the repository URL, so I am sure the credentials are not stored anywhere else (e.g. ~/.netrc).
How to reproduce:
Localhost - OK
git clone git@gitlab.com:hynek.blaha/debug-docker-poetry.git
poetry install
Docker - FAIL
git clone git@gitlab.com:hynek.blaha/debug-docker-poetry.git
docker build .
I am able to fix the issue by explicitly providing the credentials in the Dockerfile:
RUN pip install poetry --no-cache-dir && \
poetry config virtualenvs.create false && \
poetry config repositories.my_private_repo https://gitlab.com/api/v4/projects/21870843/packages/pypi/simple && \
poetry config http-basic.my_private_repo __token__ glpat-mkEPJ4Rsy2peTCrH23pG
But it doesn't explain why rebuilding the same image started failing.
And why it still works as expected when running on my machine (outside of Docker).
Does anyone have an idea what might have changed? I was unable to tell even after running diff on build_success.log and build_fail.log.
I was struggling with exactly the same problem over the last few days. Though I'm still not sure of the exact cause, I managed to avoid the problem.
Until yesterday, I also used a repository URL with the credentials embedded in pyproject.toml, like this:
[[tool.poetry.source]]
name = 'private'
url = 'https://your_username:your_password@gitlabce.example.com/api/v4/projects/<project_id>/packages/pypi/simple'
secondary = true
Though it's basically the same as your solution, you can specify local Poetry configs for each project by creating a poetry.toml at the project root. So instead of embedding the credentials in the URL, you can specify them via poetry.toml as follows:
[http-basic]
[http-basic.private]
username = "your_username"
password = "your_password"
That way, you get the same behaviour as with the embedded credentials, but without the authentication error.
Why does it still work outside Docker?
I suspect it is due to the archive cache in your local environment. Since Poetry stores downloaded archives in ~/.cache/pypoetry/artifacts/ and reuses them when you run poetry install, you never needed to access your private PyPI server in the first place. If you manually remove the archives (I'm not sure why, but the poetry cache clear command didn't work in my case), you'll be able to reproduce the authentication error in your local environment as well.
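If you want to verify this, a quick sketch (the cache path below is Poetry's default location; adjust it if you have configured a different cache-dir):
# Remove Poetry's cached archives so the next install has to re-download
# everything from the configured indexes.
rm -rf ~/.cache/pypoetry/artifacts
poetry install  # should now fail with the same 401 as the Docker build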
Why did the error suddenly start to occur?
Poetry uses the embedded credentials to query the list of package links from the private PyPI server.
That part works fine; however, when installing the actual packages, Poetry follows the link it got back from the server, and that link has no credentials embedded. That's why the install fails and why the URL shown in the error message doesn't have any credentials in it.
I'm still not sure why the embedded credentials had been working until a few days ago, though. I guess there might have been a behaviour change on the GitLab side.
I have a Debian package I am deploying that comes with a Docker image. On upgrading the package, the prerm script stops and removes the Docker image. As a fail-safe, I have the preinst script do it as well, to ensure the old image is removed before the new one is installed. If there is no image, the following errors are reported to the screen: (for stop) No such container: <tag> and (for rmi) No such image: <tag>.
This really isn't a problem, as the errors are ignored by dpkg, but they are printed to the screen, and I get constant questions from the users: is that error OK? Did the install fail? And so on.
I cannot seem to find the correct set of docker commands to check whether a container is running so I can stop it, and whether an image exists so I can remove it, so that those errors are no longer generated. All I have to work with is the Docker image tag.
I think you could go one of two ways:
Knowing the image, you could check whether there is any container based on that image. If yes, find out whether that container is running, and if so, stop it. Once nothing is running, remove the image (see the sketch after this list). This would prevent the error messages from showing up, though other messages regarding container and image handling may still be visible.
Redirect the output of the docker commands in question, e.g. 2>/dev/null (the errors go to stderr).
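A minimal sketch of the first approach for a maintainer script, assuming the image tag is held in $IMAGE (a placeholder, not something your package defines):
# Stop any running container based on the image.
cid=$(docker ps -q --filter "ancestor=$IMAGE")
[ -n "$cid" ] && docker stop $cid
# Remove the image only if it actually exists.
if docker image inspect "$IMAGE" >/dev/null 2>&1; then
    docker rmi "$IMAGE"
fi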
You're not limited to the docker CLI, you know? You can always combine docker CLI commands with Linux sh (or DOS) commands, and you can also write your own .sh scripts. If you don't want to see the errors, you can either redirect them or store them in a file, such as:
to redirect: {operation} 2>/dev/null
to store: {operation} 2>>/var/log/xxx.log
I'm attempting to learn how to create a Laravel Docker image by following a tutorial on DigitalOcean using WSL. Following the instructions on the Docker Hub page, however, yields an error:
❯ docker run --rm --interactive --tty -v $(pwd):/app composer install
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 94 installs, 0 updates, 0 removals
- Installing voku/portable-ascii (1.4.10): Failed to download voku/portable-ascii from dist: Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
Now trying to download from source
- Installing voku/portable-ascii (1.4.10):
[RuntimeException]
Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
install [--prefer-source] [--prefer-dist] [--dry-run] [--dev] [--no-dev] [--no-custom-installers] [--no-autoloader] [--no-scripts] [--no-progress] [--no-suggest] [-v|vv|vvv|--verbose] [-o|--optimize-autoloader] [-a|--classmap-authoritative] [--apcu-autoloader] [--ignore-platform-reqs] [--] [<packages>]...
How can I diagnose what I'm doing wrong?
It turns out that the underlying problem had nothing to do with Docker at all. In fact, Composer was trying to tell me what the problem was all along, but I dismissed it as just a symptom of a deeper issue:
[RuntimeException]
Could not delete /app/vendor/voku/portable-ascii/src/voku/helper:
This message was the crux of it all. I noticed that the directory mentioned, [...]/helper, was empty, so I tried to remove it by hand with rmdir. Instead, I got a No such file or directory error. I attempted many other ways to kill this directory, including removing the entire project with rm -rf ~/laravel-app from the home folder, etc. Nothing worked.
Some digging around on the internet suggested that it could be NTFS corruption if I was running into this issue on Windows. Since I am, indeed, attempting this on WSL (Windows Subsystem for Linux), I gave the suggested fix a try: running chkdsk /F in CMD/PowerShell. A reboot was necessary to complete the check, but after getting everything back up and retrying those first few tutorial steps, composer installed the Laravel dependencies without a hitch.
Bottom line: If you run into this sort of issue on WSL, please try running chkdsk /F and reboot. You might just have a similar file system corruption.
We have two possibilities for this error:
1 - You did not execute the command inside the project directory:
cd ~/laravel-app
2 - There may be an internal proxy that is blocking the download of packages.
I had this happen on my regular instance of Artifactory OSS, so I made a clean install with minimal configuration changes to check everything.
On a clean install of Artifactory:
Version: Artifactory OSS 7.3.2 (Docker version)
The command used to create the container: docker run --privileged=true --name=artifactory -i -d -v /media/sdb1/Artifactory:/media/sdb1/Artifactory:z -p 8082:8082 docker.bintray.io/jfrog/artifactory-oss:latest
Everything works fine for regular files.
I can upload a file with a hash symbol in its name, e.g. test#1_hashtag.txt
When I try to download it with the UI, I end up here: http://my.dns.com:8082/ui/api/v1/download?repoKey=generic-local&path=test%231_hashtag.txt
There is this error displayed:
{
  "errors": [
    {
      "status": 404,
      "message": "File not found."
    }
  ]
}
I can download the file with curl
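For reference, a sketch of such a curl call, assuming the standard Artifactory repository path layout (admin:password stands in for real credentials):
# The hash must be percent-encoded as %23, otherwise the client treats
# everything after it as a URL fragment and requests the wrong path.
curl -u admin:password -O "http://my.dns.com:8082/artifactory/generic-local/test%231_hashtag.txt"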
I still get the error even when I connect via IP.
I am looking to fix this, since not being able to use the hash symbol (#) would force us to rename a lot of files. I don't know if it's due to a redirect or something, but this installation is 100% what comes out of the box.
Edit: It's not a problem of understanding how the hash symbol in the link works; I know how it works. It's a problem of a special character not being handled correctly by the app or by the redirect.
It looks like you are running into a regression. This seems to have been working in 6.16.2 and broken in 7.3.2 (the versions I tested, not necessarily where the regression happened, which is likely in 7.0). There is a bug open for it: https://www.jfrog.com/jira/browse/RTFACT-21460. Please vote for it and follow it for updates.
This is my build.
https://travis-ci.org/gogo/protobuf
It intermittently fails for some of the builds.
I think it is struggling to install a protocol buffers release using wget, but I can't see the logs, since they take forever to load.
It would be great if Travis could tell me that it has failed to load the logs instead of just pretending to load them. Sorry, I don't know if that is really the case, but that is how it feels.
Also, I don't understand why this works some of the time and randomly fails. If the server is overloaded, put me in a queue; please don't fail when there is nothing wrong with the code.
Please help, I am new to Travis, so maybe I am just doing it wrong.
Some of the other builds with the same PROTOBUF_VERSION succeed and show output from the final step of install-protobuf.sh (./configure --prefix=/home/travis && make -j2 && make install). So, like you, I suspect that the wget step in install-protobuf.sh is what is failing in the failed builds.
I would suggest editing install-protobuf.sh so that you can better see what is going on in the travis-ci logs:
change the set command to: set -ex
remove the -q option from your use of wget in:
wget -q https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename.tar.gz
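With those two edits, the relevant part of install-protobuf.sh would look roughly like this (a sketch only; I'm assuming $basename is derived from $PROTOBUF_VERSION as in the existing script, and the steps after the download mirror the final step quoted above):
set -ex  # echo each command as it runs and abort on the first failure
# Without -q, wget prints progress and any HTTP errors into the build log.
wget https://github.com/google/protobuf/releases/download/v$PROTOBUF_VERSION/$basename.tar.gz
tar xzf $basename.tar.gz
cd $basename
./configure --prefix=/home/travis && make -j2 && make install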