I have a simple Dockerfile based on centos:latest in which yum fails to find the pdns packages. This is running on a Windows host.
$ docker --version
Docker version 20.10.8, build 3967b7d
Dockerfile_dns
FROM centos:latest
RUN yum update -y
RUN yum install -y epel-release pdns pdns-tools pdns-backend-postgresql pdns-backend-sqlite pdns-recursor net-tools bind-utils jq
Using the command:
$ docker build -t dns_img -f Dockerfile_dns .
#5 [2/3] RUN yum update -y
#5 sha256:103582845ea3b4ba6361ca1a570ed91dbb7ffbdb7bd1b67e3dc21635f2dfc8da
#5 CACHED
#6 [3/3] RUN yum install -y epel-release pdns pdns-tools pdns-backend-postgresql pdns-backend-sqlite pdns-recursor net-tools bind-utils jq
#6 sha256:6639667b2dcec34132dc4bfb88fe520d625f4ab8de649b631344017f22fbd2d7
#6 2.419 Last metadata expiration check: 0:16:22 ago on Fri Sep 24 16:21:33 2021.
#6 2.788 No match for argument: pdns
#6 2.792 No match for argument: pdns-tools
#6 2.796 No match for argument: pdns-backend-postgresql
#6 2.800 No match for argument: pdns-backend-sqlite
#6 2.804 No match for argument: pdns-recursor
#6 2.817 Error: Unable to find a match: pdns pdns-tools pdns-backend-postgresql pdns-backend-sqlite pdns-recursor
#6 ERROR: executor failed running [/bin/sh -c yum install -y epel-release pdns pdns-tools pdns-backend-postgresql pdns-backend-sqlite pdns-recursor net-tools bind-utils jq]: exit code: 1
I found some references suggesting that I pin a specific CentOS version, but that didn't help either.
That's because those packages aren't available in the repositories the base image ships with.
Install the EPEL repository and you'll have the packages you need:
RUN yum install -y epel-release
After that you can install your packages:
RUN yum install -y pdns pdns-tools pdns-backend-postgresql pdns-backend-sqlite pdns-recursor net-tools bind-utils jq
You can't install the repository and the packages it provides in a single command, because yum resolves the whole installation as one transaction instead of handling the arguments one after another (the way python's pip does, for example).
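Putting it together, a minimal sketch of the corrected Dockerfile might look like this (assuming centos:latest still resolves to a release for which EPEL provides the pdns packages and its default repositories are reachable):
FROM centos:latest
RUN yum update -y
# Install the EPEL repository in its own layer/transaction first...
RUN yum install -y epel-release
# ...so that this second transaction can resolve the pdns packages from EPEL.
RUN yum install -y pdns pdns-tools pdns-backend-postgresql pdns-backend-sqlite \
    pdns-recursor net-tools bind-utils jq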
Related
Pretty new to Docker; I'm trying to get a base layer set up, but it gives me these errors:
The output says that a repository is failing. How do I set that repository?
I don't think it's an AWS issue, as I have been able to see the AWS push in CloudFormation.
$./generate_base_layer.sh
Error: No such container: layer-container
[+] Building 27.7s (6/13)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 551B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/amazonlinux:2 0.9s
=> [auth] library/amazonlinux:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 37B 0.0s
=> [2/8] RUN yum install -y python37 && yum install -y python3-pip && yum install -y 26.7s
=> => # Loaded plugins: ovl, priorities
> [2/8] RUN yum install -y python37 && yum install -y python3-pip && yum install -y zip && yum clean all:
#6 0.369 Loaded plugins: ovl, priorities
#6 36.47
#6 36.47
#6 36.47 One of the configured repositories failed (Unknown),
#6 36.47 and yum doesn't have enough cached data to continue. At this point the only
#6 36.47 safe thing yum can do is fail. There are a few ways to work "fix" this:
#6 36.47
#6 36.47 1. Contact the upstream for the repository and get them to fix the problem.
#6 36.47
#6 36.47 2. Reconfigure the baseurl/etc. for the repository, to point to a working
#6 36.47 upstream. This is most often useful if you are using a newer
#6 36.47 distribution release than is supported by the repository (and the
#6 36.47 packages for the previous distribution release still work).
#6 36.47
#6 36.47 3. Run the command with the repository temporarily disabled
#6 36.47 yum --disablerepo=<repoid> ...
#6 36.47
#6 36.47 4. Disable the repository permanently, so yum won't use it by default. Yum
#6 36.47 will then just ignore the repository until you permanently enable it
#6 36.47 again or use --enablerepo for temporary usage:
#6 36.47
#6 36.47 yum-config-manager --disable <repoid>
#6 36.47 or
#6 36.47 subscription-manager repos --disable=<repoid>
#6 36.47
#6 36.47 5. Configure the failing repository to be skipped, if it is unavailable.
#6 36.47 Note that yum will try to contact the repo. when it runs most commands,
#6 36.47 so will have to try and fail each time (and thus. yum will be be much
#6 36.47 slower). If it is a very temporary problem though, this is often a nice
#6 36.47 compromise:
#6 36.47
#6 36.47 yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
#6 36.47
#6 36.47 Cannot find a valid baseurl for repo: amzn2-core/2/aarch64
#6 36.47 Could not retrieve mirrorlist http://amazonlinux.default.amazonaws.com/2/core/latest/aarch64/mirror.list error was
#6 36.47 12: Timeout on http://amazonlinux.default.amazonaws.com/2/core/latest/aarch64/mirror.list: (28, 'Failed to connect to amazonlinux.default.amazonaws.com port 80 after 4723 ms: Connection timed out')
------
executor failed running [/bin/sh -c yum install -y python37 && yum install -y python3-pip && yum install -y zip && yum clean all]: exit code: 1
Unable to find image 'base-layer:latest' locally
docker: Error response from daemon: pull access denied for base-layer, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
Error: No such container:path: layer-container:layer.zip
I've already logged into Docker and tried it with docker build -t ...; same issue.
dockerfile:
FROM amazonlinux:2
# Install Python
RUN yum install -y python37 && \
yum install -y python3-pip && \
yum install -y zip && \
yum clean all
# Set up PIP and Venv
RUN python3.7 -m pip install --upgrade pip && \
python3.7 -m pip install virtualenv
RUN python3.7 -m venv base
RUN source base/bin/activate
# Install dependencies
COPY ./requirements.txt .
RUN pip install -r requirements.txt -t ./python
# Zip it up for deployment.
RUN zip -r layer.zip ./python/
ENTRYPOINT ["/bin/bash", "-l"]
generate_base_layer.sh file:
# Generates a base layer for the Lambda functions.
# Remove the container first (if it exists).
docker rm layer-container
# Build the base layer.
docker build -t base-layer .
# Rename it to layer-container.
docker run --name layer-container base-layer
# Copy the generated zip artifact so our CDK can use it.
docker cp layer-container:layer.zip . && echo "Created layer.zip with updated base layer."
This question already has an answer here:
Can not install package within docker debian:jessie
(1 answer)
I'm trying to install soundfile via pip install in my Docker container. Unfortunately I need to install libsndfile1 manually via apt-get myself. This fails somehow, I don't really get why, and I'd like to know how to install it.
I'm running Docker Desktop on Windows 10, but the container will eventually run on a Linux machine.
> [ 7/11] RUN apt-get install libsndfile1:
#11 0.618 Reading package lists...
#11 1.814 Building dependency tree...
#11 2.219 Reading state information...
#11 2.829 The following additional packages will be installed:
#11 2.830 libflac8 libogg0 libvorbis0a libvorbisenc2
#11 2.942 The following NEW packages will be installed:
#11 2.944 libflac8 libogg0 libsndfile1 libvorbis0a libvorbisenc2
#11 2.956 0 upgraded, 5 newly installed, 0 to remove and 3 not upgraded.
#11 2.956 Need to get 669 kB of archives.
#11 2.956 After this operation, 2136 kB of additional disk space will be used.
#11 2.956 Do you want to continue? [Y/n] Abort.
------
executor failed running [/bin/sh -c apt-get install libsndfile1]: exit code: 1
Does anyone know what's going on?
Run the command with an automatic yes so that it works non-interactively:
RUN apt-get --yes install libsndfile1
FYI, the more dangerous --force-yes option is also available.
Use it with absolute discretion, and only if necessary.
See the apt-get man page for the full list of options.
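In a Dockerfile this would look roughly like the sketch below (a minimal example, assuming a Debian/Ubuntu base image; the DEBIAN_FRONTEND=noninteractive variable is an extra precaution against other prompts, not something this particular install strictly requires):
# --yes answers the "Do you want to continue?" question automatically;
# DEBIAN_FRONTEND=noninteractive suppresses any other interactive prompts.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --yes libsndfile1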
I want to install Docker on a Raspberry Pi. I used the convenience script from the Docker docs: https://docs.docker.com/engine/install/debian/#install-using-the-convenience-script
I ran the script and then hit this issue:
sudo sh get-docker.sh
# Executing docker install script, commit: 442e66405c304fa92af8aadaa1d9b31bf4b0ad94
+ sh -c apt-get update -qq >/dev/null
+ sh -c DEBIAN_FRONTEND=noninteractive apt-get -y install -qq apt-transport-https ca-certificates curl >/dev/null
E: Essential packages were removed and -y was used without --allow-remove-essential.
Then I tried installing the packages individually: ca-certificates and curl are OK, but apt-transport-https runs into a problem again:
sudo apt-get install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
python-apt-common python3-apt python3-debconf
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
libapt-pkg4.12
The following packages will be REMOVED:
apt apt-listchanges apt-utils tasksel tasksel-data
The following NEW packages will be installed:
apt-transport-https libapt-pkg4.12
WARNING: The following essential packages will be removed.
This should NOT be done unless you know exactly what you are doing!
apt
0 upgraded, 2 newly installed, 5 to remove and 0 not upgraded.
Need to get 847 kB of archives.
After this operation, 3,112 kB disk space will be freed.
You are about to do something potentially harmful.
To continue type in the phrase 'Yes, do as I say!'
?]
I am trying to create an nvidia-docker image with TensorRT installed for my specific application. I can't use any of the provided TensorRT base images, as they use a CUDA version that is not compatible with the application, but I have a custom TensorRT Debian package which is used in my organization. The problem is that when I install it from the Dockerfile, it also installs the NVIDIA drivers. As a result, the container is created successfully but can't be started; the result is:
svc_moma_usr#PL1LXD-529389:~/gutkowsp/Docker_projects/test_cuda$ nvidia-docker run tensorrt-test
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:432: running prestart hook 1 caused \\\"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/97f449ff2535b1ad304520dae75c613931888658a66b89235b0d040a872a625c/merged/usr/bin/nvidia-smi: file exists\\\\n\\\"\"": unknown.
ERRO[0001] error waiting for container: context canceled
The Dockerfile is:
FROM nvidia/cuda:9.1-devel-ubuntu16.04
ENV DEBIAN_FRONTEND noninteractive
ENV CUDNN_VERSION 7.0.5.15
LABEL com.nvidia.cudnn.version="${CUDNN_VERSION}"
RUN apt update -y && \
apt install software-properties-common -y && \
apt-add-repository --yes --update ppa:ansible/ansible && \
apt install ansible -y
RUN apt update -y && \
apt install -y --no-install-recommends \
libcudnn7=$CUDNN_VERSION-1+cuda9.1 \
libcudnn7-dev=$CUDNN_VERSION-1+cuda9.1
RUN apt update -y && \
apt install tensorrt -y
How is this problem of unnecessary drivers usually solved? It seems like a common issue to me, since NVIDIA Docker images in general come with NVIDIA software installed, which usually pulls in drivers. Maybe someone can share the Dockerfiles for the TensorRT images for reference?
For anyone facing the same issue:
If necessary, use a cuDNN-enabled Docker image, like 11.7.1-cudnn8-runtime-ubuntu18.04, to avoid having to install cuDNN with apt.
Run apt update.
Run apt install <your package> -y --dry-run | grep nvidia.
Add all listed nvidia packages to the apt ignore list: append a dash after each package name, with an asterisk in place of the version number:
apt install <your package> libnvidia-compute-*-server- \
libnvidia-compute-*- --dry-run | grep nvidia
Make sure that none of the nvidia packages will be installed. If necessary, add newly discovered packages to the ignore list.
If everything looks OK, remove the --dry-run flag and install your package (a Dockerfile sketch follows the command below):
apt install <your package> libnvidia-compute-*-server- libnvidia-compute-*-
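As a rough sketch, the final step could look like this in a Dockerfile. The base image tag and the libnvidia-compute-* exclusions are placeholders; substitute whatever packages your own --dry-run pass reported:
FROM nvidia/cuda:11.7.1-cudnn8-runtime-ubuntu18.04
# A trailing dash after a package name tells apt to exclude (or remove) that
# package from the transaction, so the driver packages are never pulled in.
RUN apt update && \
    apt install -y <your package> libnvidia-compute-*-server- libnvidia-compute-*-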
I have observed a problem where apt-get install will fail within a container where:
the package is already installed and,
sudo is not used.
This can be recreated by creating a simple container, e.g.
docker run -it ubuntu:latest /bin/bash
Within the container, run the following:
apt-get install software-properties-common
apt-get install software-properties-common
The second time, this will fail with a "Killed" message. If you then prepend the statement with sudo it will complete successfully:
sudo apt-get install software-properties-common
If the user within the container is root, why is sudo required to reinstall an existing package? I do not believe this is related to the AUFS file system as prepending with sudo will complete.
This is using docker 1.10 and an Ubuntu image.
The main point is the use of sudo.
It doesn't fail on Debian 11:
# docker run -it ubuntu:latest /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
00f50047d606: Pull complete
Digest: sha256:20fa2d7bb4de7723f542be5923b06c4d704370f0390e4ae9e1c833c8785644c1
Status: Downloaded newer image for ubuntu:latest
# apt-get update
Hit:1 http://ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:2 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
Reading package lists... Done
# apt-get install software-properties-common
Reading package lists... Done
...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
Processing triggers for dbus (1.12.20-2ubuntu4) ...
# apt-get install software-properties-common
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
software-properties-common is already the newest version (0.99.22.3).
0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
There may be something corrupt on your system. If running these doesn't help, you may need to reinstall:
apt-get update
apt-get upgrade
You need to install the sudo package with the following commands.
apt update && apt upgrade
apt install sudo
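If you're building an image rather than working in a running container, a roughly equivalent Dockerfile step would be (a minimal sketch, assuming an Ubuntu or Debian base image):
# Refresh the package index and install sudo non-interactively.
RUN apt-get update && \
    apt-get install -y sudo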