Cannot install packages inside docker Ubuntu image [closed]

I pulled the Ubuntu 14.04 image in Docker. When I try to install packages inside the image, I get an "Unable to locate package" error:
apt-get install curl
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package curl
How do I fix this error?

This is because there is no package cache in the image; you need to run:
apt-get update
before installing packages. If your command is in a Dockerfile, you'll also need the -y flag so apt-get doesn't prompt for confirmation:
apt-get -y install curl
To suppress the standard output from a command, use -qq, e.g.:
apt-get -qq -y install curl
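In a Dockerfile, the two commands are typically combined into a single RUN instruction so the package lists and the install are cached together; a minimal sketch:
RUN apt-get update && apt-get -qq -y install curl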

From the Docker docs (Dockerfile best practices):
Always combine RUN apt-get update with apt-get install in the same
RUN statement, for example
RUN apt-get update && apt-get install -y package-bar
(...)
Using apt-get update alone in a RUN statement causes caching
issues and subsequent apt-get install instructions fail.
(...)
Using RUN apt-get update && apt-get install -y ensures your Dockerfile installs the latest package versions with no further coding or manual intervention. This technique is known as “cache busting”.
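The same best-practices page also recommends version pinning as a further form of cache busting; a sketch with an illustrative package name and version:
RUN apt-get update && apt-get install -y package-bar=1.3.*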

Add the following command to your Dockerfile:
RUN apt-get update

Make sure you don't have any syntax errors in your Dockerfile, as these can cause this error as well. A correct example is:
RUN apt-get update \
    && apt-get -y install curl \
       another-package
For me, it was a combination of fixing a syntax error and adding apt-get update that solved the problem.

Running apt-get update didn't solve it for me, because the build always seemed to read the result of this command from cache, and the cache seemed to be corrupted. My suspicion is that this happens when multiple Docker images share the same base image (for me it was python:3.8-slim).
So, here's what worked for me:
Stop all Docker containers on the system:
docker stop $(sudo docker ps -aq)
Remove all stopped containers:
docker container prune
Remove all dangling images:
docker image prune
After running these commands, the Docker build loses the corrupted cache, does a clean apt-get update that fetches the correct package info, and the package installations proceed as expected.
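A lighter-weight alternative worth trying first (not from the original answer; the image tag is illustrative) is to bypass the build cache for a single build:
docker build --no-cache -t myimage .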

I found that mounting a local volume over /tmp can cause permission issues when apt-get update runs, which prevents the package cache from being populated. Hopefully this isn't something most people do, but it's something else to look for if you see this issue.

Out of the blue I started getting this problem. Running the following first seemed to resolve it:
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/*

I was getting the same error when trying to install cron and nginx-extras in my Dockerfile. I tried all the answers so far and no dice.
I then simply ran
sudo service docker restart
which fixed the issue for me.

This has since been answered, but I encountered this issue earlier and none of these steps worked.
The issue, it turns out, was that I had saved a list of packages to install in a separate file. This was saved on a Windows machine with CRLF line separators (\r\n, not just \n). Forcing the line endings to \n, on top of the steps in the accepted answer, resolved the issue; a sketch follows.
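A minimal sketch of the fix, assuming the package list is a file named packages.txt (a hypothetical name) and the image provides GNU sed and xargs:
RUN sed -i 's/\r$//' packages.txt \
    && apt-get update \
    && xargs -a packages.txt apt-get install -y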

I ran into the same problem when trying to install a JRE. I ran apt-get update and then tried apt install default-jre again, and it worked!
I hope this helps you, good luck!

You need to update the package list in your Ubuntu image first:
$ sudo apt-get update
$ sudo apt-get install <package_name>

Related

Running PHP scripts on Synology from within Docker container

Today I had to move my Domoticz/jadahl/Synology setup to one that runs in a Docker container. The move itself went fine, but I have one issue. Domoticz allows scripts to be executed when a switch is toggled. I have been running PHP scripts this way for years, and I was wondering if it is possible to run a script located on the Synology from the Docker container. I'm totally new to Docker, so forgive any stupid questions.
If not, any tips on how to approach this so I can get back to my dayjob?
Solved this by creating my own image:
FROM domoticz/domoticz:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install etherwake wget curl php-cli php-xml php-soap -y
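Building and running the custom image would then look something like this (the tag and port mapping are illustrative, not from the original answer):
docker build -t my-domoticz .
docker run -d -p 8080:8080 my-domoticz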

Docker build dependent on host Ubuntu version not on the actual Docker File

I'm facing an issue with my docker build.
I have a dockerfile as follow:
FROM python:3.6
RUN apt-get update && apt-get install -y libav-tools
....
The issue I'm facing is that I'm getting this error when building on ubuntu:20.04 LTS
E: Package 'libav-tools' has no installation candidate
I made some research and found out that ffmpeg should be a replacement for libav-tools
FROM python:3.6
RUN apt-get update && apt-get install -y ffmpeg
....
I tried again without any issue.
but when I tried to build the same image with ffmpeg on ubuntu:16.04 xenial I'm getting a message that
E: Package 'ffmpeg' has no installation candidate
after that, I replace the ffmpeg with libav-tools and it worked on ubuntu:16.04
I'm confused now about why the docker build depends on the host Ubuntu version I'm using and not on the actual Dockerfile.
Shouldn't docker build be consistent regardless of the Ubuntu version I'm using?
Delete the existing image and pull it again. It seems you have an old image which may have a different base OS, and that is why you are seeing the issue.
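For example, to force a fresh copy of the base image used above (a sketch; adjust the tag to match your Dockerfile):
docker rmi python:3.6
docker pull python:3.6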

dockerized conan shows FileExistsError: [Errno 17] File exists: './util-linux-2.33.1/tests/expected/libmount/context-X-mount.mkdir'

The full error is
ERROR: libmount/2.33.1: Error in source() method, line 26
tools.get(**self.conan_data["sources"][self.version])
FileExistsError: [Errno 17] File exists: './util-linux-2.33.1/tests/expected/libmount/context-X-mount.mkdir'
My setup is a dockerized Conan where the container is built like this:
FROM gcc:10.2.0
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get install -y cmake
RUN apt-get install -y python3-pip
RUN pip3 install --upgrade pip
RUN pip3 install conan
RUN conan remote add bincrafters https://api.bintray.com/conan/bincrafters/public-conan
CMD ["/bin/bash"]
My basepath contains the folders build/conan and there is a conanfile.txt in the basepath.
The conanfile.txt contains:
[requires]
sdl2/2.0.12#bincrafters/stable
The motivation for dockerizing is to get a stable build environment across all my machines.
build/conan is intended to store all cached files between builds, or so I hope it will once this works.
I made this into a repository so you can check out this example
EDIT: I modified the repo as I went on investigating - the original is in the commit history.
https://github.com/Aypahyo/dockerized-conan-shows-fileexistserror-errno-17-file-exists-util-linux-2.git
What I want is to use conan install from within a container on a mounted docker container with caching on the host machine.
My obvious question is: What is happening here and how do I fix it?
The issue seems to stem from the volume mount on my system.
I followed user uilianries' advice and went for building a container based on an official conan-docker-tools container, as well as moving the volume into a Docker-managed volume.
The error message is gone now, although it looks like this approach in general may not fit what I want to do.
I modified the repository for this question with what I ended up with: https://github.com/Aypahyo/dockerized-conan-shows-fileexistserror-errno-17-file-exists-util-linux-2
Caching does not work as I want it to, but that is not what this question was about.
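For reference, a sketch of the Docker-managed-volume approach (the image name is illustrative, and the cache path assumes Conan 1.x's default of /root/.conan):
docker volume create conan-cache
docker run --rm -it -v conan-cache:/root/.conan -v $(pwd):/work -w /work my-conan-image conan install .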

DockerFile one-line vs multi-line instruction [duplicate]

This question already has answers here:
Multiple RUN vs. single chained RUN in Dockerfile, which is better?
(4 answers)
To my knowledge, docker build creates a separate image layer for each instruction. However, it is very efficient at reusing layers and avoiding rebuilds when nothing has changed.
So does it matter whether I put the instructions below on one line or on multiple lines? For convenience, I would prefer the single-line option, unless it is less efficient.
Multi-Line Instruction
RUN apt-get -y update
RUN apt-get -y install ...
Single-Line Instruction
RUN apt-get -y update && apt-get -y install
In this specific case it is important to put apt-get update and apt-get install together. More broadly, fewer layers is considered "better" but it almost never has a perceptible difference.
In practice I tend to group together "related" commands into the same RUN command. If I need to configure and install a package from source, that can get grouped together, and even if I change make arguments I don't mind re-running configure. If I need to configure and install three packages, they'd go into separate RUN lines.
The important difference in this specific apt-get example is around layer caching. Let's say your Dockerfile has
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install package-a
If you run docker build a second time, it will decide it's already run all three of these commands and the input hasn't changed, so it will run very quickly and you'll get an identical image out.
Now you come back a day or two later and realize you were missing something, so you change
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install package-a package-b
When you run docker build again, Docker decides it's already run apt-get update and can jump straight to the apt-get install line. In this specific case you'll have trouble: Debian and Ubuntu update their repositories fairly frequently, and when they do the old versions of packages get deleted. So your apt-get update from two days ago points at a package that no longer exists, and your build will fail.
You'll avoid this specific problem by always putting the two apt-get commands together in the same RUN line:
FROM ubuntu:18.04
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
package-a \
package-b
I would use the single-line instruction. It's considered a best practice for Docker, since it minimizes the number of layers (RUN is one of the instructions that create layers).
As for multiple instructions for collecting dependencies: that's sometimes useful during development, when you're frequently changing the list of packages (or their versions), but for a production image I would avoid it.

Why can't Docker find a existing package?

I am new to using Docker, so this may be obvious to some. I am running Ubuntu 18.04 LTS.
I want to install the package "python3-protobuf" inside an image. I try to do this with the following line in the Dockerfile:
...
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3-protobuf \
<some other packages to be installed>
...
When I run 'docker build -t myimagename .', I get the message:
E: Unable to locate package python3-protobuf
There are many packages that I am installing but this is the only one that is creating a problem for me.
I know the package name is correct because when I run apt search for it in a terminal, it is found. Additionally, in the Dockerfile I do the recommended update and install steps, so it should be found. Any ideas why it is not?
@banuj answered this question.
The package python3-protobuf became available in Ubuntu 18.04 and onward. The base image I took uses Ubuntu 16.04.
There are two ways to solve this:
Use a base image based on Ubuntu 18.04 (or later)
Use pip to install the package.
I ended up using option two.
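A minimal sketch of option two, assuming pip is available in the base image (the protobuf version is left unpinned here):
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip \
    && pip3 install protobuf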
