Docker build command fails: yum not able to install the requirements - docker

My docker build command is failing to create an image from my Dockerfile: yum is not able to install the requirements.
(The error was attached as a screenshot, not reproduced here.)

Check if you can access the site from the host machine.
Check your Docker networking; for a Docker VM it is usually a bridge network by default.
Check whether you need to add the repository to yum. A quick way to run the first two checks is sketched below.
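A minimal sketch of those connectivity checks, assuming a CentOS base image and using mirror.centos.org as a stand-in for the repository host (substitute your actual base image and repository URL):
# 1. From the host: can the repository be reached at all?
curl -fsI https://mirror.centos.org
# 2. From a throwaway container on the default bridge network:
docker run --rm centos:7 curl -fsI https://mirror.centos.org
If the first command succeeds and the second fails, the problem is the container's networking (or DNS) rather than the repository itself.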

Related

Docker desktop install in container fails on /etc/hosts .. what to do?

I am trying to install docker engine inside a container.
wget https://desktop.docker.com/linux/main/amd64/docker-desktop-4.16.2-amd64.deb
apt-get install -y ./docker-desktop-4.16.2-amd64.deb
Everything goes fine until the post-install phase, where it tries to update the /etc/hosts file for Kubernetes. Here it fails:
/var/lib/dpkg/info/docker-desktop.postinst: line 42: /etc/hosts: Read-only file system
This is expected behaviour for docker build: it does not allow modifying the /etc/hosts of the build container.
Is there a way to solve this? Install docker desktop without doing this step? Or any other way?
I solved this issue by adding this parameter to the build:
--add-host kubernetes.docker.internal:127.0.0.1
Example:
docker build --add-host kubernetes.docker.internal:127.0.0.1 -t stsjava2 .
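For completeness: docker run accepts the same flag, so if the mapping is also needed at run time (an assumption; the failing postinst step here only runs at build time), the equivalent would be:
docker run --add-host kubernetes.docker.internal:127.0.0.1 stsjava2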
When the Docker Desktop installation fails with an error related to /etc/hosts, it is usually due to a conflict with the host system's configuration. Here are some steps you can try to resolve the issue:
Check the permissions of the /etc/hosts file on your host system to ensure that it is accessible to Docker.
Try starting the Docker container with elevated privileges (e.g., using sudo) to see if that resolves the issue.
If the above steps do not resolve the issue, try modifying the Docker container's network configuration to use a different network driver that does not conflict with the host system's /etc/hosts file.
You can also try running the Docker container in a different environment (e.g., a virtual machine) that does not have the same conflicts with the host system.
If all else fails, try reinstalling Docker or using a different version of Docker to see if that resolves the issue.

How to change owner for docker socket in container (macOS, Intel Chip)

I have a fresh install of Docker Desktop on my machine and I'm attempting to create a dev environment using docker.io/library/wordpress:latest.
However, I'm having some issues with user permissions. From what I can see, the documentation doesn't mention this issue for Mac users, but does mention something for Ubuntu users (see the docs). The specific issue is as follows:
// Docker error msg...
chown: invalid group: 'root:docker'
WARNING: Could not change owner for docker socket in container : exit code 1
Docker socket permission set to allow in container docker
// My setup...
macOS Big Sur 11.6.5 (Intel chip)
Docker Desktop 4.8.2
VSCode Version: 1.67.1
git version 2.36.1
My Question: How do I resolve this issue? I.e. What steps do I need to take?
Any guidance would be greatly appreciated... 😅
Note: I can see other questions floating around here on Stack Overflow, but from what I can see they mostly cover Ubuntu users or are quite old questions and answers.
Note: Added screenshots to demonstrate what I was doing when the error occurred.
(Screenshots: Step 1, Step 2, and Step 3, where the error occurs.)
Most docker images in public registries are not designed to be compatible with Docker Dev Environments (Beta).
According to the documentation (https://docs.docker.com/desktop/dev-environments/specify/), the specified docker image is required to have a docker group and a vscode user who is already a member of that group.
So we need to modify the official docker image for your case to work with Docker Dev Environments and Visual Studio Code.
# Dockerfile
FROM docker.io/library/wordpress:latest

# the next two commands are required based on documentation
# https://docs.docker.com/desktop/dev-environments/specify/
RUN useradd -s /bin/bash -m vscode \
    && groupadd docker \
    && usermod -aG docker vscode

USER vscode
Build our new docker image by running docker build, e.g.:
docker build -t bickyeric/wordpress:dev-latest -f Dockerfile .
After that, you can update the image tag from Step 2 of the question to our new image bickyeric/wordpress:dev-latest.
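As an optional sanity check (the image name is taken from the build above), running id in the new image should show the vscode user as a member of the docker group:
docker run --rm bickyeric/wordpress:dev-latest id
# expected output includes uid=...(vscode) and groups=...(docker)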

Docker build error: "could not connect to server" (behind proxy)

Context:
OS: Windows 10 Pro; Docker version: 18.09.0 (build 4d60db4); behind a corporate proxy, using CNTLM to work around it. (Currently, pulling and running images works fine.)
Problem:
I was trying to build the following Dockerfile:
FROM alpine:3.5
RUN apk add --update \
    python3
RUN pip3 install bottle
EXPOSE 8000
COPY main.py /main.py
CMD python3 /main.py
This is what I got:
Sending build context to Docker daemon 11.26kB
Step 1/6 : FROM alpine:3.5
---> dc496f71dbb5
Step 2/6 : RUN apk add --update python3
---> Running in 7f5099b20192
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/main: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.c51f8f92.tar.gz: No such file or directory
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.5/community: could not connect to server (check repositories file)
WARNING: Ignoring APKINDEX.d09172fd.tar.gz: No such file or directory
ERROR: unsatisfiable constraints:
python3 (missing):
required by: world[python3]
The command '/bin/sh -c apk add --update python3' returned a non-zero code: 1
I was able to access the URL from a browser, so there is no problem with the server itself.
I suspected that it had something to do with the proxy not being propagated to the container, as explained in this question, since I also did not get the http_proxy line when running docker run alpine env. However, after entering the proxies into the config file, it finally appeared. Yet the problem persisted.
I also tried to change the DNS as instructed here, but the problem is still unsolved.
I finally managed to solve this problem, and the culprit was my setting in the CNTLM.
For a background story, please check this post.
The root cause of this problem is that the docker container could not access the internet from inside the VM, due to a wrong IP setting in CNTLM.ini.
By default, CNTLM listens on 127.0.0.1:3128 to forward the proxy. I followed the default, and thus the proxy setting on Docker (for the daemon, through the GUI; for the container, through config.json) was also set to that address and port. It turns out that this "localhost" does not apply to the VM where docker sits, since the VM has its own localhost. Long story short, the solution is to change that address to the DockerNAT IP address (10.0.75.1:3128) in all of the following locations:
CNTLM.ini (on the Listen line; if CNTLM is used for other purposes as well, more than one Listen line can be supplied, as sketched after this list)
Docker daemon's proxy (through the Docker setting GUI)
Docker container config.json (usually in C:\Users\<username>\.docker), by adding the following lines:
"proxies":
{
"default":
{
"httpProxy": "http://10.0.75.1:3128",
"httpsProxy": "http://10.0.75.1:3128",
"noProxy": <your no_proxy>
}
}
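A minimal sketch of the corresponding CNTLM.ini lines (addresses taken from this answer; adjust to your own DockerNAT address):
# CNTLM.ini
Listen 127.0.0.1:3128
Listen 10.0.75.1:3128
After restarting the CNTLM service, docker run --rm alpine env should show the proxy variables pointing at 10.0.75.1:3128.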
Also check these related posts:
Building a docker image for a node.js app fails behind proxy
Docker client ignores HTTP_PROXY envar and build args
Beginner having trouble with docker behind company proxy
You can try to build your Dockerfile with the following command:
docker build --build-arg http_proxy=http://your.proxy:8080 --build-arg https_proxy=http://your.proxy:8080 -t yourimage .
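An alternative sketch, baking the proxy into the Dockerfile itself via ENV (the proxy URL is assumed, as above). Note the trade-off: ENV values persist in the image layers, while the predefined http_proxy/https_proxy build args do not:
FROM alpine:3.5
ENV http_proxy=http://your.proxy:8080 \
    https_proxy=http://your.proxy:8080
RUN apk add --update python3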

How to access parent host service while building docker image?

While building a docker image, I would like to access a service hosted on the parent host. For instance, suppose I need to access an npm private repository that's running on the host machine at xpto:8080. On xpto I'm also building a new image whose Dockerfile calls
RUN npm set registry http://xpto:8080
RUN npm install
When I try to docker build -t=my_image . I always get
failed, reason: connect EHOSTUNREACH 192.168.2.103:4873
Also tried RUN wget xpto:8080 and got
failed: No route to host.
I tried to use the --add-host parameter but it didn't work out.
The strange part is that when I try to access the parent host service from another container, it runs OK, but I had to add the --net="host" parameter, like this:
docker run -it --rm --net="host" my-test-image sh
wget xpto:8080
The thing is that this --net parameter isn't supported by docker build!
Thanks,
For some unknown reason, the CentOS firewall was only blocking the connection while building, not while running.
The solution was to add an exception to the firewall, and the problem was solved.
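A sketch of one common way to add such an exception with firewalld on CentOS (the interface name is an assumption; build-time traffic typically originates from the docker0 bridge):
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload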

docker: When creating a machine, VT-X/AMD is already enabled

I'm going through this tutorial
Dockerizing Flask With Compose and Machine - From Localhost to the Cloud
When trying to create a VirtualBox machine with the command below
docker-machine create -d virtualbox dev;
I have the following error
Error creating machine : Error in driver during machine creation. This computer doesn't have VT-X/AMD enabled. Enabling it in the BIOS is mandatory
(Addendum: I'm running an Ubuntu image on VirtualBox. The physical host is a Windows machine. VT-X/AMD-V is enabled both in the BIOS and in VirtualBox.)
Reading here and there, it seems to be normal behavior because I'm trying to create a VirtualBox VM within a VirtualBox VM -> Click here for the explanation
What command should I use instead of docker-machine ?
Any insights are more than welcomed ...
Update: I've asked 3 additional questions to @VonC after his initial answer. Please find the questions below, in italics.
1) How should I make the virtualbox and the docker config see that new "virtualbox"?
2) Will the ubuntu box, be able to do the docker-compose and build the container on that host?
3) If I'm pulling an image like debian, how can I use it as a machine and build a container on top of it?
If you do not want to change the BIOS settings, please run the command below.
I had the same problem because I have Hyper-V Manager installed on my Windows 8 server. To avoid this issue, I ran the command with the option below:
--virtualbox-no-vtx-check
Example: docker-machine create default --virtualbox-no-vtx-check
I'm in a VM already, running Ubuntu. The physical host is a Windows machine.
Then you don't need docker-machine.
You would create a small Linux VM from Windows with (again, typed in a regular Windows CMD shell)
docker-machine create -d virtualbox dev
But on a full-fledged Ubuntu VM, you just need to install docker and run it directly.
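A minimal sketch of that direct installation on Ubuntu (the docker.io package from the Ubuntu repositories is assumed; Docker's own repository also works):
sudo apt-get update && sudo apt-get install -y docker.io
sudo docker run hello-world
The hello-world run simply verifies that the daemon is up and can pull and start containers.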
If you need to use docker-machine, just copy (on Windows) v0.6.0-rc1/docker-machine_windows-amd64.exe as docker-machine.exe anywhere you want.
Also: set VBOX_MSI_INSTALL_PATH=C:\Program Files\Oracle\VirtualBox\ (if your VirtualBox is installed there)
You can now use docker-machine create -d virtualbox dev.
2) Will the ubuntu box, be able to do the docker-compose and build the container on that host?
Yes, no issue there. The installation is straightforward.
3) If I'm pulling an image like debian, how can I use it as a machine and build a container on top of it?
You simply write a Dockerfile starting with FROM debian:jessie (see an example here), then add some commands (RUN, COPY, ...). For instance:
FROM debian:stable
RUN apt-get update && apt-get install -y --force-yes apache2
EXPOSE 80 443
VOLUME ["/var/www", "/var/log/apache2", "/etc/apache2"]
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
Build it (docker build .) and run it (docker run).
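For instance (the tag and port mapping are assumptions for illustration):
docker build -t my-apache .
docker run -d -p 8080:80 my-apache
Apache should then answer on http://localhost:8080 of the host.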
