Yum update fails - CentOS 7 - docker build - docker

I have frequently built Docker containers using CentOS 7 as the base image, but now I am getting an error when I run:
RUN yum update add \
bash \
&& rm -rfv /var/cache/apk/*
ERROR:
Loaded plugins: fastestmirror, ovl
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
Contact the upstream for the repository and get them to fix the problem.
Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
`subscription-manager repos --disable=<repoid>`
Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64 Could not retrieve
mirrorlist
http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
error was 14: curl#6 - "Could not resolve host: mirrorlist.centos.org;
Name or service not known" The command '/bin/sh -c yum update add
bash && rm -rfv /var/cache/apk/*' returned a non-zero code: 1
I also saw a few resolutions suggesting "dhclient", but this error happens when I run docker-compose build.

I ran into this problem attempting to run the same Dockerfile, which fetched several software packages using yum, on two different platforms: one macOS, the other an Ubuntu 16.04-based Linux OS (elementaryOS Loki), both using the official packages from docker.com.
My theory is that the Linux package is just more restrictive out of the box, security-wise, than the macOS one. Maybe this is configurable with some kind of /etc/something config file, but I don't have the expertise with Docker to say for sure. EDIT: See my comment below.
What I can say is there was no additional configuration required for me on macOS (10.11 El Capitan); just docker build . worked fine, and yum processes from the Dockerfile were able to reach all the remote repositories.
In the Ubuntu-derived Linux distro, however, it was necessary to use
docker build --network host .
followed by
docker run -it --network host <image> <command>
when I wanted to run a process inside that image which required internet access.
This may be the case for other Debian-derived systems as well.
There are, of course, security considerations which need to be taken into account when allowing a long-running Docker container to communicate through the host network adapter, unrestricted, and one would do well to review the appropriate documentation in that regard.
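If the build runs through docker-compose rather than docker build, the same host-network setting can be declared in the compose file. A minimal sketch, assuming Compose file format 3.4 or newer (the service name app is a placeholder):
version: "3.4"
services:
  app:
    build:
      context: .
      network: host   # equivalent of docker build --network host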

My assumption is that, for some reason, network behavior in Docker varies based on the distribution.
Try to use:
docker run -d --net mybridge centos
or
docker network create -d bridge mybridge
docker run -d --net mybridge centos
It should start working. Alternatively, edit /etc/hosts and add the mirror address:
Name: mirrorlist.centos.org
Address: 67.219.148.138
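Editing /etc/hosts inside a container doesn't survive rebuilds; Docker can also inject such an entry with the --add-host flag. A minimal sketch (the IP above may well be stale, so resolve the hostname yourself first):
# inject the host entry for the duration of the build
docker build --add-host mirrorlist.centos.org:67.219.148.138 -t myimage .
# or when running a container
docker run --rm -it --add-host mirrorlist.centos.org:67.219.148.138 myimage bash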

The root cause of the issue was that the container's proxy settings were wrong. I corrected the proxy settings at the location below and it worked:
/root/.docker/config.json
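For reference, proxy settings in that file live under a proxies key. A minimal sketch (the proxy URL is a placeholder):
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}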

Related

WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory

I configured Deis Workflow in an AWS EKS cluster. After that, I created Deis apps and deployed them to the Deis local repository with:
git push test test:master
When deploying, the Dockerfile is executed. Here is my Dockerfile:
FROM mhart/alpine-node:12
#FROM ubuntu:18.04
ARG SOURCE_VERSION=na
ENV SOURCE_VERSION=$SOURCE_VERSION
RUN apk add --no-cache -X http://dl-cdn.alpinelinux.org/alpine/v3.9 --update bash && rm -rf /var/cache/apk/*
#apt-get update &&\
#apt-get install -y make gcc wget
WORKDIR /app
ADD . .
RUN npm install
EXPOSE 3200
CMD ["node", "app.js"]
This results in errors like:
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/main: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/main: No such file or directory
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.9/community: temporary error (try again later)
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.9/community: No such file or directory
ERROR: unable to select packages:
bash (no such package):
required by: world[bash]
The command '/bin/sh -c apk add --update bash && rm -rf /var/cache/apk/*' returned a non-zero code: 1
remote: 2021-11-15 13:30:22.569253 I | Error running git receive hook [Build pod exited with code 1, stopping build]
To ssh://deis-builder.app-test.paceup.io:2222/pu-api-gateway.git
! [remote rejected] test -> master (pre-receive hook declined)
error: failed to push some refs to 'ssh://git@deis-builder.app-test.paceup.io:2222/pu-api-gateway.git'
I am totally new to Docker, Deis, and EKS. If anyone can help, I would be grateful.
Finally found the answer: we had configured the node group on Amazon Linux, which didn't support this deployment. We changed the node group to EKS-optimized Ubuntu, deployed the app using Docker, and it is working fine.
Edit:
This works on some Linux versions. In my case it works on EKS version 1.9 but not on EKS version 2.0 and above.
This error may also come from a DNS issue. While building the Docker image, pass the DNS flag and point it at Google's DNS server 8.8.8.8, or edit resolv.conf and add nameserver 8.8.8.8 in the container.
I hope this may help
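Note that docker build itself has no --dns flag; build containers take their DNS from the Docker daemon. A minimal sketch of the daemon-level fix, assuming a standard Linux install:
# add to /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
# then restart the daemon so builds pick up the new resolver
sudo systemctl restart docker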
I had this problem when my machine had many symptoms of a network configuration problem:
A Dockerfile that had to download zip files from the net could not do so anymore and threw the warning in question, which stopped the build. I could download the zip files by entering the URLs in a browser instead, so it was a problem of the container. I checked the same Dockerfile on another, healthy machine and the build ran through.
I had lost the connection to the internal DNS server. I could not ping another machine by its name anymore, but had to use its internal IP, although the day before the ping had worked.
I could see any GCP project items only in Firefox incognito mode.
The answer so far: switch machines and test whether it fails only on your machine. If so, the workaround is already done. As the next step, try to fix any other network problems; it is likely that this will also get rid of the warning.
UPDATE: The problem was a running container that gave my machine its own network. When I ran docker-compose down, the network worked again. When I removed the network from the docker-compose file, the download from inside the container worked again, and the warning in question was gone.
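To spot such a leftover network, the standard commands are enough; a minimal sketch:
# list networks; compose projects show up prefixed with the project name
docker network ls
# tear down the compose project together with the networks it created
docker-compose down
# remove any remaining unused networks
docker network prune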

How do I add an additional command line tool to an already existing Docker/Singularity image?

I work in neuroscience, and I use a cloud platform called Brainlife to upload and download data (linked here, but I don't think knowledge of Brainlife is relevant to this question). I use Brainlife's command line interface to upload and download data on my university's server. In order to use their CLI, I run Singularity with a Docker image created by Brainlife (found here). I run this using the following code:
singularity shell docker://brainlife/cli -B
I also have the file saved on my server account, and can run it like this:
singularity shell brainlifeimage.sif -B
After running one of those commands, I am able to download and upload data, usually successfully. Currently I'm following Brainlife's tutorial to bulk download data. The tutorial uses the command line tool "jq" (link), which isn't on their docker image. I tried installing it within the Singularity shell like this:
apt-get install jq
And it returned:
Reading package lists... Done
Building dependency tree
Reading state information... Done
W: Not using locking for read only lock file /var/lib/dpkg/lock
E: Unable to locate package jq
Is there an easy way to add this one tool to the image? I've been reading over the Singularity and Docker documentation, but Docker is all new to me and I'm really lost.
If relevant, my university server runs Ubuntu 16.04.7 LTS, and I am using the terminal on a Mac laptop running macOS 11.3. This is my first Stack Overflow question - please let me know if I can provide any additional info! Thanks so much.
The short, specific answer: jq is portable, so you can just mount it into the image and use it normally. e.g.,
singularity shell -B /path/to/jq:/usr/bin/jq brainlifeimage.sif
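If jq isn't on the host yet, a static build can be dropped anywhere and bound in. A minimal sketch, assuming the jq 1.6 release binary (adjust path and version as needed):
# fetch a static jq build onto the host
wget -O ~/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x ~/jq
# bind it into the container at a standard location
singularity shell -B ~/jq:/usr/bin/jq brainlifeimage.sif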
The short, general answer: you can't modify the read only image and need to build a new one.
Long answer with several options and specific examples:
Since singularity images are read only, they cannot have persistent changes made to them. This is great for reproducibility, but a bit inconvenient if your tools are likely to change often. You can rebuild the image in several ways, though all will require sudo permissions.
1. Write a new Singularity definition based on the docker image
Create a new definition file (generally called Singularity or something.def), use the current container as a base and add the desired software in the %post section. Then build the new image with: sudo singularity build brainy_jq.sif Singularity
The definition file docs are quite good and highly recommended.
Bootstrap: docker
From: brainlife/cli:latest
%post
apt-get update && apt-get install -y jq
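After the build, a quick sanity check that jq made it into the new image (image name taken from the build command above):
singularity exec brainy_jq.sif jq --version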
2. Create a sandbox of the current singularity image, make your changes, and convert back to a read-only image. See the singularity docs on writable sandbox directories and converting images between formats.
# use --sandbox to create a writable singularity image
sudo singularity build --sandbox writable_brain/ brainlifeimage.sif
# --writable must still be used to make changes, and sudo for correct permissions
sudo singularity exec writable_brain/ bash -c 'apt-get update && apt-get install -y jq'
# convert back to read-only image for normal usage
sudo singularity build brainlifeimage_jq.sif writable_brain/
3. Modify the source docker image locally and build from that. One of the more... creative options. Almost sudo-free, except singularity pull doesn't accept docker-daemon, so a sudo singularity build is necessary.
# add jq to a new docker container. the value for --name doesn't matter, but we use it
# in later steps. The entrypoint needs to be overridden in this case as well.
docker run -it --name brainlife-jq --entrypoint=/bin/bash \
brainlife/cli:1.5.25 -c 'apt-get update && apt-get install -y jq'
# use docker commit to create an image from the container so it can be reused
# note that we're using the name of the image set in the previous step
# the output of docker commit is the hash for the newly created image, so we grab that
IMAGE_ID=$(docker commit brainlife-jq)
# tag the newly created image with a more useful name
docker tag $IMAGE_ID brainlife/cli:1.5.25-jq
# here we use docker-daemon instead of docker to build from a locally cached docker image
# instead of looking at docker hub
sudo singularity build brainlife_jq.sif docker-daemon://brainlife/cli:1.5.25-jq
# now check that it all worked as planned
singularity exec brainlife_jq.sif which jq
# /usr/bin/jq
ref: docker commit, using locally cached docker images

Starting ssh service through ENTRYPOINT not working

I'm having a lot of difficulty running a Linux container with the SSH service on it. To skip the details: SSH is not optional, I must have it.
I installed openssh-server with:
RUN echo "**** Setting up openssh-server ****" && \
    apt-get install -y openssh-server && \
    sed -i "s|# PasswordAuthentication yes|PasswordAuthentication yes|g" /etc/ssh/sshd_config && \
    mkdir /var/run/sshd
And I am trying to start the service with:
ENTRYPOINT service ssh restart && bash
However, it does not work. I tried multiple ways to get it started: using CMD, making a script that would start the service, and none of it is working. What's worse is that this seems to have worked for others (pull access denied repository does not exist or may require docker login).
The image that I am using as a base is ubuntu:18.04. I then switched to jre/systemd-ubuntu:18.04, as I thought the lack of systemd could prevent the service from running, but that did not work either. Any suggestions what the possible issue could be?
I managed to get my service to run. As a first piece of advice, I recommend making sure that the service runs by itself before putting it together with other services. In my case, it seems the SSH service was not being started because a previous non-returning service was started first, which kept the shell occupied and would not let it continue its ENTRYPOINT execution to start SSH.
One other thing I had done previously, which could have been part of the solution, is that I manually created the folder /var/run/sshd. It seems some SSH service versions need that to exist, otherwise they won't run. At this point I can't verify whether that was the only issue, as I tried multiple solutions at once.
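A pattern that avoids the blocking-service problem is a small entrypoint script that starts sshd in the background and then hands control to the main process. A minimal sketch (the script name entrypoint.sh and the final CMD are placeholders):
#!/bin/bash
# entrypoint.sh: start the ssh service, then hand off so CMD stays in the foreground
service ssh start
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]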

docker login fails on a server with no X11 installed

I am trying to deploy a docker configuration with images on a private docker registry.
Now, every time I execute docker login registry.example.com, I get the following error message:
error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
The only solution I found for non-MacOS users was to run export $(dbus-launch) first, but that did not change anything.
I am running Ubuntu Server and tried with both the Ubuntu Docker package and the Docker-CE package.
How can I log in without an X11 session?
It looks like this is because it defaults to using the secretservice executable, which seems to have some sort of X11 dependency for some reason. If you install and configure pass, docker will use that instead, which seems to solve the problem.
In a nutshell (from https://github.com/docker/compose/issues/6023)
sudo apt install gnupg2 pass
gpg2 --full-generate-key
This generates a gpg2 key for you. After that's done, you can list it with
gpg2 -k
Copy the key id (from the line labelled [uid]) and do
pass init "whatever key id you have"
Now docker login should work.
There are a couple of bugs logged on launchpad regarding this:
https://bugs.launchpad.net/ubuntu/+source/golang-github-docker-docker-credential-helpers/+bug/1794307
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
This works: sudo apt remove golang-docker-credential-helpers
You can remove the offending package golang-docker-credential-helpers without removing all of docker-compose.
The following worked for me on a server without X11 installed:
dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
and then
echo 'foo' | docker login mydockerrepo.com -u dockeruser --password-stdin
Source:
bug reported in debian:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=910823#39
bug reported on ubuntu:
https://bugs.launchpad.net/ubuntu/+source/docker-compose/+bug/1796119
secretservice requires a GUI. You can use pass without a GUI.
Unfortunately, Docker's documentation on how to configure Docker Credential Helpers is quite lacking. Here's a comprehensive guide to configuring pass with Docker (tested with Ubuntu 18.04):
1. Install the Docker Credential Helper for pass
Find the url for the latest version of docker-credential-pass from https://github.com/docker/docker-credential-helpers/releases . For example:
# substitute with the latest version
url=https://github.com/docker/docker-credential-helpers/releases/download/v0.6.2/docker-credential-pass-v0.6.2-amd64.tar.gz
# download and untar the binary
wget $url
tar -xzvf $(basename $url)
# move the binary to a dir in your $PATH
sudo mv docker-credential-pass /usr/local/bin
# verify it works
docker-credential-pass list
2. Install and configure pass
apt install pass
# create a gpg2 key
gpg2 --gen-key
# if you have issues with lack of entropy, "apt install haveged" and try again
# create the password store using the gpg user id above
pass init $gpg_id
3. docker login
docker login
# You should not see any credentials stored in "auths" section.
# "credsStore": "pass" should have been automatically added.
# If the value is "secretservice", replace it with "pass".
cat ~/.docker/config.json
# verify credentials stored in `pass` store now
pass
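For reference, after a successful login ~/.docker/config.json should look roughly like this; a minimal sketch (registry URL and surrounding fields will vary):
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "credsStore": "pass"
}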
There is a much easier answer than the ones already posted, which I found in a comment on https://github.com/docker/docker-credential-helpers/issues/105.
The solution is to rename docker-credential-secretservice out of the way, e.g.:
mv /usr/bin/docker-credential-secretservice /usr/bin/docker-credential-secretservice.broken
Once you do this, docker login works regardless of whether or not docker-compose is installed. No other package additions or removals are necessary.
I resolved this issue by uninstalling the docker-compose that was installed from the Ubuntu repo and installing docker-compose per the official instructions at https://docs.docker.com/compose/install/#install-compose
What helped me on Ubuntu 18.04 was:
Following the steps in @oberstet's post and uninstalling the golang helper
Performing a login after the helper uninstall
Reinstalling docker via sudo apt-get install docker
Logging back in via sudo docker login

Docker build error There are no enabled repos

On a CentOS 7.1 Docker host, I am building a docker image with a Dockerfile containing the command:
RUN yum -y install deltarpm yum-utils --disablerepo=*-eus-* --disablerepo=*-htb-* --disablerepo=*-ha-* --disablerepo=*-rt-* --disablerepo=*-lb-* --disablerepo=*-rs-* --disablerepo=*-sap-*
During the run of the docker build command (docker build -t <image>), I get the error:
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
There are no enabled repos.
Run "yum repolist all" to see the repos you have.
You can enable repos with yum-config-manager --enable <repo>
How can I fix this? Do I need to enable the yum repo inside docker as well?
(Note that I can install these packages on the Docker host.)
Using yum (the Yellowdog Updater, Modified) in your Dockerfile has nothing to do with your host CentOS.
It has to do with your base image used by your Dockerfile (FROM xxx).
The error message that matters is:
There are no enabled repos.
You can see a manual resolution in "RHEL 7 - Solution to "There are no enabled repos" message"
If you simply want to play around and install software without the need for an up-to-date Red Hat subscription, you can mount your downloaded Red Hat ISO image and make it your default local repository, which lets you install software.
To enable your local repository, and thus overcome the "There are no enabled repos" error, first mount your RHEL7 ISO image:
[root@rhel7 ~]# mkdir /media/rhel7-repo-iso
[root@rhel7 ~]# mount /dev/cdrom /media/rhel7-repo-iso/
mount: /dev/sr0 is write-protected, mounting read-only
That is not supported by a Dockerfile/docker image though.
You are better off using a base image which does not require any subscription model. For example:
FROM fedora
RUN yum update -y
RUN yum install -y httpd
Again, this has nothing to do with your host.
The OP mentions following the Red Hat Enterprise Linux Atomic Host 7 Getting Started Guide.
That guide clearly includes:
To enable software updates, you must register your Red Hat Enterprise Linux Atomic Host installation.
This is done with the subscription-manager command as described below.
If your system is located on a network that requires the use of an HTTP proxy, please see the Red Hat Knowledge Base Article on configuring subscription manager to use an HTTP proxy. The --name= option may be included if you wish to provide an easy to remember name to be used when reviewing subscription records.
$ sudo subscription-manager register --username=<username> --auto-attach
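For example, including the optional --name= flag mentioned above (all values are placeholders):
$ sudo subscription-manager register --username=<username> --name=my-atomic-host --auto-attach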
