CentOS -- Yum Update Error – “HTTP Error 403 - Forbidden” - docker

I’ve recently spun up a Docker container running CentOS Linux version 7. In my office, we have a proxy server, so once the container was up, I consoled in and set the proxy manually:
[me@8adfa83bb9e2 /home/me]#
[me@8adfa83bb9e2 /home/me]# export http_proxy="http://10.10.10.101:8888"
[me@8adfa83bb9e2 /home/me]#
In a separate SO post, I learned about setting the proxy in the /etc/yum.conf file. So I added the following line to my /etc/yum.conf:
proxy=http://10.10.10.101:8888
And then I did a “yum clean metadata”:
[me@8adfa83bb9e2 /home/me]# yum clean metadata
Loaded plugins: fastestmirror, ovl
Cleaning repos: base extras updates
0 metadata files removed
0 sqlite files removed
0 metadata files removed
[me@8adfa83bb9e2 /home/me]#
At this point, I figured I was home free. I did a “yum update”:
[me@8adfa83bb9e2 /home/me]#
[me@8adfa83bb9e2 /home/me]# yum update
Loaded plugins: fastestmirror, ovl
Loading mirror speeds from cached hostfile
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container error was
14: HTTP Error 403 - Forbidden
...and then a lot more stuff here...
Hmm. “HTTP Error 403”. That’s a new one for me; I’m used to running “yum update” and it just automagically works.
This isn’t a DNS problem; the Docker container can resolve and ping mirrorlist.centos.org. I tried to use wget to pull down that URL, but the container doesn’t have wget installed. When I try the same thing from the host machine:
me@hostmachine:/home/me$
me@hostmachine:/home/me$
me@hostmachine:/home/me$ sudo wget http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
[1] 7039
[2] 7040
[3] 7041
[2] Done arch=x86_64
me@hostmachine:/home/me$
Redirecting output to ‘wget-log’.
[1]- Exit 8 sudo wget http://mirrorlist.centos.org/?release=7
[3]+ Done repo=os
me@hostmachine:/home/me$
me@hostmachine:/home/me$
me@hostmachine:/home/me$ ls -l
total 4
-rw-r--r-- 1 root root 382 Jan 21 19:55 wget-log
me@hostmachine:/home/me$
me@hostmachine:/home/me$
me@hostmachine:/home/me$ more wget-log
--2021-01-21 19:55:31-- http://mirrorlist.centos.org/?release=7
Resolving mirrorlist.centos.org (mirrorlist.centos.org)... 147.75.69.225, 18.225.36.18, 67.219.148.138, ...
Connecting to mirrorlist.centos.org (mirrorlist.centos.org)|147.75.69.225|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2021-01-21 19:55:31 ERROR 503: Service Unavailable.
me@hostmachine:/home/me$
me@hostmachine:/home/me$
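(One wrinkle in the transcript above: the unquoted & characters made the shell split the command into background jobs, so wget only ever requested http://mirrorlist.centos.org/?release=7. Quoting the URL sends the full query string:)

sudo wget "http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container"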
(Yes, the host machine has the correct proxy settings. It is not a CentOS machine.)
Soooooooo… It looks like the mirrorlist service is “unavailable” from my host system too. But I’ve run “yum update” on many, many other CentOS machines in my environment. No idea what might be different here. Has anyone seen this before? Thank you.

FYI for anyone who may be looking at this post... The issue was a proxy server problem. Once I set the proxy settings on both the host and the container, the issue went away. I think in the post above I'd set the proxy on the container but not the host, and the host was NAT'ing the container's IP address during the "yum update".
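For completeness, the working combination looks roughly like this (a sketch reusing the proxy address from the question):

# on the host shell:
export http_proxy="http://10.10.10.101:8888"
export https_proxy="http://10.10.10.101:8888"

# inside the container, in /etc/yum.conf:
proxy=http://10.10.10.101:8888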

Related

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
    possibePaths := []string{
        "/proc/config.gz",
        "/boot/config-" + k.kernelRelease,
        "/usr/src/linux-" + k.kernelRelease + "/.config",
        "/usr/src/linux/.config",
    }
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel check.
I note that when I run docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same thing on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
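A quick way to see which of those candidate paths actually exist on a given host (a bash sketch mirroring the Go list above):

for p in /proc/config.gz \
         "/boot/config-$(uname -r)" \
         "/usr/src/linux-$(uname -r)/.config" \
         /usr/src/linux/.config; do
    # print each candidate kernel-config path and whether it exists
    [ -e "$p" ] && echo "found:   $p" || echo "missing: $p"
done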
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container, then deploy that image to k8s either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
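For reference, the quick start boils down to roughly this (a sketch; assumes kind and kubectl are already installed):

kind create cluster                        # boots a single-node k8s cluster inside a docker container
kubectl cluster-info --context kind-kind   # point kubectl at the new cluster
kind delete cluster                        # tear the cluster down again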
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty-looking FATAL error, e.g. "[util/etcd] Attempt timed out", but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else puzzled like I was.

Creating new docker-machine instance always fails validating certs using openstack driver

Every time I try to create a new instance via docker-machine on OpenStack, I get this error validating the certs. I end up having to regenerate the certs right after creating the instance to be able to use it.
$ docker-machine create --driver openstack --openstack-ssh-user root --openstack-keypair-name "KeyName" --openstack-private-key-file ~/.ssh/id_rsa --openstack-flavor-id 50 --openstack-image-name "Ubuntu-16.04" manager1
Running pre-create checks...
Creating machine...
(staging-worker1) Creating machine...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Error creating machine: Error checking the host: Error checking and/or regenerating the certs: There was an error validating certificates for host "xxx.xxx.xxx.xxx:2376": dial tcp xxx.xxx.xxx.xxx:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
$ docker-machine regenerate-certs manager1
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Then it seems to work:
$ docker-machine ssh manager1 pwd
/home/ubuntu
But when I try to do env:
$ docker-machine env manager1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "xxx.xxx.xxx.xx:2376": dial tcp xxx.xxx.xxx.xx:2376: i/o timeout
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
Any ideas on what might be causing this?
I've documented it further on GitHub: https://github.com/docker/machine/issues/3829
It turns out my hosting service locked down everything other than 22, 80, and 443 in the OpenStack security group rules. I had to add a TCP ingress rule for 2376 for docker-machine's commands to work.
That helps explain why docker-machine ssh worked but docker-machine env did not.
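Something like this opens the port (a sketch; assumes the python-openstackclient CLI and that your instances use a security group named "default"):

openstack security group rule create \
    --proto tcp --dst-port 2376 --ingress \
    default    # allow inbound connections to the Docker TLS port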
On Ubuntu you will need to SSH to your machine and cd into the following directory:
cd /etc/systemd/system/docker.service.d/
list all files in it with:
ls -l
you will probably have something like this:
-rw-r--r-- 1 root root 274 Jul 2 17:47 10-machine.conf
-rw-r--r-- 1 root root 101 Jul 2 17:46 override.conf
you will need to delete all files except 10-machine.conf with sudo rm. After that, remove the existing machine which is failing with:
docker-machine rm machine1
and try to create it one more time like this:
docker-machine create -d generic --generic-ip-address ip --generic-ssh-key ~/.ssh/key --generic-ssh-user username --generic-ssh-port 22 machine1
please change ip, key, username and machine1 to your actual values. It should work now. I hope this helps.
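Condensed, the cleanup described above looks like this (a sketch; assumes the filenames from the listing above, run on the remote machine over SSH):

cd /etc/systemd/system/docker.service.d/
sudo rm override.conf           # keep only 10-machine.conf
sudo systemctl daemon-reload    # make systemd forget the removed drop-in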

Docker error: HTTP 408 response body: invalid character '<' looking for beginning of value

When I go docker pull hello-world I get the below error message:
Error response from daemon: error parsing HTTP 408 response body: invalid character '<' looking for beginning of value: "<html><body><h1>408 Request Time-out</h1>\nYour browser didn't send a complete request in time.\n</body></html>\n\n"
Installed Docker version:
Client:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:47:50 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.2
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   b9f10c9
 Built:        Wed Jun 1 21:47:50 2016
 OS/Arch:      linux/amd64
Installed using: curl -fsSL https://get.docker.com/ | sh
I have ensured that the network is up and that I can reach other sites. Please help.
Update 1: The issue cannot be the MTU setting, because I could pull images from Docker Hub a few days ago on the same machine.
The issue cannot be HTTP_PROXY either, because I am on my home network.
I have run across this issue a couple times with Raspberry Pi boards running various flavors of Debian/Raspbian (RPi model info was obtained by cat /proc/cpuinfo | grep Model):
Raspberry Pi Model B Rev 1 with Raspbian based on Debian 11 (bullseye)
Raspberry Pi 4 Model B Rev 1.4 with Debian 10 (buster)
In both cases, running docker run --rm hello-world resulted in the 408 HTTP status code reported in the original question in this thread:
$ docker run --rm hello-world
Unable to find image 'hello-world:latest' locally
docker: Error response from daemon: error parsing
HTTP 408 response body: invalid character '<'
looking for beginning of value: "<html><body>
<h1>408 Request Time-out</h1>\nYour browser
didn't send a complete request in time.\n</body>
</html>\n".
See 'docker run --help'.
The solution (noted as an aside by @Romaan) was to adjust the MTU. I did this as follows:
sudo ip link set dev eth0 mtu 1400
docker run --rm hello-world
and the hello-world container was successfully pulled and executed.
Examples of how to permanently adjust the MTU for a network interface on Debian may be found here.
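For example, with classic ifupdown networking (a sketch; assumes eth0 with DHCP, and that your system is not managed by dhcpcd or systemd-networkd instead):

# /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
    # pin the MTU every time the interface comes up
    post-up /sbin/ip link set dev eth0 mtu 1400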
That error message looks like it's coming from a proxy server. From the docker pull documentation
Proxy configuration
If you are behind an HTTP proxy server, for example in corporate
settings, you may need to configure the Docker daemon’s proxy
settings before it can open a connection to the registry, using the
HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables. To set
these environment variables on a host using systemd, refer to
"Control and configure Docker with systemd" for variable configuration.
The link to the instructions for configuring systemd with a proxy is straightforward.
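In practice that configuration is a small systemd drop-in plus a daemon restart (a sketch; substitute your own proxy address for proxy.example.com):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"

Then reload and restart the daemon:

sudo systemctl daemon-reload
sudo systemctl restart docker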
The error message is a little misleading. The problem was not that there was an invalid character; the network was misconfigured. I had one LAN interface and one WLAN interface.
The LAN interface connected to router A, which forwarded requests to router B, which was connected to the internet, while the WLAN interface was directly connected to router B. I forgot to remove the WLAN configuration.
Once I ensured the WLAN interface was removed, things worked smoothly.
In short: ensure DNS resolution works and that the MTU is set right.
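A quick sanity check for both (a sketch; assumes eth0 is the active interface):

ip link show eth0                   # the mtu value appears on the first output line
getent hosts registry-1.docker.io   # verify DNS can resolve Docker Hub's registry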
Another possible reason for the error:
If you are using a Mac, please ensure you allow Unrestricted Access to Web Content.
Another possible step in troubleshooting:
Ensure there is no proxy or web filter in your network; that is, if possible, connect via your 3G network and try again to see if the results are different.
I ran into this problem on Ubuntu. I managed to solve it by disconnecting from NordVPN:
$ nordvpn disconnect
You are disconnected from NordVPN.
It seems the VPN somehow slowed down the Docker Hub traffic and broke my docker pulls.
There is a high chance that this is caused by an internet connectivity issue; try rerunning when the internet connection is stable.

docker-machine create with digitalocean driver and Ubuntu 16.04 x64 fails

I am attempting to create a docker machine on Digital Ocean, but with the 16.04 LTS instead of the default 15.10. The do-access-token file contains my token.
Here's the script (create-do):
#!/usr/bin/env bash
# Creates a digital-ocean server with Ubuntu 16.04 instead of the default
if [ "$1" != "" ]; then
    echo "Creating: " $1
    docker-machine \
        create \
        --driver digitalocean \
        --digitalocean-access-token=`cat do-access-token` \
        --digitalocean-image=ubuntu-16-04-x64 \
        --digitalocean-ipv6=true \
        $1
else
    echo "Must have server name!"
fi
When I run the script like this:
$ ./create-do ps-server
It successfully allocates the machine at Digital Ocean, then craps out with this:
Creating: ps-server
Running pre-create checks...
Creating machine...
(ps-server) Creating SSH key...
(ps-server) Creating Digital Ocean droplet...
(ps-server) Waiting for IP address to be assigned to the Droplet...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Error creating machine: Error running provisioning: Something went wrong
running an SSH command!
command : sudo apt-get update
err : exit status 100
output : Reading package lists...
E: Could not get lock /var/lib/apt/lists/lock - open (11: Resource temporarily unavailable)
E: Unable to lock directory /var/lib/apt/lists/
The machine is running, but I can't get to it since the SSH key was apparently not set before things started going wrong.
Anyone seen this before and/or have a work-around?
Update: May 21, 2016
Broken again with same error this morning. Tried 4 times, failed same way each time.
Update: May 20, 2016
This was, according to Digital Ocean's support, due to an issue with their Ubuntu 16.04 image which has now been corrected, and I have confirmed that this now works.
Related GitHub issue (not yet closed):
https://github.com/docker/machine/issues/3358
This worked for me:
docker-machine provision your-node
I've taken this solution from here: https://github.com/docker/machine/issues/3358
I hope this helps!

Docker on RHEL 6 Cgroup mounting failing

I'm trying to get my head around something that's been working on CentOS + Vagrant, but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart hands me this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts okay enough, but images cannot run; a mounting-failed error is shown when I try. The startup log also gives a warning or two. Regarding the kernel warning, CentOS gives the same one and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Anyone had any success overcoming this problem or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) check cgconfig status
# /etc/init.d/cgconfig status
if it stopped, restart it
# /etc/init.d/cgconfig restart
check cgconfig is running
(2) check cgconfig is on
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
if cgconfig is off, turn it on
(3) if it still does not work, maybe some cgroup modules are missing. Run make menuconfig, add the missing cgroup modules to the kernel config, then recompile and reboot.
After that, it should be OK.
I ended up asking the same question at Google Groups and, in the end, found a solution with some help. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same; later there was a problem with network connectivity for the containers. It took too much time to solve, and I had to give up.
So I spent the whole day trying to get docker to work on my VPS. I was running into this same error. Basically, what it came down to was the fact that OpenVZ didn't support docker containers until a couple of months ago. Specifically, this RHEL update:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to /cgroup/cpu_and_mem: Device or resource busy
/usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and remounted the cgroups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it:
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf |tail|grep "="|awk '{print "mount -t cgroup -o",$1,$1,$NF}'>cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options or comment out memory = /cgroup/memory; from cgconfig.conf.
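To see which case applies (a sketch; the first grep prints the flag only if the kernel was booted with it):

grep -o 'cgroup_disable=memory' /proc/cmdline    # if present, remove it from the boot options
grep 'memory' /etc/cgconfig.conf                 # or comment out the memory mount line instead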
The cgconfig service startup uses mount and umount, which requires an extra privilege bump from docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image.
Tested on CentOS 6 and CentOS 6.5.
