Docker cannot run on Centos VM - docker

Docker cannot start after running yum update on CentOS. The error message is:
error response from daemon : failed to initialize logging driver: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see.aws.config.CredentialsChainVerboseError.

The problem was solved by adding the AWS credential variables to the file:
/usr/lib/systemd/system/docker.service
Environment="AWS_ACCESS_KEY_ID=XXX"
Environment="AWS_SECRET_ACCESS_KEY=XXX"
Environment="AWS_DEFAULT_REGION=XXX"
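Added example (not part of the original post): the packaged unit file under /usr/lib is normally world-readable and is replaced on package updates, so keys placed there are both exposed and easily lost. The shell function below moves the AWS_* lines into a root-only EnvironmentFile referenced from a drop-in; the names /etc/docker/aws.env and aws.conf, and the optional root-prefix argument, are assumptions of this example.

```shell
#!/bin/sh
# Added example: move inline AWS_* credentials from docker.service into a
# root-only EnvironmentFile referenced by a systemd drop-in.
# File names under /etc and the root-prefix argument are assumptions.

migrate_docker_aws_env() {
    root="${1:-}"   # empty = live system; set it to rehearse on a scratch tree
    unit="$root/usr/lib/systemd/system/docker.service"
    dropin_dir="$root/etc/systemd/system/docker.service.d"
    env_file="$root/etc/docker/aws.env"

    # Nothing to do if the unit carries no inline AWS_* variables.
    grep -q '^Environment="AWS_' "$unit" || return 1

    mkdir -p "$dropin_dir" "$root/etc/docker"
    # Create the secrets file with mode 0600 before anything is written to it.
    ( umask 077 && : > "$env_file" ) || return 2
    chmod 600 "$env_file"
    # Environment="KEY=VALUE"  ->  KEY=VALUE (the EnvironmentFile syntax)
    sed -n 's/^Environment="\(AWS_[A-Z_]*=.*\)"$/\1/p' "$unit" >> "$env_file"

    # A drop-in under /etc survives package updates that replace /usr/lib units.
    printf '[Service]\nEnvironmentFile=%s\n' "${env_file#"$root"}" \
        > "$dropin_dir/aws.conf"

    # Remove the inline secrets from the packaged, world-readable unit.
    sed -i '/^Environment="AWS_/d' "$unit"
}

# Constructed sample tree; the key values are placeholders, not real credentials.
make_sample_tree() {
    tree=$(mktemp -d) || return 1
    mkdir -p "$tree/usr/lib/systemd/system"
    cat > "$tree/usr/lib/systemd/system/docker.service" <<'EOF'
[Service]
ExecStart=/usr/bin/dockerd
Environment="AWS_ACCESS_KEY_ID=EXAMPLEKEYID"
Environment="AWS_SECRET_ACCESS_KEY=EXAMPLESECRET"
Environment="AWS_DEFAULT_REGION=xx-example-1"
EOF
    echo "$tree"
}

demo_tree=$(make_sample_tree)
migrate_docker_aws_env "$demo_tree" && cat "$demo_tree/etc/systemd/system/docker.service.d/aws.conf"
```

On a live host the function would be run as root with no argument, followed by systemctl daemon-reload and systemctl restart docker.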

Related

minikube start did not start minikube when the command was kept in AWS EC2 location /etc/rc.d/rc.local

I want
minikube start
to run in /etc/rc.d/rc.local, as this script executes every time the EC2 instance starts.
It fails to start minikube when kept in rc.local, but when I execute it as a non-root user, it works.
Any help to make it work from the rc.local script is appreciated.
Update:
I've added minikube start --force --driver=docker
This time, it says:
E0913 18:12:21.898974
10063 status.go:258]
status error: NewSession:
new client:
new client: ssh: handshake failed:
ssh: unable to authenticate, attempted methods [none publickey],
no supported methods remain.
Failed to list containers for "kube-apiserver":
docker: NewSession: new client: new client:
ssh: handshake failed: ssh: unable to authenticate,
attempted methods [none publickey],
no supported methods remain StackOverflow
etc etc

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I sometimes see errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
possibePaths := []string{
"/proc/config.gz",
"/boot/config-" + k.kernelRelease,
"/usr/src/linux-" + k.kernelRelease + "/.config",
"/usr/src/linux/.config",
}
So I am a bit confused about what the simplest way is to run k8s inside a container such that it consistently gets past this step of getting the kernel info.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container then deploy that image to k8s either in the cloud or locally using minikube.
Another solution is to run it under kind which uses docker driver instead of VirtualBox
https://kind.sigs.k8s.io/docs/user/quick-start/
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that was root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty looking FATAL error e.g. .... "[util/etcd] Attempt timed out"]} but I now think the root cause is that the etcd part times out sometimes.
Adding this answer in case someone else is puzzled like I was.

Running docker-machine as a snap vs standard installation

My question is: is there any notable difference between running Docker-Machine as a snap vs. built from source? I'm having a networking issue and I suspect it may be related to the type of installation.
This is what I get when I try to point the active host to Docker:
executing: docker-machine env instance
output:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
after executing: docker-machine -D regenerate-certs instance
output:
SSH cmd err, output: fork/exec /usr/bin/ssh: permission denied:
Error getting ssh command 'exit 0' : ssh command error:
command : exit 0
err : fork/exec /usr/bin/ssh: permission denied
Changing permissions for the mentioned directory didn't help.

Composer fails within Docker 'Failed to enable crypto'

I've been battling an issue with a corporate proxy when trying to run docker-compose up -d nginx mysql
I'm attempting to run the Laradock container on OSX but keep running into errors when composer attempts to install dependencies. I've updated my docker settings to notify it about my corporate proxy:
Before adding the proxy information, I was receiving this error:
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:
error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
Since updating the proxy details, I am now receiving this error:
Step 27/183 : RUN if [ ${COMPOSER_GLOBAL_INSTALL} = true ]; then composer global install ;fi
---> Running in a7699d4ecebd
Changed current directory to /home/laradock/.composer
Loading composer repositories with package information
[Composer\Downloader\TransportException]
The "https://packagist.org/packages.json" file could not be downloaded: SSL: Success
Failed to enable crypto
failed to open stream: operation failed
I'm an experienced dev, but new to Docker. I think that the error is being caused because PHP is running inside the docker container but for some reason does not have access to my local certificates?

Docker Compose up Throws SSL error using Docker Machine

I connected my VirtualBox VM to my docker machine. When I do docker-compose up from the docker machine, I get the following error.
ERROR: SSL error: HTTPSConnectionPool(host='192.168.4.20', port=2376): Max retries exceeded with url: /v1.22/info (Caused by SSLError(CertificateError("hostname '192.168.4.20' doesn't match 'localhost'",),))
I know I'm a bit late to the party, but I just had this. Apparently, Docker Compose is not using the correct TLS version. You can fix this by having the following environment variable:
COMPOSE_TLS_VERSION=TLSv1_2
Here's the original link: https://stackify.com/docker-environment-variables/
I had the same issue; I could resolve it by renewing the certificate.
$ docker-compose up -d
ERROR: SSL error: HTTPSConnectionPool(host='192.168.99.100', port=2376):
Max retries exceeded with url: /v1.30/networks/docker_default
(Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
$ docker network ls
error during connect: Get https://192.168.99.100:2376/v1.40/networks: x509:
certificate has expired or is not yet valid
To fix:
$ docker-machine regenerate-certs --client-certs
$ docker-compose up -d
Starting couchdb-dev ... done
Starting consul-dev ... done
Starting postgres-dev ... done
Starting zipkin-dev ... done
Starting rabbitmq-dev ... done
Starting oracle-dev ... done
Starting cassandra-dev ... done
Works!
PS: I got this error after changing the hour on the computer's clock.
