Kubectl apply returns "certificate has expired or is not yet valid" - docker

I'm using Docker and Kubernetes for deployment. When I try to deploy the project.yaml file by running kubectl apply -f project.yaml, I get the error Unable to connect to the server: x509: certificate has expired or is not yet valid.
I found here https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ that this is a clock issue, but I don't know how to sync the local clock with the server's. I tried manually setting the local clock to UK time (where the server is), but it doesn't work.
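Note that certificate validation compares against absolute time (effectively UTC), so matching the server's timezone does nothing if the clock itself has drifted; the clock needs to be synchronized. A minimal sketch of re-enabling NTP synchronization on a systemd-based Linux host (assuming timedatectl is available; on Docker Desktop, restarting Docker often resyncs the VM's drifted clock):

$ timedatectl status                 # check the "System clock synchronized" line
$ sudo timedatectl set-ntp true      # enable NTP sync via systemd-timesyncd
$ timedatectl status                 # verify the clock is now synchronized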

Related

Error on etcd health check while setting up RKE cluster

I'm trying to set up an RKE cluster. The connection to the nodes goes well, but when it starts to check etcd health it returns:
failed to check etcd health: failed to get /health for host [xx.xxx.x.xxx]: Get "https://xx.xxx.x.xxx:2379/health": remote error: tls: bad certificate
If you are trying to upgrade RKE and facing this issue, it could be because the kube_config_<file>.yml file is missing from the local directory when you perform rke up.
A similar issue was reported and reproduced in this git link. Try the workaround and reproduce it using the steps provided in the link, and let me know if this works.
Refer to this recent SO post and doc for more information.
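For illustration, the workaround generally comes down to running rke up from the directory that still contains the original cluster state and kubeconfig files (the file names below assume the default cluster.yml naming; adjust to your setup):

$ ls
cluster.yml  cluster.rkestate  kube_config_cluster.yml
$ rke up --config cluster.yml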

How to get serverless framework to use CA Cert

I'm on a corporate network. Said network requires a CA certificate for all encrypted transmissions.
I make this work with npm via npm config set cafile /path/to/cerrname.pem
When I attempt to run serverless (or sls) commands of any kind, I get
Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (node:_tls_wrap:1530:34)
at TLSSocket.emit (node:events:394:28)
at TLSSocket.emit (node:domain:475:12)
at TLSSocket._finishInit (node:_tls_wrap:944:8)
at TLSWrap.ssl.onhandshakedone (node:_tls_wrap:725:12)
This "unable to get local issuer certificate" is the exact same error I get if I don't set the CA file in npm.
How can I set the CA file in serverless framework?
I have looked at this answer (Serverless Framework Login From Behind a Proxy?), which feels close, but after running the command in the accepted answer and then trying serverless again, I get the same unable to get local issuer certificate error.
I believe it's possible to address this by setting NODE_EXTRA_CA_CERTS; at least some users have been successful with that approach in the past: https://github.com/serverless/serverless/issues/9548#issuecomment-857882498
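A minimal sketch of that approach, reusing the same PEM file already configured for npm (the path is the placeholder from the question):

$ export NODE_EXTRA_CA_CERTS=/path/to/cerrname.pem
$ serverless deploy

Node.js appends the certificates in that file to its built-in trust store, so the corporate CA is accepted without disabling TLS verification.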

OpenShift 4 error: Error reading manifest

During OpenShift installation from a local mirror registry, after I started the bootstrap machine I see the following error in the journal log:
release-image-download.sh[1270]:
Error: error pulling image "quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129":
unable to pull quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: unable to pull image:
Error initializing source docker://quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129:
(Mirrors also failed: [my registry:5000/ocp4/openshift4@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: Error reading manifest
sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129 in my registry:5000/ocp4/openshift4: manifest unknown: manifest unknown]):
quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: error pinging docker registry quay.io:
Get "https://quay.io/v2/": dial tcp 50.16.140.223:443: i/o timeout
Does anyone have any idea what it could be?
The answer is here in the error:
... dial tcp 50.16.140.223:443: i/o timeout
Try this on the command line:
$ podman pull quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
You'll need to be authenticated to actually download the content (this is what the pull secret does). However, if you can't even get an "unauthenticated" error, that points more solidly to a network configuration issue.
That IP resolves to a quay host (you can verify that with "curl -k https://50.16.140.223"). Perhaps you have an internet filter or firewall in place that's blocking egress?
Resolutions:
fix your network issue, if you have one
look at doing a disconnected/airgap install -- https://docs.openshift.com/container-platform/4.7/installing/installing-mirroring-installation-images.html has more details on that
(If you're already doing an airgap install and it's your local mirror that's failing, then the problem is your local mirror itself.)
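As a rough way to narrow this down, you could probe both endpoints from the bootstrap host (the mirror hostname below is the placeholder from the error output, written without the space; credentials will differ):

$ curl -k https://quay.io/v2/
# any TLS response, even a 401, means egress to quay.io works

$ curl -k -u <user>:<pass> https://myregistry:5000/v2/ocp4/openshift4/manifests/sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
# "manifest unknown" here means the release image was never mirrored into the local registry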

How to solve certificate issues in Puppet

I have installed Docker on my Ubuntu 14.04 OS. In Docker containers I'm running a Puppet master and a Puppet agent, but I'm getting errors during the certificate exchange.
The Puppet agent is not requesting certificates, and it shows an error saying the name cannot be resolved.
I checked the IP and hostname in /etc/hosts and /etc/hostname.
root@55fe460464d3:/# puppet agent --test
Error: Could not request certificate: getaddrinfo: Name or service not known
Exiting; failed to retrieve certificate and waitforcert is disabled
root@f7d7516d720e:/# puppet cert list --all
+ "f7d7516d720e" (SHA256) D1:6C:50:5B:BD:F6:AA:91:C4:B2:FD:4D:58:B8:DF:18:32:F4:EB:D7:B2:75:FF:E4:AF:7B:F6:F6:FE:0D:84:54
The puppet cert list --all command shows only the master certificate, not the client certificate.
It looks like the Puppet agent can't talk to or find the puppetmaster to ask for a certificate.
The first thing to check is that they can talk to each other over the network; the second is that the short hostname puppet resolves to the puppetmaster when run on the agent host. Unless you've specified a different DNS name in /etc/puppet/puppet.conf with a server = directive in the [main] section, or on the command line with puppet agent -t --server <foo>, the agent will look for a hostname called puppet and rely on your /etc/resolv.conf's search domains to find it.
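A minimal sketch of both options, using puppetmaster.example.com as a hypothetical resolvable name for the master container (substitute your own, or add an /etc/hosts entry pointing at the master container's IP):

$ puppet agent --test --server puppetmaster.example.com

# or make it permanent in /etc/puppet/puppet.conf:
[main]
server = puppetmaster.example.com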

Docker cannot access registry from OpenShift

Here is my whole scenario.
I have a RHEL 7.1 VMware image with the corporate proxy properly configured; accessing things over HTTP or HTTPS works properly.
I installed docker-engine and added the HTTP_PROXY setting to /etc/systemd/system/docker.service.d/http-proxy.conf. I can verify that the proxy setting is picked up by executing:
sudo systemctl show docker --property Environment
which will print:
Environment=HTTP_PROXY=http://proxy.mycompany.com:myport/ (with real values, of course).
Pulling and running docker images works correctly this way.
The goal is to work with the binary distribution of openshift-origin. I downloaded the binaries, and started setting up things as per the walkthrough page on github:
https://github.com/openshift/origin/blob/master/examples/sample-app/README.md
Starting OpenShift seems to work, as I can:
* login via the openshift cli
* create a new project
* even access the web console
But when I try to create an app in the project (also via the CLI):
oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-hello-world.git
It fails:
error: can't look up Docker image "centos/ruby-22-centos7": Internal error occurred: Get https://registry-1.docker.io/v2/: dial tcp 52.71.246.213:443: connection refused
I can access this endpoint (without authentication, though) via the browser on the VM or via wget.
Hence I believe Docker fails to pick up the proxy settings. After some searching I also suspect there may be iptables settings missing, referring to:
https://docs.docker.com/v1.7/articles/networking/
But I don't know if I should fiddle with the iptables settings; shouldn't Docker figure that out itself?
Check your HTTPS_PROXY environment property. The request that fails (https://registry-1.docker.io/v2/) goes over HTTPS, so Docker consults HTTPS_PROXY rather than HTTP_PROXY for it; setting only HTTP_PROXY in the systemd drop-in is not enough.
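A minimal sketch of the systemd drop-in with both variables set (the proxy host and port are the placeholders from the question; the daemon reload and restart are required for the change to take effect):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:myport/"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:myport/"

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker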
