I'm using Rancher for my PoC. As part of my stack I'm using Harbor as both Helm chart registry and container registry. I'm following this tutorial about how to configure a self-signed certificate in Rancher for Harbor.
I'm interested in how to use cert-manager to manage the self-signed certificate in any cluster in Rancher. Currently cert-manager is running in the Rancher cluster (because Rancher is using a self-signed certificate too). Do I have to install cert-manager in every cluster that needs an integration with Harbor? Since the certificate is currently not configured, I always get the error x509: certificate signed by unknown authority.
In the context of your tutorial, check the page "Updating a Private CA Certificate".
It includes a section "Reconfigure Rancher agents to trust the private CA":
For each cluster under Rancher management (except the local Rancher management cluster), run the following command using the kubeconfig file of the Rancher management cluster (RKE or K3s).
kubectl patch clusters.management.cattle.io <REPLACE_WITH_CLUSTERID> \
-p '{"status":{"agentImage":"dummy"}}' --type merge
This command will cause all Agent Kubernetes resources to be reconfigured with the checksum of the new certificate.
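If you're unsure which cluster ID to substitute, you can list the cluster resources first; a minimal sketch, run against the Rancher management cluster (the c-xxxxx ID shown is a placeholder, and the exact columns may differ):
kubectl get clusters.management.cattle.io
# NAME      AGE
# c-xxxxx   42d
The NAME column holds the <REPLACE_WITH_CLUSTERID> value for the patch command above.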
I am running a secure private Docker registry on a machine (server) within my local network. My Kubernetes and Helm clients are running on another machine (host).
I am using a Certificate Authority (CA) on the server machine in order to verify HTTPS pull requests from the host. That is why I am copying the registry's CA certificate to the host, as mentioned here:
Instruct every Docker daemon to trust that certificate. For Linux:
Copy the domain.crt file to
/etc/docker/certs.d/myregistrydomain.com:5000/ca.crt on every Docker
host. You do not need to restart Docker
This solution works like a charm, but only for a few nodes.
From a Kubernetes or Helm perspective, is there a way to generate a secret, or to state a username/password inside the charts, so that when 1000 hosts exist there is no need to copy the CA to each and every host?
Server and hosts run CentOS 7.
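For the username/password half of the question, a registry pull secret can be created once per cluster and referenced from the charts; a minimal sketch (regcred and the credentials are placeholders, the registry address reuses the one above). Note that a pull secret only covers authentication, not TLS trust, so the CA itself still has to be trusted by each node's Docker daemon:
kubectl create secret docker-registry regcred \
  --docker-server=myregistrydomain.com:5000 \
  --docker-username=myuser \
  --docker-password=mypassword
# reference it from the chart's pod spec:
#   imagePullSecrets:
#     - name: regcred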
I'm using CircleCI 2.0 and I have a private Docker registry with a self-signed certificate. I'm able to configure my local Docker as documented here; the problem is with CircleCI:
I'm using remote Docker, so when I try to log in to the Docker registry it fails with Error response from daemon: Get https://docker-registry.mycompany.com/v2/: x509: certificate signed by unknown authority.
Is there a way to install the certificate in a remote Docker environment? I don't have access to the Docker host's shell. I don't want to use the machine executor type.
It's not possible. It can only be accomplished with CircleCI's Enterprise-level offering.
I would like to use Kubernetes for my GitLab runner.
I have a GitLab instance on server 1 and Kubernetes on server 2 (with the GitLab runner).
I installed Kubernetes with kubeadm and the Flannel network pod.
When I launch the build, I can connect to Kubernetes.
But the job is not running.
I have this error:
Post https://<master_ip>:<master_port>/api/v1/namespaces/gitlab/pods: x509: certificate signed by unknown authority
So I know it is an SSL issue.
Do I have to create SSL certificates? If so, how, and with which arguments?
Thanks for the help
Have you tried making a service account on Kubernetes and generating a bearer token? Not sure if GitLab works with tokens. If not, you'll need the following:
The following options are provided, which allow you to connect to the Kubernetes API:
host: Optional Kubernetes apiserver host URL (auto-discovery attempted if not specified)
cert_file: Optional Kubernetes apiserver user auth certificate
key_file: Optional Kubernetes apiserver user auth private key
ca_file: Optional Kubernetes apiserver ca certificate
So in short, you'll have to generate a key/cert combo signed by the CA that your Kubernetes apiserver uses. The content of your combo can be the following (this is a raw JSON example config that I use for cfssl):
{"CN":"worker","hosts":[],"key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"OV","OU":"Devops"}]}
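If GitLab does accept a bearer token, a minimal sketch of creating a service account and extracting the token plus the ca_file (all names are placeholders; kubectl create token needs Kubernetes 1.24+, older versions auto-create a token secret instead):
kubectl create namespace gitlab
kubectl create serviceaccount gitlab-runner -n gitlab
kubectl create rolebinding gitlab-runner-edit \
  --clusterrole=edit \
  --serviceaccount=gitlab:gitlab-runner \
  -n gitlab
kubectl create token gitlab-runner -n gitlab          # bearer token
kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d > ca.crt                                # ca_file for the runner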
I have a website that I'm running on a DigitalOcean droplet, which I want to continuously deploy via Docker and a TeamCity build server that I have running on my home server. I want to enable HTTPS on my Docker repo, with a self-signed certificate and without a domain name.
Let's say my home's ip address is 10.10.10.10 and the docker repo is running on port 5000.
I followed the steps here; however, Docker on my website complained that it cannot connect to the Docker repo on my home server because the certificate doesn't specify an IP in the SAN extension.
Okay. So I created a new certificate without the CN field and with only an IP in the SAN, and now my cert config on my website looks like this:
/etc/docker/certs.d/10.10.10.10:5000/ca.crt
I also added the cert to my general certs (Ubuntu 16.04 btw)
Then I try to pull the image from my home server to my website...
docker pull 10.10.10.10:5000/personal_site:latest
However, I'm getting this error.
Error response from daemon: Get https://10.10.10.10:5000/v1/_ping: x509:
certificate signed by unknown authority (possibly because of "x509:
invalid signature: parent certificate cannot sign this kind of
certificate" while trying to verify candidate authority certificate "serial:xxx")
I thought by adding my cert to the /etc/docker/... it would accept a self-signed cert. Anyone have any advice here?
You can't use a self-signed certificate for this; it needs to be a CA certificate. Follow the same steps required to create a certificate for a Docker host and store your CA in /etc/docker/certs.d/.... Or you can define 10.10.10.10 as an insecure registry as part of the Docker daemon startup (dockerd --insecure-registry 10.10.10.10:5000 ...) and Docker will ignore any certificate issues.
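A minimal sketch of that flow with openssl (subject fields and file names are placeholders, the IP is from the question; -addext needs OpenSSL 1.1.1+):
# 1. create a CA key and self-signed CA certificate
openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 365 \
  -subj "/CN=home-registry-ca" -keyout ca.key -out ca.crt
# 2. create the registry key and a CSR carrying the IP SAN
openssl req -newkey rsa:4096 -nodes -sha256 \
  -subj "/CN=10.10.10.10" -addext "subjectAltName=IP:10.10.10.10" \
  -keyout registry.key -out registry.csr
# 3. sign the CSR with the CA, re-stating the SAN
openssl x509 -req -in registry.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 \
  -extfile <(printf "subjectAltName=IP:10.10.10.10") -out registry.crt
# 4. serve registry.crt/registry.key from the registry, trust the CA on the puller
cp ca.crt /etc/docker/certs.d/10.10.10.10:5000/ca.crt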
I just did the same thing with these instructions: create a private repo with a password, without a domain or SSL. That will require you to add the certificate on the client and the domain in the hosts file (if you'd like to have a domain yourself without registering a new one).
Docker version 1.2.0, build 2a2f26c/1.2.0,
Docker Registry 0.8.1
I set up a Docker private registry on CentOS 7 and created my custom SSL cert. When I try to access my Docker registry over HTTPS, I get x509: certificate signed by unknown authority. I found a solution for this by placing the cert file under "/etc/pki/tls/certs" and then running
"update-ca-trust"
"service docker restart"
Now it has started to read my certificate. I can log in, pull, and push to my Docker private registry at
"https://localdockerregistry".
Now, when I try to read from the online Docker registry (https://index.docker.io/v1/search?q=centos) with
"docker search centos"
I get
"Error response from daemon: Get https://index.docker.io/v1/search?q=centos: x509: certificate signed by unknown authority"
I exported the docker.io cert from the Firefox browser and put it under "/etc/pki/tls/certs", then ran "update-ca-trust" and "service docker restart", but I get the same error. It looks like the Docker client can't decide which cert to use for which repository.
Any ideas how we can fix "x509: certificate signed by unknown authority" for the online Docker registry while using our own Docker private registry?
The correct place to put the certificate is on the machine running your docker daemon (not the client) in this location: /etc/docker/certs.d/my.registry.com:5000/ca.crt where my.registry.com:5000 is the address of your private registry and :5000 is the port where your registry is reachable. If the path /etc/docker/certs.d/ does not exist, you should create it -- that is where the Docker daemon will look by default.
This way you can have a private certificate per private registry and not affect the public registry.
This is per the docs on http://docs.docker.com/reference/api/registry_api/
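A minimal sketch of that layout, reusing the registry address from the answer:
mkdir -p /etc/docker/certs.d/my.registry.com:5000
cp ca.crt /etc/docker/certs.d/my.registry.com:5000/ca.crt
# per-registry trust only; pulls from index.docker.io keep using
# the system trust store, so "docker search centos" is unaffected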
I had the problem with a Docker registry running in a container behind an Nginx proxy with a StartSSL certificate.
In that case you have to append the intermediate CA certs to the Nginx SSL certificate, see https://stackoverflow.com/a/25006442/1130611
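A minimal sketch of that concatenation (file names are placeholders; the intermediate cert comes from your CA, StartSSL in this case):
cat registry.example.com.crt intermediate.ca.pem > registry-bundle.crt
# nginx then serves the bundle:
#   ssl_certificate     /etc/nginx/ssl/registry-bundle.crt;
#   ssl_certificate_key /etc/nginx/ssl/registry.example.com.key;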