I'm setting up a private Docker registry on CentOS using "docker-distribution", and I'm configuring the registry through its "config.yml". The file looks as follows:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  tls:
    certificate: /certs/certificate.crt
    key: /certs/key.key
auth:
  htpasswd:
    realm: somerealm
    path: /auth/registry.password
Everything works well so far, but I would like to use two different certificates: one for local traffic and one for remote traffic over the internet. The problem is that I don't know how to specify multiple certificate/key files in the config. I already tried using a wildcard like "/certs/*.crt" and adding a second entry "certificate: /certs/certificate_2.crt", but neither worked.
I could not find any documentation or posts about this. Does anyone have an idea how I could achieve this?
We have a lambda with a POST event. Before deploying, we need to test the whole flow locally, so we're using serverless-offline. Our main app runs inside docker-compose, and we are trying to call this lambda from the main app. We're getting this error: Error: socket hang up
I first thought it could be a Docker configuration issue in the Dockerfile or docker-compose.yml, but we tested with an Express app using the same Dockerfile as the lambda, and I can hit that endpoint from the main app. So now we know it's not a Docker issue but rather the serverless.yml.
service: csv-report-generator
custom:
  serverless-offline:
    useDocker: true
    dockerHost: host.docker.internal
    httpPort: 9000
    lambdaPort: 9000
provider:
  name: aws
  runtime: nodejs14.x
  stage: local
functions:
  payments:
    handler: index.handler
    events:
      - http:
          path: /my-endpoint
          method: post
          cors:
            origin: '*'
plugins:
  - serverless-offline
  - serverless-dotenv-plugin
Here is our current configuration; we've been trying different configurations without success. Any idea how to hit the lambda?
I use a compose file with two services (a Python app and Traefik); in the Dockerfile I load all the environment variables.
For Traefik I use a YML file to define the services. In that YML file I have a certificatesResolvers node, which looks like this:
certificatesResolvers:
  letsencrypt:
    acme:
      email: "email@domain.com"
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
I want to set the email from an environment variable, so the YML file should look like this:
certificatesResolvers:
  letsencrypt:
    acme:
      email: '{{env "USER_EMAIL"}}'
      storage: /etc/traefik/acme/acme.json
      httpChallenge:
        entryPoint: web
With the YML this way, I get this in the logs:
level=info msg="Starting provider *acme.Provider {\"email\":\"{{env \\\"USER_EMAIL\\\"}}\",\"caServer\":\"https://acme-v02.api.letsencrypt.org/directory\",\"storage\":\"/etc/traefik/acme/acme.json\",\"keyType\":\"RSA4096\",\"httpChallenge\":{\"entryPoint\":\"web\"},\"ResolverName\":\"letsencrypt\",\"store\":{},\"ChallengeStore\":{}}"
level=error msg="Unable to obtain ACME certificate for domains \"domain.com\": cannot get ACME client acme: error: 400 :: POST :: https://acme-v02.api.letsencrypt.org/acme/new-acct :: urn:ietf:params:acme:error:invalidEmail :: Error creating new account :: \"{{env \\\"USER_EMAIL\\\"}}\" is not a valid e-mail address, url: " providerName=letsencrypt.acme routerName=web-secure-router#file rule="Host(`domain.com`)"
I tried with:
email: '{{env "USER_EMAIL"}}'
email: '`{{env "USER_EMAIL"}}`'
email: "{{env 'USER_EMAIL'}}"
email: "{{env USER_EMAIL}}"
But none of those worked.
In the same YML file I have a node that looks like this:
http:
  routers:
    web-secure-router:
      rule: 'Host(`{{env "PROJECT_HOSTNAME"}}`)'
      entryPoints:
        - web-secure
      service: fastapi
      tls:
        certResolver: letsencrypt
In that section, I get the right value of the PROJECT_HOSTNAME variable (in this case domain.com), as you can see in the logs above.
This may not be the solution, but it is a different way of doing things: instead of using the Traefik YML file, you can use commands in the docker-compose YML, for example:
https://github.com/nasatome/docker-network-utils/blob/389324b6795d07684dac9bfe7dc5315bcd7eef7c/reverse-proxy/traefik/docker-compose.yml
Another thing to try would be to use:
${USER_EMAIL}
instead of
{{env "USER_EMAIL"}}
To clarify why you cannot use your own user-defined environment variable for certificatesResolvers: it is part of the static configuration, whereas http is part of the dynamic configuration (where you can use your own variables, like PROJECT_HOSTNAME).
You can still use Traefik's own environment variables to set the email for your certificate resolver, via the variable TRAEFIK_CERTIFICATESRESOLVERS_<NAME>_ACME_EMAIL.
I haven't tested this myself, but I think the following should do the trick:
services:
  traefik:
    environment:
      TRAEFIK_CERTIFICATESRESOLVERS_LETSENCRYPT_ACME_EMAIL: ${USER_EMAIL}
I'm trying to enable the file provider for registering dynamic configuration, but I get the error:
Cannot start the provider *file.Provider: error adding file watcher: no space left on device
Traefik uses fsnotify to add new watchers, and fsnotify takes its limit from a Linux kernel setting: /proc/sys/fs/inotify/max_user_watches
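For reference, the current value can be read directly from that file:

```shell
# Print the current inotify watch limit (Linux only)
cat /proc/sys/fs/inotify/max_user_watches
```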
I tried to change the setting inside the docker container with sudo:
sudo sysctl -w fs.inotify.max_user_watches=12288
but I'm getting an error:
sysctl: error setting key 'fs.inotify.max_user_watches': Read-only file system
Traefik configuration:
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
providers:
  docker: {}
  file:
    directory: '/config'
    watch: true
api:
  dashboard: true
certificatesResolvers:
  le:
    acme:
      email: myemail@mail.com
      storage: acme.json
      httpChallenge:
        entryPoint: web
Traefik version: 2.2.1
When I run Traefik on another machine or on my Mac, or when I set watch to false, it works like a charm, but I need to watch for file changes.
Please tell me how I can change this setting with sudo in an Alpine container, or how to solve the issue another way.
Well, I tried to change max_user_watches inside the docker container. That was the wrong idea: I needed to change max_user_watches on the Linux machine where I run the docker containers.
After running the command:
sudo sysctl -w fs.inotify.max_user_watches=12288
it worked fine.
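To keep the higher limit across reboots, the setting can also be persisted on the host. This is a sketch assuming the standard /etc/sysctl.conf location (a drop-in file under /etc/sysctl.d/ works just as well):

```shell
# Append the setting so it survives reboots on the Docker host
echo 'fs.inotify.max_user_watches=12288' | sudo tee -a /etc/sysctl.conf
# Reload settings from /etc/sysctl.conf immediately
sudo sysctl -p
```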
I'm trying to generate an SSL certificate with the certbot/certbot docker container in Kubernetes. I am using a Job controller for this purpose, which looks like the most suitable option. When I run it with the standalone option, I get the following error:
Failed authorization procedure. staging.ishankhare.com (http-01):
urn:ietf:params:acme:error:connection :: The server could not connect
to the client to verify the domain :: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
I've made sure this isn't due to misconfigured DNS entries by running a simple nginx container, and it resolves properly. Following is my Job file:
apiVersion: batch/v1
kind: Job
metadata:
  #labels:
  #  app: certbot-generator
  name: certbot
spec:
  template:
    metadata:
      labels:
        app: certbot-generate
    spec:
      volumes:
        - name: certs
      containers:
        - name: certbot
          image: certbot/certbot
          command: ["certbot"]
          #command: ["yes"]
          args: ["certonly", "--noninteractive", "--agree-tos", "--staging", "--standalone", "-d", "staging.ishankhare.com", "-m", "me@ishankhare.com"]
          volumeMounts:
            - name: certs
              mountPath: "/etc/letsencrypt/"
            #- name: certs
            #  mountPath: "/opt/"
          ports:
            - containerPort: 80
            - containerPort: 443
      restartPolicy: "OnFailure"
and my service:
apiVersion: v1
kind: Service
metadata:
  name: certbot-lb
  labels:
    app: certbot-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 35.189.170.149
  ports:
    - port: 80
      name: "http"
      protocol: TCP
    - port: 443
      name: "tls"
      protocol: TCP
  selector:
    app: certbot-generator
The full error message is something like this:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for staging.ishankhare.com
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. staging.ishankhare.com (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8: Timeout during connect (likely firewall problem)
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: staging.ishankhare.com
Type: connection
Detail: Fetching
http://staging.ishankhare.com/.well-known/acme-challenge/tpumqbcDWudT7EBsgC7IvtSzZvMAuooQ3PmSPh9yng8:
Timeout during connect (likely firewall problem)
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
- Your account credentials have been saved in your Certbot
configuration directory at /etc/letsencrypt. You should make a
secure backup of this folder now. This configuration directory will
also contain certificates and private keys obtained by Certbot so
making regular backups of this folder is ideal.
I've also tried running this as a simple Pod, but to no avail. Although, I still feel running it as a Job to completion is the way to go.
First, be aware that your Job definition is valid, but the spec.template.metadata.labels.app: certbot-generate value does not match your Service definition's spec.selector.app: certbot-generator: one is certbot-generate, the other is certbot-generator. So the pod run by the job controller is never added as an endpoint to the service.
Adjust one or the other, but they have to match, and that might just work :)
That said, I'm not sure using a Service with a selector targeting short-lived pods from a Job controller would work, nor with a simple Pod as you tested. The certbot-randomId pod created by the job (or whatever simple pod you create) takes about 15 seconds total to run/fail, and the HTTP validation challenge is triggered after just a few seconds of the pod's life: it's not clear to me that would be enough time for Kubernetes proxying to already be working between the service and the pod.
We can safely assume the Service is actually working, since you mentioned you tested DNS resolution, so you can easily rule out a timing issue by adding a sleep 10 (or more!) to give the pod more time to be added as an endpoint to the service and proxied appropriately before certbot triggers the HTTP challenge. Just change your Job command and args to:
command: ["/bin/sh"]
args: ["-c", "sleep 10 && certbot certonly --noninteractive --agree-tos --staging --standalone -d staging.ishankhare.com -m me@ishankhare.com"]
And here too, that might just work :)
That being said, I'd warmly recommend you use cert-manager, which you can install easily through its stable Helm chart: the Certificate custom resource it introduces will store your certificate in a Secret, making it straightforward to reuse from whatever K8s resource, and it takes care of renewal automatically, so you can just forget about it all.
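For illustration, a minimal Certificate sketch for your domain (the resource names, the Secret name, and the ClusterIssuer letsencrypt-staging are assumptions, not taken from your setup; this uses the current cert-manager.io/v1 API):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: staging-ishankhare-com
spec:
  secretName: staging-ishankhare-com-tls   # cert-manager stores the cert and key in this Secret
  dnsNames:
    - staging.ishankhare.com
  issuerRef:
    name: letsencrypt-staging              # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer
```

cert-manager then solves the ACME challenge and renews the certificate for you, with no Job to manage.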
I have just installed docker-registry in stand-alone mode successfully, and I can use the following command
curl -X GET http://localhost:5000/v2/
to get the proper result.
However, when I use
curl -X DELETE http://localhost:5000/v2/<name>/blobs/<digest>
to delete a layer, it fails and I get:
{"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]}
I use the default configuration from Docker Hub, and I studied the official configuration reference but failed to resolve it.
How can I fix this?
You have to add the parameter delete: enabled: true in /etc/docker/registry/config.yml so it looks like this:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
Take a look at the official registry configuration reference for more details.
Or by adding an environment variable to the container on boot:
-e REGISTRY_STORAGE_DELETE_ENABLED=true
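For example, a minimal sketch of starting the registry with deletion enabled (the image tag registry:2, container name, and port mapping are assumptions; adjust to your setup):

```shell
# Run the registry with layer/manifest deletion enabled
docker run -d -p 5000:5000 \
  -e REGISTRY_STORAGE_DELETE_ENABLED=true \
  --name registry registry:2
```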
Either use:
REGISTRY_STORAGE_DELETE_ENABLED=true
or define:
REGISTRY_STORAGE_DELETE_ENABLED: "yes"
in docker-compose.