How to prevent Traefik from serving the default self-signed certificate - docker

I am trying to set up a Nexus 3 Docker registry behind Traefik v2.3.1. The problem is that when I run
docker login <docker_url> -u <user> -p <password>
I receive this error:
Error response from daemon: Get https://docker_url/v1/users/: x509: certificate is valid for 6ddc59ad70b84f1659f8ffb82376935b.6f07c26f5a92b019cea10818bc6b7b7e.traefik.default, not docker_url
Traefik parameters
- "--entryPoints.web.address=:80/tcp"
- "--entryPoints.websecure.address=:443/tcp"
- "--entryPoints.traefik.address=:9000/tcp"
- "--api.dashboard=true"
- "--api.insecure"
- "--ping=true"
- "--providers.kubernetescrd"
- "--providers.kubernetesingress"
- "--log.level=DEBUG"
- "--serversTransport.insecureSkipVerify=true"
IngressRouteTCP
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nexus
spec:
  routes:
    - match: Host(`docker_url`)
      kind: Rule
      services:
        - name: nexus-svc
          port: 5000
In Nexus 3 I configured a Docker registry to listen on port 5000 over HTTP.
So my question is: do I really just need Traefik to stop serving its default self-signed certificate, or is there another problem that I don't see?
Thanks for the help in advance.
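For context, Traefik falls back to its generated default certificate when no configured certificate matches the requested host, which is exactly what the docker login error shows. A minimal sketch of one way to address it, assuming TLS is terminated at the websecure entrypoint and a Kubernetes TLS secret (hypothetically named nexus-tls here) holds a certificate valid for docker_url:
# Sketch only: attach a real certificate to the route so Traefik does not
# fall back to its generated default certificate. The secret name "nexus-tls"
# is an assumption; replace it with your own kubernetes.io/tls secret.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nexus
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`docker_url`)
      kind: Rule
      services:
        - name: nexus-svc
          port: 5000
  tls:
    secretName: nexus-tls   # certificate + key for docker_url
---
# Alternative (depending on Traefik version): set a cluster-wide default
# certificate via a TLSStore named "default" instead of a per-route secret.
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
spec:
  defaultCertificate:
    secretName: nexus-tls
Either way, docker login should then be presented with a certificate covering docker_url instead of the *.traefik.default fallback.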

Related

What is the correct configuration to let the HTTPS traffic go to containers with Docker and Traefik?

I have a Docker Swarm configured with Traefik as the reverse proxy, and a service to which my SSL traffic should be routed.
My SSL certificates are configured in Traefik; Traefik resolves them correctly and serves them to the client.
But I would like to let the SSL traffic go all the way to my containers (they also have the SSL certificate).
I have tried different ways, each without any success.
Here are the labels I use with my service.
No success with this one:
- traefik.http.routers.localhost-https.rule=HostRegexp(`{subdomain:[a-z0-9]+}.mydomain.com`)
- traefik.http.routers.localhost-https.entrypoints=https
- traefik.http.routers.localhost-https.service=localhost-https
- traefik.http.routers.localhost-https.priority=2
- traefik.http.routers.localhost-https.tls=true
- traefik.http.services.localhost-https.loadbalancer.passhostheader=true
- traefik.http.services.localhost-https.loadbalancer.server.port=443
- traefik.http.services.localhost-https.loadbalancer.server.scheme=https
No success with this one either:
- traefik.http.routers.localhost-https.rule=HostRegexp(`{subdomain:[a-z0-9]+}.mydomain.com`)
- traefik.http.routers.localhost-https.entrypoints=https
- traefik.http.routers.localhost-https.service=localhost-https
- traefik.http.routers.localhost-https.priority=2
- traefik.http.routers.localhost-https.tls=true
- traefik.http.services.localhost-https.loadbalancer.passhostheader=true
#- traefik.http.services.localhost-https.loadbalancer.server.port=443
- traefik.http.services.localhost-https.loadbalancer.server.scheme=https
The only configuration I got working, which however does not reach our goal, is this one:
- traefik.http.routers.localhost-https.rule=HostRegexp(`{subdomain:[a-z0-9]+}.mydomain.com`)
- traefik.http.routers.localhost-https.entrypoints=https
- traefik.http.routers.localhost-https.service=localhost-https
- traefik.http.routers.localhost-https.priority=2
- traefik.http.routers.localhost-https.tls=true
- traefik.http.services.localhost-https.loadbalancer.passhostheader=true
- traefik.http.services.localhost-https.loadbalancer.server.port=80
#- traefik.http.services.localhost-https.loadbalancer.server.scheme=https
How can I let the traffic continue to my container over SSL?
Thanks.
You should use a "ServersTransport" to let Traefik know that the SSL connection should be passed through to the pod:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: NAME
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`#####`)
      services:
        - name: NAME
          namespace: NAMESPACE
          port: 8080
          scheme: https
          serversTransport: transport
  tls:
    domains:
      - main: '###'
---
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: transport   # referenced as serversTransport above
spec:
  insecureSkipVerify: true
I pasted my config for Kubernetes resources; you should try to translate this to the Docker Swarm config. I don't know how to do that, but it should be quite straightforward.
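For Docker Swarm, a rough equivalent as a sketch only: since a ServersTransport is normally defined in the file provider rather than with labels, the global static option --serversTransport.insecureSkipVerify=true (the same one used in the Nexus question above) plays its role, while the labels keep scheme=https and port=443 so Traefik re-encrypts traffic to the container:
# Sketch of a Swarm stack; the service name, image, and entrypoint name are assumptions.
services:
  traefik:
    image: traefik:v2.3
    command:
      - "--entryPoints.https.address=:443"
      - "--providers.docker.swarmMode=true"
      - "--serversTransport.insecureSkipVerify=true"   # trust the containers' self-signed certificates
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  myapp:
    image: myapp:latest                                 # hypothetical backend serving HTTPS on 443
    deploy:
      labels:
        - traefik.http.routers.localhost-https.rule=HostRegexp(`{subdomain:[a-z0-9]+}.mydomain.com`)
        - traefik.http.routers.localhost-https.entrypoints=https
        - traefik.http.routers.localhost-https.tls=true
        - traefik.http.routers.localhost-https.service=localhost-https
        - traefik.http.services.localhost-https.loadbalancer.server.port=443
        - traefik.http.services.localhost-https.loadbalancer.server.scheme=https
A different approach is TLS passthrough via a TCP router (a HostSNI rule with tls.passthrough=true), which forwards the raw TLS stream to the container instead of re-encrypting, at the cost of Traefik no longer seeing the HTTP layer.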

How can I connect my Kubernetes pod to a remote Jaeger?

I am trying to connect my pod in a Kubernetes (k8s) cluster to a remote Jaeger server. I've tested it and it works well when both are on the same machine. However, when I run my app on k8s, it cannot connect to Jaeger even though I am using the physical IP.
First, I've tried this:
containers:
  - name: api
    env:
      - name: OTEL__AGENT_HOST
        value: <my-physical-ip>
      - name: OTEL__AGENT_PORT
        value: "6831"
After reading docs on the internet, I added the Jaeger agent to my deployment as a sidecar container like this:
containers:
  - name: api
    env:
      - name: OTEL__AGENT_HOST
        value: "localhost"
      - name: OTEL__AGENT_PORT
        value: "6831"
  - image: jaegertracing/jaeger-agent
    name: jaeger-agent
    ports:
      - containerPort: 5775
        protocol: UDP
      - containerPort: 6831
        protocol: UDP
      - containerPort: 6832
        protocol: UDP
      - containerPort: 5778
        protocol: TCP
    args: ["--reporter.grpc.host-port=<my-physical-ip>:14250"]
It seems to work well in both containers, but on the Jaeger collector I receive a log like this:
{"level":"warn","ts":1641987200.2678068,"caller":"channelz/logging.go:62","msg":"[core]grpc: Server.Serve failed to create ServerTransport: connection error: desc = \"transport: http2Server.HandleStreams failed to receive the preface from client: read tcp 172.20.0.4:14250-><the-ip-of-machine-my-pods-are-deployed>:32852: i/o timeout\"","system":"grpc","grpc_log":true}
I exposed port 14267 on the Jaeger collector on the remote machine, then changed args: ["--reporter.grpc.host-port=<my-physical-ip>:14250"] to args: ["--reporter.grpc.host-port=<my-physical-ip>:14267"], and it works.
Have you tried using the Jaeger operator? https://github.com/jaegertracing/jaeger-operator
This is how you install it:
kubectl create namespace observability
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.31.0/jaeger-operator.yaml -n observability
Then you can create a Jaeger instance that brings up the Jaeger components (collector, agent, query). You can define storage too, e.g. Elasticsearch:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simple-prod-es
spec:
  strategy: production
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: https://search-test-g7fbo7pzghdquvvgxty2pc6lqu.us-east-2.es.amazonaws.com
        index-prefix: jaeger-span
        username: test
        password: xxxeee
Then, in your application's deployment YAML file, you will need to configure the agent as a sidecar (or you can run the agent as a DaemonSet) so that requests can be forwarded to the collector; see the sketch after the link below.
More details here: https://www.jaegertracing.io/docs/1.31/operator/#deployment-strategies
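With the operator installed, a minimal sketch of the sidecar route, assuming the operator watches the application's namespace; the annotation below is the operator's documented injection switch, while the deployment name, labels, and image are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                                    # hypothetical deployment name
  annotations:
    sidecar.jaegertracing.io/inject: "true"    # ask the operator to inject the jaeger-agent sidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: my-api:latest                 # hypothetical image
          env:
            - name: OTEL__AGENT_HOST           # env names taken from the question above
              value: "localhost"
            - name: OTEL__AGENT_PORT
              value: "6831"
The injected agent listens on localhost:6831 inside the pod and reports to the collector created by the Jaeger instance above, so the application can keep the settings from the question.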

Configuring WSO2 API Manager to Work With Traefik for HTTPS

I am trying to configure Traefik and WSO2 API Manager. Basically, I want to configure Traefik to handle https.
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.service-am-https.redirectscheme.scheme=https"
- "traefik.http.routers.service-am-http.entrypoints=web"
- "traefik.http.routers.service-am-http.rule=Host(`xx.xx.xx`) && Path(`/apim/admin`)"
- "traefik.http.routers.service-am-http.middlewares=service-am-https#docker"
- "traefik.http.routers.service-am.tls=true"
- "traefik.http.routers.service-am.rule=Host(`xx.xx.xx`) && Path(`/apim/admin`)"
- "traefik.http.routers.service-am.entrypoints=web-secure"
- "traefik.http.services.service-am.loadbalancer.server.port=9443"
I also included this in the deployment.toml file for API Manager.
[catalina.valves.valve.properties]
className = "org.apache.catalina.valves.RemoteIpValve"
internalProxies = "*"
remoteIpHeader = "x-forwarded-for"
proxiesHeader = "x-forwarded-by"
trustedProxies = "*"
When I try to access the service, https://xx.xx.xx/apim/admin, I get this error:
Bad Request
This combination of host and port requires TLS.
Traefik is successfully handling the https part but when it comes to WSO2 API Manager, this issue comes up. Any ideas on how to resolve this?
I just had this problem and solved it by including
annotations:
  ingress.kubernetes.io/protocol: https
in my Ingress.
The full configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wso2-ingress
  namespace: <namespace>
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    ingress.kubernetes.io/protocol: https
spec:
  rules:
    - host: <hostname>
      http:
        paths:
          - path: /
            backend:
              serviceName: <service-name>
              servicePort: 9443
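The annotation approach above targets a Kubernetes Ingress. Since the question configures Traefik through Docker labels, a rough label-based equivalent (a sketch reusing the question's own router and service names) is to tell Traefik to speak HTTPS to the 9443 backend:
- "traefik.http.routers.service-am.tls=true"
- "traefik.http.services.service-am.loadbalancer.server.port=9443"
- "traefik.http.services.service-am.loadbalancer.server.scheme=https"   # forward over HTTPS, avoiding the "requires TLS" error
If API Manager's 9443 endpoint uses a self-signed certificate, the global static option --serversTransport.insecureSkipVerify=true (seen in the first question) may also be needed.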

How does Kubernetes invoke a Docker image?

I am attempting to run a Flask app via uWSGI in a Kubernetes deployment. When I run the Docker container locally, everything appears to be working fine. However, when I create the Kubernetes deployment on Google Kubernetes Engine, the deployment goes into CrashLoopBackOff because uWSGI complains:
uwsgi: unrecognized option '--http 127.0.0.1:8080'.
The image definitely has the http option because:
a. uWSGI was installed via pip3 which includes the http plugin.
b. When I run the deployment with --list-plugins, the http plugin is listed.
c. The http option is recognized correctly when run locally.
I am running the Docker image locally with:
$: docker run <image_name> uwsgi --http 127.0.0.1:8080
The container Kubernetes YAML config is:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: launch-service-example
  name: launch-service-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: launch-service-example
    spec:
      containers:
        - name: launch-service-example
          image: <image_name>
          command: ["uwsgi"]
          args:
            - "--http 127.0.0.1:8080"
            - "--module code.experimental.launch_service_example.__main__"
            - "--callable APP"
            - "--master"
            - "--processes=2"
            - "--enable-threads"
            - "--pyargv --test1=3--test2=abc--test3=true"
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: launch-service-example-service
spec:
  selector:
    app: launch-service-example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
The container is exactly the same, which leads me to believe that the way the container is invoked by Kubernetes may be causing the issue. As a side note, I have tried passing everything via the command list with no args, which leads to the same result. Any help would be greatly appreciated.
It happens because of the difference in how arguments are processed in the shell and in the container spec: the shell splits "--http 127.0.0.1:8080" on whitespace into two argv entries, while Kubernetes passes each args element to uWSGI as a single argument, spaces included, so uWSGI sees an unknown option literally named "--http 127.0.0.1:8080".
To fix it, split every flag and its value into separate list items, like this:
args:
  - "--http"
  - "127.0.0.1:8080"
  - "--module"
  - "code.experimental.launch_service_example.__main__"
  - "--callable"
  - "APP"
  - "--master"
  - "--processes=2"
  - "--enable-threads"
  - "--pyargv"
  - "--test1=3--test2=abc--test3=true"

Kubernetes Private Docker Registry Push Error

So I have deployed a Kubernetes cluster and installed a private Docker registry. Here is my registry controller:
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: registry-master
  labels:
    name: registry-master
spec:
  replicas: 1
  selector:
    name: registry-master
  template:
    metadata:
      labels:
        name: registry-master
    spec:
      containers:
        - name: registry-master
          image: registry
          ports:
            - containerPort: 5000
          command: ["docker-registry"]
And the service:
---
apiVersion: v1
kind: Service
metadata:
  name: registry-master
  labels:
    name: registry-master
spec:
  ports:
    # the port that this service should serve on
    - port: 5000
      targetPort: 5000
  selector:
    name: registry-master
Now I SSHed to one of the Kubernetes nodes and built a Ruby app container:
cd /tmp
git clone https://github.com/RichardKnop/sinatra-redis-blog.git
cd sinatra-redis-blog
docker build -t ruby-redis-app .
When I try to tag it and push it to the registry:
docker tag ruby-redis-app registry-master/ruby-redis-app
docker push 10.100.129.115:5000/registry-master/ruby-redis-app
I am getting this error:
Error response from daemon: invalid registry endpoint https://10.100.129.115:5000/v0/: unable to ping registry endpoint https://10.100.129.115:5000/v0/
v2 ping attempt failed with error: Get https://10.100.129.115:5000/v2/: read tcp 10.100.129.115:5000: connection reset by peer
v1 ping attempt failed with error: Get https://10.100.129.115:5000/v1/_ping: read tcp 10.100.129.115:5000: connection reset by peer. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 10.100.129.115:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/10.100.129.115:5000/ca.crt
Any idea how to solve it? I have been struggling with this for several hours.
Richard
If you're using HTTPS, you must have created a self-signed certificate (with your own CA authority) or you have a CA-signed certificate.
If so, you need to install this CA cert on the machine you're calling FROM.
Put your CA cert in
/etc/ssl/certs
and run
update-ca-certificates
Sometimes I have had to put it also in
/usr/local/share/ca-certificates/
(in both cases your CA file extension should be .pem).
For Docker you may also need to put a file in
/etc/docker/certs.d/<--your-site-url--->/ca.crt
and the file must be named ca.crt
(same file as the .pem file, but named ca.crt).
I saw a similar issue and it was related to my registry not supporting HTTPS. If your registry does not support HTTPS, you'll have to tell the Docker daemon that it is an insecure registry:
echo 'DOCKER_OPTS="--insecure-registry 10.100.129.115:5000"' | sudo tee -a /etc/default/docker
And then restart your docker daemon.
If you are using Ubuntu, add this line to your /etc/default/docker file:
DOCKER_OPTS="--insecure-registry xxx.xxx.xxx.xxx:5000"
where xxx.xxx.xxx.xxx is your private registry IP.
Then restart the Docker daemon:
sudo service docker restart
