I'm trying to start migrating my backends to be compatible with Traefik v2.0.
The old configuration was:
labels:
  - traefik.port=8500
  - traefik.docker.network=proxy
  - traefik.frontend.rule=Host:consul.{DOMAIN}
I assumed the docker.network label is no longer necessary, and that for the new Traefik it would change to:
- traefik.http.routers.consul-server-bootstrap.rule=Host(`consul.scoob.thrust.com.br`)
But how do I specify that this should forward to my backend on port 8500, and not to port 80 where the request reached the Traefik entrypoint?
My goal is to accomplish something like this:
https://docs.traefik.io/user-guide/cluster-docker-consul/#migrate-configuration-to-consul
Is that still possible? I saw that there is no --consul or storeconfig command in v2.0.
You need the traefik.http.services.{SERVICE}.loadbalancer.server.port label:
labels:
  - "traefik.http.services.{SERVICE}.loadbalancer.server.port=8500"
  - "traefik.docker.network=proxy"
  - "traefik.http.routers.{SERVICE}.rule=Host(`{DOMAIN}`)"
Replace {SERVICE} with the name of your service.
Replace {DOMAIN} with your domain name.
If you want to remove the proxy network you'll need to look at https://docs.traefik.io/v2.0/providers/docker/#usebindportip
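Applied to the Consul example from the question, a minimal sketch (assuming the router and service are both named consul-server-bootstrap and the container is still attached to the proxy network):
labels:
  - "traefik.http.routers.consul-server-bootstrap.rule=Host(`consul.scoob.thrust.com.br`)"
  - "traefik.http.services.consul-server-bootstrap.loadbalancer.server.port=8500"
  - "traefik.docker.network=proxy"
The router's Host rule is matched on whichever entrypoint the request arrives at (e.g. 80 or 443), and loadbalancer.server.port is what tells Traefik to forward to the container on 8500.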
Using landoop/fast-data-dev:2.6, I want to change the default batch.size by passing 'producer.override.batch.size=65536' when creating a new connector.
But in order to do that, the override policy has to be applied on the worker side:
connector.client.config.override.policy=All
Otherwise there is an exception:
"producer.override.batch.size" : The 'None' policy does not allow
'batch.size' to be overridden in the connector configuration.
It's not clear how exactly:
- to change the default worker properties
- where Landoop expects them to be placed
- what name they should have
so that Landoop picks them up.
I start Landoop using the following docker-compose file:
version: '2'
services:
  kafka-cluster:
    image: landoop/fast-data-dev:2.6
    environment:
      ADV_HOST: 127.0.0.1
      RUNTESTS: 0
    ports:
      - 2181:2181             # Zookeeper
      - 3030:3030             # Landoop UI
      - 8081-8083:8081-8083   # REST Proxy, Schema Registry, Kafka Connect ports
      - 9581-9585:9581-9585   # JMX Ports
      - 9092:9092             # Kafka Broker
    volumes:
      - ./connectors/news/crypto-panic-connector-1.0.jar:/connectors/crypto-panic-connector-1.0.jar
The distributed worker properties generated by Landoop at /connect/connect-avro-distributed.properties:
offset.storage.partitions=5
key.converter.schema.registry.url=http://127.0.0.1:8081
value.converter.schema.registry.url=http://127.0.0.1:8081
config.storage.replication.factor=1
offset.storage.topic=connect-offsets
status.storage.partitions=3
offset.storage.replication.factor=1
key.converter=io.confluent.connect.avro.AvroConverter
config.storage.topic=connect-configs
config.storage.partitions=1
group.id=connect-fast-data
rest.advertised.host.name=127.0.0.1
port=8083
value.converter=io.confluent.connect.avro.AvroConverter
rest.port=8083
status.storage.replication.factor=1
status.storage.topic=connect-statuses
access.control.allow.origin=*
access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS
jmx.port=9584
plugin.path=/var/run/connect/connectors/stream-reactor,/var/run/connect/connectors/third-party,/connectors
bootstrap.servers=PLAINTEXT://127.0.0.1:9092
crypto-panic-connector-1.0 connector directory structure:
/config:
  > worker.properties
/src:
  > ...
UPDATE
Adding these to the environment properties:
CONNECT_CONNECT_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
This doesn't work for landoop/fast-data-dev:2.6. In the logs it's still:
'connector.client.config.override.policy = None'
And there is a warning:
WARN The configuration 'connect.client.config.override.policy' was supplied but isn't a known config.
Changing this to
CONNECTOR_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
Removes the warning, but in the end the override policy is still 'None' and it's not possible to override client properties when creating a connector.
Changing to
CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
has the same effect: the policy stays 'None'.
Also, the batch size override is not applied (see the warning below), so I assume these override features are not supported in Landoop.
WARN The configuration 'producer.override.batch.size' was supplied but isn't a known config.
I assume 'confluentinc/cp-kafka-connect' doesn't have a UI built in, and for learning purposes it seems better to have one, so I'd prefer to do this in Landoop. But thanks for the recommendation to use 'confluentinc/cp-kafka-connect'; I will try this config override there as well.
For starters, that image is very old and no longer maintained. I'd recommend you use confluentinc/cp-kafka-connect.
In any case, for both images, you can use:
environment:
  CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
  CONNECT_PRODUCER_OVERRIDE_BATCH_SIZE: 65536
Regarding "It's not clear how exactly ... to change the default worker properties": look at the source code.
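As an illustration, here is a rough compose sketch for confluentinc/cp-kafka-connect. The image tag, broker address, group id, topic names and converters below are placeholders for whatever your setup uses, not values taken from the question:
kafka-connect:
  image: confluentinc/cp-kafka-connect:6.2.0        # placeholder tag
  ports:
    - 8083:8083
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka:9092           # placeholder broker address
    CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
    CONNECT_GROUP_ID: connect-cluster
    CONNECT_CONFIG_STORAGE_TOPIC: connect-configs
    CONNECT_OFFSET_STORAGE_TOPIC: connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: connect-statuses
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_PLUGIN_PATH: /usr/share/java,/connectors
    # maps to connector.client.config.override.policy=All in the worker config
    CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'All'
  volumes:
    - ./connectors/news/crypto-panic-connector-1.0.jar:/connectors/crypto-panic-connector-1.0.jar
With the worker policy set to All, producer.override.batch.size=65536 then goes into the connector configuration you submit to the Connect REST API when creating the connector, rather than into the worker environment.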
I have a problem with Traefik. The situation:
I have a dockerized GitLab with the GitLab registry activated; no problem with that.
My GitLab answers on port 80 and the registry on port 80 as well, and this is not a problem either (I tried with port mapping and it worked well).
Now I try to configure these guys in Traefik, and that's when problems occur. When there is just one router and one service, all good, but when I try to add the registry I get an error.
The relevant part of the docker-compose file:
labels:
  - traefik.http.routers.gitlab.rule=Host(`git.example.com`)
  - traefik.http.routers.gitlab.tls=true
  - traefik.http.routers.gitlab.tls.certresolver=lets-encrypt
  - traefik.http.routers.gitlab.service=gitlabservice
  - traefik.http.services.giltabservice.loadbalancer.server.port=80
  - traefik.http.routers.gitlab.entrypoints=websecure
  - traefik.http.routers.gitlabregistry.rule=Host(`registry.example.com`)
  - traefik.http.routers.gitlabregistry.tls=true
  - traefik.http.routers.gitlabregistry.tls.certresolver=lets-encrypt
  - traefik.http.routers.gitlabregistry.service=gitlabregistryservice
  - traefik.http.services.giltabregistryservice.loadbalancer.server.port=80
  - traefik.http.routers.gitlabregistry.entrypoints=websecure
And the errors I have in traefik:
time="2021-03-27T08:48:20Z" level=error msg="the service "gitlabservice#docker" does not exist" entryPointName=websecure routerName=gitlab#docker
time="2021-03-27T08:48:20Z" level=error msg="the service "gitlabregistryservice#docker" does not exist" entryPointName=websecure routerName=gitlabregistry#docker
Thanks in advance if you have any idea.
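For what it's worth, the error itself hints at a name mismatch: the routers point at gitlabservice and gitlabregistryservice, while the services labels spell them giltabservice and giltabregistryservice. A sketch with the names aligned, everything else left unchanged:
labels:
  - traefik.http.routers.gitlab.service=gitlabservice
  - traefik.http.services.gitlabservice.loadbalancer.server.port=80
  - traefik.http.routers.gitlabregistry.service=gitlabregistryservice
  - traefik.http.services.gitlabregistryservice.loadbalancer.server.port=80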
In reference to: Define host and path frontend rule for Traefik (I wanted to comment on the answer but I can't)
I implemented the suggestion in the answer using
Host(`domain.com`) && Path(`/path`)
but it does not work (I get a 404 when trying to access it).
Traefik logs show:
time="2020-07-07T10:31:30Z" level=error msg="field not found, node: rule " providerName=docker
My docker compose looks like this:
deploy:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule = Host(`${HOSTNAME}`) && Path(`${DIRECTORY}`)"
When using just the Host rule it works perfectly fine. But I want to be able to do e.g. subdomain.domain.com/subdirectory for service 1 and subdomain.domain.com/subdirectory2 for service 2.
I also tried - "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule = Host(`${HOSTNAME}`) && PathPrefix(`${DIRECTORY}`)" but I get the same error in the log and a 404.
I found the problem: remove the spaces around the "=".
This works:
- "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule=(Host(`${HOSTNAME}`) && Path(`${DIRECTORY}`))"
I now have another problem: my service in this subdirectory redirects outside of it (example: on the TYPO3 first install, I access subdomain.domain.com/foo and it redirects me to subdomain.domain.com/typo3/install.php).
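If the goal is several services under one hostname, a common pattern is to match on PathPrefix and strip the prefix before forwarding. A sketch with hypothetical middleware names, reusing the variables from above; note this alone won't fix an application that redirects to absolute paths outside its prefix:
deploy:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.typo3-${NAMEOFSERVICE}.rule=Host(`${HOSTNAME}`) && PathPrefix(`${DIRECTORY}`)"
    - "traefik.http.middlewares.typo3-${NAMEOFSERVICE}-strip.stripprefix.prefixes=${DIRECTORY}"
    - "traefik.http.routers.typo3-${NAMEOFSERVICE}.middlewares=typo3-${NAMEOFSERVICE}-strip"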
I deployed an application on OpenShift and created a Route for it. I want the route to be accessible to only a few users, not everyone, and I want to control which users can access it. In other words, I want to provide an authentication scheme for the Route in OpenShift. How can I achieve this? Please help me in this regard.
Unfortunately, OpenShift Routes do not have any authentication mechanisms built-in. There are the usual TLS / subdomain / path-based routing features, but no authentication.
So your most straightforward path on OpenShift would be to deploy an additional reverse proxy as part of your application, such as "nginx", "traefik" or "haproxy":
                        +-------------+       +------------------+
                        |   reverse   |       |                  |
 +--incoming traffic--->+    proxy    +------>+ your application |
    from your Route     |             |       |                  |
                        +-------------+       +------------------+
For authentication methods, you then have multiple options, depending on which solution you choose to deploy (the simplest ones being Basic Auth or Digest Auth).
While Kubernetes ingress controllers usually embed such functionality, this is not the case with OpenShift. The recommended way would usually be to rely on an oauth-proxy, which integrates with the OpenShift API, allowing for easy integration with OpenShift users, while it could also be used to integrate with third-party providers (KeyCloak, LemonLDAP-NG, ...).
OAuth
Integrating with the OpenShift API, we would first create a ServiceAccount, such as the following:
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    serviceaccounts.openshift.io/oauth-redirectreference.my-app: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"my-route-name"}}'
  name: my-app
  namespace: my-project
That annotation tells OpenShift how to send your users back to your application upon successful login: look for the Route named my-route-name.
We will also create a certificate signed by the OpenShift PKI that we'll later use to set up our OAuth proxy. This can be done by creating a Service such as the following:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: my-app-tls
  name: my-app
  namespace: my-project
spec:
  ports:
  - port: 8443
    name: oauth
  selector:
    [ your deployment selector placeholder ]
We may then add a sidecar container into our application deployment:
[...]
  containers:
  - args:
    - -provider=openshift
    - -proxy-prefix=/oauth2
    - -skip-auth-regex=^/public/
    - -login-url=https://console.example.com/oauth/authorize
    - -redeem-url=https://kubernetes.default.svc/oauth/token
    - -https-address=:8443
    - -http-address=
    - -email-domain=*
    - -upstream=http://localhost:3000
    - -tls-cert=/etc/tls/private/tls.crt
    - -tls-key=/etc/tls/private/tls.key
    - -client-id=system:serviceaccount:my-project:my-app
    - -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
    - -cookie-refresh=0
    - -cookie-expire=24h0m0s
    - -cookie-name=_oauth2_mycookiename
    name: oauth-proxy
    image: docker.io/openshift/oauth-proxy:latest
    [ resource requests/limits & liveness/readiness placeholder ]
    volumeMounts:
    - name: cert
      mountPath: /etc/tls/private
  [ your application container placeholder ]
  serviceAccount: my-app
  serviceAccountName: my-app
  volumes:
  - name: cert
    secret:
      secretName: my-app-tls
Assuming:
- console.example.com is your OpenShift public API FQDN
- ^/public/ is an optional path regexp that should not be subject to authentication
- http://localhost:3000 is the URL the OAuth proxy connects to; fix the port to whichever your application binds to. Also consider reconfiguring your application to bind only on its loopback interface, preventing accesses that would bypass your OAuth proxy
- my-project / my-app are your project & app names
Make sure your Route's TLS termination is set to Passthrough and that it sends its traffic to port 8443.
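For illustration, a sketch of such a Route, reusing the my-route-name / my-app placeholders from above (the hostname is a placeholder too):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route-name
  namespace: my-project
spec:
  host: my-app.apps.example.com   # placeholder public hostname
  tls:
    termination: passthrough      # TLS is terminated by the oauth-proxy sidecar
  port:
    targetPort: oauth             # the 8443 port named "oauth" in the Service above
  to:
    kind: Service
    name: my-app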
Check out https://github.com/openshift/oauth-proxy for an exhaustive list of options. The openshift-sar one could be interesting: it filters which users may log in based on their permissions on OpenShift objects.
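For example, a hypothetical policy restricting login to users allowed to get Services in the my-project namespace could be added to the oauth-proxy args above:
    - '-openshift-sar={"namespace":"my-project","resource":"services","verb":"get"}'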
Obviously, in some cases you would be deploying applications that can be configured to integrate with some OAuth provider directly: then there's no reason to use a proxy.
Basic Auth
In cases where you do not want to integrate with OpenShift users, nor any kind of third-party provider, you may want to use some nginx sidecar container; there are plenty available on Docker Hub.
Custom Ingress
Also note that with OpenShift 3.x, cluster admins may customize the template used to generate the HAProxy configuration by patching their ingress deployment. This is a relatively common way to fix some headers, handle some custom annotations for your users to set in their Routes, ... Look for TEMPLATE_FILE in https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html#env-variables.
As of right now, I do not think OpenShift 4 ingress controllers allow for this, as they are driven by an operator with limited configuration options; see https://docs.openshift.com/container-platform/4.5/networking/ingress-operator.html.
Although, as with OpenShift 3, you may very well deploy your own ingress controllers should you have to handle such edge cases.
I'm now using docker-compose for all of my projects. Very convenient. Much more comfortable than manual linking through several docker commands.
There is something that is not clear to me yet though: the logic behind the linking environment variables.
E.g., with this docker-compose.yml:
mongodb:
  image: mongo
  command: "--smallfiles --logpath=/dev/null"
web:
  build: .
  command: npm start
  volumes:
    - .:/myapp
  ports:
    - "3001:3000"
  links:
    - mongodb
  environment:
    PORT: 3000
    NODE_ENV: 'development'
In the node app, I need to retrieve the mongodb URL, and if I console.log(process.env), I get so many things that it feels very random (I've kept only the docker-compose-related ones):
MONGODB_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PORT: '27017',
MYAPP_MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
MONGODB_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_GOSU_VERSION: '1.7',
'MYAPP_MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
MYAPP_MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GOSU_VERSION: '1.7',
MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_1_NAME: '/myapp_web_1/mongodb_1',
MONGODB_1_PORT_27017_TCP_PORT: '27017',
MONGODB_1_PORT_27017_TCP_PROTO: 'tcp',
'MONGODB_1_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MONGODB_PORT: 'tcp://172.17.0.2:27017',
MONGODB_1_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_ENV_GOSU_VERSION: '1.7',
MONGODB_ENV_MONGO_MAJOR: '3.2',
MONGODB_PORT_27017_TCP_ADDR: '172.17.0.2',
MONGODB_NAME: '/myapp_web_1/mongodb',
MONGODB_1_PORT_27017_TCP: 'tcp://172.17.0.2:27017',
MONGODB_PORT_27017_TCP_PORT: '27017',
MONGODB_1_ENV_MONGO_VERSION: '3.2.6',
MONGODB_PORT_27017_TCP_PROTO: 'tcp',
MYAPP_MONGODB_1_PORT: 'tcp://172.17.0.2:27017',
'MONGODB_ENV_affinity:container': '=d5c9ebd7766dc954c412accec5ae334bfbe836c0ad0f430929c28d4cda1bcc0e',
MYAPP_MONGODB_1_ENV_MONGO_MAJOR: '3.2',
MONGODB_ENV_GPG_KEYS: 'DFFA3DCF326E302C4787673A01C4E7FAAAB2461C \t42F3E95A2C4F08279C4960ADD68FA50FEA312927',
MYAPP_MONGODB_1_PORT_27017_TCP_ADDR: '172.17.0.2',
MYAPP_MONGODB_1_NAME: '/myapp_web_1/novatube_mongodb_1',
I don't know which to pick, and why are there so many entries? Is it better to use the general ones or the MYAPP-prefixed ones? Where does the MYAPP name come from? The folder name?
Could someone clarify this?
Wouldn't it be easier to let the user define the ones they need in the docker-compose.yml file with a custom mapping? Like:
links:
  - mongodb:
      - MONGOIP: IP
      - MONGOPORT: PORT
What I'm saying might not make sense though. :-)
Environment variables are a legacy way of defining links between containers. If you are using a newer version of Compose, you don't need the links declaration at all. Connecting to mongodb from your app container will work fine by just using the name of the service (mongodb) as a hostname, without any links defined in the compose file; it relies on Docker's built-in DNS resolution instead (check /etc/hosts, there's nothing in there either!).
As for your question about the MYAPP prefix: you're right, Compose prefixes the service name with the name of the folder (or 'project', in Compose nomenclature). It does the same thing when creating custom networks and volumes.
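For instance, a minimal sketch of the same setup without links, using the version 2 compose format; MONGO_URL is a hypothetical variable your Node app would read, not something Compose defines for you:
version: '2'
services:
  mongodb:
    image: mongo
    command: "--smallfiles --logpath=/dev/null"
  web:
    build: .
    command: npm start
    volumes:
      - .:/myapp
    ports:
      - "3001:3000"
    environment:
      PORT: 3000
      NODE_ENV: 'development'
      # hypothetical: "mongodb" resolves to the mongodb service via Docker's DNS
      MONGO_URL: mongodb://mongodb:27017/myapp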