Hi, I use the Kafka Connect Docker container image confluentinc/cp-kafka-connect:5.5.3, and everything was running fine with the following parameters:
...
-e "CONNECT_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
-e "CONNECT_INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter" \
...
Now we have introduced Schema Registry and decided to go with the JsonSchemaConverter for now, not Avro. I changed the following (INTERNAL stays as it is for now):
...
-e "CONNECT_KEY_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_VALUE_CONVERTER=io.confluent.connect.json.JsonSchemaConverter" \
-e "CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
-e "CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://<schemaregsirty_url>:8081" \
...
The following error appeared:
[2021-02-04 09:24:14,637] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
org.apache.kafka.common.config.ConfigException: Invalid value io.confluent.connect.json.JsonSchemaConverter for configuration key.converter: Class io.confluent.connect.json.JsonSchemaConverter could not be found.
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:727)
at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:473)
at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:466)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:108)
at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:129)
at org.apache.kafka.connect.runtime.WorkerConfig.<init>(WorkerConfig.java:374)
at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<init>(DistributedConfig.java:316)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:93)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
It seems the converter is not available here by default. Do I have to install the JsonSchemaConverter separately? I thought it came by default.
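If it does have to be installed manually, I assume something like the following would work: override the image's command to pull the converter from Confluent Hub before starting the worker. (The component name confluentinc/kafka-connect-json-schema-converter and its version are assumptions; check Confluent Hub for the exact coordinates.)

docker run -d \
  ... \
  confluentinc/cp-kafka-connect:5.5.3 \
  bash -c "confluent-hub install --no-prompt confluentinc/kafka-connect-json-schema-converter:5.5.3 && /etc/confluent/docker/run"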
I used the configuration from: enabling oauth2 with pgadmin and gitlab.
The main difference is that I have a local GitLab setup at https://gitlab_company_org
and a local (Dockerized) pgAdmin instance at http://pgadmin_projectx_company_org:8000.
But I get the error {"success":0,"errormsg":"Missing \"jwks_uri\" in metadata","info":"","result":null,"data":null} when I try to log in.
So my configs are:
config_local.py:
AUTHENTICATION_SOURCES = ['oauth2', 'internal']
MASTER_PASSWORD = True
OAUTH2_CONFIG = [
    {
        'OAUTH2_NAME': 'gitlab',
        'OAUTH2_DISPLAY_NAME': 'Gitlab',
        'OAUTH2_CLIENT_ID': 'gitlab_client_id',
        'OAUTH2_CLIENT_SECRET': 'gitlab_client_secret',
        'OAUTH2_TOKEN_URL': 'https://gitlab_company_org/oauth/token',
        'OAUTH2_AUTHORIZATION_URL': 'https://gitlab_company_org/oauth/authorize',
        'OAUTH2_API_BASE_URL': 'https://gitlab_company_org/oauth/',
        'OAUTH2_USERINFO_ENDPOINT': 'userinfo',
        'OAUTH2_SCOPE': 'openid email profile',
        'OAUTH2_ICON': 'fa-gitlab',
        'OAUTH2_BUTTON_COLOR': '#E24329',
    }
]
OAUTH2_AUTO_CREATE_USER = True
run_pgadmin.sh:
mkdir -p ./pgadmin
mkdir -p ./pgadmin/data
touch ./pgadmin/config_local.py
chown -R 5050:5050 ./pgadmin
docker stop pgadmin
docker rm pgadmin
docker pull dpage/pgadmin4
docker run -p 8000:80 \
--name pgadmin \
-e 'PGADMIN_DEFAULT_EMAIL=pgadmin@company.org' \
-e 'PGADMIN_DEFAULT_PASSWORD=somesupersecretsecret' \
-e 'PGADMIN_CONFIG_LOGIN_BANNER="Authorised users only!"' \
-v /opt/container/pgadmin/data:/var/lib/pgadmin \
-v /opt/container/pgadmin/config_local.py:/pgadmin4/config_local.py:ro \
-d dpage/pgadmin4
When trying to log in via the GitLab button I get the GitLab login, and I allowed the app to log in via GitLab, but afterwards I get the error {"success":0,"errormsg":"Missing \"jwks_uri\" in metadata","info":"","result":null,"data":null}, which seems to be a JSON response to: http://pgadmin.projectx.company.org:8000/oauth2/authorize?code=VERYLONGCODE&state=SOMEOTHERKINDOFCODE
Solution:
Thanks to Aditya Toshniwal: I tried the new dpage/pgadmin4:snapshot (2023-01-09-2) tag on Docker Hub and had to add the OAUTH2_SERVER_METADATA_URL parameter (value: https://gitlab_company_org/oauth/.well-known/openid-configuration), which I found in the issue he mentioned. Now it works with on-prem GitLab. Awesome!
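For completeness, a sketch of the change in config_local.py (only the new key is shown; everything else stays as in the config above):

OAUTH2_CONFIG = [
    {
        # ... all settings as above, plus:
        'OAUTH2_SERVER_METADATA_URL': 'https://gitlab_company_org/oauth/.well-known/openid-configuration',
    }
]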
The issue is fixed (https://github.com/pgadmin-org/pgadmin4/issues/5666) and the fix will be in the pgAdmin release coming this week. You can also try the candidate build here: https://developer.pgadmin.org/builds/2023-01-09-2/
I can run Keycloak with the following command:
./bin/kc.sh start-dev \
--https-certificate-file=/etc/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/etc/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
Works as expected
On the same computer, I try to run it using Docker:
docker run -p 80:8080 -p 443:8443 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
quay.io/keycloak/keycloak:latest \
start-dev \
--https-certificate-file=/ect/letsencrypt/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/ect/letsencrypt/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
It fails:
2022-12-23 23:11:59,784 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (development) mode
2022-12-23 23:11:59,785 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: /ect/letsencrypt/live/keycloak.fhir-poc.hcs.us.com/cert.pem
2022-12-23 23:11:59,787 ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) Key material not provided to setup HTTPS. Please configure your keys/certificates.
Any suggestions besides a reverse proxy?
The problem stems from Let's Encrypt's symlinked directory structure on Linux and the permissions needed to access those files.
The Let's Encrypt directory structure links files like this:
/etc/letsencrypt/live/<your-domain>/*.pem -> /etc/letsencrypt/archive/<your-domain>/*.pem
The problem is the link from the live folder to the archive folder/files:
the permissions are usually not correct.
A quick fix is to create a cert mirror: copy the related files from /etc/letsencrypt/live/<your-domain>/*.pem
to a new cert folder like /opt/certs,
change the permissions on /opt/certs to 777 (chmod -R 777 /opt/certs),
and create a cron.monthly job in /etc/cron.monthly that copies the files to /opt/certs and fixes the permissions every month, so your cert mirror stays up to date (see the sketch below).
This will make your example work. Keep in mind that permissions like 777 let everyone access these files; use proper permissions in a production environment.
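A minimal sketch of such a monthly job, assuming the /opt/certs mirror from above (the domain is a placeholder):

#!/bin/sh
# /etc/cron.monthly/mirror-certs: refresh the cert mirror for the container.
DOMAIN=your-domain
mkdir -p /opt/certs
# -L dereferences the live/ symlinks so real files land in the mirror.
cp -L /etc/letsencrypt/live/$DOMAIN/*.pem /opt/certs/
# 777 as in the quick fix above; tighten this for production.
chmod -R 777 /opt/certs

Make the script executable (chmod +x) so cron picks it up.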
I discovered the answer.
Let's Encrypt certificates in the "live" folder are symlinks into the "archive" folder, and I needed a custom Docker image for Keycloak to mount my certificates. So I followed the Keycloak docs for creating a custom Docker image and started a container with that image.
Following
https://www.keycloak.org/server/containers
https://eff-certbot.readthedocs.io/en/stable/using.html#where-are-my-certificates
to build a custom image and change the cert permissions
Dockerfile
FROM quay.io/keycloak/keycloak:latest as builder
ENV KEYCLOAK_ADMIN=root
ENV KEYCLOAK_ADMIN_PASSWORD=change_me
WORKDIR /opt/keycloak
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
COPY kc-export.json /opt/keycloak/kc-export.json
RUN /opt/keycloak/bin/kc.sh import --file /opt/keycloak/kc-export.json
VOLUME [ "/opt/keycloak/certs" ]
ENTRYPOINT ["/opt/keycloak/bin/kc.sh"]
Then start the container
docker run -p 8443:8443 \
-v /etc/letsencrypt:/opt/keycloak/certs:ro \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=change_me \
-e JAVA_OPTS_APPEND="$JAVA_OPTS_APPEND" \
my-keycloak-image:latest \
start-dev \
--https-certificate-file=/opt/keycloak/certs/live/$HOSTNAME/cert.pem \
--https-certificate-key-file=/opt/keycloak/certs/live/$HOSTNAME/privkey.pem \
--hostname=$HOSTNAME
I am unable to run two or more models with TensorFlow Serving via Docker on a Windows 10 machine.
I have made a models.config file:
model_config_list: {
  config: {
    name: "ukpred2",
    base_path: "/models/my_models/ukpred2",
    model_platform: "tensorflow"
  },
  config: {
    name: "model3",
    base_path: "/models/my_models/ukpred3",
    model_platform: "tensorflow"
  }
}
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\temp\models\,target=/models/my_models --mount type=bind,source=C:\Users\th3182\Documents\temp\models.config,target=/models/models.config -t tensorflow/serving --model_config_file=/models/models.config
In C:\Users\th3182\Documents\temp\models there are two folders, ukpred2 and ukpred3. Each contains the exported folder from the trained model (e.g. 1536668276), which holds an assets folder, a variables folder, and a saved_model.pb file.
The error I get is:
2018-09-13 15:24:50.567686: I tensorflow_serving/model_servers/main.cc:157] Building single TensorFlow model file config: model_name: model model_base_path: /models/model
2018-09-13 15:24:50.568209: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-09-13 15:24:50.568242: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: model
2018-09-13 15:24:50.568640: E tensorflow_serving/sources/storage_path/file_system_storage_path_source.cc:369] FileSystemStoragePathSource encountered a file-system access error: Could not find base path /models/model for servable model
I can't seem to get this to work with variations on the above, but I have managed to serve a single model with the following command:
docker run -p 8501:8501 --mount type=bind,source=C:\Users\th3182\Documents\projects\Better_Buyer2\model2\export\exporter,target=/models/model2 -e MODEL_NAME=model2 -t tensorflow/serving
You'll have to wait for the next release (1.11.0) for this to work. In the interim, you can use the image tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0.
In TensorFlow Serving 2.6.0, the Model Server config for multiple models looks like this:
model_config_list {
  config {
    name: 'my_first_model'
    base_path: '/tmp/my_first_model/'
    model_platform: 'tensorflow'
  }
  config {
    name: 'my_second_model'
    base_path: '/tmp/my_second_model/'
    model_platform: 'tensorflow'
  }
}
Example: Run multiple models using tensorflow/serving
docker run -p 8500:8500 \
-p 8501:8501 \
--mount type=bind,source=/tmp/models,target=/models/my_first_model \
--mount type=bind,source=/tmp/models,target=/models/my_second_model \
--mount type=bind,source=/tmp/model_config,target=/models/model_config \
-e MODEL_NAME=my_first_model \
-t tensorflow/serving \
--model_config_file=/models/model_config
For more information, please refer to Model Server Configuration.
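Once both models are loaded, each is addressable by name on the REST port. A quick check, assuming the model names from the config above (the predict payload is a placeholder and must match your model's input signature):

# Model status
curl http://localhost:8501/v1/models/my_first_model

# Prediction (adjust "instances" to your model's expected shape)
curl -X POST http://localhost:8501/v1/models/my_second_model:predict \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1.0, 2.0, 3.0]]}'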
Trying to complete this tutorial to run Grafana on Windows, I keep getting this error at this point of the build:
PS C:\Programs\Others\LocustReport\docker-grafana-graphite> make up
mkdir -p \
data/whisper \
data/elasticsearch \
data/grafana \
log/graphite \
log/graphite/webapp \
log/elasticsearch
The syntax of the command is incorrect.
make: *** [prep] Error 1
PS C:\Programs\Others\LocustReport\docker-grafana-graphite>
Is there any workaround to get this to build?
You'll need to run a Linux VM (I use VirtualBox) to build the image on. The Makefile relies on the Unix mkdir -p; the Windows shell's mkdir supports neither -p nor that multi-directory invocation, which is why it reports "The syntax of the command is incorrect."
Note: I've read this and this question. Neither helped.
I've created a Spring config server Docker image. The intent is to be able to run multiple containers with different profiles and search locations. This is where it differs from the questions above, where the properties were either loaded from git or from classpath locations known to the config server at startup. My config server is also deployed traditionally in Tomcat, not using a repackaged boot jar.
When I access http://<docker host>:8888/movie-service/native, I get no content (see below). The content I'm expecting is given at the end of the question.
{"name":"movie-service","profiles":["native"],"label":"master","propertySources":[]}
I've tried just about everything under the sun but just can't get it to work.
Config server main:
@SpringBootApplication
@EnableConfigServer
@EnableDiscoveryClient
public class ConfigServer extends SpringBootServletInitializer {
    /* Check out the EnvironmentRepositoryConfiguration for details */
    public static void main(String[] args) {
        SpringApplication.run(ConfigServer.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(ConfigServer.class);
    }
}
Config server application.yml:
spring:
  cloud:
    config:
      enabled: true
      server:
        git:
          uri: ${CONFIG_LOCATION}
        native:
          searchLocations: ${CONFIG_LOCATION}
server:
  port: ${HTTP_PORT:8080}
Config server bootstrap.yml:
spring:
  application:
    name: config-service
eureka:
  instance:
    hostname: ${CONFIG_HOST:localhost}
    preferIpAddress: false
  client:
    registerWithEureka: ${REGISTER_WITH_DISCOVERY:true}
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${DISCOVERY_HOST:localhost}:${DISCOVERY_PORT:8761}/eureka/
Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native xxx
Config Directory:
/Users/<some path>/config
--- movie-service.yml
Content of movie-service.yml:
themoviedb:
  url: http://api.themoviedb.org
Figured this out myself. The Docker container doesn't have access to the host file system unless the file system directory is mounted at runtime using a -v flag.
For the sake of completeness, this is the full working Docker run command:
docker run -it -p 8888:8888 \
-e CONFIG_HOST="$(echo $DOCKER_HOST | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}')" \
-e HTTP_PORT=8888 \
-e CONFIG_LOCATION="file:///Users/<some path>/config/" \
-e REGISTER_WITH_DISCOVERY="false" \
-e SPRING_PROFILES_ACTIVE=native \
-v "/Users/<some path>/config/:/Users/<some path>/config/" \
xxx
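With the volume mounted, the same endpoint now returns the property source. A sketch of the expected response, based on the movie-service.yml above (the exact propertySources name may vary):

curl http://<docker host>:8888/movie-service/native
# {"name":"movie-service","profiles":["native"],"label":"master",
#  "propertySources":[{"name":"file:/Users/<some path>/config/movie-service.yml",
#   "source":{"themoviedb.url":"http://api.themoviedb.org"}}]}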