I am using Keycloak in a docker-compose file and try to import a realm.json file as shown below, but importing the realm fails with this error:
15:07:38,919 WARN [org.keycloak.services] (ServerService Thread Pool -- 60) KC-SERVICES0005: Unable to import realm boost from file /opt/jboss/keycloak/realm-config/keycloak-realm.json.: java.lang.IllegalArgumentException: No such provider 'declarative-user-profile'
Code from the docker-compose file:
keycloak:
  image: 'wizzn/keycloak:14'
  environment:
    KEYCLOAK_IMPORT: /opt/jboss/keycloak/realm-config/keycloak-realm.json -Dkeycloak.profile.feature.upload_scripts=enabled
  volumes:
    - ./keycloak-init:/opt/jboss/keycloak/realm-config
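Two things stand out: KEYCLOAK_IMPORT takes only a file path, so the appended -D flag is ignored, and 'declarative-user-profile' is a preview feature in this Keycloak generation, so a realm export that references it can only be imported with that feature switched on. A minimal sketch of what to try, assuming this derived image honors JAVA_OPTS_APPEND the way the stock jboss/keycloak images do:
keycloak:
  image: 'wizzn/keycloak:14'
  environment:
    # KEYCLOAK_IMPORT takes only the file path; JVM system properties go
    # into JAVA_OPTS_APPEND instead (assumption: this image passes it
    # through to the JVM like stock jboss/keycloak)
    KEYCLOAK_IMPORT: /opt/jboss/keycloak/realm-config/keycloak-realm.json
    JAVA_OPTS_APPEND: >-
      -Dkeycloak.profile.feature.upload_scripts=enabled
      -Dkeycloak.profile.feature.declarative_user_profile=enabled
  volumes:
    - ./keycloak-init:/opt/jboss/keycloak/realm-config
If the feature flag is not available in this build, the alternative is to strip the 'declarative-user-profile' references from the exported realm JSON before importing.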
I have this docker-compose.yml file:
otel-collector:
  image: otel/opentelemetry-collector
  command: ["--config=/etc/otel-collector-config.yaml"]
  volumes:
    - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
  ports:
    - "1888:1888"   # pprof extension
    - "8888:8888"   # Prometheus metrics exposed by the collector
    - "8889:8889"   # Prometheus exporter metrics
    - "13133:13133" # health_check extension
    - "4317:4317"   # OTLP gRPC receiver
    - "4318:4318"   # OTLP http receiver
    - "55679:55679" # zpages extension
I see this error after running docker compose up:
otel-collector | Error: failed to get config: cannot resolve the configuration: cannot retrieve the configuration: unable to read the file file:/etc/otel-collector-config.yaml: open /etc/otel-collector-config.yaml: permission denied
otel-collector | 2022/01/09 11:15:47 collector server run finished with error: failed to get config: cannot resolve the configuration: cannot retrieve the configuration: unable to read the file file:/etc/otel-collector-config.yaml: open /etc/otel-collector-config.yaml: permission denied
How can I solve it?
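This is usually a host-side file problem rather than a collector bug: the collector process inside the container runs as a non-root user, so the bind-mounted config must be readable by that user, and on SELinux-enforcing hosts the mount additionally needs a relabel flag. A minimal sketch, assuming plain file permissions or SELinux are the cause: make the file world-readable on the host (chmod 644 otel-collector-config.yaml) and mount it like this:
otel-collector:
  image: otel/opentelemetry-collector
  command: ["--config=/etc/otel-collector-config.yaml"]
  volumes:
    # :ro keeps the mount read-only; the trailing z relabels the file on
    # SELinux hosts (CentOS/Fedora) so the container is allowed to read it
    - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:ro,z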
I am using a Kafka docker-compose setup with the docker-compose.yml below, installed in a VMware machine.
When I connect to it via pyspark.streaming.kafka.KafkaUtils, it raises some errors.
Please help me resolve these problems.
I used configuration from https://rmoff.net/2018/08/02/kafka-listeners-explained/
docker-compose.yml file
version: '3.7'
services:
  zookeeper:
    image: "confluentinc/cp-zookeeper:latest"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"
  # This has three listeners you can experiment with.
  # BOB for internal traffic on the Docker network
  # FRED for traffic from the Docker-host machine (`localhost`)
  # ALICE for traffic from outside, reaching the Docker host on the 192.168.231.145
  # Use
  kafka0:
    image: "confluentinc/cp-enterprise-kafka:latest"
    ports:
      - '9092:9092'
      - '29094:29094'
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 0
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://kafka0:9092,LISTENER_ALICE://0.0.0.0:29094
      KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092,LISTENER_ALICE://192.168.231.145:29094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_BOB:PLAINTEXT,LISTENER_FRED:PLAINTEXT,LISTENER_ALICE:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_BOB
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
  kafkacat:
    image: confluentinc/cp-kafkacat
    command: sleep infinity
Python code I used to connect, run from the VMware host machine:
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils
from confluent_kafka import Producer, Consumer
import socket
import json

if __name__ == "__main__":
    sc = SparkContext(appName="Processing_raw_data")
    ssc = StreamingContext(sc, 1)
    in_stream = KafkaUtils.createStream(ssc, "192.168.231.145:2181", socket.gethostname(), {"testing": 1}, {"auto.offset.reset": "smallest"})
    in_stream.pprint()
    ssc.start()
    ssc.awaitTermination()
Errors
21/06/19 19:44:24 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21/06/19 19:44:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
-------------------------------------------
Time: 2021-06-19 19:44:27
-------------------------------------------
21/06/19 19:44:27 WARN AppInfo$: Can't read Kafka version from MANIFEST.MF. Possible cause: java.lang.NullPointerException
[Stage 0:> (0 + 1) / 1]-------------------------------------------
Time: 2021-06-19 19:44:28
-------------------------------------------
21/06/19 19:44:28 WARN ClientUtils$: Fetching topic metadata with correlation id 0 for topics [Set(testing)] from broker [id:0,host:kafka0,port:29092] failed
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
21/06/19 19:44:28 WARN ConsumerFetcherManager$LeaderFinderThread: [dino-computer_dino-computer-1624106667579-fbe9ab2d-leader-finder-thread], Failed to find leader for Set([testing,0])
kafka.common.KafkaException: fetching topic metadata for topics [Set(testing)] from broker [ArrayBuffer(id:0,host:kafka0,port:29092)] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:72)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
... 3 more
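The trace shows what goes wrong: createStream talks to ZooKeeper (192.168.231.145:2181), and ZooKeeper hands back the broker registration kafka0:29092, i.e. the BOB listener, a hostname the VMware host machine cannot resolve. Per the listener comments in the compose file, a client outside the Docker network has to use the ALICE listener, which advertises a reachable address; on the PySpark side that would mean bootstrapping directly against the broker (e.g. KafkaUtils.createDirectStream with {"metadata.broker.list": "192.168.231.145:29094"}) rather than going through ZooKeeper. The mapping to keep in mind, annotated:
environment:
  # pick the listener whose advertised address the client can actually reach:
  #   BOB   -> containers on the Docker network (kafka0:29092)
  #   FRED  -> processes on the Docker host itself (localhost:9092)
  #   ALICE -> machines outside the Docker host (192.168.231.145:29094)
  KAFKA_ADVERTISED_LISTENERS: LISTENER_BOB://kafka0:29092,LISTENER_FRED://localhost:9092,LISTENER_ALICE://192.168.231.145:29094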
In CentOS 7, I'm trying to start two containers with docker-compose when I get this error:
error: container_linux.go:235: starting container process caused keycloak/keycloak-gatekeeper
# ls
docker-compose.yml Dockerfile gatekeeper-be.conf gatekeeper-fe.conf nginx-conf.d README.MD
=================
# cat docker-compose.yml
version: '3.2'
networks:
  network-bo-network:
    driver: "bridge"
    ipam:
      config:
        - subnet: "173.200.1.0/24"
gatekeeper-fe:
  image: keycloak/keycloak-gatekeeper:latest
  command: /keycloak-proxy --config /opt/keycloak-gatekeeper/gatekeeper.conf
  volumes:
    - ./gatekeeper-fe.conf:/opt/keycloak-gatekeeper/gatekeeper.conf
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.3"
network-bo-nginx:
  image: nginx:1.17
  ports:
    - "83:80"
  volumes:
    - ./nginx-conf.d:/etc/nginx/conf.d
  networks:
    network-bo-network:
      ipv4_address: "173.200.1.5"
===========================================
cat gatekeeper-fe.conf
ClientID is the client id
client-id: client-bo-app
## ClientSecret is the secret for AS
client-secret: xxxxxxxxxxxxxxxxxxx
## DiscoveryURL is the url for the keycloak server
discovery-url: https://xxxxxxxxxxxxxxxxxxxx
## SkipOpenIDProviderTLSVerify skips the tls verification for openid provider communication
skip-openid-provider-tls-verify: true
## EnableDefaultDeny indicates we should deny by default all requests
enable-default-deny: true
## EnableRefreshTokens indicate's you wish to ignore using refresh tokens and re-auth on expiration of access token
enable-refresh-tokens: true
## EncryptionKey is the encryption key used to encrypt the refresh token
encryption-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxx
## Listen is the binding interface
listen: :8081
## Upstream is the upstream endpoint i.e whom were proxying to
upstream-url: http://173.200.1.1:8082
## EnableLogging indicates if we should log all the requests
enable-logging: true
## EnableJSONLogging is the logging format
enable-json-logging: true
## PreserveHost preserves the host header of the proxied request in the upstream request
preserve-host: true
## NoRedirects informs we should hand back a 401 not a redirect
no-redirects: true
## AddClaims is a series of claims that should be added to the auth headers
add-claims:
  - email
  - given_name
  - family_name
  - name
## Resources configuration
resources:
  - uri: /api/v1/metadata
    methods:
      - GET
    white-listed: true
==================================================
# docker-compose up
WARNING: Found orphan containers (network-bo-dev_network-bo-postgres_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
network-bo-dev_network-bo-nginx_1 is up-to-date
Creating network-bo-dev_gatekeeper-fe_1 ... error
ERROR: for network-bo-dev_gatekeeper-fe_1 Cannot start service gatekeeper-fe: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"
ERROR: for gatekeeper-fe Cannot start service gatekeeper-fe: oci runtime error: container_linux.go:235: starting container process caused "container init exited prematurely"
ERROR: Encountered errors while bringing up the project.
You should provide a minimal reproducible example (https://stackoverflow.com/help/minimal-reproducible-example); the provided docker-compose doesn't have correct syntax.
A few obvious errors:
- the gatekeeper binary in the image is located at /opt/keycloak-gatekeeper, not /keycloak-proxy, but see the next point
- the used image has entrypoint=/opt/keycloak-gatekeeper, so command just needs the part after the binary, e.g.: --config /opt/keycloak-gatekeeper/gatekeeper.conf
- the first line in gatekeeper-fe.conf should be a comment
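Putting those fixes together, a sketch of the compose file (note also the services: key, which the posted file is missing, so gatekeeper-fe and network-bo-nginx currently sit at the top level):
version: '3.2'
services:
  gatekeeper-fe:
    image: keycloak/keycloak-gatekeeper:latest
    # the image's entrypoint is already the gatekeeper binary,
    # so only its arguments go here
    command: --config /opt/keycloak-gatekeeper/gatekeeper.conf
    volumes:
      - ./gatekeeper-fe.conf:/opt/keycloak-gatekeeper/gatekeeper.conf
    networks:
      network-bo-network:
        ipv4_address: "173.200.1.3"
networks:
  network-bo-network:
    driver: "bridge"
    ipam:
      config:
        - subnet: "173.200.1.0/24"
And the first line of gatekeeper-fe.conf needs a leading ## to be a comment, like every other comment line in that file.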
When starting Keycloak using docker-compose, the 'master' realm is created but the 'jhipster' realm is not. I see the two files jhipster-realm.json and jhipster-users-0.json, and I can import them manually from the Keycloak admin console. From what I remember, a project created a few months back imported the 'jhipster' realm automatically. Did I do something wrong configuring the project?
JHipster version: 6.5.1
Keycloak version: 7.0.0
The keycloak.yml is the default from the generator.
version: '2'
services:
  keycloak:
    image: jboss/keycloak:7.0.0
    command:
      [
        '-b',
        '0.0.0.0',
        '-Dkeycloak.migration.action=import',
        '-Dkeycloak.migration.provider=dir',
        '-Dkeycloak.migration.dir=/opt/jboss/keycloak/realm-config',
        '-Dkeycloak.migration.strategy=OVERWRITE_EXISTING',
        '-Djboss.socket.binding.port-offset=1000',
      ]
    volumes:
      - ./realm-config:/opt/jboss/keycloak/realm-config
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_VENDOR=h2
    ports:
      - 9080:9080
      - 9443:9443
      - 10990:10990
Starting the app with the mvnw command resulted in this error:
Factory method 'clientRegistrationRepository' threw exception; nested exception is java.lang.IllegalArgumentException: Unable to resolve the OpenID Configuration with the provided Issuer of "http://localhost:9080/auth/realms/jhipster"
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:769)
at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:218)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1341)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1187)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:555)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:515)
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:847)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:744)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:391)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at com.ve.EducationApp.main(EducationApp.java:63)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
I had the exact same issue: when I opened a shell in the container, the realm and user files were not there and not getting imported; it seemed to be some mount issue. However, it started working for me when I updated my docker-compose file version from the older '2' to version: '3.7', and it worked like a charm.
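For reference, a sketch of the change, assuming the rest of keycloak.yml stays exactly as generated:
version: '3.7'   # was: version: '2', the only line changed
services:
  keycloak:
    image: jboss/keycloak:7.0.0
    # ...command, volumes, environment and ports unchanged from above...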
I am trying to configure a docker-compose.yml file (I am aware version and services are not stated; they are part of the file) to run a neo4j instance. I am using docker swarm and deploying a stack, i.e. I used the following commands:
docker swarm init
docker stack deploy -c docker-compose.yml neo
note_db:
  image: neo4j:latest
  environment:
    - NEO4J_AUTH=<username>/<password>
    - NEO4J_dbms_mode=CORE
    - NEO4J_ACCEPT_LICENSE_AGREEMENT=yes
    - NEO4J_dbms_connector_http_listen__address=:7474
    - NEO4J_dbms_connector_https_listen__address=:6477
    - NEO4J_dbms_connector_bolt_listen__address=:7687
  ports:
    - "7474:7474"
    - "6477:6477"
    - "7687:7687"
  volumes:
    - type: bind
      source: ~/neo4j/data
      target: /data
    - type: bind
      source: ~/neo4j/logs
      target: /logs
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: "0.1"
        memory: 120M
    restart_policy:
      condition: on-failure
I have omitted the username and password. I am currently only trying to spin up one instance, as I am still testing. I have tried removing NEO4J_AUTH entirely as well as setting NEO4J_AUTH=none, with the same outcome.
The logs provide the following:
org.neo4j.commandline.admin.CommandFailed: initial password was not set because live Neo4j-users were detected., at org.neo4j.commandline.admin.security.SetInitialPasswordCommand.setPassword(SetInitialPasswordCommand.java:83)
command failed: initial password was not set because live Neo4j-users were detected.,
Starting Neo4j.,
2018-09-17 16:12:39.396+0000 INFO ======== Neo4j 3.4.7 ========,
2018-09-17 16:12:41.990+0000 INFO Starting...,
2018-09-17 16:12:43.792+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#70b0b186' was successfully initialized, but failed to start. Please see the attached cause exception "/logs/debug.log (Permission denied)".
In the debug.log file, the only thing I found is:
[o.n.b.s.a.BasicAuthentication] Failed authentication attempt for 'neo4j' (no other failures, errors or warnings).
Clearly I have some sort of auth issue, but I am not sure where the error lies or how to address it. I have attempted NEO4J_AUTH=none and removing the env var completely; it still does not work.
Someone has posted something along the lines of this issue, but they haven't received any responses; I am hoping mine does.
Answer from user logisima:
You don't have any issue with auth; it's a permission issue: cause exception "/logs/debug.log (Permission denied)".
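A sketch of the kind of fix this points at, assuming the bind-mounted host directories simply aren't writable by the user the neo4j process runs as: pre-create them (mkdir -p ~/neo4j/data ~/neo4j/logs) and open up their permissions (e.g. chmod -R 777 ~/neo4j, or chown them to the uid the container uses), or run the service as your own user so the mounts are writable:
note_db:
  image: neo4j:latest
  # hypothetical: run the container as the host user who owns ~/neo4j so
  # /data and /logs are writable; "1000:1000" is a placeholder uid:gid
  user: "1000:1000"
  volumes:
    - type: bind
      source: ~/neo4j/data
      target: /data
    - type: bind
      source: ~/neo4j/logs
      target: /logs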