I have a custom SPI for Keycloak version 15, and I want it deployed automatically every time, i.e. on a fresh deployment or whenever the pod restarts.
I was able to do this using docker-compose, but I can't figure out how to do it in Kubernetes.
version: '3.7'
services:
  keycloak:
    container_name: local_keycloak
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
    image: jboss/keycloak:latest
    ports:
      - "8080:8080"
    restart: unless-stopped
    volumes:
      - type: bind
        source: /home/vipul/Docker/common-keycloak-spi-0.1.jar
        target: /opt/jboss/keycloak/standalone/deployments/common-keycloak-spi-0.1.jar
Any help would be appreciated.
Thanks to all the experts for providing the trigger point. I was able to accomplish this using the Dockerfile below.
FROM jboss/keycloak:latest
ADD common-keycloak-spi-0.1.jar /opt/jboss/keycloak/standalone/deployments/
ENTRYPOINT ["/usr/bin/env"]
CMD ["sh","/opt/jboss/tools/docker-entrypoint.sh"]
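For the Kubernetes side of the question: once the SPI jar is baked into a custom image like the one above, a plain Deployment is enough; the image name and registry below are assumptions, not taken from the question.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          # hypothetical registry/tag for the image built from the Dockerfile above
          image: registry.example.com/keycloak-with-spi:0.1
          ports:
            - containerPort: 8080
          env:
            - name: KEYCLOAK_USER
              value: admin
            - name: KEYCLOAK_PASSWORD
              value: admin

Because the jar is part of the image, every fresh rollout and every pod restart gets the SPI automatically, with no volume mount needed.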
I have built a CRUD application using Spring Boot and MySQL. MySQL runs in Docker, and when I connect from my local machine the application works. But when I deploy the Spring Boot application in Docker as well, it is no longer able to connect to the Docker MySQL.
## Spring application.properties
server.port=8001
# MySQL Props
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto = create
spring.datasource.url=jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:9001}/${MYSQL_DATABASE:test-db}
spring.datasource.username=${MYSQL_USER:admin}
spring.datasource.password=${MYSQL_PASSWORD:nimda}
## Dockerfile
FROM openjdk:11
RUN apt-get update
ADD target/mysql-crud-*.jar mysql-crud.jar
ENTRYPOINT ["java", "-jar", "mysql-crud.jar"]
## docker-compose.yml
version: '3.9'
services:
  dockersql:
    image: mysql:latest
    restart: always
    container_name: dockersql
    ports:
      - "3306:3306"
    env_file: .env
    environment:
      - MYSQL_DATABASE=$MYSQL_DATABASE
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    networks:
      - crud-network
  mycrud:
    depends_on:
      - dockersql
    restart: always
    container_name: mycrud
    env_file: .env
    environment:
      - MYSQL_HOST=dockersql:3306
      - MYSQL_DATABASE=$MYSQL_DATABASE
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
    build: .
    networks:
      - crud-network
networks:
  crud-network:
    driver: bridge
# .env file
MYSQL_DATABASE=test-db
MYSQL_USER=admin
MYSQL_PASSWORD=nimda
MYSQL_ROOT_PASSWORD=nimda
Can anyone help me?
Even better, add a health check for MySQL and make it a startup condition for the Spring Boot service:
dockersql:
  healthcheck:
    test: [ "CMD-SHELL", 'mysql --user=${MYSQL_USER} --database=${MYSQL_DATABASE} --password=${MYSQL_PASSWORD} --execute="SELECT count(table_name) > 0 FROM information_schema.tables;"' ]
mycrud:
  depends_on:
    dockersql:
      condition: service_healthy
The --execute statement can also be modified to run an application-specific health check, for example verifying that a specific table exists.
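A minimal sketch of that variant, where my_table is a placeholder for an application table:

dockersql:
  healthcheck:
    test: [ "CMD-SHELL", 'mysql --user=${MYSQL_USER} --database=${MYSQL_DATABASE} --password=${MYSQL_PASSWORD} --execute="SELECT 1 FROM my_table LIMIT 1;"' ]
    interval: 10s
    timeout: 5s
    retries: 10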
I found out that my Spring Boot application tries to connect to MySQL before MySQL is completely up and running, and that is what causes the error.
After adding
mycrud:
  depends_on:
    - dockersql
  container_name: mycrud
  restart: on-failure
That resolved my issue.
I am having issues adding auth to Solr in a Docker container. I have tried copying the security.json file into the Solr container's $SOLR_HOME folder, but http://localhost:8983/solr/admin/authentication returns this response:
{
  "responseHeader":{
    "status":0,
    "QTime":0},
  "errorMessages":["No authentication configured"]}
security.json:
{
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="},
    "realm":"My Solr users",
    "forwardCredentials": false
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit",
      "role":"admin"}],
    "user-role":{"solr":"admin"}
  }}
I'm mounting the file as a volume in docker-compose.yml:
version: "3"
services:
  index:
    image: solr:8.11.1
    ports:
      - "8983:8983"
    volumes:
      - data:/var/solr
      - ./security/security.json:/opt/solr-8.11.1/server/solr/security.json
    command:
      - solr-precreate
      - archive_poc_core
volumes:
  data:
When I go into the container, the file is there with the expected settings, so I don't think that's the problem. I suspect the file is mounted after Solr has already started, but I'm not sure how to get the security file onto the container beforehand, or what the correct way of doing this is.
Any help, guidance or advice would be appreciated.
Guides I looked at:
https://solr.apache.org/guide/8_1/basic-authentication-plugin.html#enable-basic-authentication
https://solr.apache.org/guide/8_11/authentication-and-authorization-plugins.html#using-security-json-with-solr
I managed to get this working with help from a colleague of mine. We ended up using ZooKeeper to manage the Solr side of things.
docker-compose.yml
version: "3"
services:
  solr1:
    build:
      context: .
      dockerfile: solr.Dockerfile
    container_name: solr1
    ports:
      - "8983:8983"
    volumes:
      - data:/var/solr
    environment:
      - ZK_HOST=zoo1:2181
    depends_on:
      - zoo1
    tty: true
    stdin_open: true
  zoo1:
    tty: true
    image: zookeeper:3.6.2
    container_name: zoo1
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181
volumes:
  data:
solr.Dockerfile:
It copies over the files I needed, like security.json and solr-security.sh, and sets the new entrypoint at build time.
FROM solr:8.11.1
COPY security/security.json security.json
COPY scripts/solr-security.sh /usr/bin/solr-security.sh
ENTRYPOINT ["/usr/bin/solr-security.sh"]
solr-security.sh:
This pushes the authentication configuration to Solr via ZooKeeper; you can find out more here: https://solr.apache.org/guide/8_11/authentication-and-authorization-plugins.html#in-solrcloud-mode
It then starts the default Solr entrypoint once the authentication has been set up.
#!/bin/bash
# Push the security configuration into ZooKeeper before Solr starts
solr zk cp /opt/solr-8.11.1/security.json zk:security.json -z zoo1:2181
# Hand over to the stock Solr entrypoint, forwarding any arguments
exec /opt/docker-solr/scripts/docker-entrypoint.sh "$@"
Everything worked as expected: when browsing to Solr, it showed the login screen. I hope this helps someone else trying to resolve the same issue.
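For anyone verifying the same setup: the credentials hash in the security.json above is the one from the Solr reference guide example, which corresponds to user solr and password SolrRocks, so a quick check might look like this:

# rebuild and start the stack
docker-compose up --build -d
# an unauthenticated request should now be rejected
curl http://localhost:8983/solr/admin/authentication
# an authenticated request should succeed
curl -u solr:SolrRocks http://localhost:8983/solr/admin/authentication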
I am trying my hand at Docker and am a newbie to this technology. Here's an issue I have been stuck on for a long time:
I have an application composed of Django REST Framework, Angular, and MySQL as the database. I am trying to dockerize each of these components and run them with docker-compose.
Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: mysql:5.7.33
    container_name: db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ******
      MYSQL_DATABASE: database
    volumes:
      - mysql_db:/var/lib/mysql
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    container_name: api
    volumes:
      - /api36:/home/api/api36
    ports:
      - "8010:8010"
    depends_on:
      - db
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile
    volumes:
      - /ui:/ui
    container_name: ui
    ports:
      - "4201:4201"
    depends_on:
      - db
volumes:
  mysql_db:
I updated the settings.py file to use db as the HOST in DATABASES, so the database and the API can communicate.
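For reference, a sketch of what that DATABASES change presumably looks like; the credentials are placeholders mirroring the compose file:

# settings.py (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'database',      # MYSQL_DATABASE from docker-compose.yml
        'USER': 'root',
        'PASSWORD': 'changeme',  # masked in the compose file above
        'HOST': 'db',            # the compose service name
        'PORT': '3306',
    }
}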
However, when I load the UI I get a failure: Failed to load resource: net::ERR_NAME_NOT_RESOLVED. I had set serviceUrl to http://api:8010/ in my environment.ts file. Since api is the name of my API service, I expected docker-compose to establish the communication internally, but it looks like I am missing something. I do get the expected behavior when I put the machine's IP in environment.ts instead.
Can someone please help me out on this?
I bring up and run all the containers with docker-compose up.
Thanks in advance!
Hello, I have multiple projects, each with its own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects, I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in this one file. I do not think this is ideal, and it creates a high level of coupling between the services.
version: "3"
services:
  db:
    image: mysql/mysql-server
    ports:
      - 3306:3306
  mongo:
    image: mongo
    restart: always
  rails_app:
    build:
      context: ${RAILS_APP_PATH}
      dockerfile: Dockerfile
    volumes:
      - ${RAILS_APP_PATH}:/application
    ports:
      - 4000:4000
    depends_on:
      - db
      - mongo
    links:
      - db
      - mongo
  frontend:
    build:
      context: ${FRONTEND_PATH}
    ports:
      - ${EXPOSED_PORT}:${EXPOSED_PORT}
    depends_on:
      - go_services
    links:
      - go_services
  go_services:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - db
      - mongo
      - rails_app
    links:
      - db
      - mongo
      - rails_app
The trick is to use an external Docker network.
Set up the network, and the containers can talk to each other by their service names.
First, create the network on the host:
docker network create my-net
First compose file
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
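With both stacks attached to my-net, a container in either project can resolve the other project's services by name. A quick sanity check, where the compose file paths are hypothetical and mymongo is the service name from the first file:

docker network create my-net
docker-compose -f mongo/docker-compose.yml up -d
docker-compose -f ui/docker-compose.yml up -d
# any container on my-net can resolve the other project's service names
docker run --rm --network my-net busybox ping -c 1 mymongo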
You can do this without any special Compose setup, if:
- each project is self-contained (they do not share databases)
- the service locations are configurable via environment variables
- you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]
# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname at which the container can call back to the host. On macOS and Windows hosts, Docker provides the special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
If you're doing development, you can run the service you're working on locally, run its dependencies in containers, and point the environment variable at the container's published port:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
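As a rough sketch of that Kubernetes translation, with all names hypothetical rather than a definitive chart layout, the Go service's chart could expose the Rails URL as a Helm value:

# values.yaml
railsAppUrl: http://rails-app:4000

# templates/deployment.yaml (excerpt)
env:
  - name: RAILS_APP_URL
    value: {{ .Values.railsAppUrl | quote }}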
I have the following configuration in my docker-compose.yml file.
version: '3.3'
services:
  service-1:
    container_name: 'service-1'
    build: './service-1'
    depends_on:
      - 'mongo'
      - 'consul'
    networks:
      backend:
        aliases:
          - service-1
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      frontend:
      backend:
        aliases:
          - service-2
    depends_on:
      - 'mongo'
      - 'consul'
  consul:
    image: 'consul:latest'
    networks:
      backend:
        aliases:
          - consul
  mongo:
    image: 'mongo:latest'
    networks:
      backend:
        aliases:
          - mongo
networks:
  frontend:
  backend:
    internal: true
When my containers start, they are not able to communicate with each other using hostnames.
Most of the containers use the mongo container, but they cannot even reach it, and I get the following error:
Error connecting to mongo : no reachable servers
Please help me solve this problem; I'm stuck.
Thanks.
You've got a lot of unneeded settings in the compose file; here's a stripped-down version that would work just as well:
version: '3.3'
services:
  service-1:
    build: './service-1'
    networks:
      - backend
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      - frontend
      - backend
  consul:
    image: 'consul:latest'
    networks:
      - backend
  mongo:
    image: 'mongo:latest'
    networks:
      - backend
networks:
  frontend:
  backend:
    internal: true
You automatically get an alias matching the service name for each container, so there's no need to duplicate that. You also lose the ability to scale a service if you give it a container name. I'd also recommend moving the build step out of the compose file and using an image name for the apps you're building locally.
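For example, service-1 could be built separately with docker build -t service-1-image ./service-1, and its compose entry would shrink to this (the image name is hypothetical):

service-1:
  image: service-1-image
  networks:
    - backend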
Now for the likely issue: you have depends_on in your compose file. At best, this will not do what you're looking for: it only checks that the other container has been created and started, not that the application inside is ready to serve traffic, and a DB may take time to become available. At worst, you'll get an error that it's unsupported if you try to move this into swarm mode.
Instead of depending on Docker for this, update your application entrypoint to check for its external dependencies and wait a minute or two for them to become available before failing. A very simple example tool for this is wait-for-it, which is written as a bash shell script.
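As a minimal sketch, assuming wait-for-it.sh has been copied into the image next to the application binary (both names here are placeholders):

#!/bin/bash
# entrypoint.sh: wait up to two minutes for mongo, then start the app;
# --strict makes the container fail fast if mongo never becomes reachable
exec ./wait-for-it.sh mongo:27017 --timeout=120 --strict -- ./my-app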