How to persist the default keycloak database in docker? - docker

I have a docker image:
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/keycloak/keycloak 18.0.2 ce57c5afb395 5 months ago 590MB
I run this command:
sudo docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:18.0.2 start-dev --http-relative-path /auth
but the problem is that when I stop the server, the realms and the users that I created are deleted. How can I create a container that keeps them?
How can I have only one container that I can run, instead of a fresh new container being created every time I do docker run?
Is there another command?

Keycloak uses an H2 database by default; you need to persist the database files in a volume and re-use that volume in subsequent docker run invocations.
Create a docker volume for the H2 database files
docker volume create keycloak
Set the correct permissions on the new volume (the keycloak user in the quay.io/keycloak/keycloak:18.0.2 image runs as UID 1000, GID 0):
docker run \
--rm \
--entrypoint chown \
-v keycloak:/keycloak \
alpine -R 1000:0 /keycloak
Use the docker volume to persist the H2 database
docker run \
-p 8080:8080 \
-e KEYCLOAK_ADMIN=admin \
-e KEYCLOAK_ADMIN_PASSWORD=admin \
-v keycloak:/opt/keycloak/data/h2 \
quay.io/keycloak/keycloak:18.0.2 start-dev --http-relative-path /auth
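If retyping all those flags gets tedious, the same setup can be sketched as a Compose file (a hypothetical docker-compose.yml; image, volume, and settings are taken from the commands above, with the volume marked external so the pre-chowned one is reused):

```yaml
# Hypothetical docker-compose.yml sketch matching the commands above.
services:
  keycloak:
    image: quay.io/keycloak/keycloak:18.0.2
    command: start-dev --http-relative-path /auth
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    ports:
      - "8080:8080"
    volumes:
      - keycloak:/opt/keycloak/data/h2

volumes:
  keycloak:
    external: true   # reuse the volume created and chown'ed in the steps above
```

Running docker compose up -d then brings up the same container each time, which also addresses the "only one container" part of the question.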

Related

Migrating A Docker Volume to Podman

I used to have a Docker volume for mariadb, which contained my database. As part of migration from Docker to Podman, I am trying to migrate the db volume as well. The way I tried this is as follows:
1- Copy the content of the named docker volume (/var/lib/docker/volumes/mydb_vol) to a new directory I want to use for Podman volumes (/opt/volumes/mydb_vol)
2- Run Podman run:
podman run --name mariadb-service -v /opt/volumes/mydb_vol:/var/lib/mysql/data:Z -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=mysecret -e MYSQL_DATABASE=wordpress --net host mariadb
This successfully creates a container and initializes the database with the given environment variables. The problem is that the database in the container is empty! I tried changing host mounted volume to /opt/volumes/mydb_vol/_data and container volume to /var/lib/mysql simultaneously and one at a time. None of them worked.
As a matter of fact, when I "podman exec -ti container_digest bash" into the resulting container, I can see that the tables have been mounted successfully in the specified container directories, but the mysql shell says the database is empty!
Any idea how to properly migrate Docker volumes to Podman? Is this even possible?
I solved it by not treating the directory as a docker volume, but instead mounting it into the container:
podman run \
--name mariadb-service \
--mount type=bind,source=/opt/volumes/mydb_vol/data,destination=/var/lib/mysql \
-e MYSQL_USER=wordpress \
-e MYSQL_PASSWORD=mysecret \
-e MYSQL_DATABASE=wordpress \
mariadb

Unable to backup docker volume

I'm following the official docker guide from here to back up a docker volume. I'm also aware of this SO question, however I'm still running into errors. Running the following command:
docker run --rm --volumes-from dbstore -v $(pwd):/backup ny_db_1 tar cvf /backup/backup.tar /dbdata
No matter what image name or container name or container id I put, I get the following error:
Unable to find image 'ny_db_1:latest' locally
The volume I want to backup:
$ docker volume ls
DRIVER VOLUME NAME
local ny_postgres_data
My containers:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39e71e660eda postgres:10.1-alpine "docker-entrypoint.s…" 4 days ago Up 23 minutes 0.0.0.0:5434->5433/tcp ny_db_1
How do I backup my volume?
Update:
I tried the following but ran into a new error:
$ docker run --rm --volumes-from 39e71e660eda -v $(pwd):/backup postgres:10.1-alpine tar:local cvf /backup/backup.tar /dbdata
/usr/local/bin/docker-entrypoint.sh: line 145: exec: tar:local: not found
The docker run syntax is docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]. ny_db_1 is the name of your container, so docker attempts to use an IMAGE called "ny_db_1", which does not exist, hence the error: "Unable to find image 'ny_db_1:latest' locally" (latest is the default :TAG if none is specified).
--volumes-from will mount volumes from the specified container(s) into a new container spawned from IMAGE[:TAG] for example: docker run --rm --volumes-from db -v $(pwd):/backup ubuntu:18.04 tar czvf /backup/backup.tar /dbdata
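What the backup command does inside the container is plain tar, so the mechanics can be tried locally without Docker at all (the /tmp paths below are scratch directories made up for this sketch, not the real volume):

```shell
# Simulate the in-container backup/restore round-trip with scratch directories.
mkdir -p /tmp/dbdata /tmp/backup /tmp/restore
echo "hello" > /tmp/dbdata/row.txt

# What `tar czvf /backup/backup.tar /dbdata` does inside the backup container:
tar czf /tmp/backup/backup.tar -C /tmp dbdata

# What a restore container would run to put the files back:
tar xzf /tmp/backup/backup.tar -C /tmp/restore
cat /tmp/restore/dbdata/row.txt   # prints: hello
```

In the real commands, /dbdata is supplied by --volumes-from and /backup by the -v $(pwd):/backup bind mount.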
Note: if you're backing up a PostgreSQL database then IMHO you'd be better off using the appropriate tools to back up and restore the database, for example:
Backup using pg_dumpall:
docker run --rm \
--name db-backup \
--entrypoint pg_dumpall \
--volume ${PWD}/backup:/backup \
--volumes-from db \
postgres:9 --host /var/run/postgresql --username postgres --clean --oids --file /backup/db.dump
Restore using psql:
docker run --rm -it \
-v ${PWD}/backup:/restore \
--name restore \
postgres:10.1-alpine
docker exec restore psql \
--host /var/run/postgresql \
--username postgres \
--file /restore/db.dump postgres
docker rename restore NEW_NAME
try this command here (note: the argument before tar must be a real image such as busybox, not the container name, and -cjf writes a bzip2 archive):
docker run -it --rm -v ny_postgres_data:/volume -v /tmp:/backup busybox \
tar -cjf /backup/ny_postgres_data.tar.bz2 -C /volume ./

Wordpress Access denied for user root with MySQL container

I'm trying to make a MySQL instance available to other containers. I'm following this MySQL documentation and this official WordPress documentation, but I get this error:
MySQL Connection Error: (1045) Access denied for user 'root'@'172.17.0.3' (using password: YES)
Code for MySQL instance
docker run -d --restart on-failure -v hatchery:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Kerrigan \
-e MYSQL_DATABASE=zerglings --name spawning-pool mysql
Code for WordPress instance
docker run -d --name lair -p 8080:80 --link spawning-pool:mysql wordpress
How can I successfully link wordpress and mysql containers?
You need to pass your database connection credentials to WordPress via environment variables:
docker run -d --name lair -p 8080:80 --link spawning-pool:mysql \
-e WORDPRESS_DB_HOST=mysql \
-e WORDPRESS_DB_NAME=zerglings \
-e WORDPRESS_DB_PASSWORD=zerglings wordpress
I solved it by deleting everything and starting it up again.
docker rm -v spawning-pool # -v Remove the volumes associated with the container
Remove the volume too
docker volume rm hatchery
Then I created the containers again
# create the volume
docker volume create hatchery
# MySQL instance
docker run -it -d --restart on-failure -v hatchery:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=Kerrigan \
-e MYSQL_DATABASE=zerglings --name spawning-pool mysql
# creating wordpress
docker run -d --name lair -p 8080:80 --link spawning-pool:mysql \
-e WORDPRESS_DB_HOST=mysql -e WORDPRESS_DB_NAME=zerglings \
-e WORDPRESS_DB_PASSWORD=Kerrigan wordpress
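As a side note, --link is a legacy feature; on a user-defined network, containers resolve each other by name, and Compose sets such a network up automatically. A hypothetical Compose sketch of the same stack (names and credentials taken from the commands above):

```yaml
# Hypothetical docker-compose.yml sketch; a user-defined network replaces --link.
services:
  spawning-pool:
    image: mysql
    restart: on-failure
    environment:
      MYSQL_ROOT_PASSWORD: Kerrigan
      MYSQL_DATABASE: zerglings
    volumes:
      - hatchery:/var/lib/mysql
  lair:
    image: wordpress
    ports:
      - "8080:80"
    environment:
      WORDPRESS_DB_HOST: spawning-pool   # the service name doubles as the hostname
      WORDPRESS_DB_NAME: zerglings
      WORDPRESS_DB_PASSWORD: Kerrigan

volumes:
  hatchery:
```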

what does docker export dir mean

I am following this tutorial to run MSSQL in Docker. First the user pulls the image:
docker pull microsoft/mssql-server-linux
Second, he does the following:
export DIR=/var/lib/mssql
sudo mkdir $DIR
Finally, he runs:
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v $DIR:/var/opt/mssql \
microsoft/mssql-server-linux
Author explains second step as below
Create a directory on the host that will store data from the container and keep the value in an environment variable for convenience:
Ask:
What did the author mean by that, and what happens if we don't create the directory?
I tried searching for different terms like:
docker container default path
docker file system
but I was not able to understand. Can someone shed some light on this?
So here is the thing. Consider the code below:
export DIR=/var/lib/mssql
sudo mkdir $DIR
I can rewrite it as
sudo mkdir /var/lib/mssql
But I will also have to change my RUN command to
docker run \
-d \
--name mssql \
-e 'ACCEPT_EULA=Y' \
-e 'SA_PASSWORD=' \
-p 1433:1433 \
-v /var/lib/mssql:/var/opt/mssql \
microsoft/mssql-server-linux
Now if you change the directory, you will have to update it in two places. That's why DIR was used.
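The pattern can be seen with a plain shell snippet (the /tmp path is made up for this sketch; no Docker needed):

```shell
# Keep the host path in one variable so mkdir and the -v flag stay in sync.
DIR=/tmp/mssql-data            # hypothetical path; change it in one place only
mkdir -p "$DIR"
echo "-v $DIR:/var/opt/mssql"  # the flag you would pass to docker run
```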
If you remove the following from your docker run:
-v /var/lib/mssql:/var/opt/mssql \
then the data of your DB will be stored inside the container at /var/opt/mssql, and it will only exist for the lifetime of that container. The next time you launch a fresh container, the DB will be blank.
That is why you map it to a directory on the host: when you restart the container or launch a new one, that directory's content is made available inside the container, and the DB keeps all the changes you made.

How to store my docker registry in the file system

I want to setup a private registry behind a nginx server. To do that I configured nginx with a basic auth and started a docker container like this:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/home/example/registry \
-p 5000:5000 \
registry
By doing that, I can log in to my registry and push/pull images... But if I stop the container and start it again, everything is lost. I would have expected my registry to be saved in /home/example/registry but this is not the case. Can someone tell me what I missed?
I would have expected my registry to be saved in /home/example/registry but this is not the case
It is the case; only, the /home/example/registry directory is on the Docker container's file system, not the Docker host's file system.
If you run your container mounting one of your Docker host directories as a volume in the container, it will achieve what you want:
docker run -d \
-e STANDALONE=true \
-e INDEX_ENDPOINT=https://docker.example.com \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-p 5000:5000 \
-v /home/example/registry:/registry \
registry
Just make sure that /home/example/registry exists on the Docker host side.
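A small shell sketch of that last step (the host path here is made up for illustration; the container path matches the STORAGE_PATH in the run command above):

```shell
# Build the -v flag from variables so host and container paths stay in sync.
HOST_DIR=/tmp/example-registry   # hypothetical host-side path for this sketch
CONTAINER_DIR=/registry          # matches STORAGE_PATH inside the container
mkdir -p "$HOST_DIR"             # the directory must exist on the host
echo "-v $HOST_DIR:$CONTAINER_DIR"
```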
