Containerizing a CorDapp with a Docker Image and Docker Compose

When running Corda in Docker with an external PostgreSQL database configuration, I get an "insufficient privileges to access" error.
Note:
Corda: 4.6
PostgreSQL: 9.6
Docker Engine: 20.10.6
Docker Compose: 1.29.1, build c34c88b2
docker-compose.yml file:
version: '3.3'
services:
  partyadb:
    hostname: partyadb
    container_name: partyadb
    image: "postgres:9.6"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: partyadb
    ports:
      - 5432
  partya:
    hostname: partya
    # image: corda/corda-zulu-java1.8-4.7:RELEASE
    image: corda/corda-zulu-java1.8-4.6:latest
    container_name: partya
    ports:
      - 10006
      - 2223
    command: /bin/bash -c "java -jar /opt/corda/bin/corda.jar run-migration-scripts -f /etc/corda/node.conf --core-schemas --app-schemas && /opt/corda/bin/run-corda"
    volumes:
      - ./partya/node.conf:/etc/corda/node.conf:ro
      - ./partya/certificates:/opt/corda/certificates:ro
      - ./partya/persistence.mv.db:/opt/corda/persistence/persistence.mv.db:rw
      - ./partya/persistence.trace.db:/opt/corda/persistence/persistence.trace.db:rw
      # - ./partya/logs:/opt/corda/logs:rw
      - ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
      - ./shared/cordapps:/opt/corda/cordapps:rw
      - ./shared/drivers:/opt/corda/drivers:ro
      - ./shared/network-parameters:/opt/corda/network-parameters:rw
    environment:
      - ACCEPT_LICENSE=${ACCEPT_LICENSE}
    depends_on:
      - partyadb
Error:
[ERROR] 12:41:24+0000 [main] internal.NodeStartupLogging. - Exception during node startup. Corda started with insufficient privileges to access /opt/corda/additional-node-infos/nodeInfo-5B........................................47D

The corda/corda-zulu-java1.8-4.6:latest image runs under the user corda, not root. This user has uid 1000 and belongs to a group called corda with gid 1000:
corda@5bb6f196a682:~$ id -u corda
1000
corda@5bb6f196a682:~$ groups corda
corda : corda
corda@5bb6f196a682:~$ id -G corda
1000
The problem here seems to be that the file you are mounting into the docker container (./shared/additional-node-infos/nodeInfo-5B) does not have permissions set up in such a way as to allow this user to access it. I'm assuming the user needs read and write access. A very simple fix would be to give others read and write access to this file:
$ chmod o+rw ./shared/additional-node-infos/nodeInfo-5B
There are plenty of other ways to manage this kind of permissions issue in docker, but remember that the permissions are based on uid/gid which usually do not map nicely from your host machine into the docker container.
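For example, a minimal sketch of matching ownership instead (assuming the container's corda user, uid/gid 1000 per the id output above, should own the shared node-info files):

# hand the files to uid/gid 1000, the corda user inside the image
sudo chown -R 1000:1000 ./shared/additional-node-infos

# verify the numeric ownership from the host
ls -ln ./shared/additional-node-infos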

So the error itself describes that it's a permission problem.
I don't know if you crafted this compose file yourself; you may want to take a look at generating one with the Dockerform task (https://docs.corda.net/docs/corda-os/4.8/generating-a-node.html#use-cordform-and-dockerform-to-create-a-set-of-local-nodes-automatically).
This permission problem could be that you're setting only read / write within the container:
- ./shared/additional-node-infos:/opt/corda/additional-node-infos:rw
or it could be that you need to change the permissions on the shared folder. Try changing the permissions of shared to 777 and see if that works (a sketch below), then restrict your way back down to permissions you're comfortable with.
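A sketch of that progression (assuming the compose file sits next to the shared directory):

# open everything up to confirm permissions are the problem
chmod -R 777 ./shared

# once the node starts cleanly, walk permissions back down; capital X
# keeps directories traversable while files stay non-executable
chmod -R u+rwX,g+rwX,o+rX ./shared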

I just configured the image to run as root. This works but may not be safe. Simply add

services:
  cordaNode:
    user: root

to the service configuration.
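Applied to the compose file from the question, that would look like the following sketch (partya being the service defined above; everything else stays unchanged):

services:
  partya:
    image: corda/corda-zulu-java1.8-4.6:latest
    # run the container as root so the mounted node-info files are readable;
    # convenient, but weaker isolation than the default corda user
    user: root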
Ref: How to configure docker-compose.yml to up a container as root

Related

Docker volume mariadb has root permission

I stumbled across a problem with Docker volumes while starting Docker containers with a Docker Compose file (MariaDB, RabbitMQ, Maven). I start them simply with docker-compose up -d (WITHOUT sudo).
My volumes are defined like this:

...
volumes:
  - ./production/mysql:/var/lib/mysql:z
...
Everything is working fine and the ./production directory is created (this is where the volumes are mapped).
But when I try to restart the containers with down/up, I get the following error:
error checking context: 'no permission to read from '…/production/mysql/aria_log.00000001'
When I checked the mentioned file I saw that it is owned by root:root. This is because the file is generated by the root user inside the container. So I tried to use user namespaces as mentioned in the docs.
Anyway the error still occurs. Any ideas or references?
Thanks.
Docker Compose File:
version: '3.8'
services:
  mysql:
    image: mariadb:latest
    restart: always
    env_file:
      - config.env
    volumes:
      - ./production/mysql:/var/lib/mysql:z
    environment:
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${DATABASE_PASSWORD}
    networks:
      - testnetwork
networks:
  testnetwork:
The issue comes from the mapping between the host user/group IDs and the ones inside the container. One of the solutions is to use a named volume and avoid all this hassle, but you can also do the following:
Add user: ${UID}:${GID} to your service inside the docker-compose file.
Run UID=$(id -u) GID=$(id -g) docker-compose up. This way you make sure that the user in the container has the same UID/GID as the user on the host, and files created in the container will have the proper permissions. A sketch of both pieces follows below.
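A minimal sketch of both pieces together (using the mysql service from the question; the env(1) prefix is a workaround because bash marks UID as a readonly shell variable):

services:
  mysql:
    image: mariadb:latest
    # run the container process as the host user so files written to the
    # bind mount are owned by the host user instead of root
    user: ${UID}:${GID}
    volumes:
      - ./production/mysql:/var/lib/mysql:z

env UID=$(id -u) GID=$(id -g) docker-compose up -d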
NOTE: Docker for Mac (using the osxfs driver) does this behind the scenes and you don't need to worry about users and groups.
Running the Docker daemon as a non-root user (rootless mode) can also be helpful for your purpose; see Docker's rootless mode documentation, sketched below.
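A sketch of the rootless setup (assuming Docker 20.10+ with the docker-ce-rootless-extras package installed):

# set up a rootless daemon for the current, non-root user
dockerd-rootless-setuptool.sh install

# point the client at the rootless daemon's socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
docker ps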

How to Add a shared folder location to my application (Docker)

I have a shared network folder, e.g.
\\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp
There is a file in the shared folder that I would like to be visible to my dockerized application. The ultimate goal is, to have my application pick up and process this file, then put it into a database.
I have a Dockerfile and a docker-compose.yml, and I am thinking I will need to add a volume with the shared folder location (I'm not sure if this is the correct approach; this is where I need help!).
So far I've tried adding a volume in my yml, which threw an error when I ran docker-compose up -d:
airflow:
  build: ./airflow
  image: digitalImage/airflow
  container_name: di-airflow
  environment:
    AIRFLOW__CORE__EXECUTOR: 'LocalExecutor'
    POSTGRES_USER: 'airflowStuff'
    POSTGRES_PASSWORD: 'postgresCreds'
    POSTGRES_HOST: 'host-postgres'
    POSTGRES_PORT: '5432'
    POSTGRES_DB: 'postgres-db'
    DATE_VALUE: '1 DEC 2020 00:00:00'
  volumes:
    - ./airflow/released_dags:/usr/local/airflow/dags
    - \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp:/usr/local/airflow/dags/inboundFiles
  networks:
    - di-airflowStuff
  ports:
    - 8081:8080
  depends_on:
    - postgres
ERROR: Cannot create container for service airflow: \pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp%! (EXTRA string=is not a valid Windows path)
p.s. I can access this shared folder location from my file explorer and python without a problem.
You don't need docker-compose to mount an external volume to your container, just configure it when running the container:
docker run --name name -v path_host:path_in_container image:tag
Note that both directories must exist.
Microsoft recommends mapping shares to network drives (if you're running Docker on Windows):
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts
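A sketch of that approach (Z: is a hypothetical drive letter, and the mapped drive must also be shared with Docker Desktop in its settings):

net use Z: \\pa02ptsdfs002.corp.lgd.afk\files\Public\chris\temp

Then reference the drive path instead of the UNC path in the compose volume:

volumes:
  - Z:/:/usr/local/airflow/dags/inboundFiles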

Docker persisted volume has no permissions (Apache Solr)

My docker-compose.yml:
solr:
  image: solr:8.6.2
  container_name: myproject-solr
  ports:
    - "8983:8983"
  volumes:
    - ./data/solr:/var/solr/data
  networks:
    static-network:
      ipv4_address: 172.20.1.42
After bringing up the containers with docker-compose up -d --build, the Solr container is down and the log (docker logs myproject-solr) shows this:
Copying solr.xml
cp: cannot create regular file '/var/solr/data/solr.xml': Permission denied
I've noticed that if I give full permissions to the data directory on my machine (sudo chmod -R 777 ./data/solr/) and run Docker again, everything is fine.
I guess the issue arises because the solr user in the container is not my machine's user, and Docker creates the data/solr folder as root:root. Since my ./data folder is gitignored, I cannot commit these folder permissions.
I'd like to know a workaround to manage permissions properly with the purpose of persisting data.
It's a known "issue" with docker-compose: all files created by Docker engine are owned by root:root. Usually it's solved in one of the two ways:
Create the volume in advance. In your case, you can create the ./data/solr directory in advance, with appropriate permissions. You might make it accessible to anyone, or, better, change its owner to the solr user. The solr user and group ids are hardcoded inside the solr image: 8983 (Dockerfile.template)
mkdir -p ./data/solr
sudo chown 8983:8983 ./data/solr
If you want to avoid running additional commands before docker-compose, you can create an additional container which fixes the permissions:

version: "3"
services:
  initializer:
    image: alpine
    container_name: solr-initializer
    restart: "no"
    entrypoint: |
      /bin/sh -c "chown 8983:8983 /solr"
    volumes:
      - ./data/solr:/solr
  solr:
    depends_on:
      - initializer
    image: solr:8.6.2
    container_name: myproject-solr
    ports:
      - "8983:8983"
    volumes:
      - ./data/solr:/var/solr/data
    networks:
      static-network:
        ipv4_address: 172.20.1.42
There is a docker-compose-only solution :)
Problem
Docker mounts local folders with root permissions.
In Solr's docker image the default user is solr, for a good reason: Solr commands should be run with this user (you can force them to run as root, but that is not recommended).
Most Solr commands require write permissions to /var/solr/, for data and logs storage.
In this context, when you run a solr command as the solr user, you are rejected because you don't have write permission to /var/solr/.
Solution
What you can do is first start the container as root to change the permissions of /var/solr/, then switch to the solr user to run all necessary Solr commands, and then start your Solr server.
In the example below, we use solr-precreate to create a default core and start solr.
version: '3.7'
services:
  solr:
    image: solr:8.5.2
    volumes:
      - ./mnt/solr:/var/solr
    ports:
      - 8983:8983
    user: root # run as root to change the permissions of the solr folder
    # Change permissions of the solr folder, create a default core and start solr as the solr user
    command: bash -c "
      chown -R 8983:8983 /var/solr
      && runuser -u solr -- solr-precreate default-core"
Set with a Dockerfile
It's possibly not exactly what you wanted as the files aren't persisted when rebuilding the container, but it solves the 'rights' problem. Copy the files over and chown them with a Dockerfile:
FROM solr:8.7.0
COPY --chown=solr ./data /var/solr/data
This is more useful if you're trying to initialise a single core:
FROM solr:8.7.0
COPY --chown=solr ./core /var/solr/data/someCollection
It also has the advantage that you can create an image for reuse.
With a named volume
For persistence, you can also create a volume (in this case core) and copy the contents of a directory (also called core here), assigning the rights to the files on the way:
docker container create --name temp -v core:/data tianon/true || exit $?
tar -cf - --directory core --owner 8983 --group 8983 . | docker cp - temp:/data
docker rm temp
This was adapted from these answers:
https://github.com/moby/moby/issues/25245#issuecomment-365980572
https://stackoverflow.com/a/52446394
Then you can mount the named volume in your Docker Compose file:
version: '3'
services:
  solr:
    image: solr:8.7.0
    networks:
      - internal
    ports:
      - 8983:8983
    volumes:
      - core:/var/solr/data/someCollection
volumes:
  core:
    external: true
This solution persists the data without overriding the data on the host, it doesn't need the extra build step, and it can obviously be adapted for mounting the entire /var/solr/data folder.
It doesn't seem to matter that the mounted volume/directory doesn't have the correct rights (/var/solr/data/someCollection has owner root:root).

Keycloak 8: User with username 'admin' already added

I cannot start the Keycloak container using Ansible and docker-compose. I'm getting the error: User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
I have 3 Ansible jobs:
Create network:

- name: Create a internal network
  docker_network:
    name: internal
Set up Postgres:

- name: "Install Postgres"
  docker_compose:
    project_name: posgressdb
    restarted: true
    pull: yes
    definition:
      version: '2'
      services:
        postgres:
          image: postgres:12.1
          container_name: postgres
          restart: always
          env_file:
            - /etc/app/db.env
          networks:
            - internal
          volumes:
            - postgres-data:/var/lib/postgresql/data
            - /etc/app/createdb.sh:/docker-entrypoint-initdb.d/init-app-db.sh
          ports:
            - "5432:5432"
      volumes:
        postgres-data:
      networks:
        internal:
          external:
            name: internal
Create the Keycloak container:

- name: Install keycloak
  docker_compose:
    project_name: appauth
    restarted: true
    pull: yes
    definition:
      version: '2'
      services:
        keycloak:
          image: jboss/keycloak:8.0.1
          container_name: keycloak
          restart: always
          environment:
            - DB_VENDOR=POSTGRES
            - DB_ADDR=postgres
            - DB_PORT=5432
            - DB_SCHEMA=public
            - DB_DATABASE=keycloak
            - DB_USER=keycloak
            - DB_PASSWORD=keycloak
            - KEYCLOAK_USER=admin
            - KEYCLOAK_PASSWORD=admin
          networks:
            - internal
      networks:
        internal:
          external:
            name: internal
Does anyone have any idea why I get this error?
EDIT
If I downgrade keycloak to version 7 it starts normally!
Just to clarify the other answers: I had the same issue. What helped for me was:

stop all containers
comment out the two relevant lines

version: "3"
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      # KEYCLOAK_USER: admin
      # KEYCLOAK_PASSWORD: pass
...

start all containers
wait until the keycloak container has successfully started
stop all containers, again
comment the two lines from above back in

version: "3"
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    environment:
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: pass
...

start all containers

This time (and subsequent times) it worked. Keycloak was running and the admin user was registered and working as expected.
This happens when Keycloak is interrupted during boot. After this, the command which attempts to add the admin user starts to fail. In Keycloak 7 this wasn't fatal, but in 8.0.1 this line was added to /opt/jboss/tools/docker-entrypoint.sh, which aborts the entire startup script:
set -eou pipefail
Related issue: https://issues.redhat.com/browse/KEYCLOAK-12896
The reason commenting out the KEYCLOAK_USER lines works is that it forces a recreation of the container. The same can be accomplished with:
docker rm -f keycloak
docker compose up keycloak
I had the same issue. After commenting out the KEYCLOAK_USER env variables in docker-compose and updating the stack, the container started again.
docker_compose:
  project_name: appauth
  restarted: true
  pull: yes
  definition:
    version: '2'
    services:
      keycloak:
        image: jboss/keycloak:8.0.1
        container_name: keycloak
        restart: always
        environment:
          - DB_VENDOR=POSTGRES
          - DB_ADDR=postgres
          - DB_PORT=5432
          - DB_SCHEMA=public
          - DB_DATABASE=keycloak
          - DB_USER=keycloak
          - DB_PASSWORD=keycloak
          #- KEYCLOAK_USER=admin
          #- KEYCLOAK_PASSWORD=admin
        networks:
          - internal
    networks:
      internal:
        external:
          name: internal
According to my findings, the best way to set this default user is NOT by adding it via environment variables, but via the following command:
docker exec <CONTAINER> /opt/jboss/keycloak/bin/add-user-keycloak.sh -u <USERNAME> -p <PASSWORD>
As per the official documentation.
I use Keycloak 12 where I still see this problem when the startup is interrupted. I could see that removing the file "keycloak-add-user.json" and restarting the container works.
The idea is to integrate this logic into the container startup, so I developed a simple custom entrypoint script:

#!/bin/bash
set -e

echo "executing the custom entry point script"
FILE=/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json
if [ -f "$FILE" ]; then
    echo "keycloak-add-user.json exists, hence deleting it"
    rm "$FILE"
fi
echo "executing the entry point script from original image"
source "/opt/jboss/tools/docker-entrypoint.sh"
I then rebuilt the Keycloak image during the initial deployment, with the appropriate adaptation of the ENTRYPOINT in the Dockerfile:
ARG DEFAULT_IMAGE_BASEURL_APPS
FROM "${DEFAULT_IMAGE_BASEURL_APPS}/jboss/keycloak:12.0.1"
COPY custom-entrypoint.sh /opt/jboss/tools/custom-entrypoint.sh
ENTRYPOINT [ "/opt/jboss/tools/custom-entrypoint.sh" ]
As our deployment is on-premise, access for the development team is not that easy. All our first-line support could do was restart the server where we deployed. Hence the idea of this workaround.
The way I got past this was to replace set -eou pipefail with # set -eou pipefail within the container file systems.
I logged in as root on my docker host and then edited each of the files returned by this search:
find /var/lib/docker/overlay2 | grep /opt/jboss/tools/docker-entrypoint.sh
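A sketch automating that edit (same overlay2 location as the search above; note these files belong to the image layers, so edit at your own risk):

for f in $(find /var/lib/docker/overlay2 -path '*/opt/jboss/tools/docker-entrypoint.sh'); do
    # comment out the strict-mode line that aborts the startup script
    sed -i 's/^set -eou pipefail/# set -eou pipefail/' "$f"
done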
Thomas's solution is good, but restarting all containers and starting again is impractical because my docker-compose file has 7 services.
I resolved the issue in two steps.
First I commented out these two lines, like other fellows did:

#- KEYCLOAK_USER=admin
#- KEYCLOAK_PASSWORD=admin

Then in a new terminal I ran this command, and it worked:

docker-compose up keycloak

(keycloak is the service name.)
For other users with this problem where none of the previous answers have helped: check your connection to the database. This error usually appears if Keycloak cannot connect to the database.
Tested with Keycloak 8 in Docker.
I have tried the solution by Thomas, but it sometimes works and sometimes does not.
The issue is that on boot Keycloak does not find the db it requires, so it gets interrupted as Zmey mentions. Have you tried adding depends_on: - postgres to the second ansible job (see the sketch below)?
Having the same issue but with docker-compose, I first started the postgres container on its own in order to create the necessary dbs (a manual step: docker-compose up postgres) and then booted the entire setup with docker-compose up.
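A sketch of that depends_on suggestion (note that depends_on can only reference services in the same compose definition, so this assumes postgres and keycloak are merged into one project; it also only orders startup and does not wait for Postgres to accept connections):

services:
  postgres:
    image: postgres:12.1
  keycloak:
    image: jboss/keycloak:8.0.1
    depends_on:
      - postgres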
This was happening to me when I used to shut down the Keycloak containers in Portainer and tried to get them up and running again.
I can prevent the error by also removing the container after I've shut it down (both in Portainer) and then running docker-compose up. Make sure not to remove any volumes attached to your containers, or else you may lose data.
In case you want to add the user before the server starts, or want it to look like a classic migration, build a custom image with the admin parameters passed in:
FROM quay.io/keycloak/keycloak:latest
ARG ADMIN_USERNAME
ARG ADMIN_PASSWORD
RUN /opt/jboss/keycloak/bin/add-user-keycloak.sh -u $ADMIN_USERNAME -p $ADMIN_PASSWORD
docker-compose:
auth_service:
  build:
    context: .
    dockerfile: Dockerfile
    args:
      ADMIN_USERNAME: ${KEYCLOAK_USERNAME}
      ADMIN_PASSWORD: ${KEYCLOAK_PASSWORD}
(do not add KEYCLOAK_USERNAME/KEYCLOAK_PASSWORD to the environment section)
I was facing this issue with Keycloak "jboss/keycloak:11.0.3" running in Docker, error:
User with username 'admin' already added to '/opt/jboss/keycloak/standalone/configuration/keycloak-add-user.json'
Additional info: it was running with PostgreSQL v13.2, also in Docker. I had created some schemas for other resources, but I wasn't creating the schema for Keycloak, so the solution in my case was to run the create schema command in Postgres:
CREATE SCHEMA IF NOT EXISTS keycloak AUTHORIZATION postgres;
NOTE: Hope this helps; none of the other solutions shared in this post solved my issue.
You can also stop the containers and simply remove the associated volumes.
If you don't know which volume is associated with your keycloak container, run:

docker-compose down
for vol in $(docker volume ls --format '{{.Name}}'); do
    # note: this removes ALL volumes on the host, not only keycloak's
    docker volume rm $vol
done

Docker for Mac and mkdir permissions

Using the Docker for Mac app. Just installed everything yesterday. Finally got the app going.
But I can't run migrations till I install PostGIS. So I swapped the official postgres Docker Hub image for the postgis:11-alpine image. But I keep getting a permission denied issue when Docker tries to mkdir for the pg_data volume.
docker-compose.yml:
version: '3'
# Containers we are going to run
services:
  # Our Phoenix container
  phoenix:
    # The build parameters for this container
    build:
      # Here we define that it should build from the current directory
      context: .
    environment:
      # Variables to connect to our Postgres server
      PGUSER: postgres
      PGPASSWORD: postgres
      PGDATABASE: gametime_dev
      PGPORT: 5432
      # Hostname of our Postgres container
      PGHOST: db
    ports:
      # Mapping the port to make the Phoenix app accessible outside of the container
      - "4000:4000"
    depends_on:
      # The db container needs to be started before we start this container
      - db
      - redis
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
    sysctls:
      net.core.somaxconn: 1024
  db:
    # We use the predefined Postgres image
    image: mdillon/postgis:11-alpine
    environment:
      # Set user/password for Postgres
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      # Set a path where Postgres should store the data
      PGDATA: /var/lib/postgresql/data/pgdata
    restart: always
    volumes:
      - pgdata:/usr/local/var/postgres_data
# Define the volumes
volumes:
  pgdata:
Error I'm getting:
db_1 | mkdir: can't create directory '/var/lib/postgresql/data/pgdata': Permission denied
This does not happen though when using the postgres (official) image. I have googled high and low. I did read something about Docker for Mac running commands on the containers it creates in a VM as the current user's localhost user and not root. But that doesn't make sense to me; how do I get around this, if that's the case?
Extra note: I did try the :z and :Z options; I still got the exact same error as above.
Appreciate the time, in advance.
Your environment variables for the db service state that PGDATA is in /var/lib/postgresql/data/pgdata but you are mounting a pgdata volume in the container at /usr/local/var/postgres_data.
My guess is that when postgres starts, it is looking at the env vars and expecting a dir at /var/lib/postgresql/data/pgdata. Since it probably does not exist, it is trying to create it as the postgres user, which does not have the rights to do so.
Use the same path for both vars and I'm quite sure it will fix the error; a sketch of the corrected service follows.
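A minimal sketch of the fix (paths taken from the question; mounting the named volume at PGDATA's parent directory, which is the pattern the official postgres image expects):

db:
  image: mdillon/postgis:11-alpine
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
    # Postgres will initialise its cluster in this directory...
    PGDATA: /var/lib/postgresql/data/pgdata
  restart: always
  volumes:
    # ...so mount the named volume at the parent, /var/lib/postgresql/data
    - pgdata:/var/lib/postgresql/data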
