Dockerfile file not found in keycloak docker image - docker

I recently tried to clone our production code into a local setup; this same code is running in production.
The Dockerfile looks like:
FROM jboss/keycloak
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]
I am able to build the Docker image successfully, but when I try to run it I get this error:
Caused by: java.io.FileNotFoundException: km.json (No such file or directory)
Repo structure
km/keycloak-images/km.json
km/keycloak-images/DockerFile
km/keycloak-images/entrypoint.sh
Docker compose file structure
/km/docker-compose.yml
/km/docker-compose.dev.yml
The docker-compose.dev.yml looks like
version: '3'
# The only service we expose in local dev is the keycloak server
# running an h2 database.
services:
  keycloak:
    build: keycloak-image
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
I run the command from /km
docker-compose -f docker-compose.dev.yml up --build

Basically I am not able to find the file inside the Docker container when I check:
$ docker run --rm -it <imageName> /bin/bash  # run the image and get a shell inside the container
cd /opt/jboss  # check whether km.json is there or not
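Because the image overrides the ENTRYPOINT, a quick non-interactive check is also possible (a sketch using the dt-keycloak tag from the compose file above):
# list /opt/jboss in a throwaway container without running entrypoint.sh
docker run --rm --entrypoint ls dt-keycloak -l /opt/jboss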
Edited: basically the source path in COPY (km.json) was the problem; making it an explicit relative path fixed it.
FROM jboss/keycloak
# changed this line
COPY ./km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
USER root
RUN chown jboss /opt/jboss/entrypoint.sh && chmod +x /opt/jboss/entrypoint.sh
USER 1000
ENTRYPOINT ["/opt/jboss/entrypoint.sh"]
CMD [""]

Your copy operation is wrong
If you build from /km, you probably need to change the COPY line to
COPY keycloak-images/km.json /opt/jboss
If you are running on a Mac, you can also try ADD instead of COPY, since the Mac has had issues with COPY.
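For example, built by hand from /km, the two context choices look like this (a sketch; DockerFile is the file name from the repo layout above):
# context is the subdirectory, so the original COPY km.json path works
docker build -t dt-keycloak -f keycloak-images/DockerFile keycloak-images
# context is /km itself, so COPY must use keycloak-images/km.json
docker build -t dt-keycloak -f keycloak-images/DockerFile .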

Try with this compose file:
version: '3'
services:
  keycloak:
    build:
      context: ./keycloak-images
    image: dt-keycloak
    environment:
      DB_VENDOR: h2
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: password
      KEYCLOAK_HOSTNAME: localhost
    ports:
      - 8080:8080
You have to specify the docker build context so that the files you need to copy are passed to the daemon.
Note that you need to adapt this context path if you do not execute docker-compose from the km directory. This is because in your Dockerfile you have specified
COPY km.json /opt/jboss
COPY entrypoint.sh /opt/jboss
meaning that the build context sent to the Docker daemon must be a directory containing those files.
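Concretely, the COPY source is always resolved relative to that context (a sketch based on the repo layout above):
# with context: ./keycloak-images, the files sit at the root of the context
COPY km.json /opt/jboss
# with context: . (the km directory), the path must include the subdirectory
COPY keycloak-images/km.json /opt/jboss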

Related

Preventing Docker Compose container from creating files as root

Dockerfile:
FROM node:18.13.0
ENV WORK_DIR=/app
RUN mkdir -p ${WORK_DIR}
WORKDIR ${WORK_DIR}
RUN mkdir ${WORK_DIR}/data
RUN chmod -R 755 ${WORK_DIR}/data
COPY package*.json ./
RUN npm ci
COPY . .
docker-compose.yml:
version: '3.8'
services:
  fetch:
    container_name: fetch
    build: .
    command: sh -c "npx prisma migrate deploy && npm start"
    restart: unless-stopped
    depends_on:
      - postgres
    volumes:
      - ./data:/app/data:z
The container fetches new files and saves them into a directory configured by the app running in the container, defaulting to data/. The issue is that they're all created as root and cannot be manipulated by the host. If I chown the dir on the host, it works, but any new files are then created as root again.
I've tried a couple different variations of creating a new user in Dockerfile and passing host user info into the compose file but it always seems to result in a disconnect between the Dockerfile and compose file. I'm trying to keep things as easy as docker compose up, if possible.
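One common pattern for the "pass host user info" idea mentioned above is to run the service as the host user's numeric UID/GID, so files written to the bind mount end up owned by the host user. A sketch only, assuming UID and GID are supplied via the environment or an .env file:
services:
  fetch:
    build: .
    # assumption: UID/GID provided on the host, e.g. UID=1000, GID=1000
    user: "${UID:-1000}:${GID:-1000}"
    volumes:
      - ./data:/app/data:z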

How to run only specific command as root and other commands with default user in docker-compose

This is my Dockerfile.
FROM python:3.8.12-slim-bullseye as prod-env
RUN apt-get update && apt-get install unzip vim -y
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt
USER nobody:nogroup
This is what docker-compose.yml looks like:
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
I want to add read, write, and execute permissions on the shared directories.
I also need to run a couple of other commands as root.
So I have to execute this command as root every time after the image is built:
docker exec -it -u root api_server_1 bash -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
Now, I want docker-compose to execute these lines.
But as you can see, the user in docker-compose has to be nobody, as specified in the Dockerfile. So how can I execute root commands from the docker-compose file?
One option I've been considering:
Install sudo in the Dockerfile and use sudo
Is there a better way?
In docker-compose.yml, create another service using the same image and volumes.
For this new service, override the user with user: root:root and the command with your command to run as root, and add a dependency so the new service runs before the regular working container starts.
api_server:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  ports:
    - 8200:8200
  command: gunicorn -b 0.0.0.0:8200 --threads "8" --log-level info --reload "server:gunicorn_app(command='start', project='app_server')"
  # This makes sure that the startup order is correct and the api_server_decorator service starts first
  depends_on:
    - api_server_decorator
api_server_decorator:
  build:
    context: .
    target: prod-env
  image: company/server
  volumes:
    - ./shared/model_server/models:/models
    - ./static/images:/images
  # No ports needed - it is only a decorator
  # Overriding USER with root:root
  user: "root:root"
  # Overriding the command; a shell is needed so the ';' separators actually work
  command: sh -c "python copy_stuffs.py; chmod -R a+rwx models; chmod -R a+rwx /images"
There are other possibilities, like removing the USER restriction from the Dockerfile and using an entrypoint script that does the privileged work as root and then drops to the unprivileged user with su - nobody, or better exec gosu, to retain PID 1 and proper signal handling.
In my eyes, giving a container root rights is quite hacky and dangerous.
If you want to, e.g., remove the files written by the container, you need root rights on the host as well.
If you want to allow a container to access files on the host filesystem, just run the container with an appropriate user:
api_server:
  user: my_docker_user:my_docker_group
Then, on the host, give that user and group the rights:
sudo chown -R my_docker_user:my_docker_group models
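Note that a user name given to user: has to exist inside the image; numeric IDs avoid that requirement (a sketch, with 1000:1000 as an assumed host UID/GID):
# docker-compose.yml fragment
api_server:
  user: "1000:1000"
# on the host, match the same IDs
sudo chown -R 1000:1000 models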
You should build all of the content you need into the image itself, especially since you have this use case of occasionally needing to run a process to update it (you are not trying to use an isolation tool like Docker to simulate a local development environment). In your Dockerfile, COPY these directories into the image:
COPY shared/model_server/models /models
COPY static/images /images
Do not make these directories writeable, and do not make the individual files in the directories executable. The directories will generally be mode 0755 and the files mode 0644, owned by root, and that's fine.
In the Compose setup, do not mount host content over these directories either. You should just have:
services:
  api_server:
    build: . # use the same image in all environments
    image: company/server
    ports:
      - 8200:8200
    # no volumes:, do not override the image's command:
Now when you want to update the files, you can rebuild the image (without interrupting the running application, without docker exec, and without an alternate user)
docker-compose build api_server
and then do a relatively quick restart, running a new container on the updated image
docker-compose up -d

Docker-compose builds app and copies unintended directory content, not what the Dockerfile specifies

I'm trying to containerize two services: a socket service and a Django application.
My file structure is:
\main folder {docker-compose file}
 \django application {Dockerfile}
 \socket app {Dockerfile}
When I run docker build . it builds the image.
Then, when I run docker-compose build, I notice that both the socket app and the Django app are copied into the container, instead of only the Django application as specified by the Dockerfile.
I get the impression that the Dockerfile is executed in the main directory instead of the django directory?
Here is the Dockerfile that is inside the Django application:
# Pull base image
FROM python:3
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# Copy project
COPY . /code/
RUN ls
And here is the docker-compose file.
Using the ls command I tried to figure out what happened, and the output shows that the applications in the main folder are copied instead of the Django application.
version: '3'
services:
  db:
    image: postgres:10.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: ./django_app
    command: ls /code/
    volumes:
      - .:/code
    ports:
      - 8000:8000
    depends_on:
      - db
volumes:
  postgres_data:
Is this the intended behaviour, or am I doing something wrong?
The volumes: directive in your docker-compose.yml file is hiding literally everything your Dockerfile does. You'll solve your immediate problem by changing the two directories to match: in the volumes: directive, bind-mount ./django_app:/code.
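That is, only the bind-mount source changes (a fragment of the compose file above):
web:
  build: ./django_app
  volumes:
    - ./django_app:/code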
In a more production-oriented workflow, I'd recommend making your Docker image totally self-contained: make sure it has a CMD that runs your application, and do not use volumes: to inject your code. Delete command: and volumes: from the docker-compose.yml and let the image provide its own code and default command. (To do development, use a Python virtual environment for local code isolation, and make sure all of your tests and a basic hand-run workflow pass before using Docker for anything.)
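A rough sketch of that direction (the runserver command is an assumption; use whatever actually starts your Django app):
# at the end of django_app/Dockerfile
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
# and in docker-compose.yml the web service shrinks to
web:
  build: ./django_app
  ports:
    - 8000:8000
  depends_on:
    - db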

Operation of the mkdir command with dockerfile

I cannot create a directory with the mkdir command in a container using a Dockerfile.
My Dockerfile is simply:
FROM php:fpm
WORKDIR /var/www/html
VOLUME ./code:/var/www/html
RUN mkdir -p /var/www/html/foo
In this way I created a simple php:fpm container,
and I added a line to create a directory called foo.
docker build -t phpx .
I built it with the above command.
My docker-compose file is as follows:
version: '3'
services:
  web:
    container_name: phpx
    build: .
    ports:
      - "80:80"
    volumes:
      - ./code:/var/www/html
Later, I ran the following command and got a shell inside the container:
docker exec -it phpx /bin/bash
but there is no directory called foo in /var/www/html.
I wonder where I'm going wrong.
Can you help me?
The reason is that you are mounting a volume from your host to /var/www/html.
Step by step:
RUN mkdir -p /var/www/html/foo creates the foo directory inside the filesystem of your container.
The volume mapping ./code:/var/www/html in docker-compose.yml "hides" the content of /var/www/html in the container filesystem behind the contents of ./code on the host filesystem.
So actually, when you exec into your container you see the contents of the ./code directory on the host when you look at /var/www/html.
Fix: Either you remove the volume from your docker-compose.yml or you create the foo-directory on the host before starting the container.
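For example, the second option is just (paths taken from the compose file above):
mkdir -p ./code/foo
docker-compose up --build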
Additional Remark: In your Dockerfile you declare a volume as VOLUME ./code:/var/www/html. This does not work and you should probably remove it. In a Dockerfile you cannot specify a path on your host.
Quoting from docker:
The host directory is declared at container run-time: The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can't be guaranteed to be available on all hosts. For this reason, you can't mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.
I am able to create a directory inside the 'workdir' for docker as follows:
Dockerfile content
COPY src/ /app
COPY logging.conf /app
COPY start.sh /app/
COPY Pipfile /app/
COPY Pipfile.lock /app/
COPY .env /app/
RUN mkdir -p /app/logs
COPY logs/some_log.log /app/logs/
WORKDIR /app
I have not mentioned the volume parameter in my 'docker-compose.yaml' file
So here is what I suggest: remove the VOLUME instruction from the Dockerfile, as correctly pointed out by Fabian Braun.
FROM php:fpm
RUN mkdir -p /var/www/html/foo
WORKDIR /var/www/html
And remove the volumes parameter from the docker-compose file. It will work. Additionally, I would like to know how you tested whether there is a directory named 'foo'.
Docker-compose file content
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile # The name of your docker file
    container_name: phpx
    ports:
      - "80:80"
You can use the SHELL instruction of Dockerfile.
ENV HOME /usr/local
SHELL ["/bin/sh", "-c"]
RUN mkdir $HOME/logs

docker-compose file removes the files extracted by dockerfile in container directory

I want to build Drupal from a Dockerfile and install a module into the container directory /var/www/html/sites/all/modules using that Dockerfile.
When I build with docker-compose build it extracts correctly,
but as soon as I perform docker-compose up, the files are gone, although the volume is mapped.
Please look at both the docker-compose file and the Dockerfile.
Dockerfile
FROM drupal:7
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
ENV DRUPAL_VERSION 7.36
ENV DRUPAL_MD5 98e1f62c11a5dc5f9481935eefc814c5
ADD . /var/www/html/sites/all/modules
WORKDIR /var/www/html
RUN chown -R www-data:www-data sites
WORKDIR /var/www/html/sites/all/modules
# Install drupal-chat
ADD http://ftp.drupal.org/files/projects/{drupal-module}.gz {drupal-module}.tar.gz
RUN tar xzvf {drupal-module}.tar.gz \
 && rm {drupal-module}.tar.gz
docker-compose file
# PHP Web Server
version: '2'
services:
  drupal_box:
    build: .
    ports:
      - "3500:80"
    external_links:
      - docker_mysqldb_1
    volumes:
      - ~/Desktop/mydockerbuild/drupal/modules:/var/www/html/sites/all/modules
      - ~/Desktop/mydockerbuild:/var/log/apache2
    networks:
      - default
      - docker_default
    environment:
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: drupal
    restart: always
    #entrypoint: ./Dockerfile
networks:
  docker_default:
    external: true
executing:
sudo docker-compose build
sudo docker-compose up
On executing both of the commands above, the directory in the container does not contain the {drupal-module} folder, although I can see it being extracted successfully in the console (because of the xzvf flags on the tar command in the Dockerfile).
The volume mapping does help in keeping the host directory and the container directory in sync, so files added or deleted can be seen both in the container and locally.
But as soon as I remove the first mapping in volumes (i.e. ~/Desktop...), the module is extracted in the directory, but the mapping is not done.
My main aim is to extract the {drupal-module} folder into /var/www/html/sites/all/modules and map the same folder to the host directory.
Please help!
So yes.
The answer is that you cannot have the extracted contents of the container folder show up in the host folder specified in the volumes mapping in docker-compose; i.e. ./modules:/var/www/html/sites/all/modules is not going to work for the Drupal image.
I did it with named volumes, where you can achieve this.
E.g. modules:/var/www/html/sites/all/modules
This will create a volume under /var/lib/docker/volumes/... (you can find the path with "docker inspect") and the volume will have the same data as extracted in your container.
Note: the difference lies in the leading ./ !
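For completeness, a named volume also has to be declared at the top level of the compose file (a minimal sketch based on the mapping above). On first use, Docker copies the image's existing content into the empty named volume, which is why the extracted module shows up there:
version: '2'
services:
  drupal_box:
    build: .
    volumes:
      - modules:/var/www/html/sites/all/modules
volumes:
  modules: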
