How to use GCP service account JSON files in Docker

I am dockerizing a FastAPI application that uses Firebase. I need to access the service account JSON file, and I have configured my Docker container as follows.
Dockerfile
FROM python:3.10-slim
ENV PYTHONUNBUFFERED 1
WORKDIR /app
# Install dependencies
COPY ./requirements.txt /requirements.txt
EXPOSE 8000
RUN pip install --no-cache-dir --upgrade -r /requirements.txt
RUN mkdir /env
# Setup directory structure
COPY ./app /app/app
COPY ./service_account.json /env
CMD ["uvicorn", "app.app:app", "--host", "0.0.0.0", "--port", "8000"]
Docker-compose file
version: "3.9"
services:
app:
build:
context: .
restart: always
environment:
- GOOGLE_APPLICATION_CREDENTIALS_CLOUDAPI=${GOOGLE_APPLICATION_CREDENTIALS_CLOUDAPI}
- GOOGLE_APPLICATION_CREDENTIALS=${GOOGLE_APPLICATION_CREDENTIALS}
volumes:
- ./env:/env
volumes:
env:
Now when I run docker-compose up -d --build, the container fails with the error FileNotFoundError: [Errno 2] No such file or directory: '/env/service_account.json'. When I inspect the container, I can see the environment variable is set correctly: "GOOGLE_APPLICATION_CREDENTIALS=/env/service_account.json". Why is this failing?

You have context: . and COPY ./service_account.json /env in your Dockerfile.
But when you run the container, you have
volumes:
  - ./env:/env
meaning your service_account.json file is not in the ./env folder on the host; it sits outside of it.
When you mount a volume, it replaces the directory inside the container. So if you need a local env folder mounted as /env in the container, move your JSON file somewhere else, such as /opt (COPY ./service_account.json /opt), and then set GOOGLE_APPLICATION_CREDENTIALS=/opt/service_account.json.
If you don't need the whole folder, then you only need
volumes:
  - ./service_account.json:/env/service_account.json:ro
Otherwise, move the JSON file into ./env on your host and change the instruction to COPY ./env/service_account.json /env.
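Putting the first option together, a minimal sketch of the two changes (assuming everything else stays as posted):
# Dockerfile: copy the key outside the /env directory that the bind mount will shadow
COPY ./service_account.json /opt/service_account.json
# docker-compose.yml: point the variable at the new path
environment:
  - GOOGLE_APPLICATION_CREDENTIALS=/opt/service_account.json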

Related

Dockerfile permission for volume

FROM --platform=$BUILDPLATFORM maven:3.8.5-eclipse-temurin-17 AS builder
WORKDIR /server
COPY pom.xml /server/pom.xml
RUN mvn dependency:go-offline
COPY src /server/src
RUN mvn install
# install Docker tools (cli, buildx, compose)
COPY --from=gloursdocker/docker / /
CMD ["mvn", "spring-boot:run"]
FROM builder as prepare-production
RUN mkdir -p target/dependency
WORKDIR /server/target/dependency
RUN jar -xf ../*.jar
FROM eclipse-temurin:17-jre-focal
EXPOSE 8080
VOLUME /app
ARG DEPENDENCY=/server/target/dependency
COPY --from=prepare-production ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=prepare-production ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=prepare-production ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-cp","app:app/lib/*","com.server.backend.BackendApplicaiton"]
I need to save the files in /app to the /opt/containers/backend directory (an absolute path). Below is my docker-compose file.
version: "3.9"
services:
backend:
container_name: "backend"
build: backend
environment:
- ${MSSQL_PASSWORD}
ports:
- 3000:8080
volumes:
- /opt/containers/backend:/app
networks:
- backend
networks:
backend:
name: backend
driver: bridge
internal: false
If I run this and let Docker create a volume, everything works and the files are saved inside the Docker volume. But when I set an absolute path as in the docker-compose file above, the directory is empty and the app does not run. I am sure the error is in permissions, but I can't figure out where, and I could not find any solution :(
Thank you for all your replies and help.
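A pattern sometimes suggested for this situation, sketched here without having been verified against this exact setup: declare a named volume backed by the host path via the local driver, so Docker should still pre-populate it from the image on first use, as it does for ordinary named volumes:
volumes:
  backend-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /opt/containers/backend  # must already exist on the host
and mount it in the service as backend-data:/app instead of the plain host path.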

Dockerfile is copying files from outside of parent directory

I have a simple Dockerfile in a directory called /App. When I build my Docker container using a docker-compose YAML file, the Dockerfile copies files from one level up, outside the /App folder, into the container.
Here is my Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
and here is my docker-compose file
version: '3'
services:
  dash:
    build:
      context: ./App
      dockerfile: Dockerfile.dash
    container_name: dash_dash
    command: ls
    volumes:
      - .:/code
    ports:
      - "80:8080"
When I build and run the container the ls command shows that it copied the directory one level above the /App directory, such that the /App directory is included but is not the main directory.
The volumes section of your docker-compose.yml is overriding the working directory:
volumes:
  - .:/code
mounts the whole folder where the docker-compose.yml lives (which contains the /App folder) over /code. As a result, the files your Dockerfile copied into the working directory (/code) are hidden by the mount.
You should remove the volumes section of your docker-compose.yml. The ls command will then show the contents of the App directory, copied by the COPY . . line of your Dockerfile.
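For reference, a sketch of the trimmed compose file under that suggestion (names taken from the question):
version: '3'
services:
  dash:
    build:
      context: ./App
      dockerfile: Dockerfile.dash
    container_name: dash_dash
    command: ls
    ports:
      - "80:8080"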

Docker container works from Dockerfile but get next: not found from docker-compose container

I am having an issue with my docker-compose configuration file. My goal is to run a Next.js app with a docker-compose file and enable hot reload.
Running the Next.js app from its Dockerfile works but hot reload does not work.
Running the Next.js app from the docker-compose file triggers an error: /bin/sh: next: not found, and I was not able to figure out what's wrong...
Dockerfile: (taken from Next.js' documentation website)
[Notice it's a multi-stage build; however, I am only referencing the builder stage in the docker-compose file.]
# Install dependencies only when needed
FROM node:18-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install # --frozen-lockfile
# Rebuild the source code only when needed
FROM node:18-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
RUN yarn build
# If using npm comment out above and use below instead
# RUN npm run build
# Production image, copy all the files and run next
FROM node:18-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# You only need to copy next.config.js if you are NOT using the default configuration
# COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3001
ENV PORT 3001
CMD ["node", "server.js"]
docker-compose.yml:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${POSTGRESQL_PASSWORD}
backend:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
DATABASE_USERNAME: ${MYAPP_DATABASE_USERNAME}
DATABASE_PASSWORD: ${POSTGRESQL_PASSWORD}
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
target: builder
command: yarn dev
volumes:
- ./frontend:/app
expose:
- "3001"
ports:
- "3001:3001"
depends_on:
- backend
environment:
FRONTEND_BUILD: ${FRONTEND_BUILD}
PORT: 3001
package.json:
{
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "latest",
    "react": "^18.1.0",
    "react-dom": "^18.1.0"
  }
}
When docker-compose.yml calls yarn dev, it actually runs next dev, and that's when the /bin/sh: next: not found error is triggered. However, running the container straight from the Dockerfile works and does not lead to this error.
[Update]:
If I remove the volumes attribute from my docker-compose.yml file, I don't get the /bin/sh: next: not found error and the container runs; however, I then don't get the hot reload feature I am looking for. Any idea why the volume is messing with the /bin/sh next command?
This is happening because your local filesystem is being mounted over what is in the docker container. Your docker container does build the node modules in the builder stage, but I'm guessing you don't have the node modules available in your local file system.
To see if this is what is happening, on your local file system, you can do a yarn install. Then try running your container via docker again. I'm predicting that this will work, as yarn will have installed next locally, and it is actually your local file system's node modules that will be run in the docker container.
One way to fix this is to volume mount everything except the node modules folder. Details on how to do that: Add a volume to Docker, but exclude a sub-folder
So in your case, I believe you can add a line to your compose file:
frontend:
  ...
  volumes:
    - ./frontend:/app
    - /app/node_modules # <-- try adding this!
  ...
That should allow the docker container's node_modules to not be overwritten by the bind mount.
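(A bare container path such as /app/node_modules creates an anonymous volume at that location; it takes precedence over the surrounding bind mount, so the node_modules installed during the image build stay visible inside the container.)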

Docker context on remote server "Error response from daemon: invalid volume specification"

I am using docker context to deploy my local container to my debian webserver. I use Docker Desktop for Windows on Windows 10. The app is written using Flask.
At some point I tried "docker-compose up --build" after "docker context use remote", and I was getting the following error:
Error response from daemon: invalid volume specification: 'C:\Users\user\fin:/fin:rw'
Locally everything works fine; when I try to deploy it to the production server, the error pops up.
The Dockerfile looks like the following:
FROM python:3.8-slim-buster
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
ENV PATH="/home/user/.local/bin:${PATH}"
COPY . ./
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN useradd -ms /bin/bash user && chown -R user $INSTALL_PATH
USER user
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
CMD gunicorn -c "python:config.gunicorn" "fin.app:create_app()"
while an excerpt of the docker-compose.yml looks like the following:
version: '3.8'
services:
  flask-app:
    container_name: flask-app
    restart: always
    build: .
    command: >
      gunicorn -c "python:config.gunicorn" "fin.app:create_app()"
    environment:
      PYTHONUNBUFFERED: 'true'
    volumes:
      - '.:/fin'
    ports:
      - 8000:8000
    env_file:
      - '.env'
In the .env file, the option COMPOSE_CONVERT_WINDOWS_PATHS=1 is set.
At some point I tried the same procedure using WSL2 with Ubuntu installed, which led to the following message:
Error response from daemon: create \\wsl.localhost\Ubuntu-20.04\home\user\fin: "\\\\wsl.localhost\\Ubuntu-20.04\\home\\user\\fin" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
Based on this message I changed the Dockerfile to:
FROM python:3.8-slim-buster
ENV INSTALL_PATH=/usr/src/app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
ENV PATH=/home/user/.local/bin:${PATH}
COPY . /usr/src/app/
# set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
#ENV COMPOSE_CONVERT_WINDOWS_PATHS=1
RUN useradd -ms /bin/bash user && chown -R user $INSTALL_PATH
USER user
COPY requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
RUN pip install --upgrade pip
CMD gunicorn -c "python:config.gunicorn" "fin.app:create_app()"
But still the error remains, and I have no clue how to solve it.
Thank you in advance for your help.
You are getting invalid volume specification: 'C:\Users\user\fin:/fin:rw' in your production environment because the host path C:\Users\user\fin isn't available there. You can remove it when you are deploying, or change it to an absolute path that is available in your production environment, as below.
volumes:
  - '/root:/fin:rw'
where /root is a directory available in my production environment.
/path:/path/in/container mounts the host directory /path at /path/in/container.
path:/path/in/container creates a volume named path with no relationship to the host.
Note the slash at the beginning: if / is present, it is treated as a host directory; otherwise it is treated as a named volume.
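To illustrate the distinction (paths here are only examples):
volumes:
  - /srv/fin:/fin   # leading slash: bind-mounts the host directory /srv/fin
  - findata:/fin    # bare name: mounts a named volume called "findata"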
Use this (without quotes, and with a leading ./ so Docker knows you mean this folder):
volumes:
  - ./:/fin

Docker compose ERROR: Service 'web' failed to build : COPY failed: forbidden path

I'm following the official Docker tutorial to set up Rails in Docker; the link is given below:
https://docs.docker.com/samples/rails/
My Dockerfile
# syntax=docker/dockerfile:1
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY ../compose /myapp
# Add a script to be executed every time the container starts.
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000
# Configure the main process to run when running the image
CMD ["rails", "server", "-b", "0.0.0.0"]
docker-compose.yml
version: "3.9"
services:
db:
image: postgres
volumes:
- ./tmp/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: password
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
The error I'm getting when I run this command:
docker-compose run --no-deps web rails new . --force --database=postgresql
Error:
Step 7/12 : COPY ../compose /myapp
ERROR: Service 'web' failed to build : COPY failed: forbidden path outside the build context: ../compose ()
My directory structure (docker-rails) is:
├── Dockerfile
├── Gemfile
├── Gemfile.lock
├── docker-compose.yml
└── entrypoint.sh
0 directories, 5 files
I'm relatively new to Docker setups. I saw similar errors on SO, but their setup files are different, so I couldn't understand how to fix it.
You are getting the error because you are trying to copy a file that is outside of the build context. The build context, according to the docker-compose documentation, is:
Either a path to a directory containing a Dockerfile, or a URL to a git repository.
When the value supplied is a relative path, it is interpreted as relative to the location of the Compose file. This directory is also the build context that is sent to the Docker daemon.
Try changing COPY ../compose /myapp to COPY . . and run the build command in a terminal within the directory.
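With that change, the relevant part of the Dockerfile would look like this (a sketch; the other lines are unchanged):
WORKDIR /myapp
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock
RUN bundle install
COPY . .   # copies the build context (the docker-rails directory) into /myapp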
