I have some extra dependencies in my actions.py file of my Rasa chatbot, so I built a Dockerfile using this link: Deploying a Rasa Open Source Assistant in Docker Compose.
Everything went fine, like building the image and pushing it.
Then I used this link: Docker-Compose Quick Install.
But when I try to run sudo docker-compose up -d, it returns the following error:
Pulling app (athenassaurav/rasa:12345)... ERROR: pull access denied for athenassaurav/rasa, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
My Dockerfile is as follows:
# Extend the official Rasa SDK image
FROM rasa/rasa-sdk:1.9.0
# Use subdirectory as working directory
WORKDIR /app
# Copy any additional custom requirements
COPY actions/requirements-actions.txt ./
# Change back to root user to install dependencies
USER root
# Install extra requirements for actions code, if necessary (otherwise comment this out)
RUN pip install -r requirements-actions.txt
# Copy actions folder to working directory
COPY ./actions /app/actions
# By best practices, don't run the code with root user
USER 1001
My docker-compose.override.yml looks like this:
version: '3.0'
services:
  app:
    image: <account_username>/<repository_name>:<custom_image_tag>
Your action server image is not in the same registry as the rest of the images you are pulling, so you need to do a docker login for your GCP registry. See the options for authentication here: https://cloud.google.com/container-registry/docs/advanced-authentication
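As a minimal sketch, assuming the action server image is hosted on Google Container Registry (the gcr.io/<project-id>/rasa path is a placeholder, not your actual image name):

gcloud auth configure-docker   # register gcloud as a Docker credential helper for gcr.io
docker pull gcr.io/<project-id>/rasa:12345

or, logging in explicitly with a short-lived access token:

gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io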
I use docker-compose for a simple Keycloak container and I've been trying to install a new theme for Keycloak.
However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands work or cause any events or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick recap of Docker:
docker build: create an image from a Dockerfile
docker run: create a container from an image
(you can create the image yourself or use an existing image from Docker Hub)
Based on what you said, you have 2 options.
Create a new docker image based on the existing one and add the theme.
something like
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
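For instance, a sketch of that build command (the my-keycloak-theme tag is an arbitrary name of my choosing):

docker build -t my-keycloak-theme .

You would then point your keycloak service in docker-compose.yml at my-keycloak-theme instead of jboss/keycloak.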
Mount the theme into the correct directory
using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, the files have to be inside the build context, i.e. in the same directory as your Dockerfile or a subdirectory of it, and they have to be present at build time. Absolute host paths don't work.
/tmp as destination is also a bit tricky, because the startup process of the container might have a /tmp cleanout, which means that you would never see that file in a running container.
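As a sketch of the first approach, assuming the Dockerfile sits in the current directory (the kctheme-login folder name is just an example):

cp -r /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login ./kctheme-login   # bring the theme into the build context
docker build -t keycloak-with-theme .   # the Dockerfile can then use a relative path: COPY kctheme-login/ /opt/jboss/keycloak/themes/keycloak/login/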
I am trying to build a new Docker image dynamically using a Cloud Build trigger job, however I fail to see how to safely retrieve my credentials to authenticate against GCP with a service account.
Here are the steps:
Dockerfile created with steps to build a Docker image. One of the steps includes downloading a file from Google Storage (bucket) that I need to access as a GCP service account.
Docker image is built by using a Cloud Build trigger that is triggered after each change in the linked repository and stored in GCR.
Step one fails because:
1.) By default, for some reason, the user running the Dockerfile in GCP is not authenticated against GCP. It is not a default Google Cloud Build account, it is an anonymous user.
2.) I can authenticate as a service account BUT
a.) I don't want to store the JSON private key unencrypted locally or in the repository.
b.) If I stored it encrypted in the GCP repository, then I need to authenticate before decrypting it with KMS. But I don't have the key because it's still encrypted. So I am back to my problem.
c.) If I stored it in a GCP Storage bucket, I need to authenticate, too. So I am back to my problem.
Is there any other approach that lets me execute the Cloud Build trigger job and stay in (or get) a GCP service account context?
The #1 solution from @ParthMehta is the right one.
Before calling the Docker build, add this step to your Cloud Build config to download the file from Cloud Storage using the permissions of the Cloud Build environment (its service account is <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com):
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/my_file', 'my_file']
The file is copied into the current directory of the Cloud Build execution, /workspace. Then add the file to your container with a simple COPY in your Dockerfile:
....
COPY ./my_file ./my_file
....
Generally speaking, when you are working in the GCP environment, you should never have to use a JSON key file.
You can let Cloud Build download the file from Cloud Storage for you and let Docker access the directory so it can use the file. You'll need to allow the Cloud Build service account to access your bucket.
see: https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions
OR
Use gcloud auth configure-docker, and then impersonate a service account that has access to the bucket using --impersonate-service-account, so the Docker user has sufficient access to download the file.
see: https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker
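A hedged sketch of that second option, assuming a gsutil version that supports the -i impersonation flag (the builder@<PROJECT>.iam.gserviceaccount.com account is a placeholder for a service account with read access to the bucket):

gcloud auth configure-docker
gsutil -i builder@<PROJECT>.iam.gserviceaccount.com cp gs://mybucket/my_file .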
Old question but neither answer above was satisfactory for me because I needed to pull private packages from the Artifact Registry. After a lot of trial and error I found a solution using short-lived access tokens and service account impersonation and I'm sharing the solution in case anyone else has the same issue.
Specifically, I'm using Cloud Build and a Docker container to transpile my Node app before deploying it. The build process needs to pull private NPM packages from the Artifact Registry, but it didn't work because it wasn't authorized.
Working Solution
First create a Service Account that has access to whatever GCP service you need. In my case I created artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com and gave it access to the Artifact Registry repository as an "Artifact Registry Reader." In your case you'd give it access to that bucket.
Edit the newly created Service Account and under permissions add your Cloud Builder Service Account (<PROJECT_ID>@cloudbuild.gserviceaccount.com) as a Principal and grant it the "Service Account Token Creator" role.
Next, your cloudbuild.yaml file should look something like this:
steps:
  # Step 1: Generate an Access Token and save it
  #
  # Here we call `gcloud auth print-access-token` to impersonate the service account
  # we created above and to output a short-lived access token to the default volume
  # `/workspace/access_token`. This is accessible in subsequent steps.
  #
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - >
        gcloud auth print-access-token --impersonate-service-account
        artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com >
        /workspace/access_token
    entrypoint: sh

  # Step 2: Build our Docker container
  #
  # We build the Docker container passing the access token we generated in Step 1 as
  # the `--build-arg` `TOKEN`. It's then accessible within the Dockerfile using
  # `ARG TOKEN`
  #
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - >
        docker build -t us-docker.pkg.dev/<PROJECT>/services/frontend:latest
        --build-arg TOKEN=$(cat /workspace/access_token) -f
        ./docker/prod/Dockerfile . &&
        docker push us-docker.pkg.dev/<PROJECT>/services/frontend
    entrypoint: sh
This next step is specific to private npm packages in the Artifact Registry, but I created a partial .npmrc file (missing the :_authToken line) with the following content:
@<NAMESPACE>:registry=https://us-npm.pkg.dev/<PROJECT>/npm/
//us-npm.pkg.dev/<PROJECT>/npm/:username=oauth2accesstoken
//us-npm.pkg.dev/<PROJECT>/npm/:email=artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com
//us-npm.pkg.dev/<PROJECT>/npm/:always-auth=true
Finally my Dockerfile uses the minted token to update my .npmrc file, giving it access to pull private npm packages from the Artifact Registry.
ARG NODE_IMAGE=node:17.2-alpine
FROM ${NODE_IMAGE} as base
ENV APP_PORT=8080
ENV WORKDIR=/usr/src/app
ENV NODE_ENV=production
FROM base AS builder
# Create our WORKDIR
RUN mkdir -p ${WORKDIR}
# Set the current working directory
WORKDIR ${WORKDIR}
# Copy the files we need
COPY --chown=node:node package.json ./
COPY --chown=node:node ts*.json ./
COPY --chown=node:node .npmrc ./
COPY --chown=node:node src ./src
#######################
# MAGIC HAPPENS HERE
# Append our access token to the .npmrc file and the container will now be
# authorized to download packages from the Artifact Registry
#
# IMPORTANT! Declare the TOKEN build arg so that it's accessible
#######################
ARG TOKEN
RUN echo "//us-npm.pkg.dev/<PROJECT>/npm/:_authToken=\"$TOKEN\"" >> .npmrc
RUN npm install
RUN npm run build
EXPOSE ${APP_PORT}/tcp
CMD ["cd", "${WORKDIR}"]
ENTRYPOINT ["npm", "run", "start"]
Obviously in your case you would authenticate with the access token in a different manner with GCS, but the overall concepts should translate well to any similar situation.
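For the GCS case, a hypothetical adaptation (the bucket and object names are placeholders): inside the Dockerfile, after ARG TOKEN, the same short-lived token can authorize a download through the Cloud Storage JSON API, e.g. in a RUN step:

curl -H "Authorization: Bearer ${TOKEN}" -o my_file "https://storage.googleapis.com/storage/v1/b/<BUCKET>/o/my_file?alt=media"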
I linked my hub.docker.com account with bitbucket.org for automated builds. In the core folder of my repository there is a Dockerfile which contains 2 image-building steps. If I build images from the same Dockerfile locally (I mean on Windows), I get 2 different images. But if I use hub.docker.com for building, only the last image is saved and tagged as "latest".
Dockerfile:
#-------------First image ----------
FROM nginx
#-------Adding html files
RUN mkdir /usr/share/nginx/s1
COPY content/s1 /usr/share/nginx/s1
RUN mkdir /usr/share/nginx/s2
COPY content/s2 /usr/share/nginx/s2
# ----------Adding conf file
RUN rm -v /etc/nginx/nginx.conf
COPY conf/nginx.conf /etc/nginx/nginx.conf
RUN service nginx start
# ------Second image -----------------
# Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file
ARG JAR_FILE=jar/testbootstap-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} test.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/test.jar"]
Has anybody done this before, or is it not possible?
PS: There is only one private repository for free use; maybe this is the main reason.
Whenever you specify a second FROM in your Dockerfile, you start creating a new image. That's the reason why you see only the last image being saved and tagged.
You can accomplish what you want by creating multiple Dockerfiles, i.e. by creating the first image from its own Dockerfile and then using that to create the second image, all of it using docker-compose to coordinate between the containers.
I found a workaround for this problem.
I separated the Dockerfile into two files:
1. Dockerfile for nginx
2. Dockerfile for the Java app
In the build settings, set these two files as Dockerfiles and tag them with different tags.
After building you have one image repository, with the variants distinguished by tag. For example you can use:
for nginx server youraccount/test:nginx
for app image youraccount/test:java
I hope this will not cause problems in future processes.
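A local equivalent of those build settings, assuming the two files are named Dockerfile.nginx and Dockerfile.java (the file names are my own illustration):

docker build -f Dockerfile.nginx -t youraccount/test:nginx .
docker build -f Dockerfile.java -t youraccount/test:java .
docker push youraccount/test:nginx
docker push youraccount/test:java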
I have this image:
FROM ubuntu:14.04.3
# copy db project to container
ADD . /db_project
WORKDIR /db_project
CMD ["./gitinitall.sh"]
This copies the content of the current directory, the db project, which contains git submodules that are checked out and pulled from the repo. So db_project contains the shell script that I run to get the submodules. It also has the backend that uses the dbs. The image is pushed to a private repo.
I want to use this image to create a container where I need to add the config of the database for the environment where it needs to be deployed, something like:
FROM myprivatedockerrepo:5000/db_project
...
WORKDIR /db_project
COPY config/dev.config /db_project/apps/mydb_db/config/dev.config
# get everything needed for backend
RUN mix deps.get
# expose the backend port
EXPOSE backendport
# start the beckend with the proper db configured
CMD ["./startbeckend"]
But it is failing to RUN mix deps.get:
Step 14/20 : RUN mix deps.get
---> Running in ab7375d69989
warning: path "apps/mydb_db" is a directory but it has no mix.exs. Mix won't consider this directory as part of your umbrella application. Please add a "mix.exs" or set the ":apps" key in your umbrella configuration with all relevant apps names as atoms
If I add a
RUN ls apps/mydb_db
before running the mix command:
ls: cannot access apps/mydb_db: No such file or directory
So it seems that although the image used, myprivatedockerrepo:5000/db_project, should contain
db_project/apps/mydb_db (mydb_db is created by the shell script that gets the submodules from git), it cannot find it. Maybe I do not understand Docker layers or something?
To copy a folder you need to add a final '/'
# Dockerfile
ADD . /db_project/
See also here: https://docs.docker.com/engine/reference/builder/#add
It was my bad:
the gitinitall.sh script that gets and updates all submodules from git didn't work, since there were no git settings or SSH key in the image, but it didn't return a non-zero exit code, so it showed no failure.
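As a supplementary sketch, assuming the script mostly wraps git submodule commands, making it exit non-zero would have surfaced the failure during docker build (the real fix is still providing git settings/SSH access in the image):

#!/bin/sh
# stricter gitinitall.sh: abort on the first failing command so RUN ./gitinitall.sh fails the build
set -e
git submodule update --init --recursive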
I cannot seem to run composer install in Dockerfile but I can in the container after building an image and running the container.
Here's the command from Dockerfile:
RUN composer require drupal/video_embed_field:1.5
RUN composer install --no-autoloader --no-scripts --no-progress
The output is:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
But after running the container with docker-compose:
...
  drupal:
    image: docker_image
    container_name: container
    ports:
      - 8081:80
    volumes:
      - ./container/modules:/var/www/html/web/modules
    links:
      # Link the DB container:
      - db
running docker exec composer install will install the packages correctly:
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Package operations: 1 installs, 0 updates, 0 removals
...
Generating autoload files
I am assuming that the composer.json and composer.lock files are correct because I can run the composer install command in the container without any further effort, but only after the container is running.
Update
Tried combining the composer commands with:
RUN composer require drupal/video_embed_field:1.5 && composer install
Same issue, "Nothing to install or update". Ultimately I would like to continue using seperate RUN statements in Dockerfile to take advantage of docker caching.
Your issue comes from the fact that docker-compose is meant to orchestrate building and running multiple Docker containers at the same time, and it does not make it very obvious to people starting with Docker what it does behind the scenes.
Behind a docker-compose up there are four steps:
docker-compose build if needed, and if there is no existing image(s) yet, create the image(s)
docker-compose create if needed, and if there is no container(s) existing yet, create the container(s)
docker-compose start start existing container(s)
docker-compose logs logs stderr and stdout of the containers
So what you have to spot there is that the actions contained in your Dockerfile are executed at the image-creation step,
while the mounting of folders is executed at the container-start step.
So when you use a RUN command, which is part of the image-creation step, on files like composer.lock and composer.json that are only mounted at the start step, you end up having nothing to install via composer, because your files are not mounted anywhere yet.
If you do a COPY of those files, that may actually get you somewhere, because you will then have the composer files as part of your image.
That said, be careful: the mounted source folder will totally override the mount point, so you could end up expecting a vendor folder and not have it.
What you should ideally do is run composer install in the ENTRYPOINT, which happens at the last step of the container booting (see the sketch after this answer).
Here is a little developer comparison: a Docker image is to a Docker container what a class is to an instance of a class, that is, an object.
Your containers are all created from images that were possibly built a long time before.
Most of the steps in your Dockerfile happen at image creation, not at container boot time,
while most of the instructions in docker-compose are aimed at automating the container setup, which includes the mounting of folders.
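A minimal sketch of that ENTRYPOINT approach (the script name, the /var/www/html path and the flags are placeholders, not the asker's actual setup):

#!/bin/sh
# docker-entrypoint.sh: install dependencies against the mounted sources, then hand over to the real command
set -e
cd /var/www/html
composer install --no-interaction
exec "$@"

In the Dockerfile this would be wired up with ENTRYPOINT ["./docker-entrypoint.sh"] followed by the original CMD, so composer runs after the volumes are mounted.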
Just noting a docker-compose.yml approach to the issue when the volume mount overwrites the composer files inside the container:
docker-compose.yml
environment:
  PROJECT_DIR: /var/www/html
volumes:
  - ./php/docker/entrypoint/90-composer-install.sh:/docker-entrypoint-init.d/90-composer-install.sh
composer-install.sh
#!/usr/bin/env bash
cd ${PROJECT_DIR}
composer install
This runs composer install after the build, using the docker-entrypoint-init.d shell script