I'm using docker-compose to build a Docker image that requires access to a secured Azure Artifacts feed via Paket. As at least some people are aware, Paket is not compatible with the Azure Artifacts Credential Provider out of the box. To gain the access I need, I'm mounting the access token produced by the credential provider as a BuildKit secret, then consuming it with cat inside a paket config command. cat, however, returns an error stating that no file exists at the default secret location.
I'm running this code within an Azure Pipeline on the Microsoft-provided ubuntu-latest agent.
Here are the relevant code snippets (it's possible I'm going into too much detail...):
docker-compose.ci.build.yml:
version: '3.6'

services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential

secrets:
  azure_credential:
    file: ./credential.txt
Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:6.0.102-bullseye-slim-amd64 AS build
LABEL maintainer="<Engineering lead>"
WORKDIR /src
<Various COPY instructions>
RUN dotnet tool restore
RUN dotnet paket restore
RUN --mount=type=secret,id=azure_credential dotnet paket config add-token "<ARTIFACT_FEED_URL>" "$(cat /run/secrets/azure_credential)"
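For what it's worth, one way to sanity-check the secret mount outside of docker-compose is to build the Dockerfile directly with BuildKit. This is just a sketch (test-image is an illustrative tag); the id has to match the one in the RUN --mount instruction:

# build without compose; BuildKit must be enabled
DOCKER_BUILDKIT=1 docker build --secret id=azure_credential,src=./credential.txt -t test-image .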
Azure pipeline definition YAML:
jobs:
  - job: BuildPublish
    displayName: Build & Publish
    steps:
      - task: PowerShell@2
        displayName: pwsh build.ps1
        inputs:
          filePath: ${{ parameters.workingDirectory }}/.azure-pipelines/build.ps1
          pwsh: true
          workingDirectory: ${{ parameters.workingDirectory }}
        env:
          SYSTEM_ACCESSTOKEN: $(System.AccessToken)
The relevant lines of the PowerShell script that invokes docker-compose:
$projectRoot = Split-Path -Path $PSScriptRoot -Parent
Push-Location -Path $projectRoot
try {
    ...
    Out-File -FilePath ./credential.txt -InputObject $Env:SYSTEM_ACCESSTOKEN
    ...
    & docker-compose -f ./docker-compose.ci.build.yml build
    ...
}
finally {
    ...
    Pop-Location
}
The error message:
0.276 cat: /run/secrets/azure_credential: No such file or directory
If there's other relevant code, let me know.
I verified that the environment variable housing the secret on the agent exists and that its value is being saved to the ./credential.txt file for mounting into the image; the text file is being created properly. I've tried fiddling with the syntax of all the relevant commands (fun fact: the Docker docs show two different versions of the mount syntax, but the other version just crashed). I also tried Windows-style paths in case my source image was a Windows one, but it doesn't appear to be.
Essentially, here's where I've left it: I know the file ./credential.txt exists and contains a value. I know my mount syntax is correct, or Docker would crash. The issue appears to have something to do with the default mount path and/or how docker-compose passes its secrets to the build.
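One debugging step worth mentioning (a sketch, not something from my actual Dockerfile) is listing the secrets directory inside the build to see whether anything gets mounted at all:

# temporary debug line; remove once the mount is confirmed
RUN --mount=type=secret,id=azure_credential ls -l /run/secrets/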
I figured it out. For reasons I do not understand, the path to the mounted secret has to be defined as an environment variable in the docker-compose YAML. So, like this:
version: '3.6'

services:
  ci_build:
    build:
      context: .
      dockerfile: Dockerfile
    image: <IMAGE IDENTITY>
    secrets:
      - azure_credential
    environment:
      AZURE_CREDENTIAL_FILE: /run/secrets/azure_credential

secrets:
  azure_credential:
    file: credential.txt
This solved the issue. If anyone knows why this solved the issue, I'd love to hear.
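One thing worth checking if you hit the same error (an assumption on my part, not something I verified in this pipeline): the classic docker-compose v1 CLI only forwards build secrets when BuildKit is enabled, so it may also be necessary to export these before the build:

# enable BuildKit for docker-compose builds (assumption: compose v1 CLI)
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose -f ./docker-compose.ci.build.yml build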
Related
I can omit (set to empty) build context using docker build:
docker build -t my_useless_image_name - << EOF
FROM ubuntu:22.04
RUN echo "I need no context to be built. Thnx"
EOF
How could I omit build context in the same way using docker-compose.yml?
version: "3.8"
services:
srv1:
build:
context: ??
srv2:
build:
context: .
I need a way beyond using .dockerignore. I found no answer in the official docker-compose documentation.
Compose doesn't allow this. More generally, Compose has no way to read external files and take action on them, and doesn't route its own standard input to any particular container or image build, so if you didn't provide a build context there would be no other way to pass the Dockerfile into the build.
If you're just trying to use some unmodified base image, use image: instead of build:. You can handle many customizations using environment: and volumes: to inject files. For example, you'll frequently see an unmodified postgres image used as a database, with several environment variables set plus an injected /docker-entrypoint-initdb.d directory (a sketch of that pattern follows the example below).
In any case, I'd probably avoid the docker build - (stdin) route. Since it has no build context, it can't COPY local files into the image, which significantly limits what you can do with this approach. And if you're doing anything at all here, you probably want the Dockerfile on disk and checked into source control, at which point you can put it in an otherwise-empty directory, name it Dockerfile, and use that directory as the build context.
version: '3.8'

services:
  using-the-image-directly:
    image: ubuntu:22.04
    # environment:
    # volumes:

  with-a-trivial-build-context:
    build: ./my-useless-image
# Create an empty directory that _only_ contains the Dockerfile
rm -rf my-useless-image
mkdir my-useless-image
cat >my-useless-image/Dockerfile <<EOF
FROM ubuntu:22.04
# ...
CMD echo 'hello world' && sleep 15
EOF

# That mostly-empty directory is the build: context
docker-compose up
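And as a sketch of the image:-plus-environment: pattern mentioned above (the environment variable and the init directory are the real postgres image's conventions; the service name and paths are illustrative):

version: '3.8'
services:
  db:
    image: postgres:15              # unmodified image, no build: needed
    environment:
      POSTGRES_PASSWORD: example    # standard postgres image variable
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d  # init scripts injected at first start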
We have used the technique detailed here to expose host environment variables to Docker builds in a secure fashion.
# syntax=docker/dockerfile:1.2
FROM golang:1.18 AS builder

# move secrets out of the build process (and docker history)
RUN --mount=type=secret,id=github_token,dst=/app/secret_github_token,required=true,uid=10001 \
    export GITHUB_TOKEN=$(cat /app/secret_github_token) && \
    <nice command that uses $GITHUB_TOKEN>
And this command to build the image:
export DOCKER_BUILDKIT=1
docker build --secret id=github_token,env=GITHUB_TOKEN -t cool-image-bro .
The above works perfectly.
Now we also have a docker-compose file running in CI that needs to be modified. However, even though I confirmed that the env vars are present in that job, I do not know how to map the environment variable to the secret ID named github_token.
In other words, what is the equivalent docker-compose command (up --build, or build) that can accept a mapping of an environment variable with a secret ID?
Turns out I was a bit ahead of the times. docker compose v2.5.0 brings support for secrets.
After having modified the Dockerfile as explained above, we must then update the docker-compose file to define the secrets.
docker-compose.yml
services:
  my-cool-app:
    build:
      context: .
      secrets:
        - github_user
        - github_token
    ...

secrets:
  github_user:
    file: secrets_github_user
  github_token:
    file: secrets_github_token
But where do those files secrets_github_user and secrets_github_token come from? In your CI you also need to export the environment variables and save them to the default secrets file locations. In our project we are using Tasks, so we added these two lines. Note that we are running this task from our CI, so you could do it differently, without Tasks for example:
- printenv GITHUB_USER > /root/project/secrets_github_user
- printenv GITHUB_TOKEN > /root/project/secrets_github_token
We then update the CircleCI config and add two environment variables to our job:
.circleci/config.yml
name-of-our-job:
  environment:
    DOCKER_BUILDKIT: 1
    COMPOSE_DOCKER_CLI_BUILD: 1
You might also need a more recent Docker version; I think this was introduced in a late 19.x or an early 20.x release. I have used this and it works:
steps:
  - setup_remote_docker:
      version: 20.10.11
Now when running your docker-compose based commands, the secrets should be successfully mounted through docker-compose and available to correctly build or run your Dockerfile instructions!
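For completeness, the compose-side command ends up being nothing special once the environment variables above are set. A sketch of what we run (service name per the compose file above):

DOCKER_BUILDKIT=1 COMPOSE_DOCKER_CLI_BUILD=1 docker-compose build my-cool-app
# or, with the compose v2 plugin, simply:
docker compose build my-cool-app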
I've been struggling with this concept. To start, I'm new to Docker and slowly teaching myself. I am using a Docker swarm instance and trying to leverage Docker secrets for a simple username and password to an existing rocker/rstudio image. I've set up the reverse proxy and can successfully use https to access RStudio via my browser.

Now, when I pass the paths /run/secrets/user and /run/secrets/pass to the environment variables, it doesn't work: the image essentially thinks the path itself is the actual username and password. I need the environment variables to actually pull the values (in this case user=test, pass=test123, as set up using the docker secret command). I've looked around and am at a bit of a loss on how to accomplish this. I know some have mentioned leveraging a custom entrypoint shell script, and I'm a bit confused on how to do this. Here is what I've tried:
Rebuilding a brand-new image from the existing rocker image, with a Dockerfile that adds entrypoint.sh to the image -> it can't find the entrypoint.sh file.
Adding entrypoint: entrypoint.sh as part of my docker-compose. Same issue.
I'm using docker stack to build the containers. The stack gets built, but the containers keep restarting to the point that they are unusable.
Here are my files
Dockerfile
FROM rocker/rstudio
COPY entry.sh /
RUN chmod +x /entry.sh
ENTRYPOINT ["/entry.sh"]
Here is my docker-compose.yaml
version: '3.3'

secrets:
  user:
    external: true
  pass:
    external: true

services:
  rserver:
    container_name: rstudio
    image: rocker/rstudio:latest  # <-- the output of the build using rocker/rstudio and the Dockerfile
    secrets:
      - user
      - pass
    environment:
      - USER=/run/secrets/user
      - PASSWORD=/run/secrets/pass
    volumes:
      - ./rstudio:/home/user/rstudio
    ports:
      - 8787:8787
    restart: always
    entrypoint: /entry.sh
Finally, here is the entry.sh file that I found on another thread:
#!/bin/bash
# get your env files and export env vars
export $(egrep -v '^#' /run/secrets/* | xargs)
# if you need some specific file, where password is the secret name
# export $(egrep -v '^#' /run/secrets/password | xargs)
# call the base image's entrypoint
source /docker-entrypoint.sh
In the end, it would be great to use my secret user and pass values and feed those to the environment variables so that I can authenticate into an RStudio instance. If I just put a username and password in plain text under environment:, it works fine.
Any help is appreciated. Thanks in advance.
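For reference, here is roughly what I understand the entrypoint script is supposed to do. This is an untested sketch, and /init being the rocker/rstudio image's original entrypoint is my assumption:

#!/bin/bash
# read the bare secret values into the env vars the image expects
export USER="$(cat /run/secrets/user)"
export PASSWORD="$(cat /run/secrets/pass)"
# hand off to the image's original entrypoint (path assumed)
exec /init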
I'm quite new to GCP and have been using mostly AWS. I am currently trying to play around with GCP and want to deploy a container using docker-compose.
I set up a very basic docker-compose.yml file as follows:
# docker-compose.yml
version: '3.3'

services:
  git:
    image: alpine/git
    volumes:
      - ${PWD}:/git
    command: "clone https://github.com/PHP-DI/demo.git"
  composer:
    image: composer
    volumes:
      - ${PWD}/demo:/app
    command: "composer install"
    depends_on:
      - git
  web:
    image: php:7.4-apache
    ports:
      - "8080:${PORT:-80}"
      - "8000:${PORT:-8000}"
    volumes:
      - ${PWD}/demo:/var/www/html
    command: php -S 0.0.0.0:8000 -t /var/www/html
    depends_on:
      - composer
So the containers will get the code from git, then install the dependencies using composer, and finally the app will be available on port 8000.
On my machine, running docker-compose up does everything. However, how can I push this docker-compose setup to Google Cloud?
I have tried building a container using the docker/compose image and a Dockerfile as follows:
FROM docker/compose
WORKDIR /opt
COPY docker-compose.yml .
WORKDIR /app
CMD docker-compose -f /opt/docker-compose.yml up web
Then I pushed the container to the registry, and from there I tried deploying to:
Cloud Run - did not work, as I could not find a way to specify a mounted volume for /var/run/docker.sock
Kubernetes - I mounted the docker.sock, but I keep getting an error in the logs that /app from the git service is read-only
Compute Engine - same error as above
I don't want to build a container by copying all local files into it and then uploading it, as the dependencies could be really big, making a heavy container to push.
I have a working docker-compose setup and just want to use it on GCP. What's the easiest way?
This can be done by creating a cloudbuild.yaml file in your project root directory.
Add the following step to cloudbuild.yaml:
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
On Google Cloud Platform > Cloud Build, configure the file type of your build configuration as Cloud Build configuration file (yaml or json) and enter the file location: cloudbuild.yaml.
If the repository event that invokes the trigger is set to "push to a branch", then Cloud Build will run docker-compose.yml to build your containers.
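If you want to test the configuration without waiting for a trigger, you can also submit the build manually. A sketch, assuming the gcloud CLI is installed and authenticated:

gcloud builds submit --config cloudbuild.yaml .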
Take a look at Kompose. It can help you convert the docker-compose instructions into Kubernetes-specific deployment and service definitions, which you can then apply against your GKE clusters (a sketch of that workflow follows). Note that you will have to build the containers and store them in Container Registry first, and update the image tags in the service definitions accordingly.
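A minimal sketch of that workflow, assuming kompose and kubectl are installed and the generated files land in the current directory:

kompose convert -f docker-compose.yml   # emits *-deployment.yaml / *-service.yaml files
kubectl apply -f .                      # apply them to the cluster your GKE credentials point at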
If you are trying to set up the same thing as on an on-premise VM, you can install Docker and docker-compose on a GCE instance and run it there. Ref: https://dev.to/calvinqc/the-easiest-docker-docker-compose-setup-on-compute-engine-1op1
I have a Node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As always advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as Kubernetes clusters.
What I want to achieve is an automated build via GitLab's CI for different environments (e.g. stage) depending on the commit branch (named stage as well), meaning when I push to origin/stage I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI variables and created a variable named DOTENV_STAGE of type file with the contents of my local .env-stage file.
Now my problem is: how do I get that content as a .env file inside the Docker image that is going to be built by GitLab, given that the file is not in my repo but a variable instead?
I tried using cp (see below, also in the before_script section) to copy the file to a .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind

build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env, but none of them worked.
So the part I actually don't know is: where do I have to put the file in order to make it available to Docker during the build?
Thanks
You should avoid copying the .env file into the container altogether. Rather, feed it from outside at runtime. There's a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in the GitLab CI backend, then dump it to a .env file in the runner and feed it to the Docker Compose pipeline.
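A sketch of that idea in .gitlab-ci.yml terms (DOTENV_CONTENT is a hypothetical masked-variable name):

script:
  - echo "$DOTENV_CONTENT" > .env   # dump the masked variable to the file env_file expects
  - docker-compose up -d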
After some more research I stumbled upon a support-forum entry on gitlab.com which described exactly my situation (unfortunately it has since been deleted), and it was solved by the same approach I was trying to use, namely this:
...
script:
  - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile
An alternative approach I thought about after joyarjo's answer could be to use a ConfigMap in Kubernetes, but I haven't tried it yet.
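Roughly, I imagine it would look like this (untested sketch; app-env is a made-up name):

# turn the KEY=value pairs of .env into a ConfigMap the deployment can reference
kubectl create configmap app-env --from-env-file=.env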