New Docker Build secret information for use with aws cli - docker

I would like to use the new --secret flag in order to retrieve something from AWS with its CLI during the build process.
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=mysecret,dst=/root/.aws cat /root/.aws
I can see the credentials when running the following command:
docker build --no-cache --progress=plain --secret id=mysecret,src=%USERPROFILE%/.aws/credentials .
However, if I adjust the command to be run, the AWS CLI cannot find the credentials file and asks me to run aws configure:
RUN --mount=type=secret,id=mysecret,dst=/root/.aws aws ssm get-parameter
Any ideas?

The following works:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=aws,dst=/aws export AWS_SHARED_CREDENTIALS_FILE=/aws && aws ssm get-parameter ...
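An alternative sketch that should also work, under the same assumptions (BuildKit enabled, secret passed with --secret id=mysecret,src=...): mount the secret at the exact file path the AWS CLI reads by default. The secret mount produces a single file at dst, not a directory, which is why mounting it at /root/.aws leaves the CLI unable to find /root/.aws/credentials. The base image below is a placeholder for one that has the AWS CLI installed.
# syntax = docker/dockerfile:1.0-experimental
FROM amazon/aws-cli
# the secret lands at the default credentials path, so no env var is required
RUN --mount=type=secret,id=mysecret,dst=/root/.aws/credentials aws ssm get-parameter ...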

Related

Use/pass system.accesstoken into Dockerfile as ENV var for npm auth token using Powershell

I'm trying to automate a docker build that needs to access a personal npm registry on Azure
Currently I manually grab a personal Azure DevOps PAT token, base64 encode it and then save it to a file
I then mount the file as a secret during the docker build
I have this step in the Azure pipeline yaml that calls a Powershell script
steps:
- template: docker/steps/docker-login.yml@templates
  parameters:
    containerRegistry: ${{ parameters.containerRegistry }}
- pwsh: |
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
And this in the Powershell script to build the image
function Build-UI-Image {
    $npmTokenFilePath = Join-Path $buildContext "docker/secrets/npm_token.txt"
    if (-not (Test-Path $npmTokenFilePath)) {
        Write-Error "Missing file: $npmTokenFilePath"
    }
    docker build `
        -f $dockerfilePath `
        --secret id=npm_token,src=$npmTokenFilePath `
        --build-arg "BUILDKIT_INLINE_CACHE=1" `
        .....(rest of code)
}
And finally a Dockerfile with the value mounted as a secret
RUN --mount=type=secret,id=npm_token \
--mount=type=cache,sharing=locked,target=/tmp/yarn-cache <<EOF
export NPM_TOKEN=$(cat /run/secrets/npm_token)
yarn install --frozen-lockfile --silent --non-interactive --cache-folder /tmp/yarn-cache
unset NPM_TOKEN
EOF
I've read multiple articles about using the Azure built-in 'system.accesstoken' to authorise with private npm registries, but I'm not sure how to go about this for my scenario (as I am not using Azure's predefined tasks, and I'm using PowerShell rather than bash).
I think I can add this to the pipeline yaml as the first step
env:
  SYSTEM_ACCESSTOKEN: $(System.AccessToken)
But I'm not sure how I then pass that to the Powershell build script and ultimately get it into the Docker container as an ENV that I can then reference instead of the file?
Do I maybe need to add it as another --build-arg in the Powershell script like this?
--build-arg NPM_TOKEN=$(System.AccessToken)
And then if it was exposed as an ENV value inside the container, how would I reference it?
Would it just be there as NPM_TOKEN and I don't need to do anything further?
Or would I need to take it and try to base64 encode it and export it again?
Bit out of my depth as I've never used a private npm registry before.
Appreciate any info or suggestions.
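For what it's worth, one hedged sketch of how this could be wired up, assuming a Docker/Buildx version that supports sourcing a build secret from an environment variable (the env= form of --secret); the pipeline maps System.AccessToken into SYSTEM_ACCESSTOKEN via the env: block shown above, and the Dockerfile's existing --mount=type=secret,id=npm_token step stays unchanged because the secret is still exposed at /run/secrets/npm_token:
# hypothetical change inside Build-UI-Image: source the secret from the
# SYSTEM_ACCESSTOKEN environment variable instead of a file on disk
# (on older Docker versions, write $env:SYSTEM_ACCESSTOKEN to a temp file
# and keep the src= form instead)
docker build `
    -f $dockerfilePath `
    --secret id=npm_token,env=SYSTEM_ACCESSTOKEN `
    --build-arg "BUILDKIT_INLINE_CACHE=1" `
    $buildContext
Whether the registry still expects the base64-encoded form of the token is a separate question; if so, that transformation would need to happen before or inside the build.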

Mounted the AWS CLI credentials as volume to docker container however still credentials are not being referred

I have created a docker image using the AmazonLinux:2 base image in my Dockerfile. This docker container will run as a Jenkins build agent on a Linux server and has to make certain AWS API calls. In my Dockerfile, I'm copying a shell script called assume-role.sh.
Code snippet:-
COPY ./assume-role.sh .
RUN ["chmod", "+x", "assume-role.sh"]
ENTRYPOINT ["/assume-role.sh"]
CMD ["bash", "--"]
Shell script definition:-
#!/usr/bin/env bash
#echo Your container args are: "${1} ${2} ${3} ${4} ${5}"
echo Your container args are: "${1}"
ROLE_ARN="${1}"
AWS_DEFAULT_REGION="${2:-us-east-1}"
SESSIONID=$(date +"%s")
DURATIONSECONDS="${3:-3600}"
#Temporary loggings starts here
id
pwd
ls .aws
cat .aws/credentials
#Temporary loggings ends here
# AWS STS AssumeRole
RESULT=(`aws sts assume-role --role-arn $ROLE_ARN \
--role-session-name $SESSIONID \
--duration-seconds $DURATIONSECONDS \
--query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
--output text`)
# Setting up temporary creds
export AWS_ACCESS_KEY_ID=${RESULT[0]}
export AWS_SECRET_ACCESS_KEY=${RESULT[1]}
export AWS_SECURITY_TOKEN=${RESULT[2]}
export AWS_SESSION_TOKEN=${AWS_SECURITY_TOKEN}
echo 'AWS STS AssumeRole completed successfully'
# Making test AWS API calls
aws s3 ls
echo 'test calls completed'
I'm running the docker container like this:-
docker run -d -v $PWD/.aws:/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins
What I'm trying to do here is mount the .aws credentials from the host directory into the container at the root level. The volume mount is successful, and I can see the log output from these lines in the shell script:
ls .aws
cat .aws/credentials
It tells me there is a .aws folder with credentials inside it at the root level (/). However, the AWS CLI is somehow not picking them up, and as a result the remaining API calls, such as AWS STS assume-role, are failing.
Can somebody please advise?
[Output of docker run]
Your container args are: arn:aws:iam::829327394277:role/myjenkins
uid=0(root) gid=0(root) groups=0(root)
/
config
credentials
[default]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXXXP
aws_secret_access_key = e8SYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYxYm
Unable to locate credentials. You can configure credentials by running "aws configure".
AWS STS AssumeRole completed successfully
Unable to locate credentials. You can configure credentials by running "aws configure".
test calls completed
I found the issue finally.
The path was wrong while mounting the .aws volume to the container.
Instead of this -v $PWD/.aws:/.aws:ro, it was supposed to be -v $PWD/.aws:/root/.aws:ro
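In other words, the corrected run command looks like this, so the CLI finds /root/.aws/credentials in root's home directory:
docker run -d -v $PWD/.aws:/root/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins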

How to configure Databricks token inside Docker File

I have a Dockerfile where I want to:
Download the Databricks CLI
Configure the CLI by adding a host and token
And then run a Python file that uses the Databricks token
I am able to install the CLI in the Docker image, and I have a working Python file that is able to submit the job to the Databricks API, but I'm unsure how to configure the CLI within Docker.
Here is what I have
FROM python
MAINTAINER nope
# Creating Application Source Code Directory
RUN mkdir -p /src
# Setting Home Directory for containers
WORKDIR /src
# Installing python dependencies
RUN pip install databricks_cli
# Not sure how to do this part???
# databricks token kicks off the config via CLI
RUN databricks configure --token
# Copying src code to Container
COPY . /src
# Start Container
CMD echo $(databricks --version)
# Kicks off Python job
CMD ["python", "get_run.py"]
If I was to do databricks configure --token in the CLI it would prompt for the configs like this :
databricks configure --token
Databricks Host (should begin with https://):
It's better not to do it this way for multiple reasons:
It's insecure - if you configure the Databricks CLI this way, it will generate a file inside the container that could be read by anyone who has access to it
The token has a time-to-live (90 days by default) - this means that you'll need to rebuild your containers regularly...
Instead, it's better to pass two environment variables to the container, and they will be picked up by the databricks command. These are DATABRICKS_HOST and DATABRICKS_TOKEN, as described in the documentation.
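For example, a minimal sketch of passing them at run time (the image name and values are placeholders):
docker run \
  -e DATABRICKS_HOST=https://your-databricks-host-url \
  -e DATABRICKS_TOKEN=your-api-token \
  my-databricks-image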
When databricks configure is run successfully, it writes the information to the file ~/.databrickscfg:
[DEFAULT]
host = https://your-databricks-host-url
token = your-api-token
One way you could set this in the container is by using a startup command (syntax here for docker-compose.yml):
/bin/bash -ic "echo -e '[DEFAULT]\nhost = ${HOST_URL}\ntoken = ${TOKEN}' > ~/.databrickscfg"
It is not very secure to put your token in the Dockerfile. However, if you want to pursue this approach you can use the code below.
RUN export DATABRICKS_HOST=XXXXX && \
export DATABRICKS_API_TOKEN=XXXXX && \
export DATABRICKS_ORG_ID=XXXXX && \
export DATABRICKS_PORT=XXXXX && \
export DATABRICKS_CLUSTER_ID=XXXXX && \
echo "{\"host\": \"${DATABRICKS_HOST}\",\"token\": \"${DATABRICKS_API_TOKEN}\",\"cluster_id\":\"${DATABRICKS_CLUSTER_ID}\",\"org_id\": \"${DATABRICKS_ORG_ID}\", \"port\": \"${DATABRICKS_PORT}\" }" >> /root/.databricks-connect
Make sure to run all the commands in a single RUN instruction. Otherwise, variables such as DATABRICKS_HOST or DATABRICKS_API_TOKEN may not propagate properly.
If you want to connect to a Databricks Cluster within a docker container you need more configuration. You can find the required details in this article: How to Connect a Local or Remote Machine to a Databricks Cluster
The number of personal access tokens per user is limited to 600
But via bash it is easy:
echo "y
$(WORKSPACE-REGION-URL)
$(CSE-DEVELOP-PAT)
$(EXISTING-CLUSTER-ID)
$(WORKSPACE-ORG-ID)
15001" | databricks-connect configure
If you want to access Databricks models/download_artifacts using a hostname and access token, the way you do with the Databricks CLI:
databricks configure --token --profile profile_name
Databricks Host (should begin with https://): your_hostname
Token : token
If you have created a profile name and pushed models, and you just want to access the models/artifacts in Docker using this profile, add the code below to the Dockerfile.
RUN pip install databricks_cli
ARG HOST_URL
ARG TOKEN
RUN echo "[<profile name>]\nhost = ${HOST_URL}\ntoken = ${TOKEN}" >> ~/.databrickscfg
# this will create your .databrickscfg file with host and token after the build, the same way databricks configure would
Add the args HOST_URL and TOKEN in the docker build, e.g.:
your host name = https://adb-5443106279769864.19.azuredatabricks.net/
your access token = dapi********************53b1-2
sudo docker build -t test_tag --build-arg HOST_URL=<your host name> --build-arg TOKEN=<your access token> .
And now you can access your experiments using this profile name, Databricks:profile_name, in the code.

Docker: share private key via arguments

I want to share my github private key into my docker container.
I'm thinking about sharing it via docker-compose.yml via ARGs.
Is it possible to share private key using ARG as described here?
Pass a variable to a Dockerfile from a docker-compose.yml file
# docker-compose.yml file
version: '2'
services:
  my_service:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
      args:
        - PRIVATE_KEY=MULTI-LINE PLAIN TEXT RSA PRIVATE KEY
and then I expect to use it in my Dockerfile as:
ARG PRIVATE_KEY
RUN echo $PRIVATE_KEY >> ~/.ssh/id_rsa
RUN pip install git+ssh://git@github.com/...
Is it possible via ARGs?
If you can use the latest docker 1.13 (or 17.03 ce), you could then use the docker swarm secret: see "Managing Secrets In Docker Swarm Clusters"
That allows you to associate a secret to a container you are launching:
docker service create --name test \
--secret my_secret \
--restart-condition none \
alpine cat /run/secrets/my_secret
If docker swarm is not an option in your case, you can try and setup a docker credential helper.
See "Getting rid of Docker plain text credentials". But that might not apply to a private ssh key.
You can check other relevant options in "Secrets and LIE-abilities: The State of Modern Secret Management (2017)", using standalone secret manager like Hashicorp Vault.
Although the ARG itself will not persist in the built image, when you reference the ARG variable somewhere in the Dockerfile, that will be in the history:
FROM busybox
ARG SECRET
RUN set -uex; \
echo "$SECRET" > /root/.ssh/id_rsa; \
do_deploy_work; \
rm /root/.ssh/id_rsa
As VonC notes, there's now a swarm feature to store and manage secrets, but that doesn't (yet) solve the build-time problem.
Builds
Coming in Docker ~ 1.14 (or whatever the equivalent new release name is) should be the --build-secret flag (also #28079) that lets you mount a secret file during a build.
In the meantime, one solution is to run a network service somewhere that you can pull secrets from with a client during the build. If the build puts the secret in a file, like ~/.ssh/id_rsa, the file must be deleted before the RUN step that created it completes.
The simplest solution I've seen is serving a file with nc:
docker network create build
docker run --name=secret \
--net=build \
--detach \
-v ~/.ssh/id_rsa:/id_rsa \
busybox \
sh -c 'nc -lp 8000 < /id_rsa'
docker build --network=build .
Then collect the secret, store it, use it and remove it in the Dockerfile RUN step.
FROM busybox
RUN set -uex; \
nc secret 8000 > /id_rsa; \
cat /id_rsa; \
rm /id_rsa
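For reference, the build-time secret support mentioned above has since shipped as BuildKit secret mounts (the same mechanism used in the first question on this page); a minimal sketch, assuming BuildKit is enabled and do_deploy_work again stands in for the real command:
# syntax = docker/dockerfile:1.0-experimental
FROM busybox
# the key exists only for the duration of this RUN step and never lands in a layer
RUN --mount=type=secret,id=ssh_key,dst=/root/.ssh/id_rsa do_deploy_work
built with:
docker build --secret id=ssh_key,src=$HOME/.ssh/id_rsa .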
Projects
There are a number of utilities with this same premise, at various levels of complexity and features. Some are generic solutions, like Hashicorp's Vault.
Dockito Vault
Hashicorp Vault
docker-ssh-exec

Google gsutil auth without prompt

I want to use gsutil inside a Docker container. I have created an OAuth2 service account JSON file.
How can I setup gsutil auth to use the JSON config file and execute commands without prompting?
Currently I get something like this:
$ gsutil config -e
It looks like you are trying to run "/.../google-cloud-sdk/bin/bootstrapping/gsutil.py config".
The "config" command is no longer needed with the Cloud SDK.
To authenticate, run: gcloud auth login
Really run this command? (y/N) y
This command will create a boto config file at /.../.boto
containing your credentials, based on your responses to the following questions.
What is the full path to your private key file?
What command/parameters/setup do I have to use to circumvent the prompts?
Solved this issue by executing:
gcloud auth activate-service-account --key-file=/opt/gcloud/auth.json
The whole example and finished container can be found here: blacklabelops/gcloud
If you want gsutil only and bypass the prompt you can do it easily with an expect script:
#!/usr/bin/expect
spawn gsutil config -e
expect "What is the full path to your private key file?" { send "/path/your.key\r" }
expect "Would you like gsutil to change the file permissions for you? (y/N)" { send "y\r" }
expect "What is your project-id?" { send "your-projet-42\r" }
interact
The -o Credentials:gs_service_key_file=<path to JSON file> option does the job, using the boto configuration override parameters documented at https://cloud.google.com/storage/docs/gsutil/addlhelp/TopLevelCommandLineOptions
$ gsutil -v
gsutil version: 4.57
$ gsutil -o Credentials:gs_service_key_file=key.json ls -al gs://bucket/filename
79948 2021-05-24T02:12:25Z gs://bucket/filename#1111111145678393 metageneration=2
The above solutions didn't work for me.
What solved the problem for me was the following:
Set the gs_service_key_file in the [Credentials] section of the boto config file (see here)
Activate your service account with gcloud auth activate-service-account
Set your default project in gcloud config
Dockerfile snippet:
ENV GOOGLE_APPLICATION_CREDENTIALS /.gcp/your_service_account_key.json
ENV GOOGLE_PROJECT_ID your-project-id
RUN echo '[Credentials]\ngs_service_key_file = /.gcp/your_service_account_key.json' \
> /etc/boto.cfg
RUN mkdir /.gcp
COPY your_service_account_key.json $GOOGLE_APPLICATION_CREDENTIALS
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project $GOOGLE_PROJECT_ID
RUN gcloud config set project $GOOGLE_PROJECT_ID
Using only gsutil:
First, run this command to configure the authentication manually:
gsutil config -a
This will create a /root/.boto file with the needed credentials.
Copy that file into your Docker image.
gsutil will now work with those credentials.
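A minimal sketch of that last step, assuming the generated .boto file has been copied into the build context next to the Dockerfile (the base image is a placeholder for any image with gsutil installed), with the usual caveat that the credentials end up baked into the image:
FROM google/cloud-sdk
# bake the pre-generated boto config into the image so gsutil finds the credentials
COPY .boto /root/.boto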
