Register environment variables in Dockerfiles dynamically - docker

Problem
Inside a Dockerfile, I want to do something like this:
ENV CONTAINER_INFO_GCC="$(gcc --version | head -n 1)"
In a perfect world, at build time, this would produce this environment variable:
CONTAINER_INFO_GCC="gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609"
Motivation
I want to see what is installed in my Docker container in the Azure DevOps system capabilities window.
For example, if I want to know which version of gcc is installed in my build environment, I could just look it up there.
Question
I have tried doing something like this, without the desired effect.
RUN echo CONTAINER_INFO_GCC="$(gcc --version | head -n 1)" >> /etc/environment
Is there a way to use the Dockerfile ENV command dynamically?

As each Dockerfile command is run, it generates an intermediate container.
So shell variables exported in a RUN, COPY, CMD, ... step cannot be passed on to the next step.
You need to use ENV to set an environment variable, but ENV does not execute commands.
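A common workaround (a sketch, not part of the original answer; the /etc/gcc_version file and the entrypoint export are illustrative assumptions) is to capture the value into a file at build time and export it when the container starts:
RUN gcc --version | head -n 1 > /etc/gcc_version
# an entrypoint script can then export it at run time:
# export CONTAINER_INFO_GCC="$(cat /etc/gcc_version)"
Alternatively, the string can be computed outside the build and fed in with ARG and --build-arg, though that captures the host's gcc rather than the image's.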
In Azure DevOps, you could use a self-hosted agent (running in Docker) to create a variable.
Here are the steps:
Step 1: Create a self-hosted agent running in Docker.
Step 2: In the build pipeline, run gcc --version | head -n 1.
Here is a blog about creating a self-hosted agent running in Docker.
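For reference, an agent image built following Microsoft's "Run a self-hosted agent in Docker" instructions is typically started along these lines (the dockeragent:latest image name is an assumption; AZP_URL, AZP_TOKEN, AZP_POOL and AZP_AGENT_NAME are the documented variables):
docker run -e AZP_URL=https://dev.azure.com/<organization> \
           -e AZP_TOKEN=<PAT> \
           -e AZP_POOL=<pool-name> \
           -e AZP_AGENT_NAME=docker-agent \
           dockeragent:latest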
Update:
You could try adding a container resource to the Azure pipeline; then you can run the script in that container.
Here is a doc about this feature.
Here is the YAML example:
resources:
  containers:
  - container: python
    image: python:3.8
trigger:
- none
pool:
  vmImage: ubuntu-16.04
steps:
- script: |
    gcc --version | head -n 1
    echo "##vso[task.setvariable variable=test]$(gcc --version | head -n 1)"
  displayName: 'Run a multi-line script'
  target:
    container: python
    commands: restricted
- script: |
    echo "$(test)"
  displayName: 'Run a multi-line script'
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $token = "PAT"
      $url = "https://dev.azure.com/{Organization Name}/_apis/distributedtask/pools/{Pool Id}/agents/{AgentID}/usercapabilities?api-version=5.0"
      $token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
      $JSON = @'
      {
        "Gcc-version":"$(test)"
      }
      '@
      $response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method PUT -Body $JSON -ContentType application/json
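In this example the first step writes the gcc version into the pipeline variable test via the ##vso[task.setvariable] logging command, so the second script step and the PowerShell REST call can read it back as $(test).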
Result:
The REST API is used to update the agent's user capabilities.
Note: only the user-defined capabilities can be changed manually.
Alternatively, you could still create a self-hosted agent running in Docker.
Then you could run the same script directly on the agent and get the tool version.

Related

Use/pass system.accesstoken into Dockerfile as ENV var for npm auth token using Powershell

I'm trying to automate a Docker build that needs to access a personal npm registry on Azure.
Currently I manually grab a personal Azure DevOps PAT token, base64 encode it and then save it to a file.
I then mount the file as a secret during the docker build.
I have this step in the Azure pipeline YAML that calls a PowerShell script:
steps:
- template: docker/steps/docker-login.yml@templates
  parameters:
    containerRegistry: ${{ parameters.containerRegistry }}
- pwsh: |
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
And this in the PowerShell script to build the image:
function Build-UI-Image {
    $npmTokenFilePath = Join-Path $buildContext "docker/secrets/npm_token.txt"
    if (-not (Test-Path $npmTokenFilePath)) {
        Write-Error "Missing file: $npmTokenFilePath"
    }
    docker build `
        -f $dockerfilePath `
        --secret id=npm_token,src=$npmTokenFilePath `
        --build-arg "BUILDKIT_INLINE_CACHE=1" `
        .....(rest of code)
}
And finally a Dockerfile with the value mounted as a secret
RUN --mount=type=secret,id=npm_token \
--mount=type=cache,sharing=locked,target=/tmp/yarn-cache <<EOF
export NPM_TOKEN=$(cat /run/secrets/npm_token)
yarn install --frozen-lockfile --silent --non-interactive --cache-folder /tmp/yarn-cache
unset NPM_TOKEN
EOF
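As an aside (not stated in the original post), both the --mount flags and the <<EOF heredoc form rely on BuildKit, and heredocs need a recent Dockerfile syntax, typically declared on the first line of the Dockerfile:
# syntax=docker/dockerfile:1.4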
I've read multiple articles about using the Azure built-in 'system.accesstoken' to authorise with private npm registries, but I'm not sure how to go about this for my scenario (as I am not using Azure predefined tasks, and I'm using PowerShell, not bash).
I think I can add this to the pipeline YAML as the first step:
env:
  SYSTEM_ACCESSTOKEN: $(System.AccessToken)
But I'm not sure how I then pass that to the PowerShell build script and ultimately get it into the Docker container as an ENV that I can then reference instead of the file?
Do I maybe need to add it as another --build-arg in the PowerShell script, like this?
--build-arg NPM_TOKEN=$(System.AccessToken)
And then if it was exposed as an ENV value inside the container, how would I reference it?
Would it just be there as NPM_TOKEN and I don't need to do anything further?
Or would I need to take it and try to base64 encode it and export it again?
Bit out of my depth as I've never used a private npm registry before.
Appreciate any info or suggestions.
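One way this could be wired up (a sketch, not an answer from the thread; whether the token still needs base64 encoding depends on how the .npmrc consumes it): map System.AccessToken into the pwsh step's environment, then have the script write it to the existing secret file so the docker build call and the Dockerfile stay unchanged.
- pwsh: |
    ./scripts/buildAndPushContainerImage.ps1 `
      -containerRepository ${{ parameters.containerRepository }} `
      -branchName ${{ parameters.branchName }} `
      -version ${{ parameters.version }} `
      -action Build
  displayName: Docker build
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
Then, inside Build-UI-Image, before the docker build call:
# hypothetical: write the pipeline token to the same secret file the build already mounts
$npmTokenFilePath = Join-Path $buildContext "docker/secrets/npm_token.txt"
[System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($env:SYSTEM_ACCESSTOKEN)) | Set-Content $npmTokenFilePath -NoNewline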

Gitlab job not pulling Docker image

Running a GitLab project that uses a Docker image I created.
Problem: the GitLab job execution log shows that the image is not being pulled.
Here is the .gitlab-ci.yml file, with the company stuff removed:
default:
  image:
    name: guythedocker/jmeter-mssql-windows:latest
    entrypoint: [""]
api test:
  stage: test
  script:
    - get-variable
    - $env:path -split ";"
    - echo $WORKDIR
    - Get-ChildItem -Path / -File
    - entrypoint.ps1 --version
    - |
      /entrypoint.ps1 -n -t ./JMeter/xxx.jmx -l ./xxx.log -e -o ./testresults/xxx-Jthreads=$xx-Jrampup=$xxx -JtestCases=$xxx -Jhost=xxx.com -f
  retry: 2
  only:
    - schedules
  artifacts:
    paths:
      - testresults
  tags:
    - win2019
Here is the Dockerfile, which is essentially copied from QAInsights' Dockerfile:
# Dockerfile for Apache JMeter for Windows
# Indicates that the windowsservercore along with OpenJDK will be used as the base image.
# Based on work by NaveenKumar Namachivayam
FROM openjdk:8-windowsservercore
ARG JMETER_VERSION="5.4.3"
ENV JMETER_HOME /apache-jmeter-$JMETER_VERSION/apache-jmeter-$JMETER_VERSION/
# Metadata indicating an image maintainer.
LABEL maintainer="Guy L."
# Downloads JMeter from one of the mirrors, if you prefer to change, you can change the URL
RUN Invoke-WebRequest -URI https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-$env:JMETER_VERSION.zip \
-UseBasicParsing -Outfile /apache-jmeter-$env:JMETER_VERSION.zip
# Extract the downloaded zip file
RUN Expand-Archive /apache-jmeter-$env:JMETER_VERSION.zip -DestinationPath /apache-jmeter-$env:JMETER_VERSION
# For JDBC
RUN Invoke-WebRequest -URI https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/9.4.1.jre8/mssql-jdbc-9.4.1.jre8.jar -Outfile mssql-jdbc-9.4.1.jre8.jar
RUN Invoke-WebRequest -URI https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc_auth/9.4.1.x86/mssql-jdbc_auth-9.4.1.x86.dll -Outfile mssql-jdbc_auth-9.4.1.x86.dll
COPY ./mssql-jdbc-9.4.1.jre8.jar ${JMETER_HOME}/lib/
COPY ./mssql-jdbc_auth-9.4.1.x86.dll ${JMETER_HOME}/lib/
# Copies the entrypoint.ps1
COPY /entrypoint.ps1 /entrypoint.ps1
COPY /jmeter-plugins-install.ps1 /jmeter-plugins-install.ps1
RUN ["powershell.exe","/jmeter-plugins-install.ps1"]
# Sets the Working directory
WORKDIR ${JMETER_HOME}/bin
# Sets a command or process that will run each time a container is run from the new image. For detailed instruction, go to entrypoint.ps1 file.
ENTRYPOINT ["powershell.exe", "/entrypoint.ps1"]
The image was successfully published.
So why is my Gitlab project not pulling this image?
It is a Windows runner (since I'm using the win2019 tag, and as I can see in the job history).
Your runner is configured to use the shell executor (as you can see on line 3 of your screenshot), but to run a Docker image you have to use the docker or docker-windows executor (depending on whether the container you want to run is Linux- or Windows-based).
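If re-registering the runner is an option (a sketch; the URL and token are placeholders), a Windows runner using the docker-windows executor can be registered roughly like this:
gitlab-runner register `
  --url https://gitlab.com/ `
  --registration-token <token> `
  --executor docker-windows `
  --docker-image "guythedocker/jmeter-mssql-windows:latest" `
  --tag-list win2019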

How to setup google cloud Cloudbuild.yaml to replicate a jenkins job?

I have the following script that runs in my Jenkins job:
set +x
SERVICE_ACCOUNT=`cat "$GCLOUD_AUTH_FILE"`
docker login -u _json_key -p "${SERVICE_ACCOUNT}" https://gcr.io
set -x
docker pull gcr.io/$MYPROJECT/automation:master
docker run --rm --attach STDOUT -v "$(pwd)":/workspace -v "$GCLOUD_AUTH_FILE":/gcloud-auth/service_account_key.json -v /var/run/docker.sock:/var/run/docker.sock -e "BRANCH=master" -e "PROJECT=myproject" gcr.io/myproject/automation:master "/building/buildImages.sh" "myapp"
if [ $? -ne 0 ]; then
exit 1
fi
I am now trying to do this in cloudbuild.yaml so that I can run my script using my own automation image (which has a bunch of dependencies such as docker, JDK and pip installed), and mount my git folders in my workspace directory.
I tried putting my cloudbuild.yaml at the top level of my git repo and set it up like this:
steps:
- name: 'gcr.io/myproject/automation:master'
  volumes:
  - name: 'current-working-dir'
    path: /mydirectory
  args: ['bash', '-c', '/building/buildImages.sh', 'myapp']
timeout: 4000s
But this gives me an error saying:
invalid build: Volume "current-working-dir" is only used by one step
Just FYI, my script buildImages.sh copies folders and Dockerfiles, runs pip install, npm and gradle commands, and then docker build commands (kind of an all-in-one solution).
What is the way to translate my script to cloudbuild.yaml?
Try this in your cloudbuild.yaml:
steps:
- name: 'gcr.io/<your-project>/<image>'
  args: ['sh', '<your-script>.sh']
Using this, I was able to pull the image from Google Container Registry that has my script, then run the script using 'sh'. It didn't matter where the script is. I'm using alpine as the base image in my Dockerfile.
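A sketch closer to the original Jenkins flow (assumptions: Cloud Build checks the repo out into /workspace and shares it between steps automatically, so the single-step volumes block that triggered the "only used by one step" error can simply be dropped; the env values mirror the Jenkins command line):
steps:
- name: 'gcr.io/myproject/automation:master'
  entrypoint: 'bash'
  args: ['-c', '/building/buildImages.sh myapp']
  env:
  - 'BRANCH=master'
  - 'PROJECT=myproject'
timeout: 4000s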

Github Actions workflow fails when running steps in a container

I've just started setting up a GitHub Actions workflow for one of my projects. I attempted to run the workflow steps inside a container with this workflow definition:
name: TMT-Charts-CI
on:
  push:
    branches:
      - master
      - actions-ci
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://alpine/helm:2.13.0
    steps:
      - name: Checkout Code
        uses: actions/checkout@v1
      - name: Validate and Upload Chart to Chart Museum
        run: |
          echo "Hello, world!"
          export PAGER=$(git diff-tree --no-commit-id --name-only -r HEAD)
          echo "Changed Components are => $PAGER"
          export COMPONENT="NOTSET"
          for CHANGE in $PAGER; do ENV_DIR=${CHANGE%%/*}; done
          for CHANGE in $PAGER; do if [[ "$CHANGE" != .* ]] && [[ "$ENV_DIR" == "${CHANGE%%/*}" ]]; then export COMPONENT="$CHANGE"; elif [[ "$CHANGE" == .* ]]; then echo "Not a Valid Dir for Helm Chart" ; else echo "Only one component per PR should be changed" && exit 1; fi; done
          if [ "$COMPONENT" == "NOTSET" ]; then echo "No component is changed!" && exit 1; fi
          echo "Initializing Component => $COMPONENT"
          echo $COMPONENT | cut -f1 -d"/"
          export COMPONENT_DIR="${COMPONENT%%/*}"
          echo "Changed Dir => $COMPONENT_DIR"
          cd $COMPONENT_DIR
          echo "Install Helm and Upload Chart If Exists"
          curl -L https://git.io/get_helm.sh | bash
          helm init --client-only
But the workflow fails, stating that the container stopped immediately.
I have tried many images, including the "alpine:3.8" image described in the official documentation, but the container stops.
According to Workflow syntax for GitHub Actions, in the Container section: "A container to run any steps in a job that don't already specify a container." My assumption is that the container would be started and the steps would be run inside the Docker container.
We can achieve this by making a custom Docker image. GitHub runners somehow stop the running container after executing the entrypoint command, so I made a Docker image with an entrypoint that keeps the container alive, so the container doesn't die after it starts.
Here is the custom Dockerfile (https://github.com/rizwan937/Helm-Image)
You can publish this image to Docker Hub and use it in the workflow file like:
container:
  image: docker://rizwan937/helm
You can add this entrypoint to any Docker image so that it remains alive for the subsequent steps' execution.
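The linked repository is not reproduced here, but the idea is roughly a Dockerfile like this (a sketch, not the actual file; tail -f /dev/null is just one common way to keep a container idle):
FROM alpine/helm:2.13.0
# override the default entrypoint so the container keeps running and the job steps can execute inside it
ENTRYPOINT ["tail", "-f", "/dev/null"]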
This is a temporary solution; if anyone has a better one, let me know.

How can I use a dredd docker image interactively?

I would like to use the Docker container apiaryio/dredd instead of the npm package dredd. I am not familiar with running and debugging npm-based Docker images. How can I run the basic usage example from the npm package's "Quick Start" section
$ dredd init
$ dredd
if I have a Swagger file instead of the api-description.apib in $PWD/api/api.yaml or $PWD/api/api.json?
TL;DR
Run the dredd image as a command-line tool. Dredd image at Docker Hub:
docker run -it -v $PWD:/api -w /api apiaryio/dredd init
[Optional] Turn it into a script:
#!/bin/bash
echo '***'
echo 'Root dir is /api'
export MYIP=`ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'`
echo 'Host ip is: ' $MYIP
echo 'Configure URL of tested API endpoint: http://api-srv:<endpoint-port>. Set api-srv to point to your server.'
echo 'This script will set api-srv to docker host machine - ' $MYIP
echo '***'
docker run -it --add-host "api-srv:$MYIP" -v $PWD:/api -w /api apiaryio/dredd dredd $1
[Optional] And put this script in a folder that is in your PATH variable and create an alias to shorten it:
alias dredd='bash ./scripts/dredd.sh'
Code at Github gist.
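For reference, running Dredd directly against a Swagger file could look roughly like this (the api/api.yaml path and port 8080 are assumptions based on the question):
docker run -it --add-host "api-srv:$MYIP" -v $PWD:/api -w /api apiaryio/dredd dredd ./api/api.yaml http://api-srv:8080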
