In my action.yml I defined an input:
name: 'test action'
author: Param Thakkar
description: 'test'
inputs:
  test_var:
    description: 'A test variable'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
And in my workflow I passed the test_var:
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Test the GH action
        uses: paramt/github-actions-playground@master
        with:
          test_var: "this is just a test"
So there should be an environment variable that's created when the workflow runs, right? But when I run this short python script:
import os
print(os.getenv('TEST_VAR'))
print("It works!")
exit(0)
It prints:
None
It works!
I think that I have to pass the ENV variable through my Dockerfile... Right now my Dockerfile looks like this:
FROM python:latest
# Add files to the image
ADD entrypoint.py /entrypoint.py
ADD requirements.txt /requirements.txt
# Save ENV var in a temp file
RUN $TEST_VAR > /temp_var
# Install dependencies and make script executable
RUN pip install -r requirements.txt
RUN chmod +x entrypoint.py
RUN echo "temp var: "
RUN cat /temp_var
# Run script with the ENV var
ENTRYPOINT export TEST_VAR="$TEST_VAR"; /entrypoint.py
But the variable isn't echoed and isn't passed to the Python script either... am I missing something? When I tried setting my $TEMP_VAR to a random piece of string, it was sent through to the Python script. Is this a mistake on my part, or is the GitHub action not working as intended?
Here's the link to the test repo
I think you are trying to read the wrong environment variable name. GitHub Actions adds INPUT_ to the name of the input variable. So try the following:
print(os.getenv('INPUT_TEST_VAR'))
From the documentation:
When you specify an input to an action in a workflow file or use a default input value, GitHub creates an environment variable for the input with the name INPUT_<VARIABLE_NAME>. The environment variable created converts input names to uppercase letters and replaces spaces with _ characters.
For example, if a workflow defined the numOctocats and octocatEyeColor inputs, the action code could read the values of the inputs using the INPUT_NUMOCTOCATS and INPUT_OCTOCATEYECOLOR environment variables.
https://help.github.com/en/articles/metadata-syntax-for-github-actions#inputs
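So, as a minimal sketch, your entrypoint.py could look something like this (same script as in the question, just reading the prefixed name):
import os

# GitHub Actions exposes the "test_var" input as INPUT_TEST_VAR
test_var = os.getenv('INPUT_TEST_VAR')
print(test_var)
print("It works!")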
A bit late, but for future readers: you can also use the env field:
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Test the GH action
        uses: paramt/github-actions-playground@master
        env:
          test_var: "this is just a test"
This variable will be available inside your Docker container when it runs, and is passed without the INPUT_ prefix.
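The script can then read it directly; a small sketch (note the name is case-sensitive and matches the key used under env):
import os

# Reads the variable set via the workflow's env field (no INPUT_ prefix)
print(os.getenv('test_var'))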
Keep env vars secret by specifying them in Settings -> Secrets in the repo, and then calling them in the workflow:
For example, consider a workflow that runs an R script followed by a Python script. First, in .github/workflows/my_job.yml notice the MY_VAR variable, which points to a stored secret via ${{ secrets.MY_VAR }}. The rest is standard code (run on cron, specify the Ubuntu OS and Docker image, define the workflow steps).
on:
  schedule:
    - cron: '0 17 * * *'
jobs:
  my_job:
    name: my job
    env:
      MY_VAR: ${{ secrets.MY_VAR }}
    runs-on: ubuntu-18.04
    container:
      image: docker.io/my_username/my_image:my_tag
    steps:
      - name: checkout_repo
        uses: actions/checkout@v2
      - name: run some code
        run: bash ./src/run.sh
Next, in the scripts that compose your workflow, you can access the env var specified in the workflow file above as you would locally.
For example, in the repo, let's assume src/run.sh calls an R script followed by a Python script.
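For illustration, a minimal src/run.sh might simply call the two scripts in order (the script names below are placeholders, not from an actual repo):
#!/bin/bash
# Both scripts inherit MY_VAR from the job-level env defined in the workflow
Rscript src/my_script.R
python3 src/my_script.py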
In R access the env var and store as an object:
my_var <- Sys.getenv("MY_VAR")
...
In Python access the env var and store as an object:
import os
my_var = os.getenv("MY_VAR")
...
See the docs here.
In my case, none of the answers worked. Here's how I fixed it.
---
name: Build and Push Docker Image to AWS ECR
on:
  push:
    branches: [ master ]
env:
  FOO: '${{ secrets.FOO }}'
jobs:
  build-and-push:
    name: Build Project and Push to AWS ECR
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      ...
      - name: Build and Push to AWS ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        run: |
          docker build --build-arg FOO=$FOO -t $ECR_REGISTRY/crew-charge-app:latest .
          docker push $ECR_REGISTRY/crew-charge-app:latest
I first had to get the FOO variable from GitHub secrets using ${{ secrets.FOO }}, then pass it to the Docker build using docker build --build-arg FOO=$FOO --build-arg BAR=$BAR -t .
Then, inside the Dockerfile, I had to declare it both as an ARG and an ENV so that it is available at all times.
FROM node:14
ARG FOO=${FOO}
ENV FOO=${FOO}
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN yarn install
COPY . /usr/src/app
RUN FOO=$FOO yarn build
EXPOSE 80
CMD ["yarn", "start" ]
The important part was to RUN FOO=$FOO yarn build, because setting the ENV alone doesn't pass it on to the container.
I am working on a Flask application and setting up a GitHub pipeline. My Dockerfile has an entrypoint that runs a couple of commands to upgrade the DB and start gunicorn.
This works perfectly fine when running locally, but when deploying through GitHub Actions it just ignores the entrypoint and does not run those commands.
Here is my Dockerfile:
FROM python:3.10-slim
WORKDIR /opt/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install --upgrade pip
COPY ./requirements.txt /opt/app/requirements.txt
RUN chmod +x /opt/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /opt/app/
RUN chmod +x /opt/app/docker-entrypoint.sh
ENTRYPOINT [ "/opt/app/docker-entrypoint.sh" ]
Docker entrypoint content:
#! /bin/sh
echo "*********Upgrading database************"
flask db upgrade
echo "**************Statring gunicorn server***************"
gunicorn --bind 0.0.0.0:5000 wsgi:app
echo "************* started gunicorn server****************"
Here is my GitHub action -
name: CI
on:
  push:
    branches: [master]
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout files
        uses: actions/checkout@v2
  deploy:
    needs: build_and_push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout files
        uses: actions/checkout@v2
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@v0.1.3
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          port: 22
      - name: Start containers
        run: docker-compose up -d --build
      - name: Check running containers
        run: docker ps
I am very new to Dockerfiles, writing shell commands, and GitHub Actions, so please suggest if there is any better approach.
Thanks in advance!
Instead of using ENTRYPOINT, use CMD inside your Dockerfile. A command given with CMD can be overridden when the container is run, whereas an ENTRYPOINT cannot be overridden in the same way.
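As a minimal sketch, only the last line of the Dockerfile above would change:
# CMD instead of ENTRYPOINT, so the command can still be overridden at runtime
CMD ["/opt/app/docker-entrypoint.sh"]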
I have got a repo set up with 3 major branches: master, development and demo. When I commit, the workflow runs through a global gitops file and passes in a Dockerfile.
Using GitHub.
If I'm pushing either development or demo I want to run npm run development; if master, then I want to run npm run production. However I can't figure out how to pass the branch name into the Dockerfile.
# gitops.yaml
jobs:
  gitops:
    uses: github-actions/.github/workflows/gitops.yaml@v1
    with:
      dockerfile: ./docker/php/Dockerfile
    secrets:
      DOCKER_BUILD_ARGS: |
        ENVIRONMENT=${GITHUB_REF#refs/heads/}
# gitops.yaml@v1
jobs:
  build:
    - name: Build and push
      id: docker_build
      uses: docker/build-push-action@v2
      with:
        push: true
        context: .
        file: ${{ inputs.dockerfile }}
        build-args: ${{ secrets.DOCKER_BUILD_ARGS }}
# Dockerfile
FROM node:11 as node
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
ARG ENVIRONMENT
RUN npm run ${ENVIRONMENT} && rm -rf node_modules/
The above doesn't work at all, not too sure how I go about this.
You didn't specify which event triggers your workflow.
Assuming you are using the pull_request event to trigger your workflow, you can use the github context, specifically github.head_ref (or the GITHUB_HEAD_REF environment variable):
The head_ref or source branch of the pull request in a workflow run. This property is only available when the event that triggers a workflow run is either pull_request or pull_request_target.
Since you are using Node, I suggest leveraging the NODE_ENV environment variable within your Dockerfile.
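As a rough sketch (assuming a pull_request trigger; the ENVIRONMENT build-arg name comes from your Dockerfile above), the branch could be passed to the build step like this:
- name: Build and push
  id: docker_build
  uses: docker/build-push-action@v2
  with:
    push: true
    context: .
    file: ${{ inputs.dockerfile }}
    build-args: |
      ENVIRONMENT=${{ github.head_ref }}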
I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:
The GitHub Actions YAML file is shown below. There are 2 jobs: job0 builds a Docker image with Dockerfile0 and job1 builds one with Dockerfile1.
# .github/workflows/main.yml
name: docker CI
on: push
jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile0 --tag job0
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile1 --tag job1
Dockerfile0 and Dockerfile1 share basically the same content, except for the argument in the last line:
FROM ubuntu:20.04
ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...
WORKDIR /docker_ci
RUN python3 script.py <arg>
I wonder: can I build a Docker image in the first job, and then have multiple jobs execute commands within the image built by that first job? That way I wouldn't have to keep multiple Dockerfiles, and I'd save some image-building time.
It would be better to build my image locally from the Dockerfile, so I'd like to avoid pulling a container from Docker Hub.
runs-for-docker-actions looks relevant, but I have trouble finding an example that deploys the action locally (without publishing).
It definitely sounds like you should not build two different images - not for CI, and not for local development purposes (if it matters).
From the details you have provided, I would consider the following approach:
Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.
For example:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python3", "script.py"]
This Dockerfile can be executed with any argument which will be passed on to the script.py entrypoint:
$ docker run --rm -it imagename some arguments
A sample GitHub Actions config might look like this:
jobs:
jobname:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v2
- name: Build the image
run: docker build --tag job .
- name: Test 1
run: docker run --rm -it job arg1
- name: Test 2
run: docker run --rm -it job arg2
If you insist on separating these into different jobs, then as far as I understand it, your easiest option is still to rebuild the image in each job (but from a single Dockerfile), since sharing a Docker image built in one job with another job is more complicated, and I would recommend trying to avoid it.
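For completeness, if you do want to share the image between jobs, one possible sketch (not something I'd recommend, and the artifact name here is arbitrary) is to save it as a tar artifact in the build job and load it in the dependent job:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: docker build --tag job .
      # Export the built image so another job can reuse it
      - run: docker save job -o job.tar
      - uses: actions/upload-artifact@v2
        with:
          name: docker-image
          path: job.tar
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: docker-image
      # Re-import the image and run it with the desired argument
      - run: docker load -i job.tar
      - run: docker run --rm job arg1
The upload/download of the image tarball is usually slower than simply rebuilding from a cached Dockerfile, which is part of why I'd avoid it.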
I intend to pass my npm token to GCP Cloud Build, so that I can use it in a multi-stage build to install private npm packages.
I have the following abridged Dockerfile:
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
RUN echo "NPM_TOKEN:: ${NPM_TOKEN}"
and the following abridged cloudbuild.yaml:
---
steps:
- name: gcr.io/cloud-builders/gcloud
entrypoint: 'bash'
args: [ '-c', 'gcloud secrets versions access latest --secret=my-npm-token > npm-token.txt' ]
- name: gcr.io/cloud-builders/docker
args:
- build
- "-t"
- gcr.io/my-project/my-program
- "."
- "--build-arg NPM_TOKEN= < npm-token.txt"
- "--no-cache"
I based my cloudbuild.yaml on the documentation, but it seems like I am not able to put two and two together, as the expression "--build-arg NPM_TOKEN= < npm-token.txt" does not work.
I have tested the Dockerfile by passing in the npm token directly, and it works. I simply have trouble passing a token from gcloud secrets as a build argument to docker.
Help is greatly appreciated!
Your goal is to get the secret file contents into the build argument. Therefore you have to read the file content using either NPM_TOKEN="$(cat npm-token.txt)" or NPM_TOKEN="$(< npm-token.txt)":
- name: gcr.io/cloud-builders/docker
  entrypoint: 'bash'
  args: [ '-c', 'docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN="$(cat npm-token.txt)" --no-cache' ]
Note: the gcr.io/cloud-builders/docker builder uses the exec entrypoint form, which is why you have to set the entrypoint to bash here.
Also note that you save the secret to the build workspace (/workspace/..). This also allows you to copy the secret as a file into your container.
FROM ubuntu:14.04 AS build
ARG NPM_TOKEN
COPY npm-token.txt .
RUN echo "NPM_TOKEN:: $(cat npm-token.txt)"
I wouldn't write your second step the way you did, but rather like this:
- name: gcr.io/cloud-builders/docker
  entrypoint: "bash"
  args:
    - "-c"
    - |
      docker build -t gcr.io/my-project/my-program . --build-arg NPM_TOKEN=$(cat npm-token.txt) --no-cache
Newbie in Docker & Docker containers over here.
I'm trying to figure out how I can run a script that lives inside the image from my Bitbucket Pipelines process.
Some context about where I am and some knowledge
In a Bitbucket Pipelines step you can specify any image to run in that specific step. What I already tried, and what works without problems, is for example using an image like alpine/node so I can run npm commands in my pipeline script:
definitions:
  steps:
    - step: &runNodeCommands
        image: alpine/node
        name: "Node commands"
        script:
          - npm --version
pipelines:
  branches:
    master:
      - step: *runNodeCommands
This means that each push on the master branch will run a build in which, using the alpine/node image, we can run npm commands like npm --version and install packages.
What I've done
Now I'm working with a custom container where I'm installing a few node packages (like eslint) to run commands, e.g. eslint file1.js file2.js
Great!
What I'm trying but don't know how to
I have a local bash script awesomeScript.sh with some input params in my repository, so my bitbucket-pipelines.yml file looks like:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - ./awesomeScript.sh -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
I'm using the same awesomeScript.sh in different repositories, and I want to move that functionality inside my Docker container and get rid of the script in each repository.
How can I build my Dockerfile so that I'm able to run that script "anywhere" I use the Docker image?
PS:
I've been thinking of building a node module and installing it in the Docker image, like the eslint module... but I would like to know if this approach is possible.
Thanks!
If you copy awesomeScript.sh to the my-container-with-eslint Docker image then you should be able to use it without needing the script in each repository.
Somewhere in the Dockerfile for my-container-with-eslint you can copy the script file into the image:
COPY awesomeScript.sh /usr/local/bin/
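As a rough sketch of what that Dockerfile could look like (the node:alpine base image and the choice to drop the .sh extension are assumptions, not taken from your setup):
FROM node:alpine
# Install the tools the container already provides (eslint in this example)
RUN npm install -g eslint
# Copy the script into the PATH; dropping the .sh extension matches the
# "awesomeScript" invocation used in the pipeline below
COPY awesomeScript.sh /usr/local/bin/awesomeScript
RUN chmod +x /usr/local/bin/awesomeScript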
Then in Bitbucket-Pipelines:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - awesomeScript -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
As peterevans said, if you copy the script into your Docker image, then you should be able to use it without needing the script in each repository.
In your Dockerfile add the following line:
COPY awesomeScript.sh /usr/local/bin/ # you may use ADD too
In Bitbucket-Pipelines:
pipelines:
  branches:
    master:
      - step:
          image: <your user name>/<image name>
          name: "Run script from the image"
          script:
            - awesomeScript -a $PARAM1 -e $PARAM2