Run different npm build depending on branch - docker

I have got a repo set up with 3 major branches: master, development and demo. When I commit, it runs through a global gitops workflow file and passes in a Dockerfile.
Using GitHub.
If I'm pushing either development or demo I want to run npm run development; if master, I want to run npm run production. However, I can't figure out how to pass the branch name into the Dockerfile.
# gitops.yaml
jobs:
  gitops:
    uses: github-actions/.github/workflows/gitops.yaml@v1
    with:
      dockerfile: ./docker/php/Dockerfile
    secrets:
      DOCKER_BUILD_ARGS: |
        ENVIRONMENT=${GITHUB_REF#refs/heads/}
# gitops.yaml@v1
jobs:
  build:
    steps:
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          context: .
          file: ${{ inputs.dockerfile }}
          build-args: ${{ secrets.DOCKER_BUILD_ARGS }}
# Dockerfile
FROM node:11 as node
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
ARG ENVIRONMENT
RUN npm run ${ENVIRONMENT} && rm -rf node_modules/
The above doesn't work at all, and I'm not too sure how to go about this.

You didn't specify which event triggers your workflow.
Assuming you are using the pull_request event to trigger your workflow, you can use the github context, specifically github.head_ref (or the GITHUB_HEAD_REF environment variable):
The head_ref or source branch of the pull request in a workflow run. This property is only available when the event that triggers a workflow run is either pull_request or pull_request_target.
Since you are using Node, I suggest leveraging the NODE_ENV environment variable within your Dockerfile.
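For a push-triggered workflow, a minimal sketch of that idea (assumptions: the branch name is read from github.ref_name, the NODE_ENV build-arg name follows the suggestion above, and DOCKER_BUILD_ARGS is forwarded to build-args exactly as in your snippet) maps the branch to the npm script in the caller workflow. Note that ${GITHUB_REF#refs/heads/} is shell syntax and is not expanded in plain YAML values, whereas ${{ ... }} expressions are:
# gitops.yaml (sketch)
jobs:
  gitops:
    uses: github-actions/.github/workflows/gitops.yaml@v1
    with:
      dockerfile: ./docker/php/Dockerfile
    secrets:
      DOCKER_BUILD_ARGS: |
        NODE_ENV=${{ github.ref_name == 'master' && 'production' || 'development' }}
# Dockerfile (sketch)
ARG NODE_ENV=development
RUN npm run ${NODE_ENV} && rm -rf node_modules/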

Related

GitHub Actions, MkDocs and Docker containers not playing nicely

I am having trouble getting MkDocs to work within a container run by GitHub Actions on commit.
Hi all,
I have been trying to get my Python code documentation up on GitHub. I have managed to do this via GitHub Actions running
mkdocs gh-deploy --force
using the GitHub Actions workflow below:
name: ci
on:
  push:
    branches:
      - master
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - run: pip install mkdocs
      - run: pip install mkdocs-material
      - run: pip install mkdocstrings[python]
      - run: mkdocs gh-deploy --force --config-file './docs/mkdocs.yml'
The issue with this is that mkdocstrings did not work, so no source code was shown on the webpage. I have made a Docker container that has access, via a volume bind, to the .github folder on my local computer.
Dockerfile:
FROM ubuntu:20.04
# This stops being asked for geographical location with apt-get
ARG DEBIAN_FRONTEND=noninteractive
WORKDIR /
COPY requirements.txt /
# TODO: #1 Maybe should not use update (as this can change environment from update to update)
RUN apt-get update -y
RUN apt-get install -y python3.10 python3-pip git-all expect
RUN pip install -r requirements.txt
Docker compose:
version: "3.9"
services:
mkdocs:
build: .
container_name: mkdocs
ports:
- 8000:8000
env_file:
- ../.env
volumes:
- ../:/project
working_dir: /project/docs
command:
sh -c "./gh-deploy.sh"
This works when I run the Docker container on my computer, but of course when it is run as a workflow on GitHub Actions it does not have access to a .github folder. The GitHub Actions workflow is:
name: dockerMkdocs
on:
  push:
    branches:
      - master
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      GH_user: ${{ secrets.GH_user }}
      GH_token: ${{ secrets.GH_token }}
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image and run
        run: docker compose --file ./docs/Docker-compose_GA.yml up
Does anyone know how mkdocs knows it is running in a GitHub Action in the first example above, yet does not have access to the same "environment" when running inside a Docker container? If I could answer this, I could get 'mkdocs gh-deploy --force' to work within GitHub Actions and speed up CI/CD.
My GitHub repo is at: https://github.com/healthENV/healthENVsandbox
Many thanks
I think you have two options:
1. Run the entire job inside of a container
In that case, the checkout action will get your repository and then the script you run can find the necessary files. This works because all steps in the job are executed inside of the container.
2. Mount the $GITHUB_WORKSPACE folder
Mount the folder with the checked-out repo in the container. You already mount a folder to the project folder, but it seems that is not the correct folder. You can run a check to see what the current folder is before you run docker compose (and maybe an extra one inside of the script as well).
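For example, a minimal sketch of option 1, running the whole job inside a container (assumptions: the image built from your Dockerfile is pushed somewhere the runner can pull it, the docker.io/my_username/mkdocs-build:latest name is hypothetical, and the image has git plus mkdocs, mkdocs-material and mkdocstrings[python] installed):
name: dockerMkdocs
on:
  push:
    branches:
      - master
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    container:
      image: docker.io/my_username/mkdocs-build:latest
    steps:
      - uses: actions/checkout@v3
      - run: mkdocs gh-deploy --force --config-file './docs/mkdocs.yml'
Because the checkout step runs inside the same container, the repository files (including .github) are available to mkdocs without any volume binding.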

Create nginx docker image from npm web app with GitHub Actions

I'm trying to use GitHub Actions to create a Docker image from a static web application built with npm. However, when the Dockerfile runs, the dist folder is not copied into the image as expected.
This is the Dockerfile:
FROM nginx:1.21.6-alpine
COPY dist /usr/share/nginx/html
And this is the action:
name: Deploy
on:
  push:
    tags:
      - v*
jobs:
  build-homolog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup
        uses: actions/setup-node@v3
        with:
          node-version: '16'
      - name: Build
        env:
          NODE_ENV: homolog
        run: npm install; npm run build; docker build -t my-image:1.0.0 .
The result is a working nginx, but without content; it just shows its default page. When I run the npm build and the docker build locally on my machine, it works as expected. I think there is a problem with the directory structure on the GitHub Actions machine, but I can't seem to understand it.
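One way to narrow this down (a debugging sketch, not a confirmed fix) is to list the build output on the runner right before the docker build, to check whether dist exists where the Dockerfile's COPY expects it:
- name: Build
  env:
    NODE_ENV: homolog
  run: |
    npm install
    npm run build
    ls -la dist          # confirm the build output landed here on the runner
    docker build -t my-image:1.0.0 .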

github actions: run multiple jobs in the same docker

I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:
The GitHub Actions YAML file is shown below. There are 2 jobs: job0 builds an image with Dockerfile0 and job1 builds an image with Dockerfile1.
# .github/workflows/main.yml
name: docker CI
on: push
jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile0 --tag job0
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile1 --tag job1
Dockerfile0 and Dockerfile1 share basically the same content, except for the argument in the last line:
FROM ubuntu:20.04
ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...
WORKDIR /docker_ci
RUN python3 script.py <arg>
I wonder: can I build a Docker image in the first job, and then have multiple jobs execute commands within the image built by that first job? That way I wouldn't have to keep multiple Dockerfiles and could save some image build time.
I would prefer to build my image locally from the Dockerfile, so I would like to avoid pulling a container from Docker Hub.
runs-for-docker-actions looks relevant, but I have trouble finding an example that deploys the action locally (without publishing).
It definitely sounds like you should not build two different images - not for CI, and not for local development purposes (if it matters).
From the details you have provided, I would consider the following approach:
Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.
For example:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python3", "script.py"]
An image built from this Dockerfile can be run with any arguments, which are passed on to the script.py entrypoint:
$ docker run --rm -it imagename some arguments
A sample GitHub Actions config might look like this:
jobs:
  jobname:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: docker build --tag job .
      - name: Test 1
        run: docker run --rm job arg1
      - name: Test 2
        run: docker run --rm job arg2
If you insist on separating these into different jobs, as far as I understand it your easiest option would still be to rebuild the image (still using a single Dockerfile), since sharing a Docker image built in one job with another job is a more complicated task that I would recommend trying to avoid.
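If you do end up needing separate jobs, a sketch of that more complicated route is to save the image as a tarball and pass it between jobs as an artifact (the artifact action versions and the job layout here are assumptions, and this is often slower than simply rebuilding):
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: docker build --tag job .
      - run: docker save job -o job.tar
      - uses: actions/upload-artifact@v3
        with:
          name: job-image
          path: job.tar
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: job-image
      - run: docker load -i job.tar
      - run: docker run --rm job arg1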

How to pass environment variable received from GitHub actions

In my action.yml I defined an input:
name: 'test action'
author: Param Thakkar
description: 'test'
inputs:
  test_var:
    description: 'A test variable'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
And in my workflow I passed the test_var:
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Test the GH action
        uses: paramt/github-actions-playground@master
        with:
          test_var: "this is just a test"
So there should be an environment variable that's created when the workflow runs, right? But when I run this short python script:
import os
print(os.getenv('TEST_VAR'))
print("It works!")
exit(0)
It prints:
None
It works!
I think that I have to pass the ENV variable through my Dockerfile... Right now my Dockerfile looks like this:
FROM python:latest
# Add files to the image
ADD entrypoint.py /entrypoint.py
ADD requirements.txt /requirements.txt
# Save ENV var in a temp file
RUN $TEST_VAR > /temp_var
# Install dependencies and make script executable
RUN pip install -r requirements.txt
RUN chmod +x entrypoint.py
RUN echo "temp var: "
RUN cat /temp_var
# Run script with the ENV var
ENTRYPOINT export TEST_VAR="$TEST_VAR"; /entrypoint.py
But the variable isn't echoed and isn't passed to the Python script either... am I missing something? When I set $TEMP_VAR to a random string, it is sent through to the Python script. Is this a mistake on my part, or is the GitHub action not working as intended?
Here's the link to the test repo
I think you are trying to read the wrong environment variable name. GitHub Actions adds INPUT_ to the name of the input variable. So try the following:
print(os.getenv('INPUT_TEST_VAR'))
From the documentation:
When you specify an input to an action in a workflow file or use a default input value, GitHub creates an environment variable for the input with the name INPUT_<VARIABLE_NAME>. The environment variable created converts input names to uppercase letters and replaces spaces with _ characters.
For example, if a workflow defined the numOctocats and octocatEyeColor inputs, the action code could read the values of the inputs using the INPUT_NUMOCTOCATS and INPUT_OCTOCATEYECOLOR environment variables.
https://help.github.com/en/articles/metadata-syntax-for-github-actions#inputs
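If you prefer to pass the value explicitly rather than rely on the INPUT_ variable, a Docker action can also forward inputs through runs.args (a sketch; the entrypoint then receives the value as a positional argument, e.g. sys.argv[1] in Python, instead of reading the environment):
# action.yml (sketch)
inputs:
  test_var:
    description: 'A test variable'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.test_var }}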
A bit late, but for the next person: you can also use the env field:
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Test the GH action
        uses: paramt/github-actions-playground@master
        env:
          test_var: "this is just a test"
which will be passed through to your Docker container without the INPUT_ prefix.
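Under that approach (a sketch building on the answer above), the Python script reads the variable by its plain, case-sensitive name:
import os
print(os.getenv('test_var'))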
Keep env vars secret by specifying them in Settings -> Secrets in the repo, and then calling them in the workflow:
For example, consider a workflow that runs an R script followed by a Python script. First, in .github/workflows/my_job.yml notice the MY_VAR variable, which points to a stored secret with ${{ secrets.MY_VAR}}. The rest is standard code (run on cron, specify Ubuntu OS and Docker image, define workflow steps).
on:
  schedule:
    - cron: '0 17 * * *'
jobs:
  my_job:
    name: my job
    env:
      MY_VAR: ${{ secrets.MY_VAR }}
    runs-on: ubuntu-18.04
    container:
      image: docker.io/my_username/my_image:my_tag
    steps:
      - name: checkout_repo
        uses: actions/checkout@v2
      - name: run some code
        run: bash ./src/run.sh
Next, in the scripts that compose your workflow, you can access the env var specified in the workflow file above as you would locally.
For example, in the repo, let's assume src/run.sh calls an R script followed by a Python script.
In R access the env var and store as an object:
my_var <- Sys.getenv("MY_VAR")
.
.
.
In Python access the env var and store as an object:
import os
my_var = os.getenv("MY_VAR")
.
.
.
See the docs here.
In my case, none of the answers worked. Here's how I fixed it.
---
name: Build and Push Docker Image to AWS ECR
on:
  push:
    branches: [ master ]
env:
  FOO: '${{ secrets.FOO }}'
jobs:
  build-and-push:
    name: Build Project and Push to AWS ECR
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      ...
      - name: Build and Push to AWS ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        run: |
          docker build --build-arg FOO=$FOO -t $ECR_REGISTRY/crew-charge-app:latest .
          docker push $ECR_REGISTRY/crew-charge-app:latest
I first had to get the FOO variable from GitHub secrets using ${{ secrets.FOO }}, then pass it to the Docker build using docker build --build-arg FOO=$FOO --build-arg BAR=$BAR -t .
Then, inside the Dockerfile, I had to declare it both as an ARG and an ENV so it is available at all times.
FROM node:14
ARG FOO=${FOO}
ENV FOO=${FOO}
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN yarn install
COPY . /usr/src/app
RUN FOO=$FOO yarn build
EXPOSE 80
CMD ["yarn", "start" ]
The important part was RUN FOO=$FOO yarn build, because setting the ENV alone doesn't pass it on to the container.

How to use a script of a Docker container from CI pipeline

Newbie in Docker & Docker containers over here.
I'm trying to work out how I can run a script that lives inside the image from my Bitbucket Pipelines process.
Some context about where I am and what I know
In a Bitbucket Pipelines step you can set any image to run in that specific step. What I have already tried, and what works without problems, is for example using an image like alpine/node so I can run npm commands in my pipeline script:
definitions:
  steps:
    - step: &runNodeCommands
        image: alpine/node
        name: "Node commands"
        script:
          - npm --version
pipelines:
  branches:
    master:
      - step: *runNodeCommands
This means that each push on the master branch will run a build where, using the alpine/node image, we can run npm commands like npm --version and install packages.
What I've done
Now I'm working with a custom container where I've installed a few node packages (like eslint) to run commands, e.g. eslint file1.js file2.js
Great!
What I'm trying to do but don't know how
I have a local bash script awesomeScript.sh with some input params in my repository. So my bitbucket-pipelines.yml file looks like:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - ./awesomeScript.sh -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
I'm using the same awesomeScript.sh in different repositories, and I want to move that functionality into my Docker container and get rid of the script in each repository.
How can I build my Dockerfile to be able to run that script "anywhere" I use the Docker image?
PS:
I've been thinking of building a node module and installing it in the Docker image like the eslint module... but I would like to know if this is possible.
Thanks!
If you copy awesomeScript.sh to the my-container-with-eslint Docker image then you should be able to use it without needing the script in each repository.
Somewhere in the Dockerfile for my-container-with-eslint you can copy the script file into the image:
COPY awesomeScript.sh /usr/local/bin/
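You may also want to make the script executable, and optionally drop the .sh extension so the file name matches the command used in the pipeline step below (the rename is an assumption based on that step):
COPY awesomeScript.sh /usr/local/bin/awesomeScript
RUN chmod +x /usr/local/bin/awesomeScript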
Then in Bitbucket-Pipelines:
definitions:
  steps:
    - step: &runCommands
        image: my-user/my-container-with-eslint
        name: "Running awesome script"
        script:
          - awesomeScript -a $PARAM1 -e $PARAM2
pipelines:
  branches:
    master:
      - step: *runCommands
As peterevans said, if you copy the script to your Docker image, then you should be able to use it without needing the script in each repository.
In your Dockerfile add the following line (you may use ADD too):
COPY awesomeScript.sh /usr/local/bin/
In Bitbucket-Pipelines:
pipelines:
  branches:
    master:
      - step:
          image: <your user name>/<image name>
          name: "Run script from the image"
          script:
            - awesomeScript -a $PARAM1 -e $PARAM2
