GitHub Actions, MkDocs and Docker containers not playing nicely - docker

I am having trouble getting mkdocs to work within a container run by GitHub Actions on commit.
Hi all,
I have been trying to get my Python code documentation up on GitHub. I have managed to do this via GitHub Actions running
mkdocs gh-deploy --force
using the below GitHub action workflow:
name: ci
on:
  push:
    branches:
      - master
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - run: pip install mkdocs
      - run: pip install mkdocs-material
      - run: pip install mkdocstrings[python]
      - run: mkdocs gh-deploy --force --config-file './docs/mkdocs.yml'
The issue with this is that mkdocstrings did not work, so no source code was shown on the web page. I have since made a Docker container with access, via a volume bind, to the .github folder on my local computer.
Dockerfile:
FROM ubuntu:20.04
# This stops being asked for geographical location with apt-get
ARG DEBIAN_FRONTEND=noninteractive
WORKDIR /
COPY requirements.txt /
# TODO: #1 Maybe should not use update (as this can change environment from update to update)
RUN apt-get update -y
RUN apt-get install -y python3.10 python3-pip git-all expect
RUN pip install -r requirements.txt
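The requirements.txt itself is not shown in the question; judging from the packages installed in the first workflow, it presumably contains something like the following (an assumption, not the actual file):

# hypothetical requirements.txt, mirroring the pip installs in the first workflow
mkdocs
mkdocs-material
mkdocstrings[python]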
Docker compose:
version: "3.9"
services:
  mkdocs:
    build: .
    container_name: mkdocs
    ports:
      - 8000:8000
    env_file:
      - ../.env
    volumes:
      - ../:/project
    working_dir: /project/docs
    command: sh -c "./gh-deploy.sh"
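gh-deploy.sh is not shown either; a purely hypothetical sketch of what such a wrapper might look like, assuming it uses the GH_user/GH_token values from the env file to authenticate before deploying (this is a guess, not the author's actual script):

#!/bin/sh
# Hypothetical content - the real script is not shown in the question.
# Configure a git identity and credentials from the env file, then deploy.
git config --global user.name "$GH_user"
git config --global user.email "$GH_user@users.noreply.github.com"
git remote set-url origin "https://$GH_user:$GH_token@github.com/healthENV/healthENVsandbox.git"
# working_dir is /project/docs, so mkdocs.yml is in the current directory
mkdocs gh-deploy --force --config-file ./mkdocs.yml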
This works when I run the Docker container on my computer, but of course when it is run as a workflow on GitHub Actions it does not have access to a .github folder. The GitHub Actions workflow is:
name: dockerMkdocs
on:
  push:
    branches:
      - master
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      GH_user: ${{ secrets.GH_user }}
      GH_token: ${{ secrets.GH_token }}
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image and run
        run: docker compose --file ./docs/Docker-compose_GA.yml up
Does anyone know how mkdocs knows it is running in a GitHub Action when run in the first example above, but then does not have access to the same "environment" when running in a container in Docker? If I could answer this, I could get 'mkdocs gh-deploy --force' to work within GitHub Actions and speed up CI/CD.
My GitHub repo is at: https://github.com/healthENV/healthENVsandbox
Many thanks

I think you have two options:
1. Run the entire job inside of a container
In that case, the checkout action will get your repository and then the script you run can find the necessary files. This works because all steps in the job are executed inside of the container.
2. Mount the $GITHUB_WORKSPACE folder
Mount the folder with the checked-out repo in the container. You already mount a folder to the project folder, but it seems that is not the correct folder. You can run a check to see what the current folder is before you run docker compose (and maybe an extra one inside of the script as well). Both options are sketched below.
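A minimal sketch of option 1, reusing the pip-based setup from your first workflow so that checkout, the docs build and gh-deploy all run inside the same container (image and step layout are illustrative, not your exact setup):

permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    container:
      image: python:3.10   # full image, so git is available for gh-deploy
    steps:
      - uses: actions/checkout@v3
      - run: pip install mkdocs mkdocs-material mkdocstrings[python]
      - run: mkdocs gh-deploy --force --config-file './docs/mkdocs.yml'

For option 2, the checked-out repository lives in $GITHUB_WORKSPACE on the runner, so the compose file (or a docker run -v "$GITHUB_WORKSPACE":/project ...) needs to mount that path rather than ../; a quick step running pwd && ls -la before docker compose up will confirm what the runner actually sees.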

Related

Create nginx docker image from npm web app with GitHub Actions

I'm trying to create a docker image using GitHub Actions from a static web application built with npm. However, while running the dockerfile, the /dist folder is not copied into the image as expected.
This is the dockerfile:
FROM nginx:1.21.6-alpine
COPY dist /usr/share/nginx/html
And this is the action:
name: Deploy
on:
  push:
    tags:
      - v*
jobs:
  build-homolog:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Setup
        uses: actions/setup-node@v3
        with:
          node-version: '16'
      - name: Build
        env:
          NODE_ENV: homolog
        run: npm install; npm run build; docker build -t my-image:1.0.0 .
The result is a running nginx, but without content; it just shows its default page. When I run the npm build and the docker build locally on my machine, it works as expected. I think there is a problem with the directory structure on the GitHub Actions machine, but I can't seem to pin it down.
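One way to check that suspicion (an illustrative debugging step, not part of the original workflow) is to list the working directory and the dist folder right before the docker build:

      - name: Inspect build output
        run: |
          pwd
          ls -la
          ls -la dist

If dist is missing there, the npm build and the docker build are most likely not running against the same directory.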

GitHub Actions flutter-action job fails when I use a private container

Container registry setup
I use the following Dockerfile to create an image that I then push to Google Cloud Container Registry as a private image. I want to run my CD workflow inside this container so that I can fetch deployment credentials that I store within the image.
Side note: I'm not sure if this is the safest method for managing sensitive files such as the .jks files I need to deploy my app to the Play Store. I'd appreciate it if anyone could shed some light on this as well (not sure if I should move this side note to a different SO question).
FROM ubuntu:latest
COPY Gemfile .
COPY Gemfile.lock .
COPY fastlane/ ./fastlane/
Workflow configuration
Following is the contents of my workflow configuration in .github/workflows/main.yml. See here for complete file.
# This is a basic workflow to help you get started with Actions
# [ ... ]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: gcr.io/positive-affirmations-313800/droid-deploy-env:latest
      credentials:
        username: _json_key
        password: ${{ secrets.GCR_JSON_KEY }}
    steps:
      - uses: actions/checkout@v2
        working-directory: $HOME
      - uses: actions/setup-java@v1
        working-directory: $HOME
        with:
          java-version: '12.x'
      - uses: subosito/flutter-action@v1
        working-directory: $HOME
        with:
          flutter-version: '2.0.5'
      # [ ... ]
Error occurred :(
But I keep getting this error:
Full logs available here
I found the solution to the problem.
I was simply missing xz-utils in my container, so I updated my Docker image to install it.
Referenced from the related GitHub issue here:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
xz-utils \
git \
android-sdk \
&& rm -rf /var/lib/apt/lists/*
COPY Gemfile .
COPY Gemfile.lock .
COPY fastlane/ ./fastlane/

github actions: run multiple jobs in the same docker

I'm learning to use GitHub Actions to run multiple jobs with Docker, and this is what I have so far:
The GitHub Actions yml file is shown as follows. There are 2 jobs: job0 builds a Docker image with Dockerfile0 and job1 builds one with Dockerfile1.
# .github/workflows/main.yml
name: docker CI
on: push
jobs:
  job0:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile0 --tag job0
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Run
        run: docker build . --file Dockerfile1 --tag job1
Dockerfile0 and Dockerfile1 share basically the same content, except for the argument in the last line:
FROM ubuntu:20.04
ADD . /docker_ci
RUN apt-get update -y
RUN apt-get install -y ... ...
WORKDIR /docker_ci
RUN python3 script.py <arg>
I wonder: can I build a Docker image in the first job, and then have multiple subsequent jobs execute commands inside the image built by that first job? That way I wouldn't have to keep multiple Dockerfiles and could save some image build time.
I would rather build my image locally from a Dockerfile, so I hope to avoid pulling a container from Docker Hub.
runs-for-docker-actions looks relevant, but I have trouble finding an example that deploys the action locally (without publishing).
It definitely sounds like you should not build two different images - not for CI, and not for local development purposes (if it matters).
From the details you have provided, I would consider the following approach:
Define a Dockerfile with an ENTRYPOINT which is the lowest common denominator for your needs (it can be bash or python script.py).
In GitHub Actions, have a single job with multiple steps - one for building the image, and the others for running it with arguments.
For example:
FROM ubuntu
RUN apt-get update && apt-get install -y python3
WORKDIR /app
COPY script.py .
ENTRYPOINT ["python3", "script.py"]
An image built from this Dockerfile can be run with any arguments, which are passed on to the script.py entrypoint:
$ docker run --rm -it imagename some arguments
A sample GitHub Actions config might look like this:
jobs:
  jobname:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build the image
        run: docker build --tag job .
      - name: Test 1
        run: docker run --rm -it job arg1
      - name: Test 2
        run: docker run --rm -it job arg2
If you insist on separating these into different jobs, as far as I understand it your easiest option would still be to rebuild the image (still from a single Dockerfile), since sharing a Docker image built in one job with another job is a more complicated task that I would recommend avoiding; a rough sketch of that approach follows below.
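For completeness, here is roughly what sharing the image across jobs could look like, using docker save / docker load plus the artifact actions (action versions and file names are illustrative); it also shows why simply rebuilding is usually simpler:

jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: docker build --tag job .
      # export the image to a tarball so it can be handed to the next job
      - run: docker save job -o job-image.tar
      - uses: actions/upload-artifact@v3
        with:
          name: job-image
          path: job-image.tar
  test1:
    needs: build-image
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v3
        with:
          name: job-image
      # re-import the image on the fresh runner, then run it
      - run: docker load -i job-image.tar
      - run: docker run --rm job arg1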

Missing installed dependencies when docker image is used

Here is my Dockerfile
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
When the Dockerfile is built it installs all dependencies, and the last command RUN bluebase plugins outputs the list of installed plugins. But when this image is pushed and used in GitHub Actions, bluebase is available globally but no plugins are installed. What am I doing wrong?
Github Workflow
name: Development CI
on:
  push:
    # Sequence of patterns matched against refs/heads
    branches:
      - '*'       # Push events on all branches
      - '*/*'
      - '!master' # Exclude master
      - '!next'   # Exclude next
      - '!alpha'  # Exclude alpha
      - '!beta'   # Exclude beta
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase # Outputs list of commands available with bluebase
      - name: Check BlueBase Plugins
        run: bluebase plugins # Outputs no plugins installed
This was a tricky problem! Here is the solution that worked for me. I'll try and explain why below.
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase
      - name: Check BlueBase Plugins
        run: HOME=/root bluebase plugins
      - name: Check web plugin
        run: HOME=/root bluebase web:build --help
Background
Firstly the Docker image. The command bluebase plugins:add seems to be very dependent on the $HOME environment variable. Your Docker image is built as the root user, so $HOME is /root. The bluebase plugins:add command installs plugin dependencies at $HOME/.cache/@bluebase, so they end up at /root/.cache/@bluebase.
Now the jobs.<id>.container feature. When your container is run there is some rather complicated Docker networking and volume mounts that take place. One of those mounts is -v "/home/runner/work/_temp/_github_home":"/github/home". This mounts local files from the host, including a copy of your checked out repository, into the container. Then it changes $HOME to point to /github/home.
Problem
The reason bluebase plugins doesn't work is because it depends on $HOME pointing to /root but now GitHub Actions has changed it to /github/home.
Solutions
A solution I tried was to install the plugins at /github/home instead of /root in the Docker image.
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN mkdir -p /github/home
ENV HOME /github/home
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
The problem with this is that the volume mount that GitHub Actions creates overwrites the /github/home directory. So then I tried a few tricks like symlinks or moving the .cache/@bluebase directory around to avoid it being clobbered by the mount. None of those worked.
So the only solution seemed to be changing $HOME back to /root. This should NOT be done permanently in the workflow because GitHub Actions depends on HOME=/github/home to work correctly. So the solution is to set it temporarily for each command.
HOME=/root bluebase web:build --help
Takeaway
The main takeaway from this is that any tooling pre-built in a container that relies on $HOME pointing to a specific location may not work correctly when used in the jobs.<container_id>.container syntax.
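As a side note, the same per-command override can presumably also be expressed as a step-level env block rather than an inline prefix (illustrative only; the behaviour should be equivalent to the HOME=/root prefix above):

      - name: Check BlueBase Plugins
        run: bluebase plugins
        env:
          HOME: /root   # restore the location the image was built with, for this step only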
I do not think the issue is with the image; it's easy to confirm on a local image, and you will see that the plugin is available in the Docker image.
Just try to run
docker build -t plugintest .
# then run the image on the local system to verify the plugin
docker run -it --rm --entrypoint "/bin/sh" plugintest -c "bluebase plugins"
It seems like the issue is with your YML config file.
image: hashimsohail/bluebase-image
name: Deploy Web
runs-on: ubuntu-latest
This line runs-on: ubuntu-latest does not make sense; I think it should be
runs-on: hashimsohail/bluebase-image.

How to conditionally update a CI/CD job image?

I just got into the (wonderful) world of CI/CD and have working pipelines. They are not optimal, though.
The application is a dockerized website:
the source needs to be compiled by webpack and end up in dist
this dist directory is copied to a docker container
which is then remotely built and deployed
My current setup is quite naïve (I added some comments to show why I believe the various elements are needed/useful):
# I start with a small image
image: alpine

# before the job I need to have npm and docker
# the problem: I need one in one job, and the second one in the other
# I do not need both on both jobs but do not see how to split them
before_script:
  - apk add --update npm
  - apk add docker
  - npm install
  - npm install webpack -g

stages:
  - create_dist
  - build_container
  - stop_container
  - deploy_container

# the dist directory is preserved for the other job which will make use of it
create_dist:
  stage: create_dist
  script: npm run build
  artifacts:
    paths:
      - dist

# the following three jobs are remote and need to be daisy chained
build_container:
  stage: build_container
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .

stop_container:
  stage: stop_container
  script: docker -H tcp://eu13:51515 stop widgets-sentinels
  allow_failure: true

deploy_container:
  stage: deploy_container
  script: docker -H tcp://eu13:51515 run --rm -p 8880:8888 --name widgets-sentinels -d widgets-sentinels
This setup works, but npm and docker are installed in both jobs. That is not needed and slows down the deployment. Is there a way to state that such and such packages need to be added for specific jobs (and not globally for all of them)?
To make it clear: this is not a show stopper (and in reality not likely to be an issue at all), but I fear that my approach to this kind of job automation is incorrect.
You don't necessarily need to use the same image for all jobs. Let me show you one of our pipelines (partially) which does a similar thing, just with composer for php instead of npm:
cache:
  paths:
    - vendor/

build:composer:
  image: registry.example.com/base-images/php-composer:latest # use our custom base image where only composer is installed, to build the dependencies
  stage: build dependencies
  script:
    - php composer.phar install --no-scripts
  artifacts:
    paths:
      - vendor/
  only:
    changes:
      - composer.{json,lock,phar} # build vendor folder only when relevant files change, otherwise use cached folder from s3 bucket (configured in runner config)

build:api:
  image: docker:18 # use docker image to build the actual application image
  stage: build api
  dependencies:
    - build:composer # reference dependency dir
  script:
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
The composer base image contains all necessary packages to run composer, so in your case you'd create a base image for npm:
FROM alpine:latest
RUN apk add --update npm
Then, use this image in your create_dist stage, and use image: docker:latest in the other stages, roughly as sketched below.
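Applied to your pipeline, that would look roughly like this (a sketch; the npm base image name is an assumption, pick whatever you push to your registry):

create_dist:
  stage: create_dist
  image: registry.example.com/base-images/npm:latest # hypothetical custom npm base image built from the Dockerfile above
  script: npm run build
  artifacts:
    paths:
      - dist

build_container:
  stage: build_container
  image: docker:latest # only docker is needed for the remote build/deploy jobs
  script: docker -H tcp://eu13:51515 build -t widgets-sentinels .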
As well as referencing different images for different jobs, you may also try GitLab anchors, which provide reusable templates for the jobs:
.install-npm-template: &npm-template
  before_script:
    - apk add --update npm
    - npm install
    - npm install webpack -g

.install-docker-template: &docker-template
  before_script:
    - apk add docker

create_dist:
  <<: *npm-template
  stage: create_dist
  script: npm run build
  ...

deploy_container:
  <<: *docker-template
  stage: deploy_container
  ...
Try a multi-stage build: you can use intermediate temporary images and copy the generated content into the final Docker image. Also, npm should be part of a Docker image; create one npm image and use it as the builder stage for the final Docker image. For example:
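A minimal sketch of that idea, assuming the webpack build emits into dist and the site is served by nginx (image names, paths and the web server are illustrative; the actual final image may differ):

# builder stage: install npm dependencies and run the webpack build
FROM node:lts-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install && npm install -g webpack
COPY . .
RUN npm run build

# final stage: copy only the built dist folder into the serving image
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html

With this approach the CI job only needs docker; the npm tooling lives entirely inside the builder stage of the image.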
