Missing installed dependencies when Docker image is used in GitHub Actions

Here is my Dockerfile
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
When the Docker image is built, it installs all dependencies, and the last command, RUN bluebase plugins, outputs the list of installed plugins. But when this image is pushed and used in GitHub Actions, bluebase is available globally but no plugins are installed. What am I doing wrong?
GitHub Workflow
name: Development CI
on:
  push:
    # Sequence of patterns matched against refs/heads
    branches:
      - '*'       # Push events on all branches
      - '*/*'
      - '!master' # Exclude master
      - '!next'   # Exclude next
      - '!alpha'  # Exclude alpha
      - '!beta'   # Exclude beta
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase # Outputs list of commands available with bluebase
      - name: Check BlueBase Plugins
        run: bluebase plugins # Outputs no plugins installed

This was a tricky problem! Here is the solution that worked for me. I'll try and explain why below.
jobs:
  web-deploy:
    container:
      image: hashimsohail/bluebase-image
    name: Deploy Web
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Check BlueBase
        run: bluebase
      - name: Check BlueBase Plugins
        run: HOME=/root bluebase plugins
      - name: Check web plugin
        run: HOME=/root bluebase web:build --help
Background
Firstly, the Docker image. The bluebase plugins:add command seems to be very dependent on the $HOME environment variable. Your Docker image is built as the root user, so $HOME is /root. The bluebase plugins:add command installs plugin dependencies under $HOME/.cache/@bluebase, so they end up at /root/.cache/@bluebase.
Now the jobs.<id>.container feature. When your container is run, some rather complicated Docker networking and volume mounting takes place. One of those mounts is -v "/home/runner/work/_temp/_github_home":"/github/home". This mounts local files from the host, including a copy of your checked-out repository, into the container. GitHub Actions then changes $HOME to point to /github/home.
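A quick way to see this from inside the job is a throwaway debug step (purely illustrative, not part of the fix):
- name: Inspect HOME (debug sketch)
  run: |
    echo "HOME is $HOME"         # prints /github/home inside the job container
    ls -la /root/.cache || true  # the plugins installed at image build time still live here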
Problem
bluebase plugins doesn't work because it depends on $HOME pointing to /root, but GitHub Actions has changed it to /github/home.
Solutions
A solution I tried was to install the plugins at /github/home instead of /root in the Docker image.
FROM node:10
RUN apt-get -qq update && apt-get -qq -y install bzip2
RUN mkdir -p /github/home
ENV HOME /github/home
RUN yarn global add @bluebase/cli && bluebase plugins:add @bluebase/cli-expo && bluebase plugins:add @bluebase/cli-web
RUN bluebase plugins
The problem with this is that the volume mount that GitHub Actions creates overwrites the /github/home directory. So then I tried a few tricks like symlinks or moving the .cache/@bluebase directory around to avoid it being clobbered by the mount. None of those worked.
So the only solution seemed to be changing $HOME back to /root. This should NOT be done permanently in the workflow because GitHub Actions depends on HOME=/github/home to work correctly. So the solution is to set it temporarily for each command.
HOME=/root bluebase web:build --help
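If several steps need the override, the same thing can be expressed per step with the env key instead of prefixing every command (a sketch of the equivalent step):
- name: Check BlueBase Plugins
  env:
    HOME: /root  # applies to this step only; the rest of the job keeps /github/home
  run: bluebase plugins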
Takeaway
The main takeaway from this is that any tooling pre-built in a container that relies on $HOME pointing to a specific location may not work correctly when used in the jobs.<container_id>.container syntax.

I do not think the issue is with the image; it's easy to confirm on a local image, and you will see that the plugins are available in the Docker image.
Just try to run
docker build -t plugintest .
# then run the image on the local system to verify the plugins
docker run -it --rm --entrypoint "/bin/sh" plugintest -c "bluebase plugins"
Seems like the issue is with your YML config file.
image: hashimsohail/bluebase-image
name: Deploy Web
runs-on: ubuntu-latest
The line runs-on: ubuntu-latest does not make sense; I think it should be
runs-on: hashimsohail/bluebase-image.

Related

GitHub Actions, MkDocs and Docker containers not playing nicely

I am having trouble getting mkdocs to work within a container being run by GitHub actions on commit.
Hi all,
I have been trying to get my python code documentation up on GitHub. I have managed to do this via GitHub actions running
mkdocs gh-deploy --force
using the below GitHub action workflow:
name: ci
on:
  push:
    branches:
      - master
      - main
permissions:
  contents: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - run: pip install mkdocs
      - run: pip install mkdocs-material
      - run: pip install mkdocstrings[python]
      - run: mkdocs gh-deploy --force --config-file './docs/mkdocs.yml'
The issue with this is that mkdocstrings did not work, and so no source code was shown on the webpage. I have made a docker container with access via volume binding to the .github folder on my local computer.
Dockerfile:
FROM ubuntu:20.04
# This stops being asked for geographical location with apt-get
ARG DEBIAN_FRONTEND=noninteractive
WORKDIR /
COPY requirements.txt /
# TODO: #1 Maybe should not use update (as this can change environment from update to update)
RUN apt-get update -y
RUN apt-get install -y python3.10 python3-pip git-all expect
RUN pip install -r requirements.txt
Docker compose:
version: "3.9"
services:
  mkdocs:
    build: .
    container_name: mkdocs
    ports:
      - 8000:8000
    env_file:
      - ../.env
    volumes:
      - ../:/project
    working_dir: /project/docs
    command: sh -c "./gh-deploy.sh"
This works when I run the docker container on my computer, but of course when it is run as a workflow on GitHub actions it does not have access to a .github folder. The GitHub action is:
name: dockerMkdocs
on:
  push:
    branches:
      - master
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    env:
      GH_user: ${{ secrets.GH_user }}
      GH_token: ${{ secrets.GH_token }}
    steps:
      - uses: actions/checkout@v2
      - name: Build the Docker image and run
        run: docker compose --file ./docs/Docker-compose_GA.yml up
Does anyone know how mkdocs knows it is running in a GitHub Action when run in the first example above, but then does not have access to the same "environment" when running in a Docker container? If I could answer this, I could get mkdocs gh-deploy --force to work within GitHub Actions and speed up CI/CD.
My GitHub repo is at: https://github.com/healthENV/healthENVsandbox
Many thanks
I think you have two options:
1. Run the entire job inside of a container
In that case, the checkout action will get your repository and then the script you run can find the necessary files. This works because all steps in the job are executed inside of the container.
2. Mount the $GITHUB_WORKSPACE folder
Mount the folder with the checked-out repo in the container. You already mount a folder to the project folder, but it seems that is not the correct folder. You can run a check to see what the current folder is before you run docker compose (and maybe an extra one inside of the script as well).
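For example, a debug step before the compose call would show where the checkout landed (a sketch based on your existing workflow; $GITHUB_WORKSPACE is the folder the checkout action populates):
steps:
  - uses: actions/checkout@v2
  - name: Show the current folder and its contents (debug)
    run: pwd && ls -la "$GITHUB_WORKSPACE"
  - name: Build the Docker image and run
    run: docker compose --file ./docs/Docker-compose_GA.yml up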

GitHub Actions flutter-action job fails when I use a private container

Container registry setup
I use the following Dockerfile to create an image that I then push to the Google Cloud container registry as a private image. I want to use this image in my CD workflow so that I can fetch deployment credentials that I store within the image.
Side note: I'm not sure if this is the safest method for managing sensitive files such as the .jks files I need to deploy my app to the Play Store. I'd appreciate it if anyone could shed some light on this as well (not sure if I should move this side note to a different SO question).
FROM ubuntu:latest
COPY Gemfile .
COPY Gemfile.lock .
COPY fastlane/ ./fastlane/
Workflow configuration
The following are the contents of my workflow configuration in .github/workflows/main.yml. See here for the complete file.
# This is a basic workflow to help you get started with Actions
# [ ... ]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: gcr.io/positive-affirmations-313800/droid-deploy-env:latest
      credentials:
        username: _json_key
        password: ${{ secrets.GCR_JSON_KEY }}
    steps:
      - uses: actions/checkout@v2
        working-directory: $HOME
      - uses: actions/setup-java@v1
        working-directory: $HOME
        with:
          java-version: '12.x'
      - uses: subosito/flutter-action@v1
        working-directory: $HOME
        with:
          flutter-version: '2.0.5'
# [ ... ]
Error occurred :(
But I keep getting this error:
Full logs available here
I found the solution to the problem.
I was just missing xz-utils in my container, so I updated my Docker image to install it.
Referenced from the related GitHub issue here.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
    xz-utils \
    git \
    android-sdk \
    && rm -rf /var/lib/apt/lists/*
COPY Gemfile .
COPY Gemfile.lock .
COPY fastlane/ ./fastlane/

How to run a command on the host before entering the Docker container in GitHub CI

I'm trying to install a package on the host system, i.e. ubuntu-latest, before entering the Docker container (Arch Linux).
I tried a lot of different syntax, but I'm getting it wrong:
on: [push]
jobs:
  update-aur:
    runs-on: ubuntu-latest
    steps:
      - run: sudo apt-get install runc
    container: archlinux
    steps:
      - run: |
          pacman --noconfirm -Syu
          pacman --noconfirm -S base-devel
This gives an error that steps is already defined.
If the running machine's configuration is important for the build, try using a self-hosted runner.
You can create a VM with one of the cloud providers (e.g. AWS, Azure, etc.) and register it with GitHub-CI.
Install the GitHub-CI runner service
Install all the utilities you need
Register the runner to the repository
Change runs-on in the build script to point to the self-hosted runner
You can find more information in the GitHub-CI Docs
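Once the runner is registered, pointing the job at it is only a label change (a sketch; the exact label depends on how you registered the runner, and runc would already be pre-installed on the VM):
on: [push]
jobs:
  update-aur:
    runs-on: self-hosted  # or a custom label given when registering the runner
    steps:
      - run: runc --version  # already installed on the VM, so no apt-get step is needed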
You could run only a step in the container instead of the whole job. Something like:
on: [push]
jobs:
  update-aur:
    runs-on: ubuntu-latest
    steps:
      - run: sudo apt-get install ...
      - uses: docker://archlinux
        with:
          entrypoint: /usr/bin/bash
          args: -c 'pacman --noconfirm -Syu && pacman --noconfirm -S base-devel'
I don't know if it's practical in your case since you probably want to run a bunch of steps afterwards in the same container. In any case, you can find more info on running steps in containers here, and on the different options available here

How to cache deployment image in gitlab-ci?

I'm creating a gitlab-ci deployment stage that requires some more libraries than exist in my image. In this example, I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...
deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
...
Question: how can I tell gitlab runner to cache the image that I'm building in the deploy stage, and reuse it for all deployment runs in future? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.
GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Create or use a docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build and push the image to Docker Hub, then change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
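The build and push themselves would be something along these lines (a sketch; the image name is the one used above, and you would need to be logged in to Docker Hub):
docker build -t membersound/maven-openjdk11-with-deps:latest .
docker push membersound/maven-openjdk11-with-deps:latest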
OR simply choose an image that already has all the dependencies you want! There are many useful docker images out there with useful tools installed. For example octopusdeploy/worker-tools comes with many runtimes and tools installed (java, python, AWS CLI, kubectl, and much more).
Attempt to cache the deb packages and install from the deb packages (beware, this is ugly).
Commit a bash script like the following to a file such as install-deps.sh:
#!/usr/bin/env bash
PACKAGES="wget jq foo bar baz"
if [ ! -d "./.deb_packages" ]; then
  mkdir -p ./.deb_packages  # the copy below needs the target directory to exist
  apt update && apt --download-only install -y ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages
fi
apt install -y ./.deb_packages/*.deb
This should cause the debian files to be cached in the directory ./.deb_packages. You can then configure gitlab to cache them so you can use them later.
my_job:
  before_script:
    - install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages

How to use Dockerfile in Gitlab CI

Using gitlab-ci for my node/react app, I'm trying to use phusion/passenger-nodejs as the base docker image
I can specify this easily in .gitlab-ci.yml:
image: phusion/passenger-nodejs:latest
variables:
  HOME: /root
cache:
  paths:
    - node_modules/
stages:
  - build
  - test
  - deploy
set_environment:
  stage: build
  script:
    - npm install
  tags:
    - docker
test_node:
  stage: test
  script:
    - npm install
    - npm test
  tags:
    - docker
However, Phusion Passenger expects you to make configuration changes (e.g. Python support, using their special init process, etc.) in the Dockerfile.
#FROM phusion/passenger-ruby24:<VERSION>
#FROM phusion/passenger-jruby91:<VERSION>
FROM phusion/passenger-nodejs:<VERSION>
#FROM phusion/passenger-customizable:<VERSION>
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init process.
CMD ["/sbin/my_init"]
# If you're using the 'customizable' variant, you need to explicitly opt-in
# for features.
#
# N.B. these images are based on https://github.com/phusion/baseimage-docker,
# so anything it provides is also automatically on board in the images below
# (e.g. older versions of Ruby, Node, Python).
#
# Uncomment the features you want:
#
# Ruby support
#RUN /pd_build/ruby-2.0.*.sh
#RUN /pd_build/ruby-2.1.*.sh
#RUN /pd_build/ruby-2.2.*.sh
#RUN /pd_build/ruby-2.3.*.sh
#RUN /pd_build/ruby-2.4.*.sh
#RUN /pd_build/jruby-9.1.*.sh
# Python support.
RUN /pd_build/python.sh
# Node.js and Meteor standalone support.
# (not needed if you already have the above Ruby support)
RUN /pd_build/nodejs.sh
# ...put your own build instructions here...
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
Is there a way to use a Dockerfile with gitlab-ci? Is there a good workaround other than apt-get install and adding shell scripts?
Yes, create a second GitLab repository where you place your Dockerfile. There you add a .gitlab-ci.yml file with a script command that builds your modified image and pushes it to your private registry or the GitLab embedded Docker registry, e.g.:
script:
  - docker build . -t myregistry:5000/mymodified
  - docker push myregistry:5000/mymodified
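Wrapped in a minimal .gitlab-ci.yml, that build job might look roughly like this (a sketch; the docker:dind service and the REGISTRY_USER/REGISTRY_PASSWORD CI variables are assumptions about your runner and registry setup):
build-image:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" myregistry:5000
    - docker build . -t myregistry:5000/mymodified
    - docker push myregistry:5000/mymodified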
Inside your other Gitlab repository, change the image: line accordingly:
image: myregistry:5000/mymodified
Information on the GitLab embedded Docker registry can be found here.
