I have a GitHub action which builds some software. I have it set to cache one of the tools it builds to save time on subsequent runs, and it works great.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # installs the apt packages from a cache, speeding up the process substantially
      - name: Install Compiler
        run: sudo apt-get update && sudo apt-get install -y binutils-arm-none-eabi gcc-arm-none-eabi
      - name: Cache Tools
        uses: actions/cache@v3
        id: tools
        with:
          path: |
            tools/agbcc
          key: ${{ runner.os }}-tools
      # build and install agbcc into the tools directory
      - name: Clone agbcc
        if: steps.tools.outputs.cache-hit != 'true'
        uses: actions/checkout@master
        with:
          path: 'tmp/agbcc'
          repository: 'pret/agbcc'
      - name: Build agbcc
        if: steps.tools.outputs.cache-hit != 'true'
        run: |
          ./build.sh
          ./install.sh $GITHUB_WORKSPACE
        working-directory: tmp/agbcc
I created a second workflow with the exact same .yml, the one difference being that it runs inside a container: the only change is the addition of container: devkitpro/devkitarm:latest to the job. It runs on the same branch.
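For reference, the container variant's job header looks like this (the steps are unchanged from above):

jobs:
  build:
    runs-on: ubuntu-latest
    container: devkitpro/devkitarm:latest
    steps:
      # ...same steps as above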
For some reason, the containerized workflow can't find my cache and I have no idea why. It runs on the same branch, so the cache should be in scope, and it runs on the same OS; for all I can figure out, it should be able to see the cache. However, it doesn't find it.
Is this possible to resolve, or is it a limitation of the cache action that prevents it from working when you're running inside a container?
I'm working on building a Gitlab-CI pipeline that deploys a Spring Boot application to a Kubernetes cluster hosted on DigitalOcean.
Fortunately, I'm right at the beginning of doing this so there's very little bloat, and I figured I'd just test that I had everything wired correctly before I went ahead and built some crazy stuff.
Essentially I've got a Gitlab-CI job that pulls this image: digitalocean/doctl:1.87.0 and I then attempt to run a number of doctl commands in the script section of the job. The results of this very simple "deploy" script:
deploy-to-kubernetes:
  stage: deploy
  image: digitalocean/doctl:1.87.0
  script:
    - doctl --help
looked like this:
Error: unknown command "sh" for "doctl"
Run 'doctl --help' for usage.
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 255
After doing a bit of digging and googling and searching and head-scratching, I hit upon this post, and figured it may apply to the doctl image too, so I then updated my Gitlab-CI job to this:
deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    - doctl --help
and the result was this:
$ doctl --help
/bin/bash: line 128: doctl: command not found
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
I'm pretty sure I'm doing something absolutely idiotic, but I can't figure out what that is, so if anybody could help out that would be really appreciated, and if you need more information, let me know.
FYI: This is my first question ever posted on StackOverflow, so any feedback on what I need to change, improve etc is greatly appreciated!
Thanks!
$PATH contains only the default values in this image, but doctl is available at /app/doctl, so you can invoke it by full path: /app/doctl <command> (e.g., /app/doctl version).
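Applied to the job above, that would look something like this (a sketch, keeping the blank entrypoint so the runner can still start its shell):

deploy-to-kubernetes:
  stage: deploy
  image:
    name: digitalocean/doctl:1.87.0
    entrypoint: [""]
  script:
    - /app/doctl version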
I ran into the same problem and opted to just use a plain Debian container and install the doctl tool myself. It's a workaround, though.
deploy-to-kubernetes:
  stage: deploy
  image: debian:11.6-slim
  before_script:
    - apt update && apt -y upgrade && apt-get install -y curl
    - curl -Lo doctl.tar.gz https://github.com/digitalocean/doctl/releases/download/v1.91.0/doctl-1.91.0-linux-amd64.tar.gz && tar xf doctl.tar.gz
    - chmod u+x doctl
    - mv doctl /usr/local/bin/doctl
  script:
    - doctl --help
I had the same issue; here is a job that works for me:
"Deploy to DigitalOcean":
image:
name: digitalocean/doctl:latest
entrypoint: [""]
stage: deploy
needs:
- job: "Test Docker image"
script:
- /app/doctl auth init
- /app/doctl apps create-deployment $DIGITALOCEAN_STUDENT_BACKEND_ID
only:
- master
But it also requires a DIGITALOCEAN_ACCESS_TOKEN CI/CD variable containing a DigitalOcean token.
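If you'd rather pass the token explicitly than rely on doctl picking up the DIGITALOCEAN_ACCESS_TOKEN environment variable, it can also be supplied via doctl's global --access-token flag (a sketch of the script section only):

script:
  - /app/doctl auth init --access-token "$DIGITALOCEAN_ACCESS_TOKEN"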
I have installed SonarQube on an Ubuntu machine via a Docker image. All working fine and I'm able to log in without issues.
I have connected it to our GitLab installation and can see all available projects, but when I try to configure the existing pipeline with the following, I get stuck.
I have the following pipeline.yml in use (partially shown here):
sonarqube-check:
  stage: sonarqube-check
  image: mcr.microsoft.com/dotnet/core/sdk:latest
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
    GIT_DEPTH: "0"  # Tells git to fetch all the branches of the project, required by the analysis task
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - "apt-get update"
    - "apt-get install --yes openjdk-11-jre"
    - "dotnet tool install --global dotnet-sonarscanner"
    - "export PATH=\"$PATH:$HOME/.dotnet/tools\""
    - "dotnet sonarscanner begin /k:\"my_project_location_AYDMUbUQodVNV6NM7qxd\" /d:sonar.login=\"$SONAR_TOKEN\" /d:\"sonar.host.url=$SONAR_HOST_URL\" "
    - "dotnet build"
    - "dotnet sonarscanner end /d:sonar.login=\"$SONAR_TOKEN\""
  allow_failure: true
  only:
    - master
All looking good, but when it runs it gives me this error:
$ apt-get update
bash: apt-get: command not found
I just don't know how to fix this and can't find a solution anywhere on the internet.
The dotnet/core/sdk image has apt (not apt-get):
$ docker run -ti --rm mcr.microsoft.com/dotnet/core/sdk:latest sh
# apt update
Following the SonarQube documentation, you can use their Docker image with the CLI already installed:
image:
  name: sonarsource/sonar-scanner-cli:latest

variables:
  SONAR_TOKEN: "your-sonarqube-token"
  SONAR_HOST_URL: "http://your-sonarqube-instance.org"
  SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"  # Defines the location of the analysis task cache
  GIT_DEPTH: 0  # Tells git to fetch all the branches of the project, required by the analysis task

cache:
  key: ${CI_JOB_NAME}
  paths:
    - .sonar/cache

sonarqube-check:
  stage: test
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  allow_failure: true
  only:
    - master
Apt/apt-get command not found - problem fixed:
I think your /usr/bin has no apt or apt-get. You can download the package from https://packages.debian.org/stretch/apt and install it, like this:

wget http://ftp.cn.debian.org/debian/pool/main/a/apt/apt_1.4.9_amd64.deb
dpkg -i apt_1.4.9_amd64.deb
I'm working on segregating common modules into dedicated repositories for our GitHub organization, using pip install from a git repo in a Dockerfile to install shared modules developed inside the organization:
RUN pip3 install -r requirements.txt
where the git repo dependency is referenced like:
git+https://github.com/org/repo.git@master
The issue I'm facing is that I can't make pip3 install authenticate against the organization's private repository when it runs inside the Dockerfile as part of a GitHub action. I want to avoid creating a personal access token (PAT) for one of the devs, since I want to stay user-agnostic and not maintain tokens for departing team members. I tried to use ${{ secrets.GITHUB_TOKEN }}, but on deeper reading realized the token only has access to the repository where the GitHub action is initiated (link):
The token's permissions are limited to the repository that contains your workflow
Is there a way to make pip3 install work in GitHub Actions without a PAT?
The error I got in one of many iterations:
Collecting git+https://****@github.com/org/repo.git@master (from -r requirements.txt (line 17))
  Cloning https://****@github.com/org/repo.git (to revision master) to /tmp/pip-req-build-mnge3zvd
  Running command git clone -q 'https://****@github.com/org/repo.git' /tmp/pip-req-build-mnge3zvd
  fatal: could not read Password for 'https://${GITHUB_TOKEN}@github.com': No such device or address
WARNING: Discarding git+https://****@github.com/org/repo.git@master. Command errored out with exit status 128: git clone -q 'https://****@github.com/org/repo.git' /tmp/pip-req-build-mnge3zvd Check the logs for full command output.
ERROR: Command errored out with exit status 128: git clone -q 'https://****@github.com/org/repo.git' /tmp/pip-req-build-mnge3zvd Check the logs for full command output.
I suggest using SSH, like this:
In your Dockerfile:
RUN --mount=type=ssh,id=default pip install -r requirements.txt
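Note that --mount=type=ssh requires BuildKit. A fuller sketch of such a Dockerfile, with the base image and installed packages as illustrative assumptions, might look like:

# syntax=docker/dockerfile:1
FROM python:3.10-slim
# git and an ssh client are needed for pip's git+ssh dependencies;
# ssh-keyscan pre-trusts github.com so the clone doesn't fail host verification
RUN apt-get update && apt-get install -y --no-install-recommends git openssh-client \
    && mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
COPY requirements.txt .
# BuildKit mounts the forwarded ssh-agent socket only for this RUN instruction
RUN --mount=type=ssh,id=default pip3 install -r requirements.txt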
In your requirements.txt, change the dependency to:
git+ssh://git@github.com/org/repo.git@master
Prepare an SSH private key associated with your GitHub account in the repo's Settings/Actions/Secrets, with the name SSH_KEY (it would be better to use a dedicated SSH key).
In the YAML defining your action, create a step:
- name: Prepare Key
  uses: webfactory/ssh-agent@v0.5.4
  with:
    ssh-private-key: ${{ secrets.SSH_KEY }}
This will export an env variable SSH_AUTH_SOCK for later use.
In the next action step, use the SSH_AUTH_SOCK:
- name: Build and push
  id: docker_build
  uses: docker/build-push-action@v2
  with:
    ssh: |
      default=${{ env.SSH_AUTH_SOCK }}
reference: https://github.com/webfactory/ssh-agent#using-the-dockerbuild-push-action-action
I have a monorepo of 2 packages:
- package-1: produces a docker-image
- package-2: has tests that deploy (locally) to k8s the image from package-1
To make sure that k8s will talk to the local Docker daemon and take the image from there, on my local machine I:
1. run: eval $(minikube docker-env --shell sh)
2. locally build the docker-image in package-1 (no docker-push)
3. run the tests in package-2
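In shell terms, that local flow is roughly this (a sketch; the image name, tag, and build context are assumptions):

# point this shell's docker client at minikube's Docker daemon
eval $(minikube docker-env --shell sh)
# build the image inside minikube's daemon, so k8s can use it without a push
docker build -t package-1:local ./package-1
# run the package-2 tests (workspace name hypothetical), which deploy that image locally
yarn workspace package-2 test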
In github-actions, I tried to run the same commands, but the first step doesn't work (https://github.com/stavalfi/k8test/pull/6/checks?check_run_id=785330120):
Run eval $(minikube docker-env --shell sh)
/home/runner/work/_temp/932fe76c-855f-4ed6-9fa3-dcd5cea6df7e.sh: line 1: README.md: command not found
##[error]Process completed with exit code 127.
I have no idea what this error means or why "README.md" appears in it.
Question:
Is there any way to make this work? Or even an alternative way to make sure that, in github-actions, k8s will find the docker-image that I build?
After some time I came up with a working solution for this problem. I'm still not sure why I got that error, but here it is.
GitHub Actions configuration file:
name: Node.js CI

on: [pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - uses: actions/checkout@v2
      - uses: secrethub/actions/env-export@v0.1.0
        env:
          SECRETHUB_CREDENTIAL: ${{ secrets.SECRETHUB_CREDENTIAL }}
          DOCKER_USERNAME: secrethub://stavalfi/dockerhub/username
          DOCKER_PASSWORD: secrethub://stavalfi/dockerhub/access-token
      - name: install k8s
        uses: engineerd/setup-kind@v0.4.0
      - run: minikube start
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: yarn install
      - run: yarn build
      - run: eval $(minikube docker-env --shell sh) && yarn workspace simple-service build:docker # build the docker image and let k8s use it locally
      - run: eval $(minikube docker-env --shell sh) && yarn workspace k8test-monitoring build:docker # build the docker image and let k8s use it locally
      - run: DEBUG=k8test:* yarn test # create k8s deployments from the docker-images from the last 2 steps
I am building my project on CircleCI and I have a build job that looks like this:
build:
  <<: *defaults
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install pip
        command: curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && sudo python get-pip.py
    - run:
        name: Install AWS CLI
        command: curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    - run:
        name: Login to Docker Registry
        command: aws ecr get-login --no-include-email --region us-east-1 | sh
    - run:
        name: Install Dep
        command: curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
    - run:
        name: Save Version Number
        command: echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
    - run:
        name: Build App
        command: source deployment/dev/.env && docker-compose -f deployment/dev/docker-compose.yml build
    - run:
        name: Test App
        command: |
          git config --global url."https://${GITHUB_PERSONAL_ACCESS_TOKEN}:x-oauth-basic@github.com/".insteadOf "https://github.com/"
          dep ensure
          go test -v ./...
    - run:
        name: Push Image
        command: |
          if [[ "${CIRCLE_TAG}" =~ ^v[0-9]+(\.[0-9]+)*-[a-z]*$ ]]; then
            source deployment/dev/.env
            docker-compose -f deployment/dev/docker-compose.yml push
          else
            echo 'No tag, not deploying'
          fi
    - persist_to_workspace:
        root: .
        paths:
          - deployment/*
          - tools/*
When I push a change to a branch, the build fails every time with Couldn't connect to Docker daemon at ... - is it running? when it reaches the Build App step of the build job.
Please help me figure out why branch builds are failing but tag builds are not.
I suspect you are hitting this docker-compose bug: https://github.com/docker/compose/issues/6050
The bug reports a misleading error (the one you're getting) when an image name in the docker-compose file is invalid.
If you use an environment variable for the image name or image tag, and that variable is set from a branch name, then it would fail on some branches, but not others.
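For example, a compose file along these lines (a hypothetical fragment; service and registry names invented) would produce exactly that misleading error whenever VERSION_NUM expands to an invalid reference:

# deployment/dev/docker-compose.yml (hypothetical fragment)
version: '3'
services:
  app:
    build: .
    # a VERSION_NUM starting with '.' makes this image reference invalid
    image: myregistry/app:${VERSION_NUM}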
The problem was occurring in the Save Version Number step. Sometimes the version would be .${CIRCLE_BUILD_NUM}, since no tag was passed. Docker dislikes tags starting with ., so I added a conditional check: if CIRCLE_TAG is empty, use a default version, v0.1.0-build.
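A sketch of what that conditional check might look like in the Save Version Number step (the default version string is the one mentioned above; the exact shape is an assumption):

- run:
    name: Save Version Number
    command: |
      # fall back to a default version when no tag triggered the build
      if [ -z "${CIRCLE_TAG}" ]; then
        echo "export VERSION_NUM=v0.1.0-build.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      else
        echo "export VERSION_NUM=${CIRCLE_TAG}.${CIRCLE_BUILD_NUM}" > deployment/dev/.env
      fi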