I've created a custom docker image as follows, and pushed it to a custom repo:
# Use ruby as the parent image
FROM ruby:2.4-slim
# Install core prerequisites
RUN apt-get update && apt-get install -y php5.6 python-pip python-dev build-essential zip software-properties-common wget
# Install awscli
RUN pip install awscli
Then I've added this custom image to my bitbucket-pipelines.yml like so:
image:
  name: xxx/adzooma-ruby:v1
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD

pipelines:
  tags:
    release-*:
      - step:
          script:
            - wget https://wordpress.org/latest.tar.gz
            ...
My pipeline immediately fails when run due to:
+ wget https://wordpress.org/latest.tar.gz
bash: wget: command not found
This makes me think that my Docker image isn't actually being used at all, since I explicitly install wget in the image. So is my pipeline syntax correct, or am I missing a step here?
You have to define the e-mail address too.
image:
  name: account-name/openjdk:8
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
See: https://confluence.atlassian.com/bitbucket/use-docker-images-as-build-environments-792298897.html
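Applied to the image from the question, the block would look like this (a sketch; it assumes a DOCKER_HUB_EMAIL repository variable is defined alongside the other two):
image:
  name: xxx/adzooma-ruby:v1
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL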
I've faced nearly the same issue as yours. In my case, the container registry somehow did not pick up the changes in my locally built Docker image. To overcome the issue I deleted the images both on my local machine and in my private container registry; rebuilding and re-pushing solved my problem. Maybe this can work for you too.
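For reference, the local half of that rebuild-and-repush sequence looks roughly like this (a sketch, using the image name from the question; removing the old tag on the registry side is done through the registry's own UI or API):
docker rmi xxx/adzooma-ruby:v1            # remove the stale local image
docker build -t xxx/adzooma-ruby:v1 .     # rebuild from the Dockerfile
docker push xxx/adzooma-ruby:v1           # push the fresh image to the registry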
I have a GitLab Pages site that is built inside a Docker container. I found a base image that contains 95% of what I want. With my old CI config, I was installing the extra packages before the build step. I want to create a new image with these packages pre-installed and use that image instead. I was able to build and run this image locally, but it no longer runs in GitLab CI, and I'm not sure why.
Git repo: https://gitlab.com/hybras/hybras.gitlab.io
CI Config:
image: klakegg/hugo:asciidoctor-ci # old
image: registry.gitlab.com/hybras/hybras.gitlab.io # new

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  SSHPASS: $SSHPASS

pages:
  script:
    - gem install asciidoctor-html5s # remove these package installs in the new version
    - apk add openssh sshpass # remove these package installs in the new version
    - hugo
    - sshpass -p $SSHPASS scp -r public/* $HOST_AND_DIR
  artifacts:
    paths:
      - public
  only:
    - master
Successful Job: Installs the packages, builds the site, scp's it to my mirror
Dockerfile:
FROM klakegg/hugo:asciidoctor
RUN gem install asciidoctor-html5s --no-document \
&& apk --no-cache add openssh sshpass
CMD hugo
Failed CI Run: Error: unknown command "sh" for "hugo"
As can be seen in your pipeline, there is an error:
Error: unknown command "sh" for "hugo"
It means that Hugo is not installed on the image you are using. To solve this problem, you can either use the official Hugo Docker image, or install Hugo on the image you are already using.
1. Using the Hugo image:
Add the following lines to the stage where you want to use Hugo:
<pipeline_name>:
  image: klakegg/hugo
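For the pages job from the question, that would be something like this (a sketch; the asciidoctor-ci tag is taken from the old config, and the rest of the job stays unchanged):
pages:
  image: klakegg/hugo:asciidoctor-ci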
2. Install Hugo on the existing image
Check out this page to find out how to install Hugo on the distribution your base image uses.
For example, you can install Hugo on Ubuntu using the following command:
<pipeline_name>:
  before_script:
    - sudo apt-get install hugo
3. Install Hugo in your Dockerfile
RUN apk add hugo # Or something like this!
I'm following a tutorial to wrap a tool in a docker container.
In the linked tutorial page, step 2 describes how to create the container
$ docker run -ti ubuntu
and
root@70235f7726cf:/#
I install a number of libraries/programs
$ apt-get install wget build-essential zlib1g-dev libncurses5-dev
[...]
then
exit
Step 3 describes how the docker container is saved into an image, but it only shows the procedure for saving it to a private repo, not to Docker Hub.
I did some research, and the following is the command to push an image to a Docker Hub repository:
$ docker push myusr/my-repo:mytoolv1
but since I did not save the image, the push does not work.
The tutorial I'm following is missing some steps in between, or maybe it is me who is missing some Docker knowledge.
I think you might have some terms mixed up. You can't push containers to dockerhub, you can only push images.
To create a custom image you need a Dockerfile. Something like this:
FROM ubuntu:18.04
RUN apt update
RUN apt install -y wget build-essential zlib1g-dev libncurses5-dev
...
Then from the same folder build the custom image by running
docker build -t myusr/my-repo:mytoolv1 .
At this point you can push the image to Docker Hub using the command you tried:
docker push myusr/my-repo:mytoolv1
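If the push is rejected with an authentication error, you usually need to log in to Docker Hub first (a minimal sketch):
docker login                         # prompts for your Docker Hub username and password
docker push myusr/my-repo:mytoolv1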
What you have in mind is not quite correct: you are thinking of pushing a container, but in fact what you push is an image.
I hope you know the difference between an image and a container; if not, it is worth reading up on it.
You can create a file called Dockerfile (with no extension and with this exact name) with the following contents:
FROM ubuntu:20.04
# set the time zone; otherwise the build may hang on an interactive time-zone prompt
# (see https://www.php.net/manual/en/timezones.php for the list of time zone names)
ENV TZ=Asia/Tehran
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get install -y wget build-essential zlib1g-dev libncurses5-dev
Now you should build your image:
docker build -t yourrepo/NAME:TAG_VERSION .
example:
docker build -t yourrepo/my_image:1.0.0 .
Now you can push it:
docker push yourrepo/my_image:1.0.0
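Before pushing, you can also sanity-check the image locally (a sketch, reusing the example tag above):
docker run --rm yourrepo/my_image:1.0.0 wget --version   # runs wget inside the new image to confirm it was installed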
Container registry setup
I use the following Dockerfile to create an image that I then push to the Google Cloud Container Registry as a private image. I want to run my CD workflow in this container so that I can fetch the deployment credentials that I store within the image.
Side note: I'm not sure if this is the safest method for managing sensitive files such as the .jks files I need to deploy my app to the Play Store. I'd appreciate it if anyone could shed some light on this as well (not sure if I should move this side note to a different SO question).
FROM ubuntu:latest
COPY Gemfile .
COPY Gemfile.lock .
COPY fastlane/ ./fastlane/
Workflow configuration
Following is the contents of my workflow configuration in .github/workflows/main.yml. See here for complete file.
# This is a basic workflow to help you get started with Actions
# [ ... ]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: gcr.io/positive-affirmations-313800/droid-deploy-env:latest
      credentials:
        username: _json_key
        password: ${{ secrets.GCR_JSON_KEY }}
    steps:
      - uses: actions/checkout@v2
        working-directory: $HOME
      - uses: actions/setup-java@v1
        working-directory: $HOME
        with:
          java-version: '12.x'
      - uses: subosito/flutter-action@v1
        working-directory: $HOME
        with:
          flutter-version: '2.0.5'
      # [ ... ]
Error occurred :(
But I keep getting this error:
Full logs available here
I found the solution to the problem.
I was just missing xz-utils in my container, so I updated my Docker image to install it.
Referenced from the related GitHub issue here.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
xz-utils \
git \
android-sdk \
&& rm -rf /var/lib/apt/lists/*
COPY Gemfile .
COPY Gemfile.lock .
COPY fastlane/ ./fastlane/
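After updating the Dockerfile, the image has to be rebuilt and pushed again so the workflow picks up the change. Roughly (a sketch, using the image name from the workflow and assuming you are already authenticated against GCR, e.g. via gcloud auth configure-docker):
docker build -t gcr.io/positive-affirmations-313800/droid-deploy-env:latest .
docker push gcr.io/positive-affirmations-313800/droid-deploy-env:latest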
I'm creating a gitlab-ci deployment stage that requires some more libraries than exist in my image. In this example I'm adding ssh (in the real world, I want to add many more libs):
image: adoptopenjdk/maven-openjdk11
...
deploy:
  stage: deploy
  script:
    - which ssh || (apt-get update -y && apt-get install -y ssh)
    - chmod 600 ${SSH_PRIVATE_KEY}
    ...
Question: how can I tell gitlab runner to cache the image that I'm building in the deploy stage, and reuse it for all deployment runs in future? Because as written, the library installation takes place for each and every deployment, even if nothing changed between runs.
GitLab can only cache files/directories, but because of the way apt works, there is no easy way to tell it to cache installs you've done this way. You also cannot "cache" the image.
There are two options I see:
Create or use a docker image that already includes your dependencies.
FROM adoptopenjdk/maven-openjdk11
RUN apt update && apt install -y foo bar baz
Then build and push the image to Docker Hub, then change the image: in the yaml:
image: membersound/maven-openjdk11-with-deps:latest
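For reference, the build and push steps might look like this (a sketch, using the image name above):
docker build -t membersound/maven-openjdk11-with-deps:latest .
docker push membersound/maven-openjdk11-with-deps:latest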
OR simply choose an image that already has all the dependencies you want! There are many useful Docker images out there with common tools pre-installed. For example, octopusdeploy/worker-tools comes with many runtimes and tools (Java, Python, the AWS CLI, kubectl, and much more).
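With that approach only the image line in the yaml changes, for example (pin a specific tag in practice):
image: octopusdeploy/worker-tools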
Attempt to cache the deb packages and install from them (beware: this is ugly).
Commit a bash script like the following to a file such as install-deps.sh:
#!/usr/bin/env bash
PACKAGES="wget jq foo bar baz"

# download the .deb files once and keep them in a cacheable directory
if [ ! -d "./.deb_packages" ]; then
  mkdir -p ./.deb_packages
  apt update && apt install -y --download-only ${PACKAGES}
  cp /var/cache/apt/archives/*.deb ./.deb_packages/
fi

# install from the cached .deb files
apt install -y ./.deb_packages/*.deb
This should cause the debian packages to be cached in the directory ./.deb_packages. You can then configure GitLab to cache that directory so it can be reused in later runs.
my_job:
  before_script:
    - bash ./install-deps.sh
  script:
    - ...
  cache:
    paths:
      - ./.deb_packages
I've now tried for several days to get a runner working with a Docker container. I have a Debian system running GitLab, gitlab-runner and docker. I want to use Docker containers for my runners, because shell executors install everything directly on my CI machine...
What I have done until now: I installed Docker as described in the GitLab CE docs and ran this command:
gitlab-runner register -n \
--url DOMAIN \
--registration-token TOKEN \
--executor docker \
--description "docker-builder" \
--docker-image "gliderlabs/alpine" \
--docker-privileged
Then I created a test repo to see if it is working, with this .gitlab-ci.yml:
variables:
  # GIT_STRATEGY: fetch # re-uses the project workspace
  GIT_CHECKOUT: "false" # don't checkout the working copy to a revision related to the CI pipeline
  GIT_DEPTH: "3"

cache:
  paths:
    - node_modules/

stages:
  - deploy

before_script:
  - apt-get update
  - apt-get install -y -qq sshpass
  - ls -la

# ======================= Jobs =======================
# Temporarily disable jobs by adding a . (dot) before the job name

ftp-upload:
  stage: deploy
  # environment: Production
  except:
    - testing
  script:
    - rm ./package-lock.json
    - npm install
    - ls -la
    - sshpass -V
    - export SSHPASS=$PASSWORD
    - sshpass -e scp -o stricthostkeychecking=no -r . $USERNAME@$HOST:/Test
  only:
    - master

# ===================== ./Jobs ======================
but I get an error in the GitLab CI console:
Running with gitlab-runner 11.1.0 (081978aa)
on docker-builder 5ce3c211
Using Docker executor with image gliderlabs/alpine ...
Pulling docker image gliderlabs/alpine ...
Using docker image sha256:74a78e860d7b39aa694197a70d4467019b611b80c21d886fcd1bfc04d2e767d4 for gliderlabs/alpine ...
Running on runner-5ce3c211-project-3-concurrent-0 via srvvgit001...
Cloning repository for master with git depth set to 3...
Cloning into '/builds/additive/test'...
Skipping Git checkout
Skipping Git submodules setup
Checking cache for default...
Successfully extracted cache
$ apt-get update
/bin/sh: eval: line 64: apt-get: not found
ERROR: Job failed: exit code 127
I don't know much about these Docker containers, but they seem good for reuse without modifying my CI system. It looks like it is pulling another Alpine image/container here, but haven't I told the GitLab runner to use an existing one?
Hopefully there is someone who can explain to me how this works... I have really tried everything Google gave me.
The Docker image you are using is an Alpine image, which is a minimal Linux distribution.
Alpine Linux does not use apt for package management but apk.
The problem is in your .gitlab-ci.yml's before_script section, where you are trying to run apt.
To solve your issue, replace the use of apt with apk:
before_script:
  - apk update
  - apk add sshpass
  ...
Read more about the Alpine Linux package management here.
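Applied to the job from the question, the before_script might become something like this (a sketch; nodejs, npm and openssh-client are added here as an assumption, because the job later runs npm and scp, which the minimal Alpine image does not ship):
before_script:
  - apk update
  - apk add --no-cache sshpass openssh-client nodejs npm   # nodejs/npm for `npm install`, openssh-client for scp
  - ls -la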