I learned from this answer that Docker does not support IPv6 by default and that /etc/docker/daemon.json has to be edited to change that.
I have a GitHub Action which operates inside of a Docker container. I.e.,
jobs:
test:
runs-on: ubuntu-latest
container:
image: my/image:sometag
steps:
      - uses: actions/checkout@v3
- run: make tests
For this action, I'm running a test that binds to a non-privileged port on ::1. Is there something I can put in my YAML file to configure the Docker daemon to allow IPv6?
This closed issue might be relevant. However, in that case, the user was trying to access the internet via IPv6. I just need to use the loopback interface.
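For reference, a minimal sketch of the daemon.json change that answer describes, expressed as a workflow step. This assumes the steps run directly on the VM (no container: key), since a job container is created before any step executes; the fd00::/80 subnet is an arbitrary example:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # enable IPv6 on the VM's Docker daemon, then restart it
      - run: |
          echo '{"ipv6": true, "fixed-cidr-v6": "fd00::/80"}' | sudo tee /etc/docker/daemon.json
          sudo systemctl restart docker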
I have a complex transformation that I need to apply whenever a particular file is pushed to GitHub. The transformation is written in Kotlin (Java) and containerized using Jib. That all works OK.
The problem is I don't know how to run the containerized java app from within a GitHub action. The GitHub action is defined as
# This is a workflow that transforms a data file into a json file
name: file-transform
# Controls when the workflow will run
on:
workflow_dispatch:
jobs:
container-test-job:
runs-on: ubuntu-latest
container:
image: docker.io/apigeneration/github-action-test
      credentials:
        username: ${{ github.actor }}
        password: ${{ secrets.github_token }}
volumes:
- /config:/config
- /data:/data
steps:
- name: Run docker application
run: ???
I have tried all the options I can think of for the run step but the action fails.
Part of the problem is that I'm not clear how Jib defines the app entry point and so how to define a java command as part of the run step (I've tried all the options I can think of based on the Jib documentation).
Just running the Docker container automatically runs the Java app, so perhaps there's a better way to invoke it in the action, though the container is in a private registry so I have to be able to pass in credentials.
Any help gratefully received.
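For illustration only, here is one possible shape of that run step, assuming (as noted above) that the Jib-built image's entry point already launches the app; the image path, volumes, and credentials are taken from the question, and whether the registry accepts github_token is left open:

jobs:
  container-test-job:
    runs-on: ubuntu-latest
    steps:
      # skip the job-level `container:` and run the image explicitly,
      # letting the Jib-defined entry point start the Java app
      - name: Log in to the registry
        run: echo "${{ secrets.github_token }}" | docker login docker.io -u "${{ github.actor }}" --password-stdin
      - name: Run docker application
        run: docker run --rm -v /config:/config -v /data:/data docker.io/apigeneration/github-action-test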
I'm executing CI jobs with the gitlab-ci runner, which is configured with the Kubernetes executor and actually runs on OpenShift. I want to be able to build Docker images from Dockerfiles, with the following constraints:
The runner (an OpenShift pod) runs as a user with a high, random UID (234131111111, for example).
The runner pod is not privileged.
No cluster admin permissions, and no ability to reconfigure the runner.
So obviously DinD cannot work, since it requires special Docker device configuration. Podman, kaniko, buildah, BuildKit, and makisu don't work for a random non-root user without any volumes.
Any suggestions?
DinD (Docker-in-Docker) does work in OpenShift 4 GitLab runners... I just got it working, and it was... a fight! Fact is, the solution is extremely brittle to any version change elsewhere. I just tried, e.g., to swap docker:20.10.16 for docker:latest or docker:stable, and that breaks.
Here is the configuration inside which it does work:
OpenShift 4.12
the Red Hat certified GitLab Runner Operator, installed via the OpenShift cluster web console / OperatorHub; it features gitlab-runner v14.2.0
docker:20.10.16 & docker:20.10.16-dind
Reference docs:
GitLab Runner Operator installation guide: https://cloud.redhat.com/blog/installing-the-gitlab-runner-the-openshift-way
Runner configuration details: https://docs.gitlab.com/runner/install/operator.html and https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html
and this key one about matching pipeline and runner settings: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html, which is the one to follow very precisely for your settings in the .gitlab-ci.yml pipeline definition AND the runner configuration config.toml file.
Installation steps:
follow docs 1 and 2 in the references above for the installation of the GitLab Runner Operator in OpenShift, but do not yet instantiate a Runner from the operator
on your GitLab server, copy the runner registration token for a group-wide or project-wide runner registration
elsewhere, in a terminal session where the oc CLI is installed, log in to the OpenShift cluster via the oc CLI so as to have the cluster:admin or system:admin role
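For instance (the API URL is a placeholder; any account with sufficient privileges works):

oc login https://api.mycluster.example.com:6443 -u kubeadmin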
create an OpenShift secret like:
vi gitlab-runner-secret.yml
apiVersion: v1
kind: Secret
metadata:
name: gitlab-runner-secret
namespace: openshift-operators
type: Opaque
stringData:
runner-registration-token: myRegistrationTokenHere
oc apply -f gitlab-runner-secret.yml
create a custom configuration map; note that the OpenShift operator will merge the supplied content into the config.toml generated by the GitLab Runner Operator itself; therefore, we only provide the fields we want to complement (we cannot even override an existing field value). Note too that the executor is preset to "kubernetes" by the operator. For a detailed understanding, see the docs above.
vi gitlab-runner-config-map.toml
[[runners]]
[runners.kubernetes]
host = ""
tls_verify = false
image = "alpine"
privileged = true
[[runners.kubernetes.volumes.empty_dir]]
name = "docker-certs"
mount_path = "/certs/client"
medium = "Memory"
oc create configmap gitlab-runner-config-map --from-file config.toml=gitlab-runner-config-map.toml
create a Runner to be deployed by the operator (adjust the URL)
vi gitlab-runner.yml
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
name: gitlab-runner
namespace: openshift-operators
spec:
gitlabUrl: https://gitlab.example.com/
buildImage: alpine
token: gitlab-runner-secret
tags: openshift, docker
config: gitlab-runner-config-map
oc apply -f gitlab-runner.yml
you should then see the runner just created via the OpenShift console (Installed Operators > GitLab Runner > GitLab Runner tab), followed by the automatic creation of a pod (see Workloads). You may even open a terminal session on the pod and type, for instance, gitlab-runner list to see the location of the config.toml file. You should also see the runner listed on the GitLab server console at the group or project level. Of course, firewalls between your OpenShift cluster and your GitLab server may ruin your endeavors at this point...
the rest of the trick takes place in your .gitlab-ci.yml file, e.g. (extract showing only one job at some stage). For a detailed understanding, see doc 3 above. The variable MY_ARTEFACT points to a sub-directory of the relevant Git project/repo containing a Dockerfile that you have already executed successfully, in your IDE for instance; REPO_PATH holds a common prefix string including a Docker Hub repository path and some extra name piece. Adjust all that to your convenience, BUT don't edit any of the first three variables defined under this job and do not change the docker[dind] version; it would break everything.
my_job_name:
stage: my_stage_name
tags:
- openshift # to run on specific runner
- docker
image: docker:20.10.16
variables:
DOCKER_HOST: tcp://docker:2376
DOCKER_TLS_CERTDIR: "/certs"
DOCKER_TLS_VERIFY: 1
DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
REPO_TAG: ${REPO_PATH}-${MY_ARTEFACT}:${IMAGE_TAG}
services:
- docker:20.10.16-dind
before_script:
    - sleep 10 && docker info # give time for the service to start, and confirm good setup in the logs
    - echo $DOCKER_HUB_PWD | docker login -u $DOCKER_HUB_USER --password-stdin
script:
- docker build -t $REPO_TAG ./$MY_ARTEFACT
- docker push $REPO_TAG
There you are, trigger the gitlab pipeline...
If you misconfigured anything, you'll get the usual error message "is the docker daemon running?" after a complaint about failing access to "/var/run/docker.sock" or a failing connection to "tcp://localhost:2375". And no, port 2376 is not a typo; it is the exact value to use in the DOCKER_HOST variable above.
So far so good? ... not yet!
Security settings:
Well, you may now see your Docker builds starting (meaning DinD is OK), and then failing for security's sake (or locking up).
Although we set 'privileged = true' in the config map above:
Docker comes with a nasty (and built-in) feature: by default it runs as 'root' in every container it runs, and for building containers.
on the other hand, OpenShift is built with strict security in mind, and prevents any pod from running as root.
So we have to change the security settings to enable those runners to execute in privileged mode, which is why it is important to restrict these permissions to a namespace, here 'openshift-operators', and to the specific service account 'gitlab-runner-sa':
`oc adm policy add-scc-to-user privileged -z gitlab-runner-sa -n openshift-operators`
The above will create a RoleBinding that you may remove or change as required. Fact is, 'gitlab-runner-sa' is the service account used by the GitLab Runner Operator to instantiate runner pods; '-z' indicates that the permission targets a service account (not a regular user account), and '-n' references the specific namespace we use here.
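Should you need to undo it later, the reverse command works too:

oc adm policy remove-scc-from-user privileged -z gitlab-runner-sa -n openshift-operators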
So you can now build images... but you may still be defeated when importing those images into an OpenShift project and trying to execute the generated pods. There are two constraints to anticipate:
OpenShift will block any image that requires running as 'root', i.e. in privileged mode (the default in docker run and docker compose up). ==> SO, PLEASE ENSURE THAT ALL THE IMAGES YOU BUILD WITH DOCKER-in-DOCKER can run as a non-root user via the Dockerfile USER directive (see the sketch after this list)!
... but the above may not be sufficient! Indeed, by default, OpenShift generates a random user ID to launch the container and ignores the one set in docker build via USER. To effectively allow the container to switch to the defined user, you have to bind the service account that runs your pods to the "anyuid" Security Context Constraint. This is easy to achieve via a role binding, or with the oc CLI command:
oc adm policy add-scc-to-user anyuid -n myProjectName -z default
where -z denotes a service account in the -n namespace.
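As an illustration of the USER directive mentioned above, the tail of a Dockerfile for an Alpine-based image might look like this (the user name and UID are arbitrary examples):

# create a non-root user and switch to it for runtime
RUN adduser -D -u 1001 appuser
USER 1001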
This is my YAML file:
- name: Start Jaeger daemon services
docker:
    name: jaeger-logz
image: logzio/jaeger-logzio:latest
state: started
env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
ports:
- "5775:5775"
- "6831:6831"
- "6832:6832"
- "5778:5778"
- "16686:16686"
- "14268:14268"
- "14250:14250"
- "9411:9411"
- name: Wait for Jaeger services to be up
wait_for: delay=60 port=5775
Can Ansible discover the Docker image from the Docker Hub registry by itself?
Does this actually start the Jaeger daemons or does it just build the image? If it's the latter, how can I run the container?
The docker image is from here - https://hub.docker.com/r/logzio/jaeger-logzio
Assuming you are using Docker CE:
You should be able to run it according to this documentation from Ansible. Note, however, that this module is deprecated in Ansible 2.4 and above, as the documentation itself states. Use the docker_container task instead if you want to run containers. The links are available in said documentation.
As far as your questions go:
Can Ansible discover the Docker image from the Docker Hub registry by itself?
This would depend on the client machine that you run it on. By default, Docker points to its own Docker Hub registry unless you specifically log in to another repository. If you use the public repo (which it looks like you do, per your link) and the client can reach said repo online, you should be fine.
Does this actually start the Jaeger daemons or does it just build the image? If it's the latter, how can I run the container?
According to the docker_container documentation, you should be able to run the container directly from this task. This means you are good to go.
P.S.: The image parameter on that page tells us that:
Repository path and tag used to create the container. If an image is
not found or pull is true, the image will be pulled from the registry.
If no tag is included, 'latest' will be used.
In other words, with a small adjustment to your task you should be fine (see the sketch below).
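A sketch of that adjustment, using the docker_container module with the values from the question (untested here, but all parameters are documented module options):

- name: Start Jaeger daemon services
  docker_container:
    name: jaeger-logz
    image: logzio/jaeger-logzio:latest
    state: started
    pull: true
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    published_ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"

The wait_for task can stay as it is.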
I'm trying to understand how CI/CD providers implement Docker networking. When reading the docs for CircleCI and GitHub Actions, they describe how to use service containers (like PostgreSQL etc) differently.
CircleCI (https://circleci.com/docs/2.0/executor-types/) says:
All containers run in a common network and every exposed port will be
available on localhost from a primary container.
They have an example config that shows accessing a MongoDB service container from the primary container on localhost:
jobs:
build:
docker:
# Primary container image where all steps run.
- image: buildpack-deps:trusty
# Secondary container image on common network.
- image: mongo:2.6.8-jessie
command: [mongod, --smallfiles]
working_directory: ~/
steps:
# command will execute in trusty container
# and can access mongo on localhost
- run: sleep 5 && nc -vz localhost 27017
GitHub Actions (https://docs.github.com/en/free-pro-team#latest/actions/guides/about-service-containers) says:
When you run jobs in a container, GitHub connects service containers to the job using Docker's user-defined bridge networks.
They have examples elsewhere in their docs that show how to connect to the container. Here's one for PostgreSQL:
name: PostgreSQL service example
on: push
jobs:
# Label of the container job
container-job:
# Containers must run in Linux based operating systems
runs-on: ubuntu-latest
# Docker Hub image that `container-job` executes in
container: node:10.18-jessie
# Service containers to run with `container-job`
services:
# Label used to access the service container
postgres:
# Docker Hub image
image: postgres
# Provide the password for postgres
env:
POSTGRES_PASSWORD: postgres
# Set health checks to wait until postgres has started
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
# Downloads a copy of the code in your repository before running CI tests
- name: Check out repository code
        uses: actions/checkout@v2
# Performs a clean installation of all dependencies in the `package.json` file
# For more information, see https://docs.npmjs.com/cli/ci.html
- name: Install dependencies
run: npm ci
- name: Connect to PostgreSQL
# Runs a script that creates a PostgreSQL client, populates
# the client with data, and retrieves data
run: node client.js
# Environment variable used by the `client.js` script to create a new PostgreSQL client.
env:
# The hostname used to communicate with the PostgreSQL service container
POSTGRES_HOST: postgres
# The default PostgreSQL port
POSTGRES_PORT: 5432
Here, the hostname of the PostgreSQL container is postgres (not localhost) because that label was chosen under services:.
CircleCI's docs don't explicitly describe what they do for networking, but does this mean that they're using the "host" networking mode (https://docs.docker.com/engine/reference/run/#network-host)? And, since the Docker documentation states:
Publishing ports and linking to other containers only works with the default (bridge).
Why does CircleCI not require you to include ports to be mapped in the config? Could they be parsing the Dockerfile for the containers and looking for EXPOSE directives and doing this automatically?
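One way to reproduce the "every exposed port on localhost" behavior locally is to attach the secondary container to the primary container's network namespace, so both share a single loopback interface with no -p publishing at all; whether CircleCI actually implements it this way is an assumption:

# start a primary container, then run mongo inside its network namespace
docker run -d --name primary buildpack-deps:trusty sleep infinity
docker run -d --network container:primary mongo:2.6.8-jessie mongod --smallfiles
# the service is now reachable on localhost from the primary container
docker exec primary sh -c 'nc -vz localhost 27017'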
When I create a GitHub Actions workflow file, the example YAML file contains runs-on: ubuntu-latest. According to the docs, I only have the options between a couple versions of Ubuntu, Windows Server and macOS X.
I thought GitHub Actions runs inside Docker. How do I choose my Docker image?
GitHub Actions provisions a virtual machine - as you noted, either Ubuntu, Windows or macOS - and runs your workflow inside of it. You can then use that virtual machine to run a workflow inside a container.
Use the container specifier to run a job's steps inside a container. Be sure to specify runs-on as the appropriate host environment for your container (ubuntu-latest for Linux containers, windows-latest for Windows containers). For example:
jobs:
vm:
runs-on: ubuntu-latest
steps:
- run: |
echo This job does not specify a container.
echo It runs directly on the virtual machine.
name: Run on VM
container:
runs-on: ubuntu-latest
container: node:10.16-jessie
steps:
- run: |
echo This job does specify a container.
echo It runs in the container instead of the VM.
name: Run in container
A job (as part of a workflow) runs inside a virtual machine. You choose one of the environments provided by GitHub (e.g. ubuntu-latest or windows-2019).
A job consists of one or more steps. A step may be a simple shell command, using run. But it may also be an action, using uses:
name: CI
on: [push]
jobs:
myjob:
runs-on: ubuntu-18.04 # linux required if you want to use docker
steps:
# Those steps are executed directly on the VM
- run: ls /
- run: echo $HOME
- name: Add a file
run: touch $HOME/stuff.txt
# Those steps are actions, which may run inside a container
      - uses: actions/checkout@v1
- uses: ./.github/actions/my-action
- uses: docker://continuumio/anaconda3:2019.07
run: <COMMAND> executes the command with the shell of the OS
uses: actions/checkout@v1 runs the checkout action from the actions user / organization's checkout repository (https://github.com/actions/checkout), major release 1
uses: ./.github/actions/my-action runs the action which is defined in your own repository under this path
uses: docker://continuumio/anaconda3:2019.07 runs the anaconda3 image from user / organization continuumio, version 2019.07, from the Docker Hub (https://hub.docker.com/r/continuumio/anaconda3)
Keep in mind that you need to select a linux distribution as the environment if you want to use Docker.
Take a look at the documentation for uses and run for further details.
It should also be noted that there is a container option, which allows steps that would usually run on the host to run inside a container: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer