How to pull a Docker image and execute it from GitHub Actions

How does one pull an image from a GitHub Action? Specifically, one that requires authentication:
steps:
  - name: Pull Docker Image
    uses: docker/???
    image: image_host.com/image:latest
^^^ This is wrong, and I am not sure what the right syntax is.
I then want to run a command inside of the container:
- name: Run test
  run: |
    node index.js # (index.js is inside of the container)

In order to use a GitHub workflow with a Docker container, you need a workflow runner that has Docker installed on its system, such as ubuntu-latest. Then use the container directive to pick a container.
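As a minimal sketch of what that can look like for the question above, assuming the registry accepts a username/password pair stored in repository secrets (REGISTRY_USER and REGISTRY_TOKEN are illustrative secret names):

jobs:
  test:
    runs-on: ubuntu-latest
    container:
      image: image_host.com/image:latest
      credentials:
        username: ${{ secrets.REGISTRY_USER }}
        password: ${{ secrets.REGISTRY_TOKEN }}
    steps:
      - name: Run test
        run: node index.js  # executes inside the container (note: the working directory is the GitHub workspace, not the image's WORKDIR)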

Related

How to set up Docker in Docker (DinD) on CloudBuild?

I am trying to run a script (unit tests) that uses Docker behind the scenes on a CI. The script works as expected on DroneCI, but after switching to Cloud Build it is not clear how to set up DinD.
For DroneCI I basically use DinD as shown here. My question is: how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the syntax of Cloud Build with respect to DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the builds submit subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then, on one of the Cloud Build VMs, it fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step. By doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example. This tutorial uses the demonstration repository as part of its instructions.
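For example, a cloudbuild.yaml along these lines builds an image and then runs the tests inside it (the image name and test command are illustrative):

steps:
  # Build the image with the standard docker builder
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
  # Run the tests inside the image built in the previous step;
  # images built earlier in the pipeline are available locally
  - name: gcr.io/$PROJECT_ID/my-image
    entrypoint: 'sh'
    args: ['-c', 'npm test']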
I would also recommend having a look at these informative links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that, we need to launch a service in the background with docker-compose. Your docker-compose.yml file should look something like this:
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case, I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way, the DinD container will be on the same network as every container spawned during your steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
Now, if that works, you can run your script as normal with DinD in the background. What is important is to pass the DOCKER_HOST environment variable so that the Docker client can locate the Docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note: any container spawned by your script will be located inside the dind-service container, so if you need to make requests to it, don't address http://localhost but http://dind-service instead.
Moreover, if you use private images you will need some form of authentication before running your script. For that, run gcloud auth configure-docker --quiet before your script, and make sure your Docker image has gcloud installed. This creates the required authentication credentials. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it; you might run into problems if you use tox, for example.
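Putting those last two points together, the final step could look something like this (the image and script names are illustrative, carried over from the step above):

- id: my-script
  name: my-image
  script: |
    # Authenticate against the private registry first, then run the actual script
    gcloud auth configure-docker --quiet
    myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'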

Run GitHub workflow on Docker image with a Dockerfile?

I would like to run my CI on a Docker image. How should I write my .github/workflows/main.yml?
name: CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    name: build
    runs:
      using: 'docker'
      image: '.devcontainer/Dockerfile'
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
I get the error:
The workflow is not valid. .github/workflows/main.yml
(Line: 11, Col: 5): Unexpected value 'runs'
I managed to make it work but with an ugly workaround:
build:
  name: Build Project
  runs-on: ubuntu-latest
  steps:
    - name: Checkout code
      uses: actions/checkout@v1
    - name: Build docker images
      run: >
        docker build . -t foobar
        -f .devcontainer/Dockerfile
    - name: Build exam
      run: >
        docker run -v
        $GITHUB_WORKSPACE:/srv
        -w /srv foobar make
Side question: where can I find the documentation about this? All I found is how to write actions.
If you want to use a container to run your actions, you can use something like this:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://{host}/{image}:{tag}
    steps:
      ...
Here is an example.
If you want more details about the jobs.<job_id>.container and its sub-fields, you can check the official documentation.
Note that you can also use docker images at the step level: Example.
I am reposting my answer to another question here, so that it can be found while Googling.
The best solution is to build, publish and re-use a Docker image based on your Dockerfile.
I would advise creating a custom build-and-publish-docker.yml action following the GitHub documentation: Publishing Docker images.
Assuming your repository is public, you should be able to automatically upload your image to ghcr.io without any configuration required. As an alternative, it's also possible to publish the image to Docker Hub.
Once your image is built and published (based on the on event of the action created previously, which can also be triggered manually), you just need to update your main.yml action so it uses the custom Docker image. Again, here is a pretty good documentation page about the container option: Running jobs in a container.
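For illustration, a minimal sketch of the two pieces, assuming a public repository publishing to ghcr.io (the workflow file names and the image name are illustrative; note that ghcr.io requires a lowercase image path):

# docker.yml: build the image and push it to GitHub Container Registry
name: Publish Docker image
on:
  workflow_dispatch:
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - name: Log in to ghcr.io
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build and push
        run: |
          docker build -t ghcr.io/my-org/my-ci-image:latest .
          docker push ghcr.io/my-org/my-ci-image:latest

The CI workflow then simply points its container option at the published image:

# main.yml: run the job inside the published image
jobs:
  build:
    runs-on: ubuntu-latest
    container: ghcr.io/my-org/my-ci-image:latest
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make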
As an example, I'm sharing what I used in a personal repository:
Dockerfile: the Docker image to be built on CI
docker.yml: the action to build the Docker image
lint.yml: the action using the built Docker image

How can I run a docker container in ansible?

This is my yml file:
- name: Start jaegar daemon services
  docker:
    name: jaegar-logz
    image: logzio/jaeger-logzio:latest
    state: started
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"

- name: Wait for jaegar services to be up
  wait_for: delay=60 port=5775
Can ansible discover the docker image from the docker hub registry by itself?
Does this actually start the jaegar daemons or does it just build the image? If it's the latter, how can I run the container?
The docker image is from here - https://hub.docker.com/r/logzio/jaeger-logzio
Assuming you are using Docker CE:
You should be able to run it according to this documentation from Ansible. Note, however, that this module is deprecated in Ansible 2.4 and above, as the documentation itself states. Use the docker_container task instead if you want to run containers. The links are available in said documentation.
As far as your questions go:
Can ansible discover the docker image from the docker hub registry by itself?
This depends on the client machine you will run it on. By default, Docker points to its own Docker Hub registry unless you specifically log in to another repository. If you use the public repo (which your link suggests) and the client can reach that repo online, you should be fine.
Does this actually start the jaegar daemons or does it just build the image? If it's the latter, how can I run the container?
According to the docker_container documentation you should be able to run the container directly from this task. This would mean that you are good to go.
P.S.: The image parameter on that page tells us:
    Repository path and tag used to create the container. If an image is not found or pull is true, the image will be pulled from the registry. If no tag is included, 'latest' will be used.
In other words, with a small adjustment to your task you should be fine.
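As a sketch, here is the same task rewritten against the docker_container module (everything else kept from the question; pull is added so the image is fetched from Docker Hub if missing):

- name: Start jaegar daemon services
  docker_container:
    name: jaegar-logz
    image: logzio/jaeger-logzio:latest
    state: started
    pull: true  # pull the image from the registry if it is not present locally
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"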

How do I use Docker with GitHub Actions?

When I create a GitHub Actions workflow file, the example YAML file contains runs-on: ubuntu-latest. According to the docs, I only have the options between a couple of versions of Ubuntu, Windows Server and macOS.
I thought GitHub Actions runs inside Docker. How do I choose my Docker image?
GitHub Actions provisions a virtual machine - as you noted, either Ubuntu, Windows or macOS - and runs your workflow inside of that. You can then use that virtual machine to run a workflow inside a container.
Use the container specifier to run your job's steps inside a container. Be sure to specify runs-on as the appropriate host environment for your container (ubuntu-latest for Linux containers, windows-latest for Windows containers). For example:
jobs:
  vm:
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo This job does not specify a container.
          echo It runs directly on the virtual machine.
        name: Run on VM
  container:
    runs-on: ubuntu-latest
    container: node:10.16-jessie
    steps:
      - run: |
          echo This job does specify a container.
          echo It runs in the container instead of the VM.
        name: Run in container
A job (as part of a workflow) runs inside a virtual machine. You choose one of the environments provided (e.g. ubuntu-latest or windows-2019).
A job consists of one or more steps. A step may be a simple shell command, using run. But it may also be an action, using uses:
name: CI
on: [push]
jobs:
  myjob:
    runs-on: ubuntu-18.04 # linux required if you want to use docker
    steps:
      # Those steps are executed directly on the VM
      - run: ls /
      - run: echo $HOME
      - name: Add a file
        run: touch $HOME/stuff.txt
      # Those steps are actions, which may run inside a container
      - uses: actions/checkout@v1
      - uses: ./.github/actions/my-action
      - uses: docker://continuumio/anaconda3:2019.07
run: <COMMAND> executes the command with the shell of the OS
uses: actions/checkout@v1 runs the action from the user / organization actions in the repository checkout (https://github.com/actions/checkout), major release 1
uses: ./.github/actions/my-action runs the action which is defined in your own repository under this path
uses: docker://continuumio/anaconda3:2019.07 runs the anaconda3 image from user / organization continuumio, version 2019.07, from the Docker Hub (https://hub.docker.com/r/continuumio/anaconda3)
Keep in mind that you need to select a linux distribution as the environment if you want to use Docker.
Take a look at the documentation for uses and run for further details.
It should also be noted that there is a container option, allowing any steps that would usually run on the host to be run inside a container: https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idcontainer
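A minimal sketch of that option, reusing the anaconda3 image from above (every run step then executes inside the container rather than on the VM):

jobs:
  myjob:
    runs-on: ubuntu-18.04
    container: continuumio/anaconda3:2019.07
    steps:
      - run: python --version  # runs inside the container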

Gitlab CI docker-in-docker deployment not running commands inside of the container

I am trying to set up a new build pipeline for one of our projects. In a first step, I am building a new Docker image for successive testing. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of inside the container.
Here is the content of my gitlab-ci.yml:
stages:
  - build
  - analytics

variables:
  TEST_IMAGE_NAME: 'registry.server.de/testimage'

build_testing_container:
  stage: build
  image: docker:stable
  services:
    - dind
  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always
  except:
    - production
  allow_failure: true
What do I need to change to make gitlab runner execute the script commands inside the container it's successfully pulling?
UPDATE:
It's getting even more interesting:
I just changed the script to sleep for a while so I can attach to the container. When I run pwd from the CI script, it says /builds/namespace/project.
However, running pwd on the server with docker exec using the exact same container, it returns /app as it is supposed to.
UPDATE2:
After some more research, I learned that GitLab executes four sub-steps for each build step:
Prepare: Create and start the services.
Pre-build: Clone, restore cache and download artifacts from previous stages. This is run on a special Docker image.
Build: User build. This is run on the user-provided Docker image.
Post-build: Create cache, upload artifacts to GitLab. This is run on a special Docker image.
It seems like in my case, step 3 isn't executed properly and the command is still running inside the gitlab runner docker image.
UPDATE3:
In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.
This is the usual behavior. The image keyword names the Docker image that the Docker executor runs to perform the CI tasks.
You can use the services keyword, which defines another Docker image that is run during your job and is linked to the image that the image keyword defines. This allows you to access the service image during build time.
Access can be done through a script or an entrypoint. For example, in the Dockerfile of the image you are going to build, add a script that you want to execute:
ADD exemple.sh /
RUN chmod +x /exemple.sh
Then you can add the image as a service in gitlab-ci, and the script would change to:
docker exec <container_name> /exemple.sh
This will run the script inside the container. Alternatively, specify an entrypoint in the Docker image, and then the script would be:
docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
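One concrete way to wire this up, sketched under the assumption that the runner allows privileged DinD and that the job gets its Docker daemon from a docker:dind service (the container name and the /app report path, taken from the question's update, are illustrative):

mess_detection:
  stage: analytics
  image: docker:stable
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    # Start the test image in the background and keep it alive
    # (assumes sleep is available in the image)
    - docker run -d --name test-container $TEST_IMAGE_NAME sleep infinity
    # Execute the command inside the container, where the working directory is /app
    - docker exec test-container vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php
    # Copy the report back into the job workspace so the artifacts section can pick it up
    - docker cp test-container:/app/mess_detection.html mess_detection.html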
Here's a reference:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html
