How to build a Docker image from .drone.yml?

I have a .drone.yml test file from which I want to build a Docker image. According to the documentation I have to build it using Drone.
I tried this tutorial ( https://www.digitalocean.com/community/tutorials/how-to-perform-continuous-integration-testing-with-drone-io-on-coreos-and-docker ) and several other tutorials, but I failed.
Can anyone please show me a simple way to build from a .drone.yml?
Thank you

Note that this answer applies to drone version 0.5
You can use the Docker plugin to build and publish a Docker image upon successful completion of your build. You add the Docker plugin as a step in the pipeline section of your .drone.yml file:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  publish:
    image: plugins/docker
    repo: foo/bar
In many cases you will want to limit execution of this step to certain branches. This can be done by adding runtime conditions:
publish:
  image: plugins/docker
  repo: foo/bar
  when:
    branch: master
You will need to provide drone with credentials to your Docker registry in order for drone to publish. These credentials can be declared directly in the yaml file, although storing these values in plain text in the yaml is generally not recommended:
publish:
  image: plugins/docker
  repo: foo/bar
  username: johnsmith
  password: pa55word
  when:
    branch: master
You can alternatively provide your credentials using the built-in secret store. Secrets can be added to the secret store on a per-repository basis using the Drone command line utility:
export DRONE_SERVER=http://drone.server.address.com
export DRONE_TOKEN=...

drone secret add \
  octocat/hello-world DOCKER_USERNAME johnsmith

drone secret add \
  octocat/hello-world DOCKER_PASSWORD pa55word

drone sign octocat/hello-world
Secrets are then interpolated in your yaml at runtime:
publish:
  image: plugins/docker
  repo: foo/bar
  username: ${DOCKER_USERNAME}
  password: ${DOCKER_PASSWORD}
  when:
    branch: master

Related

Unable to build go lang image using Circle CI config.yml due to bad syntax

I am using the config.yml file below ( .circleci/config.yml ) to run the CircleCI job for GitHub, building and pushing a Docker image to a repo:
orbs:
  docker: circleci/docker@1.5.0
version: 2.1
executors:
  docker-publisher:
    environment:
      IMAGE_NAME: johndocker/docker-node-app
    docker: # Each job requires specifying an executor
      # (either docker, macos, or machine), see
      — image: circleci/golang:1.15.1
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      — checkout
      — setup_remote_docker
      — run
          name: Publish Docker Image to Docker Hub
          command: |
            echo “$DOCKERHUB_PASSWORD” | docker login -u “$DOCKERHUB_USERNAME” — password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
      — publishLatestToHub
The config.yml is the magic that tells CircleCI what to do with our app; for this demo we want it to build a Docker image.
In CircleCI, *workflows* are simply orchestrators: they order how things should be done; *executors* define or group up tasks; *jobs* define the basic steps and commands to run.
But it shows the error below in the CircleCI dashboard:
Unable to parse YAML, while scanning a simple key in 'string', line 21,
I checked it with a YAML formatter as well, but couldn't resolve the issue. Please help.
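The error is consistent with formatting artifacts from copy-pasting out of a rich-text source: the list items use em-dashes (—) instead of ASCII hyphens, the quotes around the docker login arguments are curly quotes instead of straight quotes, — password-stdin should be --password-stdin, and - run needs a trailing colon. A corrected sketch of the same config (untested; same image and job names assumed):
orbs:
  docker: circleci/docker@1.5.0
version: 2.1
executors:
  docker-publisher:
    environment:
      IMAGE_NAME: johndocker/docker-node-app
    docker:
      - image: circleci/golang:1.15.1
        auth:
          username: $DOCKERHUB_USERNAME
          password: $DOCKERHUB_PASSWORD
jobs:
  publishLatestToHub:
    executor: docker-publisher
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Publish Docker Image to Docker Hub
          command: |
            echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
            docker build -t $IMAGE_NAME .
            docker push $IMAGE_NAME:latest
workflows:
  version: 2
  build-master:
    jobs:
      - publishLatestToHub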

Run GitHub workflow on Docker image with a Dockerfile?

I would like to run my CI on a Docker image. How should I write my .github/workflows/main.yml?
name: CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    name: build
    runs:
      using: 'docker'
      image: '.devcontainer/Dockerfile'
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
I get the error:
The workflow is not valid. .github/workflows/main.yml
(Line: 11, Col: 5): Unexpected value 'runs'
I managed to make it work but with an ugly workaround:
build:
  name: Build Project
  runs-on: ubuntu-latest
  steps:
    - name: Checkout code
      uses: actions/checkout@v1
    - name: Build docker images
      run: >
        docker build . -t foobar
        -f .devcontainer/Dockerfile
    - name: Build exam
      run: >
        docker run -v
        $GITHUB_WORKSPACE:/srv
        -w/srv foobar make
Side question: where can I find the documentation about this? All I found is how to write actions.
If you want to use a container to run your actions, you can use something like this:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://{host}/{image}:{tag}
    steps:
      ...
Here is an example.
If you want more details about the jobs.<job_id>.container and its sub-fields, you can check the official documentation.
Note that you can also use docker images at the step level: Example.
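For instance, a minimal sketch of a step-level image (the golang image and the make command here are placeholders, not from the original answer):
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # This single step runs inside the given image;
      # args is passed to the container as its command
      - name: Run make in a container
        uses: docker://golang:1.17
        with:
          args: make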
I am reposting my answer to another question, in order to be sure to find it while Googling it.
The best solution is to build, publish and re-use a Docker image based on your Dockerfile.
I would advise creating a custom build-and-publish-docker.yml action following the GitHub documentation: Publishing Docker images.
Assuming your repository is public, you should be able to automatically upload your image to ghcr.io without any required configuration. As an alternative, it's also possible to publish the image to Docker Hub.
Once your image is built and published (based on the on event of the action previously created, which can be triggered manually also), you just need to update your main.yml action so it uses the custom Docker image. Again, here is a pretty good documentation page about the container option: Running jobs in a container.
As an example, I'm sharing what I used in a personal repository:
Dockerfile: the Docker image to be built on CI
docker.yml: the action to build the Docker image
lint.yml: the action using the built Docker image
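As an illustration, a minimal sketch of what such a docker.yml could contain, loosely following the "Publishing Docker images" guide (the action versions and tags here are assumptions, not taken from the repository above):
name: Publish Docker image
on:
  push:
    branches: [ master ]
  workflow_dispatch: # allows manual triggering
jobs:
  build-and-publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - name: Log in to the GitHub Container registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push the image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest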

A locally built Docker image within a Bitbucket Pipeline

What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipeline configuration below but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the StackOverflow question below, but the context in that question is pushing the image in a following step, not using the image for the step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. Whilst it will also persist across steps, the primary use-case is for the data to be used on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipelines, you should use the artifacts keyword.
Docker service
You should only need the docker service when you want a Docker daemon available to your build, most commonly when you run a docker command in your script. Your second step runs no docker commands, so it doesn't need the docker service.
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with DockerHub
This example keeps the two-step approach. In this scenario, you push your foo/bar image to a public repository on Docker Hub. Pipelines will then pull it for use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
- step:
    name: Build
    image:
      name: foo/bar
      username: $DOCKERHUB_USERNAME
      password: $DOCKERHUB_PASSWORD
      email: $DOCKERHUB_EMAIL
    script:
      - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
      - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to the next as an artifact.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies of the image. It can take quite a while to execute and may therefore not be the best solution.

How to manage an SSH key file when executing an Ansible command with GitHub Actions

I have a GitHub repository, a Docker repository, and an Amazon EC2 instance. I am trying to create a CI/CD pipeline with these tools. The idea is to deploy a Docker container to the EC2 instance when a push happens to the GitHub repository's master branch. I have used GitHub Actions to build the code, build the Docker image, and push the Docker image to Docker Hub. Now I want to pull the latest image from Docker Hub to the remote EC2 instance and run it. For this I am trying to execute an ansible command from GitHub Actions, but I need to specify a .pem file as an argument to the ansible command. I tried to keep the .pem file in GitHub secrets, but it didn't work. I am really confused about how to proceed with this.
Here is my github workflow file
name: helloworld_cicd
on:
  push:
    branches:
      - master
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Check out code into the Go module directory
        uses: actions/checkout@v1
      - name: Go Build
        run: go build
      - name: Docker build
        run: docker build -t helloworld .
      - name: Docker login
        run: docker login --username=${{ secrets.docker_username }} --password=${{ secrets.docker_password }}
      - name: Docker tag
        run: docker tag helloworld vijinvv/helloworld:latest
      - name: Docker push
        run: docker push vijinvv/helloworld:latest
I tried to run something like
ansible all -i '3.15.152.219,' --private-key ${{ secrets.ssh_key }} -m rest of the command
but that didn't work. What would be the best way to solve this issue?
I'm guessing what you meant by "it didn't work" is that ansible expects the private key to be a file, whereas you are supplying a string.
This page in the GitHub Actions documentation shows how to use secret files in workflows. The equivalent for your case would be the following steps:
gpg --symmetric --cipher-algo AES256 my_private_key.pem
Choose a strong passphrase and save it as a secret in GitHub secrets. Call it LARGE_SECRET_PASSPHRASE.
Commit your encrypted my_private_key.pem.gpg to git.
Create a step in your actions that decrypts this file. It could look something like:
- name: Decrypt Pem
  run: gpg --quiet --batch --yes --decrypt --passphrase="$LARGE_SECRET_PASSPHRASE" --output $HOME/secrets/my_private_key.pem my_private_key.pem.gpg
  env:
    LARGE_SECRET_PASSPHRASE: ${{ secrets.LARGE_SECRET_PASSPHRASE }}
Finally you can run your ansible command with ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem
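Note that gpg will not create the output directory for you, so it helps to create it first. A consolidated sketch of the decrypt-and-run steps (with -m ping as a placeholder module, since the original command was elided):
- name: Decrypt Pem
  run: |
    mkdir -p $HOME/secrets
    gpg --quiet --batch --yes --decrypt --passphrase="$LARGE_SECRET_PASSPHRASE" --output $HOME/secrets/my_private_key.pem my_private_key.pem.gpg
  env:
    LARGE_SECRET_PASSPHRASE: ${{ secrets.LARGE_SECRET_PASSPHRASE }}
- name: Run Ansible
  run: ansible all -i '3.15.152.219,' --private-key $HOME/secrets/my_private_key.pem -m ping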
You can easily use webfactory/ssh-agent to add your SSH private key. See its documentation and add the following step before running the ansible command.
# .github/workflows/my-workflow.yml
jobs:
  my_job:
    ...
    steps:
      - uses: actions/checkout@v2
      # Make sure the @v0.5.2 matches the current version of the action
      - uses: webfactory/ssh-agent@v0.5.2
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - ... other steps
SSH_PRIVATE_KEY must be the key that is registered in repository secrets. After that, run your ansible command without passing the private key file.
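For example, the command from the question then reduces to something like the following (with -m ping as a placeholder module, since the original command was elided); ssh picks the key up from the agent automatically:
ansible all -i '3.15.152.219,' -m ping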

drone.io: use a private repo for pulling the FROM image

I have to use a private image from AWS or GCP for my build process in Drone.
The simplest Dockerfile example:
FROM ***.dkr.ecr.eu-central-1.amazonaws.com/***:latest
That means I have to log in, which works fine. My drone.yml example:
steps:
  - name: docker
    privileged: true
    image: revenuehack/drone-ecr-auth
    environment:
      AWS_ACCESS_KEY_ID:
        from_secret: aws_access_id
      AWS_SECRET_ACCESS_KEY:
        from_secret: aws_key
      AWS_REGION: eu-central-1
    commands:
      - aws ecr get-login --region $AWS_REGION --no-include-email | sh
But now I have to pull the image and use it in different steps of the CI process. Other questions suggest binding the docker.sock like here. That does not feel right to me; I'd rather have some sort of service for that. Is that possible? Also, this binding does not work:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
Option 1: Global Registry
Personally I would suggest using the drone-registry-plugin; it works great for keeping access to the ECR repos live:
https://github.com/drone/drone-registry-plugin
Now, a caveat to this suggestion is that it mimics the v0.8 global registries function, so all pipelines maintained in the installation would have the ability to access the registry.
Option 2: Local docker config.json
Since this could technically be used by anyone on the agents as well, I'm not sure what you gain over the registry plugin, but here is the ref:
https://discourse.drone.io/t/how-to-pull-private-images-with-1-0/3155
This option involves placing a .docker/config.json onto the agents using cloud-init or some other mechanism. In the individual pipelines you would then be able to add another root-level yaml block, image_pull_secrets:
kind: pipeline
name: default

steps:
  - name: someStep
    image: some.registry.dev/some-image:latest

image_pull_secrets:
  - dockerconfigjson
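If you go the per-repository route, the dockerconfigjson secret referenced above can be created from an existing Docker credentials file with the Drone CLI. A sketch, assuming the Drone 1.x CLI flags and a placeholder repository name:
drone secret add \
  --repository octocat/hello-world \
  --name dockerconfigjson \
  --data @$HOME/.docker/config.json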
