I am trying to push a Docker image to a private registry in Drone 0.8.5. It works when I hardcode the username and password into the pipeline, but I have had no luck adding the registry details either in the registry tab or as secrets.
Registry Pipeline
docker-registry-push:
  image: plugins/docker
  repo: registry.domain.com:5000/app
  registry: registry.domain.com:5000
  insecure: true
  pull: true
This fails with "no basic auth credentials".
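For reference, registry credentials can also be added with the Drone 0.8 CLI, which should be equivalent to the registry tab; a rough sketch (the repository name is a placeholder):

  drone registry add \
    --repository your-org/app \
    --hostname registry.domain.com:5000 \
    --username <username> \
    --password <password>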
Finally, I tried variable substitution (with both $REGISTRY_USERNAME and $$REGISTRY_USERNAME). All attempts result in an error: msg="Error authenticating: exit status 1"
docker-registry-push:
  image: plugins/docker
  repo: registry.domain.com:5000/app
  registry: registry.domain.com:5000
  secrets:
    - source: registry_username
      target: username
    - source: registry_password
      target: password
  insecure: true
  pull: true
Another attempt:
docker-registry-push:
  image: plugins/docker
  repo: registry.domain.com:5000/app
  registry: registry.domain.com:5000
  username: ${REGISTRY_USERNAME}
  password: ${REGISTRY_PASSWORD}
  secrets: [ registry_username, registry_password ]
  insecure: true
  pull: true
It is really frustrating. After this I also need to add secrets for the Rancher access key and secret key via the correct method.
I have read other topics and the drone docs and am still stumped.
Thanks in advance.
The secrets need to be injected into the Docker container via the environment with the names docker_username and docker_password.
Your .drone.yml file should look something like this:
pipeline:
  docker:
    image: plugins/docker
    repo: username/app
    registry: registry.domain.com:5000
    insecure: true
    pull: true
    secrets:
      - source: registry_username
        target: docker_username
      - source: registry_password
        target: docker_password
See the drone plugin docs for more configuration options.
Here is how to manage Drone secrets: http://docs.drone.io/manage-secrets/#pull-requests
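For example, with the Drone 0.8 CLI the two secrets can be added along these lines (a sketch; the repository name is a placeholder):

  drone secret add \
    --repository your-org/app \
    --name registry_username \
    --value <username>

  drone secret add \
    --repository your-org/app \
    --name registry_password \
    --value <password>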
Also, you might want to consider using a .netrc file inside the Dockerfile for your build, so your credentials are embedded inside your Docker images.
Related
I am trying to create a private container registry on DigitalOcean Kubernetes, and I want all data to be saved in DigitalOcean Spaces. I am using this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-private-docker-registry-on-top-of-digitalocean-spaces-and-use-it-with-digitalocean-kubernetes
The pod is running well and I am able to push and pull images. I would like to configure basic auth (htpasswd) on top of it, but when I add the htpasswd attribute to my chart values file, I get this error:
{"level":"fatal","msg":"configuring application: unable to configure authorization (htpasswd): no access controller registered with name: htpasswd","time":"2022-12-14T13:02:23.608Z"}
My chart_values.yaml file:
ingress:
  enabled: true
  hosts:
    - cr.somedomain.com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "30720m"
  args:
    - --set controller.extraArgs.ingress-class=nginx
  tls:
    - secretName: somedomain-cr-prod
      hosts:
        - cr.somedomain.com
storage: s3
secrets:
  htpasswd: |-
    username:someBcryptPassword
  s3:
    accessKey: "someaccesskey"
    secretKey: "someaccesssecret"
s3:
  region: region
  regionEndpoint: region.digitaloceanspaces.com
  secure: true
  bucket: somebucketname
image:
  repository: somerepo
  tag: latest
Can anyone tell me where I went wrong? I have tried different formats for the htpasswd entry, but they all produced the same error.
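For what it's worth, the bcrypt entry for such a values file is typically generated with the htpasswd tool; a minimal sketch, assuming apache2-utils is installed and using placeholder credentials:

  # -B selects bcrypt, -b takes the password from the command line, -n prints to stdout
  htpasswd -Bbn username somepassword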
I am using GitHub Actions to trigger the build of my Dockerfile and upload the container image to GitHub Container Registry. In the last step I connect via SSH to my remote DigitalOcean droplet and execute a script to pull and install the new image from GHCR. This workflow was fine while I was only building a single container in the project. Now I am using Docker Compose, as I need NGINX besides my API. I would like to keep the containers on a single droplet, as the project is not demanding in resources at the moment.
What is the right way to automate deployment with Github Actions and Docker Compose to DigitalOcean on a single VM?
My currently known options are:
1. Skip building containers on GHCR, fetch the repo via SSH, and build from source on the remote by executing a production compose file.
2. Build each container on GHCR, copy the production compose file to the remote, and pull & install from GHCR (see the sketch below).
If you know more options that may be cleaner or more efficient, please let me know!
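A rough sketch of option 2, run on the droplet after the production compose file has been copied over (owner, token, and file names are placeholders):

  # authenticate against GHCR, then pull and start the updated images
  echo "$CR_PAT" | docker login ghcr.io -u OWNER --password-stdin
  docker compose -f docker-compose.prod.yml pull
  docker compose -f docker-compose.prod.yml up -d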
Unfortunately, I have only found a question about docker-compose with GitHub Actions for CI for reference.
GitHub Action for single Container
name: Github Container Registry to DigitalOcean Droplet

on:
  # Trigger the workflow via push on the main branch
  push:
    branches:
      - main
    # only trigger the action if the backend folder changed
    paths:
      - "backend/**"
      - ".github/workflows/**"

jobs:
  # Builds a Docker image and pushes it to GitHub Container Registry
  push_to_github_container_registry:
    name: Push to GHCR
    runs-on: ubuntu-latest
    # use the backend folder as the default working directory for the job
    defaults:
      run:
        working-directory: ./backend
    steps:
      # Check out the repository
      - name: Checking out the repository
        uses: actions/checkout@v2
      # Set up the Docker builder
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1
      # Set a GitHub access token with the "write:packages & read:packages" scope for GitHub Container Registry.
      # Then go to the repository settings and add the copied token as a secret called "CR_PAT"
      # https://github.com/settings/tokens/new?scopes=repo,write:packages&description=Github+Container+Registry
      # ! While GHCR is in beta, make sure to enable the feature
      - name: Logging into GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
      # Push to GitHub Container Registry
      - name: Pushing Image to Github Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./backend
          version: latest
          file: backend/dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

  # Connect to the existing droplet via SSH and (re)install and run the image
  # ! Ensure you have set up the preconfigured droplet with Docker
  # ! Ensure you have added an SSH key to the droplet
  # !   - it is easier to add the SSH keys before creating the droplet
  deploy_to_digital_ocean_dropplet:
    name: Deploy to Digital Ocean Droplet
    runs-on: ubuntu-latest
    needs: push_to_github_container_registry
    steps:
      - name: Deploy to Digital Ocean droplet via SSH action
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: ${{ secrets.PORT }}
          script: |
            # Stop all running Docker containers
            docker kill $(docker ps -q)
            # Free up space
            docker system prune -a
            # Log in to GitHub Container Registry
            docker login https://ghcr.io -u ${{ github.repository_owner }} -p ${{ secrets.CR_PAT }}
            # Pull the Docker image
            docker pull ghcr.io/${{ github.repository }}:latest
            # Run a new container from the new image
            docker run -d -p 80:8080 -p 443:443 -t ghcr.io/${{ github.repository }}:latest
Current Docker-Compose
version: "3"
services:
  api:
    build:
      context: ./backend/api
    networks:
      api-network:
        aliases:
          - api-net
  nginx:
    build:
      context: ./backend/nginx
    ports:
      - "80:80"
      - "443:443"
    networks:
      api-network:
        aliases:
          - nginx-net
    depends_on:
      - api
networks:
  api-network:
Thought I'd post this as an answer instead of a comment since it was cleaner.
Here's a gist: https://gist.github.com/Aldo111/702f1146fb88f2c14f7b5955bec3d101
name: Server Build & Push

on:
  push:
    branches: [main]
    paths:
      - 'server/**'
      - 'shared/**'
      - docker-compose.prod.yml
      - Dockerfile

jobs:
  build_and_push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the repo
        uses: actions/checkout@v2
      - name: Create env file
        run: |
          touch .env
          echo "${{ secrets.SERVER_ENV_PROD }}" > .env
          cat .env
      - name: Build image
        run: docker compose -f docker-compose.prod.yml build
      - name: Install doctl
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - name: Log in to DO Container Registry
        run: doctl registry login --expiry-seconds 600
      - name: Push image to DO Container Registry
        run: docker compose -f docker-compose.prod.yml push
      - name: Deploy Stack
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.GL_SSH_HOST }}
          username: ${{ secrets.GL_SSH_USERNAME }}
          key: ${{ secrets.GL_SSH_SECRET }}
          port: ${{ secrets.GL_SSH_PORT }}
          script: |
            cd /srv/www/game
            ./init.sh
In the final step, the directory in my case just contains a .env file and my prod compose file, but these things could also be rsynced/copied/automated as another step in this workflow before actually running things.
My init.sh simply contains:
docker stack deploy -c <(docker-compose -f docker-compose.yml config) game --with-registry-auth
The --with-registry-auth part is important since my docker-compose file has image: entries that use images in DigitalOcean's container registry. On my server, I had already logged in once when I first set up the directory.
With that, this docker command consumes my docker-compose.yml along with the environment variables (i.e. docker-compose -f docker-compose.yml config pre-processes the compose file with the .env file in the same directory, since stack deploy doesn't use .env). With the registry already authenticated, it pulls the relevant images and restarts things as needed!
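If you'd rather avoid the process substitution in init.sh, an equivalent two-step version (same assumptions, with a placeholder name for the rendered file) would be:

  # pre-process the compose file with the .env values, then deploy the rendered file
  docker-compose -f docker-compose.yml config > stack.rendered.yml
  docker stack deploy -c stack.rendered.yml game --with-registry-auth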
This can definitely be cleaned up and made a lot simpler but it's been working pretty well for me in my use case.
I am trying to use the container option in a GitHub Actions workflow to run the entire job in a docker container. How do I specify the login credentials to retrieve this docker image from a private repository on docker hub?
jobs:
  build:
    runs-on: ubuntu-18.04
    container: private_org/test-runner:1.0
I have successfully used the following docker-login "action" to authenticate with docker hub as a "step", but this does not get performed until after the job-level container gets initialized.
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: azure/docker-login@v1
        with:
          username: me
          password: ${{ secrets.MY_DOCKERHUB_PASSWORD }}
      - name: test docker creds
        run: docker pull private_org/test-runner:1.0
This was implemented recently. Use the following workflow definition:
jobs:
  build:
    container:
      image: private_org/test-runner:1.0
      credentials:
        username: me
        password: ${{ secrets.MY_DOCKERHUB_PASSWORD }}
Source:
https://github.blog/changelog/2020-09-24-github-actions-private-registry-support-for-job-and-service-containers/
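According to that changelog entry, the same credentials block also works for service containers; a sketch with a hypothetical private image name:

  jobs:
    build:
      runs-on: ubuntu-18.04
      services:
        db:
          image: private_org/test-db:1.0
          credentials:
            username: me
            password: ${{ secrets.MY_DOCKERHUB_PASSWORD }}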
I have set up Drone with the Docker plugin. It is building just fine, but fails to push to a private Dockerhub repo.
I have confirmed that dockerhub_username and dockerhub_password are environment variables.
kind: pipeline
type: exec
name: default

steps:
  - name: docker
    image: plugins/docker
    settings:
      repo: jbc22/myrepo
      username:
        from_secret: dockerhub_username
      password:
        from_secret: dockerhub_password

publish:
  image: jbc22/myrepo
  report: jbc22/myrepo
Drone returns with:
denied: requested access to the resource is denied
time="2019-09-03T19:34:32Z" level=fatal msg="exit status 1"
I would expect to see the image pushed to Dockerhub.
Just fixed the same issue... the code below works for me!
name: default
kind: pipeline

steps:
  - name: backend
    image: python:3.7
    commands:
      - pip3 install -r req.txt
      - python manage.py test

  - name: publish
    image: plugins/docker
    settings:
      username: dockerhub_username
      password: dockerhub_password
      repo: user/repo_name
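If you'd rather keep the credentials out of the pipeline file, the same step can reference Drone secrets, as the question attempted (a sketch, assuming secrets named dockerhub_username and dockerhub_password are defined in the repository settings):

  - name: publish
    image: plugins/docker
    settings:
      username:
        from_secret: dockerhub_username
      password:
        from_secret: dockerhub_password
      repo: user/repo_name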
I am trying to build a Docker container which should include startup scripts in the container's /etc/my_init.d directory via Ansible. I am having difficulty finding any documentation on how to do this. Here is the relevant portion of my YAML file:
- name: Create container
  docker:
    name: myserver
    image: "{{ docker_repo }}/myserver:{{ server.version }}"
    state: started
    restart_policy: always
    docker_api_version: 1.18
    registry: "{{ docker_repo }}"
    username: "{{ registry_user }}"
    password: "{{ registry_password }}"
    links:
      - "mywebservices"
    ports:
      - "8000:8000"
      - "9899:9899"
    volumes:
      - "{{ myserver_home_dir }}/logs:/var/log/my_server"
    env:
      MY_ENVIRONMENT: "{{ my_environment }}"
  when: myserver_action == "create"
        or (myserver_action == "diff-create" and myserver.changed)
        or myserver_action == "update"
What should I add here to tell Ansible to put my files into the container's /etc/my_init.d during the build?
First of all, you can't build a container (you can start one); you build images.
Second, the docker module is deprecated; use docker_image to build images.
You should copy your files into a build directory (with the copy or synchronize modules), for example:
/tmp/build
Then create a Dockerfile that takes them from the build directory and adds them to your image.
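For example, a minimal sketch of the copy step; the source path files/my_init.d/ is hypothetical, and the Dockerfile in /tmp/build would then need a matching line such as "COPY my_init.d/ /etc/my_init.d/":

  - name: Copy startup scripts into the build directory
    copy:
      src: files/my_init.d/
      dest: /tmp/build/my_init.d/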
After that, call docker_image:
docker_image:
  path: /tmp/build
  name: myimage
Finally, start your container:

docker_container:
  image: myimage
  name: mycontainer
Unsure if it's relevant, as I don't know what your startup Ansible content is doing, but it's probably worth looking at the Ansible Container project.
https://github.com/ansible/ansible-container
You can build your container images using Ansible roles instead of a Dockerfile, orchestrate them locally, and deploy them to production Kubernetes or Red Hat OpenShift.