401 with Wercker internal/docker-push to dockerhub

The relevant (I think) part of my wercker.yml is:
deploy:
  steps:
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        entrypoint: /pipeline/source/pipeline
        tag: latest
        repository: colezlaw/pipeline
        registry: https://registry.hub.docker.com
I have a repository on Docker Hub called colezlaw/pipeline, and I've set my Docker Hub credentials in the pipeline on Wercker. However, when it tries to push to Docker, it gets a 401:
Error interacting with this repository: colezlaw/pipeline PUT https://registry.hub.docker.com/v1/repositories/colezlaw/pipeline/ returned 401
Is there something else I need to set up on the dockerhub side?

Instead of setting the credentials in the pipeline itself, add the environment variables $USERNAME and $PASSWORD in the pipeline's Environment tab on Wercker.

Related

Access GitHub package from GitHub Actions services section

With GitHub Actions I'm trying to set up a service that runs a specific image (MySQL preloaded with a database) that I have pushed to ghcr.io; however, when it runs I get this error:
Error response from daemon: denied
Warning: Docker pull failed with exit code 1, back off 8.976 seconds before retry.
Workflow:
services:
  mysql:
    image: ghcr.io/my-name/my-image
    ports:
      - 3306:3306
I see it does the following:
/usr/bin/docker --config /home/runner/work/_temp/.docker_[...] login ghcr.io -u myusername --password-stdin
There is no feedback, so I'm not sure whether it logged in or not. Then:
/usr/bin/docker --config /home/runner/work/_temp/.docker[...] pull ghcr.io/my-name/my-image
And then I get that error.
I have found many examples (see below) that use GITHUB_TOKEN, but none that show how to use it within the services section, so I am not sure whether this works or what the syntax would be. Is it even possible to use it with services? I have also given the repository in which the GitHub Action is defined access to the specific package.
steps:
  - name: Checkout repository
    uses: actions/checkout@v3
  - name: Log in to the Container registry
    uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
    with:
      registry: ${{ env.REGISTRY }}
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
So I finally found the issue. In my workflow (started from the default template) I had:
permissions:
  contents: read
Then I saw this:
Setting permissions in the workflow
A new permissions key supported at the workflow and job level enables
you to specify which permissions you want for the token. Any
permission that is absent from the list will be set to none.
This caused packages to be set to none. Removing the permissions block entirely, or adding:
packages: read
fixes the issue I had, thanks for the help.
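For reference, a minimal sketch of how the fixed workflow could look, combining the corrected permissions with the services block from the question (the credentials entry follows the private-registry support for service containers shown further below, and the job id is just a placeholder):
permissions:
  contents: read
  packages: read                      # without this, the job token cannot pull the package from ghcr.io
jobs:
  tests:                              # placeholder job id
    runs-on: ubuntu-latest
    services:
      mysql:
        image: ghcr.io/my-name/my-image
        credentials:
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
        ports:
          - 3306:3306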

Access image in GCP Artifact Registry from Gitlab CI/CD using a service-account key

I am trying to run a Gitlab CI job using an image from a private GCP Artifact Registry, i.e.:
build job:
  stage: build_job
  image: toto-tata-docker.pkg.dev/my-gcp/my-project/my-image:${IMAGE_VERSION}
  variables:
    ...
I have read in many places, including the GitLab docs, that a DOCKER_AUTH_CONFIG CI/CD variable could do the trick, and it is suggested to run
docker login toto-tata-docker.pkg.dev --username my_username --password my_password
and copy whatever ends up in ~/.docker/config.json after that.
Now my problem is that I am using a service account to authenticate to that Artifact Registry, and I have no idea which username and password to use to generate the "right" ~/.docker/config.json. :'(
I tried using a credential helper as suggested in the GCP docs (that is actually what my ~/.docker/config.json looks like on my laptop when I docker login with the service account key):
{
  "credHelpers": {
    "toto-tata-docker.pkg.dev": "gcloud"
  }
}
but the job fails because of a denied permission:
...
denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on reso
...
It would be awesome if anyone could advise!
Thank you!
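A minimal sketch of how such a DOCKER_AUTH_CONFIG could be generated from a service-account key, assuming the _json_key username convention that Artifact Registry accepts for key-file logins (the key file name and registry host are placeholders):
cat my-service-account-key.json | docker login -u _json_key --password-stdin https://toto-tata-docker.pkg.dev
The resulting ~/.docker/config.json (when no credential store intercepts it) contains an auths entry rather than a credHelpers entry, roughly:
{
  "auths": {
    "toto-tata-docker.pkg.dev": {
      "auth": "<base64 of _json_key:<contents of the key file>>"
    }
  }
}
That JSON is what would go into the DOCKER_AUTH_CONFIG CI/CD variable. Independently of the login method, the downloadArtifacts error above can also simply mean the service account is missing the Artifact Registry Reader role on the repository.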

How to define private repo image in Gitlab Runner

I have a GitLab runner registered with the docker+machine executor and I have set up my .gitlab-ci.yml as below:
stages:
  - RUN_TESTS
  - CLEAN
image:
  name: <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG>
The image <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG> lives in a private AWS ECR repository, and the job fails every time because the repository is private.
How can I configure the runner to pull this private image?
I got a password for ECR using the aws ecr get-login-password --region us-east-2 command.
I also looked into the docker-credential-ecr-login tool and installed it on the runner instance. I configured the AWS credentials using aws configure, so the credentials are now at ~/.aws/credentials.
I also added the following block to ~/.docker/config.json:
"credHelpers": {
"<aws_account_id>.dkr.ecr.<region>.amazonaws.com": "ecr-login"
}
But when I try to docker pull <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAMESPACE>:<TAG> it gives me the following error:
Error response from daemon: Get https://Account-ID.dkr.ecr.REGION.amazonaws.com/v2/spot-runner-image/manifests/latest: no basic auth credentials
Does this have anything to do with docker-credential-ecr-login?
You're getting that error because your runner (and/or the job) isn't authenticating with the ECR registry before it tries to pull an image. You'll need to provide the auth data in one of the following forms:
- a $DOCKER_AUTH_CONFIG variable in the job (see the docs: https://docs.gitlab.com/ee/ci/variables/README.html#create-a-custom-variable-in-gitlab-ciyml)
- a $DOCKER_AUTH_CONFIG project variable from your project's Settings -> CI/CD page (see the docs: https://docs.gitlab.com/ee/ci/variables/#create-a-custom-variable-in-the-ui)
- a $DOCKER_AUTH_CONFIG variable in your runner's config.toml file (https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#configuring-a-runner)
The docs on setting the DOCKER_AUTH_CONFIG are here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-an-image-from-a-private-container-registry.
To determine what should be in the DOCKER_AUTH_CONFIG variable, you can log in to your registry from your local machine with docker login example-registry.example.com --username my_user --password my_password. This creates a config.json file in the ~/.docker directory. However, on my Mac the credentials are stored in the keychain, and that config.json can't be used by GitLab; in that case you'll have to create the content manually. All this information and more is in the docs: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#determining-your-docker_auth_config-data
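A minimal sketch of a manually created DOCKER_AUTH_CONFIG value for the ECR registry in the question; the auth field is the base64 encoding of username:password, which for ECR means the literal user AWS plus the token printed by aws ecr get-login-password (placeholders throughout, and note these tokens are only valid for 12 hours):
{
  "auths": {
    "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com": {
      "auth": "<base64 of AWS:<output of aws ecr get-login-password>>"
    }
  }
}
Because the token is short-lived, a credHelpers entry like the one already added to ~/.docker/config.json in the question can be used as the DOCKER_AUTH_CONFIG value instead, provided docker-credential-ecr-login and valid AWS credentials are available where the image is actually pulled.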

How to log in and pull from a private registry in GitHub Actions services

I want to have a services section in a GitHub Actions workflow file that uses a private registry. The simplified config looks like this:
jobs:
  my_job:
    runs-on: ubuntu-latest
    services:
      image-name:
        image: docker.pkg.github.com/<org>/<repo>/<image>
    steps:
      # ...
The repo resides within the same organization, if it matters. Also, the image can be pulled locally with proper credentials, but it obviously fails in the GitHub Actions pipeline with an error:
Error response from daemon: Get <image_url>: no basic auth credentials
So my question is: is it possible to specify credentials, either via env vars (aka Secrets in GitHub) or via some flag in services.options? I know manual login/pull/start could be an alternative, but I would prefer the declarative way.
Since Sept. 24th 2020, yes, it should be possible to specify your credentials for a private registry.
See "GitHub Actions: Private registry support for job and service containers"
You can now use images from private registries in job and service containers.
Here's an example of using private images from Docker Hub:
jobs:
  build:
    container:
      image: octocat/ci-image:latest
      credentials:
        username: mona
        password: ${{ secrets.docker_hub_password }}
    services:
      db:
        image: octocat/testdb:latest
        credentials:
          username: mona
          password: ${{ secrets.docker_hub_password }}
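Applied to the docker.pkg.github.com image from the question, a minimal sketch using the built-in actor and token (assuming the workflow token has read access to the package, e.g. via a packages: read permission):
jobs:
  my_job:
    runs-on: ubuntu-latest
    services:
      image-name:
        image: docker.pkg.github.com/<org>/<repo>/<image>
        credentials:
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
    steps:
      # ...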

How to build a Docker image from .drone.yml?

I have a .drone.yml test file from which I want to build a Docker image. According to the documentation I have to build it using Drone.
I tried this tutorial (https://www.digitalocean.com/community/tutorials/how-to-perform-continuous-integration-testing-with-drone-io-on-coreos-and-docker) and several other tutorials, but I failed.
Can anyone please show me a simple way to build from .drone.yml?
Thank you
Note that this answer applies to drone version 0.5
You can use the Docker plugin to build and publish a Docker image at the successful completion of your build. You add the Docker plugin as a step in your build pipeline section of the .drone.yml file:
pipeline:
  build:
    image: golang
    commands:
      - go build
      - go test
  publish:
    image: plugins/docker
    repo: foo/bar
In many cases you will want to limit execution of this step to certain branches. This can be done by adding runtime conditions:
publish:
  image: plugins/docker
  repo: foo/bar
  when:
    branch: master
You will need to provide drone with credentials to your Docker registry in order for drone to publish. These credentials can be declared directly in the yaml file, although storing these values in plain text in the yaml is generally not recommended:
publish:
  image: plugins/docker
  repo: foo/bar
  username: johnsmith
  password: pa55word
  when:
    branch: master
You can alternatively provide your credentials using the built-in secret store. Secrets can be added to the secret store on a per-repository basis using the Drone command line utility:
export DRONE_SERVER=http://drone.server.address.com
export DRONE_TOKEN=...
drone secret add \
  octocat/hello-world DOCKER_USERNAME johnsmith
drone secret add \
  octocat/hello-world DOCKER_PASSWORD pa55word
drone sign octocat/hello-world
Secrets are then interpolated in your yaml at runtime:
publish:
  image: plugins/docker
  repo: foo/bar
  username: ${DOCKER_USERNAME}
  password: ${DOCKER_PASSWORD}
  when:
    branch: master
