I'm using simple Ballerina code to build my program (a simple hello world) with ballerinax/kubernetes annotations. The service compiles successfully and is accessible via the specified bind port on localhost.
When configuring a Kubernetes deployment I'm specifying the image build and push flags:
@kubernetes:Deployment {
    replicas: 2,
    name: "hello-deployment",
    image: "gcr.io/<gcr-project-name>/hello-ballerina:0.0.2",
    imagePullPolicy: "always",
    buildImage: true,
    push: true
}
When building the source code:
ballerina build hello.bal
This is what I'm getting:
Compiling source
hello.bal
Generating executable
./target/hello.balx
#docker - complete 3/3
Run following command to start docker container:
docker run -d -p 9090:9090 gcr.io/<gcr-project-name>/hello-ballerina:0.0.2
#kubernetes:Service - complete 1/1
#kubernetes:Deployment - complete 1/1
error [k8s plugin]: Unable to push docker image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Note that when pushing it manually via Docker on my local machine it works fine and the new image gets pushed.
What am I missing? Is there a way to tell ballerina about docker registry credentials via the kubernetes package?
Ballerina doesn't support the Google Container Registry (gcr.io) yet, but it supports Docker Hub.
Please refer to sample6 for more info.
Basically, you can export the Docker registry username and password as environment variables.
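A minimal sketch of that, assuming the plugin reads standard registry credential variables (the exact variable names are an assumption here; check sample6 for the names the plugin actually expects):
# Hypothetical variable names -- verify against sample6 of ballerinax/kubernetes
export DOCKER_USERNAME=<registry-username>
export DOCKER_PASSWORD=<registry-password>
ballerina build hello.bal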
Please create an issue at https://github.com/ballerinax/kubernetes/issues to track this.
This looks like a problem with Container Registry: you are not able to authenticate.
To authenticate to Container Registry, use gcloud as a Docker credential helper. To do so, run the following command:
gcloud auth configure-docker
You need to run this command once to authenticate to Container Registry.
We strongly recommend that you use this method when possible. It provides secure, short-lived access to your project resources.
You can review the remaining options yourself under Container Registry Authentication Methods.
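If the build environment cannot run gcloud, the same documentation also covers logging in with a service-account JSON key; a rough sketch (the key file path is a placeholder):
# _json_key is the fixed username for key-file authentication on gcr.io; keyfile.json is a placeholder path
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json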
I am trying to run a script (unit tests) that uses Docker behind the scenes on a CI system. The script works as expected on DroneCI, but after switching to Cloud Build it is not clear how to set up DinD.
For DroneCI I basically use DinD as shown here. My question is: how do I translate that setup to Google Cloud Build? Is it even possible?
I searched the internet for the Cloud Build syntax for DinD and couldn't find anything.
Cloud Build lets you create Docker container images from your source code. The Cloud SDK provides the builds submit subcommand for using this service easily.
For example, here is a simple command to build a Docker image:
gcloud builds submit -t gcr.io/my-project/my-image
This command sends the files in the current directory to Google Cloud Storage; then, on one of the Cloud Build VMs, it fetches the source code, runs docker build, and uploads the image to Container Registry.
By default, Cloud Build runs the docker build command to build the image. You can also customize the build pipeline with custom build steps. Since you can use any arbitrary Docker image as a build step, and the source code is available, you can run unit tests as a build step. By doing so, you always run the tests with the same Docker image. There is a demonstration repository at cloudbuild-test-runner-example. This tutorial uses the demonstration repository as part of its instructions.
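As a rough illustration of that idea, a cloudbuild.yaml could run the tests in one step and build the image in the next (the builder image, test command and image name are assumptions, not taken from the demonstration repository):
steps:
  # run unit tests inside an arbitrary builder image (placeholder image and command)
  - id: test
    name: python:3.11
    entrypoint: bash
    args: ['-c', 'pip install -r requirements.txt && python -m pytest']
  # build the image only if the previous step succeeded
  - id: build
    name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/my-project/my-image', '.']
images:
  - 'gcr.io/my-project/my-image'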
I would also recommend having a look at these informative links covering similar use cases:
Running Integration test on Google cloud build
Google cloud build pipeline
I managed to figure out a way to run Docker-in-Docker (DinD) in Cloud Build. To do that we need to launch a service in the background with docker-compose. Your docker-compose.yml should look something like this:
version: '3'
services:
  dind-service:
    image: docker:<dind-version>-dind
    privileged: true
    ports:
      - "127.0.0.1:2375:2375"
      - "127.0.0.1:2376:2376"
networks:
  default:
    external:
      name: cloudbuild
In my case I had no problem using versions 18.03 or 18.09; later versions should also work. Secondly, it is important to attach the container to the cloudbuild network. This way the dind container will be on the same network as every container spawned during your build steps.
To start the service you need to add a step to your cloudbuild.yml file.
- id: start-dind
  name: docker/compose
  args: ['-f', 'docker-compose.yml', 'up', '-d', 'dind-service']
To validate that the dind service works as expected, you can just create a ping step.
- id: 'Check service is listening'
  name: gcr.io/cloud-builders/curl
  args: ["dind-service:2375"]
  waitFor: [start-dind]
If that works, you can run your script as normal with dind in the background. What is important is to pass the DOCKER_HOST environment variable so that the Docker client can locate the Docker engine.
- id: my-script
  name: my-image
  script: myscript
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'
Take note: any container spawned by your script will live inside dind-service, so any request to it should go to http://dind-service instead of http://localhost. Moreover, if you use private images you will need some kind of authentication before running your script. For that, run gcloud auth configure-docker --quiet before your script, and make sure your Docker image has gcloud installed. This creates the required authentication credentials for your app. The credentials are saved in a path relative to the $HOME variable, so make sure your app is able to access it; you might run into problems if you use tox, for example.
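A sketch of what such a step might look like with the authentication baked in (the builder image and script are placeholders; the image must contain both gcloud and the docker CLI):
- id: my-script
  name: gcr.io/my-project/my-ci-image   # placeholder image with gcloud + docker CLI installed
  script: |
    gcloud auth configure-docker --quiet   # writes credentials under $HOME
    ./run_tests.sh                         # placeholder; talks to Docker via DOCKER_HOST below
  env:
    - 'DOCKER_HOST=tcp://dind-service:2375'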
I'm executing CI jobs with a GitLab CI runner configured with the Kubernetes executor, which actually runs on OpenShift. I want to be able to build Docker images from Dockerfiles, with the following constraints:
The runner (OpenShift pod) is run as a user with a high, random UID (234131111111, for example).
The runner pod is not privileged.
Not having cluster admin permissions, or the ability to reconfigure the runner.
So obviously DinD cannot work, since it requires special Docker device configuration. Podman, kaniko, buildah, BuildKit and makisu don't work for a random non-root user and without any volumes.
Any suggestions?
DinD (Docker-in-Docker) does work in OpenShift 4 GitLab runners... I just got it working, and it was... a fight! Fact is, the solution is extremely brittle to any version change elsewhere. I just tried, e.g., to swap docker:20.10.16 for docker:latest or docker:stable, and that breaks.
Here is the configuration inside which it does work:
OpenShift 4.12
the Red Hat certified GitLab Runner Operator installed via the OpenShift Cluster web console / OperatorHub; it features gitlab-runner v14.2.0
docker:20.10.16 & docker:20.10.16-dind
Reference docs:
GitLab Runner Operator installation guide: https://cloud.redhat.com/blog/installing-the-gitlab-runner-the-openshift-way
Runner configuration details: https://docs.gitlab.com/runner/install/operator.html and https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html
and this key one about matching pipeline and runner settings: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html, which is the one to follow very precisely for your settings in the GitLab .gitlab-ci.yml pipeline definitions AND the runner configuration config.toml file.
Installation steps:
follow docs 1 and 2 in the references above for the installation of the GitLab Runner Operator in OpenShift, but do not instantiate a Runner from the operator yet
on your GitLab server, copy the runner registration token for a group-wide or project-wide runner registration
elsewhere, in a terminal session where the oc CLI is installed, log in to the OpenShift cluster via the oc CLI so as to have the cluster:admin or system:admin role
create an OpenShift secret like:
vi gitlab-runner-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret
  namespace: openshift-operators
type: Opaque
stringData:
  runner-registration-token: myRegistrationTokenHere
oc apply -f gitlab-runner-secret.yml
create a custom configuration map; note that the OpenShift operator will merge the supplied content with that of the config.toml generated by the GitLab Runner operator itself; therefore, we only provide the fields we want to complement (we cannot even override an existing field value). Note too that the executor is preset to "kubernetes" by the operator. For a detailed understanding, see the docs referenced above.
vi gitlab-runner-config-map.toml
[[runners]]
  [runners.kubernetes]
    host = ""
    tls_verify = false
    image = "alpine"
    privileged = true
    [[runners.kubernetes.volumes.empty_dir]]
      name = "docker-certs"
      mount_path = "/certs/client"
      medium = "Memory"
oc create configmap gitlab-runner-config-map --from-file config.toml=gitlab-runner-config-map.toml
create a Runner to be deployed by the operator (adjust the URL)
vi gitlab-runner.yml
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-runner
  namespace: openshift-operators
spec:
  gitlabUrl: https://gitlab.example.com/
  buildImage: alpine
  token: gitlab-runner-secret
  tags: openshift, docker
  config: gitlab-runner-config-map
oc apply -f gitlab-runner.yml
you shall then see the runner just created via the OpenShift console (Installed Operators > GitLab Runner > GitLab Runner tab), followed by the automatic creation of a pod (see Workloads). You may even enter a terminal session on the pod and type, for instance, gitlab-runner list to see the location of the config.toml file. You shall also see, on the GitLab server console, the runner listed at the group or project level. Of course, firewalls between your OpenShift cluster and your GitLab server may ruin your endeavors at this point...
the rest of the trick takes place in your .gitlab-ci.yml file, e.g. (extract showing only one job at some stage). For a detailed understanding, see doc no. 3 above. The variable MY_ARTEFACT points to a sub-directory of the relevant git project/repo containing a Dockerfile that you have already executed successfully in your IDE, for instance; and REPO_PATH holds a common prefix string including a Docker Hub repository path and some extra name piece. Adjust all that to your convenience, BUT don't edit any of the first three variables defined under this job and do not change the docker[dind] version; it would break everything.
my_job_name:
  stage: my_stage_name
  tags:
    - openshift # to run on specific runner
    - docker
  image: docker:20.10.16
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
    REPO_TAG: ${REPO_PATH}-${MY_ARTEFACT}:${IMAGE_TAG}
  services:
    - docker:20.10.16-dind
  before_script:
    - sleep 10 && docker info #give time for starting the service and confirm good setup in logs
    - echo $DOKER_HUB_PWD | docker login -u $DOKER_HUB_USER --password-stdin
  script:
    - docker build -t $REPO_TAG ./$MY_ARTEFACT
    - docker push $REPO_TAG
There you are, trigger the gitlab pipeline...
If you misconfigured anything, you'll get the usual error message "is the docker daemon running?" after a complaint about failing access to "/var/run/docker.sock" or a failing connection to "tcp://localhost:2375". And no, port 2376 is not a typo; it is the exact value to use at step 8 above.
So far so good? ... not yet!
Security settings:
Well, you may now see your docker builds starting (meaning DinD is OK), and then failing for security's sake (or locking up).
Although we set 'privileged=true' at step 5:
Docker comes with a nasty (and built-in) feature: it runs by default as 'root' in every container it builds, and for building containers.
on the other hand, OpenShift is built with strict security in mind and prevents any pod from running as root.
So we have to change security settings to enable those runners to execute in privileged mode, which is why it is important to restrict these permissions to a namespace, here 'openshift-operators', and the specific account 'gitlab-runner-sa'.
`oc adm policy add-scc-to-user privileged -z gitlab-runner-sa -n openshift-operators`
The above will create a RoleBinding that you may remove or change as required. Fact is, 'gitlab-runner-sa' is the service account used by the GitLab Runner Operator to instantiate runner pods, '-z' indicates that the permission settings target a service account (not a regular user account), and '-n' references the specific namespace we use here.
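Should you need to inspect or undo that later, the matching oc commands are (a sketch using the same namespace and service account as above):
oc describe scc privileged
oc adm policy remove-scc-from-user privileged -z gitlab-runner-sa -n openshift-operators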
So you can now build images... but you may still be defeated when importing those images into an OpenShift project and trying to execute the generated pods. There are two constraints to anticipate:
OpenShift will block any image that requires running as 'root', i.e. in privileged mode (the default in docker run and docker compose up). ==> SO, PLEASE ENSURE THAT ALL THE IMAGES YOU BUILD WITH DOCKER-in-DOCKER can run as a non-root user via the Dockerfile USER directive (a minimal sketch follows after this list)!
... but the above may not be sufficient! Indeed, by default, OpenShift generates a random user ID to launch the container and ignores the one set in the Dockerfile via USER. To effectively allow the container to switch to the defined user, you have to bind the service account that runs your pods to the "anyuid" Security Context Constraint. This is easy to achieve via a role binding, or with the following oc CLI command:
oc adm policy add-scc-to-user anyuid -n myProjectName -z default
where -z denotes a service account in the namespace given by -n.
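To illustrate the first constraint, a minimal non-root Dockerfile pattern could look like this (base image, user name and entrypoint are placeholders, not taken from the original answer):
FROM alpine:3.18
# create a dedicated non-root user so the image does not need to run as root
RUN addgroup -S app && adduser -S app -G app
COPY --chown=app:app . /app
# switch to the non-root user; OpenShift may still override the UID (hence the anyuid binding above)
USER app
ENTRYPOINT ["/app/entrypoint.sh"]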
References
Terraform Docker Provider
Terraform Google Provider
GCP Cloud Build
Context Details
The deployment is done by CI/CD based on GCP Cloud Build (and the Cloud Build service account has an 'owner' role for the relevant projects).
Inside a 'cloudbuild.yaml' file there is a step with a 'hashicorp/terraform' worker and a command like 'terraform apply'.
Goal
To build and push a Docker image into a GCP Artifact Registry, so that it can be used in a container-optimised Compute Engine deployment in other TF resources.
Issue
As the Terraform Google Provider does not have resources to work with Artifact Registry Docker images, I have to use the Terraform Docker Provider.
The docker image is described as:
resource "docker_registry_image" "my_image" {
name = "europe-west2-docker.pkg.dev/${var.my_project_id}/my-docker-reg/my-vm-image:test"
build {
context = "${path.module}/image"
dockerfile = "Dockerfile"
}
}
According to the comment on Creating and pushing a docker image to a google cloud registry using terraform: for pushing images, the only way to set credentials is to declare them at the provider level.
Therefore the registry_auth block is to be provided as described in the Terraform Docker Provider documentation.
On the one hand, as described in the GCP Artifact Registry authentication documentation, "You do not need to configure authentication for Cloud Build". So I use this configuration (as it is to be executed under the Cloud Build service account):
provider "docker" {
registry_auth {
address = "europe-west2-docker.pkg.dev"
}
}
and the Cloud Build job (terraform step) failed with an error:
Error: Error loading registry auth config: could not open config file from filePath: /root/.docker/config.json. Error: open /root/.docker/config.json: no such file or directory
with provider["registry.terraform.io/kreuzwerker/docker"],
on my-vm-image.tf line 6, in provider "docker":
6: provider "docker" {
as the Docker provider insists on getting some credentials for authentication...
So, another option is to try an 'access token' as described in the comment and documentation.
The access token for the cloud build service account can be retrieved by a step in the cloud build yaml:
## Get Cloud Build access token
- id: "=> get CB access token =>"
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'sh'
  args:
    - '-c'
    - |
      access_token=$(gcloud auth print-access-token) || exit 1
      echo ${access_token} > /workspace/access_token || exit 1
and later used in the TF step as a variable value:
...
access_token=$(cat /workspace/access_token)
...
terraform apply -var 'access_token=${access_token}' ....
...
So the Terraform Docker Provider is supposed to be configured according to the example gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://LOCATION-docker.pkg.dev from the GCP Artifact Registry authentication documentation:
provider "docker" {
registry_auth {
address = "europe-west2-docker.pkg.dev"
username = "oauth2accesstoken"
password = var.access_token
}
}
But the Cloud Build job (terraform step) failed again:
Error: Error pushing docker image: Error pushing image: unauthorized: failed authentication
Questions
So, assuming I don't try a completely different alternative approach: how does the Terraform Docker Provider work within GCP Cloud Build? What needs to be done for correct authentication?
As the Terraform Google Provider does not have resources to work with the Artifact Registry docker images
First, I don't understand the above sentence. Here is Google's Artifact Registry resource.
Second, why use docker_registry_image? Or even docker provider?
If you provide your service account with the right role (no need for full ownership; roles/artifactregistry.writer will do), then you can push images built by Cloud Build to Artifact Registry without any problem. Just use docker as the builder image (the name field) in the necessary build steps.
For example:
steps:
  - id: build
    name: docker
    args:
      - build
      - .
      - '-t'
      - LOCATION-docker.pkg.dev/PROJECT_ID/ARTIFACT_REGISTRY_REPO/IMAGE
  - id: push
    name: docker
    args:
      - push
      - LOCATION-docker.pkg.dev/PROJECT_ID/ARTIFACT_REGISTRY_REPO/IMAGE
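If that role still has to be granted to the Cloud Build service account, a sketch of the IAM binding (project ID and project number are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"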
This is my YAML file:
- name: Start jaeger daemon services
  docker:
    name: jaeger-logz
    image: logzio/jaeger-logzio:latest
    state: started
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    ports:
      - "5775:5775"
      - "6831:6831"
      - "6832:6832"
      - "5778:5778"
      - "16686:16686"
      - "14268:14268"
      - "14250:14250"
      - "9411:9411"
- name: Wait for jaeger services to be up
  wait_for: delay=60 port=5775
Can ansible discover the docker image from the docker hub registry by itself?
Does this actually start the jaeger daemons or does it just build the image? If it's the latter, how can I run the container?
The docker image is from here - https://hub.docker.com/r/logzio/jaeger-logzio
Assuming you are using docker CE.
You should be able to run it according to this documentation from Ansible. Do note, however, that this module is deprecated in Ansible 2.4 and above, as the documentation itself states. Use the docker_container task if you want to run containers instead. The links are available in said documentation.
As far as your questions go:
Can ansible discover the docker image from the docker hub registry by itself?
This would depend on the client machine that you will run it on. By default, Docker will point to its own Docker Hub registry unless you specifically log in to another registry. If you use the public repo (which it looks like from your link) and the client is able to reach said repo online, you should be fine.
Does this actually start the jaeger daemons or does it just build the image? If it's the latter, how can I run the container?
According to the docker_container documentation, you should be able to run the container directly from this task, which means you are good to go.
P.S.: The image parameter on that page tells us that:
Repository path and tag used to create the container. If an image is
not found or pull is true, the image will be pulled from the registry.
If no tag is included, 'latest' will be used.
In other words, with a small adjustment to your task you should be fine.
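For instance, a sketch of that adjustment using docker_container (options trimmed to the essentials; verify the parameter names against the docker_container documentation for your Ansible version):
- name: Start jaeger daemon services
  docker_container:
    name: jaeger-logz
    image: logzio/jaeger-logzio:latest
    state: started
    pull: true          # pull the image from Docker Hub if it is not present locally
    env:
      ACCOUNT_TOKEN: "{{ token1 }}"
      API_TOKEN: "{{ token2 }}"
    ports:
      - "5775:5775"
      - "16686:16686"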
I made a JHipster application, and I want to add CI/CD with a private Gitlab runner to deploy on a private Docker registry. I get this failure:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:2.0.0:build (default-cli) on project powerfront: Invalid image reference :master-35274d52bd71e28f08a0428832001cc67e9c446d, perhaps you should check that the reference is formatted correctly according to https://docs.docker.com/engine/reference/commandline/tag/#extended-description
[ERROR] For example, slash-separated name components cannot have uppercase letters: Invalid image reference: :master-35274d52bd71e28f08a0428832001cc67e9c446d
This is the relevant part of my .gitlab-ci.yml:
# Uncomment the following line to use GitLab's container registry. You need to adapt the REGISTRY_URL in case you are not using gitlab.com
docker-push:
  stage: release
  variables:
    REGISTRY_URL: 10.1.10.58
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  dependencies:
    - maven-package
  script:
    - ./mvnw -ntp compile jib:build -Pprod -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN -Dmaven.repo.local=$MAVEN_USER_HOME
EDIT: There was an unconfigured variable. Now I get:
[ERROR] I/O error for image [10.1.10.58:5000/powerfront]:
[ERROR] javax.net.ssl.SSLException
[ERROR] Unrecognized SSL message, plaintext connection?
How do I tell the runner to accept insecure (plaintext) connections?
To publish to a private (or another public) registry, your image name must start with the host name of the registry, e.g. private.registry.io/group/image:version, so that the Docker daemon knows it isn't pushing to Docker Hub (the default) but to private.registry.io.
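Applied to this pipeline, that would mean building IMAGE_TAG from the private registry host, something like (a sketch; the port and image path are guesses based on the error messages):
variables:
  REGISTRY_URL: "10.1.10.58:5000"
  IMAGE_TAG: $REGISTRY_URL/powerfront:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA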
Also, you can use Kaniko to publish your image, as it doesn't require DinD or privileged mode on the Docker daemon.
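A sketch of such a Kaniko job, assuming a Dockerfile exists in the project (the registry host and image path are placeholders; the --insecure flag is what allows pushing to a plain-HTTP registry):
docker-push:
  stage: release
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "10.1.10.58:5000/powerfront:$CI_COMMIT_REF_SLUG"
      --insecure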
I'm not sure this is a GitLab CI problem rather than a JHipster one.
What is the value for CI_REGISTRY_IMAGE? We don't see the value in the error message.