Gitlab CI/CD with JHipster and Docker registry

I made a JHipster application, and I want to add CI/CD with a private Gitlab runner to deploy on a private Docker registry. I get this failure:
[ERROR] Failed to execute goal com.google.cloud.tools:jib-maven-plugin:2.0.0:build (default-cli) on project powerfront: Invalid image reference :master-35274d52bd71e28f08a0428832001cc67e9c446d, perhaps you should check that the reference is formatted correctly according to https://docs.docker.com/engine/reference/commandline/tag/#extended-description
[ERROR] For example, slash-separated name components cannot have uppercase letters: Invalid image reference: :master-35274d52bd71e28f08a0428832001cc67e9c446d
This is the relevant part of my .gitlab-ci.yml
# Uncomment the following line to use gitlabs container registry. You need to adapt the REGISTRY_URL in case you are not using gitlab.com
docker-push:
  stage: release
  variables:
    REGISTRY_URL: 10.1.10.58
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  dependencies:
    - maven-package
  script:
    - ./mvnw -ntp compile jib:build -Pprod -Djib.to.image=$IMAGE_TAG -Djib.to.auth.username=gitlab-ci-token -Djib.to.auth.password=$CI_BUILD_TOKEN -Dmaven.repo.local=$MAVEN_USER_HOME
EDIT: There was an unconfigured variable. Now I get
[ERROR] I/O error for image [10.1.10.58:5000/powerfront]:
[ERROR] javax.net.ssl.SSLException
[ERROR] Unrecognized SSL message, plaintext connection?
How do I tell the runner to accept insecure (plaintext) connections?

To publish to a private (or another public) registry, your image name must start with the hostname of the registry, e.g. private.registry.io/group/image:version, so that the Docker daemon knows it isn't pushing to Docker Hub (the default) but to private.registry.io.
You can also use Kaniko to publish your image, as it doesn't require DinD or privileged mode on the Docker daemon.
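As for the follow-up SSL error: Jib exposes an allowInsecureRegistries switch that lets it fall back to plain HTTP. A minimal sketch, assuming the registry at 10.1.10.58:5000 really is HTTP-only (the port and project name are taken from the error messages, the rest is illustrative):
docker-push:
  stage: release
  variables:
    REGISTRY_URL: 10.1.10.58:5000
    IMAGE_TAG: $REGISTRY_URL/powerfront:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHA
  script:
    # -Djib.allowInsecureRegistries=true allows plaintext connections to this registry
    - ./mvnw -ntp compile jib:build -Pprod -Djib.to.image=$IMAGE_TAG -Djib.allowInsecureRegistries=true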

I'm not sure this is a GitLab CI problem; it looks more like a JHipster one.
What is the value of CI_REGISTRY_IMAGE? We don't see it in the error message, and the image reference starting with a bare ':' suggests the variable is empty.

Related

Gitlab CI - Kubernetes executor on openshift - build an image with docker/podman/makisu/buildah/kaniko

I'm executing CI jobs with a gitlab-ci runner configured with the Kubernetes executor, which actually runs on OpenShift. I want to be able to build Docker images from Dockerfiles, with the following constraints:
The runner (OpenShift pod) runs as a user with a high and random UID (234131111111, for example).
The runner pod is not privileged.
No cluster admin permissions, and no ability to reconfigure the runner.
So obviously DinD cannot work, since it requires special Docker device configuration. Podman, kaniko, buildah, buildkit and makisu don't work for a random non-root user and without any volume.
Any suggestions?
DinD (Docker-in-Docker) does work in OpenShift 4 GitLab runners... I just got it working, and it was... a fight! Fact is, the solution is extremely brittle to any version change elsewhere. I tried, for example, swapping docker:20.10.16 for docker:latest or docker:stable, and that breaks it.
Here is the configuration in which it does work for me:
OpenShift 4.12
the Red Hat certified GitLab Runner Operator, installed via the OpenShift cluster web console / OperatorHub; it ships gitlab-runner v14.2.0
docker:20.10.16 & docker:20.10.16-dind
Reference docs:
GitLab Runner Operator installation guide: https://cloud.redhat.com/blog/installing-the-gitlab-runner-the-openshift-way
Runner configuration details: https://docs.gitlab.com/runner/install/operator.html and https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html
and this key one about matching pipeline and runner settings: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html, which is the one to follow very precisely for your settings in .gitlab-ci.yml pipeline definitions AND in the runner configuration config.toml file.
Installation steps:
follow docs 1 and 2 referenced above for the installation of the GitLab Runner Operator in OpenShift, but do not yet instantiate a Runner from the operator
on your GitLab server, copy the runner registration token for a group-wide or project-wide runner registration
elsewhere, in a terminal session where the oc CLI is installed, log in to the OpenShift cluster with the cluster:admin or system:admin role
create an OpenShift secret like:
vi gitlab-runner-secret.yml

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret
  namespace: openshift-operators
type: Opaque
stringData:
  runner-registration-token: myRegistrationTokenHere

oc apply -f gitlab-runner-secret.yml
create a custom ConfigMap; note that the OpenShift operator merges the supplied content into the config.toml generated by the GitLab Runner Operator itself, so we only provide the fields we want to add (we cannot even override an existing field's value). Note too that the executor is preset to "kubernetes" by the operator. For the detailed understanding, see the docs above.
vi gitlab-runner-config-map.toml

[[runners]]
  [runners.kubernetes]
    host = ""
    tls_verify = false
    image = "alpine"
    privileged = true
    [[runners.kubernetes.volumes.empty_dir]]
      name = "docker-certs"
      mount_path = "/certs/client"
      medium = "Memory"

oc create configmap gitlab-runner-config-map --from-file config.toml=gitlab-runner-config-map.toml
create a Runner to be deployed by the operator (adjust the URL)

vi gitlab-runner.yml

apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-runner
  namespace: openshift-operators
spec:
  gitlabUrl: https://gitlab.example.com/
  buildImage: alpine
  token: gitlab-runner-secret
  tags: openshift, docker
  config: gitlab-runner-config-map

oc apply -f gitlab-runner.yml
you should then see the runner just created via the OpenShift console (Installed Operators > GitLab Runner > GitLab Runner tab), followed by the automatic creation of a pod (see Workloads). You may even open a terminal session on the pod and type, for instance, gitlab-runner list to see the location of the config.toml file. You should also see the runner listed at the group or project level on the GitLab server console. Of course, firewalls between your OpenShift cluster and your GitLab server may ruin your endeavors at this point...
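A quick CLI equivalent of that console check (a sketch; <runner-pod-name> is a placeholder for whatever pod the operator created):
oc get pods -n openshift-operators                # the runner pod should be Running
oc rsh -n openshift-operators <runner-pod-name>   # open a shell inside the runner pod
gitlab-runner list                                # inside the pod: shows the config.toml location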
the rest of the trick takes place in your .gitlab-ci.yml file, e.g. (extract showing only one job at some stage). For the detailed understanding, see doc 3 above. The variable MY_ARTEFACT points to a subdirectory of the relevant git project/repo containing a Dockerfile that you have already executed successfully in your IDE, for instance; REPO_PATH holds a common prefix string including a Docker Hub repository path and some extra name piece. Adjust all that to your convenience, BUT don't edit any of the first three variables defined under this job and do not change the docker[dind] version; it would break everything.
my_job_name:
  stage: my_stage_name
  tags:
    - openshift # to run on a specific runner
    - docker
  image: docker:20.10.16
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
    REPO_TAG: ${REPO_PATH}-${MY_ARTEFACT}:${IMAGE_TAG}
  services:
    - docker:20.10.16-dind
  before_script:
    - sleep 10 && docker info # give the dind service time to start, and confirm a good setup in the logs
    - echo $DOCKER_HUB_PWD | docker login -u $DOCKER_HUB_USER --password-stdin
  script:
    - docker build -t $REPO_TAG ./$MY_ARTEFACT
    - docker push $REPO_TAG
There you are, trigger the gitlab pipeline...
If you misconfigured anything, you'll get the usual error message "is the docker daemon running?" after a complaint about failing access to "/var/run/docker.sock" or a failing connection to "tcp://localhost:2375". And no, port 2376 is not a typo: it is the exact value to use in the job variables above.
So far so good? ... not yet!
Security settings:
Well, you may now see your docker builds starting (meaning DinD is OK), and then failing for security reasons (or locking up).
Although we set 'privileged = true' in the runner config map above:
Docker comes with a nasty (and built-in) feature: by default it runs as 'root' in every container it builds, and for building containers.
On the other hand, OpenShift is built with strict security in mind and prevents any pod from running as root.
So we have to change the security settings to let those runners execute in privileged mode, which is why it is important to restrict these permissions to a namespace, here 'openshift-operators', and to the specific service account 'gitlab-runner-sa'.
`oc adm policy add-scc-to-user privileged -z gitlab-runner-sa -n openshift-operators`
The above creates a RoleBinding that you may remove or change as required. Fact is, 'gitlab-runner-sa' is the service account used by the GitLab Runner Operator to instantiate runner pods; '-z' indicates that the permission targets a service account (not a regular user account), and '-n' references the specific namespace we use here.
So you can now build images... but you may still be defeated when importing those images into an OpenShift project and trying to run the generated pods. There are two constraints to anticipate:
OpenShift will block any image that requires running as 'root', i.e. in privileged mode (the default in docker run and docker compose up). ==> SO, PLEASE ENSURE THAT ALL THE IMAGES YOU BUILD WITH DOCKER-in-DOCKER can run as a non-root user, via the Dockerfile directive USER <non-root-user>! (see the Dockerfile sketch after the command below)
... but the above may not be sufficient! Indeed, by default OpenShift generates a random user ID to launch the container and ignores the one set by the USER directive at docker build time. To effectively allow the container to switch to the defined user, you have to bind the service account that runs your pods to the "anyuid" Security Context Constraint. This is easy to achieve via a role binding, or with this oc CLI command:
oc adm policy add-scc-to-user anyuid -n myProjectName -z default
where -z denotes a service account in the namespace given by -n.
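To illustrate the USER constraint above, a minimal Dockerfile sketch (base image, UID and command are illustrative placeholders, not from the original post):
FROM alpine:3.18
RUN adduser -D -u 1001 appuser   # create an unprivileged user at build time
USER 1001                        # numeric UID, so OpenShift can check the image is non-root
CMD ["sleep", "infinity"]        # placeholder command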

getting errors when using Docker image as build agent in Azure pipeline

I'm trying to use a Docker image as a build agent in an Azure pipeline. I'm using Azure DevOps Server 2019 and Docker Enterprise Edition.
I get this error when the pipeline runs:
##[section]Starting: Initialize containers
##[command]C:\Program Files\Docker\docker.EXE version --format '{{.Server.APIVersion}}'
error during connect: Get http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.40/version: open //./pipe/docker_engine: Access is denied. In the default daemon configuration on Windows, the docker client must be run elevated to connect. This error may also indicate that the docker daemon is not running.
'
##[error]Exit code 1 returned from process: file name 'C:\Program Files\Docker\docker.EXE', arguments 'version --format '{{.Server.APIVersion}}''.
##[section]Finishing: Initialize containers
I've verified that Docker is installed on the server. I ran the following commands to make sure Docker is running on the server and all ran successfully:
Start-Service Docker
docker network ls
docker version
Here is the pipeline YAML file I'm using:
trigger:
none
resources:
containers:
- container: angulardocker
image: mdailey77/angularbuild:1.0
endpoint: Docker Hub
stages:
- stage: Client
jobs:
- job: BuildTest
pool:
name: Default Windows
container: angulardocker
steps:
- task: Npm#1
displayName: 'Client Build'
inputs:
command: custom
customCommand: run build -- --prod
workingDir: client
- task: CopyFiles#2
displayName: 'Copy Client Build to Staging Directory'
inputs:
contents: '**'
SourceFolder: $(Build.SourcesDirectory)/client/dist
targetFolder: $(Build.ArtifactStagingDirectory)/client
- task: PublishBuildArtifacts#1
displayName: 'Publish Build Artifact - Client'
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)/client'
artifactName: 'Client'
The error occurs at the 'Initialize containers' pipeline step. The Microsoft documentation on Docker images as build agents is lacking, to say the least; it was no help at all. This post had the best example I found: https://im5tu.io/article/2018/12/building-a-custom-build-agent-image-with-docker-and-azure-devops-pipelines/. I can't figure out the error and am stuck at the moment.
UPDATE:
After doing more research and following @vaibhavnd's suggestion, I'm fairly certain the issue is that my pipeline build agent doesn't have access to the Docker daemon. True to form, the Microsoft documentation mentions this but doesn't actually show how to do it. How do I configure my build agents so they have access? What would be the steps?
UPDATE 2:
I looked into adding a user to the docker-users group, but there is no such group on the server. According to one GitHub post, the group is supposed to be created automatically after installing Docker and restarting. I restarted the server and the group still doesn't exist.
I figured out how to solve the issue. This GitHub post really helped me. I used the dockeraccesshelper mentioned in the post and granted the account associated with my pipeline build agents, 'NT AUTHORITY\NETWORK SERVICE' in this case, access to Docker. Once I did this, my pipeline was able to initialize the Docker container.
Going the docker-users group route didn't work for me. The group was not created automatically when I installed Docker Enterprise; I think this might be normal with Docker Enterprise on Windows Server 2019. I tried creating the group myself and adding the 'NT AUTHORITY\NETWORK SERVICE' account to it, but was unable to.
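For reference, a sketch of that fix with the dockeraccesshelper PowerShell module, run in an elevated PowerShell session on the build server (the cmdlet name is an assumption here; verify it against the project's README):
Import-Module .\dockeraccesshelper.psm1                    # module file from the dockeraccesshelper repo
Add-AccountToDockerAccess "NT AUTHORITY\NETWORK SERVICE"   # grant the agent account access to the Docker engine named pipe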

Problem running test command inside Docker container in a Gitlab runner

I'm just getting started with docker and continuous integration with Gitlab. I've added the following gitlab-ci.yml file to the root of my repository:
# Official docker image
image: docker:latest

services:
  - docker:dind

build-dev:
  stage: build
  script:
    - docker build -t obikerui/project -f app/Dockerfile.dev ./app

test:
  stage: test
  script:
    - docker run obikerui/project npm run test -- --coverage
The build-dev stage runs and passes but the test stage fails with the following error message:
$ docker run obikerui/project npm run test -- --coverage
Unable to find image 'obikerui/project:latest' locally
docker: Error response from daemon: pull access denied for obikerui/project, repository does not exist or may require 'docker login'.
See 'docker run --help'.
ERROR: Job failed: exit code 125
Can anyone explain what's going wrong and suggest a fix? The repository is private, so do I need to provide some extra configuration to accommodate this?
Each job runs in a different container. You build and tag your image correctly, but the image stays inside that container.
For the test job a new container starts, and that one does not have the image built by the previous job.
You should push your image to a registry (after tagging it accordingly), and the test job should then pull the image from that registry.
You can use a public registry like the one offered by Docker, or you can run a local container based on the registry:2 image provided by Docker. In the latter case you have to make sure that the domain name pointing to the registry is resolvable on your network (it can be an nginx reverse proxy). A sketch of the push-then-pull pattern follows.
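A minimal sketch, assuming the project uses GitLab's built-in container registry (CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD and CI_REGISTRY_IMAGE are variables GitLab predefines for each job; the :latest tag is illustrative):
image: docker:latest

services:
  - docker:dind

build-dev:
  stage: build
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:latest -f app/Dockerfile.dev ./app
    - docker push $CI_REGISTRY_IMAGE:latest

test:
  stage: test
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker run $CI_REGISTRY_IMAGE:latest npm run test -- --coverage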

Ballerina: build image and push to gcr.io via k8s plugin

I'm using a simple Ballerina program (a hello world) built with ballerinax/kubernetes annotations. The service compiles successfully and is accessible via the specified bind port on localhost.
When configuring a Kubernetes deployment I'm specifying the image build and push flags:
@kubernetes:Deployment {
    replicas: 2,
    name: "hello-deployment",
    image: "gcr.io/<gcr-project-name>/hello-ballerina:0.0.2",
    imagePullPolicy: "always",
    buildImage: true,
    push: true
}
When building the source code:
ballerina build hello.bal
This is what I'm getting:
Compiling source
    hello.bal

Generating executable
    ./target/hello.balx
    @docker - complete 3/3
    Run following command to start docker container:
    docker run -d -p 9090:9090 gcr.io/<gcr-project-name>/hello-ballerina:0.0.2

    @kubernetes:Service - complete 1/1
    @kubernetes:Deployment - complete 1/1

error [k8s plugin]: Unable to push docker image: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Note that when pushing manually via docker on my local machine it works fine and the new image gets pushed.
What am I missing? Is there a way to tell Ballerina about Docker registry credentials via the kubernetes package?
Ballerina doesn't support the Google Cloud docker registry yet, but it supports Docker Hub.
Please refer to sample6 for more info.
Basically, you can export the docker registry username and password as environment variables.
Please create an issue at https://github.com/ballerinax/kubernetes/issues to track this.
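A sketch of that workaround; the variable names below are illustrative assumptions, so check sample6 in the linked repo for the exact names the plugin reads:
# export the registry credentials before building, so the k8s plugin can push
export DOCKER_USERNAME=myDockerHubUser
export DOCKER_PASSWORD=myDockerHubPassword
ballerina build hello.bal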
This seems to be a problem with Container Registry: you are not able to authenticate.
To authenticate to Container Registry, use gcloud as a Docker credential helper. To do so, run the following command:
gcloud auth configure-docker
You need to run this command once to authenticate to Container Registry.
We strongly recommend that you use this method when possible. It provides secure, short-lived access to your project resources.
You can check the steps yourself under Container Registry Authentication Methods.
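For reference, the command above registers gcloud as a Docker credential helper; the Docker client config (~/.docker/config.json) ends up with entries roughly like:
{
  "credHelpers": {
    "gcr.io": "gcloud",
    "us.gcr.io": "gcloud",
    "eu.gcr.io": "gcloud",
    "asia.gcr.io": "gcloud"
  }
}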

Issues with proxy in Gitlab CI using Docker runner

I want to package my Maven/Java app in a Docker Gitlab CI runner.
I'm behind a corporation proxy. This is my .gitlab-ci.yml:
image: maven:3-jdk-7

build:
  script: "mvn clean package -B"
When a build is triggered, I get this error (in the Gitlab build console):
Unknown host repo.maven.apache.org: Name or service not known -> [Help 1]
Then I added
variables:
  http_proxy: http://user:pass@corp.proxy.ip:port
to the .gitlab-ci.yml, but I get another error:
fatal: unable to access
'http://gitlab-ci-token:xxxxxx@170.20.20.20:8080/myapp.git/':
The requested URL returned error: 504
When I registered the Docker runner, the Docker image selected was maven:3-jdk-7.
I have just tried adding a no_proxy variable with 172.20.20.20 (the GitLab IP) as its value, but I get the same error (the first one).
How can I solve this? Is there a way to force the Docker runner (container) to use --net=host?
What I did was open the mvnw script. Inside it I found this line:
MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS"
Between the ')' and the $MAVEN_OPTS I placed the
-Dhttps.proxyHost=yourHost -Dhttps.proxyPort=yourPort
arguments. This worked for me; hope it helps. I didn't need the "variables" section you described above.
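An alternative sketch that avoids editing mvnw: pass the proxy through CI variables instead. http.proxyHost and friends are standard JVM system properties that Maven picks up via the MAVEN_OPTS environment variable; the proxy host/port below are placeholders, and no_proxy keeps the runner's traffic to the GitLab server off the proxy:
image: maven:3-jdk-7

variables:
  # JVM proxy settings read by Maven; replace host/port with your corporate proxy
  MAVEN_OPTS: "-Dhttp.proxyHost=corp.proxy.ip -Dhttp.proxyPort=8080 -Dhttps.proxyHost=corp.proxy.ip -Dhttps.proxyPort=8080"
  # keep traffic to the GitLab server direct, so the clone does not go through the proxy
  no_proxy: "170.20.20.20"

build:
  script: "mvn clean package -B"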
