Can't build Docker container for Kotlin + gRPC project

Every time I try to build the application's Docker image I get the following error:
java.io.IOException: Cannot run program "/home/gradle/.gradle/caches/modules-2/files-2.1/com.google.protobuf/protoc/3.20.3/898c37cfe230a987c908cff24f04a58d320578f1/protoc-3.20.3-linux-x86_64.exe": error=2, No such file or directory
I have tried downloading and unzipping protoc in my Dockerfile with the following commands before the build:
RUN curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.20.3/protoc-3.20.3-linux-x86_64.zip
RUN unzip -q protoc-3.20.3-linux-x86_64.zip
but the error persists.
The protobuf section in my build.gradle file looks like this:
protobuf {
    protoc {
        artifact = "com.google.protobuf:protoc:3.20.3"
    }
    plugins {
        id("grpc") {
            artifact = "io.grpc:protoc-gen-grpc-java:1.46.0"
        }
        id("grpckt") {
            artifact = "io.grpc:protoc-gen-grpc-kotlin:0.2.0:jdk7@jar"
        }
    }
    generateProtoTasks {
        ofSourceSet("main").forEach {
            it.plugins {
                // Apply the "grpc" and "grpckt" plugins whose specs are defined above, without options.
                id("grpc")
                id("grpckt")
            }
        }
    }
}

As I found in a GitHub issue, I should be able to solve it by adding the gcompat package in the Dockerfile prior to the build step.
Now my Dockerfile is as follows:
FROM gradle:7.5.1-jdk18-alpine AS build
USER root
WORKDIR /data
COPY . /data
RUN apk add gcompat
RUN gradle assemble --no-daemon
...
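For context on why gcompat helps: the protoc binary that Gradle downloads from Maven Central is dynamically linked against glibc, while Alpine images ship musl libc, so the kernel cannot find the ELF interpreter and reports the misleading "No such file or directory". An alternative sketch is to use a Debian-based Gradle image, which ships glibc natively (the tag gradle:7.5.1-jdk17 is an assumption; pick one matching your JDK):

```dockerfile
# Sketch: a Debian-based Gradle image ships glibc, so the downloaded
# protoc binary runs without needing the gcompat compatibility layer.
FROM gradle:7.5.1-jdk17 AS build
WORKDIR /data
COPY . /data
RUN gradle assemble --no-daemon
```

The trade-off is a larger base image than Alpine; `apk add gcompat` keeps the image small at the cost of a compatibility shim.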

Related

how to build docker images with terraform providers preinstalled

I am trying to build a Docker image that contains all of the necessary plugins/providers that several source repos need, so that when an automated terraform validate runs, it doesn't have to download gigs of redundant data.
However, I recognize that this creates a maintenance problem: someone may update a plugin version, and that version would need to be downloaded, since the Docker image would not contain it.
The question: how can I
pre-download all providers and plugins,
tell the CLI to use those pre-downloaded plugins, AND
also tell it that, if it doesn't find what it needs locally, it can go to the network?
Below are the relevant files:
.terraformrc
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true
provider_installation {
    filesystem_mirror {
        path = "$HOME/.terraform/providers"
    }
    direct {
    }
}
.tflint.hcl (not relevant to this question, but it shows up in the Dockerfile below)
plugin "aws" {
    enabled = true
    version = "0.21.1"
    source  = "github.com/terraform-linters/tflint-ruleset-aws"
}
plugin "azurerm" {
    enabled = true
    version = "0.20.0"
    source  = "github.com/terraform-linters/tflint-ruleset-azurerm"
}
Dockerfile
FROM ghcr.io/terraform-linters/tflint-bundle AS base
LABEL name=tflint
RUN adduser -h /home/jenkins -s /bin/sh -u 1000 -D jenkins
RUN apk fix && apk --no-cache --update add git terraform openssh
ADD .terraformrc /home/jenkins/.terraformrc
RUN mkdir -p /home/jenkins/.terraform.d/plugin-cache/registry.terraform.io
ADD .tflint.hcl /home/jenkins/.tflint.hcl
WORKDIR /home/jenkins
RUN tflint --init
FROM base AS build
ARG SSH_PRIVATE_KEY
RUN mkdir /root/.ssh && \
    echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_ed25519 && \
    chmod 400 /root/.ssh/id_ed25519 && \
    touch /root/.ssh/known_hosts && \
    ssh-keyscan mygitrepo >> /root/.ssh/known_hosts
RUN git clone git@mygitrepo:wrai/tools/g.git
RUN git clone git@mygitrepo:myproject/a.git && \
    git clone git@mygitrepo:myproject/b.git && \
    git clone git@mygitrepo:myproject/c.git && \
    git clone git@mygitrepo:myproject/d.git && \
    git clone git@mygitrepo:myproject/e.git && \
    git clone git@mygitrepo:myproject/f.git
RUN ls -1d */ | xargs -I {} find {} -name '*.tf' | xargs -n 1 dirname | sort -u | \
    xargs -I {} -n 1 -P 20 terraform -chdir={} providers mirror /home/jenkins/.terraform.d
RUN chown -R jenkins:jenkins /home/jenkins
USER jenkins
FROM base AS a
COPY --from=build /home/jenkins/a/ /home/jenkins/a
RUN cd /home/jenkins/a && terraform init
FROM base AS b
COPY --from=build /home/jenkins/b/ /home/jenkins/b
RUN cd /home/jenkins/b && terraform init
FROM base AS c
COPY --from=build /home/jenkins/c/ /home/jenkins/c
RUN cd /home/jenkins/c && terraform init
FROM base AS azure_infrastructure
COPY --from=build /home/jenkins/d/ /home/jenkins/d
RUN cd /home/jenkins/d && terraform init
FROM base AS aws_infrastructure
COPY --from=build /home/jenkins/e/ /home/jenkins/e
RUN cd /home/jenkins/e && terraform init
Staging plugins:
This is most easily accomplished with the plugin cache dir setting in the CLI. This supersedes the old usage with the -plugin-dir=PATH argument for the init command. You could also set a filesystem mirror in each terraform block within the root module config, but this would be cumbersome for your use case. In your situation, you are already configuring this in your .terraformrc, but the filesystem_mirror path conflicts with the plugin_cache_dir. You would want to resolve that conflict, or perhaps remove the mirror block entirely.
Use staged plugins:
Since the setting is captured in the CLI configuration file within the Dockerfile, this would be automatically used in future commands.
Download additional plugins if necessary:
This is default behavior of the init command, and therefore requires no further actions on your part.
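Putting the three points together, the CLI configuration could look like this sketch, which drops the filesystem_mirror block to avoid the conflict with plugin_cache_dir noted above (paths are the ones from the question):

```hcl
# Sketch of a .terraformrc: cache providers locally, fall back to the network.
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true

provider_installation {
  # No filesystem_mirror block: the cache directory above handles reuse,
  # and direct {} lets init reach the registry for anything missing.
  direct {}
}
```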
Side note:
The jenkins user typically has /sbin/nologin as its shell and /var/lib/jenkins as its home directory. If the purpose of this Docker image is a Jenkins build agent, then you may want the jenkins user configuration to be more aligned with that standard.
TL;DR:
Configure the terraform plugin cache directory
Create directory with a single TF file containing required_providers block
Run terraform init from there
...
I stumbled upon this question while trying to figure out the same thing.
I first tried leveraging an implied filesystem_mirror by running terraform providers mirror /usr/local/share/terraform/plugins in a directory containing only one terraform file containing the required_providers block. This works fine as long as you only use the versions of the providers you mirrored.
However, it's not possible to use a different version of a provider than the one you mirrored, because:
Terraform will scan all of the filesystem mirror directories to see which providers are placed there and automatically exclude all of those providers from the implied direct block.
I've found it to be a better solution to use a plugin cache directory instead. EDIT: You can prefetch the plugins by setting TF_PLUGIN_CACHE_DIR to some directory and then running terraform init in a directory that only declares the required_providers.
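The prefetch step described in the EDIT could be baked into the Dockerfile roughly like this (paths are illustrative, prefetch/main.tf is a hypothetical file containing only a terraform { required_providers { ... } } block, and Terraform >= 0.14 is assumed for -chdir):

```dockerfile
# Sketch: prefetch providers into the plugin cache at image build time.
ENV TF_PLUGIN_CACHE_DIR=/home/jenkins/.terraform.d/plugin-cache
COPY prefetch/main.tf /tmp/prefetch/main.tf
RUN mkdir -p "$TF_PLUGIN_CACHE_DIR" && \
    terraform -chdir=/tmp/prefetch init -backend=false
```

Later terraform init runs that use the same cache directory will link or copy from the cache instead of re-downloading.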
Previously overengineered stuff below:
The only hurdle left was that terraform providers mirror downloads the providers in the packed layout:
Packed layout: HOSTNAME/NAMESPACE/TYPE/terraform-provider-TYPE_VERSION_TARGET.zip is the distribution zip file obtained from the provider's origin registry.
while Terraform expects the plugin cache directory to use the unpacked layout:
Unpacked layout: HOSTNAME/NAMESPACE/TYPE/VERSION/TARGET is a directory containing the result of extracting the provider's distribution zip file.
So I converted the packed layout to the unpacked layout with the help of find and parallel:
find path/to/plugin-dir -name index.json -exec rm {} +
find path/to/plugin-dir -name '*.json' | parallel --will-cite 'mkdir -p {//}/{/.}/linux_amd64; unzip {//}/*.zip -d {//}/{/.}/linux_amd64; rm {}; rm {//}/*.zip'
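The core of that conversion is a path mapping from the packed layout's version manifest to the unpacked layout's directory. A plain-shell sketch of that mapping (file names are illustrative, and linux_amd64 is an assumed target platform):

```shell
#!/bin/sh
# Sketch: map a packed-layout version manifest such as
#   registry.terraform.io/hashicorp/aws/4.0.0.json
# to the unpacked-layout directory Terraform expects:
#   registry.terraform.io/hashicorp/aws/4.0.0/linux_amd64
unpacked_dir() {
  json=$1
  dir=${json%/*}       # strip the file name -> provider directory
  ver=${json##*/}      # 4.0.0.json
  ver=${ver%.json}     # 4.0.0
  printf '%s/%s/linux_amd64\n' "$dir" "$ver"
}

unpacked_dir registry.terraform.io/hashicorp/aws/4.0.0.json
```

The parallel one-liner above then does `mkdir -p` on that directory, unzips the distribution zip into it, and removes the packed-layout files.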

Zip Command not found in Jenkins

I am trying to zip a directory in a Jenkins pipeline, my code is similar to this
stages {
    stage('ZIP') {
        steps {
            script {
                currentBuild.displayName = "DISPLAY_NAME"
            }
            // Zip DIRECTORY
            sh '''
                cd ${WORKSPACE}
                zip -r zip_file_name src_dir
            '''
        }
    }
}
I get the following error
@tmp/durable-4423e1f6/script.sh: line 3: zip: command not found
However, when I create another job with the "Execute shell" option for the build, zip works fine.
I have tried using the Zip pipeline utility plugin, but when I try to access the zip file, it is not found.
script {
    currentBuild.displayName = "${VERSION}"
    zip zipFile: '${zip_file_name}', dir: 'src_dir', overwrite: true
}
@raviTeja, I guess the zip utility is missing on your Jenkins agent machine. What is the OS flavour of your Jenkins agent? If you are using a Linux flavour like Red Hat or Ubuntu, you first need to install the zip utility on the agent machine; only then can you use the zip command in your script.
If you are using a Red Hat flavour on the agent machine, first install the zip utility:
sudo dnf install zip
Then execute the zip command in your pipeline script:
zip -r zip_file_name src_dir
If you are using an Ubuntu/Debian flavour for the Jenkins agent, install the zip utility using apt:
sudo apt install zip
Then execute the zip command in your pipeline script:
zip -r zip_file_name src_dir
Update:
If you are running Jenkins in a Docker container, you can do something similar to the below.
I am assuming you are running an Ubuntu-based image (identify the respective base image's Linux flavour and execute the corresponding commands).
Get into docker container using exec command
docker exec -it <container> /bin/bash
Update packages
apt-get -y update
Install zip
apt-get install zip -y
But remember: if you delete this container, you are going to lose this set-up and will have to repeat all these steps.
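To make the change survive container recreation, the install can instead be baked into a derived image. A sketch (jenkins/jenkins:lts is an assumption; use whatever image your Jenkins container actually runs):

```dockerfile
# Sketch: derive an image so the zip utility survives container recreation.
FROM jenkins/jenkins:lts
USER root
RUN apt-get update && \
    apt-get install -y zip && \
    rm -rf /var/lib/apt/lists/*
USER jenkins
```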

Is there a way to run dotnet sonarscanner inside a Docker container in Jenkins?

I am trying to do some SonarCloud code analysis on Jenkins with a dotnet Docker container that is already built by my organization. However, I have to install the sonarscanner tool on top of it, and I am getting permission errors when running the tool.
This is what I've tried so far:
stage("Pull dotnet image") {
    dotnetCoreImage = dockerImage("dotnet2.0")
}
dotnetCoreImage.inside() {
    stage("Start sonar scanner") {
        sh "dotnet tool install --tool-path /tmp/.donet/tools dotnet-sonarscanner; \
            chmod 777 /tmp/.donet/tools/ -R; \
            export PATH=/tmp/.donet/tools; \
            dotnet sonarscanner begin \
                /d:sonar.host.url=https://sonarcloud.io \
                /v:'1.0' \
                /d:sonar.cs.opencover.reportsPaths='src/test/coverage/*.opencover.xml' \
                /d:sonar.branch.name=${env.BRANCH_NAME}"
    }
}
But I get the following errors:
16:00:46.527 16:00:46.527 WARNING: Error occurred when installing the loader targets to '/.local/share/Microsoft/MSBuild/4.0/Microsoft.Common.targets/ImportBefore/SonarQube.Integration.ImportBefore.targets'. 'Access to the path '/.local/share/Microsoft/MSBuild/4.0/Microsoft.Common.targets/ImportBefore' is denied.'
So my question is whether there is a way to run inside the Docker image with more permissions, such as root.
(I cannot use sudo, since it is not installed in the Docker image.)
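One way to get root inside the container without sudo is to pass docker run arguments to inside(), which the Docker Pipeline plugin supports. A sketch (the image variable is the one from the question; the install command is abbreviated):

```groovy
// Sketch: start the container as root so the scanner can write its
// MSBuild ImportBefore targets; no sudo needed inside the image.
dotnetCoreImage.inside('-u root:root') {
    sh 'dotnet tool install --tool-path /tmp/.donet/tools dotnet-sonarscanner'
    // ... remaining sonarscanner begin/end steps as in the question
}
```

Note that files created while running as root will be root-owned in the mounted workspace, which can break later non-root steps; a chown at the end of the block is a common mitigation.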

Switch environment in docker file instead of in maven command

I am trying to run parallel builds using a Jenkinsfile and Docker. I am currently able to run Maven commands by passing a -f flag to the maven command. However, I want to move this into Docker to decouple it from the Jenkinsfile.
I tried adding WORKDIR to the Dockerfile and switching to it.
I also tried using copyDocker.inside with "-w /containerworkspace", with no luck.
dockerfile
ARG BASE
FROM artifactory.XXX.XXXX.com/${BASE}
ADD . /containerworkspace/
RUN chmod -R 777 /containerworkspace
jenkinsfile code
def builtImage = self.docker.build(
    "${imageName}:${branchNumber}-${buildNumber}",
    " --build-arg BASE=${imageName}"
    + " -f ${dockerFile}")
BBDocker copyDocker = dockerBuild.clone()
copyDocker.image("${imageName}:${branchNumber}-${buildNumber}").inside() {
    self.sh("mvn -U clean install -f /containerworkspace/")
}
I am trying to eliminate -f /containerworkspace/ from the Maven command and move it into the Dockerfile somehow.
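For reference, setting WORKDIR in the Dockerfile would look like this sketch:

```dockerfile
# Sketch: make the project directory the image's default working directory.
ARG BASE
FROM artifactory.XXX.XXXX.com/${BASE}
ADD . /containerworkspace/
RUN chmod -R 777 /containerworkspace
WORKDIR /containerworkspace
```

However, the Docker Pipeline plugin's inside() starts the container with the Jenkins workspace mounted and set as the working directory (via -w), which overrides WORKDIR; this is likely why neither attempt took effect. A pragmatic fallback is to cd in the shell step itself, e.g. `sh("cd /containerworkspace && mvn -U clean install")`.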

Flutter in Docker container fails in Jenkins but not locally

I'm trying to get one of our very simple flutter apps to run its test suite in Jenkins.
I have the following Dockerfile:
FROM ubuntu:bionic
ARG WORKING_DIR=${HOME}
ARG FLUTTER_ARCHIVE=flutter_linux_v0.5.1-beta.tar.xz
COPY ${FLUTTER_ARCHIVE} ${WORKING_DIR}
RUN apt-get update
RUN apt-get -y install xz-utils
RUN tar -xf "${WORKING_DIR}/${FLUTTER_ARCHIVE}"
ENV PATH="${WORKING_DIR}/flutter/bin:${PATH}"
RUN apt-get -y install git libglu1 lib32stdc++6
RUN rm ${WORKING_DIR}/${FLUTTER_ARCHIVE}
I build the container on my machine, navigate to the source code directory of the flutter app and run the following command:
docker run -td -v "$(pwd)":/source flutter:flutter
Where flutter:flutter is my built image.
I navigate into the container using docker exec and execute flutter packages get and then flutter test. Everything works as expected and all tests pass.
However, when I try to use the exact same Dockerfile through Jenkins, I get the following error:
/flutter/bin/flutter: line 161: /flutter/bin/cache/dart-sdk/bin/dart: Permission denied
I'm guessing it has something to do with users, and that something differs between how Jenkins runs the container and how I run the steps manually. The steps in the Jenkinsfile are exactly what I ran manually above:
stages {
    stage('test') {
        steps {
            script {
                sh 'flutter packages get'
                sh 'flutter test'
            }
        }
    }
}
And I have pointed to our local registry where the image resides:
agent {
    docker "flutter:flutter"
}
I am out of ideas and I don't really know how to solve this error. Any help would be greatly appreciated.
UPDATE
I have tried the following:
Creating a user in the Dockerfile using useradd, then switching to that user with the USER <user> Dockerfile instruction and performing all operations as that user.
Creating a user in the Dockerfile using useradd, performing all operations as root, but then running chown -R user:user /flutter to change ownership of the Flutter directory.
Both of the above required me to add the following to the Jenkinsfile:
agent {
    docker {
        image 'flutter:flutter'
        args '-u user --privileged'
    }
}
But still no luck.
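For reference, one detail worth checking in the original Dockerfile: ARG WORKING_DIR=${HOME} expands to an empty string at build time (no ENV HOME is declared before it), which is why the SDK ends up at /flutter in the error message. A sketch combining the two attempts with an explicit path (the user name "user" and the archive name are taken from the question; the layout is an assumption):

```dockerfile
# Sketch: install Flutter at an explicit path and run as a non-root
# user that owns the SDK, so dart is executable under Jenkins.
FROM ubuntu:bionic
ARG FLUTTER_ARCHIVE=flutter_linux_v0.5.1-beta.tar.xz
COPY ${FLUTTER_ARCHIVE} /
RUN apt-get update && \
    apt-get -y install xz-utils git libglu1 lib32stdc++6
RUN tar -xf /${FLUTTER_ARCHIVE} -C / && rm /${FLUTTER_ARCHIVE}
RUN useradd -m user && chown -R user:user /flutter
USER user
ENV PATH="/flutter/bin:${PATH}"
```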
