How to import groovy package/class into pipeline? - jenkins

I'm writing a Jenkins shared library.
I'm not a coder myself and because of that I bump into many errors, which usually I don't know how to solve.
My shared library structure looks like so:
itai@Itais-MBP ~/src/company/pipeline_utils - (master) $ tree -f
.
├── ./1
├── ./README.md
├── ./functions.groovy
├── ./src
│   └── ./src/com
│       ├── ./src/com/company
│       │   ├── ./src/com/company/pipelines
│       │   │   └── ./src/com/company/pipelines/standardPipeline.groovy
│       │   └── ./src/com/company/utils
│       │       ├── ./src/com/company/utils/Git.groovy
│       │       ├── ./src/com/company/utils/SlackPostBuild.groovy
│       │       ├── ./src/com/company/utils/dockerBuild.groovy
│       │       ├── ./src/com/company/utils/dockerTest.groovy
│       │       ├── ./src/com/company/utils/notifyEmail.groovy
│       │       ├── ./src/com/company/utils/notifySlack.groovy
│       │       ├── ./src/com/company/utils/pipeline.groovy
│       │       └── ./src/com/company/utils/pipelineFunctions.groovy
│       └── ./src/com/company-in-idea
├── ./test_utils.groovy
├── ./utils.groovy
└── ./vars
    ├── ./vars/standardPipeline.groovy
    └── ./vars/utils.groovy
The pipeline file looks like so:
itai@Itais-MBP ~/src/company/pipeline_utils - (master) $ cat ./vars/standardPipeline.groovy
import com.company.utils.Git;

def call(body) {
    def config = [:]
    body.resolveStrategy = Closure.DELEGATE_FIRST
    body.delegate = config
    body()

    node {
        // Clean workspace before doing anything
        deleteDir()
        try {
            stage ('Clone') {
                checkout scm
                def committer = getCommitter()
            }
            stage ('Build') {
                sh "echo 'building ${config.projectName} ...'"
            }
            stage ('Tests') {
                parallel 'static': {
                    sh "echo 'shell scripts to run static tests...'"
                },
                'unit': {
                    sh "echo 'shell scripts to run unit tests...'"
                },
                'integration': {
                    sh "echo 'shell scripts to run integration tests...'"
                }
            }
            stage ('Deploy') {
                sh "echo 'deploying to server ${config.serverDomain}...'"
                sh "echo Itai ganot"
                sh "echo Itai"
            }
        } catch (err) {
            currentBuild.result = 'FAILED'
            throw err
        }
    }
}
You can see in the pipeline file that I import "com.company.utils.Git"; the Git function file looks like so:
itai@Itais-MBP ~/src/company/pipeline_utils - (master) $ cat ./src/com/company/utils/Git.groovy
#!/usr/bin/env groovy
package com.company.utils;

def sh_out(command) {
    sh(returnStdout: true, script: command).trim()
}

def getCommitter(){
    node {
        committer = this.sh_out('git show -s --format=\'%ce\' | tr -d "\'" | cut -d@ -f1')
        return committer
    }
}

def getRepo(){
    node {
        reponame = this.sh_out('basename $(git remote show -n origin | grep Push | cut -d: -f2- | rev | cut -c5- | rev)')
        return reponame
    }
}

void gitClean(){
    node {
        this.sh_out('''
            sudo chown -R ubuntu:ubuntu .
            if [ -d ".git" ]; then
                sudo git reset --hard &>/dev/null
                sudo git clean -fxd &>/dev/null
                sudo git tag -d $(git tag) &>/dev/null
            fi
            || true ''')
    }
}

return this
The Jenkinsfile looks like so:
@Library("company") _

standardPipeline {
    projectName = "Project1"
    serverDomain = "Project1 Server Domain"
}
When I run the job, it fails with the following error:
java.lang.NoSuchMethodError: No such DSL method 'getCommitter' found
among steps [AddInteractivePromotion, ArtifactoryGradleBuild,
ArtifactoryMavenBuild, ConanAddRemote, ConanAddUser, InitConanClient,
MavenDescriptorStep, RunConanCommand, ansiColor, ansiblePlaybook,
archive...
As far as I understand, I've imported the git package into the pipeline so I don't understand why this function is not recognized.
Another problem I have is that the pipeline only "looks" at the standardPipeline.groovy file at projectDir/vars and not under src/com/company/pipelines/standardPipeline.groovy ... I even tried removing the vars dir but the pipeline keeps looking there... why is that?
Any idea what I'm doing wrong?

It looks like your line def committer = getCommitter() is what is causing the error, because Jenkins is looking for a pipeline step named getCommitter() instead of calling the Git class's method.
Referencing the documentation here, you should do something like this in the pipeline file:
def gitUtil = new Git()
def committer = gitUtil.getCommitter()

Related

Yocto project: copy prebuilt files to the target filesystem

I've cross-compiled OpenCV 3.4, and it runs well on the board. The project is managed through Yocto, so I wrote this opencv-gl.bb file to copy the prebuilt OpenCV files to the target filesystem. But after I flashed the image to the development board, I got nothing. It seems the copy commands have never been executed. Where am I wrong?
SUMMARY = "Install opencv 3.4.14 libraries"
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""

SRC_URI = "\
    file://etc \
    file://usr \
"

S = "${WORKDIR}"

## prebuilt library doesn't need the following steps
do_configure[noexec] = "1"
do_compile[noexec] = "1"
do_package_qa[noexec] = "1"
do_install[nostamp] += "1"

do_install() {
    install -d ${D}/usr/local/bin
    cp -rf ${S}/usr/bin/* ${D}/usr/local/bin/
    install -d ${D}/usr/local/lib
    cp -rf ${S}/usr/lib/* ${D}/usr/local/lib/
    install -d ${D}/usr/local/include
    cp -rf ${S}/usr/include/* ${D}/usr/local/include/
    install -d ${D}/usr/local/share
    cp -rf ${S}/usr/share/* ${D}/usr/local/share/
}

# let the build system extend the FILESPATH file search path
FILESEXTRAPATHS_prepend := "${THISDIR}/prebuilts:"

FILES_${PN} += " \
    /usr/local/bin/* \
    /usr/local/lib/* \
    /usr/local/include/* \
    /usr/local/share/* \
"

# INSANE_SKIP_${PN} += "installed-vs-shipped"
The file structure is as follows:
wb@ubuntu:~/Yocto/meta-semidrive/recipes-test/opencv-gl$ tree -L 3
.
├── opencv-gl.bb
└── prebuilts
    ├── etc
    │   └── ld.so.conf
    ├── LICENSE
    └── usr
        ├── bin
        ├── include
        ├── lib
        └── share

7 directories, 3 files

My gRPC client in a Docker container doesn't work, but the client outside the container works fine

Envs
$ protoc --version
libprotoc 3.7.1
$ docker-compose --version
docker-compose version 1.22.0, build f46880fe
$ docker --version
Docker version 20.10.0-beta1, build ac365d7
What I want to do
I want to make a microservice implemented by gin-gonic.
Codes
$ tree
.
├── api
│   ├── Dockerfile
│   ├── go.mod
│   ├── go.sum
│   ├── main.go
│   ├── pb
│   │   ├── proto
│   │   │   └── user.proto
│   │   ├── user_grpc.pb.go
│   │   └── user.pb.go
│   └── services
│       ├── README.md
│       └── user
│           ├── Dockerfile
│           ├── go.mod
│           ├── go.sum
│           ├── main.go
│           └── pb
│               ├── user_grpc.pb.go
│               └── user.pb.go
├── docker-compose.yml
├── make_pb.sh
└── README.md
As you can see, I have two Dockerfiles now.
The Dockerfile in the api dir runs the gRPC client program.
The other Dockerfile, in the user dir, runs the gRPC server program.
api/main.go
package main

import (
    "context"
    "log"
    "os"
    "time"

    pb "github.com/Asuha-a/URLShortener/api/pb"
    "google.golang.org/grpc"
)

const (
    address     = "localhost:50051"
    defaultName = "world"
)

func main() {
    hello()
    //r := gin.Default()
    //r.GET("", hello)
    //r.Run()
}

func hello() {
    log.Println(grpc.WithBlock())
    conn, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
    if err != nil {
        log.Fatalf("did not connect: %v", err)
    }
    defer conn.Close()
    client := pb.NewGreeterClient(conn)

    // Contact the server and print out its response.
    name := defaultName
    if len(os.Args) > 1 {
        name = os.Args[1]
    }
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    r, err := client.SayHello(ctx, &pb.HelloRequest{Name: name})
    if err != nil {
        log.Fatalf("could not greet: %v", err)
    }
    log.Printf("Greeting: %s", r.GetMessage())
}
api/services/user/main.go
package main

import (
    "context"
    "log"
    "net"

    pb "github.com/Asuha-a/URLShortener/api/services/user/pb"
    "google.golang.org/grpc"
)

const (
    port = ":50051"
)

// server is used to implement helloworld.GreeterServer.
type server struct {
    pb.UnimplementedGreeterServer
}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
    log.Printf("Received: %v", in.GetName())
    return &pb.HelloReply{Message: "Hello " + in.GetName()}, nil
}

func main() {
    log.Println("now listening")
    lis, err := net.Listen("tcp", port)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    pb.RegisterGreeterServer(s, &server{})
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
docker-compose.yml
version: '3'
services:
  gateway:
    build:
      context: ./api/
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    tty:
      true
    depends_on:
      - user
  user:
    build:
      context: ./api/services/user
      dockerfile: Dockerfile
    ports:
      - 50051:50051
    tty:
      true
Both Dockerfiles contain the same code.
FROM golang:latest
RUN mkdir /go/src/work
WORKDIR /go/src/work
ADD . /go/src/work
CMD go run main.go
What happened
When I run docker-compose up, grpc.Dial doesn't seem to work.
The log is here
$ docker-compose up
Recreating urlshortener_user_1 ... done
Recreating urlshortener_gateway_1 ... done
Attaching to urlshortener_user_1, urlshortener_gateway_1
user_1 | go: downloading google.golang.org/grpc v1.33.1
user_1 | go: downloading google.golang.org/protobuf v1.25.0
user_1 | go: downloading github.com/golang/protobuf v1.4.3
gateway_1 | go: downloading google.golang.org/grpc v1.33.1
gateway_1 | go: downloading github.com/golang/protobuf v1.4.3
gateway_1 | go: downloading google.golang.org/protobuf v1.25.0
user_1 | go: downloading google.golang.org/genproto v0.0.0-20201028140639-c77dae4b0522
user_1 | go: downloading golang.org/x/net v0.0.0-20201029055024-942e2f445f3c
gateway_1 | go: downloading google.golang.org/genproto v0.0.0-20201026171402-d4b8fe4fd877
gateway_1 | go: downloading golang.org/x/net v0.0.0-20201027133719-8eef5233e2a1
gateway_1 | go: downloading golang.org/x/sys v0.0.0-20201027140754-0fcbb8f4928c
user_1 | go: downloading golang.org/x/sys v0.0.0-20201029020603-3518587229cd
user_1 | go: downloading golang.org/x/text v0.3.4
gateway_1 | go: downloading golang.org/x/text v0.3.4
user_1 | 2020/10/29 08:29:42 now listening
gateway_1 | 2020/10/29 08:29:43 true
I expected it to print 'Greeting: Hello world'.
What I tried
When I ran the server and client without containers, it succeeded.
run client
$ go run main.go
2020/10/29 17:37:17 true
2020/10/29 17:38:02 Greeting: Hello world
run server
$ go run main.go
2020/10/29 17:38:00 now listening
2020/10/29 17:38:02 Received: world
When I ran the server without a container and the client in a container, it failed.
run client
$ docker-compose up
Recreating urlshortener_gateway_1 ... done
Attaching to urlshortener_gateway_1
gateway_1 | go: downloading google.golang.org/protobuf v1.25.0
gateway_1 | go: downloading github.com/golang/protobuf v1.4.3
gateway_1 | go: downloading google.golang.org/grpc v1.33.1
gateway_1 | go: downloading google.golang.org/genproto v0.0.0-20201026171402-d4b8fe4fd877
gateway_1 | go: downloading golang.org/x/net v0.0.0-20201027133719-8eef5233e2a1
gateway_1 | go: downloading golang.org/x/sys v0.0.0-20201027140754-0fcbb8f4928c
gateway_1 | go: downloading golang.org/x/text v0.3.4
gateway_1 | 2020/10/29 08:40:47 true
run server
$ go run main.go
2020/10/29 17:40:19 now listening
When I ran the server in a container and the client without a container, it succeeded.
run client
$ go run main.go
2020/10/29 17:43:40 true
2020/10/29 17:43:41 Greeting: Hello world
run server
$ docker-compose up
Removing urlshortener_user_1
Recreating 42fb18da11cf_urlshortener_user_1 ... done
Attaching to urlshortener_user_1
user_1 | go: downloading google.golang.org/grpc v1.33.1
user_1 | go: downloading google.golang.org/protobuf v1.25.0
user_1 | go: downloading github.com/golang/protobuf v1.4.3
user_1 | go: downloading golang.org/x/net v0.0.0-20201029055024-942e2f445f3c
user_1 | go: downloading google.golang.org/genproto v0.0.0-20201028140639-c77dae4b0522
user_1 | go: downloading golang.org/x/sys v0.0.0-20201029020603-3518587229cd
user_1 | go: downloading golang.org/x/text v0.3.4
user_1 | 2020/10/29 08:43:41 now listening
user_1 | 2020/10/29 08:43:41 Received: world
What I want to know
Why did running the client in container fail?
How to fix it?
The issue was solved by this comment:
Localhost in the container is not the same as where the server is running. – Matt
I fixed api/main.go:
address = "localhost:50051" -> address = "user:50051"
Now it works.
The host name has to be the service name specified in docker-compose.yml.

Why doesn't bitbucket-pipelines create a cache?

Here is the resulting tree on the server after my script:
> pwd
/opt/atlassian/pipelines/agent/build
> tree -d
.
├── android-sdk-linux
│   ├── build-tools
│   │   └── 28.0.3
...
├── app
│   ├── build
...
└── readme

8005 directories
Here is my script from https://opatry.net/2017/11/06/bitbucket-pipelines-for-android/:
ci_install.sh
#!/usr/bin/env bash

set -eu

cur_dir=$(cd "$(dirname "$0")"; pwd)
origin_dir=$(cd "${cur_dir}/.."; pwd)
app_dir="${origin_dir}/android"
output_dir="${origin_dir}/artifacts"

default_android_sdk_zip_version="3859397"
android_sdk_zip_version=${1:-${default_android_sdk_zip_version}}

case $(uname -s) in
    Linux)
        os="linux"
        ;;
    Darwin)
        os="darwin"
        ;;
    CYGWIN*|MINGW*)
        os="windows"
        ;;
    *)
        echo "!! Unsupported OS $(uname -s)"
        exit 1
        ;;
esac

export ANDROID_HOME="${origin_dir}/android-sdk-${os}"

if [ ! -f "${ANDROID_HOME}/tools/bin/sdkmanager" ]; then
    # Download and unzip Android sdk
    echo "Downloading Android SDK '${android_sdk_zip_version}' for '${os}'"
    wget "https://dl.google.com/android/repository/sdk-tools-${os}-${android_sdk_zip_version}.zip"
    unzip "sdk-tools-${os}-${android_sdk_zip_version}.zip" -d "${ANDROID_HOME}"
    rm "sdk-tools-${os}-${android_sdk_zip_version}.zip"
fi

# Add Android binaries to PATH
export PATH="${ANDROID_HOME}/tools:${ANDROID_HOME}/tools/bin:${ANDROID_HOME}/platform-tools:${PATH}"

# Accept all licenses (source: http://stackoverflow.com/questions/38096225/automatically-accept-all-sdk-licences)
echo "Auto Accepting licenses"
mkdir -p "$ANDROID_HOME/licenses"
echo -e "\n8933bad161af4178b1185d1a37fbf41ea5269c55" > "${ANDROID_HOME}/licenses/android-sdk-license"
echo -e "\n84831b9409646a918e30573bab4c9c91346d8abd" > "${ANDROID_HOME}/licenses/android-sdk-preview-license"

# Update android sdk
echo "Downloading packages described by ${cur_dir}/package_file.txt"
cat "${cur_dir}/package_file.txt"
( sleep 5 && while [ 1 ]; do sleep 1; echo y; done ) | sdkmanager --package_file="${cur_dir}/package_file.txt"
package_file.txt
platform-tools
build-tools;26.0.2
platforms;android-26
bitbucket-pipelines.yml:
image: java:8

pipelines:
  branches:
    master:
      - step:
          caches:
            - gradle
            - android-sdk
          script:
            - bash ./build/ci_install.sh
            - ANDROID_HOME=$PWD/android-sdk-linux bash ./build/android.sh
definitions:
  caches:
    android-sdk: android-sdk-linux
    gradle: gradle
In result:
Build teardown
You already have a 'gradle' cache so we won't create it again
Assembling contents of new cache 'android-sdk'
But in Pipelines -> Caches -> Dependency caches, the cache for android-sdk is not displayed.
And at next run:
Cache "android-sdk": Downloading
Cache "android-sdk": Not found
All works fine now. My ci_install.sh file was in /MyProject/utils/pipelines/ci_install.sh, so android-sdk-linux was created in the /opt/atlassian/pipelines/agent/build/utils/android-sdk-linux folder.
So I moved ci_install.sh to /MyProject/pipelines/ci_install.sh, and now android-sdk-linux is created in the /opt/atlassian/pipelines/agent/build/android-sdk-linux folder.
I also removed the utils folder from my project.
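The cache definition android-sdk: android-sdk-linux is resolved relative to the clone directory (/opt/atlassian/pipelines/agent/build), while ci_install.sh computes ANDROID_HOME relative to its own parent directory, so the script's location decides where the SDK lands. A throwaway reproduction of that path logic (directory names invented for illustration):

```shell
# Temp dir standing in for /opt/atlassian/pipelines/agent/build
tmp=$(mktemp -d)
mkdir -p "$tmp/utils/pipelines" "$tmp/pipelines"

# Same lines ci_install.sh uses to compute where android-sdk-linux goes
script='cur_dir=$(cd "$(dirname "$0")"; pwd)
origin_dir=$(cd "${cur_dir}/.."; pwd)
echo "${origin_dir}/android-sdk-linux"'
printf '%s\n' "$script" > "$tmp/utils/pipelines/ci_install.sh"
printf '%s\n' "$script" > "$tmp/pipelines/ci_install.sh"

# The SDK path follows the script location, so only the second
# copy puts android-sdk-linux at the cache root
bash "$tmp/utils/pipelines/ci_install.sh"
bash "$tmp/pipelines/ci_install.sh"
```

Only the second location produces a path that matches the android-sdk-linux cache directory at the top of the build dir, which is why moving the script fixed the cache.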

Replace default configuration with my own in a Docker image

I am working on the following Dockerfile:
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get install -y \
        curl \
        apache2 \
        php5 \
        php5-cli \
        libapache2-mod-php5 \
        php5-gd \
        php5-json \
        php5-mcrypt \
        php5-mysql \
        php5-curl \
        php5-memcached \
        php5-mongo \
        zend-framework

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    chown www-data /usr/local/bin/composer && composer --version

# Install useful PHP tools
RUN composer global require sebastian/phpcpd && \
    composer global require phpmd/phpmd && \
    composer global require squizlabs/php_codesniffer

# Install xdebug after we install composer since it causes issues
# see https://getcomposer.org/doc/articles/troubleshooting.md#xdebug-impact-on-composer
RUN apt-get install -y php5-xdebug
As you may notice, this installs PHP 5.5.x, which comes with a default configuration that I would like to override with my own values.
I have the following directory structure:
docker-php55/
├── container-files
│   ├── config
│   │   └── init
│   │       └── vhost_default
│   └── etc
│       └── php.d
│           ├── zz-php-directories.ini
│           └── zz-php.ini
├── Dockerfile
├── LICENSE
├── README.md
└── run
The files zz-php-directories.ini and zz-php.ini are my configurations, which should be written to /etc/php5/apache2/php.ini upon image creation. The content of the files is the following:
zz-php.ini
; Basic configuration override
expose_php = Off
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M
date.timezone = UTC
max_execution_time = 120
; Error reporting
display_errors = stderr
display_startup_errors = Off
error_reporting = E_ALL
; A bit of performance tuning
realpath_cache_size = 128k
; OpCache tuning
opcache.max_accelerated_files = 32000
; Temporarily disable using HUGE PAGES by OpCache.
; This should improve performance, but requires appropriate OS configuration
; and for now it often results with some weird PHP warning:
; PHP Warning: Zend OPcache huge_code_pages: madvise(HUGEPAGE) failed: Invalid argument (22) in Unknown on line 0
opcache.huge_code_pages=0
; Xdebug
[Xdebug]
xdebug.remote_enable = true
xdebug.remote_host = "192.168.3.1" ; this IP should be the host IP
xdebug.remote_port = "9001"
xdebug.idekey = "XDEBUG_PHPSTORM"
zz-php-directories.ini
; Configure temp path locations
sys_temp_dir = /data/tmp/php
upload_tmp_dir = /data/tmp/php/uploads
session.save_path = /data/tmp/php/sessions
uploadprogress.file.contents_template = "/data/tmp/php/upload_contents_%s"
uploadprogress.file.filename_template = "/data/tmp/php/upt_%s.txt"
How do I override the default php.ini parameters on the image with the ones on those files upon image creation?
EDIT: to improve the question
To give an example: zz-php.ini is a local file on my laptop/PC. As soon as I install PHP in the image, it comes with a default configuration file, meaning I have a file under /etc/php5/apache2/php.ini.
This default configuration file already has default values, for example expose_php = On (again, this is the default; others come commented out, such as ;realpath_cache_size =). What I want to do is change the values in the default file to the values from my file, in other words:
default (as in /etc/php5/apache2/php.ini): expose_php = On
override (as in zz-php.ini): expose_php = Off
In the end, /etc/php5/apache2/php.ini should contain the values from zz-php.ini.
As for the host IP address, I think I could use an ENV var and pass it to the build as an argument, am I right? If not, then how would you get the host IP address needed for that setup?
That's two questions.
1) Just use the COPY instruction to copy your local php.ini into the image location. Eg:
COPY php.ini /etc/php5/apache2/php.ini
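If you would rather keep the stock php.ini and rewrite only the keys you override, a sed-based merge can be sketched. This assumes GNU sed/grep, flat key = value lines, and invented temp paths standing in for /etc/php5/apache2/php.ini and zz-php.ini:

```shell
# Stand-ins for the default php.ini and the override file (contents abridged)
tmp=$(mktemp -d)
printf '%s\n' 'expose_php = On' ';realpath_cache_size =' > "$tmp/php.ini"
printf '%s\n' 'expose_php = Off' 'realpath_cache_size = 128k' > "$tmp/zz-php.ini"

# For each override: replace the existing (possibly commented-out) line,
# or append the setting when the key is absent
while IFS= read -r line; do
    case $line in ''|\;*|'['*) continue ;; esac   # skip blanks, comments, sections
    key=$(printf '%s' "${line%%=*}" | tr -d ' ')
    if grep -qE "^;?\s*${key}\s*=" "$tmp/php.ini"; then
        sed -i -E "s|^;?\s*${key}\s*=.*|${line}|" "$tmp/php.ini"
    else
        printf '%s\n' "$line" >> "$tmp/php.ini"
    fi
done < "$tmp/zz-php.ini"

result=$(cat "$tmp/php.ini")
echo "$result"
```

This flips expose_php to Off and uncomments realpath_cache_size with the new value. A plain COPY of a complete php.ini, as above, is simpler when you are happy to own the whole file.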
2) You don't want to hardcode any IP into your image. That needs to be done when the container is started. The standard way of doing this with Docker is to specify an environment variable like HOST_IP and use a shell script to make the modifications in the container at start time. For instance:
Your inject.sh script:
#!/usr/bin/bash
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" /etc/php5/apache2/php.ini
You need to add the inject.sh file to your image when you build it.
COPY inject.sh /usr/local/bin/
Then you can initialize and start your container as follow:
docker run -e HOST_IP=53.62.10.12 mycontainer bash -c "inject.sh && exec myphpapp"
The exec is needed to make sure myphpapp becomes the main process of the container (i.e. it has PID 1); otherwise it won't receive signals (like Ctrl-C).
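To see the substitution in isolation, here is a throwaway run against a fake php.ini (the paths and the IP value are invented for illustration):

```shell
# Fake php.ini standing in for /etc/php5/apache2/php.ini
tmp=$(mktemp -d)
cat > "$tmp/php.ini" <<'EOF'
[Xdebug]
xdebug.remote_enable = true
xdebug.remote_host = "192.168.3.1"
EOF

# Same substitution inject.sh performs, driven by the HOST_IP env var
HOST_IP=53.62.10.12
sed -i -E "s/xdebug.remote_host.*/xdebug.remote_host=$HOST_IP/" "$tmp/php.ini"
result=$(grep remote_host "$tmp/php.ini")
echo "$result"
```

The remote_host line is rewritten in place while the rest of the file is untouched, which is exactly what the container does at start time before launching the app.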

Search for directory names not containing a file matching a pattern

In a big recursive directory listing, I am searching for directory names that do not contain a file ending with the pattern "mt".
How can I search for directory names in this case?
I searched the net and found a command to find directory names containing a file with f:
find . -type f -name '*f*' | sed -r 's|/[^/]+$||' | sort | uniq
But how can I search for directory names without mt?
I tried the find command but didn't find the right argument.
I am able to search for file names in a recursive directory, but here I am interested in searching for directory names that do not contain a file matching the pattern mt.
You can use:
find . -type f -name '*f*' | grep -v 'mt$'
grep -v filters out every match to the pattern that follows.
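Note that this lists the matching files, not the directories; piping through the same sed trick from the question reduces them to directory names. A throwaway sketch (file names invented for illustration):

```shell
# Throwaway tree: a/ holds both an mt file and a plain file, b/ only a plain file
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b"
touch "$tmp/a/file.mt" "$tmp/a/foo.txt" "$tmp/b/foo.txt"
cd "$tmp"

# Find files matching *f*, drop those ending in "mt", reduce to directory names
result=$(find . -type f -name '*f*' | grep -v 'mt$' | sed 's|/[^/]*$||' | sort -u)
echo "$result"
```

This prints ./a and ./b. Beware that ./a still appears even though it also contains file.mt: this approach finds directories with at least one non-mt match, not directories that are free of mt files (the answer below addresses that stricter reading).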
directory tree
∮ tree -F
.
├── a/
│   └── x.mt
├── b/
│   ├── x/
│   ├── y/
│   │   └── x.mt
│   └── z/
├── blacklist
└── c/
create blacklist
∮ find . -name '*mt' | sed 's#/[^/]*mt$##; s#^#^#' > blacklist
∮ cat blacklist
^./b/y
^./a
find directories
∮ find . -type d | grep -v -f blacklist
.
./b
./b/x
./b/z
./c
This solution ignores all directories which directly contain *mt file.
It's easy to write a script (using fgrep) to ignore those (., ./b) which indirectly contain *mt file.
Something like this should work:
find -type d -exec bash -c \
'
shopt -s nullglob dotglob;
found=;
for f in "{}"/*mt; do
    [[ -f $f ]] && found=true && break;
done;
[[ $found == true ]] || echo "{}";
' \;
Change || to && if you instead want directories that do contain *mt
There might be an easier way to do this, but you can't delay glob expansion in -exec without eval (using which would cause security issues), so I couldn't think of a better way than putting it into bash -c.
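Assuming GNU find and a POSIX shell, a shorter variant pushes the glob test into an inverted -exec: the inner sh expands the glob, ls fails when nothing matches, and ! turns that failure into a match. Sketched here on a throwaway copy of the tree from the answer above:

```shell
# Recreate the example tree in a temp dir (mt files in a/ and b/y/)
tmp=$(mktemp -d)
mkdir -p "$tmp/a" "$tmp/b/x" "$tmp/b/y" "$tmp/b/z" "$tmp/c"
touch "$tmp/a/x.mt" "$tmp/b/y/x.mt"
cd "$tmp"

# Print only directories with no *mt file directly inside them:
# ls succeeds when the glob matched, and ! inverts that result
result=$(find . -type d ! -exec sh -c 'ls "$1"/*mt >/dev/null 2>&1' sh {} \; -print | LC_ALL=C sort)
echo "$result"
```

Like the bash -c version, this ignores only directories that directly contain an *mt file, so parents such as . and ./b are still listed.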
