Jenkins Console Print Encoded Characters - docker

When outputting characters from a declarative pipeline running inside a Linux container, is it possible to change the encoding to match the true output from the terminal? I.e.
├── file1             +-- file1
├── file2             +-- file2
└── file3             +-- file3
^Formatting I want    ^Formatting I get
I tried passing the following arguments to my Docker Agent:
-e JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF-8"
-e LC_ALL="en_US.UTF-8"
Combined with:
sh returnStdout: true, script: " "
And got â”œâ”€â”€ in place of the "+--", which seems to be the ANSI (wrong-charset) rendering of the UTF-8 bytes for "├──".
I am using the ansiColor option, but that didn't seem to help much.
I saw this similar question, but I was unsure how to implement the solution in the pipeline:
Jenkins: console output characters

You can use the Jenkins UI to change the encoding to UTF-8.
Go to
Jenkins -> Manage Jenkins -> Configure System -> Global properties
and add two environment variables, JAVA_TOOL_OPTIONS and LANG, with the values -Dfile.encoding=UTF-8 and en_US.UTF-8 respectively.
After adding these you may need to restart Jenkins.
Reference: https://www.linkedin.com/pulse/how-resolve-utf-8-encoding-issue-jenkins-ajuram-salim/
UPDATE:
Or you can update <arguments> in the jenkins.xml file.
e.g.
<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dfile.encoding=UTF-8 -jar "%BASE%\jenkins.war" --httpPort=8080 --webroot="%BASE%\war"</arguments>

Here is the official answer from CloudBees. Unfortunately, none of these worked for me.
https://support.cloudbees.com/hc/en-us/articles/360004397911-How-to-address-issues-with-unmappable-characters-
Add these to the JVM arguments on the master and also on the agents:
-Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8

For me, the problem was solved by specifying the optional 'encoding' parameter of the 'sh' pipeline step (sh: Shell Script).
Of course, this will only work provided file.encoding is set properly as described in the other posts here.
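A minimal sketch of what that looks like in a Jenkinsfile (the 'tree' command is just a hypothetical stand-in for whatever produces the box-drawing characters):
// Sketch: explicitly request UTF-8 when capturing shell output.
def output = sh(script: 'tree', returnStdout: true, encoding: 'UTF-8')
echo output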

Related

Parsing config file with sections in Jenkins Pipeline and get specific section

I have to parse a config file with sections in a Jenkins Pipeline. Below is an example config file:
[deployment]
10.7.1.14
[control]
10.7.1.22
10.7.1.41
10.7.1.17
[worker]
10.7.1.45
10.7.1.42
10.7.1.49
10.7.1.43
10.7.1.39
[edge]
10.7.1.13
Expected Output:
control1 = 10.7.1.17, control2 = 10.7.1.22, control3 = 10.7.1.41
I tried the code below in my Jenkins Pipeline script section, but it seems to be the wrong function to use:
def cluster_details = readProperties interpolate: true, file: 'inventory'
echo cluster_details
def Var1= cluster_details['control']
echo "Var1=${Var1}"
Could you please help me with an approach to achieve the expected result?
According to the documentation, readProperties reads Java properties files, not INI files:
https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#readproperties-read-properties-from-files-in-the-workspace-or-text
I think to read an INI file you have to find a library for that,
e.g. https://ourcodeworld.com/articles/read/839/how-to-read-parse-from-and-write-to-ini-files-easily-in-java
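Alternatively, if you would rather not pull in a library, here is a minimal Groovy sketch that splits the sections by hand, assuming the inventory file looks exactly like the example above (depending on your Jenkins version you may need to move the parsing into a @NonCPS method):
// Sketch: parse the INI-style inventory into a map of section name -> list of IPs.
def sections = [:]
def current = null
readFile('inventory').readLines().each { line ->
    line = line.trim()
    if (!line) {
        // skip blank lines
    } else if (line.startsWith('[') && line.endsWith(']')) {
        current = line[1..-2]        // section header, e.g. "control"
        sections[current] = []
    } else if (current != null) {
        sections[current] << line    // IP belongs to the current section
    }
}
echo "control nodes: ${sections['control']}"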
Hi, I got a solution for the problem:
control_nodes = sh (script: """
manish=\$(ansible control -i inventory --list-host |sort -t . -g -k1,1 -k2,2 -k3,3 -k4,4 |awk '{if(NR>1)print}' |awk '{\$1=\$1;print}') ; \
echo \$manish
""",returnStdout: true).trim()
echo "Cluster Control Nodes are : ${control_nodes}"
def (control_ip1,control_ip2,control_ip3) = control_nodes.split(' ')
//println c1 // this also works
echo "Control 1: ${control_ip1}"
echo "Control 2: ${control_ip2}"
echo "Control 3: ${control_ip3}"
Explanation:
In the script section I am getting the list of hostnames. Using sort I sort the hostnames on the dot (.) delimiter; the first awk removes the first line of the output, and the second awk strips the leading whitespace.
Using returnStdout I save the shell output into a Jenkins variable, which holds the list of IPs separated by whitespace.
Once I have the values in the Jenkins variable, I extract the individual IPs using the split method.
Hope it helps.

Clean composer output from non readable characters in Jenkins' console output page

I have a Jenkins job to tweak, but no administration right on Jenkins itself.
I'd like to clean the composer output of non-readable characters. The command is composer update --no-progress --ansi, which outputs garbled 'square' characters in Jenkins' console.
I didn't exactly get the reason why Jenkins cannot output some characters correctly.
As per https://medium.com/pacroy/how-to-fix-jenkins-console-log-encoding-issue-on-windows-a1f4b26e0db4, I could perhaps have tried specifying -Dfile.encoding=UTF8 for Java, but as I said I don't have Jenkins administration rights.
How could I get rid of these 'square' characters?
By pasting output lines into Notepad++, I noticed that these characters were backspaces. Here is how I managed to clean up the output for the Jenkins console:
# run the command, redirect the output into composer.out file
bin/composer.sh update --no-progress --ansi >composer.out 2>&1
# getting rid of backspaces
composer_out=$(cat composer.out | tr -d '\b')
# adding line feeds instead of numerous spaces
composer_out=$(echo "$composer_out" | sed -r 's/\)\s*(\w+)/\)\n\1/g')
echo "$composer_out"

How to escape a $-sign in a dockerfile?

I am trying to write a Dockerfile in which I add a few Java options to a script called envvars.
To achieve that I want to append a few lines of text to said file, like so:
RUN echo "JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
RUN echo "JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=${PWD_TRUSTSTORE}" >> ${BIN_DIR}/envvars
RUN echo "export JAVA_OPTS" >> ${BIN_DIR}/envvars
The issue here is that I want the ${varname} placeholders (those with curly braces) to be replaced during execution of the docker build command, while the substring '$JAVA_OPTS' (i.e. without braces) should be echoed verbatim into the envvars file, so that in the end the result in the /usr/local/apache2/bin/envvars file reads:
...
JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStore=/usr/local/apache2/cert/myserver_truststore.jks
JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=my_secret
export JAVA_OPTS
How can one escape a $-sign from variable substitution in dockerfiles?
I found hints to use \$ or $$ but neither worked for me.
In case that matters (which I hope/expect not to): I am building the image using "Docker Desktop" on Windows 10, but I would expect the Dockerfile to be agnostic of that.
First you need to add # escape=` to your Dockerfile, since \ is the default escape character in a Dockerfile; then you can use \$ to escape the dollar sign in the RUN instruction. (With the default escape character, the Dockerfile parser consumes the \$ itself and hands the shell a bare $JAVA_OPTS, which the shell expands at build time; after switching the escape character, the \$ reaches the shell untouched, and inside double quotes the shell treats it as a literal dollar sign.)
Example:
# escape=`
RUN echo "JAVA_OPTS=\$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
That will end up as JAVA_OPTS=$JAVA_OPTS in your envvars file.

How to edit "Version: xxx" from a script to automate a debian package build?

The Debian control file has a line like this (among many others):
Version: 1.1.0
We are using Jenkins to build our application as a .deb package. In Jenkins we are doing something like this:
cp -r $WORKSPACE/p1.1/ourap/scripts/ourapp_debian $TARGET/
cd $TARGET
fakeroot dpkg-deb --build ourapp_debian
We would like to do something like this in our control file:
Packages: ourapp
Version: 1.1.$BUILD_NUMBER
but obviously this is not possible.
So we need something like a sed script to find the line starting with Version: and replace everything after it with a constant plus the BUILD_NUMBER environment variable which Jenkins creates.
We have tried things like this:
$ sed -i 's/xxx/$BUILD_NUMBER/g' control
then put "Version: xxx" in our file, but this doesn't work (the single quotes stop $BUILD_NUMBER from expanding), and there must be a better way?
Any ideas?
We don't use the changelog, as this package will be installed on servers which no one has access to. The change logs are Word docs given to the customer.
We don't use or need any of the Debian helper tools.
Create two files:
f.awk
function vp(s) {   # return 1 for a string with version info
    return s ~ /[ \t]*Version:/
}
function upd() {   # an example of a version-number update function
    v[3] = ENVIRON["BUILD_NUMBER"]
}
vp($0) {
    gsub("[^.0-9]", "")    # get rid of everything but `.' and digits
    split($0, v, "[.]")    # split version info into array `v' elements
    upd()
    printf "Version: %s.%s.%s\n", v[1], v[2], v[3]
    next                   # done with this line
}
{   # print the rest without modifications
    print
}
f.example
rest1
Version: 1.1.0
rest2
Run the command
BUILD_NUMBER=42 awk -f f.awk f.example
Expected output is
rest1
Version: 1.1.42
rest2
With sed (note the double quotes, so $BUILD_NUMBER expands):
sed -ri "s/(Version.*\.)[0-9]*/\1$BUILD_NUMBER/g" <control file>
OR
sed -ni "/Version/{s/[0-9]*$/$BUILD_NUMBER/};p" <control file>
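As a pipeline-style sketch, the first sed command could slot into the build steps from the question like this (paths as in the question; the ourapp_debian/DEBIAN/control location is an assumption based on the standard dpkg-deb layout):
// Sketch: bump the Version line in the control file before building the .deb.
// $WORKSPACE, $TARGET and $BUILD_NUMBER come from Jenkins / the job configuration.
sh '''
    cp -r $WORKSPACE/p1.1/ourap/scripts/ourapp_debian $TARGET/
    cd $TARGET
    sed -ri "s/(Version.*\\.)[0-9]*/\\1$BUILD_NUMBER/g" ourapp_debian/DEBIAN/control
    fakeroot dpkg-deb --build ourapp_debian
'''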

spark submit add multiple jars in classpath

I am trying to run a Spark program that needs multiple jar files; with only one jar I am not able to run it. I want to add both jar files, which are in the same location. I have tried the below but it shows a dependency error:
spark-submit \
--class "max" maxjar.jar Book1.csv test \
--driver-class-path /usr/lib/spark/assembly/lib/hive-common-0.13.1-cdh5.3.0.jar
How can I add another jar file that is in the same directory?
I want to add /usr/lib/spark/assembly/lib/hive-serde.jar.
Just use the --jars parameter. Spark will share those jars (comma-separated) with the executors.
Specifying the full path for all additional jars works:
./bin/spark-submit --class "SparkTest" --master local[*] --jars /fullpath/first.jar,/fullpath/second.jar /fullpath/your-program.jar
Or add jars in conf/spark-defaults.conf by adding lines like:
spark.driver.extraClassPath /fullpath/first.jar:/fullpath/second.jar
spark.executor.extraClassPath /fullpath/first.jar:/fullpath/second.jar
You can use * to import all jars in a folder when adding them in conf/spark-defaults.conf:
spark.driver.extraClassPath /fullpath/*
spark.executor.extraClassPath /fullpath/*
I was trying to connect to MySQL from Python code that was executed using spark-submit.
I was using the HDP sandbox with Ambari. I tried a lot of options such as --jars, --driver-class-path, etc., but none worked.
Solution
Copy the jar into /usr/local/miniconda/lib/python2.7/site-packages/pyspark/jars/
As of now I'm not sure if it's a solution or a quick hack, but since I'm working on a POC it kind of works for me.
In Spark 2.3 you just need to set the --jars option. The file path should be prepended with the scheme though, i.e. file:///<absolute path to the jars>.
E.g.: file:////home/hadoop/spark/externaljars/* or file:////home/hadoop/spark/externaljars/abc.jar,file:////home/hadoop/spark/externaljars/def.jar
Pass --jars with the path of jar files separated by , to spark-submit.
For reference:
--driver-class-path is used to mention "extra" jars to add to the "driver" of the spark job
--driver-library-path is used to "change" the default library path for the jars needed for the spark driver
--driver-class-path will only push the jars to the driver machine. If you want to send the jars to "executors", you need to use --jars
And to set the jars programmatically, set the following config:
spark.yarn.dist.jars with comma-separated list of jars.
Eg:
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.appName("Spark config example") \
.config("spark.yarn.dist.jars", "<path-to-jar/test1.jar>,<path-to-jar/test2.jar>") \
.getOrCreate()
You can use --jars $(echo /Path/To/Your/Jars/*.jar | tr ' ' ',') to include entire folder of Jars.
So,
spark-submit --class com.yourClass \
--jars $(echo /Path/To/Your/Jars/*.jar | tr ' ' ',') \
...
For the --driver-class-path option you can use : as a delimiter to pass multiple jars.
Below is an example with the spark-shell command, but I guess the same should work with spark-submit as well:
spark-shell --driver-class-path /path/to/example.jar:/path/to/another.jar
Spark version: 2.2.0
If you are using a properties file, you can add the following line there:
spark.jars=jars/your_jar1.jar,...
assuming that your directory layout is:
<your root from where you run spark-submit>
|
|- jars
   |- your_jar1.jar
