We are facing an issue where we are unable to overwrite log4j.properties.
Here is our code
Dockerfile
FROM confluentinc/cp-zookeeper:6.2.0
#RUN chmod 777 /etc/kafka/connect-log4j.properties
USER root
COPY ./log4j.properties /etc/kafka/log4j.properties
log4j.properties
log4j.rootLogger=INFO, ROLLINGFILE
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=DEBUG
log4j.appender.ROLLINGFILE.File=/var/log/zookeeper.log
Note: We are using the Confluent ZooKeeper image 'confluentinc/cp-zookeeper:6.2.0'.
Or, you could copy a template file instead
FROM confluentinc/cp-zookeeper:6.2.0
COPY --chown=appuser:appuser ./log4j.properties.template /etc/confluent/docker/log4j.properties.template
where
log4j.rootLogger={{ env["ZOOKEEPER_LOG4J_ROOT_LOGLEVEL"] | default('INFO') }}, rollingfile
log4j.appender.rollingfile=org.apache.log4j.RollingFileAppender
log4j.appender.rollingfile.layout=org.apache.log4j.PatternLayout
log4j.appender.rollingfile.File=/var/log/kafka/zookeeper.log
log4j.appender.rollingfile.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.rollingfile.MaxFileSize=10MB
log4j.appender.rollingfile.MaxBackupIndex=1
log4j.appender.rollingfile.append=true
{% if env['ZOOKEEPER_LOG4J_LOGGERS'] %}
{% set loggers = parse_log4j_loggers(env['ZOOKEEPER_LOG4J_LOGGERS']) %}
{% for logger,loglevel in loggers.items() %}
log4j.logger.{{logger}}={{loglevel}}, rollingfile
{% endfor %}
{% endif %}
You shouldn't copy in a log4j.properties file. It gets overridden at runtime by a Jinja2 template defined here - https://github.com/confluentinc/kafka-images/blob/master/zookeeper/include/etc/confluent/docker/log4j.properties.template
To set custom loggers, simply set the ZOOKEEPER_LOG4J_LOGGERS environment variable on the existing image rather than building your own.
You should also use Docker logging drivers rather than forcing the process to write to a file inside the container.
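If all you need is different log levels, a run invocation along these lines should be enough (the logger name and levels here are illustrative, not taken from the question):
docker run -d \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  -e ZOOKEEPER_LOG4J_ROOT_LOGLEVEL=INFO \
  -e ZOOKEEPER_LOG4J_LOGGERS="org.apache.zookeeper=WARN" \
  confluentinc/cp-zookeeper:6.2.0
The container's startup scripts render those values into the template linked above, and docker logs then shows whatever the process writes to stdout.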
Related
I am trying to write a Dockerfile in which I add a few Java options to a script called envvars.
To achieve that, I want to append a few lines of text to said file, like so:
RUN echo "JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
RUN echo "JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=${PWD_TRUSTSTORE}" >> ${BIN_DIR}/envvars
RUN echo "export JAVA_OPTS" >> ${BIN_DIR}/envvars
The issue here is that I want the placeholders with curly braces, ${varname}, to be replaced during execution of the docker build command, while the substring '$JAVA_OPTS' (i.e. the one without braces) should be echoed verbatim and thus added to the envvars file as-is. In the end, the result in the /usr/local/apache2/bin/envvars file should read:
...
JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStore=/usr/local/apache2/cert/myserver_truststore.jks
JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=my_secret
export JAVA_OPTS
How can one escape a $-sign from variable substitution in dockerfiles?
I found hints to use \$ or $$ but neither worked for me.
In case that matters (which I hope/expect it does not): I am building the image using "Docker Desktop" on Windows 10, but I would expect the Dockerfile to be agnostic of that.
First you need to add # escape=` to your Dockerfile, since \ is an escape character in the Dockerfile by default. Then you can use \$ to escape the dollar sign in the RUN instruction.
Example:
# escape=`
RUN echo "JAVA_OPTS=\$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
That will end up as JAVA_OPTS=$JAVA_OPTS in your envvars file.
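Putting it together, the relevant part of the Dockerfile would look roughly like this sketch (assuming CERT_DIR, HOSTNAME, PWD_TRUSTSTORE and BIN_DIR are already available at build time as ARG or ENV; note that the # escape=` parser directive has to be the very first line of the Dockerfile):
# escape=`
RUN echo "JAVA_OPTS=\$JAVA_OPTS -Djavax.net.ssl.trustStore=${CERT_DIR}/${HOSTNAME}_truststore.jks" >> ${BIN_DIR}/envvars
RUN echo "JAVA_OPTS=\$JAVA_OPTS -Djavax.net.ssl.trustStorePassword=${PWD_TRUSTSTORE}" >> ${BIN_DIR}/envvars
RUN echo "export JAVA_OPTS" >> ${BIN_DIR}/envvars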
When outputting characters from a declarative pipeline running inside a Linux container, is it possible to change the encoding to match the true output from the terminal? I.e.
Formatting I want:    Formatting I get:
├── file1             +-- file1
├── file2             +-- file2
└── file3             +-- file3
I tried passing the following arguments to my Docker Agent:
-e JAVA_TOOL_OPTIONS="-Dfile.encoding=UTF-8"
-e LC_ALL="en_US.UTF-8"
Combined with:
sh returnStdout: true, script: " "
And got a mojibake sequence in place of the "+--", which seems to be the ANSI rendering of the UTF-8 bytes for "├──".
I am using the ansiColor Option but that didn't seem to help much.
I saw this similar question, but I was unsure how to implement the solution in the pipeline.
Jenkins: console output characters
You can use the Jenkins UI to change the encoding to UTF-8.
Go to
Jenkins -> Manage Jenkins -> Configure System -> Global properties
and add two environment variables, JAVA_TOOL_OPTIONS and LANG, with values -Dfile.encoding=UTF-8 and en_US.UTF-8 respectively.
After adding these you may need to restart Jenkins.
Reference: https://www.linkedin.com/pulse/how-resolve-utf-8-encoding-issue-jenkins-ajuram-salim/
UPDATE:
or you can update <arguments> in the jenkins.xml file.
e.g.
<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -Dfile.encoding=UTF-8 -jar "%BASE%\jenkins.war" --httpPort=8080 --webroot="%BASE%\war"</arguments>
Here is the official answer from CloudBees. Unfortunately, none of these worked for me.
https://support.cloudbees.com/hc/en-us/articles/360004397911-How-to-address-issues-with-unmappable-characters-
Add these to JVM Arguments in master and also on agents -
-Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8
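As a rough illustration of where those flags end up (not an exact launch command; adapt it to how your controller and agents are actually started, the port below is just the one from the example above):
java -Dfile.encoding=UTF-8 -Dsun.jnu.encoding=UTF-8 -jar jenkins.war --httpPort=8080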
For me, the problem was really solved by specifying the optional 'encoding' parameter of the 'sh' pipeline step (sh: Shell Script).
Of course, this will only work provided the file.encoding is set properly as described in other posts here.
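A minimal sketch of that in a declarative pipeline (the tree command is only an illustration):
steps {
    script {
        // capture the command output as UTF-8 instead of the agent's default encoding
        def output = sh(script: 'tree', returnStdout: true, encoding: 'UTF-8')
        echo output
    }
}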
I am trying to generate Dockerfiles with Ansible template - see the role source and the template in Ansible Galaxy and Github
I need to generate a standard Dockerfile line like:
...
VOLUME ["/etc/postgresql/9.4"]
...
However, when I put this in the input file:
...
instruction: VOLUME
value: "[\"/etc/postgresql/{{postgresql_version}}\"]"
...
It ends up rendered like:
...
VOLUME ['/etc/postgresql/9.4']
...
and I lose the double quotes (which renders the Dockerfile useless).
Any help? How can I convince Jinja not to substitute " with '? I tried \", the |safe filter, even {% raw %} - it just keeps doing it!
Update:
Here is how to reproduce the issue:
Go get the peruncs.docker role from galaxy.ansible.com or Github (link is given above)
Write up a simple playbook (say demo.yml) with the content below and run: ansible-playbook -v demo.yml. The -v option will let you see the temp directory where the generated Dockerfile with the broken content ends up, so you can examine it. Getting the Docker image to build is not important; just try to get the Dockerfile right.
- name: Build docker image
  hosts: localhost
  vars:
    - somevar: whatever
    - image_tag: "blabla/booboo"
    - docker_copy_files: []
    - docker_file_content:
        - instruction: CMD
          value: '["/usr/bin/runit", "{{somevar}}"]'
  roles:
    - peruncs.docker
Thanks in advance!
Something in Ansible appears to be recognizing that as valid Python, so it's getting transformed into a Python list and then serialized using Python's str(), which is why you end up with the single-quoted values.
An easy way to work around this is to stick a space at the beginning of the value, which seems to prevent it from getting converted into Python:
- name: Build docker image
  hosts: localhost
  vars:
    - somevar: whatever
    - image_tag: "blabla/booboo"
    - docker_copy_files: []
    - docker_file_content:
        - instruction: CMD
          value: ' ["/usr/bin/runit", "{{somevar}}"]'
  roles:
    - peruncs.docker
This results in:
CMD ["/usr/bin/runit", "whatever"]
I've got a variable in Ansible that I use to pass environment variables to a task. However, I've got another playbook that uses the role, and I'd like to tack more values onto the variable. How can I accomplish this? For example, I want to have a different ORACLE_HOME depending on which type of server I'm running the playbook against.
--- group_vars/application.yml
environment_vars:
  PIP_EXTRA_INDEX_URL: 'https://my.local.repo/pypi'
--- group_vars/ubuntu.yml
environment_vars:
  ORACLE_HOME: '/usr/lib/oracle/instantclient_11_2'
--- group_vars/centos.yml
environment_vars:
  ORACLE_HOME: '/opt/oracle/instantclient_11_2'
--- roles/test_role/tasks/main.yml
- name: Install Python Requirements
  pip:
    name: my_app==1.0
  environment: environment_vars
--- main.yml
- hosts: application
  roles:
    - role: test_role
--- inventory
[application:children]
ubuntu
centos
I'd do it this way:
environment_vars:
  ORACLE_HOME: >
    {% if ansible_distribution == 'CentOS' %}
    /opt/oracle/instantclient_11_2
    {% elif ansible_distribution == 'Debian' %}
    /usr/local/oracle/instantclient_11_2
    {% else %}
    /dev/null
    {% endif %}
It may need some tweaking to get rid of white space.
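A sketch of one way to do that trimming, using Jinja's whitespace-control markers and a block scalar that strips the trailing newline (this is my variation on the answer above, not verified against the exact role):
environment_vars:
  ORACLE_HOME: >-
    {%- if ansible_distribution == 'CentOS' -%}
    /opt/oracle/instantclient_11_2
    {%- elif ansible_distribution == 'Debian' -%}
    /usr/local/oracle/instantclient_11_2
    {%- else -%}
    /dev/null
    {%- endif -%}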
I am trying to run a Spark program where I have multiple jar files; with only one jar I am not able to run it. I want to add both jar files, which are in the same location. I have tried the below, but it shows a dependency error:
spark-submit \
--class "max" maxjar.jar Book1.csv test \
--driver-class-path /usr/lib/spark/assembly/lib/hive-common-0.13.1-cdh5.3.0.jar
How can I add another jar file which is in the same directory?
I want to add /usr/lib/spark/assembly/lib/hive-serde.jar.
Just use the --jars parameter. Spark will share those jars (comma-separated) with the executors.
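Applied to the command in the question, that would look something like this (paths taken from the question; adjust to your layout):
spark-submit \
  --class "max" \
  --jars /usr/lib/spark/assembly/lib/hive-common-0.13.1-cdh5.3.0.jar,/usr/lib/spark/assembly/lib/hive-serde.jar \
  maxjar.jar Book1.csv test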
Specifying full path for all additional jars works.
./bin/spark-submit --class "SparkTest" --master local[*] --jars /fullpath/first.jar,/fullpath/second.jar /fullpath/your-program.jar
Or add jars in conf/spark-defaults.conf by adding lines like:
spark.driver.extraClassPath /fullpath/first.jar:/fullpath/second.jar
spark.executor.extraClassPath /fullpath/first.jar:/fullpath/second.jar
You can use * to include all the jars in a folder when setting this in conf/spark-defaults.conf.
spark.driver.extraClassPath /fullpath/*
spark.executor.extraClassPath /fullpath/*
I was trying to connect to mysql from the python code that was executed using spark-submit.
I was using the HDP sandbox managed by Ambari. I tried a lot of options such as --jars, --driver-class-path, etc., but none worked.
Solution
Copy the jar in /usr/local/miniconda/lib/python2.7/site-packages/pyspark/jars/
As of now I'm not sure if it's a solution or a quick hack, but since I'm working on a POC it kind of works for me.
In Spark 2.3 you just need to set the --jars option. The file path should be prefixed with the scheme though, i.e. file:///<absolute path to the jars>
E.g.: file:////home/hadoop/spark/externaljars/* or file:////home/hadoop/spark/externaljars/abc.jar,file:////home/hadoop/spark/externaljars/def.jar
Pass --jars with the path of jar files separated by , to spark-submit.
For reference:
--driver-class-path is used to specify "extra" jars to add to the classpath of the "driver" of the Spark job
--driver-library-path is used to "change" the default library path for the jars needed by the Spark driver
--driver-class-path will only push the jars to the driver machine. If you want to send the jars to the "executors", you need to use --jars
And to set the jars programmatically, set the following config:
spark.yarn.dist.jars with a comma-separated list of jars.
Eg:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Spark config example") \
    .config("spark.yarn.dist.jars", "<path-to-jar/test1.jar>,<path-to-jar/test2.jar>") \
    .getOrCreate()
You can use --jars $(echo /Path/To/Your/Jars/*.jar | tr ' ' ',') to include entire folder of Jars.
So,
spark-submit --class com.yourClass \
  --jars $(echo /Path/To/Your/Jars/*.jar | tr ' ' ',') \
  ...
For the --driver-class-path option you can use : as a delimiter to pass multiple jars.
Below is an example with the spark-shell command, but I guess the same should work with spark-submit as well.
spark-shell --driver-class-path /path/to/example.jar:/path/to/another.jar
Spark version: 2.2.0
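The spark-submit equivalent would presumably look like this (a sketch; the application class and jar are placeholders):
# com.example.Main and /path/to/app.jar are placeholders for your own application
spark-submit --driver-class-path /path/to/example.jar:/path/to/another.jar --class com.example.Main /path/to/app.jar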
If you are using a properties file, you can add the following line there:
spark.jars=jars/your_jar1.jar,...
assuming that
<your root from where you run spark-submit>
 |
 |- jars
     |- your_jar1.jar
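To have that file picked up, point spark-submit at it with --properties-file (my.properties is an assumed file name; by default spark-submit reads conf/spark-defaults.conf):
spark-submit --properties-file my.properties --class "max" maxjar.jar Book1.csv test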