How to edit a file with SED? - docker

I am running Docker on a virtual machine on an Amazon AWS EC2 instance, and I want to edit one file (application.properties) which lies at the following location:
root@e2afc27e858e:/score-client/conf/application.properties
The Docker image does not seem to contain vim/vi, nano or emacs, so I have to edit the file with sed.
In particular, the application.properties file contains the line:
accessToken=
But I want to edit the file so that it says:
accessToken=abcd123
How do I edit the file in docker with SED?

Given the following example application.properties file:
toto="some value"
accessToken="atoken"
pipo="other value"
bingo=1
the following sed command:
sed -i 's/^\(accessToken=\).*$/\1"abcd123"/' application.properties
gives the following result (i.e. cat application.properties):
toto="some value"
accessToken="abcd123"
pipo="other value"
bingo=1

More generally:
sed -i 's/texttobechanged/textwanted/g' application.properties
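To run the edit from outside the container, docker exec can invoke sed directly. A minimal sketch, assuming the container ID (e2afc27e858e) and file path from the question's prompt:
# run sed -i inside the running container on the properties file
docker exec e2afc27e858e sed -i 's/^accessToken=.*/accessToken=abcd123/' /score-client/conf/application.properties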

Related

Unix: parse a file of full paths to SHA256 checksum files and run a command on each path/file

I have a file, file.txt, containing the full paths of files whose names end in .sha256. This is a toy example:
file.txt:
/path/a/9b/x3.sha256
/path/7c/7j/y2.vcf.gz.sha256
/path/e/g/7z.sha256
Each line has a different path/file. The *.sha256 files have checksums.
I want to run the command "sha256sum -c" on each of these *.sha256 files and write the output to an output_file.txt. However, this command only accepts the name of the .sha256 file, not the name including its full path. I have tried the following:
while read in; do
sha256sum -c "$in" >> output_file.txt
done < file.txt
but I get:
"sha256sum: WARNING: 1 listed file could not be read"
which is due to the path included in the command.
Any suggestion is welcome
#!/bin/bash
while IFS= read -r in
do
    thedir=$(dirname "$in")
    thefile=$(basename "$in")
    # cd in a subshell so the directory change does not leak into the next
    # iteration, and output_file.txt stays in the starting directory
    ( cd "$thedir" && sha256sum -c "$thefile" ) >> output_file.txt
done < file.txt
Modify your code to extract the directory and file parts of your $in variable, change into that directory, and run sha256sum -c on the bare filename.
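If you prefer a single redirection instead of appending inside the loop, the same idea can be written more compactly; a sketch, assuming the same file.txt layout as above:
# each ( ... ) keeps the cd local to that file; all output goes to one file
while IFS= read -r in; do
  ( cd "$(dirname "$in")" && sha256sum -c "$(basename "$in")" )
done < file.txt > output_file.txt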

kubectl error: error loading config file "/var/lib/jenkins/.kube/config":

I am configuring Jenkins to automatically deploy my successful builds to my Kubernetes cluster. I have manually set up the KUBECONFIG file in /var/lib/jenkins/.kube/config.
But my Jenkins job keeps giving the same error:
+ kubectl config --kubeconfig=/var/lib/jenkins/.kube/config view
error: error loading config file "/var/lib/jenkins/.kube/config": v1.Config.Contexts: \
[]v1.NamedContext: Clusters: []v1.NamedCluster: v1.NamedCluster.Name: Cluster: v1.Cluster.Server: \
CertificateAuthorityData: decode base64: illegal base64 data at input byte 47, error found in #10 byte of ... \
|ASXY9gkN$","server":|..., bigger context ...|"LS3tGS1PR0dJTiBDRVJUMLPAR0FURS0tLS0tCk1JSUV5gkN$", \
"server":"https://clsx-cloud-d734ef-0b|...
I copied the kube config file manually from my SSH-accessible account, i.e.
cat /home/username/.kube/config
You most likely copied truncated screen output from the terminal, or corrupted the file while editing it in e.g. nano.
The $ characters are not valid base64; they are an artifact of truncated line display in the terminal. Make sure you copy the real file data correctly.
For example:
xclip -sel clip < /home/username/.kube/config
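Another way to rule out copy-paste corruption entirely is to skip the clipboard and transfer the file directly; a sketch, with hostname and username as placeholders:
# copy the kubeconfig over SSH straight to the Jenkins home and fix ownership
scp username@your-host:/home/username/.kube/config /var/lib/jenkins/.kube/config
# adjust the owner if your Jenkins runs under a different account
chown jenkins:jenkins /var/lib/jenkins/.kube/config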
I have fixed this error by removing the files in this directory:
/var/lib/jenkins/.kube/
Note: take a backup of this directory first.

How to set the environment variable in supervisord

I have a supervisord config file like this:
[program:decrypt]
command=export KEYTOKEN=$(aws kms decrypt --ciphertext-blob fileb://<(echo %(ENV_TOKENENC)s | base64 -d) --output text --query Plaintext --region %(ENV_REGION)s | base64 -d )
I am passing the environment variables ENV_TOKENENC and ENV_REGION to the container, and I can echo those variables and confirm that the Docker container is getting them; the command to decrypt the KMS value also works on its own. But when I put the kms decrypt command in the supervisord file, it throws an error saying ('ENV_REGION') & ('ENV_CONSULTOKENENC') cannot be expanded.
Am I putting the right value in the supervisord file?
Setting an environment variable is easy, if you're setting it to a constant value:
[program:decrypt]
command=/usr/bin/env foo=bar baz=qux /path/to/something ...
or, with less overhead:
environment=foo="bar",baz="qux"
command=/path/to/something ...
However, dynamically generating that variable's value requires a shell:
[program:decrypt]
command=/bin/sh -c 'foo=$(generate-bar) /path/to/something'
Note that export is not actually needed here: writing var=value something as a single command exports var with the value value for the duration of something.
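Applied to the command from the question, that could look roughly like this; the program to launch (/path/to/your-app) is a placeholder, and /bin/bash is used because the <( ) process substitution is not available in plain sh:
; sketch only: supervisord expands %(ENV_...)s before the shell runs
[program:decrypt]
command=/bin/bash -c 'KEYTOKEN=$(aws kms decrypt --ciphertext-blob fileb://<(echo %(ENV_TOKENENC)s | base64 -d) --output text --query Plaintext --region %(ENV_REGION)s | base64 -d) /path/to/your-app'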

The make.sh for fastdht does not work. When running it, it shows "nm: /usr/lib/libc_r.so: No such file"

I want to install fastdht, so I downloaded the source code from GitHub:
https://github.com/happyfish100/fastdht
Following the INSTALL file, I run make.sh first:
./make.sh
However, it shows the following error messages.
[root@localhost fastdht]# sh make.sh
make.sh: line 142: warning: here-document at line 2 delimited by end-of-file (wanted `EOF')
make.sh: line 2: ./a.out: No such file or directory
nm: '/usr/lib/libc_r.so': No such file
nm: '/lib64/libc_r.so': No such file
nm: '/usr/lib64/libc_r.so': No such file
[root@localhost fastdht]#
What's the matter?
Maybe there are some formatting errors in make.sh.
I created a new file a.sh and typed the contents of make.sh into it. When I run a.sh, it works!
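If the formatting problem is Windows-style line endings (which can also cause the here-document warning), converting the script in place may be enough; a sketch:
# strip carriage returns that can break here-document parsing
sed -i 's/\r$//' make.sh
# or, if available:
dos2unix make.sh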

spark submit add multiple jars in classpath

I am trying to run a Spark program that needs multiple jar files; with only one jar I am not able to run it. I want to add both jar files, which are in the same location. I have tried the below, but it shows a dependency error:
spark-submit \
--class "max" maxjar.jar Book1.csv test \
--driver-class-path /usr/lib/spark/assembly/lib/hive-common-0.13.1-cdh5.3.0.jar
How can I add another jar file that is in the same directory?
I want to add /usr/lib/spark/assembly/lib/hive-serde.jar.
Just use the --jars parameter. Spark will share those jars (comma-separated) with the executors.
Specifying the full path for all additional jars works:
./bin/spark-submit --class "SparkTest" --master local[*] --jars /fullpath/first.jar,/fullpath/second.jar /fullpath/your-program.jar
Or add jars in conf/spark-defaults.conf by adding lines like:
spark.driver.extraClassPath /fullpath/first.jar:/fullpath/second.jar
spark.executor.extraClassPath /fullpath/first.jar:/fullpath/second.jar
You can use * to import all jars from a folder when adding the lines in conf/spark-defaults.conf:
spark.driver.extraClassPath /fullpath/*
spark.executor.extraClassPath /fullpath/*
I was trying to connect to MySQL from Python code executed with spark-submit.
I was using the HDP sandbox with Ambari. I tried a lot of options, such as --jars, --driver-class-path, etc., but none worked.
Solution
Copy the jar into /usr/local/miniconda/lib/python2.7/site-packages/pyspark/jars/
As of now I'm not sure if it's a solution or a quick hack, but since I'm working on a POC, it kind of works for me.
In Spark 2.3 you just need to set the --jars option. The file path should be prefixed with the scheme, i.e. file:///<absolute path to the jars>
E.g.: file:///home/hadoop/spark/externaljars/* or file:///home/hadoop/spark/externaljars/abc.jar,file:///home/hadoop/spark/externaljars/def.jar
Pass --jars with the path of jar files separated by , to spark-submit.
For reference:
--driver-class-path is used to mention "extra" jars to add to the "driver" of the spark job
--driver-library-path is used to "change" the default library path for the jars needed for the spark driver
--driver-class-path will only push the jars to the driver machine. If you want to send the jars to "executors", you need to use --jars
And to set the jars programmatically, set the following config:
spark.yarn.dist.jars with a comma-separated list of jars.
E.g.:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Spark config example") \
    .config("spark.yarn.dist.jars", "<path-to-jar/test1.jar>,<path-to-jar/test2.jar>") \
    .getOrCreate()
You can use --jars $(echo /Path/To/Your/Jars/*.jar | tr ' ' ',') to include an entire folder of jars.
So,
spark-submit --class com.yourClass \
--jars $(echo /Path/To/Your/Jars/*.jar | tr ' ' ',') \
...
For the --driver-class-path option you can use : as a delimiter to pass multiple jars.
Below is an example with the spark-shell command, but I guess the same should work with spark-submit as well:
spark-shell --driver-class-path /path/to/example.jar:/path/to/another.jar
Spark version: 2.2.0
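For spark-submit the same option should carry over; a sketch, reusing the class and program names from the earlier examples:
spark-submit --class "SparkTest" \
  --driver-class-path /path/to/example.jar:/path/to/another.jar \
  /fullpath/your-program.jar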
If you are using a properties file, you can add the following line there:
spark.jars=jars/your_jar1.jar,...
assuming that
<your root from where you run spark-submit>
|
|-jars
|  |-your_jar1.jar
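A properties file other than the default conf/spark-defaults.conf is then passed to spark-submit explicitly; a sketch, with my.properties, com.yourClass and your-app.jar as placeholder names:
spark-submit --properties-file my.properties --class com.yourClass your-app.jar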
