Why export env within tmux fails - environment-variables

A command like the following:
ssh hostname 'tmux new -d "export A=whatever && echo $A && while true; do date; sleep 1; done"'
After executing this command, if you ssh to hostname and attach to the tmux session, you'll see that $A is empty.

The $ sign should be escaped. The outer single quotes protect the string from your local shell, but the remote shell that ssh invokes expands $A inside the double quotes before tmux ever starts, so the command running inside the tmux session only sees the already-expanded (empty) value.
ssh hostname 'tmux new -d "export A=whatever && echo \$A && while true; do date; sleep 1; done"'
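To see which shell performs the expansion, a quick check without tmux is enough (just an illustration; it assumes A is not set on the remote host):
ssh hostname 'echo "unescaped: $A | escaped: \$A"'
# prints:  unescaped:  | escaped: $A
The unescaped $A is consumed by the remote login shell, so only the escaped form survives for the command that tmux runs.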

Related

Docker CMD not running application if joined with other applications

I'm working with Docker.
I have a web server that uses port 8080, a server application that uses port 102, and another application that uses port 500.
The web server (run.sh) starts without issues with the following Dockerfile:
FROM ubuntu:18.04
USER root
...
CMD sudo echo '192.168.10.106 host1' >> /etc/hosts && echo '192.168.10.107 host2' >> /etc/hosts && echo '192.168.10.108 host3' >> /etc/hosts; sh /home/webserver/run.sh
The server starts without any issues with the following Dockerfile:
FROM ubuntu:18.04
USER root
...
RUN chmod +x /home/server/examples/cpp/x86_64-linux/server && ln -s /home/server/examples/cpp/x86_64-linux/server /usr/local/bin/
...
CMD sudo echo '192.168.10.106 host1' >> /etc/hosts && echo '192.168.10.107 host2' >> /etc/hosts && echo '192.168.10.108 host3' >> /etc/hosts; server
The application on port 500 also starts without issues:
FROM ubuntu:18.04
USER root
...
CMD sudo echo '192.168.10.106 host1' >> /etc/hosts && echo '192.168.10.107 host2' >> /etc/hosts && echo '192.168.10.108 host3' >> /etc/hosts; nohup appScan -d -f /app/scan.conf &> scan.log
When I combine them, only the web server and the application on port 500 start; the server (port 102) won't execute. Checking with ps aux, the service doesn't appear:
FROM ubuntu:18.04
USER root
RUN chmod +x /home/server/examples/cpp/x86_64-linux/server && ln -s /home/server/examples/cpp/x86_64-linux/server /usr/local/bin/
CMD sudo echo '192.168.10.106 host1' >> /etc/hosts && echo '192.168.10.107 host2' >> /etc/hosts && echo '192.168.10.108 host3' >> /etc/hosts; nohup appScan -d -f /app/scan.conf &> scan.log; sh /home/webserver/run.sh && server
I also tried wrapping the server in a bash script and calling it from CMD, but it won't start:
#!/bin/bash
nohup server &> server.log &
Do you have any suggestions on how to run these three applications concurrently?
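One thing worth noting: in the combined CMD, sh /home/webserver/run.sh normally never returns, so the && server part is never reached. A minimal sketch of a wrapper script that starts all three and keeps the container alive (the script name and log paths are assumptions; the other paths come from the Dockerfiles above):
#!/bin/bash
# start-all.sh (hypothetical): launch every service in the background,
# then wait so the container's main process does not exit.
echo '192.168.10.106 host1' >> /etc/hosts
echo '192.168.10.107 host2' >> /etc/hosts
echo '192.168.10.108 host3' >> /etc/hosts
nohup appScan -d -f /app/scan.conf &> /var/log/scan.log &
server &> /var/log/server.log &
sh /home/webserver/run.sh &> /var/log/webserver.log &
wait -n   # returns as soon as any one service exits (bash 4.3+)
The Dockerfile would then end with something like CMD ["bash", "/start-all.sh"].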

How to make Docker keep running in the foreground and not exit, so that I can see the running log output

I want to make a Docker container run in the foreground so that I can see the log output. Currently I am using this command to run my container:
docker run -p 11110:11110 -p 11111:11111 -p 11112:11112 --name canal-server dolphinjiang/canal-server:v1.1.5
this is the Dockerfile of my project:
FROM centos:7
RUN cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo ZONE=\"Asia/Shanghai\" > /etc/sysconfig/clock
RUN rm -rf /etc/yum.repos.d/*.repo
COPY CentOS6-Base-163.repo /etc/yum.repos.d/
RUN yum clean all
RUN groupadd -g 2500 canal; useradd -u 2501 -g canal -d /home/canal -m canal
RUN echo canal:De#2018er | chpasswd; echo root:dockerroot | chpasswd
RUN yum -y update && yum -y install wget vi openssl.x86_64 glibc.x86_64 tar tar.x86_64 inetutils-ping net-tools telnet which file
RUN yum clean all
COPY jdk-8u291-linux-x64.tar.gz /opt
RUN tar -zvxf /opt/jdk-8u291-linux-x64.tar.gz -C /opt && \
rm -rf /opt/jdk-8u291-linux-x64.tar.gz && \
chmod -R 755 /opt/jdk1.8.0_291 && \
chown -R root:root /opt/jdk1.8.0_291
RUN echo 'export JAVA_HOME=/opt/jdk1.8.0_291' >> /etc/profile
RUN echo 'export JRE_HOME=$JAVA_HOME/jre' >> /etc/profile
RUN echo 'export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH' >> /etc/profile
RUN echo 'export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH' >> /etc/profile
RUN source /etc/profile
RUN yum install kde-l10n-Chinese -y
RUN yum install glibc-common -y
RUN localedef -c -f UTF-8 -i zh_CN zh_CN.utf8
ENV JAVA_HOME /opt/jdk1.8.0_291
ENV PATH $PATH:$JAVA_HOME/bin
ENV LANG zh_CN.UTF-8
ENV LC_ALL zh_CN.UTF-8
ADD canal-server /home/canal/
RUN chmod 755 /home/canal/bin
WORKDIR /home/canal/bin
RUN chmod 777 /home/canal/bin/restart.sh
RUN chmod 777 /home/canal/bin/startup.sh
RUN chmod 777 /home/canal/bin/stop.sh
RUN chmod 777 /home/canal/bin/config.sh
CMD /home/canal/bin/config.sh
this is the config.sh:
cat > /home/canal/conf/canal.properties <<- EOF
# register ip
canal.register.ip = ${HOSTNAME}.canal-server-discovery-svc-stable.testcanal.svc.cluster.local
# canal admin config
canal.admin.manager = canal-admin-stable:8089
canal.admin.port = 11110
canal.admin.user = admin
canal.admin.passwd = 4ACFE3202A5FF5CF467898FC58AAB1D615029441
# admin auto register
canal.admin.register.auto = true
canal.admin.register.cluster =
EOF
sh /home/canal/bin/restart.sh
and this is the restart.sh:
#!/bin/bash
args=$#
case $(uname) in
Linux)
bin_abs_path=$(readlink -f $(dirname $0))
;;
*)
bin_abs_path=$(cd $(dirname $0) ||exit ; pwd)
;;
esac
sh "$bin_abs_path"/stop.sh $args
sh "$bin_abs_path"/startup.sh $args
and this is the startup.sh:
#!/bin/bash
current_path=`pwd`
case "`uname`" in
Linux)
bin_abs_path=$(readlink -f $(dirname $0))
;;
*)
bin_abs_path=`cd $(dirname $0); pwd`
;;
esac
base=${bin_abs_path}/..
canal_conf=$base/conf/canal.properties
canal_local_conf=$base/conf/canal_local.properties
logback_configurationFile=$base/conf/logback.xml
export LANG=en_US.UTF-8
export BASE=$base
if [ -f $base/bin/canal.pid ] ; then
echo "found canal.pid , Please run stop.sh first ,then startup.sh" 2>&2
exit 1
fi
if [ ! -d $base/logs/canal ] ; then
mkdir -p $base/logs/canal
fi
## set java path
if [ -z "$JAVA" ] ; then
JAVA=$(which java)
fi
ALIBABA_JAVA="/usr/alibaba/java/bin/java"
TAOBAO_JAVA="/opt/taobao/java/bin/java"
if [ -z "$JAVA" ]; then
if [ -f $ALIBABA_JAVA ] ; then
JAVA=$ALIBABA_JAVA
elif [ -f $TAOBAO_JAVA ] ; then
JAVA=$TAOBAO_JAVA
else
echo "Cannot find a Java JDK. Please set either set JAVA or put java (>=1.5) in your PATH." 2>&2
exit 1
fi
fi
case "$#"
in
0 )
;;
1 )
var=$*
if [ "$var" = "local" ]; then
canal_conf=$canal_local_conf
else
if [ -f $var ] ; then
canal_conf=$var
else
echo "THE PARAMETER IS NOT CORRECT.PLEASE CHECK AGAIN."
exit
fi
fi;;
2 )
var=$1
if [ "$var" = "local" ]; then
canal_conf=$canal_local_conf
else
if [ -f $var ] ; then
canal_conf=$var
else
if [ "$1" = "debug" ]; then
DEBUG_PORT=$2
DEBUG_SUSPEND="n"
JAVA_DEBUG_OPT="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,address=$DEBUG_PORT,server=y,suspend=$DEBUG_SUSPEND"
fi
fi
fi;;
* )
echo "THE PARAMETERS MUST BE TWO OR LESS.PLEASE CHECK AGAIN."
exit;;
esac
str=`file -L $JAVA | grep 64-bit`
if [ -n "$str" ]; then
JAVA_OPTS="-server -Xms2048m -Xmx3072m -Xmn1024m -XX:SurvivorRatio=2 -XX:PermSize=96m -XX:MaxPermSize=256m -Xss256k -XX:-UseAdaptiveSizePolicy -XX:MaxTenuringThreshold=15 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError"
else
JAVA_OPTS="-server -Xms1024m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:MaxPermSize=128m "
fi
JAVA_OPTS=" $JAVA_OPTS -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8"
CANAL_OPTS="-DappName=otter-canal -Dlogback.configurationFile=$logback_configurationFile -Dcanal.conf=$canal_conf"
if [ -e $canal_conf -a -e $logback_configurationFile ]
then
for i in $base/lib/*;
do CLASSPATH=$i:"$CLASSPATH";
done
CLASSPATH="$base/conf:$CLASSPATH";
echo "cd to $bin_abs_path for workaround relative path"
cd $bin_abs_path
echo LOG CONFIGURATION : $logback_configurationFile
echo canal conf : $canal_conf
echo CLASSPATH :$CLASSPATH
$JAVA $JAVA_OPTS $JAVA_DEBUG_OPT $CANAL_OPTS -classpath .:$CLASSPATH com.alibaba.otter.canal.deployer.CanalLauncher 2>&1
echo $! > $base/bin/canal.pid
echo "cd to $current_path for continue"
cd $current_path
else
echo "canal conf("$canal_conf") OR log configration file($logback_configurationFile) is not exist,please create then first!"
fi
After I start the container, it exits automatically; canal does not start up and there is no log output. What should I do to make it run in the foreground and, once it has started successfully, switch it to the background? I also tried to run it as a daemon like this (to make the container run in the background and not exit):
docker run -it -d -p 11110:11110 -p 11111:11111 -p 11112:11112 --name canal-server canal/canal-server:v1.1.5
The process still exits automatically and the container does not start up.
Basically, you should get the point (based on your latest comment):
Docker runs a container around a single command, and when that command finishes, the container stops.
So to keep the container running continuously, that command itself must run indefinitely.
Also check this answer, which has more explanation:
Why docker exiting with code 0
One of the easiest solutions is to tail something that never ends, like:
tail -f /dev/null
Taken from here
You can use tail -f /dev/null to keep the container from stopping. Try this:
docker run -it -d -p 11110:11110 -p 11111:11111 -p 11112:11112 --name canal-server canal/canal-server:v1.1.5 tail -f /dev/null
See also this post.
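Applied to the Dockerfile above, a minimal sketch of this idea (the canal log path is an assumption, and it also assumes startup.sh backgrounds the java process, as its echo $! > canal.pid line suggests):
CMD /home/canal/bin/config.sh && tail -f /home/canal/logs/canal/canal.log
With this, config.sh writes the properties and calls restart.sh as before, and the tail keeps PID 1 in the foreground while streaming the log to docker logs. If you only need the container to stay alive, tail -f /dev/null works as well.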

Docker stop/start resets to initial passwords

I've just started using Docker by copy-pasting pre-made repos from GitHub.
Here is the scenario and steps:
I've passed the mysql/shell root passwords via environment variables (-e), and these passwords are set as expected inside entry.sh.
I then go inside the container and reset the shell/mysql root passwords to something different.
Now the main issue: each time I do docker stop + start from the host, it resets the passwords to the initial ones from step 1.
Please suggest changes so that it retains the modified passwords from step 2 even when I do docker start/stop.
The entry.sh and Dockerfile used can be checked in this GitHub repo.
Thanks.
I just noticed that entry.sh will always update the root password with $MYSQL_RANDOM_ROOT_PASSWORD on docker start. So, assuming we already persist /var/lib/mysql on the host, we can edit entry.sh a bit to only update the password when the data directory hasn't been initialized yet (the ibdata1 check below):
#!/bin/sh
# start apache
echo "Starting httpd"
httpd
echo "Done httpd"
# check if mysql data directory is nuked
# if so, install the db
echo "Checking /var/lib/mysql folder"
if [ ! -f /var/lib/mysql/ibdata1 ]; then
echo "Installing db"
mariadb-install-db --user=mysql --ldata=/var/lib/mysql > /dev/null
echo "Installed"
# from mysql official docker repo
if [ -z "$MYSQL_ROOT_PASSWORD" -a -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo >&2 'error: database is uninitialized and password option is not specified '
echo >&2 ' You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_RANDOM_ROOT_PASSWORD'
exit 1
fi
# random password
if [ ! -z "$MYSQL_RANDOM_ROOT_PASSWORD" ]; then
echo "Using random password"
MYSQL_ROOT_PASSWORD="$(pwgen -1 32)"
echo "GENERATED ROOT PASSWORD: $MYSQL_ROOT_PASSWORD"
echo "Done"
fi
tfile=`mktemp`
if [ ! -f "$tfile" ]; then
return 1
fi
cat << EOF > $tfile
USE mysql;
DELETE FROM user;
FLUSH PRIVILEGES;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY "$MYSQL_ROOT_PASSWORD" WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
UPDATE user SET password=PASSWORD("") WHERE user='root' AND host='localhost';
FLUSH PRIVILEGES;
EOF
echo "Querying user"
/usr/bin/mysqld --user=root --bootstrap --verbose=0 < $tfile
rm -f $tfile
echo "Done query"
# setting ssh root password
if [ -z "$SSH_ROOT_PASSWORD" ]; then
echo >&2 'You need to specify SSH_ROOT_PASSWORD'
exit
fi
# Set root password to root, format is 'user:password'.
echo "root:$SSH_ROOT_PASSWORD" | chpasswd
fi;
echo "Generating ssh keys"
if [ ! -f "/etc/ssh/ssh_host_rsa_key" ]; then
# generate fresh rsa key
ssh-keygen -f /etc/ssh/ssh_host_rsa_key -N '' -t rsa
fi
if [ ! -f "/etc/ssh/ssh_host_dsa_key" ]; then
# generate fresh dsa key
ssh-keygen -f /etc/ssh/ssh_host_dsa_key -N '' -t dsa
fi
#prepare run dir
if [ ! -d "/var/run/sshd" ]; then
mkdir -p /var/run/sshd
fi
ssh-keygen -A
/usr/sbin/sshd
# start mysql
# nohup mysqld_safe --skip-grant-tables --bind-address 0.0.0.0 --user mysql > /dev/null 2>&1 &
echo "Starting mariadb database"
exec /usr/bin/mysqld --user=root --bind-address=0.0.0.0
Basically, we just moved this block of code into the if block above it, so it only runs when the database is first installed.
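For the "persist /var/lib/mysql on the host" assumption, a hedged example of the run command (the image name, container name, and host path are placeholders; the environment variable names come from entry.sh above):
# The host directory keeps /var/lib/mysql, so the changed passwords
# survive docker stop/start and even re-creating the container.
docker run -d --name mycontainer \
  -v /srv/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=initialpass \
  -e SSH_ROOT_PASSWORD=initialssh \
  myimage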

How to delete offline slaves/nodes from Jenkins in SWARM mode

How do I delete a node from Jenkins?
Jenkins operates in Docker Swarm mode.
When I try to remove the offline container, Swarm keeps generating new containers.
You can try:
Using Jenkins Pipeline:
node("master") {
println "Cleaning up offline slaves..."
hudson.model.Hudson.instance.slaves.each {
if(it.getComputer().isOffline()) {
println "Deleting ${it.name}"
it.getComputer().doDoDelete()
}
}
println "Done."
}
Using a shell script:
#!/bin/bash
# This script should be run on the Jenkins master. We set up a job in Jenkins to run this once a week on master.
function jenkins-cli {
java -jar /var/cache/jenkins/war/WEB-INF/jenkins-cli.jar -s http://localhost:8080 "$@"
}
for slave in slavename1 slavename2 slavename3; do
if [[ `curl -s "http://localhost:8080/computer/${slave}/api/xml?xpath=*/offline/text()"` = "true" ]]; then
echo "$slave is already offline. Skipping cleanup"
else
echo "Cleaning up docker on $slave"
echo "Taking $slave offline"
jenkins-cli offline-node $slave -m "Scheduled docker cleanup is running" && \
echo "Waiting on $slave to go offline" && \
jenkins-cli wait-node-offline $slave && \
while [[ `curl -s "http://localhost:8080/computer/${slave}/api/xml?xpath=*/idle/text()"` != "true" ]]; do echo "Waiting on $slave to be idle" && sleep 5; done && \
echo "Running cleanup_docker on $slave" && \
ssh -o "StrictHostKeyChecking no" $slave -i /var/lib/jenkins/.ssh/id_rsa "sudo /usr/local/bin/cleanup_docker"
echo "Bringing $slave back online"
jenkins-cli online-node $slave
fi
done
There are a lot of snippets out there for listing existing nodes in Python; removing a slave node works like this:
import base64
import ssl
import urllib.request

request = urllib.request.Request("https://insert_jenkins_url/computer/insert_node_name/doDelete", data=bytes("", "ascii"))
#according to https://stackoverflow.com/a/28052583/4609258 the following is ugly
context = ssl._create_unverified_context()
base64string = base64.b64encode(bytes('%s:%s' % ('insert_user_name', 'insert_api_token_with_roughly_35_characters'),'ascii'))
request.add_header("Authorization", "Basic %s" % base64string.decode('utf-8'))
with urllib.request.urlopen(request, context=context) as url:
print(str(url.read().decode()))

Bash script for psql

Here is the shell script I am trying to run. The psql command works when run directly, but I get errors when it is run from the script.
#!/bin/bash
# sets CE IP addresses to act as LUS on pgsql
#Checks that user is logged in as root
if [ $(id -u) = "0" ]; then
#Asks user for IP of CE1
echo -n "Enter the IP address of your first CE's management module > "
read CE1
$(psql -U asm -d asm -t -c) echo """update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '$CE1';"""
echo "LUS seetings have been completed"
else
#Warns user of error and sends status to stderr
echo "You must be logged in as root to run this script." >&2
exit 1
fi
Here is the error:
psql: option requires an argument -- 'c'
Try "psql --help" for more information.
update zr_fsinstance set lu_order='1' where managementaccesspointhostname = '10.134.39.139';
Instead of
$(psql -U asm -d asm -t -c) echo
"""update zr_fsinstance
set lu_order='1' where managementaccesspointhostname = '$CE1';"""
Try:
$(psql -U asm -d asm -t -c "UPDATE zr_fsinstance set lu_order='1'
where managementaccesspointhostname = '${CE1}';")
OR (if you prefer):
`psql -U asm -d asm -t -c "UPDATE zr_fsinstance set lu_order='1'
where managementaccesspointhostname = '${CE1}';"`
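If you don't need to capture the output, a plainer variant (a suggestion, not part of the original answer) is to call psql directly instead of wrapping it in command substitution, keeping the single quotes so the SQL compares against a string literal:
# Runs the UPDATE directly against the asm database.
psql -U asm -d asm -t -c "UPDATE zr_fsinstance SET lu_order='1' WHERE managementaccesspointhostname = '${CE1}';"
Command substitution ($( ... ) or backticks) runs the command and then tries to execute whatever it prints, which is rarely what you want here.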
