Is it possible to execute shell with parameterized remote ip? - jenkins

I have a job like this:
parameterized ${GIT_URL} and ${REMOTE_IP}.
clone the code from the git url and package my project as a jar
scp the jar file to the remote ip, and then start it as a server.
I am using Publish Over SSH Plugin.
The problem is, I have to add every server to my job configuration.
So is it possible to execute shell with parameterized remote ip like this?
#!/bin/sh
scp ${APP_NAME}.jar root@${REMOTE_IP}:/root/${APP_NAME}.jar
ssh root@${REMOTE_IP}
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar ${APP_NAME}.jar &

Yes. Use "$REMOTE_IP" to resolve it to the parameter value.
#!/bin/sh
scp ${APP_NAME}.jar root@"$REMOTE_IP":/root/${APP_NAME}.jar
ssh root@"$REMOTE_IP"
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar ${APP_NAME}.jar &
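Note that, as written, only the scp line touches the remote host; the commands after ssh root@"$REMOTE_IP" run locally once the interactive session ends. A minimal sketch (same parameters, assuming the restart commands are meant to run remotely) that passes them to ssh in a single invocation:
#!/bin/sh
scp "${APP_NAME}.jar" root@"$REMOTE_IP":/root/"${APP_NAME}.jar"
# pass the restart commands to ssh so they execute on the remote host
ssh root@"$REMOTE_IP" "cd /root &&
  ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print \$2}' | xargs kill;
  nohup java -jar ${APP_NAME}.jar > ${APP_NAME}.log 2>&1 &"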

I solved this in another way.
#!/bin/sh
scp ${APP_NAME}.jar root@${REMOTE_IP}:/root/${APP_NAME}.jar
ssh root@${REMOTE_IP} "sh -s" -- < /opt/jenkins/my.sh ${REMOTE_IP} ${APP_NAME}
So my.sh is a local shell script which defines how to start the jar as a server with the parameterized IP.
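my.sh itself is not shown; a minimal sketch of what such a script could look like (the argument order and the /root paths are assumptions) is:
#!/bin/sh
# my.sh, run on the remote host via: ssh root@<ip> "sh -s" -- < my.sh <ip> <app>
REMOTE_IP=$1   # first positional argument (assumed)
APP_NAME=$2    # second positional argument (assumed)
cd /root || exit 1
# stop any running instance of the application
ps -ef | grep "${APP_NAME}" | grep -v grep | awk '{print $2}' | xargs kill
# start the new jar detached from the ssh session
nohup java -jar "${APP_NAME}.jar" > "${APP_NAME}.log" 2>&1 &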

Related

Can't get shell script that works directly working in crontab

I've written a script to backup my docker mysql containers:
export $(grep -v '^#' .env | xargs -d '\n')
filename=$(date +'%Y%m%d_%H%M%S')
docker-compose exec mysql bash -c "mysqldump --user=$MYSQL_USERNAME --password='$MYSQL_PASSWORD' --ignore-table=$MYSQL_DATABASE.forums_readData_forums_c --ignore-table=$MYSQL_DATABASE.forums_readData_newPosts $MYSQL_DATABASE | gzip > /tmp/$filename.gz"
mysql_container=$(docker ps | grep -E 'mysql' | awk '{ print $1 }')
docker cp $mysql_container:/tmp/$filename.gz $BACKUP_DIR/mysql/
docker-compose exec mysql rm /tmp/$filename.gz
sudo find $BACKUP_DIR/mysql/* -mtime +30 -exec rm {} \;
But when I add it to the crontab, I get the error "the input device is not a TTY". That is coming from the docker-compose exec command, even though there is no -it flag. When I run this script directly from the shell with ./backup.sh, it works fine.
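cron runs the script without a terminal, and docker-compose exec allocates a pseudo-TTY by default even without -it; passing -T turns that allocation off. A sketch of the two affected lines from the script above with the flag added (the mysqldump options are unchanged):
# -T disables pseudo-TTY allocation, which is not available under cron
docker-compose exec -T mysql bash -c "mysqldump --user=$MYSQL_USERNAME --password='$MYSQL_PASSWORD' --ignore-table=$MYSQL_DATABASE.forums_readData_forums_c --ignore-table=$MYSQL_DATABASE.forums_readData_newPosts $MYSQL_DATABASE | gzip > /tmp/$filename.gz"
docker-compose exec -T mysql rm /tmp/$filename.gz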

Passing arguments to Docker entry point

I'm having difficulty getting my docker image execute its entry point with the arguments I'm specifying.
In my docker file I have the following:
...
EXPOSE 8123
WORKDIR /bin_vts
VOLUME /bin_vts
ENTRYPOINT ["/bin_vts/vts", "$(hostname -I | awk '{print $1}')", "8123"]
I want my program to take as an argument the output of hostname -I | awk '{print $1}' (an IP address). I have tested this on my local machine and it works fine when I use /bin_vts/vts $(hostname -I | awk '{print $1}') 8123
However when I use this in docker my program tells me that I'm passing "$(hostname -I | awk '{print $1}')" instead of the expected ip address.
I'm not sure what I'm doing wrong. I've tried using a script but that says permission denied. This is getting deployed to ECS using Fargate and I tried it locally as well and in both places it fails.
Thanks!
Something like this should work:
ENTRYPOINT ["/bin/sh", "-c", "exec /bin_vts/vts \"$(hostname -I | awk '{print $1}')\" 8123"]
The original entrypoint
ENTRYPOINT ["/bin_vts/vts", "$(hostname -I | awk '{print $1}')", "8123"]
is in exec form, which does not go through a shell, so it passes the literal argument $(hostname -I | awk '{print $1}') to vts. By running the command through sh -c, the command substitution is evaluated at container start, before it is passed to vts.
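Since the question also mentions a wrapper script failing with permission denied, another option is a small wrapper made executable in the image (a sketch; the file name entrypoint.sh and its location are assumptions):
#!/bin/sh
# entrypoint.sh: resolve the container's IP at start time, then exec the real binary
exec /bin_vts/vts "$(hostname -I | awk '{print $1}')" 8123
with the corresponding Dockerfile lines:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]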

Docker issue create directory

I'm using a shell script that tries to create a directory inside a running container, but it fails with an error saying the binary file was not found.
Here is an example script:
#!/bin/sh
set -x
CONTAINER_ID=`docker ps | grep postgres | awk '{print $1}'`
docker exec -it $CONTAINER_ID bash mkdir /backup
Try this:
#!/bin/sh
set -x
CONTAINER_ID=`docker ps | grep postgres | awk '{print $1}'`
docker exec -it $CONTAINER_ID sh -c "mkdir /backup"
The sh -c "mkdir /backup" should work.
If your docker image has bash inside it, try bash -c "mkdir /backup" instead.
I tried it from my end and got the desired result.
$ sh script.sh
+ docker ps
+ awk '{print $1}'
+ grep inspiring_sinoussi
+ CONTAINER_ID=08a35fa3c040
+ docker exec -it 08a35fa3c040 sh -c 'mkdir /backup'
$ docker exec -it 08a35fa3c040 sh
/ # ls / | grep backup
backup
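As an aside, the grep/awk pipeline can be replaced with docker's own filtering, which avoids matching unrelated containers (the name filter postgres mirrors the question's grep pattern):
#!/bin/sh
set -x
# -q prints only the container ID; --filter restricts it to containers whose name matches
CONTAINER_ID=$(docker ps -q --filter "name=postgres")
# no TTY is needed just to create a directory; -p makes the command idempotent
docker exec "$CONTAINER_ID" sh -c "mkdir -p /backup"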

cannot apply sed to the stdout within a docker image

I have a Dockerfile whose entrypoint is an s2i/bin/run script:
#!/bin/bash
export_vars=$(cgroup-limits); export $export_vars
exec /opt/app-root/services.sh
The services.sh script runs php-fpm and nginx:
php-fpm 2>&1
nginx -c /opt/app-root/etc/conf.d/nginx/nginx.conf
# this echo to stdout is needed, otherwise nothing shows up on the docker run output
echo date 2>&1
The php scripts log to stderr, so the script uses 2>&1 to redirect that to stdout, which is needed for the log aggregator.
I want to run sed or awk over the log output. Yet if I try:
php-fpm 2>&1 | sed 's/A/B/g'
or
exec /opt/app-root/services.sh | sed 's/A/B/g'
Then nothing shows up when I run the container. Without the pipe to sed the output of php-fpm shows up as the output of docker run okay.
Is there a way to sed the output of php-fpm ensuring that the output makes it to the output of docker?
Edit: Note that I tried the obvious | sed 's/A/B/g' in both places, and also tried running the pipe in a subshell $(stuff | sed 's/A/B/g') in both places. Neither works, so this seems to be a Docker or s2i issue.
Try keeping sed arguments in double quotes.
php-fpm 2>&1 | sed "s/A/B/g"
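If quoting alone does not change anything, another thing worth checking (an assumption, not something the question confirms) is pipe buffering: when its stdout is a pipe rather than a TTY, sed buffers output in blocks, so lines from a long-running process may appear very late or not at all. With GNU sed the buffering can be switched off:
# -u (--unbuffered) makes GNU sed flush each line as it is processed
php-fpm 2>&1 | sed -u 's/A/B/g'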

Running `bash` using docker exec with xargs command

I've been trying to execute bash on a running docker container which has a specific name, as follows (1):
docker ps | grep somename | awk '{print $1 " bash"}' | xargs -I'{}' docker exec -it '{}'
but it didn't work, and it showed a message like
"docker exec" requires at least 2 argument(s)
When I tried the command as follows (2):
docker ps | grep somename | awk '{print $1 " bash"}' | xargs docker exec -it
it showed another error message like
the input device is not a TTY
But when I tried using $() (a subshell), it worked. I cannot understand why it does not work with the two commands (1) and (2) above (using xargs).
Could any body explain why those happen?
I really appreciate any help you can provide in advance =)
EDIT 1:
I know how to accomplish my goal in another way, like
docker exec -it $(docker ps | grep perf | awk '{print $1 " bash"}' )
But I'm just curious why those commands are not working =)
First question
"docker exec" requires at least 2 argument(s)
In the last pipe command, the standard input of xargs is, for example, 42a9903486f2 bash, and you used xargs with the -I (replace-string) option.
So docker sees 42a9903486f2 bash as a single first argument, with no second argument.
The example below is probably what you expected:
docker ps | grep somename | awk '{print $1 " bash"}' | xargs bash -c 'docker exec -it $0 $1'
Second question
the input device is not a TTY
xargs executes the command in a new child process, so you need to reopen stdin to the child process for interactive communication (on macOS, the -o option).
docker ps | grep somename | awk '{print $1 " bash"}' | xargs -o docker exec -it
This worked for me:
sudo docker ps -q | xargs -I'{}' docker exec -t {} du -hs /tmp/
The exec command you run is something like this:
docker exec -it 'a1b2c3d4 bash'
And that is only one argument, not two. You need to remove the quotes around the argument to docker exec.
... | xargs -I'{}' docker exec -it {}
Then you will exec properly with two arguments.
docker exec -it a1b2c3d4 bash
                ^^^^^^^^ ^^^^
                first    second
                arg      arg
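Putting the two answers together, a version of command (1) that passes the container ID and bash as two separate arguments, and reopens a TTY where the xargs implementation supports -o, could look like:
docker ps | grep somename | awk '{print $1}' | xargs -o -I'{}' docker exec -it '{}' bash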
