I'm having difficulty getting my Docker image to execute its entry point with the arguments I'm specifying.
In my Dockerfile I have the following:
...
EXPOSE 8123
WORKDIR /bin_vts
VOLUME /bin_vts
ENTRYPOINT ["/bin_vts/vts", "$(hostname -I | awk '{print $1}')", "8123"]
I want my program to take as an argument the output of hostname -I | awk '{print $1}' (an IP address). I have tested this on my local machine and it works fine when I use /bin_vts/vts $(hostname -I | awk '{print $1}') 8123
However, when I use this in Docker, my program reports that it was passed the literal string "$(hostname -I | awk '{print $1}')" instead of the expected IP address.
I'm not sure what I'm doing wrong. I've tried using a script, but that fails with permission denied. This is getting deployed to ECS using Fargate; I tried it locally as well, and it fails in both places.
Thanks!
Something like this should work:
ENTRYPOINT ["/bin/bash", "-c", "exec /bin_vts/vts \"$(hostname -I | awk '{print $1}')\" 8123"]
The original entrypoint
ENTRYPOINT ["/bin_vts/vts", "$(hostname -I | awk '{print $1}')", "8123"]
is the exec form, so Docker passes the literal string $(hostname -I | awk '{print $1}') to vts; no shell ever runs, so nothing evaluates the command substitution. Running the command through bash -c (or using the shell form of ENTRYPOINT) lets the substitution be evaluated before vts starts.
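If you would rather keep a separate script, the "permission denied" you saw usually just means the script isn't executable. A minimal sketch, assuming a file named entrypoint.sh next to the Dockerfile and that hostname -I is available in the image (as it was locally):
#!/bin/sh
# entrypoint.sh (assumed name): resolve the container's first IP at runtime, then exec the real binary
IP="$(hostname -I | awk '{print $1}')"
exec /bin_vts/vts "$IP" 8123
and in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]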
Related
I have a job like this:
It is parameterized with ${GIT_URL} and ${REMOTE_IP}.
It clones the code from the git URL and packages my project as a jar.
It scps the jar file to the remote IP and then starts it as a server.
I am using the Publish Over SSH Plugin.
The problem is that I have to add every server to my job configuration.
So is it possible to execute a shell step with a parameterized remote IP, like this?
#!/bin/sh
scp ${APP_NAME}.jar root@${REMOTE_IP}:/root/${APP_NAME}.jar
ssh root@${REMOTE_IP}
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar ${APP_NAME}.jar &
Yes. Use "$REMOTE_IP" and it will resolve to the parameter value.
#!/bin/sh
scp ${APP_NAME}.jar root@"$REMOTE_IP":/root/${APP_NAME}.jar
ssh root@"$REMOTE_IP"
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar ${APP_NAME}.jar &
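Note that, as written, only the scp and the bare ssh line touch the remote host; the cd/ps/nohup lines after ssh run on the Jenkins agent. One way to actually run them remotely, keeping the same variables, is to feed them to ssh on stdin (a sketch):
#!/bin/sh
scp ${APP_NAME}.jar root@"$REMOTE_IP":/root/${APP_NAME}.jar
# Everything between the EOF markers is sent to the remote shell's stdin;
# \$2 is escaped so it is expanded by the remote awk, not by the local shell
ssh root@"$REMOTE_IP" 'sh -s' <<EOF
cd /root
ps -ef | grep ${APP_NAME} | grep -v grep | awk '{print \$2}' | xargs kill
nohup java -jar ${APP_NAME}.jar > /dev/null 2>&1 &
EOF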
I solved this in another way.
#!/bin/sh
scp ${APP_NAME}.jar root@${REMOTE_IP}:/root/${APP_NAME}.jar
ssh root@${REMOTE_IP} "sh -s" -- < /opt/jenkins/my.sh ${REMOTE_IP} ${APP_NAME}
So my.sh is a local shell file that defines how to start the jar as a server with the parameterized IP.
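For reference, my.sh could look roughly like this; the exact contents weren't posted, so everything below is an assumption based on the earlier snippet:
#!/bin/sh
# my.sh (contents assumed): runs on the remote host via "sh -s",
# receiving the IP and app name as positional arguments $1 and $2
REMOTE_IP="$1"
APP_NAME="$2"
cd /root
# stop any previous instance of the app, then start the new jar in the background
ps -ef | grep "$APP_NAME" | grep -v grep | awk '{print $2}' | xargs kill
nohup java -jar "$APP_NAME".jar > "$APP_NAME".log 2>&1 &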
I've been trying to execute bash on a running docker container that has a specific name, as follows. --(1)
docker ps | grep somename | awk '{print $1 " bash"}' | xargs -I'{}' docker exec -it '{}'
but it didn't work, and it shows a message like
"docker exec" requires at least 2 argument(s)
When I tried the following command instead --(2)
docker ps | grep somename | awk '{print $1 " bash"}' | xargs docker exec -it
it shows another error message:
the input device is not a TTY
But when I use $() (a subshell) it can be accomplished, and I cannot understand why it does not work with the two commands (1) and (2) above (using xargs).
Could anybody explain why this happens?
I really appreciate any help you can provide in advance =)
EDIT 1:
I know how to accomplish my goal in another way, like
docker exec -it $(docker ps | grep perf | awk '{print $1 " bash"}' )
But I'm just curious about why those commands are not working =)
First question
"docker exec" requires at least 2 argument(s)
In the last pipe command, the standard input of xargs is, for example, 42a9903486f2 bash, and you used xargs with the -I (replace-string) option.
With -I, the whole input line is inserted as a single argument, so docker sees 42a9903486f2 bash as one first argument, with no second argument.
The example below is perhaps what you expected.
docker ps | grep somename | awk '{print $1 " bash"}' | xargs bash -c 'docker exec -it $0 $1'
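To see the -I behaviour in isolation, here's a throwaway illustration with printf instead of docker:
echo "42a9903486f2 bash" | xargs -I'{}' printf '[%s]\n' '{}'
# [42a9903486f2 bash]   <- the whole line is one argument
echo "42a9903486f2 bash" | xargs printf '[%s]\n'
# [42a9903486f2]
# [bash]                <- without -I the line is split into two arguments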
Second question
the input device is not a TTY
xargs executes the command in a new child process, so you need to reopen stdin to the child process for interactive communication (on macOS this is the -o option).
docker ps | grep somename | awk '{print $1 " bash"}' | xargs -o docker exec -it
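If your xargs has no -o option, a common workaround is to reattach the terminal yourself by redirecting /dev/tty (a sketch, using the same pipeline):
docker ps | grep somename | awk '{print $1}' | xargs -I'{}' sh -c 'docker exec -it "$1" bash < /dev/tty' _ '{}'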
This worked for me:
sudo docker ps -q | xargs -I'{}' docker exec -t {} du -hs /tmp/
The exec command you end up running is something like this:
docker exec -it 'a1b2c3d4 bash'
And that is only one argument, not two: with -I, xargs inserts the whole input line as a single argument (removing the quotes around '{}' makes no difference, since the shell strips them before xargs runs). Print only the container ID in awk and pass bash as a separate word after {} instead:
... | awk '{print $1}' | xargs -o -I'{}' docker exec -it {} bash
(the -o is needed for the TTY, as explained above). Then you will exec properly with two arguments:
docker exec -it a1b2c3d4 bash
                ^^^^^^^^ ^^^^
                first arg  second arg
The following command does not correctly capture the 16714 from 16714 ssh -f -N -T -R3300:localhost:22
egrep -o '^[^ ]+(?= .*[R]3300:localhost:22)'
(However, swapping to grep does work if you use the -P flag. I was expecting egrep to be able to handle this.)
grep -P forces grep to use the Perl regexp engine.
egrep is the same as grep -E; it forces grep to use the ERE (extended regular expression) engine, which does not support lookahead.
You can find a quick reference of the differences between Perl and ERE (and others) here: http://www.greenend.org.uk/rjk/tech/regexp.html
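So with GNU grep, the lookahead version works once you switch engines:
$ echo "16714 ssh -f -N -T -R3300:localhost:22" | grep -oP '^[^ ]+(?= .*R3300:localhost:22)'
16714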
To handle this with POSIX grep, you would use grep to isolate the lines of interest and then use cut to isolate the fields of interest:
$ echo "16714 ssh -f -N -T -R3300:localhost:22" | grep 'R3300:localhost:22' | cut -d' ' -f1
16714
Or, just use awk:
$ echo "16714 ssh -f -N -T -R3300:localhost:22" | awk '/R3300:localhost:22/{print $1}'
16714
I have been trying to get the container ID of a docker instance using the docker ps command. When I filter by name, it works fine for me:
sudo -S docker ps -q --filter="name=romantic_rosalind"
Resulting container ID:
3c7e865f1dfb
But when I filter using the image, I'm getting the container IDs of all instances:
sudo -S docker ps -q --filter="image=docker-mariadb:1.0.1"
Resulting container IDs:
5570dc09b581
3c7e865f1dfb
But I wish to get only the container ID of mariadb.
How do I get the container ID of a docker process using a filter on the image?
Use "ancestor" instead of "image" that works great. Example:
sudo -S docker ps -q --filter ancestor=docker-mariadb:1.0.1
The Docker team may have added it in a recent version:
http://docs.docker.com/engine/reference/commandline/ps/
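The ancestor filter composes nicely with other commands once only the IDs come back, for example (same image name as above):
# stop every container started from that image
docker ps -q --filter ancestor=docker-mariadb:1.0.1 | xargs docker stop
# or follow the logs of the first match
docker logs -f "$(docker ps -q --filter ancestor=docker-mariadb:1.0.1 | head -n 1)"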
You can use awk and grep to filter for a specific container ID.
For example:
docker ps | grep "docker-mariadb:1.0.1" | awk '{ print $1 }'
This will print the ID of your container.
docker ps -a | awk '{ print $1,$2 }' | grep imagename | awk '{print $1 }'
This is perfect. If you need to, you can also filter for containers with a particular status, e.g. only running ones, like below:
docker ps -a --filter status=running | awk '{ print $1,$2 }' | grep rulsoftreg:5000/mypayroll/cisprocessing-printdocsnotifyconsumer:latest | awk '{print $1 }'
Various other filter options can be explored here
https://docs.docker.com/v1.11/engine/reference/commandline/ps/
With the docker container ls command for listing containers (which is a replacement for docker ps), the solution would be:
docker container ls | grep "docker-mariadb:1.0.1" | awk '{ print $1 }'
You can also match the image with any tag (if needed) like this:
docker container ls | grep "docker-mariadb:" | awk '{ print $1 }'
See https://docs.docker.com/engine/reference/commandline/container_ls/
Example command:
docker container ls -af 'name=mysql' --format '{{.ID}}'
The following is accurate, since it matches the image name exactly:
docker ps --all --format='{{json .}}' | jq -c '. | select( .Image=="docker-mariadb:1.0.1" )'
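If you only want the ID out of that JSON, jq can print the field directly (same image name assumed):
docker ps --all --format='{{json .}}' | jq -r 'select(.Image=="docker-mariadb:1.0.1") | .ID'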
The docker client has a very useful flag for docker ps, -l, which shows information for the most recently created container. However, all other docker commands require providing either a CONTAINER ID or NAME.
Is there any nice trick that would allow me to call:
docker logs -f -l
instead of:
docker logs -f random_name
You can use docker logs -f `docker ps -ql`
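The same substitution works for any other subcommand that wants a container ID, for example (assuming the latest container is still running, in the exec case):
docker inspect $(docker ps -ql)
docker exec -it $(docker ps -ql) bash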
For the last container
docker ps -n 1
or variants such as
docker ps -qan 1
can be handy
After a while playing with the docker tutorial, I created a small set of aliases:
alias docker_last="docker ps -l | tail -n +2 | awk '{ print \$(NF) }' | xargs docker"
alias docker_all="docker ps -a | tail -n +2 | awk '{ print \$(NF) }' | xargs docker"
alias docker_up="docker ps | tail -n +2 | awk '{ print \$(NF) }' | xargs docker"
alias docker_down="docker ps -a | tail -n +2 | grep -v Up | awk '{ print \$(NF) }' | xargs docker"
These allow me to call a command on the last, all, up, and down containers:
docker_last logs # Display logs from last created container
docker_down rm # Remove all stopped containers
docker_up stop # Stop all running containers
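Since aliases simply append their arguments at the end, shell functions express the same thing a little more explicitly (a sketch with the same behaviour):
# run any docker subcommand against the most recently created container
docker_last() { docker ps -l | tail -n +2 | awk '{ print $NF }' | xargs docker "$@"; }
# run any docker subcommand against all running containers
docker_up() { docker ps | tail -n +2 | awk '{ print $NF }' | xargs docker "$@"; }
docker_last logs    # same usage as the aliases
docker_up stop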