how to run command in docker using a script outside the container - docker

In my .mk file I have done the following:
.PHONY: Test
Test:
	#echo Starting Docker container
	docker container start XXX;
	docker exec -it -w /home/TA/G/T/P XXX bash
	docker exec /home/CO/CO_ANAlysis/bin/c-build --dir results --encoding UTF-8 make
When I run the target, it executes the first two commands, but once the container starts, the command "docker exec /home/CO/CO_ANAlysis/bin/c-build --dir results --encoding UTF-8 make" doesn't execute.
And if I run this directly from my terminal:
docker exec -it -w /home/TA/G/T/P SDIO bash /home/CO/CO_ANAlysis/bin/c-build --dir results --encoding UTF-8 make
it shows me:
/home/CO/CO_ANAlysis/bin/c-build: /home/CO/CO_ANAlysis/bin/c-build: cannot execute binary file
How can I, in a .mk file, run commands inside a Docker container?
Thanks

Problem Solved
docker container start XXX;
docker exec -it -w /home/TA/G/T/P XXX /home/CO/CO_ANAlysis/bin/c-build --dir results --encoding UTF-8 make
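For reference, a minimal sketch of how this could look as a .mk target, assuming the container name XXX and the c-build path from the question above (the interactive bash step is dropped so the recipe does not block, and -it is omitted because make recipes usually run without a TTY):
.PHONY: Test
Test:
	docker container start XXX
	docker exec -w /home/TA/G/T/P XXX /home/CO/CO_ANAlysis/bin/c-build --dir results --encoding UTF-8 make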

Related

Any commands hang inside docker container

Any command hangs the terminal inside the docker container.
I log in to the container with docker exec -t php-zts /bin/bash
and then type any elementary command (date, ls, cd /, etc.).
The command hangs.
When I press Ctrl+C I go back to the host machine.
But if I run a command non-interactively, without entering the container's shell, it works normally:
docker exec -t php-zts date
Wed Jan 26 00:04:38 UTC 2022
tty is enabled in docker-compose.yml
docker system prune and other cleanups do not help.
I can't identify the problem and have been racking my brain. Please help :(
The solution is to use the -i/--interactive flag together with -t (the docker exec -t above is missing it, so STDIN is never attached). Here is the relevant section of the documentation:
--interactive, -i    Keep STDIN open even if not attached
You can run your container using -i for interactive and -t for a TTY, which will allow you to navigate and execute commands inside the container:
docker run -it --rm alpine
On the other hand, you can run the container with docker run and then execute commands inside that container like so:
tail -f /dev/null will keep your container running.
-d will run the command in the background.
docker run --rm -d --name container1 alpine tail -f /dev/null
or
docker run --rm -itd --name container1 alpine sh # You can use -id or -td or -itd
This will allow you to run commands inside the container.
You can choose sh, bash, or any other shell the image provides:
docker exec -it container1 sh
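As a usage sketch building on the container1 example above, you can then run one-off commands against the kept-alive container and stop it when done (because of --rm, stopping also removes it):
docker exec container1 date
docker stop container1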

Running shell script from PC in running Docker

I have pulled a docker image and the container is running successfully. But I want to run a shell script inside the running container. The shell script is located on my hard disk. I am unable to find out which command to use and how to give the path of the shell file so that it can be executed in the running container.
Please guide me.
Regards
TL;DR
There are two ways that could work in your case.
You can run a one-liner script using docker exec with sh/bash and the -c argument:
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
You can copy the shell script into the container using docker cp and then run it in the container's context:
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh
Precaution
Not all containers let you run shell scripts in their context. You can check by executing a simple shell command in the container:
docker exec -i <your_container_id> echo "Shell works"
For future reference, check the section Understand how CMD and ENTRYPOINT interact in the Docker documentation.
Docker Exec One-liner
docker exec -i <your_container_id> sh -c 'sh-command-1 && sh-command-2 && sh-command-n'
If your container has sh, bash, or a BusyBox shell wrapper (such as alpine), you can send a one-line shell script to the container's shell.
Limitations:
only short scripts;
hard to pass command-line arguments;
only if your container has a shell.
Docker Copy and Execute Script
docker cp ~/your-shell-script.sh <your_container_id>:/tmp
docker exec -i <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can copy a script from the host to the container and then execute it (a combined sketch follows after the limitations below).
You can pass arguments to the script.
You can run the script with root credentials with -u root: docker exec -i -u root <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
You can run the script interactively with -t: docker exec -it <your_container_id> /tmp/your-shell-script.sh -arg1 -arg2
Limitations:
one more command to execute;
only if your container has a shell.
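As a minimal host-side sketch combining both steps, assuming the placeholder container ID and script path used above:
#!/bin/sh
# copy the script into the container, run it with arguments, then clean up
CONTAINER_ID=your_container_id            # placeholder
SCRIPT=$HOME/your-shell-script.sh         # placeholder
docker cp "$SCRIPT" "$CONTAINER_ID":/tmp/
docker exec -i "$CONTAINER_ID" sh /tmp/"$(basename "$SCRIPT")" -arg1 -arg2
docker exec -i "$CONTAINER_ID" rm -f /tmp/"$(basename "$SCRIPT")"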

How to take Oracle-xe-11g backup from running Docker container

I am running oracle-xe-11g on RancherOS. I want to take a backup of my DB's data. When I tried the command
docker exec -it $Container_Name /bin/bash
then I entered:
exp userid=username/password file=test.dmp
it works fine and creates the test.dmp file.
But I want to run the command with docker exec itself. When I tried this command:
docker exec $Container_Name sh -C exp userid=username/password file=test.dmp
I am getting this error message: sh: 0: Can't open exp.
The problem is:
When running bash with the -c switch, it does not run as an interactive or login shell, so bash won't read the usual startup scripts. Anything set in /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile is skipped.
Workaround:
Run your container with the following command:
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -e ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe -e PATH=/u01/app/oracle/product/11.2.0/xe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e ORACLE_SID=XE -e SHLVL=1 wnameless/oracle-xe-11g
What I'm doing is specifying the environment variables for the container on the docker run command line.
Now for generating the backup file:
sudo docker exec -it e0e6a0d3e6a9 /bin/bash -c "exp userid=system/oracle file=/test.dmp"
Please note the file will be created inside the container, so you need to copy it to the Docker host via the docker cp command.
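For example, a sketch of copying the dump out, assuming the container ID and the /test.dmp path from the command above:
docker cp e0e6a0d3e6a9:/test.dmp ./test.dmp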
This is how I did it: mount a volume into the container, e.g. /share/backups/, then execute:
docker exec -it oracle /bin/bash -c "ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe ORACLE_SID=XE /u01/app/oracle/product/11.2.0/xe/bin/exp userid=<username>/<password> owner=<owner> file=/share/backups/$(date +"%Y%m%d")_backup.dmp"
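A sketch of what that volume mount could look like when starting the container, assuming the wnameless/oracle-xe-11g image and the /share/backups path mentioned above:
docker run -d --name oracle -v /share/backups:/share/backups -p 1521:1521 wnameless/oracle-xe-11g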

Running a script in docker container and not killing the script when leaving terminal

I have got a docker container, for instance my_container.
I want to run a long-living script in my container without killing it when leaving the shell.
I would like to do something like this:
docker exec -ti my_container /bin/bash
And then
screen -S myScreen
Then execute my script in screen and exit the terminal.
Unfortunately, I cannot execute screen in the docker terminal.
This may help you:
docker exec -i -t c2ab7ae71ab8 sh -c "exec >/dev/tty 2>/dev/tty </dev/tty && /usr/bin/screen -r nmsrv -s /bin/bash"
and this is the reference link
The only way I can think of is to run your container with your script at the start:
docker run -d --name my_container nginx /etc/init.d/myscript
If you have to run the script directly in an already running container, you can do that with exec:
docker exec my_container /path/to/some_script.sh
Or if you want to run it through PHP:
docker exec my_container php /path/to/some_script.php
That said, you typically don't want to run scripts in already running containers, but rather start a new container from the same image as the running one. You can do that with a standard docker run:
docker run -a stdout --rm some_repo/some_image:some_tag php /path/to/some_script.php
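If the goal is specifically to leave the terminal without killing the script, one alternative to screen (a sketch, reusing the my_container name and script path from above) is docker exec's --detach flag, which runs the command in the background inside the already running container:
# the script keeps running after you close your shell
docker exec -d my_container /path/to/some_script.sh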

Run command to docker container in a script

I need to run a command in a docker container from a script. I am trying with:
docker run -it <container> /bin/bash -c "command"
I get no error, but the command is not executed.
I cannot create a new image, and I must not stop the service ... I do it manually.
It's possible to invoke a command in a running container with docker exec (docker run starts a new container from an image; it does not run commands in an existing one):
docker exec <container_id> rm -f /tmp/cache/*/*/*
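A minimal script sketch around that call, with the container name and command as placeholders:
#!/bin/sh
set -e
CONTAINER=my_container                    # placeholder: an already running container
# sh -c lets the container's own shell expand the glob inside the container
docker exec "$CONTAINER" sh -c 'rm -f /tmp/cache/*/*/*'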
