Can I do a docker exec using the python-on-whales package?

I see that the python-on-whales package allows you to run a Docker command in another container. The commands I see are for docker.run, but that starts a new container. Is there something similar to docker exec? I just want to run a terminal command in an already-running container.
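python-on-whales does expose a docker exec equivalent: docker.execute on the module-level docker client. A minimal sketch, assuming the package is installed and a container named my_container is already running (the import is deferred inside the function so the sketch loads even without the package; the equivalent_cli helper is just for illustration):

```python
def exec_in_container(container, command):
    """Run a command in an already-running container, like `docker exec`.

    Assumes python-on-whales is installed and `container` is running.
    """
    from python_on_whales import docker  # deferred import: sketch only
    return docker.execute(container, command)

def equivalent_cli(container, command):
    # the docker CLI invocation that docker.execute mirrors
    return "docker exec {} {}".format(container, " ".join(command))

# e.g. exec_in_container("my_container", ["ls", "-la"])
print(equivalent_cli("my_container", ["ls", "-la"]))
```

docker.execute returns the command's output as a string by default, much like capturing the output of docker exec on the command line.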

Related

How to automatically execute a command inside a docker container when the container boots?

I have a container named test. I want to be able to start the container when Ubuntu boots, which I think I can do with the --restart always option.
Then I want to run one command when the container boots. For instance, I want to run ls so it shows me the list of files and directories within the container.
How do I run the ls command when the docker container boots? So, there are two things:
Docker container boots automatically on Ubuntu booting
As soon as the container boots, from within the container, one command will be executed.
I can manually do it using:
sudo docker run --rm -it test
Then when the test container begins, I can type ls in the terminal.
I want to do it automatically on boot, the actual command will be different, I am using ls for simplicity.
You can trigger a command to execute upon container start by adding this to the end of your Dockerfile:
CMD ls
If you need more than one command, the easiest thing to do is create an executable shell script and invoke it with the CMD. You can learn more about that in the Docker documentation.
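Putting both requirements together, a sketch of the Dockerfile end of it (the script name boot.sh is made up for illustration):

```dockerfile
# Default command to execute when the container starts.
CMD ls
# If you need more than one command, copy in a script instead:
#   COPY boot.sh /boot.sh
#   CMD ["/boot.sh"]
```

Then start the container once with a restart policy, e.g. docker run -d --restart always --name test test-image; the Docker daemon itself starts on boot and will bring the container back up. Note that with a short-lived command like ls the container exits as soon as the command finishes, so --restart always will keep re-running it.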

Docker run uses host PATH when chaining commands

I have written an image that bundles utils to run commands using several CLIs. I want to run this as an executable as follows:
docker run my_image cli command
Where cli is my custom CLI and command is a command for that CLI.
When I build my image I have the following instruction in the Dockerfile:
ENV PATH="/cli/scripts:${PATH}"
The above works if I do not chain commands to the container. If I chain commands it stops working:
docker run my_image cli command && cli anothercommand
Command 'cli' not found, but can be installed with...
Where the first command works and the other fails.
So the logical conclusion is that cli is missing from the PATH. I tried to verify that with:
docker run my_image printenv PATH
This actually outputs the container's PATH, and everything looks alright. So I tried chaining this command too:
docker run my_image printenv PATH && printenv PATH
And sure enough, this outputs first the container's PATH and then the PATH of my host system.
What is the reason for this? How do I work around it?
When you type a command into your shell, your local shell processes it first before any command gets run. It sees (reformatted)
docker run my_image cli command \
&& \
cli anothercommand
That is, your host's shell picks up the &&, so the host first runs docker run and then runs cli anothercommand (if the container exited successfully).
You can tell the container to run a shell, and then the container's shell will handle things like command chaining, redirections, and environment variables:
docker run my_image sh -c 'cli command && cli anothercommand'
If this is more than occasional use, also consider writing this into a shell script:
#!/bin/sh
set -e
cli command
cli anothercommand
COPY the script into your Docker image, and then you can docker run my_image cli_commands.sh or some such.
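For example, the Dockerfile additions might look like this (the path /cli/scripts/cli_commands.sh is an assumption, chosen to match the PATH set earlier in the image):

```dockerfile
# copy the chained-commands script into the image and make it executable
COPY cli_commands.sh /cli/scripts/cli_commands.sh
RUN chmod +x /cli/scripts/cli_commands.sh
```

Since /cli/scripts is already on the image's PATH, docker run my_image cli_commands.sh then runs both commands inside the container, where the container's PATH applies.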

Run a script inside docker container using octopus deploy

Trying to do a config transformation once a docker container has been created, and the docker cp command does not allow wildcard or file-type searches. While testing manually, it was found that it was possible to solve this issue by running the docker exec command and running PowerShell inside our container. After some preliminary tests, it doesn't look like this works out of the box with Octopus Deploy. Is there a way to run process steps inside a container with Octopus Deploy?
Turns out you can run PowerShell scripts that already exist in the container with the exec command:
docker exec <container> powershell script.ps1 -argument foo
This command will run the script just as you would expect on the command line.

Must I provide a command when running a docker container?

I'd like to install mysql server on a centos:6.6 container.
However, when I run docker run --name myDB -e MYSQL_ROOT_PASSWORD=my-secret-pw -d centos:6.6, I get a docker: Error response from daemon: No command specified. error.
Checking the output of docker run --help, I found that COMMAND seems to be an optional argument when executing docker run, since [COMMAND] is placed inside square brackets.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
I also found that the official mysql repository doesn't specify a command when starting a MySQL container:
Starting a MySQL instance is simple:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Why should I provide a command when running a centos:6.6 container, but not so when running a mysql container?
I'm guessing that maybe centos:6.6 is specially-configured so that the user must provide a command when running it.
If you use centos:6.6, you do need to provide a command when you issue docker run.
The reason the official mysql repository does not specify a command is that its Dockerfile contains a CMD instruction: CMD ["mysqld"]. Check its Dockerfile for details.
The CMD in a Dockerfile is the default command used when the container is run without a command.
You can read the Dockerfile reference to better understand what you can use in a Dockerfile.
In your case, you can:
Start your centos 6.6 container
Take the official mysql Dockerfile as a reference and issue similar commands (change apt-get to yum, or sudo yum if you don't use the default root user)
Once you can successfully start mysql, put all your commands in your own Dockerfile, making sure the first line is FROM centos:6.6
Build your image
Run a container from your image; then you don't need to provide a command to docker run
You can share your Dockerfile on Docker Hub, so that other people can use it.
Good luck.
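A sketch of what such a Dockerfile might look like (the package name mysql-server is an assumption for CentOS 6's yum repositories; the CMD follows the official mysql image):

```dockerfile
FROM centos:6.6
# install mysql with yum (centos) rather than apt-get (debian)
RUN yum install -y mysql-server
# default command, so `docker run` needs no COMMAND argument
CMD ["mysqld"]
```

With a CMD baked in, docker run --name myDB -e MYSQL_ROOT_PASSWORD=my-secret-pw -d your-image starts without the "No command specified" error.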

How to get Container Id of Docker in Jenkins

I am using Docker Custom Build Environment Plugin to build my project inside "jpetazzo/dind" docker image. After building, in console output it shows:
Docker container 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc started to host the build
$ docker exec --tty 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc env
[workspace] $ docker exec --tty --user 122:docker 4aea29fff86ba4e50dbcc7387f4f23c55ff3661322fb430a099435e905d6eeef env BUILD_DISPLAY_NAME=#73
Here the Docker container which got started has container id 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc.
Now I want to execute some commands in the "Execute shell" part of the "Build" option in Jenkins, and there I want to use this container id. I tried using ${BUILD_CONTAINER_ID} as mentioned on the plugin page, but that doesn't work.
The documentation tells you to use docker run, but you're trying to do docker exec. The exec subcommand only works on a currently running container.
I suppose you could do a docker run -d to start the container in the background, and then make sure to docker stop when you're done. I suspect this will leave you with some orphaned running containers when things go wrong, though.
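If you go the docker run -d route, the usual pattern is to capture the container id yourself rather than relying on ${BUILD_CONTAINER_ID}. A sketch (the docker lines are commented out; echo stands in for the id that docker run -d would print, so the capture pattern itself is runnable anywhere):

```shell
# Capture the container id printed by `docker run -d`, reuse it, clean up.
# CID=$(docker run -d my_image sleep 3600)
CID=$(echo "212ad049dfdf")   # stand-in for the real docker run -d output
# docker exec --tty "$CID" env
# docker stop "$CID"
echo "container id: $CID"
```

Wrapping the stop in a trap or a final build step helps avoid the orphaned-container problem mentioned above.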
