Run a script inside docker container using octopus deploy - docker

I'm trying to do a config transformation once a docker container has been created, and the docker cp command does not allow wildcard or file-type searches. While testing manually, I found it was possible to solve this by running the docker exec command and running PowerShell inside our container. After some preliminary tests it doesn't look like this works out of the box with Octopus Deploy. Is there a way to run process steps inside a container with Octopus Deploy?

Turns out you can run PowerShell scripts that already exist in the container with the exec command:
docker exec <container> powershell script.ps1 -argument foo
This command will run the script just as you would expect from the command line.
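For reference, the same call can be made from a script step in Octopus Deploy; the container name, script path, and argument below are only placeholders for illustration:
# Run an existing PowerShell script inside a running container (all names are placeholders).
docker exec <container> powershell C:\scripts\transform-config.ps1 -environment Production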

Related

Can I do a docker exec using the python-on-whales package

I see that the python-on-whales package allows you to run a docker command in another container. The commands I see are for docker.run, but this will run a new container. Is there something similar to docker exec? I just want to run a terminal command in an already running container.

How does the Jenkins CloudBees Docker Build Plugin set its Shell Path

I'm working with a Jenkins install I've inherited. This install has the CloudBees Docker Custom Build Environment Plugin installed. We think this plugin is what gives us the nifty "Build inside a Docker container" checkbox in our build configuration. When we configure jobs with this option, it looks like (based on the Jenkins console output) Jenkins runs the build with the following commands:
Docker container 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q started to host the build
[WIP-Tests] $ docker exec -t 548ab230991a452af639deed5eccb304026105804b38194d98250583bd7bb83q /bin/sh -xe /tmp/hudson7939624860798651072.sh
However -- we've found that this runs /bin/sh with a very limited set of environment variables -- including a $PATH that doesn't include /bin! So:
How does the CloudBees Docker Custom Build Environment Plugin set up its /bin/sh environment? Is this user configurable via the UI (if so, where)?
It looks like Jenkins is using docker exec -- which I think means that it must have (invisibly) set up a container with a long-running process using docker run. Does anyone know how the CloudBees Docker Custom Build Environment Plugin invokes docker run, and whether this is user manageable?
Considering this plugin is "up for adoption", I would recommend the official JENKINS/Docker Pipeline Plugin.
Its source code shows very few recent commits.
But don't forget that any container has a default entrypoint set to /bin/sh:
ENTRYPOINT ["/bin/sh", "-c"]
Then:
The docker container is run after the SCM has been checked out into a slave workspace; then all later build commands are executed within the container, thanks to docker exec introduced in Docker 1.3.
That means the image you will be pulling or building/running must have a shell, to allow docker exec to execute a command in it.
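As a rough sketch of that pattern (not taken from the plugin source; the image name and script path are placeholders, and the image is assumed to provide sleep):
# Start a long-running container so later build steps can docker exec into it.
CID=$(docker run -d <build-image> sleep infinity)
# Run a build step inside it, the way the plugin shells out with /bin/sh -xe.
docker exec -t "$CID" /bin/sh -xe /tmp/build-step.sh
# Tear the container down once the build is finished.
docker rm -f "$CID"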

Execute host shell script from meteor container

I have a shell script on my host. I've installed a docker container with the meteord image. I have it running; however, I would like to execute this shell script inside the meteord docker container. Is that possible?
Yes. That is possible, but you will have to copy the script into the container as follows:
docker cp <script> <container-name/id>:<path>
docker exec <container-name/id> <path>/<script>
For example:
docker cp script.sh silly_nightingale:/root
docker exec silly_nightingale /root/script.sh
Just make sure the script has executable permissions. Also, you can copy the script into the image at build time in the Dockerfile and run it using exec afterwards.
Updated:
You can also try a docker volume for it, as follows:
docker run -d -v /absolute/path/to/script/dir:/path/in/container <IMAGE>
Now run the script as follows:
docker exec -it <Container-name> bash /path/in/container/script.sh
Afterwards you will be able to see the generated files in /absolute/path/to/script/dir on the host. Also, make sure to use absolute paths in scripts and commands to avoid redirection issues. I hope it helps.

Set environment variable in running docker container

I need to set an environment variable in a running docker container. I am already aware of how to set an environment variable while creating a container. As far as I can tell there is no straightforward way to do this with docker, and docker is planning to add something for it in the new version 1.13.
But I found that some people are able to manage it, which is not working for me now. I tried the following ways, but they did not work for me:
docker exec -it -u=root test /bin/bash -c "export port=8090"
echo "export port=8090" to /etc/bash.bashrc using a script and then source it
docker exec -it test /bin/bash -c "source /etc/bash.bashrc"
Configuring the whole thing in a script and running it from the host also did not work. While running the script from the host, all the other commands execute successfully except "export port=8090", "source /etc/bash.bashrc" and "source /root/.bashrc".
Can anyone explain why sourcing a file from the host does not work in a docker container even when I set the user ("-u=root")? Can anyone help me solve this? When I source the file from inside the container it works perfectly, but in my case I have to do it from the host machine.
NOTE: I am using docker 1.12 and tried the above on ubuntu:16.04 and ubuntu:14.04.
If you have a running process in the container and you are attempting to change an environment variable in the container so that the running process picks it up dynamically - this will not work. The environment variables of a process are set when it starts. You can see here ways to overcome that, but I don't think that is the right way to go.
I would instead have a configuration file that the process reads (or listens to) periodically, and when you want to change the configuration, change the file.
If this isn't your scenario, please describe your scenario so we can better assist you.
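A minimal sketch of that approach, with the file name, variable, and interval invented for illustration:
#!/bin/bash
# Periodically re-read a config file so changes on disk take effect without restarting the process.
while true; do
  [ -f /etc/myapp/app.conf ] && source /etc/myapp/app.conf
  echo "port is currently: $port"
  sleep 30
done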
I found a way to provide environment variables to a running container. First, upgrade your docker-engine; I am using v1.12.5.
Create a script that sets the environment variables:
#!/bin/bash
# Append the variables to /etc/bash.bashrc so every new interactive bash shell picks them up.
echo "export VAR1=VAL1
export VAR2=VAL2" >> /etc/bash.bashrc
source /etc/bash.bashrc
Now start a container. Here, 'test' is the container name:
docker run -idt --name=test ubuntu
Copy your script to container:
docker cp script.sh test:/
Run the script:
docker exec -it test /bin/bash -c "/script.sh"
Restart your container:
docker restart test
Go to the container shell:
docker exec -it test /bin/bash
Check the variable:
echo $VAR1

How to get Container Id of Docker in Jenkins

I am using the Docker Custom Build Environment Plugin to build my project inside the "jpetazzo/dind" docker image. After building, the console output shows:
Docker container 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc started to host the build
$ docker exec --tty 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc env
[workspace] $ docker exec --tty --user 122:docker 4aea29fff86ba4e50dbcc7387f4f23c55ff3661322fb430a099435e905d6eeef env BUILD_DISPLAY_NAME=#73
Here the Docker container which got started has container id 212ad049dfdf8b7180b8c9b185ddfc586b301174414c40969994f7b3e64f68bc.
Now, further on, I want to execute some commands in the "Execute shell" part of the "Build" option in Jenkins, and there I want to use this container id. I tried using ${BUILD_CONTAINER_ID} as mentioned on the plugin page, but that doesn't work.
The documentation tells you to use docker run, but you're trying to do docker exec. The exec subcommand only works on a currently running container.
I suppose you could do a docker run -d to start the container in the background, and then make sure to docker stop when you're done. I suspect this will leave you with some orphaned running containers when things go wrong, though.
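A rough sketch of that workaround in an "Execute shell" step (the image is a placeholder and is assumed to provide sleep):
# Start the container in the background and capture its id for later steps.
CONTAINER_ID=$(docker run -d <image> sleep infinity)
docker exec --tty "$CONTAINER_ID" env
# Stop and remove it explicitly so failed builds don't leave orphaned containers behind.
docker rm -f "$CONTAINER_ID"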
