This command lets me execute the terragrunt binary within a Docker container correctly:
$ docker run -ti --rm -v $HOME/.aws:/root/.aws -v ${HOME}/.ssh:/root/.ssh -v `pwd`:/apps -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN alpine/terragrunt:0.13.6 bash
I simply have to cd into the directory containing the terragrunt config file and run the commands:
bash-5.0# cd environments/assembly/us-east-1/sqs/ss-error-generic-encrypted/core/
bash-5.0# terragrunt init -reconfigure;terragrunt plan
(this is happening within the docker container shell)
And it works perfectly. However, I need a way to execute this from a script (like a Jenkins job), not interactively. So I tried this:
$ docker run -ti --rm -v $HOME/.aws:/root/.aws -v ${HOME}/.ssh:/root/.ssh -v `pwd`:/apps -w /apps/environments/assembly/us-east-1/sqs/ss-error-generic-encrypted/core -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN alpine/terragrunt:0.13.6 terragrunt init -reconfigure; terragrunt plan
The error suggests that -w doesn't work the way I think it does:
[terragrunt] 2022/08/27 11:12:14 Error reading file at path /Users/chewmanfoo/bitbucket.org/aws-terraform-infrastructure/terragrunt.hcl: open /Users/chewmanfoo/bitbucket.org/aws-terraform-infrastructure/terragrunt.hcl: no such file or directory
I thought the -w /apps/environments/assembly/us-east-1/sqs/ss-error-generic-encrypted/core would set the working directory to that path within the container, so that I could execute the binary in the proper place. But it seems to be doing something else.
Can anyone lend their expertise?
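For what it's worth, -w does set the container's working directory as expected; the more likely culprit is the unquoted ;, which the host shell parses itself, so only terragrunt init -reconfigure runs inside the container while terragrunt plan runs on the host (hence the host path /Users/... in the error). A minimal non-interactive sketch that keeps both commands inside the container, assuming the same image and mounts:
$ docker run --rm -v $HOME/.aws:/root/.aws -v ${HOME}/.ssh:/root/.ssh -v `pwd`:/apps -w /apps/environments/assembly/us-east-1/sqs/ss-error-generic-encrypted/core -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN alpine/terragrunt:0.13.6 bash -c 'terragrunt init -reconfigure && terragrunt plan'
Dropping -ti also avoids TTY allocation problems in non-interactive runners like Jenkins.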
Related
I upgraded my Docker Desktop to version 3.2.1 (61626) and chose to use WSL 2. After that, I cannot run local builds of AWS CodeBuild because the AWS configuration is not being found. The command I use is (I run it from a Windows Terminal tab using Ubuntu 20, installed from the store):
./codebuild_build.sh -i aws/codebuild/standard:5.0 -a ./ -s ./ -b ./buildspec.yml -c ~/.aws
That command works with the version of Docker that uses Hyper-V; after the upgrade to WSL 2 I get the error:
agent_1 | [Container] 2021/03/05 21:04:05 Phase complete: DOWNLOAD_SOURCE State: FAILED
agent_1 | [Container] 2021/03/05 21:04:05 Phase context status code: Decrypted Variables Error Message: MissingRegion: could not find region configuration
The docker command that is generated is the following:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/standard:5.0" -e "ARTIFACTS=/mnt/c/[redacted]" -e "SOURCE=/mnt/c/[redacted]" -e "BUILDSPEC=/mnt/c/[redacted]" -e "AWS_CONFIGURATION=NONE" -e "INITIATOR=[redacted]" amazon/aws-codebuild-local:latest
Edit:
Running the command from Git Bash, the generated command is:
winpty docker run -it -v //var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=aws/codebuild/standard:5.0" -e "ARTIFACTS=//C/[redacted]" -e "SOURCE=//C/[redacted]" -e "BUILDSPEC=//C/[redacted]" -e "AWS_CONFIGURATION=//C/Users/[redacted]/.aws" -e "INITIATOR=[redacted]" amazon/aws-codebuild-local:latest
But it also fails, with the error:
agent_1 | [Container] 2021/03/05 22:17:43 Phase complete: DOWNLOAD_SOURCE State: FAILED
agent_1 | [Container] 2021/03/05 22:17:43 Phase context status code: YAML_FILE_ERROR Message: stat /codebuild/output/srcDownload/src/buildspec.pr.yml: no such file or directory
With the previous Docker version, the variable AWS_CONFIGURATION held the path to my .aws folder. I have tried -c //c/Users/[myProfile]/.aws and -c /mnt/c/Users/[myProfile]/.aws, but AWS_CONFIGURATION is always NONE.
Is there a configuration I'm missing, or do I need to add an extra step for WSL 2?
Edit:
I installed Ubuntu 18 and it failed in the same way.
I was having a similar problem. I realized that since I had to run docker as root using the sudo command, my home directory was now /root instead of /home/<username>.
There may be a better way around this, but I symlinked the folder /home/<username>/.aws to /root/.aws.
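If you go the symlink route, that is a single command on the host (substitute your actual username):
sudo ln -s /home/<username>/.aws /root/.aws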
Also, you could pass the variables AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, and AWS_ACCESS_KEY_ID in through an environment file using the -e flag of the codebuild_build.sh command.
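A sketch of that approach, assuming the -e flag accepts a file of KEY=VALUE lines (the file name aws.env is made up). Put the credentials in aws.env:
AWS_ACCESS_KEY_ID=<your-access-key-id>
AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
AWS_SESSION_TOKEN=<your-session-token>
and pass it to the build script:
./codebuild_build.sh -i aws/codebuild/standard:5.0 -a ./ -s ./ -b ./buildspec.yml -e ./aws.env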
Hello, I'm trying to follow the step-by-step guide to build JPEG XL (I'm on Windows and trying to build an x64 version for Linux).
After:
docker run -u root:root -it --rm -v C:\Users\fred\source\tools\jpegxl\jpeg-xl-master -w /jpeg-xl gcr.io/jpegxl/jpegxl-builder
I have the container running, but I don't know how to run this command inside it:
CC=clang-6.0 CXX=clang++-6.0 ./ci.sh opt
I tried CC=clang-6.0 CXX=clang++-6.0 ./ci.sh opt and I get ./ci.sh: No such file or directory. No command seems to work; when I do "ls" it displays nothing.
Does someone know how to get this to build?
Make sure that you start a bash terminal inside the container:
docker run -it <image> /bin/bash
I believe /bin/bash is missing from your docker run command. As a result, you are executing the command for clang inside your own environment, not the container.
You can set the environment variables by using -e.
Example:
-e CC=clang-6.0 -e CXX=clang++-6.0
The full command to log in to your container (note that the -v mount also needs a container-side path, e.g. :/jpeg-xl, for the sources to show up inside):
docker run -u root:root -it --rm -e CC=clang-6.0 -e CXX=clang++-6.0 -v C:\Users\fred\source\tools\jpegxl\jpeg-xl-master:/jpeg-xl -w /jpeg-xl gcr.io/jpegxl/jpegxl-builder /bin/bash
They have updated the image without updating the command, so the command is:
CC=clang-7 CXX=clang++-7 ./ci.sh opt
The discussion is here:
Can't build from docker image "Unknown clang version"
I would like to create a Makefile that runs a Docker container, automatically mounts the current folder, and cds to the shared directory within the container.
I currently have the following, which runs the Docker image and mounts the directory with no issue, but I am unsure how to get it to change directory.
run:
docker run --rm -it -v $(PWD):/projects dockerImage bash
I've seen some examples where you can append -c "cd /projects" at the end so that it is:
docker run --rm -it -v $(PWD):/projects dockerImage bash -c "cd /projects"
However, it will immediately exit the bash command afterwards. I've also seen an example where you can append && at the end, so that it is the following:
docker run --rm -it -v $(PWD):/projects dockerImage bash -c "cd /projects &&".
Unfortunately the console will just hang.
You can specify the working directory in your docker run command with the -w option. So you can do something like this:
docker run --rm -it -v $(PWD):/projects -w /projects dockerImage bash
You can find this option in the official docs: https://docs.docker.com/engine/reference/run/
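Applied to the Makefile from the question, the target becomes (a sketch; the recipe line must be indented with a tab):
run:
	docker run --rm -it -v $(PWD):/projects -w /projects dockerImage bash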
I am running oracle-xe-11g on RancherOS. I want to take a backup of my DB data. When I tried it with the command
docker exec -it $Container_Name /bin/bash
then I entered:
exp userid=username/password file=test.dmp
It works fine, and it created the test.dmp file.
But I want to run the command with the docker exec command itself. When I tried this command:
docker exec $Container_Name sh -C exp userid=username/password file=test.dmp
I am getting this error message: sh: 0: Can't open exp.
The problem is:
When running bash with the -c switch, it does not run as an interactive or login shell, so bash won't read the same startup scripts. Anything set in /etc/profile, ~/.bash_profile, ~/.bash_login, or ~/.profile is skipped.
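A quick hypothetical check that makes the difference visible (-l tells bash to behave as a login shell and read those profile scripts):
docker exec $Container_Name /bin/bash -c 'echo $PATH'
docker exec $Container_Name /bin/bash -lc 'echo $PATH'
If the Oracle bin directory appears only in the second output, those skipped startup scripts are indeed the problem.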
Workaround:
run your container with the following command:
sudo docker run -d --name Oracle-DB -p 49160:22 -p 1521:1521 -e ORACLE_ALLOW_REMOTE=true -e ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe -e PATH=/u01/app/oracle/product/11.2.0/xe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin -e ORACLE_SID=XE -e SHLVL=1 wnameless/oracle-xe-11g
What I'm doing is specifying, on the docker command line, the environment variables that are normally set inside the container.
Now for generating the backup file:
sudo docker exec -it e0e6a0d3e6a9 /bin/bash -c "exp userid=system/oracle file=/test.dmp"
Please note that the file will be created inside the container, so you need to copy it to the Docker host via the docker cp command.
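For example, using the container ID from above (the host destination path is arbitrary):
sudo docker cp e0e6a0d3e6a9:/test.dmp ./test.dmp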
This is how I did it. Mount a volume into the container, e.g. /share/backups/, then execute:
docker exec -it oracle /bin/bash -c "ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe ORACLE_SID=XE /u01/app/oracle/product/11.2.0/xe/bin/exp userid=<username>/<password> owner=<owner> file=/share/backups/$(date +"%Y%m%d")_backup.dmp"
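The /share/backups path assumes the container was started with a bind mount to the host, hypothetically along these lines:
sudo docker run -d --name oracle -p 1521:1521 -v /backups:/share/backups wnameless/oracle-xe-11g
so the .dmp file lands directly on the host in /backups.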
I want to mount a volume and add it to the container's PATH environment variable. I've tried the following, and none of them works.
docker run -it -v $(PWD):/app -e PATH=$PATH:/app/bin debian:jessie bash
docker run -it -v $(PWD):/app -e PATH='$PATH:/app/bin' debian:jessie bash
docker run -it -v $(PWD):/app -e PATH='$$PATH:/app/bin' debian:jessie bash
docker run -it -v $(PWD):/app -e PATH='\$PATH:/app/bin' debian:jessie bash
How do I append the mounted volume to PATH?
If you use the -e option, the $PATH value is the host's PATH instead of the container's.
You can do it like this:
docker run -it -v $(PWD):/app debian:jessie bash -c 'export PATH=$PATH:/app/bin; bash'
Within the docker command line, you can't reference what the value of $PATH will be at runtime. Thus, you cannot append to the PATH variable with docker's -e flag. To achieve what you want, you will need to do it in a script that gets executed as the cmd/entrypoint of your container.
You can define a fixed path for your imported apps and add that new path to the PATH environment variable.
Let's take your path "/app". In your Dockerfile, add the following line:
ENV PATH=${PATH}:/app/bin
Build your modified image.
Now you can access all apps located under <external directory>/bin that you mount to "/app" via
-v <external Directory>:/app
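Putting those steps together, a minimal sketch (the jessie-path tag is made up):
FROM debian:jessie
ENV PATH=${PATH}:/app/bin
Then build and run:
docker build -t jessie-path .
docker run -it -v $(pwd):/app jessie-path bash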
You can use a shell script (let's call it run.sh):
#!/bin/bash
# Append the mounted volume's bin directory to PATH, then run whatever
# command was passed to the script (e.g. bash).
PATH=$PATH:/app/bin
exec "$@"
and call it from docker:
docker run -it -v $(PWD):/app debian:jessie /app/run.sh bash
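One prerequisite worth noting: the script must be executable on the host side of the mount before the container can run it:
chmod +x run.sh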