This command works as expected:
# docker run --rm -it public.ecr.aws/aws-cli/aws-cli:2.9.1 --version
aws-cli/2.9.1 Python/3.9.11 Linux/5.15.0-1026-aws docker/aarch64.amzn.2 prompt/off
But this does not:
# docker run --rm -it public.ecr.aws/aws-cli/aws-cli:2.9.1 s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
Nor can I configure AWS credentials...
# docker run -it public.ecr.aws/aws-cli/aws-cli:2.9.1 aws configure
I am not sure how to use Docker for the aws command.
Update:
This seems to work as expected:
$ docker run --rm -it -v ~/.aws:/root/.aws public.ecr.aws/aws-cli/aws-cli:2.9.1 s3 ls
But in my case, I do not have credentials saved locally. Isn't the docker container recommended in such cases?
You're running this command:
docker run --rm -it public.ecr.aws/aws-cli/aws-cli:2.9.1 s3 ls
That doesn't work because the container has access neither to your AWS configuration in $HOME/.aws nor to environment variables such as $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY.
If you have your credentials in environment variables, you can expose them to the container using the -e option to docker run:
docker run --rm -it -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
public.ecr.aws/aws-cli/aws-cli:2.9.1 s3 ls
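If you are using temporary credentials (for example from an assumed role or an SSO session), you also need to pass the session token; a minimal sketch, assuming all three variables are already exported in your shell:
docker run --rm -it -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
public.ecr.aws/aws-cli/aws-cli:2.9.1 s3 ls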
Alternatively, you can expose your ~/.aws directory as you have shown in the recent edit to your question:
docker run --rm -it -v ~/.aws:/root/.aws \
public.ecr.aws/aws-cli/aws-cli:2.9.1 s3 ls
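And if, as in your update, you have no credentials saved locally yet, you can create them the same way. The image's entrypoint is already aws, so you pass only the subcommand; this is also why the aws configure attempt above failed, since it effectively ran aws aws configure:
docker run --rm -it -v ~/.aws:/root/.aws \
public.ecr.aws/aws-cli/aws-cli:2.9.1 configure
The credentials written by configure then persist in ~/.aws on the host and are available to subsequent runs.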
I'm trying to run Docker inside a Jenkins container. I used this command to create the Jenkins container:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker jenkins/jenkins:latest
then this command to access the Jenkins container's bash:
docker exec -u 0 -it <container-id> bash
Whenever I run docker I get this error:
docker: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by docker)
docker: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by docker)
What is causing this problem and how can I solve it?
Bind-mounting the host's docker binary into the container (-v $(which docker):/usr/bin/docker) is no longer reliable, because the Docker CLI is no longer distributed as an (almost) statically linked binary; it now requires glibc versions that the container's libc may not provide, which is exactly the GLIBC_2.32/GLIBC_2.34 error you are seeing.
So run the container without that mount:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:latest
then access the Jenkins container's bash as the root user:
docker exec -u 0 -it <container-id> bash
Once inside the Jenkins container, run this command to install Docker inside the container:
curl https://get.docker.com/ > dockerinstall && chmod 777 dockerinstall && ./dockerinstall
This downloads Docker's convenience installation script and runs it, which installs the Docker CLI inside the container.
Exit the Jenkins container's interactive shell and run the following command on the host so the Jenkins user can access the Docker socket (note that mode 666 opens the socket to all users, trading security for convenience):
sudo chmod 666 /var/run/docker.sock
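To verify, you can run a quick check from the host (a minimal sketch; substitute your Jenkins container's ID):
docker exec -u 0 -it <container-id> docker ps
If the mounted socket and the freshly installed CLI are both in place, this lists the host's containers from inside the Jenkins container.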
Solved by downgrading my server OS to Ubuntu 18.
I need a Debian container that can run containers itself (and has access to systemd). Following this post, I have tried to run
docker run -v /var/run/docker.sock:/var/run/docker.sock --name debian-buster-slim -h 10-slim -e LANG=C.UTF-8 -it debian:10-slim /bin/bash -l
but the container cannot run docker containers. What am I doing wrong?
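Note that mounting /var/run/docker.sock only shares the host's Docker daemon with the container; the debian:10-slim image ships no docker client, so there is nothing inside the container to run. A minimal sketch of one fix, assuming Debian's packaged client (docker.io) is sufficient:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name debian-buster-slim -h 10-slim -e LANG=C.UTF-8 -it debian:10-slim /bin/bash -l
Then, inside the container:
apt-get update && apt-get install -y docker.io
docker ps
If the socket mount is in place, docker ps lists the host's containers. (Access to the host's systemd is a separate matter that the socket mount alone does not provide.)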
I am trying to run software for predicting hemorrhage volume on brain CT in Docker: https://github.com/msharrock/deepbleed
I created a "deepbleed" folder on my D:\ drive on Windows, cd'd into that directory, and ran the docker pull msharrock/deepbleed command. The pull was successful and I can see the image in my Docker Desktop app.
Then I created the indir and outdir folders as instructed in the documentation and placed my CT file for prediction in the indir folder.
The readme tells me to run this command next:
docker run -it msharrock/deepbleed bash -v /path/to/data:/data/
So I have run the following commands, but I get "no such file or directory" for all of them:
docker run --rm -it msharrock/deepbleed bash -v pwd/deepbleed/indir:outdir
docker run --rm -it msharrock/deepbleed bash -v ~/deepbleed/indir:/outdir/
docker run --rm -it msharrock/deepbleed bash -v /mnt/d/deepbleed/indir:/outdir/
docker run --rm -it msharrock/deepbleed bash -v /d/deepbleed/indir:/outdir
docker run --rm -it msharrock/deepbleed bash -v "$(& "D:\deepbleed\indir" "$(pwd)")":/outdir
docker run --rm -it msharrock/deepbleed bash -v /indir/:/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d:/deepbleed/indir://d:/deepbleed/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d/deepbleed/indir://d/deepbleed/outdir/
docker run --rm -it msharrock/deepbleed bash -v //d/deepbleed/indir:/outdir/
My Docker is running on the WSL2-based engine in Windows 10; the Hyper-V folders for disks and virtual machines are located on my D: drive.
What do I need to do to get this running?
Try doing it like this; options such as -v must come before the image name and its command (just using one of the items from your list for this example, to give you the idea):
docker run --rm -it -v /mnt/d/deepbleed/indir:/outdir msharrock/deepbleed bash
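Since your Docker Desktop uses the WSL2 backend, a Windows-style host path should also work if you run the command from PowerShell; a sketch, assuming Docker Desktop's file sharing for the D: drive is enabled, and mounting to /data/ as the deepbleed README shows:
docker run --rm -it -v D:\deepbleed\indir:/data msharrock/deepbleed bash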
Goal
Action: Run command from my local machine
Result: Docker image deployed on cloud instance
Approach
For remote deployment, I am using gcloud commands.
The command below works, but the only problem is that it is not picking up the environment variables file, i.e. .env. I have this .env file placed in the working directory.
Command:
gcloud beta compute ssh --quiet --zone "us-west1-b" "devop-beta-persistent-2" --project "my-project" --command 'sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILE_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $PWD/.env $JENKINS_IMAGE:latest'
Error: docker: open /.env: no such file or directory.
What I already tried
I have tried setting the path to:
.env
/full/path/to/.env
$PWD/.env
but I still get the same error.
If I run this command on my local machine, it works fine, i.e. it picks up the .env file.
sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILES_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file $PWD/.env $JENKINS_IMAGE:latest
Can anyone suggest a possible solution?
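The --command string is executed on the remote instance, so $PWD/.env (and likewise $FILE_PATH and $JENKINS_IMAGE inside the single quotes) is expanded there, not on your local machine, and the instance has no .env file. One possible workaround, a sketch reusing the zone, project, and instance name from your command, is to copy the file to the instance first and reference it remotely:
gcloud compute scp .env devop-beta-persistent-2:~/.env --zone "us-west1-b" --project "my-project"
gcloud beta compute ssh --quiet --zone "us-west1-b" "devop-beta-persistent-2" --project "my-project" --command 'sudo docker run -p 8080:8080 -p 8443:8443 -p 50000:50000 -v ~/jenkins_data:/var/jenkins_home -v $FILE_PATH/jenkins.yaml:/var/configurations/jenkins_casc.yml --name jenkins-devkit --env-file ~/.env $JENKINS_IMAGE:latest'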
I am following this docker user guide: Managing Data in Containers
There seems to be an error in the "Mount a Host File as a Data Volume" part:
$ sudo docker run --rm -it -v ~/.bash_history:/.bash_history ubuntu /bin/bash
I tested it with Docker on my Mac; it should be like this:
$ sudo docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
I am not sure if I am correct about this.
You can't use the -v option with a relative path: Docker requires an absolute path for the host side of a bind mount. Use the absolute path instead:
sudo docker run --rm -it -v /home/<your_user>/.bash_history:/.bash_history ubuntu /bin/bash
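If you'd rather not spell out the home directory, you can let the shell expand it into an absolute path before docker sees it (a sketch; $HOME is expanded by the calling shell, and the target is /root/.bash_history because the ubuntu image runs as root):
sudo docker run --rm -it -v "$HOME/.bash_history":/root/.bash_history ubuntu /bin/bash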