I have Go code running in a Docker container that interacts with AWS resources. In the testing environment we use an IAM role, but how do I test locally? How can I use my AWS credentials to run the container locally? I am using a Dockerfile to build the image.
Just mount your credential directory as read-only using:
docker run -v ${HOME}/.aws/credentials:/root/.aws/credentials:ro ...
given that root is the user in the container and that you have already set up the credentials file on the host, as described in the AWS documentation.
Or pass them directly as environment variables:
docker run -e AWS_ACCESS_KEY_ID=<ACCESS_KEY> -e AWS_SECRET_ACCESS_KEY=<SECRET_KEY> ...
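For example, assuming your credentials live in the default ~/.aws directory and the image is called my-app (a placeholder), a combined invocation could look like this:

# Mount the whole ~/.aws directory (credentials + config) read-only,
# and select the profile/region via the SDK's standard env vars
docker run --rm \
  -v ${HOME}/.aws:/root/.aws:ro \
  -e AWS_PROFILE=default \
  -e AWS_REGION=us-east-1 \
  my-app

The AWS SDK for Go resolves these through its default credential chain, so the same code that uses the IAM role in the testing environment works locally without changes.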
I have a twitter bot which I am attempting to deploy to my server with Docker; I don't want to save my API keys & Access Tokens in the code, so I access them via env variables.
I ssh'ed into the server and exported the keys & tokens in my ~/.profile, yet when I run the Docker container on my server, I get an error as if my keys/tokens are incorrect.
I'm new to Docker so I have to ask - Is my running Docker container able to access these env variables? Or do I have to set them another way such that my Docker container can see them?
A Docker container doesn't inherit the environment variables of your host shell. You need to pass them explicitly when running the container using the -e / --env flag.
Example: docker run --env VAR1=value1 --env VAR2=value2 ...
Documentation: https://docs.docker.com/engine/reference/commandline/run
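If the variables are already exported in the shell you start the container from, you can also forward them by name without repeating the values. The names below (TWITTER_API_KEY, TWITTER_ACCESS_TOKEN, my-bot-image) are placeholders:

# ~/.profile is only read by login shells, so make sure the values are exported first
source ~/.profile
# With -e NAME and no '=value', docker copies the value from the host environment
docker run -e TWITTER_API_KEY -e TWITTER_ACCESS_TOKEN my-bot-image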
I am trying a basic Docker test on a GCP compute instance. I pulled the tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, I can see the container listed as running.
However, the Tomcat admin console does not open. The reason is that the tomcat image tries to create its config files under /usr/local, but that is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or is there any other way to handle it?
Thanks in advance.
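One possible workaround, sketched here under the assumption that the image writes its config under /usr/local/tomcat: copy the default config out of the image once, then bind-mount that writable host directory over the same path.

# Extract the image's default conf directory into the current host directory
docker run --rm tomcat tar -C /usr/local/tomcat -cf - conf | tar -xf -
# Run the container with the host copy mounted writable over /usr/local/tomcat/conf
docker run --rm -d -p 80:8080 -v "$(pwd)/conf:/usr/local/tomcat/conf" tomcat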
I would like to pass my Google Cloud Platform service account JSON credentials file to a Docker container so that the container can access a Cloud Storage bucket. So far I have tried to pass the file as an environment variable on the run command, like this:
Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS=/Users/gcp_credentials.json -t -i image_name
Using the -e flag, and even exporting the same env variable in the command line: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS=/Users/gcp_credentials.json -t -i image_name
But nothing worked, and I always get the following error when running the docker container:
W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184]
All attempts to get a Google authentication bearer token failed, returning an empty token.
Retrieving token from files failed with "Not found: Could not locate the credentials file.".
How do I pass the Google credentials file to a container running locally on my personal laptop?
You cannot "pass" an external path, but have to add the JSON into the container.
Two ways to do it:
Volumes: https://docs.docker.com/storage/volumes/
Secrets: https://docs.docker.com/engine/swarm/secrets/
Secrets work with Docker swarm mode: you create a docker secret, then grant it to a service using the --secret flag. The advantage is that secrets are encrypted at rest and are only decrypted when mounted into containers.
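A minimal sketch of that flow, assuming swarm mode is already initialized (docker swarm init) and image_name stands in for your image:

# Create a secret from the JSON file
docker secret create gcp_credentials gcp_credentials.json
# Secrets are mounted at /run/secrets/<name> inside the service's containers
docker service create \
  --secret gcp_credentials \
  -e GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_credentials \
  image_name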
I log into gcloud in my local environment, then share that JSON file as a volume at the same location in the container.
Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally.
Login locally
To get your default user credentials on your local environment, you have to use the gcloud SDK. You have two commands to get authenticated: gcloud auth login to get authenticated on all subsequent gcloud commands, and gcloud auth application-default login to create your ADC (Application Default Credentials) locally, in a "well-known" location.
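That is, quoting the two commands from the post:

# Authenticate gcloud itself for subsequent gcloud commands
gcloud auth login
# Create the Application Default Credentials file in the well-known location
gcloud auth application-default login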
Note location of credentials
The Google auth library tries to get valid credentials by performing checks in this order:
1. Look at the GOOGLE_APPLICATION_CREDENTIALS environment variable. If it is set, use it, else...
2. Look at the metadata server (only on Google Cloud Platform). If it returns the correct HTTP codes, use it, else...
3. Look at the "well-known" location for a user credentials JSON file.
The "well-known" locations are:
On Linux: ~/.config/gcloud/application_default_credentials.json
On Windows: %appdata%/gcloud/application_default_credentials.json
Share volume with container
Therefore, you have to run your local docker run command like this (the ADC variable is set on its own line first, so that ${ADC} expands in the mount flag):
ADC=~/.config/gcloud/application_default_credentials.json
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>
NB: this is only for local development; on Google Cloud Platform the credentials for the service are automatically inserted for you.
Hello, I have an Ubuntu VM (using a bridged adapter) in which I'm running a Docker container that starts Rundeck from a pre-built war file in a mounted volume. When I run the war for the first time, it creates its files, including this config file:
#loglevel.default is the default log level for jobs:
ERROR,WARN,INFO,VERBOSE,DEBUG
loglevel.default=INFO
rdeck.base=/home/rundeck/rundeck
#rss.enabled if set to true enables RSS feeds that are public (non-authenticated)
rss.enabled=false
server.address=7d142a279564
grails.serverURL=http://7d142a279564:4440
dataSource.dbCreate = update
dataSource.url = jdbc:h2:file:/home/rundeck/rundeck/server/data/grailsdb;MVCC=true
# Pre Auth mode settings
rundeck.security.authorization.preauthenticated.enabled=false
rundeck.security.authorization.preauthenticated.attributeName=REMOTE_USER_GROUPS
rundeck.security.authorization.preauthenticated.delimiter=,
# Header from which to obtain user name
rundeck.security.authorization.preauthenticated.userNameHeader=X-Forwarded-Uuid
# Header from which to obtain list of roles
rundeck.security.authorization.preauthenticated.userRolesHeader=X-Forwarded-Roles
# Redirect to upstream logout url
rundeck.security.authorization.preauthenticated.redirectLogout=false
rundeck.security.authorization.preauthenticated.redirectUrl=/oauth2/sign_in
rundeck.log4j.config.file=/home/rundeck/rundeck/server/config/log4j.properties
As you can see, "server.address" and "grails.serverURL" get the container ID as the hostname.
I can't access the container using this URL, but I can access it at localhost:4440. However, after logging in, Rundeck redirects me to the "grails.serverURL" address, which gives "Server Not Found" as stated before.
This is how I'm starting the container:
sudo docker run -it -v /path/to/source:/path/to/dest -p 4440:4440 <imageID>
When I change "server.address" and "grails.serverURL" to localhost or 127.0.0.1, I can't access the container at all.
Sorry if this question was answered before; I'm new to Docker, have been at this for several days now, and couldn't find a solution. Thanks!
I'm no expert in Rundeck, but looking at the documentation, the rundeck image has two env vars for setting the URL and address: RUNDECK_GRAILS_URL and RUNDECK_SERVER_ADDRESS.
docker run -d -e RUNDECK_GRAILS_URL=http://127.0.0.1:4440 -e RUNDECK_SERVER_ADDRESS=0.0.0.0 -p 4440:4440 rundeck/rundeck
Now you can access your application at http://localhost:4440
If you're running your Docker container on a remote server, update your RUNDECK_GRAILS_URL to RUNDECK_GRAILS_URL=http://<remote_server_ip>:4440.
Now you can access your app at http://remote_server_ip:4440
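Combining that with the volume mount from the question (paths and image ID kept as the question's placeholders), the full command might look like:

sudo docker run -it \
  -v /path/to/source:/path/to/dest \
  -e RUNDECK_GRAILS_URL=http://localhost:4440 \
  -e RUNDECK_SERVER_ADDRESS=0.0.0.0 \
  -p 4440:4440 \
  <imageID>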
https://github.com/getsentry/onpremise
mkdir -p data/{sentry,postgres} - Make our local database and sentry config directories.
This directory is bind-mounted with postgres so you don't lose state!
docker-compose run --rm web config generate-secret-key - Generate a secret key.
Add it to docker-compose.yml in base as SENTRY_SECRET_KEY.
docker-compose run --rm web upgrade - Build the database.
Use the interactive prompts to create a user account.
docker-compose up -d - Lift all services (detached/background mode).
Access your instance at localhost:9000!
I'm new to docker.
I tried to run the Sentry container locally, and it succeeded.
But when I tried to deploy it on a cloud container service platform, I ran into some problems.
The platform provides only one way to run Docker: docker run xxx, unlike AWS, which has a CLI.
So how could I deploy on that platform? Thanks.
Additionally, I have to use that platform because it's my company's product, lol.