Docker daemon (not containers) can't read environment variables - docker

Trying to configure a container running outside of GCP to log to Google Cloud Platform (StackDriver). One requirement is that the Docker daemon is able to locate the environment variable GOOGLE_APPLICATION_CREDENTIALS so it can authenticate. One would assume that the following would work, but it doesn't:
GOOGLE_APPLICATION_CREDENTIALS=/usr/local/keys/project-1.json docker run --log-driver=gcplogs ...
That outputs:
ERROR: for api Cannot start service api:
failed to initialize logging driver: google: could not find default credentials.
See https://developers.google.com/accounts/docs/application-default-credentials
for more information.
Haven't found any documentation on how to set that directly in daemon.json, but I don't want that either, because I might have different containers logging to different GCP projects.
I've tried this on Mac (Docker Desktop) and Debian.

This is a question that keeps coming back. What is happening here is that the environment variable GOOGLE_APPLICATION_CREDENTIALS has to be read by the Docker daemon, which runs as a system service. System daemons don't see the environment variables set in your login session, so you need to set GOOGLE_APPLICATION_CREDENTIALS at the system level.
Here is how to do that on Ubuntu (systemd):
$ sudo mkdir -p /etc/systemd/system/docker.service.d
Create /etc/systemd/system/docker.service.d/env.conf with the following content:
[Service]
Environment="GOOGLE_APPLICATION_CREDENTIALS=/path/to/file.json"
Apply the changes.
$ sudo systemctl daemon-reload
Once done, restart the containerd and docker daemons:
$ sudo systemctl restart containerd
$ sudo systemctl restart docker
Test the gcplogs driver:
docker run --log-driver=gcplogs --log-opt gcp-project="my-project" hello-world
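To confirm the daemon actually picked up the variable, you can query the unit's environment the same way you would for any systemd service:
$ sudo systemctl show --property=Environment docker
Environment=GOOGLE_APPLICATION_CREDENTIALS=/path/to/file.json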

Related

NoCredentialProviders error with awslogs logging driver in Docker on Mac

Hi, I am trying to enable CloudWatch logging for my Docker container on my Mac.
Docker version: 18.03.1-ce, API version: 1.37.
I am getting the following error every time I start the container:
Error response from daemon: failed to initialize logging driver: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I have tried the following approaches:
Exporting AWS_ACCESS_KEY_ID (etc.) in /etc/default/docker
Mounting ~/.aws/credentials into the container
Passing AWS credentials as env variables
But every time I get the same error. Here is the command:
docker run -d -p 5801:8080 --env AWS_REGION=us-west-2 -v /Users/me/.aws/credentials:/root/.aws/credentials:ro --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=perf-log-group --log-opt awslogs-create-group=true --log-opt awslogs-stream=awslogs-ing imageId
Could you please suggest what I am missing here? If I remove the log options, the application works fine and I am able to access the AWS API from the application.
I came here searching the net for an answer too, as I was not able to understand what the docs wanted me to do. This tutorial finally helped me make my way through it: https://transang.me/configure-docker-to-send-log-to-aws/. Although I am on Ubuntu 20.04, I assume we both face the same trouble understanding where to put the env information.
You will have to provide the credentials to the Docker daemon of your local machine, not to the docker build or docker run command.
According to the tutorial, put the config here:
# cat /etc/systemd/system/docker.service.d/override.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=my-aws-access-key"
Environment="AWS_SECRET_ACCESS_KEY=my-secret-access-key"
followed by the commands
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
to flush the changes and restart the Docker daemon, or stop the service and feed the environment variables directly to the daemon:
$ sudo systemctl stop docker
$ sudo env AWS_ACCESS_KEY_ID=my-aws-access-key AWS_SECRET_ACCESS_KEY=my-secret-access-key /usr/bin/dockerd
Pitfall: if you have MFA enabled you may need to provide the session token, too (at least I stumbled over it); then the daemon invocation becomes:
$ sudo env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN /usr/bin/dockerd
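Once the daemon sees the credentials, a minimal smoke test of the awslogs driver looks like this (the group name test-group and region are just examples; awslogs-create-group=true lets the daemon create the group if it is missing):
$ docker run --rm --log-driver=awslogs --log-opt awslogs-region=us-west-2 --log-opt awslogs-group=test-group --log-opt awslogs-create-group=true hello-world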

dockerd --max-concurrent-downloads 1 command not found [duplicate]

I'm working with a poor internet connection and trying to pull and run an image.
I wanted to download one layer at a time, and per the documentation tried adding a flag --max-concurrent-downloads like so:
docker run --rm -p 8787:8787 -e PASSWORD=blah --max-concurrent-downloads=1 rocker/verse
But this gives an error:
unknown flag: --max-concurrent-downloads See 'docker run --help'.
I tried typing docker run --help and interestingly did not see the option --max-concurrent-downloads.
I'm using Docker Toolbox since I'm on an old Mac.
Over here there's an option for --max-concurrent-downloads; however, this doesn't appear in my terminal when typing docker run --help.
How can I change the default of downloading 3 layers at a time to just one?
From the official documentation (https://docs.docker.com/engine/reference/commandline/pull/#concurrent-downloads):
You can pass --max-concurrent-downloads during a pull operation.
You can set --max-concurrent-downloads with the dockerd command.
If you're using the docker Desktop GUI for Mac or Windows:
You can edit the .json file directly in docker engine settings:
This setting needs to be passed to dockerd when starting the daemon, not to the docker client CLI. The dockerd process is running inside of a VM with docker-machine (and other docker desktop environments).
With docker-machine, which is used in Toolbox, you typically pass the engine flags on the docker-machine create command line, e.g.
docker-machine create --engine-opt max-concurrent-downloads=1 default
Once you have a created machine, you can follow the steps from these answers to modify the config of an already running machine, mainly:
SSH into your local docker VM (note: if 'default' is not the name of your docker machine, substitute 'default' with your machine's name):
$ docker-machine ssh default
Open the Docker profile:
$ sudo vi /var/lib/boot2docker/profile
Then in that profile, you would add your --engine-opt max-concurrent-downloads=1.
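For orientation, the relevant part of /var/lib/boot2docker/profile then looks roughly like this (a sketch; your file will contain other EXTRA_ARGS flags that you should leave in place):
EXTRA_ARGS='
--label provider=virtualbox
--engine-opt max-concurrent-downloads=1
'
Exit the VM and restart the machine with docker-machine restart default so the daemon picks up the new flag.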
Newer versions of Docker Desktop make this much easier with a configuration menu (daemon -> advanced) where you can specify your daemon.json entries, and any Linux install accepts the same entries in its daemon.json, like:
{
"max-concurrent-downloads": 1
}
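On a plain Linux install there is no GUI; a sketch of the equivalent change is to put the same entry in /etc/docker/daemon.json and restart the daemon:
$ cat /etc/docker/daemon.json
{
  "max-concurrent-downloads": 1
}
$ sudo systemctl restart docker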

Google Cloud Logging Driver cannot find credentials after reboot

I've followed the directions here, and everything works well until I restart my computer. After restarting, it seems like the docker daemon loses track of the Google credentials.
$ docker run --log-driver=gcplogs ...
fails with:
docker: Error response from daemon: failed to initialize logging driver: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
ERRO[0000] error waiting for container: context canceled
This is strange to me, because running $ systemctl show --property=Environment docker returns the value in my systemd configuration:
Environment=GOOGLE_APPLICATION_CREDENTIALS=/etc/path/to/application_default_credentials.json
If I run $ sudo systemctl restart docker, then docker runs successfully and logs are sent to Stackdriver. But I want this Docker image to run automatically on startup, and restarting docker with sudo gets in the way.
Is there a way to initialize the docker daemon with the necessary environment variables, so gcplogs is ready on boot without restarting docker?
I had two versions of docker installed -- one through adding docker's repo to apt, and one through snap. Running
sudo systemctl list-unit-files | grep docker | grep enabled
showed two installations of docker:
docker.service enabled
snap.docker.dockerd.service enabled
Having two docker installations was causing problems for startup. I removed the snap installation, rebooted, and everything now works.
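For reference, removing the snap installation is a one-liner (assuming the snap is named docker, as in the listing above):
$ sudo snap remove docker
$ sudo reboot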
I think you may try editing the systemd unit dependencies and ordering, letting docker.service start after google-accounts-daemon.service; a sketch of the drop-in follows the service list below.
You can see all the Google services on the VM with:
sudo systemctl list-unit-files | grep google | grep enabled
And you will see
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-network-daemon.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
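A sketch of that ordering as a drop-in for docker.service (the unit name follows the suggestion above; adjust it if your credentials are provisioned by a different service):
# /etc/systemd/system/docker.service.d/ordering.conf
[Unit]
After=google-accounts-daemon.service
Wants=google-accounts-daemon.service
Apply it with sudo systemctl daemon-reload and reboot to test.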

Change docker mode to experimental in google cloud provider

I am using Google Cloud Platform.
I created a cluster and I want to use the CRIU feature of Docker; to do that I need to switch Docker to experimental mode.
I entered the node using SSH and went to /etc/docker to change the key.json file and add an experimental param set to true.
This command:
echo "{\"experimental\": true}" >> /etc/docker/key.json
gave me the error message -bash: key.json: Permission denied
How can I change the Docker mode?
Update: I changed the key.json file using sudo vi, but after restarting Docker with sudo systemctl restart docker, it still reports experimental = false.
I added a daemon.json file in /etc/docker with the content:
{"experimental": true}
and restarted the docker service:
sudo /etc/init.d/docker restart
After the restart, Docker was in experimental mode.
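To verify the mode without grepping config files, you can ask the daemon directly; Server.Experimental is a standard field of docker version:
$ docker version --format '{{.Server.Experimental}}'
true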
