AWS can't read the credentials file - docker

I'm deploying a Flask app using Docker Machine on AWS. The credentials file is located at ~/.aws/credentials:
[default]
aws_access_key_id=AKIAJ<NOT_REAL>7TUVKNORFB2A
aws_secret_access_key=M8G9Zei4B<NOT_REAL_EITHER>pcml1l7vzyedec8FkLWAYBSC7K
region=eu-west-2
Running it as follows:
docker-machine create --driver amazonec2 --amazonec2-open-port 5001 sandbox
According to the Docker docs this should work, but I'm getting this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
Before you ask: yes, I set the permissions in such a way that Docker is allowed to access the credentials file.
What should I do?
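For what it's worth, the error message itself lists the other credential sources the driver accepts; a rough workaround sketch while debugging (the key values below are placeholders, not real credentials):
# Option A: pass the keys explicitly on the command line (placeholder values)
docker-machine create --driver amazonec2 \
  --amazonec2-access-key "AKIA...placeholder..." \
  --amazonec2-secret-key "...placeholder..." \
  --amazonec2-region eu-west-2 \
  --amazonec2-open-port 5001 sandbox
# Option B: export the standard AWS environment variables first
export AWS_ACCESS_KEY_ID="AKIA...placeholder..."
export AWS_SECRET_ACCESS_KEY="...placeholder..."
export AWS_DEFAULT_REGION=eu-west-2
docker-machine create --driver amazonec2 --amazonec2-open-port 5001 sandbox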

Solution found here: https://www.digitalocean.com/community/questions/ssh-not-available-when-creating-docker-machine-through-digitalocean
The problem was running Docker as a snap (from Ubuntu's repo) instead of the official build from Docker; the snap's strict confinement keeps it from reading hidden dot-directories such as ~/.aws. As soon as I uninstalled the Docker snap and installed the official build, Docker was able to find the credentials file immediately.
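For anyone hitting the same thing, the switch looked roughly like this on an Ubuntu box (a sketch; the convenience script is just one of the documented install paths):
sudo snap remove docker                       # remove the snap-packaged Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh                         # install the official Docker Engine build
# (if docker-machine also came from the snap, reinstall it separately from its GitHub releases)
docker-machine create --driver amazonec2 --amazonec2-open-port 5001 sandbox   # retry; ~/.aws/credentials is now readable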

Related

Docker compose -f docker-compose-dev.yaml up fails without error using ecs context

I'm attempting to publish a docker compose file to Amazon ECS using the new docker compose up and an ecs context (using security tokens), but I'm getting blank output in the console.
C:\Repos\Project>docker compose -f docker-compose-up.yaml up
C:\Repos\Project>docker compose ps
C:\Repos\Project>docker compose version
Docker Compose version dev
C:\Repos\Project>docker login
Authenticating with existing credentials...
Login Succeeded
Logging in with your password grants your terminal complete access to your account.
For better security, log in with a limited-privilege personal access token. Learn more at https://docs.docker.com/go/access-tokens/
When I run the above against the default context it works as expected; it's only when I'm using the ecs context.
Does anyone have any ideas?
Thanks in advance.
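One thing worth checking first (a rough sketch; myecs is a placeholder context name) is that the context you are running against was really created with the ECS integration and is the active one, since compose tends to exit silently when pointed at a context that isn't what you expect:
docker context ls                               # the active context is marked with an asterisk
docker context create ecs myecs                 # create an ECS context (prompts for an AWS profile or keys)
docker context use myecs
docker compose -f docker-compose-dev.yaml up    # re-run against the ECS context
docker compose ps                               # should list the services deployed to ECS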

`aws ssm start-session` not working from inside docker container

I have a Docker container based off https://github.com/bopen/docker-ubuntu-pyenv/blob/master/Dockerfile
...where I'm installing the aws-cli and would like to use aws ssm to access a remote instance.
I've tried starting the container with docker-compose AND with docker up; in both cases I've mounted my AWS_PROFILE, and I can access all other aws-cli commands (I tested with ec2 describe and even did an aws ssm send-command to the instance!).
BUT when I do aws ssm start-session --target $instance_id from the container, I get nothing. I'm able to run aws ssm start-session from my local shell to this instance, so I know that SSM is configured properly.
Running it with the --debug flag gives me the exact same output as when I run it locally, minus the Starting session with SessionId: part, obviously.
Is this an aws-cli issue, or some weird container stdout thing? Help please!
[cross-posted here: https://github.com/aws/aws-cli/issues/4465]
OK, so the 'fix' for this was that the Session Manager plugin in the container was not installed properly.
I guess the plugin isn't actually 'optional' as https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html suggests, but is required to start a session with SSM.
I had the wrong plugin installed and session-manager-plugin was returning an error. Getting the right one into the container fixed everything!
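For reference, on a Debian/Ubuntu-based image the install looks roughly like this (URL taken from the AWS install guide linked above; double-check it against the current docs):
curl -o session-manager-plugin.deb \
  "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb"
dpkg -i session-manager-plugin.deb              # run as root inside the container (or prefix with sudo)
session-manager-plugin                          # sanity check: should report a successful install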

Access Docker Container from project registry

So I have my Docker image uploaded to my project's registry. I can navigate to https://console.cloud.google.com/gcr/images/ and I see my image listed there.
Now I want to run a VM in this project and use Docker on it to run this very image.
This is the command within my VM:
sudo /usr/bin/docker run eu.gcr.io/my-project-name/example001
The response is:
Unable to find image 'eu.gcr.io/.../example001:latest' locally
/usr/bin/docker: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication.
See '/usr/bin/docker run --help'.
I can list my images if I define "eu.gcr.io/..." as my project path, but the machine seems to run on ".gcr.io" and is therefore unable to access my image? How would I fix this, and why is my image on "eu.gcr.io" while the machine is on ".gcr.io"? I can't find a way to change this (either move the image to gcr.io or move the machine to eu.gcr.io). However, I'm not sure this is the issue.
Maybe it is an authentication issue with Docker?
The VM basically cannot be "on .gcr.io"; it can run in a non-European region/zone, but that shouldn't be a problem.
From a GCP access-control point of view, the registry is just a bucket.
So I believe the first thing you need to check is that the VM has access to Google Cloud Storage.
With gcloud:
gcloud compute instances describe <instance-name>
check whether the VM has the scope to read from devstorage:
serviceAccounts:
- email: ...-compute@developer.gserviceaccount.com
scopes:
- https://www.googleapis.com/auth/devstorage.read_only
- ...
This scope should be in place to read from registry:
https://www.googleapis.com/auth/devstorage.read_only
If you don't have such a scope on the VM, but do have gcloud configured there, you can use gcloud as a credential helper:
gcloud auth configure-docker
as stated in the doc you referred to: https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper
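Putting it together, once the scope (or gcloud credentials) is in place, a plain docker pull against the image path from the question should work; a quick sketch:
gcloud auth configure-docker                    # registers gcloud as a credential helper for *.gcr.io in ~/.docker/config.json
docker pull eu.gcr.io/my-project-name/example001
# note: if you invoke docker through sudo, the helper must be configured for that user (root) as well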
The answer is found here:
https://serverfault.com/questions/900026/gcp-no-access-to-container-registry-from-compute-engine
It is the docker command that needs the authorization; the hostname (eu.gcr.io) is not the issue here. I used the 'gcloud docker -- pull ...' command to get the image from the registry for use within my VM.
After you create a Linux VM on GCP and SSH into it, you have to install the Google Cloud SDK, either using the install scripts or manually.
If you are running Ubuntu, follow the documentation here; if you are installing on Red Hat or CentOS, follow the documentation here. After installing the SDK, run gcloud init to initialize it: open a terminal, type gcloud init, and configure your profile. After that you have to install Docker:
sudo apt-get -y install docker-ce
sudo systemctl start docker
You need to have access to the registries you will be pushing to and pulling from.
Configure Docker to use gcloud as a credential helper. To do so, run:
gcloud auth configure-docker
After that you can push or pull images to/from your registry using the gcloud command with docker, as shown below:
Push: gcloud docker -- push gcr.io/google-containers/example-image:latest
Pull: gcloud docker -- pull gcr.io/google-containers/example-image:latest

Unable to ssh to the master node in a locally installed Mesos cluster

I am a newbie to Mesos. I have installed a DC/OS cluster locally on one system (CentOS 7).
Everything came up properly and I am able to access the DC/OS GUI, but when I try to connect through the CLI, it asks me for a password.
I was not prompted for any kind of password during the local installation through Vagrant.
But when I issue the following command:
[root@blade7 dcos-vagrant]# dcos node ssh --master-proxy --leader
Running `ssh -A -t core@192.168.65.90 ssh -A -t core@192.168.65.90 `
core@192.168.65.90's password:
Permission denied, please try again.
core@192.168.65.90's password:
I don't know what password to give.
Kindly help me resolve this issue.
Since the local installation is based on Vagrant, you can use the following convenient workaround: log directly into the virtual machines using Vagrant's ssh.
Open a terminal and enter vagrant global-status to see a list of all running Vagrant environments (name/id).
Switch into your DC/OS installation directory (e.g., cd ~/dcos-vagrant), which contains the file Vagrantfile.
Run vagrant ssh <name or (partial) id> in order to ssh into the virtual machine. For example, vagrant ssh m1 connects to the master/leader node, which gives you essentially the same shell as dcos node ssh --master-proxy --leader would.
Two more tips:
Within the virtual machine, the directory /vagrant is mounted to the current directory of the host machine, which is handy for transferring files into/from the VM.
You may try to find out the correct ssh credentials of the default vagrant user and then add these (rather than the pem file retrieved from a cloud service provider) via ssh-add on your host machine. This should let you log in via dcos node ssh --master-proxy --leader --user=vagrant without a password; see the sketch below.
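A rough sketch of that second tip (the key path below is Vagrant's default insecure key; a box provisioned with a per-machine key stores it under the project's .vagrant directory instead):
ssh-add ~/.vagrant.d/insecure_private_key                            # Vagrant's default insecure key
# or, for a per-machine generated key (path is an example):
# ssh-add ~/dcos-vagrant/.vagrant/machines/m1/virtualbox/private_key
dcos node ssh --master-proxy --leader --user=vagrant                 # should now log in without a password prompt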
The command shows that you are trying to log in to the server using the user ID "core". If you do not know the password of user "core", I suggest resetting the "core" user's password and trying again.

Error creating Docker container in Bluemix

To create a Docker container in Bluemix we need to install the container plug-in and container extension. After installing the container extension, Docker should be running, but it shows this error:
[root@oc0608248400 Desktop]# cf ic login
** Retrieving client certificates from IBM Containers
** Storing client certificates in /root/.ice/certs
Successfully retrieved client certificates
** Checking local docker configuration
Not OK
Docker local daemon may not be running. You can still run IBM Containers on the cloud
There are two ways to use the CLI with IBM Containers:
Option 1) This option allows you to use `cf ic` for managing containers on IBM Containers while still using the docker CLI directly to manage your local docker host.
Leverage this Cloud Foundry IBM Containers plugin without affecting the local docker environment:
Example Usage:
cf ic ps
cf ic images
Option 2) Leverage the docker CLI directly. In this shell, override local docker environment to connect to IBM Containers by setting these variables, copy and paste the following:
Notice: only commands with an asterisk(*) are supported within this option
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/root/.ice/certs
export DOCKER_TLS_VERIFY=1
Example Usage:
docker ps
docker images
exec: "docker": executable file not found in $PATH
Please suggest what I should do next.
The error is already telling you what to do:
exec: "docker": executable file not found in $PATH
means it cannot find the docker executable.
Thus the following should tell you where it is located, and that path needs to be appended to the PATH environment variable:
dockerpath=$(dirname `find / -name docker -type f -perm /a+x 2>/dev/null`)
export PATH="$PATH:$dockerpath"
What this does is search the filesystem from the root for a file named 'docker' that has the executable bit set (ignoring error messages), and return the absolute path of the directory containing it as $dockerpath. Then it exports the new PATH for the current shell only.
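To make that survive new shells (assuming bash), append the same export to your shell profile as well:
echo "export PATH=\"\$PATH:$dockerpath\"" >> ~/.bashrc   # persist for future shells ($dockerpath must still be set)
source ~/.bashrc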
The problem seems to be that your Docker daemon isn't running.
Try restarting it:
sudo systemctl restart docker
(or sudo service docker restart on older init systems). If you've just installed Docker you may need to reboot your machine first.
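A quick way to confirm the daemon is reachable before retrying the login (a sketch; nothing Bluemix-specific):
sudo systemctl status docker    # is the daemon running?
docker ps                       # does the client reach the daemon?
cf ic login                     # retry the IBM Containers login check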
