Docker compose -f docker-compose-dev.yaml up fails without error using ecs context

I'm attempting to publish a Docker Compose file to Amazon ECS using the new docker compose up and an ecs context (using security tokens), however I'm getting blank output in the console.
C:\Repos\Project>docker compose -f docker-compose-up.yaml up
C:\Repos\Project>docker compose ps
C:\Repos\Project>docker compose version
Docker Compose version dev
C:\Repos\Project>docker login
Authenticating with existing credentials...
Login Succeeded
Logging in with your password grants your terminal complete access to your account.
For better security, log in with a limited-privilege personal access token. Learn more at https://docs.docker.com/go/access-tokens/
When I run the above against the default context it works as expected. It's just when I'm using the ecs context.
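For reference, this is roughly how the ecs context was created and selected (a sketch; the context name myecscontext is a placeholder):
docker context create ecs myecscontext
docker context ls
docker context use myecscontext
docker compose -f docker-compose-dev.yaml up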
Does anyone have any ideas?
Thanks in advance

Related

Docker logoff Azure

I had a ChirpStack Docker Compose container on my local Windows 10 PC. It was configured and running fine.
Then I did a stupid thing: I tried to make this system run on Azure by entering the commands:
docker login azure
docker context create aci myacicontext
and some more ..
In the end I failed with Azure, and now I would like my local Docker to run again using the good old command that worked fine before Azure:
docker-compose up
Got error:
The platform targeted with the current context is not supported.
Make sure the context in use targets a Docker Engine.
I suppose this error is because I was still logged in to Azure, so I executed the command:
docker logout
But this did not help. How do I get docker-compose running on my Windows machine again?
I faced the same issue, and the way to solve it was to change the context back to default:
docker context use default
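To verify which context is active afterwards, and to remove the leftover ACI context, something like this should work (a sketch; myacicontext matches the name created above):
docker context ls
docker context rm myacicontext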

Can you bypass docker login when using AWS CDK to upload image to ECR?

I am using a CDK code example (I don't believe it's relevant to my question, but here is the link: https://github.com/TysonWorks/cdk-examples/tree/master/ecs-go-api) which attempts to upload a Docker image to AWS ECR; however, the cdk deploy command fails.
After much research and testing with the Docker CLI itself, I discovered this happens when the AWS password used by Docker to log in to ECR is too long for the operating system's Credential Manager (Windows 10 in my case).
This seems to be a known issue and one workaround is to use a Credential Helper (see: https://github.com/awslabs/amazon-ecr-credential-helper and read: https://aws.amazon.com/blogs/compute/authenticating-amazon-ecr-repositories-for-docker-cli-with-credential-helper/).
This approach allows one to use docker push without docker login, and it works fine for me: I was able to use the Docker CLI to push the image to AWS ECR. However, the AWS CDK deploy process uses the docker login approach, so I hit the error once again...
Is there a way to change the AWS CDK to use 'docker push' without 'docker login'?
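For reference, the credential helper mentioned above is wired up in ~/.docker/config.json roughly like this (a sketch; the account ID and region are placeholders):
{
  "credHelpers": {
    "123456789012.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
With this in place, docker push consults the ecr-login helper for that registry instead of credentials stored by docker login.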

AWS can't read the credentials file

I'm deploying a Flask app using Docker Machine on AWS. The credentials file is located in ~/.aws/:
[default]
aws_access_key_id=AKIAJ<NOT_REAL>7TUVKNORFB2A
aws_secret_access_key=M8G9Zei4B<NOT_REAL_EITHER>pcml1l7vzyedec8FkLWAYBSC7K
region=eu-west-2
Running it as follows:
docker-machine create --driver amazonec2 --amazonec2-open-port 5001 sandbox
According to the Docker docs this should work, but I'm getting this output:
Error setting machine configuration from flags provided: amazonec2 driver requires AWS credentials configured with the --amazonec2-access-key and --amazonec2-secret-key options, environment variables, ~/.aws/credentials, or an instance role
Before you ask: yes, I set permissions in such a way that Docker is allowed to access the credentials file.
What should I do ?
Solution found here: https://www.digitalocean.com/community/questions/ssh-not-available-when-creating-docker-machine-through-digitalocean
The problem was running Docker as a snap (from Ubuntu's repo) instead of the official build from Docker. As soon as I uninstalled the Docker snap and installed the official build, Docker was able to find the credentials file immediately.
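As a fallback, the same credentials can also be passed explicitly on the command line, as the error message itself suggests (a sketch; the key values are the placeholders from the question):
docker-machine create --driver amazonec2 \
  --amazonec2-access-key AKIAJ<NOT_REAL>7TUVKNORFB2A \
  --amazonec2-secret-key M8G9Zei4B<NOT_REAL_EITHER>pcml1l7vzyedec8FkLWAYBSC7K \
  --amazonec2-region eu-west-2 \
  --amazonec2-open-port 5001 \
  sandbox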

Docker in Docker unable to push

I'm trying to execute docker commands inside of a Docker container (don't ask why). To do so I start up a container by running:
sudo docker run -v /var/run/docker.sock:/var/run/docker.sock -it my_docker_image
I am able to run all of the docker commands (pull, login, images, etc.), but when I try to push to my remote (GitLab) registry I get access denied. Yes, I did do a docker login and was able to log in successfully.
When looking at the GitLab logs I see an error telling me no access token was sent with the push. After I do a docker login I see a /root/.docker/config.json with the remote URL and a string of random characters (my credentials in base64, I believe). I'm using an access token as my password because I have MFA enabled on my GitLab server.
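One way to sanity-check what docker login stored (a sketch; <auth-string> stands for the auth value in your own config.json):
cat /root/.docker/config.json
# the auth value should decode to "<username>:<access-token>"
echo "<auth-string>" | base64 -d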
Appreciate the help!
I ended up resolving the issue by using docker:stable as my runner image. Not quite sure what the problem was with the centos:centos7 image.

GitLab CI ssh registry login

I have a GitLab project gitlab.com/my-group/my-project which has a CI pipeline that builds an image and pushes it to the project's GitLab registry registry.gitlab.com/my-group/my-project:tag. I want to deploy this image to Google Compute Engine, where I have a VM running docker.
Easy enough to do it manually by ssh'ing into the VM, then docker login registry.gitlab.com and docker run ... registry.gitlab.com/my-group/my-project:tag. Except the docker login command is interactive, which is a no-go for CI. It can accept a username and password on the command line, but that hardly feels like the right thing to do, even if my login info is in a secret variable (storing my GitLab login credentials in a GitLab secret variable?...)
This is the intended workflow on the Deploy stage of the pipeline:
Either install the gcloud tool or use an image with it preinstalled
gcloud compute ssh my-gce-vm-name --quiet --command \
"docker login registry.gitlab.com && docker run registry.gitlab.com/my-group/my-project:tag"
Since the gcloud command would be running within the GitLab CI Runner, it could have access to secret variables, but is that really the best way to log in to the GitLab Registry over ssh from GitLab?
I'll answer my own question in case anyone else stumbles upon it. GitLab creates ephemeral access tokens for each build of the pipeline that give the user gitlab-ci-token access to the GitLab Registry. The solution was to log in as the gitlab-ci-token user in the build.
.gitlab-ci.yml (excerpt):
deploy:
  stage: deploy
  before_script:
    - gcloud compute ssh my-instance-name --command "docker login registry.gitlab.com/my-group/my-project -u gitlab-ci-token -p $CI_BUILD_TOKEN"
The docker login command creates a local configuration file at $HOME/.docker/config.json in which your credentials are stored; it looks like this (also see the documentation on this):
{
  "auths": {
    "<registry-url>": {
      "auth": "<credentials>"
    }
  }
}
As long as the config.json file is present on your host and your credentials (in this case simply being stored as base64("<username>:<password>")) do not change, there is no need to run docker login on every build or to store your credentials as variables for your CI job.
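For illustration, that auth value can be reproduced by hand (a sketch; username and password are placeholders):
echo -n "<username>:<password>" | base64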
My suggestion would be to simply ensure that the config.json file is present on your target machine (either by running docker login once manually or by deploying the file using whatever configuration management tool you like). This saves you from handling the login and managing credentials within your build pipeline.
Regarding the SSH login itself: this should work just fine. If you really want to eliminate the SSH login, you could set up the Docker engine on your target machine to listen on an external socket, configure authentication and encryption using TLS client certificates as described in the official documentation, and talk directly to the remote server's Docker API from within the build job:
variables:
  DOCKER_HOST: "tcp://<target-server>:2376"
  DOCKER_TLS_VERIFY: "1"
script:
  - docker run registry.gitlab.com/my-group/my-project:tag
We had the same "problem" on other hosting providers. Our solution is to use a custom script which runs on the target machine and can be called via a REST API endpoint (secured by Basic Auth or whatever).
That way you can trigger the remote host to do the docker login and upgrade your service without granting SSH access to GitLab CI.
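A minimal sketch of what such a script on the target machine might look like (everything here is hypothetical, including the script name and token handling; adapt to your setup):
#!/bin/sh
# deploy.sh - invoked by the REST API endpoint handler; $1 is a registry token
docker login registry.gitlab.com -u gitlab-ci-token -p "$1"
docker pull registry.gitlab.com/my-group/my-project:tag
docker stop my-project 2>/dev/null || true
docker rm my-project 2>/dev/null || true
docker run -d --name my-project registry.gitlab.com/my-group/my-project:tag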
