boto3: config profile could not be found - docker

I'm testing my Lambda function wrapped in a Docker image, and I provided the environment variable AWS_PROFILE=my-profile to it. However, I get the error "The config profile (my-profile) could not be found", even though this profile exists in my ~/.aws/credentials and ~/.aws/config files. Below are my commands:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 <image>:latest lambda_func.handler
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{"body":{"x":5, "y":6}}'
The thing is that if I just run the Lambda function as a separate Python script, it works.
Can someone show me what went wrong here?
Thanks

When AWS shows how to use their containers, such as for local AWS Glue, they share ~/.aws/ with the container in read-only mode using the volume option:
-v ~/.aws:/root/.aws:ro
Thus, if you wish to follow the AWS example, your docker command could be:
docker run -e BUCKET_NAME=my-bucket -e AWS_PROFILE=my-profile -p 9000:8080 -v ~/.aws:/root/.aws:ro <image>:latest lambda_func.handler
The other way is to pass the AWS credentials using Docker environment variables, which is what you are already trying.

You need to set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Your home directory (~) is not copied into the Docker container, so AWS_PROFILE will not work.
See here for an example: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
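For example, a minimal hedged sketch of passing the credentials directly (the key values and region are placeholders; boto3 picks these variables up without needing any profile):
docker run -e BUCKET_NAME=my-bucket \
-e AWS_ACCESS_KEY_ID=<your-access-key-id> \
-e AWS_SECRET_ACCESS_KEY=<your-secret-access-key> \
-e AWS_DEFAULT_REGION=<your-region> \
-p 9000:8080 <image>:latest lambda_func.handler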

Related

Unable to assign environment file in singularity on high-performance cluster: 'file not found in .'

I think I have a rather easy question, but I was not able to find a solution, so I hope to get some useful hints.
I'm trying to get a program running in Singularity on a high-performance cluster. For this program, called FORCE (more specifically, I want to use the force-level1-csd function, which simply downloads a set of satellite images), I need to reference an environment file called .boto, which contains the gsutil credentials needed to download a large number of satellite images.
There is a Docker tutorial for this program, which works nicely. Here is example code using Docker that I successfully ran on my own computer (everything after force-level1-csd consists of arguments specific to the function and is likely not relevant to the problem described here):
docker run -it -v /scratch/csxxyy/force/:/opt/data --env FORCE_CREDENTIALS=/app/credentials/ -v $HOME:/app/credentials/ davidfrantz/force force-level1-csd -n -c 0,90 -d 20150701,20221017 -s S2A,S2B /opt/data/meta /opt/data/level1/sentinel2 /opt/data/level1/l1_pool.txt /opt/data/aoi_force_level1.shp
Since Docker is not available on the HPC, and I cannot download the 30 TB of satellite images on my local machine, I have to use Singularity.
But with the Singularity code I use, I get the following error:
Error: gsutil config file was not found in .
The Singularity code I use is an attempt to simply "translate" the docker commands to Singularity and looks like this:
singularity exec --bind /scratch/csxxyy/force/:/opt/data/ --env FORCE_CREDENTIALS=/app/credentials docker://davidfrantz/force:latest force-level1-csd -n -c 0,90 -d 20150701,20221017 -s S2A,S2B /opt/data/meta /opt/data/level1 /opt/data/level1/l1_pool.txt /opt/data/aoi_force_level1.shp
Unfortunately, I get the above error. Precisely, the program seems unable to identify the credentials file .boto in /scratch/csxxyy/force/app/credentials/. I know for sure that the file exists at this location.
I have experimented with the arguments --env and --env-file, e.g. --env FORCE_CREDENTIALS=/opt/data/app/credentials and --env-file FORCE_CREDENTIALS=/app/credentials/.boto, but the error did not change. I also renamed .boto to boto because I thought that hidden files might not be visible, but that was not successful either.
So my questions are:
What am I missing here? What is the correct "translation" from Docker to Singularity in my case?
Thank you very much for your help.
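For what it's worth, comparing the two commands above: the Docker call bind-mounts $HOME into /app/credentials/ (-v $HOME:/app/credentials/), while the Singularity call only binds /scratch/csxxyy/force/. A hedged, untested sketch of a closer translation would add that bind as well:
singularity exec \
--bind /scratch/csxxyy/force/:/opt/data/ \
--bind $HOME:/app/credentials/ \
--env FORCE_CREDENTIALS=/app/credentials \
docker://davidfrantz/force:latest force-level1-csd -n -c 0,90 -d 20150701,20221017 -s S2A,S2B /opt/data/meta /opt/data/level1 /opt/data/level1/l1_pool.txt /opt/data/aoi_force_level1.shp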

Can't log in to app after deploying it on Cloud Run

I have deployed my doccano app on Cloud Run but can't log in. It keeps complaining about "Incorrect username or password".
I think this is because I haven't provided the auth arguments; locally I usually need to start the container with this kind of command:
docker container create --name doccano \
-e "ADMIN_USERNAME=admin" \
-e "ADMIN_EMAIL=admin#example.com" \
-e "ADMIN_PASSWORD=password" \
-p 8000:8000 chakkiworks/doccano
But I don't know where on GCP Cloud Run I can add such information.
Can someone help? Thanks!
You can add your environment variables under the VARIABLES tab, next to the CONTAINER section in the Advanced settings; set them as Name/Value pairs.
Or update your service via command line:
gcloud run services update SERVICE --update-env-vars KEY1=VALUE1,KEY2=VALUE2
If you have multiple env variables, separate them with a comma ','.
However, in production, please avoid storing your sensitive info in environment variables, because it can easily be accessed in plaintext. I recommend that you take the time to learn how to use Secret Manager.
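As a hedged sketch (the secret and service names here are assumptions), a password stored in Secret Manager can be exposed to the service as an environment variable, provided the service's runtime service account has the Secret Manager Secret Accessor role:
echo -n "password" | gcloud secrets create doccano-admin-password --data-file=-
gcloud run services update SERVICE --update-secrets ADMIN_PASSWORD=doccano-admin-password:latest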

Problem running gcsfuse on Google App Engine

I am trying to run the Airflow webserver on App Engine Flexible; however, for it to work I need a mounted GCS bucket. I am using a custom runtime.
The reason I am doing this is to get the secured endpoint that App Engine provides together with IAP.
My app.yaml is a simple file with the service name, env and runtime.
My Dockerfile is mostly apt-get installs, and the CMD mounts gcsfuse and runs the Airflow webserver; nothing special.
The error I am getting when trying to use gcsfuse in App Engine is:
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: fuse device not found, try 'modprobe fuse' first
I know that Google Composer exists, but it is way too expensive for my needs. So I prefer to create a VM with a scheduler, plus a webserver on GAE, sharing a GCS bucket; similar to what Composer gives, but without all the HA and the insane cost for the simple things I want to run.
I am trying to do this in App Engine; all the answers I have found so far mention GKE for some reason.
I know it is a privilege problem; however, in App Engine I do not see any option to set privileges, so a way to do it would be very helpful.
Is it even possible to do what I want on App Engine?
This is possible. I'll show you how to do it manually; you might need a shell script to deal with multiple instances.
define several vars used in this manual
service=YOUR_APPENGINE_SERVICE
version=YOUR_APPENGINE_VERSION
project=PROJECTID
get instance list
gcloud app instances list --project $project
SERVICE VERSION ID VM_STATUS DEBUG_MODE
default *************** instance-id-1 RUNNING YES
default *************** instance-id-2 RUNNING
ssh into one instance
gcloud app instances ssh instance-id-1 --service $service --version $version --project $project
get image id
imageid=$(docker ps | grep gaeapp | awk '{print $2}')
This stores the image ID in $imageid, which is used below.
get env of gaeapp
docker exec gaeapp env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=*****
GAE_MEMORY_MB=614
GAE_INSTANCE=****
GAE_SERVICE=default
PORT=8080
GCLOUD_PROJECT=*****
GAE_VERSION=*****
GOOGLE_CLOUD_PROJECT=*****
restart gaeapp with privilege
docker rm -f gaeapp
docker run --privileged -d -p 8080:8080 --name gaeapp -e GAE_MEMORY_MB=614 -e GAE_INSTANCE=instance-id-1 -e GAE_SERVICE=$service -e PORT=8080 -e GCLOUD_PROJECT=$project -e GAE_VERSION=$version -e GOOGLE_CLOUD_PROJECT=$project $imageid
enter gaeapp (assuming you have gcsfuse installed and a service account key JSON at /test-service-account.json)
$ docker exec -it gaeapp bash
[in gaeapp] # GOOGLE_APPLICATION_CREDENTIALS=/test-service-account.json gcsfuse BUCKET /mnt/
Using mount point: /mnt
Opening GCS connection...
Opening bucket...
Mounting file system...
File system has been successfully mounted.
To be honest, I had tried all possible solutions, and finally the above solution worked. Unfortunately, it worked for 2-3 days only. After some time, App Engine restarts the instances automatically, without any failure in the app, so all the changes made for gcsfuse disappeared.
The main thing for gcsfuse to work in a container is to run the Docker image in privileged mode, and App Engine does not allow that.
The final solution that we are using is GKE, which is working fine.
Note: It was expected that GAE would have some provision for privileged mode, but it does not have one now. The Google team may introduce it in the future. Thanks!

How to launch a rails console in a Fargate container

I would like to open a Rails console in a Fargate container to interact with my production installation.
However, after searching the web and posting in the AWS forum, I could not find an answer to this question.
Does anyone know how I can do this? This seems like a mandatory thing to have in any production environment, and having no easy way to do it is kind of surprising coming from such a respected cloud provider as AWS.
Thanks
[Update 2021]: It is now possible to run a command in interactive mode with AWS Fargate!
News: https://aws.amazon.com/fr/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/
The command to run is (--container is optional when the task has a single container):
aws ecs execute-command \
--cluster cluster-name \
--task task-id \
--container container-name \
--interactive \
--command "rails c"
Troubleshooting:
Check the AWS doc for IAM permissions: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-exec.html#ecs-exec-prerequisites
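One common prerequisite, as a hedged sketch (cluster and service names are placeholders): ECS Exec has to be enabled on the service before execute-command will work, e.g.
aws ecs update-service \
--cluster cluster-name \
--service service-name \
--enable-execute-command \
--force-new-deployment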
After trying lots of things, I found a way to open a Rails console pointing to my production environment, so I will post it here in case somebody comes across the same issues.
To summarise, I had a Rails application deployed on Fargate, connected to an RDS Postgres database.
What I did was create a Client VPN endpoint into the VPC hosting my Rails app and my RDS database.
Then, after connecting to this VPN, I simply run my Rails production container locally (with the same environment variables), overriding the container command to start the console (bundle exec rails c production).
Since it runs on my local machine, I can attach a TTY to this container as usual and access my production console.
I think this solution is good because it allows any developer working on the project to open a console without incurring any costs, and a well-thought-out security policy on the AWS end ensures that console access is secure; plus, you don't have to expose your database outside of your VPC.
Hope this helped someone
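For illustration, a hedged sketch of that local run while connected to the VPN (the image name, database endpoint and credentials are placeholders based on the description above):
docker run -it --rm \
-e RAILS_ENV=production \
-e DATABASE_URL=postgresql://user:password@my-rds-endpoint:5432/mydb \
<your-production-image>:latest \
bundle exec rails c production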
Doing any sort of docker exec is a nightmare with ECS and Fargate, which makes doing things like shells or migrations very difficult.
Thankfully, a Fargate task on ECS is really just an AWS server running a few super customized docker run commands. So if you have docker, jq, and the AWS CLI either on EC2 or your local machine, you can fake some of those docker run commands yourself and enter a bash shell. I do this for Django so I can run migrations and enter a Python shell, but I'd assume it's the same for Rails (or any other container that you need bash in).
Note that this only works if you only care about one container, spelled out in your task definition, running at a time, although I'd imagine you could jerry-rig something more complex easily enough.
For this the AWS CLI will need to be logged in with the same IAM permissions as your fargate task. You can do this locally by using aws configure and providing credentials for a user with the correct IAM permissions, or by launching an EC2 instance that has a role either with identical permissions, or (to keep things really simple) the role that your fargate task is running and a security group with identical access (plus a rule that lets you SSH into the bastion host.) I like the EC2 route, because funneling everything through the public internet and a VPN is... slow. Plus you're always guaranteed to have the same IAM access as your tasks do.
You'll also need to be on the same subnet as your fargate tasks are located on, which can usually be done via a VPN, or by running this code on a bastion EC2 host inside your private subnet.
In my case I store my configuration parameters as SecureStrings within the AWS Systems Manager Parameter Store and pass them in using the ECS task definition. Those can be pretty easily acquired and set to a local environment variable using
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name parameter.name.database_url \
| jq '.Parameter["Value"]' -r)
I store my images on ECR, so I then need to log my local Docker client in to ECR:
eval $(aws ecr get-login --no-include-email --region $REGION)
Then it's just a case of running an interactive docker container that passes in the DATABASE_URL, pulls the correct image from ECR, and enters bash. I also expose port 8000 so I can run a webserver inside the shell if I want, but that's optional.
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG \
/bin/bash
Once you run that you should see your copy of docker download the image from your container repository then launch you into bash (assuming bash is installed inside your container.) Docker has a pretty solid cache, so this will take a bit of time to download and launch the first time but after that should be pretty speedy.
Here's my full script
#!/bin/bash
REGION=${REGION:-us-west-2}
ENVIRONMENT=${ENVIRONMENT:-staging}
DOCKER_REPO_NAME=${DOCKER_REPO_NAME:-reponame}
TAG=${TAG:-latest}
ACCOUNT_ID=$(aws sts get-caller-identity | jq -r ".Account")
export DATABASE_URL=$(aws ssm get-parameter --region $REGION \
--with-decryption --name projectname.$ENVIRONMENT.database_url \
| jq '.Parameter["Value"]' -r)
eval $(aws ecr get-login --no-include-email --region $REGION)
IMAGE=$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$DOCKER_REPO_NAME:$TAG
docker run -i -t \
-e DATABASE_URL \
-p 8000:8000 \
$IMAGE \
/bin/bash
You cannot SSH to the underlying host when you are using the Fargate launch type for ECS. This means that you cannot docker exec into a running container.
I haven't tried this on Fargate, but you should be able to create a Fargate task in which the command is rails console.
Then if you configure the task as interactive, you should be able to launch the interactive container and have access to the console via stdin.
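If you go that route, a hedged sketch of overriding the container command at launch (cluster, task definition, container name and network settings are hypothetical, and whether stdin actually attaches this way on Fargate is untested):
aws ecs run-task \
--cluster cluster-name \
--launch-type FARGATE \
--task-definition my-rails-task \
--network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}' \
--overrides '{"containerOverrides":[{"name":"web","command":["rails","console"]}]}'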
Ok, so I ended up doing things a bit differently. Instead of trying to run the console on Fargate, I just run a console on my localhost, but configure it to use RAILS_ENV='production' and let it use my RDS instance.
Of course, to make this work you have to expose your RDS instance through an inbound rule in its security group. It's wise to configure it so that it only allows your local IP, to keep it a bit more secure.
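For example, a hedged sketch of such an inbound rule (security group ID, port and IP are placeholders):
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxx \
--protocol tcp \
--port 5432 \
--cidr <your-local-ip>/32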
The docker-compose.yml then looks something like this:
version: '3'
services:
  web:
    stdin_open: true
    tty: true
    build: .
    volumes:
      - ./rails/.:/your-app
    ports:
      - "3000:3000"
    environment: &env_vars
      RAILS_ENV: 'production'
      PORT: '8080'
      RAILS_LOG_TO_STDOUT: 'true'
      RAILS_SERVE_STATIC_FILES: 'true'
      DATABASE_URL: 'postgresql://username:password@yours-aws-rds-instance:5432/your-db'
When you then run docker-compose run web rails c it uses your local Rails codebase, but makes live changes to your RDS DB (the prime reason why you'd like access to rails console anyway).

Docker Proxy Setup using environment variable

While working behind corporate proxies...
Why can't Docker pick up the proxy-specific values from the environment variables (http_proxy, https_proxy, ...)?
Usually you get a timeout issue while pulling an image, even if the proxy URL is set in an environment variable.
I have to set the value (hard-code the same value again) by creating config files in the /etc/systemd/system/docker.service.d folder.
If we change the proxy URL, we have to make changes in a different place. Is there any way to reference the value from an environment variable?
I have tried the docker run -e env_proxy_variable=proxy_url but got the same timeout issue.
Consider using the below instead:
export HTTP_PROXY=http://xxx:port/
export HTTPS_PROXY=http://xxx:port/
export FTP_PROXY=http://xxx:port/
You can hardcode these variables in the /etc/default/docker file so that they are exported whenever docker is started.
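Alternatively, a hedged sketch of the systemd drop-in approach mentioned in the question (the drop-in file name is an assumption; the values mirror the exports above):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://xxx:port/"
Environment="HTTPS_PROXY=http://xxx:port/"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker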
You can check whether the environment variable has been exported by echoing it ($name_of_var). For example, after running
docker run --env HTTP_PROXY="123.45.21.32" -it ubuntu_latest /bin/bash
type
echo $HTTP_PROXY
It is likely that your DNS server isn't configured. Try
cat /etc/resolv.conf
if you see something like:
nameserver 8.8.8.8
then it's likely that the DNS server is inaccessible behind the firewall. You can pass a DNS server address along with the docker run command like so:
docker run --env HTTP_PROXY="123.45.21.32" --dns=172.10.18.0 -it ubuntu_latest /bin/bash
