Google gsutil auth without prompt - docker

I want to use gsutil inside a Docker container. I have created an OAuth2 Service Account JSON file.
How can I setup gsutil auth to use the JSON config file and execute commands without prompting?
Currently I get something like this:
$ gsutil config -e
It looks like you are trying to run "/.../google-cloud-sdk/bin/bootstrapping/gsutil.py config".
The "config" command is no longer needed with the Cloud SDK.
To authenticate, run: gcloud auth login
Really run this command? (y/N) y
This command will create a boto config file at /.../.boto
containing your credentials, based on your responses to the following questions.
What is the full path to your private key file?
What command/parameters/setup do I have to use to circumvent the prompts?

Solved this issue by executing:
gcloud auth activate-service-account --key-file=/opt/gcloud/auth.json
The whole example and finished container can be found here: blacklabelops/gcloud
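In a Dockerfile, the same activation can be scripted non-interactively. A minimal sketch, assuming the key file sits next to the Dockerfile and is copied to /opt/gcloud/auth.json (both the base image tag and the path are illustrative):

```dockerfile
# Sketch: non-interactive service-account auth at build time
FROM google/cloud-sdk:slim
COPY auth.json /opt/gcloud/auth.json
RUN gcloud auth activate-service-account --key-file=/opt/gcloud/auth.json
# gsutil now uses the activated service account without prompting
```

Note that baking credentials into an image layer is only appropriate for private images; mounting the key at run time is safer.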

If you want gsutil only and bypass the prompt you can do it easily with an expect script:
#!/usr/bin/expect
spawn gsutil config -e
expect "What is the full path to your private key file?" { send "/path/your.key\r" }
expect "Would you like gsutil to change the file permissions for you? (y/N)" { send "y\r" }
expect "What is your project-id?" { send "your-project-42\r" }
interact
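If expect is not available, piping the answers in the same prompt order may also work, since gsutil config reads its answers from stdin. An untested sketch; the key path and project id are placeholders, and it only holds if the prompt order matches the script above:

```shell
printf '%s\n' "/path/your.key" "y" "your-project-42" | gsutil config -e
```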

The -o Credentials:gs_service_key_file=<path to JSON file> option also does the job, using the boto configuration override parameters documented at https://cloud.google.com/storage/docs/gsutil/addlhelp/TopLevelCommandLineOptions
$ gsutil -v
gsutil version: 4.57
$ gsutil -o Credentials:gs_service_key_file=key.json ls -al gs://bucket/filename
79948 2021-05-24T02:12:25Z gs://bucket/filename#1111111145678393 metageneration=2
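The same override can be made permanent in the boto config file instead of being passed with -o on every invocation. A sketch; the key path is a placeholder:

```
[Credentials]
gs_service_key_file = /path/to/key.json
```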

The above solutions didn't work for me.
What solved the problem for me was the following:
Set the gs_service_key_file in the [Credentials] section of the boto config file (see here)
Activate your service account with gcloud auth activate-service-account
Set your default project in gcloud config
Dockerfile snippet:
ENV GOOGLE_APPLICATION_CREDENTIALS /.gcp/your_service_account_key.json
ENV GOOGLE_PROJECT_ID your-project-id
RUN printf '[Credentials]\ngs_service_key_file = /.gcp/your_service_account_key.json\n' \
    > /etc/boto.cfg
RUN mkdir /.gcp
COPY your_service_account_key.json $GOOGLE_APPLICATION_CREDENTIALS
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project $GOOGLE_PROJECT_ID
RUN gcloud config set project $GOOGLE_PROJECT_ID

Using only gsutil:
First run this command to configure the authentication manually:
gsutil config -a
This will create a /root/.boto file with the needed credentials.
Copy that file into your Docker image.
gsutil will now work with those credentials.
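For example, in a Dockerfile this could look like the following. A sketch that assumes the .boto file generated above has been placed next to the Dockerfile; the base image is illustrative:

```dockerfile
FROM google/cloud-sdk:slim
# Copy the pre-generated credentials into the location gsutil checks by default
COPY .boto /root/.boto
```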

Related

Jenkins: sshpass: Failed to run command: No such file or directory

I use Jenkins for CICD. After cloning the repository, I want to copy some file from cloned repository to a remote server using sshpass (scp).
sh """sshpass -p '$KEY'-o StrictHostKeyChecking=no scp *.json $UNAME@$PROD_IP:/home/test"""
But I get error in output:
sshpass: Failed to run command: No such file or directory
What am I doing wrong?
After a long search I found the answer: switch to the jenkins user, create a key pair on its behalf, and add the public key to the remote server you need to access. Then add the private key as a Jenkins credential and use that.
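The steps above can be sketched as follows. The paths, key type, and hostname are placeholders; the Jenkins home directory in particular varies by installation:

```shell
# Generate a key pair for the jenkins user (no passphrase)
sudo -u jenkins ssh-keygen -t ed25519 -N "" -f /var/lib/jenkins/.ssh/id_ed25519

# Install the public key on the remote server
sudo -u jenkins ssh-copy-id -i /var/lib/jenkins/.ssh/id_ed25519.pub user@remote-host

# Then register /var/lib/jenkins/.ssh/id_ed25519 as a Jenkins credential
# and use plain scp/ssh in the pipeline instead of sshpass
```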

ssh key in Dockerfile returning Permission denied (publickey)

I'm trying to build a Docker image using DOCKER_BUILDKIT which involves cloning a private remote repository from GitLab, with the following lines of my Dockerfile being used for the git clone:
# Download public key for gitlab.com
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.com:*name_of_repo* *download_location*
However, when I run the docker build command using:
DOCKER_BUILDKIT=1 docker build --ssh default --tag test:local .
I get the following error when it is trying to do the git clone:
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've set up the ssh access successfully on the machine I'm trying to build this image on, and both ssh -T git@gitlab.com and cloning the repository outside of the Docker build work just fine.
I've had a look around but can't find any info on what might be causing this specific issue - any pointers much appreciated.
Make sure you have an SSH agent running and that you added your private key to it.
Depending on your platform, the commands may vary but since it's tagged gitlab I will assume that Linux is your platform.
Verify that you have an SSH agent running with echo $SSH_AUTH_SOCK; if it prints an empty string, you most likely do not have an agent running.
To start an agent you can usually type:
eval `ssh-agent`
Next, you can verify what key are added (if any) with:
ssh-add -l
If the key you need is not listed, you can add it with:
ssh-add /path/to/your/private-key
Then you should be good to go.
More info here: https://www.ssh.com/academy/ssh/agent
Cheers
For testing, use a non-encrypted private SSH key (meaning you don't have to manage an ssh-agent, which is only needed for caching the passphrase of an encrypted private key).
And use ssh -Tv git@gitlab.com to check where SSH is looking for your key.
Then, in your Dockerfile, add before the line with git clone:
ENV GIT_SSH_COMMAND='ssh -Tv'
You will see again where Docker/SSH is looking when executing git clone with an SSH URL.
I suggested as much here; in that case, some mounted folders were missing.

How to configure Databricks token inside Docker File

I have a Dockerfile where I want to:
Download the Databricks CLI
Configure the CLI by adding a host and token
And then run a Python file that uses the Databricks token
I am able to install the CLI in the Docker image, and I have a working Python file that can submit the job to the Databricks API, but I'm unsure how to configure the CLI within Docker.
Here is what I have
FROM python
MAINTAINER nope
# Creating Application Source Code Directory
RUN mkdir -p /src
# Setting Home Directory for containers
WORKDIR /src
# Installing python dependencies
RUN pip install databricks_cli
# Not sure how to do this part???
# databricks token kicks off the config via CLI
RUN databricks configure --token
# Copying src code to Container
COPY . /src
# Start Container
CMD echo $(databricks --version)
#Kicks off Pythern Job
CMD ["python", "get_run.py"]
If I were to run databricks configure --token in the CLI, it would prompt for the configs like this:
databricks configure --token
Databricks Host (should begin with https://):
It's better not to do it this way for multiple reasons:
It's insecure - if you configure Databricks CLI this way it will generate a file inside the container that could be read by anyone who has access to it
Token has time-to-live (default is 90 days) - this means that you'll need to rebuild your containers regularly...
Instead it's just better to pass two environment variables to the container, and they will be picked up by the databricks command. These are DATABRICKS_HOST and DATABRICKS_TOKEN, as described in the documentation.
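For example, the token can then be supplied at run time rather than baked into the image. A sketch; the host, token value, image name, and script name are placeholders:

```shell
docker run \
  -e DATABRICKS_HOST=https://your-workspace.cloud.databricks.com \
  -e DATABRICKS_TOKEN=dapiXXXX \
  your-image python get_run.py
```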
When databricks configure is run successfully, it writes the information to the file ~/.databrickscfg:
[DEFAULT]
host = https://your-databricks-host-url
token = your-api-token
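The file uses standard INI syntax, so its shape can be checked with Python's configparser. A sketch with the placeholder values from above:

```python
import configparser

# A .databrickscfg file with the shape shown above (placeholder values)
sample = """[DEFAULT]
host = https://your-databricks-host-url
token = your-api-token
"""

config = configparser.ConfigParser()
config.read_string(sample)

# The databricks CLI reads host and token from the chosen profile
host = config["DEFAULT"]["host"]
token = config["DEFAULT"]["token"]
print(host, token)
```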
One way you could set this in the container is by using a startup command (syntax here for docker-compose.yml):
/bin/bash -ic "printf '[DEFAULT]\nhost = ${HOST_URL}\ntoken = ${TOKEN}\n' > ~/.databrickscfg"
It is not very secure to put your token in the Dockerfile. However, if you want to pursue this approach, you can use the code below.
RUN export DATABRICKS_HOST=XXXXX && \
export DATABRICKS_API_TOKEN=XXXXX && \
export DATABRICKS_ORG_ID=XXXXX && \
export DATABRICKS_PORT=XXXXX && \
export DATABRICKS_CLUSTER_ID=XXXXX && \
echo "{\"host\": \"${DATABRICKS_HOST}\",\"token\": \"${DATABRICKS_API_TOKEN}\",\"cluster_id\":\"${DATABRICKS_CLUSTER_ID}\",\"org_id\": \"${DATABRICKS_ORG_ID}\", \"port\": \"${DATABRICKS_PORT}\" }" >> /root/.databricks-connect
Make sure to run all the commands in a single RUN command; otherwise, variables such as DATABRICKS_HOST or DATABRICKS_API_TOKEN may not propagate properly.
If you want to connect to a Databricks Cluster within a docker container you need more configuration. You can find the required details in this article: How to Connect a Local or Remote Machine to a Databricks Cluster
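The echo above writes /root/.databricks-connect as plain JSON. A sketch of its shape with placeholder values, round-tripped in Python the way databricks-connect would read it back:

```python
import json

# The JSON written by the echo command above, with placeholder values
databricks_connect = {
    "host": "https://XXXXX",
    "token": "dapiXXXXX",
    "cluster_id": "XXXXX",
    "org_id": "XXXXX",
    "port": "XXXXX",
}

# Serialize and parse again to confirm the file is valid JSON
text = json.dumps(databricks_connect)
parsed = json.loads(text)
print(sorted(parsed))
```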
The number of personal access tokens per user is limited to 600
But via bash it is easy:
echo "y
$(WORKSPACE-REGION-URL)
$(CSE-DEVELOP-PAT)
$(EXISTING-CLUSTER-ID)
$(WORKSPACE-ORG-ID)
15001" | databricks-connect configure
If you want to access Databricks models/download_artifacts using the hostname and access token, like you do with the Databricks CLI:
databricks configure --token --profile profile_name
Databricks Host (should begin with https://): your_hostname
Token : token
If you have created a profile, pushed models, and just want to access the models/artifacts in Docker using this profile, add the code below to the Dockerfile.
RUN pip install databricks_cli
ARG HOST_URL
ARG TOKEN
RUN printf "[<profile name>]\nhost = ${HOST_URL}\ntoken = ${TOKEN}\n" >> ~/.databrickscfg
# this creates your .databrickscfg file with host and token at build time, the same way databricks configure would
Add the args HOST_URL and TOKEN in the docker build, e.g.:
your host name = https://adb-5443106279769864.19.azuredatabricks.net/
your access token = dapi********************53b1-2
sudo docker build -t test_tag --build-arg HOST_URL=<your host name> --build-arg TOKEN=<your access token> .
And now you can access your experiments using this profilename Databricks:profile_name in the code.

How to get or generate deploy URL for Google Cloud Run services

How to get the URL of a deployed service programmatically in CI environments? The URL does get logged after a successful deploy, but what if I want to extract and use the URL programmatically, as part of post-deploy needs, e.g. posting the URL for an acceptance test?
Simply use the flag: --format='value(status.url)' with gcloud run services describe
Here is the entire command:
$ gcloud run services describe SERVICE --platform managed --region REGION --format 'value(status.url)'
There are several ways to get the desired information:
You can use the namespaces.services.get method from Cloud Run's API and a curl command. Notice that it will require an Authentication Header and an OAuth scope.
curl -i https://[REGION]-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/[PROJECT_NAME]/services/[CLOUD_RUN_SERVICE_NAME] -H "Authorization: Bearer [YOUR-BEARER-TOKEN]" | tail -n +13 | jq -r ".status.url"
You can use the gcloud run services list command in one of your build steps to get the desired value. For example, if your service is fully managed, you can use the following command to get the Cloud Run service that was last updated:
gcloud run services list --platform managed | awk 'NR==2 {print $4}'
Build a script using the Google API Client libraries (e.g. the Cloud Run Google API Client for Python).
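A variant of the list-based approach above that avoids parsing columns with awk, using the same --format mechanism together with a --filter on the service name. A sketch; SERVICE is a placeholder:

```shell
gcloud run services list --platform managed \
  --filter="metadata.name=SERVICE" \
  --format='value(status.url)'
```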
Extending Steren's answer:
With these Bash commands you can get the URL and save it in Secret Manager:
First create an empty secret:
gcloud secrets create "CLOUDRUN_URL" --project $PROJECT_ID --replication-policy=automatic
Then:
gcloud run services describe $APP_NAME --platform managed --region $CLOUD_REGION --format 'value(status.url)' --project $PROJECT_ID | gcloud secrets versions add "CLOUDRUN_URL" --data-file=- --project $PROJECT_ID
or a version with "/some/address" appended:
CLOUDRUN_URL=$(gcloud run services describe $APP_NAME --platform managed --region $CLOUD_REGION --format 'value(status.url)' --project $PROJECT_ID) # capture first string.
echo "$CLOUDRUN_URL/some/address/" | gcloud secrets versions add "CLOUDRUN_URL" --data-file=- --project $PROJECT_ID
And then you can load it as needed from Secret Manager:
export CLOUDRUN_URL=$(gcloud secrets versions access latest --secret="CLOUDRUN_URL" --project $PROJECT_ID)

New Docker Build secret information for use with aws cli

I would like to use the new --secret flag in order to retrieve something from aws with its cli during the build process.
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=mysecret,dst=/root/.aws cat /root/.aws
I can see the credentials when running the following command:
docker build --no-cache --progress=plain --secret id=mysecret,src=%USERPROFILE%/.aws/credentials .
However, if I adjust the command to run the aws cli, it cannot find the credentials file and asks me to run aws configure:
RUN --mount=type=secret,id=mysecret,dst=/root/.aws aws ssm get-parameter
Any ideas?
The following works:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN --mount=type=secret,id=aws,dst=/aws AWS_SHARED_CREDENTIALS_FILE=/aws aws ssm get-parameter ...
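A fuller sketch: the alpine base image does not ship the AWS CLI, so it needs to be installed first (the aws-cli package comes from Alpine's community repository; the parameter name is a placeholder):

```dockerfile
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
RUN apk add --no-cache aws-cli
# Mount the secret and point the CLI at it via the env-prefix form,
# so the variable applies only to this aws invocation
RUN --mount=type=secret,id=aws,dst=/aws \
    AWS_SHARED_CREDENTIALS_FILE=/aws \
    aws ssm get-parameter --name /my/parameter
```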