HTTP Trigger Azure Function in Docker with non-anonymous authLevel - docker

I am playing around with an HTTP-triggered Azure Function in a Docker container. Up to now, all tutorials and guides I have found on setting this up configure the Azure Function with the authLevel set to anonymous.
After reading this blog carefully, it seems possible (although tricky) to also configure other authentication levels. Unfortunately, the promised follow-up blog post has not (yet) been written.
Can anyone help clarify how I would go about setting this up?

To control the master key the Function host uses on startup - instead of letting it generate random keys - prepare your own host_secrets.json file like:
{
  "masterKey": {
    "name": "master",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  },
  "functionKeys": [{
    "name": "default",
    "value": "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==",
    "encrypted": false
  }]
}
and then feed this file into the designated secrets folder of the Function host (Dockerfile):
for V1 Functions (assuming your runtime root is C:\WebHost):
...
ADD host_secrets.json C:\\WebHost\\SiteExtensions\\Functions\\App_Data\\Secrets\\host.json
...
for V2 Functions (assuming your runtime root is C:\runtime):
...
ADD host_secrets.json C:\\runtime\\Secrets\\host.json
USER ContainerAdministrator
RUN icacls "c:\runtime\secrets" /t /grant Users:M
USER ContainerUser
ENV AzureWebJobsSecretStorageType=files
...
The function keys can be used to call protected functions like .../api/myfunction?code=asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7==.
The master key can be used to call Functions Admin API and Key management API.
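For example, a quick smoke test of both key types with Python requests (a hedged sketch; host, port, and function name are placeholders, not part of the original setup):
import requests

# Placeholders: adjust host, port and function name to your container.
BASE_URL = "http://localhost:8080"
FUNCTION_KEY = "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7=="
MASTER_KEY = "asGmO6TCW/t42krL9CljNod3uG9aji4mJsQ7=="

# Call a protected function with a function key (query string variant).
r = requests.get(f"{BASE_URL}/api/myfunction", params={"code": FUNCTION_KEY})
print(r.status_code, r.text)

# Call the Functions Admin API with the master key (header variant).
r = requests.get(f"{BASE_URL}/admin/host/status",
                 headers={"x-functions-key": MASTER_KEY})
print(r.status_code, r.json())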
In my blog I describe the whole journey of bringing the V1 and later V2 Functions runtime into Docker containers and hosting those in Service Fabric.
for V3 Functions on Windows:
ENV FUNCTIONS_SECRETS_PATH=C:\Secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json C:\\Secrets\\host.json
for V3 Functions on Linux:
RUN mkdir /etc/secrets/
ENV FUNCTIONS_SECRETS_PATH=/etc/secrets
ENV AzureWebJobsSecretStorageType=Files
ADD host_secrets.json /etc/secrets/host.json
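If you want to generate your own key values for host_secrets.json instead of reusing the sample value above, a minimal sketch (assuming any sufficiently random base64-style string is acceptable when encrypted is false):
import base64
import secrets

# 40 random bytes, base64-encoded - similar in shape to the sample key above.
key = base64.b64encode(secrets.token_bytes(40)).decode()
print(key)  # paste into host_secrets.json as the "value"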

I found a solution for myself, even though this post is out of date. My goal was to run an HTTP-triggered Azure Function in a Docker container with the function authLevel. For this I use the following Docker image: Azure Functions Python from Docker Hub.
Once my repository was ready there, I pushed my container image to an Azure Container Registry. I wanted to run my container serverless via Azure Functions, so I followed the following post and created a new Azure Functions app in my Azure Portal.
Thus, the container content corresponds to an Azure Functions image, and the container itself is run by Azure through an Azure Function. This approach may not always be popular, but it offers advantages for hosting a container there. The container can easily be selected from the Azure Container Registry via the Deployment Center.
To make the container image accessible with the function authLevel, the Azure Functions ~3 runtime cannot create a host key, as this is managed within the container. So I proceeded as follows:
Customizing my function.json
"authLevel": "function",
"type": "httpTrigger",
Providing a storage account so that the Azure Function can obtain its configuration there. Create a new container there:
azure-webjobs-secrets
Create a directory inside the container with the name of your Azure Function.
my-function-name
A host.json can now be stored in the directory. This contains the master key.
{"masterKey": {
"name": "master",
"value": "myprivatekey",
"encrypted": false }, "functionKeys": [ ] }
Now the Azure Function has to be configured to get access to the storage account. The following values must be added to the configuration.
AzureWebJobsStorage = Storage Account Connection String
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = Storage Account Connection String
WEBSITE_CONTENTSHARE = my-function-name
From now on, the stored master key is available to the Azure Function. The container API is thus protected via authLevel function and only accessible with the corresponding key.
URL: https://my-function-name.azurewebsites.net/api/helloworld
HEADER: x-functions-key = myprivatekey
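For instance, calling the protected endpoint from Python (a sketch; the URL and key are the placeholders used above):
import requests

resp = requests.get(
    "https://my-function-name.azurewebsites.net/api/helloworld",
    headers={"x-functions-key": "myprivatekey"},
    timeout=30,
)
print(resp.status_code, resp.text)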

Related

How to authorize Google API inside of Docker

I am running an application inside of Docker that requires me to leverage google-bigquery. When I run it outside of Docker, I just have to go to the link below (redacted) and authorize. However, the link doesn't work when I copy-paste it from the Docker terminal. I have tried port mapping as well, but no luck there either.
Code:
from google.oauth2 import service_account
from google.cloud import bigquery

# key_path points to the service account JSON key file.
credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"],
)

# Make clients.
client = bigquery.Client(credentials=credentials, project=credentials.project_id)
Response:
requests_oauthlib.oauth2_session - DEBUG - Generated new state
Please visit this URL to authorize this application:
Please see the available solutions on this page; it is constantly updated:
gcloud credential helper
Standalone Docker credential helper
Access token
Service account key
In short, you need to use a service account key file. Make sure you either use a secret manager, or issue a service account key file dedicated to the Docker image.
You need to place the service account key file into the Docker container either at build or runtime.
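As a hedged sketch of the runtime option (the mount path and environment variable name are assumptions, not part of the original answer):
import os
from google.oauth2 import service_account
from google.cloud import bigquery

# Assumption: the key file was mounted or copied into the container, e.g.
#   docker run -v /host/key.json:/secrets/key.json -e KEY_PATH=/secrets/key.json ...
key_path = os.environ.get("KEY_PATH", "/secrets/key.json")

credentials = service_account.Credentials.from_service_account_file(
    key_path, scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
client = bigquery.Client(credentials=credentials, project=credentials.project_id)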

DSM docker parse-server can't access cloud code or config file

I've used yongjhih/parse-server as my container image for a while, but this image hasn't been updated in over 4 years, so I moved to the official parseplatform/parse-server image and connected it to the official parse-dashboard image.
Both are working fine, and I've successfully saved objects to the DB using the dashboard console.
Problems:
My parse-server ignores the config.json file in the mounted folder, so for now I use only environment variables.
When trying to access cloud code via the REST console I get a 404 (Not Found) in the inspector with the response Cannot GET /parse/endpointName, but I don't get any errors or warnings in the logs.
I've also disabled GraphQL and the playground for now because they make the whole container crash with the error GraphQLError [Object]: Syntax Error: Unexpected Name "const", which probably means I'm missing some imports, but it also means the server can actually access my mounted folders.
The dashboard can't access parse-server via localhost; I solved it by using the IP address for serverURL in the dashboard config.json.
I'm using the docker (GUI) app on my Synology NAS with DSM 7.
My volumes:
Environment variables:
My dashboard config.json:
{
  "apps": [{
    "serverURL": "http://192.168.1.5:1337/parse",
    "appId": "appId",
    "masterKey": "masterKey",
    "appName": "appName"
  }],
  "users": [
    {
      "user": "user",
      "pass": "pass"
    }
  ]
}
--------------------------Edit--------------------------
So moving from such an old server to the new image meant a lot of changes:
Cloud code is accessible; I just needed to fix the syntax a bit.
GraphQL is now Relay, so I'll have to fix my schema.js too.
I still can't find a way to use a config file instead of environment variables.

Access KeyVault from Azure Container Instance deployed in VNET

An Azure Container Instance is deployed in a VNET, and I want to store my keys and other sensitive variables in Key Vault and somehow access them. I found in the documentation that using managed identities is currently a limitation once the ACI is in a VNET.
Is there another way to work around this limitation and still use Key Vault?
I'm trying to avoid environment variables and secret volumes, because this container will be scheduled to run every day, which means there will be some script with access to all secrets, and I don't want to expose them in that script.
To access the Azure Key Vault you will need access to a token; are you OK with storing this token in a k8s secret?
If you are, then any SDK or cURL command could be used to leverage the REST API of the Key Vault to retrieve the secret at run time: https://learn.microsoft.com/en-us/rest/api/keyvault/
If you don't want to use secrets/volumes to store the token for AKV, it would be best to bake your token into your container image and perhaps rebuild your image every day with a new token, managing its access in AKS at the same time within your CI process.
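As an illustration of the REST approach mentioned above, a minimal sketch (the vault name, secret name, and the way the token is obtained are assumptions):
import os
import requests

# Assumptions: vault name, secret name and KEYVAULT_TOKEN are placeholders.
VAULT_URL = "https://my-vault.vault.azure.net"
SECRET_NAME = "my-secret"
token = os.environ["KEYVAULT_TOKEN"]  # token obtained out of band (baked in or mounted)

resp = requests.get(
    f"{VAULT_URL}/secrets/{SECRET_NAME}",
    params={"api-version": "7.2"},
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
secret_value = resp.json()["value"]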

Run a Docker container under a different service account when using Cloud Build

I am using Cloud Build and would like to run a Docker container under a different service account than the standard Cloud Build service account (A).
The service account I would like to use (B) is from a different project.
One way to do it would be to put the json key on Cloud Storage and then mount it in the Docker container, but I think it should be possible with IAM policies too.
My cloudbuild.yaml now contains the following steps:
steps:
- name: 'gcr.io/kaniko-project/executor:v0.20.0'
  args:
  - --destination=gcr.io/$PROJECT_ID/namu-app:latest
  - --cache=true
  - --cache-ttl=168h
- name: 'docker'
  args: ['run', '--network=cloudbuild', 'gcr.io/$PROJECT_ID/namu-app:latest']
The network is set so that the Cloud Build service account is accessible to the Docker container - see https://cloud.google.com/cloud-build/docs/build-config#network.
So I think my container should have access to the Cloud Build service account.
Then I run the following code inside the Docker container:
import socket

from googleapiclient.discovery import build
from google.auth import impersonated_credentials, default

default_credentials, _ = default()
print("Token: {}".format(default_credentials.token))

play_credentials = impersonated_credentials.Credentials(
    source_credentials=default_credentials,
    target_principal='google-play-api@api-0000000000000000-0000000.iam.gserviceaccount.com',
    target_scopes=[],
    lifetime=3600)

TRACK = "internal"
PACKAGE_NAME = 'x.y.z'
APPBUNDLE_FILE = "./build/app/outputs/bundle/release/app.aab"

socket.setdefaulttimeout(300)

service = build('androidpublisher', 'v3')
edits = service.edits()
edit_id = edits.insert(body={}, packageName=PACKAGE_NAME).execute()['id']
However, this fails with:
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/androidpublisher/v3/applications/x.y.z/edits?alt=json returned "Request had insufficient authentication scopes.">
I tried several ways of assigning service account roles, but no luck so far. I thought at first that explicitly 'impersonating' credentials might not be necessary (maybe it can be implicit?).
In summary, I want service account A from project P1 to run as service account B from project P2.
Any ideas?
You might follow two alternatives to troubleshoot this issue:
Give the Cloud Build service account the same permissions that the service account you use to run this in your local environment has.
Authenticate with a different approach using a credentials file, as shown in this code snippet:
from apiclient.discovery import build
import httplib2
from oauth2client import client

SERVICE_ACCOUNT_EMAIL = (
    'ENTER_YOUR_SERVICE_ACCOUNT_EMAIL_HERE@developer.gserviceaccount.com')

# Load the key in PKCS 12 format that you downloaded from the Google APIs
# Console when you created your Service account.
f = file('key.p12', 'rb')
key = f.read()
f.close()

# Create an httplib2.Http object to handle our HTTP requests and authorize it
# with the Credentials. Note that the first parameter, service_account_name,
# is the Email address created for the Service account. It must be the email
# address associated with the key that was created.
credentials = client.SignedJwtAssertionCredentials(
    SERVICE_ACCOUNT_EMAIL,
    key,
    scope='https://www.googleapis.com/auth/androidpublisher')
http = httplib2.Http()
http = credentials.authorize(http)

service = build('androidpublisher', 'v3', http=http)
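Not part of this answer, but since the 403 complains about insufficient authentication scopes, one hedged adjustment to the question's impersonation snippet would be to request explicit scopes and pass the impersonated credentials to build() (a sketch, not a verified fix):
from googleapiclient.discovery import build
from google.auth import default, impersonated_credentials

# Request the cloud-platform scope for the source credentials.
default_credentials, _ = default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])

play_credentials = impersonated_credentials.Credentials(
    source_credentials=default_credentials,
    target_principal='google-play-api@api-0000000000000000-0000000.iam.gserviceaccount.com',
    target_scopes=['https://www.googleapis.com/auth/androidpublisher'],
    lifetime=3600)

# Pass the impersonated credentials explicitly instead of relying on ADC.
service = build('androidpublisher', 'v3', credentials=play_credentials)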
Using gcloud you can do gcloud run services update SERVICE --service-account SERVICE_ACCOUNT_EMAIL. Documentation also says that
In order to deploy a service with a non-default service account, the
deployer must have the iam.serviceAccounts.actAs permission on the
service account being deployed.
See https://cloud.google.com/run/docs/securing/service-identity#gcloud for more details.

Connecting to scality/s3 server between docker containers

We are using a Python-based solution which shall load and store files from S3. For development and local testing we are using a Vagrant environment with Docker and docker-compose. We have two docker-compose definitions - one for the assisting backend services (mongo, restheart, redis and s3) and the other one containing the Python-based REST API solution that uses the backend services.
When our "front-end" docker-compose group interacts with restheart, this works fine (using the name of the restheart container as server host in HTTP calls). When we do the same with the scality/s3 server, it does not work.
The interesting part is that we have created a test suite for using the scality/s3 server from a Python test suite running on the host (Windows 10), over the ports forwarded through Vagrant to the Docker container of the scality/s3 server within the docker-compose group. We used the endpoint_url localhost and it works perfectly.
In the error case (when frontend web service wants to write to S3) the "frontend" service always responds with:
botocore.exceptions.ClientError: An error occurred (InvalidURI) when calling the CreateBucket operation: Could not parse the specified URI. Check your restEndpoints configuration.
And the s3server always responds with http 400 and the message:
s3server | {"name":"S3","clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","time":1521306054614,"req_id":"e385aae3c04d99fc824d","level":"info","message":"received request","hostname":"cdc8a2f93d2f","pid":83}
s3server | {"name":"S3","bytesSent":233,"clientIP":"::ffff:172.20.0.7","clientPort":49404,"httpMethod":"PUT","httpURL":"/raw-data","httpCode":400,"time":1521306054639,"req_id":"e385aae3c04d99fc824d","elapsed_ms":25.907569,"level":"info","message":"responded with error XML","hostname":"cdc8a2f93d2f","pid":83}
We are calling the scality with this boto3 code:
import boto3

s3 = boto3.resource('s3',
                    aws_access_key_id='accessKey1',
                    aws_secret_access_key='verySecretKey1',
                    endpoint_url='http://s3server:8000')
s3_client = boto3.client('s3',
                         aws_access_key_id='accessKey1',
                         aws_secret_access_key='verySecretKey1',
                         endpoint_url='http://s3server:8000')

s3.create_bucket(Bucket='raw-data')  # here the exception comes
bucket = s3.Bucket('raw-data')
This issue is quite common. In your config.json file, which I assume you mount in your Docker container, there is a restEndpoints section where you must associate a domain name with a default region. What that means is that your frontend's hostname should be specified in there, mapped to a default region.
Do note that the default region does not prevent you from using other regions: it's just where your buckets will be created if you don't specify otherwise.
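Not from the original answer, but as an illustration, a hedged sketch of adding that mapping to config.json programmatically (the config path and region name are assumptions; the hostname comes from the boto3 endpoint_url above):
import json

# Map the hostname the clients use ("s3server") to a default region in the
# scality/s3 config.json.
CONFIG_PATH = "config.json"

with open(CONFIG_PATH) as fh:
    config = json.load(fh)

config.setdefault("restEndpoints", {})["s3server"] = "us-east-1"

with open(CONFIG_PATH, "w") as fh:
    json.dump(config, fh, indent=4)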
In the future, I'd recommend you open an issue directly on the Zenko Forum, as this is where most of the community and core developers are.
Cheers,
Laure
