Create container web app without setting linux_fx_version - terraform-provider-azure

I would like to create a container web app using Terraform, and I also would like to deploy it from GitHub, so I need to manage the linux_fx_version property there. When I don't set this in Terraform, I get an error when deploying a container:
Error: Deployment Failed with Error: Error: This is not a container web app. Please remove inputs like images and configuration-file which are only relevant for container deployment.
When I set the property in Terraform, Terraform re-applies this setting whenever I change my infrastructure.
Is there any way to mark a web app as container-based, or to set this property only when it is not already set?

If you want to set the linux_fx_version property after the Azure App Service has been provisioned, you could configure a custom container for Azure App Service with az webapp config container set. You could also use a local-exec provisioner (placed inside a resource block) to invoke a process on the machine running Terraform.
For example, you can provision a Linux App Service which runs multiple Docker containers from a Docker Compose file:
provisioner "local-exec" {
command =<<EOT
az webapp config set \
--resource-group ${azurerm_resource_group.main.name} \
--name ${azurerm_app_service.main.name} \
--linux-fx-version "COMPOSE|${filebase64("./compose.yml")}"
EOT
}
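For reference, the same setting can also be applied directly from a shell with az webapp config container set; a rough sketch follows, where the resource group and app name are placeholders, and the multicontainer flags should be checked against the current az CLI documentation.
# Equivalent CLI call outside of Terraform (resource names are placeholders)
az webapp config container set \
  --resource-group my-rg \
  --name my-app-service \
  --multicontainer-config-type compose \
  --multicontainer-config-file ./compose.yml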
Otherwise, to create a Web App for Containers with Terraform, you need to use linux_fx_version to define the container to load on start.
If you're not using App Service slots and deployments are handled outside of Terraform, it's possible to ignore changes to specific fields in the configuration using ignore_changes within Terraform's lifecycle block, for example:
resource "azurerm_app_service" "test" {
# ...
site_config = {
# ...
linux_fx_version = "DOCKER|appsvcsample/python-helloworld:0.1.2"
}
lifecycle {
ignore_changes = [
"site_config.0.linux_fx_version", # deployments are made outside of Terraform
]
}
}
For more information, read the examples of deploying Web App for Containers (Azure App Service) with Terraform and the examples of using the App Service resources.

Related

Deploying an Azure durable function using a docker image in vscode

I have created a durable function in VS Code; it works perfectly fine locally, but when I deploy it to Azure it is missing some dependencies which cannot be included in the Python environment (Playwright). I created a Dockerfile and a Docker image in a private Docker Hub repository which I want to use to deploy the function app, but I don't know how I can deploy the function app using this image.
I have already tried commands such as:
az functionapp config container set --docker-custom-image-name <docker-id>/<image>:latest --name <function> --resource-group <rg>
Then when I deploy, nothing happens, and I simply get The service is unavailable. I also tried adding the environment variables DOCKER_REGISTRY_SERVER_USERNAME, DOCKER_REGISTRY_SERVER_PASSWORD and DOCKER_REGISTRY_SERVER_URL. However, it is unclear whether the URL should be <docker-id>/<image>:latest, docker.io/<image>:latest, https://docker.io/<image>:latest, etc. Still the deployment gets stuck on The service is unavailable, which is not a very useful error message.
So I basically have the function app project ready and the Dockerfile/image. How can it be so difficult to simply deploy using the given image? The documentation here is very elaborate, but I am missing the details for a private repository. Also, it is very different from my usual VS Code deployment, making it very tough to follow and execute.
Created the Python 3.9 Azure Durable Functions project in VS Code.
Created a Container Registry in Azure and pushed the function image to ACR using docker push.
az functionapp config container set --docker-custom-image-name customcontainer4funapp --docker-registry-server-password <login-server-pswd> --docker-registry-server-url https://customcontainer4funapp.azurecr.io --docker-registry-server-user customcontainer4funapp --name krisdockerfunapp --resource-group AzureFunctionsContainers-rg
Following the same MS doc, I pushed the function app image to a private custom container registry and configured it on the Azure Function App. It is working as expected.
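Putting those steps together, a minimal command sketch looks roughly like the following; the registry, resource group and app names are the placeholders from the command above, and the image tag funapp:v1 is hypothetical.
# Build the image from the project's Dockerfile and push it to ACR
az acr login --name customcontainer4funapp
docker build -t customcontainer4funapp.azurecr.io/funapp:v1 .
docker push customcontainer4funapp.azurecr.io/funapp:v1

# Point the function app at the private image, passing the registry credentials
az functionapp config container set \
  --name krisdockerfunapp \
  --resource-group AzureFunctionsContainers-rg \
  --docker-custom-image-name customcontainer4funapp.azurecr.io/funapp:v1 \
  --docker-registry-server-url https://customcontainer4funapp.azurecr.io \
  --docker-registry-server-user customcontainer4funapp \
  --docker-registry-server-password <login-server-pswd>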
Refer to this similar issue resolution regarding the error The service is unavailable that appears after deployment of the Azure Functions project, as there are several possible causes which need to be diagnosed step by step.

How do you manage the variation between local and cloud dependencies within Docker?

I have a Docker image with an application server running in it.
When I'm running in a development environment, I want to run a database server within the same Docker image.
However, in production, I want to use my cloud provider's database service to host my database server.
What is the best (preferably officially supported) way to enable this distinction?
You Don't
You don't run the DB in the same container. You run it in a separate container next to your application container (probably with docker-compose, but that's not required).
You run the same version as the cloud provider (or as close as you can get, because they will no doubt configure it specifically for their environment).
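As a rough illustration of that split (the container names, the my-app image and the DATABASE_URL variable are all hypothetical), the application always reads its database location from the environment:
# Development: run a throwaway Postgres container next to the app container.
docker network create dev-net
docker run -d --name dev-db --network dev-net \
  -e POSTGRES_PASSWORD=devpass postgres:15
docker run -d --name app --network dev-net \
  -e DATABASE_URL="postgres://postgres:devpass@dev-db:5432/postgres" my-app:latest

# Production: run only the app container and point the same variable
# at the cloud provider's managed database endpoint.
docker run -d --name app \
  -e DATABASE_URL="postgres://appuser:secret@my-cloud-db.example.com:5432/appdb" my-app:latest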

Pulling images from private repository in kubernetes without using imagePullSecrets

I am new to Kubernetes deployments, so I wanted to know: is it possible to pull images from a private repo without using imagePullSecrets in the deployment YAML files, or is it mandatory to create a docker-registry secret and pass that secret in imagePullSecrets?
I also looked at adding imagePullSecrets to a service account, but that is not the requirement. I would love to know whether, if I set up credentials in variables, Kubernetes can use them to pull those images.
I also wanted to know how it can be achieved; a reference to a document would help.
Thanks in advance.
As long as you're using Docker on your Kubernetes nodes (please note that Docker support has itself recently been deprecated in Kubernetes), you can authenticate the Docker engine on the nodes themselves against your private registry.
Essentially, this boils down to running docker login on your machine and then copying the resulting credentials JSON file directly onto your nodes. This, of course, only works if you have direct control over your node configuration.
See the documentation for more information:
If you run Docker on your nodes, you can configure the Docker container runtime to authenticate to a private container registry.
This approach is suitable if you can control node configuration.
Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If you put the same file in the search paths list below, kubelet uses it as the credential provider when pulling images.
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json
{--root-dir:-/var/lib/kubelet}/.dockercfg
{cwd of kubelet}/.dockercfg
${HOME}/.dockercfg
/.dockercfg
Note: You may have to set HOME=/root explicitly in the environment of the kubelet process.
Here are the recommended steps to configure your nodes to use a private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json on your PC.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes; for example:
if you want the names: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}' )
if you want to get the IP addresses: nodes=$( kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}' )
Copy your local .docker/config.json to one of the search paths list above.
for example, to test this out: for n in $nodes; do scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json; done
Note: For production clusters, use a configuration management tool so that you can apply this setting to all the nodes where you need it.
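Pulled together, and assuming root SSH access to the nodes and the default kubelet --root-dir, the whole procedure is roughly the following sketch (registry.example.com is a placeholder for your private registry):
# Log in once on your workstation; this writes ~/.docker/config.json
docker login registry.example.com

# Copy the credentials file to every node so kubelet can use it
nodes=$( kubectl get nodes -o jsonpath='{range .items[*].metadata}{.name} {end}' )
for n in $nodes; do
  scp ~/.docker/config.json root@"$n":/var/lib/kubelet/config.json
done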
If the Kubernetes cluster is private, you can deploy your own, private (and free) JFrog Container Registry using its Helm Chart in the same cluster.
Once it's running, you should allow anonymous access to the registry to avoid the need for a login in order to pull images.
If you prevent external access, you can still access the internal k8s service created and use it as your "private registry".
Read through the documentation and see the various options.
Another benefit is that JCR (JFrog Container Registry) is also a Helm repository and a generic file repository, so it can be used for more than just Docker images.
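A rough sketch of that route with Helm follows; the chart name jfrog/artifactory-jcr and the release/namespace names are assumptions, so verify them against the current JFrog documentation.
# Add JFrog's Helm repository and install the Container Registry chart
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm upgrade --install jcr jfrog/artifactory-jcr \
  --namespace artifactory --create-namespace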

Create service or container from another container, on Google Cloud Run or Cloud Run on GKE

Can I create a service or container from another container, on Google Cloud Run or Cloud Run on GKE?
I basically want to manage my containers/services dynamically from another container and am not sure how to go about this.
Adding more details:
One of my microservices needs to create new isolated containers that will run some user-land code. I would like to have full life-cycle control of these containers, run the code, and then destroy as needed.
I also looked at the Cloud Run APIs but am not sure how to run something like 'kubectl create ...' through them. Is that the right approach?
Yes, you should be able to deploy Cloud Run services from Cloud Run services.
on Cloud Run (hosted): services by default run with Editor permissions, so this should be possible without any extra configuration
note that if you deploy apps with --allow-unauthenticated, which requires setting IAM permissions, the Editor role will not be enough; you need the Owner role on the GCP project for that.
on Cloud Run on GKE: services by default run with limited scopes (as they by default inherit GKE node's permissions/scopes). You should add a service account to the Kubernetes Pod and use it to authenticate.
From there, you have several options:
Use the REST API directly: Since run.googleapis.com behaves like a Kubernetes API server, you can directly apply JSON objects of Knative Services. (You can use gcloud ... --log-http to learn how deployments are made using REST API requests).
Use gcloud: you can ship the gcloud CLI inside your container image and invoke it from your process (see the sketch after this list).
Use Google Cloud Client Libraries: You can use the client libraries that are available for Cloud Run (for example this Go library) to construct in-memory Service objects and send them to the API using a higher level client library (recommended approach)
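A minimal sketch of the gcloud route, run from inside the managing container; the service name, image path, project and region are placeholders, and the runtime service account is assumed to have the permissions needed to deploy (for example run.admin plus iam.serviceAccountUser):
# Deploy a new, isolated Cloud Run service for the user-land workload
SERVICE="user-sandbox-$(date +%s)"
gcloud run deploy "$SERVICE" \
  --image gcr.io/PROJECT_ID/sandbox-image:latest \
  --project PROJECT_ID \
  --region us-central1 \
  --platform managed \
  --no-allow-unauthenticated

# ... run the workload, then tear the service down again
gcloud run services delete "$SERVICE" \
  --project PROJECT_ID --region us-central1 --platform managed --quiet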

environment variables in Docker images in Kubernetes Cluster

I'm working on some GCP apps which are dockerized in a Kubernetes cluster in GCP (I'm new to Docker and Kubernetes). In order to access some of the GCP services, the environment variable GOOGLE_APPLICATION_CREDENTIALS needs to point to a credentials file.
Should the environment variable be set and that file included in:
- each of the Docker images?
- the Kubernetes cluster?
GCP specific stuff
This is the actual error: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.
Should the environment variable be set and that file included in:
- each of the Compute Engine instances?
- the main GCP console?
And, most importantly, HOW?
:)
You'll need to create a service account (IAM & Admin > Service Accounts), generate a key for it in JSON format and then give it the needed permissions (IAM & Admin > IAM). If your containers need access to this, it's best practice to add the key as a secret in Kubernetes and mount it in your containers. Then set the environment variable to point to the file where the secret is mounted:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_SECRET]"
This page should get you going: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_4_import_credentials_as_a_secret
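A rough sketch of those steps from the command line; the service-account, project, secret and mount-path names are all hypothetical:
# 1. Create a JSON key for an existing service account
gcloud iam service-accounts keys create key.json \
  --iam-account my-app@MY_PROJECT.iam.gserviceaccount.com

# 2. Store the key as a Kubernetes secret in the cluster
kubectl create secret generic gcp-sa-key --from-file=key.json=key.json

# 3. Mount the secret into the pod via a volume in the Deployment spec, then
#    point the variable at the mounted file, for example:
#    GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json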
