Is there a way to upload a file to Azure Storage Blob with a Terraform null resource? - terraform-provider-azure

I have an Azure Storage Blob associated with an Azure Storage Account and a container ($web), managed by Terraform as follows:
resource "azurerm_storage_blob" "static_files_html" {
name = index.html
storage_account_name = azurerm_storage_account.storage_account.name
storage_container_name = "$web"
type = "Block"
content_type = "text/html"
source = index.html
depends_on = [
azurerm_resource_group.resource_group,
azurerm_storage_account.storage_account
]
}
Can I upload a file to this blob using null_resource?
A few days ago, I used null_resource to upload a file to a Virtual Machine, as shown below. So I want to know if there is a way to do the same to upload to an Azure Storage Blob. The idea is that I can change the file, then run terraform plan and terraform apply, and see the change reflected in the blob storage. Is this possible?
resource "time_sleep" "wait_few_seconds" {
# depends_on = [azurerm_storage_blob.static_files_html]
create_duration = "10s"
}
# Terraform NULL RESOURCE
# Sync App1 Static Content to Webserver using Provisioners
resource "null_resource" "sync_app1_static" {
depends_on = [time_sleep.wait_few_seconds]
triggers = {
always-update = timestamp()
}
# Connection Block for Provisioners to connect to Azure VM Instance
connection {
type = "ssh"
host = azurerm_linux_virtual_machine.mylinuxvm.public_ip_address
user = azurerm_linux_virtual_machine.mylinuxvm.admin_username
private_key = file("${path.module}/ssh-keys/terraform-azure.pem")
}
# File Provisioner: Copies the app1 folder to /tmp
provisioner "file" {
source = "apps/app1"
destination = "/tmp"
}
# Remote-Exec Provisioner: Copies the /tmp/app1 folder to Apache Webserver /var/www/html directory
provisioner "remote-exec" {
inline = [
"sudo cp -r /tmp/app1 /var/www/html"
]
}
}
The full working example with the VM is here.
The example that I am stuck with (static website with storage) is here.
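One possible approach, sketched below purely as an illustration: a null_resource with a local-exec provisioner that re-uploads the file whenever its content changes. It assumes the Azure CLI (az) is installed and authenticated on the machine running Terraform; the filemd5 trigger and the --content-type/--overwrite flags are assumptions to verify against your CLI version.

# Sketch: re-upload index.html to the $web container whenever the file changes,
# by shelling out to the Azure CLI from a null_resource.
resource "null_resource" "upload_index_html" {
  # Re-run the provisioner whenever the file content changes
  triggers = {
    index_html_md5 = filemd5("${path.module}/index.html")
  }

  provisioner "local-exec" {
    command = <<EOT
az storage blob upload \
  --account-name ${azurerm_storage_account.storage_account.name} \
  --container-name '$web' \
  --name index.html \
  --file ${path.module}/index.html \
  --content-type text/html \
  --overwrite
EOT
  }

  depends_on = [azurerm_storage_account.storage_account]
}

Alternatively, if the azurerm provider version in use supports it, setting content_md5 = filemd5("index.html") on the azurerm_storage_blob resource itself should be enough for Terraform to detect content changes and re-upload the blob on apply, without any null_resource.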

Related

Use latest version of Docker image for app platform (deploy latest using Terraform)

I'm using Terraform to deploy to the DigitalOcean App Platform, which basically works well. I'm deploying a public image from DockerHub.
This is my config.
resource "digitalocean_app" "docs-page" {
spec {
domain {
name = "${var.do_subdomain_docs}.${var.do_base_domain}"
}
name = "docs-page"
region = var.do_region
service {
name = "docs-page-app"
http_port = 80
instance_count = 1
instance_size_slug = var.do_instance_smallest
internal_ports = []
source_dir = "/"
image {
registry_type = "DOCKER_HUB"
registry = "sommerfeldio"
repository = "docs-website"
tag = "stable"
}
routes {
path = "/"
preserve_path_prefix = false
}
}
}
}
The image of choice is my sommerfeldio/docs-website:stable (docker pull sommerfeldio/docs-website:stable).
My problem is this: I deploy a new version of my image to DockerHub. Then I trigger terraform apply to update my DigitalOcean infrastructure, and Terraform states that nothing has changed ("Apply complete! Resources: 0 added, 0 changed, 0 destroyed."). This might be true from an infrastructure point of view, because there is no change to e.g. the instance count or the domain name. But my image did change, and this change is not picked up. To update my DigitalOcean app I have to destroy and re-provision everything, which is not something I want to do.
I'm using this provider: https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs
How can I tell the terraform-digitalocean-provider to (1) re-deploy the app with the latest image version from DockerHub, or (2) update my above-mentioned config in such a way that the image is always pulled from DockerHub?
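One hedged workaround sketch, not an official digitalocean provider feature: read the current image digest from Docker Hub with the kreuzwerker/docker provider's docker_registry_image data source, and use it as a trigger to force a new App Platform deployment through doctl. It assumes doctl is installed and authenticated where Terraform runs, and that adding a docker provider to this setup is acceptable.

# Sketch: trigger a new deployment whenever a new image is pushed to the
# "stable" tag on Docker Hub.
data "docker_registry_image" "docs_website" {
  name = "sommerfeldio/docs-website:stable"
}

resource "null_resource" "redeploy_on_new_image" {
  # Changes whenever the digest behind the tag changes
  triggers = {
    image_digest = data.docker_registry_image.docs_website.sha256_digest
  }

  provisioner "local-exec" {
    command = "doctl apps create-deployment ${digitalocean_app.docs-page.id}"
  }
}

The design idea is simply to make something in state change when the registry content changes, since the digitalocean_app spec itself stays identical between applies.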

Terraform: Docker provider does not respect insecure-registries?

I am trying to pull a Docker image from a local Artifactory when the digest of the image has changed. But I am confused about the Terraform configuration and its relation to the installed Docker Desktop.
The Terraform script starts with:
terraform {
  required_providers {
    docker = {
      source = "terraform-providers/docker"
    }
  }
}

provider "docker" {
  host = "npipe:////.//pipe//docker_engine"

  registry_auth {
    address  = "ip:port"
    username = "my-username"
    password = "my-password"
  }
}

data "docker_registry_image" "my-image" {
  name = "ip:port/repository-name/my-image:version"
}

resource "docker_image" "my-image" {
  name          = "my-image-name"
  pull_triggers = [data.docker_registry_image.my-image.sha256_digest]
  keep_locally  = true
}
I added the registry ip:port to Docker Desktop's insecure-registries so that Terraform also has access to it.
The problem is that the insecure-registries setting from Docker Desktop is somehow ignored by the Terraform Docker provider, because I get this response:
Error: Got error when attempting to fetch image version from registry: Error during registry request: Get https://ip:port/v2/repository-name/my-image:version: http: server gave HTTP response to HTTPS client.
on script.tf line 20, in data "docker_registry_image" "my-image":
20: data "docker_registry_image" "my-image" {
Can anyone help? Does somebody know why insecure-registries set in Docker Desktop does not apply here?
I think I have found the answer. Here is the link https://github.com/terraform-providers/terraform-provider-docker/blob/ccb7c6e8abe0fae89d115347c0677b5c0f45e2bf/docker/data_source_docker_registry_image.go#L85-L96 to the source code of the terraform-provider-docker plugin, where we can see (around line 98) that the https protocol is hardcoded when fetching the image digest:
req, err := http.NewRequest("GET", "https://"+registry+"/v2/"+image+"/manifests/"+tag, nil)
This is why the insecure-registries setting is not taken into account.
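Given that, one workaround sketch: let the local Docker daemon perform the pull (the daemon does honour Docker Desktop's insecure-registries setting), instead of the provider's data source, which talks to the registry directly over the hard-coded https. The image_version variable below is a placeholder you would have to supply yourself.

# Sketch: pull through the Docker daemon, which honours insecure-registries,
# rather than through the provider's direct registry request.
variable "image_version" {
  type = string
}

resource "null_resource" "pull_my_image" {
  # Re-pull when the version you pass in changes
  triggers = {
    version = var.image_version
  }

  provisioner "local-exec" {
    command = "docker pull ip:port/repository-name/my-image:${var.image_version}"
  }
}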

Why do I get Unable to create container with image Unable to pull image error pulling image when using multiple Docker hosts?

I am trying to perform a simple deployment using Terraform (0.12.24) and multiple Docker providers (plugin version 2.7.0). My aim with the Terraform template below is to deploy two different containers to two different Docker-enabled hosts.
# Configure the Docker provider
provider "docker" {
  host = "tcp://192.168.1.10:2375/"
}

provider "docker" {
  alias = "worker"
  host  = "tcp://127.0.0.1:2375/"
}

# Create a container
resource "docker_container" "hello" {
  image = docker_image.world.latest
  name  = "hello"
}

resource "docker_container" "test" {
  provider = docker.worker
  image    = docker_image.world.latest
  name     = "test"
}

resource "docker_image" "world" {
  name = "hello-world:latest"
}
The docker command runs successfully without root privileges. The Docker daemons of both machines, 192.168.1.10 and 127.0.0.1, listen on 2375, are reachable from the host machine, and can respond to direct Docker REST API calls (create, pull, etc.) performed with curl. Manually pulling images also works on both hosts, and I did that to be sure that the latest hello-world image exists on both.
However, the terraform deployment (terraform apply) fails with the following error:
docker_container.hello: Creating...
docker_container.test: Creating...
docker_container.hello: Creation complete after 1s [id=77e515b4269aed255d4becac61f40d38e09838cdf8285294bf51f3c7cddbf2bf]
Error: Unable to create container with image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: Unable to pull image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: error pulling image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: Error response from daemon: pull access denied for sha256, repository does not exist or may require 'docker login'
on test.tf line 17, in resource "docker_container" "test":
17: resource "docker_container" "test" {
docker_container.test references docker_image.world, but they use different providers (default and docker.worker):
resource "docker_container" "test" {
provider = docker.worker
image = docker_image.world.latest
name = "test"
}
resource "docker_image" "world" {
name = "hello-world:latest"
}
This is fatal, as docker_image.world uses the default provider, which runs the docker pull on tcp://192.168.1.10:2375/ (not on tcp://127.0.0.1:2375/).
This can be fixed by creating a docker_image that uses the docker.worker provider, matching docker_container.test, as follows:
resource "docker_container" "test" {
  provider = docker.worker
  image    = docker_image.world_worker.latest
  name     = "test"
}

resource "docker_image" "world_worker" {
  provider = docker.worker
  name     = "hello-world:latest"
}
There are some problems with the template that was originally used in the question.
Firstly, short-running hello-world containers are used, which leads to at least one of the services exiting and Terraform reporting an error.
Then, with the important help of @Alain O'Dea (see the relevant answer and comments), I created the following modified template, which works and accomplishes my goal.
# Configure the Docker provider
provider "docker" {
  host = "tcp://192.168.1.10:2375/"
}

provider "docker" {
  alias = "worker"
  host  = "tcp://127.0.0.1:2375/"
}

# Create a container
resource "docker_container" "hello" {
  image = docker_image.world.latest
  name  = "hello"
}

resource "docker_container" "test" {
  provider = docker.worker
  image    = docker_image.world_image.latest
  name     = "test"
}

resource "docker_image" "world" {
  name = "prom/prometheus:latest"
}

resource "docker_image" "world_image" {
  provider = docker.worker
  name     = "nextcloud:latest"
}

Can you run Docker containers in GCP via Terraform?

I have created a Docker image that I'd like to run in GCP using Terraform. I have tagged and pushed the image to GCR like this:
docker tag carlspring/hello-spring-boot:1.0 eu.gcr.io/${PROJECT_ID}/carlspring/hello-spring-boot:1.0
docker push eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0
I have the following code:
provider "google" {
// Set this to CREDENTIALS
credentials = file("credentials.json")
// Set this to PROJECT_ID
project = "carlspring"
region = "europe-west2"
zone = "europe-west2-a"
}
resource "google_compute_network" "vpc_network" {
name = "carlspring-terraform-network"
}
resource "google_compute_instance" "docker" {
count = 1
name = "tf-docker-${count.index}"
machine_type = "f1-micro"
zone = var.zone
tags = ["docker-node"]
boot_disk {
initialize_params {
image = "carlspring/hello-spring-boot"
}
}
}
After doing:
terraform init
terraform plan
terraform apply
I get:
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_instance.docker[0]: Creating...
Error: Error resolving image name 'carlspring/hello-spring-boot': Could not find image or family carlspring/hello-spring-boot
on main.tf line 18, in resource "google_compute_instance" "docker":
18: resource "google_compute_instance" "docker" {
The examples I've seen online either use K8s or start a VM image running Linux with Docker installed, in which an image is then started. Can't I just simply use my own container to start the instance?
google_compute_instance expects a VM image, not a Docker image. If you want to deploy Docker images to GCP, the easiest option is Cloud Run. To use it with Terraform you need the google_cloud_run_service resource.
For example:
resource "google_cloud_run_service" "default" {
name = "cloudrun-srv"
location = "us-central1"
template {
spec {
containers {
image = "eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0"
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
Note that I used eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0 and not carlspring/hello-spring-boot. You must use the fully qualified name as the short one points to Docker Hub where your image will not be found.
Terraform can be used to create a GCP VM instance with a Docker image.
Here is an example: https://github.com/terraform-providers/terraform-provider-google/issues/1022#issuecomment-475383003
Hope this helps.
The following line indicates the image does not exist:
Error: Error resolving image name 'carlspring/hello-spring-boot': Could not find image or family carlspring/hello-spring-boot
You should tag the image as eu.gcr.io/carlspring/hello-spring-boot:1.0.
Or alternatively, change the image reference in the boot_disk block to eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0.
You can do this using a VM in GCE whose operating system is based on a Google-supplied Container-Optimized OS image. You can then use this Terraform module, which facilitates fetching and running a container image.
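For completeness, a rough sketch of that Container-Optimized OS approach written directly against google_compute_instance rather than the module; the resource and instance names are placeholders, and while cos-cloud/cos-stable and the gce-container-declaration metadata key are what Compute Engine's own "deploy container" feature uses, treat the exact YAML shape as an assumption to verify:

resource "google_compute_instance" "docker_cos" {
  name         = "tf-docker-cos"
  machine_type = "f1-micro"
  zone         = var.zone

  boot_disk {
    initialize_params {
      # A Container-Optimized OS image, not the Docker image itself
      image = "cos-cloud/cos-stable"
    }
  }

  network_interface {
    network = google_compute_network.vpc_network.name
    access_config {}
  }

  # Tell Container-Optimized OS which container to run at boot
  metadata = {
    "gce-container-declaration" = <<-EOT
      spec:
        containers:
          - name: hello-spring-boot
            image: eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0
        restartPolicy: Always
    EOT
  }
}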

Cannot run program "-files" (in directory "."): error=2, No such file or directory

I am trying to run a step using the AWS SDK for Ruby against the Amazon Elastic MapReduce (EMR) service, which uses Hadoop. While I can create the cluster and the step, the step always fails, but it does not fail when set up manually using the web interface.
emr = Aws::EMR::Client.new
cluster_id = "*******"

resp = emr.add_job_flow_steps({
  job_flow_id: cluster_id, # required
  steps: [ # required
    {
      name: "TestStep", # required
      action_on_failure: "CANCEL_AND_WAIT", # accepts TERMINATE_JOB_FLOW, TERMINATE_CLUSTER, CANCEL_AND_WAIT, CONTINUE
      hadoop_jar_step: { # required
        jar: 'command-runner.jar',
        args: [
          "-files",
          "s3://source123/mapper.py,s3://source123/source_reducer.py",
          "-mapper",
          "mapper.py",
          "-reducer",
          "source_reducer.py",
          "-input",
          "s3://source123/input/",
          "-output",
          "s3://source123/output/"
        ]
      },
    },
  ],
})
The error I get is this:
Cannot run program "-files" (in directory "."): error=2, No such file or directory
Any clues?
It seems that adding hadoop-streaming as the first argument works, as shown below:
emr = Aws::EMR::Client.new
cluster_id = "*******"

resp = emr.add_job_flow_steps({
  job_flow_id: cluster_id, # required
  steps: [ # required
    {
      name: "TestStep", # required
      action_on_failure: "CANCEL_AND_WAIT", # accepts TERMINATE_JOB_FLOW, TERMINATE_CLUSTER, CANCEL_AND_WAIT, CONTINUE
      hadoop_jar_step: { # required
        jar: 'command-runner.jar',
        args: [
          "hadoop-streaming",
          "-files",
          "s3://source123/mapper.py,s3://source123/source_reducer.py",
          "-mapper",
          "mapper.py",
          "-reducer",
          "source_reducer.py",
          "-input",
          "s3://source123/input/",
          "-output",
          "s3://source123/output/"
        ]
      },
    },
  ],
})
