I have created a Docker image that I'd like to run in GCP using Terraform. I have tagged and pushed the image to GCR like this:
docker tag carlspring/hello-spring-boot:1.0 eu.gcr.io/${PROJECT_ID}/carlspring/hello-spring-boot:1.0
docker push eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0
I have the following code:
provider "google" {
// Set this to CREDENTIALS
credentials = file("credentials.json")
// Set this to PROJECT_ID
project = "carlspring"
region = "europe-west2"
zone = "europe-west2-a"
}
resource "google_compute_network" "vpc_network" {
name = "carlspring-terraform-network"
}
resource "google_compute_instance" "docker" {
count = 1
name = "tf-docker-${count.index}"
machine_type = "f1-micro"
zone = var.zone
tags = ["docker-node"]
boot_disk {
initialize_params {
image = "carlspring/hello-spring-boot"
}
}
}
After doing:
terraform init
terraform plan
terraform apply
I get:
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_compute_instance.docker[0]: Creating...
Error: Error resolving image name 'carlspring/hello-spring-boot': Could not find image or family carlspring/hello-spring-boot
on main.tf line 18, in resource "google_compute_instance" "docker":
18: resource "google_compute_instance" "docker" {
The examples I've seen online either use K8s, or start a VM running Linux with Docker installed, in which an image is then started. Can't I simply use my own container to start the instance?
google_compute_instance expects a VM image, not a Docker image. If you want to deploy Docker images to GCP, the easiest option is Cloud Run. To use it with Terraform, you need the google_cloud_run_service resource.
For example:
resource "google_cloud_run_service" "default" {
name = "cloudrun-srv"
location = "us-central1"
template {
spec {
containers {
image = "eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0"
}
}
}
traffic {
percent = 100
latest_revision = true
}
}
Note that I used eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0 and not carlspring/hello-spring-boot. You must use the fully qualified name as the short one points to Docker Hub where your image will not be found.
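For completeness, the matching tag and push commands (with the project ID hard-coded as carlspring, as in the provider block above) would be:
docker tag carlspring/hello-spring-boot:1.0 eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0
docker push eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0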
Terraform can be used to create a GCP VM instance with a Docker image.
Here is an example: https://github.com/terraform-providers/terraform-provider-google/issues/1022#issuecomment-475383003
Hope this helps.
The following line indicates the image does not exist:
Error: Error resolving image name 'carlspring/hello-spring-boot': Could not find image or family carlspring/hello-spring-boot
You should tag the image as eu.gcr.io/carlspring/hello-spring-boot:1.0.
Alternatively, change the image reference in the boot_disk block to eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0.
You can do this using a GCE VM whose operating system is based on a Google-supplied Container-Optimized OS image. You can then use this Terraform module, which facilitates fetching and running a container image.
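A rough sketch of that approach, assuming the terraform-google-modules/container-vm module (the version constraint and output names below are taken from that module's documentation and may need adjusting):
module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "~> 3.0"

  container = {
    image = "eu.gcr.io/carlspring/carlspring/hello-spring-boot:1.0"
  }
}

resource "google_compute_instance" "docker" {
  name         = "tf-docker-0"
  machine_type = "f1-micro"
  zone         = "europe-west2-a"

  boot_disk {
    initialize_params {
      # Container-Optimized OS image selected by the module
      image = module.gce-container.source_image
    }
  }

  network_interface {
    network = google_compute_network.vpc_network.name
    access_config {}
  }

  metadata = {
    # Tells Container-Optimized OS which container to run on boot
    gce-container-declaration = module.gce-container.metadata_value
  }
}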
I logged in to Docker normally and checked the authentication information, but the Jib build fails.
docker login
cat ~/.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {}
  },
  "credsStore": "desktop"
}
Docker login is successful.
// build.gradle
jib {
  from {
    image = "eclipse-temurin:17"
  }
  to {
    image = "username/${project.name}:${project.version}"
    tags = ["latest"]
  }
}
and run the command ./gradlew jib.
Error message:
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':jib-test:jib'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Build image failed, perhaps you should make sure your credentials for 'registry-1.docker.io/library/eclipse-temurin' are set up correctly. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-unauthorized for help
Looks like a duplicate of these:
How to setup Jib container to authenticate with docker remote registry to pull images?
401 Unauthorized when using jib to create docker image
https://github.com/GoogleContainerTools/jib/issues/3677
Try emptying config.json entirely, or just delete the file. In particular, remove the entry for "https://index.docker.io/v1/" and the credsStore setting.
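For illustration, after removing those entries a minimal ~/.docker/config.json would be left with little more than an empty auths section:
{
  "auths": {}
}
Running docker login again afterwards repopulates the file.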
I'm using Terraform to deploy to the DigitalOcean App Platform, which basically works fine. I'm deploying a public image from Docker Hub.
This is my config.
resource "digitalocean_app" "docs-page" {
spec {
domain {
name = "${var.do_subdomain_docs}.${var.do_base_domain}"
}
name = "docs-page"
region = var.do_region
service {
name = "docs-page-app"
http_port = 80
instance_count = 1
instance_size_slug = var.do_instance_smallest
internal_ports = []
source_dir = "/"
image {
registry_type = "DOCKER_HUB"
registry = "sommerfeldio"
repository = "docs-website"
tag = "stable"
}
routes {
path = "/"
preserve_path_prefix = false
}
}
}
}
The image of choice is my sommerfeldio/docs-website:stable (docker pull sommerfeldio/docs-website:stable).
My problem is this: I deploy a new version of my image to Docker Hub. Then I trigger terraform apply to update my DigitalOcean infrastructure, and Terraform states that nothing has changed ("Apply complete! Resources: 0 added, 0 changed, 0 destroyed."). This might be true from an infrastructure point of view, because nothing changed in e.g. the instance count or the domain name. But my image did change, and this change is not picked up. To update my DigitalOcean app I have to destroy and re-provision everything, which is not something I want to do.
I'm using this provider: https://registry.terraform.io/providers/digitalocean/digitalocean/latest/docs
How can I tell the terraform-digitalocean-provider to (1) re-deploy the app with the latest image version from Docker Hub, or (2) update my above-mentioned config so that the image is always pulled from Docker Hub?
I am trying to pull a Docker image from a local Artifactory when the digest of the image changes. But I am confused about the Terraform configuration and its relation to the installed Docker Desktop.
The Terraform script starts with:
terraform {
  required_providers {
    docker = {
      source = "terraform-providers/docker"
    }
  }
}

provider "docker" {
  host = "npipe:////.//pipe//docker_engine"

  registry_auth {
    address  = "ip:port"
    username = "my-username"
    password = "my-password"
  }
}
data "docker_registry_image" "my-image" {
name = "ip:port/repository-name/my-image:version"
}
resource "docker_image" "my-image" {
name = "my-image-name"
pull_triggers = ["data.docker_registry_image.my-image.sha256_digest"]
keep_locally = true
}
I added the registry ip:port to insecure-registries so that Terraform also has access to it.
The problem is that the insecure-registries setting from Docker Desktop is somehow ignored by Terraform (the Docker provider), because I get this response:
Error: Got error when attempting to fetch image version from registry: Error during registry request: Get https://ip:port/v2/repository-name/my-image:version: http: server gave HTTP response to HTTPS client.
on script.tf line 20, in data "docker_registry_image" "my-image":
20: data "docker_registry_image" "my-image" {
Can anyone help? Does somebody know why insecure-registries set in Docker Desktop does not apply here?
I think I have found the answer. Here is the link https://github.com/terraform-providers/terraform-provider-docker/blob/ccb7c6e8abe0fae89d115347c0677b5c0f45e2bf/docker/data_source_docker_registry_image.go#L85-L96 to the source code of the terraform-provider-docker plugin, where we can see that the https protocol is hardcoded when fetching the image digest:
req, err := http.NewRequest("GET", "https://"+registry+"/v2/"+image+"/manifests/"+tag, nil)
This is why the insecure-registries property is not taken into account.
I am trying to perform a simple deployment using Terraform (0.12.24) and multiple Docker providers (plugin version 2.7.0). My aim with the Terraform template below is to deploy two different containers to two different Docker-enabled hosts.
# Configure the Docker provider
provider "docker" {
  host = "tcp://192.168.1.10:2375/"
}

provider "docker" {
  alias = "worker"
  host  = "tcp://127.0.0.1:2375/"
}

# Create a container
resource "docker_container" "hello" {
  image = docker_image.world.latest
  name  = "hello"
}

resource "docker_container" "test" {
  provider = docker.worker
  image    = docker_image.world.latest
  name     = "test"
}

resource "docker_image" "world" {
  name = "hello-world:latest"
}
The docker command runs successfully without root privileges. The Docker daemons of both machines, 192.168.1.10 and 127.0.0.1, listen on 2375, are reachable from the host machine, and respond to direct Docker REST API calls (create, pull, etc.) performed with curl. Manually pulling images also works on both hosts, and I did that to be sure that the latest hello-world image exists on both.
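For anyone reproducing this, checks along these lines (illustrative curl commands against the Docker Engine REST API; adjust the host as needed) confirm that a daemon is reachable and can pull images:
# Check that the daemon answers on the remote host
curl http://192.168.1.10:2375/version
# Pull hello-world via the REST API on the worker host
curl -X POST "http://127.0.0.1:2375/images/create?fromImage=hello-world&tag=latest"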
However, the terraform deployment (terraform apply) fails with the following error:
docker_container.hello: Creating...
docker_container.test: Creating...
docker_container.hello: Creation complete after 1s [id=77e515b4269aed255d4becac61f40d38e09838cdf8285294bf51f3c7cddbf2bf]
Error: Unable to create container with image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: Unable to pull image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: error pulling image sha256:a29f45ccde2ac0bde957b1277b1501f471960c8ca49f1588c6c885941640ae60: Error response from daemon: pull access denied for sha256, repository does not exist or may require 'docker login'
on test.tf line 17, in resource "docker_container" "test":
17: resource "docker_container" "test" {
Why do I get the "Unable to create container with image ... Unable to pull image ... error pulling image" error when using multiple Docker hosts?
docker_container.test references docker_image.world, but they use different providers (default and docker.worker):
resource "docker_container" "test" {
provider = docker.worker
image = docker_image.world.latest
name = "test"
}
resource "docker_image" "world" {
name = "hello-world:latest"
}
This is fatal, as docker_image.world uses the default provider, which runs the docker pull against tcp://192.168.1.10:2375/ (not tcp://127.0.0.1:2375/).
This can be fixed by creating a docker_image that also uses the docker.worker provider, so that it matches docker_container.test, as follows:
resource "docker_container" "test" {
  provider = docker.worker
  image    = docker_image.world_worker.latest
  name     = "test"
}

resource "docker_image" "world_worker" {
  provider = docker.worker
  name     = "hello-world:latest"
}
There are some problems with the template originally used in the question.
Firstly, short-running hello-world containers are used, which leads to at least one of the services exiting and Terraform reporting an error.
Then, with the important help of @Alain O'Dea (see the relevant answer and comments), I created the following modified template, which works and accomplishes my goal.
# Configure the Docker provider
provider "docker" {
  host = "tcp://192.168.1.10:2375/"
}

provider "docker" {
  alias = "worker"
  host  = "tcp://127.0.0.1:2375/"
}

# Create a container
resource "docker_container" "hello" {
  image = docker_image.world.latest
  name  = "hello"
}

resource "docker_container" "test" {
  provider = docker.worker
  image    = docker_image.world_image.latest
  name     = "test"
}

resource "docker_image" "world" {
  name = "prom/prometheus:latest"
}

resource "docker_image" "world_image" {
  provider = docker.worker
  name     = "nextcloud:latest"
}
I'm working on terraforming gcloud resources and need to create a gcloud container registry. I'm trying to use the sample below from terraform.io:
data "google_container_registry_repository" {}
output "gcr_location" {
value = "${data.google_container_registry_repository.repository_url}"
}
and I'm receiving the below error when I run terraform plan:
'data' must be followed by exactly two strings: a type and a name
Is there a working sample that I can refer to?
terraform.io syntax: https://www.terraform.io/docs/providers/google/d/google_container_registry_repository.html
terraform version:
Terraform v0.11.2
Edit: updated to Terraform v0.11.3 and still the same problem.
Try this:
data "google_container_registry_repository" "myregistry" {}
output "gcr_location" {
value = "${data.google_container_registry_repository.myregistry.repository_url}"
}