I'm confused about the image property of the docker_container resource in a Terraform .tf file.
This is what you see in all the tutorials:
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "tutorial"

  ports {
    internal = 80
    external = 8000
  }
}
This pulls the latest image.
But if you want a previous image, you need to specify the digest, like this:
resource "docker_image" "nginx" {
  name         = "nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f"
  keep_locally = false
}
Then you need to create a container from that image with the docker_container resource.
But there seems to be no way to specify that previous image tag or digest.
Things like image = docker_image.nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f all fail. I tried various syntax variations (quotes, no quotes, etc.); they all result in an error.
The only way I could get this working was with the following:
resource "docker_image" "nginx" {
  name         = "nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "Terraform-Nginx"

  ports {
    internal = 80
    external = 8000
  }
}
But now I'm confused.
What does ".latest" even mean in the image property of the docker_container resource?
There are a couple of things to understand here: resource arguments and resource attributes [1]. In most providers, when you want to create a certain resource, you have to provide values for at least the required arguments; there are optional arguments as well. When a resource is successfully created, it provides a set of attributes which can be referenced in other resources.

In your example, you cannot create a Docker container without specifying the image name. So instead of hardcoding the image name in the container resource, you first define the image resource and then reference its arguments/attributes in the container resource. The example you mention:
image = docker_image.nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f
is not valid because the Docker image resource has no argument or attribute named nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f, i.e., the provider schema knows nothing about it. The provider documentation has sections on arguments and attributes. The Docker provider documentation is missing some of the usual elements, but in this case the attributes seem to be listed under the Read-only section [2]. There you will find the latest attribute that you used for referencing the pulled image. Also note that the documentation says this attribute is deprecated:
latest (String, Deprecated) The ID of the image in the form of sha256:<hash> image digest. Do not confuse it with the default latest tag.
Based on the current documentation, you might want to use the following:
resource "docker_container" "nginx" {
  image = docker_image.nginx.name
  name  = "tutorial"

  ports {
    internal = 80
    external = 8000
  }
}
The syntax used when referencing attributes/arguments is always:
<RESOURCE TYPE>.<NAME>.<ATTRIBUTE>
i.e., docker_image.nginx.latest or docker_image.nginx.name.
As a side note, to clear up any confusion: the way you are referencing the value for the image ID is called an implicit reference [3]. Since Terraform creates resources in parallel, it helps Terraform decide the order in which resources will be created. In this case the image will be pulled first and then the container will be created based on it.
EDIT: updated the answer based on the input of @BertC.
[1] https://www.terraform.io/language/expressions/references#references-to-resource-attributes
[2] https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/image#read-only
[3] https://www.terraform.io/language/resources/behavior#resource-dependencies
In reaction to @MarkoE's answer:
Your answer showed using the id attribute:
resource "docker_image" "nginx" {
  name         = "nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.id
  name  = "Terraform-Nginx"

  ports {
    internal = 80
    external = 8000
  }
}
This gave me the following error:
docker_container.nginx: Creating...
╷
│ Error: Unable to create container with image sha256:1b84ed9be2d449f4242c521a3961fb417ecf773d2456477691f109aab3c5bb74nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f: findImage1: error looking up local image
"sha256:1b84ed9be2d449f4242c521a3961fb417ecf773d2456477691f109aab3c5bb74nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f": unable to inspect image
sha256:1b84ed9be2d449f4242c521a3961fb417ecf773d2456477691f109aab3c5bb74nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f: Error response from daemon: no such image:
sha256:1b84ed9be2d449f4242c521a3961fb417ecf773d2456477691f109aab3c5bb74nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f: invalid reference format
But from this link you provided I read:
The most common reference type is a reference to an attribute of a
resource which has been declared either with a resource or data block.
Which made me change the code to:
resource "docker_image" "nginx" {
  name         = "nginx:1.22.0#sha256:f2dfca5620b64b8e5986c1f3e145735ce6e291a7dc3cf133e0a460dca31aaf1f"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.name
  name  = "Terraform-Nginx"

  ports {
    internal = 80
    external = 8000
  }
}
(Note the docker_image.nginx.name)
And now it works. I even get the right tag in the docker images command.
Thanks, Marko!
Related
We have a new Terraform script that pushes a Docker image to an AWS Lambda. The script works well and correctly connects the fresh image to the Lambda. I can confirm this by checking the image URL shown for the Lambda in the AWS console: it is the newly pushed and connected image. However, when testing the Lambda, it is clearly running the prior code. It seems the Lambda has been updated, but the running in-memory instances didn't get the message.
Question: is there a way to force the in-memory Lambdas to be cycled to the new image?
Here is our TF code for the Lambda:
resource "aws_lambda_function" "my_lambda" {
  function_name = "MyLambda_${var.environment}"
  role          = data.aws_iam_role.iam_for_lambda.arn
  image_uri     = "${data.aws_ecr_repository.my_image.repository_url}:latest"
  memory_size   = 512
  timeout       = 300
  architectures = ["x86_64"]
  package_type  = "Image"

  environment { variables = { stage = var.environment, commit_hash = var.commit_hash } }
}
After more searching I found some discussions (here) that mention the source_code_hash option in Terraform's Lambda creation block (docs here). It's mostly used with a SHA hash of the zip file used for pushing code from an S3 bucket, but in our case we are using a container image, so there is not really a file to get a hash from. However, it turns out that it is just a string that Lambda checks for changes. So we added the following:
resource "aws_lambda_function" "my_lambda" {
  function_name = "MyLambda_${var.environment}"
  role          = data.aws_iam_role.iam_for_lambda.arn
  image_uri     = "${data.aws_ecr_repository.my_image.repository_url}:latest"
  memory_size   = 512
  timeout       = 300
  architectures = ["x86_64"]
  package_type  = "Image"

  environment { variables = { stage = var.environment, commit_hash = var.commit_hash } }

  source_code_hash = var.commit_hash # <-- new line
}
And we use a Bitbucket pipeline to inject the git hash into the terraform apply operation. This fix allowed the Lambda to correctly update the running version.
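The pipeline side can be as simple as passing the commit hash into terraform apply; here is a minimal sketch of a Bitbucket Pipelines step (the step name and flags are illustrative, BITBUCKET_COMMIT is a built-in pipeline variable):

```yaml
# bitbucket-pipelines.yml (sketch)
pipelines:
  default:
    - step:
        name: Deploy Lambda
        script:
          # Inject the current git hash so source_code_hash changes on every commit
          - terraform apply -auto-approve -var="commit_hash=${BITBUCKET_COMMIT}"
```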
Alternatively, if you don't want to depend on bitbucket for this, you can add a data source for the ECR image:
data "aws_ecr_image" "repo_image" {
  repository_name = "repo-name"
  image_tag       = "tag"
}
And then use its id as a source code hash like this:
source_code_hash = trimprefix(data.aws_ecr_image.repo_image.id, "sha256:")
Let's say I have a module that creates some resources representing a default server. Now I want to inherit from this default server to customize it in different directions.
On the manager node, which inherits from DockerNode, I want to run docker swarm init and get the join token. On all worker nodes I want to join with that token.
So in my main.tf where I use the DockerNode I have defined the nodes like this:
module "manager" {
  source = "./modules/swarm-node"

  node_network = {
    network_id = hcloud_network.swarm-network.id
    ip         = "10.0.1.10"
  }

  node_name_prefix       = "swarm-manager"
  server                 = var.server
  docker_compose_version = var.docker_compose_version
  volume_size            = var.volume_size
  volume_filesystem      = var.volume_filesystem
  ssh_keys               = [hcloud_ssh_key.ssh-key-a.name, hcloud_ssh_key.ssh-key-b.name]

  depends_on = [
    hcloud_network_subnet.swarm-network-nodes
  ]
}

module "worker" {
  count  = 2
  source = "./modules/swarm-node"

  node_index = count.index

  node_network = {
    network_id = hcloud_network.swarm-network.id
    ip         = "10.0.1.10${count.index}"
  }

  node_name_prefix       = "swarm-worker"
  server                 = var.server
  docker_compose_version = var.docker_compose_version
  volume_size            = var.volume_size
  volume_filesystem      = var.volume_filesystem
  ssh_keys               = [hcloud_ssh_key.ssh-key-a.name, hcloud_ssh_key.ssh-key-b.name]

  depends_on = [
    hcloud_network_subnet.swarm-network-nodes,
    module.manager
  ]
}
How do I run docker swarm init and return the join token from the server resource inside module.manager?
How do I join the swarm from each worker?
I've researched this for quite a while:
Some solutions expose the Docker daemon over TCP and access it from the worker to get the token. I don't want to expose the Docker daemon unnecessarily.
Some solutions copy the base module (in my case DockerNode) just to modify one or two lines. I'd like to follow DRY.
Some solutions use an additional shell script, which reads the .tfstate and SSHes into each machine to do further customization. I would like to use Terraform for this, with all its benefits.
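For what it's worth, one pattern that keeps everything over SSH (no TCP-exposed daemon, no copied module) combines a remote-exec provisioner on the manager with an external data source that fetches the join token. This is only a sketch, not a tested solution; manager_ip, ssh_key_path, and join_token are placeholder variables, not names from this thread:

```hcl
# Inside the manager module: initialise the swarm over SSH (sketch).
resource "null_resource" "swarm_init" {
  connection {
    host        = var.manager_ip
    user        = "root"
    private_key = file(var.ssh_key_path)
  }

  provisioner "remote-exec" {
    inline = ["docker swarm init --advertise-addr ${var.manager_ip}"]
  }
}

# Fetch the worker join token from the manager; the "external" data source
# requires its program to print a JSON object.
data "external" "join_token" {
  program = ["bash", "-c",
    "echo '{\"token\": \"'$(ssh root@${var.manager_ip} docker swarm join-token -q worker)'\"}'"
  ]
  depends_on = [null_resource.swarm_init]
}

# Inside each worker module: join using the token passed in as a variable.
resource "null_resource" "swarm_join" {
  connection {
    host        = var.worker_ip
    user        = "root"
    private_key = file(var.ssh_key_path)
  }

  provisioner "remote-exec" {
    inline = ["docker swarm join --token ${var.join_token} ${var.manager_ip}:2377"]
  }
}
```

The join token would be exposed as an output of the manager module and wired into the worker module as a variable, which also gives Terraform the right creation order.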
When instantiating the Vpc object within a stack using the CDK, there is a parameter max_azs which supposedly defaults to 3. However, no matter what I set that number to, I only ever get 2 AZs when I create a VPC.
from aws_cdk import (
    core,
    aws_ec2 as ec2
)

app = core.App()

subnets = []
subnets.append(ec2.SubnetConfiguration(name="public", subnet_type=ec2.SubnetType.PUBLIC, cidr_mask=20))
subnets.append(ec2.SubnetConfiguration(name="private", subnet_type=ec2.SubnetType.PRIVATE, cidr_mask=20))
subnets.append(ec2.SubnetConfiguration(name="isolated", subnet_type=ec2.SubnetType.ISOLATED, cidr_mask=20))

vpc = ec2.Vpc(app, "MyVpc", subnet_configuration=subnets, max_azs=3)
print(vpc.availability_zones)

app.synth()
I would expect to see 3 AZs used here, but I only ever get 2, even if I set the value to 99, which should mean all AZs.
Ah yes, I came across the same issue myself. What solved it for me was specifying the region and account when creating the stack.
The following example is TypeScript, but I'm sure you can write the corresponding Python.
new MyStack(app, 'MyStack', {
  env: {
    region: 'us-east-1',
    account: '1234567890',
  }
});
In the case of TypeScript you need to rebuild and synth before you deploy:
$ npm run build
$ cdk synth
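The corresponding Python (CDK v1, matching the question's imports) might look roughly like this; the stack name and account ID are placeholders. Without an explicit env, the CDK synthesizes an environment-agnostic stack and falls back to two placeholder AZs; pinning account and region lets it look up the real AZ list:

```python
from aws_cdk import (
    core,
    aws_ec2 as ec2
)

app = core.App()

# Pin the stack to a concrete account/region so AZ lookup works
stack = core.Stack(app, "MyStack",
                   env=core.Environment(account="123456789012", region="us-east-1"))

vpc = ec2.Vpc(stack, "MyVpc", max_azs=3)
app.synth()
```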
I tried different approaches to get the list of VMs of an azurerm_kubernetes_cluster in Terraform, but with no success. There are a number of possible elements here: https://www.terraform.io/docs/providers/azurerm/d/kubernetes_cluster.html, but none of them seems to allow getting the list of VMs. Is there a way?
OK, I found a way via the subnet. To use this approach, you need a Kubernetes cluster created with advanced networking, using a subnet you know.
The first section gets ip_configurations from the subnet and extracts the network interface names with an ugly split.
data "null_data_source" "all_kubernetes_nic_name" {
  count = "${length(azurerm_subnet.kubernetes.ip_configurations)}"

  inputs {
    nic = "${element(split("/", azurerm_subnet.kubernetes.ip_configurations[count.index]), 8)}"
  }
}
Because each Kubernetes node acquires a number of IP addresses, I need to apply distinct() to the previous list.
data "null_data_source" "kubernetes_nic_name" {
  count = "${length(distinct(data.null_data_source.all_kubernetes_nic_name.*.outputs.nic))}"

  inputs {
    nic = "${element(distinct(data.null_data_source.all_kubernetes_nic_name.*.outputs.nic), count.index)}"
  }
}
Then it's easy to get an exact reference to the network interface of each node in the Kubernetes cluster. Note that resource_group_name is extracted directly from the cluster object.
data "azurerm_network_interface" "kubernetes_nic" {
  count               = "${length(data.null_data_source.kubernetes_nic_name.*.outputs.nic)}"
  name                = "${data.null_data_source.kubernetes_nic_name.*.outputs.nic[count.index]}"
  resource_group_name = "${azurerm_kubernetes_cluster.cluster.node_resource_group}"
}
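From those NIC data sources you can then surface per-node details, since the azurerm_network_interface data source exposes attributes such as private_ip_address and virtual_machine_id. For example (a sketch; the output name is mine):

```hcl
output "kubernetes_node_ips" {
  value = "${data.azurerm_network_interface.kubernetes_nic.*.private_ip_address}"
}
```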
I have an Azure container named "images", but I can't figure out how to get all the blobs stored inside this container.
I have found the answer:
// Get a reference to the container (by name, via the containerName variable)
container = blobClient.GetContainerReference(containerName);

// Create the container if it does not exist yet
container.CreateIfNotExist();

// Set permissions
var permissions = new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
};
container.SetPermissions(permissions);

// Get the URLs of all blobs stored inside the container
IEnumerable<IListBlobItem> blobs = container.ListBlobs();
return blobs.Select(item => item.Uri.ToString()).ToList();