Dynamically creating Terraform Kubernetes Volumes from a list - terraform-provider-kubernetes

I’m hoping someone already has an existing loop for this one. The idea is that I pass in a list and create a K8S EFS volume for each of its entries.
For example, sending this:
locals {
  mount = [
    {
      "name" : "mount_name_1",
      "mount_path" : "/some/other/pount",
      "efs_server" : "AWS_EFS_address"
    },
    {
      "name" : "mount_name_2",
      "mount_path" : "/some/mount/pount",
      "efs_server" : "AWS_EFS_address"
    }
  ]
}
and I want to be able to access them like this:
In the K8S deployment spec:
volume {
  name = "mount.name"
  nfs {
    path   = "/${var.env}"
    server = mount.efs_server
  }
}
In the container:
volume_mount {
  name       = "mount.name"
  mount_path = "mount.mount_path"
}
Any help would be appreciated.
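A minimal sketch of one way to do this, assuming Terraform 0.12+ and a kubernetes_deployment resource (the surrounding resource and names are assumptions): a dynamic block iterating over local.mount, placed once inside the pod spec and once inside the container block.

# Sketch only: inside the pod spec of a kubernetes_deployment resource
dynamic "volume" {
  for_each = local.mount
  content {
    name = volume.value.name
    nfs {
      path   = "/${var.env}"
      server = volume.value.efs_server
    }
  }
}

# Sketch only: inside the corresponding container block
dynamic "volume_mount" {
  for_each = local.mount
  content {
    name       = volume_mount.value.name
    mount_path = volume_mount.value.mount_path
  }
}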

Related

Uploading file to ECS task

I'm trying to upload a simple .yml file when creating an ECS task via Terraform. Here is the code in ./main.tf:
resource "aws_ecs_task_definition" "grafana" {
family = "grafana"
cpu = "256"
memory = "512"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
container_definitions = jsonencode([
{
name = "grafana"
image = "grafana/grafana:latest"
portMappings = [
{
containerPort = 3000,
hostPort = 3000,
protocol = "tcp"
}
]
}
])
}
How do I go about adding ./datasource.yml (located on my host machine) to the container within the task definition so that when the task runs it can use it? I wasn't sure if volume { } could be used?
I think you have two alternatives here:
Rebuild the Docker image so that it includes your modified datasource.yaml:
COPY datasource.yaml /usr/share/grafana/conf/provisioning/datasource.yaml
or mount a volume to which you can easily push files programmatically (EFS turns out to be a bit complicated for this):
mount_points = [
  {
    sourceVolume  = "grafana"
    containerPath = "/var/lib/grafana/conf/provisioning"
    readOnly      = false
  }
]
volumes = [
  {
    name      = "grafana"
    host_path = "/ecs/grafana-provisioning"
  }
]
I wasn't sure if volume { } could be used?
As a matter of fact you can; check the docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_task_definition#example-usage
volume {
  name      = "grafana-volume"
  host_path = "./datasource.yml"
}
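For completeness, a rough sketch (untested; the container path and volume name are assumptions) of how such a named volume is referenced from the container definition via mountPoints:

resource "aws_ecs_task_definition" "grafana" {
  family = "grafana"
  # other arguments (cpu, memory, network_mode, ...) as in the question

  container_definitions = jsonencode([
    {
      name  = "grafana"
      image = "grafana/grafana:latest"
      mountPoints = [
        {
          sourceVolume  = "grafana-volume"            # must match the volume name below
          containerPath = "/etc/grafana/provisioning" # assumed target path inside the container
          readOnly      = false
        }
      ]
    }
  ])

  volume {
    name      = "grafana-volume"
    host_path = "/ecs/grafana-provisioning" # a directory on the container instance
  }
}

Note that host volumes with a host_path only apply to the EC2 launch type; on Fargate you would need EFS or a rebuilt image instead.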

Error using EFS in ECS, returns unknown filesystem type 'efs'

I'm using a Docker image for Jenkins (jenkins/jenkins:2.277.1-lts-alpine) in AWS ECS, and I want to persist the data using AWS EFS.
I created the EFS file system and got its ID (fs-7dcef848).
My Terraform code looks like:
resource "aws_ecs_service" "jenkinsService" {
cluster = var.ECS_cluster
name = var.jenkins_name
task_definition = aws_ecs_task_definition.jenkinsService.arn
deployment_maximum_percent = "200"
deployment_minimum_healthy_percent = 50
desired_count = var.service_desired_count
tags = {
"ManagedBy" : "Terraform"
}
}
resource "aws_ecs_task_definition" "jenkinsService" {
family = "${var.jenkins_name}-task"
container_definitions = file("task-definitions/service.json")
volume {
name = var.EFS_name
efs_volume_configuration {
file_system_id = "fs-7dcef848"
}
}
tags = {
"ManagedBy" : "Terraform"
}
}
and the service.json
[
  {
    "name": "DevOps-jenkins",
    "image": "jenkins/jenkins:2.284-alpine",
    "cpu": 0,
    "memoryReservation": 1024,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 80
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "DevOps-Jenkins",
        "containerPath": "/var/jenkins_home"
      }
    ]
  }
]
The terraform apply works OK, but the task fails to start, returning:
Stopped reason Error response from daemon: create ecs-DevOps-jenkins-task-33-DevOps-Jekins-bcb381cd9dd0f7ae2700: VolumeDriver.Create: mounting volume failed: mount: unknown filesystem type 'efs'
Does anyone know what's happening?
Is there another way to persist the data?
Thanks in advance.
Solved: my first attempt was to install the "amazon-efs-utils" package using a remote-exec provisioner, but following the indications provided by Oguzhan Aygun, I did it in the USER DATA of the container instances instead and it worked!
Thanks!
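For reference, a minimal sketch of what that user data could look like (assuming EC2 container instances built from an ECS-optimized Amazon Linux AMI; the resource name and the ecs_ami_id variable are made up):

resource "aws_launch_template" "ecs_nodes" {
  name_prefix   = "jenkins-ecs-"
  image_id      = var.ecs_ami_id # assumed: ECS-optimized Amazon Linux AMI
  instance_type = "t3.medium"

  # Install the EFS mount helper so the instance can mount 'efs' volumes,
  # and register the instance with the ECS cluster.
  user_data = base64encode(<<-EOT
    #!/bin/bash
    yum install -y amazon-efs-utils
    echo "ECS_CLUSTER=${var.ECS_cluster}" >> /etc/ecs/ecs.config
  EOT
  )
}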

What is the simplest way to get GCR image digest through terraform?

The Terraform GCR provider has a data source called google_container_registry_image which has a digest argument, but it will be null, since the data source is stated to work completely offline.
data "google_container_registry_image" "image" {
project = "foo"
name = "bar"
tag = "baz"
}
output "digest" {
value = data.google_container_registry_image.image.digest // this is null
}
The workaround I currently use relies on the Docker provider and looks like this:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.11.0"
    }
  }
}

provider "docker" {
  registry_auth {
    address     = "gcr.io"
    config_file = pathexpand("~/.docker/config.json")
  }
}

data "google_container_registry_image" "image" {
  project = "foo"
  name    = "bar"
  tag     = "baz"
}

data "docker_registry_image" "image" {
  name = data.google_container_registry_image.image.image_url
}

output "digest" {
  value = data.docker_registry_image.image.sha256_digest
}
Using two providers and additional Docker credentials seems pretty complicated for such a simple use case. Is there an easier way to do it?

Icinga2 client Host culster-zone check command not going down (RED) when lost connection

I have set up a single master with 2 client endpoints in my icinga2 monitoring system using Director with top-down mode.
I have also set up the 2 client nodes to both accept configs and accept commands
(hopefully this means I'm running top-down command endpoint mode).
The service checks (disk/mem/load) for the 3 hosts are returning correct results, but my problem is this:
according to the Top Down Command Endpoint example, host icinga2-client1 uses "hostalive" as the host check_command, e.g.:
object Host "icinga2-client1.localdomain" {
check_command = "hostalive" //check is executed on the master
address = "192.168.56.111"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
But one issue I have is that if the client1 icinga process is not running, the host status stays GREEN, and all of the service statuses (disk/mem/load) stay GREEN as well, because the master is not getting any service check updates and the hostalive check command is still able to ping the node.
The Best Practice - Health Check section mentions using the "cluster-zone" check command. I was expecting that with "cluster-zone" the host status would turn RED when the client node's icinga process is stopped, but somehow this is not happening.
Does anyone have any idea?
My zone/host/endpoint configurations are as follows:
object Zone "icinga-master" {
endpoints = [ "icinga-master" ]
}
object Host "icinga-master" {
import "Master-Template"
display_name = "icinga-master [192.168.100.71]"
address = "192.168.100.71"
groups = [ "Servers" ]
}
object Endpoint "icinga-master" {
host = "192.168.100.71"
port = "5665"
}
object Zone "rick-tftp" {
parent = "icinga-master"
endpoints = [ "rick-tftp" ]
}
object Endpoint "rick-tftp" {
host = "172.16.181.216"
}
object Host "rick-tftp" {
import "Host-Template"
display_name = "rick-tftp [172.16.181.216]"
address = "172.16.181.216"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
object Zone "tftp-server" {
parent = "icinga-master"
endpoints = [ "tftp-server" ]
}
object Endpoint "tftp-server" {
host = "192.168.100.221"
}
object Host "tftp-server" {
import "Host-Template"
display_name = "tftp-server [192.168.100.221]"
address = "192.168.100.221"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
template Host "Host-Template" {
import "pnp4nagios-host"
check_command = "cluster-zone"
max_check_attempts = "5"
check_interval = 1m
retry_interval = 30s
enable_notifications = true
enable_active_checks = true
enable_passive_checks = true
enable_event_handler = true
enable_perfdata = true
}
Thanks,
Rick
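One thing worth double-checking (an observation based on the Icinga2 health-check documentation, not a confirmed fix): the cluster-zone check reports the connectivity of the zone named in vars.cluster_zone, and the documented agent health check sets that variable to the client's own zone name rather than the master's. With vars.cluster_zone = "icinga-master", the master is effectively checking connectivity to its own zone, which is always up. A sketch of the documented convention for one of the client hosts:

object Host "rick-tftp" {
  import "Host-Template"
  address = "172.16.181.216"
  groups = [ "Servers" ]
  // check connectivity from the master to this client's zone,
  // not to the master zone itself
  vars.cluster_zone = "rick-tftp"
}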

Nomad+Docker: Using the local Docker image, avoiding cleanup

My problem
I use nomad to schedule and deploy Docker images across several nodes. I am using a pretty stable image, so I want that image to be loaded locally rather than fetched from Dockerhub each time.
The docker.cleanup.image option should do just that. From the docs: "docker.cleanup.image - Defaults to true. Changing this to false will prevent Nomad from removing images from stopped tasks." That is exactly what I want.
The documentation example is:
client {
  options {
    "docker.cleanup.image" = "false"
  }
}
However, I don't know where this stanza goes. I tried placing it in the job or task sections of the fairly simple configuration file, with no success.
Code (configuration file)
job "example" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 30
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = false
canary = 0
}
group "cache" {
count = 30
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
ephemeral_disk {
size = 300
}
task "redis" {
driver = "docker"
config {
image = "whatever/whatever:v1"
port_map {
db = 80
}
}
env {
"LOGGER" = "ec2-52-58-216-66.eu-central-1.compute.amazonaws.com"
}
resources {
network {
mbits = 10
port "db" {}
}
}
service {
name = "global-redis-check"
tags = ["global", "cache"]
port = "db"
}
}
}
}
My question
Where do I place the client stanza in the nomad configuration file?
This doesn't go in your job file; it goes in the agent configuration on the Nomad clients (the nodes where your jobs are deployed).
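For example, a minimal client agent configuration (a sketch; the file path and data_dir are assumptions) could look like this, after which the Nomad agent on each client node needs to be restarted:

# /etc/nomad.d/client.hcl -- Nomad agent configuration, not a job file
data_dir = "/opt/nomad"

client {
  enabled = true

  options {
    "docker.cleanup.image" = "false"
  }
}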
