I am on a Windows machine using Terraform 0.13.4 and trying to spin up some containers on a remote host using Terraform and the Docker provider:
provider "docker" {
host = "tcp://myvm:2376/"
registry_auth {
address = "myregistry:443"
username = "myusername"
password = "mypassword"
}
ca_material = file(pathexpand(".docker/ca.pem"))
cert_material = file(pathexpand(".docker/cert.pem"))
key_material = file(pathexpand(".docker/key.pem"))
}
data "docker_registry_image" "mycontainer" {
name = "myregistry:443/lvl1/lvl2/myimage:latest"
}
I am having a hard time with this because it cannot authenticate with my private registry; I always get 401 Unauthorized.
If I don't do this to grab the sha256_digest and instead just use the docker_container resource, everything works, but it forces replacement of the running containers.
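Roughly, that working-but-replacing setup looks like the sketch below (the container name here is just a placeholder), with the image passed as a plain string:
resource "docker_container" "mycontainer" {
  # The image is referenced only by name, so Terraform sees a change and
  # replaces the container on each apply instead of reusing the running one.
  image = "myregistry:443/lvl1/lvl2/myimage:latest"
  name  = "mycontainer"
}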
Hello Angelos, if you don't want to force-replace the running container, you should try this:
provider "docker" {
host = "tcp://myvm:2376/"
registry_auth {
address = "myregistry:443"
username = "myusername"
password = "mypassword"
}
ca_material = file(pathexpand(".docker/ca.pem"))
cert_material = file(pathexpand(".docker/cert.pem"))
key_material = file(pathexpand(".docker/key.pem"))
}
data "docker_registry_image" "mycontainer" {
name = "myregistry:443/lvl1/lvl2/myimage:latest"
}
resource "docker_image" "example" {
name = data.docker_registry_image.mycontainer.name
pull_triggers = [data.docker_registry_image.mycontainer.sha256_digest]
keep_locally = true
}
Then, in the container, use:
resource "docker_container" "example" {
image = docker_image.example.latest
name = "container_name"
}
You should use
docker_image.example.latest
When you reference the docker_image resource itself, Terraform won't pull the image if it already exists and won't restart the container; but if you pass the name as a plain string, it will replace the container every time.
https://www.terraform.io/docs/providers/docker/r/container.html
Turns out that the code is correct and that the container service I am using (older version of ProGet) is not replying correctly for the auth calls. I tested the code using another registry and it all works as expected.
Related
I have a question related to Terraform code for Azure Event Hub.
What security principles and policies do we need to take care of while deploying Azure Event Hub securely through Terraform? If possible, please share the Terraform code as well.
Thanks.
I have checked a few docs but am unable to understand them.
I tried to reproduce the same in my environment and create an Azure Event Hub using Terraform:
Terraform Code:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "venkyrg" {
name = "venkyrg1"
location = "West Europe"
}
resource "azurerm_eventhub_namespace" "example" {
name = "venkatnamespace"
location = azurerm_resource_group.venkyrg.location
resource_group_name = azurerm_resource_group.venkyrg.name
sku = "Standard"
capacity = 1
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "example" {
name = "venkateventhub"
namespace_name = azurerm_eventhub_namespace.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
partition_count = 2
message_retention = 1
}
# Event Hub policy creation
resource "azurerm_eventhub_authorization_rule" "example" {
name = "navi"
namespace_name = azurerm_eventhub_namespace.example.name
eventhub_name = azurerm_eventhub.example.name
resource_group_name = azurerm_resource_group.venkyrg.name
listen = true
send = false
manage = false
}
# Service Principal Assignment
resource "azurerm_role_assignment" "pod-identity-assignment" {
scope = azurerm_resource_group.venkyrg.id
role_definition_name = "Azure Event Hubs Data Owner"
principal_id = "74cca40a-1d7e-4352-a66c-217eab00cf33"
}
Terraform Apply:
Once the code ran, the resources were created in Azure successfully along with the Event Hub policies, as shown below.
Policy Status:
Azure Built-in roles for Azure Event Hubs
Reference: Azurerm-eventhub with Terraform
I can't find more information in the Terraform provider documentation, nor can I find any open issues on GitHub.
https://www.terraform.io/registry/providers/docs
https://github.com/hashicorp/terraform/issues?q=is%3Aissue+is%3Aopen+provider+attribute+deprecated
Terraform code:
terraform {
required_providers {
docker = {
source = "kreuzwerker/docker"
version = "~> 2.13.0"
}
}
}
provider "docker" {}
resource "docker_image" "nginx" {
name = "nginx:latest"
keep_locally = false
}
resource "docker_container" "nginx" {
image = docker_image.nginx.latest
name = "nginx"
ports {
internal = 80
external = 8000
}
}
Deprecation:
Refer to the provider documentation, not Terraform's. The provider is kreuzwerker/docker, and issues for it would be on its own GitHub page.
According to the kreuzwerker documentation, you need to change the version in your required_providers block:
version = "~> 2.21.0"
Also, change how you set the image in the docker_container:
image = docker_image.nginx.image_id
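Put together, a sketch of the updated configuration based on those two suggestions (with the rest of your setup unchanged) would look like this:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.21.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  # image_id is the replacement for the deprecated "latest" attribute
  image = docker_image.nginx.image_id
  name  = "nginx"

  ports {
    internal = 80
    external = 8000
  }
}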
I am using Packer to generate an image on Google Compute Engine, and Terraform to create the instance. I have set this metadata:
key: env_vars
value: export test=10
Packer is using a script with something like this inside:
curl "http://metadata.google.internal/computeMetadata/v1/project/attributes/env_vars?recursive=tru&alt=text" -H "Metadata-Flavor: Google" -o /tmp/env_vars.sh
source /tmp/env_vars.sh # or . /tmp/env_vars.sh
That means if I run printenv or echo $test, it is empty.
Even if I write a startup-script for the instance, it doesn't work.
But, if I run the same exact script inside the instance via SSH, it does work.
In all scenarios described above, the file env_vars.sh is created.
I just want to set the env vars from my metadata for any instance.
Any suggestion on how can I achieve this?
EDIT:
Here's the terraform code:
# create instance
resource "google_compute_instance" "default" {
count = 1
name = var.machine_name
machine_type = var.machine_type
zone = var.region_zone
tags = ["allow-http-ssh-rule"]
boot_disk {
initialize_params {
image = var.source_image
}
}
network_interface {
network = "default"
access_config {
// Ephemeral IP
}
}
}
I have reproduced your issue in my own project, and you are right: it seems that export does not work in the start-up script.
I also tried creating a start-up script in a bucket, but it does not work either.
On the other hand, I was able to set the env var in my project:
I'm using a debian-9 image, so I edited /etc/profile to add the env vars.
I use the following code to create my VM with env variables:
provider "google" {
project = "<<PROJECT-ID>>"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {
}
}
# defining metadata
metadata = {
foo = "bar"
}
metadata_startup_script = "echo ENVVAR=DEVELOPMENT2 >> /etc/profile"
}
After the creation of my instance I was able to see the correct values:
$ echo $ENVVAR
DEVELOPMENT2
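If you want the same approach to pick up the env_vars metadata key from your question instead of a hard-coded value, a rough, untested sketch (assuming the project-level key is still set) would be to replace the metadata_startup_script line with:
metadata_startup_script = <<-EOT
  #!/bin/bash
  # Append the project-level "env_vars" metadata value (e.g. "export test=10")
  # to /etc/profile so every login shell picks it up.
  curl -s "http://metadata.google.internal/computeMetadata/v1/project/attributes/env_vars" \
    -H "Metadata-Flavor: Google" >> /etc/profile
EOT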
I got a sample AWS codepipeline working via the console but need to get it set up via Terraform.
I have two problems, one minor and one major:
The GitHub stage fails until I go in and edit it via the console, even though I wind up not changing anything I already had set up in "owner" or "repo".
The more major item is that I keep getting CannotPullContainerError on the build step, which keeps anything else from happening. It says "repository does not exist or may require 'docker login'".
The repository DOES exist; from my Linux instance I ran the same 'docker login' and 'docker pull' commands on the command line and they work there, even though they don't work from AWS CodePipeline.
(I know: the buildspec.yml is stupidly insecure but I wanted to get the prototype I had working the same way before I put in kms.)
My buildspec.yml is simple:
version: 0.2
phases:
pre_build:
commands:
- $(aws ecr get-login --no-include-email --region us-west-2)
- docker pull 311541007646.dkr.ecr.us-west-2.amazonaws.com/agverdict-next:latest
build:
commands:
- sudo apt install curl
- curl -sL https://deb.nodesource.com/setup_8.x | sudo bash -
- sudo apt install nodejs -y
- mkdir /root/.aws
- cp ./deployment/credentials /root/.aws/credentials
- cd ./deployment
- bash ./DeployToBeta.sh
Here's the terraform that creates the pipeline. (No 'deploy' step as the 'build' shell script does that from a previous incarnation.)
locals {
github_owner = "My-Employer"
codebuild_compute_type = "BUILD_GENERAL1_LARGE"
src_action_name = "projectname-next"
codebuild_envronment = "int"
}
data "aws_caller_identity" "current" {}
provider "aws" {
region = "us-west-2"
}
variable "aws_region" { default="us-west-2"}
variable "github_token" {
default = "(omitted)"
description = "GitHub OAuth token"
}
resource "aws_iam_role" "codebuild2" {
name = "${var.codebuild_service_role_name}"
path = "/projectname/"
assume_role_policy = "${data.aws_iam_policy_document.codebuild_arpdoc.json}"
}
resource "aws_iam_role_policy" "codebuild2" {
name = "codebuild2_service_policy"
role = "${aws_iam_role.codebuild2.id}"
policy = "${data.aws_iam_policy_document.codebuild_access.json}"
}
resource "aws_iam_role" "codepipeline2" {
name = "${var.codepipeline_service_role_name}"
path = "/projectname/"
assume_role_policy = "${data.aws_iam_policy_document.codepipeline_arpdoc.json}"
}
resource "aws_iam_role_policy" "codepipeline" {
name = "codepipeline_service_policy"
role = "${aws_iam_role.codepipeline2.id}"
policy = "${data.aws_iam_policy_document.codepipeline_access.json}"
}
resource "aws_codebuild_project" "projectname_next" {
name = "projectname-next"
description = "projectname_next_codebuild_project"
build_timeout = "60"
service_role = "${aws_iam_role.codebuild2.arn}"
encryption_key = "arn:aws:kms:${var.aws_region}:${data.aws_caller_identity.current.account_id}:alias/aws/s3"
artifacts {
type = "CODEPIPELINE"
name = "projectname-next-bld"
}
environment {
compute_type = "${local.codebuild_compute_type}"
image = "311541007646.dkr.ecr.us-west-2.amazonaws.com/projectname-next:latest"
type = "LINUX_CONTAINER"
privileged_mode = false
environment_variable {
"name" = "PROJECT_NAME"
"value" = "projectname-next"
}
environment_variable {
"name" = "PROJECTNAME_ENV"
"value" = "${local.codebuild_envronment}"
}
}
source {
type = "CODEPIPELINE"
}
}
resource "aws_codepipeline" "projectname-next" {
name = "projectname-next-pipeline"
role_arn = "${aws_iam_role.codepipeline2.arn}"
artifact_store {
location = "${var.aws_s3_bucket}"
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = ["projectname-webapp"]
configuration {
Owner = "My-Employer"
Repo = "projectname-webapp"
OAuthToken = "${var.github_token}"
Branch = "deploybeta_bash"
PollForSourceChanges = "false"
}
}
}
stage {
name = "Build"
action {
name = "projectname-webapp"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = ["projectname-webapp"]
output_artifacts = ["projectname-webapp-bld"]
version = "1"
configuration {
ProjectName = "projectname-next"
}
}
}
}
Thanks much for any insight whatsoever!
Both issues sound like permission problems.
CodePipeline's console is likely replacing the GitHub OAuth token (with one that works): https://docs.aws.amazon.com/codepipeline/latest/userguide/GitHub-authentication.html
Make sure the CodeBuild role (${aws_iam_role.codebuild2.arn} in the code you provided I think) has permission to access ECR.
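For the ECR part, a minimal sketch of what that could look like (the policy document and resource names here are hypothetical; tighten the resources to your repository ARN as needed):
data "aws_iam_policy_document" "codebuild_ecr_access" {
  statement {
    sid    = "AllowEcrPull"
    effect = "Allow"
    actions = [
      "ecr:GetAuthorizationToken",
      "ecr:BatchCheckLayerAvailability",
      "ecr:GetDownloadUrlForLayer",
      "ecr:BatchGetImage",
    ]
    # GetAuthorizationToken only works against "*"; the pull actions can be
    # scoped down to the specific repository ARN.
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "codebuild2_ecr" {
  name   = "codebuild2_ecr_access"
  role   = "${aws_iam_role.codebuild2.id}"
  policy = "${data.aws_iam_policy_document.codebuild_ecr_access.json}"
}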
I have set up a single master with 2 client endpoints in my Icinga2 monitoring system using Director in top-down mode.
I have also set up the 2 client nodes with both accept config and accept commands enabled.
(Hopefully this means I'm running Top Down Command Endpoint mode.)
The service checks (disk/mem/load) for the 3 hosts are returning correct results. But my problem is:
According to the Top Down Command Endpoint example, host icinga2-client1 uses "hostalive" as the host check_command, e.g.
object Host "icinga2-client1.localdomain" {
check_command = "hostalive" //check is executed on the master
address = "192.168.56.111"
vars.client_endpoint = name //follows the convention that host name == endpoint name
}
But one issue I have is that if the client1 icinga process is not running, the host status stays GREEN and all of the service statuses (disk/mem/load) stay GREEN as well, because the master is not getting any service check updates and the hostalive check command can still ping the node.
Under the Best Practice - Health Checks section, it mentions using the "cluster-zone" check command. I was expecting that, while using "cluster-zone", the host status would turn RED when the client node's icinga process is stopped, but somehow this is not happening.
Does anyone have any idea?
My zone/host/endpoint configurations are as follows:
object Zone "icinga-master" {
endpoints = [ "icinga-master" ]
}
object Host "icinga-master" {
import "Master-Template"
display_name = "icinga-master [192.168.100.71]"
address = "192.168.100.71"
groups = [ "Servers" ]
}
object Endpoint "icinga-master" {
host = "192.168.100.71"
port = "5665"
}
object Zone "rick-tftp" {
parent = "icinga-master"
endpoints = [ "rick-tftp" ]
}
object Endpoint "rick-tftp" {
host = "172.16.181.216"
}
object Host "rick-tftp" {
import "Host-Template"
display_name = "rick-tftp [172.16.181.216]"
address = "172.16.181.216"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
object Zone "tftp-server" {
parent = "icinga-master"
endpoints = [ "tftp-server" ]
}
object Endpoint "tftp-server" {
host = "192.168.100.221"
}
object Host "tftp-server" {
import "Host-Template"
display_name = "tftp-server [192.168.100.221]"
address = "192.168.100.221"
groups = [ "Servers" ]
vars.cluster_zone = "icinga-master"
}
template Host "Host-Template" {
import "pnp4nagios-host"
check_command = "cluster-zone"
max_check_attempts = "5"
check_interval = 1m
retry_interval = 30s
enable_notifications = true
enable_active_checks = true
enable_passive_checks = true
enable_event_handler = true
enable_perfdata = true
}
Thanks,
Rick