I am using Packer to generate an image on Google Compute Engine, and Terraform to create the instance. I have set this metadata:
key: env_vars
value: export test=10
Packer is using a script with something like this inside:
curl "http://metadata.google.internal/computeMetadata/v1/project/attributes/env_vars?recursive=tru&alt=text" -H "Metadata-Flavor: Google" -o /tmp/env_vars.sh
source /tmp/env_vars.sh # or . /tmp/env_vars.sh
The problem is that when I create an instance from this image through Terraform, the env variables are not available. That is, if I run printenv or echo $test, it is empty.
Even if I write a startup-script for the instance, it doesn't work.
But if I run the exact same script inside the instance over SSH, it does work.
In all scenarios described above, the file env_vars.sh is created.
I just want to set the env vars from my metadata for any instance.
Any suggestions on how I can achieve this?
EDIT:
Here's the Terraform code:
# create instance
resource "google_compute_instance" "default" {
  count        = 1
  name         = var.machine_name
  machine_type = var.machine_type
  zone         = var.region_zone
  tags         = ["allow-http-ssh-rule"]

  boot_disk {
    initialize_params {
      image = var.source_image
    }
  }

  network_interface {
    network = "default"

    access_config {
      // Ephemeral IP
    }
  }
}
I have reproduced your issue in my own project, and you are right: it seems that export does not work in the start-up script. Variables exported there only exist in the shell that runs the startup script, so they are not visible in later SSH sessions.
I also tried creating a start-up script in a bucket, but that does not work either.
On the other hand, I was able to set the env vars in my project:
I'm using a debian-9 image, so I edited /etc/profile to add the env vars, since every login shell sources that file.
I use the following code to create my VM with env variables:
provider "google" {
project = "<<PROJECT-ID>>"
region = "us-central1"
zone = "us-central1-c"
}
resource "google_compute_instance" "vm_instance" {
name = "terraform-instance"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "debian-cloud/debian-9"
}
}
network_interface {
# A default network is created for all GCP projects
network = "default"
access_config {
}
}
# defining metadata
metadata = {
foo = "bar"
}
metadata_startup_script = "echo ENVVAR=DEVELOPMENT2 >> /etc/profile"
}
After the creation of my instance I was able to see the correct values:
$ echo $ENVVAR
DEVELOPMENT2
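If you want the values to keep coming from the env_vars metadata key in your question instead of being hard-coded, the same trick can be combined with a startup script that fetches the key and appends it to /etc/profile. This is only a rough, untested sketch reusing the metadata URL from your question:

  metadata_startup_script = <<-EOT
    #!/bin/bash
    # Append the project-level env_vars attribute to /etc/profile so every login shell picks it up.
    curl -s "http://metadata.google.internal/computeMetadata/v1/project/attributes/env_vars" \
      -H "Metadata-Flavor: Google" >> /etc/profile
  EOT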
An environment variable like DISABLE_APPLICATION_NAME is optional. APPLICATION_NAME is the name of the application. Inside our Jenkinsfile we have a loop where the application name is stored in a variable application_name. I left out the loop and other code in the example below, but it should be sufficient to give an idea of what I am trying to accomplish.
I would like to check if the environment variable DISABLE_<application_name> exists and is set to TRUE or FALSE.
pipeline {
    agent any
    environment {
        DISABLE_APPLICATION_TEST = 'true'
    }
    stages {
        stage('Deploy') {
            steps {
                deploy()
            }
        }
    }
}
void deploy() {
    application_name = "APPLICATION_TEST"
    disable_variable = "DISABLE_${application_name}"
    if (env."${disable_variable}") {
        disable = env."${disable_variable}"
    } else {
        disable = false
    }
}
This doesn't work, but is there any way to check whether an environment variable exists based on the contents of another variable? Something like env["${env_name_stored_in_variable}"]?
I have a create-time provisioner and a destroy-time provisioner. I've read that triggers might solve this problem, so they're integrated here, but while this successfully builds the resources, it doesn't let me destroy the module.thingx.null_resource.script-stuff resource.
I'm not sure I'm using triggers correctly here, and it's all the more confusing that the create-time apply works fine while the destroy-time apply fails with the mentioned error.
Here is the module's null resource that the error apparently refers to; it includes both the create-time and destroy-time provisioners:
resource "null_resource" "script-stuff" {
### -- testing triggers
triggers = {
dns_zones = var.dns_zones[each.key]
dnat_ip = google_compute_instance.server[each.key].network_interface.0.access_config.0.nat_ip
pem = tls_private_key.node_ssh[each.key].private_key_pem
} ### -- end testing
depends_on = [google_compute_instance.server, google_project_iam_member.list-sa]
for_each = var.list_map
provisioner "remote-exec" {
when = create
inline = [
"cat ${var.dns_zones[each.key]} > /dev/null",
"sensitive-script.sh --create"
]
connection {
type = "ssh"
host = google_compute_instance.server[each.key].network_interface[0].access_config[0].nat_ip
user = "common-user"
private_key = tls_private_key.node_ssh[each.key].private_key_pem
}
}
provisioner "remote-exec" {
when = destroy
inline = [
# "echo ${var.dns_zones[each.key]} > /dev/null", #<-- this doesn't work when terraform is destroying
"echo ${self.triggers.dns_zones[each.key]} > /dev/null",
"sensitive-script.sh --destroy"
]
connection {
type = "ssh"
#host = google_compute_instance.server[each.key].network_interface[0].access_config[0].nat_ip #<-- this doesn't work when terraform is destroying
host = self.triggers.dnat_ip
user = "common-user"
#private_key = tls_private_key.node_ssh[each.key].private_key_pem #<-- this doesn't work when terraform is destroying
private_key = self.triggers.pem
}
}
}
Destroy-time provisioners do not support references to variables, as explained in this GitHub issue:
Allow destroy-time provisioners to access variables
So you can't have any reference to a variable in "echo ${var.dns_zones[each.key]} > /dev/null".
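The triggers workaround you already started is the right approach: copy everything the destroy-time provisioner needs into triggers at create time and reference it through self. Note that triggers.dns_zones already holds the per-key value, so it should be referenced directly rather than indexed again with each.key. A rough, untested sketch of that pattern, based on the resources in your question (connection details copied from your code):

resource "null_resource" "script-stuff" {
  for_each = var.list_map

  # Everything the destroy-time provisioner needs is copied into triggers at create time.
  triggers = {
    dns_zones = var.dns_zones[each.key]
    dnat_ip   = google_compute_instance.server[each.key].network_interface[0].access_config[0].nat_ip
    pem       = tls_private_key.node_ssh[each.key].private_key_pem
  }

  provisioner "remote-exec" {
    when = destroy
    inline = [
      # Only self, count.index, and each.key may be referenced in a destroy-time provisioner.
      "echo ${self.triggers.dns_zones} > /dev/null",
      "sensitive-script.sh --destroy"
    ]

    connection {
      type        = "ssh"
      host        = self.triggers.dnat_ip
      user        = "common-user"
      private_key = self.triggers.pem
    }
  }
}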
I am on a Windows machine using Terraform 0.13.4, trying to spin up some containers on a remote host with the Docker provider:
provider "docker" {
host = "tcp://myvm:2376/"
registry_auth {
address = "myregistry:443"
username = "myusername"
password = "mypassword"
}
ca_material = file(pathexpand(".docker/ca.pem"))
cert_material = file(pathexpand(".docker/cert.pem"))
key_material = file(pathexpand(".docker/key.pem"))
}
data "docker_registry_image" "mycontainer" {
name = "myregistry:443/lvl1/lvl2/myimage:latest"
}
I am having a hard time with this as it cannot authenticate with my private registry. Always getting 401 Unauthorized.
If I don't do this to grab the sha256_digest and just use the docker_container resource, everything works but it forces replacements of the running containers.
Hello Angelos, if you don't want to force a replacement of the running container, you should try this:
provider "docker" {
host = "tcp://myvm:2376/"
registry_auth {
address = "myregistry:443"
username = "myusername"
password = "mypassword"
}
ca_material = file(pathexpand(".docker/ca.pem"))
cert_material = file(pathexpand(".docker/cert.pem"))
key_material = file(pathexpand(".docker/key.pem"))
}
data "docker_registry_image" "mycontainer" {
name = "myregistry:443/lvl1/lvl2/myimage:latest"
}
resource "docker_image" "example" {
name = data.docker_registry_image.mycontainer.name
pull_triggers = [data.docker_registry_image.mycontainer.sha256_digest]
keep_locally = true
}
Then in the container use:
resource "docker_container" "example" {
image = docker_image.example.latest
name = "container_name"
}
You should use
docker_image.example.latest
When you reference the docker_image resource itself, Terraform won't pull the image if it already exists and won't restart the container, but if you pass the name as a plain string it will replace the container every time.
https://www.terraform.io/docs/providers/docker/r/container.html
It turns out that the code is correct and that the container registry service I am using (an older version of ProGet) is not replying correctly to the auth calls. I tested the code against another registry and it all works as expected.
I've just started playing with NixOS, and have so far managed to edit /etc/nixos/configuration.nix in my NixOS 18.09 VM to have PHP-FPM and the Caddy webserver enabled.
{ config, pkgs, ... }:
{
  imports = [ <nixpkgs/nixos/modules/installer/virtualbox-demo.nix> ];

  users = {
    mutableUsers = false;
    groups = {
      caddy = { };
      php-project = { };
    };
    users = {
      hello = {
        group = "php-project";
      };
    };
  };

  environment.systemPackages = [
    pkgs.htop
    pkgs.httpie
    pkgs.php # for PHP CLI
  ];

  services.caddy = {
    enable = true;
    email = "david@example.com";
    agree = true;
    config = ''
      (common) {
        gzip
        header / -Server
        header / -X-Powered-By
      }
      :8080 {
        root /var/www/hello
        fastcgi / /run/phpfpm/hello.sock php
        log syslog
        import common
      }
    '';
  };

  services.phpfpm = {
    phpOptions = ''
      date.timezone = "Europe/Berlin"
    '';
    poolConfigs = {
      hello = ''
        user = hello
        listen = /run/phpfpm/hello.sock
        ; ...
        pm.max_requests = 500
      '';
    };
  };
}
A PHP-processed response is available at localhost:8080. (Yay!)
To enable Caddy plugins when compiling from source, Go imports are added to caddy's run.go, e.g.:
_ "github.com/mholt/caddy/caddyhttp" // plug in the HTTP server type
// This is where other plugins get plugged in (imported)
_ "github.com/nicolasazrak/caddy-cache" // added to use another plugin
)
How can I arrange for such a line insertion to be performed after the source is downloaded and before the build takes place? (If this is a reasonable approach when using Nix at all?)
The NixOS 18.09 caddy package.
The NixOS 18.09 caddy service.
I believe that when writing a package a builder script (Bash or otherwise) can be assigned, and I'm thinking the line insertion could be done in it. But I'm lost as to how to assign a script to an existing package in this situation (override an attribute/use an overlay?) and where to put the script on the disk.
Status update
I've been doing some reading on customising packages in general and it sounds like overlays might be what I need. However, I don't seem to be able to get my overlay evaluated.
I'm overriding the package name as a test, since it's simpler than patching code.
Overlay attempt 1
/etc/nixos/configuration.nix:
{ config, pkgs, options, ... }:
{
  imports = [ <nixpkgs/nixos/modules/installer/virtualbox-demo.nix> ];

  nix.nixPath = options.nix.nixPath.default ++ [
    "nixpkgs-overlays=/etc/nixos/overlays-compat/"
  ];

  # ...
}
/etc/nixos/overlays-compat/overlays.nix:
self: super:
with super.lib;
let
  # Using the nixos plumbing that's used to evaluate the config...
  eval = import <nixpkgs/nixos/lib/eval-config.nix>;
  # Evaluate the config,
  paths = (eval { modules = [ (import <nixos-config>) ]; })
    # then get the `nixpkgs.overlays` option.
    .config.nixpkgs.overlays;
in
foldl' (flip extends) (_: super) paths self
/etc/nixos/overlays-compat/caddy.nix:
self: super:
{
  caddy = super.caddy.override {
    name = "caddy-override";
  };
}
Overlay attempt 2
/etc/nixos/configuration.nix:
nixpkgs.overlays = [ (self: super: {
  caddy = super.caddy.override {
    name = "caddy-override";
  };
} ) ];
error: anonymous function at /nix/store/mr5sfmz6lm5952ch5q6v49563wzylrkx-nixos-18.09.2327.37694c8cc0e/nixos/pkgs/servers/caddy/default.nix:1:1 called with unexpected argument 'name', at /nix/store/mr5sfmz6lm5952ch5q6v49563wzylrkx-nixos-18.09.2327.37694c8cc0e/nixos/lib/customisation.nix:69:12
overrideAttrs
I previously managed to override the package name with this:
{ config, pkgs, options, ... }:
let
  caddyOverride = pkgs.caddy.overrideAttrs (oldAttrs: rec {
    name = "caddy-override-v${oldAttrs.version}";
  });
in {
  # ...
  services.caddy = {
    package = caddyOverride;
    # ...
  };
}
I could see in htop that the caddy binary was in a folder called /nix/store/...-caddy-override-v0.11.0-bin/. But I understand that overriding in this way has been superseded by overlays.
In order to add plugins to Caddy, it seems that the method is to modify the source.
You will need to adapt the Nixpkgs expression for Caddy to make that possible. That can be done outside the Nixpkgs tree, using services.caddy.package = callPackage ./my-caddy.nix {} for example, or by forking the Nixpkgs repository and pointing your NIX_PATH to your clone.
There is an issue for Caddy plugins: https://github.com/NixOS/nixpkgs/issues/14671
PR welcome!
I got a sample AWS CodePipeline working via the console but need to get it set up via Terraform.
I have two problems, one minor and one major:
The GitHub stage fails until I go in and edit it via the console, even though I wind up not changing anything I had already set up in "owner" or "repo".
The more major item is that I keep getting CannotPullContainerError on the build step, which keeps anything else from happening. It says "repository does not exist or may require 'docker login'".
The repository DOES exist; I used the command line from my Linux instance to verify the same 'docker login' and 'docker pull' commands that don't work from AWS CodePipeline.
(I know: the buildspec.yml is stupidly insecure but I wanted to get the prototype I had working the same way before I put in kms.)
My buildspec.yml is simple:
version: 0.2
phases:
  pre_build:
    commands:
      - $(aws ecr get-login --no-include-email --region us-west-2)
      - docker pull 311541007646.dkr.ecr.us-west-2.amazonaws.com/agverdict-next:latest
  build:
    commands:
      - sudo apt install curl
      - curl -sL https://deb.nodesource.com/setup_8.x | sudo bash -
      - sudo apt install nodejs -y
      - mkdir /root/.aws
      - cp ./deployment/credentials /root/.aws/credentials
      - cd ./deployment
      - bash ./DeployToBeta.sh
Here's the Terraform that creates the pipeline. (There's no 'deploy' stage, as the 'build' shell script does that, carried over from a previous incarnation.)
locals {
  github_owner           = "My-Employer"
  codebuild_compute_type = "BUILD_GENERAL1_LARGE"
  src_action_name        = "projectname-next"
  codebuild_envronment   = "int"
}

data "aws_caller_identity" "current" {}

provider "aws" {
  region = "us-west-2"
}

variable "aws_region" { default = "us-west-2" }

variable "github_token" {
  default     = "(omitted)"
  description = "GitHub OAuth token"
}

resource "aws_iam_role" "codebuild2" {
  name               = "${var.codebuild_service_role_name}"
  path               = "/projectname/"
  assume_role_policy = "${data.aws_iam_policy_document.codebuild_arpdoc.json}"
}

resource "aws_iam_role_policy" "codebuild2" {
  name   = "codebuild2_service_policy"
  role   = "${aws_iam_role.codebuild2.id}"
  policy = "${data.aws_iam_policy_document.codebuild_access.json}"
}

resource "aws_iam_role" "codepipeline2" {
  name               = "${var.codepipeline_service_role_name}"
  path               = "/projectname/"
  assume_role_policy = "${data.aws_iam_policy_document.codepipeline_arpdoc.json}"
}

resource "aws_iam_role_policy" "codepipeline" {
  name   = "codepipeline_service_policy"
  role   = "${aws_iam_role.codepipeline2.id}"
  policy = "${data.aws_iam_policy_document.codepipeline_access.json}"
}

resource "aws_codebuild_project" "projectname_next" {
  name           = "projectname-next"
  description    = "projectname_next_codebuild_project"
  build_timeout  = "60"
  service_role   = "${aws_iam_role.codebuild2.arn}"
  encryption_key = "arn:aws:kms:${var.aws_region}:${data.aws_caller_identity.current.account_id}:alias/aws/s3"

  artifacts {
    type = "CODEPIPELINE"
    name = "projectname-next-bld"
  }

  environment {
    compute_type    = "${local.codebuild_compute_type}"
    image           = "311541007646.dkr.ecr.us-west-2.amazonaws.com/projectname-next:latest"
    type            = "LINUX_CONTAINER"
    privileged_mode = false

    environment_variable {
      "name"  = "PROJECT_NAME"
      "value" = "projectname-next"
    }

    environment_variable {
      "name"  = "PROJECTNAME_ENV"
      "value" = "${local.codebuild_envronment}"
    }
  }

  source {
    type = "CODEPIPELINE"
  }
}

resource "aws_codepipeline" "projectname-next" {
  name     = "projectname-next-pipeline"
  role_arn = "${aws_iam_role.codepipeline2.arn}"

  artifact_store {
    location = "${var.aws_s3_bucket}"
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["projectname-webapp"]

      configuration {
        Owner                = "My-Employer"
        Repo                 = "projectname-webapp"
        OAuthToken           = "${var.github_token}"
        Branch               = "deploybeta_bash"
        PollForSourceChanges = "false"
      }
    }
  }

  stage {
    name = "Build"

    action {
      name             = "projectname-webapp"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      input_artifacts  = ["projectname-webapp"]
      output_artifacts = ["projectname-webapp-bld"]
      version          = "1"

      configuration {
        ProjectName = "projectname-next"
      }
    }
  }
}
Thanks much for any insight whatsoever!
Both issues sound like permission problems.
CodePipeline's console is likely replacing the GitHub OAuth token (with one that works): https://docs.aws.amazon.com/codepipeline/latest/userguide/GitHub-authentication.html
Make sure the CodeBuild role (${aws_iam_role.codebuild2.arn} in the code you provided I think) has permission to access ECR.
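One way to grant that, sketched against the role in your code and using the AWS-managed read-only ECR policy (just one option, untested):

resource "aws_iam_role_policy_attachment" "codebuild_ecr_read" {
  # Lets the CodeBuild service role pull images from ECR.
  role       = "${aws_iam_role.codebuild2.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}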