Pulumi Automation API doesn't run the Pulumi CLI?

I'm writing a Flask app that uses the Pulumi Automation API. I'm following the Automation API project examples, but when I send a POST request I get the error "Program run without the Pulumi engine available; re-run using the pulumi CLI". Isn't the Automation API supposed to run the CLI on its own?
The Pulumi CLI is available:
pulumi version
v3.24.1
Edit: I followed the pulumi over HTTP example; here is my app.py:
import pulumi
from flask import Flask, request, make_response, jsonify
from pulumi import automation as auto
import os
from pulumi_aws import s3

app = Flask(__name__)

# This function defines our pulumi s3 static website in terms of the content that the caller passes in.
# This allows us to dynamically deploy websites based on user defined values from the POST body.
def create_pulumi_program(content: str):
    # Create a bucket and expose a website index document
    site_bucket = s3.Bucket("s3-website-bucket", website=s3.BucketWebsiteArgs(index_document="index.html"))
    index_content = content

    # Write our index.html into the site bucket
    s3.BucketObject("index",
                    bucket=site_bucket.id,
                    content=index_content,
                    key="index.html",
                    content_type="text/html; charset=utf-8")

    # Set the access policy for the bucket so all objects are readable
    s3.BucketPolicy("bucket-policy",
                    bucket=site_bucket.id,
                    policy={
                        "Version": "2012-10-17",
                        "Statement": {
                            "Effect": "Allow",
                            "Principal": "*",
                            "Action": ["s3:GetObject"],
                            # Policy refers to bucket explicitly
                            "Resource": [pulumi.Output.concat("arn:aws:s3:::", site_bucket.id, "/*")]
                        },
                    })

    # Export the website URL
    pulumi.export("website_url", site_bucket.website_endpoint)

@app.route('/', methods=['GET'])
def home():
    return "<h1>Hello</h1>"

@app.route('/v1/code', methods=['POST'])
def create_handler():
    content = request.get_json()
    project_name = content.get('project_name')
    stack_name = content.get('stack_name')
    pulumi_access_token = request.headers['pulumi_access_token']
    os.environ['PULUMI_ACCESS_TOKEN'] = pulumi_access_token
    try:
        def pulumi_program():
            return create_pulumi_program(content)

        stack = auto.create_stack(stack_name=stack_name,
                                  project_name=project_name,
                                  program=create_pulumi_program(content))
        stack.workspace.install_plugin("aws", "v4.0.0")
        stack.set_config("aws:region", auto.ConfigValue(value="us-west-2"))
        # deploy the stack, tailing the logs to stdout
        up_res = stack.up(on_output=print)
        return jsonify(id=stack_name, url=up_res.outputs['website_url'].value)
    except auto.StackAlreadyExistsError:
        return make_response(f"stack '{stack_name}' already exists", 409)
    except Exception as exn:
        return make_response(str(exn), 500)

if __name__ == '__main__':
    app.run(debug=True)

I found the issue: I was calling the program function and passing its return value to create_stack
stack = auto.create_stack(
    stack_name=stack_name,
    project_name=project_name,
    program=create_pulumi_program(content)
)
create_stack expects a reference to a function that takes no arguments. Since create_pulumi_program takes content as a parameter, the no-argument pulumi_program closure defined in the handler (which captures content) is what should be passed instead:
stack = auto.create_stack(
    stack_name=stack_name,
    project_name=project_name,
    program=pulumi_program
)
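
For reference, here is one way to exercise the endpoint once the fix is in. This is a hedged sketch: the JSON field names and the pulumi_access_token header match the handler above, and all values are placeholders:

curl -X POST http://127.0.0.1:5000/v1/code \
  -H "Content-Type: application/json" \
  -H "pulumi_access_token: <your-pulumi-access-token>" \
  -d '{"project_name": "my-site", "stack_name": "dev"}'

Note that, as written, the handler passes the entire JSON body to create_pulumi_program as the page content.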

Related

Is there a way to upload a file to Azure Storage Blob with a Terraform null resource?

I have an Azure Storage Blob associated with an Azure Storage Account and a container ($web), managed by Terraform as follows.
resource "azurerm_storage_blob" "static_files_html" {
  name                   = "index.html"
  storage_account_name   = azurerm_storage_account.storage_account.name
  storage_container_name = "$web"
  type                   = "Block"
  content_type           = "text/html"
  source                 = "index.html"

  depends_on = [
    azurerm_resource_group.resource_group,
    azurerm_storage_account.storage_account
  ]
}
Can I upload a file to this using null_resource?
A few days ago, I used a null_resource to upload a file to a Virtual Machine, as follows.
So I want to know if there is a way to do the same to upload to an Azure Storage Blob. The idea is that I can change/modify the file and then run plan and apply to see the changes reflected in the blob storage. Is this possible?
resource "time_sleep" "wait_few_seconds" {
# depends_on = [azurerm_storage_blob.static_files_html]
create_duration = "10s"
}
# Terraform NULL RESOURCE
# Sync App1 Static Content to Webserver using Provisioners
resource "null_resource" "sync_app1_static" {
depends_on = [time_sleep.wait_few_seconds]
triggers = {
always-update = timestamp()
}
# Connection Block for Provisioners to connect to Azure VM Instance
connection {
type = "ssh"
host = azurerm_linux_virtual_machine.mylinuxvm.public_ip_address
user = azurerm_linux_virtual_machine.mylinuxvm.admin_username
private_key = file("${path.module}/ssh-keys/terraform-azure.pem")
}
# File Provisioner: Copies the app1 folder to /tmp
provisioner "file" {
source = "apps/app1"
destination = "/tmp"
}
# Remote-Exec Provisioner: Copies the /tmp/app1 folder to Apache Webserver /var/www/html directory
provisioner "remote-exec" {
inline = [
"sudo cp -r /tmp/app1 /var/www/html"
]
}
}
The full working example with the VM is here.
The example that I am stuck with (static website with storage) is here.
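
One pattern that should work is a null_resource with a local-exec provisioner that re-uploads the blob whenever the file's hash changes. This is a hedged sketch, not from the thread: it assumes the Azure CLI is installed and authenticated on the machine running Terraform, and that your az version supports --overwrite:

resource "null_resource" "upload_index_html" {
  # Re-run the provisioner whenever the local file changes
  triggers = {
    index_html_hash = filemd5("index.html")
  }

  # local-exec runs where terraform runs, so the Azure CLI must be available there
  provisioner "local-exec" {
    command = "az storage blob upload --account-name ${azurerm_storage_account.storage_account.name} --container-name '$web' --name index.html --file index.html --overwrite --auth-mode login"
  }

  depends_on = [azurerm_storage_account.storage_account]
}

Alternatively, keeping the azurerm_storage_blob resource and adding content_md5 = filemd5("index.html") makes Terraform itself replace the blob whenever the file changes, with no null_resource at all.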

How to get the docker host name in an Azure iotedge environment variable

Hi, I am trying to get the hostname into my Azure module by reading it from the environment variables. The module is written in C# and .NET Core 3.1.
var deviceId = Environment.GetEnvironmentVariable("HOST_HOSTNAME");
I have tried to add the variable in the deployment template:
"createOptions": {
"Cmd": [
"-e HOST_HOSTNAME=(hostname)"
]
}
The result is
deviceId == null
Can you try using "env" in your deployment template? You should add it at the same level as the "settings" JSON object. Something like:
"env": {
"HOS_HOSTNAME": {
"value": "<valuehere>"
}
}
You can also do this in the Azure Portal. See for example how it is done in the tutorial Give modules access to a device's local storage.
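
For context, in a full deployment manifest that placement looks roughly like this (module name and image are placeholders; the point is that "env" sits next to "settings", not inside "createOptions"):

"modules": {
  "mymodule": {
    "version": "1.0",
    "type": "docker",
    "status": "running",
    "restartPolicy": "always",
    "settings": {
      "image": "myregistry.azurecr.io/mymodule:latest",
      "createOptions": "{}"
    },
    "env": {
      "HOST_HOSTNAME": {
        "value": "<valuehere>"
      }
    }
  }
}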

inline lambda function using cdk

This code works as expected and I can create the CloudFormation template. But I need to embed the function inline: this sample code uploads the handler file to S3, and I do not want to use S3.
# cat mylambda/hello.py
import json

def handler(event, context):
    print('request: {}'.format(json.dumps(event)))
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'text/plain'
        },
        'body': 'Hello, CDK! You have hit {}\n'.format(event['path'])
    }
# cat app.py
#!/usr/bin/env python3
from aws_cdk import core
from hello.hello_stack import MyStack

app = core.App()
MyStack(app, "hello-cdk-1", env={'region': 'us-east-2'})
MyStack(app, "hello-cdk-2", env={'region': 'us-west-2'})
app.synth()
# cat hello/hello_stack.py
from aws_cdk import (
    core,
    aws_lambda as _lambda,
)

class MyStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Defines an AWS Lambda resource
        my_lambda = _lambda.Function(
            self, 'HelloHandler',
            runtime=_lambda.Runtime.PYTHON_3_7,
            code=_lambda.Code.asset('mylambda'),
            handler='hello.handler',
        )
Here is an example of an inline lambda function that can be deployed using CDK:
git clone https://github.com/aws-samples/aws-cdk-examples.git /tmp/aws-cdk-examples
mkdir lambda-cron1
cd lambda-cron1
cdk init --language python
cp /tmp/aws-cdk-examples/python/lambda-cron/* .
pip install -r requirements.txt
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=xxx
export AWS_SECRET_ACCESS_KEY=xxx
cdk ls
cdk synth LambdaCronExample > a2.txt
cdk deploy LambdaCronExample
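
If the goal is simply to avoid the S3 asset upload, aws_lambda also supports embedding the handler source directly in the template via Code.from_inline. Below is a minimal sketch of the stack above rewritten that way (the INLINE_HANDLER constant is my own naming; note CloudFormation limits inline code to 4096 characters, and the handler must be referenced as index.<name> because the inline source is materialized as index.py):

# hello/hello_stack.py (hypothetical inline variant)
from aws_cdk import (
    core,
    aws_lambda as _lambda,
)

# Handler source embedded as a string; no mylambda/ asset directory needed
INLINE_HANDLER = """\
import json

def handler(event, context):
    print('request: {}'.format(json.dumps(event)))
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'text/plain'},
        'body': 'Hello, CDK! You have hit {}'.format(event['path'])
    }
"""

class MyStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # Inline code is written into the synthesized template, so no S3 upload occurs
        _lambda.Function(
            self, 'HelloHandler',
            runtime=_lambda.Runtime.PYTHON_3_7,
            code=_lambda.Code.from_inline(INLINE_HANDLER),
            handler='index.handler',  # inline source is deployed as index.py
        )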

Automate Jenkins Keycloak plugin with groovy script

I am trying to 100% automate the deployment of Jenkins with the Keycloak plugin using Docker Compose. The objective is that we do not want to do anything but run a single command.
To automate Jenkins, I tried to use the Jenkins API, but a Groovy script seems to be the best and easiest solution. The problem is that I am not a developer...
I tried something like this, but it fails at the Keycloak configuration:
Failed to run script file:/var/jenkins_home/init.groovy.d/init.groovy groovy.lang.GroovyRuntimeException: Could not find matching constructor for: org.jenkinsci.plugins.KeycloakSecurityRealm(java.lang.Boolean)
import jenkins.model.*
import hudson.security.*
import org.jenkinsci.plugins.*
def instance = Jenkins.getInstance()
def env = System.getenv()
def hudsonRealm = new HudsonPrivateSecurityRealm(false)
String password = env.JENKINS_PASSWORD
hudsonRealm.createAccount("admin", password)
instance.setSecurityRealm(hudsonRealm)
instance.save()
def keycloak_realm = new KeycloakSecurityRealm(true)
instance.setSecurityRealm(keycloak_realm)
instance.setAuthorizationStrategy(new FullControlOnceLoggedInAuthorizationStrategy())
instance.save()
In the end, I want to
create an admin user
configure the Keycloak plugin
set the user authorizations.
Thank you in advance for your help :)
A possibly outdated issue, but I would like to share that I also had problems using Groovy scripts in init.groovy.d to maintain the configuration in Jenkins, including the Keycloak configuration. The best way I found to solve it was a declarative model using the Jenkins Configuration as Code (JCasC) plugin.
Examples:
Keycloak
jenkins:
  securityRealm: keycloak
unclassified:
  keycloakSecurityRealm:
    keycloakJson: |-
      {
        "realm": "my-realm",
        "auth-server-url": "https://my-keycloak-url/auth",
        "ssl-required": "all",
        "resource": "jenkins",
        "public-client": true,
        "confidential-port": 0
      }
source: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos/keycloak
Credentials
credentials:
  system:
    domainCredentials:
      - domain:
          name: "test.com"
          description: "test.com domain"
          specifications:
            - hostnameSpecification:
                includes: "*.test.com"
        credentials:
          - usernamePassword:
              scope: SYSTEM
              id: sudo_password
              username: root
              password: ${SUDO_PASSWORD}
source: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos/credentials
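
Since the goal in the question is to run nothing but a single command, it is worth noting that the JCasC YAML can be mounted into the Jenkins container and located via the CASC_JENKINS_CONFIG environment variable. A hedged docker-compose sketch (image tag and paths are placeholders):

services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    environment:
      # the configuration-as-code plugin reads its YAML from this path
      CASC_JENKINS_CONFIG: /var/jenkins_home/casc/jenkins.yaml
    volumes:
      - ./casc:/var/jenkins_home/casc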
The following solution works for me.
#!/usr/bin/env groovy
import jenkins.model.Jenkins
import hudson.security.*
import org.jenkinsci.plugins.KeycloakSecurityRealm
Jenkins jenkins = Jenkins.get()
def desc = jenkins.getDescriptor("org.jenkinsci.plugins.KeycloakSecurityRealm")
// JSON based on the keycloak configuration
desc.setKeycloakJson( "{\n" +
" \"realm\": \"myRealm\",\n" +
" \"auth-server-url\": \"https://keycloak/auth/\",\n" +
" \"ssl-required\": \"external\",\n" +
" \"resource\": \"jenkins\",\n" +
" \"public-client\": true,\n" +
" \"confidential-port\": 0\n" +
"}")
desc.save()
jenkins.setSecurityRealm(new KeycloakSecurityRealm())
def strategy = new FullControlOnceLoggedInAuthorizationStrategy()
strategy.setAllowAnonymousRead(false)
jenkins.setAuthorizationStrategy(strategy)
jenkins.save()

Jenkins credentials - Gitlab API token

I've been searching the whole web for a snippet on how to create a GitLab API credential with Groovy, and on creating a GitLab connection using that API credential for 'Build merge request' purposes. It would be really helpful. Thanks in advance.
UPDATE:
I found a solution anyway. I created the GitLab API credential manually, took its XML, and templated it with Jinja2 to make it dynamic. Then I passed it to the Jenkins CLI's create-credentials-by-xml command:
cat /tmp/gitlab-credential.xml | \
java -jar {{ cli_jar_location }} \
-s http://{{ jenkins_hostname }}:{{ http_port }} \
create-credentials-by-xml "SystemCredentialsProvider::SystemContextResolver::jenkins" "(global)"
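
For reference, the XML that command consumes should look roughly like this. This is a hedged reconstruction based on the gitlab-plugin's GitLabApiTokenImpl class (the Jinja2 placeholder is illustrative); verify the exact tags by exporting a manually created credential from your own instance:

<com.dabsquared.gitlabjenkins.connection.GitLabApiTokenImpl>
  <scope>GLOBAL</scope>
  <id>gitlab-token</id>
  <description>token for gitlab</description>
  <apiToken>{{ gitlab_api_token }}</apiToken>
</com.dabsquared.gitlabjenkins.connection.GitLabApiTokenImpl>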
I encountered a similar need to create the GitLab API credential via Groovy. Below is the solution I managed to figure out, adapted from https://gist.github.com/iocanel/9de5c976cc0bd5011653
import jenkins.model.*
import com.cloudbees.plugins.credentials.*
import com.cloudbees.plugins.credentials.common.*
import com.cloudbees.plugins.credentials.domains.*
import com.cloudbees.plugins.credentials.impl.*
import com.dabsquared.gitlabjenkins.connection.*
import hudson.util.Secret
domain = Domain.global()
store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()
token = Secret.fromString("my-token")
gitlabToken = new GitLabApiTokenImpl(
CredentialsScope.GLOBAL,
"gitlab-token",
"token for gitlab",
token
)
store.addCredentials(domain, gitlabToken)
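
To cover the second half of the question, wiring that credential into a GitLab connection can be done through the plugin's GitLabConnectionConfig descriptor. A hedged sketch, since the GitLabConnection constructor arguments differ between gitlab-plugin versions (this matches an older signature; newer versions also take a client builder id, so check the class in your installed version):

import com.dabsquared.gitlabjenkins.connection.GitLabConnection
import com.dabsquared.gitlabjenkins.connection.GitLabConnectionConfig
import jenkins.model.Jenkins

def gitlabConfig = Jenkins.instance.getDescriptorByType(GitLabConnectionConfig)
gitlabConfig.addConnection(new GitLabConnection(
    "my-gitlab",                   // connection name
    "https://gitlab.example.com",  // GitLab host URL
    "gitlab-token",                // id of the credential created above
    false,                         // ignoreCertificateErrors
    10,                            // connectionTimeout (seconds)
    10                             // readTimeout (seconds)
))
gitlabConfig.save()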
