How to get the Docker host name into an Azure IoT Edge environment variable - azure-iot-edge

Hi, I am trying to get the hostname into my Azure module by reading it from the environment variables. The module is written in C# and .NET Core 3.1:
var deviceId = Environment.GetEnvironmentVariable("HOST_HOSTNAME");
I have tried to add the variable in the deployment template:
"createOptions": {
"Cmd": [
"-e HOST_HOSTNAME=(hostname)"
]
}
The result is
deviceId == null

Can you try using "env" in your deployment template? You should add it at the same level as the "settings" JSON object. Something like:
"env": {
"HOS_HOSTNAME": {
"value": "<valuehere>"
}
}
You can also do this in the Azure Portal. See, for example, how it is done in the tutorial Give modules access to a device's local storage.
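For reference, a minimal sketch of where "env" could sit inside a module definition in the deployment template; the module name "myModule" and the image value are placeholders, not taken from your deployment:
"myModule": {
  "version": "1.0",
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "env": {
    "HOST_HOSTNAME": {
      "value": "<valuehere>"
    }
  },
  "settings": {
    "image": "<your module image>",
    "createOptions": "{}"
  }
}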

Related

SwaggerUI Docker support for two api files (locally)

I'm trying to create a Swagger UI configuration to show several of my APIs. They are not hosted publicly; the definition files are in my local file system. I'm using Swagger UI with Docker and run it with the following command:
docker run -p 8080:8080 -v $(pwd)/_build:/spec swaggerapi/swagger-ui
The _build directory is where I have my YAML spec files. This is the swagger-config.yaml config file:
urls:
  - /spec/openapi2.yaml
  - /spec/openapi.yaml
plugins:
  - topbar
I have also tried:
urls:
  - url: /spec/openapi2.yaml
    name: API1
  - url: /spec/openapi.yaml
    name: API2
plugins:
  - topbar
After running it, all I see is Swagger UI's default example API, so I suppose there's an error in my configuration. I have tried several things, but they have not worked, and I cannot find any good documentation about the swagger-config.yaml configuration file.
Any idea how to make it work with several APIs?
According to the comments in the Swagger UI issue tracker, the Docker version needs the config file in the JSON format rather than YAML.
Try using this swagger-config.json:
{
  "urls": [
    {
      "url": "/spec/openapi2.yaml",
      "name": "API1"
    },
    {
      "url": "/spec/openapi.yaml",
      "name": "API2"
    }
  ],
  "plugins": [
    "topbar"
  ]
}
Also add -e CONFIG_URL=/path/to/swagger-config.json to the docker run command.
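Putting it together, a minimal sketch of the docker run command, assuming swagger-config.json is placed in the same local _build directory that is already mounted at /spec and that the /spec path is reachable from the browser as in your urls:
docker run -p 8080:8080 \
  -v $(pwd)/_build:/spec \
  -e CONFIG_URL=/spec/swagger-config.json \
  swaggerapi/swagger-ui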

How to import Airflow connections from a JSON file

So I have established two connections, aws_default and google_cloud_default, in a JSON file like so:
{
  "aws_default": {
    "conn_type": "s3",
    "host": null,
    "login": "sample_login",
    "password": "sample_secret_key",
    "schema": null,
    "port": null,
    "extra": null
  },
  "google_cloud_default": {
    "conn_type": "google_cloud_platform",
    "project_id": "sample-proj-id123",
    "keyfile_path": null,
    "keyfile_json": {sample_json},
    "scopes": "sample_scope",
    "number_of_retries": 5
  }
}
I have a local Airflow server containerized in Docker. What I am trying to do is import the connections from this file so that I don't need to define them in the Airflow UI.
I have an entrypoint.sh file which runs every time the Airflow image is built.
I have included the line airflow connections import connections.json in that shell file.
In my docker-compose.yaml file, I have added a bind-mounted volume like so:
- type: bind
  source: ${HOME}/connections.json
  target: /usr/local/airflow/connections.json
However, when I run my DAG locally, which includes hooks that use these connections, I receive errors such as:
The conn_id `google_cloud_default` isn't defined
So I'm not too sure how to proceed. I was reading about Airflow's local filesystem secrets backend here,
and it mentions this code chunk to establish the file paths:
[secrets]
backend = airflow.secrets.local_filesystem.LocalFilesystemBackend
backend_kwargs = {"variables_file_path": "/files/var.json", "connections_file_path": "/files/conn.json"}
But when I check my airflow.cfg, I can't find this code chunk. Am I supposed to add it to airflow.cfg?
I could use some guidance here; I know the solution is probably simple, but I'm new to setting up connections like this. Thanks!
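For context, a rough sketch of how the pieces described above could be wired together; the service name, image tag, and entrypoint path are illustrative assumptions, not taken from the question:
# docker-compose.yaml (fragment)
services:
  airflow:                       # hypothetical service name
    image: apache/airflow:2.3.0  # hypothetical image tag
    entrypoint: /entrypoint.sh
    volumes:
      - type: bind
        source: ${HOME}/connections.json
        target: /usr/local/airflow/connections.json
# entrypoint.sh would then run, for example:
#   airflow connections import /usr/local/airflow/connections.json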

How to control Docker Image Version using Terraform while pushing images to ECR

I am new to Terraform, so please be kind.
During the build process, Terraform pushes the Docker image to AWS ECR with a new name on every build.
Because the image name is different, we need to create a new task definition for each new build.
Is there a way to handle this issue in Terraform?
Any help is appreciated.
If you are OK with replacing the task definition each time with the new image, then you can update the image name used in the task definition and Terraform will handle the update for you.
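For that first option, a minimal sketch (the variable, repository URL, and container values are illustrative, not your actual setup) of parameterising the image so that each build only updates the existing task definition in place:
variable "image_tag" {
  type = string
}

resource "aws_ecs_task_definition" "service" {
  family = "service"
  container_definitions = jsonencode([
    {
      name      = "a-service"
      image     = "123456789012.dkr.ecr.eu-west-1.amazonaws.com/a-service:${var.image_tag}"
      cpu       = 10
      memory    = 512
      essential = true
    }
  ])
}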
If you need to generate a new task definition each time and leave the old ones in place, then read on.
If you do not need to keep the task definition in the Terraform state, then you could remove it after deployment so that next time a new one will be created.
The state rm command removes a resource from the state:
terraform state rm aws_ecs_task_definition.service
If you do need to keep each task definition in the Terraform state, you could generate a collection of resources with the for_each meta-argument.
As an example, you could save the container definitions for each task definition as a JSON file within a folder. Each file would look something like this:
[
  {
    "name": "a-service",
    "image": "my-image",
    "cpu": 10,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
Use the fileset function to list files in the folder and generate a new resource for each file using for_each:
resource "aws_ecs_task_definition" "service" {
family = "service"
for_each = fileset(path.module, "task-definitions/*.json")
container_definitions = file("${path.module}/${each.key}")
}
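If you would rather have each file create its own task-definition family instead of stacking revisions under a single family, a hypothetical variant could derive the family name from the file name:
resource "aws_ecs_task_definition" "service" {
  for_each              = fileset(path.module, "task-definitions/*.json")
  family                = trimsuffix(basename(each.key), ".json")
  container_definitions = file("${path.module}/${each.key}")
}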

How can I deploy VM images from Container Registry using Google Cloud

I can see that my VM images are available in Google Container Registry after executing the commands:
docker tag sutechnology/transcode eu.gcr.io/supereye/transcode
docker push eu.gcr.io/supereye/transcode
gcloud auth configure-docker
docker push eu.gcr.io/supereye/transcode
Although I can see the images, I haven't been able to use one while creating a new instance in Google Compute Engine. How can I use an image that I see in Container Registry when creating a new VM instance? Below is my full config:
machine_type = "zones/europe-west2-b/machineTypes/n1-standard-1"
disk_type = "zones/europe-west2-b/diskTypes/pd-standard"
config = {
    'name': name,
    'machineType': machine_type,

    # Specify the boot disk and the image to use as a source.
    'disks': [
        {
            'boot': True,
            'autoDelete': True,
            'initializeParams': {
                'sourceImage': source_disk_image,
            }
        }
    ],

    # Specify a network interface with NAT to access the public
    # internet.
    'networkInterfaces': [{
        'network': 'global/networks/default',
        'accessConfigs': [
            {'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT'}
        ]
    }],

    # Allow the instance to access cloud storage and logging.
    'serviceAccounts': [{
        'email': 'default',
        'scopes': [
            'https://www.googleapis.com/auth/devstorage.read_write',
            'https://www.googleapis.com/auth/logging.write'
        ]
    }],

    # Metadata is readable from the instance and allows you to
    # pass configuration from deployment scripts to instances.
    'metadata': {
        'items': [{
            # Startup script is automatically executed by the
            # instance upon startup.
            'key': 'startup-script',
            'value': startup_script,
            'VIDEOPATH': videopath
        }]
    }
}
And the instance creation call below:
compute.instances().insert(
    project=project,
    zone=zone,
    body=config).execute()
Google Container Registry (GCR) is used for storing Docker images, which are then used to create containers, NOT Compute Engine machines.
For Compute Engine, use either the public images, or custom images and snapshots of existing machines.
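If the actual goal is to run the container from GCR on a Compute Engine VM (rather than boot the VM from it), one option is a container-optimised VM; a sketch, where the instance name, zone, and machine type are just placeholders:
gcloud compute instances create-with-container transcode-vm \
  --container-image=eu.gcr.io/supereye/transcode \
  --zone=europe-west2-b \
  --machine-type=n1-standard-1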
For reference: https://cloud.google.com/container-registry
Hope this helps

Terraform - GCloud Container Registry Sample

I'm working on terraforming gcloud resources, need to create a gcloud container registry, and am trying to use the sample below from terraform.io:
data "google_container_registry_repository" {}
output "gcr_location" {
value = "${data.google_container_registry_repository.repository_url}"
}
and receiving the error below when I run terraform plan:
'data' must be followed by exactly two strings: a type and a name
Is there a working sample that I can refer to?
terraform.io syntax: https://www.terraform.io/docs/providers/google/d/google_container_registry_repository.html
terraform version:
Terraform v0.11.2
Edit: updated to Terraform v0.11.3 and still the same problem.
Try this:
data "google_container_registry_repository" "myregistry" {}
output "gcr_location" {
value = "${data.google_container_registry_repository.myregistry.repository_url}"
}
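If needed, the data source also accepts project and region arguments; a small sketch, where the project ID is a placeholder:
data "google_container_registry_repository" "myregistry" {
  project = "my-gcp-project"
  region  = "eu"
}

output "gcr_location" {
  value = "${data.google_container_registry_repository.myregistry.repository_url}"
}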