deleting resources during CDK deploy? - aws-cdk

How can I delete unused resources during a CDK deploy? I'm creating a VPC with:
vpc = ec2.Vpc(self, id="myTestVPC",
              max_azs=3,
              cidr="10.120.0.0/16",
              subnet_configuration=[presentation_nc, logic_nc, data_nc])
So I wind up with 9 subnets - 3 public and 6 private. I want to use 1 routing table for the public subnets and 1 for the private. I'm doing that by iterating over vpc.private_subnets and vpc.public_subnets, and associating the routing tables with ec2.CfnSubnetRouteTableAssociation. That part works.
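For reference, that association step looks roughly like this (a sketch only; the shared route tables and construct ids here are hypothetical):
public_rt = ec2.CfnRouteTable(self, "PublicRT", vpc_id=vpc.vpc_id)
private_rt = ec2.CfnRouteTable(self, "PrivateRT", vpc_id=vpc.vpc_id)
# Attach every public subnet to the shared public route table...
for i, subnet in enumerate(vpc.public_subnets):
    ec2.CfnSubnetRouteTableAssociation(self, f"PublicAssoc{i}",
                                       subnet_id=subnet.subnet_id,
                                       route_table_id=public_rt.ref)
# ...and every private subnet to the shared private route table.
for i, subnet in enumerate(vpc.private_subnets):
    ec2.CfnSubnetRouteTableAssociation(self, f"PrivateAssoc{i}",
                                       subnet_id=subnet.subnet_id,
                                       route_table_id=private_rt.ref)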
Now I have 7 unused routing tables that I want to get rid of. How can I do that?

Set the removal policy to DESTROY, then either delete the stack from CloudFormation or remove the resource's code from the CDK application.
vpc.applyRemovalPolicy(RemovalPolicy.DESTROY)
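Since the question's stack is Python, the equivalent call would be the snake_case form (a sketch assuming CDK v1-style imports matching the question):
from aws_cdk import core

# Python equivalent of the call above
vpc.apply_removal_policy(core.RemovalPolicy.DESTROY)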

Related

How to change dask job_name to SGECluster

I am using dask_jobqueue.SGECluster() and when I submit jobs to the grid they are all listed as dask-worker. I want to have different names for each submitted job.
Here is one example:
from dask.distributed import as_completed  # client is the distributed Client created from the cluster

futures = []
for i in range(1, 11):  # submit 10 tasks
    res = client.submit(slow_pow, i, 2)
    futures.append(res)

[future.result() for future in as_completed(futures)]
All 10 jobs appear with the name dask-worker when I check their status with qstat.
I have tried adding client.adapt(job_name=f'job{i}') within the loop, but with no success; the name is still dask-worker.
Any hints?
The dask-worker label is a generic name for the compute allocation from the cluster; it can be changed by providing cluster-specific arguments when the cluster is created. For example, for SLURMCluster this would be:
cluster = SLURMCluster(job_extra=['--job-name="func"'])
SGECluster might have a different syntax.
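A sketch for SGECluster, assuming it accepts job_extra like the other dask_jobqueue clusters (SGE sets the job name with the -N directive; the resource values below are placeholders):
from dask_jobqueue import SGECluster

# job_extra passes raw qsub directives; -N sets the visible job name.
cluster = SGECluster(cores=1, memory="2GB",
                     job_extra=["-N my_job_name"])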
The actual tasks submitted to the dask scheduler will have their names generated automatically by dask and can be viewed through the dashboard, by default at http://localhost:8787/status.
It's possible to specify a custom name for each task submitted to the scheduler by using the key kwarg:
fut = client.submit(myfunc, my_arg, key='custom_key')
Note that if you are submitting multiple futures, you will want them to have unique keys:
futs = [client.submit(myfunc, i, key=f'custom_key_{i}') for i in range(3)]

Extending a resource created inside a module to avoid redundancy when setting up a Docker Swarm with Terraform

Let's say I have a module which creates some resources that represent a default server. Now I want to inherit this default server and customize it in different directions.
On the Manager node, which inherits the DockerNode, I want to run docker swarm init and get the join token. On all Worker nodes I want to join with that token.
So in my main.tf where I use the DockerNode I have defined the nodes like this:
module "manager" {
source = "./modules/swarm-node"
node_network = {
network_id = hcloud_network.swarm-network.id
ip = "10.0.1.10"
}
node_name_prefix = "swarm-manager"
server = var.server
docker_compose_version = var.docker_compose_version
volume_size = var.volume_size
volume_filesystem = var.volume_filesystem
ssh_keys = [hcloud_ssh_key.ssh-key-a.name, hcloud_ssh_key.ssh-key-b.name]
depends_on = [
hcloud_network_subnet.swarm-network-nodes
]
}
module "worker" {
count = 2
source = "./modules/swarm-node"
node_index = count.index
node_network = {
network_id = hcloud_network.swarm-network.id
ip = "10.0.1.10${count.index}"
}
node_name_prefix = "swarm-worker"
server = var.server
docker_compose_version = var.docker_compose_version
volume_size = var.volume_size
volume_filesystem = var.volume_filesystem
ssh_keys = [hcloud_ssh_key.ssh-key-a.name, hcloud_ssh_key.ssh-key-b.name]
depends_on = [
hcloud_network_subnet.swarm-network-nodes,
module.manager
]
}
How do I run docker swarm init and return the join token on the server resource inside module.manager?
How do I join the swarm from each worker?
I've researched this for quite a while:
Some solutions expose the Docker daemon over TCP and access it from the workers to get the token. I don't want to expose the Docker daemon unnecessarily.
Some solutions copy the base module (in my case DockerNode) just to modify one or two lines. I'd like to follow DRY.
Some solutions use an additional shell script which reads the .tfstate and SSHes into each machine to do further customization. I would like to use Terraform for this, with all its benefits.

Is it possible to set a variable group scope using the DevOps CLI or via REST

I am able to add/modify DevOps release definitions through a combination of CLI and CLI REST methods. The release definition object does not include (as far as I can tell) a property that controls the variable group scope. The release definition itself takes an array of variable group IDs, but there is also the scope of the variable group within the context of the release definition. Where is that?
Is there support for accessing the variable group scope property in the CLI or CLI REST interface? The image below shows the interface from the portal in Azure. Selecting the ellipsis (...) lets you "change scope", where a list of stages is displayed. You then save that change and save the release definition.
I captured Fiddler output, but the POST body was huge and not very helpful; I didn't see anything related to a list of scopes, yet obviously this can be done. I'm just not sure about doing so via CLI or REST.
Edit: Here is a view of the script. There is no "scope", which should contain a list of environment names, anywhere in the release definition that I can see. Each environment name (aka stage) contains a number of variable groups associated with each environment.
$sourcedefinition = getreleasedefinitionrequestwithpat $reldefid $personalAccesstoken $org $project | select -Last 1
Write-Host "Root VariableGroups: " $sourcedefinition.variableGroups
$result = @()
# search each stage in the pipeline
foreach ($item in $sourcedefinition.environments)
{
    Write-Host ""
    Write-Host "environment name: " $item.name
    Write-Host "environment variable groups: " $item.variableGroups
}
To help clarify, the scope I seek cannot be in the environments collection as this is specific to each element (stage). The scope is set at the release definition level for a given variable group (again refer to the image).
I use this API to get the Definitions of my release and find that the values of variableGroups in ReleaseDefinition and in ReleaseDefinitionEnvironment are different when the scopes are different.
Then I think if we want to change the scope via REST API, we just need to change the variableGroups and update the Definitions. We can use this API to update the Definitions.
Edit:
For example, to change my scope from Release to Stage, I use the API like below:
PUT https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions?api-version=6.1-preview.4
Request Body: (I get this from the first Get Definitions API Response Body and make some changes to use it)
{
  "source": "userInterface",
  "revision": 6,
  ...
  "lastRelease": {
    "id": 1,
    ...
  },
  ...
  "variables": {},
  "variableGroups": [],
  "environments": [
    {
      "name": "Stage 1",
      ...
      "variables": {},
      "variableGroups": [
        4
      ],
      ...
    }
  ],
  ...
}
Note:
Please use your own newer revision.
The id value in lastRelease is your release definitionId.
Specify the stage name in environments name.
The variableGroups value in environments is the id of the variable group whose scope you want to change.
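Putting that together, a rough sketch of the same GET-modify-PUT flow from Python (the organization, project, definition id, stage name, and variable group id below are placeholders; authentication uses a PAT in the password field of basic auth):
import requests

org, project, definition_id = "myorg", "myproject", 1
pat = "<personal-access-token>"
base = f"https://vsrm.dev.azure.com/{org}/{project}/_apis/release/definitions"
auth = ("", pat)  # PAT as the basic-auth password

# Get the current definition.
definition = requests.get(f"{base}/{definition_id}?api-version=6.1-preview.4", auth=auth).json()

# Move variable group 4 from release scope to the "Stage 1" environment.
definition["variableGroups"] = [g for g in definition.get("variableGroups", []) if g != 4]
for env in definition["environments"]:
    if env["name"] == "Stage 1":
        env["variableGroups"] = env.get("variableGroups", []) + [4]

# Update the definition; the body must be the full (revised) definition.
resp = requests.put(f"{base}?api-version=6.1-preview.4", json=definition, auth=auth)
resp.raise_for_status()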

Is there a way to specify the number of AZs to use when creating a VPC?

When instantiating the Vpc object within a stack using the CDK, there is a parameter max_azs which supposedly defaults to 3. However, when I create a VPC, no matter what I set that number to, I only ever get 2 AZs.
from aws_cdk import (
    core,
    aws_ec2 as ec2
)

app = core.App()

subnets = []
subnets.append(ec2.SubnetConfiguration(name="public", subnet_type=ec2.SubnetType.PUBLIC, cidr_mask=20))
subnets.append(ec2.SubnetConfiguration(name="private", subnet_type=ec2.SubnetType.PRIVATE, cidr_mask=20))
subnets.append(ec2.SubnetConfiguration(name="isolated", subnet_type=ec2.SubnetType.ISOLATED, cidr_mask=20))

vpc = ec2.Vpc(app, "MyVpc", subnet_configuration=subnets, max_azs=3)
print(vpc.availability_zones)

app.synth()
I would expect to see 3 AZs used here, but I only ever get 2, even if I set the value to 99, which should mean all AZs.
Ah yes, I came across the same issue myself. What solved it for me was specifying the region and account when creating the stack.
The following example is for TypeScript, but I'm sure you can write the corresponding Python.
new MyStack(app, 'MyStack', {
  env: {
    region: 'us-east-1',
    account: '1234567890',
  }
});
In the case of TypeScript you need to rebuild and synth before you deploy.
$ npm run build
$ cdk synth
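In Python the corresponding change would look something like this (a sketch assuming a Stack subclass named MyStack and CDK v1-style imports; the account and region values are placeholders):
from aws_cdk import core

app = core.App()
# Pinning the stack to an explicit account/region lets the CDK look up the
# real AZ list instead of falling back to the two placeholder AZs used for
# environment-agnostic stacks.
MyStack(app, "MyStack",
        env=core.Environment(account="1234567890", region="us-east-1"))
app.synth()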

Clone an elasticsearch docker

I have the following setup:
one es-docker (live)
one es-docker (working)
I would like the working docker to run some data changes and save them in the ES application (these changes will run over a few hours).
After the changes are done, I want to copy the working docker (with all data) and overwrite the live docker.
That way I can run the changes over a few hours without downtime on live (or with a minimal downtime).
But I don't know how to "copy" the original, including all data.
Thank you for your hints.
The Elasticsearch Definitive Guide outlines a process to achieve zero downtime for use cases like yours, making use of Index Aliases.
The idea is to create an Index Alias that your applications will always use to access the live data.
Given an alias named "alias1" that is pointing to an index named "index1", perform the following steps:
Create a new index, named "index2"
Run your batch indexing process
Swap "alias1" to point to "index2"
Clean up "index1"
The alias swapping occurs in a single call, and Elasticsearch performs the action atomically, giving you the zero downtime you desire. The call looks something like this:
POST /_aliases
{
  "actions": [
    { "remove": { "index": "index1", "alias": "alias1" } },
    { "add": { "index": "index2", "alias": "alias1" } }
  ]
}
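If you prefer to drive the swap from code rather than the raw REST call, the same request can be issued from Python (a sketch assuming the official elasticsearch client and the index/alias names above; the address is a placeholder):
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
# Atomically repoint alias1 from index1 to index2.
es.indices.update_aliases(body={
    "actions": [
        {"remove": {"index": "index1", "alias": "alias1"}},
        {"add": {"index": "index2", "alias": "alias1"}},
    ]
})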
