Terraform VMware Cloud Director Provider v3.12.0

Terraform VMware Cloud Director Provider v3.12.0 is available, introducing many new features and improvements.

Introducing the Container Service Extension (CSE) Kubernetes Cluster resource and data source

During the past releases of the provider, we gathered an outstanding amount of feedback from the community, and we learned that using Runtime Defined Entities to create, update, and manage a Kubernetes cluster was often challenging, as it required a deep understanding of CSE's inner workings.

This release is a big step forward for CSE users: it provides a new resource and data source, vcd_cse_kubernetes_cluster, which shields Kubernetes cluster authors from the complexity of the generic methods described in the now deprecated Kubernetes Cluster management guide.

The new vcd_cse_kubernetes_cluster resource looks like this:

resource "vcd_cse_kubernetes_cluster" "my_cluster" {
  cse_version            = "4.2.0"
  name                   = "my-cluster"
  kubernetes_template_id = data.vcd_catalog_vapp_template.tkg_ova.id
  org                    = data.vcd_org_vdc.vdc.org
  vdc_id                 = data.vcd_org_vdc.vdc.id
  network_id             = data.vcd_network_routed_v2.routed.id
  api_token_file         = vcd_api_token.token.file_name

  control_plane {
    machine_count      = 3
    disk_size_gi       = 20
    sizing_policy_id   = data.vcd_vm_sizing_policy.tkg_small.id
    storage_profile_id = data.vcd_storage_profile.sp.id
  }

  worker_pool {
    name               = "worker-pool-1"
    machine_count      = 10
    disk_size_gi       = 100
    sizing_policy_id   = data.vcd_vm_sizing_policy.tkg_small.id
    storage_profile_id = data.vcd_storage_profile.sp.id
  }

  default_storage_class {
    name               = "default-storage-class"
    storage_profile_id = data.vcd_storage_profile.sp.id
    reclaim_policy     = "delete"
    filesystem         = "ext4"
  }

  auto_repair_on_errors = true
  node_health_check     = true

  operations_timeout_minutes = 0
}
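The api_token_file argument above references a token file generated by the vcd_api_token resource. A minimal sketch of the companion resource that the example assumes (the token name and file name are illustrative):

resource "vcd_api_token" "token" {
  name      = "cse-cluster-token"
  file_name = "cse_token.json"

  # Required acknowledgement that the generated file
  # stores a sensitive API token
  allow_token_file = true
}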





Readers will observe that the available arguments of this resource closely mirror the options offered by the UI wizard when creating a Kubernetes cluster. RDE schemas, RDE Types, and YAML files are no longer used explicitly.

Likewise, users get a more comfortable mechanism to update their clusters, as they won't need to manipulate JSON files either. The resource supports all the updatable elements that are also available in the UI: resizing the control plane and the worker pools, enabling/disabling the node health check, and turning off the "auto-repair" flag (4.1.0 only).
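For instance, scaling the worker pool from the example above is just an attribute change followed by terraform apply:

# Inside the vcd_cse_kubernetes_cluster resource shown above
worker_pool {
  name               = "worker-pool-1"
  machine_count      = 15 # scaled up from 10
  disk_size_gi       = 100
  sizing_policy_id   = data.vcd_vm_sizing_policy.tkg_small.id
  storage_profile_id = data.vcd_storage_profile.sp.id
}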

This new resource is available for CSE versions 4.2.1, 4.2.0, 4.1.1(a), and 4.1.0. It also supports importing existing clusters, so that users who created them with the generic approach can migrate, and existing clusters can be read with the data source.
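As a sketch of the read path, assuming the cluster is fetched by its RDE ID (the cluster_id value below is a placeholder):

data "vcd_cse_kubernetes_cluster" "my_cluster" {
  cluster_id = "urn:vcloud:entity:vmware:capvcdCluster:00000000-0000-0000-0000-000000000000"
}

# The retrieved Kubeconfig can be used to connect to the cluster
output "kubeconfig" {
  value     = data.vcd_cse_kubernetes_cluster.my_cluster.kubeconfig
  sensitive = true
}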

Adding support for Container Service Extension (CSE) 4.2.0 and 4.2.1

This version of the provider updates the installation guide to support the newest versions of CSE, 4.2.0 and 4.2.1.

As mentioned in the previous section, the Kubernetes Cluster management guide is now deprecated in favor of the new vcd_cse_kubernetes_cluster resource and data source.

The provider repository now contains all the RDE Type schemas required for CSE 4.2.x, plus example configurations for both 4.2.0 and 4.2.1 (as they differ in configuration values such as the CAPVCD, CPI, and CSI versions).

Other notable changes and improvements

Consolidating VM disks on creation to support overriding template disks

A frequently requested piece of missing functionality was overriding disk sizes in fast provisioned VDCs. Terraform provider v3.12.0 adds a new field, consolidate_disks_on_create, to both the vcd_vapp_vm and vcd_vm resources. When enabled, it consolidates disks during VM creation. This may be useful on its own, but it also permits overriding template disks when creating VMs in fast provisioned VDCs.

resource "vcd_vapp_vm" "resized" {
  vapp_name        = vcd_vapp.web.name
  name             = "Resized-OS-disk-VM"
  vapp_template_id = data.vcd_catalog_vapp_template.lampstack.id
  memory           = 2048
  cpus             = 2
  cpu_cores        = 1

  # Fast provisioned VDCs require disks to be consolidated
  # if their size is to be changed
  consolidate_disks_on_create = true

  override_template_disk {
    bus_type        = "paravirtual"
    size_in_mb      = "22384"
    bus_number      = 0
    unit_number     = 0
    iops            = 0
    storage_profile = "*"
  }
}





VM Copy support

Both VM resources, vcd_vapp_vm and vcd_vm, get a new field, copy_from_vm_id, that can be used to create a VM from an already existing one instead of relying on a catalog template or an empty VM.

data "vcd_vapp_vm" "existing" {
  vapp_name = data.vcd_vapp.web.name
  name      = "web1"
}

resource "vcd_vapp_vm" "vm-copy" {
  org = "org"
  vdc = "vdc"

  copy_from_vm_id = data.vcd_vapp_vm.existing.id # source VM ID
  vapp_name       = data.vcd_vapp_vm.existing.vapp_name
  name            = "VM Copy"
  power_on        = false
}






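The same field works for standalone VMs. A minimal sketch with the vcd_vm resource, reusing the source VM from the example above (the VM name is illustrative):

resource "vcd_vm" "standalone-copy" {
  org = "org"
  vdc = "vdc"

  copy_from_vm_id = data.vcd_vapp_vm.existing.id # source VM ID
  name            = "Standalone VM Copy"
  power_on        = false
}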

Creating vApp templates from vApps or standalone VMs

The last bit that levels up VM control is that the vcd_catalog_vapp_template resource introduces an option to capture vApp templates from existing vApps or standalone VMs. One can use the new capture_vapp block, which accepts a source vApp ID. Additionally, the vcd_vapp_vm and vcd_vm resources and data sources expose a vapp_id attribute that can be specified as the source in capture_vapp.source_id. This is especially useful for standalone VMs, which have hidden vApps (see the sketch after the example below).

data "vcd_catalog" "cat" {
  org  = "v51"
  name = "demo-catalog"
}

resource "vcd_catalog_vapp_template" "from-vapp" {
  org        = "v51"
  catalog_id = data.vcd_catalog.cat.id

  name = "from-vapp"

  capture_vapp {
    source_id                = vcd_vapp.web.id
    customize_on_instantiate = false
  }

  lease {
    storage_lease_in_sec = 3600 * 24 * 3
  }

  # Using dependency to ensure that all VMs are present in vApp that
  # is being captured
  depends_on = [vcd_vapp_vm.emptyVM]
}




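For standalone VMs, the hidden parent vApp exposed through the new vapp_id attribute can serve as the capture source. A minimal sketch, assuming a standalone VM managed by a hypothetical vcd_vm.standalone resource:

resource "vcd_catalog_vapp_template" "from-standalone-vm" {
  org        = "v51"
  catalog_id = data.vcd_catalog.cat.id

  name = "from-standalone-vm"

  capture_vapp {
    # vapp_id exposes the hidden vApp that backs a standalone VM
    source_id = vcd_vm.standalone.vapp_id
  }
}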

Route advertisement configuration for routed Org VDC networks

A new toggle field, route_advertisement_enabled, in the vcd_network_routed_v2 resource allows users to enable route advertisement per routed network. It works together with IP Space route advertisement.
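A minimal sketch of the new field in context (the org, gateway, and Edge Gateway references are illustrative):

resource "vcd_network_routed_v2" "advertised" {
  org             = "my-org"
  name            = "advertised-net"
  edge_gateway_id = data.vcd_nsxt_edgegateway.existing.id

  gateway       = "10.10.10.1"
  prefix_length = 24

  static_ip_pool {
    start_address = "10.10.10.10"
    end_address   = "10.10.10.100"
  }

  # Advertise this network's subnet; effective together with
  # IP Space route advertisement on the provider gateway
  route_advertisement_enabled = true
}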

List of new resources and data sources

1 new resource:

- vcd_cse_kubernetes_cluster

2 new data sources:

- vcd_cse_kubernetes_cluster
- vcd_version

There are more features and enhancements, which you can see in the project's changelog. And, as always, we await your feedback and suggestions in GitHub Issues and the #vcd-terraform-dev Slack channel (vmwarecode.slack.com).
