* fix(provider): User-settable VLAN ID and name
By default, the `proxmox_virtual_environment_network_linux_vlan`
resource uses `name` to determine both the underlying raw device and
the VLAN ID.
With ifupdown2 (manually installed on PVE 6, installed by default
since PVE 7), the VLAN name no longer has to be tied to the VLAN ID.
This change makes `interface` and `vlan` configurable by the user.
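A minimal sketch of the resolution order this implies (the helper and the naming convention here are assumptions, not the provider's actual code): explicit `interface`/`vlan` values take precedence, with the legacy name-based convention as a fallback.

```go
import (
	"fmt"
	"strconv"
	"strings"
)

// resolveVLAN is a hypothetical helper: with ifupdown2 the interface name is
// free-form, so explicit `interface`/`vlan` values win; otherwise fall back
// to the legacy "<device>.<id>" naming convention.
func resolveVLAN(name, iface string, vlanID int) (string, int, error) {
	if iface != "" && vlanID > 0 {
		return iface, vlanID, nil
	}

	if dev, id, ok := strings.Cut(name, "."); ok {
		n, err := strconv.Atoi(id)
		if err != nil {
			return "", 0, fmt.Errorf("invalid VLAN ID in name %q: %w", name, err)
		}
		return dev, n, nil
	}

	return "", 0, fmt.Errorf("cannot determine raw device and VLAN ID from %q", name)
}
```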
* fix: update schema to ensure the correct docs generation.
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
* fix(vm): wait for VMs to actually stop when sending a shutdown command
Due to how a Proxmox cluster reacts to a VM shutdown command when
running in HA mode, the VM might still be running when the shutdown API
call returns. This commit adds a loop that actively waits for the VM's
status to change to "stopped" (while also honouring the shutdown
timeout) after the call returns.
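A minimal sketch of such a wait loop (later extracted and exposed as `WaitForVMState`, see below); the status getter stands in for the provider's actual API call.

```go
import (
	"context"
	"fmt"
	"time"
)

// waitForStatus polls until the VM reports the wanted status or the timeout
// elapses. getStatus is a placeholder for the provider's status API call.
func waitForStatus(
	ctx context.Context,
	getStatus func(context.Context) (string, error),
	want string,
	timeout time.Duration,
) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for {
		status, err := getStatus(ctx)
		if err != nil {
			return err
		}

		if status == want {
			return nil
		}

		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for VM status %q: %w", want, ctx.Err())
		case <-ticker.C:
		}
	}
}
```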
* chore(refactoring): extracted VM state change wait loop into a separate function
* fix(vm): wait for VMs to actually start after requesting it from the cluster
This commit forces the plugin to wait until a VM is actually running
after requesting it to be started. This avoids problems with Proxmox's
High Availability mode, where a start request may not be immediately
honoured by the cluster.
* fix: linter errors
* fix: use `vmAPI.WaitForVMState`
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
The datastore update support introduced in #486 only worked if the
CloudInit interface was also changed at the same time. This commit
fixes the problem.
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
* chore: fix a pair of typos in comments
* feat(api): list High Availability groups
* New clients created for HA and HA groups (via
`Cluster().HA().Groups()`)
* `List(ctx)` method that lists the cluster's High Availability groups
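Illustrative usage of the new client chain (the surrounding client variable and the fields of the returned items are assumptions):

```go
// Hypothetical usage fragment; `client` and the `ID` field are placeholders.
groups, err := client.Cluster().HA().Groups().List(ctx)
if err != nil {
	return fmt.Errorf("failed to list HA groups: %w", err)
}

for _, group := range groups {
	fmt.Println(group.ID)
}
```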
* feat(ha): added the `proxmox_virtual_environment_hagroups` data source
* This data source returns the list of HA groups in its value's
`group_ids` field
* fix(api): changed incorrect copy-pasted error message
* feat(api): get a HA group's full information
* Added a `Get()` method to the HA group client, which fetches a
single group's information based on its identifier.
* feat(ha): added the `proxmox_virtual_environment_hagroup` data source
* This data source can read information about a single Proxmox High
  Availability group from the cluster.
* chore(ha): fixed linter error
* test(ha): added schema tests for the HA groups data sources
* fix(ha): use -1 as a node's priority when no priority is defined
* It used to default to 0, which is a valid value for priorities.
* chore(ha): converted the `hagroups` datasource to the Terraform plugin SDK
* chore(refactoring): common definition for `id` attributes
* chore(ha): ported the HA group datasource to the Terraform plugin framework
* feat(ha): return HA group identifiers as a set rather than a list
* docs(ha): added examples for the hagroups/hagroup datasources
* docs(ha): added documentation for the hagroup{,s} datasources
* chore(ha): fixed linter errors
* chore(ha): workaround for the linter's split personality disorder
* fix(ha): fixed reading the restricted flag
* chore(refactoring): use `ExpandPath` for paths to the HA groups API
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
* feat: CustomBool to Terraform attribute value conversion method
* chore(refactoring): use `CustomBool` for boolean fields in the API data
* chore(refactoring): renamed "members" to "nodes" in the HA group datasource
* fix: typo in comment
* chore(refactoring): split HA group API data and added the update request body
* fix(api): fixed copy-pasted error message
* feat(api): method to create/update a HA group
* feat(api): HA group deletion method
* fix(api): made the digest optional for HA groups
* feat(ha): added unimplemented hagroup resource
* fix(ha): fixed copy-pasted comment
* feat(ha): schema definition for the HA group resource
* feat: helper function that converts string attr values to string pointers
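A sketch of what such a helper might look like against the plugin framework's `types.String` (the function name is an assumption):

```go
import "github.com/hashicorp/terraform-plugin-framework/types"

// stringAttrToPtr maps null/unknown attribute values to nil and everything
// else to a pointer, so optional attributes translate cleanly into optional
// API fields.
func stringAttrToPtr(v types.String) *string {
	if v.IsNull() || v.IsUnknown() {
		return nil
	}

	s := v.ValueString()

	return &s
}
```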
* fix(ha): ensure node priorities are <= 1000 in HA groups
* fix(ha): add the digest attribute to the schema
* feat(ha): model definition for the HA group resource
* fix(api): fixed incorrect error message
* fix(api): fixed HA group creation / update
* I had somehow misunderstood the Proxmox API doc and thought creation
and update went through the same endpoint. This has been fixed by
adding separate data structures and separate methods for both
actions.
* feat: Terraform/Proxmox API conversion utilities
* chore(refactoring): HA group model and reading code moved to separate file
* feat(ha): HA group creation
* fix(api): renamed method (missed during previous refactor)
* feat(ha): `Read()` method implemented for the `hagroup` resource
* chore(refactoring): more consistent variable naming
* fix(ha): fixed the behaviour of `Read()` when the resource is deleted externally
* feat(ha): implement HA group deletion
* feat(ha): HA group update implemented
* fix(ha): prevent empty or untrimmed HA group comments
* feat(ha): HA group import
* docs(ha): HA group resource examples
* docs(ha): generated documentation for the `hagroup` resource
* chore(ha): fixed linter errors
* chore(refactoring): updated the code based on changes to the datasource PR
* fix(api): fixed boolean fields in the HA group create/update structures
* fix(ha): removed digest from the HA group resource and datasource
* The digest is generated by Proxmox from the *whole* HA groups
configuration, so any update to one group causes changes in all
other groups.
* Because of that, using it causes failures when updating two or more
HA groups.
* It is also a pretty useless value to have in the datasource, as it
  is global and not actually related to the individual data items.
* chore(refactoring): removed obsolete type conversion code
* chore(refactoring): use `ExpandPath` in the HA groups API client
* feat(ha): custom type for HA resource states
* feat(ha): custom type for HA resource types
* fix(api): fixed JSON decoding for HA resource states and types
* Values were being decoded directly from the raw bytes.
* Added tests for JSON marshaling/unmarshaling
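A sketch of the corrected decoding, assuming a string-backed custom type (names are illustrative): unmarshal through the JSON decoder instead of reading the raw bytes, so the surrounding quotes are handled properly.

```go
import (
	"encoding/json"
	"fmt"
)

// HAResourceState stands in for the provider's custom HA state/type values.
type HAResourceState string

// UnmarshalJSON decodes the value as a JSON string first, rather than
// copying the raw bytes (quotes included) into the custom type.
func (s *HAResourceState) UnmarshalJSON(b []byte) error {
	var str string

	if err := json.Unmarshal(b, &str); err != nil {
		return fmt.Errorf("cannot unmarshal HA resource state: %w", err)
	}

	*s = HAResourceState(str)

	return nil
}
```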
* feat(api): custom type for HA resource identifiers
* Structure with a type and name
* Conversion to/from strings
* Marshaling to/Unmarshaling from JSON
* URL encoding
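A minimal sketch of such an identifier type under the structure described above (field and method names are assumptions; Proxmox HA identifiers look like `vm:100` or `ct:101`):

```go
import (
	"encoding/json"
	"fmt"
	"strings"
)

// HAResourceID models a "<type>:<name>" HA resource identifier.
type HAResourceID struct {
	Type string
	Name string
}

// ParseHAResourceID splits a "<type>:<name>" string into its parts.
func ParseHAResourceID(s string) (HAResourceID, error) {
	typ, name, ok := strings.Cut(s, ":")
	if !ok || typ == "" || name == "" {
		return HAResourceID{}, fmt.Errorf("invalid HA resource identifier %q", s)
	}

	return HAResourceID{Type: typ, Name: name}, nil
}

// String returns the canonical string form used in API paths.
func (id HAResourceID) String() string {
	return id.Type + ":" + id.Name
}

// MarshalJSON encodes the identifier as its string form.
func (id HAResourceID) MarshalJSON() ([]byte, error) {
	return json.Marshal(id.String())
}
```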
* feat(api): list and get HA resources
* feat(ha): HA resources list datasource
* feat(ha): added method that converts HA resource data to Terraform values
* fix(api): HA resource max relocation/restarts are optional
* feat(ha): Terraform validator for HA resource IDs
* feat(ha): HA resource datasource
* chore(refactoring): moved HA resource model to separate file
* feat(api): data structures for HA resource creation and update
* feat(api): HA resource creation, update and deletion
* fix(api): incorrect mapping in common HA resource data
* feat: utility function to create attribute validators based on parse functions
* feat: validators for HA resource identifiers, states and types
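A sketch of what such a factory could look like with the plugin framework's `validator.String` interface (the generic shape and names are assumptions):

```go
import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/schema/validator"
)

// parseValidator adapts any parse function into an attribute validator:
// values that fail to parse are rejected with the parse error as detail.
type parseValidator[T any] struct {
	parse       func(string) (T, error)
	description string
}

func (v parseValidator[T]) Description(context.Context) string         { return v.description }
func (v parseValidator[T]) MarkdownDescription(context.Context) string { return v.description }

func (v parseValidator[T]) ValidateString(
	ctx context.Context,
	req validator.StringRequest,
	resp *validator.StringResponse,
) {
	if req.ConfigValue.IsNull() || req.ConfigValue.IsUnknown() {
		return
	}

	if _, err := v.parse(req.ConfigValue.ValueString()); err != nil {
		resp.Diagnostics.AddAttributeError(req.Path, "Invalid value", err.Error())
	}
}
```

The concrete validators for HA resource identifiers, states and types would then simply wrap their respective parse functions.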
* fix(api): incorrect comment for the update request body
* feat(ha): Terraform resource for Proxmox HA resources
* chore(refactoring): removed old HA resource ID validator
* docs: examples related to HA resources added
* docs: added documentation related to HA resources management
* fix: update doc generation, fix minor typos
* fix: rename & split utils package, replace `iota`
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
* feat(vm): support for migration when the node name is modified
* Added a `migrate` VM flag which changes the provider's behaviour
when the VM's `node_name` is updated. If `true`, the VM will be
migrated to the specified node instead of being re-created.
* Added a `timeout_migrate` setting to control the timeout for VM
migration.
* Fixed a bug in the API's migration data structure that prevented
  the online migration flag from being set.
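With the legacy SDK, this behaviour switch could look roughly like the following (a sketch, not the provider's actual implementation):

```go
import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/customdiff"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Re-create the VM on a node_name change only when migration is disabled;
// otherwise the update path migrates the VM to the new node in place.
var nodeNameDiff = customdiff.ForceNewIf("node_name",
	func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) bool {
		return !d.Get("migrate").(bool)
	},
)
```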
* fix: update description
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
* feat(vm): pool update support
This commit removes the `ForceNew` flag from the VM resource's `pool_id`
argument and implements pool updates:
* if the VM was part of a pool, it is removed from it,
* if the new `pool_id` value is non-empty, the VM is added to that new
pool.
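A sketch of that two-step flow with hypothetical client method names:

```go
import "context"

// poolClient is a placeholder for the provider's pool API client.
type poolClient interface {
	RemoveVMFromPool(ctx context.Context, poolID string, vmID int) error
	AddVMToPool(ctx context.Context, poolID string, vmID int) error
}

// updateVMPool drops the VM from its previous pool (if any) and then adds it
// to the new one (if non-empty).
func updateVMPool(ctx context.Context, c poolClient, vmID int, oldPool, newPool string) error {
	if oldPool != "" {
		if err := c.RemoveVMFromPool(ctx, oldPool, vmID); err != nil {
			return err
		}
	}

	if newPool != "" {
		return c.AddVMToPool(ctx, newPool, vmID)
	}

	return nil
}
```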
* fix: use `types.CustomCommaSeparatedList` in `PoolUpdateRequestBody` datatype, minor error fix
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
fix: linter error in ambush
* This commit fixes a linter error that somehow doesn't manifest
unless some other, unrelated changes trigger it (see #501 and
#505).
* In addition, it fixes a similar issue that had so far gone undetected
  by the linter.
* Refactored the code in question into a function, since it was mostly
duplicated.
* Simplified a pair of conditionals that had the same code in both
branches.
* fix(vm): fix index out of range when unmarshalling custompcidevice
* fix: linter errors
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
When the VM contains at least one bridge, the main interface (e.g.
`eth0`) is left without an IP address, because the address is assigned
to the bridge instead.
The code that queries the VM's IP addresses (through the guest agent)
loops over all available interfaces to find one. However, the existing
code exited the loop prematurely if the interface being checked had no
IP address assigned, like the aforementioned `eth0` when it is
controlled by a bridge.
This patch fixes the problem by not exiting the loop and instead
continuing to the next interface.
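The gist of the fix, with illustrative type and field names:

```go
// networkInterface is a stand-in for the guest agent response type.
type networkInterface struct {
	Name        string
	IPAddresses []string
}

// firstIPAddress scans the guest agent's interface list; interfaces without
// an address (such as a bridged eth0) are skipped rather than ending the
// search early.
func firstIPAddress(ifaces []networkInterface) (string, bool) {
	for _, iface := range ifaces {
		if iface.Name == "lo" {
			continue
		}

		if len(iface.IPAddresses) == 0 {
			// This used to terminate the loop, leaving the VM without a
			// reported address whenever a bridged interface came first.
			continue
		}

		return iface.IPAddresses[0], true
	}

	return "", false
}
```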
* fix(user): make password attribute optional
The password is already optional in the Terraform schema, but it was
still serialized and sent as an empty string by the client. This change
addresses the request body serialization.
Addresses #462
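One way to keep an unset password out of the request body (illustrative, not necessarily the exact change):

```go
// A pointer field with "omitempty" is skipped entirely during serialization
// when the password is not set, instead of being sent as an empty string.
type UserUpdateRequestBody struct {
	Password *string `json:"password,omitempty" url:"password,omitempty"`
}
```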
* add example template
---------
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
* feat(provider): ensure the upload of an ISO/VZTMPL completes before starting the VM, and add a configurable timeout for it
* remove `ForceNew: true` for the timeout attribute
* minor docs update
---------
Co-authored-by: dandaolrian <dandaolrian@users.noreply.github.com>
Co-authored-by: Pavel Boldyrev <627562+bpg@users.noreply.github.com>
The HTTP client now makes requests using the operation context passed in
from Terraform. It no longer enforces its own fixed timeout and relies on
context cancellation instead.
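A sketch of the request path under that model (the wrapper type is illustrative):

```go
import (
	"context"
	"io"
	"net/http"
)

// client wraps an http.Client with no Timeout set; request lifetimes are
// bounded entirely by the context that Terraform passes in.
type client struct {
	httpClient *http.Client
}

func (c *client) do(ctx context.Context, method, url string, body io.Reader) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, method, url, body)
	if err != nil {
		return nil, err
	}

	return c.httpClient.Do(req)
}
```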