diff --git a/README.md b/README.md
index 62318a31..1bca8c5b 100644
--- a/README.md
+++ b/README.md
@@ -10,17 +10,17 @@
 [![Buy Me A Coffee](https://img.shields.io/badge/-buy%20me%20a%20coffee-5F7FFF?logo=buymeacoffee&labelColor=gray&logoColor=FFDD00)](https://www.buymeacoffee.com/bpgca)
 [![Wakatime](https://wakatime.com/badge/github/bpg/terraform-provider-proxmox.svg)](https://wakatime.com/@a51a1a51-85c3-497b-b88a-3b310a709909/projects/vdtgmpvjom)

-A Terraform / OpenTofu Provider which adds support for Proxmox solutions.
+A Terraform / OpenTofu Provider that adds support for Proxmox solutions.

 This repository is a fork of which is no longer maintained.

-## Compatibility promise
+## Compatibility Promise

 This provider is compatible with the latest version of Proxmox VE (currently 8.2).
 While it may work with older 7.x versions, it is not guaranteed to do so.

-While provider is on version 0.x, it is not guaranteed to be backwards compatible with all previous minor versions.
-However, we will try to keep the backwards compatibility between provider versions as much as possible.
+While the provider is on version 0.x, it is not guaranteed to be backward compatible with all previous minor versions.
+However, we will try to maintain backward compatibility between provider versions as much as possible.

 ## Requirements

@@ -29,59 +29,59 @@ However, we will try to keep the backwards compatibility between provider versio
 - [Terraform](https://www.terraform.io/downloads.html) 1.5.x+ or [OpenTofu](https://opentofu.org) 1.6.x
 - [Go](https://golang.org/doc/install) 1.22 (to build the provider plugin)

-## Using the provider
+## Using the Provider

 You can find the latest release and its documentation in the [Terraform Registry](https://registry.terraform.io/providers/bpg/proxmox/latest).

-## Testing the provider
+## Testing the Provider

-In order to test the provider, you can simply run `make test`.
+To test the provider, simply run `make test`.

 ```sh
 make test
 ```

-Tests are limited to regression tests, ensuring backwards compatibility.
+Tests are limited to regression tests, ensuring backward compatibility.

 A limited number of acceptance tests are available in the `proxmoxtf/test` directory, mostly for "new" functionality implemented using the Terraform Provider Framework.
 These tests are not run by default, as they require a Proxmox VE environment to be available.
-They can be run using `make testacc`, the Proxmox connection can be configured using environment variables, see provider documentation for details.
+They can be run using `make testacc`. The Proxmox connection can be configured using environment variables; see the provider documentation for details.

-## Deploying the example resources
+## Deploying the Example Resources

-There are number of TF examples in the `example` directory, which can be used to deploy a Container, VM, or other Proxmox resources on your test Proxmox environment.
+There are a number of TF examples in the `example` directory, which can be used to deploy a Container, VM, or other Proxmox resources in your test Proxmox environment.
 The following assumptions are made about the test environment:

 - It has one node named `pve`
 - The node has local storages named `local` and `local-lvm`
-- The "Snippets" content type is enabled in `local` storage
+- The "Snippets" content type is enabled in the `local` storage

 Create `example/terraform.tfvars` with the following variables:

 ```sh
-virtual_environment_username = "root@pam"
-virtual_environment_password = "put-your-password-here"
-virtual_environment_endpoint = "https://:8006/"
+virtual_environment_endpoint = "https://pve.example.doc:8006/"
+virtual_environment_ssh_username = "terraform"
+virtual_environment_api_token = "root@pam!terraform=00000000-0000-0000-0000-000000000000"
 ```

 Then run `make example` to deploy the example resources.

-If you don't have free proxmox cluster to play with, there is dedicated [how-to tutorial](docs/guides/setup-proxmox-for-tests.md) how to setup Proxmox inside VM and run `make example` on it.
+If you don't have a free Proxmox cluster to play with, there is a dedicated [how-to tutorial](docs/guides/setup-proxmox-for-tests.md) on how to set up Proxmox inside a VM and run `make example` on it.

-## Future work
+## Future Work

 The provider is using the [Terraform SDKv2](https://developer.hashicorp.com/terraform/plugin/sdkv2), which is considered legacy and is in maintenance mode.
-The work has started to migrate the provider to the new [Terraform Plugin Framework](https://www.terraform.io/docs/extend/plugin-sdk.html), with aim to release it as a new major version **1.0**.
+Work has started to migrate the provider to the new [Terraform Plugin Framework](https://www.terraform.io/docs/extend/plugin-sdk.html), with the aim of releasing it as a new major version **1.0**.

-## Known issues
+## Known Issues

-### Disk images cannot be imported by non-PAM accounts
+### Disk Images Cannot Be Imported by Non-PAM Accounts

 Due to limitations in the Proxmox VE API, certain actions need to be performed using SSH.
 This requires the use of a PAM account (standard Linux account).

-### Disk images from VMware cannot be uploaded or imported
+### Disk Images from VMware Cannot Be Uploaded or Imported

-Proxmox VE is not currently supporting VMware disk images directly.
+Proxmox VE does not currently support VMware disk images directly.
 However, you can still use them as disk images by using this workaround:

 ```hcl
@@ -112,15 +112,15 @@ resource "proxmox_virtual_environment_vm" "example" {
 }
 ```

-### Snippets cannot be uploaded by non-PAM accounts
+### Snippets Cannot Be Uploaded by Non-PAM Accounts

 Due to limitations in the Proxmox VE API, certain files (snippets, backups) need to be uploaded using SFTP.
 This requires the use of a PAM account (standard Linux account).

-### Cluster hardware mappings cannot be created by non-PAM accounts
+### Cluster Hardware Mappings Cannot Be Created by Non-PAM Accounts

 Due to limitations in the Proxmox VE API, cluster hardware mappings must be created using the `root` PAM account (standard Linux account) due to [IOMMU](https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_management_unit#Virtualization) interactions.
-Hardware mappings allow to use [PCI "passthrough"](https://pve.proxmox.com/wiki/PCI_Passthrough) and [map physical USB ports](https://pve.proxmox.com/wiki/USB_Physical_Port_Mapping).
+Hardware mappings allow the use of [PCI "passthrough"](https://pve.proxmox.com/wiki/PCI_Passthrough) and the [mapping of physical USB ports](https://pve.proxmox.com/wiki/USB_Physical_Port_Mapping).

 ## Contributors

@@ -144,7 +144,6 @@ See [CONTRIBUTORS.md](CONTRIBUTORS.md) for a list of contributors to this projec

 Thanks again for your continuous support, it is much appreciated! 🙏

-
 ## Acknowledgements

 This project has been developed with **GoLand** IDE under the [JetBrains Open Source license](https://www.jetbrains.com/community/opensource/#support), generously provided by JetBrains s.r.o.
diff --git a/docs/index.md b/docs/index.md
index f6f26f96..2c3a9238 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -178,8 +178,6 @@ When using a non-root user for the SSH connection, the user **must** have the `s
 -> If you run clustered Proxmox VE, you will need to configure the `sudo` privilege for the user on all nodes in the cluster.

--> `sudo` is not installed by default on Proxmox VE nodes. You can install it via the command line on the Proxmox host: `apt install sudo`
-
 ~> The `root` user on the Proxmox node must be configured with `bash` as the default shell.

 You can configure the `sudo` privilege for the user via the command line on the Proxmox host. In the example below, we create a user `terraform` and assign the `sudo` privilege to it:

@@ -196,7 +194,7 @@ You can configure the `sudo` privilege for the user via the command line on the
   sudo visudo
   ```

-  Add the following lines to the end of the file:
+  Add the following lines to the end of the file, but **before** the `@includedir /etc/sudoers.d` line:

   ```sh
   terraform ALL=(root) NOPASSWD: /sbin/pvesm
@@ -206,13 +204,7 @@ You can configure the `sudo` privilege for the user via the command line on the

   Save the file and exit.

-- Copy your SSH public key to the new user on the target node:
-
-  ```sh
-  ssh-copy-id terraform@
-  ```
-
-  or manually add your public key to the `~/.ssh/authorized_keys` file of the `terraform` user on the target node.
+- Copy your SSH public key to the `~/.ssh/authorized_keys` file of the `terraform` user on the target node.

 - Test the SSH connection and password-less `sudo`:
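
For reference, the three values in the updated `example/terraform.tfvars` plug into the provider configuration roughly as follows. This is a minimal sketch based on the provider's documented `endpoint`, `api_token`, and `ssh` settings; the literal values are the placeholders from the example above (not real credentials), and the version constraint is only illustrative.

```hcl
terraform {
  required_providers {
    proxmox = {
      source  = "bpg/proxmox"
      version = ">= 0.60.0" # illustrative; pin to the release you actually use
    }
  }
}

provider "proxmox" {
  # API endpoint and token, matching example/terraform.tfvars above.
  endpoint  = "https://pve.example.doc:8006/"
  api_token = "root@pam!terraform=00000000-0000-0000-0000-000000000000"

  # The ssh block is only needed for the operations listed under "Known Issues"
  # that fall back to SSH/SFTP (snippet uploads, disk image imports, etc.).
  ssh {
    agent    = true
    username = "terraform"
  }
}
```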
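Since the `ssh-copy-id` step is dropped from `docs/index.md`, the manual alternative can look roughly like the sketch below. It assumes the test node is reachable as `pve`, that you still have `root` SSH access to it, that `useradd -m terraform` has already created the home directory, and that your public key lives at `~/.ssh/id_ed25519.pub`; adjust names and paths to your setup.

```sh
# Create the .ssh directory for the terraform user with the expected ownership and mode.
ssh root@pve 'install -d -m 700 -o terraform -g terraform /home/terraform/.ssh'

# Append your public key to the terraform user's authorized_keys and fix permissions.
cat ~/.ssh/id_ed25519.pub | ssh root@pve \
  'cat >> /home/terraform/.ssh/authorized_keys &&
   chown terraform:terraform /home/terraform/.ssh/authorized_keys &&
   chmod 600 /home/terraform/.ssh/authorized_keys'
```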