With the release of Terraform 1.5 comes a new block type, import. This follows the addition of the moved block in Terraform 1.1. Both of these block types represent a new way to manage state updates via code, in preference to the CLI commands terraform import and terraform state mv, respectively. Is this just a complication, or does it have significant value? What challenge is this attempting to solve?
NOTE: the import block was known as the add block in the 1.1-alpha codebase. HashiCorp is fairly consistent about using a distinctive name during pre-release that differs from the one used at release time. This might seem like a bad choice to some, but it marks a clear delineation between pre-release and release usage, making it unmistakable which version the Terraform code was written to support.
Challenges with Managing State
With a local backend (the default for Terraform), using the CLI commands is of little concern (aside from proper quoting for the shell), as the state file is commingled with the code. However, local state is an anti-pattern for production environments, or even for lower environments that lead to production. State should be stored in a remote backend, ideally one with support for state locking. If collaboration is important (it is), state needs to be in a secured, shared location, with the ability to retain several versions.
Why not just keep state in a Git repository? State will almost certainly contain sensitive data like secrets, keys, etc., which could easily be leaked even from a private repository (one git clone… and there is a leak). In addition, using pipelines to deploy code practically necessitates a remote backend, short of building a Rube Goldberg Contraption.
So, using remote state is a best practice to support collaboration and automation. This becomes difficult if repository sprawl begins (which is not necessarily a Bad Thing™), because connectivity to remote state is required and brings challenges around permissions and, perhaps, key rotation. Giving people access to state, as opposed to processes, is a significant governance concern, but that is a story for another time. In addition, manipulating state files with the CLI leaves very little accountability for the specific changes made, and accountability is a principal value of Infrastructure as Code.
State Management as Code with import and moved Blocks
With these new blocks, state changes can be written as simple two-argument blocks. These blocks can benefit many workflows by living in intuitively named files, like import.tf or moved.tf. The code is written, committed, and pushed to the Git repository of record, and a pipeline is easily triggered to perform the tasks. The blocks follow the principle of idempotency, so they could be left in place, but it is prudent to clean up the changes. A sensible practice may be to move the blocks of code to an archive file, though the Git history will keep a record of the changes even if the files are simply deleted and pushed through the same process. This helps preserve the principle of least privilege, and accountability is maintained through git blame:
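As a sketch of what those two-argument blocks look like (the resource addresses and ID below are hypothetical):

```hcl
# Rename a resource in state without destroying and recreating it
# (replaces `terraform state mv`; addresses are illustrative)
moved {
  from = aws_s3_bucket.logs
  to   = aws_s3_bucket.log_archive
}

# Adopt an existing resource into state (Terraform 1.5+)
# (replaces `terraform import`; the id format is provider-specific)
import {
  to = aws_s3_bucket.log_archive
  id = "example-log-archive-bucket"
}
```

A plan/apply cycle then carries out the state changes, with the blocks themselves recorded in version control.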

Allowing the pipeline to facilitate the task means there are no challenges with CLI access to remote state, which is another win.
This practice is similar to managing relational databases as code by storing Data Definition Language (DDL) in a repository to track schema changes over time.
Code Generation Sneaks In
The import block also brings another capability that is more suited for local development: code generation. I have written on Azure Export for Terraform (previously known as Azure Terrafy) as a means to generate Terraform code for Azure environments. There is also Terraformer, which was originally written for GCP and also supports AWS. With the import block, there is now a universal tool for generating code from existing resources, one that makes use of Terraform’s provider model.
Keep in mind that the generated code is rather generic and hard-coded, just as with the other tools. Refactoring is in order if readability and maintainability are priorities (they are).
When using the CLI, HashiCorp recommends performing terraform import first, then writing code to match. This allows terraform plan to identify differences between the code and the existing infrastructure, which facilitates iterating on the code until terraform plan returns “No changes”:
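A sketch of that CLI workflow, using a hypothetical resource address and ID (note that terraform import requires at least a placeholder resource block to exist in the configuration):

```shell
# A minimal resource block must already exist in the config, e.g.:
#   resource "aws_s3_bucket" "log_archive" {}
terraform import aws_s3_bucket.log_archive example-log-archive-bucket

# Then flesh out the configuration and iterate
terraform plan   # repeat until it reports "No changes"
```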

However, the workflow with the import block is different and aligns with my general preference, which is to write the code first, then perform the import via a plan/apply cycle:
```shell
terraform plan -out=tfplan
terraform apply tfplan
```
If no matching code exists, code generation can be used with the following plan/apply cycle:
```shell
terraform plan -generate-config-out=import_resources.tf -out=tfplan
terraform apply tfplan
```
Obligatory refactoring would follow.
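As a hypothetical illustration of why refactoring is needed, generated configuration tends to hard-code every attribute as a literal, with no variables, references, or modules (the resource and values below are invented; actual output depends on the provider and resource):

```hcl
# Hypothetical excerpt of a generated import_resources.tf
resource "aws_s3_bucket" "log_archive" {
  bucket        = "example-log-archive-bucket"
  force_destroy = false
  tags = {
    environment = "production"
  }
}
```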
Drift Detection
If the priority is drift detection, resources could be imported quickly without a preference for refactoring. The code could simply be generated, and then a drift-detection strategy could be employed with Terraform Cloud, or by running a scheduled terraform plan within a pipeline along with notifications for detected changes.
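One sketch of the scheduled-pipeline approach, assuming a runner that already has provider credentials and backend access (notify.sh is a hypothetical notification hook). The -detailed-exitcode flag makes drift machine-detectable:

```shell
#!/usr/bin/env bash
# Scheduled drift check (e.g., a nightly pipeline job)
terraform init -input=false

# With -detailed-exitcode: 0 = no changes, 1 = error, 2 = drift detected
terraform plan -detailed-exitcode -input=false
case $? in
  0) echo "No drift detected" ;;
  2) echo "Drift detected"; ./notify.sh ;;   # hypothetical notification hook
  *) echo "Plan failed"; exit 1 ;;
esac
```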
