
avoid reading data sources which are not needed during destroy #36777

Open
EugenKon opened this issue Mar 26, 2025 · 2 comments
Labels
enhancement, new (new issue not yet triaged)

Comments


EugenKon commented Mar 26, 2025

Terraform Version

v1.11.2

Terraform Configuration Files

This is a separate module, so here we just assume that the zone exists. Normally it does, but during destroy this hosted zone has already been destroyed by a different module.

data "aws_route53_zone" "base-domain" {
  name = local.domain_name
}

resource "aws_route53_record" "acm-ssl-validation" {
  for_each = {
    for dvo in aws_acm_certificate.acm-ssl.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
    if !startswith(dvo.domain_name, "*.")
  }

  zone_id = data.aws_route53_zone.base-domain.zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 60
}

Debug Output

│ Error: no matching Route 53 Hosted Zone found
│
│   with module.private-cloud.data.aws_route53_zone.base-domain,
│   on modules/private-cloud/ssl.tf line 181, in data "aws_route53_zone" "base-domain":
│  181: data "aws_route53_zone" "base-domain" {
│
╵

Expected Behavior

The resources were already destroyed by the first run, so the second run cannot find them. Terraform should not try to read those resources, because we are destroying the cluster. E.g. the hosted zone no longer exists, so there is no reason to try to destroy DNS records in it; they are already gone.

Actual Behavior

When destroying the cluster, Terraform still tries to read the data source and fails with "no matching Route 53 Hosted Zone found", aborting the destroy.

Steps to Reproduce

  1. terraform destroy (failed, because RDS has deletion protection)
  2. remove deletion protection from RDS
  3. terraform destroy (failed, because the hosted zone was already destroyed)

Additional Context

No response

References

No response

Generative AI / LLM assisted development?

No response

@EugenKon EugenKon added the bug and new (new issue not yet triaged) labels Mar 26, 2025
@jbardin (Member) commented Mar 26, 2025

Hi,

If a resource is destroyed already, the state will reflect that and Terraform will not attempt to destroy the resource again (and even if it does, most providers would return no error when the resource is already destroyed).

The error you have, however, is from data.aws_route53_zone.base-domain, which is a data source not managed by this Terraform configuration, so the actual resource would be expected to outlive the entire configuration. If there is a problem outside of your configuration, you may need to work around it manually. Using -target to remove the problematic resource more specifically could help here, or even temporarily modifying the configuration.
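The -target workaround suggested above can be sketched as follows; the resource addresses are taken from the error output in this issue and are illustrative only, so they would need to match your actual configuration:

```shell
# Destroy only the records that depend on the missing zone, targeting
# them explicitly so the rest of the graph (and its data sources) is
# not evaluated (address is an example from this issue; adjust as needed):
terraform destroy -target='module.private-cloud.aws_route53_record.acm-ssl-validation'

# Alternatively, if the records were already destroyed out of band,
# remove them from state so Terraform stops tracking them entirely:
terraform state rm 'module.private-cloud.aws_route53_record.acm-ssl-validation'
```

Both `-target` and `terraform state rm` are standard Terraform CLI features; `state rm` on a resource address removes all of its instances from state without touching the real infrastructure.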

A possible enhancement which could be made is that there may be a way to skip reading a data source if that data does not feed into anything which could use it, which is probably only provider configurations. That type of pruning is a little harder to do in Terraform, but we could look into it.

@jbardin jbardin changed the title [Bug]: Do not check references during cluster destroy avoid reading data sources which are not needed during destroy Mar 26, 2025
@jbardin jbardin added enhancement and removed bug labels Mar 26, 2025
@jbardin (Member) commented Mar 27, 2025

Oh, I forgot to mention: Terraform is required to refresh all instances before a destroy in order to get the most current state. Technically, if a resource can be read from the existing state it could be destroyed as well, but some providers don't work well if a resource is unexpectedly missing or changed at that point, for various reasons. If you plan the destroy operation with the flag -refresh=false, there will be no reason to read the data source at all.

So in the long run, this may just be a request to find a way to always skip the refresh step on destroy, or make it as minimal as possible, rather than trying to determine from context whether data sources need to be read.
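The -refresh=false workaround above would look like this in practice (a sketch using standard Terraform CLI flags; the plan file name is arbitrary):

```shell
# Plan the destroy without refreshing, so data sources and managed
# resources are not re-read against the real infrastructure:
terraform plan -destroy -refresh=false -out=destroy.tfplan

# Apply the saved destroy plan:
terraform apply destroy.tfplan
```

Because the refresh step is skipped, Terraform works purely from the recorded state, so the missing hosted zone is never looked up.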
