fix(sdk): cover CNAME → dangling S3 in route53 takeover check #10920
- Detect non-alias CNAMEs targeting S3 website endpoints whose bucket is missing from the audited account
- Update metadata title/description/risk/recommendation to reflect both A→IP and CNAME→S3 vectors
- Extend tests with PASS/FAIL for both endpoint formats and a non-S3 CNAME ignore case
✅ Conflict Markers Resolved: All conflict markers have been successfully resolved in this pull request.
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@ Coverage Diff @@
##           master   #10920       +/-   ##
===========================================
- Coverage   86.17%    6.61%    -79.56%
===========================================
  Files         223      849       +626
  Lines        5744    24579     +18835
===========================================
- Hits         4950     1627      -3323
- Misses        794    22952     +22158
===========================================
```

Flags with carried forward coverage won't be shown.
🔒 Container Security Scan
📊 Vulnerability Summary: 4 package(s) affected
Updated changelog to reflect changes in AWS and Azure checks.
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation and see the GitHub Action logs for details.
Context
While auditing a customer environment, the existing `route53_dangling_ip_subdomain_takeover` check did not flag a classic subdomain-takeover scenario: a non-alias `CNAME` pointing at an S3 bucket's website endpoint after the bucket has been deleted. The `CNAME` remains — anyone (any AWS account) can re-register the bucket name and serve content under the original domain.

The previous logic only inspected non-alias `A` records pointing at AWS-owned IPs that were no longer assigned (released EIPs / ENI public IPs). The `CNAME → deleted S3 bucket` vector was left uncovered.

Description
Extend `route53_dangling_ip_subdomain_takeover` so the same check also flags dangling S3 website CNAMEs:

- For every non-alias `CNAME` record, parse the target. If it matches an S3 website endpoint (`<bucket>.s3-website-<region>.amazonaws.com` or the newer `<bucket>.s3-website.<region>.amazonaws.com`), extract the bucket name.
- If the bucket is not present in `s3_client.buckets`, raise a `FAIL` (potential takeover). Otherwise, `PASS`.
- The existing `A → IP` logic is unchanged.
- The check metadata (`CheckTitle`, `Description`, `Risk`, `Recommendation`) is rewritten to describe both vectors, and the S3 website endpoint reference is added to `AdditionalURLs`. `CheckID` is not renamed, so compliance mappings remain stable.
- The check now imports `s3_client`, so the AWS scan must already evaluate `s3` for the new path to fire (it does — `s3` is in the default service set).

Steps to review
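The CNAME branch described above can be sketched as follows. This is a minimal illustration under stated assumptions, not Prowler's actual implementation: `S3_WEBSITE_RE`, `dangling_s3_cname`, and `audited_buckets` are hypothetical names introduced here for clarity.

```python
import re

# Matches both S3 website endpoint formats:
#   <bucket>.s3-website-<region>.amazonaws.com   (dash style)
#   <bucket>.s3-website.<region>.amazonaws.com   (newer dot style)
# Hypothetical sketch; names are illustrative, not Prowler's internals.
S3_WEBSITE_RE = re.compile(
    r"^(?P<bucket>[a-z0-9.-]+)\.s3-website[.-](?P<region>[a-z0-9-]+)\.amazonaws\.com\.?$"
)

def dangling_s3_cname(target: str, audited_buckets: set) -> bool:
    """Return True when the CNAME target is an S3 website endpoint whose
    bucket does not exist in the audited account (potential takeover)."""
    match = S3_WEBSITE_RE.match(target.strip().lower())
    if not match:
        return False  # not an S3 website endpoint -> ignored by the check
    return match.group("bucket") not in audited_buckets
```

Non-alias CNAMEs that do not match the pattern (e.g. a `github.io` target) fall through untouched, mirroring the non-S3 ignore case in the tests.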
- `prowler/providers/aws/services/route53/route53_dangling_ip_subdomain_takeover/route53_dangling_ip_subdomain_takeover.py` — confirm the regex (`^<bucket>.s3-website[.-]<region>.amazonaws.com.?$`) covers both endpoint formats and that the existing `A` record branch is untouched.
- New tests:
  - `test_hosted_zone_cname_to_existing_s3_website_bucket` → PASS
  - `test_hosted_zone_cname_to_dangling_s3_website_bucket` → FAIL
  - `test_hosted_zone_cname_to_dangling_s3_website_bucket_dot_format` → FAIL (newer dot-style endpoint)
  - `test_hosted_zone_cname_to_non_s3_target_is_ignored` → 0 findings
- Confirm the dangling-bucket cases are reported as `FAIL`.

Checklist
Community Checklist
- The existing check `route53_dangling_ip_subdomain_takeover` is extended.
- Permissions: `route53:ListHostedZones`, `route53:ListResourceRecordSets`, `ec2:DescribeAddresses`, `ec2:DescribeNetworkInterfaces`. The new S3 path consumes `s3_client.buckets`, which is populated by the existing `s3:ListAllMyBuckets` permission already used by the S3 service.

SDK/CLI
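For reference, the read-only permissions listed in the checklist could be granted with a minimal IAM policy along these lines (an illustrative sketch; the `Sid` is made up, and a real Prowler role would carry a broader read-only set):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ProwlerRoute53DanglingTakeoverCheck",
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets",
        "ec2:DescribeAddresses",
        "ec2:DescribeNetworkInterfaces",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    }
  ]
}
```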
License
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.