(This article was drafted on the plane to the SANS Cloud Security Summit, but I never got around to publishing it. I dive deeper into the Threat Hunting topic in my DevChat at AWS re:Inforce, to be published June 26th.)
One of the purposes of Antiope is to provide a platform for Cloud Threat Hunting. Traditional Threat Hunting looks for evidence of compromise; in this case, what we're really hunting are threats from misconfiguration. I've used this in a handful of cases (nowhere near as many as I'd hoped) across the cloud accounts I'm responsible for.
As threat-hunters have explained it, Threat Hunting follows a few steps:
- Develop a Hypothesis
- Gather Data from your environment related to the hypothesis
- Sift through that data looking to confirm the hypothesis
- Document your findings and, if applicable, create new detections.
I’ll go through two examples of Threat Hunting I’ve done.
Public ElasticSearch Clusters
AWS’s Managed ElasticSearch service (ES) supports resource policies. These allow the ES cluster to grant access without IAM authentication. ES clusters can also have public endpoints. You see where this is going: it’s possible to create an AWS ElasticSearch cluster that is on the public Internet and requires no IAM authentication. In AWS’s defense, when you do this via the console, there is a big warning dialog that tells you what you’re doing. But it’s a wordy dialog box, and people don’t read dialog boxes.
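For illustration, an open resource policy looks something like the sketch below (the domain ARN is a placeholder, not a real account). The `"Principal": {"AWS": "*"}` with no `Condition` block is the dangerous combination:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/example-domain/*"
    }
  ]
}
```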
Hypothesis: somewhere in my environment, someone has ignored the warnings or doesn’t understand resource policies and has exposed a cluster to the world.
Data Gathering: We need to inventory all the ES Clusters across all our accounts.
Data Sifting: This could be done with an ElasticSearch query against the Antiope ElasticSearch cluster (which was not public!).
```
supplementaryConfiguration.AccessPolicies.Statement.Principal.AWS.keyword: "*" AND configuration.Endpoint:* AND NOT supplementaryConfiguration.AccessPolicies.Statement.Condition:*
```
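The same sifting logic can be sketched in Python against documents shaped like Antiope's ES-domain records. The field names are taken from the query above; the exact document structure here is an assumption for illustration:

```python
def is_public_es_domain(doc):
    """Return True if the ES domain document shows a public endpoint and a
    resource policy granting access to any principal with no conditions.

    The field layout (configuration / supplementaryConfiguration) is an
    assumption based on the query above, not a documented Antiope schema.
    """
    config = doc.get("configuration", {})
    if not config.get("Endpoint"):
        # No public endpoint, so nothing exposed even if the policy is open
        return False

    policy = doc.get("supplementaryConfiguration", {}).get("AccessPolicies", {})
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal", {})
        is_wildcard = (
            principal == "*"
            or (isinstance(principal, dict) and principal.get("AWS") == "*")
        )
        # A Condition (e.g. an IP allow-list) mitigates the wildcard principal
        if is_wildcard and not stmt.get("Condition"):
            return True
    return False
```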
Follow up: Our security standard was promptly updated and this query was added to our scorecards.
Dangling Origins
This is typically categorized as a subdomain takeover, but the premise here is that you can forget to clean up AWS Route53 resource records after deleting the resource they point to. In several cases, another AWS customer can create a resource with the same name, and your Route53 will happily point to someone else’s resource.
Hypothesis: somewhere in my environment, there exists a Route53 Alias to an S3 bucket that no longer exists, or doesn’t exist in any of our trusted accounts.
Data Gathering falls into two categories:
- I need a list of all the S3 buckets in my AWS accounts (I assume a bucket doesn’t have to be in the account with the hosted zone)
- I need a list of all the Route53 resource-records that are ALIASes for the S3 service.
This is more challenging than the first example. First, we’ve got to cross-reference two Antiope indexes: S3 Buckets and Route53 Hosted Zones. Second, inventory of Route53 isn’t straightforward. Zones can have a lot of resource records, and AWS has a concurrency limit on the number of times you can call the Route53 API. Antiope used to pull resource record sets, but I had to pull that out due to hitting API limits (and the sheer data volume).
Data Sifting: This is an expensive (from a wall-clock perspective) query. The sifting process does something like this:
- Store all the known S3 buckets (about 6000) in a Python list.
- Grab each route53 domain from ElasticSearch
- AssumeRole into the account using the Antiope Audit role
- List all Resource Records (you can filter by Type A)
- Find the resource records with `AliasTarget` in the response body. Look for `"DNSName": "s3-website*"` in the AliasTarget dict.
- If that is a match, the AliasTarget Name is the name of the S3 bucket. Make sure that exists in your list.
- If missing, attempt to list the bucket. If you have a “NoSuchBucket” response, this is a risk. If you get “AccessDenied” then someone already has the bucket, and it’s not one of your accounts.
Follow up: Right now I’ve not fully operationalized this. We did a comprehensive cleanup, and I run this search on occasion. Route53 Aliases to S3 aren’t as common as some other Dangling Origin issues:
- CloudFront pointing to missing buckets
- Route 53 pointing to missing Elastic Beanstalk URLs