Monday, March 14, 2022

How I Discovered Thousands of Open Databases on AWS

My journey of finding and reporting databases holding sensitive data about Fortune 500 companies, hospitals, crypto platforms, startups during due diligence, and more.


Overview

It is easy to find misconfigured assets on cloud services by scanning the CIDR blocks (IP ranges) of managed services, since these ranges are known and published by the providers.

An email from one of the companies I reported the issue to.

In just 1 day, I found thousands of ElasticSearch databases and Kibana dashboards that exposed sensitive information, most probably by mistake:

  • Sensitive information about customers: emails, addresses, current occupation, salaries, private wallets addresses, locations, bank accounts, and other sensitive information.
  • Production logs written by Kubernetes clusters, from application logs to kernel and system logs.
    Logs collected from all the nodes, pods, and applications running on top of them, aggregated in one place and open to the world. I just got there first.
  • Some of the databases had already been hit by ransomware.
A company that I found compromised provides services to these companies. The image is taken from their website.

Background

There must be plenty of assets out there, listening outside their intended scope, waiting to be discovered.
Published CIDR blocks make it easier for attackers to find these assets, to spread all kinds of malware, or to get their hands on the sensitive data of real companies.

DevOps engineers, developers, and IT practitioners often misconfigure some of the following:

  • Binding the socket to the wrong network interface.
    For example, listening for connections on 0.0.0.0, so the service is reachable on every network interface instead of only the internal-network interface address (172.x.x.x). A minimal sketch of this follows the list.
  • A misconfigured security group for the cluster (allowing all TCP and all UDP traffic from broad CIDR blocks).
  • Sometimes, the security group is changed by someone who is not aware of the consequences.
  • The default network or subnet is used, its settings are inherited, and a public IPv4 address is silently assigned.
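
To make the first point concrete, here is a minimal Python sketch (the port and addresses are hypothetical placeholders, not taken from the scan): a socket bound to 0.0.0.0 accepts connections on every interface, while binding to the instance's private address keeps it reachable only from the internal network.

```python
import socket

PORT = 9200  # hypothetical service port

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Risky: reachable on every interface, including a public one if the
# instance has a public IPv4 address and a permissive security group.
srv.bind(("0.0.0.0", PORT))

# Safer alternative: bind only to the instance's private (VPC-internal)
# address instead, e.g. srv.bind(("172.31.0.10", PORT))

srv.listen()
```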

The Hypothesis

I hypothesized that I could easily find misconfigured assets, caused mostly by human error, if I scanned specific CIDR blocks from within the cloud provider (from an instance / VPS).

The key is to scan smartly, leveraging the known network infrastructure (the published CIDR blocks for the services we want to scan) to find live servers within reach.

If you search a bit, you can find the relevant CIDR blocks of every cloud provider.
Let’s say I’m an IT technician or a security engineer who needs to allow incoming connections from a specific cloud service, like AWS’s Elastic Container Service (ECS). This can be done by adding the service’s CIDR blocks to the security group rules, allowing connections from those ranges.

All cloud providers publish a list of their services along with the CIDR blocks (IP address ranges) for each service.

Get the CIDR blocks of ElasticSearch Service (ES) from AWS
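
As a sketch of how one might pull these ranges programmatically: AWS publishes its IP ranges as a JSON document at ip-ranges.amazonaws.com. The service label and region in the filter below are just example values, not necessarily the ones used in the original scan.

```python
import json
import urllib.request

# AWS publishes its IP ranges at this documented public URL.
URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

# Example filter values; adjust the service label and region to the
# managed service you are interested in.
SERVICE = "EC2"
REGION = "us-east-1"

cidrs = [
    p["ip_prefix"]
    for p in data["prefixes"]
    if p["service"] == SERVICE and p["region"] == REGION
]

print(f"{len(cidrs)} CIDR blocks for {SERVICE} in {REGION}")
print("\n".join(cidrs[:10]))
```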

Some CIDR blocks were accessible only from within the cloud provider. All you have to do is start an instance / VPS with internet connectivity inside the cloud provider you want to scan.

What does one need to find them?

  • A basic understanding of networks, the IP stack and routing, and cloud infrastructure.
  • A lightweight port-scanning tool (like MasScan or NMap)
  • A list of CIDR blocks to scan (managed services like Kubernetes or ElasticSearch) along with the ports that are most likely to be open on instances within these IP ranges.
  • A tool to visualize all the data we collect (like ElasticSearch+Kibana)

Port Scanning — Collecting the data about the assets

I used MasScan to scan for open ports on the CIDR blocks I selected.
MasScan is a TCP port scanner that spews SYN packets asynchronously.
Under the right circumstances, it can scan the entire Internet in under 5 minutes.

The input is the CIDR blocks (e.g. 50.60.0.0/16 or 118.23.1.0/24) and the ports we would like to scan (9200, 5601, 80, 443, etc.).
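
A minimal sketch of driving such a scan from Python, assuming MasScan is installed (it usually needs root privileges; the CIDR blocks and rate below are placeholders, and you should only scan ranges you are authorized to scan):

```python
import subprocess

# Placeholder inputs: CIDR blocks and ports to scan.
cidr_blocks = ["50.60.0.0/16", "118.23.1.0/24"]
ports = "9200,5601,80,443"

# masscan writes JSON results with -oJ; --rate caps packets per second.
subprocess.run(
    ["masscan", *cidr_blocks, "-p", ports, "--rate", "10000", "-oJ", "scan.json"],
    check=True,
)
```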

I started an ELK stack on my instance using Docker images.
I then ran MasScan on the same machine, and it started scanning the CIDR blocks. Its output (the response logs) was streamed to ElasticSearch using LogStash and visualized through Kibana.
During the scan, the TCP responses were logged and indexed in ElasticSearch.
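
The original pipeline used LogStash for this step; as a rough sketch of the same idea, the MasScan JSON output could also be bulk-indexed directly with the official Python ElasticSearch client (the index name and output file below are assumptions):

```python
import json
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

# masscan -oJ output is roughly a JSON array of records like:
# {"ip": "1.2.3.4", "timestamp": "...",
#  "ports": [{"port": 9200, "proto": "tcp", "status": "open", ...}]}
# Depending on the masscan version, the file may need minor clean-up
# (e.g. a trailing comma) before it parses as valid JSON.
with open("scan.json") as f:
    records = json.load(f)

actions = (
    {
        "_index": "masscan-results",  # hypothetical index name
        "_source": {
            "ip": r["ip"],
            "port": p["port"],
            "proto": p["proto"],
            "status": p.get("status", "open"),
            "timestamp": r.get("timestamp"),
        },
    }
    for r in records
    for p in r.get("ports", [])
)

helpers.bulk(es, actions)
```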

I let it run for a while, and in no time I had 337K+ IP-and-port combinations scanned.
Many of them were open.
This is how my dashboard looked:

337K open ports in AWS’s ElasticSearch service CIDR blocks (customers’ clusters), in a few hours.

Analyzing And Visualizing The Data

The photos are censored.
I reported the incidents to the involved parties as well as to AWS.
I got their permission to proceed with this post.

Most of the parties addressed the issue within a day or two.
Some of them ignored the reports to this day.

Thanks to the pipeline I created, I had real-time logs and could start looking at the services immediately, while the scan kept finding more.

  • I exported the assets that were up from Kibana to a CSV file using Kibana’s export button, then loaded it with pandas (Python).
  • For each IP, I fired an HTTP HEAD request and got an HTTP response containing the asset’s fingerprint.
  • I eliminated the responses that required authentication.
  • Then I printed the web page titles from the HTTP responses (a sketch follows below).
We can do the same for the ElasticSearch port, or any other service. The IPs were already scanned by MasScan, so I assumed they were up.
Printing the titles of the HTML documents returned by the endpoints I found (this can also be done using Nmap’s HTTP scripts).
Getting the titles of the servers I found relevant.
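
A rough sketch of those steps is below. The CSV column names are assumptions, and it uses requests plus a simple regex for the title rather than the original scripts:

```python
import re
import pandas as pd
import requests

# Load the live hosts exported from Kibana; column names are assumptions.
df = pd.read_csv("open_hosts.csv")

for _, row in df.iterrows():
    url = f"http://{row['ip']}:{row['port']}"
    try:
        # HEAD first: a cheap fingerprint of the server.
        head = requests.head(url, timeout=5, allow_redirects=True)
    except requests.RequestException:
        continue

    # Skip endpoints that require authentication.
    if head.status_code in (401, 403):
        continue

    try:
        # Fetch the page body and print its HTML title, if any.
        body = requests.get(url, timeout=5).text
    except requests.RequestException:
        continue

    match = re.search(r"<title[^>]*>(.*?)</title>", body, re.IGNORECASE | re.DOTALL)
    if match:
        print(row["ip"], row["port"], match.group(1).strip())
```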


