Reconnaissance
In this step, we will look at how to gather information online using DigitalOcean's services and features. I will presume you have some experience pentesting at least one cloud provider.
As I said (wrote, you fucking vocabulary nazi) in the previous article, Spaces use an S3-compatible API as their backend, so they inherit some of AWS S3's features. Think of them as S3 when you order it online.
Just as with AWS S3, you can use HTTP response codes to find whether a Space exists and, if it does, what access controls are applied to it.
The possible hosts for a DO Space are:
https://region.digitaloceanspaces.com/space_name
https://space_name.region.digitaloceanspaces.com/
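A minimal fuzzing sketch of this idea in bash (the wordlist names.txt and the region list are my assumptions; adjust both to your target):
# Probe candidate Space names across a few regions and read the status code.
# 404 = no such Space; 200 = exists and listable; 403 = exists but restricted.
for region in nyc3 ams3 sgp1 sfo3 fra1; do
  while read -r name; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "https://${name}.${region}.digitaloceanspaces.com/")
    [ "$code" != "404" ] && echo "${name}.${region} -> ${code}"
  done < names.txt
done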
Access controls on DO buckets can be set at the Space or object level. So even if a bucket is public but its objects are restricted, you cannot access those objects:
We can use this to enumerate a bucket and its objects. I have built a module in Nebula to do exactly that:
So, your basic everyday Bucket and Object ACLs.
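As a quick manual check of those two ACL levels (space_name, region, and some_object are placeholders for values found while fuzzing):
# A public, listable Space returns an XML listing of its objects:
curl -s "https://space_name.region.digitaloceanspaces.com/"
# Then request each object directly: 200 = readable, 403 = object-level ACL blocks you.
curl -s -o /dev/null -w "%{http_code}\n" "https://space_name.region.digitaloceanspaces.com/some_object"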
Websites can be fuzzed too, using HTTP response codes. In the case of websites, the host for a DO Space is:
https://space_name.region.cdn.digitaloceanspaces.com/
And you basically do the same as before, just with the CDN host.
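For example, a single probe with the CDN host swapped in (placeholder values again):
curl -s -o /dev/null -w "%{http_code}\n" "https://space_name.region.cdn.digitaloceanspaces.com/"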
Since every Space CDN requires a subdomain, even Google Dorking can help. Just a simple:
site:<domain>
can help. This will not help much, though, if the domain is young and not well indexed.
Also, you can try to find buckets using subdomains. Since each website (or CDN, as DigitalOcean calls them) needs an SSL certificate, you can use crt.sh for that.
I have again created a module for Nebula that does that too:
Then, just query the CNAME DNS record to get the Space name and region. This can also help with Subdomain Takeover attacks.
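For example (assets.example.com is a hypothetical subdomain pulled from crt.sh):
dig +short CNAME assets.example.com
# e.g. space_name.region.cdn.digitaloceanspaces.com.
# If the CNAME resolves but the Space itself returns 404, it may be a takeover candidate.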
GrayHatWarfare (https://buckets.grayhatwarfare.com/) keeps track of open storage (AWS buckets, Azure Storage, DigitalOcean Spaces). It also keeps track of the files inside the buckets.
You can also use the API:
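Something along these lines, assuming you have an API key; the exact endpoint and parameters here are from memory, so check their API docs:
curl -s -H "Authorization: Bearer $GHW_API_KEY" "https://buckets.grayhatwarfare.com/api/v2/buckets?keywords=target"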
And as always, Nebula to the rescue:
Kubernetes clusters, by default, have a public host created that has the format:
cluster_id.k8s.ondigitalocean.com
By itself, this is not a problem, as the host does not directly give any info related to the account, project, or owner. But if the target has configured a CNAME record to point to the kube host, you can tell that the target is using Kubernetes on DigitalOcean.
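A quick check for that (kube.example.com is a hypothetical subdomain):
dig +short CNAME kube.example.com | grep -q '\.k8s\.ondigitalocean\.com\.$' && echo "DigitalOcean Kubernetes host"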
Unlike AWS instances, Droplets do not have a public host pointing to them. They only have IP addresses, which can be assigned a domain's A (and AAAA) record.
Again, something like crt.sh, or a subdomain fuzzer, can help with that:
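For example, pulling all certificate names for a target domain from crt.sh (requires jq; example.com is a placeholder):
curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u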
Same as with Spaces, Apps have a host that is made up of:
app_name-random_characters.ondigitalocean.app
Example: article-app-pepperclipp-rhbwf.ondigitalocean.app
In some tests I have done, the random characters were only 5 lowercase alphanumeric characters, but I could be wrong. The app name can either be set up by us, or default to something like whale-app or seashell-app (I'm confused too. Then again, I'm one to talk.)
While brute-forcing Apps the same way we do Spaces is not effective, trying to find them as domain hosts has a better chance of success. Apps will most likely have a CNAME record pointing to them. After finding a host that points to an App, we can check if the App is still working:
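A minimal liveness check, assuming app.example.com is a subdomain whose CNAME you suspect points to an App:
dig +short CNAME app.example.com   # e.g. something-app-xxxxx.ondigitalocean.app.
curl -s -o /dev/null -w "%{http_code}\n" "https://app.example.com/"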
Each App has namespaces, packages, and actions. To access an App, we need to include all three in the request:
curl https://article-app-pepperclipp-rhbwf.ondigitalocean.app/hello/hello/hello?cmd=whoami
So, in this case, the namespace, package, and action are all "hello". This is a trend you might see with Apps: all three being the same word.
Another thing you might find is a website with no action, so:
So, that's one less field to search.
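Given the same-word trend, one cheap fuzzing pass is to try each wordlist entry as namespace, package, and action all at once (words.txt is an assumption; the host is the example App from above):
while read -r word; do
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://article-app-pepperclipp-rhbwf.ondigitalocean.app/${word}/${word}/${word}")
  [ "$code" != "404" ] && echo "${word} -> ${code}"
done < words.txt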
Apps are usually used for small applications and ones that might require frequent changes. So, some places to look might be:
Surveys
HR Job Openings
Games
Confirmation Forms
Lastly, Functions. Functions are like Lambdas, but with fewer features.
Functions have a specific URL, which contains fields for the region, the namespace, and the package and function names. It looks roughly like:
https://faas-region-random_id.doserverless.co/api/v1/web/namespace/package/function
Again, they can have CNAME records pointed at them, and you can use Google dorking to find them:
site:domain_totest .doserverless.co/api/v1/web/
site:domain_totest /api/v1/web/
Then, based on what you find, you can decide whether to fuzz the remaining fields.
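As with Apps, a CNAME lookup can also confirm a Function sitting behind a subdomain (fn.example.com is hypothetical):
dig +short CNAME fn.example.com | grep -q '\.doserverless\.co\.$' && echo "DigitalOcean Function host"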
DigitalOcean has no IAM (or at least not a good one). Every account has teams, teams can be assigned to projects, and every user in a team is an admin on those projects: they can do everything except assign other users to teams. The only one who can do that is the Owner:
You can use normal recon techniques to find user accounts (see the example after this list):
Harvester
Hunter.io
Google Dorks
Pastebin
HaveIBeenPwned
Sites that advertise breaches
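For instance, a basic theHarvester run against the target domain (example.com is a placeholder; -b all queries every configured source, some of which need API keys):
theHarvester -d example.com -b all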
This can lead to a full compromise of the infrastructure, due to the lack of restrictions that DigitalOcean has.
So, in the end, there are a ton of ways to get info on DigitalOcean. A basic methodology you can use is:
Subdomain Enumeration
S3 Bucket Fuzzing
GrayHatWarfare
Directory and file fuzzing
File Access from ACL
Web Configuration
Try to find files with API credentials (both S3 and other keys)
Check CNAME Records for:
Kubernetes
Apps
Functions
Google dork for functions
Fuzz URLs and Parameters of Functions and Apps
Collect user list from:
Harvester
Hunter.io
Google Dorks
Check for breached users from:
Sites that advertise breaches
HaveIBeenPwned
Pastebin
In the next article, we will be looking at Initial Access methods. If you came this far, it means you read all the way through; thanks, and I hope you learned something from it.