Himanshu Kukreja

Bonus Problem 8: AWS S3

The World's Most Durable Storage System: 11 Nines That Changed Everything


🗄️ How Do You Promise to Never Lose a Single File Out of 500 Trillion?

Imagine this challenge: You're storing 500 trillion objects, hundreds of exabytes of data, for millions of customers worldwide. Hard drives fail constantly. Data centers lose power. Natural disasters strike. And yet, you promise to lose only 1 object out of 10 million... every 10,000 years.

This is Amazon S3: the storage system that defied conventional distributed-systems trade-offs by delivering both strong consistency AND 99.999999999% durability at planetary scale. It's not just cloud storage. It's an engineering marvel that other services are measured against.

THE AWS S3 SCALE (2025)

┌──────────────────────────────────────────────────────────────────────────┐
│                                                                          │
│   STORAGE                                                                │
│   ───────                                                                │
│   Objects stored:                500+ Trillion                           │
│   Data stored:                   Hundreds of Exabytes                    │
│   Single object max size:        5 TB (up to 50 TB in 2025)              │
│                                                                          │
│   REQUESTS                                                               │
│   ────────                                                               │
│   Requests per second:           200+ Million (peak)                     │
│   GET requests per prefix:       5,500/second                            │
│   PUT requests per prefix:       3,500/second                            │
│                                                                          │
│   DURABILITY                                                             │
│   ──────────                                                             │
│   Durability:                    99.999999999% (11 nines)                │
│   Availability (Standard):       99.99%                                  │
│   Availability zones (min):      3 per region                            │
│                                                                          │
│   ARCHITECTURE                                                           │
│   ────────────                                                           │
│   Microservices:                 350+                                    │
│   AWS Regions:                   30+                                     │
│   Storage classes:               9 different tiers                       │
│                                                                          │
│   HISTORY                                                                │
│   ───────                                                                │
│   Launched:                      March 14, 2006                          │
│   First AWS service:             Yes (along with SQS)                    │
│   Years of operation:            19 years                                │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘

This is the system we'll design today, and along the way we'll discover how AWS achieved the impossible: strong consistency at 11 nines of durability.


The Interview Begins

You walk into the interview room. The interviewer smiles and gestures to the whiteboard.

Interviewer: "Thanks for coming in. Today we're going to design a distributed object storage system like S3. I'm interested in how you think about durability, consistency, and scale. Please think out loud; this is collaborative."

They write on the whiteboard:

╔══════════════════════════════════════════════════════════════════════════╗
β•‘                                                                          β•‘
β•‘                Design a Distributed Object Storage System                β•‘
β•‘                                                                          β•‘
β•‘   Requirements:                                                          β•‘
β•‘   - Store objects up to 5 TB each                                        β•‘
β•‘   - 11 nines (99.999999999%) durability                                  β•‘
β•‘   - Strong read-after-write consistency                                  β•‘
β•‘   - Handle millions of requests per second                               β•‘
β•‘   - Multiple storage tiers for cost optimization                         β•‘
β•‘   - Global availability across multiple regions                          β•‘
β•‘                                                                          β•‘
β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•

Interviewer: "Take a few minutes to think about this, then walk me through your approach. We have about 45 minutes."


Phase 1: Requirements Clarification (5 minutes)

Before diving in, you take a breath and start asking questions.

Your Questions

You: "Before I start designing, let me clarify some requirements. What's the target scale: how many objects and how much data?"

Interviewer: "Think planetary scale. Hundreds of trillions of objects, exabytes of data."

You: "For durability, you mentioned 11 nines. What does that mean in practice?"

Interviewer: "If you store 10 million objects, you might lose 1 object every 10,000 years. Durability is non-negotiable; data loss is catastrophic."

You: "What about consistency? Traditional distributed systems trade consistency for availability."

Interviewer: "We need strong read-after-write consistency. After a write succeeds, any subsequent read must return the latest data. No stale reads."

You: "What are the latency requirements for reads and writes?"

Interviewer: "First byte latency around 100-200ms for small objects. Large objects can be slower but must support parallel downloads."

You: "Perfect. Let me summarize the requirements."

Functional Requirements

1. OBJECT OPERATIONS
   - PUT: Upload objects up to 5 TB
   - GET: Download objects (full or byte-range)
   - DELETE: Remove objects
   - LIST: Enumerate objects with prefix filtering
   - HEAD: Get object metadata without content
   - Multipart upload for large objects

2. ORGANIZATION
   - Buckets: Containers for objects
   - Keys: Unique identifiers within buckets
   - Prefixes: Logical grouping (like folders)
   - Metadata: Custom key-value pairs per object

3. DATA MANAGEMENT
   - Versioning: Keep all versions of objects
   - Lifecycle policies: Auto-transition storage tiers
   - Replication: Cross-region copying
   - Encryption: At rest and in transit

4. ACCESS CONTROL
   - IAM policies: User-level permissions
   - Bucket policies: Resource-level permissions
   - ACLs: Fine-grained object access
   - Presigned URLs: Temporary access

Non-Functional Requirements

1. DURABILITY
   - 99.999999999% (11 nines)
   - 1 object lost per 10 million over 10,000 years
   - Survive AZ failures, region failures

2. AVAILABILITY
   - 99.99% for Standard tier
   - 99.9% for infrequent access tiers
   - Graceful degradation under load

3. CONSISTENCY
   - Strong read-after-write for all operations
   - Strong consistency for LIST operations
   - No stale reads after successful writes

4. PERFORMANCE
   - First byte latency: 100-200ms
   - 3,500 PUT/second per prefix
   - 5,500 GET/second per prefix
   - Unlimited aggregate throughput

Phase 2: Back of the Envelope Estimation (5 minutes)

You: "Let me work through the numbers to understand the scale."

Storage Estimation

OBJECT STORAGE

Scale:
  Objects stored:                500 trillion
  Average object size:           ~500 KB (highly variable)
  Total storage:                 ~250 exabytes

Storage overhead:
  Erasure coding overhead:       ~1.6x (vs 3x for replication)
  Metadata per object:           ~1 KB
  Metadata storage:              500 trillion × 1 KB = 500 PB

With durability overhead:
  Raw storage needed:            250 EB × 1.6 = 400 EB
  Across 3+ AZs:                 Distributed globally

Request Estimation

REQUEST VOLUME

Peak requests:
  Total requests/second:         200+ million
  
Per-prefix limits:
  GET/HEAD per prefix:           5,500/second
  PUT/DELETE per prefix:         3,500/second
  
Daily requests:
  200M × 86,400 = ~17 trillion requests/day
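Because the request limits above apply per prefix, aggregate throughput is scaled by spreading keys over many prefixes. A common pattern (sketched here with a hypothetical `sharded_key` helper) prepends a stable hash bucket to each key:

```python
import hashlib

def sharded_key(key: str, shards: int = 16) -> str:
    """Prefix the key with a stable hash bucket so requests spread
    across `shards` prefixes, each with its own per-prefix rate limit."""
    bucket = int(hashlib.sha256(key.encode()).hexdigest()[:8], 16) % shards
    return f"{bucket:02x}/{key}"

# With 16 prefixes, aggregate GET capacity is roughly 16 x 5,500 = 88,000/s.
keys = [f"logs/2025/01/01/host-{i}.gz" for i in range(1000)]
prefixes = {sharded_key(k).split("/")[0] for k in keys}
print(len(prefixes))  # up to 16 distinct prefixes in use
```

The hash must be stable (same key, same prefix) so objects remain addressable; listing by date then requires fanning out across the hash prefixes.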

Key Metrics Summary

┌──────────────────────────────────────────────────────────────────────────┐
│                         ESTIMATION SUMMARY                               │
│                                                                          │
│   STORAGE                                                                │
│   ├── Objects:                   500 trillion                            │
│   ├── Raw data:                  ~250 exabytes                           │
│   └── With redundancy:           ~400 exabytes                           │
│                                                                          │
│   REQUESTS                                                               │
│   ├── Peak:                      200+ million/second                     │
│   ├── Daily:                     ~17 trillion                            │
│   └── Per-prefix GET limit:      5,500/second                            │
│                                                                          │
│   INFRASTRUCTURE                                                         │
│   ├── Availability Zones:        3+ per region                           │
│   ├── Regions:                   30+                                     │
│   └── Hard drives:               Millions                                │
│                                                                          │
│   DURABILITY MATH                                                        │
│   ├── Annual failure rate:       ~0.000000001% (1e-11 per object)        │
│   ├── Objects at risk:           1 in 100 billion per year               │
│   └── Expected loss:             ~5,000 objects/year (out of 500T)       │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘
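The durability rows can be sanity-checked directly, taking the 11-nines figure as an annual per-object loss probability (consistent with AWS's "1 object per 10 million every 10,000 years" framing):

```python
durability = 0.99999999999            # 11 nines, per object-year
annual_loss_prob = 1 - durability     # ~1e-11
objects = 500e12                      # 500 trillion objects

expected_loss = objects * annual_loss_prob
years_per_loss = 1 / (10e6 * annual_loss_prob)   # for 10 million objects

print(f"~{expected_loss:,.0f} objects/year")        # ~5,000
print(f"one loss per {years_per_loss:,.0f} years")  # 10,000
```

Note that 5,000 expected losses per year at 500 trillion objects is still an extraordinarily small fraction: about 1 in 100 billion.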

Phase 3: High-Level Design (10 minutes)

You: "Now let me sketch out the high-level architecture."

System Architecture

┌──────────────────────────────────────────────────────────────────────────┐
│                       S3 HIGH-LEVEL ARCHITECTURE                         │
│                                                                          │
│   ┌─────────────────────────────────────────────────────────────────┐    │
│   │                         CLIENTS                                 │    │
│   │    ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐           │    │
│   │    │AWS SDK  │  │REST API │  │AWS CLI  │  │ Console │           │    │
│   │    └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘           │    │
│   └─────────┼────────────┼────────────┼────────────┼────────────────┘    │
│             └────────────┴─────┬──────┴────────────┘                     │
│                                │                                         │
│   ┌────────────────────────────┼─────────────────────────────────────┐   │
│   │                            ▼                                     │   │
│   │    ┌──────────────────────────────────────────────────────┐      │   │
│   │    │                   FRONT END LAYER                    │      │   │
│   │    │   DNS → Load Balancer → Authentication → Routing     │      │   │
│   │    └──────────────────────────────────────────────────────┘      │   │
│   │                            │                                     │   │
│   │    ┌──────────────────────────────────────────────────────┐      │   │
│   │    │                  INDEX/METADATA LAYER                │      │   │
│   │    │                                                      │      │   │
│   │    │   ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  │      │   │
│   │    │   │  Partition  │  │   Global    │  │  Lifecycle  │  │      │   │
│   │    │   │   Engine    │  │  Metadata   │  │   Manager   │  │      │   │
│   │    │   └─────────────┘  └─────────────┘  └─────────────┘  │      │   │
│   │    └──────────────────────────────────────────────────────┘      │   │
│   │                            │                                     │   │
│   │    ┌──────────────────────────────────────────────────────┐      │   │
│   │    │                   STORAGE LAYER                      │      │   │
│   │    │                                                      │      │   │
│   │    │   ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  │      │   │
│   │    │   │   Storage   │  │   Erasure   │  │   Repair    │  │      │   │
│   │    │   │    Nodes    │  │   Coding    │  │   Service   │  │      │   │
│   │    │   └─────────────┘  └─────────────┘  └─────────────┘  │      │   │
│   │    │                                                      │      │   │
│   │    │          AZ-1            AZ-2            AZ-3        │      │   │
│   │    │       ┌───────┐       ┌───────┐       ┌───────┐      │      │   │
│   │    │       │ Disks │       │ Disks │       │ Disks │      │      │   │
│   │    │       └───────┘       └───────┘       └───────┘      │      │   │
│   │    └──────────────────────────────────────────────────────┘      │   │
│   │                                                                  │   │
│   │                          AWS REGION                              │   │
│   └──────────────────────────────────────────────────────────────────┘   │
│                                                                          │
└──────────────────────────────────────────────────────────────────────────┘

Component Breakdown

1. Front End Layer

Purpose: Handle all incoming requests, authentication, routing

Key components:

  • DNS Service: Route requests to correct region/endpoint
  • Load Balancers: Distribute traffic across API servers
  • Authentication: Validate IAM credentials, bucket policies
  • Request Router: Direct to correct storage nodes

2. Index/Metadata Layer

Purpose: Track object locations without storing data

Key components:

  • Partition Engine: Distribute keys across index partitions
  • Global Metadata Store: Map keys to physical storage locations
  • Lifecycle Manager: Handle versioning, retention, expiration

3. Storage Layer

Purpose: Physical storage with durability guarantees

Key components:

  • Storage Nodes: Thousands of servers with HDDs/SSDs
  • Erasure Coding: Split data into shards with parity
  • Repair Service: Detect and recover from failures
  • Cross-AZ Distribution: Spread across availability zones

Data Flow: PUT Object

PUT OBJECT FLOW

1. CLIENT REQUEST
   PUT /bucket/key HTTP/1.1
   Content-Length: 1048576
   x-amz-content-sha256: <checksum>

2. FRONT END
   ├── DNS resolves to regional endpoint
   ├── Load balancer routes to API server
   ├── Authenticate request (SigV4)
   └── Validate bucket policy

3. INDEX LAYER
   ├── Hash key to determine partition
   ├── Acquire write lock on key
   └── Generate storage location

4. STORAGE LAYER
   ├── Erasure code the data (e.g., 10 data + 6 parity)
   ├── Write shards to storage nodes across AZs
   ├── Calculate and store checksums
   └── BRACKETING: Verify data can be reconstructed

5. COMMIT
   ├── Update metadata index
   ├── Release write lock
   └── Return 200 OK to client

DURABILITY: Data survives if ANY 10 of 16 shards survive
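The five steps above can be sketched end to end. Everything here is a stand-in (dicts for the index and storage layers, a hypothetical `PutPipeline` class); real S3 decomposes this across hundreds of microservices:

```python
import hashlib

class PutPipeline:
    """Toy end-to-end PUT: index and storage are dicts standing in for
    the metadata and storage layers (all names here are illustrative)."""

    def __init__(self, index: dict, storage: dict):
        self.index = index
        self.storage = storage

    def put_object(self, bucket: str, key: str, data: bytes) -> dict:
        checksum = hashlib.sha256(data).hexdigest()
        # Step 3: index layer chooses a storage location for the key
        location = f"shardset-{int(checksum[:4], 16) % 1024}"
        # Step 4: storage layer persists the data (erasure coding elided)
        self.storage[location] = data
        # Bracketing: verify the stored bytes before acknowledging
        if hashlib.sha256(self.storage[location]).hexdigest() != checksum:
            raise IOError("bracketing verification failed")
        # Step 5: commit metadata only after the data is durable
        self.index[(bucket, key)] = {"location": location,
                                     "checksum": checksum,
                                     "size": len(data)}
        return {"status": 200, "etag": checksum}

index, storage = {}, {}
resp = PutPipeline(index, storage).put_object("b", "k", b"hello")
print(resp["status"])  # 200
```

The ordering is the important part: the 200 OK is returned only after the data is verifiably durable and the metadata commit has landed.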

Phase 4: Deep Dives (20 minutes)

Interviewer: "Let's dive into how you achieve 11 nines of durability."


Deep Dive 1: Erasure Coding for Durability (Week 1 Concepts)

The Problem

DURABILITY CHALLENGE

Hard drive failure rates:
  - Annual failure rate: 1-5% per drive
  - At S3 scale: Thousands of drives fail daily
  - Must lose ZERO customer data

Simple replication (3x):
  - Store 3 copies across AZs
  - Problem: 3x storage overhead
  - Problem: All 3 could fail (rare but possible)

Need: High durability with lower storage overhead

The Solution: Erasure Coding

┌─────────────────────────────────────────────────────────────────────────┐
│                      ERASURE CODING ARCHITECTURE                        │
│                                                                         │
│   ORIGINAL DATA (1 MB object)                                           │
│   ┌──────────────────────────────────────────────────────────────┐      │
│   │                        1 MB                                  │      │
│   └──────────────────────────────────────────────────────────────┘      │
│                               │                                         │
│                               ▼                                         │
│   ERASURE ENCODE (Reed-Solomon 10+6)                                    │
│   ┌──────────────────────────────────────────────────────────────────┐  │
│   │  Split into 10 data shards + compute 6 parity shards             │  │
│   └──────────────────────────────────────────────────────────────────┘  │
│                               │                                         │
│                               ▼                                         │
│   16 SHARDS (~100 KB each)                                              │
│   ┌────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┬────┐
│   │ D1 │ D2 │ D3 │ D4 │ D5 │ D6 │ D7 │ D8 │ D9 │D10 │ P1 │ P2 │ P3 │ P4 │ P5 │ P6 │
│   └────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┴────┘
│                               │                                         │
│                               ▼                                         │
│   DISTRIBUTE ACROSS AZs AND DRIVES                                      │
│                                                                         │
│   AZ-1 (drives A-E)    AZ-2 (drives F-J)    AZ-3 (drives K-P)           │
│   ┌────┬────┬────┐    ┌────┬────┬────┐    ┌────┬────┬────┐              │
│   │ D1 │ D4 │ D7 │    │ D2 │ D5 │ D8 │    │ D3 │ D6 │ D9 │              │
│   │ P1 │ P4 │D10 │    │ P2 │ P5 │    │    │ P3 │ P6 │    │              │
│   └────┴────┴────┘    └────┴────┴────┘    └────┴────┴────┘              │
│                                                                         │
│   RECOVERY: Can reconstruct from ANY 10 of 16 shards                    │
│   Can lose up to 6 shards (or entire AZ) and still recover              │
│                                                                         │
│   OVERHEAD: 1.6x storage (vs 3x for replication)                        │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Implementation

# storage/erasure_coding.py

"""
Erasure Coding for 11 Nines Durability

Uses Reed-Solomon coding to achieve extreme durability
with only 1.6x storage overhead (vs 3x for replication).
"""

from dataclasses import dataclass
from typing import List, Tuple
import hashlib


class DurabilityError(Exception):
    """Raised when shards cannot be durably stored or reconstructed."""


@dataclass
class Shard:
    """A single shard of erasure-coded data."""
    shard_id: int
    data: bytes
    checksum: str
    storage_node: str
    availability_zone: str


@dataclass
class ErasureConfig:
    """Configuration for erasure coding."""
    data_shards: int = 10      # K: minimum shards to reconstruct
    parity_shards: int = 6     # M: redundant shards
    # Total = 16 shards, can lose up to 6


class ErasureCodingService:
    """
    Erasure coding service for S3 durability.
    
    Uses Reed-Solomon (10, 6) coding:
    - Split data into 10 data shards
    - Compute 6 parity shards
    - Can reconstruct from any 10 of 16 shards
    
    Applies concepts:
    - Week 1: Replication and partitioning
    - Week 1: Data durability strategies
    """
    
    def __init__(self, config: ErasureConfig, storage_nodes: List):
        self.config = config
        self.storage_nodes = storage_nodes
        self.total_shards = config.data_shards + config.parity_shards
    
    async def encode_and_store(
        self, 
        object_key: str, 
        data: bytes
    ) -> List[Shard]:
        """
        Encode data and distribute shards across AZs.
        
        Steps:
        1. Calculate checksum of original data
        2. Split into data shards
        3. Compute parity shards
        4. Distribute across AZs (ensuring AZ diversity)
        5. Verify (bracketing) before confirming
        """
        # Step 1: Original checksum
        original_checksum = self._compute_checksum(data)
        
        # Step 2: Split into data shards
        data_shards = self._split_data(data, self.config.data_shards)
        
        # Step 3: Compute parity shards
        parity_shards = self._compute_parity(data_shards)
        
        # Step 4: Select storage nodes (AZ-aware)
        all_shards = data_shards + parity_shards
        shard_placements = self._select_nodes(len(all_shards))
        
        # Step 5: Write shards to storage
        stored_shards = []
        for i, (shard_data, (node, az)) in enumerate(
            zip(all_shards, shard_placements)
        ):
            checksum = self._compute_checksum(shard_data)
            shard = Shard(
                shard_id=i,
                data=shard_data,
                checksum=checksum,
                storage_node=node,
                availability_zone=az
            )
            await self._write_shard(shard)
            stored_shards.append(shard)
        
        # Step 6: BRACKETING - verify we can reconstruct
        if not await self._verify_reconstruction(
            stored_shards, original_checksum
        ):
            raise DurabilityError("Bracketing verification failed")
        
        return stored_shards
    
    async def read_and_decode(
        self, 
        shards: List[Shard]
    ) -> bytes:
        """
        Read shards and reconstruct original data.
        
        Can reconstruct from any K (10) of N (16) shards.
        """
        # Read available shards
        available = []
        for shard in shards:
            try:
                data = await self._read_shard(shard)
                if self._verify_checksum(data, shard.checksum):
                    available.append((shard.shard_id, data))
            except Exception:
                continue  # Shard unavailable
        
        # Need at least K shards
        if len(available) < self.config.data_shards:
            raise DurabilityError(
                f"Only {len(available)} shards available, "
                f"need {self.config.data_shards}"
            )
        
        # Reconstruct using Reed-Solomon
        return self._reconstruct(available)
    
    def _select_nodes(self, count: int) -> List[Tuple[str, str]]:
        """
        Select storage nodes ensuring AZ diversity.
        
        Critical: No more than ceil(count/3) shards per AZ
        This ensures we can lose an entire AZ and still reconstruct.
        """
        placements = []
        az_counts = {}
        max_per_az = (count + 2) // 3  # Ceiling division
        
        for node in self.storage_nodes:
            az = node.availability_zone
            if az_counts.get(az, 0) < max_per_az:
                placements.append((node.id, az))
                az_counts[az] = az_counts.get(az, 0) + 1
            if len(placements) == count:
                break
        
        return placements
    
    async def _verify_reconstruction(
        self, 
        shards: List[Shard], 
        expected_checksum: str
    ) -> bool:
        """
        Bracketing: Verify data can be reconstructed.
        
        This is critical - we don't confirm write success
        until we've proven the data is recoverable.
        """
        try:
            reconstructed = await self.read_and_decode(shards)
            actual_checksum = self._compute_checksum(reconstructed)
            return actual_checksum == expected_checksum
        except Exception:
            return False
    
    def _compute_checksum(self, data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

Deep Dive 2: Strong Consistency (Week 5 Concepts)

Interviewer: "S3 moved from eventual to strong consistency in 2020. How?"

The Problem

EVENTUAL CONSISTENCY ISSUES (pre-2020)

Scenario 1: Read-after-write
  PUT /bucket/key → 200 OK
  GET /bucket/key → 404 Not Found  (stale!)

Scenario 2: Read-after-update
  PUT /bucket/key v1 → 200 OK
  PUT /bucket/key v2 → 200 OK
  GET /bucket/key → returns v1  (stale!)

Why it happened:
  - Metadata replicated asynchronously
  - Different nodes might see different versions
  - LIST operations even worse

Impact:
  - Data lakes: Missing files in queries
  - Analytics: Incorrect results
  - Required workarounds: Sleep, external databases

The Solution: Strong Consistency

┌─────────────────────────────────────────────────────────────────────────┐
│                    STRONG CONSISTENCY ARCHITECTURE                      │
│                                                                         │
│   KEY INSIGHT: Synchronous metadata replication                         │
│                                                                         │
│   PUT FLOW (with strong consistency)                                    │
│   ──────────────────────────────────                                    │
│                                                                         │
│   1. Client sends PUT                                                   │
│      │                                                                  │
│      ▼                                                                  │
│   2. Write data to storage layer                                        │
│      │                                                                  │
│      ▼                                                                  │
│   3. Update metadata (SYNCHRONOUSLY)                                    │
│      ├── Write to primary metadata node                                 │
│      ├── Replicate to ALL metadata replicas                             │
│      └── Wait for acknowledgment from majority                          │
│      │                                                                  │
│      ▼                                                                  │
│   4. Return 200 OK only after metadata is consistent                    │
│                                                                         │
│   GET FLOW (after strong consistency)                                   │
│   ───────────────────────────────────                                   │
│                                                                         │
│   1. Read from ANY metadata node                                        │
│      - Every read reflects the latest acknowledged write                │
│      - No more stale reads                                              │
│                                                                         │
│   TRADE-OFFS                                                            │
│   ──────────                                                            │
│   Before: Higher throughput, eventual consistency                       │
│   After:  Same throughput(!), strong consistency                        │
│                                                                         │
│   AWS achieved this through:                                            │
│   - Optimized network between AZs                                       │
│   - Pipelining metadata updates                                         │
│   - Reducing partition risk to near-zero                                │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Implementation

# metadata/consistency.py

"""
Strong Consistency for S3 Metadata

Ensures read-after-write consistency for all operations.
"""

from dataclasses import dataclass
from typing import Optional, List
from datetime import datetime
import asyncio


@dataclass
class MetadataEntry:
    """Object metadata entry."""
    bucket: str
    key: str
    version_id: str
    etag: str
    size: int
    storage_class: str
    last_modified: datetime
    shard_locations: List[str]


class StrongConsistencyMetadataStore:
    """
    Metadata store with strong read-after-write consistency.
    
    Key mechanism: Synchronous replication to all replicas
    before acknowledging write success.
    
    Applies concepts:
    - Week 5: Strong vs eventual consistency
    - Week 5: Consensus protocols
    """
    
    def __init__(self, replicas: List, quorum_size: int):
        self.replicas = replicas
        self.quorum_size = quorum_size  # Majority
    
    async def put_metadata(
        self, 
        entry: MetadataEntry
    ) -> bool:
        """
        Store metadata with strong consistency.
        
        Only returns success after majority of replicas
        have durably stored the metadata.
        """
        # Generate monotonic version
        entry.version_id = self._generate_version_id()
        
        # Write to all replicas in parallel
        write_tasks = [
            self._write_to_replica(replica, entry)
            for replica in self.replicas
        ]
        
        # Wait for quorum (majority)
        results = await asyncio.gather(
            *write_tasks, 
            return_exceptions=True
        )
        
        successes = sum(1 for r in results if r is True)
        
        if successes >= self.quorum_size:
            # Quorum achieved - write is durable and consistent
            return True
        else:
            # Rollback on failure
            await self._rollback(entry)
            return False
    
    async def get_metadata(
        self, 
        bucket: str, 
        key: str
    ) -> Optional[MetadataEntry]:
        """
        Read metadata - guaranteed to return latest version.
        
        With strong consistency, we can read from any replica.
        All replicas are guaranteed to have the latest data.
        """
        # Read from nearest replica (any will do)
        replica = self._select_nearest_replica()
        
        entry = await self._read_from_replica(replica, bucket, key)
        
        return entry
    
    async def list_objects(
        self, 
        bucket: str, 
        prefix: str
    ) -> List[MetadataEntry]:
        """
        List objects - strongly consistent.
        
        After a PUT, the object is immediately visible in LIST.
        """
        replica = self._select_nearest_replica()
        
        # Read from partition responsible for this prefix
        partition = self._get_partition(bucket, prefix)
        
        return await self._list_from_partition(replica, partition, prefix)
    
    def _generate_version_id(self) -> str:
        """Generate a monotonically increasing version ID."""
        # Nanosecond timestamp provides ordering; a random suffix breaks
        # ties between writes in the same nanosecond.
        import time
        import uuid
        return f"{time.time_ns():020d}-{uuid.uuid4().hex[:8]}"
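
The quorum logic in `put_metadata` can be exercised end to end with in-memory replicas. A minimal sketch (the replica callables are stand-ins, not the store's real interface): one replica crashes, but the write still succeeds because a majority acknowledges.

```python
import asyncio

async def write_quorum(replicas, entry, quorum):
    """Write to all replicas in parallel; succeed once a quorum acks.

    Each replica is a hypothetical async callable. Exceptions count as
    failures, mirroring the return_exceptions handling above.
    """
    results = await asyncio.gather(
        *(replica(entry) for replica in replicas),
        return_exceptions=True,
    )
    acks = sum(1 for r in results if r is True)
    return acks >= quorum

async def healthy(entry):
    return True

async def crashed(entry):
    raise ConnectionError("replica down")

# 2 of 3 replicas ack -> quorum of 2 is met, write is durable
assert asyncio.run(write_quorum([healthy, healthy, crashed], {"k": "v"}, 2))
# Only 1 ack -> no quorum, caller must roll back
assert not asyncio.run(write_quorum([healthy, crashed, crashed], {"k": "v"}, 2))
```

This is why reads can then go to any replica: a majority write intersects every majority read.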

Deep Dive 3: Index Partitioning (Week 1 Concepts)

Interviewer: "How do you index 500 trillion objects?"

The Problem

INDEXING 500 TRILLION OBJECTS

Challenge:
  - Cannot fit in single index (way too large)
  - Cannot scan linearly (too slow)
  - Must support prefix queries
  - Must handle hot prefixes

Requirements:
  - O(1) lookups by key
  - O(n) prefix scans (where n = matching keys)
  - Automatic scaling
  - No hotspots

The Solution: Key Partitioning

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                    INDEX PARTITIONING ARCHITECTURE                      β”‚
β”‚                                                                         β”‚
β”‚   OBJECT KEY: s3://my-bucket/2024/logs/server1/app.log                  β”‚
β”‚                                                                         β”‚
β”‚   PARTITIONING STRATEGY                                                 β”‚
β”‚   ─────────────────────                                                 β”‚
β”‚                                                                         β”‚
β”‚   1. Hash bucket name β†’ Bucket partition                                β”‚
β”‚      hash("my-bucket") β†’ Partition 42                                   β”‚
β”‚                                                                         β”‚
β”‚   2. Within bucket, partition by key prefix                             β”‚
β”‚      "2024/logs" β†’ Sub-partition 7                                      β”‚
β”‚                                                                         β”‚
β”‚   3. Keys sorted within partition for efficient prefix scans            β”‚
β”‚                                                                         β”‚
β”‚   PARTITION STRUCTURE                                                   β”‚
β”‚   ───────────────────                                                   β”‚
β”‚                                                                         β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”‚
β”‚   β”‚                     BUCKET PARTITION 42                         β”‚   β”‚
β”‚   β”‚                                                                 β”‚   β”‚
β”‚   β”‚  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”          β”‚   β”‚
β”‚   β”‚  β”‚ Sub-part 1    β”‚ β”‚ Sub-part 2    β”‚ β”‚ Sub-part 7    β”‚   ...    β”‚   β”‚
β”‚   β”‚  β”‚ prefix: ""    β”‚ β”‚ prefix: "a"   β”‚ β”‚ prefix:"2024" β”‚          β”‚   β”‚
β”‚   β”‚  β”‚               β”‚ β”‚               β”‚ β”‚               β”‚          β”‚   β”‚
β”‚   β”‚  β”‚ key1 β†’ loc    β”‚ β”‚ a/b β†’ loc     β”‚ β”‚ 2024/logs/ β†’  β”‚          β”‚   β”‚
β”‚   β”‚  β”‚ key2 β†’ loc    β”‚ β”‚ a/c β†’ loc     β”‚ β”‚ 2024/data/ β†’  β”‚          β”‚   β”‚
β”‚   β”‚  β”‚ ...           β”‚ β”‚ ...           β”‚ β”‚ ...           β”‚          β”‚   β”‚
β”‚   β”‚  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜          β”‚   β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜   β”‚
β”‚                                                                         β”‚
β”‚   DYNAMIC PARTITIONING                                                  β”‚
β”‚   ────────────────────                                                  β”‚
β”‚   - S3 monitors request rates per partition                             β”‚
β”‚   - If partition exceeds 5,500 GET/sec, split it                        β”‚
β”‚   - Splits happen automatically, transparently                          β”‚
β”‚   - No limit on number of partitions                                    β”‚
β”‚                                                                         β”‚
β”‚   HOT PREFIX HANDLING                                                   β”‚
β”‚   ───────────────────                                                   β”‚
β”‚   Problem: All keys start with "2024/" β†’ hot partition                  β”‚
β”‚   Solution: Add randomized prefix                                       β”‚
β”‚                                                                         β”‚
β”‚   Instead of:  2024/log-001.txt                                         β”‚
β”‚   Use:         a7b3/2024/log-001.txt                                    β”‚
β”‚                                                                         β”‚
β”‚   Distributes load across many partitions                               β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
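
Clients can generate the randomized prefix deterministically from the logical key, so reads can recompute the full key without a lookup table. A minimal client-side sketch (`spread_key` is a hypothetical helper, not an S3 API):

```python
import hashlib

def spread_key(key: str, fanout_chars: int = 4) -> str:
    """Prepend a short hash-derived prefix so sequential keys like
    2024/log-001.txt land on different index partitions.

    Deterministic: the full object key can be recomputed from the
    logical key at read time. MD5 is used for spreading, not security.
    """
    digest = hashlib.md5(key.encode(), usedforsecurity=False).hexdigest()
    return f"{digest[:fanout_chars]}/{key}"

print(spread_key("2024/log-001.txt"))  # e.g. "a7b3/2024/log-001.txt" shape
```

Four hex characters give 65,536 possible prefixes, far more than enough partitions to absorb a write-heavy workload.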

Implementation

# index/partition_manager.py

"""
Index Partitioning for S3

Dynamically partitions index to handle 500+ trillion objects.
"""

from dataclasses import dataclass
from typing import List, Optional, Dict
import hashlib


@dataclass
class Partition:
    """An index partition."""
    partition_id: str
    prefix_range: tuple  # (start_prefix, end_prefix)
    request_rate: float
    node_assignment: str


class PartitionManager:
    """
    Manages index partitions for S3.
    
    Key features:
    - Dynamic splitting based on load
    - Prefix-based range partitioning
    - Automatic rebalancing
    
    Applies concepts:
    - Week 1: Hash vs range partitioning
    - Week 1: Hot key handling
    """
    
    # Thresholds for partition operations
    SPLIT_THRESHOLD_RPS = 5500  # GET requests/second
    MIN_PARTITION_SIZE = 1000   # Objects
    
    def __init__(self, partition_store):
        self.partition_store = partition_store
    
    def get_partition_for_key(
        self, 
        bucket: str, 
        key: str
    ) -> Partition:
        """
        Find the partition responsible for a key.
        
        Uses hierarchical lookup:
        1. Hash bucket to bucket partition
        2. Binary search within bucket for key prefix
        """
        # Level 1: Bucket partition
        bucket_hash = self._hash_bucket(bucket)
        bucket_partition = self._get_bucket_partition(bucket_hash)
        
        # Level 2: Key prefix partition
        key_partition = self._find_key_partition(bucket_partition, key)
        
        return key_partition
    
    async def check_and_split(
        self, 
        partition: Partition
    ) -> List[Partition]:
        """
        Check if partition needs splitting due to high load.
        
        Triggered when request rate exceeds threshold.
        """
        if partition.request_rate < self.SPLIT_THRESHOLD_RPS:
            return [partition]  # No split needed
        
        # Split partition in half by prefix
        start, end = partition.prefix_range
        mid = self._find_split_point(partition)
        
        new_partitions = [
            Partition(
                partition_id=f"{partition.partition_id}-a",
                prefix_range=(start, mid),
                request_rate=partition.request_rate / 2,
                node_assignment=self._select_node()
            ),
            Partition(
                partition_id=f"{partition.partition_id}-b",
                prefix_range=(mid, end),
                request_rate=partition.request_rate / 2,
                node_assignment=self._select_node()
            )
        ]
        
        # Atomic partition update
        await self.partition_store.atomic_split(partition, new_partitions)
        
        return new_partitions
    
    def _hash_bucket(self, bucket: str) -> int:
        """Hash bucket name to partition number."""
        return int(hashlib.md5(bucket.encode()).hexdigest(), 16) % 10000
    
    def _find_split_point(self, partition: Partition) -> str:
        """
        Find optimal split point for partition.
        
        Analyzes key distribution to split evenly.
        """
        # Sample keys in partition
        keys = self.partition_store.sample_keys(partition, n=1000)
        
        # Find median prefix
        keys.sort()
        median_key = keys[len(keys) // 2]
        
        # Extract common prefix
        return self._extract_prefix(median_key)
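
The median-based split above can be demonstrated standalone on a sampled key list. A minimal sketch, returning the median key itself rather than calling the hypothetical `_extract_prefix` helper:

```python
def find_split_point(sampled_keys):
    """Split a partition at the median of sampled keys so that each
    half receives roughly equal load (mirrors _find_split_point)."""
    keys = sorted(sampled_keys)
    return keys[len(keys) // 2]

keys = [f"2024/logs/server{i}/app.log" for i in range(10)]
mid = find_split_point(keys)
left = [k for k in keys if k < mid]
right = [k for k in keys if k >= mid]
assert abs(len(left) - len(right)) <= 1  # halves are balanced
```

Sampling keeps the split decision cheap: with ~1,000 samples the median is close enough to balance traffic without scanning the whole partition.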

Deep Dive 4: Storage Classes and Lifecycle (Week 4 Concepts)

Interviewer: "How do storage classes work for cost optimization?"

Storage Classes

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                       S3 STORAGE CLASSES                                β”‚
β”‚                                                                         β”‚
β”‚   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚
β”‚   β”‚ Storage Class   β”‚ Durability      β”‚ Availability    β”‚ Use Case     β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Standard     β”‚ 99.999999999%   β”‚ 99.99%          β”‚ Frequent     β”‚β”‚
β”‚   β”‚                 β”‚ (11 nines)      β”‚                 β”‚ access       β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Express      β”‚ 99.999999999%   β”‚ 99.95%          β”‚ Single-digit β”‚β”‚
β”‚   β”‚ One Zone        β”‚                 β”‚ (1 AZ)          β”‚ ms latency   β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Standard-IA  β”‚ 99.999999999%   β”‚ 99.9%           β”‚ Infrequent   β”‚β”‚
β”‚   β”‚                 β”‚                 β”‚                 β”‚ access       β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 One Zone-IA  β”‚ 99.999999999%   β”‚ 99.5%           β”‚ Recreatable  β”‚β”‚
β”‚   β”‚                 β”‚ (in 1 AZ)       β”‚ (1 AZ)          β”‚ data         β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Intelligent  β”‚ 99.999999999%   β”‚ 99.9%           β”‚ Unknown      β”‚β”‚
β”‚   β”‚ Tiering         β”‚                 β”‚                 β”‚ access       β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Glacier      β”‚ 99.999999999%   β”‚ 99.99%          β”‚ Archive      β”‚β”‚
β”‚   β”‚ Instant         β”‚                 β”‚                 β”‚ (ms access)  β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Glacier      β”‚ 99.999999999%   β”‚ 99.99%          β”‚ Archive      β”‚β”‚
β”‚   β”‚ Flexible        β”‚                 β”‚                 β”‚ (min-hours)  β”‚β”‚
β”‚   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚   β”‚ S3 Glacier Deep β”‚ 99.999999999%   β”‚ 99.99%          β”‚ Archive      β”‚β”‚
β”‚   β”‚ Archive         β”‚                 β”‚                 β”‚ (12+ hours)  β”‚β”‚
β”‚   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚
β”‚                                                                         β”‚
β”‚   COST COMPARISON (per GB/month, us-east-1, 2025)                       β”‚
β”‚   ─────────────────────────────────────────────                         β”‚
β”‚   S3 Standard:           $0.023                                         β”‚
β”‚   S3 Standard-IA:        $0.0125                                        β”‚
β”‚   S3 Glacier Instant:    $0.004                                         β”‚
β”‚   S3 Glacier Deep:       $0.00099                                       β”‚
β”‚                                                                         β”‚
β”‚   Glacier Deep is 23x cheaper than Standard!                            β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
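
The cost spread is easy to check from the table's own prices. A quick arithmetic sketch (storage cost only; retrieval and request fees are ignored):

```python
# Per-GB/month prices from the table above (us-east-1, 2025)
prices = {
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "GLACIER_INSTANT": 0.004,
    "DEEP_ARCHIVE": 0.00099,
}

tib_gb = 1024  # 1 TiB of data, in GB-equivalents for pricing

for cls, per_gb in prices.items():
    print(f"{cls:16s} ${per_gb * tib_gb:8.2f}/month for 1 TiB")

ratio = prices["STANDARD"] / prices["DEEP_ARCHIVE"]
print(f"Deep Archive is {ratio:.1f}x cheaper than Standard")  # ~23x
```

Note the caveat: Glacier tiers add retrieval charges and minimum storage durations, so the 23x figure holds only for data that is genuinely cold.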

Lifecycle Policies

# lifecycle/policy_engine.py

"""
S3 Lifecycle Policy Engine

Automatically transitions objects between storage classes
based on age and access patterns.
"""

from dataclasses import dataclass
from typing import List, Optional
from datetime import datetime, timedelta
from enum import Enum


class StorageClass(Enum):
    STANDARD = "STANDARD"
    STANDARD_IA = "STANDARD_IA"
    INTELLIGENT_TIERING = "INTELLIGENT_TIERING"
    GLACIER_INSTANT = "GLACIER_IR"
    GLACIER_FLEXIBLE = "GLACIER"
    GLACIER_DEEP = "DEEP_ARCHIVE"


@dataclass
class LifecycleRule:
    """A lifecycle transition rule."""
    rule_id: str
    prefix_filter: Optional[str]
    tag_filter: Optional[dict]
    transitions: List[dict]  # [{days: 30, storage_class: "STANDARD_IA"}]
    expiration_days: Optional[int]


class LifecyclePolicyEngine:
    """
    Evaluates and applies lifecycle policies to objects.
    
    Common pattern:
    1. Standard for 30 days (frequent access)
    2. Standard-IA for 60 days (occasional access)
    3. Glacier for 1 year (rare access)
    4. Delete after 7 years (compliance)
    
    Applies concepts:
    - Week 4: Cache tiering strategies
    - Week 8: Data lifecycle management
    """
    
    def __init__(self, metadata_store, storage_service):
        self.metadata = metadata_store
        self.storage = storage_service
    
    async def evaluate_object(
        self, 
        bucket: str, 
        key: str, 
        rules: List[LifecycleRule]
    ) -> "StorageClass | str | None":
        """
        Determine whether the object should transition.

        Returns the target StorageClass, the sentinel string "DELETE"
        for expiration, or None if no action is needed.
        """
        obj_metadata = await self.metadata.get(bucket, key)
        
        for rule in rules:
            if not self._matches_filter(obj_metadata, rule):
                continue
            
            age_days = (datetime.utcnow() - obj_metadata.last_modified).days
            
            # Check expiration
            if rule.expiration_days and age_days >= rule.expiration_days:
                return "DELETE"
            
            # Check transitions (in reverse order - most aggressive first)
            for transition in sorted(
                rule.transitions, 
                key=lambda t: t['days'], 
                reverse=True
            ):
                if age_days >= transition['days']:
                    target_class = StorageClass(transition['storage_class'])
                    if target_class != obj_metadata.storage_class:
                        return target_class
        
        return None  # No transition needed
    
    async def apply_transition(
        self, 
        bucket: str, 
        key: str,
        target_class: StorageClass
    ) -> bool:
        """
        Transition object to new storage class.
        
        For Glacier classes, this involves:
        1. Reading object from current storage
        2. Re-encoding for archive (more aggressive erasure coding)
        3. Writing to archive storage
        4. Updating metadata
        """
        if target_class == "DELETE":
            return await self._delete_object(bucket, key)
        
        # Get current object location
        current_shards = await self.metadata.get_shard_locations(bucket, key)
        
        # Transition to new storage class
        new_shards = await self.storage.transition(
            current_shards, 
            target_class
        )
        
        # Update metadata atomically
        await self.metadata.update_storage_class(
            bucket, key, target_class, new_shards
        )
        
        return True
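
The "common pattern" from the engine's docstring can be evaluated without any stores. A minimal, synchronous sketch of the same decision logic (expiration first, then the most aggressive applicable transition):

```python
# The common lifecycle pattern, as plain transition data
transitions = [
    {"days": 30, "storage_class": "STANDARD_IA"},
    {"days": 365, "storage_class": "GLACIER"},
]
expiration_days = 7 * 365  # compliance: delete after 7 years

def decide(age_days: int):
    """Pick the action for an object of the given age: expiration wins,
    otherwise the largest threshold the object has passed applies."""
    if age_days >= expiration_days:
        return "DELETE"
    for t in sorted(transitions, key=lambda t: t["days"], reverse=True):
        if age_days >= t["days"]:
            return t["storage_class"]
    return None  # stays in STANDARD

assert decide(10) is None
assert decide(45) == "STANDARD_IA"
assert decide(400) == "GLACIER"
assert decide(3000) == "DELETE"
```

Sorting transitions in descending order matters: a 400-day-old object has passed both thresholds, and should land in Glacier, not Standard-IA.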

Phase 5: Scaling and Edge Cases

Scaling Strategies

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                       SCALING STRATEGIES                                β”‚
β”‚                                                                         β”‚
β”‚   AUTOMATIC PARTITION SCALING                                           β”‚
β”‚   ───────────────────────────                                           β”‚
β”‚   - S3 monitors request rates per prefix                                β”‚
β”‚   - Automatically splits hot partitions                                 β”‚
β”‚   - No manual intervention needed                                       β”‚
β”‚   - Transparent to applications                                         β”‚
β”‚                                                                         β”‚
β”‚   WORKLOAD DECORRELATION                                                β”‚
β”‚   ──────────────────────                                                β”‚
β”‚   At S3's scale, individual workload spikes average out:                β”‚
β”‚   - Millions of customers                                               β”‚
β”‚   - Trillions of objects                                                β”‚
β”‚   - Most data is idle most of the time                                  β”‚
β”‚   - Hot spots become statistically rare                                 β”‚
β”‚                                                                         β”‚
β”‚   BEST PRACTICES FOR CLIENTS                                            β”‚
β”‚   ──────────────────────────                                            β”‚
β”‚   1. Use randomized prefixes for high-throughput workloads              β”‚
β”‚      Bad:  logs/2024-01-01/event-001.json                               β”‚
β”‚      Good: a7b3c1/logs/2024-01-01/event-001.json                        β”‚
β”‚                                                                         β”‚
β”‚   2. Implement exponential backoff for 503 errors                       β”‚
β”‚      - Start with 100ms delay                                           β”‚
β”‚      - Double on each retry                                             β”‚
β”‚      - Max 5 retries                                                    β”‚
β”‚                                                                         β”‚
β”‚   3. Use multipart upload for large objects                             β”‚
β”‚      - Parallel part uploads                                            β”‚
β”‚      - Retry individual parts on failure                                β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
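
Best practice 2 can be sketched directly: start at 100ms, double each retry, cap at 5 retries. A minimal client-side sketch (`request` is a hypothetical callable returning an HTTP status code); jitter is added because synchronized retries from many clients would re-create the spike:

```python
import random
import time

def with_backoff(request, max_retries=5, base_delay_s=0.1):
    """Retry on 503 throttling with exponential backoff, per the
    guidance above. Gives up after max_retries retries."""
    delay = base_delay_s
    for attempt in range(max_retries + 1):
        status = request()
        if status != 503:
            return status
        if attempt == max_retries:
            break
        time.sleep(delay * random.uniform(0.5, 1.0))  # jittered wait
        delay *= 2
    raise RuntimeError("gave up after repeated 503s")

# Simulated endpoint that throttles twice, then succeeds
responses = iter([503, 503, 200])
assert with_backoff(lambda: next(responses)) == 200
```

With these parameters the worst-case total wait is roughly 0.1 + 0.2 + 0.4 + 0.8 + 1.6 β‰ˆ 3.1 seconds before giving up.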

Edge Cases

EDGE CASE 1: Entire AZ Failure

Scenario: Entire availability zone loses power
Impact: One AZ's share of shards (up to 6 of 16) unavailable

Solution:
  - Erasure coding allows reconstruction from any 10 of 16 shards
  - Even with 6 shards in the failed AZ, data is recoverable
  - Repair service immediately starts re-replicating
  - No data loss, brief availability impact


EDGE CASE 2: Correlated Disk Failures

Scenario: Heat wave causes accelerated disk failures
Impact: Higher than normal failure rate

Solution:
  - Continuous health monitoring
  - Proactive migration from stressed disks
  - Spare capacity on all disks for fast recovery
  - Parallel re-replication across thousands of drives


EDGE CASE 3: Hot Object

Scenario: Single object gets millions of requests
Impact: Could overwhelm storage nodes

Solution:
  - Caching at front end layer
  - Object replicated to more nodes automatically
  - CloudFront integration for CDN caching
  - Request throttling with 503 errors + backoff


EDGE CASE 4: Concurrent Writes to Same Key

Scenario: Two clients PUT same key simultaneously
Impact: Which version wins?

Solution:
  - Last-writer-wins semantics
  - Timestamps determine winner
  - Versioning available to keep all versions
  - Conditional writes (If-Match) for optimistic locking
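
Last-writer-wins resolution can be made deterministic with a tie-breaker, so every replica picks the same winner. A minimal sketch (the `WriteAttempt` fields are illustrative, not S3's internal record format):

```python
from dataclasses import dataclass

@dataclass
class WriteAttempt:
    """A candidate version of an object (illustrative fields)."""
    client_id: str
    timestamp_ms: int
    etag: str

def resolve_lww(a: WriteAttempt, b: WriteAttempt) -> WriteAttempt:
    """Last-writer-wins: the later timestamp wins; exact-timestamp ties
    are broken deterministically (here by etag) so that independent
    replicas all converge on the same version."""
    if a.timestamp_ms != b.timestamp_ms:
        return a if a.timestamp_ms > b.timestamp_ms else b
    return a if a.etag > b.etag else b

w1 = WriteAttempt("client-1", 1_700_000_000_000, "aaa")
w2 = WriteAttempt("client-2", 1_700_000_000_500, "bbb")
assert resolve_lww(w1, w2) is w2  # later write wins
```

Applications that cannot tolerate a silently lost write should use versioning (keep both) or conditional writes (reject the loser) instead of relying on LWW.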

Phase 6: Monitoring and Operations

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                       S3 MONITORING DASHBOARD                            β”‚
β”‚                                                                          β”‚
β”‚   DURABILITY HEALTH                                                      β”‚
β”‚   β”œβ”€β”€ Shard integrity:           100%          [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ]              β”‚
β”‚   β”œβ”€β”€ Objects with all shards:   99.9999999%   [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ]              β”‚
β”‚   β”œβ”€β”€ Repair queue depth:        12,345        [β–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘]              β”‚
β”‚   └── Repair throughput:         1.2M/hour     [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘]              β”‚
β”‚                                                                          β”‚
β”‚   AVAILABILITY                                                           β”‚
β”‚   β”œβ”€β”€ Request success rate:      99.99%        [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ]              β”‚
β”‚   β”œβ”€β”€ 503 rate:                  0.001%        [β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘]              β”‚
β”‚   β”œβ”€β”€ GET latency (p50):         45ms          [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘]              β”‚
β”‚   └── PUT latency (p50):         85ms          [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘]              β”‚
β”‚                                                                          β”‚
β”‚   STORAGE                                                                β”‚
β”‚   β”œβ”€β”€ Total objects:             512T          [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ]              β”‚
β”‚   β”œβ”€β”€ Total storage:             287 EB        [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘]              β”‚
β”‚   β”œβ”€β”€ Daily growth:              +2.1 EB       [β–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘]              β”‚
β”‚   └── Disk utilization:          67%           [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘]              β”‚
β”‚                                                                          β”‚
β”‚   REQUEST PATTERNS                                                       β”‚
β”‚   β”œβ”€β”€ GET requests/sec:          187M          [β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‘]              β”‚
β”‚   β”œβ”€β”€ PUT requests/sec:          23M           [β–ˆβ–ˆβ–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘]              β”‚
β”‚   β”œβ”€β”€ LIST requests/sec:         8M            [β–ˆβ–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘]              β”‚
β”‚   └── Hot partitions:            142           [β–ˆβ–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘β–‘]              β”‚
β”‚                                                                          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Alerting Strategy

CRITICAL (PagerDuty):
  - Durability below 99.9999999%
  - Any data loss detected
  - Region-wide availability < 99.9%
  - Repair queue > 1M objects

WARNING (Slack):
  - 503 rate > 0.01%
  - Disk utilization > 80%
  - Hot partition count > 500
  - Latency p99 > 500ms

INFO (Dashboard):
  - Daily storage growth
  - Traffic patterns
  - Storage class distribution

Interview Conclusion

Interviewer: "Excellent work. You've covered durability, consistency, and scaling really well. Any questions?"

You: "Thank you! I'm curious about the 2020 strong consistency change. How did AWS achieve it without sacrificing performance?"

Interviewer: "Great question. They invested heavily in their inter-AZ network to make synchronous replication nearly as fast as async. Combined with optimizations to their metadata layer, they essentially eliminated the trade-off."


Summary: Concepts Applied

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚            CONCEPTS FROM 10-WEEK COURSE IN S3 DESIGN                    β”‚
β”‚                                                                         β”‚
β”‚   WEEK 1: DATA AT SCALE                                                 β”‚
β”‚   β”œβ”€β”€ Partitioning: Key-based index partitioning                        β”‚
β”‚   β”œβ”€β”€ Replication: Multi-AZ data distribution                           β”‚
β”‚   └── Hot keys: Automatic partition splitting                           β”‚
β”‚                                                                         β”‚
β”‚   WEEK 2: FAILURE-FIRST DESIGN                                          β”‚
β”‚   β”œβ”€β”€ Erasure coding: Survive up to 6 shard failures                    β”‚
β”‚   β”œβ”€β”€ Checksums: Detect and repair corruption                           β”‚
β”‚   └── Bracketing: Verify before confirming write                        β”‚
β”‚                                                                         β”‚
β”‚   WEEK 4: CACHING                                                       β”‚
β”‚   β”œβ”€β”€ Storage tiers: Hot, warm, cold, frozen                            β”‚
β”‚   └── Lifecycle: Automatic tier transitions                             β”‚
β”‚                                                                         β”‚
β”‚   WEEK 5: CONSISTENCY                                                   β”‚
β”‚   β”œβ”€β”€ Strong consistency: Read-after-write                              β”‚
β”‚   β”œβ”€β”€ Synchronous replication: All replicas before ACK                  β”‚
β”‚   └── Conflict resolution: Last-writer-wins                             β”‚
β”‚                                                                         β”‚
β”‚   WEEK 10: PRODUCTION READINESS                                         β”‚
β”‚   β”œβ”€β”€ 11 nines durability SLA                                           β”‚
β”‚   β”œβ”€β”€ Continuous repair monitoring                                      β”‚
β”‚   └── Automatic scaling without intervention                            β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                  WHY S3 IS AN ENGINEERING MARVEL                        β”‚
β”‚                                                                         β”‚
β”‚   DURABILITY                                                            β”‚
β”‚   ──────────                                                            β”‚
β”‚   β€’ 11 nines (99.999999999%)                                            β”‚
β”‚   β€’ 1 object lost per 10 million over 10,000 years                      β”‚
β”‚   β€’ Achieved with only 1.6x storage overhead                            β”‚
β”‚                                                                         β”‚
β”‚   SCALE                                                                 β”‚
β”‚   ─────                                                                 β”‚
β”‚   β€’ 500+ trillion objects                                               β”‚
β”‚   β€’ Hundreds of exabytes                                                β”‚
β”‚   β€’ 200+ million requests/second                                        β”‚
β”‚                                                                         β”‚
β”‚   CONSISTENCY                                                           β”‚
β”‚   ───────────                                                           β”‚
β”‚   β€’ Strong read-after-write since 2020                                  β”‚
β”‚   β€’ No stale reads                                                      β”‚
β”‚   β€’ No performance penalty                                              β”‚
β”‚                                                                         β”‚
β”‚   INNOVATION                                                            β”‚
β”‚   ──────────                                                            β”‚
β”‚   β€’ First cloud storage service (2006)                                  β”‚
β”‚   β€’ Defied CAP theorem at scale                                         β”‚
β”‚   β€’ 9 storage classes for every use case                                β”‚
β”‚   β€’ Foundation for AWS ecosystem                                        β”‚
β”‚                                                                         β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
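The durability and overhead figures in the box fit together. A hypothetical erasure-coding layout consistent with both numbers is 10-of-16: each object is split across 16 shards, any 10 of which can reconstruct it, giving 16/10 = 1.6x overhead and tolerance of 6 shard losses. A rough binomial estimate of loss probability (the per-window shard failure rate is an assumption):

```python
# Rough durability math for a hypothetical 10-of-16 erasure code: an
# object is lost only if MORE than 6 of its 16 shards fail within one
# repair window, assuming independent shard failures.
from math import comb

def loss_probability(n: int, k: int, p: float) -> float:
    """P(object lost) = P(more than n-k of n shards fail in one window)."""
    max_tolerated = n - k  # 6 survivable failures for n=16, k=10
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(max_tolerated + 1, n + 1))

n, k = 16, 10   # 16 shards, any 10 reconstruct -> 1.6x storage overhead
p = 1e-3        # assumed shard-failure probability per repair window

print(f"storage overhead: {n / k:.1f}x")
print(f"loss probability per window: {loss_probability(n, k, p):.2e}")
```

The key insight: because 7 near-simultaneous failures must coincide, the loss probability scales like p^7, which is how modest per-disk reliability compounds into 11-nines durability while replication would need 3x or more overhead for less.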


Self-Assessment Checklist

After studying this case study, you should be able to:

  • Explain erasure coding and why it achieves higher durability per byte of overhead than replication
  • Calculate durability math (11 nines)
  • Design an index partitioning scheme for trillions of objects
  • Explain how S3 achieved strong consistency without sacrificing performance
  • Describe bracketing and why it's critical for durability
  • Design lifecycle policies for cost optimization
  • Handle hot prefixes through randomization
  • Implement exponential backoff for 503 errors
  • Explain the trade-offs between different storage classes
  • Design for correlated failures (AZ failure, disk batch failure)
  • Monitor durability and availability at scale
  • Articulate why S3 is considered the "greatest cloud service"

This case study covers AWS S3, the world's most durable object storage system, demonstrating how to achieve 11 nines of durability with strong consistency at planetary scale.