Algorithms, Data Structures, System Design and Interview Questions
- 1.1 Related pages
- 2 Non-algorithm-specific interview advice
- 3 Algorithms
- 3.1 A step-by-step process for solving algorithm questions
- 3.2 How to study
- 3.3 Data structures
- 3.3.1 Questions
- 3.4 Learning Resources
- 3.4.1 Algorithms Unlocked
- 3.5 Topics
- 3.5.1 Linked Lists
- 3.5.2 Databases
- 3.5.3 Dynamic Programming (DP)
- 3.5.4 Heaps
- 3.5.5 Recursion
- 3.5.6 Trees
- 3.5.6.1 Binary Trees
- 3.5.7 The interview process
- 3.6 Flashcards
- 4 System Design
- 5 Resumes
Related pages
My overall strategy on this page is to 1) summarize information I find around the web, and then 2) try to weave those summaries into unified processes for handling DSA questions, system design questions, API design questions, resume formatting, etc.
Non-algorithm-specific interview advice
Algorithms
A step-by-step process for solving algorithm questions
Understand that it is critical that you think out loud for the steps that follow.
Sources:
Why I Fail Candidates During Google Interviews.
The next issue that most people have is that they're a little too quiet, and I get it. A lot of us try to code on our own late at night without talking to anyone, but when you're in the workforce, you have to show how well you can communicate with others. The best way to do that is by thinking out loud. So when you're writing your code, definitely express what you're thinking line by line as you're coding it out.
Clarify the question with the interviewer.
Be sure you understand the limits of the input.
Define some of your own test cases.
Be sure you understand what the output should be for the test cases you define.
Sources:
Why I Fail Candidates During Google Interviews.
The biggest mistake that most candidates make is not clarifying the question. I think people are a little too eager to start coding, and they end up solving the wrong problem or forgetting a few test cases. The simple thing that most people can do to start clarifying the problem is just to ask questions about it, understand what the limits of the problem are, define their own test cases, and understand what the input should be for the expected output.
“What the input should be for the expected output”? Huh? Did he mean “what the output should be for a given input”?
Draw out a simple visual example.
Do it in something like Balsamiq.
Copy-paste it to visually show what the step-by-step process should look like.
Think about what variables you want to track at each step in order to calculate the desired answer.
Ask yourself: what features of the state at each ‘step’ through the input can you assign names to? Write the code needed to calculate/update those values with each iteration (or leave them as stub functions to fill in later); that alone can be enough of a step in the right direction for the rest of the solution to become clear.
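For example, here’s what that naming exercise might look like on LeetCode’s “Best Time to Buy and Sell Stock” (the problem choice and variable names are my own illustration, not from the video):

    # Two named features of the state at each step through the input:
    def max_profit(prices):
        lowest_price_so_far = float('inf')
        best_profit_so_far = 0
        for price in prices:
            # Update each named value with every iteration:
            lowest_price_so_far = min(lowest_price_so_far, price)
            best_profit_so_far = max(best_profit_so_far, price - lowest_price_so_far)
        return best_profit_so_far

    print(max_profit([7, 1, 5, 3, 6, 4]))  # 5 (buy at 1, sell at 6)

Once the two names exist, the update logic almost writes itself.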
Step through the code with a simple example.
2024.09.14 - I’m thinking it may make sense to keep track of variables in a separate comment block, keeping track of the call stack / execution contexts as well. So basically doing manually what JetBrains does automatically.
One thing I’m noticing is that if the comment block with the variables sits above the code, I can’t easily reference line numbers in the code, because adding new lines to the comment block changes the code’s line numbers. So it seems better to keep the comment block below the code.
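A minimal sketch of what I mean, using a toy recursive function (the trace format is my own improvisation):

    def factorial(n):
        if n <= 1:
            return 1
        return n * factorial(n - 1)

    print(factorial(3))  # 6

    # Manual trace of the call stack / execution contexts (kept below the
    # code so it doesn't shift the code's line numbers):
    # factorial(3)      n=3  -> needs 3 * factorial(2)
    #   factorial(2)    n=2  -> needs 2 * factorial(1)
    #     factorial(1)  n=1  -> returns 1
    #   factorial(2)    returns 2
    # factorial(3)      returns 6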
After you write out your code, step through it line-by-line with at least one of the test cases.
Sources:
Why I Fail Candidates During Google Interviews.
The other most common mistake that I've seen other people make is assuming that their code works, and what I mean by this is that after they go through the entire interview, they're just so glad to write down all the code that they forget to test if it actually works or does what it's supposed to do. So what candidates should be doing here is to actually go through their code line by line against one of the test cases that they outlined at the beginning of the interview. If you do that, then it's not just assuming that it works, it actually does work, and it shows the interviewer that you can really communicate your code and what's going on.
Explain the running-time of your code by walking through it and explaining the running-time of each area (see the sketch after the quote below).
Sources:
Why I Fail Candidates During Google Interviews.
The next topic I want to talk about is that people tend to struggle a little bit with the running time. What I mean by this is that most people feel like they have to just blurt out the running time or answer it like it's a multiple-choice exam or some other homework question. When someone asks you what the running time is, people don't want you to just say it's linear or n squared or whatever. They really want you to explain different areas of your code and what the running time is for those areas. If you can do that, you'll stand out a lot more than people who are just treating it like another exam question.
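For example (my own illustration of how I’d narrate running time area-by-area; the problem choice is mine, not the video’s):

    def two_sum_sorted(nums, target):
        nums.sort()                  # "This sort is O(n log n)..."
        lo, hi = 0, len(nums) - 1
        while lo < hi:               # "...this loop is O(n), since each
            s = nums[lo] + nums[hi]  # iteration moves one pointer inward..."
            if s == target:
                return [nums[lo], nums[hi]]
            elif s < target:
                lo += 1
            else:
                hi -= 1
        return None
    # "...so the sort dominates and the whole thing is O(n log n)."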
How to study
Memorize how to work with basic data structures (CRUD and search operations).
lists, dicts, sets, deques
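A quick sketch of the kinds of operations I mean (which operations are worth memorizing is my own judgment):

    from collections import deque

    nums = [3, 1]            # list
    nums.append(4)           # add to end: O(1)
    nums.insert(0, 9)        # insert at index: O(n)
    nums.remove(1)           # delete by value: O(n)
    found = 4 in nums        # search: O(n)

    ages = {'alice': 30}     # dict
    ages['bob'] = 25         # add/update: O(1) average
    del ages['alice']        # delete: O(1) average
    has_bob = 'bob' in ages  # search by key: O(1) average

    seen = {3, 1}            # set
    seen.add(4)              # add: O(1) average
    seen.discard(1)          # delete (no error if absent)
    has_four = 4 in seen     # search: O(1) average

    q = deque([3, 1])        # deque
    q.append(4)              # add to right: O(1)
    q.appendleft(9)          # add to left: O(1)
    first = q.popleft()      # remove from left: O(1)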
Memorize some basic algorithms that are used as building blocks in many LeetCode questions.
Bookmark the links below in a Chrome bookmark folder so you can quickly open them all every day (right-click the folder and select “Open all”):
Every day, learn one of the basic algorithms listed in this pastebin if you haven’t already learned all of them.
Start from the top and work your way down.
To “learn” an algorithm:
Read the pastebin until you feel like you understand how the algorithm works.
Write out the algorithm on online-python.com with the pastebin open in a separate window (have the two windows open side-by-side).
I recommend turning off their autocompletion feature as I find it more of a hindrance than a help.
Set a timer for 60 minutes and then see if you can write out the algorithm on online-python.com from memory.
If you can’t, just keep trying that day until you can. Adjust the timer duration as necessary.
Every day, review two of the basic algorithms you’ve already learned.
Write out the algorithm, including a test case, from memory on online-python.com
You should be able to do this review in ~5 minutes.
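To make that concrete, here’s what one such from-memory write-out might look like for binary search (my guess at a representative building block; I’m not reproducing the pastebin’s actual list):

    def binary_search(nums, target):
        lo, hi = 0, len(nums) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if nums[mid] == target:
                return mid
            elif nums[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    # The test case, included as part of the review:
    assert binary_search([1, 3, 5, 7, 9], 7) == 3
    assert binary_search([1, 3, 5, 7, 9], 4) == -1
    print('passed')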
Memorize the NeetCode 150.
I put all of the Neetcode 150 in Anki, separated by question type.
Creating Replit versions:
If I can’t access the question on Leetcode (it’s a premium question) or the Lintcode version (or if the Lintcode version is too different from the Leetcode question), I can create a Replit version in here.
Make the ‘template’ version of the question; then, when you actually want to do the question, use that same Replit, and when you’re done you can just use the git feature to discard all of the changes you made to the code.
Thought: I want to have a way to separately review a high-level description of how to solve a problem vs. reviewing the writing out of the actual solution. Like, I want to be able to do 10 high-level solutions per day (where I just have to remember a plain-English sentence or two about how to solve a question) and 1 written-out solution (where I actually have to write out the solution and run it to make sure it works). I think this is a good way to more-frequently refresh my memory of solutions without spending too much time per day. I want the whole thing to take ~15 minutes.
Thought: It’s sometimes not clear when I “know” a solution. Like, I solved a linked list question, but then I saw in the LeetCode explanation that there was a more-efficient solution. So when do I review the card? In a day, to see if I remember the efficient solution? Or in 6 months, when I might have forgotten my less-efficient solution?
2024.08.27 - What I’m doing now is aiming to spend an hour per day on the NeetCode 150, where my goal is to memorize the approach used to solve the question rather than try to “figure out” the question on my own. So I will spend a few minutes trying to remember the approach to solve the question; if I can’t, then I just look at the LeetCode Editorial (both explanation and code) and set the due date of the card for 7-14 days from now to see if I can remember the approach at that time. Once I can remember the approach, I then see if I can write out the code. So I’m using the same Anki card to test my memory of two different things: 1) whether I can remember a plain-English description of the approach for solving the question, and 2) whether I can remember both the plain-English description and also how to write out the code.
If I end up with extra time from solving a question quickly, I’ll keep going with more NeetCode 150 Anki cards until the hour is up.
2024.08.29 - I think the way to do this is:
For each card, just read the editorials of the different approaches and set the card for a day or two out and just make sure you can remember the various different approaches. Interviewers apparently want to see that you can identify the different possible approaches to a problem.
Once you can identify the different approaches, set the card for maybe 7-14 days out to see if you can actually write out the ideal approach (and maybe one of the unideal approaches?).
2024.08.31 - If you’re limited on time on a certain day, try to make at least 20 minutes of progress so you don’t lose momentum. Just watch the LC Editorial video for a question, or just watch the Neetcode video and try to summarize what you see, or just skim the explanations and try to fill out the Anki card with a summary of what you read. Or just read the explanation of one of the possible approaches in-depth and try to summarize it.
Example: I’m taking several days to work on the LRU Cache LC question.
The first day I just opened the editorial, skimmed it, and wrote out on the Anki card “1. doubly-linked list. TODO: fill this out” and “2. use built-in data structures like OrderedDict(). You need to know how to use the .move_to_end() and .popitem() methods”.
The second day I skimmed the explanation of the first approach in greater detail so that I could fill out a summary on the Anki card.
My idea is that on a third day I’ll skim the code of one or both explanations and then try to write it out from memory afterwards (immediately afterwards? after 30 minutes? I’m not sure yet).
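For reference, a sketch of that second approach as I’d write it out (the OrderedDict idea is from the card above; the exact code is mine, and the LeetCode editorial’s version may differ in details):

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.cache = OrderedDict()

        def get(self, key):
            if key not in self.cache:
                return -1
            self.cache.move_to_end(key)  # mark as most-recently used
            return self.cache[key]

        def put(self, key, value):
            if key in self.cache:
                self.cache.move_to_end(key)
            self.cache[key] = value
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least-recently used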
2024.09.06 - I had a good idea today: sometimes I’ll solve a question using the ‘normal’ approach but still feel like the card is worth reviewing to test myself on an alternative way of solving the question. So today I just made a note of what solution I wanted to try next time in the answer section of Anki, and made a similar note in the ‘Question’ section so that the next time I see the card I’ll remember what my goal is.
2024.09.07
When you’re first learning a question, you can use Anki to just test your ability to read the prompt and identify the major issues / tricky things about the question, and the possible approaches for dealing with them. Don’t even bother trying to code the solution until you’ve internalized that much.
I’m also thinking it might make sense to skip the Hards until you’ve first gone through and become comfortable with the Mediums in every topic. I think I’m going to skip the Linked List Hards until I get through the rest of the questions first.
I’m also thinking I should maybe add all the Easy questions to the list of daily practice problems that I randomly choose two from to write out.
2024.09.10 - I’m working on a binary tree LC question (LC 102), and while trying to step through an example it’s occurring to me that I don’t have a clear system for writing out the state of the variables as I step through the code while keeping the comments off to the side. For now I’ve decided to keep track of the state of variables where they’re defined, and if there’s a .pop from a structure, to add a comment pointing up to indicate that I should look above for the state. It makes me realize there should really be a standardized / best-practice way of doing this, but it’s not something I see even being discussed online.
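Here’s roughly what I mean, using LC 102 (this comment convention is my own experiment, not an established practice):

    from collections import deque

    class TreeNode:
        def __init__(self, val, left=None, right=None):
            self.val, self.left, self.right = val, left, right

    def level_order(root):
        result = []            # [] -> [[3]] -> [[3],[9,20]] -> [[3],[9,20],[15,7]]
        queue = deque([root])  # [3] -> [9,20] -> [15,7] -> []  (state per level)
        while queue:
            level = []
            for _ in range(len(queue)):
                node = queue.popleft()  # ^ see queue's definition above for state
                level.append(node.val)
                if node.left:
                    queue.append(node.left)
                if node.right:
                    queue.append(node.right)
            result.append(level)
        return result

    tree = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
    print(level_order(tree))  # [[3], [9, 20], [15, 7]]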
2024.09.10
When I first see a new problem, I don’t have it as a goal to write out the code. I just want to spend literally a few minutes thinking about it to see if I can guess what the correct approach is, and then I look at the LC Editorial / NC video to see what the correct approach is. I think this is a much nicer way to do things, much less likely to waste your time. Previously I would spend hours trying to figure it out on my own. You don’t want to waste time going in the wrong direction.
I’m looking at LC 572, supposedly an “Easy”, and the latter two solutions in LC are both closer to being Hard questions. So I guess I shouldn’t worry about memorizing every possible solution to a question as I move through questions. Maybe set it as a later goal.
Try solving questions on LeetCode that are labeled as being asked by the company you’re trying to apply to.
Data structures
Questions
What is the threshold at which it's a good idea to create a class for a data structure?
Learning Resources
Algorithms-Specific Informational Websites (not programming challenge websites)
Nick Parlante - Pointers, Binary Trees, etc.
Very good explanations
OK explanations
Books
Look good:
The Algorithm Design Manual by Skiena
Rec'd by Adam D'Angelo in a Quora post.
Essential Algorithms - 5 stars on O'Reilly
Working with Algorithms in Python - O'Reilly video series - 4.3 stars - Same guy who wrote 'Algorithms in a Nutshell'
Maybe good:
Meh:
Articles
Interview Challenge sites
Front-end interview challenge / prep sites
Interview Prep sites
rec'd by Choketsu
Codility
Rec'd by Toptal
rec'd by Choketsu
via Tom A.
Non-Interview-Prep Challenge sites
Rec'd by Toptal
Someone who works with machine learning explaining how Kaggle is a simplification of real ML work:
Jason B Hill's Project Euler solutions
Rec'd by Toptal, Chris Uga
rec'd by Choketsu
How to get started training at the USACO Gateway:
What is it like to attend the USACO training camp?
It's only a week
New people just get 3 hours of lab/lecture in the morning
The top guys do a contest every day. Sometimes the contest goes until after lunch, otherwise they just review after lunch. (Seems like 5 hrs/day)
Then everyone has fun for the rest of the day
If one is stuck on a particular section, is it worthwhile to keep on trying to solve the given problem, or should one move on to other sources in preparation for programming contests?
Richard Peng, one of the most successful Canadian IOI competitors of all time, on the USACO training pages:
"USACO training was put together before the major IOI escalation (Finland/Korea/Wisconsin). A lot of the techniques described on it are no longer useful on *OI and a lot of the 'hot' topics over the past few years are not covered. Also, a lot of the bottle necks on training are quite meaningless, and they typically cause a lot of frustration and time waste on the scale of months."
Algorithms Unlocked
Summary
1. What Are Algorithms and Why Should You Care?
2. How to Describe and Evaluate Computer Algorithms
3. Algorithms for Sorting and Searching
4. A Lower Bound for Sorting and How to Beat It
5. Directed Acyclic Graphs
6. Shortest Paths
7. Algorithms on Strings
8. Foundations of Cryptography
9. Data Compression
10. Hard? Problems
Topics
Linked Lists
Databases
Dynamic Programming (DP)
https://activities.tjhsst.edu/sct/lectures/0405/dp1.pdf
"Dynamic programming, often abbreviated "DP", is a technique to efficiently solve recursion problems -- problems whose solutions depend on solutions of smaller problems -- by storing partial results. Many times, it is obvious that a given problem is a recursion problem. However, it may not be so obvious that using DP will tremendously speed up the solution. We start by looking at an example."
Heaps
Recursion
Trees
Binary Trees
The interview process
http://seldo.com/weblog/2014/08/26/you_suck_at_technical_interviews
http://blog.triplebyte.com/how-to-pass-a-programming-interview
https://medium.com/@evnowandforever/f-you-i-quit-hiring-is-broken-bb8f3a48d324#.jhferyy53
Flashcards
Here is an explanation of how to programmatically create Anki deck packages: http://decks.wikia.com/wiki/Anki_APKG_format_documentation
Here's an example of people collaborating on an Anki deck: http://decks.wikia.com/wiki/Main_Page
Javascript problem manager: https://github.com/imaginate/algorithmIV-question-manager
System Design
Step-by-step process
Clarify the task.
Share what you know about the system with the interviewer.
If you’re not familiar with the system, tell the interviewer so they can explain how it should work.
Constrain the problem to make it solvable within the hour.
List the different metrics you could consider tracking / focusing on and explain the one you think we should focus on and why.
Total users, DAU, WAU, MAU. Maybe daily minutes per active user.
Uber: driver ratings, trip completion rates, and acceptance rates, number of rides completed per day, number of cancelled rides, number of rider complaints raised, average loading time, ratio of daily active users to monthly active users, number of daily rides booked per daily active user, average revenue per user, average wait time
Clarify the scale of usage, the amount of data involved, the rate of requests.
Specify some key metrics to help your high-level decision-making.
Example: the number of users, the amount of data.
Try to determine the number of requests per day and divide by 100,000 (roughly the number of seconds per day) to get the average requests per second. Then take a guess at the peak factor (how many times the average requests per second you might see in peak traffic) and at how many requests per second a typical server might handle (say, 1,000), and use those to determine how many servers you’ll need.
It doesn’t have to be perfect, the idea is to see that you’re aware of these ideas.
If you’re going to be using auto-scaling servers, that’s fine.
(Source)
You don’t necessarily have to go into the details of scaling at this point.
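Worked through with the numbers used in the “10 Key Principles” video further down this page (the values are illustrative; the reasoning is the point):

    requests_per_day = 1_000_000_000
    avg_rps = requests_per_day / 100_000        # ~100,000 sec/day -> 10,000 req/s
    peak_factor = 5                             # guess: peak traffic is 5x average
    peak_rps = avg_rps * peak_factor            # 50,000 req/s
    rps_per_server = 1_000                      # guess at a typical server's capacity
    servers_needed = peak_rps / rps_per_server
    print(servers_needed)                       # ~50 servers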
Design the system at a high level.
At an early stage, it’s OK to have a few simple components; later on you can get more complex.
Example:
If asked to design Spotify, you could draw a mobile app connected to a load balancer which connects to a group of webservers, which connect to the database.
Outline your database design.
After laying out your components, design your database.
Split out databases appropriate for different types of data.
In general, try to explain why you’re doing something upfront rather than waiting for your interviewer to ask you.
Go through a normal use-case and explain/flesh out how it will work in greater detail.
Consider how the webservers will retrieve data from the db and how they’ll send it to the end-user; will they use websockets?
If you have more time, think about how things could “go wrong” in terms of scaling.
Examples:
When designing Spotify, some songs might be requested by users for listening far more than others.
Solutions:
A CDN
Learning resources
Abbott & Fisher
APIs: A Strategy Guide: Creating Channels with Application Programming Interfaces
The Art of Capacity Planning
Written by the manager of data operations for Flickr.com
First edition: The Art of Capacity Planning: Scaling Web Resources
Second edition: The Art of Capacity Planning: Scaling Web Resources in the Cloud
ByteByteGo / Alex Xu
https://www.designgurus.io/ - Grokking the System Design Interview (online course)
https://www.amazon.com/Design-Web-APIs-Arnaud-Lauret/dp/1617295108/
https://www.amazon.com/Designing-Swagger-OpenAPI-Joshua-Ponelat/dp/1617296287/
https://www.amazon.com/Designing-Distributed-Systems-Patterns-Paradigms/dp/1491983647/
https://www.amazon.com/Designing-Web-APIs-Building-Developers/dp/1492026921/
microservice-api-patterns.org - This looks really interesting.
NeetCode system design courses - https://neetcode.io/courses
https://drive.google.com/file/d/1U7EchvgzCjTtF5JzGVVCFgjXC-qw7lIG/view?pli=1 ← They send you a link to this ebook when you sign up for their newsletter.
https://www.amazon.com/REST-Design-Rulebook-Mark-Masse/dp/1449310508/
https://www.algoexpert.io/systems/product
Negative review: https://www.reddit.com/r/cscareerquestions/comments/s0e54f/warning_systems_expert_by_the_algoexpert_guy_is/
“modules are very light on information, and you'd have a better time just reading the wikipedia page on those topics.”
“topics aren't well explained and it's just an extremely high level overview of what they are”
Udemy: https://www.udemy.com/courses/search/?src=ukw&q=system+design
https://www.amazon.com/Scalability-Startup-Engineers-Artur-Ejsmont/dp/0071843655
API Design Patterns
Summary
PART 1: INTRODUCTION
1 Introduction to APIs
2 Introduction to API design patterns
PART 2: DESIGN PRINCIPLES
3 Naming
4 Resource scope and hierarchy
5 Data types and defaults
PART 3: FUNDAMENTALS
6 Resource identification
7 Standard methods
8 Partial updates and retrievals
9 Custom methods
10 Long-running operations
11 Rerunnable jobs
PART 4: RESOURCE RELATIONSHIPS
12 Singleton sub-resources
13 Cross references
14 Association resources
15 Add and remove custom methods
16 Polymorphism
PART 5: COLLECTIVE OPERATIONS
17 Copy and move
18 Batch operations
19 Criteria-based deletion
20 Anonymous writes
21 Pagination
22 Filtering
23 Importing and exporting
PART 6: SAFETY AND SECURITY
24 Versioning and compatibility
25 Soft deletion
26 Request deduplication
27 Request validation
28 Resource revisions
29 Request retrial
30 Request authentication
Designing APIs with Swagger and OpenAPI
Summary
PART 1 DESCRIBING APIS
1 Introducing APIs and OpenAPI
2 Getting set up to make API requests
3 Our first taste of OpenAPI definitions
4 Using Swagger Editor to write OpenAPI definitions
5 Describing API responses
6 Creating resources
7 Adding authentication and authorization
8 Preparing and hosting API documentation
PART 2 DESIGN-FIRST
9 Designing a web application
10 Creating an API design using OpenAPI
11 Building a change workflow around API design–first
12 Implementing frontend code and reacting to changes
13 Building a backend with Node.js and Swagger Codegen
14 Integrating and releasing the web application
PART 3 EXTENDING APIS
15 Designing the next API iteration
16 Designing schemas with composition in OpenAPI
17 Scaling collection endpoints with filters and pagination
18 Supporting the unhappy path: Error handling with problem+json
19 Improving input validation with advanced JSON Schema
20 Versioning an API and handling breaking changes
21 The API prerelease checklist
Designing Data-Intensive Applications
IGotAnOffer
2023.03.10 - Design Spotify (with ex-Google EM)
Summary
He goes through the various use cases of Spotify he can think of, then asks which one he should focus on, which the interviewer limits to “finding and playing music”.
He asks how many users and how many songs the interviewer wants to support, then goes through some math to try to estimate how much data they’ll need to store, but he never ends up using that information later on.
He draws out a high-level diagram of the system: a phone connects to a load balancer which connects to a bunch of web servers which connect to a database.
He goes deeper on the database and explains that they’ll want to have a relational db for user metadata (AWS RDS) and blob storage for the songs (AWS S3). He then lists some example fields you might want to use for the song table in the relational db.
23:05 - He discusses how to send the audio data to the user: he initially suggests using websockets to send chunks to the webserver, then decides that for only 5 MB it would be “almost instantaneous” to send data from S3 to the webservers, which would remove any lag from hitting the database. So that makes me wonder whether the S3 bucket would be at the same datacenter as the webservers and what kind of bandwidth you can get between them.
I’m interested in knowing the process for determining whether to chunk between the EC2 instance and the S3 bucket.
25:37 - After creating his relatively-simple design for Spotify, he asks himself, “What could go wrong here?” and realizes that certain songs/artists could be way more popular than others. To avoid wasting bandwidth, he recommends using a CDN (a cache).
33:00 - He’s talking about other types of caches: a cache layer between the db and the webservers (redis?), a local cache within each webserver, and then a cache within the phone app to store frequently-played songs.
35:20 - More on the load balancer - Normally load balancers strive to equalize the CPU load across servers, but for this app he might focus more on I/O and/or RAM usage; he’d want to make it smarter than a normal round-robin system. [NW: Can a load balancer dynamically divert traffic like that? That’s not how I thought they worked.]
38:20 - You may want to locate your replicas closer to the geographies where the data is more likely to be used.
Comments
Like many others I was designing my own version alongside Mark's and I think the area he was a little weaker in (which he himself admits) was load balancing. My background is systems administration so I may have a different perspective on this. I think going back a few steps, chunking the data also serves an important role in the load balancing process. I would have songs chunked, from every retrieval source, so that as soon as the user presses Play, the song begins playing, and playback should always be an instantaneous process unless the servers are over capacity, which can occur because some song or album has gone viral.
I would structure the web server load balancing so that client apps attempt to contact the server geographically closest to them first and utilize GSLB (global site load balancing) which combines layer 4 and layer 7 load balancing, as I/O capacity or concurrent connections (the two metrics I would prioritize) reach a threshold.
Again, when talking about load balancing, it's important to determine what happens when maximum capacity on all servers is exceeded. When this happens in my design, the system will issue "tickets" for data chunks, served in the order they arrive in. This is where song chunking comes into play. Because we are chunking the MP3 data, we can still serve the first chunk of the song from the nearest available server as soon as that I/O is available, further ensuring near-instantaneous playback upon pressing the Play button. The rest of the song then has some time to download and cache to the client device, reducing the number of interruptions and pauses in playback due to bandwidth and concurrent connection overages.
2023.06.30 - Design TikTok (with ex-Google EM)
He just straight up asks the interviewer what use case he wants him to focus on without describing anything about how the app works. The interviewer says he wants him to ignore how the app itself works and instead focus on the backend that handles uploading and downloading videos. He clarifies that the interviewer doesn’t want him to spend any time on how people will create accounts etc.
He asks the interviewer what scale / usage to build for and he’s told 1B users in 150 countries, 1B views per day, 10B videos uploaded in a year.
He asks the interviewer what kind of success metrics we should be aiming for and is asked to come up with them. He comes up with “time in app”.
He asks how much time we should expect an average active user to spend in the app, the interviewer says an hour.
System Design Interviews: 10 Key Principles (with ex-Google EM)
Communicate efficiently.
The interview is definitely an artificially time-constrained environment.
So you need to be efficient. You don’t want the interviewer wondering what you’re thinking. You want the interviewer to be able to follow your thinking.
They work with a lot of engineers who understand the technical part but struggle to clearly communicate what they’re thinking.
Scope the problem.
Real systems (like YouTube) are too complicated to fully design in a one-hour interview.
So you need to ask, “Which part of the app/feature-set should I focus on?”
You need to go along with what the interviewer wants to focus on.
Start drawing ~15 minutes in
This isn’t a hard number, it’s just a guideline.
If you start drawing too soon, you may not fully understand what the interviewer wants you to focus on, and start going down a wrong path.
If you wait too long, you may not have enough time to come up with a working solution.
The interviewer says that in one mock the Google EM didn’t start until 20 minutes in and would’ve been struggling to finish.
Start with a simple design
Don’t add requirements that don’t exist.
Example: Adding tiered storage could prevent you from having a complete solution by the end of the interview.
You can tell the interviewer that a particular optimization is available and how you’d use it, but don’t let it stop you.
If the interviewer asks you to go deeper on a subject, follow their lead.
Use prep resources
GitHub has a system design study guide, for example.
You might see some differences across guides
Properly understand the problem
Talk through specific use cases.
Imagine actually calling the API.
Try to catch assumptions you may have made.
Check in with the interviewer to see if you’re on the right path.
He mentions that the doc used for a full spec is called a “PRD” (Product Requirements Document).
Practice, practice, practice!
You can watch and read a lot of material and think you understand it, but still not be able to perform when it comes time to do it for real.
Practicing will also make you relax when you have a real interview.
If you don’t have another person or coach, just record yourself.
The English guy said his mom pointed out he touches his hair and face too much.
Don’t underestimate how much time it’ll take for a mock interview; if you want enough time for back-and-forth, you may want to block out 2 hours. 10 minutes for feedback is pretty tight.
Explain your thinking
Example: Don’t just say “I’m going to use a database”, or “I’m going to use a MySQL/relational db”; explain why you’re choosing the particular technology you’re using. “The data is pretty well structured, I’ll need multiple tables, do queries across the data (join tables)”.
You don’t want to come across like you’re repeating memorized solutions without understanding the pros and cons of the technology.
The interviewer wants to see how clearly you can communicate.
It can be fine if the interviewer doesn’t agree with your choice.
Get comfortable with the math.
They want to see that you can make some assumptions and do some really basic back-of-the-envelope math.
Make sure the interviewer can follow your math.
If you have a billion queries a day, with ~100,000 seconds in a day you’re dealing with ~10,000 requests per second.
I want to hear from you that you know about peaks (in traffic), that you’re aware of it. So a peak factor of 5 means you could hit 50,000 requests per second.
If a server can handle 1,000 requests per second, that means we need ~50 servers.
Get comfortable with the units for larger amounts of data (terabytes, petabytes, exabytes), and show familiarity with them.
Don’t get bits and bytes confused. Network bandwidth is measured in bits per second, data is measured in bytes. You can keep it simple by estimating 10 bits per byte instead of 8.
Using a calculator is fine.
Use the drawing tool efficiently.
Be aware of what drawing tool they’ll want you to use.
You might be able to negotiate with the recruiter about what tool you use.
You want to be able to draw things quickly.
You need to know how to draw three things:
Boxes
Arrows
Text
Colors are not important, perfectly-drawn arrows are not important; the text just needs to be good enough that you can refer to it.
It just needs to be clear what you’re pointing at when you have your mouse over something.
You don’t want the interviewer to have to keep track of stuff in their head.
It’s totally fine to change your diagram.
NeetCode
2023.03.11 - NeetCode - 20 System Design Concepts Explained in 10 Minutes
Summary
Vertical Scaling - Adding more RAM or CPU to your server(s)
Horizontal Scaling - Adding more servers that replicate your app. This also adds fault tolerance, but it’s much more complicated.
Load Balancer - A server known as a reverse proxy that routes traffic to your different app servers. To choose which server to send a request to, you could use round robin, hash the request ID, or route to the nearest location if you have servers located around the world.
CDN - A network of servers located around the world. They take files on your servers and copy them to the CDN servers. This can be done on a push or pull basis.
Caching - Creating copies of data so it can be re-fetched faster in the future. Browsers cache data from the internet on disk, but disk reads are slow, so they’ll also cache in memory; memory reads are still slow relative to the CPU, so the computer copies a subset of the data into the L1 / L2 / L3 CPU caches.
IP Address - How computers communicate.
TCP / IP - The Internet Protocol Suite. Also includes UDP. These are the rules that decide how we send data on the internet. Files are broken down into packets, the packets are numbered so they can be re-assembled. HTTP and Websockets are built on top of TCP / IP.
DNS - A decentralized service that translates a domain name to an IP address. Your computer will cache the A record so it doesn’t need to look up the IP address for every request.
HTTP - Why do we use HTTP to view websites? TCP is too low-level. HTTP is an “application-layer protocol” built on top of it. It follows the client-server model: a client initiates a request, which includes two parts: a request header, which is like the shipping label on a package, and a request body, which is like the package’s contents. There are multiple API patterns we can follow while using HTTP: REST, GraphQL, gRPC.
REST - A standardization aimed at making APIs stateless (i.e. a request can be diverted to any of numerous duplicated app servers). Includes conventions like: a successful request gets a 200 response, a bad request gets a 400, and a server error gets a 500.
GraphQL - An API pattern introduced by Facebook in 2015. You can make a “query” and specify exactly which resources and which fields you want. This reduces the number of API requests needed and the amount of unnecessary data sent.
gRPC - An RPC (remote procedure call) API framework introduced by Google for server-to-server communication. gRPC Web lets you use gRPC from a browser. It has a performance boost over JSON because it converts the data to a compressed form in binary using a library called Protocol Buffers, but it’s not human-readable like JSON.
WebSocket - This is an API pattern helpful for stuff like chat apps where the client should be notified immediately of an update.
SQL - A db is better than storing stuff in plaintext files (like CSVs) because it allows for fast retrieval/querying of data.
ACID compliance - ACID (Atomicity, Consistency, Isolation, Durability) - Durability means the data will be there if the machine shuts down. Isolation means that different transactions won’t interfere with each other. Atomicity means that each transaction is all-or-nothing; you won’t have a partial transaction if the machine turns off in the middle of writing data. Consistency means that foreign key constraints will be enforced.
NoSQL databases - These dbs dropped the consistency constraint of SQL dbs and the idea of relations to make it easier to scale. Popular types are: key-value stores, document stores, and graph dbs. Not needing to worry about foreign key constraints means we can break up our data across different machines (sharding).
Sharding - This is when you break up your db. You split up the data based on a “shard key”, which can just be the ID of a user.
Replication - A simpler way to scale reads is to make copies of the entire database. When the copies are read-only, it’s called “leader-follower replication”. You can also do “leader-leader” replication but it can lead to inconsistencies, so it’s best to do it where you can ensure a given user will only be interacting with one of them, like if you have them in different regions in the world. It can be complicated to keep them in sync.
CAP theorem - The idea that we have to choose between data consistency and data availability. It’s a somewhat controversial theorem, which led to the development of the PACELC theorem.
Message Queues - Kind of like a db in that they have durable storage and can be replicated / sharded. If your system is receiving more data than it can process, the queue can prevent requests from getting lost/dropped.
In summary: software engineering is all about storing and moving data around.
Scalability Rules
Key things I want to remember for the purposes of system design interviews:
Think about how you could break down an app into different services (e.g. login vs searching vs. completing a cart) hosted in different server pools, or how you could shard the data effectively (for example, maybe first by geography, then by user ID).
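A toy sketch of that two-level split (the region names and shard count are made up):

    NUM_SHARDS_PER_REGION = 4

    def pick_shard(region, user_id):
        # Split by geography first, then by user ID within the region.
        return f"{region}-db-{user_id % NUM_SHARDS_PER_REGION}"

    print(pick_shard("us-east", 12345))  # us-east-db-1
    print(pick_shard("eu-west", 12345))  # eu-west-db-1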
Summary / Notes
Reduce the Equation
Rule 1 - Don’t Overengineer the Solution
Rule 2 - Design Scale into the Solution (D-I-D Process) - Design for 20x capacity, Implement for 3x capacity, Deploy for ~1.5x capacity
Rule 3 - Simplify the Solution 3 Times Over
Rule 4 - Reduce DNS Lookups
Rule 5 - Reduce Objects Where Possible
Rule 6 - Use Homogenous Networks
Distribute Your Work
Rule 7 - Design to Clone Things (X Axis) - Be able to duplicate data/services
Rule 8 - Design to Split Different Things (Y Axis) - You can split up your app / data / servers based on either actions the user/app can take (“services”) or by data the app may need to access (“resources”).
Services examples - signup, login, search, browse, view, add-to-cart, purchase/buy - “The data needed to perform any one of these can vary from the data needed for the others”
Resources examples - product catalog, product inventory, user account information, marketing information
Rule 9 - Design to Split Similar Things (Z Axis) - Set data up into unique and separated shards or swimlanes - Separate the db / servers by things like customer ID, name, geography, etc.
Design to Scale Out Horizontally
Rule 10 - Design Your Solution to Scale Out--Not Just Up
Rule 11 - Use Commodity Systems (Goldfish Not Thoroughbreds)
Rule 12 - Scale Out Your Data Centers
Rule 13 - Design to Leverage the Cloud
Use the Right Tools
Rule 14 - Use Databases Appropriately
Rule 15 - Avoid Using Firewalls - Firewalls cause issues with scalability and availability, so only use them when they significantly reduce risk or are required for compliance.
Rule 16 - Actively Use Log Files
Don’t Duplicate Your Work
Rule 17 - Don’t Check Your Work - Don’t read data you just wrote unless it’s a regulatory / legal requirement, and if so, consider doing it asynchronously.
Rule 18 - Stop Redirecting Traffic
Rule 19 - Relax Temporal Constraints - An example of such a constraint: requiring an item to remain available from when a viewer first views it until they purchase it.
Use Caching Aggressively
Rule 20 - Use CDNs
Rule 21 - Use ‘Expires’ Headers
Rule 22 - Cache Ajax Calls
Rule 23 - Use Page Caches
Rule 24 - Use Application Caches
Rule 25 - Use Object Caches - These are used for data that may be expensive to regenerate, like the result set of complex db queries.
Rule 26 - Put Object Caches on Their Own Servers
They have a diagram of multiple web servers connecting to an application server object cache, which connects to multiple app servers, which connect to a db object cache, which connects to db servers.
They use the word “tier” to mean that these cache servers form another entire row in their diagram.
Learn from Your Mistakes
Rule 27 - Learn Aggressively - Use A/B testing, postmortems.
Rule 28 - Don’t Rely on QA to Find Mistakes
Rule 29 - Failing to Design for Rollback Is Designing for Failure - Practice rolling back in staging or QA.
Rule 30 - Discuss and Learn from Failures - Do postmortems
Database Rules
Rule 31 - Be Aware of Costly Relationships
Rule 32 - Use the Right Type of Database Lock
Rule 33 - Don’t Use Multiphase Commits
Rule 34 - Try Not to Use “Select for Update”
Rule 35 - Don’t Select Everything
Design for Fault Tolerance and Graceful Failure
Rule 36 - Design Using Fault Isolative “Swimlanes”
Rule 37 - Never Trust Single Points of Failure - Identify single instances on design diagrams and try to turn them into active/active configurations.
Rule 38 - Avoid Putting Systems in Series
Rule 39 - Ensure You Can Wire On and Off Functions
Avoid or Distribute State
Rule 40 - Strive for Statelessness
Rule 41 - Maintain Sessions in the Browser When Possible - Use cookies; this lets a user’s request be served by any web server in the pool.
Rule 42 - Use a Distributed Cache for States
Asynchronous Communication and Message Buses
Rule 43 - Communicate Asynchronously As Much As Possible
Rule 44 - Make Sure Your Message Bus Can Scale
Rule 45 - Avoid Overcrowding Your Message Bus
Miscellaneous Rules
Rule 46 - Be Wary of Scaling Through Third Parties - Own your destiny, reduce your total cost of ownership.
Rule 47 - Purge, Archive, and Cost-Justify Storage
Rule 48 - Remove Business Intelligence from Transaction Processing
Rule 49 - Design Your Application to Be Monitored
Rule 50 - Be Competent
Rule Review and Prioritization
System Design Interview: An Insider’s Guide (Vol. 1) by Alex Xu
Foreword
Scale from zero to millions of users
Back-of-the-envelope estimation
A framework for system design interviews
Design a rate limiter
Design consistent hashing
Design a key-value store
Design a unique ID generator in distributed systems
Design a URL shortener
Design a web crawler
Design a notification system
Design a news feed system
Design a chat system
Design a search autocomplete system
Design YouTube
Design Google Drive
The Lea