Patterns
- sliding window pattern
- subset pattern → without repetition and with repetition
- Modified binary search
- top k elements → use heap
- binary tree dfs
- topological sort
- two pointer
- two pointers moving at different speeds
- both moving in the same direction
- moving in opposite directions
- prefix sum
- Linked list in-place reversal
- monotonic stack
- overlapping intervals or ranges
- finding and merging intervals, etc.
- backtracking
- dynamic programming
- priority queue
- Fast and Slow Pointer
- Cycle detection method
- O(1) space efficiency
- Linked list problems
- Merge Intervals
- Sort and merge
- O(n log n) complexity
- Overlapping interval handling
- Sliding Window
- Fixed/variable window
- O(n) time optimization
- Subarray/substring problems
- Islands (Matrix Traversal)
- DFS/BFS traversal
- Connected component detection
- 2D grid problems
- Two Pointers
- Dual pointer strategy
- Linear time complexity
- Array/list problems
- Cyclic Sort
- Sorting in cycles
- O(n) time complexity
- Constant space usage
- In-place Reversal of Linked List
- Reverse without extra space
- O(n) time efficiency
- Pointer manipulation technique
- Breadth First Search
- Level-by-level traversal
- Uses queue structure
- Shortest path problems
- Depth First Search
- Recursive/backtracking approach
- Uses stack (or recursion)
- Tree/graph traversal
- Two Heaps
- Max and min heaps
- Median tracking efficiently
- O(log n) insertions
- Subsets
- Generate all subsets
- Recursive or iterative
- Backtracking or bitmasking
- Modified Binary Search
- Search in variations
- O(log n) time
- Rotated/specialized arrays
- Bitwise XOR
- Toggle bits operation
- O(1) space complexity
- Efficient for pairing
- Top ‘K’ elements
- Use heap/quickselect
- O(n log k) time
- Efficient selection problem
- K-way Merge
- Merge sorted lists
- Min-heap based approach
- O(n log k) complexity
- 0/1 Knapsack (Dynamic Programming)
- Choose or skip items
- O(n * W) complexity
- Maximize value selection
- Unbounded Knapsack (Dynamic Programming)
- Unlimited item choices
- O(n * W) complexity
- Multiple item selection
- Topological Sort (Graphs)
- Directed acyclic graph
- Order dependency resolution
- Uses DFS or BFS
- Monotonic Stack
- Maintain increasing/decreasing stack
- Optimized for range queries
- O(n) time complexity
- Backtracking
- Recursive decision-making
- Explore all possibilities
- Pruning with constraints
- Union Find
- Track and merge connected components
- Used for disjoint sets
- Great for network connectivity
- Greedy Algorithm
- Make locally optimal choices
- Efficient for problems with optimal substructure
- Covers tasks like activity selection, minimum coins
1️⃣ Merge Intervals
2️⃣ Koko Eating Bananas
3️⃣ Search in Rotated Sorted Array
4️⃣ Detect a Cycle in Linked List
5️⃣ Word Search II
6️⃣ Gas Station
7️⃣ Sliding Window Maximum
8️⃣ Coin Change
9️⃣ Word Break
🔟 Edit Distance
1️⃣1️⃣ Trapping Rainwater
1️⃣2️⃣ Largest Rectangle in a Histogram
1️⃣3️⃣ Rod Cutting
1️⃣4️⃣ Binary Tree Maximum Path Sum
1️⃣5️⃣ Number of Distinct Islands
1️⃣6️⃣ Rotten Oranges
1️⃣7️⃣ Course Schedule II
1️⃣8️⃣ Pacific Atlantic Water Flow
1️⃣9️⃣ Cheapest Flights Within K Stops
2️⃣0️⃣ Min Cost to Connect All Points
These problems cover Greedy, DP, Graphs, Binary Search, Sliding Window, Trees, and more—helping to build a strong problem-solving mindset rather than just memorizing solutions.
💡 What I Learned Along the Way:
🔹 Quality over quantity—solving fewer problems deeply is better than rushing through many.
🔹 Pattern recognition matters—many problems share similar core ideas.
🔹 Revisiting and optimizing solutions—helps reinforce concepts and improve efficiency.
If you’re preparing for interviews, don’t focus on hitting a high problem count. Focus on understanding concepts, thinking deeply, and solving strategically.
Of course, sometimes we do need to practice more to get better. The key is to identify what works best for you—some may need more practice, while others improve by focusing on fewer problems but solving them deeply.
- https://photonlines.substack.com/p/visual-focused-algorithms-cheat-sheet
- https://www.kirupa.com/data_structures_algorithms/intro_data_structures_why.htm (best)
Data Structure | Type | Operations (Time Complexity) | Use Cases |
---|---|---|---|
Array | Linear | Access: O(1), Insert/Delete: O(n) | Fixed-size collections, random access |
Dynamic Array | Linear | Insert: Amortized O(1), Access: O(1) | Resizable array (e.g., Python lists) |
Linked List | Linear | Insert/Delete: O(1), Access: O(n) | Dynamic memory allocation, stack/queue |
Singly Linked List | Linear | Insert/Delete: O(1), Access: O(n) | Sequential data, simple structures |
Doubly Linked List | Linear | Insert/Delete: O(1), Access: O(n) | More complex data traversal, memory management |
Circular Linked List | Linear | Insert/Delete: O(1), Access: O(n) | Circular data, round-robin scheduling |
Stack | Linear | Push/Pop: O(1) | Undo/redo functionality, parsing expressions |
Queue | Linear | Enqueue/Dequeue: O(1) | Task scheduling, BFS |
Deque | Linear | Insert/Remove from both ends: O(1) | Sliding window problems, stack/queue hybrid |
Tree | Non-Linear | Insert/Search/Delete: O(log n) (avg) | Hierarchical data, binary search, sorting |
Binary Tree | Non-Linear | Insert/Search/Delete: O(log n) (avg) | Tree traversal, hierarchical problems |
Binary Search Tree (BST) | Non-Linear | Insert/Search/Delete: O(log n) (avg) | Fast searching, sorting, range queries |
AVL Tree | Non-Linear | Insert/Search/Delete: O(log n) | Balanced search trees, optimized search |
Heap | Non-Linear | Insert/Extract: O(log n) | Priority queues, sorting algorithms (heap sort) |
Trie (Prefix Tree) | Non-Linear | Insert/Search: O(k), where k = string length | Autocomplete, dictionary, prefix matching |
Graph | Non-Linear | BFS/DFS: O(V + E), Dijkstra: O(V log V) | Social networks, pathfinding, recommendation |
Adjacency Matrix | Graph (Dense) | Insert/Search/Delete: O(1) | Dense graphs, quick edge lookups |
Adjacency List | Graph (Sparse) | Insert/Search/Delete: O(V + E) | Sparse graphs, dynamic graph traversal |
Directed Graph (Digraph) | Graph | BFS/DFS: O(V + E), Shortest path: O(V + E) | Directed dependencies, task scheduling |
Undirected Graph | Graph | BFS/DFS: O(V + E), Shortest path: O(V + E) | Social networks, connectivity |
Disjoint Set (Union-Find) | Specialized | Union/Find: O(α(n)) | Kruskal’s algorithm, network connectivity |
Hash Table | Hashing | Insert/Search/Delete: O(1) | Caching, associative arrays, unique storage |
Hash Set | Hashing | Insert/Search/Delete: O(1) | Set operations, removing duplicates |
Priority Queue | Specialized | Insert: O(log n), Extract Min/Max: O(log n) | Task scheduling, Dijkstra’s algorithm, Huffman coding |
Bloom Filter | Specialized (Probabilistic) | Insert/Check: O(1) | Membership testing, web caches, databases |
Segment Tree | Advanced | Query/Update: O(log n) | Range queries, interval problems |
Fenwick Tree (BIT) | Advanced | Query/Update: O(log n) | Prefix sum queries, cumulative frequency |
Suffix Tree | String Processing | Search: O(m) where m is the pattern length | String matching, text processing |
Suffix Array | String Processing | Search: O(log n) | Pattern matching, suffix-based searches |
K-d Tree | Multi-dimensional | Insert/Search: O(log n) | Range searches, nearest neighbor searches |
Quad Tree | Multi-dimensional | Insert/Search: O(log n) | Spatial partitioning, image processing |
Octree | Multi-dimensional | Insert/Search: O(log n) | 3D data, computer graphics |
Matrix | Multi-dimensional | Access: O(1), Operations: O(n^2) | 2D grid storage, dynamic programming |
Skip List | Specialized | Insert/Search/Delete: O(log n) | Alternative to balanced trees, fast searching |
Aho-Corasick | String Processing | Search: O(n + m + z) | Multi-pattern matching, dictionary matching |
B-Tree | Tree (Balanced) | Search/Insert/Delete: O(log n) | Database indexing, file systems |
Two pointer approach
- two pointers moving at different speeds
- both moving in the same direction
- moving in opposite directions
We need to decide, based on a condition, whether to shrink the window from the left or from the right.
For the Container With Most Water problem, we move the pointer at the lesser height.
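As a sketch of that rule (the `maxArea` name and sample input are illustrative, not from the original notes), here is the two-pointer approach that always moves the pointer at the lesser height, since the shorter side limits the area:

```javascript
// Two-pointer sketch for Container With Most Water.
// The shorter line limits the area, so moving its pointer inward
// is the only move that can possibly improve the result.
function maxArea(heights) {
  let left = 0;
  let right = heights.length - 1;
  let best = 0;
  while (left < right) {
    const width = right - left;
    const area = Math.min(heights[left], heights[right]) * width;
    best = Math.max(best, area);
    if (heights[left] < heights[right]) {
      left++; // left side is the limit → move it
    } else {
      right--; // right side is the limit (or equal) → move it
    }
  }
  return best;
}

console.log(maxArea([1, 8, 6, 2, 5, 4, 8, 3, 7])); // → 49
```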
Array
Pre-compute technique
Prefix sum is a technique where we pre-compute cumulative sums of an array to answer range queries efficiently.
“What’s the sum of elements from index i to j?”
- If we precompute a prefix sum array, we can answer each such query in O(1) time.
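A minimal sketch of the technique (the `buildPrefix` and `rangeSum` names are illustrative): build the cumulative sums once in O(n), then answer any inclusive range-sum query in O(1).

```javascript
// prefix[k] holds the sum of the first k elements, so
// sum(i..j) = prefix[j + 1] - prefix[i].
function buildPrefix(nums) {
  const prefix = new Array(nums.length + 1).fill(0);
  for (let i = 0; i < nums.length; i++) {
    prefix[i + 1] = prefix[i] + nums[i];
  }
  return prefix;
}

function rangeSum(prefix, i, j) {
  return prefix[j + 1] - prefix[i];
}

const prefix = buildPrefix([2, 4, 1, 3, 5]);
console.log(rangeSum(prefix, 1, 3)); // 4 + 1 + 3 = 8
```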
Problem Type | Example Problems |
---|---|
Basic Traversal | Sum, Max/Min, Reverse |
Searching | Linear Search, Binary Search |
Sorting | QuickSort, MergeSort, Bubble Sort |
Prefix Sum | Range Sum Query, Subarray Sum |
Sliding Window | Max sum subarray, Longest substring without repeating chars |
Two Pointers | Pair Sum, Container With Most Water |
Hashing (Map/Set) | Two Sum, Longest Consecutive Sequence |
Binary Search on Answer | Koko Eating Bananas, Capacity to Ship Packages |
Greedy | Min Platforms, Merge Intervals |
Kadane’s Algorithm | Maximum Subarray Sum |
Subarrays/Subsets | Count/Generate Subarrays, Subset Sum |
Backtracking | Subsets, Combination Sum |
Bit Manipulation | Find Single Number, XOR Tricks |
Prefix XOR | Count of subarrays with XOR = K |
Monotonic Stack | Next Greater Element, Histogram Area |
Union Find (Disjoint Set) | Detect Cycles in Graphs (if grid based) |
DP (on Array) | House Robber, Max Product Subarray |
Divide & Conquer | Maximum Subarray (recursive) |
Counting/Indexing | Counting Sort, Inversion Count |
Rotations/Shifts | Rotate Array, Cyclic Sort |
subset pattern
It’s a method to generate all possible subsets (a.k.a. the power set) of a given set or array.
array = [1,2,3]
[], [1], [2], [3], [1,2], [1,3], [2,3], [1,2,3]
Brute force (iterative)
function subsets(nums) {
const res = [[]];
for (const num of nums) {
const len = res.length;
for (let i = 0; i < len; i++) {
res.push([...res[i], num]);
}
}
return res;
}
console.log(subsets([1, 2]));
// Output: [ [], [1], [2], [1,2] ]
We iterate through each number, append it to every existing subset, and push the results as new subsets.
- Time = O(n * 2^n)
Backtracking
function subsetsBacktrack(nums) {
const res = [];
function backtrack(index, path) {
res.push([...path]);
for (let i = index; i < nums.length; i++) {
path.push(nums[i]); // include
backtrack(i + 1, path); // recurse
path.pop(); // exclude (backtrack)
}
}
backtrack(0, []);
return res;
}
console.log(subsetsBacktrack([1, 2]));
// Output: [ [], [1], [1,2], [2] ]
Recursion tree for [1, 2, 3]:
Start: []
├── [1]
│ ├── [1,2]
│ │ └── [1,2,3]
│ └── [1,3]
├── [2]
│ └── [2,3]
└── [3]
Include/exclude decision tree for [1, 2]:
[]
/ \
[1] []
/ \ / \
[1,2] [1] [2] []
- In recursion, we go depth-first and then backtrack.
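The bitmasking alternative mentioned in the pattern list can be sketched like this (the `subsetsBitmask` name is illustrative): every integer from 0 to 2^n − 1 encodes one subset, where bit i set means nums[i] is included.

```javascript
// Bitmask sketch for generating the power set.
function subsetsBitmask(nums) {
  const n = nums.length;
  const res = [];
  for (let mask = 0; mask < (1 << n); mask++) {
    const subset = [];
    for (let i = 0; i < n; i++) {
      if (mask & (1 << i)) subset.push(nums[i]); // bit i set → include nums[i]
    }
    res.push(subset);
  }
  return res;
}

console.log(subsetsBitmask([1, 2]));
// Output: [ [], [1], [2], [1,2] ]
```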
Monotonic stack
A Monotonic Stack is a special kind of stack where elements are kept in monotonic (increasing or decreasing) order.
- Next Greater Element
- Previous Smaller Element
- Largest Rectangle in Histogram
- Daily Temperatures
- Stock Span Problem
pattern
for (let i = 0; i < arr.length; i++) {
while (stack.length && (condition based on arr[i] and stack top)) {
// Do something with stack.pop()
}
stack.push(i);
}
const nums = [300, 4, 2, 1, 5, 3, 400];
const stack = [];
const result = Array(nums.length).fill(-1); // Array to store the result
for (let i = 0; i < nums.length; i++) {
// While stack is not empty and current element is greater than stack's top
while (stack.length && nums[i] > nums[stack[stack.length - 1]]) {
const index = stack.pop(); // Get index of the previous smaller element
result[index] = nums[i]; // Set the next greater element for that index
}
stack.push(i); // Push current element's index onto the stack
}
console.log(result); // Output the next greater elements
[
400, 5, 5, 5,
400, 400, -1
]
HEAP
A heap is a special kind of binary tree used mainly for priority queues and efficient sorting (like in Heap Sort). It has these main properties:
- Complete Binary Tree
  - Every level is fully filled except possibly the last.
  - The last level has all nodes as far left as possible.
- Heap Property (two types)
  - Max Heap: Every parent node is greater than or equal to its children.
  - Min Heap: Every parent node is less than or equal to its children.
Implement a heap using an array (no need for pointers!):
- For index i:
  - Left child → at 2i + 1
  - Right child → at 2i + 2
  - Parent → at floor((i - 1) / 2)
      30
     /  \
   15    25
  /  \   /
 5   10 20
Insertion
- When inserting an element, append it at the last index of the array, then compare it with its parent and swap upward until the heap property holds. Filling from the left this way means the array never has holes.
      30
     /  \
   15    25
  /
- A new number is inserted at the leftmost open slot so that, when stored in the array, it creates no holes:
[30, 15, 25, 5]
Deletion
- Remove the root node, move the last node into the root position, and compare downward from the root (sift down) until the heap property is restored.
- If we want a sorted array, we remove elements one by one and heapify after each removal.
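A minimal array-backed min-heap sketch tying the index formulas and the insert/delete steps together (the `MinHeap` class and method names are illustrative):

```javascript
// left = 2i + 1, right = 2i + 2, parent = floor((i - 1) / 2)
class MinHeap {
  constructor() {
    this.a = [];
  }

  push(val) {
    this.a.push(val); // insert at the last index (no holes)
    let i = this.a.length - 1;
    while (i > 0) { // bubble up until the parent is smaller
      const p = Math.floor((i - 1) / 2);
      if (this.a[p] <= this.a[i]) break;
      [this.a[p], this.a[i]] = [this.a[i], this.a[p]];
      i = p;
    }
  }

  pop() {
    const top = this.a[0];
    const last = this.a.pop(); // move the last node to the root
    if (this.a.length > 0) {
      this.a[0] = last;
      let i = 0;
      while (true) { // sift down until both children are larger
        const l = 2 * i + 1;
        const r = 2 * i + 2;
        let smallest = i;
        if (l < this.a.length && this.a[l] < this.a[smallest]) smallest = l;
        if (r < this.a.length && this.a[r] < this.a[smallest]) smallest = r;
        if (smallest === i) break;
        [this.a[i], this.a[smallest]] = [this.a[smallest], this.a[i]];
        i = smallest;
      }
    }
    return top;
  }
}

const h = new MinHeap();
[5, 30, 15, 25, 10, 20].forEach((x) => h.push(x));
console.log(h.pop(), h.pop(), h.pop()); // 5 10 15
```

Popping repeatedly yields the elements in sorted order, which is exactly the heap sort idea described above.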
Sorting
- Bubble sort
  - Repeatedly swap adjacent out-of-order elements until a full pass needs no swaps.
- Selection sort
  - Repeatedly pick the smallest element from the unsorted portion and swap it into its final position.
- Insertion sort
  - Pick one number at a time and insert it into its correct place among the already-sorted elements.
Let’s trace through this:
**Initial Array:** `[5, 3, 4, 1, 2]`
- **Pass 1:** key = 3 → insert before 5 → `[3, 5, 4, 1, 2]`
- **Pass 2:** key = 4 → insert before 5 → `[3, 4, 5, 1, 2]`
- **Pass 3:** key = 1 → move 5, 4, 3 → `[1, 3, 4, 5, 2]`
- **Pass 4:** key = 2 → move 5, 4, 3 → `[1, 2, 3, 4, 5]`
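The trace above corresponds to this sketch (the `insertionSort` name is illustrative):

```javascript
// Insertion sort: shift larger elements right, then drop the key
// into the gap, exactly as in the pass-by-pass trace above.
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    let j = i - 1;
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j]; // shift the larger element right
      j--;
    }
    arr[j + 1] = key; // insert the key in its sorted position
  }
  return arr;
}

console.log(insertionSort([5, 3, 4, 1, 2])); // [1, 2, 3, 4, 5]
```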
- Merge sort
  - Split ➡ Sort ➡ Merge
- Quick sort
  - Pick a pivot (any element from the array).
  - Partition the array into two parts:
    - Elements less than the pivot.
    - Elements greater than or equal to the pivot.
  - Recursively apply the same steps to the left and right parts.
  - Combine the results.
- Heap sort
- Counting sort
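The quick sort steps above can be sketched like this (a simple non-in-place variant that takes the last element as the pivot; the `quickSort` name is illustrative):

```javascript
// Quick sort sketch: partition into "less than pivot" and
// "greater than or equal to pivot", recurse, then combine.
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1]; // pick a pivot (last element here)
  const less = [];
  const greaterOrEqual = [];
  for (let i = 0; i < arr.length - 1; i++) {
    if (arr[i] < pivot) less.push(arr[i]);
    else greaterOrEqual.push(arr[i]);
  }
  // combine: sorted left part + pivot + sorted right part
  return [...quickSort(less), pivot, ...quickSort(greaterOrEqual)];
}

console.log(quickSort([5, 3, 8, 1, 9, 2])); // [1, 2, 3, 5, 8, 9]
```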
- DBMS
- ACID properties
- Indexing in DBMS
- What is DDL & DML
- Inner and Outer Join
- Types of relationships
- Nested Queries in SQL
- Conflict Serializability in DBMS
- Normalization & Denormalization
- Diff b/w shared lock & exclusive lock
- What is sharding and keys in DBMS
- Diff b/w 2-tier and 3-tier architecture
- What is DBMS? Mention advantages
- What are SQL commands? Types of them
- Data abstraction in DBMS, three levels of it
- Diff b/w TRUNCATE and DELETE command
- Diff b/w Intension & Extension in a Database
- What is functional dependency & E-R Model?
- Entity, Entity Type, Entity Set, Weak Entity Set
- What is CCP? (Concurrency Control Protocols)
- Difference between vertical & horizontal scaling
- Operating Systems
- Explain Cache
- SJF Scheduling
- Dynamic Binding
- LRTF Scheduling
- SRTF Scheduling
- FCFS Scheduling
- Banker’s Algorithm
- Priority Scheduling
- Round Robin Scheduling
- Starving and Aging in OS
- Why does thrashing occur?
- Producer Consumer Problem
- Real Time OS, types of RTOS
- What is RAID ? Different types
- Demand Paging, Segmentation
- What is paging & why do we need it?
- Diff b/w multitasking & multiprocessing
- Define virtual memory, thrashing, threads
- What is a socket, kernel & monolithic kernel
- Diff b/w main memory & secondary memory
- Diff b/w process & thread? Types of process
- Diff b/w direct mapping & associative mapping
- What is fragmentation? Types of fragmentation
- main purpose of operating system? Discuss types
- What is deadlock? conditions to achieve a deadlock
- Computer Networks
- LAN
- Firewalls
- Hub vs Switch
- RSA Algorithm
- Define Network
- 3 way handshaking
- What is MAC address?
- Bandwidth, node & link
- Diff. b/w IPV4 and IPV6
- Different types of delays
- What is SMTP protocol ?
- Server-side load balancer
- Diff b/w Gateway and router
- What does ping command do
- Significance of Data Link Layer
- Network Topology & define types
- What is HTTP & HTTPS protocol ?
- What is DNS, DNS forwarder and NIS?
- VPN, advantages & disadvantages of it
- TCP & UDP protocol, prepare differences
- What happens after entering google[dot]com
- What is IP, private IP and public IP address