Patterns

  • sliding window pattern
  • subset pattern (without and with repetition)
  • modified binary search
  • top k elements (use a heap)
  • binary tree DFS
  • topological sort
  • two pointers
    • two pointers moving at different speeds
    • both moving in the same direction
    • opposite directions
  • prefix sum
  • linked list in-place reversal
  • monotonic stack
  • overlapping intervals or ranges
    • finding intervals, merging, etc.
  • backtracking
  • dynamic programming
  • priority queue
  1. Fast and Slow Pointer
  • Cycle detection method
  • O(1) space efficiency
  • Linked list problems
  2. Merge Intervals
  • Sort and merge
  • O(n log n) complexity
  • Overlapping interval handling
  3. Sliding Window
  • Fixed/variable window
  • O(n) time optimization
  • Subarray/substring problems
  4. Islands (Matrix Traversal)
  • DFS/BFS traversal
  • Connected component detection
  • 2D grid problems
  5. Two Pointers
  • Dual pointer strategy
  • Linear time complexity
  • Array/list problems
  6. Cyclic Sort
  • Sorting in cycles
  • O(n) time complexity
  • Constant space usage
  7. In-place Reversal of Linked List
  • Reverse without extra space
  • O(n) time efficiency
  • Pointer manipulation technique
  8. Breadth First Search
  • Level-by-level traversal
  • Uses queue structure
  • Shortest path problems
  9. Depth First Search
  • Recursive/backtracking approach
  • Uses stack (or recursion)
  • Tree/graph traversal
  10. Two Heaps
  • Max and min heaps
  • Median tracking efficiently
  • O(log n) insertions
  11. Subsets
  • Generate all subsets
  • Recursive or iterative
  • Backtracking or bitmasking
  12. Modified Binary Search
  • Search in variations
  • O(log n) time
  • Rotated/specialized arrays
  13. Bitwise XOR
  • Toggle bits operation
  • O(1) space complexity
  • Efficient for pairing
  14. Top ‘K’ elements
  • Use heap/quickselect
  • O(n log k) time
  • Efficient selection problem
  15. K-way Merge
  • Merge sorted lists
  • Min-heap based approach
  • O(n log k) complexity
  16. 0/1 Knapsack (Dynamic Programming)
  • Choose or skip items
  • O(n * W) complexity
  • Maximize value selection
  17. Unbounded Knapsack (Dynamic Programming)
  • Unlimited item choices
  • O(n * W) complexity
  • Multiple item selection
  18. Topological Sort (Graphs)
  • Directed acyclic graph
  • Order dependency resolution
  • Uses DFS or BFS
  19. Monotonic Stack
  • Maintain increasing/decreasing stack
  • Optimized for range queries
  • O(n) time complexity
  20. Backtracking
  • Recursive decision-making
  • Explore all possibilities
  • Pruning with constraints
  21. Union Find
  • Track and merge connected components
  • Used for disjoint sets
  • Great for network connectivity
  22. Greedy Algorithm
  • Make locally optimal choices
  • Efficient for problems with optimal substructure
  • Covers tasks like activity selection, minimum coins
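Pattern 1 above (fast and slow pointers) for cycle detection can be sketched in JavaScript; the `ListNode` helper here is a hypothetical minimal node type, added only for this sketch:

```javascript
// Minimal linked-list node (hypothetical helper for this sketch).
function ListNode(val) {
  this.val = val;
  this.next = null;
}

// Floyd's cycle detection: the fast pointer moves two steps per
// iteration, the slow pointer one; they meet iff a cycle exists.
function hasCycle(head) {
  let slow = head;
  let fast = head;
  while (fast && fast.next) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow === fast) return true; // pointers met inside a cycle
  }
  return false; // fast ran off the end: no cycle
}
```

This is what gives the O(1) space efficiency: no visited set is needed, just two pointers.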

1️⃣ Merge Intervals
2️⃣ Koko Eating Bananas
3️⃣ Search in Rotated Sorted Array
4️⃣ Detect a Cycle in Linked List
5️⃣ Word Search II
6️⃣ Gas Station
7️⃣ Sliding Window Maximum
8️⃣ Coin Change
9️⃣ Word Break
🔟 Edit Distance
1️⃣1️⃣ Trapping Rainwater
1️⃣2️⃣ Largest Rectangle in a Histogram
1️⃣3️⃣ Rod Cutting
1️⃣4️⃣ Binary Tree Maximum Path Sum
1️⃣5️⃣ Number of Distinct Islands
1️⃣6️⃣ Rotten Oranges
1️⃣7️⃣ Course Schedule II
1️⃣8️⃣ Pacific Atlantic Water Flow
1️⃣9️⃣ Cheapest Flights Within K Stops
2️⃣0️⃣ Min Cost to Connect All Points

These problems cover Greedy, DP, Graphs, Binary Search, Sliding Window, Trees, and more—helping to build a strong problem-solving mindset rather than just memorizing solutions.

💡 What I Learned Along the Way:
🔹 Quality over quantity—solving fewer problems deeply is better than rushing through many.
🔹 Pattern recognition matters—many problems share similar core ideas.
🔹 Revisiting and optimizing solutions—helps reinforce concepts and improve efficiency.

If you’re preparing for interviews, don’t focus on hitting a high problem count. Focus on understanding concepts, thinking deeply, and solving strategically.

Of course, sometimes we do need to practice more to get better. The key is to identify what works best for you—some may need more practice, while others improve by focusing on fewer problems but solving them deeply.

| Data Structure | Type | Operations (Time Complexity) | Use Cases |
| --- | --- | --- | --- |
| Array | Linear | Access: O(1), Insert/Delete: O(n) | Fixed-size collections, random access |
| Dynamic Array | Linear | Insert: Amortized O(1), Access: O(1) | Resizable array (e.g., Python lists) |
| Linked List | Linear | Insert/Delete: O(1), Access: O(n) | Dynamic memory allocation, stack/queue |
| Singly Linked List | Linear | Insert/Delete: O(1), Access: O(n) | Sequential data, simple structures |
| Doubly Linked List | Linear | Insert/Delete: O(1), Access: O(n) | More complex data traversal, memory management |
| Circular Linked List | Linear | Insert/Delete: O(1), Access: O(n) | Circular data, round-robin scheduling |
| Stack | Linear | Push/Pop: O(1) | Undo/redo functionality, parsing expressions |
| Queue | Linear | Enqueue/Dequeue: O(1) | Task scheduling, BFS |
| Deque | Linear | Insert/Remove from both ends: O(1) | Sliding window problems, stack/queue hybrid |
| Tree | Non-Linear | Insert/Search/Delete: O(log n) (avg) | Hierarchical data, binary search, sorting |
| Binary Tree | Non-Linear | Insert/Search/Delete: O(log n) (avg) | Tree traversal, hierarchical problems |
| Binary Search Tree (BST) | Non-Linear | Insert/Search/Delete: O(log n) (avg) | Fast searching, sorting, range queries |
| AVL Tree | Non-Linear | Insert/Search/Delete: O(log n) | Balanced search trees, optimized search |
| Heap | Non-Linear | Insert/Extract: O(log n) | Priority queues, sorting algorithms (heap sort) |
| Trie (Prefix Tree) | Non-Linear | Insert/Search: O(k), where k = string length | Autocomplete, dictionary, prefix matching |
| Graph | Non-Linear | BFS/DFS: O(V + E), Dijkstra: O(V log V) | Social networks, pathfinding, recommendation |
| Adjacency Matrix | Graph (Dense) | Insert/Search/Delete: O(1) | Dense graphs, quick edge lookups |
| Adjacency List | Graph (Sparse) | Insert/Search/Delete: O(V + E) | Sparse graphs, dynamic graph traversal |
| Directed Graph (Digraph) | Graph | BFS/DFS: O(V + E), Shortest path: O(V + E) | Directed dependencies, task scheduling |
| Undirected Graph | Graph | BFS/DFS: O(V + E), Shortest path: O(V + E) | Social networks, connectivity |
| Disjoint Set (Union-Find) | Specialized | Union/Find: O(α(n)) | Kruskal’s algorithm, network connectivity |
| Hash Table | Hashing | Insert/Search/Delete: O(1) | Caching, associative arrays, unique storage |
| Hash Set | Hashing | Insert/Search/Delete: O(1) | Set operations, removing duplicates |
| Priority Queue | Specialized | Insert: O(log n), Extract Min/Max: O(log n) | Task scheduling, Dijkstra’s algorithm, Huffman coding |
| Bloom Filter | Specialized (Probabilistic) | Insert/Check: O(1) | Membership testing, web caches, databases |
| Segment Tree | Advanced | Query/Update: O(log n) | Range queries, interval problems |
| Fenwick Tree (BIT) | Advanced | Query/Update: O(log n) | Prefix sum queries, cumulative frequency |
| Suffix Tree | String Processing | Search: O(m), where m = pattern length | String matching, text processing |
| Suffix Array | String Processing | Search: O(log n) | Pattern matching, suffix-based searches |
| K-d Tree | Multi-dimensional | Insert/Search: O(log n) | Range searches, nearest neighbor searches |
| Quad Tree | Multi-dimensional | Insert/Search: O(log n) | Spatial partitioning, image processing |
| Octree | Multi-dimensional | Insert/Search: O(log n) | 3D data, computer graphics |
| Matrix | Multi-dimensional | Access: O(1), Operations: O(n^2) | 2D grid storage, dynamic programming |
| Skip List | Specialized | Insert/Search/Delete: O(log n) | Alternative to balanced trees, fast searching |
| Aho-Corasick | String Processing | Search: O(n + m + z) | Multi-pattern matching, dictionary matching |
| B-Tree | Tree (Balanced) | Search/Insert/Delete: O(log n) | Database indexing, file systems |

Two pointer approach

  • two pointers moving at different speeds
  • both moving in the same direction
  • opposite directions

We decide, based on a condition, whether to shrink from the left pointer or the right pointer.

For the Max Water Hold (Container With Most Water) problem, we move whichever pointer sits at the lesser height.
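That rule can be sketched in JavaScript (assuming the standard setup where `heights[i]` is the wall height at index `i`):

```javascript
// Opposite-direction two pointers: the shorter wall bounds the area,
// so moving the pointer at the shorter wall is the only way to improve.
function maxArea(heights) {
  let left = 0;
  let right = heights.length - 1;
  let best = 0;
  while (left < right) {
    const width = right - left;
    const height = Math.min(heights[left], heights[right]);
    best = Math.max(best, width * height);
    if (heights[left] < heights[right]) left++; // left wall is lesser
    else right--;                               // right wall is lesser (or equal)
  }
  return best;
}
```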

Array

Pre compute technique

Prefix sum is a technique where we pre-compute cumulative sums of an array to answer range queries efficiently.

“What’s the sum of elements from index i to j?”

  • If we precompute a prefix sum array, we can answer each query in O(1) time.
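A short sketch of the precompute step plus the O(1) range query (inclusive indices `i..j`):

```javascript
// Build prefix[k] = sum of arr[0..k-1]; prefix[0] = 0.
function buildPrefix(arr) {
  const prefix = [0];
  for (let i = 0; i < arr.length; i++) {
    prefix.push(prefix[i] + arr[i]);
  }
  return prefix;
}

// Sum of arr[i..j] inclusive, answered in O(1) per query.
function rangeSum(prefix, i, j) {
  return prefix[j + 1] - prefix[i];
}
```

The O(n) build cost is paid once; every range query afterwards is a single subtraction.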

| Problem Type | Example Problems |
| --- | --- |
| Basic Traversal | Sum, Max/Min, Reverse |
| Searching | Linear Search, Binary Search |
| Sorting | QuickSort, MergeSort, Bubble Sort |
| Prefix Sum | Range Sum Query, Subarray Sum |
| Sliding Window | Max sum subarray, Longest substring without repeating chars |
| Two Pointers | Pair Sum, Container With Most Water |
| Hashing (Map/Set) | Two Sum, Longest Consecutive Sequence |
| Binary Search on Answer | Koko Eating Bananas, Capacity to Ship Packages |
| Greedy | Min Platforms, Merge Intervals |
| Kadane’s Algorithm | Maximum Subarray Sum |
| Subarrays/Subsets | Count/Generate Subarrays, Subset Sum |
| Backtracking | Subsets, Combination Sum |
| Bit Manipulation | Find Single Number, XOR Tricks |
| Prefix XOR | Count of subarrays with XOR = K |
| Monotonic Stack | Next Greater Element, Histogram Area |
| Union Find (Disjoint Set) | Detect Cycles in Graphs (if grid based) |
| DP (on Array) | House Robber, Max Product Subarray |
| Divide & Conquer | Maximum Subarray (recursive) |
| Counting/Indexing | Counting Sort, Inversion Count |
| Rotations/Shifts | Rotate Array, Cyclic Sort |

subset pattern

It’s a method to generate all possible subsets (a.k.a. the power set) of a given set or array.

array = [1,2,3]

[], [1], [2], [3], [1,2], [1,3], [2,3], [1,2,3]

bruteforce

function subsets(nums) {
  const res = [[]];
  for (const num of nums) {
    const len = res.length;
    for (let i = 0; i < len; i++) {
      res.push([...res[i], num]);
    }
  }
  return res;
}
 
console.log(subsets([1, 2]));
// Output: [ [], [1], [2], [1,2] ]
 

We iterate through each number, appending it to every existing subset to create new subsets.

  • Time = O(n * 2^n)

Backtracking

function subsetsBacktrack(nums) {
  const res = [];
 
  function backtrack(index, path) {
    res.push([...path]);
    for (let i = index; i < nums.length; i++) {
      path.push(nums[i]);                 // include
      backtrack(i + 1, path);             // recurse
      path.pop();                         // exclude (backtrack)
    }
  }
 
  backtrack(0, []);
  return res;
}
 
console.log(subsetsBacktrack([1, 2]));
// Output: [ [], [1], [1,2], [2] ]
 
 
Start: []
 
├── [1]
│   ├── [1,2]
│   │   └── [1,2,3]
│   └── [1,3]
├── [2]
│   └── [2,3]
└── [3]
 
 
           []
         /    \
      [1]      []
     /   \    /   \
 [1,2] [1]  [2]   []
 
  • In recursion, it goes depth-first and then comes back (backtracks)

Monotonic stack

A Monotonic Stack is a special kind of stack where elements are kept in monotonic (increasing or decreasing) order.

  • Next Greater Element
  • Previous Smaller Element
  • Largest Rectangle in Histogram
  • Daily Temperatures
  • Stock Span Problem

pattern

for (let i = 0; i < arr.length; i++) {
  while (stack.length && (condition based on arr[i] and stack top)) {
    // Do something with stack.pop()
  }
  stack.push(i);
}
 
const nums = [300, 4, 2, 1, 5, 3, 400];
 
const stack = [];
const result = Array(nums.length).fill(-1);  // Array to store the result
 
for (let i = 0; i < nums.length; i++) {
  // While stack is not empty and current element is greater than stack's top
  while (stack.length && nums[i] > nums[stack[stack.length - 1]]) {
    const index = stack.pop(); // Get index of the previous smaller element
    result[index] = nums[i];   // Set the next greater element for that index
  }
  stack.push(i);  // Push current element's index onto the stack
}
 
console.log(result);  // Output the next greater elements
 
// result: [ 400, 5, 5, 5, 400, 400, -1 ]
 

HEAP

A heap is a special kind of binary tree used mainly for priority queues and efficient sorting (like in Heap Sort). It has these main properties:

  1. Complete Binary Tree

    • Every level is fully filled except possibly the last.
    • Last level has all nodes as left as possible.
  2. Heap Property
    Two types:

    • Max Heap: Every parent node is greater than or equal to its children.
    • Min Heap: Every parent node is less than or equal to its children.

Implement a heap using an array (no need for pointers!):

  • For index i:
    • Left child → at 2i + 1
    • Right child → at 2i + 2
    • Parent → at Math.floor((i - 1) / 2)
        30
      /    \
    15      25
   /  \     /
  5   10   20

Insertion

  • When inserting, place the new element at the last index of the array, then repeatedly compare it with its parent and swap until the heap property holds, so the array never gets holes
        30
      /    \
    15      25
   /  \     /
       

- If we insert a new number, it must fill the leftmost open slot of the last level, so that storing it in the array creates no holes:
[30, 15, 25, 5]


Deletion

  • Remove the root node, replace it with the last node, and compare downward from the root until the heap property is restored

  • If we want a sorted array, we remove the root one element at a time and re-heapify after each removal (this is heap sort)
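The insertion (bubble up) and deletion (sift down) steps above can be sketched as an array-backed min-heap; this `MinHeap` class is an illustrative sketch, not from the notes:

```javascript
// Array-backed min-heap: parent of i is Math.floor((i - 1) / 2),
// children of i are 2i + 1 and 2i + 2.
class MinHeap {
  constructor() { this.a = []; }

  insert(val) {
    this.a.push(val);                     // append at the last index (no holes)
    let i = this.a.length - 1;
    while (i > 0) {                       // bubble up while smaller than parent
      const parent = Math.floor((i - 1) / 2);
      if (this.a[i] >= this.a[parent]) break;
      [this.a[i], this.a[parent]] = [this.a[parent], this.a[i]];
      i = parent;
    }
  }

  extractMin() {
    const min = this.a[0];
    const last = this.a.pop();            // take the last node
    if (this.a.length > 0) {
      this.a[0] = last;                   // move it to the root
      let i = 0;
      while (true) {                      // sift down until heap property holds
        let smallest = i;
        const l = 2 * i + 1, r = 2 * i + 2;
        if (l < this.a.length && this.a[l] < this.a[smallest]) smallest = l;
        if (r < this.a.length && this.a[r] < this.a[smallest]) smallest = r;
        if (smallest === i) break;
        [this.a[i], this.a[smallest]] = [this.a[smallest], this.a[i]];
        i = smallest;
      }
    }
    return min;
  }
}
```

Repeatedly calling `extractMin()` yields the elements in sorted order, which is exactly the heap-sort idea above.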

Sorting

  • Bubble sort
    • Swap two elements until no swap needed
  • Selection sort
    • Repeatedly pick the smallest remaining element and move it into place
  • Insertion sort
    • Pick one number and find the right place to insert it
 Let’s trace through this:

**Initial Array:** `[5, 3, 4, 1, 2]`

- **Pass 1:** key = 3 → insert before 5 → `[3, 5, 4, 1, 2]`
    
- **Pass 2:** key = 4 → insert before 5 → `[3, 4, 5, 1, 2]`
    
- **Pass 3:** key = 1 → move 5, 4, 3 → `[1, 3, 4, 5, 2]`
    
- **Pass 4:** key = 2 → move 5, 4, 3 → `[1, 2, 3, 4, 5]`
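The trace above matches this insertion sort sketch:

```javascript
// Insertion sort: take each "key" and shift larger elements one slot
// right until the key's correct position opens up.
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    const key = arr[i];
    let j = i - 1;
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j]; // shift larger element right
      j--;
    }
    arr[j + 1] = key;      // insert key into its place
  }
  return arr;
}
```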
  • Merge sort

    • Split ➡ Sort ➡ Merge
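Split ➡ Sort ➡ Merge can be sketched as:

```javascript
// Merge sort: recursively split the array, then merge the two
// sorted halves back together.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;              // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));    // split + sort left half
  const right = mergeSort(arr.slice(mid));      // split + sort right half
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) { // merge step
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```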
  • Quick sort

    1. Pick a pivot (any element from the array).
    2. Partition the array into two parts:
      • Elements less than the pivot.
      • Elements greater than or equal to the pivot.
    3. Recursively apply the same steps to the left and right parts.
    4. Combine the results.
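The four steps above, sketched with the last element as the pivot (any pivot choice works):

```javascript
// Quick sort: partition around a pivot, then recurse on both sides.
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1];         // step 1: pick a pivot
  const less = [];
  const greaterOrEqual = [];
  for (let i = 0; i < arr.length - 1; i++) { // step 2: partition
    if (arr[i] < pivot) less.push(arr[i]);
    else greaterOrEqual.push(arr[i]);
  }
  // steps 3-4: recurse on each part, then combine
  return [...quickSort(less), pivot, ...quickSort(greaterOrEqual)];
}
```

This version uses extra arrays for clarity; the classic in-place version partitions by swapping within the array instead.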
  • Heap sort

  • Counting sort

  1. DBMS
  • ACID properties

  • Indexing in DBMS

  • What is DDL & DML

  • Inner and Outer Join

  • Types of relationship

  • Nested Queries in SQL?

  • Conflict Serializability in DBMS

  • Normalization & Denormalization?

  • Diff b/w share lock & exclusive lock

  • What is sharding and keys in DBMS

  • Diff b/w 2 tier and 3 tier architecture

  • What is DBMS ? Mention advantages

  • What are SQL commands? Types of them

  • Data abstraction in DBMS, three levels of it

  • Diff b/w TRUNCATE and DELETE command

  • Diff b/w Intension & Extension in a DataBase

  • What is functional dependency & E-R Model?

  • Entity, Entity Type, Entity Set, Weak Entity Set

  • What is CCP? (Concurrency Control Protocols)

  • Difference between vertical & horizontal scaling

  2. Operating Systems
  • Explain Cache
  • SJF Scheduling
  • Dynamic Binding
  • LRTF Scheduling
  • SRTF Scheduling
  • FCFS Scheduling
  • Banker’s Algorithm
  • Priority Scheduling
  • Round Robin Scheduling
  • Starving and Aging in OS
  • Why does thrashing occur?
  • Producer Consumer Problem
  • Real Time OS, types of RTOS
  • What is RAID ? Different types
  • Demand Paging, Segmentation
  • What is paging & why do we need it?
  • Diff b/w multitasking & multiprocessing
  • Define virtual memory, thrashing, threads
  • What is a socket, kernel & monolithic kernel
  • Diff b/w main memory & secondary memory
  • Diff b/w process & thread? Types of process
  • Diff b/w direct mapping & associative mapping
  • What is fragmentation? Types of fragmentation
  • main purpose of operating system? Discuss types
  • What is deadlock? conditions to achieve a deadlock
  3. Computer Networks
  • LAN
  • Firewalls
  • Hub vs Switch
  • RSA Algorithm
  • Define Network
  • 3 way handshaking
  • What is MAC address?
  • Bandwidth, node & link
  • Diff. b/w IPV4 and IPV6
  • Different types of delays
  • What is SMTP protocol ?
  • Server-side load balancer
  • Diff b/w Gateway and router
  • What does ping command do
  • Significance of Data Link Layer
  • Network Topology & define types
  • What is HTTP & HTTPS protocol ?
  • What is DNS, DNS forwarder and NIS?
  • VPN, advantages & disadvantages of it
  • TCP & UDP protocol, prepare differences
  • What happens after entering google[dot]com
  • What is IP, private IP and public IP address