Arrays, Stacks & Queues
1. Arrays: The Power of Contiguity
Arrays are the simplest and most widely used data structures. To truly understand an array, you must understand how computer memory (RAM) works.
The Mechanics:
An array is a collection of items stored at contiguous memory locations. The system knows the array's starting address and the size of each element's data type (e.g., a 32-bit integer takes 4 bytes). Because of this physical layout, the computer can calculate the exact memory address of any element instantly using simple arithmetic:
Address = Start_Address + (Index * Element_Size)
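JavaScript's typed arrays make this arithmetic visible: each element type exposes its fixed size via `BYTES_PER_ELEMENT`, so an element's byte offset from the start of the buffer is just `index * element_size`. A small sketch (assuming a little-endian platform, which covers virtually all modern hardware):

```javascript
// A typed array stores fixed-size elements contiguously in one buffer.
const nums = new Int32Array([10, 20, 30, 40]);

// Each 32-bit integer occupies 4 bytes.
console.log(Int32Array.BYTES_PER_ELEMENT); // 4

// Byte offset of element i = i * Element_Size (relative to the start address).
const index = 2;
const byteOffset = index * Int32Array.BYTES_PER_ELEMENT; // 8

// Reading raw bytes at that offset through a DataView yields the same element.
// (true = little-endian, matching typical hardware.)
const view = new DataView(nums.buffer);
console.log(view.getInt32(byteOffset, true)); // 30
```

This is exactly the `Start_Address + (Index * Element_Size)` formula, with the buffer's start standing in for the start address.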
Operations & Complexity Analysis
| Operation | Time Complexity | Notes |
| --- | --- | --- |
| Access `arr[i]` | O(1) | The superpower: because of the arithmetic formula above, accessing the 1,000,000th element takes the same time as accessing the 1st. |
| Search (Unsorted) | O(n) | You must scan element by element (Linear Search). |
| Search (Sorted) | O(log n) | If sorted, we can use Binary Search. |
| Insertion (End) | O(1)* | *Amortized O(1) for dynamic arrays; O(1) for static arrays if space exists. |
| Insertion (Middle/Start) | O(n) | The weakness: to insert at index 0, every existing element must shift one spot to the right to make room. |
| Deletion (End) | O(1) | Simply decrement the size counter. |
| Deletion (Middle/Start) | O(n) | Similar to insertion, we must shift elements left to fill the gap. |
JavaScript Implementation
In JavaScript, standard Arrays are dynamic. Note how `splice` (insertion in the middle) is computationally expensive compared to direct access.

```javascript
const arr = [10, 20, 30, 40];

// 1. Access: O(1) - instant
console.log(arr[2]); // Output: 30

// 2. Insertion at end: O(1)
arr.push(50); // [10, 20, 30, 40, 50]

// 3. Insertion in middle: O(n) - expensive!
// Must shift 20, 30, 40, 50 to the right to make room for 15.
arr.splice(1, 0, 15); // Result: [10, 15, 20, 30, 40, 50]

// 4. Deletion in middle: O(n) - expensive!
// Must shift elements left to close the gap.
arr.splice(2, 1); // Removes 20 -> [10, 15, 30, 40, 50]
```
Expert Insight: The Cache Locality Advantage
The theoretical O(1) access is great, but the real-world performance benefit comes from Spatial Locality. When a CPU fetches data from RAM, it grabs a "cache line" (usually 64 bytes). Because array data is contiguous, fetching arr[0] likely also loads arr[1] through arr[15] into the L1 cache. This makes iterating over arrays significantly faster than iterating over Linked Lists, where nodes are scattered in memory.
2. Stacks: The LIFO Principle
A Stack is a linear data structure that follows a particular order in which operations are performed: LIFO (Last In, First Out). Think of a stack of dinner plates; you can only add a plate to the top, and you can only remove the plate from the top.
The Mechanics:
Stacks are abstract data types. They can be implemented using an Array or a Linked List.
Array Implementation: Fast, but can suffer from overflow (Stack Overflow) if the size limit is reached.
Linked List Implementation: Dynamic sizing, but slightly higher memory overhead due to pointers.
Operations & Complexity Analysis
| Operation | Time Complexity | Notes |
| --- | --- | --- |
| Push (Insert) | O(1) | We place the element at the "Top" pointer. No shifting required. |
| Pop (Remove) | O(1) | We remove the element at the "Top" pointer. |
| Peek/Top | O(1) | Inspect the top element without removing it. |
| Search | O(n) | Stacks are not designed for searching; you must either pop elements destructively to reach deep items or traverse the underlying storage. |
JavaScript Implementation
We can simply use a JavaScript Array's push and pop methods, as they operate on the end of the array (O(1)).
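A minimal array-backed wrapper (a sketch with our own naming; any class exposing push/pop/peek works) makes the LIFO contract explicit by hiding every operation except the top:

```javascript
// Minimal array-backed stack: push/pop operate on the array's end, so both are O(1).
class Stack {
  #items = []; // private backing array

  push(value) {
    this.#items.push(value);
  }

  pop() {
    return this.#items.pop(); // undefined if the stack is empty
  }

  peek() {
    return this.#items[this.#items.length - 1];
  }

  get size() {
    return this.#items.length;
  }
}

const stack = new Stack();
stack.push(1);
stack.push(2);
stack.push(3);
console.log(stack.pop());  // 3 (last in, first out)
console.log(stack.peek()); // 2
```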
Expert Insight: The Call Stack
The most famous implementation of a stack is the one managed by the OS/Compiler: The Call Stack. When a function is called, its local variables and return address are "pushed" onto the stack. When it returns, they are "popped." This is why infinite recursion leads to a stack overflow (a `RangeError` in JavaScript, a `StackOverflowError` in Java)—you literally ran out of the contiguous memory block allocated for the stack.
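You can watch this limit in JavaScript directly, where exhausting the call stack surfaces as a catchable RangeError:

```javascript
// Each call pushes a new frame; with no base case, frames accumulate
// until the engine's stack limit is hit.
function recurse(depth) {
  return recurse(depth + 1);
}

try {
  recurse(0);
} catch (err) {
  console.log(err instanceof RangeError); // true
  console.log(err.message); // "Maximum call stack size exceeded" in V8
}
```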
Common Use Cases:
Undo/Redo mechanisms in text editors.
Expression evaluation (converting Infix to Postfix).
Backtracking algorithms (Maze solving, DFS).
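As a concrete instance of the parsing use case, here is a sketch of bracket matching with a stack (function name is our own):

```javascript
// Checks whether brackets are balanced: push openers, pop and match on closers.
function isBalanced(str) {
  const pairs = { ')': '(', ']': '[', '}': '{' };
  const stack = [];
  for (const ch of str) {
    if (ch === '(' || ch === '[' || ch === '{') {
      stack.push(ch); // remember the opener we must close later
    } else if (ch in pairs) {
      if (stack.pop() !== pairs[ch]) return false; // wrong or missing opener
    }
  }
  return stack.length === 0; // leftover openers mean unbalanced
}

console.log(isBalanced('([{}])')); // true
console.log(isBalanced('([)]'));   // false
```

The stack "remembers where you came from": the most recently opened bracket is the first one that must be closed, which is exactly LIFO.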
3. Queues: The FIFO Principle
A Queue is a linear structure that follows the FIFO (First In, First Out) order. It resembles a line at a ticket counter. The first person to join the line is the first one served.
The Mechanics:
Queues maintain two pointers: Front (Head) and Rear (Tail). New elements are added to the Rear, and elements are removed from the Front.
Naive Array Implementation: As you enqueue and dequeue, the data "drifts" to the right, leaving wasted space at the start.
Circular Buffer (Ring Buffer): The "expert" implementation. When the rear pointer hits the end of the array, it wraps around to index 0. This utilizes memory efficiently without needing to shift elements.
Linked List Implementation: The most common dynamic approach. Enqueue at Tail, Dequeue at Head.
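The ring-buffer idea above can be sketched in a few lines (fixed capacity, indices wrap with modulo; the class and method names are our own):

```javascript
// Fixed-capacity circular queue: head index and size track the live window,
// and indices wrap around with modulo instead of shifting elements.
class RingBuffer {
  constructor(capacity) {
    this.items = new Array(capacity);
    this.capacity = capacity;
    this.head = 0; // index of the front element
    this.size = 0;
  }

  enqueue(value) {
    if (this.size === this.capacity) throw new Error('Queue is full');
    const tail = (this.head + this.size) % this.capacity; // wraps past the end
    this.items[tail] = value;
    this.size++;
  }

  dequeue() {
    if (this.size === 0) return undefined;
    const value = this.items[this.head];
    this.head = (this.head + 1) % this.capacity;
    this.size--;
    return value;
  }
}

const rb = new RingBuffer(3);
rb.enqueue('a');
rb.enqueue('b');
console.log(rb.dequeue()); // 'a'
rb.enqueue('c');
rb.enqueue('d'); // the tail has wrapped around to reuse slot 0
console.log(rb.dequeue()); // 'b'
```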
Operations & Complexity Analysis
| Operation | Time Complexity | Notes |
| --- | --- | --- |
| Enqueue (Insert) | O(1) | Add at the rear pointer. |
| Dequeue (Remove) | O(1) | Remove from the front pointer. |
| Peek/Front | O(1) | Inspect the first item. |
| Search | O(n) | Like stacks, queues are not meant for searching. |
JavaScript Implementation
Warning: Using a simple array with .push() and .shift() is a bad practice for large queues. .shift() is O(n) because it re-indexes every element. A proper queue implementation uses pointers or an object map to ensure O(1) dequeue.
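One common O(1) pattern (a sketch with our own naming) stores elements in a plain object keyed by moving head/tail counters, so dequeue never re-indexes anything:

```javascript
// Pointer-based queue: O(1) enqueue and dequeue, no re-indexing on removal.
class Queue {
  constructor() {
    this.items = {};
    this.head = 0; // key of the next element to dequeue
    this.tail = 0; // key where the next element will be stored
  }

  enqueue(value) {
    this.items[this.tail] = value;
    this.tail++;
  }

  dequeue() {
    if (this.head === this.tail) return undefined; // empty
    const value = this.items[this.head];
    delete this.items[this.head]; // free the slot; no shifting required
    this.head++;
    return value;
  }

  get size() {
    return this.tail - this.head;
  }
}

const q = new Queue();
q.enqueue('first');
q.enqueue('second');
console.log(q.dequeue()); // 'first' (FIFO)
console.log(q.size);      // 1
```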
Expert Insight: Buffering and Concurrency
In system design, queues are vital for decoupling. If a web server receives 1,000 requests/sec (Producers) but the database can only handle 500 writes/sec (Consumer), we place a Queue (like RabbitMQ or Kafka) in the middle.
The "False Sharing" Trap: In multi-threaded environments, if the Head and Tail pointers of a queue are stored too close together in memory, different CPU cores updating them will repeatedly invalidate each other's cache lines (False Sharing), significantly degrading performance. Padding is often used to separate these pointers onto different cache lines.
Summary Comparison Table
| Feature | Array | Stack | Queue |
| --- | --- | --- | --- |
| Ordering | Indices (0 to n-1) | LIFO (Last In, First Out) | FIFO (First In, First Out) |
| Access | Random access, O(1) | Restricted (Top only) | Restricted (Front only) |
| Underlying Tech | Contiguous memory | Array or Linked List | Array, circular buffer, or Linked List |
| Primary Cost | Insertion/deletion in middle | Memory limit (if fixed size) | Complexity of pointers/wrapping |
Conclusion
Default to Arrays: If you need to store data and iterate over it, use an array (or ArrayList/Vector). The cache locality benefits are too good to ignore.
Use Stacks for State: If you need to "remember where you came from" (parsing, backtracking), use a Stack.
Use Queues for Throttling: If you are dealing with asynchronous data flows or job processing, a Queue is your best friend.