Linked List

The Mechanics

To implement a Linked List, we treat memory as a scattered landscape rather than a neat block.

  1. Nodes: The fundamental unit. A node contains two things:

    • Data: The actual value (integer, string, object).

    • Pointer (Next): A memory address reference to the next node in the sequence.

  2. Head: The entry point. If you lose the reference to the Head, the rest of the list becomes unreachable: the garbage collector reclaims it in managed languages, while in unmanaged languages it becomes a memory leak.

  3. Tail (Optional): A pointer to the very last node, allowing for $O(1)$ insertions at the end.
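
In JavaScript, a node is nothing more than a small object holding a value and a `next` reference. A minimal sketch of the pieces above (the class and variable names are just conventions, not a built-in API):

```javascript
// A node holds a value and a reference to the next node; null marks the end.
class Node {
  constructor(data) {
    this.data = data; // the stored value
    this.next = null; // pointer to the next node in the sequence
  }
}

// Wiring three nodes by hand: head -> "a" -> "b" -> "c" -> null
const head = new Node('a');
head.next = new Node('b');
head.next.next = new Node('c');
const tail = head.next.next; // optional tail reference, enables O(1) appends
```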

Operations & Complexity Analysis

| Operation | Time Complexity | Notes |
| --- | --- | --- |
| Access `node[i]` | $O(n)$ | The fatal flaw. There is no arithmetic that yields the address of the i-th element; you must start at the Head and follow the links one by one. |
| Search | $O(n)$ | You must traverse node-by-node. Binary Search is impossible because you cannot jump to the middle. |
| Insertion (Start) | $O(1)$ | Create a node, point it to the current Head, update Head. No shifting required. |
| Insertion (End) | $O(1)$ | If you maintain a Tail pointer; otherwise $O(n)$ to walk to the end. |
| Insertion (Middle) | $O(1)$* | *Re-wiring the pointers is $O(1)$, but it implies you have already traversed $O(n)$ to find the spot. |
| Deletion (Start) | $O(1)$ | Move Head to Head.next. |
| Deletion (Middle) | $O(1)$* | Same caveat as insertion: re-wiring is fast, finding the spot is slow. |

JavaScript Implementation

JavaScript does not have a native Linked List. We must build it using classes. Notice how the LinkedList class manages the state of the pointers.
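
A minimal sketch of such a class, assuming we track both a Head and a Tail pointer (the method names prepend, append, deleteHead, and get are illustrative, not a standard API):

```javascript
class Node {
  constructor(data) {
    this.data = data;
    this.next = null;
  }
}

class LinkedList {
  constructor() {
    this.head = null; // entry point
    this.tail = null; // kept up to date so append stays O(1)
    this.length = 0;
  }

  // Insertion (Start): O(1) — point the new node at the current Head, update Head.
  prepend(data) {
    const node = new Node(data);
    node.next = this.head;
    this.head = node;
    if (this.tail === null) this.tail = node; // list was empty
    this.length++;
    return this;
  }

  // Insertion (End): O(1) because we maintain a Tail pointer.
  append(data) {
    const node = new Node(data);
    if (this.tail === null) {
      this.head = node;
      this.tail = node;
    } else {
      this.tail.next = node;
      this.tail = node;
    }
    this.length++;
    return this;
  }

  // Deletion (Start): O(1) — move Head to Head.next.
  deleteHead() {
    if (this.head === null) return null;
    const removed = this.head;
    this.head = this.head.next;
    if (this.head === null) this.tail = null; // list is now empty
    this.length--;
    return removed.data;
  }

  // Access node[i]: O(n) — no arithmetic shortcut, walk i links from the Head.
  get(index) {
    if (index < 0 || index >= this.length) return undefined;
    let current = this.head;
    for (let i = 0; i < index; i++) current = current.next;
    return current.data;
  }
}

// Usage
const list = new LinkedList();
list.append(10).append(20).prepend(5); // 5 -> 10 -> 20
console.log(list.get(2));              // 20
console.log(list.deleteHead());        // 5
```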

Expert Insight: The Hardware Reality

Why don't we use Linked Lists for everything if insertion is so fast?

1. Pointer Overhead

In a 64-bit system, a pointer takes 8 bytes. If you are storing a list of integers (4 bytes), you are using 8 bytes of overhead for every 4 bytes of data. That is a 200% memory overhead. Arrays have 0% per-element overhead.

2. The Cache Miss Disaster

CPUs love patterns. When you iterate an array, the CPU pre-fetches the next chunk of data because it knows exactly where it is.

With a Linked List, the nodes are scattered across the heap (memory fragmentation). To go from Node A to Node B, the CPU often has to stall and fetch a cache line from main memory; the rough micro-benchmark below gives a feel for the gap.

  • Iterating an Array: ~1 cycle per element (pipelined).

  • Iterating a Linked List: ~100+ cycles per element (due to cache misses).
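
The exact figures depend on the CPU, the engine, and how the nodes happened to be allocated, so treat the cycle counts above as rules of thumb. A rough sketch of a micro-benchmark (Node 16+ or any modern browser, using `performance.now()` only for coarse timing) makes the traversal difference tangible:

```javascript
// Sum one million values stored contiguously vs. chained through pointers.
// Absolute timings vary wildly by machine and engine; the relative gap is the point.
const N = 1_000_000;

// Contiguous storage: the engine can keep these values tightly packed.
const arr = new Array(N).fill(1);

// Pointer-chained storage: one heap object per element.
let head = null;
for (let i = 0; i < N; i++) head = { value: 1, next: head };

let t = performance.now();
let sumArr = 0;
for (let i = 0; i < arr.length; i++) sumArr += arr[i];
console.log('array traversal:', sumArr, (performance.now() - t).toFixed(1), 'ms');

t = performance.now();
let sumList = 0;
for (let node = head; node !== null; node = node.next) sumList += node.value;
console.log('list traversal: ', sumList, (performance.now() - t).toFixed(1), 'ms');
```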

3. When to actually use them?

  • Kernel Development: Operating Systems use linked lists heavily for process scheduling because processes are created and destroyed constantly, and we can't allocate a contiguous block for them.

  • Implementing other structures: Stacks, Queues, and Hash Tables (for collision handling) often rely on Linked Lists internally.

  • Splitting/Merging: If you need to chop a list in half or merge it with another list, Linked Lists can do this in $O(1)$ by re-wiring a couple of pointers (provided you already hold references to the split point and the tails), as sketched below. Arrays would require $O(n)$ copying.
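
A sketch of the merge case, building on the LinkedList class above (a hypothetical concatLists helper, assuming both lists expose head, tail, and length):

```javascript
// O(1) concatenation: splice listB onto the end of listA.
// No elements are copied; only two pointers and the bookkeeping change.
function concatLists(listA, listB) {
  if (listB.head === null) return listA;   // nothing to attach
  if (listA.head === null) {
    listA.head = listB.head;               // listA was empty
  } else {
    listA.tail.next = listB.head;          // single pointer rewrite
  }
  listA.tail = listB.tail;
  listA.length += listB.length;
  listB.head = listB.tail = null;          // listB must not be reused on its own
  listB.length = 0;
  return listA;
}
```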

Conclusion

Use Linked Lists when the data size is unpredictable and insertion speed at the edges is more critical than read access speed. However, for 95% of high-level application code, the CPU cache benefits of a standard Array (or ArrayList/Vector) outweigh the theoretical insertion benefits of a Linked List.
