
Quick Sort

Pick a pivot, partition around it, recurse on both sides. Quick sort is the fastest general-purpose sort in practice despite its O(n²) worst case.

Tags: sorting, divide-and-conquer, array, recursion



Problem

Sort an array of n elements in place. Quick sort achieves O(n log n) on average with excellent cache performance and minimal overhead.

Intuition

Choose a pivot element. Rearrange the array so everything less than the pivot is on the left, everything greater is on the right. The pivot is now in its final sorted position. Recurse on the left and right partitions.

Unlike merge sort where the work happens during merging, in quick sort the work happens during partitioning. After partitioning, no merge step is needed — the subarrays are simply concatenated.

Algorithm (Lomuto partition)

quick_sort(arr, lo, hi):
    if lo >= hi:
        return

    pivot_index ← partition(arr, lo, hi)
    quick_sort(arr, lo, pivot_index - 1)
    quick_sort(arr, pivot_index + 1, hi)

partition(arr, lo, hi):
    pivot ← arr[hi]        // last element as pivot
    i ← lo - 1

    for j in lo..hi-1:
        if arr[j] <= pivot:
            i += 1
            swap(arr[i], arr[j])

    swap(arr[i + 1], arr[hi])
    return i + 1
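
A direct Python translation of the pseudocode above (a sketch; the helper names mirror the pseudocode, not any library API):

```python
def partition(arr, lo, hi):
    """Lomuto partition: use arr[hi] as the pivot, return its final index."""
    pivot = arr[hi]
    i = lo - 1                                 # boundary of the "<= pivot" region
    for j in range(lo, hi):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]  # drop the pivot into its final slot
    return i + 1

def quick_sort(arr, lo=0, hi=None):
    """Sort arr[lo..hi] in place."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:                               # 0 or 1 elements: already sorted
        return
    p = partition(arr, lo, hi)
    quick_sort(arr, lo, p - 1)                 # everything left of the pivot
    quick_sort(arr, p + 1, hi)                 # everything right of it
```

Calling `quick_sort(a)` sorts the whole list in place; nothing is returned.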

Lomuto vs Hoare partition

Lomuto (above): simpler, uses last element as pivot. One pointer i tracks the boundary of “elements ≤ pivot.” Easy to understand and implement.

Hoare: two pointers start from opposite ends and walk toward each other, swapping elements that are on the wrong side. Does ~3x fewer swaps than Lomuto on average. The pivot doesn’t end up at a known position — you get a partition point where everything left is ≤ pivot and everything right is ≥ pivot.

hoare_partition(arr, lo, hi):
    pivot ← arr[lo + (hi - lo) / 2]
    i ← lo - 1
    j ← hi + 1

    while true:
        do: i += 1 while arr[i] < pivot
        do: j -= 1 while arr[j] > pivot
        if i >= j: return j
        swap(arr[i], arr[j])

Hoare is faster in practice but trickier to implement correctly.
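
The Hoare scheme can be sketched in Python like this; note that the recursion bounds differ from Lomuto because the returned index is only a split point, not the pivot's final position:

```python
def hoare_partition(arr, lo, hi):
    """Split arr[lo..hi] so arr[lo..j] <= pivot <= arr[j+1..hi]; return j."""
    pivot = arr[lo + (hi - lo) // 2]      # middle element as pivot
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while arr[i] < pivot:             # walk right past small elements
            i += 1
        j -= 1
        while arr[j] > pivot:             # walk left past large elements
            j -= 1
        if i >= j:
            return j                      # pointers crossed: j is the split
        arr[i], arr[j] = arr[j], arr[i]   # both elements on the wrong side: swap

def quick_sort_hoare(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    j = hoare_partition(arr, lo, hi)
    quick_sort_hoare(arr, lo, j)          # left half INCLUDES the split index
    quick_sort_hoare(arr, j + 1, hi)
```

Recursing on `[lo, j-1]` instead of `[lo, j]` is a classic bug here: the split index has not been placed in its final position, so it must stay in one of the halves.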

Pivot selection matters

The pivot determines how balanced the partitions are:

| Pivot strategy | Worst case | Notes |
|---|---|---|
| First/last element | O(n²) on already-sorted input | Never use in production |
| Random element | O(n²) with astronomically low probability | Good practical choice |
| Median of three | Still O(n²) in theory, much better in practice | Most implementations use this |
| Median of medians | Guaranteed O(n log n) | Theoretical interest only — too slow in practice |

Median of three: pick the median of arr[lo], arr[mid], arr[hi]. This avoids the worst case on sorted/reverse-sorted input and costs almost nothing.
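
One way to wire this in (a sketch, assuming a Lomuto-style partition that expects its pivot at arr[hi]): order the three candidates in place, then stash the median in the last slot so the partition code above needs no changes.

```python
def median_of_three(arr, lo, hi):
    """Order arr[lo], arr[mid], arr[hi]; move the median to arr[hi] and return it."""
    mid = lo + (hi - lo) // 2
    if arr[mid] < arr[lo]:
        arr[lo], arr[mid] = arr[mid], arr[lo]
    if arr[hi] < arr[lo]:
        arr[lo], arr[hi] = arr[hi], arr[lo]
    if arr[hi] < arr[mid]:
        arr[mid], arr[hi] = arr[hi], arr[mid]
    # Now arr[lo] <= arr[mid] <= arr[hi]; put the median where Lomuto looks.
    arr[mid], arr[hi] = arr[hi], arr[mid]
    return arr[hi]
```

Three comparisons and at most four swaps per partition call, which is why the cost is negligible.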

The worst case

Quick sort degrades to O(n²) when every partition produces one empty side and one side with n−1 elements. This happens with:

  • First/last pivot on sorted input
  • All equal elements (without three-way partition)

The call tree becomes a chain of n levels, each doing O(n) work. In practice, randomized pivot selection makes this vanishingly unlikely.

Three-way partition (Dutch National Flag)

When the array contains many duplicate elements, standard quick sort wastes time re-partitioning equal elements. Three-way partition groups elements into < pivot, == pivot, > pivot:

three_way_partition(arr, lo, hi):
    pivot ← arr[lo]
    lt ← lo      // arr[lo..lt-1] < pivot
    i ← lo       // arr[lt..i-1] == pivot
    gt ← hi      // arr[gt+1..hi] > pivot

    while i <= gt:
        if arr[i] < pivot:
            swap(arr[i], arr[lt])
            lt += 1; i += 1
        else if arr[i] > pivot:
            swap(arr[i], arr[gt])
            gt -= 1
        else:
            i += 1

    // recurse only on < and > portions
    quick_sort(arr, lo, lt - 1)
    quick_sort(arr, gt + 1, hi)

This handles duplicate-heavy arrays efficiently: an array of all-equal elements sorts in a single O(n) pass instead of degrading to O(n²).
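
The pseudocode above, folded into a self-contained Python sketch:

```python
def quick_sort_3way(arr, lo=0, hi=None):
    """Three-way (Dutch National Flag) quick sort, in place."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[lo]
    lt, i, gt = lo, lo, hi                # invariants as in the pseudocode above
    while i <= gt:
        if arr[i] < pivot:
            arr[i], arr[lt] = arr[lt], arr[i]
            lt += 1
            i += 1
        elif arr[i] > pivot:
            arr[i], arr[gt] = arr[gt], arr[i]
            gt -= 1                       # don't advance i: the new arr[i] is unseen
        else:
            i += 1                        # equal to pivot: leave it in the middle band
    quick_sort_3way(arr, lo, lt - 1)      # strictly-less portion
    quick_sort_3way(arr, gt + 1, hi)      # strictly-greater portion
```

On an all-equal array the first partition absorbs everything into the middle band, so both recursive calls get empty ranges and return immediately.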

Introsort: the production hybrid

Most standard library sort implementations (C++ std::sort, Rust’s sort_unstable) use introsort: start with quick sort, but if the recursion depth exceeds 2 log n, switch to heap sort. This guarantees O(n log n) worst case while keeping quick sort’s excellent average-case performance.

Some also switch to insertion sort for small subarrays (n ≤ 16) because insertion sort has lower overhead on small inputs.
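
A minimal introsort sketch (not any standard library’s actual implementation; the cutoff of 16 and the 2 log₂ n depth budget are the assumptions stated above):

```python
import heapq

SMALL = 16          # insertion-sort cutoff for tiny subarrays (assumed value)

def insertion_sort(arr, lo, hi):
    """Sort arr[lo..hi] in place; low overhead on small inputs."""
    for i in range(lo + 1, hi + 1):
        key, j = arr[i], i - 1
        while j >= lo and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

def introsort(arr):
    def sort(lo, hi, depth):
        if hi - lo + 1 <= SMALL:
            insertion_sort(arr, lo, hi)
            return
        if depth == 0:
            # Depth budget exhausted: heap-sort this slice to cap the worst case.
            chunk = arr[lo:hi + 1]
            heapq.heapify(chunk)
            arr[lo:hi + 1] = [heapq.heappop(chunk) for _ in range(hi - lo + 1)]
            return
        # Plain Lomuto partition, last element as pivot (deliberately naive).
        pivot, i = arr[hi], lo - 1
        for j in range(lo, hi):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        p = i + 1
        sort(lo, p - 1, depth - 1)
        sort(p + 1, hi, depth - 1)

    if len(arr) > 1:
        sort(0, len(arr) - 1, 2 * len(arr).bit_length())   # ~2 log2(n) budget
```

On already-sorted input the naive pivot choice would drive Lomuto to O(n²), but the depth budget trips after ~2 log₂ n levels and heap sort finishes the slice in O(n log n).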

Why quick sort beats merge sort in practice

Despite both being O(n log n) on average:

  • Cache locality: quick sort partitions in-place, accessing contiguous memory. Merge sort bounces between the array and an auxiliary buffer.
  • No allocation: quick sort uses O(log n) stack space. Merge sort needs O(n) for the merge buffer.
  • Fewer writes: on arrays, quick sort does fewer total writes than merge sort.

But merge sort wins on:

  • Stability: quick sort is not stable
  • Guaranteed O(n log n): merge sort has no bad inputs
  • Linked lists: merge sort is far better for linked lists

Complexity

| | Time (worst) | Time (best) | Time (avg) | Space | Stable |
|---|---|---|---|---|---|
| Quick sort | O(n²) | O(n log n) | O(n log n) | O(log n) | No |

The O(log n) space is for the recursion stack. Recursing into the smaller partition and turning the call on the larger one into a loop (tail-call elimination) guarantees this bound.
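
Sketched in Python (an illustration using the Lomuto scheme from earlier): recurse only into the smaller partition and loop on the larger one, so at most log₂ n frames are ever live.

```python
def quick_sort_bounded(arr, lo=0, hi=None):
    """Quick sort with guaranteed O(log n) stack depth."""
    if hi is None:
        hi = len(arr) - 1
    while lo < hi:
        # Lomuto partition, last element as pivot.
        pivot, i = arr[hi], lo - 1
        for j in range(lo, hi):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[hi] = arr[hi], arr[i + 1]
        p = i + 1
        if p - lo < hi - p:
            quick_sort_bounded(arr, lo, p - 1)   # smaller side: recurse
            lo = p + 1                           # larger side: loop instead
        else:
            quick_sort_bounded(arr, p + 1, hi)
            hi = p - 1
```

Each recursive call covers at most half the current range, so the depth is logarithmic even when the running time degrades to O(n²).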

Key takeaways

  • Work happens in the partitioning phase — after partitioning, the pivot is in its final sorted position with no merge needed
  • Pivot selection is critical: median-of-three avoids worst case on sorted input at near-zero cost
  • Three-way partition (Dutch National Flag) handles duplicate-heavy arrays in O(n) instead of O(n²)
  • Quick sort beats merge sort in practice due to cache locality and no auxiliary memory allocation
  • Production sorts use introsort: quick sort with a heap sort fallback to guarantee O(n log n) worst case

Practice problems

| Problem | Difficulty | Key idea |
|---|---|---|
| Sort an Array | 🟡 Medium | Implement quick sort with randomized pivot to avoid worst case |
| Kth Largest Element in an Array | 🟡 Medium | Quickselect — partition without fully sorting the array |
| Sort Colors | 🟡 Medium | Dutch National Flag three-way partition in a single pass |
| Wiggle Sort II | 🟡 Medium | Partition around median then interleave elements |
| Top K Frequent Elements | 🟡 Medium | Quickselect on frequency counts to find top-k without full sort |

Relation to other topics

  • Merge sort — same divide-and-conquer structure, but work happens during merging; stable and guaranteed O(n log n) but needs O(n) space
  • Quickselect — uses the same partition step to find the k-th smallest element in O(n) average time without fully sorting
  • Heap sort — O(n log n) worst case and in-place; used as the fallback in introsort when quick sort recurses too deep
