
Greedy Algorithms

Make the locally optimal choice at each step to find a global optimum. Greedy works when the problem has optimal substructure and the greedy choice property.




Problem

Find an optimal solution (maximize or minimize some objective) where making a sequence of locally best choices leads to a globally best result — without backtracking or reconsidering.

Intuition

A greedy algorithm works when the problem has two properties:

  1. Greedy choice property: a locally optimal choice at each step is part of some globally optimal solution. You never need to undo a decision.
  2. Optimal substructure: after making a greedy choice, the remaining problem is a smaller instance of the same problem.

If both hold, greedily choosing the best option at each step and solving the residual subproblem gives an optimal answer. If either fails, greedy may produce a wrong answer — and you likely need DP or exhaustive search.

Interval scheduling — the classic greedy

Given n intervals [s_i, e_i), select the maximum number of non-overlapping intervals.

Key insight: always pick the interval that ends earliest. This leaves the most room for future intervals.

def interval_scheduling(intervals):
    # Sort by end time: the earliest-ending interval leaves the most room.
    intervals = sorted(intervals, key=lambda iv: iv[1])
    selected = []
    last_end = float("-inf")
    for s, e in intervals:
        # Intervals are half-open [s, e), so s == last_end is compatible.
        if s >= last_end:
            selected.append((s, e))
            last_end = e
    return selected

Sorting is O(n log n) and the scan is O(n). Total: O(n log n).

Activity selection — weighted vs unweighted

The unweighted case above is pure greedy. When each interval has a weight (profit), the greedy choice property breaks: picking the earliest-ending interval might skip a high-value one. Weighted interval scheduling requires DP:

dp[i] = max(dp[i-1], w_i + dp[p(i)])

where p(i) is the latest interval that doesn’t overlap interval i. The unweighted version is the special case where all weights equal 1.
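The recurrence translates into a short bottom-up routine. This is a sketch, not code from the page: the function name and the use of binary search to compute p(i) are my choices.

```python
import bisect

def weighted_interval_scheduling(intervals):
    """intervals: list of (start, end, weight) with half-open [start, end)."""
    intervals = sorted(intervals, key=lambda iv: iv[1])  # sort by end time
    ends = [e for _, e, _ in intervals]
    n = len(intervals)
    dp = [0] * (n + 1)  # dp[i] = best total weight using the first i intervals
    for i in range(1, n + 1):
        s, _, w = intervals[i - 1]
        # p = number of earlier intervals ending at or before s (all compatible)
        p = bisect.bisect_right(ends, s, 0, i - 1)
        dp[i] = max(dp[i - 1], w + dp[p])
    return dp[n]
```

With all weights equal to 1 it returns the same count the greedy interval scheduler would select.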

Other greedy patterns

Fractional knapsack: sort items by value-to-weight ratio v_i / w_i. Take as much as possible of the best ratio first. Unlike 0/1 knapsack, fractions are allowed, so greedy is optimal. O(n log n).
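A minimal sketch of the ratio-first rule (the function name and tuple layout are illustrative):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight). Returns max total value, fractions allowed."""
    # Greedy: best value-to-weight ratio first.
    items = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)      # all of it, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total
```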

Huffman coding: repeatedly merge the two lowest-frequency symbols into one node. Builds an optimal prefix-free code. O(n log n) with a min-heap.
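When only the cost of the optimal code is needed, the merge loop alone suffices. A sketch using Python’s heapq, computing the total weighted code length rather than the actual codewords:

```python
import heapq

def huffman_cost(freqs):
    """freqs: dict symbol -> frequency. Returns the total weighted code length
    of the optimal prefix-free code."""
    heap = list(freqs.values())
    heapq.heapify(heap)
    cost = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)           # two lowest frequencies
        b = heapq.heappop(heap)
        cost += a + b                     # every merge adds one bit to each symbol below it
        heapq.heappush(heap, a + b)
    return cost
```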

Jump game: scan left to right, tracking the farthest reachable index. If the scan ever passes that index, the end is unreachable; if the farthest reach covers the last index, the answer is yes. O(n).
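A sketch of the farthest-reach scan (the function name is illustrative):

```python
def can_reach_end(nums):
    """nums[i] = maximum jump length from index i. Can we reach the last index?"""
    farthest = 0
    for i, jump in enumerate(nums):
        if i > farthest:                  # index i is unreachable
            return False
        farthest = max(farthest, i + jump)
    return True
```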

Minimum spanning tree: both Kruskal’s (sort edges, union-find) and Prim’s (min-heap of crossing edges) are greedy on edge weights.
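Kruskal’s version of the greedy is short once union-find is in place. A minimal sketch that returns only the total MST weight and assumes a connected graph:

```python
def kruskal_mst_weight(n, edges):
    """n vertices labeled 0..n-1, edges as (weight, u, v). Assumes connectivity."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):         # greedy: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                      # keep the edge only if it joins two components
            parent[ru] = rv
            total += w
    return total
```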

When greedy fails

0/1 knapsack: you can’t take fractions, so the ratio-based greedy picks wrong items. Need DP.

Coin change (arbitrary denominations): greedy picks the largest coin first, but for denominations like {1, 3, 4} with target 6, greedy gives 4+1+1 (3 coins) instead of the optimal 3+3 (2 coins). With standard denominations {1, 5, 10, 25}, greedy happens to work.
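The {1, 3, 4} counterexample is easy to reproduce. A sketch comparing the greedy rule against the DP that fixes it (function names are illustrative):

```python
def greedy_coins(coins, target):
    """Largest coin first; may be suboptimal for arbitrary denominations."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += target // c
        target %= c
    return count if target == 0 else None

def min_coins_dp(coins, target):
    """Bottom-up DP; correct for any denominations."""
    INF = float("inf")
    dp = [0] + [INF] * target             # dp[x] = fewest coins summing to x
    for x in range(1, target + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[target]
```

Here `greedy_coins([1, 3, 4], 6)` uses three coins while the DP finds two.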

Graph coloring: greedily assigning the smallest available color doesn’t guarantee the chromatic number.

Proving greedy correctness

Two standard techniques:

Exchange argument: assume an optimal solution O differs from the greedy solution G. Show you can swap a choice in O for the greedy choice without worsening the objective. Repeat until O = G.

Greedy stays ahead: after each step k, show the greedy solution is at least as good as any other solution’s first k choices. By induction, greedy is optimal at the end.

Both boil down to: “we never regret the greedy choice.”

Complexity

Algorithm             Time         Space
Interval scheduling   O(n log n)   O(n)
Fractional knapsack   O(n log n)   O(1) extra
Huffman coding        O(n log n)   O(n)
Jump game             O(n)         O(1)
Kruskal’s MST         O(E log E)   O(V)

Key takeaways

  • Greedy choice property + optimal substructure are the two conditions that must hold; if either fails, greedy produces wrong answers and you need DP or exhaustive search.
  • Sort first — most greedy problems start by sorting on the right criterion (earliest end time, best ratio, farthest reach). Choosing the wrong sort key is the most common mistake.
  • Exchange argument is the go-to proof technique: show you can swap any non-greedy choice in an optimal solution without worsening the result.
  • Greedy is a special case of DP where only one subproblem matters at each step — always try greedy first, then fall back to DP if the greedy choice property doesn’t hold.
  • Interval scheduling (sort by end time) is the canonical example; master it and the pattern transfers to task scheduling, meeting rooms, and merge intervals.

Practice problems

Problem                     Difficulty   Key idea
Jump Game                   🟡 Medium    Track the farthest reachable index greedily from left to right
Jump Game II                🟡 Medium    BFS-like greedy using current and next reachable boundary to minimize jumps
Gas Station                 🟡 Medium    Track running fuel surplus to identify the unique valid starting index
Task Scheduler              🟡 Medium    Greedily schedule the most frequent task first and calculate idle slots
Non-overlapping Intervals   🟡 Medium    Sort by end time and count overlapping intervals to remove (interval scheduling)

Relation to other topics

              Greedy                         DP                                               Brute force
Explores      1 choice per step              All subproblems                                  All combinations
Backtracks    Never                          Implicitly (via table)                           Explicitly
Time          Usually O(n log n)             Polynomial (often O(n^2) or O(nW))               Exponential
Correct when  Greedy choice property holds   Optimal substructure + overlapping subproblems   Always

Greedy is a special case of DP where only one subproblem needs to be solved at each step. When it works, it’s the simplest and fastest approach.
