Dynamic Programming
Break a problem into overlapping subproblems, solve each once, reuse the results. DP turns exponential brute force into polynomial time.
Family
Problem-solving strategies
Reusable paradigms like greedy choice, backtracking, and dynamic programming.
Builds on
Foundational
You can start here without another page first.
Unlocks
2 next topics
Use these follow-ups when you want to apply the current idea in a richer setting.
Learning paths
2 paths
This topic appears in curated progressions so you can keep moving without guessing the next step.
Problem
Solve optimization or counting problems where the brute-force solution re-computes the same subproblems exponentially many times.
Intuition
Dynamic programming is not a single algorithm — it’s a problem-solving technique. It applies when a problem has two properties:
- Optimal substructure: the optimal solution to the problem can be built from optimal solutions to its subproblems
- Overlapping subproblems: the same subproblems appear multiple times in the recursion tree
If you have both, you can solve each subproblem once, store the result, and reuse it. This transforms exponential time into polynomial time.
Fibonacci: the canonical example
Naive recursion:
fib(n):
  if n <= 1: return n
  return fib(n-1) + fib(n-2)
This takes exponential time — roughly O(1.618^n) — because fib(n-1) calls fib(n-2) and fib(n-3), and fib(n-2) also calls fib(n-3) — massive duplication. fib(5) computes fib(3) twice, fib(2) three times, etc.
Top-down (memoization)
Add a cache. Before computing, check if you already have the answer:
memo ← {}
fib(n):
  if n in memo: return memo[n]
  if n <= 1: return n
  memo[n] ← fib(n-1) + fib(n-2)
  return memo[n]
Same recursive structure, but each subproblem is solved exactly once. O(n) time, O(n) space.
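The memoized pseudocode above translates almost line for line into Python. A minimal sketch (the explicit `memo` dictionary mirrors the pseudocode; in practice `functools.lru_cache` does the same job):

```python
def fib(n, memo=None):
    """Top-down Fibonacci: recursion plus a cache."""
    if memo is None:
        memo = {}
    if n in memo:           # already solved this subproblem
        return memo[n]
    if n <= 1:              # base cases
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Each of the n distinct subproblems is computed once, so `fib(50)` returns instantly instead of taking billions of calls.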
Bottom-up (tabulation)
Fill a table iteratively from the base cases upward:
fib(n):
  dp ← array of size n+1
  dp[0] ← 0, dp[1] ← 1
  for i in 2..n:
    dp[i] ← dp[i-1] + dp[i-2]
  return dp[n]
No recursion, no stack overhead. Same O(n) time and O(n) space. You can even reduce space to O(1), since you only need the last two values:
fib(n):
  a ← 0, b ← 1
  for i in 2..n:
    a, b ← b, a + b
  return b
Top-down vs bottom-up
| | Top-down (memoization) | Bottom-up (tabulation) |
|---|---|---|
| Style | Recursive + cache | Iterative + table |
| Computes | Only needed subproblems | All subproblems up to n |
| Stack overflow risk | Yes (deep recursion) | No |
| Space optimization | Harder | Easier (can drop old rows) |
| Easier to write | Usually yes — natural recursion | Requires figuring out the order |
For most problems, either approach works. Bottom-up is generally preferred in production because it avoids stack overflow and is easier to optimize for space.
The knapsack problem
You have n items, each with weight w_i and value v_i. You have a knapsack with capacity W. Maximize the total value without exceeding the capacity.
0/1 Knapsack (each item used at most once)
State: dp[i][c] = maximum value using the first i items with capacity c.
Transition:
- Don’t take item i: dp[i][c] = dp[i-1][c]
- Take item i (if w_i <= c): dp[i][c] = dp[i-1][c - w_i] + v_i
- Take the max of both
knapsack(weights, values, W):
  n ← weights.length
  dp ← 2D array [n+1][W+1], filled with 0
  for i in 1..n:
    for c in 0..W:
      dp[i][c] ← dp[i-1][c]  // skip item i
      if weights[i-1] <= c:
        dp[i][c] ← max(dp[i][c],
                       dp[i-1][c - weights[i-1]] + values[i-1])
  return dp[n][W]
Time: O(nW). Space: O(nW), reducible to O(W) by only keeping the previous row.
Note: this is pseudo-polynomial — polynomial in n and W, but W could be exponentially large in the input size (number of bits). The knapsack problem is NP-complete.
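A Python sketch of 0/1 knapsack using the one-row space optimization the text mentions. The capacity loop runs backward so item i is never reused within the same row (running it forward would turn this into the unbounded knapsack):

```python
def knapsack(weights, values, W):
    """0/1 knapsack in O(nW) time and O(W) space."""
    dp = [0] * (W + 1)   # dp[c] = best value achievable with capacity c
    for w, v in zip(weights, values):
        # iterate capacities in reverse so each item is taken at most once
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]
```

For example, with weights [2, 3, 4, 5], values [3, 4, 5, 6], and W = 5, the best choice is the first two items for a value of 7.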
Common DP patterns
1D DP (linear sequence)
- Climbing stairs: dp[i] = dp[i-1] + dp[i-2]
- House robber: dp[i] = max(dp[i-1], dp[i-2] + val[i])
- Longest increasing subsequence: dp[i] = max(dp[j] + 1) for all j < i where val[j] < val[i]
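As a taste of the 1D pattern, here is a sketch of the house-robber recurrence with the space already optimized down to two variables (`take` is the best total if we rob the current house, `skip` if we don't):

```python
def rob(vals):
    """House robber: max sum of non-adjacent values, O(n) time, O(1) space."""
    take, skip = 0, 0
    for v in vals:
        # rob this house (must have skipped the previous one),
        # or skip it (carry forward the better of the two states)
        take, skip = skip + v, max(take, skip)
    return max(take, skip)
```

For [2, 7, 9, 3, 1] the answer is 12 (rob houses 2, 9, and 1).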
2D DP (grid/string)
- Edit distance: dp[i][j] = min edits to transform s[0..i] to t[0..j]
- Longest common subsequence: dp[i][j] = LCS of s[0..i] and t[0..j]
- Matrix chain multiplication: dp[i][j] = min multiplications for matrices i..j
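The 2D pattern is easiest to see in longest common subsequence. A minimal sketch: match the last characters when they agree, otherwise take the better of skipping a character from either string:

```python
def lcs(s, t):
    """Length of the longest common subsequence, O(mn) time and space."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j] = LCS of s[:i], t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # characters match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # skip one side
    return dp[m][n]
```

Since row i depends only on row i-1, the table can be reduced to two rows, exactly the space optimization discussed earlier.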
Interval DP
- Burst balloons: dp[l][r] = max coins from bursting all balloons in the open range (l, r)
- Palindrome partitioning: dp[l][r] = min cuts for substring s[l..r]
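Interval DP iterates over increasing interval lengths. A sketch of burst balloons: pad with 1s, and for each interval (l, r) pick the balloon k that is burst last inside it:

```python
def max_coins(nums):
    """Burst balloons: O(n^3) interval DP."""
    a = [1] + nums + [1]          # virtual balloons at both ends
    n = len(a)
    dp = [[0] * n for _ in range(n)]  # dp[l][r]: best for open interval (l, r)
    for length in range(2, n):        # shorter intervals first
        for l in range(n - length):
            r = l + length
            for k in range(l + 1, r):  # k = last balloon burst in (l, r)
                dp[l][r] = max(dp[l][r],
                               dp[l][k] + dp[k][r] + a[l] * a[k] * a[r])
    return dp[0][n - 1]
```

The key interval-DP idea is that choosing the *last* action inside (l, r) splits the interval into two independent subintervals.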
DP on trees
- Tree diameter, max path sum: root the tree, compute bottom-up
- Subtree problems: dp[node] depends on dp[child] for all children
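Tree diameter illustrates the bottom-up-on-trees pattern. A sketch assuming `adj` is an adjacency map of an unrooted tree with nodes numbered from 0; each DFS call returns the longest downward path from its node, and the diameter is the best sum of the two longest child paths seen anywhere:

```python
def tree_diameter(adj):
    """Longest path (in edges) in a tree given as {node: [neighbors]}."""
    best = 0

    def dfs(u, parent):
        nonlocal best
        top1 = top2 = 0              # two longest downward paths from u
        for v in adj[u]:
            if v != parent:
                h = dfs(v, u) + 1
                if h > top1:
                    top1, top2 = h, top1
                elif h > top2:
                    top2 = h
        best = max(best, top1 + top2)  # path bending through u
        return top1

    dfs(0, -1)
    return best
```

For a path 0-1-2-3 the diameter is 3; for a star centered at 0 it is 2.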
Identifying DP problems
Ask yourself:
- Can I define the answer in terms of answers to smaller subproblems?
- Do the same subproblems appear repeatedly?
- Is there a clear base case?
If yes to all three, it’s likely DP. The hardest part is usually defining the state — what do dp[i], dp[i][j], etc. represent?
Complexity
| Problem | Time | Space |
|---|---|---|
| Fibonacci | O(n) | O(1) optimized |
| 0/1 Knapsack | O(nW) | O(W) optimized |
| LCS | O(mn) | O(min(m, n)) optimized |
| Edit distance | O(mn) | O(min(m, n)) optimized |
| LIS | O(n²), or O(n log n) with binary search | O(n) |
Key takeaways
- Define the state first — the hardest part of DP is deciding what dp[i] or dp[i][j] represents. Get this right and the transition often writes itself.
- Bottom-up is preferred in production — it avoids stack overflow risks and makes space optimization straightforward by dropping old rows.
- Space optimization is almost always possible — if dp[i] only depends on dp[i-1], keep only two rows. If it depends on dp[i-1] and dp[i-2], keep two variables.
- Knapsack is pseudo-polynomial — O(nW) looks polynomial, but W can be exponentially large in the input size. This is why the DP does not contradict knapsack being NP-complete.
- Look for overlapping subproblems — if a recursive solution recomputes the same calls, memoization or tabulation will give you a massive speedup.
Practice problems
| Problem | Difficulty | Key idea |
|---|---|---|
| Climbing Stairs | 🟢 Easy | Classic 1D DP identical to Fibonacci — dp[i] = dp[i-1] + dp[i-2] |
| Longest Common Subsequence | 🟡 Medium | 2D DP on two strings — match characters or take the best of skipping either |
| Partition Equal Subset Sum | 🟡 Medium | 0/1 knapsack variant — can you fill a knapsack of capacity sum/2? |
| Edit Distance | 🟡 Medium | 2D DP with three operations: insert, delete, replace at each cell |
| Longest Increasing Subsequence | 🟡 Medium | 1D O(n²) DP, optimizable to O(n log n) with binary search on a patience-sorting array |
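The O(n log n) LIS trick mentioned in the table is short enough to sketch here: keep `tails[k]` = smallest possible tail of an increasing subsequence of length k+1, and binary-search the position of each new element:

```python
import bisect

def lis_length(nums):
    """Length of the longest strictly increasing subsequence, O(n log n)."""
    tails = []  # tails[k] = smallest tail of an increasing subsequence of length k+1
    for x in nums:
        i = bisect.bisect_left(tails, x)  # first position with tails[i] >= x
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence found so far
        else:
            tails[i] = x      # x gives a smaller tail for length i+1
    return len(tails)
```

Note that `tails` is not itself a valid subsequence; only its length is meaningful.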
Relation to other topics
- Greedy algorithms: greedy makes locally optimal choices without reconsidering, while DP considers all subproblems. When greedy fails (no greedy-choice property), DP is the fallback.
- Recursion and memoization: top-down DP is just recursion with a cache. Understanding the recursive structure is often the first step toward a DP solution.
- Graph algorithms: shortest-path problems (Bellman-Ford, Floyd-Warshall) are DP on graphs, where the state is the current node and the number of edges used.
What this enables
Once the current idea feels natural, these are the most useful next jumps.
Floyd-Warshall
Compute all-pairs shortest paths by gradually allowing more intermediate vertices. This is the clean dynamic-programming answer to dense shortest-path queries.
Convex Hull Trick
Maintain a set of lines so DP transitions of the form m*x + b can be optimized by querying only the best candidate line.
Related directions
These topics live nearby conceptually, even if they are not direct prerequisites.
Topological Sort
Order the nodes of a DAG so that every edge points forward. Essential for dependency resolution, build systems, and scheduling.
Segment Tree
Answer range queries and point updates in O(log n). Segment trees are the go-to structure for range sum, range min, and similar problems.
Monotonic Queue
Maintain sliding-window candidates in sorted order inside a deque so each window maximum or minimum is available in O(1).
Backtracking
Systematically explore all possible solutions by building candidates incrementally and abandoning paths that cannot lead to valid solutions.
More from Problem-solving strategies
Stay in the same family when you want parallel variations of the same mental model.
Backtracking
Systematically explore all possible solutions by building candidates incrementally and abandoning paths that cannot lead to valid solutions.
Greedy Algorithms
Make the locally optimal choice at each step to find a global optimum. Greedy works when the problem has optimal substructure and the greedy choice property.
Convex Hull Trick
Maintain a set of lines so DP transitions of the form m*x + b can be optimized by querying only the best candidate line.
Paths that include this topic
Follow one of these sequences if you want a guided next step instead of open-ended browsing.
Strategy playbook
A reusable set of problem-solving moves for brute-force search, greedy choices, memoized state, and advanced DP optimization.
DP optimization
Turn transition formulas into line queries and move from hull geometry to dynamic Li Chao trees.