Dynamic Programming
Dynamic programming algorithms solve problems by dividing them into smaller subproblems, storing the results of these subproblems, and later combining them to solve the larger problem. These algorithms can solve problems where the optimal solution to the problem is composed of the optimal solutions to its subproblems, i.e., problems with optimal substructure.
0/1 Knapsack Problem
Some problems require examining every possible case, and it is in such scenarios that dynamic programming truly shows its effectiveness.
A list variable named pack serves as the intermediate result table, in the form of a 6 x 16 matrix. Its dimensions are the maximum number of items + 1 by the maximum knapsack capacity + 1, hence 6 x 16. Each cell of this table holds the maximum value achievable given the number of items and the knapsack capacity considered up to that point.
# (price, weight)
cargo = [(4, 12), (2, 1), (10, 4), (1, 1), (2, 2)]
def zero_one_knapsack(cargo):
    capacity = 15
    pack = []

    for i in range(len(cargo) + 1):
        pack.append([])
        for c in range(capacity + 1):
            if i == 0 or c == 0:
                pack[i].append(0)
            elif cargo[i - 1][1] <= c:
                pack[i].append(
                    max(
                        cargo[i - 1][0] + pack[i - 1][c - cargo[i - 1][1]],
                        pack[i - 1][c]
                    )
                )
            else:
                pack[i].append(pack[i - 1][c])

    return pack[-1][-1]
Including the current item
cargo[i - 1][0] + pack[i - 1][c - cargo[i - 1][1]]: cargo[i - 1][0] is the value of the current item, and pack[i - 1][c - cargo[i - 1][1]] is the best value achievable with the remaining capacity if this item is included. The remaining capacity is c - cargo[i - 1][1], the current capacity minus the weight of the current item.
Not including the current item
pack[i - 1][c]: This represents the best value achievable without including the current item, at the current capacity c.
When the current item's weight is more than the current capacity c
pack[i].append(pack[i - 1][c]): This line simply carries over the best value found without including the current item, as the current item cannot be added due to capacity constraints.
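The same recurrence can be written with a single one-dimensional list, since each row of the table depends only on the row above it. The following space-optimized variant is a sketch rather than part of the original listing; the name knapsack_1d is ours:

```python
# Space-optimized 0/1 knapsack: one row of the table suffices when
# capacities are traversed from high to low, so that best[c - weight]
# still refers to the previous item's row (preserving the 0/1 rule).
def knapsack_1d(cargo, capacity):
    best = [0] * (capacity + 1)
    for price, weight in cargo:
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], price + best[c - weight])
    return best[capacity]

cargo = [(4, 12), (2, 1), (10, 4), (1, 1), (2, 2)]
print(knapsack_1d(cargo, 15))  # 15, matching zero_one_knapsack(cargo)
```

Iterating capacities in ascending order instead would allow an item to be taken more than once, turning this into the unbounded knapsack problem.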
Greedy algorithms solve problems by choosing the option that seems best at the moment, whereas dynamic programming solves problems by storing the results of overlapping subproblems and using them in the solution. Therefore, problems without overlapping subproblems are not solved using dynamic programming.
Dynamic programming: Optimal Substructure, Overlapping Subproblems (examples: Fibonacci Sequence, Dijkstra's algorithm)
Greedy algorithm: Optimal Substructure, Greedy Choice Property (example: Dijkstra's algorithm)
Divide-and-Conquer: Optimal Substructure (examples: Merge Sort, Quick Sort)
Dijkstra's algorithm is a case that applies to both dynamic programming and greedy algorithms. Like BFS (Breadth-First Search), it expands outward from the source, but at each step it selects the vertex with the smallest known distance, embodying the greedy algorithm's property of making a locally optimal choice. At the same time, it stores already calculated shortest distances for future reuse, solving overlapping subproblems, which is a characteristic of dynamic programming.
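A minimal sketch of this dual character, assuming the graph is given as a dict mapping each node to (neighbor, weight) pairs: the priority queue supplies the greedy choice, while the dist table caches already-solved subproblems.

```python
import heapq

def dijkstra(graph, start):
    # dist acts as the memo table: once a node's shortest distance
    # is settled, it is reused rather than recomputed.
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)  # greedy: closest node first
        if d > dist.get(node, float('inf')):
            continue  # stale queue entry, already improved upon
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float('inf')):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist

graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```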

Methodology
Bottom-up approach: This starts from the smaller subproblems, using the solutions of these small problems to solve larger ones. Typically, this method is known as tabulation.
Top-down approach: This solves the problem recursively in its natural form, checking at each call whether the solution to a subproblem has already been computed and cached. This method is known as memoization.
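Both approaches can be sketched on the Fibonacci sequence, the classic example of overlapping subproblems; the function names here are ours:

```python
from functools import lru_cache

# Top-down (memoization): recurse naturally, caching each subproblem
# so fib_memo(n) is computed at most once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest subproblems
# upward until the answer for n is reached.
def fib_tab(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # 55 55
```

Without the cache, the top-down version recomputes the same subproblems exponentially many times; with it, both versions run in linear time.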