## 329. Longest Increasing Path in a Matrix

Given an integer matrix, find the length of the longest increasing path.

From each cell, you can move in four directions: left, right, up, or down. You may NOT move diagonally or move outside of the boundary (i.e. wrap-around is not allowed).

Example 1:

```
nums = [
  [9,9,4],
  [6,6,8],
  [2,1,1]
]
```


Return 4
The longest increasing path is [1, 2, 6, 9].

Example 2:

```
nums = [
  [3,4,5],
  [3,2,6],
  [2,2,1]
]
```


Return 4
The longest increasing path is [3, 4, 5, 6]. Moving diagonally is not allowed.

Credits:
Special thanks to @dietpepsi for adding this problem and creating all test cases.

## Solution


#### Approach #1 (Naive DFS) [Time Limit Exceeded]

**Intuition**

DFS can find the longest increasing path starting from any cell. We can do this for all the cells.

**Algorithm**

Each cell can be seen as a vertex in a graph G. If two adjacent cells have increasing values, then we have a directed edge from the smaller cell to the larger one. The problem then becomes:


> Search for the longest path in the directed graph G.


Naively, we can use DFS or BFS to visit all the cells reachable from a starting cell. We update the maximum length of the path during the search and obtain the answer when the search finishes.


Usually, in DFS or BFS, we can employ a `visited` set to prevent cells from being visited more than once. We will introduce a better algorithm based on this idea in the next section.
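A minimal sketch of this naive search in Python (the function name `longest_increasing_path` and the helper `dfs` are our own illustrative names, not from the original editorial):

```python
def longest_increasing_path(matrix):
    """Naive DFS from every cell; exponential time in the worst case."""
    if not matrix or not matrix[0]:
        return 0
    rows, cols = len(matrix), len(matrix[0])

    def dfs(r, c):
        # Longest increasing path starting at cell (r, c).
        best = 1
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[r][c]:
                best = max(best, 1 + dfs(nr, nc))
        return best

    return max(dfs(r, c) for r in range(rows) for c in range(cols))
```

Note that `dfs` recomputes the same cell many times across calls, which is exactly the waste the next approach removes.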

**Complexity Analysis**

- Time complexity : O(2^(m+n)). The search is repeated for each valid increasing path. In the worst case (for example, a matrix in which every step to the right or down increases the value) we can have O(2^(m+n)) calls.

- Space complexity : O(mn). For each DFS we need O(h) space used by the system stack, where h is the maximum depth of the recursion. In the worst case, O(h) = O(mn).

#### Approach #2 (DFS + Memoization) [Accepted]

**Intuition**

Cache the results of the recursion so that each subproblem is calculated only once.

**Algorithm**

From previous analysis, we know that there are many duplicate calculations in the naive approach.


One optimization is to use a `visited` set to prevent repeated visits within a single DFS search. This optimization reduces the time complexity of each DFS to O(mn) and of the whole algorithm to O(m^2 n^2).


Here, we will introduce a more powerful optimization: memoization.


> In computing, memoization is an optimization technique used primarily to speed up computer programs by storing the results of expensive function calls and returning the cached result when the same inputs occur again.


In our problem, we call dfs(x, y) recursively many times. But if we already know the results for all four adjacent cells, we only need constant time. During the search, if the result for a cell has not been calculated yet, we calculate and cache it; otherwise, we return it from the cache directly.
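A sketch of the memoized search in Python (names are illustrative); here `functools.lru_cache` serves as the cache keyed by the cell coordinates:

```python
from functools import lru_cache


def longest_increasing_path(matrix):
    """DFS + memoization: each cell's answer is computed once and cached."""
    if not matrix or not matrix[0]:
        return 0
    rows, cols = len(matrix), len(matrix[0])

    @lru_cache(maxsize=None)
    def dfs(r, c):
        # Cached longest increasing path starting at cell (r, c).
        best = 1
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[r][c]:
                best = max(best, 1 + dfs(nr, nc))
        return best

    return max(dfs(r, c) for r in range(rows) for c in range(cols))
```

An explicit 2D table of cached results would work just as well; the decorator merely keeps the sketch short.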

**Complexity Analysis**

- Time complexity : O(mn). Each vertex/cell will be calculated once and only once, and each edge will be visited once and only once. The total time complexity is then O(V + E), where V is the total number of vertices and E is the total number of edges. In our problem, O(V) = O(mn) and O(E) = O(4V) = O(mn).

- Space complexity : O(mn). The cache dominates the space complexity.

#### Approach #3 (Peeling Onion) [Accepted]

**Intuition**

The result for each cell depends only on the results of its neighbors. Can we use dynamic programming?

**Algorithm**

If we define the longest increasing path starting from cell (i, j) as a function f(i, j), then we have the following transition function:

f(i, j) = max{ f(x, y) | (x, y) is a neighbor of (i, j) and matrix[x][y] > matrix[i][j] } + 1

This formula is the same as the one used in the previous approaches. With such a transition function, one may think it possible to use dynamic programming to deduce all the results without employing DFS!


That is right, with one thing missing: we don't have the dependency list.


For dynamic programming to work, if problem B depends on the result of problem A, then we must make sure that problem A is calculated before problem B. Such an order is natural and obvious for many problems, for example the famous Fibonacci sequence:

F(n) = F(n - 1) + F(n - 2)

Each subproblem F(n) depends on its two predecessors, F(n - 1) and F(n - 2). Therefore, the natural order from 0 to n is the correct order: the dependent is always behind the dependee.
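As a minimal sketch, a bottom-up Fibonacci in Python computes the subproblems in exactly this natural order:

```python
def fib(n):
    """Bottom-up DP: F(k) is ready before any subproblem that depends on it."""
    if n < 2:
        return n
    prev, curr = 0, 1  # F(0), F(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # F(k) = F(k - 1) + F(k - 2)
    return curr
```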


The terminology of such dependency order is "Topological order" or "Topological sorting":


> Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of vertices such that for every directed edge (u, v), vertex u comes before v in the ordering.


In our problem, the topological order is not natural. Without the values in the matrix, we couldn't know the dependency relation between any two neighbors A and B. We have to perform the topological sort explicitly as a preprocessing step. After that, we can solve the problem dynamically using our transition function, following the stored topological order.


There are several ways to perform the topological sorting. Here we employ one of them called "Peeling Onion".


The idea is that in a DAG there are some vertices that don't depend on any others, which we call "leaves". We put these leaves in a list (their internal ordering does not matter) and then remove them from the DAG. After the removal, there will be new leaves. We repeat this process as if we are peeling an onion layer by layer. In the end, the list contains a valid topological ordering of the vertices.


In our problem, the length of the longest path in the DAG equals the total number of layers of the "onion". Thus, we can count the number of layers during the "peeling" and return the count at the end, without invoking dynamic programming at all.
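A sketch of the peeling approach in Python (names are illustrative): cells with out-degree zero are the current leaves, and each BFS round peels one layer.

```python
from collections import deque


def longest_increasing_path(matrix):
    """Peel the DAG layer by layer; the number of layers is the answer."""
    if not matrix or not matrix[0]:
        return 0
    rows, cols = len(matrix), len(matrix[0])
    moves = ((0, 1), (0, -1), (1, 0), (-1, 0))

    # out_degree[r][c] = number of strictly larger neighbors (outgoing edges).
    out_degree = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] > matrix[r][c]:
                    out_degree[r][c] += 1

    # Initial leaves: cells with no outgoing edge (local maxima).
    queue = deque((r, c) for r in range(rows) for c in range(cols)
                  if out_degree[r][c] == 0)

    layers = 0
    while queue:
        layers += 1
        for _ in range(len(queue)):  # peel exactly one layer
            r, c = queue.popleft()
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                # (nr, nc) pointed at (r, c) iff its value is strictly smaller.
                if 0 <= nr < rows and 0 <= nc < cols and matrix[nr][nc] < matrix[r][c]:
                    out_degree[nr][nc] -= 1
                    if out_degree[nr][nc] == 0:
                        queue.append((nr, nc))
    return layers
```

On Example 1, the first layer is the local maxima {9, 9, 8}, and four rounds of peeling empty the matrix, matching the answer 4.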

**Complexity Analysis**

- Time complexity : O(mn). The topological sort is O(V + E) = O(mn). Here, V is the total number of vertices and E is the total number of edges. In our problem, O(V) = O(mn) and O(E) = O(4V) = O(mn).

- Space complexity : O(mn). We need to store the out-degrees and each level of leaves.

## Remarks

- Memoization: for a problem with massive duplicate calls, cache the results.
- Dynamic programming requires the subproblems to be solved in topological order. In many problems this coincides with the natural order; for those where it doesn't, one needs to perform topological sorting first. Therefore, for problems with a complex topology (like this one), search with memoization is usually the easier and better choice.