---
id: lagrange
title: 'Lagrangian Relaxation'
author: Benjamin Qi, Alex Liang, Dong Liu
description: 'aka Aliens Trick'
prerequisites:
- convex-hull
frequency: 1
---
## Resources
<Resources>
<Resource
/>
</Resources>
## Lagrangian Relaxation
Lagrangian Relaxation involves transforming a constraint on a variable into a cost $\lambda$ and binary searching for the optimal $\lambda$.
<FocusProblem problem="sample" />
The problem gives us a length $N$ ($1 \le N \le 3 \cdot 10^5$) array of integers in the range $[-10^9,10^9]$. We are given some $K$ ($1 \le K \le N$) and are asked to choose at most $K$ disjoint subarrays so that the total sum of the chosen elements is maximized.
### Intuition
The main bottleneck of any dynamic programming solution to this problem is having to store the number of subarrays we have created so far.
Let's try to find a way around this. Instead of storing the number of subarrays we have created so far, we assign a penalty of $\lambda$ for creating a new subarray (i.e. every time we create a subarray we penalize our sum by $\lambda$).
This leads us to the sub-problem of finding the maximal sum and number of subarrays used if creating a new subarray costs $\lambda$. We can solve this in $\mathcal{O}(N)$ time with dynamic programming.
<Spoilertitle="Dynamic Programming Solution">
Let's have $\texttt{dp}[i][j:\{0,1\}]$ represent the maximum sum if we consider the first $i$ elements, given that $j=0/1$ implies whether element $i$ is part of a subarray. Let $\texttt{cnt}[i][j]$ represent the minimum number of subarrays used in an optimal arrangement of $\texttt{dp}[i][j]$.

The transitions are

$$\texttt{dp}[i][0]=\max(\texttt{dp}[i-1][0],\texttt{dp}[i-1][1])$$

and

$$\texttt{dp}[i][1]=\max(\texttt{dp}[i-1][0]-\lambda,\texttt{dp}[i-1][1])+A[i],$$

where the second transition takes the maximum of two options because we either begin a new subarray or we continue an existing subarray. $\texttt{cnt}$ is updated alongside $\texttt{dp}$, breaking ties in favor of fewer subarrays.
</Spoiler>
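Below is one possible C++ sketch of this DP. The function name `solve_lambda` and the $0$-indexed array `a` are hypothetical choices of ours rather than anything fixed by the problem; the function returns the best penalized sum together with the minimum number of subarrays achieving it, as described above.

```cpp
#include <bits/stdc++.h>
using namespace std;

/**
 * Sketch: for a fixed penalty lam, returns {maximum penalized sum,
 * minimum number of subarrays among all ways to achieve that sum}.
 * `a` is the 0-indexed input array.
 */
pair<long long, long long> solve_lambda(const vector<long long> &a, long long lam) {
	using State = pair<long long, long long>;  // {penalized sum, subarray count}
	// prefer the larger sum; on ties, prefer fewer subarrays
	auto better = [](const State &x, const State &y) {
		if (x.first != y.first) { return x.first > y.first ? x : y; }
		return x.second <= y.second ? x : y;
	};
	State dp0 = {0, 0};              // element i is not inside a subarray
	State dp1 = {LLONG_MIN / 2, 0};  // element i is inside a subarray (impossible before i = 0)
	for (long long x : a) {
		State ndp0 = better(dp0, dp1);
		// continue the current subarray, or start a new one and pay lam
		State cont = {dp1.first + x, dp1.second};
		State start = {dp0.first + x - lam, dp0.second + 1};
		State ndp1 = better(cont, start);
		dp0 = ndp0;
		dp1 = ndp1;
	}
	return better(dp0, dp1);
}
```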
Let $v$ be the maximal achievable sum with $\lambda$ penalty and $c$ be the number of subarrays used to achieve $v$. Then the **maximal possible sum achievable if we use exactly $c$ subarrays is $v+\lambda c$**. Note that we add $\lambda c$ to undo the penalty.
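For instance, if $\lambda = 5$ and the best penalized sum is $v = 12$, achieved with $c = 3$ subarrays, then those three subarrays actually sum to $12 + 5 \cdot 3 = 27$.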
Our goal is to find some $\lambda$ such that $c=K$. As we increase $\lambda$, it makes sense for $c$ to decrease since we are penalizing subarrays more. Thus, we can try to binary search for $\lambda$ to make $c=K$ and set our answer to be $v+\lambda c$ at the optimal $\lambda$.
This idea almost works but there are still some very important caveats and conditions that we have not considered.
### Geometry
Let $f(x)$ be the maximal sum if we use at most $x$ subarrays. We want to find $f(K)$.
The first condition is that $f(x)$ **must be concave or convex**. Since $f(x)$ is increasing in this problem, this means that we want $f(x)$ to be concave: $f(x) - f(x - 1) \ge f(x + 1) - f(x)$. Intuitively, this means that the more subarrays we add, the less each additional subarray increases our answer.
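For a small example, take the array $[3, -1, 4, -1, 5]$: the best sums are $f(1) = 10$, $f(2) = 11$, and $f(3) = f(4) = f(5) = 12$, so the marginal gains $10, 1, 1, 0, 0$ are indeed non-increasing.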
<Spoilertitle="Proof that our function is concave">
</Spoiler>
Consider the following graphs of $f(x)$ and $f(x)-\lambda x$. In this example, we have $\lambda=5$.
Here is where the fact that $f(x)$ is concave comes in. Because the slope is non-increasing, we know that $f(x) - \lambda x$ will first increase, then stay the same, and finally decrease.
Let $v(\lambda)$ be the optimal maximal achievable sum with $\lambda$ penalty and $c(\lambda)$ be the number of subarrays used to achieve $v(\lambda)$ (note that if there are multiple such possibilities, we set $c$ to be the **minimal** number of subarrays to achieve $v$). These values can be calculated in $\mathcal{O}(N)$ time using the dynamic programming approach described above.
When we assign the penalty of $\lambda$, we are trying to find the maximal sum if creating a subarray reduces our sum by $\lambda$. In other words, **we are trying to find the maximum of $f(x) - \lambda x$**.
Without loss of generality, suppose there exists a slope equal to $\lambda$. Given the shape of $f(x) - \lambda x$, we know that $f(x) - \lambda x$ will be maximized at the points where $\lambda$ is equal to the slope of $f(x)$ (these points are red in the graph above). This means that $c(\lambda)$ will be the point at which $\lambda$ is equal to the slope of $f(x)$ (if there are multiple such points, then $c(\lambda)$ will be the leftmost one).
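More precisely, since the slopes of $f(x)$ are non-increasing, a point $x$ maximizes $f(x) - \lambda x$ exactly when

$$f(x) - f(x - 1) \ge \lambda \ge f(x + 1) - f(x),$$

i.e. when $\lambda$ is sandwiched between the slopes on either side of $x$.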
Now we know exactly what $\lambda$ represents: $\lambda$ is the slope and $c(\lambda)$ is the position with slope equal to $\lambda$ (if there are multiple such positions then $c(\lambda)$ is the leftmost one).
We binary search for $\lambda$ and find the smallest $\lambda$ such that $c(\lambda) \le K$. Let the optimal value be $\lambda_{\texttt{opt}}$. Then our answer is $v(\lambda_{\texttt{opt}}) + \lambda_{\texttt{opt}} K$. Note that this works even if $c(\lambda_{\texttt{opt}}) \neq K$ since $c(\lambda_{\texttt{opt}})$ and $K$ will be on the same line with slope $\lambda_{\texttt{opt}}$.
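Here is a sketch of how the binary search might look, reusing the hypothetical `solve_lambda` from the DP sketch above. The search bound $\sum |A[i]|$ works because every slope of $f$ is at most the best single-subarray sum.

```cpp
// assumes solve_lambda(...) from the previous sketch is in scope

/** Sketch: maximum total sum using at most k disjoint subarrays. */
long long max_sum(const vector<long long> &a, long long k) {
	long long lo = 0, hi = 0;
	for (long long x : a) { hi += abs(x); }  // all slopes of f lie in [0, sum |a[i]|]
	// find the smallest lambda with c(lambda) <= k
	while (lo < hi) {
		long long mid = lo + (hi - lo) / 2;
		if (solve_lambda(a, mid).second <= k) {
			hi = mid;  // few enough subarrays: a smaller penalty might also work
		} else {
			lo = mid + 1;  // too many subarrays: the penalty must be larger
		}
	}
	// undo the penalty: v(lambda_opt) + lambda_opt * K
	return solve_lambda(a, lo).first + lo * k;
}
```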
Because calculating $v(\lambda)$ and $c(\lambda)$ with the dynamic programming solution described above takes $\mathcal{O}(N)$ time, this solution runs in $\mathcal{O}(N\log{\sum |A[i]|})$ time.
0 commit comments