This repository was archived by the owner on Sep 27, 2024. It is now read-only.
forked from hadizand/DL_CS_ECG
SL0.py · 103 lines (77 loc) · 3.65 KB
import numpy as np


def SL0(y, A, sigma_min, sigma_decrease_factor=0.5, mu_0=2, L=3, A_pinv=None, showProgress=False):
"""
Returns the sparsest vector `s` that satisfies the underdetermined system of
linear equations `A @ s = y`, using the Smoothed L0 (SL0) algorithm.
Requires:
--------
- numpy as np
Parameters:
----------
y : numpy array
The observed vector (Mx1), where M is the number of rows in `A`.
A : numpy array
The measurement matrix (MxN), which should be 'wide', meaning it has more
columns than rows (N > M). The number of rows in `A` must match the length
of `y`.
sigma_min : float
The minimum value of `sigma`, which determines the stopping criterion for
the algorithm. It should be chosen based on the noise level or desired
accuracy.
sigma_decrease_factor : float, optional (default=0.5)
The factor by which `sigma` is decreased in each iteration. This should be
a positive value less than 1. Smaller values lead to quicker reduction of
`sigma`, possibly at the cost of accuracy for less sparse signals.
mu_0 : float, optional (default=2)
The scaling factor for `mu`, where `mu = mu_0 * sigma^2`. This parameter
influences the convergence rate of the algorithm.
L : int, optional (default=3)
The number of iterations for the inner loop (steepest descent). Increasing
`L` can improve the precision of the result but also increases computational
cost.
A_pinv : numpy array, optional
The precomputed pseudoinverse of the matrix `A`. If not provided, it will be
calculated within the function as `np.linalg.pinv(A)`. Providing this value
is beneficial if the function is called repeatedly with the same `A`.
showProgress : bool, optional (default=False)
If `True`, the function prints the current value of `sigma` during each
iteration, which helps monitor the convergence process.
Returns:
-------
s : numpy array
The estimated sparse signal (Nx1) that best satisfies the equation `A @ s = y`.
Notes:
-----
- The algorithm works by iteratively reducing `sigma` in a geometric sequence,
starting with `sigma = 2 * max(abs(s))` and ending with `sigma_min`. At each
step, the function adjusts `s` to minimize the L0-norm by smoothing it using
a Gaussian kernel.
- The choice of `sigma_min` is crucial: for noiseless cases, a smaller `sigma_min`
yields a sparser solution; for noisy cases, `sigma_min` should be a few times
the standard deviation of the noise in `s`.
- If `A_pinv` is precomputed and passed as an argument, the function becomes
more efficient, especially in scenarios where it is called repeatedly with the
same `A`.
References:
----------
- Original authors (MATLAB): Massoud Babaie-Zadeh, Hossein Mohimani, 4 August 2008.
- Web-page: http://ee.sharif.ir/~SLzero
- Ported to python: RosNaviGator https://github.com/RosNaviGator, 2024
"""
    if A_pinv is None:
        A_pinv = np.linalg.pinv(A)

    # Initialize with the minimum-L2-norm solution of A @ s = y
    s = A_pinv @ y
    sigma = 2 * np.max(np.abs(s))  # np.max handles both 1-D and column vectors

    # Derivative of the Gaussian approximation to the L0 norm (up to a 1/sigma^2 factor)
    OurDelta = lambda s, sigma: s * np.exp(-s**2 / sigma**2)

    # Main loop: shrink sigma geometrically down to sigma_min
    while sigma > sigma_min:
        for _ in range(L):
            # Steepest-descent step on the smoothed objective
            delta = OurDelta(s, sigma)
            s = s - mu_0 * delta
            # Project back onto the feasible set {s : A @ s = y}
            s = s - A_pinv @ (A @ s - y)
        if showProgress:
            print(f'sigma: {sigma}')
        sigma = sigma * sigma_decrease_factor

    return s