
Commit b020b7b

WIP
1 parent 60bf2ca commit b020b7b

5 files changed: +15 -12 lines changed

DifferentiatingPerceptron.ipynb (+3 -3)
@@ -655,7 +655,7 @@
 "views": {}
 },
 "kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -669,9 +669,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.6.7"
+ "version": "3.12.1"
 }
 },
 "nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 4
 }

InformationTheoryOptimization.ipynb (+1 -1)
@@ -191,7 +191,7 @@
 "\\end{align*}\n",
 "where $p_1$ and $p_2$ are probability mass functions and $\\lambda \\in [0,1]$\n",
 "\n",
-"Proof: Let $X$ be a discrete random variable with possible outcomes $\\mathcal{X} := {x_i, i \\in 0,1,\\dots N-1}$ and let $u(x)$ be the probability mass function of a discrete uniform distribution on $X \\in \\mathcal{X}$. Then, the entropy of an arbitrary probability mass function $p(x)$ can be rewritten as\n",
+"Proof: Let $X$ be a discrete random variable with possible outcomes $\\mathcal{X} := {x_i, i \\in 0,1,\\dots N-1}$ and let $u(x)$ be the probability mass function of a discrete uniform distribution on $X \\in \\mathcal{X}$, ie $u(x_i)=\\frac{1}{N}$. Then, the entropy of an arbitrary probability mass function $p(x)$ can be rewritten as\n",
 "\n",
 "\\begin{align*} \\tag{1.2}\n",
 " H(X) &= - \\sum_{i=0}^{N-1} p(x_i)log(p(x_i)) \\\\\n",

OptimalTransportWasserteinDistance.ipynb (+2 -2)
@@ -559,7 +559,7 @@
 "metadata": {},
 "source": [
 "### OT and statistical concepts\n",
-"Some of the basics to understand the following statements can be found in the notebook "InformationTheoryOptimization" this part is also partly a direct reproduction of Marco Cuturi famous article "Sinkhorn Distances: Lightspeed Computation of Optimal Transport"\n",
+"Some of the basics to understand the following statements can be found in the notebook "InformationTheoryOptimization", this part is also partly a direct reproduction of Marco Cuturi famous article "Sinkhorn Distances: Lightspeed Computation of Optimal Transport"\n",
 "\n",
 "I would like to stop and mention that as we now interpret $P$ as a joint probability matrix, we can define its entropy, the marginal probabiilty entropy, and KL-divergence between two different transportation matrix. These takes the form of\n",
 "\n",
@@ -579,7 +579,7 @@
 "\\begin{align*} \\tag{1.5}\n",
 " \\forall r,c \\in \\Sigma_d, \\forall P \\in U(r,c), h(P) \\leq h(r) + h(c)\n",
 "\\end{align*}\n",
-"ie, by using log-sum inequality, proved in the notebook called InformationTheoryOptimization\n",
+"ie, by using log-sum inequality, we proved in the notebook called InformationTheoryOptimization\n",
 "\\begin{align*}\\tag{1.6}\n",
 " \\sum_{i=0}^{N-1} a_i log\\left(\\frac{a_i}{b_i}\\right) &\\geq \\left(\\sum_{i=0}^{N-1} a_i\\right) log\\left(\\frac{\\sum_{i=0}^{N-1}a_i}{\\sum_{i=0}^{N-1}b_i}\\right)\n",
 "\\end{align*}\n",

RegularizationByDenoising.ipynb (+8 -5)
@@ -15,7 +15,7 @@
 "\n",
 "## Introduction\n",
 "\n",
-"This notebook intends to show what are the next steps in terms of regularized image reconstruction. We will try to focus especially in a framework that allows the introduction of deep learning in a proper mathematical framework that allows for prior and data fitting mitigation called: regularization by denoising (RED).\n",
+"This notebook intends to show what are the next steps in terms of regularized image reconstruction. We will try to focus especially in a framework that allows the introduction of deep learning in a proper mathematical framework that allows for prior and data fitting mitigation called: regularization by denoising (RED) but not only.\n",
 "\n",
 "The following paper guided us to write this notebook:\n",
 "\n",
@@ -38,7 +38,10 @@
 "https://ieeexplore.ieee.org/document/9107406\n",
 "\n",
 "* Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition\n",
-"https://arxiv.org/abs/2106.03668"
+"https://arxiv.org/abs/2106.03668\n",
+"\n",
+"* Deep inverse\n",
+"https://github.com/deepinv/deepinv"
 ]
 },
 {
@@ -51,7 +54,7 @@
 ],
 "metadata": {
 "kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -65,9 +68,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.8.10"
+ "version": "3.12.1"
 }
 },
 "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
 }
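Relating to the Introduction cell edited at the top of this file: RED turns a denoiser D into an explicit regularizer whose gradient is x - D(x) (under the usual local-homogeneity and symmetric-Jacobian assumptions on D). A minimal gradient-descent sketch for a least-squares data term; the toy shrinkage denoiser, step size, and all other parameter values are placeholders for illustration, not the notebook's method:

import numpy as np

def red_gradient_descent(y, A, denoiser, lam=0.1, step=0.1, n_iter=200):
    # RED gradient step: x <- x - step * ( A^T (A x - y) + lam * (x - D(x)) ),
    # where x - D(x) is the gradient of the RED prior rho(x) = 0.5 * x^T (x - D(x))
    # under the standard assumptions on the denoiser D.
    x = A.T @ y                              # simple initialization
    for _ in range(n_iter):
        data_grad = A.T @ (A @ x - y)
        prior_grad = x - denoiser(x)
        x = x - step * (data_grad + lam * prior_grad)
    return x

# Toy denoiser used only for illustration (NOT a learned network): mild shrinkage toward the mean.
toy_denoiser = lambda x: 0.9 * x + 0.1 * x.mean()

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20)) / np.sqrt(30)
x_true = rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = red_gradient_descent(y, A, toy_denoiser)
print(np.linalg.norm(x_hat - x_true))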

bayesian_ab_testing.ipynb (+1 -1)
@@ -1960,7 +1960,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.8.6"
+ "version": "3.12.1"
 }
 },
 "nbformat": 4,
