Here the squared norm $\tfrac{1}{2} \| \mathbf{A} \mathbf{x} - \mathbf{y} \|^2$ is a *smooth* function $f$, whereas the $l_1$-norm is a *nonsmooth* function $g$. This problem can be solved with only a few lines of code:
```julia
julia> using StructuredOptimization

julia> n, m = 100, 10;                # problem size

julia> A, y = randn(m, n), randn(m);  # example random problem data

julia> x = Variable(n);               # initialize optimization variable

julia> λ = 1e-2*norm(A'*y, Inf);      # regularization parameter

julia> @minimize ls( A*x - y ) + λ*norm(x, 1);

julia> ~x                             # inspect solution
```
The solution can be accessed by typing `~x`.
By default, variables are initialized with `Array`s of zeros.
Different initializations can be set during construction, `x = Variable( [1.; 0.; ...] )`, or by assignment, `~x .= [1.; 0.; ...]`.
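A minimal sketch of both initialization styles (the concrete values below are illustrative, not taken from this documentation):

```julia
using StructuredOptimization

x = Variable([1.0; 0.0; 2.0])  # contents set at construction (illustrative values)
y = Variable(3)                # zero-initialized by default
~y .= [1.0; 0.0; 2.0]          # overwrite the underlying array by broadcast assignment
```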
## Constrained optimization
A constraint $\mathbf{x} \in \mathcal{S}$, with $\mathcal{S}$ a nonempty set, can be converted into an *indicator function* and treated as the nonsmooth term $g$.
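As a hedged sketch of the constrained syntax (the specific constraint here is illustrative; the `st` keyword separates the cost from the constraints):

```julia
julia> @minimize ls( A*x - y ) st norm(x, 1) <= 1.0  # constrain x to an l1-ball
```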
Currently `StructuredOptimization.jl` supports only *proximal gradient algorithms* (i.e., *forward-backward splitting* based), which require specific properties of the nonsmooth functions and constraints to be applicable. In particular, the nonsmooth functions must have an *efficiently computable proximal mapping*.
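To make "efficiently computable proximal mapping" concrete, here is a sketch using ProximalOperators.jl, the package `StructuredOptimization.jl` builds on (this snippet is illustrative and not part of the original text):

```julia
using ProximalOperators

g = NormL1(0.1)          # the function λ‖·‖₁ with λ = 0.1
x = randn(5)
y, gy = prox(g, x, 1.0)  # y = prox_{γg}(x) with γ = 1.0, computed by soft-thresholding;
                         # gy is the value g(y)
```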
If we express the nonsmooth function $g$ as the composition of a function $\tilde{g}$ with a linear operator $A$, that is $g(\mathbf{x}) = \tilde{g} (A \mathbf{x})$, then the proximal mapping of $g$ is efficiently computable if either of the following holds:
1. Operator $A$ is a *tight frame*, namely it satisfies $A A^* = \mu Id$, where $\mu \geq 0$, $A^*$ is the adjoint of $A$, and $Id$ is the identity operator.
2. Function $g$ is a *separable sum* $g(\mathbf{x}) = \sum_j h_j (B_j \mathbf{x}_j)$, where $\mathbf{x}_j$ are non-overlapping slices of $\mathbf{x}$, and $B_j$ are tight frames.
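For context, rule 1 rests on a standard identity from the proximal-operator literature (not stated in the original text): when $A A^* = \mu Id$ with $\mu > 0$, the proximal mapping of the composition has the closed form

```latex
% closed-form prox of g = \tilde{g} \circ A when A A^* = \mu\,\mathrm{Id}, \mu > 0
\operatorname{prox}_{\gamma g}(\mathbf{x})
  = \mathbf{x} + \mu^{-1} A^* \bigl(
      \operatorname{prox}_{\gamma \mu \tilde{g}}(A \mathbf{x}) - A \mathbf{x}
    \bigr)
```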
Let us analyze these rules with a series of examples.
The LASSO example above satisfies the first rule:
```julia
julia> @minimize ls( A*x - y ) + λ*norm(x, 1)
```
since the nonsmooth function $\lambda \| \cdot \|_1$ is not composed with any operator (or, equivalently, is composed with $Id$, which is a tight frame).

Also the following problem would be accepted by `StructuredOptimization.jl`:
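A plausible instance (offered as an assumption, not necessarily the original example) composes the $l_1$-norm with the DCT, which, being orthogonal, is a tight frame:

```julia
julia> @minimize ls( A*x - y ) + λ*norm(dct(x), 1)  # dct is orthogonal, hence a tight frame
```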