
Commit 2771bb1

Merge pull request #12 from nantonel/master
Fix to nonlinear displaced operators
2 parents d34a2f4 + 5bfe34e commit 2771bb1

23 files changed: +399 −401 lines changed

benchmarks/LineSpectraEstimation.jl

+2 −2
@@ -121,12 +121,12 @@ end
 
 #StructuredOptimization non-Matrix Free
 function solve_problem!(slv::S, x0, y, K, F::A, Fc, lambda, lambda_m) where {S <: StructuredOptimization.ForwardBackwardSolver, A <: AbstractMatrix}
-    it, = @minimize ls(F*x0-y)+lambda_m*norm(x0,1) with slv
+    it, = @minimize ls(F*x0-complex(y))+lambda_m*norm(x0,1) with slv
     return x0, it
 end
 
 function solve_problem_ncvx!(slv::S, x0, y, K, F::A, Fc, lambda, lambda_m) where {S <: StructuredOptimization.ForwardBackwardSolver, A <: AbstractMatrix}
-    it, = @minimize ls(F*x0-y) st norm(x0,0) <= 2*K with slv
+    it, = @minimize ls(F*x0-complex(y)) st norm(x0,0) <= 2*K with slv
     return x0, it
 end
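
Aside: the `complex(y)` wrapper keeps the residual's element type consistent; `F` in this benchmark is presumably complex-valued, so `F*x0` is complex while the measurements `y` are real. A minimal standalone sketch of the promotion (names are illustrative, not the benchmark's):

    F  = randn(ComplexF64, 4, 3)  # stand-in for a complex forward matrix
    x0 = randn(3)
    y  = randn(4)                 # real measurements
    r  = F*x0 .- complex(y)       # promote y so both sides share an eltype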

benchmarks/run_demos.jl

+3 −3
@@ -15,9 +15,9 @@
 #results = MatrixDecomposition.run_demo()
 #MatrixDecomposition.show_results(results...)
 
-#include("DNN.jl")
-#results = DNN.run_demo()
-#DNN.show_results(results...)
+include("DNN.jl")
+results = DNN.run_demo()
+DNN.show_results(results...)
 
 #include("TotalVariation.jl")
 #results = TotalVariation.run_demo()

docs/src/expressions.md

+8 −1
@@ -75,6 +75,12 @@ variation
 
 ### Nonlinear mappings
 ```@docs
+sin
+cos
+atan
+tanh
+exp
+pow
 sigmoid
 ```
 
@@ -86,5 +92,6 @@ Notice that these commands work also for the `Term`s described in [Functions and
 ```@docs
 variables
 operator
-displacement
+affine
+AbstractOperators.displacement
 ```
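
As a hedged illustration of the newly documented mappings (assuming the `Variable` front end shown in the tutorial; the data here is invented):

    using StructuredOptimization
    x  = Variable(5)        # 5-element optimization variable
    e1 = sin(x)             # nonlinear mapping, now listed under Mappings
    e2 = pow(x, 2)          # elementwise power
    e3 = exp(x) - ones(5)   # nonlinear mapping with a displacement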

docs/src/tutorial.md

+12 −12
@@ -2,7 +2,7 @@
 
 ## Standard problem formulation
 
-Currently with `StructuredOptimization.jl` you can solve problems of the form
+Currently with StructuredOptimization.jl one can solve problems of the form
 
 ```math
 \underset{ \mathbf{x} }{\text{minimize}} \ f(\mathbf{x}) + g(\mathbf{x}),
@@ -18,7 +18,7 @@ The *least absolute shrinkage and selection operator* (LASSO) belongs to this cl
 \underset{ \mathbf{x} }{\text{minimize}} \ \tfrac{1}{2} \| \mathbf{A} \mathbf{x} - \mathbf{y} \|^2+ \lambda \| \mathbf{x} \|_1.
 ```
 
-Here the squared norm $\tfrac{1}{2} \| \mathbf{A} \mathbf{x} - \mathbf{y} \|^2$ is a *smooth* function $f$ wherelse the $l_1$-norm is a *nonsmooth* function $g$. This problem can be solved with only few lines of code:
+Here the squared norm $\tfrac{1}{2} \| \mathbf{A} \mathbf{x} - \mathbf{y} \|^2$ is a *smooth* function $f$ whereas the $l_1$-norm is a *nonsmooth* function $g$. This problem can be solved with only few lines of code:
 
 ```julia
 julia> using StructuredOptimization
@@ -52,7 +52,7 @@ julia> ~x # inspect solution
 
 It is possible to access to the solution by typing `~x`.
 By default variables are initialized by `Array`s of zeros.
-Different initializations can be set during construction `x = Variable( [1.; 0.; ...] )` or by assignement `~x .= [1.; 0.; ...]`.
+Different initializations can be set during construction `x = Variable( [1.; 0.; ...] )` or by assignment `~x .= [1.; 0.; ...]`.
 
 ## Constrained optimization
 
@@ -70,12 +70,12 @@ can be converted into an *indicator function*
 ```math
 g(\mathbf{x}) = \delta_{\mathcal{S}} (\mathbf{x}) = \begin{cases}
 0 & \text{if} \ \mathbf{x} \in \mathcal{S},\\
-+\infty & \text{otherwise},
++\infty & \text{otherwise}.
 \end{cases}
 ```
 
-to obtain the standard form. Constraints are treated as *nonsmooth functions*.
-This conversion is automatically performed by `StructuredOptimization.jl`.
+Constraints are treated as *nonsmooth functions*.
+This conversion is automatically performed by StructuredOptimization.jl.
 For example, the non-negative deconvolution problem:
 
 ```math
@@ -85,7 +85,7 @@ For example, the non-negative deconvolution problem:
 \end{align*}
 ```
 
-where $*$ stands fof convoluton and $\mathbf{h}$ contains the taps of a finite impluse response,
+where $*$ stands for convolution and $\mathbf{h}$ contains the taps of a finite impulse response filter,
 can be solved using the following lines of code:
 
 ```julia
@@ -102,7 +102,7 @@ julia> @minimize ls(conv(x,h)-y) st x >= 0.
 !!! note
 
     The convolution mapping was applied to the variable `x` using `conv`.
-    `StructuredOptimization.jl` provides a set of functions that can be
+    StructuredOptimization.jl provides a set of functions that can be
     used to apply specific operators to variables and create mathematical
     expression. The available functions can be found in [Mappings](@ref).
     In general it is more convenient to use these functions instead of matrices,
@@ -134,7 +134,7 @@ julia> @minimize ls(X1*X2-Y) st X1 >= 0., X2 >= 0.
 
 ## Limitations
 
-Currently `StructuredOptimization.jl` supports only *proximal gradient algorithms* (i.e., *forward-backward splitting* base), which require specific properties of the nonsmooth functions and costraint to be applicable. In particular, the nonsmooth functions must have an *efficiently computable proximal mapping*.
+Currently StructuredOptimization.jl supports only *proximal gradient algorithms* (i.e., *forward-backward splitting* base), which require specific properties of the nonsmooth functions and constraint to be applicable. In particular, the nonsmooth functions must have an *efficiently computable proximal mapping*.
 
 If we express the nonsmooth function $g$ as the composition of
 a function $\tilde{g}$ with a linear operator $A$:
@@ -148,7 +148,7 @@ then the proximal mapping of $g$ is efficiently computable if either of the foll
 
 1. Operator $A$ is a *tight frame*, namely it satisfies $A A^* = \mu Id$, where $\mu \geq 0$, $A^*$ is the adjoint of $A$, and $Id$ is the identity operator.
 
-2. Function $g$ is the *separable sum* $g(\mathbf{x}) = \sum_j h_j (B_j \mathbf{x}_j)$, where $\mathbf{x}_j$ are non-overlapping slices of $\mathbf{x}$, and $B_j$ are tight frames.
+2. Function $g$ is a *separable sum* $g(\mathbf{x}) = \sum_j h_j (B_j \mathbf{x}_j)$, where $\mathbf{x}_j$ are non-overlapping slices of $\mathbf{x}$, and $B_j$ are tight frames.
 
 Let us analyze these rules with a series of examples.
 The LASSO example above satisfy the first rule:
@@ -157,8 +157,8 @@ The LASSO example above satisfy the first rule:
 julia> @minimize ls( A*x - y ) + λ*norm(x, 1)
 ```
 
-since the non-smooth function $\lambda \| \cdot \|_1$ is not composed with any operator (or equivalently is composed with $Id$ which is a tight frame).
-Also the following problem would be accepted:
+since the nonsmooth function $\lambda \| \cdot \|_1$ is not composed with any operator (or equivalently is composed with $Id$ which is a tight frame).
+Also the following problem would be accepted by StructuredOptimization.jl:
 
 ```julia
 julia> @minimize ls( A*x - y ) + λ*norm(dct(x), 1)
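
Hedged aside on the last example: `dct` is accepted under the first rule because the orthonormal DCT is a tight frame with $\mu = 1$. A quick numerical check (in recent Julia, `dct` lives in FFTW):

    using FFTW, LinearAlgebra
    x = randn(8)
    norm(idct(dct(x)) - x)  # ≈ 0: the orthonormal DCT satisfies A A* = Id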

src/solvers/build_solve.jl

+4 −2
@@ -39,11 +39,13 @@ function build(terms::Tuple, solver::ForwardBackwardSolver)
         append!(kwargs, [(:Aq, Aq)])
     end
     if !isempty(smooth)
-        fs = extract_functions(smooth)
-        As = extract_operators(x, smooth)
         if is_linear(smooth)
+            fs = extract_functions(smooth)
+            As = extract_operators(x, smooth)
             append!(kwargs, [(:As, As)])
         else
+            fs = extract_functions_nodisp(smooth)
+            As = extract_affines(x, smooth)
             fs = PrecomposeNonlinear(fs, As)
         end
         append!(kwargs, [(:fs, fs)])
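
This hunk is the heart of the fix: for a *linear* smooth term the displacement can be split off and handed to the solver separately, but for a *nonlinear* one the function must be extracted without its displacement and precomposed with the whole affine mapping. A minimal sketch of the distinction outside the package's types (all names here are illustrative):

    f(r) = 0.5 * sum(abs2, r)     # smooth loss, like ls(·)
    G(x) = sin.(x)                # nonlinear mapping
    b = randn(5)                  # displacement
    # Linear case: ls(A*x - b) can treat b as a constant beside the operator.
    # Nonlinear case: the composition must be evaluated as a whole,
    # which is what the PrecomposeNonlinear branch arranges:
    composite(x) = f(G(x) .- b)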

src/solvers/terms_extract.jl

+50 −11
@@ -24,6 +24,14 @@ end
 extract_functions{N}(t::NTuple{N,Term}) = SeparableSum(extract_functions.(t))
 extract_functions(t::Tuple{Term}) = extract_functions(t[1])
 
+# extract functions from terms without displacement
+function extract_functions_nodisp(t::Term)
+    f = t.lambda == 1. ? t.f : Postcompose(t.f, t.lambda)
+    return f
+end
+extract_functions_nodisp{N}(t::NTuple{N,Term}) = SeparableSum(extract_functions_nodisp.(t))
+extract_functions_nodisp(t::Tuple{Term}) = extract_functions_nodisp(t[1])
+
 # extract operators from terms
 
 # returns all operators with an order dictated by xAll
@@ -43,6 +51,48 @@ function extract_operators{N,M}(xAll::NTuple{N,Variable}, t::NTuple{M,Term})
     return vcat(ops...)
 end
 
+sort_and_extract_operators(xAll::Tuple{Variable}, t::Term) = operator(t)
+
+function sort_and_extract_operators{N}(xAll::NTuple{N,Variable}, t::Term)
+    p = zeros(Int,N)
+    xL = variables(t)
+    for i in eachindex(xAll)
+        p[i] = findfirst( xi -> xi == xAll[i], xL)
+    end
+    return operator(t)[p]
+end
+
+# extract affines from terms
+
+# returns all affines with an order dictated by xAll
+
+#single term, single variable
+extract_affines(xAll::Tuple{Variable}, t::Term) = affine(t)
+
+extract_affines{N}(xAll::NTuple{N,Variable}, t::Term) = extract_affines(xAll, (t,))
+
+#multiple terms, multiple variables
+function extract_affines{N,M}(xAll::NTuple{N,Variable}, t::NTuple{M,Term})
+    ops = ()
+    for ti in t
+        tex = expand(xAll,ti)
+        ops = (ops...,sort_and_extract_affines(xAll,tex))
+    end
+    return vcat(ops...)
+end
+
+sort_and_extract_affines(xAll::Tuple{Variable}, t::Term) = affine(t)
+
+function sort_and_extract_affines{N}(xAll::NTuple{N,Variable}, t::Term)
+    p = zeros(Int,N)
+    xL = variables(t)
+    for i in eachindex(xAll)
+        p[i] = findfirst( xi -> xi == xAll[i], xL)
+    end
+    return affine(t)[p]
+end
+
+# expand term domain dimensions
 function expand{N,T1,T2,T3}(xAll::NTuple{N,Variable}, t::Term{T1,T2,T3})
     xt = variables(t)
     C = codomainType(operator(t))
@@ -57,17 +107,6 @@ function expand{N,T1,T2,T3}(xAll::NTuple{N,Variable}, t::Term{T1,T2,T3})
     return Term(t.lambda, t.f, ex)
 end
 
-sort_and_extract_operators(xAll::Tuple{Variable}, t::Term) = operator(t)
-
-function sort_and_extract_operators{N}(xAll::NTuple{N,Variable}, t::Term)
-    p = zeros(Int,N)
-    xL = variables(t)
-    for i in eachindex(xAll)
-        p[i] = findfirst( xi -> xi == xAll[i], xL)
-    end
-    return operator(t)[p]
-end
-
 # extract function and merge operator
 function extract_merge_functions(t::Term)
     if is_sliced(t)
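
The new `sort_and_extract_affines` mirrors the operator version it sits beside: it builds a permutation so a term's affine mappings follow the global variable order. A standalone reduction of that loop, with symbols standing in for `Variable`s (illustrative only):

    xAll = (:x, :y, :z)   # global variable order
    xL   = (:z, :x, :y)   # order in which one term stores its variables
    p = zeros(Int, length(xAll))
    for i in eachindex(xAll)
        p[i] = findfirst(isequal(xAll[i]), collect(xL))
    end
    p                     # == [2, 3, 1]; used to reorder affine(t)[p]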

src/syntax/expressions/abstractOperator_bind.jl

+79 −6
@@ -20,12 +20,7 @@ julia> reshape(A*x-b,2,5)
 function reshape(a::AbstractExpression, dims...)
     A = convert(Expression,a)
     op = Reshape(A.L, dims...)
-    if typeof(displacement(A)) <: Number
-        d = displacement(A)
-    else
-        d = reshape(displacement(A), dims...)
-    end
-    return Expression{length(A.x)}(A.x,op,d)
+    return Expression{length(A.x)}(A.x,op)
 end
 #Reshape
 
@@ -39,6 +34,11 @@ imported = [:getindex :GetIndex;
             :conv     :Conv;
             :xcorr    :Xcorr;
             :filt     :Filt;
+            :exp      :Exp;
+            :cos      :Cos;
+            :sin      :Sin;
+            :atan     :Atan;
+            :tanh     :Tanh;
            ]
 
 exported = [:finitediff :FiniteDiff;
@@ -47,6 +47,7 @@ exported = [:finitediff :FiniteDiff;
             :zeropad :ZeroPad;
             :sigmoid :Sigmoid;
             :σ       :Sigmoid; #alias
+            :pow     :Pow; #alias
            ]
 
 #importing functions from Base
@@ -396,3 +397,75 @@ See documentation of `AbstractOperator.Sigmoid`.
 """
 sigmoid
 σ
+
+"""
+`exp(x::AbstractExpression)`
+
+Exponential function:
+```math
+e^{ \\mathbf{x} }
+```
+
+See documentation of `AbstractOperator.Exp`.
+"""
+exp
+
+"""
+`sin(x::AbstractExpression)`
+
+Sine function:
+```math
+\\sin( \\mathbf{x} )
+```
+
+See documentation of `AbstractOperator.Sin`.
+"""
+sin
+
+"""
+`cos(x::AbstractExpression)`
+
+Cosine function:
+```math
+\\cos( \\mathbf{x} )
+```
+
+See documentation of `AbstractOperator.Cos`.
+"""
+cos
+
+"""
+`atan(x::AbstractExpression)`
+
+Inverse tangent function:
+```math
+\\tan^{-1}( \\mathbf{x} )
+```
+
+See documentation of `AbstractOperator.Atan`.
+"""
+atan
+
+"""
+`tanh(x::AbstractExpression)`
+
+Hyperbolic tangent function:
+```math
+\\tanh ( \\mathbf{x} )
+```
+
+See documentation of `AbstractOperator.Tanh`.
"""
+tanh
+
+"""
+`pow(x::AbstractExpression, n)`
+
+Elementwise power `n` of `x`:
+```math
+x_i^{n} \\ \\forall \\ i = 0,1, \\dots
+```
+
+See documentation of `AbstractOperator.Pow`.
+"""
+pow
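
Putting the new bindings together, a hedged end-to-end sketch (data and regularization weight invented for illustration, following the tutorial's `@minimize` syntax):

    using StructuredOptimization
    x = Variable(10)
    y = randn(10)
    @minimize ls( tanh(x) - y ) + 0.1*norm(x, 1)
    ~x   # inspect the solution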
