Call the optimization callback function

SYNOPSIS

    model               = mrcal.cameramodel('xxx.cameramodel')
    optimization_inputs = model.optimization_inputs()

    b_packed,x,J_packed,factorization = \
        mrcal.optimizer_callback( **optimization_inputs )

Please see the mrcal documentation at
https://mrcal.secretsauce.net/formulation.html for details.

The main optimization routine in mrcal.optimize() searches for optimal
parameters by repeatedly calling a function to evaluate each hypothetical
parameter set. This evaluation function is available by itself here, separated
from the optimization loop. The arguments are largely the same as those to
mrcal.optimize(), but here the inputs are all read-only. Some arguments that
have meaning in calls to optimize() have no meaning in calls to
optimizer_callback(). These are accepted, and effectively ignored. Currently
these are:

- do_apply_outlier_rejection
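
For instance, this is a sketch of evaluating a stored solution without running
the optimizer. As in the SYNOPSIS above, 'xxx.cameramodel' is a placeholder for
a real model that contains optimization_inputs:

    import numpy as np
    import mrcal

    # Load a previously-optimized model and pull out its optimization inputs
    model               = mrcal.cameramodel('xxx.cameramodel')
    optimization_inputs = model.optimization_inputs()

    # One evaluation of the cost function at the stored parameter values. No
    # optimization happens; we get the state, residuals, jacobian and
    # factorization for this single hypothesis
    b_packed, x, J, factorization = \
        mrcal.optimizer_callback(**optimization_inputs)

    # The optimizer would be minimizing norm2(x); report it for this solution
    print(f"norm2(x) = {np.inner(x,x):.3f} over {x.size} measurements")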

ARGUMENTS

This function accepts lots of arguments, but they're the same as the arguments
to mrcal.optimize(), so please see that documentation for details. Arguments
accepted by optimizer_callback() on top of those in optimize():

- no_jacobian: optional boolean defaulting to False. If True, we do not compute
  a jacobian, which speeds up this function; None is returned in its place. If
  no_jacobian and not no_factorization, we still compute and return a jacobian,
  since it's needed for the factorization

- no_factorization: optional boolean defaulting to False. If True, we do not
  compute a Cholesky factorization of JtJ, which speeds up this function; None
  is returned in its place. As noted above, if no_jacobian and not
  no_factorization, we still compute and return a jacobian, since it's needed
  for the factorization
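
A minimal sketch of the fast path, using these flags to skip everything except
the residual vector, and of the interaction between the two flags:

    # Residuals only: skip the jacobian and the factorization. This is the
    # cheapest way to evaluate the cost function
    x = mrcal.optimizer_callback(**optimization_inputs,
                                 no_jacobian      = True,
                                 no_factorization = True)[1]

    # Asking for the factorization forces the jacobian to be computed and
    # returned, even with no_jacobian=True, since the factorization needs it
    b_packed, x, J, factorization = \
        mrcal.optimizer_callback(**optimization_inputs,
                                 no_jacobian      = True,
                                 no_factorization = False)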

RETURNED VALUES

The output is returned in a tuple:

- b_packed: a numpy array of shape (Nstate,). This is the packed (unitless)
  state vector that represents the inputs, as seen by the optimizer. If the
  optimization routine were running, it would use this as a starting point in
  its search for different parameters, trying to find those that minimize
  norm2(x). This packed state can be converted to the expanded representation
  like this:

      b = mrcal.optimizer_callback(**optimization_inputs)[0]
      mrcal.unpack_state(b, **optimization_inputs)

- x: a numpy array of shape (Nmeasurements,). This is the error vector. If the
  optimization routine were running, it would be testing different parameters,
  trying to find those that minimize norm2(x)

- J: a sparse matrix of shape (Nmeasurements,Nstate). These are the gradients
  of the measurements with respect to the packed parameters. This is a SPARSE
  array of type scipy.sparse.csr_matrix. This object can be converted to a
  dense numpy array like this:

      b,x,J_sparse = mrcal.optimizer_callback(...)[:3]
      J_numpy      = J_sparse.toarray()

  Note that the numpy array is dense, so it is very inefficient for sparse
  data, and working with it could be very memory-intensive and slow.

  This jacobian matrix comes directly from the optimization callback function,
  which uses packed, unitless state. To convert a densified packed jacobian to
  full units, one can do this:

      J_sparse = mrcal.optimizer_callback(**optimization_inputs)[2]
      J_numpy  = J_sparse.toarray()
      mrcal.pack_state(J_numpy, **optimization_inputs)

  Note that we're calling pack_state() instead of unpack_state() because the
  packed variables are in the denominator.

- factorization: a Cholesky factorization of JtJ in a
  mrcal.CHOLMOD_factorization object. The core of the optimization algorithm is
  solving a linear system JtJ x = b. J is a large, sparse matrix, so we compute
  this Cholesky factorization of JtJ from the sparse J using the CHOLMOD
  library. This factorization is also useful in other contexts, such as
  uncertainty quantification, so we make it available here; a sketch of its use
  appears below. If the factorization could not be computed (because JtJ isn't
  full-rank, for instance), this is set to None.
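
As an illustration, here is a sketch of using the sparse jacobian and the
factorization together to compute a Gauss-Newton step for the current state,
without ever densifying J. It assumes the factorization object exposes the
solve_xt_JtJ_bt() method to solve linear systems against JtJ; check that this
matches the mrcal version you have:

    import numpy as np
    import mrcal

    b_packed, x, J, factorization = \
        mrcal.optimizer_callback(**optimization_inputs)

    if factorization is not None:
        # Gauss-Newton: JtJ db = -Jt x. Build the right-hand side with the
        # sparse J directly; no dense conversion is needed
        rhs = -J.T.dot(x)                         # shape (Nstate,)

        # Solve against JtJ. We assume the solver expects its right-hand-side
        # vectors in the rows of the argument, so we pass a (1,Nstate) array
        db_packed = factorization.solve_xt_JtJ_bt(rhs.reshape(1,-1)).ravel()

        # The step lives in the same packed, unitless space as b_packed
        print("norm of the Gauss-Newton step:", np.linalg.norm(db_packed))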