Hi @willow-ahrens @hameerabbasi,

This issue is meant to track the progress of implementing the Array API standard for finch-tensor.

I thought that we could try adding short notes to the bullet points, saying which Finch.jl functions should be called to implement a given entry. I think we already had some ideas during one of our first calls.

Array API: https://data-apis.org/array-api/latest/index.html

A note to those who edit this list in the future: Convert bullets to issues, not subissues, as you complete them. Otherwise they get removed from the list.

### Backlog

#### main namespace

- `astype` - API: `finch.astype` function #15 - eager
- element-wise ops (`add`, `multiply`, `cos`, ...) - API: Lazy API #17 (partially...)
- reductions (`xp.prod`, `xp.sum`) - `jl.sum` and `jl.prod`, also just `jl.reduce` - API: Lazy API #17 (see the `sum` sketch after this list)
- `matmul` - implemented with `finch.tensordot` for non-stacked input. Should be rewritten with `jl.mul` / Finch einsum. (see the sketch after this list)
- `tensordot` - `finch.tensordot` - API: Implement `tensordot` and `matmul` #22
- `where` - `jl.broadcast(jl.ifelse, cond, a, b)` - API: Implement `where` and `nonzero` #30 (see the sketch after this list)
- `argmin`/`argmax` - `jl.argmin` (bug Willow if this isn't implemented already) - eager for now #90
- `take` - `jl.getindex` - eager for now
- `nonzero` - this is an eager function, but it is implemented as `ffindnz(arr)` - API: Implement `where` and `nonzero` #30
- creation functions: `asarray`, `ones`, `full`, `full_like`, ... - the `finch.Tensor` constructor, as well as `jl.copyto!(arr, jl.broadcasted(Scalar(1)))`, as well as changing the default (fill) value of the tensor with `Tensor(Dense(Element(1.0)))`. We may need to distinguish some of these. API: Add `asarray` function #28, API: Add `eye` function #32 (see the `asarray` sketch after this list)
- stats functions: `max`, `mean`, `min`, `std`, `var` #106
- set functions: `unique_all`, `unique_counts`, `unique_inverse`, `unique_values` - eager
- `all`, `any`
- `concat` - eager for now
- `expand_dims` - lazy #109
- `flip` - eager for now
- `reshape` - eager for now #107
- `roll` - eager for now
- `squeeze` - lazy #108
- `stack` - eager for now
- `argsort`/`sort` - eager
- `broadcast_arrays` - eager for now
- `broadcast_to` - eager for now
- `can_cast`/`finfo`/`iinfo`/`result_type`
- bitwise ops: `bitwise_and`/`bitwise_left_shift`/`bitwise_invert`/`bitwise_or`/`bitwise_right_shift`/`bitwise_xor`
- Boolean indexing (https://data-apis.org/array-api/latest/API_specification/indexing.html#boolean-array-indexing)
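
A rough sketch of how an eager `sum` could sit on top of `jl.sum`, as the reductions bullet suggests. `jl` here is juliacall's `Main` with Finch loaded; the real wrapper would also have to translate a 0-based Array API `axis` into Julia's 1-based, column-major `dims`, which is only hinted at below (the exact mapping is an assumption, not finch-tensor code).

```python
from juliacall import Main as jl

jl.seval("using Finch")

def sum(arr, dims=None):
    # Full reduction to a scalar when no dims are given.
    if dims is None:
        return jl.sum(arr)
    # Partial reduction; `dims` is taken to already be a 1-based Julia
    # dimension here. The real wrapper would map the Array API `axis`
    # (0-based, and reversed because Julia is column-major) onto this.
    return jl.sum(arr, dims=dims)
```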
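A minimal 2-D-only sketch of `matmul` in terms of `tensordot`, along the lines of the `matmul` bullet. It assumes `finch.tensordot` follows the Array API / NumPy `axes` convention; stacked (batched) and 1-D inputs are exactly the cases that would still need the `jl.mul` / Finch einsum rewrite.

```python
import finch

def matmul(x1, x2):
    # Only the plain 2-D case; batched or 1-D inputs need the jl.mul /
    # einsum rewrite mentioned in the bullet above.
    if x1.ndim != 2 or x2.ndim != 2:
        raise NotImplementedError("stacked or 1-D inputs are not handled here")
    # Contract the last axis of x1 with the first axis of x2.
    return finch.tensordot(x1, x2, axes=((1,), (0,)))
```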
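A sketch of `where` built directly on the Julia call named in the `where` bullet, using juliacall to reach Finch.jl. This bypasses the finch-tensor wrapper entirely, so the objects passed in are plain Finch.jl tensors rather than `finch.Tensor` instances.

```python
from juliacall import Main as jl

jl.seval("using Finch")

def where(condition, x1, x2):
    # Broadcast Julia's `ifelse` element-wise over the three operands.
    return jl.broadcast(jl.ifelse, condition, x1, x2)

# Usage with Finch.jl tensors constructed through juliacall:
cond = jl.seval("Tensor(Dense(Element(false)), [true, false, true])")
a = jl.seval("Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])")
b = jl.seval("Tensor(Dense(Element(0.0)), [10.0, 20.0, 30.0])")
print(where(cond, a, b))
```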
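A minimal sketch of `asarray` on top of the `finch.Tensor` constructor mentioned in the creation bullet. The dtype handling leans on the `finch.astype` entry (#15) and assumes the Array API signature `astype(x, dtype)`; the `copy`/`device` keywords required by the spec are left out.

```python
import numpy as np
import finch

def asarray(obj, dtype=None):
    # Reuse existing tensors; everything else goes through NumPy and the
    # finch.Tensor constructor.
    arr = obj if isinstance(obj, finch.Tensor) else finch.Tensor(np.asarray(obj))
    if dtype is not None:
        arr = finch.astype(arr, dtype)
    return arr
```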

#### `linalg` namespace

(I copied these from the benchmark suite. If something turns out to be infeasible, we can drop it.)

- `linalg.vecdot` - `finch.tensordot`
- `linalg.vector_norm` - `finch.norm` - https://github.com/finch-tensor/finch-tensor-python/pull/94
- `linalg.trace` - eager
- `linalg.tensordot` - implemented in the main namespace. Just needs an alias.
- `linalg.outer` #89
- `linalg.cross` - eager for now
- `linalg.matrix_transpose` - lazy
- `linalg.matrix_power` - eager (call `matmul` on the sparse matrix until it gets too dense; see the sketch after this list)
- `linalg.matrix_norm` - for `nuc` or `2`, call external library. For `fro`, `inf`, `1`, `0`, `-1`, `-inf`, call `jl.norm`.
- `xp.linalg.diagonal` - `finch.tensordot(finch.diagmask(), mtx)` #110
- `xp.linalg.cholesky` - call CHOLMOD or something
- `xp.linalg.det` - call EIGEN or something
- `xp.linalg.eigh` - call external library
- `xp.linalg.eigvalsh` - call external library
- `xp.linalg.inv` - call external library - `scipy.sparse.linalg.inv`
- `xp.linalg.matrix_rank` - call external library
- `xp.linalg.pinv` - call external library
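
An eager sketch of `linalg.matrix_power` per the bullet above: repeated 2-D `tensordot` contractions (i.e. `matmul`), accepting that the product densifies as the power grows. `n == 0` would additionally need the `eye` constructor from #32, and negative powers an inverse, so both are left out; the `axes` convention of `finch.tensordot` is assumed to match NumPy's.

```python
import finch

def matrix_power(x, n):
    if n < 1:
        raise NotImplementedError("n == 0 needs eye (#32); negative n needs an inverse")
    result = x
    for _ in range(n - 1):
        # One matmul step per power; the sparse result may densify over time.
        result = finch.tensordot(result, x, axes=((1,), (0,)))
    return result
```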

#### `Tensor` methods and attributes

- `Tensor.to_device()` - `finch.moveto`

#### miscellaneous

- handling scalars
- benchmarking
- Don't forget about the `array_namespace_info()` function, and the `isdtype()` and `result_type()` data type functions 🙂