Commit 57f5068

Merge pull request #45 from TensorBFS/jg/fix-mmap-constructor

Fix mmap constructor, add UAIInstance pretty printing

2 parents e5ae5a2 + a12e2e1

File tree

18 files changed: +224 −140 lines

.github/workflows/CI.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -19,7 +19,7 @@ jobs:
       matrix:
         version:
           - '1'
-          - 'nightly'
+          # - 'nightly'
         os:
           - ubuntu-latest
         arch:
```

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -3,5 +3,6 @@ Manifest.toml
 *.jl.cov
 *.jl.mem
 /docs/build/
+/docs/src/generated/
 .vscode/
 Session.vim
```

README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -30,8 +30,8 @@ pkg> add TensorInference
 To update, just type `up` in the package mode.
 
 ## Examples
-Examples are in the [example](example) folder, which contains the following list of example problems
-- [asia network](example/asia)
+Examples are in the [examples](examples) folder, which contains the following list of example problems
+- [asia network](examples/asia)
 
 
 ## Supporting and Citing
```

docs/Manifest.toml

Lines changed: 0 additions & 100 deletions
This file was deleted.

docs/Project.toml

Lines changed: 2 additions & 0 deletions

```diff
@@ -1,3 +1,5 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
+LiveServer = "16fef848-5104-11e9-1b77-fb7a48bbb589"
 TensorInference = "c2297e78-99bd-40ad-871d-f50e56b81012"
```

docs/make.jl

Lines changed: 22 additions & 2 deletions

```diff
@@ -1,10 +1,24 @@
 using TensorInference
-using Documenter
+using TensorInference: OMEinsum
+using TensorInference.OMEinsum: OMEinsumContractionOrders
+using Documenter, Literate
+
+# Literate
+const EXAMPLE_DIR = pkgdir(TensorInference, "examples")
+const LITERATE_GENERATED_DIR = pkgdir(TensorInference, "docs", "src", "generated")
+mkpath(LITERATE_GENERATED_DIR)
+for each in readdir(EXAMPLE_DIR)
+    workdir = joinpath(LITERATE_GENERATED_DIR, each)
+    cp(joinpath(EXAMPLE_DIR, each), workdir; force=true)
+    input_file = joinpath(workdir, "main.jl")
+    @info "building" input_file
+    Literate.markdown(input_file, workdir; execute=true)
+end
 
 DocMeta.setdocmeta!(TensorInference, :DocTestSetup, :(using TensorInference); recursive=true)
 
 makedocs(;
-    modules=[TensorInference],
+    modules=[TensorInference, OMEinsumContractionOrders],
     authors="Jin-Guo Liu, Martin Roa Villescas",
     repo="https://github.com/TensorBFS/TensorInference.jl/blob/{commit}{path}#{line}",
     sitename="TensorInference.jl",
@@ -16,7 +30,13 @@ makedocs(;
     ),
     pages=[
         "Home" => "index.md",
+        "Examples" => [
+            "Asia network" => "generated/asia/main.md",
+        ],
+        "Performance Tips" => "performance.md",
+        "References" => "ref.md",
     ],
+    doctest = false,
 )
 
 deploydocs(;
```

docs/serve.jl

Lines changed: 35 additions & 0 deletions (new file)

```julia
function serve(; host::String="0.0.0.0", port::Int=8000)
    # setup environment
    docs_dir = @__DIR__
    julia_cmd = "using Pkg; Pkg.instantiate()"
    run(`$(Base.julia_exename()) --project=$docs_dir -e $julia_cmd`)

    serve_cmd = """
    using LiveServer;
    LiveServer.servedocs(;
        doc_env=false,
        skip_dirs=[
            joinpath("docs", "src", "generated"),
            joinpath("docs", "build"),
        ],
        skip_files=[
            joinpath("docs", "Manifest.toml"),
        ],
        literate="examples",
        host=\"$host\",
        port=$port,
    )
    """
    try
        run(`$(Base.julia_exename()) --project=$docs_dir -e $serve_cmd`)
    catch e
        if e isa InterruptException
            return
        else
            rethrow(e)
        end
    end
    return
end

serve()
```

docs/src/index.md

Lines changed: 1 addition & 8 deletions

````diff
@@ -4,11 +4,4 @@ CurrentModule = TensorInference
 
 # TensorInference
 
-Documentation for [TensorInference](https://github.com/TensorBFS/TensorInference.jl).
-
-```@index
-```
-
-```@autodocs
-Modules = [TensorInference]
-```
+Documentation for [TensorInference](https://github.com/TensorBFS/TensorInference.jl).
````

docs/src/performance.md

Lines changed: 93 additions & 0 deletions (new file)
# Performance Tips

## Optimize contraction orders

Let us use the independent set problem on 3-regular graphs as an example.

```julia
julia> using TensorInference, Artifacts, Pkg

julia> Pkg.ensure_artifact_installed("uai2014", pkgdir(TensorInference, "test", "Artifacts.toml"));

julia> function get_instance_filepaths(problem_name::AbstractString, task::AbstractString)
           model_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai")
           evidence_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai.evid")
           solution_filepath = joinpath(artifact"uai2014", task, problem_name * ".uai." * task)
           return model_filepath, evidence_filepath, solution_filepath
       end

julia> model_filepath, evidence_filepath, solution_filepath = get_instance_filepaths("Promedus_14", "MAR")

julia> instance = read_instance(model_filepath; evidence_filepath, solution_filepath)
```
Next, we select the tensor network contraction order optimizer.

```julia
julia> optimizer = TreeSA(ntrials = 1, niters = 5, βs = 0.1:0.1:100)
```
Here, we choose the local-search-based [`TreeSA`](@ref) algorithm, which often finds the smallest time/space complexity and supports slicing.
Type `?TreeSA` in a Julia REPL for more information about how to configure the hyper-parameters of the [`TreeSA`](@ref) method;
the algorithm itself is explained in detail in [arXiv: 2108.05665](https://arxiv.org/abs/2108.05665).
Alternative tensor network contraction order optimizers include
* [`GreedyMethod`](@ref) (default; fastest to run, but yields the worst contraction complexity)
* [`KaHyParBipartite`](@ref)
* [`SABipartite`](@ref)
```julia
julia> tn = TensorNetworkModel(instance; optimizer)
```
The returned object `tn` contains a field `code` that specifies the tensor network with the optimized contraction order. To check the contraction complexity, type

```julia
julia> contraction_complexity(tn)
```

The returned object contains the log2 values of the number of multiplications, the number of elements in the largest tensor during contraction, and the number of read-write operations on tensor elements.
```julia
julia> p1 = probability(tn)
```
## Slicing technique

For large-scale applications, it is also possible to slice over certain degrees of freedom to reduce the space complexity, i.e.,
loop and accumulate over certain degrees of freedom so that one has a smaller tensor network inside the loop due to the removal of those degrees of freedom.
In the [`TreeSA`](@ref) optimizer, set `nslices` to a value larger than zero to turn on this feature.

```julia
julia> tn = TensorNetworkModel(instance; optimizer=TreeSA());

julia> contraction_complexity(tn)
(20.856518235241687, 16.0, 18.88208476145812)
```
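The space entry of this tuple is the log2 size of the largest intermediate tensor, so it translates directly into a peak-memory estimate. A quick plain-Julia sanity check (a sketch only, assuming `Float64` storage; the variable names are illustrative):

```julia
# Convert the reported log2 space complexity (16.0 above) into bytes.
log2_space = 16.0                       # log2 of the largest tensor's element count
elements = 2.0^log2_space               # 65536 elements
bytes = elements * sizeof(Float64)      # 8 bytes per Float64 element
println("peak tensor size ≈ $(bytes / 1024^2) MiB")   # 0.5 MiB
```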
As a comparison, we slice over 5 degrees of freedom, which can reduce the log2 space complexity by at most 5.
In this application, slicing achieves the largest possible space complexity reduction of 5, while the log2 time and read-write complexities increase by less than 1,
i.e., the peak memory usage is reduced by a factor of ``32``, while the (theoretical) computing time is increased by a factor of ``< 2``.

```julia
julia> tn = TensorNetworkModel(instance; optimizer=TreeSA(nslices=5));

julia> contraction_complexity(tn)
(21.134967710592804, 11.0, 19.84529401927876)
```
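The trade-off quoted above can be verified directly from the two complexity tuples (a plain-Julia sketch; the tuples are copied from the outputs shown):

```julia
# (log2 time, log2 space, log2 read-write) without and with slicing
unsliced = (20.856518235241687, 16.0, 18.88208476145812)
sliced   = (21.134967710592804, 11.0, 19.84529401927876)

space_reduction = 2^(unsliced[2] - sliced[2])   # peak memory shrinks by this factor
time_increase  = 2^(sliced[1] - unsliced[1])    # computing time grows by this factor
println((space_reduction, time_increase))       # factor 32 smaller, factor < 2 slower
```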
## GEMM for Tropical numbers

No extra effort is required to enjoy the BLAS-level speed provided by [`TropicalGEMM`](https://github.com/TensorBFS/TropicalGEMM.jl).
The benchmark in the `TropicalGEMM` repo shows that its performance is close to the theoretical optimum.
A GPU implementation is under development in the GitHub repo [`CuTropicalGEMM.jl`](https://github.com/ArrogantGao/CuTropicalGEMM.jl) as part of the [Open Source Promotion Plan summer program](https://summer-ospp.ac.cn/).
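What `TropicalGEMM` accelerates is matrix multiplication over the max-plus (tropical) semiring, where `max` plays the role of addition and `+` plays the role of multiplication. A naive plain-Julia sketch of the semantics (illustrative only; the real kernel is a SIMD-optimized GEMM):

```julia
# Tropical (max-plus) matrix product: C[i,j] = max_k (A[i,k] + B[k,j]).
function tropical_matmul(A::Matrix{Float64}, B::Matrix{Float64})
    n, m = size(A)
    m2, k = size(B)
    m == m2 || error("inner dimensions must match")
    C = fill(-Inf, n, k)   # -Inf is the "zero" of the max-plus semiring
    for i in 1:n, j in 1:k, l in 1:m
        C[i, j] = max(C[i, j], A[i, l] + B[l, j])
    end
    return C
end

tropical_matmul([1.0 2.0; 3.0 4.0], [5.0 6.0; 7.0 8.0])  # → [9.0 10.0; 11.0 12.0]
```

With `TropicalGEMM` loaded, contractions over tropical element types dispatch to the fast kernel automatically, which is why no extra user effort is needed.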
76+
77+
## Working with GPUs
78+
To upload the computation to GPU, you just add `using CUDA` before calling the `solve` function, and set the keyword argument `usecuda` to `true`.
79+
```julia
80+
julia> using CUDA
81+
[ Info: OMEinsum loaded the CUDA module successfully
82+
83+
julia> marginals(tn; usecuda = true)
84+
```
Functions supporting the `usecuda` keyword argument include
* [`probability`](@ref)
* [`log_probability`](@ref)
* [`marginals`](@ref)
* [`most_probable_config`](@ref)

## Benchmarks

Please check our [paper (link to be added)]().

docs/src/ref.md

Lines changed: 19 additions & 0 deletions (new file)

# References

## TensorInference
```@autodocs
Modules = [TensorInference]
Order = [:function, :type]
Private = false
```

## Tensor Network
```@docs
contraction_complexity
GreedyMethod
TreeSA
SABipartite
KaHyParBipartite
MergeVectors
MergeGreedy
```
