
Commit 499316d

Example: hard core lattice gas (#58)
* new hard core lattice gas example
* suppress output
* polish examples
1 parent b292dd4 commit 499316d


6 files changed: 123 additions, 5 deletions


docs/Project.toml

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
+GenericTensorNetworks = "3521c873-ad32-4bb4-b63d-f4f178f42b49"
 Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
 LiveServer = "16fef848-5104-11e9-1b77-fb7a48bbb589"
 TensorInference = "c2297e78-99bd-40ad-871d-f50e56b81012"

docs/make.jl

Lines changed: 11 additions & 1 deletion
@@ -2,17 +2,26 @@ using TensorInference
 using TensorInference: OMEinsum
 using TensorInference.OMEinsum: OMEinsumContractionOrders
 using Documenter, Literate
+using Pkg
 
 # Literate Examples
 const EXAMPLE_DIR = pkgdir(TensorInference, "examples")
 const LITERATE_GENERATED_DIR = pkgdir(TensorInference, "docs", "src", "generated")
 mkpath(LITERATE_GENERATED_DIR)
 for each in readdir(EXAMPLE_DIR)
+    # setup directory
     workdir = joinpath(LITERATE_GENERATED_DIR, each)
     cp(joinpath(EXAMPLE_DIR, each), workdir; force=true)
+    # NOTE: for convenience, we use the `docs` environment
+    # enter env
+    # Pkg.activate(workdir)
+    # Pkg.instantiate()
+    # build
     input_file = joinpath(workdir, "main.jl")
     @info "building" input_file
     Literate.markdown(input_file, workdir; execute=true)
+    # restore environment
+    # Pkg.activate(Pkg.PREV_ENV_PATH[])
 end
 
 const EXTRA_JL = ["performance.jl"]
@@ -42,7 +51,8 @@ makedocs(;
     "Background" => "background.md",
     "Examples" => [
         "Overview" => "examples-overview.md",
-        "Asia network" => "generated/asia/main.md",
+        "Asia Network" => "generated/asia/main.md",
+        "Hard-core Lattice Gas" => "generated/hard-core-lattice-gas/main.md",
     ],
     "UAI file formats" => "uai-file-formats.md",
     "Performance tips" => "generated/performance.md",
examples/hard-core-lattice-gas/main.jl

Lines changed: 94 additions & 0 deletions
@@ -0,0 +1,94 @@
# # The hard-core lattice gas
# ## Hard-core lattice gas problem
# The hard-core lattice gas is a model used in statistical physics to study the behavior of particles on a lattice, where the particles are subject to an exclusion principle known as the "hard-core" interaction, characterized by a blockade radius: the distance between two particles cannot be smaller than this radius.
#
# * Nath T, Rajesh R. Multiple phase transitions in extended hard-core lattice gas models in two dimensions. Physical Review E, 2014, 90(1): 012120.
# * Fernandes H C M, Arenzon J J, Levin Y. Monte Carlo simulations of two-dimensional hard core lattice gases. The Journal of Chemical Physics, 2007, 126(11).
#
# Let us define a $10 \times 10$ triangular lattice, with unit vectors
# ```math
# \begin{align}
# \vec a &= \left(\begin{matrix}1 \\ 0\end{matrix}\right)\\
# \vec b &= \left(\begin{matrix}\frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{matrix}\right)
# \end{align}
# ```

a, b = (1, 0), (0.5, 0.5*sqrt(3))
Na, Nb = 10, 10
sites = [a .* i .+ b .* j for i=1:Na, j=1:Nb]

# There exist blockade interactions between hard-core particles:
# we connect two lattice sites within the blockade radius by an edge,
# and the two ends of an edge cannot both be occupied by particles.
blockade_radius = 1.1
using GenericTensorNetworks: show_graph, unit_disk_graph
using GenericTensorNetworks.Graphs: edges, nv
graph = unit_disk_graph(vec(sites), blockade_radius)
show_graph(graph; locs=sites, texts=fill("", length(sites)))

# These constraints define an independent set problem characterized by the following energy-based model.
# Let $G = (V, E)$ be a graph, where $V$ is the set of vertices and $E$ is the set of edges. The energy model for the hard-core lattice gas problem is
# ```math
# E(\mathbf{n}) = -\sum_{i \in V}w_i n_i + \infty \sum_{(i, j) \in E} n_i n_j
# ```
# where $n_i \in \{0, 1\}$ is the number of particles at site $i$, and $w_i$ is the weight associated with it. For unweighted graphs, the weights are uniform.
# The solution space of the hard-core lattice gas is equivalent to that of an independent set problem: finding a set of vertices in a graph such that no two vertices in the set are adjacent (i.e., there is no edge connecting them).
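
# To make the energy model concrete, here is a small sketch (not part of the original example)
# that evaluates $E(\mathbf{n})$ for the unweighted case ($w_i = 1$); the helper name
# `hardcore_energy` is ad hoc, and it only uses `edges` and `nv` imported above.
function hardcore_energy(g, n::AbstractVector{Int})
    ## any edge with both ends occupied violates the hard-core constraint
    any(n[e.src] == 1 && n[e.dst] == 1 for e in edges(g)) && return Inf
    ## return a Float64 so the type matches the `Inf` branch
    return Float64(-sum(n))
end
hardcore_energy(graph, zeros(Int, nv(graph)))  # the empty lattice has zero energy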
# One can create a tensor network formulation of the independent set problem with the package [`GenericTensorNetworks.jl`](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
using GenericTensorNetworks
problem = IndependentSet(graph; optimizer=GreedyMethod());

# Solution space properties are discussed in depth on the `GenericTensorNetworks` [documentation page](https://queracomputing.github.io/GenericTensorNetworks.jl/dev/generated/IndependentSet/).
# In this example, we show how to use `TensorInference` to perform probabilistic inference and understand the finite-temperature properties of this statistical physics model.
# We use [`TensorNetworkModel`](@ref) to convert a combinatorial optimization problem to a probabilistic model.
# Here, we let the inverse temperature be $\beta = 3$.

# ## Probabilistic modeling of correlation functions
using TensorInference
β = 3.0
pmodel = TensorNetworkModel(problem, β)

# The partition function of this statistical model can be computed with the [`probability`](@ref) function.
partition_func = probability(pmodel)

# The default return value is a log-rescaled tensor. Use indexing to get the actual value.
partition_func[]
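
# As a quick illustration (not part of the original example), the partition function value gives
# the free energy through the standard relation $F = -\ln Z / \beta$.
free_energy = -log(partition_func[]) / β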

# The marginal probabilities can be computed with the [`marginals`](@ref) function, which measures how likely a site is occupied.
mars = marginals(pmodel)
show_graph(graph; locs=sites, vertex_colors=[(1-b, 1-b, 1-b) for b in getindex.(mars, 2)], texts=fill("", nv(graph)))
# We can see that the sites at the corners are more likely to be occupied.
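
# As a side remark (a sketch, not part of the original example), summing the occupation
# probabilities gives the expected total particle number at this temperature.
sum(getindex.(mars, 2))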
# To obtain two-site correlations, one can set the variables to query marginal probabilities manually.
pmodel2 = TensorNetworkModel(problem, β; mars=[[e.src, e.dst] for e in edges(graph)])
mars = marginals(pmodel2);

# We show the probability that both sites of an edge are unoccupied.
show_graph(graph; locs=sites, edge_colors=[(b=mar[1, 1]; (1-b, 1-b, 1-b)) for mar in mars], texts=fill("", nv(graph)), edge_line_width=5)
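
# As a consistency check (a sketch, not part of the original example), the probability that both
# ends of an edge are occupied should vanish because of the blockade constraint.
maximum(mar[2, 2] for mar in mars)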

# ## The most likely configuration
# The MAP and MMAP estimates can be used to get the most likely configuration given the evidence.
# The relevant function is [`most_probable_config`](@ref).
# If we fix the configuration of the vertex at one corner to 1 (occupied), we get the most probable configuration as below.
pmodel3 = TensorNetworkModel(problem, β; evidence=Dict(1=>1))
mars = marginals(pmodel3)
logp, config = most_probable_config(pmodel3)

# The log probability is 102. Let us visualize the configuration.
show_graph(graph; locs=sites, vertex_colors=[(1-b, 1-b, 1-b) for b in config], texts=fill("", nv(graph)))
# The number of particles is
sum(config)
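
# A remark (not part of the original example): for this unweighted model, the reported log probability
# is the unnormalized log Boltzmann weight, i.e. $\beta$ times the particle number of the configuration
# ($3 \times 34 = 102$ here), rather than a normalized probability.
β * sum(config)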

# If we instead fix the corner vertex to be unoccupied, we get a suboptimal configuration.
pmodel3 = TensorNetworkModel(problem, β; evidence=Dict(1=>0))
logp2, config2 = most_probable_config(pmodel3)

# The log probability is 99, which is much smaller.
show_graph(graph; locs=sites, vertex_colors=[(1-b, 1-b, 1-b) for b in config2], texts=fill("", nv(graph)))
# The number of particles is
sum(config2)

# ## Sampling configurations
# One can use [`sample`](@ref) to generate samples from the hard-core lattice gas at finite temperature.
# The return value is a matrix whose columns correspond to different samples.
configs = sample(pmodel3, 1000)
sizes = sum(configs; dims=1)
[count(==(i), sizes) for i=0:34] # counting sizes
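
# To summarize the sampled distribution of particle numbers (a sketch, not part of the original example),
# one can compute the mean and standard deviation with the `Statistics` standard library.
using Statistics: mean, std
mean(sizes), std(sizes)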

src/cuda.jl

Lines changed: 7 additions & 0 deletions
@@ -8,3 +8,10 @@ end
 
 # NOTE: this interface should be in OMEinsum
 match_arraytype(::Type{<:CuArray{T, N}}, target::AbstractArray{T, N}) where {T, N} = CuArray(target)
+
+function keep_only!(x::CuArray{T}, j) where T
+    CUDA.@allowscalar hotvalue = x[j]
+    fill!(x, zero(T))
+    CUDA.@allowscalar x[j] = hotvalue
+    return x
+end
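
For reference, a minimal sketch (not part of this commit) of what `keep_only!` does, using the plain-array method added in src/map.jl below; since `keep_only!` is an internal helper, it is accessed with the module prefix:

x = [0.5, 2.0, 1.5]
TensorInference.keep_only!(x, 2)   # x becomes [0.0, 2.0, 0.0]; only the entry at index 2 is kept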

src/generictensornetworks.jl

Lines changed: 0 additions & 2 deletions
@@ -1,7 +1,5 @@
 using .GenericTensorNetworks: generate_tensors, GraphProblem, flavors, labels
 
-export probabilistic_model
-
 """
 $TYPEDSIGNATURES
 
src/map.jl

Lines changed: 10 additions & 2 deletions
@@ -3,7 +3,7 @@
 ########### Backward tropical tensor contraction ##############
 # This part is copied from [`GenericTensorNetworks`](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
 function einsum_backward_rule(eins, xs::NTuple{M, AbstractArray{<:Tropical}} where {M}, y, size_dict, dy)
-    return backward_tropical(OMEinsum.getixs(eins), xs, OMEinsum.getiy(eins), y, dy, size_dict)
+    return backward_tropical!(OMEinsum.getixs(eins), xs, OMEinsum.getiy(eins), y, dy, size_dict)
 end
 
 """
@@ -16,7 +16,7 @@ The backward rule for tropical einsum.
 * `ymask` is the boolean mask for gradients,
 * `size_dict` is a key-value map from tensor label to dimension size.
 """
-function backward_tropical(ixs, @nospecialize(xs::Tuple), iy, @nospecialize(y), @nospecialize(ymask), size_dict)
+function backward_tropical!(ixs, @nospecialize(xs::Tuple), iy, @nospecialize(y), @nospecialize(ymask), size_dict)
     y .= masked_inv.(y, ymask)
     masks = []
     for i in eachindex(ixs)
@@ -28,10 +28,18 @@ function backward_tropical(ixs, @nospecialize(xs::Tuple), iy, @nospecialize(y),
         # compute the mask: one entry of `A^{-1}` that equals the corresponding entry in `X` is masked as true.
         j = argmax(xs[i] ./ inv.(A))
         mask = onehot_like(A, j)
+        # zero out irrelevant entries; otherwise, the result may be unreliable.
+        keep_only!(xs[i], j)
         push!(masks, mask)
     end
     return masks
 end
+function keep_only!(x::AbstractArray{T}, j) where T
+    hotvalue = x[j]
+    fill!(x, zero(T))
+    x[j] = hotvalue
+    return x
+end
 masked_inv(x, y) = iszero(y) ? zero(x) : inv(x)
 function onehot_like(A::AbstractArray, j)
     mask = zero(A)
