Commit 0d23e37

Refactor Interfaces (#81)
* update
* save
* NoWeight->UnitWeight
* update
* update
* update
* update
* update
* update docs
* update
* fix CUDA test
* fix document
* fix document
* fix doc build
1 parent 43e64c2 commit 0d23e37

63 files changed: +783, -1092 lines

Makefile

Lines changed: 28 additions & 0 deletions
@@ -0,0 +1,28 @@
+JL = julia --project
+
+default: init test
+
+init:
+	$(JL) -e 'using Pkg; Pkg.precompile()'
+init-docs:
+	$(JL) -e 'using Pkg; Pkg.activate("docs"); Pkg.develop(path="."), Pkg.precompile()'
+
+update:
+	$(JL) -e 'using Pkg; Pkg.update(); Pkg.precompile()'
+update-docs:
+	$(JL) -e 'using Pkg; Pkg.activate("docs"); Pkg.update(); Pkg.precompile()'
+
+test:
+	$(JL) -e 'using Pkg; Pkg.test("GenericTensorNetworks")'
+
+coverage:
+	$(JL) -e 'using Pkg; Pkg.test("GenericTensorNetworks"; coverage=true)'
+
+serve:
+	$(JL) -e 'using Pkg; Pkg.activate("docs"); using LiveServer; servedocs(;skip_dirs=["docs/src/assets", "docs/src/generated"], literate_dir="examples")'
+
+clean:
+	rm -rf docs/build
+	find . -name "*.cov" -type f -print0 | xargs -0 /bin/rm -f
+
+.PHONY: init test coverage serve clean init-docs update update-docs
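
For readers without `make`, the `init` and `test` targets above are thin wrappers around Julia's package manager. A minimal REPL equivalent, assuming the repository root as the working directory, is:

```julia
# Rough equivalent of `make init` followed by `make test`,
# mirroring the recipes in the Makefile above.
using Pkg
Pkg.activate(".")    # same effect as the `--project` flag in the JL variable
Pkg.precompile()     # `make init`
Pkg.test("GenericTensorNetworks")    # `make test`
```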

docs/make.jl

Lines changed: 0 additions & 1 deletion
@@ -40,7 +40,6 @@ makedocs(;
         "Satisfiability problem" => "generated/Satisfiability.md",
         "Set covering problem" => "generated/SetCovering.md",
         "Set packing problem" => "generated/SetPacking.md",
-        "Hyper Spin glass problem" => "generated/HyperSpinGlass.md",
         #"Other problems" => "generated/Others.md",
     ],
     "Topics" => [

docs/src/index.md

Lines changed: 11 additions & 14 deletions
@@ -23,32 +23,29 @@ If you find our paper or software useful in your work, we would be grateful if y
 You can find a set up guide in our [README](https://github.com/QuEraComputing/GenericTensorNetworks.jl).
 To get started, open a Julia REPL and type the following code.
 
-```julia
-julia> using GenericTensorNetworks, Graphs
-
-julia> # using CUDA
-
-julia> solve(
-           IndependentSet(
-               Graphs.random_regular_graph(20, 3);
+```@repl
+using GenericTensorNetworks, Graphs#, CUDA
+solve(
+    GenericTensorNetwork(IndependentSet(
+            Graphs.random_regular_graph(20, 3),
+            UnitWeight()); # default: uniform weight 1
         optimizer = TreeSA(),
-        weights = NoWeight(), # default: uniform weight 1
         openvertices = (), # default: no open vertices
         fixedvertices = Dict() # default: no fixed vertices
     ),
     GraphPolynomial();
     usecuda=false # the default value
 )
-0-dimensional Array{Polynomial{BigInt, :x}, 0}:
-Polynomial(1 + 20*x + 160*x^2 + 659*x^3 + 1500*x^4 + 1883*x^5 + 1223*x^6 + 347*x^7 + 25*x^8)
 ```
 
 Here the main function [`solve`](@ref) takes three input arguments: the problem instance of type [`IndependentSet`](@ref), the property instance of type [`GraphPolynomial`](@ref), and an optional keyword argument `usecuda` that decides whether to use the GPU.
-If one wants to use GPU to accelerate the computation, the `using CUDA` statement must uncommented.
+If one wants to use the GPU to accelerate the computation, the `, CUDA` part should be uncommented.
+
+An [`IndependentSet`](@ref) instance takes two positional arguments to initialize: the graph instance that one wants to solve and the weights for each vertex. Here, we use a random regular graph with 20 vertices and degree 3, and the default uniform weight 1.
 
-The problem instance takes four arguments to initialize, the only positional argument is the graph instance that one wants to solve, the key word argument `optimizer` is for specifying the tensor network optimization algorithm, the key word argument `weights` is for specifying the weights of vertices as either a vector or `NoWeight()`.
+The [`GenericTensorNetwork`](@ref) function is the constructor for the problem instance; it takes the problem as its first argument and optional keyword arguments. The keyword argument `optimizer` specifies the tensor network optimization algorithm.
 The keyword argument `openvertices` is a tuple of labels for specifying the degrees of freedom not summed over, and `fixedvertices` is a label-value dictionary for specifying the fixed values of the degrees of freedom.
-Here, we use [`TreeSA`](@ref) method as the tensor network optimizer, and leave `weights` and `openvertices` the default values.
+Here, we use the [`TreeSA`](@ref) method as the tensor network optimizer, and leave `openvertices` at its default value.
 The [`TreeSA`](@ref) method finds the best contraction order in most of our applications, while the default [`GreedyMethod`](@ref) runs the fastest.
 
 The first execution of this function will be a bit slow due to Julia's just in time compiling.
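
As a quick check of the refactored interface, the sketch below computes the maximum independent set size for the same kind of problem. It is not part of this commit; it assumes only names visible in the diff (`IndependentSet`, `UnitWeight`, [`GenericTensorNetwork`](@ref), [`TreeSA`](@ref)) plus the existing [`SizeMax`](@ref) property:

```julia
using GenericTensorNetworks, Graphs

# Two-step construction introduced by this refactor:
# first the problem, then the tensor network.
iset = IndependentSet(smallgraph(:petersen), UnitWeight())
net = GenericTensorNetwork(iset; optimizer=TreeSA())

solve(net, SizeMax())[]  # maximum independent set size, as a Tropical number
```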

docs/src/performancetips.md

Lines changed: 25 additions & 55 deletions
@@ -2,17 +2,15 @@
 ## Optimize contraction orders
 
 Let us use the independent set problem on 3-regular graphs as an example.
-```julia
-julia> using GenericTensorNetworks, Graphs, Random
-
-julia> graph = random_regular_graph(120, 3)
-{120, 180} undirected simple Int64 graph
-
-julia> problem = IndependentSet(graph; optimizer=TreeSA(
-           sc_target=20, sc_weight=1.0, rw_weight=3.0, ntrials=10, βs=0.01:0.1:15.0, niters=20), simplifier=MergeGreedy());
+```@repl performancetips
+using GenericTensorNetworks, Graphs, Random
+graph = random_regular_graph(120, 3)
+iset = IndependentSet(graph)
+problem = GenericTensorNetwork(iset; optimizer=TreeSA(
+    sc_target=20, sc_weight=1.0, rw_weight=3.0, ntrials=10, βs=0.01:0.1:15.0, niters=20));
 ```
 
-The [`IndependentSet`](@ref) constructor maps an independent set problem to a tensor network with optimized contraction order.
+The [`GenericTensorNetwork`](@ref) constructor maps an independent set problem to a tensor network with optimized contraction order.
 The keyword argument `optimizer` specifies the contraction order optimizer of the tensor network.
 Here, we choose the local search based [`TreeSA`](@ref) algorithm, which often finds the smallest time/space complexity and supports slicing.
 One can type `?TreeSA` in a Julia REPL for more information about how to configure the hyper-parameters of the [`TreeSA`](@ref) method,
@@ -22,46 +20,32 @@ Alternative tensor network contraction order optimizers include
 * [`KaHyParBipartite`](@ref)
 * [`SABipartite`](@ref)
 
-The keyword argument `simplifier` specifies the preprocessor to improve the searching speed of the contraction order finding.
 For example, the `MergeGreedy()` here "contracts" tensors greedily whenever the contraction result has a smaller space complexity.
 It can remove all vertex tensors (vectors) before entering the contraction order optimization algorithm.
 
 The returned object `problem` contains a field `code` that specifies the tensor network with optimized contraction order.
 For an independent set problem, the optimal contraction time/space complexity is ``\sim 2^{{\rm tw}(G)}``, where ``{\rm tw}(G)`` is the [tree-width](https://en.wikipedia.org/wiki/Treewidth) of ``G``.
 One can check the time, space and read-write complexity with the [`contraction_complexity`](@ref) function.
 
-```julia
-julia> contraction_complexity(problem)
-Time complexity (number of element-wise multiplications) = 2^20.568850503058382
-Space complexity (number of elements in the largest intermediate tensor) = 2^16.0
-Read-write complexity (number of element-wise read and write) = 2^18.70474460304404
+```@repl performancetips
+contraction_complexity(problem)
 ```
 
 The return values are `log2` values of the number of multiplications, the number of elements in the largest tensor during contraction, and the number of read-write operations on tensor elements.
 In this example, the number of `*` operations is ``\sim 2^{21.9}``, the number of read-write operations is ``\sim 2^{20}``, and the largest tensor size is ``2^{17}``.
 One can check the element size by typing
-```julia
-julia> sizeof(TropicalF64)
-8
-
-julia> sizeof(TropicalF32)
-4
-
-julia> sizeof(StaticBitVector{200,4})
-32
-
-julia> sizeof(TruncatedPoly{5,Float64,Float64})
-48
+```@repl performancetips
+sizeof(TropicalF64)
+sizeof(TropicalF32)
+sizeof(StaticBitVector{200,4})
+sizeof(TruncatedPoly{5,Float64,Float64})
 ```
 
 One can use [`estimate_memory`](@ref) to get a good estimation of peak memory in bytes.
 For example, to compute the graph polynomial, the peak memory can be estimated as follows.
-```julia
-julia> estimate_memory(problem, GraphPolynomial(; method=:finitefield))
-297616
-
-julia> estimate_memory(problem, GraphPolynomial(; method=:polynomial))
-71427840
+```@repl performancetips
+estimate_memory(problem, GraphPolynomial(; method=:finitefield))
+estimate_memory(problem, GraphPolynomial(; method=:polynomial))
 ```
 The finite field approach only requires 298 KB memory, while using the [`Polynomial`](https://juliamath.github.io/Polynomials.jl/stable/polynomials/polynomial/#Polynomial-2) number type requires 71 MB memory.

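As a sanity check of the figures quoted above (297616 bytes is roughly 298 KB and 71427840 bytes roughly 71 MB), one can pretty-print the estimates. Note that `Base.format_bytes` is an internal, non-exported Julia helper (used by `@time`), so treat this as a convenience sketch rather than public API:

```julia
# Pretty-print the two peak-memory estimates quoted in the text.
Base.format_bytes(297616)    # ≈ "290.641 KiB"
Base.format_bytes(71427840)  # ≈ "68.119 MiB"
```
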
@@ -75,25 +59,11 @@ For large scale applications, it is also possible to slice over certain degrees
 loop and accumulate over certain degrees of freedom so that one can have a smaller tensor network inside the loop due to the removal of these degrees of freedom.
 In the [`TreeSA`](@ref) optimizer, one can set `nslices` to a value larger than zero to turn on this feature.
 
-```julia
-julia> using GenericTensorNetworks, Graphs, Random
-
-julia> graph = random_regular_graph(120, 3)
-{120, 180} undirected simple Int64 graph
-
-julia> problem = IndependentSet(graph; optimizer=TreeSA(βs=0.01:0.1:25.0, ntrials=10, niters=10));
-
-julia> contraction_complexity(problem)
-Time complexity (number of element-wise multiplications) = 2^20.53277253647484
-Space complexity (number of elements in the largest intermediate tensor) = 2^16.0
-Read-write complexity (number of element-wise read and write) = 2^19.34699193791874
-
-julia> problem = IndependentSet(graph; optimizer=TreeSA(βs=0.01:0.1:25.0, ntrials=10, niters=10, nslices=5));
-
-julia> contraction_complexity(problem)
-Time complexity (number of element-wise multiplications) = 2^21.117277836449578
-Space complexity (number of elements in the largest intermediate tensor) = 2^11.0
-Read-write complexity (number of element-wise read and write) = 2^19.854965754099602
+```@repl performancetips
+problem = GenericTensorNetwork(iset; optimizer=TreeSA(βs=0.01:0.1:25.0, ntrials=10, niters=10));
+contraction_complexity(problem)
+problem = GenericTensorNetwork(iset; optimizer=TreeSA(βs=0.01:0.1:25.0, ntrials=10, niters=10, nslices=5));
+contraction_complexity(problem)
 ```
 
 In the second `IndependentSet` constructor, we slice over 5 degrees of freedom, which can reduce the space complexity by at most 5.
@@ -103,7 +73,7 @@ i.e. the peak memory usage is reduced by a factor ``32``, while the (theoretical
 ## GEMM for Tropical numbers
 One can speed up the Tropical number matrix multiplication when computing the solution space property [`SizeMax`](@ref)`()` by using the Tropical GEMM routines implemented in package [`TropicalGEMM`](https://github.com/TensorBFS/TropicalGEMM.jl/).
 
-```julia
+```julia-repl
 julia> using BenchmarkTools
 
 julia> @btime solve(problem, SizeMax())
@@ -139,7 +109,7 @@ results = multiprocess_run(collect(1:10)) do seed
     n = 10
     @info "Graph size $n x $n, seed= $seed"
     g = random_diagonal_coupled_graph(n, n, 0.8)
-    gp = Independence(g; optimizer=TreeSA(), simplifier=MergeGreedy())
+    gp = GenericTensorNetwork(IndependentSet(g); optimizer=TreeSA())
     res = solve(gp, GraphPolynomial())[]
     return res
 end
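
For reference, a serial version of the same sweep, a sketch that drops the package's `multiprocess_run` helper in favor of a plain `map`, would be:

```julia
using GenericTensorNetworks, Graphs

# Same body as the multiprocess version above, run in a single process.
results = map(1:10) do seed
    n = 10
    @info "Graph size $n x $n, seed= $seed"
    g = random_diagonal_coupled_graph(n, n, 0.8)
    gp = GenericTensorNetwork(IndependentSet(g); optimizer=TreeSA())
    solve(gp, GraphPolynomial())[]
end
```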
@@ -165,7 +135,7 @@ You will see a vector of polynomials printed out.
 
 ## Make use of GPUs
 To upload the computation to GPU, you just add `using CUDA` before calling the `solve` function, and set the keyword argument `usecuda` to `true`.
-```julia
+```julia-repl
 julia> using CUDA
 [ Info: OMEinsum loaded the CUDA module successfully
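
The hunk is truncated here. Following the sentence above, a minimal GPU invocation is sketched below; it assumes a CUDA-capable device and a `problem` built as in the earlier sections:

```julia
using CUDA  # loads the CUDA module of OMEinsum

# `problem` is a GenericTensorNetwork instance from the previous examples.
solve(problem, SizeMax(); usecuda=true)
```
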
docs/src/ref.md

Lines changed: 2 additions & 5 deletions
@@ -2,14 +2,14 @@
 ## Graph problems
 ```@docs
 solve
+GenericTensorNetwork
 GraphProblem
 IndependentSet
 MaximalIS
 Matching
 Coloring
 DominatingSet
 SpinGlass
-HyperSpinGlass
 MaxCut
 PaintShop
 Satisfiability
@@ -27,14 +27,12 @@ Interfaces [`GenericTensorNetworks.generate_tensors`](@ref), [`labels`](@ref), [
 ```@docs
 GenericTensorNetworks.generate_tensors
 labels
-terms
+energy_terms
 flavors
 get_weights
 chweights
 nflavor
 fixedvertices
-
-extract_result
 ```
 
 #### Graph Problem Utilities
@@ -49,7 +47,6 @@ is_set_packing
 
 cut_size
 spinglass_energy
-hyperspinglass_energy
 num_paint_shop_color_switch
 paint_shop_coloring_from_config
 mis_compactify!
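
The visible interface change here is the rename `terms` to `energy_terms`. A hedged usage sketch, assuming the new function takes a graph problem just as the old `terms` did:

```julia
using GenericTensorNetworks, Graphs

iset = IndependentSet(smallgraph(:petersen))
energy_terms(iset)  # labels of the degrees of freedom that carry weights
```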

docs/src/sumproduct.md

Lines changed: 9 additions & 31 deletions
@@ -3,45 +3,23 @@
 It is a sum-product expression tree to store [`ConfigEnumerator`](@ref) in a lazy style, where configurations can be extracted by depth first searching the tree with the `Base.collect` method.
 Although it is space efficient, it is in general not easy to extract information from it due to the exponentially large configuration space.
 Directed sampling is one of its most important operations, with which one can get some statistical properties from it with an intermediate effort. For example, if we want to check some property of an intermediate scale graph, one can type
-```julia
-julia> graph = random_regular_graph(70, 3)
-
-julia> problem = IndependentSet(graph; optimizer=TreeSA());
-
-julia> tree = solve(problem, ConfigsAll(; tree_storage=true))[];
-16633909006371
+```@repl sumproduct
+graph = random_regular_graph(70, 3)
+problem = GenericTensorNetwork(IndependentSet(graph); optimizer=TreeSA());
+tree = solve(problem, ConfigsAll(; tree_storage=true))[]
 ```
 If one wants to store these configurations, one will need a hard disk of size 256 TB!
 However, this sum-product binary tree structure supports efficient and unbiased direct sampling.
 
-```julia
-samples = generate_samples(tree, 1000);
+```@repl sumproduct
+samples = generate_samples(tree, 1000)
 ```
 
 With these samples, one can already compute useful properties like the Hamming distance distribution (see [`hamming_distribution`](@ref)).
 
-```julia
-julia> using UnicodePlots
-
-julia> lineplot(hamming_distribution(samples, samples))
-[removed: a UnicodePlots line plot of the pair-wise Hamming distance distribution; x-axis 0 to 80, y-axis 0 to 100000]
+```@repl sumproduct
+using UnicodePlots
+lineplot(hamming_distribution(samples, samples))
 ```
 
 Here, the ``x``-axis is the Hamming distance and the ``y``-axis is the counting of pair-wise Hamming distances.
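
Beyond Hamming statistics, one might also verify that the sampled configurations are valid. A sketch using the package's `is_independent_set` utility on the `graph` and `samples` defined above:

```julia
# Every sample drawn from the tree should satisfy the independence constraint.
all(c -> is_independent_set(graph, c), samples)  # expected: true
```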

examples/Coloring.jl

Lines changed: 6 additions & 4 deletions
@@ -19,9 +19,11 @@ locations = [[rot15(0.0, 2.0, i) for i=0:4]..., [rot15(0.0, 1.0, i) for i=0:4]..
 show_graph(graph; locs=locations, format=:svg)
 
 # ## Generic tensor network representation
-#
-# We construct the tensor network for the 3-coloring problem as
-problem = Coloring{3}(graph);
+# We can define the 3-coloring problem with the [`Coloring`](@ref) type as
+coloring = Coloring{3}(graph)
+
+# The tensor network representation of the 3-coloring problem can be obtained by
+problem = GenericTensorNetwork(coloring)
 
 # ### Theory (can skip)
 # Type [`Coloring`](@ref) can be used for constructing the tensor network with optimized contraction order for a coloring problem.
@@ -67,7 +69,7 @@ show_graph(linegraph; locs=[0.5 .* (locations[e.src] .+ locations[e.dst])
 # Let us construct the tensor network and see if there are solutions.
 lineproblem = Coloring{3}(linegraph);
 
-num_of_coloring = solve(lineproblem, CountingMax())[]
+num_of_coloring = solve(GenericTensorNetwork(lineproblem), CountingMax())[]
 
 # You will see the maximum size 28 is smaller than the number of edges in the `linegraph`,
 # meaning no solution for the 3-coloring on edges of a Petersen graph.
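
If a concrete coloring is wanted rather than just the count, one option is the [`SingleConfigMax`](@ref) property. The sketch below is not part of this diff, and the `.c.data` field access follows the pattern used elsewhere in the package docs:

```julia
# One coloring that attains the maximum number of properly colored edges.
best = solve(GenericTensorNetwork(coloring), SingleConfigMax())[]
best.c.data  # the stored configuration: one color (0, 1 or 2) per vertex
```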

examples/DominatingSet.jl

Lines changed: 5 additions & 1 deletion
@@ -23,7 +23,11 @@ show_graph(graph; locs=locations, format=:svg)
 
 # ## Generic tensor network representation
 # We can use [`DominatingSet`](@ref) to construct the tensor network for solving the dominating set problem as
-problem = DominatingSet(graph; optimizer=TreeSA());
+dom = DominatingSet(graph)
+
+# The tensor network representation of the dominating set problem can be obtained by
+problem = GenericTensorNetwork(dom; optimizer=TreeSA())
+# where the keyword argument `optimizer` specifies the tensor network contraction order optimizer, here the local search based optimizer [`TreeSA`](@ref).
 
 # ### Theory (can skip)
 # Let ``G=(V,E)`` be the target graph that we want to solve.
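
As a usage note (not part of this diff), the minimum dominating set size can then be computed from `problem` with the existing [`SizeMin`](@ref) property:

```julia
# Minimum dominating set size; for the Petersen graph used in this
# example, the answer should be 3.
solve(problem, SizeMin())[]
```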
