Toward defining the interface of AbstractArrays #10064

@jiahao

What is an AbstractArray?

This issue follows up from #987 (and complements #9586), where the question was raised about what it means to be an AbstractArray. My understanding of where this issue left off was that AbstractArray{T,N} has the following interface:

  • size is defined and returns a tuple of length N,
  • getindex is defined and returns data of type T "cheaply", where cheaply means "in O(1) time preferably, but we will allow some small growth bounded very loosely above by O(length(A)) to accommodate sparse matrices".

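As a concrete illustration, here is a minimal sketch of this two-method interface; SquaredRange is a hypothetical type, not part of Base:

```julia
# Minimal sketch of the interface above: the type stores no data and computes
# each element on demand in O(1) time; iteration, printing, collect, etc. all
# fall out of the generic AbstractArray fallbacks.
struct SquaredRange <: AbstractArray{Int,1}
    n::Int
end

Base.size(r::SquaredRange) = (r.n,)           # a tuple of length N = 1
Base.getindex(r::SquaredRange, i::Int) = i^2  # O(1) "cheap" element access

r = SquaredRange(5)
@assert r[3] == 9
@assert collect(r) == [1, 4, 9, 16, 25]       # iteration comes for free
```
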
This interface is incompletely defined because Julia arrays support nontrivial indexing semantics, not all of which have the same computational complexity.

The question

This issue is about spelling out the methods supported by AbstractArrays, and in particular:

  • the minimal set of indexing behaviors that AbstractArrays must support, and
  • the complexity guarantees that each kind of indexing semantics must satisfy.

Indexing semantics

Julia arrays support several indexing semantics:

  • Indexing with a number (ordinary indexing)
  • Indexing with a Colon :, UnitRange or Range (slice indexing)
  • Indexing with an iterable (fancy indexing)
  • Indexing with a boolean vector (logical indexing)

For arrays of rank N>1 we have to further distinguish between indexing with a
single entity (linear indexing) vs indexing with N entities (Cartesian
indexing). So this gives us a matrix of 4 x 2 = 8 different indexing behaviors.
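
For a rank-2 array, these behaviors look like the following in current Julia syntax:

```julia
A = reshape(collect(1:12), 3, 4)

A[2, 3]          # ordinary Cartesian indexing
A[8]             # ordinary linear indexing
A[1:2, 2:3]      # slice (Cartesian) indexing with UnitRanges
A[:, 4]          # slice indexing with a Colon
A[[3, 1, 2], 1]  # fancy indexing with an iterable
A[A .> 10]       # logical indexing with a boolean array (linear here)
```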

Some complexity computations

@andreasnoack @jakebolewski and I sat down this afternoon to work out the complexity of indexing semantics of various kinds of arrays. The goal of these calculations is to help clarify the importance of complexity bounds on the interface of AbstractArrays.

We considered the following types of arrays:

  • strided arrays: arrays which are completely described by dope vectors
    containing shape information, or their equivalent. The corresponding Julia
    type is StridedArray, which is the union of SubArray and DenseArray (which
    in turn includes ordinary Array).
  • ragged arrays or "vectors of vectors", whose shape information and layout
    can be represented by Iliffe vectors. Each dimension need not be stored
    contiguously, but in general will be stored in M distinct chunks of
    contiguous memory. For simplicity we consider only N-dimensional arrays
    produced by ragged arrays of ragged arrays, nested N-1 times. We also
    distinguish between sorted and unsorted variants. Ragged arrays are the
    default in Java, but don't really exist in Julia for M>1. For M=1 chunk
    per dimension, they correspond to the recursive family of types Vector{T},
    Vector{Vector{T}}, Vector{Vector{Vector{T}}}, ... (see the sketch after
    this list). Also, DArrays with customizable chunking schemes support some
    form of chunking where each chunk must live on a different process.
  • compressed sparse column (CSC) matrices, a.k.a. compressed column storage
    or CCS, which many people in #987 (array type hierarchy) want to be
    AbstractArrays. Base Julia has SparseMatrixCSC.
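
A minimal sketch of the M=1 ragged case in current Julia:

```julia
# The M = 1 recursive family mentioned above: rows of a Vector{Vector{Int}}
# may have different lengths and need not be adjacent in memory.
vv = [collect(1:3), collect(1:5), collect(1:2)]

vv[2][4]     # "Cartesian" indexing: one pointer chase per dimension
length.(vv)  # ragged shape: [3, 5, 2]
```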

Where necessary the analysis was simplified by assuming a uniform prior distribution of matrix elements in each column and each row.

The results

Let

  • N be the rank of an array (i.e., the number of dimensions, or the length of the index tuple associated with a matrix element),
  • M be the number of chunks in a single ragged array dimension,
  • k be the length of a Range, Colon or iterable (= k1 * k2 * ... * kN in
    the case of using N of these entities),
  • ncol be the number of columns,
  • NNZ be the number of stored elements.

Considering only linear indexing and Cartesian indexing using the same type of
entity in each dimension, we can write the computational complexity of each
indexing behavior as follows:

| indexing behavior | array type | linear | Cartesian |
| --- | --- | --- | --- |
| ordinary | strided | O(N) | O(N) |
| ordinary | ragged sorted | O(N log M) | O(N log M) |
| ordinary | ragged unsorted | O(N M) | O(N M) |
| ordinary | CSC | ~O(log(NNZ/ncol)) | ~O(log(NNZ/ncol)) |
| slicing (view) | strided | O(N) | O(N) |
| slicing (view) | ragged sorted | O(min(M, k) N log M) | O(min(M, k) N log M) |
| slicing (view) | ragged unsorted | ~O(k M^(N-1)) | ~O(k M^(N-1)) |
| slicing (view) | CSC | ~O(k1 log(NNZ/ncol)) | ~O(k1 log(NNZ/ncol)) |
| slicing (copy) | strided | O(k) | O(k) |
| slicing (copy) | ragged sorted | ~O(M^N) | ~O(M^N) |
| slicing (copy) | ragged unsorted | ~O(k M^(N-1)) | ~O(k M^(N-1)) |
| slicing (copy) | CSC | ~O(k ncol/NNZ log(NNZ/ncol)) | ~O(k ncol/NNZ log(NNZ/ncol)) |
| iterable | strided | O(k N) | O(k N) |
| iterable | ragged sorted | O(k N log M) | O(k N log M) |
| iterable | ragged unsorted | O(k N M) | O(k N M) |
| iterable | CSC | ~O(k log(NNZ/ncol)) | ~O(k log(NNZ/ncol)) |
| logical | strided | O(NNZ) | O(NNZ) |
| logical | ragged sorted | O(NNZ N) | O(NNZ N) |
| logical | ragged unsorted | O(NNZ N M) | O(NNZ N M) |
| logical | CSC | O(NNZ log(NNZ/ncol)) | O(NNZ log(NNZ/ncol)) |

In no case did I find that the cost of the Cartesian-to-linear offset computation dominates the complexity.

Note that we have different results depending on whether slicing returns

  • a view, where only the index set of the referenced elements needs to be
    computed, or
  • a copy, where all the matrix elements also have to be fetched.
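
In current Julia syntax the distinction looks like this:

```julia
A = rand(4, 4)

s = A[1:2, :]        # slicing as a copy: every element is fetched up front
v = view(A, 1:2, :)  # slicing as a view: only the index set is computed

v[1, 1] = -1.0       # writes through to A; the copy s is unaffected
@assert A[1, 1] == -1.0
```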

Breakdown of the computations

Strided array

  • Ordinary indexing is easy: computing the offset grows as the rank (see the
    sketch after this list).
  • Slicing to a view only requires computing the new bounds from the input
    Range.
  • A slice that produces a copy means fetching every element, which dominates
    the cost.
  • Indexing with an iterable requires an ordinary index operation per element.
  • Logical slices can only be computed by iterating through each element in
    each dimension.
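
A sketch of that O(N) offset computation; linear_offset is a hypothetical helper, not a Base function:

```julia
# Computing the linear offset of a Cartesian index into a strided array costs
# one multiply-add per dimension, hence O(N) in the rank N.
function linear_offset(str::NTuple{N,Int}, I::NTuple{N,Int}) where {N}
    offset = 1
    for d in 1:N
        offset += (I[d] - 1) * str[d]
    end
    return offset
end

A = reshape(collect(1:24), 2, 3, 4)
@assert A[linear_offset(strides(A), (2, 3, 1))] == A[2, 3, 1]
```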

Ragged sorted array

  • If the chunks in each ragged dimension have sorted indexes, then lookup in
    ordinary indexing is a sorted search problem in each dimension. We get
    log(M) from binary search (see the sketch after this list).
  • Slicing is also essentially 2 searches per dimension (plus more work, which
    does not dominate, if the slice has a nonunit stride), because at every
    dimension you have to compute the intersection of two intervals, namely
    the slice and the index set of that chunk.
  • Iterables are repeated sorted searches.
  • Logical slices traverse every chunk, so the chunk structure doesn't really
    matter.
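
A minimal sketch of one sorted ragged dimension; SortedRaggedDim and its layout are hypothetical:

```julia
# One ragged dimension stored as M chunks; the global index at which each
# chunk begins is kept sorted, so lookup is a binary search, O(log M).
struct SortedRaggedDim{T}
    starts::Vector{Int}       # sorted global start index of each chunk
    chunks::Vector{Vector{T}}
end

function Base.getindex(r::SortedRaggedDim, i::Int)
    c = searchsortedlast(r.starts, i)   # binary search over chunk starts
    return r.chunks[c][i - r.starts[c] + 1]
end

r = SortedRaggedDim([1, 4, 9], [collect(1:3), collect(4:8), collect(9:12)])
@assert r[6] == 6
```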

Ragged unsorted array (indexes for each chunk are not sorted)

  • Ordinary indexing is a linear (unsorted) search problem in each dimension
    (see the sketch after this list).
  • Slicing means a general set intersection computation for the index set of
    each chunk. Assuming a uniform prior on the distribution of entries in
    chunks, we expect something like NNZ/M^N entries in each index set (per
    dimension) and on average k/M matches per chunk, so we expect to do
    O((k/M) * M^N) = O(k M^(N-1)) work. The work is the same to compute the
    index set for a view as it is to copy.
  • Indexing by an iterable is repeated set intersections.
  • Logical slices have to be evaluated over all chunks, but each entry still
    has to be mapped to a chunk by linear search each time.
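
The unsorted counterpart of the earlier sketch (again hypothetical):

```julia
# Without sorted chunk starts, each lookup scans the chunks one by one:
# O(M) per dimension instead of O(log M).
struct UnsortedRaggedDim{T}
    starts::Vector{Int}       # global start index of each chunk, in no order
    chunks::Vector{Vector{T}}
end

function Base.getindex(r::UnsortedRaggedDim, i::Int)
    for (c, s) in enumerate(r.starts)
        if s <= i < s + length(r.chunks[c])
            return r.chunks[c][i - s + 1]
        end
    end
    throw(BoundsError(r, i))
end

r = UnsortedRaggedDim([9, 1, 4], [collect(9:12), collect(1:3), collect(4:8)])
@assert r[6] == 6
```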

CSC

  • Ordinary indexing means looking up the column pointers of the current and
    next columns, then searching the row indexes stored in the resulting
    half-open interval (see the sketch after this list). Again assuming for
    simplicity a uniform prior distribution of entries, the expected number of
    entries in any given column is O(NNZ/ncol). Since the entries are sorted
    by row index, we can use a sorted search, so the complexity of lookup is
    O(log(NNZ/ncol)).
  • Slicing means looking up the boundaries of the interval and indexing into
    each, so the cost is O(log(NNZ/ncol)). When the stride of the slice is
    nonunit there is extra work that needs to be done, but I haven't checked
    the cost of this yet. The probability of matching one element is
    ~ncol/NNZ, so we expect to retrieve k * ncol/NNZ stored elements.
  • Indexing by an iterable reduces to multiple single-element lookups.
  • Logical indexing can be computed by traversing all the stored entries in
    order.
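
A sketch of the ordinary-indexing lookup against the actual SparseMatrixCSC layout (colptr, rowval, nzval); csc_getindex is a hypothetical helper, not the Base method:

```julia
using SparseArrays

# colptr gives the half-open range of stored entries for column j; the row
# indexes in that range are sorted, so lookup is a binary search over the
# expected O(NNZ/ncol) entries in the column.
function csc_getindex(A::SparseMatrixCSC{Tv}, i::Int, j::Int) where {Tv}
    lo, hi = A.colptr[j], A.colptr[j+1] - 1        # stored entries of column j
    p = searchsortedfirst(view(A.rowval, lo:hi), i) + lo - 1
    return (p <= hi && A.rowval[p] == i) ? A.nzval[p] : zero(Tv)
end

A = sparse([1, 3, 2], [1, 1, 2], [10.0, 30.0, 22.0], 3, 2)
@assert csc_getindex(A, 3, 1) == 30.0
@assert csc_getindex(A, 2, 1) == 0.0   # structural zero
```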

Cartesian indexing behaviors mixing different indexing types in each dimension
can get more complicated. CSC is an interesting one to look at:

  • A[:,5] returning a view costs O(1), since the answer is given immediately
    by looking up the start and end of the column in the list of column
    offsets.
  • A[:,5] returning a copy costs ~O(NNZ/ncol), the expected number of entries
    in the column.
  • A[5,:] returning a view costs O(ncol log(NNZ/ncol)), since you have to do
    a sorted search in each column.
  • A[5,:] returning a copy costs O(NNZ log(NNZ/ncol)), since you have to
    compute the index set and then copy the elements, of which on average we
    expect NNZ/ncol.
