srrcomp offers compression techniques grounded in structured random rotation, with strong theoretical guarantees, as detailed in the following publications:
- Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, and Michael Mitzenmacher. "DRIVE: One-bit Distributed Mean Estimation." Advances in Neural Information Processing Systems 34 (2021): 362-377.
- Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, and Michael Mitzenmacher. "EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning." In International Conference on Machine Learning, pp. 21984-22014. PMLR, 2022.
For a high-level overview, see the blog post "Pushing the Limits of Network Efficiency for Federated Machine Learning".
In particular, srrcomp can be used for:
- Fast and efficient lossy compression.
- Unbiased estimates.
- Distributed mean estimation.
- Compressing gradient updates in distributed and federated learning.
The implementation is torch-based and thus supports CPU and GPU.
Compression and decompression operations are carried out on the device where the associated vector is located.
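The rotate-quantize-unrotate idea behind DRIVE can be sketched in a few lines of torch. This is an illustrative simplification and not srrcomp's API: the function names (`fwht`, `compress`, `decompress`) and the use of the biased-variant scale are assumptions made for the sketch. Because everything is expressed as tensor operations, the same code runs unchanged on CPU or GPU tensors.

```python
import torch

def fwht(x):
    # Normalized fast Walsh-Hadamard transform; len(x) must be a power of two.
    # The normalized transform is orthonormal and its own inverse.
    d = x.shape[0]
    y, h = x.clone(), 1
    while h < d:
        y = y.view(-1, 2 * h)
        a, b = y[:, :h].clone(), y[:, h:].clone()
        y[:, :h], y[:, h:] = a + b, a - b
        y = y.view(-1)
        h *= 2
    return y / d ** 0.5

def rand_signs(d, seed, device):
    # Shared-seed random sign flips: sender and receiver regenerate the
    # same rotation without transmitting it.
    g = torch.Generator(device=device).manual_seed(seed)
    return (torch.randint(0, 2, (d,), generator=g, device=device) * 2 - 1).float()

def compress(x, seed):
    # Structured random rotation (sign flips + Hadamard), then one sign
    # bit per coordinate plus a single scalar scale.
    r = fwht(x * rand_signs(x.shape[0], seed, x.device))
    bits = r >= 0
    scale = (r.norm() ** 2 / r.abs().sum()).item()
    return bits, scale

def decompress(bits, scale, seed):
    # Rebuild the quantized rotated vector and invert the rotation,
    # on whatever device the bits live on.
    r_hat = (bits.float() * 2 - 1) * scale
    return fwht(r_hat) * rand_signs(bits.shape[0], seed, bits.device)
```

With this choice of scale, the reconstruction satisfies ⟨x̂, x⟩ = ‖x‖² exactly, so averaging many clients' estimates recovers the mean well; the papers analyze this variant and an unbiased one in detail.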
srrcomp currently contains the implementation of EDEN.
srrcomp offers some functions in CUDA for faster execution (up to an order of magnitude). This acceleration requires local compilation with compatible nvcc/torch/python versions.
The 'gpuacctype' argument, which specifies the GPU acceleration type, defaults to 'cuda' but can be changed to 'torch' to utilize the torch-based implementation.
The torch-based implementation is utilized when CUDA acceleration is unavailable, such as when working with CPU-based vectors or when local CUDA compilation hasn't been performed.
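The fallback rule described above amounts to a simple dispatch check. The sketch below is a plausible reconstruction, not srrcomp's internals; the extension module name `srrcomp_cuda_ext` and the function `select_backend` are invented for illustration:

```python
import torch

try:
    import srrcomp_cuda_ext  # hypothetical name for the locally compiled extension
    _HAVE_CUDA_EXT = True
except ImportError:
    _HAVE_CUDA_EXT = False

def select_backend(x: torch.Tensor, gpuacctype: str = "cuda") -> str:
    # The CUDA kernels apply only when the tensor lives on a GPU, the
    # extension compiled successfully, and the caller did not opt out
    # by passing gpuacctype='torch'. Everything else uses plain torch.
    if gpuacctype == "cuda" and x.is_cuda and _HAVE_CUDA_EXT:
        return "cuda"
    return "torch"
```

CPU tensors always take the torch path, which is why the package still works without any local CUDA compilation.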
- torch
- numpy
- [Optional] nvcc for compiling the aforementioned CUDA functions for faster execution
- Linux: $ pip install srrcomp
- Windows: $ pip install srrcomp --extra-index-url https://download.pytorch.org/whl/ --no-cache
If the message "Faster CUDA implementation for Hadamard and bit packing is not available. Using torch implementation instead." appears when importing srrcomp on a GPU machine, try installing srrcomp from source.
For Windows and Ubuntu versions earlier than 22.04, download the source from the official repository and run $ python setup.py install
For Ubuntu 22.04, use build, pip, and other standards-based tools to build and install from source.
Execute from the tests folder:
$ python basic_test.py
The dim, bits, and seed variables can be modified within the script.
Execute from the tests folder:
$ python dme_test.py
Use $ python dme_test.py -h to get the test options.
Shay Vargaftik (VMware Research), [email protected]
Yaniv Ben-Itzhak (VMware Research), [email protected]