```python
import numpy as np
import pandas as pd

nrows = 10**7
ncols = 10
ngroups = 6
qs = [0.5, 0.75]

arr = np.random.randn(nrows, ncols)
df = pd.DataFrame(arr)
df["A"] = np.random.randint(ngroups, size=nrows)
gb = df.groupby("A")

%timeit v1 = gb.quantile(qs)
39.6 s ± 1.74 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit v2 = {key: gb.get_group(key).quantile(qs) for key in gb.groups}
3.37 s ± 316 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

alt = pd.concat(v2).drop("A", axis=1)
alt.index.names = ["A", None]
assert alt.equals(v1)
```
Is GroupBy.quantile doing dramatically too much work?
It looks like if you push ngroups high enough (parity is around 10**4), the non-Cython per-group version becomes slower than the Cython `GroupBy.quantile` path.
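A minimal sketch of how one might locate that crossover by re-running the benchmark above at several group counts. The 10**4 parity figure is the observation reported here, not something this sketch verifies; the `timeit`-based harness (instead of `%timeit`) and the chosen ngroups values are assumptions for running it as a plain script.

```python
# Sketch: compare GroupBy.quantile against the per-group workaround
# at increasing group counts to see where their timings cross over.
import timeit

import numpy as np
import pandas as pd

nrows = 10**7
ncols = 10
qs = [0.5, 0.75]

for ngroups in [6, 10**2, 10**4, 10**5]:
    arr = np.random.randn(nrows, ncols)
    df = pd.DataFrame(arr)
    df["A"] = np.random.randint(ngroups, size=nrows)
    gb = df.groupby("A")

    # Cython path: quantile over the whole grouped frame at once.
    t_groupby = timeit.timeit(lambda: gb.quantile(qs), number=1)

    # Per-group path: quantile each group separately.
    t_pergroup = timeit.timeit(
        lambda: {key: gb.get_group(key).quantile(qs) for key in gb.groups},
        number=1,
    )

    print(f"ngroups={ngroups}: gb.quantile {t_groupby:.1f}s, per-group {t_pergroup:.1f}s")
```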