Commit d25b3a6

George Wright authored

Add a CSV output option to displaylist_benchmark_parser.py and add a README detailing how to run and process benchmarks on iOS/Android (flutter#31481)

1 parent 9ee933b

2 files changed: +112 −2 lines changed
New README file: 68 additions & 0 deletions
# Running and Processing DisplayList Benchmarks

The DisplayList benchmarks for Flutter are used to determine the relative
cost of operations in order to assign scores to each op for the raster
cache’s cache admission algorithm.

Due to the nature of benchmarking, these need to be run on actual devices in
order to get representative results, and because of the more locked-down
nature of the iOS and Android platforms, getting the benchmark suite to run
is a little involved.

This document details the steps involved in getting the benchmarks to run
on both iOS and Android, and how to process the resulting data.

## iOS

iOS does not allow you to run unsigned code or arbitrary executables, so the
approach here is to build a dylib containing the benchmarking code, which is
then linked to a skeleton test app in Xcode. The dylib contains an exported C
function, `void RunBenchmarks(int argc, char **argv)`, that should be called
from the skeleton test app to run the benchmarks.

The dylib is not built by default, so it needs to be specified manually as a
target when calling ninja.

The target name is `ios_display_list_benchmarks`, e.g.:

```
$ ninja -C out/ios_profile ios_display_list_benchmarks
```
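
If `out/ios_profile` has not been generated yet, it can be created first with
the engine's gn wrapper (a sketch assuming the standard engine build
workflow):

```
$ ./flutter/tools/gn --ios --runtime-mode profile
```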

Once that dylib exists, the IosBenchmarks test app in `flutter/testing/ios`
can be loaded in Xcode. Ensure that the team is set appropriately so the code
can be signed, and that `FLUTTER_ENGINE` matches the Flutter Engine build you
wish to use (e.g. `ios_profile`).

Once that is done, you can hit the Run button and the JSON output will be
written to the Xcode console. Copy it elsewhere and save it as a `.json` file.

Note: you may need to delete some error messages from the console output that
are unrelated to the JSON output before the file will parse as valid JSON.

## Android

On Android, even on non-rooted devices, it is possible to execute unsigned
binaries using adb. As a result, there is a build target that builds a binary
that can be pushed to a device using `adb push` and executed using
`adb shell`. The only caveat is that the binary needs to be on a volume that
isn’t mounted as `noexec`, which typically rules out the SD card;
`/data/local/tmp` is an option that is typically available.

The build target is called `display_list_benchmarks` and creates a binary
called `display_list_benchmarks` in the root of the output directory
(e.g. `out/android_profile_arm64`).
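
If the build directory does not exist yet, here is a sketch of generating it
and building the target, assuming the engine's usual gn/ninja workflow
(adjust the runtime mode and CPU as needed):

```
$ ./flutter/tools/gn --android --runtime-mode profile --android-cpu arm64
$ ninja -C out/android_profile_arm64 display_list_benchmarks
```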

```
$ adb push out/android_profile_arm64/display_list_benchmarks /data/local/tmp/display_list_benchmarks
$ adb shell /data/local/tmp/display_list_benchmarks --benchmark_format=json | tee android-results.json
```

The results in `android-results.json` can then be processed.

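The harness is a Google Benchmark binary (hence `--benchmark_format=json`),
so Google Benchmark's other standard flags should also be accepted; for
example, to run only a subset of the benchmarks (the filter pattern here is
illustrative):

```
$ adb shell /data/local/tmp/display_list_benchmarks --benchmark_filter=DrawLine --benchmark_format=json
```
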
## Processing Results

There is a script in `flutter/testing/benchmark` called
`displaylist_benchmark_parser.py` which takes the JSON file and outputs a PDF
with graphs of all the benchmark series, as well as a CSV that can be imported
into a spreadsheet for further analysis.
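
For example, assuming a Python 3 interpreter with matplotlib available (the
flag names and their defaults come from the script's argparse definitions in
the diff below; the output file names here are illustrative):

```
$ python3 displaylist_benchmark_parser.py android-results.json -o results.pdf -c results.csv
```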

This can then be manually analysed to determine the relative weightings for the
raster cache’s cache admission algorithm.

testing/benchmark/displaylist_benchmark_parser.py

Lines changed: 44 additions & 2 deletions
```diff
@@ -5,6 +5,7 @@
 # found in the LICENSE file.
 
 import argparse
+import csv
 import json
 import sys
 import matplotlib.pyplot as plt
@@ -20,6 +21,7 @@ def __init__(self, name, backend, timeUnit, drawCallCount):
     self.yLimit = 200
     self.timeUnit = timeUnit
     self.drawCallCount = drawCallCount
+    self.optionalValues = {}
 
   def __repr__(self):
     return 'Name: % s\nBackend: % s\nSeries: % s\nSeriesLabels: % s\n' % (self.name, self.backend, self.series, self.seriesLabels)
@@ -34,6 +36,12 @@ def addDataPoint(self, family, x, y):
     if y > self.yLimit:
       self.largeYValues = True
 
+  def addOptionalValue(self, name, x, y):
+    if name not in self.optionalValues:
+      self.optionalValues[name] = {}
+
+    self.optionalValues[name][x] = y
+
   def setFamilyLabel(self, family, label):
     # I'm not keying the main series dict off the family label
     # just in case we get data where the two aren't a 1:1 mapping
@@ -93,17 +101,41 @@ def plot(self):
 
     return figures
 
+  def writeCSV(self, writer):
+    # For now assume that all our series have the same x values
+    # this is true for now, but may differ in the future with benchmark changes
+    x_values = []
+    y_values = []
+    for family in self.series:
+      x_values = ['x'] + self.series[family]['x']
+      y_values.append([self.seriesLabels[family]] + self.series[family]['y'])
+
+    for name in self.optionalValues:
+      column = [name]
+      for key in self.optionalValues[name]:
+        column.append(self.optionalValues[name][key])
+      y_values.append(column)
+
+    writer.writerow([self.name, self.drawCallCount])
+    for line in range(len(x_values)):
+      row = [x_values[line]]
+      for series in range(len(y_values)):
+        row.append(y_values[series][line])
+      writer.writerow(row)
+
 def main():
   parser = argparse.ArgumentParser()
 
   parser.add_argument('filename', action='store',
                       help='Path to the JSON output from Google Benchmark')
   parser.add_argument('-o', '--output-pdf', dest='outputPDF', action='store', default='output.pdf',
                       help='Filename to output the PDF of graphs to.')
+  parser.add_argument('-c', '--output-csv', dest='outputCSV', action='store', default='output.csv',
+                      help='Filename to output the CSV data to.')
 
   args = parser.parse_args()
   jsonData = parseJSON(args.filename)
-  return processBenchmarkData(jsonData, args.outputPDF)
+  return processBenchmarkData(jsonData, args.outputPDF, args.outputCSV)
 
 def error(message):
   print(message)
@@ -127,7 +159,7 @@ def extractAttributesLabel(benchmarkResult):
 
   return label[:-2]
 
-def processBenchmarkData(benchmarkJSON, outputPDF):
+def processBenchmarkData(benchmarkJSON, outputPDF, outputCSV):
   benchmarkResultsData = {}
 
   for benchmarkResult in benchmarkJSON:
@@ -169,18 +201,28 @@ def processBenchmarkData(benchmarkJSON, outputPDF):
     else:
       benchmarkDrawCallCount = -1
 
+    optional_keys = ['DrawCallCount_Varies', 'VerbCount', 'PointCount', 'VertexCount', 'GlyphCount']
+
     if benchmarkName not in benchmarkResultsData:
      benchmarkResultsData[benchmarkName] = BenchmarkResult(benchmarkName, benchmarkBackend, benchmarkUnit, benchmarkDrawCallCount)
 
+    for key in optional_keys:
+      if key in benchmarkResult:
+        benchmarkResultsData[benchmarkName].addOptionalValue(key, benchmarkSeededValue, benchmarkResult[key])
+
     benchmarkResultsData[benchmarkName].addDataPoint(benchmarkFamilyIndex, benchmarkSeededValue, benchmarkRealTime)
     benchmarkResultsData[benchmarkName].setFamilyLabel(benchmarkFamilyIndex, benchmarkFamilyLabel)
 
   pp = pdfp(outputPDF)
 
+  csv_file = open(outputCSV, 'w')
+  csv_writer = csv.writer(csv_file)
+
   for benchmark in benchmarkResultsData:
     figures = benchmarkResultsData[benchmark].plot()
     for fig in figures:
       pp.savefig(fig)
+    benchmarkResultsData[benchmark].writeCSV(csv_writer)
   pp.close()
 
 
```
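
To make the CSV layout concrete, here is a minimal, hypothetical reader
sketch (not part of the commit). It assumes the structure `writeCSV` emits
above: per benchmark, a `[name, drawCallCount]` header row, an `x`-prefixed
label row, then one row per x value with each series' y value and any
optional counts in the matching columns. It also assumes benchmark names
never parse as numbers; the `read_benchmark_csv` name and the default path
are illustrative.

```python
import csv

def read_benchmark_csv(path='output.csv'):
  # 'output.csv' mirrors the script's --output-csv default.
  benchmarks = {}
  name = None
  with open(path, newline='') as csv_file:
    for row in csv.reader(csv_file):
      if not row:
        continue
      if row[0] == 'x':
        # Label row: names of each series / optional-value column.
        benchmarks[name]['labels'] = row[1:]
        continue
      try:
        x = float(row[0])
      except ValueError:
        # Header row: benchmark name and its draw call count.
        name = row[0]
        benchmarks[name] = {'drawCallCount': row[1], 'labels': [], 'rows': []}
        continue
      # Data row: x value, then one value per labeled column.
      benchmarks[name]['rows'].append([x] + row[1:])
  return benchmarks
```

Importing the file into a spreadsheet, as the README suggests, gives the same
view; the sketch only illustrates the row structure.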
