Commit 302e0ac

committed
Update user scenarios
1 parent a8d42e7 commit 302e0ac

4 files changed: 31 additions, 10 deletions

rfcs/20201027-modular-tensorflow-graph-c-api.md

Lines changed: 31 additions & 10 deletions
@@ -116,7 +116,27 @@ When initializing, TensorFlow loads the plugin and registers a new graph optimizer
 ### Supported User Scenarios
 
 This section describes user scenarios for plugin graph optimizer.
-Plugin graph optimizer is targeting backend device specific optimization, and only one optimizer is allowed to be registered per device type, so device type will be used as key to decide whether TensorFlow proper needs to run this optimizer by checking graph device type and registered device type. To simplify multiple optimizers coordination and avoid optimization conflict, multiple optimizers cannot register to the same device type. If more than one optimizer is registered to the same device type, these optimizers's initialization would fail due to registration conflict. Users need to manually select which optimizer they want to use by unloading the conflicting plugin.
+
+* **Supported scenario**: Each plugin can register its own graph optimizer.
+
+The plugin graph optimizer targets backend device-specific optimization. TensorFlow proper fully controls the plugin's behavior: a plugin can register its own graph optimizer, and optimizers for other device types are not allowed. TensorFlow proper runs the plugin optimizer only when the graph device type matches the registered device type.
+
+<p align="center">
+<img src="20201027-modular-tensorflow-graph-c-api/scenario1.png" height="100"/>
+</p>
+
+* **Unsupported scenario**: A plugin cannot register multiple graph optimizers.
+
+To simplify coordination among multiple optimizers and avoid optimization conflicts, multiple optimizers cannot register for the same device type. If more than one optimizer is registered for the same device type, initialization of those optimizers fails due to the registration conflict, and users must manually select the optimizer they want by unloading the conflicting plugin.
+<p align="center">
+<img src="20201027-modular-tensorflow-graph-c-api/scenario2.png" height="150"/>
+</p>
+
+* **Undefined scenario**: Registering a graph optimizer without a pluggable device.
+
+<p align="center">
+<img src="20201027-modular-tensorflow-graph-c-api/scenario3.png" height="100"/>
+</p>
 
 ### Front-end python use case
 
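The one-optimizer-per-device-type rule in the hunk above can be sketched as a small registry keyed by device type, where a second registration for the same key is rejected as a conflict. This is an illustrative sketch only: `OptimizerRegistry`, `Register`, and `OptimizerFn` are hypothetical names, not part of the proposed C API, and the graph arguments are elided.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch of the registration rule: device type is the key,
// and a second optimizer for the same device type is rejected.
using OptimizerFn = std::function<void()>;  // graph in/out elided

class OptimizerRegistry {
 public:
  // Returns false (registration conflict) if the device type is taken.
  bool Register(const std::string& device_type, OptimizerFn fn) {
    return optimizers_.emplace(device_type, std::move(fn)).second;
  }
  // Proper would run a plugin optimizer only when the graph's device
  // type matches a registered device type.
  bool HasOptimizerFor(const std::string& device_type) const {
    return optimizers_.count(device_type) != 0;
  }

 private:
  std::map<std::string, OptimizerFn> optimizers_;
};
```

Rejecting the second registration at initialization time, rather than ordering the optimizers, is what lets proper avoid defining any cross-plugin coordination protocol.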
@@ -131,9 +151,9 @@ Flag `use_plugin_optimizers` is provided for front-end python users to control t
 ```
 
 This API can be used to:
-1. Turn on/off all registered plugin graph optimizers. By default, the registered optimizers are turned on, users can turn off them. If the registered optimizers are turned on and the graph device type is matched with registered device type, they would be runnning.
-2. Use recommended configuration of existing optimizers.
-If pluggable graph optimizer is registered to a device type, e.g., GPU, it is optional for plugin authors to provide a recommended configuration indicate whether some of existing optimizers in proper can be turned on/off, by populating flags in `TP_OptimizerRegistrationParams`.
+* Turn on/off all registered plugin graph optimizers. By default, registered optimizers are turned on; users can turn them off. A registered optimizer runs only when it is turned on and the graph device type matches its registered device type.
+* Use the recommended configuration of existing optimizers.
+If a pluggable graph optimizer is registered for a device type, e.g., GPU, plugin authors can optionally provide a recommended configuration that indicates whether some existing optimizers in proper should be turned on/off, by populating flags in `TP_OptimizerRegistrationParams`.
 
 ```cpp
 TF_Bool get_remapping() { return false; }
@@ -208,11 +228,11 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
   void* ext;  // reserved for future use
   void* (*create_func)();
   void (*optimize_func)(void*, TF_Buffer*, TF_Buffer*);
-  void (*delete_func)(void*);
+  void (*destory_func)(void*);
 } TP_Optimizer;
 
 #define TP_OPTIMIZER_STRUCT_SIZE \
-  TF_OFFSET_OF_END(TP_Optimizer, delete_func)
+  TF_OFFSET_OF_END(TP_Optimizer, destory_func)
 
 typedef struct TP_OptimizerRegistrationParams {
   size_t struct_size;
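`TP_OPTIMIZER_STRUCT_SIZE` above is a struct-size versioning idiom: the constant records the offset just past the last member, so proper can tell which fields a plugin built against an older header actually populated. A minimal self-contained sketch, assuming the common `offsetof`-based definition of `TF_OFFSET_OF_END` and replacing `TF_Buffer*` parameters with `void*` so the snippet compiles on its own:

```cpp
#include <cstddef>

// Offset just past MEMBER: offset of MEMBER plus its size.
#define TF_OFFSET_OF_END(TYPE, MEMBER) \
  (offsetof(TYPE, MEMBER) + sizeof(((TYPE*)0)->MEMBER))

typedef struct TP_Optimizer {
  size_t struct_size;
  void* ext;  // reserved for future use
  void* (*create_func)();
  void (*optimize_func)(void*, void*, void*);  // TF_Buffer* elided
  void (*destory_func)(void*);
} TP_Optimizer;

// Size of the struct as of its last known member, not sizeof(TP_Optimizer):
// members appended later do not change this value.
#define TP_OPTIMIZER_STRUCT_SIZE \
  TF_OFFSET_OF_END(TP_Optimizer, destory_func)
```

Because the constant is computed from the last member rather than `sizeof`, a plugin compiled against this header keeps reporting the same `struct_size` even after proper appends new optional fields, which is what makes the ABI forward-compatible.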
@@ -239,6 +259,7 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
 ```
 
 * **Plugin util C API**
+
 ```cpp
 #ifdef __cplusplus
 extern "C" {
@@ -330,12 +351,12 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
 
 // Get a list of input OpInfo::TensorProperties given node name.
 // OpInfo::TensorProperties is represented as TF_Buffer*.
-void TF_GetInputProperties(TF_GraphProperties* g_prop, const char* name,
+void TF_GetInputPropertiesList(TF_GraphProperties* g_prop, const char* name,
                            TF_Buffer** prop, int max_size);
 
 // Get a list of output OpInfo::TensorProperties given node name.
 // OpInfo::TensorProperties is represented as TF_Buffer*.
-void TF_GetOutputProperties(TF_GraphProperties* g_prop, const char* name,
+void TF_GetOutputPropertiesList(TF_GraphProperties* g_prop, const char* name,
                             TF_Buffer** prop, int max_size);
 
 // Helper to maintain a map between function names in a given
@@ -395,7 +416,7 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
   for (int i = 0; i < max_size; i++) {
     in_prop_buf[i] = TF_NewBuffer();
   }
-  TF_GetInputProperties(g_prop, "node1", in_prop_buf.data(), &max_size);
+  TF_GetInputPropertiesList(g_prop, "node1", in_prop_buf.data(), &max_size);
   plugin::OpInfo::TensorProperties in_prop;
   plugin::BufferToMessage(in_prop_buf, in_prop);
   for (int i = 0; i < max_size; i++)
@@ -436,7 +457,7 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
   // Set functions to create a new optimizer.
   params->optimizer->create_func = P_Create;
   params->optimizer->optimize_func = P_Optimize;
-  params->optimizer->delete_func = P_Delete;
+  params->optimizer->destory_func = P_Destory;
 }
 ```
 
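The function-pointer wiring in the last hunk follows a create/optimize/destroy lifecycle: proper calls `create_func` once, `optimize_func` per graph, and the teardown function at shutdown. A minimal stand-in under stated assumptions: `MockOptimizer` and the plain-char buffers here are illustrative mocks replacing `TF_Buffer*` and serialized `GraphDef`s, and the `P_*` bodies are placeholders, not the RFC's implementation.

```cpp
#include <cstring>

// Illustrative mock of the plugin optimizer lifecycle.
struct MockOptimizer {
  int graphs_optimized = 0;
};

// create_func: allocate per-optimizer state, returned to proper as void*.
static void* P_Create() { return new MockOptimizer(); }

// optimize_func: in a real plugin this deserializes the input graph,
// rewrites it, and serializes the result; here it just copies bytes.
static void P_Optimize(void* optimizer, const char* in_graph, char* out_graph) {
  std::strcpy(out_graph, in_graph);
  static_cast<MockOptimizer*>(optimizer)->graphs_optimized++;
}

// Teardown: release the state allocated by P_Create.
static void P_Destory(void* optimizer) {
  delete static_cast<MockOptimizer*>(optimizer);
}
```

Passing the optimizer state back as an opaque `void*` keeps the plugin's types out of proper's ABI, which is the point of routing everything through C function pointers.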

0 commit comments