Documentation Additions #2

Merged · 6 commits · Jan 31, 2024
Changes to README.md: 111 additions & 16 deletions

Module of the Viam vision service to automatically build, deploy, and run inference.

Internally, this module calls DirectAI's `deploy_detector` and `deploy_classifier` endpoints with the provided configuration, saves the ID of the deployed models, and then calls the models via their IDs on incoming images. See [docs](https://api.alpha.directai.io/docs) for an overview of DirectAI's publicly available API.

Note that this README closely follows Viam's in-house motion-detector documentation, as described [here](https://github.com/viam-labs/motion-detector/).
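
As a rough illustration of that flow (not the module's actual code), a deploy call might look something like the sketch below. The endpoint name comes from the DirectAI docs, and the payload mirrors the detector config structure shown later in this README, but the exact request/response schema and auth flow are assumptions.

```python
import requests

DIRECTAI_BASE = "https://api.alpha.directai.io"

# Illustrative sketch only -- NOT the module's actual code. The exact
# request/response schema and auth flow are assumptions; defer to
# https://api.alpha.directai.io/docs for the real API.
def deploy_example_detector(bearer_token: str) -> dict:
    resp = requests.post(
        f"{DIRECTAI_BASE}/deploy_detector",
        headers={"Authorization": f"Bearer {bearer_token}"},
        json={
            "detector_configs": [
                {
                    "name": "nose",
                    "examples_to_include": ["nose"],
                    "detection_threshold": 0.1,
                }
            ],
            "nms_threshold": 0.1,
        },
    )
    resp.raise_for_status()
    return resp.json()  # assumed to contain the deployed model's ID
```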

## Getting started with Viam & DirectAI

> [!NOTE]
> Before configuring your vision service, you must [create a robot](https://docs.viam.com/manage/fleet/robots/#add-a-new-robot).

Start by [configuring a camera](https://docs.viam.com/components/camera/webcam/) on your robot. Remember the name you give to the camera; it will be important later.

Before calling DirectAI's API via Viam, you have to grab your client credentials. Generate them by clicking **Get API Access** on the [DirectAI website](https://directai.io/). Save them in an Access JSON on the same machine that's running your viam-server:
```json
{
"DIRECTAI_CLIENT_ID": "UE9S0AG9KS4F3",
"DIRECTAI_CLIENT_SECRET": "L23LKkl0d5<M0R3S4F3"
}
```
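
Before wiring the file into Viam, you can sanity-check that it parses and contains both keys; a minimal snippet (the path is a placeholder):

```python
import json

# Placeholder -- substitute the absolute path to your own Access JSON.
ACCESS_JSON_PATH = "/Users/janesmith/Downloads/directai_credential.json"

with open(ACCESS_JSON_PATH) as f:
    creds = json.load(f)

# Check for the two keys shown in the example above.
for key in ("DIRECTAI_CLIENT_ID", "DIRECTAI_CLIENT_SECRET"):
    assert key in creds, f"missing {key} in {ACCESS_JSON_PATH}"
print("Access JSON looks valid.")
```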

## Configuration

Navigate to the **Config** tab of your robot’s page in [the Viam app](https://app.viam.com/). Click on the **Services** subtab and click **Create service**. Select the `vision` type, then select the `directai-beta` model. Enter a name for your service and click **Create**.

### Example Configuration
On the new component panel, copy and paste the Example Detector *or* Classifier Attributes below; you can also deploy classifier and detector attributes **simultaneously** if you'd like. Ensure that the Access JSON path you provide in your config is **absolute**, not relative (e.g. `/Users/janesmith/Downloads/directai_credential.json`, not `~/Downloads/directai_credential.json`).
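
If you're unsure what the absolute form of a path is, a quick one-liner resolves it (a throwaway helper, not part of the module):

```python
import os

# Expands "~" and resolves the path to the absolute form the config expects,
# e.g. /Users/janesmith/Downloads/directai_credential.json
print(os.path.abspath(os.path.expanduser("~/Downloads/directai_credential.json")))
```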

### Example Detector Attributes

```json
{
    "access_json": "ABSOLUTE_PATH_TO_ACCESS_JSON_FILE",
    "deployed_detector": {
        "detector_configs": [
            {
                "name": "nose",
                "examples_to_include": [
                    "nose"
                ],
                "detection_threshold": 0.1
            },
            {
                "name": "mouth",
                "examples_to_include": [
                    "mouth"
                ],
                "examples_to_exclude": [
                    "mustache"
                ],
                "detection_threshold": 0.1
            },
            {
                "name": "eye",
                "examples_to_include": [
                    "eye"
                ],
                "detection_threshold": 0.1
            }
        ],
        "nms_threshold": 0.1
    }
}
```

### Example Classifier Attributes

```json
{
    "access_json": "ABSOLUTE_PATH_TO_ACCESS_JSON_FILE",
    "deployed_classifier": {
        "classifier_configs": [
            {
                "name": "happy",
                "examples_to_include": [
                    "happy person"
                ]
            },
            {
                "name": "sad",
                "examples_to_include": [
                    "sad person"
                ]
            }
        ]
    }
}
```

### Attributes

The following attributes are available for `directai:viamintegration:directai-beta` vision services:
| Name | Type | Inclusion | Description |
|------|------|-----------|-------------|
| `access_json` | string | **Required** | A string that indicates an absolute path on your local machine to a JSON including DirectAI client credentials. See the example Access JSON in [Getting started with Viam & DirectAI](#getting-started-with-viam--directai). |
| `deployed_classifier` | json | **Optional** | A JSON that contains a `classifier_configs` key and corresponding list of classifier configurations. Each classifier is defined by a `name`, a list of text `examples_to_include`, and a list of text `examples_to_exclude`. See [Example Classifier Attributes](#example-classifier-attributes). |
| `deployed_detector` | json | **Optional** | A JSON that contains `detector_configs` (list of detector configurations) and an `nms_threshold`. Each detector is defined by a `name`, a list of text `examples_to_include`, a list of text `examples_to_exclude`, and a `detection_threshold`. For more information on NMS and Detection thresholds, check out the [DirectAI docs](https://api.alpha.directai.io/docs#/). See [Example Detector Attributes](#example-detector-attributes). |

> [!NOTE]
> For more information, see [Configure a Robot](https://docs.viam.com/manage/configuration/).

## Usage

This module implements the following methods of the [vision service API](https://docs.viam.com/services/vision/#api):
* `GetDetections()`
* `GetDetectionsFromCamera()`
* `GetClassifications()`
* `GetClassificationsFromCamera()`

The module behavior differs slightly for classifications and detections.

When returning classifications, the module returns a list of dictionaries. The list's length equals the length of the `classifier_configs` list provided in the deployed classifier attributes. Each dictionary includes `class_name` and `confidence` key-value pairs, and the list is sorted in decreasing confidence order.
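
For example, with the classifier configured above, a single call might return something like this (the confidence values are illustrative):

```json
[
    { "class_name": "happy", "confidence": 0.83 },
    { "class_name": "sad", "confidence": 0.17 }
]
```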

When returning detections, the module returns a list of detections with bounding boxes around the detected objects. Each detection takes the following form:
```json
{
"DIRECTAI_CLIENT_ID": "UE9S0AG9KS4F3",
"DIRECTAI_CLIENT_SECRET": "L23LKkl0d5<M0R3S4F3"
"confidence": float,
"class_name": string,
"x_min": float,
"y_min": float,
"x_max": float,
"y_max": float
}
```
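
For instance, here's a minimal sketch of polling the service with the Viam Python SDK. The robot address, secret, service name (`directai-vision`), and camera name (`cam`) are placeholders; copy real connection details from the **Code sample** tab of your robot's page in the Viam app.

```python
import asyncio

from viam.robot.client import RobotClient
from viam.rpc.dial import Credentials, DialOptions
from viam.services.vision import VisionClient


async def main():
    # Placeholders -- copy the real address and secret from the
    # Code sample tab of your robot's page in the Viam app.
    creds = Credentials(type="robot-location-secret", payload="YOUR_SECRET")
    opts = RobotClient.Options(refresh_interval=0, dial_options=DialOptions(credentials=creds))
    robot = await RobotClient.at_address("YOUR_ROBOT_ADDRESS", opts)

    # "directai-vision" is whatever name you gave the service above;
    # "cam" is the camera you configured earlier.
    directai = VisionClient.from_robot(robot, "directai-vision")

    detections = await directai.get_detections_from_camera("cam")
    for d in detections:
        print(d.class_name, d.confidence, d.x_min, d.y_min, d.x_max, d.y_max)

    classifications = await directai.get_classifications_from_camera("cam", 2)
    for c in classifications:
        print(c.class_name, c.confidence)

    await robot.close()


if __name__ == "__main__":
    asyncio.run(main())
```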

## Visualize
Once the `directai:viamintegration:directai-beta` modular service is in use, configure a [transform camera](https://docs.viam.com/components/camera/transform/) to see classifications or detections appear in your robot's field of vision. View the transform camera from the [Control tab](https://docs.viam.com/manage/fleet/robots/#control).
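
For example, a transform camera that overlays this module's detections might use attributes like these (the service and camera names are placeholders; check the transform camera docs for the full pipeline schema):

```json
{
    "source": "cam",
    "pipeline": [
        {
            "type": "detections",
            "attributes": {
                "detector_name": "directai-vision",
                "confidence_threshold": 0.3
            }
        }
    ]
}
```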

## Next Steps
To write code that uses this vision service's output, use one of the [available SDKs](https://docs.viam.com/program/).

Please join [DirectAI's discord](https://discord.com/invite/APU6MWBKQv), contact us at [[email protected]](mailto:[email protected]), or schedule time on [our calendly](https://calendly.com/directai/demo) if you have any questions or feedback!