diff --git a/models/object_detection_yolox/LICENSE b/models/object_detection_yolox/LICENSE new file mode 100644 index 00000000..1d4dc763 --- /dev/null +++ b/models/object_detection_yolox/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright (c) 2021-2022 Megvii Inc. All rights reserved. + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+      You may obtain a copy of the License at
+
+          http://www.apache.org/licenses/LICENSE-2.0
+
+      Unless required by applicable law or agreed to in writing, software
+      distributed under the License is distributed on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+      See the License for the specific language governing permissions and
+      limitations under the License.
diff --git a/models/object_detection_yolox/README.md b/models/object_detection_yolox/README.md
new file mode 100644
index 00000000..f901bc18
--- /dev/null
+++ b/models/object_detection_yolox/README.md
@@ -0,0 +1,116 @@
+# YOLOX
+
+YOLOX is an anchor-free version of YOLO with a simpler design and better performance, aiming to bridge the gap between the research and industrial communities. It is a high-performing object detector that improves on the existing YOLO series, which keeps exploring techniques for the best speed and accuracy trade-off in real-time applications.
+
+Key features of the YOLOX object detector:
+- **Anchor-free detection** significantly reduces the number of design parameters
+- **A decoupled head for classification, regression, and localization** improves convergence speed
+- **SimOTA advanced label assignment strategy** reduces training time and avoids additional solver hyperparameters
+- **Strong data augmentations such as MixUp and Mosaic** boost YOLOX performance
+
+Note:
+- This version of YOLOX: YOLOX_S
+
+## Demo
+
+Run one of the following commands to try the demo:
+```shell
+# detect on camera input
+python demo.py
+# detect on an image
+python demo.py --input /path/to/image
+```
+Note:
+- the image result is saved as "result.jpg"
+
+## Results
+
+Here are some sample results obtained with the model (**object_detection_yolox_2022nov.onnx**):
+
+![1_res.jpg](./samples/1_res.jpg)
+![2_res.jpg](./samples/2_res.jpg)
+![3_res.jpg](./samples/3_res.jpg)
+
+## Model metrics
+
+The model is evaluated on [COCO 2017 val](https://cocodataset.org/#download). Results are shown below:
+
+**Average Precision**
+
+| area   | IoU       | Average Precision(AP) |
+|:-------|:----------|:----------------------|
+| all    | 0.50:0.95 | 0.405 |
+| all    | 0.50      | 0.593 |
+| all    | 0.75      | 0.437 |
+| small  | 0.50:0.95 | 0.232 |
+| medium | 0.50:0.95 | 0.448 |
+| large  | 0.50:0.95 | 0.541 |
+
+**Average Recall**
+
+| area   | IoU       | Average Recall(AR) |
+|:-------|:----------|:-------------------|
+| all    | 0.50:0.95 | 0.326 |
+| all    | 0.50:0.95 | 0.531 |
+| all    | 0.50:0.95 | 0.574 |
+| small  | 0.50:0.95 | 0.365 |
+| medium | 0.50:0.95 | 0.634 |
+| large  | 0.50:0.95 | 0.724 |
+ +| class | AP | class | AP | class | AP | +|:--------------|:-------|:-------------|:-------|:---------------|:-------| +| person | 54.109 | bicycle | 31.580 | car | 40.447 | +| motorcycle | 43.477 | airplane | 66.070 | bus | 64.183 | +| train | 64.483 | truck | 35.110 | boat | 24.681 | +| traffic light | 25.068 | fire hydrant | 64.382 | stop sign | 65.333 | +| parking meter | 48.439 | bench | 22.653 | bird | 33.324 | +| cat | 66.394 | dog | 60.096 | horse | 58.080 | +| sheep | 49.456 | cow | 53.596 | elephant | 65.574 | +| bear | 70.541 | zebra | 66.461 | giraffe | 66.780 | +| backpack | 13.095 | umbrella | 41.614 | handbag | 12.865 | +| tie | 29.453 | suitcase | 39.089 | frisbee | 61.712 | +| skis | 21.623 | snowboard | 31.326 | sports ball | 39.820 | +| kite | 41.410 | baseball bat | 27.311 | baseball glove | 36.661 | +| skateboard | 49.374 | surfboard | 35.524 | tennis racket | 45.569 | +| bottle | 37.270 | wine glass | 33.088 | cup | 39.835 | +| fork | 31.620 | knife | 15.265 | spoon | 14.918 | +| bowl | 43.251 | banana | 27.904 | apple | 17.630 | +| sandwich | 32.789 | orange | 29.388 | broccoli | 23.187 | +| carrot | 23.114 | hot dog | 33.716 | pizza | 52.541 | +| donut | 47.980 | cake | 36.160 | chair | 29.707 | +| couch | 46.175 | potted plant | 24.781 | bed | 44.323 | +| dining table | 30.022 | toilet | 64.237 | tv | 57.301 | +| laptop | 58.362 | mouse | 57.774 | remote | 24.271 | +| keyboard | 48.020 | cell phone | 32.376 | microwave | 57.220 | +| oven | 36.168 | toaster | 28.735 | sink | 38.159 | +| refrigerator | 52.876 | book | 15.030 | clock | 48.622 | +| vase | 37.013 | scissors | 26.307 | teddy bear | 45.676 | +| hair drier | 7.255 | toothbrush | 19.374 | | | + +## License + +All files in this directory are licensed under [Apache 2.0 License](./LICENSE). 
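+
+## Usage from Python
+
+Beyond the command-line demo, the detector can also be driven directly from Python. The snippet below is a minimal sketch built around the `YoloX` wrapper and the `letterbox` helper shipped in this directory; the image path is a placeholder and error handling is omitted:
+
+```python
+import cv2
+import numpy as np
+
+from YoloX import YoloX      # wrapper class in this directory
+from demo import letterbox   # resize-and-pad helper from demo.py
+
+model = YoloX('object_detection_yolox_2022nov.onnx',
+              confThreshold=0.35, nmsThreshold=0.5)
+
+image = cv2.imread('/path/to/image.jpg')                  # placeholder path
+blob, scale = letterbox(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
+dets = model.infer(blob)                                  # rows: x1, y1, x2, y2, score, class_id
+
+for det in dets:
+    x1, y1, x2, y2 = (det[:4] / scale).astype(np.int32)   # undo letterbox scaling
+    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
+cv2.imwrite('result.jpg', image)
+```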
+
+#### Contributor Details
+
+- Google Summer of Code'22
+- Contributor: Sri Siddarth Chakaravarthy
+- GitHub Profile: https://github.com/Sidd1609
+- Organisation: OpenCV
+- Project: Lightweight object detection models using OpenCV
+
+## Reference
+
+- YOLOX article: https://arxiv.org/abs/2107.08430
+- YOLOX weights and training scripts: https://github.com/Megvii-BaseDetection/YOLOX
+- YOLOX blog: https://arshren.medium.com/yolox-new-improved-yolo-d430c0e4cf20
+- YOLOX-lite: https://github.com/TexasInstruments/edgeai-yolox
diff --git a/models/object_detection_yolox/YoloX.py b/models/object_detection_yolox/YoloX.py
new file mode 100644
index 00000000..615477e0
--- /dev/null
+++ b/models/object_detection_yolox/YoloX.py
@@ -0,0 +1,93 @@
+import numpy as np
+import cv2
+
+class YoloX:
+    def __init__(self, modelPath, confThreshold=0.35, nmsThreshold=0.5, objThreshold=0.5, backendId=0, targetId=0):
+        self.num_classes = 80
+        self.net = cv2.dnn.readNet(modelPath)
+        self.input_size = (640, 640)
+        # ImageNet mean/std, kept for reference; preprocess() currently feeds un-normalized pixels
+        self.mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 1, 3)
+        self.std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 1, 3)
+        self.strides = [8, 16, 32]
+        self.confThreshold = confThreshold
+        self.nmsThreshold = nmsThreshold
+        self.objThreshold = objThreshold
+        self.backendId = backendId
+        self.targetId = targetId
+        self.net.setPreferableBackend(self.backendId)
+        self.net.setPreferableTarget(self.targetId)
+
+    @property
+    def name(self):
+        return self.__class__.__name__
+
+    def setBackend(self, backendId):
+        self.backendId = backendId
+        self.net.setPreferableBackend(self.backendId)
+
+    def setTarget(self, targetId):
+        self.targetId = targetId
+        self.net.setPreferableTarget(self.targetId)
+
+    def preprocess(self, img):
+        # HWC -> NCHW blob; pixel values are passed through unchanged
+        blob = np.transpose(img, (2, 0, 1))
+        return blob[np.newaxis, :, :, :]
+
+    def infer(self, srcimg):
+        input_blob = self.preprocess(srcimg)
+
+        self.net.setInput(input_blob)
+        outs = self.net.forward(self.net.getUnconnectedOutLayersNames())
+
+        predictions = self.postprocess(outs[0])
+        return predictions
+
+    def postprocess(self, outputs):
+        # Anchor-free decoding: add the grid-cell index to the predicted (x, y) offset and
+        # scale by the stride; exponentiate (w, h) and scale by the stride.
+        grids = []
+        expanded_strides = []
+        hsizes = [self.input_size[0] // stride for stride in self.strides]
+        wsizes = [self.input_size[1] // stride for stride in self.strides]
+
+        for hsize, wsize, stride in zip(hsizes, wsizes, self.strides):
+            xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
+            grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
+            grids.append(grid)
+            shape = grid.shape[:2]
+            expanded_strides.append(np.full((*shape, 1), stride))
+
+        grids = np.concatenate(grids, 1)
+        expanded_strides = np.concatenate(expanded_strides, 1)
+        outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides
+        outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides
+
+        predictions = outputs[0]
+
+        boxes = predictions[:, :4]
+        scores = predictions[:, 4:5] * predictions[:, 5:]
+
+        # Convert (cx, cy, w, h) to corner coordinates (x1, y1, x2, y2)
+        boxes_xyxy = np.ones_like(boxes)
+        boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2.
+        boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2.
+        boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2.
+        boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2.
+
+        # multi-class nms
+        final_dets = []
+        for cls_ind in range(scores.shape[1]):
+            cls_scores = scores[:, cls_ind]
+            valid_score_mask = cls_scores > self.confThreshold
+            if valid_score_mask.sum() == 0:
+                continue
+            else:
+                # cv2.dnn.NMSBoxes expects boxes as (x, y, w, h), so convert from corner format
+                boxes_xywh = boxes_xyxy.copy()
+                boxes_xywh[:, 2:4] -= boxes_xywh[:, 0:2]
+                indices = cv2.dnn.NMSBoxes(boxes_xywh.tolist(), cls_scores.tolist(), self.confThreshold, self.nmsThreshold)
+                indices = np.array(indices).flatten()
+                if indices.size == 0:
+                    continue
+
+                classids_ = np.ones((len(indices), 1)) * cls_ind
+                final_dets.append(
+                    np.concatenate([boxes_xyxy[indices], cls_scores[indices, None], classids_], axis=1)
+                )
+
+        if len(final_dets) == 0:
+            return np.array([])
+
+        return np.concatenate(final_dets, 0)
diff --git a/models/object_detection_yolox/demo.py b/models/object_detection_yolox/demo.py
new file mode 100644
index 00000000..ed31f1f2
--- /dev/null
+++ b/models/object_detection_yolox/demo.py
@@ -0,0 +1,146 @@
+import numpy as np
+import cv2
+import argparse
+
+from YoloX import YoloX
+
+def str2bool(v):
+    if v.lower() in ['on', 'yes', 'true', 'y', 't']:
+        return True
+    elif v.lower() in ['off', 'no', 'false', 'n', 'f']:
+        return False
+    else:
+        raise NotImplementedError
+
+backends = [cv2.dnn.DNN_BACKEND_OPENCV, cv2.dnn.DNN_BACKEND_CUDA]
+targets = [cv2.dnn.DNN_TARGET_CPU, cv2.dnn.DNN_TARGET_CUDA, cv2.dnn.DNN_TARGET_CUDA_FP16]
+help_msg_backends = "Choose one of the computation backends: {:d}: OpenCV implementation (default); {:d}: CUDA"
+help_msg_targets = "Choose one of the target computation devices: {:d}: CPU (default); {:d}: CUDA; {:d}: CUDA fp16"
+
+try:
+    backends += [cv2.dnn.DNN_BACKEND_TIMVX]
+    targets += [cv2.dnn.DNN_TARGET_NPU]
+    help_msg_backends += "; {:d}: TIMVX"
+    help_msg_targets += "; {:d}: NPU"
+except:
+    print('This version of OpenCV does not support TIM-VX and NPU. Visit https://github.com/opencv/opencv/wiki/TIM-VX-Backend-For-Running-OpenCV-On-NPU for more information.')
+
+classes = ('person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
+           'train', 'truck', 'boat', 'traffic light', 'fire hydrant',
+           'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
+           'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
+           'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
+           'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat',
+           'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
+           'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
+           'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
+           'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
+           'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop',
+           'mouse', 'remote', 'keyboard', 'cell phone', 'microwave',
+           'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock',
+           'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush')
+
+def letterbox(srcimg, target_size=(640, 640)):
+    # Resize keeping the aspect ratio and pad the remainder with 114 (YOLOX convention)
+    padded_img = np.ones((target_size[0], target_size[1], 3)) * 114.0
+    ratio = min(target_size[0] / srcimg.shape[0], target_size[1] / srcimg.shape[1])
+    resized_img = cv2.resize(
+        srcimg, (int(srcimg.shape[1] * ratio), int(srcimg.shape[0] * ratio)), interpolation=cv2.INTER_LINEAR
+    ).astype(np.float32)
+    padded_img[: int(srcimg.shape[0] * ratio), : int(srcimg.shape[1] * ratio)] = resized_img
+
+    return padded_img, ratio
+
+def unletterbox(bbox, letterbox_scale):
+    # Map boxes from the letterboxed input space back to the original image
+    return bbox / letterbox_scale
+
+def vis(dets, srcimg, letterbox_scale, fps=None):
+    res_img = srcimg.copy()
+
+    if fps is not None:
+        fps_label = "FPS: %.2f" % fps
+        cv2.putText(res_img, fps_label, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
+
+    for det in dets:
+        box = unletterbox(det[:4], letterbox_scale).astype(np.int32)
+        score = det[-2]
+        cls_id = int(det[-1])
+
+        x0, y0, x1, y1 = box
+
+        text = '{}:{:.1f}%'.format(classes[cls_id], score * 100)
+        font = cv2.FONT_HERSHEY_SIMPLEX
+        txt_size = cv2.getTextSize(text, font, 0.4, 1)[0]
+        cv2.rectangle(res_img, (x0, y0), (x1, y1), (0, 255, 0), 2)
+        cv2.rectangle(res_img, (x0, y0 + 1), (x0 + txt_size[0] + 1, y0 + int(1.5 * txt_size[1])), (255, 255, 255), -1)
+        cv2.putText(res_img, text, (x0, y0 + txt_size[1]), font, 0.4, (0, 0, 0), thickness=1)
+
+    return res_img
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='YOLOX inference using OpenCV, a contribution by Sri Siddarth Chakaravarthy as part of GSoC 2022')
+    parser.add_argument('--input', '-i', type=str, help='Path to the input image. Omit for using default camera.')
+    parser.add_argument('--model', '-m', type=str, default='object_detection_yolox_2022nov.onnx', help="Path to the model")
+    parser.add_argument('--backend', '-b', type=int, default=backends[0], help=help_msg_backends.format(*backends))
+    parser.add_argument('--target', '-t', type=int, default=targets[0], help=help_msg_targets.format(*targets))
+    parser.add_argument('--confidence', default=0.5, type=float, help='Class confidence threshold')
+    parser.add_argument('--nms', default=0.5, type=float, help='NMS IoU threshold')
+    parser.add_argument('--obj', default=0.5, type=float, help='Objectness threshold')
+    parser.add_argument('--save', '-s', type=str2bool, default=False, help='Set true to save results. This flag is invalid when using camera.')
+    parser.add_argument('--vis', '-v', type=str2bool, default=True, help='Set true to open a window for result visualization. This flag is invalid when using camera.')
+    args = parser.parse_args()
+
+    model_net = YoloX(modelPath=args.model,
+                      confThreshold=args.confidence,
+                      nmsThreshold=args.nms,
+                      objThreshold=args.obj,
+                      backendId=args.backend,
+                      targetId=args.target)
+
+    tm = cv2.TickMeter()
+    tm.reset()
+    if args.input is not None:
+        image = cv2.imread(args.input)
+        input_blob = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
+        input_blob, letterbox_scale = letterbox(input_blob)
+
+        # Inference
+        tm.start()
+        preds = model_net.infer(input_blob)
+        tm.stop()
+        print("Inference time: {:.2f} ms".format(tm.getTimeMilli()))
+
+        img = vis(preds, image, letterbox_scale)
+
+        if args.save:
+            print('Results saved to result.jpg\n')
+            cv2.imwrite('result.jpg', img)
+
+        if args.vis:
+            cv2.namedWindow(args.input, cv2.WINDOW_AUTOSIZE)
+            cv2.imshow(args.input, img)
+            cv2.waitKey(0)
+
+    else:
+        print("Press any key to stop video capture")
+        deviceId = 0
+        cap = cv2.VideoCapture(deviceId)
+
+        while cv2.waitKey(1) < 0:
+            hasFrame, frame = cap.read()
+            if not hasFrame:
+                print('No frames grabbed!')
+                break
+
+            input_blob = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
+            input_blob, letterbox_scale = letterbox(input_blob)
+
+            # Inference
+            tm.start()
+            preds = model_net.infer(input_blob)
+            tm.stop()
+
+            img = vis(preds, frame, letterbox_scale, fps=tm.getFPS())
+
+            cv2.imshow("YoloX Demo", img)
+
+            tm.reset()
diff --git a/models/object_detection_yolox/object_detection_yolox_2022nov.onnx b/models/object_detection_yolox/object_detection_yolox_2022nov.onnx
new file mode 100644
index 00000000..0a22cdd5
--- /dev/null
+++ b/models/object_detection_yolox/object_detection_yolox_2022nov.onnx
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5c2d13e59ae883e6af3b45daea64af4833a4951c92d116ec270d9ddbe998063
+size 35858002
diff --git 
a/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx b/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx new file mode 100644 index 00000000..af996081 --- /dev/null +++ b/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:01a3b0f400b30bc1e45230e991b2e499ab42622485a330021947333fbaf03935 +size 9079452 diff --git a/models/object_detection_yolox/samples/1_res.jpg b/models/object_detection_yolox/samples/1_res.jpg new file mode 100644 index 00000000..c56e9ff5 Binary files /dev/null and b/models/object_detection_yolox/samples/1_res.jpg differ diff --git a/models/object_detection_yolox/samples/2_res.jpg b/models/object_detection_yolox/samples/2_res.jpg new file mode 100644 index 00000000..7eb30168 Binary files /dev/null and b/models/object_detection_yolox/samples/2_res.jpg differ diff --git a/models/object_detection_yolox/samples/3_res.jpg b/models/object_detection_yolox/samples/3_res.jpg new file mode 100644 index 00000000..5e94e369 Binary files /dev/null and b/models/object_detection_yolox/samples/3_res.jpg differ
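The anchor-free decoding performed in `YoloX.postprocess()` can also be reproduced standalone, which is handy when debugging an exported model. The following sketch mirrors the constants in `YoloX.py` (640x640 input, strides 8, 16, 32); the random tensor merely stands in for the real network output:

```python
import numpy as np

input_size, strides = (640, 640), [8, 16, 32]

# Build one (x, y) grid-cell index per prediction, plus the stride of its level.
grids, expanded_strides = [], []
for stride in strides:
    hsize, wsize = input_size[0] // stride, input_size[1] // stride
    xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
    grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
    grids.append(grid)
    expanded_strides.append(np.full((1, grid.shape[1], 1), stride))

grids = np.concatenate(grids, 1)
expanded_strides = np.concatenate(expanded_strides, 1)

# Stand-in for the raw output: [x, y, w, h, objectness, 80 class scores] per cell.
outputs = np.random.rand(1, grids.shape[1], 85).astype(np.float32)

# Anchor-free decoding: (x, y) offsets are added to the cell index and scaled by the
# stride; (w, h) are exponentiated and scaled by the stride.
outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides
outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides

print(grids.shape[1])  # 8400 = 80*80 + 40*40 + 20*20 predictions for a 640x640 input
```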