Docs update

commit f6fec32c7b (parent 20d2edce6a)

@@ -0,0 +1,21 @@
## v0.0.4 - 2024-06-30

### Added

- Add `X` struct to handle input and preprocessing
- Add `Ops` struct to manage common operations
- Use SIMD (`fast_image_resize`) to accelerate model pre-processing and post-processing: YOLOv8-seg post-processing (~120ms => ~20ms), Depth-Anything post-processing (~23ms => ~2ms). A sketch follows this list.
### Deprecated

- Mark `Ops::descale_mask()` as deprecated.

### Fixed

### Changed

### Removed

### Refactored

### Others
README.md
@@ -1,31 +1,27 @@

# usls

[crates.io](https://crates.io/crates/usls) [docs.rs](https://docs.rs/usls) [GitHub](https://github.com/jamjamjon/usls)

A Rust library integrated with **ONNXRuntime**, providing a collection of **Computer Vision** and **Vision-Language** models including [YOLOv5](https://github.com/ultralytics/yolov5), [YOLOv8](https://github.com/ultralytics/ultralytics), [YOLOv9](https://github.com/WongKinYiu/yolov9), [YOLOv10](https://github.com/THU-MIG/yolov10), [RTDETR](https://arxiv.org/abs/2304.08069), [CLIP](https://github.com/openai/CLIP), [DINOv2](https://github.com/facebookresearch/dinov2), [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM), [YOLO-World](https://github.com/AILab-CVC/YOLO-World), [BLIP](https://arxiv.org/abs/2201.12086), [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR), [Depth-Anything](https://github.com/LiheYoung/Depth-Anything), [MODNet](https://github.com/ZHKKKe/MODNet) and others.
## Recently Updated

| YOLOv8-Obb |
| :----------------------------: |
|<img src='examples/yolov8/demo-obb-2.png' width="800px">|

| Depth-Anything |
| :----------------------------: |
|<img src='examples/depth-anything/demo.png' width="800px">|

| YOLOP-v2 | Text-Detection |
| :----------------------------: | :------------------------------: |
|<img src='examples/yolop/demo.png' width="385px">| <img src='examples/db/demo.png' width="385px"> |

| Portrait Matting |
| :----------------------------: |
|<img src='examples/modnet/demo.png' width="800px">|

| YOLOP-v2 | Face-Parsing | Text-Detection |
| :----------------------------: | :------------------------------: | :------------------------------: |
|<img src='examples/yolop/demo.png' height="180px">| <img src='examples/face-parsing/demo.png' height="180px"> | <img src='examples/db/demo.png' height="180px"> |

- 2024/06/30: **Accelerate model pre-processing and post-processing using SIMD**. YOLOv8-seg post-processing (~120ms => ~20ms), Depth-Anything post-processing (~23ms => ~2ms).
@@ -33,11 +29,11 @@ A Rust library integrated with **ONNXRuntime**, providing a collection of **Comp

| Model | Task / Type | Example | CUDA<br />f32 | CUDA<br />f16 | TensorRT<br />f32 | TensorRT<br />f16 |
| :---------------------------------------------------------------: | :-------------------------: | :----------------------: | :-----------: | :-----------: | :------------------------: | :-----------------------: |
| [YOLOv5](https://github.com/ultralytics/yolov5) | Object Detection<br />Instance Segmentation<br />Classification | [demo](examples/yolov5) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv8-obb](https://github.com/ultralytics/ultralytics) | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | [demo](examples/yolov8) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv5](https://github.com/ultralytics/yolov5) | Classification<br />Object Detection<br />Instance Segmentation | [demo](examples/yolov5) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv8](https://github.com/ultralytics/ultralytics) | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | [demo](examples/yolov8) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv9](https://github.com/WongKinYiu/yolov9) | Object Detection | [demo](examples/yolov9) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv10](https://github.com/THU-MIG/yolov10) | Object Detection | [demo](examples/yolov10) | ✅ | ✅ | ✅ | ✅ |
| [RT-DETR](https://arxiv.org/abs/2304.08069) | Object Detection | [demo](examples/rtdetr) | ✅ | ✅ | ✅ | ✅ |
| [RTDETR](https://arxiv.org/abs/2304.08069) | Object Detection | [demo](examples/rtdetr) | ✅ | ✅ | ✅ | ✅ |
| [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) | Instance Segmentation | [demo](examples/fastsam) | ✅ | ✅ | ✅ | ✅ |
| [YOLO-World](https://github.com/AILab-CVC/YOLO-World) | Object Detection | [demo](examples/yolo-world) | ✅ | ✅ | ✅ | ✅ |
| [DINOv2](https://github.com/facebookresearch/dinov2) | Vision-Self-Supervised | [demo](examples/dinov2) | ✅ | ✅ | ✅ | ✅ |
@@ -50,57 +46,50 @@ A Rust library integrated with **ONNXRuntime**, providing a collection of **Comp

| [Depth-Anything](https://github.com/LiheYoung/Depth-Anything) | Monocular Depth Estimation | [demo](examples/depth-anything) | ✅ | ✅ | ❌ | ❌ |
| [MODNet](https://github.com/ZHKKKe/MODNet) | Image Matting | [demo](examples/modnet) | ✅ | ✅ | ✅ | ✅ |
## Solution Models

<details close>
<summary>Additionally, this repo provides some solution models.</summary>

| Model | Example | Result |
| :---------------------------------------------------------------------------------------------------------: | :------------------------------: | :-----------------------------------------------------------------------------: |
| Lane Line Segmentation<br />Drivable Area Segmentation<br />Car Detection | [demo](examples/yolov8-plastic-bag) | <img src='examples/yolop/demo.png' width="220px" height="140px"> |
| Face Parsing | [demo](examples/face-parsing) | <img src='examples/face-parsing/demo.png' width="220px" height="200px"> |
| Text Detection<br />(PPOCR-det v3, v4) | [demo](examples/db) | <img src='examples/db/demo.png' width="250px" height="200px"> |
| Text Recognition<br />(PPOCR-rec v3, v4, Chinese & English) | [demo](examples/svtr) | |
| Face-Landmark Detection | [demo](examples/yolov8-face) | <img src='examples/yolov8-face/demo.png' width="220px" height="180px"> |
| Head Detection | [demo](examples/yolov8-head) | <img src='examples/yolov8-head/demo.png' width="220px" height="180px"> |
| Fall Detection | [demo](examples/yolov8-falldown) | <img src='examples/yolov8-falldown/demo.png' width="220px" height="180px"> |
| Trash Detection | [demo](examples/yolov8-plastic-bag) | <img src='examples/yolov8-trash/demo.png' width="250px" height="180px"> |

</details>
## Demo

```
cargo run -r --example yolov8   # yolov9, blip, clip, dinov2, svtr, db, yolo-world...
```

## Installation

Refer to the **[ort guide](https://ort.pyke.io/setup/linking)**.
<details close>
<summary>For Linux or macOS users</summary>

- First, download the latest release from [ONNXRuntime Releases](https://github.com/microsoft/onnxruntime/releases)
- Then link it:

```shell
export ORT_DYLIB_PATH=/Users/qweasd/Desktop/onnxruntime-osx-arm64-1.17.1/lib/libonnxruntime.1.17.1.dylib
```

</details>
## Demo

```shell
cargo run -r --example yolov8   # yolov10, blip, clip, yolop, svtr, db, yolo-world, ...
```
## Integrate into your own project

<details close>
<summary>Expand</summary>

### 1. Add `usls` as a dependency to your project's `Cargo.toml`

```shell
cargo add usls
```
Or you can pin a specific commit:

```toml
usls = { git = "https://github.com/jamjamjon/usls", rev = "???sha???" }
```

### 2. Set `Options` and build model

```rust
let options = Options::default()
```
@@ -201,5 +190,24 @@ pub struct Y {

- Results for other tasks can be found in `src/ys/y.rs`

</details>
@@ -109,7 +109,7 @@ impl Annotator {

        self
    }

    /// Plotting BBOXes or not
    /// Plotting bboxes or not
    pub fn without_bboxes(mut self, x: bool) -> Self {
        self.without_bboxes = x;
        self
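A hedged usage sketch of the builder method above (`Annotator::default()` and `with_saveout` appear in this commit's `lib.rs` docs; combining them like this is an assumption, not code from the commit):

```rust
// Hypothetical: build an annotator that skips bbox plotting.
let annotator = Annotator::default()
    .without_bboxes(true)
    .with_saveout("YOLOv8");
```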
@@ -195,7 +195,7 @@ impl Annotator {

        self
    }

    /// Plotting MBRs or not
    /// Plotting mbrs or not
    pub fn without_mbrs(mut self, x: bool) -> Self {
        self.without_mbrs = x;
        self
@@ -32,7 +32,7 @@ pub struct OrtEngine {

    model_proto: onnx::ModelProto,
    params: usize,
    wbmems: usize,
    pub ts: Ts,
    ts: Ts,
}

impl OrtEngine {
@@ -353,7 +353,7 @@ impl OrtEngine {

        Ok(ys)
    }

    pub fn _set_ixx(x: isize, ixx: &Option<MinOptMax>, i: usize, ii: usize) -> Option<MinOptMax> {
    fn _set_ixx(x: isize, ixx: &Option<MinOptMax>, i: usize, ii: usize) -> Option<MinOptMax> {
        match x {
            -1 => {
                match ixx {
@@ -369,7 +369,8 @@ impl OrtEngine {

        }
    }

    pub fn nbytes_from_onnx_dtype_id(x: usize) -> usize {
    #[allow(dead_code)]
    fn nbytes_from_onnx_dtype_id(x: usize) -> usize {
        match x {
            7 | 11 | 13 => 8, // i64, f64, u64
            1 | 6 | 12 => 4,  // f32, i32, u32
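A quick, hypothetical sanity check of the byte-size mapping above, using ids taken from the match arms shown (7 => i64, 1 => f32):

```rust
// Per the match arms above: ONNX dtype id 7 (i64) is 8 bytes, id 1 (f32) is 4 bytes.
assert_eq!(nbytes_from_onnx_dtype_id(7), 8);
assert_eq!(nbytes_from_onnx_dtype_id(1), 4);
```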
@@ -380,7 +381,8 @@ impl OrtEngine {

        }
    }

    pub fn nbytes_from_onnx_dtype(x: &ort::TensorElementType) -> usize {
    #[allow(dead_code)]
    fn nbytes_from_onnx_dtype(x: &ort::TensorElementType) -> usize {
        match x {
            ort::TensorElementType::Float64
            | ort::TensorElementType::Uint64
@@ -399,6 +401,7 @@ impl OrtEngine {

        }
    }

    #[allow(dead_code)]
    fn ort_dtype_from_onnx_dtype_id(value: i32) -> Option<ort::TensorElementType> {
        match value {
            0 => None,
@@ -630,4 +633,8 @@ impl OrtEngine {

    pub fn memory_weights(&self) -> usize {
        self.wbmems
    }

    pub fn ts(&self) -> &Ts {
        &self.ts
    }
}
@@ -1,3 +1,5 @@

//! ONNX file generated by prost-build.

// This file is @generated by prost-build.
/// Attributes
///
@@ -1,3 +1,5 @@

//! Processing functions for images and ndarrays.

use anyhow::Result;
use fast_image_resize as fir;
use fast_image_resize::{
@@ -1,3 +1,5 @@

//! Options for building models.

use anyhow::Result;

use crate::{
@@ -4,6 +4,7 @@ use ndarray::{Array, Dim, IxDyn, IxDynImpl};

use crate::Ops;

/// Model input, alias for [`Array<f32, IxDyn>`]
#[derive(Debug, Clone, Default)]
pub struct X(pub Array<f32, IxDyn>);
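Since `X` is a newtype over `Array<f32, IxDyn>` (per the struct above), constructing one is direct; a minimal sketch (the 1x3x640x640 shape is an arbitrary example, not mandated by the commit):

```rust
use ndarray::{Array, IxDyn};

// Wrap a zero-filled NCHW tensor in the X newtype shown above.
let data = Array::<f32, IxDyn>::zeros(IxDyn(&[1, 3, 640, 640]));
let x = X(data);
assert_eq!(x.0.ndim(), 4);
```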
src/lib.rs
@@ -1,3 +1,110 @@

//! A Rust library integrated with ONNXRuntime, providing a collection of Computer Vision and Vision-Language models.
//!
//! [`OrtEngine`] provides ONNX model loading, metadata parsing, dry_run, inference and other functions, supporting EPs such as CUDA, TensorRT, CoreML, etc. You can use it as the ONNXRuntime engine for building models.
//!
//! # Supported models
//! | Model | Task / Type | Example | CUDA<br />f32 | CUDA<br />f16 | TensorRT<br />f32 | TensorRT<br />f16 |
//! | :---------------------------------------------------------------: | :-------------------------: | :----------------------: | :-----------: | :-----------: | :------------------------: | :-----------------------: |
//! | [YOLOv5](https://github.com/ultralytics/yolov5) | Object Detection<br />Instance Segmentation<br />Classification | [demo](examples/yolov5) | ✅ | ✅ | ✅ | ✅ |
//! | [YOLOv8](https://github.com/ultralytics/ultralytics) | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | [demo](examples/yolov8) | ✅ | ✅ | ✅ | ✅ |
//! | [YOLOv9](https://github.com/WongKinYiu/yolov9) | Object Detection | [demo](examples/yolov9) | ✅ | ✅ | ✅ | ✅ |
//! | [YOLOv10](https://github.com/THU-MIG/yolov10) | Object Detection | [demo](examples/yolov10) | ✅ | ✅ | ✅ | ✅ |
//! | [RT-DETR](https://arxiv.org/abs/2304.08069) | Object Detection | [demo](examples/rtdetr) | ✅ | ✅ | ✅ | ✅ |
//! | [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) | Instance Segmentation | [demo](examples/fastsam) | ✅ | ✅ | ✅ | ✅ |
//! | [YOLO-World](https://github.com/AILab-CVC/YOLO-World) | Object Detection | [demo](examples/yolo-world) | ✅ | ✅ | ✅ | ✅ |
//! | [DINOv2](https://github.com/facebookresearch/dinov2) | Vision-Self-Supervised | [demo](examples/dinov2) | ✅ | ✅ | ✅ | ✅ |
//! | [CLIP](https://github.com/openai/CLIP) | Vision-Language | [demo](examples/clip) | ✅ | ✅ | ✅ visual<br />❌ textual | ✅ visual<br />❌ textual |
//! | [BLIP](https://github.com/salesforce/BLIP) | Vision-Language | [demo](examples/blip) | ✅ | ✅ | ✅ visual<br />❌ textual | ✅ visual<br />❌ textual |
//! | [DB](https://arxiv.org/abs/1911.08947) | Text Detection | [demo](examples/db) | ✅ | ✅ | ✅ | ✅ |
//! | [SVTR](https://arxiv.org/abs/2205.00159) | Text Recognition | [demo](examples/svtr) | ✅ | ✅ | ✅ | ✅ |
//! | [RTMO](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo) | Keypoint Detection | [demo](examples/rtmo) | ✅ | ✅ | ❌ | ❌ |
//! | [YOLOPv2](https://arxiv.org/abs/2208.11434) | Panoptic Driving Perception | [demo](examples/yolop) | ✅ | ✅ | ✅ | ✅ |
//! | [Depth-Anything](https://github.com/LiheYoung/Depth-Anything) | Monocular Depth Estimation | [demo](examples/depth-anything) | ✅ | ✅ | ❌ | ❌ |
//! | [MODNet](https://github.com/ZHKKKe/MODNet) | Image Matting | [demo](examples/modnet) | ✅ | ✅ | ✅ | ✅ |
//! # Use provided models for inference
//!
//! #### 1. Using provided [`models`] with [`Options`]
//!
//! ```rust,no_run
//! use usls::{coco, models::YOLO, Annotator, DataLoader, Options, Vision};
//!
//! let options = Options::default()
//!     .with_model("yolov8m-seg-dyn.onnx")?
//!     .with_trt(0)
//!     .with_fp16(true)
//!     .with_i00((1, 1, 4).into())
//!     .with_i02((224, 640, 800).into())
//!     .with_i03((224, 640, 800).into())
//!     .with_confs(&[0.4, 0.15]) // class_0: 0.4, others: 0.15
//!     .with_profile(false);
//! let mut model = YOLO::new(options)?;
//! ```
//! #### 2. Load images using [`DataLoader`] or [`image::io::Reader`]
//!
//! ```rust,no_run
//! // Load one image
//! let x = vec![DataLoader::try_read("./assets/bus.jpg")?];
//!
//! // Load images with batch_size = 4
//! let dl = DataLoader::default()
//!     .with_batch(4)
//!     .load("./assets")?;
//!
//! // Load one image with `image::io::Reader`
//! let x = image::io::Reader::open("myimage.png")?.decode()?;
//! ```
//!
//! #### 3. Build annotator using [`Annotator`]
//!
//! ```rust,no_run
//! let annotator = Annotator::default()
//!     .with_bboxes_thickness(7)
//!     .with_saveout("YOLOv8");
//! ```
//!
//! #### 4. Run and annotate
//!
//! ```rust,no_run
//! for (xs, _paths) in dl {
//!     let ys = model.run(&xs)?;
//!     annotator.annotate(&xs, &ys);
//! }
//! ```
//!
//! #### 5. Parse inference results from [`Vec<Y>`]
//!
//! For example, you can get detection bboxes with `y.bboxes()`:
//! ```rust,no_run
//! let ys = model.run(&xs)?;
//! for y in ys {
//!     // bboxes
//!     if let Some(bboxes) = y.bboxes() {
//!         for bbox in bboxes {
//!             println!(
//!                 "Bbox: {}, {}, {}, {}, {}, {}",
//!                 bbox.xmin(),
//!                 bbox.ymin(),
//!                 bbox.xmax(),
//!                 bbox.ymax(),
//!                 bbox.confidence(),
//!                 bbox.id(),
//!             )
//!         }
//!     }
//! }
//! ```
//!
//! # Build your own model with [`OrtEngine`]
//!
//! Refer to [Demo: Depth-Anything](https://github.com/jamjamjon/usls/blob/main/src/models/depth_anything.rs)

mod core;
pub mod models;
mod utils;
@@ -1,3 +1,5 @@

//! Models provided: [`Blip`], [`Clip`], [`YOLO`], [`DepthAnything`], ...

mod blip;
mod clip;
mod db;
@@ -1,3 +1,5 @@

//! Constants related to the COCO dataset: [`SKELETONS_16`], [`KEYPOINTS_NAMES_17`], [`NAMES_80`]

pub const SKELETONS_16: [(usize, usize); 16] = [
    (0, 1),
    (0, 2),
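A hedged sketch of how such index pairs are typically consumed; `keypoints` and the drawing call are hypothetical, only the pair iteration comes from the constant above:

```rust
// Each (a, b) pair indexes the two keypoints forming one limb of the skeleton.
for (a, b) in SKELETONS_16 {
    // draw_line(&keypoints[a], &keypoints[b]);
    println!("limb: {a} -> {b}");
}
```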
@@ -1,3 +1,5 @@

//! Colormaps: [`TURBO`], [`INFERNO`], [`PLASMA`], [`VIRIDIS`], [`MAGMA`], [`BENTCOOLWARM`], [`BLACKBODY`], [`EXTENDEDKINDLMANN`], [`KINDLMANN`], [`SMOOTHCOOLWARM`].

pub const TURBO: [[u8; 3]; 256] = [
    [48, 18, 59],
    [50, 21, 67],
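A minimal sketch of mapping a normalized scalar (e.g. a depth value) to an RGB triplet with the 256-entry table above; the function name is hypothetical:

```rust
// Map v in [0.0, 1.0] onto the TURBO palette.
fn colorize(v: f32) -> [u8; 3] {
    let idx = (v.clamp(0.0, 1.0) * 255.0).round() as usize;
    TURBO[idx]
}
```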
@@ -8,12 +8,13 @@ pub mod colormap256;

pub use colormap256::*;

pub const GITHUB_ASSETS: &str = "https://github.com/jamjamjon/assets/releases/download/v0.0.1";
pub const CHECK_MARK: &str = "✅";
pub const CROSS_MARK: &str = "❌";
pub const SAFE_CROSS_MARK: &str = "❎";
pub(crate) const GITHUB_ASSETS: &str =
    "https://github.com/jamjamjon/assets/releases/download/v0.0.1";
pub(crate) const CHECK_MARK: &str = "✅";
pub(crate) const CROSS_MARK: &str = "❌";
pub(crate) const SAFE_CROSS_MARK: &str = "❎";

pub fn auto_load<P: AsRef<Path>>(src: P, sub: Option<&str>) -> Result<String> {
pub(crate) fn auto_load<P: AsRef<Path>>(src: P, sub: Option<&str>) -> Result<String> {
    let src = src.as_ref();
    let p = if src.is_file() {
        src.into()
@@ -33,6 +34,7 @@ pub fn auto_load<P: AsRef<Path>>(src: P, sub: Option<&str>) -> Result<String> {

    Ok(p.to_str().unwrap().to_string())
}

/// Download something from `src` to `dst`.
pub fn download<P: AsRef<Path> + std::fmt::Debug>(
    src: &str,
    dst: P,
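A hedged usage sketch of the helper above; the filename is a placeholder, and the fallback-to-download behavior is an assumption based on the `GITHUB_ASSETS` constant and `download` function in this file:

```rust
// Hypothetical: resolve a local file, or fetch it from the release assets otherwise.
let path = auto_load("yolov8m-seg-dyn.onnx", None)?;
println!("model at: {path}");
```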
@@ -77,7 +79,7 @@ pub fn download<P: AsRef<Path> + std::fmt::Debug>(

    Ok(())
}

pub fn string_now(delimiter: &str) -> String {
pub(crate) fn string_now(delimiter: &str) -> String {
    let t_now = chrono::Local::now();
    let fmt = format!(
        "%Y{}%m{}%d{}%H{}%M{}%S{}%f",
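Given the `%Y…%f` pattern above, a call like the following yields a delimiter-joined timestamp (the exact output is illustrative; chrono's `%f` prints nanoseconds):

```rust
// e.g. "2024-06-30-12-30-45-026490000" with delimiter "-".
let stamp = string_now("-");
```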
@@ -86,7 +88,8 @@ pub fn string_now(delimiter: &str) -> String {

    t_now.format(&fmt).to_string()
}

pub fn config_dir() -> PathBuf {
#[allow(dead_code)]
pub(crate) fn config_dir() -> PathBuf {
    match dirs::config_dir() {
        Some(mut d) => {
            d.push("usls");
@@ -99,7 +102,8 @@ pub fn config_dir() -> PathBuf {

    }
}

pub fn home_dir(sub: Option<&str>) -> PathBuf {
#[allow(dead_code)]
pub(crate) fn home_dir(sub: Option<&str>) -> PathBuf {
    match dirs::home_dir() {
        Some(mut d) => {
            d.push(".usls");
@@ -1,4 +1,4 @@

/// Bounding Box 2D
/// Bounding Box 2D.
#[derive(Clone, PartialEq, PartialOrd)]
pub struct Bbox {
    x: f32,
@@ -3,7 +3,7 @@ use ndarray::{Array, Axis, Ix2, IxDyn};

use crate::X;

/// Embedding
/// Embedding for image or text.
#[derive(Clone, PartialEq, Default)]
pub struct Embedding(Array<f32, IxDyn>);
@@ -1,6 +1,6 @@

use std::ops::{Add, Div, Mul, Sub};

/// Keypoint 2D
/// Keypoint 2D.
#[derive(PartialEq, Clone)]
pub struct Keypoint {
    x: f32,
@@ -1,5 +1,6 @@

use image::DynamicImage;

/// Gray-Scale Mask.
#[derive(Clone, PartialEq)]
pub struct Mask {
    mask: DynamicImage,
@@ -1,6 +1,6 @@

use geo::{coord, line_string, Area, BooleanOps, Coord, EuclideanDistance, LineString, Polygon};

/// Minimum Bounding Rectangle
/// Minimum Bounding Rectangle.
#[derive(Clone, PartialEq)]
pub struct Mbr {
    ls: LineString,
@@ -5,6 +5,7 @@ use geo::{

use crate::{Bbox, Mbr};

/// Polygon.
#[derive(Clone, PartialEq)]
pub struct Polygon {
    polygon: geo::Polygon,
@@ -1,4 +1,4 @@

/// Probabilities for classification
/// Probabilities for classification.
#[derive(Clone, PartialEq, Default)]
pub struct Prob {
    probs: Vec<f32>,
@@ -1,5 +1,6 @@

use crate::{Bbox, Embedding, Keypoint, Mask, Mbr, Polygon, Prob};

/// Inference results container for each image.
#[derive(Clone, PartialEq, Default)]
pub struct Y {
    probs: Option<Prob>,