diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..4cbac93
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,21 @@
+## v0.0.4 - 2024-06-30
+
+### Added
+
+- Add `X` struct to handle input and preprocessing
+- Add `Ops` struct to manage common operations
+- Use SIMD (via `fast_image_resize`) to accelerate model pre-processing and post-processing: YOLOv8-seg post-processing (~120ms => ~20ms), Depth-Anything post-processing (~23ms => ~2ms).
+
+### Deprecated
+- Mark `Ops::descale_mask()` as deprecated.
+
+
+### Fixed
+
+### Changed
+
+### Removed
+
+### Refactored
+
+### Others
diff --git a/README.md b/README.md
index 3720986..69a093f 100644
--- a/README.md
+++ b/README.md
@@ -1,43 +1,39 @@
# usls
+[![crates.io](https://img.shields.io/crates/v/usls.svg)](https://crates.io/crates/usls) [![docs.rs](https://docs.rs/usls/badge.svg)](https://docs.rs/usls) [![license](https://img.shields.io/crates/l/usls.svg)](https://github.com/jamjamjon/usls)
+
+
A Rust library integrated with **ONNXRuntime**, providing a collection of **Computer Vision** and **Vision-Language** models including [YOLOv5](https://github.com/ultralytics/yolov5), [YOLOv8](https://github.com/ultralytics/ultralytics), [YOLOv9](https://github.com/WongKinYiu/yolov9), [YOLOv10](https://github.com/THU-MIG/yolov10), [RTDETR](https://arxiv.org/abs/2304.08069), [CLIP](https://github.com/openai/CLIP), [DINOv2](https://github.com/facebookresearch/dinov2), [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM), [YOLO-World](https://github.com/AILab-CVC/YOLO-World), [BLIP](https://arxiv.org/abs/2201.12086), [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR), [Depth-Anything](https://github.com/LiheYoung/Depth-Anything), [MODNet](https://github.com/ZHKKKe/MODNet) and others.
-## Recently Updated
-
-| YOLOv8-Obb |
-| :----------------------------: |
-| (demo image) |

| Depth-Anything |
| :----------------------------: |
| (demo image) |

+| YOLOP-v2 | Text-Detection |
+| :----------------------------: | :------------------------------: |
+| (demo image) | (demo image) |

| Portrait Matting |
| :----------------------------: |
| (demo image) |

+| YOLOv8-Obb |
+| :----------------------------: |
+| (demo image) |

-| YOLOP-v2 | Face-Parsing | Text-Detection |
-| :----------------------------: | :------------------------------: | :------------------------------: |
-| (demo image) | (demo image) | (demo image) |
-
-- 2024/06/30: **Accelerate model pre-processing and post-processing using SIMD**. YOLOv8-seg post-processing (~120ms => ~20ms), Depth-Anything post-processing (~23ms => ~2ms).
-
## Supported Models
| Model | Task / Type | Example | CUDA<br />f32 | CUDA<br />f16 | TensorRT<br />f32 | TensorRT<br />f16 |
| :---------------------------------------------------------------: | :-------------------------: | :----------------------: | :-----------: | :-----------: | :------------------------: | :-----------------------: |
-| [YOLOv5](https://github.com/ultralytics/yolov5) | Object Detection<br />Instance Segmentation<br />Classification | [demo](examples/yolov5) | ✅ | ✅ | ✅ | ✅ |
-| [YOLOv8-obb](https://github.com/ultralytics/ultralytics) | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | [demo](examples/yolov8) | ✅ | ✅ | ✅ | ✅ |
+| [YOLOv5](https://github.com/ultralytics/yolov5) | Classification<br />Object Detection<br />Instance Segmentation | [demo](examples/yolov5) | ✅ | ✅ | ✅ | ✅ |
+| [YOLOv8](https://github.com/ultralytics/ultralytics) | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | [demo](examples/yolov8) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv9](https://github.com/WongKinYiu/yolov9) | Object Detection | [demo](examples/yolov9) | ✅ | ✅ | ✅ | ✅ |
| [YOLOv10](https://github.com/THU-MIG/yolov10) | Object Detection | [demo](examples/yolov10) | ✅ | ✅ | ✅ | ✅ |
-| [RT-DETR](https://arxiv.org/abs/2304.08069) | Object Detection | [demo](examples/rtdetr) | ✅ | ✅ | ✅ | ✅ |
+| [RTDETR](https://arxiv.org/abs/2304.08069) | Object Detection | [demo](examples/rtdetr) | ✅ | ✅ | ✅ | ✅ |
| [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) | Instance Segmentation | [demo](examples/fastsam) | ✅ | ✅ | ✅ | ✅ |
| [YOLO-World](https://github.com/AILab-CVC/YOLO-World) | Object Detection | [demo](examples/yolo-world) | ✅ | ✅ | ✅ | ✅ |
| [DINOv2](https://github.com/facebookresearch/dinov2) | Vision-Self-Supervised | [demo](examples/dinov2) | ✅ | ✅ | ✅ | ✅ |
@@ -50,57 +46,50 @@ A Rust library integrated with **ONNXRuntime**, providing a collection of **Comp
| [Depth-Anything](https://github.com/LiheYoung/Depth-Anything) | Monocular Depth Estimation | [demo](examples/depth-anything) | ✅ | ✅ | ❌ | ❌ |
| [MODNet](https://github.com/ZHKKKe/MODNet) | Image Matting | [demo](examples/modnet) | ✅ | ✅ | ✅ | ✅ |
-## Solution Models
-
-Additionally, this repo also provides some solution models.
-
-| Model | Example | Result |
-| :---------------------------------------------------------------------------------------------------------: | :------------------------------: | :-----------------------------------------------------------------------------: |
-| Lane Line Segmentation<br />Drivable Area Segmentation<br />Car Detection<br />车道线-可行驶区域-车辆检测 | [demo](examples/yolov8-plastic-bag) | (demo image) |
-| Face Parsing<br />人脸解析 | [demo](examples/face-parsing) | (demo image) |
-| Text Detection<br />(PPOCR-det v3, v4)<br />通用文本检测 | [demo](examples/db) | (demo image) |
-| Text Recognition<br />(PPOCR-rec v3, v4)<br />中英文-文本识别 | [demo](examples/svtr) | |
-| Face-Landmark Detection<br />人脸 & 关键点检测 | [demo](examples/yolov8-face) | (demo image) |
-| Head Detection<br />人头检测 | [demo](examples/yolov8-head) | (demo image) |
-| Fall Detection<br />摔倒检测 | [demo](examples/yolov8-falldown) | (demo image) |
-| Trash Detection<br />垃圾检测 | [demo](examples/yolov8-plastic-bag) | (demo image) |
-
-
-
-## Demo
-
-```
-cargo run -r --example yolov8 # yolov9, blip, clip, dinov2, svtr, db, yolo-world...
-```
-
## Installation
-check **[ort guide](https://ort.pyke.io/setup/linking)**
+Refer to **[ort guide](https://ort.pyke.io/setup/linking)**
For Linux or macOS users:
- First, download the latest release from [ONNXRuntime Releases](https://github.com/microsoft/onnxruntime/releases)
- Then link it:
- ```shell
+ ```Shell
export ORT_DYLIB_PATH=/Users/qweasd/Desktop/onnxruntime-osx-arm64-1.17.1/lib/libonnxruntime.1.17.1.dylib
```
+
+## Demo
+
+```Shell
+cargo run -r --example yolov8 # yolov10, blip, clip, yolop, svtr, db, yolo-world, ...
+```
+
+
+
## Integrate into your own project
-#### 1. Add `usls` as a dependency to your project's `Cargo.toml`
+### 1. Add `usls` as a dependency to your project's `Cargo.toml`
-```shell
-usls = { git = "https://github.com/jamjamjon/usls", rev = "xxx"}
+```shell
+cargo add usls
```
-#### 2. Set `Options` and build model
+Or pin a specific commit in `Cargo.toml`:
+```toml
+usls = { git = "https://github.com/jamjamjon/usls", rev = "???sha???"}
+```
+
+
+
+### 2. Set `Options` and build model
```Rust
let options = Options::default()
@@ -201,5 +190,24 @@ pub struct Y {
- Results for other tasks can be found in `src/ys/y.rs`
+
+
+
+## Solution Models
+
+This repo also provides some solution models.
+
+| Model | Example | Result |
+| :---------------------------------------------------------------------------------------------------------: | :------------------------------: | :-----------------------------------------------------------------------------: |
+| Lane Line Segmentation<br />Drivable Area Segmentation<br />Car Detection<br />车道线-可行驶区域-车辆检测 | [demo](examples/yolov8-plastic-bag) | (demo image) |
+| Face Parsing<br />人脸解析 | [demo](examples/face-parsing) | (demo image) |
+| Text Detection<br />(PPOCR-det v3, v4)<br />通用文本检测 | [demo](examples/db) | (demo image) |
+| Text Recognition<br />(PPOCR-rec v3, v4)<br />中英文-文本识别 | [demo](examples/svtr) | |
+| Face-Landmark Detection<br />人脸 & 关键点检测 | [demo](examples/yolov8-face) | (demo image) |
+| Head Detection<br />人头检测 | [demo](examples/yolov8-head) | (demo image) |
+| Fall Detection<br />摔倒检测 | [demo](examples/yolov8-falldown) | (demo image) |
+| Trash Detection<br />垃圾检测 | [demo](examples/yolov8-plastic-bag) | (demo image) |
diff --git a/src/core/annotator.rs b/src/core/annotator.rs
index de6af68..72f3ede 100644
--- a/src/core/annotator.rs
+++ b/src/core/annotator.rs
@@ -109,7 +109,7 @@ impl Annotator {
self
}
- /// Plotting BBOXes or not
+ /// Whether to plot bboxes or not
pub fn without_bboxes(mut self, x: bool) -> Self {
self.without_bboxes = x;
self
@@ -195,7 +195,7 @@ impl Annotator {
self
}
- /// Plotting MBRs or not
+ /// Whether to plot MBRs (minimum bounding rectangles) or not
pub fn without_mbrs(mut self, x: bool) -> Self {
self.without_mbrs = x;
self
diff --git a/src/core/engine.rs b/src/core/engine.rs
index 796fabe..c0b18aa 100644
--- a/src/core/engine.rs
+++ b/src/core/engine.rs
@@ -32,7 +32,7 @@ pub struct OrtEngine {
model_proto: onnx::ModelProto,
params: usize,
wbmems: usize,
- pub ts: Ts,
+ ts: Ts,
}
impl OrtEngine {
@@ -353,7 +353,7 @@ impl OrtEngine {
Ok(ys)
}
- pub fn _set_ixx(x: isize, ixx: &Option<MinOptMax>, i: usize, ii: usize) -> Option<MinOptMax> {
+ fn _set_ixx(x: isize, ixx: &Option<MinOptMax>, i: usize, ii: usize) -> Option<MinOptMax> {
match x {
-1 => {
match ixx {
@@ -369,7 +369,8 @@ impl OrtEngine {
}
}
- pub fn nbytes_from_onnx_dtype_id(x: usize) -> usize {
+ #[allow(dead_code)]
+ fn nbytes_from_onnx_dtype_id(x: usize) -> usize {
match x {
7 | 11 | 13 => 8, // i64, f64, u64
1 | 6 | 12 => 4, // f32, i32, u32
@@ -380,7 +381,8 @@ impl OrtEngine {
}
}
- pub fn nbytes_from_onnx_dtype(x: &ort::TensorElementType) -> usize {
+ #[allow(dead_code)]
+ fn nbytes_from_onnx_dtype(x: &ort::TensorElementType) -> usize {
match x {
ort::TensorElementType::Float64
| ort::TensorElementType::Uint64
@@ -399,6 +401,7 @@ impl OrtEngine {
}
}
+ #[allow(dead_code)]
fn ort_dtype_from_onnx_dtype_id(value: i32) -> Option<ort::TensorElementType> {
match value {
0 => None,
@@ -630,4 +633,8 @@ impl OrtEngine {
pub fn memory_weights(&self) -> usize {
self.wbmems
}
+
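+ /// Returns a reference to the engine's timing statistics ([`Ts`]).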
+ pub fn ts(&self) -> &Ts {
+ &self.ts
+ }
}
diff --git a/src/core/onnx.rs b/src/core/onnx.rs
index e39b507..d88dc84 100644
--- a/src/core/onnx.rs
+++ b/src/core/onnx.rs
@@ -1,3 +1,5 @@
+//! ONNX protobuf definitions generated by prost-build.
+
// This file is @generated by prost-build.
/// Attributes
///
diff --git a/src/core/ops.rs b/src/core/ops.rs
index e294f65..d617647 100644
--- a/src/core/ops.rs
+++ b/src/core/ops.rs
@@ -1,3 +1,5 @@
+//! Processing functions for images and ndarrays.
+
use anyhow::Result;
use fast_image_resize as fir;
use fast_image_resize::{
diff --git a/src/core/options.rs b/src/core/options.rs
index 38cf86c..0298286 100644
--- a/src/core/options.rs
+++ b/src/core/options.rs
@@ -1,3 +1,5 @@
+//! Options for building models.
+
use anyhow::Result;
use crate::{
diff --git a/src/core/x.rs b/src/core/x.rs
index 4678fd2..72da861 100644
--- a/src/core/x.rs
+++ b/src/core/x.rs
@@ -4,6 +4,7 @@ use ndarray::{Array, Dim, IxDyn, IxDynImpl};
use crate::Ops;
+/// Model input; a newtype wrapper around [`Array`].
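+///
+/// A minimal construction sketch (assuming the wrapped array is a dynamic-dimension `f32` [`Array`]):
+/// ```rust,ignore
+/// // a dummy 1x3x640x640 NCHW input tensor
+/// let x = usls::X(ndarray::Array::<f32, _>::zeros((1, 3, 640, 640)).into_dyn());
+/// ```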
#[derive(Debug, Clone, Default)]
pub struct X(pub Array<f32, IxDyn>);
diff --git a/src/lib.rs b/src/lib.rs
index 86881a8..c1d2855 100644
--- a/src/lib.rs
+++ b/src/lib.rs
@@ -1,3 +1,110 @@
+//! A Rust library integrated with ONNXRuntime, providing a collection of Computer Vision and Vision-Language models.
+//!
+//! [`OrtEngine`] provides ONNX model loading, metadata parsing, dry_run, inference and other functions, supporting EPs such as CUDA, TensorRT, CoreML, etc. You can use it as the ONNXRuntime engine for building models.
+//!
+//!
+//!
+//!
+
+//! # Supported models
+//! | Model | Task / Type | Example | CUDA<br />f32 | CUDA<br />f16 | TensorRT<br />f32 | TensorRT<br />f16 |
+//! | :---------------------------------------------------------------: | :-------------------------: | :----------------------: | :-----------: | :-----------: | :------------------------: | :-----------------------: |
+//! | [YOLOv5](https://github.com/ultralytics/yolov5) | Classification<br />Object Detection<br />Instance Segmentation | [demo](examples/yolov5) | ✅ | ✅ | ✅ | ✅ |
+//! | [YOLOv8](https://github.com/ultralytics/ultralytics) | Object Detection<br />Instance Segmentation<br />Classification<br />Oriented Object Detection<br />Keypoint Detection | [demo](examples/yolov8) | ✅ | ✅ | ✅ | ✅ |
+//! | [YOLOv9](https://github.com/WongKinYiu/yolov9) | Object Detection | [demo](examples/yolov9) | ✅ | ✅ | ✅ | ✅ |
+//! | [YOLOv10](https://github.com/THU-MIG/yolov10) | Object Detection | [demo](examples/yolov10) | ✅ | ✅ | ✅ | ✅ |
+//! | [RTDETR](https://arxiv.org/abs/2304.08069) | Object Detection | [demo](examples/rtdetr) | ✅ | ✅ | ✅ | ✅ |
+//! | [FastSAM](https://github.com/CASIA-IVA-Lab/FastSAM) | Instance Segmentation | [demo](examples/fastsam) | ✅ | ✅ | ✅ | ✅ |
+//! | [YOLO-World](https://github.com/AILab-CVC/YOLO-World) | Object Detection | [demo](examples/yolo-world) | ✅ | ✅ | ✅ | ✅ |
+//! | [DINOv2](https://github.com/facebookresearch/dinov2) | Vision-Self-Supervised | [demo](examples/dinov2) | ✅ | ✅ | ✅ | ✅ |
+//! | [CLIP](https://github.com/openai/CLIP) | Vision-Language | [demo](examples/clip) | ✅ | ✅ | ✅ visual<br />❌ textual | ✅ visual<br />❌ textual |
+//! | [BLIP](https://github.com/salesforce/BLIP) | Vision-Language | [demo](examples/blip) | ✅ | ✅ | ✅ visual<br />❌ textual | ✅ visual<br />❌ textual |
+//! | [DB](https://arxiv.org/abs/1911.08947) | Text Detection | [demo](examples/db) | ✅ | ✅ | ✅ | ✅ |
+//! | [SVTR](https://arxiv.org/abs/2205.00159) | Text Recognition | [demo](examples/svtr) | ✅ | ✅ | ✅ | ✅ |
+//! | [RTMO](https://github.com/open-mmlab/mmpose/tree/main/projects/rtmo) | Keypoint Detection | [demo](examples/rtmo) | ✅ | ✅ | ❌ | ❌ |
+//! | [YOLOPv2](https://arxiv.org/abs/2208.11434) | Panoptic Driving Perception | [demo](examples/yolop) | ✅ | ✅ | ✅ | ✅ |
+//! | [Depth-Anything](https://github.com/LiheYoung/Depth-Anything) | Monocular Depth Estimation | [demo](examples/depth-anything) | ✅ | ✅ | ❌ | ❌ |
+//! | [MODNet](https://github.com/ZHKKKe/MODNet) | Image Matting | [demo](examples/modnet) | ✅ | ✅ | ✅ | ✅ |
+
+//! # Use provided models for inference
+
+//! #### 1. Using provided [`models`] with [`Options`]
+
+//! ```rust,ignore
+//! use usls::{coco, models::YOLO, Annotator, DataLoader, Options, Vision};
+//!
+//! let options = Options::default()
+//! .with_model("yolov8m-seg-dyn.onnx")?
+//! .with_trt(0)
+//! .with_fp16(true)
+//! .with_i00((1, 1, 4).into())
+//! .with_i02((224, 640, 800).into())
+//! .with_i03((224, 640, 800).into())
+//! .with_confs(&[0.4, 0.15]) // class_0: 0.4, others: 0.15
+//! .with_profile(false);
+//! let mut model = YOLO::new(options)?;
+//! ```
+
+//! #### 2. Load images using [`DataLoader`] or [`image::io::Reader`]
+//!
+//! ```rust,ignore
+//! // Load one image
+//! let x = vec![DataLoader::try_read("./assets/bus.jpg")?];
+//!
+//! // Load images with batch_size = 4
+//! let dl = DataLoader::default()
+//! .with_batch(4)
+//! .load("./assets")?;
+//! // Load one image with `image::io::Reader`
+//! let x = image::io::Reader::open("myimage.png")?.decode()?;
+//! ```
+//!
+//! #### 3. Build annotator using [`Annotator`]
+//!
+//! ```rust,ignore
+//! let annotator = Annotator::default()
+//! .with_bboxes_thickness(7)
+//! .with_saveout("YOLOv8");
+//! ```
+//!
+//! #### 4. Run and annotate
+//!
+//! ```rust,ignore
+//! for (xs, _paths) in dl {
+//! let ys = model.run(&xs)?;
+//! annotator.annotate(&xs, &ys);
+//! }
+//! ```
+//!
+//! #### 5. Parse inference results from [`Vec<Y>`]
+//! For example, you can get detection bboxes with `y.bboxes()`:
+//! ```rust,ignore
+//! let ys = model.run(&xs)?;
+//! for y in ys {
+//! // bboxes
+//! if let Some(bboxes) = y.bboxes() {
+//! for bbox in bboxes {
+//! println!(
+//! "Bbox: {}, {}, {}, {}, {}, {}",
+//! bbox.xmin(),
+//! bbox.ymin(),
+//! bbox.xmax(),
+//! bbox.ymax(),
+//! bbox.confidence(),
+//! bbox.id(),
+//! )
+//! }
+//! }
+//! }
+//! ```
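+//!
+//! Results for other tasks can be read the same way. A minimal sketch, assuming accessors such as `masks()` and `keypoints()` that mirror `bboxes()` (see `src/ys/y.rs` for the actual API):
+//! ```rust,ignore
+//! for y in &ys {
+//!     // keypoints, if the model predicts them (hypothetical accessor)
+//!     if let Some(keypoints) = y.keypoints() {
+//!         println!("{} instances with keypoints", keypoints.len());
+//!     }
+//!     // masks, if the model predicts them (hypothetical accessor)
+//!     if let Some(masks) = y.masks() {
+//!         println!("{} masks", masks.len());
+//!     }
+//! }
+//! ```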
+//!
+//!
+//! # Build your own model with [`OrtEngine`]
+//!
+//! Refer to [Demo: Depth-Anything](https://github.com/jamjamjon/usls/blob/main/src/models/depth_anything.rs)
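+//!
+//! A minimal skeleton, assuming illustrative names (`OrtEngine::new`, `dry_run` and `run` stand in for the actual API used in the Depth-Anything source above):
+//! ```rust,ignore
+//! use usls::{Options, OrtEngine, X, Y};
+//!
+//! pub struct MyModel {
+//!     engine: OrtEngine,
+//! }
+//!
+//! impl MyModel {
+//!     pub fn new(options: Options) -> anyhow::Result<Self> {
+//!         let engine = OrtEngine::new(&options)?;
+//!         engine.dry_run()?; // warm up the session
+//!         Ok(Self { engine })
+//!     }
+//!
+//!     pub fn run(&mut self, xs: &[X]) -> anyhow::Result<Vec<Y>> {
+//!         let ys = self.engine.run(xs)?; // forward pass
+//!         // post-process raw outputs in `ys` into `Vec<Y>` here
+//!         todo!()
+//!     }
+//! }
+//! ```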
+//!
+//!
+
mod core;
pub mod models;
mod utils;
diff --git a/src/models/mod.rs b/src/models/mod.rs
index 2815614..d0ccfee 100644
--- a/src/models/mod.rs
+++ b/src/models/mod.rs
@@ -1,3 +1,5 @@
+//! Models provided: [`Blip`], [`Clip`], [`YOLO`], [`DepthAnything`], ...
+
mod blip;
mod clip;
mod db;
diff --git a/src/utils/coco.rs b/src/utils/coco.rs
index a42e2b2..11344cb 100644
--- a/src/utils/coco.rs
+++ b/src/utils/coco.rs
@@ -1,3 +1,5 @@
+//! Constants related to the COCO dataset: [`SKELETONS_16`], [`KEYPOINTS_NAMES_17`], [`NAMES_80`].
+
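+/// Pairs of keypoint indices defining the skeleton edges for 17-keypoint human pose.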
pub const SKELETONS_16: [(usize, usize); 16] = [
(0, 1),
(0, 2),
diff --git a/src/utils/colormap256.rs b/src/utils/colormap256.rs
index 96b8298..c9a5c28 100644
--- a/src/utils/colormap256.rs
+++ b/src/utils/colormap256.rs
@@ -1,3 +1,5 @@
+//! Colormaps: [`TURBO`], [`INFERNO`], [`PLASMA`], [`VIRIDIS`], [`MAGMA`], [`BENTCOOLWARM`], [`BLACKBODY`], [`EXTENDEDKINDLMANN`], [`KINDLMANN`], [`SMOOTHCOOLWARM`].
+
pub const TURBO: [[u8; 3]; 256] = [
[48, 18, 59],
[50, 21, 67],
diff --git a/src/utils/mod.rs b/src/utils/mod.rs
index 893343a..8dd3b65 100644
--- a/src/utils/mod.rs
+++ b/src/utils/mod.rs
@@ -8,12 +8,13 @@ pub mod colormap256;
pub use colormap256::*;
-pub const GITHUB_ASSETS: &str = "https://github.com/jamjamjon/assets/releases/download/v0.0.1";
-pub const CHECK_MARK: &str = "✅";
-pub const CROSS_MARK: &str = "❌";
-pub const SAFE_CROSS_MARK: &str = "❎";
+pub(crate) const GITHUB_ASSETS: &str =
+ "https://github.com/jamjamjon/assets/releases/download/v0.0.1";
+pub(crate) const CHECK_MARK: &str = "✅";
+pub(crate) const CROSS_MARK: &str = "❌";
+pub(crate) const SAFE_CROSS_MARK: &str = "❎";
-pub fn auto_load<P: AsRef<Path>>(src: P, sub: Option<&str>) -> Result<String> {
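+/// Resolve `src` to a local file path, downloading it from the GitHub assets release when it is not found locally.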
+pub(crate) fn auto_load<P: AsRef<Path>>(src: P, sub: Option<&str>) -> Result<String> {
let src = src.as_ref();
let p = if src.is_file() {
src.into()
@@ -33,6 +34,7 @@ pub fn auto_load>(src: P, sub: Option<&str>) -> Result {
Ok(p.to_str().unwrap().to_string())
}
+/// Download something from `src` to `dst`.
pub fn download<P: AsRef<Path> + std::fmt::Debug>(
src: &str,
dst: P,
@@ -77,7 +79,7 @@ pub fn download + std::fmt::Debug>(
Ok(())
}
-pub fn string_now(delimiter: &str) -> String {
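+/// Returns the current local time as a formatted string, with `delimiter` between the datetime fields.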
+pub(crate) fn string_now(delimiter: &str) -> String {
let t_now = chrono::Local::now();
let fmt = format!(
"%Y{}%m{}%d{}%H{}%M{}%S{}%f",
@@ -86,7 +88,8 @@ pub fn string_now(delimiter: &str) -> String {
t_now.format(&fmt).to_string()
}
-pub fn config_dir() -> PathBuf {
+#[allow(dead_code)]
+pub(crate) fn config_dir() -> PathBuf {
match dirs::config_dir() {
Some(mut d) => {
d.push("usls");
@@ -99,7 +102,8 @@ pub fn config_dir() -> PathBuf {
}
}
-pub fn home_dir(sub: Option<&str>) -> PathBuf {
+#[allow(dead_code)]
+pub(crate) fn home_dir(sub: Option<&str>) -> PathBuf {
match dirs::home_dir() {
Some(mut d) => {
d.push(".usls");
diff --git a/src/ys/bbox.rs b/src/ys/bbox.rs
index df5fa7c..61c460a 100644
--- a/src/ys/bbox.rs
+++ b/src/ys/bbox.rs
@@ -1,4 +1,4 @@
-/// Bounding Box 2D
+/// Bounding Box 2D.
#[derive(Clone, PartialEq, PartialOrd)]
pub struct Bbox {
x: f32,
diff --git a/src/ys/embedding.rs b/src/ys/embedding.rs
index 1145214..daaf63d 100644
--- a/src/ys/embedding.rs
+++ b/src/ys/embedding.rs
@@ -3,7 +3,7 @@ use ndarray::{Array, Axis, Ix2, IxDyn};
use crate::X;
-/// Embedding
+/// Embedding for image or text.
#[derive(Clone, PartialEq, Default)]
pub struct Embedding(Array<f32, IxDyn>);
diff --git a/src/ys/keypoint.rs b/src/ys/keypoint.rs
index 52e4e38..f22f6df 100644
--- a/src/ys/keypoint.rs
+++ b/src/ys/keypoint.rs
@@ -1,6 +1,6 @@
use std::ops::{Add, Div, Mul, Sub};
-/// Keypoint 2D
+/// Keypoint 2D.
#[derive(PartialEq, Clone)]
pub struct Keypoint {
x: f32,
diff --git a/src/ys/mask.rs b/src/ys/mask.rs
index 89c6476..e5896d1 100644
--- a/src/ys/mask.rs
+++ b/src/ys/mask.rs
@@ -1,5 +1,6 @@
use image::DynamicImage;
+/// Grayscale Mask.
#[derive(Clone, PartialEq)]
pub struct Mask {
mask: DynamicImage,
diff --git a/src/ys/mbr.rs b/src/ys/mbr.rs
index 4fab7fb..10f05e1 100644
--- a/src/ys/mbr.rs
+++ b/src/ys/mbr.rs
@@ -1,6 +1,6 @@
use geo::{coord, line_string, Area, BooleanOps, Coord, EuclideanDistance, LineString, Polygon};
-/// Minimum Bounding Rectangle
+/// Minimum Bounding Rectangle.
#[derive(Clone, PartialEq)]
pub struct Mbr {
ls: LineString,
diff --git a/src/ys/polygon.rs b/src/ys/polygon.rs
index 8580211..99539f4 100644
--- a/src/ys/polygon.rs
+++ b/src/ys/polygon.rs
@@ -5,6 +5,7 @@ use geo::{
use crate::{Bbox, Mbr};
+/// Polygon.
#[derive(Clone, PartialEq)]
pub struct Polygon {
polygon: geo::Polygon,
diff --git a/src/ys/prob.rs b/src/ys/prob.rs
index 6a49b34..3e5ba80 100644
--- a/src/ys/prob.rs
+++ b/src/ys/prob.rs
@@ -1,4 +1,4 @@
-/// Probabilities for classification
+/// Probabilities for classification.
#[derive(Clone, PartialEq, Default)]
pub struct Prob {
probs: Vec<f32>,
diff --git a/src/ys/y.rs b/src/ys/y.rs
index 5007fb3..a1f92b0 100644
--- a/src/ys/y.rs
+++ b/src/ys/y.rs
@@ -1,5 +1,6 @@
use crate::{Bbox, Embedding, Keypoint, Mask, Mbr, Polygon, Prob};
+/// Container for the inference results of one image.
#[derive(Clone, PartialEq, Default)]
pub struct Y {
probs: Option<Prob>,