This demo shows how to use BLIP for conditional or unconditional image captioning.
## Quick Start

```shell
cargo run -r --example blip
```
BLIP ONNX Model
## Results
[Unconditional image captioning]: a group of people walking around a bus
[Conditional image captioning]: three man walking in front of a bus
## TODO
- VQA
- Retrieval
- TensorRT support for textual model