Merged
Commits (17)
a6d9961
docs: ✨ jupyter notebook support added
onuralpszr Feb 4, 2024
6ffd159
docs: ✨ supervision cookbook initial page design landed
onuralpszr Feb 8, 2024
1fba192
docs: ✨ folder rename to javascript to javascripts (plural)
onuralpszr Feb 8, 2024
c436587
Merge changes from develop branch
onuralpszr Feb 8, 2024
1a20bb5
docs: ✨ add demo notebook and remove notebooks folder from ignore list
onuralpszr Feb 8, 2024
3ddeed1
docs: ✨ data-url param added
onuralpszr Feb 8, 2024
15b4914
docs(js): 🔥 unused params has been removed from js files for cookbook…
onuralpszr Feb 8, 2024
d2ccc4f
docs(nav): 🚀 nav-tab enabled and new order of pages for better unders…
onuralpszr Feb 8, 2024
1ad26b5
fix(pre_commit): 🎨 auto format pre-commit hooks
pre-commit-ci[bot] Feb 8, 2024
ab38cb1
docs(nb): 🚀 initial notebook added as quickstart and other examples a…
onuralpszr Feb 8, 2024
d900e68
docs(home): 🚀 hide table of content for home page only and give looks…
onuralpszr Feb 8, 2024
f81ccbc
fix(pre_commit): 🎨 auto format pre-commit hooks
pre-commit-ci[bot] Feb 8, 2024
4ef2d04
ci(docs): 👷 new packages added for build document
onuralpszr Feb 8, 2024
6a091b5
ci(docs): 👷 GH_TOKEN added for git-committers
onuralpszr Feb 8, 2024
985624e
docs(assets): 🐞 missing js and css files added into mkdocs.yml
onuralpszr Feb 8, 2024
4e5331b
docs(version): 🐞 restore version part and revert hide h1 on home for now
onuralpszr Feb 8, 2024
08a204d
docs(api): 📝 api/annotators position adjusted and small typo fix
onuralpszr Feb 8, 2024
12 changes: 10 additions & 2 deletions .github/workflows/docs.yml
@@ -21,15 +21,23 @@ jobs:
- name: 🐍 Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.x
python-version: '3.10'
- name: 📦 Install mkdocs-material
run: pip install mkdocs-material
run: pip install "mkdocs-material[all]"
- name: 📦 Install mkdocstrings[python]
run: pip install "mkdocstrings[python]"
- name: 📦 Install mkdocs-material[imaging]
run: pip install "mkdocs-material[imaging]"
- name: 📦 Install mike
run: pip install "mike"
- name: 📦 Install mkdocs-git-revision-date-localized-plugin
run: pip install "mkdocs-git-revision-date-localized-plugin"
- name: 📦 Install JupyterLab
run: pip install jupyterlab
- name: 📦 Install mkdocs-jupyter
run: pip install mkdocs-jupyter
- name: 📦 Install mkdocs-git-committers-plugin-2
run: pip install mkdocs-git-committers-plugin-2
- name: ⚙️ Configure git for github-actions
run: |
git config --global user.name "github-actions[bot]"
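The workflow changes above add eight separate `pip install` steps. An alternative worth noting is pinning the same dependency set in one requirements file — a sketch only; the `docs/requirements.txt` path and the consolidation itself are assumptions, not part of this PR:

```text
# docs/requirements.txt — hypothetical consolidation of the install steps above
mkdocs-material[all]
mkdocstrings[python]
mkdocs-material[imaging]
mike
mkdocs-git-revision-date-localized-plugin
jupyterlab
mkdocs-jupyter
mkdocs-git-committers-plugin-2
```

A single workflow step (`pip install -r docs/requirements.txt`) would then replace the eight install steps and keep the dependency list diffable on its own.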
3 changes: 0 additions & 3 deletions .gitignore
@@ -133,9 +133,6 @@ dmypy.json
# Pyre type checker
.pyre/

# Notebooks
notebooks/

# OSX folder attributes
.DS_Store
.AppleDouble
2 changes: 1 addition & 1 deletion README.md
@@ -21,7 +21,7 @@
[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb)
[![Gradio](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/Annotators)
[![Discord](https://img.shields.io/discord/1159501506232451173)](https://discord.gg/GbfgXGJ8Bk)

[![Built with Material for MkDocs](https://img.shields.io/badge/Material_for_MkDocs-526CFE?logo=MaterialForMkDocs&logoColor=white)](https://squidfunk.github.io/mkdocs-material/)
</div>

## 👋 hello
22 changes: 17 additions & 5 deletions demo.ipynb
@@ -12,11 +12,23 @@
"\n",
"---\n",
"\n",
"[![version](https://badge.fury.io/py/supervision.svg)](https://badge.fury.io/py/supervision)\n",
"[![downloads](https://img.shields.io/pypi/dm/supervision)](https://pypistats.org/packages/supervision)\n",
"[![license](https://img.shields.io/pypi/l/supervision)](https://github.com/roboflow/supervision/blob/main/LICENSE.md)\n",
"[![python-version](https://img.shields.io/pypi/pyversions/supervision)](https://badge.fury.io/py/supervision)\n",
"[![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/roboflow/supervision)\n",
"<p align=\"center\">\n",
" <a href=\"https://badge.fury.io/py/supervision\"><img src=\"https://badge.fury.io/py/supervision.svg\" alt=\"version\"></a>\n",
" <a href=\"https://pypistats.org/packages/supervision\"><img src=\"https://img.shields.io/pypi/dm/supervision\" alt=\"downloads\"></a>\n",
" <a href=\"https://github.com/roboflow/supervision/blob/main/LICENSE.md\"><img src=\"https://img.shields.io/pypi/l/supervision\" alt=\"license\"></a>\n",
" <a href=\"https://badge.fury.io/py/supervision\"><img src=\"https://img.shields.io/pypi/pyversions/supervision\" alt=\"python-version\"></a>\n",
" <a href=\"https://github.com/roboflow/supervision\"><img src=\"https://badges.aleen42.com/src/github.svg\" alt=\"GitHub\"></a>\n",
"</p>\n",
"\n",
"<p align=\"center\">\n",
" <a href=\"https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Colab\"></a>\n",
" <a href=\"https://kaggle.com/kernels/welcome?src=https://github.com/roboflow/supervision/blob/main/demo.ipynb\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Kaggle\"></a>\n",
" <a href=\"https://studiolab.sagemaker.aws/import/github/roboflow/supervision/blob/main/demo.ipynb\"><img src=\"https://raw.github.com/roboflow-ai/notebooks/main/assets/badges/sage-maker.svg\" alt=\"SageMaker\"></a>\n",
" <a href=\"https://nbviewer.jupyter.org/github/roboflow/supervision/blob/main/demo.ipynb\"><img src=\"https://img.shields.io/badge/Open_in_Nbviewer-F37626.svg?logo=Jupyter&logoColor=white\" alt=\"nbviewer\">\n",
" <a href=\"https://mybinder.org/v2/gh/roboflow/supervision/develop?labpath=demo.ipynb\"><img src=\"https://mybinder.org/badge_logo.svg\" alt=\"Binder\"></a>\n",
" <a href=\"https://github.com/roboflow/supervision/raw/main/demo.ipynb\" download><img src=\"https://img.shields.io/badge/Download-Notebook-A351FB.svg\" alt=\"Download\"></a>\n",
"</p>\n",
"\n",
"\n",
"We write your reusable computer vision tools. Whether you need to load your dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone. You can count on us! 🤝\n",
"\n",
3 changes: 1 addition & 2 deletions docs/assets.md
@@ -7,8 +7,7 @@ comments: true
Supervision offers an assets download utility that allows you to download video files
that you can use in your demos.

## install extra

## Install extra

To install the Supervision assets utility, you can use `pip`. This utility is available
as an extra within the Supervision package.
2 changes: 1 addition & 1 deletion docs/changelog.md
@@ -1,6 +1,6 @@
### 0.18.0 <small>January 25, 2024</small>

- Added [#633](https://github.com/roboflow/supervision/pull/720): [`sv.PercentageBarAnnotator`](0.18.0/annotators/#percentagebarannotator) allowing to annotate images and videos with percentage values representing confidence or other custom property.
- Added [#633](https://github.com/roboflow/supervision/pull/720): [`sv.PercentageBarAnnotator`](/0.18.0/annotators/#percentagebarannotator) allowing to annotate images and videos with percentage values representing confidence or other custom property.

```python
>>> import supervision as sv
8 changes: 8 additions & 0 deletions docs/cookbooks.md
@@ -0,0 +1,8 @@
---
template: cookbooks.html
comments: true
status: new
hide:
- navigation
- toc
---
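The front matter above points the new `cookbooks.md` page at a custom `cookbooks.html` template; for that template to resolve and for notebooks to build at all, `mkdocs.yml` also has to register the assets and plugins this PR installs. A hedged sketch — option names follow the mkdocs-jupyter and git-committers plugin docs, and none of these exact values are shown in this diff:

```yaml
# mkdocs.yml fragment — illustrative only, not the PR's actual config
theme:
  custom_dir: docs/templates   # assumed location of cookbooks.html
plugins:
  - mkdocs-jupyter             # renders .ipynb pages such as the new quickstart notebook
  - git-committers:
      repository: roboflow/supervision
      token: !ENV GH_TOKEN     # matches the GH_TOKEN added in commit 6a091b5
extra_javascript:
  - javascripts/cookbooks-card.js
```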
22 changes: 11 additions & 11 deletions docs/how_to/detect_and_annotate.md
@@ -44,7 +44,7 @@ Now that we have predictions from a model, we can load them into Supervision.

=== "Ultralytics"

We can do so using the [`sv.Detections.from_ultralytics`](/latest/detection/core/#supervision.detection.core.Detections.from_ultralytics) method, which accepts model results from both detection and segmentation models.
We can do so using the [`sv.Detections.from_ultralytics`](detection/core/#supervision.detection.core.Detections.from_ultralytics) method, which accepts model results from both detection and segmentation models.

```python
import cv2
@@ -59,7 +59,7 @@ Now that we have predictions from a model, we can load them into Supervision.

=== "Inference"

We can do so using the [`sv.Detections.from_inference`](/latest/detection/core/#supervision.detection.core.Detections.from_inference) method, which accepts model results from both detection and segmentation models.
We can do so using the [`sv.Detections.from_inference`](detection/core/#supervision.detection.core.Detections.from_inference) method, which accepts model results from both detection and segmentation models.

```python
import cv2
@@ -74,17 +74,17 @@

You can conveniently load predictions from other computer vision frameworks and libraries using:

- [`from_deepsparse`](/latest/detection/core/#supervision.detection.core.Detections.from_deepsparse) ([Deepsparse](https://github.com/neuralmagic/deepsparse))
- [`from_detectron2`](/latest/detection/core/#supervision.detection.core.Detections.from_detectron2) ([Detectron2](https://github.com/facebookresearch/detectron2))
- [`from_mmdetection`](/latest/detection/core/#supervision.detection.core.Detections.from_mmdetection) ([MMDetection](https://github.com/open-mmlab/mmdetection))
- [`from_inference`](/latest/detection/core/#supervision.detection.core.Detections.from_inference) ([Roboflow Inference](https://github.com/roboflow/inference))
- [`from_sam`](/latest/detection/core/#supervision.detection.core.Detections.from_sam) ([Segment Anything Model](https://github.com/facebookresearch/segment-anything))
- [`from_transformers`](/latest/detection/core/#supervision.detection.core.Detections.from_transformers) ([HuggingFace Transformers](https://github.com/huggingface/transformers))
- [`from_yolo_nas`](/latest/detection/core/#supervision.detection.core.Detections.from_yolo_nas) ([YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md))
- [`from_deepsparse`](detection/core/#supervision.detection.core.Detections.from_deepsparse) ([Deepsparse](https://github.com/neuralmagic/deepsparse))
- [`from_detectron2`](detection/core/#supervision.detection.core.Detections.from_detectron2) ([Detectron2](https://github.com/facebookresearch/detectron2))
- [`from_mmdetection`](detection/core/#supervision.detection.core.Detections.from_mmdetection) ([MMDetection](https://github.com/open-mmlab/mmdetection))
- [`from_inference`](detection/core/#supervision.detection.core.Detections.from_inference) ([Roboflow Inference](https://github.com/roboflow/inference))
- [`from_sam`](detection/core/#supervision.detection.core.Detections.from_sam) ([Segment Anything Model](https://github.com/facebookresearch/segment-anything))
- [`from_transformers`](detection/core/#supervision.detection.core.Detections.from_transformers) ([HuggingFace Transformers](https://github.com/huggingface/transformers))
- [`from_yolo_nas`](detection/core/#supervision.detection.core.Detections.from_yolo_nas) ([YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md))

## Annotate Image

Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoundingBoxAnnotator`](/latest/annotators/#supervision.annotators.core.BoundingBoxAnnotator) and [`sv.LabelAnnotator`](/latest/annotators/#supervision.annotators.core.LabelAnnotator) classes. If you are running the segmentation model [`sv.MaskAnnotator`](/latest/annotators/#supervision.annotators.core.MaskAnnotator) is a drop-in replacement for [`sv.BoundingBoxAnnotator`](/latest/annotators/#supervision.annotators.core.BoundingBoxAnnotator) that will allow you to draw masks instead of boxes.
Finally, we can annotate the image with the predictions. Since we are working with an object detection model, we will use the [`sv.BoundingBoxAnnotator`](annotators/#supervision.annotators.core.BoundingBoxAnnotator) and [`sv.LabelAnnotator`](annotators/#supervision.annotators.core.LabelAnnotator) classes. If you are running the segmentation model [`sv.MaskAnnotator`](annotators/#supervision.annotators.core.MaskAnnotator) is a drop-in replacement for [`sv.BoundingBoxAnnotator`](annotators/#supervision.annotators.core.BoundingBoxAnnotator) that will allow you to draw masks instead of boxes.

=== "Ultralytics"

@@ -138,7 +138,7 @@ Finally, we can annotate the image with the predictions. Since we are working wi

## Display Annotated Image

To display the annotated image in Jupyter Notebook or Google Colab, use the [`sv.plot_image`](/latest/utils/notebook/#supervision.utils.notebook.plot_image) function.
To display the annotated image in Jupyter Notebook or Google Colab, use the [`sv.plot_image`](utils/notebook/#supervision.utils.notebook.plot_image) function.

```python
sv.plot_image(annotated_image)
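Most of the edits in this file swap absolute `/latest/...` link targets for page-relative ones (the changelog edit above goes the other way, adding a leading slash). The practical difference is how each form resolves against a versioned base URL, which the standard library can illustrate — the base URL below is an assumption for demonstration only:

```python
from urllib.parse import urljoin

# Hypothetical versioned docs page, e.g. as published by mike
base = "https://supervision.roboflow.com/0.18.0/how_to/detect_and_annotate/"

# Absolute path: jumps out of the current version to /latest/
print(urljoin(base, "/latest/detection/core/"))
# -> https://supervision.roboflow.com/latest/detection/core/

# Relative path: stays under the current page's directory (and version)
print(urljoin(base, "detection/core/"))
# -> https://supervision.roboflow.com/0.18.0/how_to/detect_and_annotate/detection/core/
```

Relative links therefore keep a reader inside the docs version they are already browsing, which appears to be the motivation for these edits.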
3 changes: 3 additions & 0 deletions docs/index.md
@@ -1,5 +1,8 @@
---
comments: true
hide:
- navigation
- toc
---

<div align="center">
102 changes: 102 additions & 0 deletions docs/javascripts/cookbooks-card.js
@@ -0,0 +1,102 @@

document.addEventListener("DOMContentLoaded", function () {

async function setCard(el, url, name, desc, labels, version, theme, authors) {
const colorList = [
"A351FB", "FF4040", "FFA1A0", "FF7633", "FFB633", "D1D435", "4CFB12",
"94CF1A", "40DE8A", "1B9640", "00D6C1", "2E9CAA", "00C4FF", "364797",
"6675FF", "0019EF", "863AFF", "530087", "CD3AFF", "FF97CA", "FF39C9"
]

let labelHTML = ''
if (labels) {
const labelArray = labels.split(',').map((label, index) => {
const color = colorList[index % colorList.length]
return `<span style="background-color: #${color}; color: #fff; padding: 2px 6px; border-radius: 12px; margin-right: 4px;">${label}</span>`
})

labelHTML = labelArray.join(' ')
}

const authorArray = authors.split(',');
const authorDataArray = await Promise.all(authorArray.map(async (author) => {
const response = await fetch(`https://api.github.com/users/${author.trim()}`);
return await response.json();
}));

let authorHTML = '';
authorDataArray.forEach((authorData, index) => {
const marginLeft = index === 0 ? '0' : '-15px';
authorHTML += `
<div class="author-container" style="display: inline-block; margin-left: ${marginLeft}; position: relative;">
<a href="https://github.com/${authorData.login}" target="_blank">
<img src="${authorData.avatar_url}" width="32" height="32" style="border-radius: 50%;" />
</a>
<div class="tooltip" style="visibility: hidden; background-color: #555; color: #fff; text-align: center; border-radius: 6px; padding: 5px 0; position: absolute; z-index: 1; bottom: 125%; left: 50%; margin-left: -60px; opacity: 0; transition: opacity 0.3s; width: 120px;">
${authorData.login}
</div>
</div>
`;
});

document.querySelectorAll('.author-container').forEach((container) => {
const tooltip = container.querySelector('.tooltip');
container.addEventListener('mouseover', () => {
tooltip.style.visibility = 'visible';
tooltip.style.opacity = '1';
});
container.addEventListener('mouseout', () => {
tooltip.style.visibility = 'hidden';
tooltip.style.opacity = '0';
});
});



el.innerText = `
<div style="flex-direction: column; height: 100%; display: flex;
font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif,Apple Color Emoji,Segoe UI Emoji; background: ${theme.background}; font-size: 14px; line-height: 1.5; color: ${theme.color}">
<div style="display: flex; align-items: center;">
<i class="fa-solid:book-open" style="color: ${theme.color}; margin-right: 8px;"></i>
<span style="font-weight: 600; color: ${theme.linkColor};">
<a style="text-decoration: none; color: inherit;" href="${url}">${name}</a>
</span>
</div>
<div style="font-size: 12px; margin-bottom: 10px; margin-top: 8px; color: ${theme.color}; flex: 1;">${desc}</div>
<div style="display: flex; align-items: center; justify-content: flex-start; margin-bottom: 8px;">
${authorHTML}
</div>
<div style="font-size: 12px; color: ${theme.color}; display: flex; flex: 0;">
<div style="display: flex; align-items: center; margin-right: 16px;">
</div>
<div style="display: flex; align-items: center; margin-right: 16px;">
<img src="/assets/supervision-lenny.png" aria-label="stars" width="16" height="16" role="img" />
&nbsp; <span>${version}</span>
</div>
<div style="display: flex; align-items: center;">
&nbsp; <span>${labelHTML}</span>
</div>
</div>
</div>
`

let sanitizedHTML = DOMPurify.sanitize(el.innerText);
el.innerHTML = sanitizedHTML;
}
for (const el of document.querySelectorAll('.repo-card')) {
const url = el.getAttribute('data-url');
const name = el.getAttribute('data-name');
const desc = el.getAttribute('data-desc');
const labels = el.getAttribute('data-labels');
const version = el.getAttribute('data-version');
const authors = el.getAttribute('data-author');
const palette = __md_get("__palette")
if (palette && typeof palette.color === "object") {
var theme = palette.color.scheme === "slate" ? "dark-theme" : "light-default"
} else {
var theme = "light-default"
}

setCard(el, url, name, desc, labels, version, theme, authors);
}
})
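One self-contained piece of the script above is how `setCard` assigns badge colors: comma-separated labels cycle through a fixed palette via `index % colorList.length`. The same logic in a short Python sketch — the three-color palette here is truncated from the script's 21 colors for brevity:

```python
# Label -> color assignment, mirroring cookbooks-card.js (truncated palette)
COLOR_LIST = ["A351FB", "FF4040", "FFA1A0"]

def label_colors(labels: str) -> list[tuple[str, str]]:
    """Pair each comma-separated label with a palette color, wrapping around."""
    return [
        (label.strip(), COLOR_LIST[i % len(COLOR_LIST)])
        for i, label in enumerate(labels.split(","))
    ]

print(label_colors("tracking,annotation,video,detection"))
# -> [('tracking', 'A351FB'), ('annotation', 'FF4040'), ('video', 'FFA1A0'), ('detection', 'A351FB')]
```

The modulo wrap means a card with more labels than palette entries reuses colors deterministically rather than failing.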
File renamed without changes.