
Commit 6274ea9 (1 parent: 5e43706)

docs: 📝 MeanAverageRecall markdown docs file added

Signed-off-by: Onuralp SEZER <thunderbirdtr@gmail.com>

File tree: 3 files changed, +41 −11 lines


docs/metrics/mean_average_precision.md (0 additions, 1 deletion)

```diff
@@ -1,6 +1,5 @@
 ---
 comments: true
-status: new
 ---
 
 # Mean Average Precision
```

docs/metrics/mean_average_recall.md (18 additions, 0 deletions)

```diff
@@ -0,0 +1,18 @@
+---
+comments: true
+status: new
+---
+
+# Mean Average Recall
+
+<div class="md-typeset">
+  <h2><a href="#supervision.metrics.mean_average_recall.MeanAverageRecall">MeanAverageRecall</a></h2>
+</div>
+
+:::supervision.metrics.mean_average_recall.MeanAverageRecall
+
+<div class="md-typeset">
+  <h2><a href="#supervision.metrics.mean_average_recall.MeanAverageRecallResult">MeanAverageRecallResult</a></h2>
+</div>
+
+:::supervision.metrics.mean_average_recall.MeanAverageRecallResult
```
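The `:::` lines above are mkdocstrings directives that render the Python docstrings in place. For the new page to show up in the site navigation, the mkdocs configuration typically also needs a nav entry; a hypothetical sketch (the actual `mkdocs.yml` nav layout in the supervision repo may differ):

```yaml
# Hypothetical mkdocs.yml excerpt -- paths and section names are assumptions,
# not taken from the repository's actual configuration.
nav:
  - Metrics:
      - Mean Average Precision: metrics/mean_average_precision.md
      - Mean Average Recall: metrics/mean_average_recall.md
```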

supervision/metrics/mean_average_recall.py (23 additions, 10 deletions)

````diff
@@ -29,7 +29,14 @@
 class MeanAverageRecall(Metric):
     """
     Mean Average Recall (mAR) metric for object detection evaluation.
-    It calculates the average recall across different IoU thresholds.
+    Calculates the average recall across different IoU thresholds and detection limits.
+
+    The metric evaluates:
+    - IoU thresholds from 0.5 to 0.95 with 0.05 step
+    - Different maximum detection limits [1, 10, 100]
+    - Size-specific evaluation (small, medium, large objects)
+
+    When no detections or targets are present, returns 0.0.
 
     Example:
         ```python
@@ -309,15 +316,21 @@ class MeanAverageRecallResult:
     Defaults to `0.0` when no detections or targets are present.
 
     Attributes:
-        metric_target (MetricTarget): the type of data used for the metric
-        is_class_agnostic (bool): When computing class-agnostic results, class ID is set to `-1`
-        mean_average_recall (float): the global mAR score
-        ar_per_class (np.ndarray): the average recall scores per class
-        matched_classes (np.ndarray): the class IDs of all matched classes
-        small_objects (Optional[MeanAverageRecallResult]): the mAR results for small objects
-        medium_objects (Optional[MeanAverageRecallResult]): the mAR results for medium objects
-        large_objects (Optional[MeanAverageRecallResult]): the mAR results for large objects
-    """  # noqa: E501 // docs
+        metric_target (MetricTarget): The type of data used for the metric
+            (boxes, masks, or oriented bounding boxes)
+        is_class_agnostic (bool): When computing class-agnostic results,
+            class ID is set to `-1`
+        mean_average_recall (float): The global mAR score averaged across classes,
+            IoU thresholds, and detection limits
+        ar_per_class (np.ndarray): The average recall scores per class
+        matched_classes (np.ndarray): The class IDs of all matched classes
+        small_objects (Optional[MeanAverageRecallResult]): The mAR results for
+            small objects (area < 32²)
+        medium_objects (Optional[MeanAverageRecallResult]): The mAR results for
+            medium objects (32² ≤ area < 96²)
+        large_objects (Optional[MeanAverageRecallResult]): The mAR results for
+            large objects (area ≥ 96²)
+    """
 
     metric_target: MetricTarget
     is_class_agnostic: bool
````