Code underlying the publication: "Humans disagree with the IoU for measuring object detector localization error."

Description

The localization quality of automatic object detectors is typically evaluated with the Intersection over Union (IoU) score. In this work, we show that humans judge localization quality differently. To evaluate this, we conducted a survey with more than 70 participants. The results show that, for localization errors with the exact same IoU score, humans do not necessarily consider these errors equal and often express a preference for one over the other. Our work is the first to evaluate the IoU against human judgement and makes it clear that relying on IoU scores alone to assess localization errors may not be sufficient. In this repository, we provide a Jupyter notebook containing the code for our data analysis.
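For reference, the IoU is the area of overlap between a predicted box and a ground-truth box divided by the area of their union. The snippet below is a minimal, illustrative sketch (not taken from the notebook) showing how two qualitatively different localization errors, a shifted box and an elongated box, can share exactly the same IoU score, which is the situation the survey asks humans to judge:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical example: both detections score IoU = 2/3 against the
# same ground truth, yet the errors look very different to a human.
ground_truth = (0, 0, 100, 100)
shifted      = (20, 0, 120, 100)   # box translated to the right
elongated    = (0, 0, 100, 150)    # box stretched beyond the object
print(iou(ground_truth, shifted))    # 0.666...
print(iou(ground_truth, elongated))  # 0.666...
```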

Programming languages
  • Jupyter Notebook 99%
  • Other 1%
License
  • CC0-1.0
Source code
Packages
  • data.4tu.nl

Reference papers
  • Humans disagree with the IoU for measuring object detector localization error.

Contributors
  • Ombretta Strafforello
  • Osman Kayhan

Member of community
  • 4TU