Fixed dashes
Signed-off-by: Jim Martens <github@2martens.de>
This commit is contained in:
parent 43820194c8
commit cb92f63775

body.tex (8 changed lines)
@@ -443,7 +443,7 @@ SSD network are the predictions with class confidences, offsets to the
 anchor box, anchor box coordinates, and variance. The model loss is a
 weighted sum of localisation and confidence loss. As the network
 has a fixed number of anchor boxes, every forward pass creates the same
-number of detections - 8732 in the case of SSD 300x300.
+number of detections---8732 in the case of SSD 300x300.
 
 Notably, the object proposals are made in a single run for an image -
 single shot.
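The fixed count of 8732 detections for SSD300 mentioned in the hunk above follows from the anchor layout in the original SSD paper (feature maps of sizes 38, 19, 10, 5, 3, 1 with 4, 6, 6, 6, 4, 4 default boxes per cell); a quick check:

```python
# SSD300 anchor layout from the SSD paper: (feature-map side length, boxes per cell).
feature_maps = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]

# Every forward pass emits one prediction per anchor box, so the
# number of detections is fixed regardless of image content.
total = sum(size * size * boxes for size, boxes in feature_maps)
print(total)  # 8732
```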
@@ -960,8 +960,8 @@ averaging was not reported in their paper.
 There is no visible impact of entropy thresholding on the object detection
 performance for vanilla SSD. This indicates that the network has almost no
 uniform or close to uniform predictions, the vast majority of predictions
-has a high confidence in one class - including the background.
-However, the entropy plays a larger role for the Bayesian variants - as
+has a high confidence in one class---including the background.
+However, the entropy plays a larger role for the Bayesian variants---as
 expected: the best performing thresholds are 1.0, 1.3, and 1.4 for micro averaging,
 and 1.5, 1.7, and 2.0 for macro averaging. In all of these cases the best
 threshold is not the largest threshold tested. A lower threshold likely
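The entropy thresholding discussed in the hunk above can be sketched as follows; the function name and shapes are illustrative, not taken from the thesis code. The point is that a confident prediction has near-zero entropy while a uniform one over C classes has entropy ln(C), so thresholds in the 1.0-2.0 range mainly filter uncertain outputs:

```python
import numpy as np

def entropy_filter(probs, threshold):
    """Keep detections whose predictive entropy falls below the threshold.

    probs: (N, C) array of per-detection class probabilities (rows sum to 1).
    threshold: entropy cutoff in nats. Names and shapes here are assumptions.
    """
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return probs[entropy < threshold], entropy

probs = np.array([[0.97, 0.01, 0.01, 0.01],   # confident -> low entropy, kept
                  [0.25, 0.25, 0.25, 0.25]])  # uniform -> ln(4) ~ 1.386, dropped
kept, ent = entropy_filter(probs, threshold=1.0)
```

With a threshold of 1.0 only the confident row survives, which matches the observation that vanilla SSD (almost no near-uniform predictions) is unaffected by thresholding while the Bayesian variants are.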
@@ -986,7 +986,7 @@ have the same number of observations everywhere before the entropy threshold. Af
 Without NMS 79\% of observations are left. Irrespective of the absolute
 number, this discrepancy clearly shows the impact of non-maximum suppression and also explains a higher count of false positives:
 more than 50\% of the original observations were removed with NMS and
-stayed without - all of these are very likely to be false positives.
+stayed without---all of these are very likely to be false positives.
 
 A clear distinction between micro and macro averaging can be observed:
 recall is hardly effected with micro averaging (0.300) but goes down equally with macro averaging (0.229). For micro averaging, it does
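The non-maximum suppression whose removal the hunk above discusses can be sketched as a greedy IoU filter; this is a generic minimal version, not the thesis implementation, and the 0.45 default is only a common choice:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes.

    boxes: (N, 4) array as [x1, y1, x2, y2]. A sketch under assumed conventions.
    """
    order = np.argsort(scores)[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection-over-union of the top box with the remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
result = nms(boxes, scores)  # the two overlapping boxes collapse to one
```

Because each surviving box suppresses all heavily overlapping neighbours, disabling NMS leaves those duplicates in place, which is why far more observations (and false positives) remain without it.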