Uncertainty estimation

* Add a rudimentary test for two functions; maybe more in the future
* Fix the rotation correction from vertical translation
* Move the preview class to new files
* Move the neural network model adapters to new files
* Add utility functions for OpenCV
* Query the model inputs/outputs by name to see what is available
  (see the first sketch after this list)
* Support outputs for the standard deviation of the data distribution,
  i.e. what you get if you let the model output the full parameters of a
  Gaussian distribution (conditioned on the inputs) and fit it with a
  negative log-likelihood loss (see the formula after this list)
* Disable support for sequence models
* Add support for eye open/closed classification. The uncertainty
  estimate is scaled up if the eyes are closed
* Add a deadzone filter which activates if the model supports uncertainty
  quantification. The deadzone becomes larger the more uncertain the
  model/data are. This is mostly meant to suppress large estimation
  errors when the user blinks (see the last sketch after this list)
* Fix the distance being twice what it should have been
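
A minimal sketch of the query-by-name idea, assuming a recent ONNX Runtime
C++ API (which the tracker builds on); names here are illustrative:

    #include <onnxruntime_cxx_api.h>
    #include <string>
    #include <vector>

    // Enumerate the model's outputs by name; optional heads (e.g. an
    // uncertainty output) are enabled only if the network provides them.
    std::vector<std::string> output_names(const Ort::Session& session)
    {
        Ort::AllocatorWithDefaultOptions alloc;
        std::vector<std::string> names;
        for (size_t i = 0; i < session.GetOutputCount(); ++i)
            names.emplace_back(session.GetOutputNameAllocated(i, alloc).get());
        return names;
    }

For reference, the standard per-sample negative log-likelihood of a Gaussian
with predicted mean \mu(x) and standard deviation \sigma(x) is

    $$ -\log p(y \mid x) = \frac{(y - \mu(x))^2}{2\sigma(x)^2} + \log\sigma(x) + \tfrac{1}{2}\log 2\pi $$

Minimizing it trades fit error against predicted spread, which is what makes
\sigma(x) usable as a per-sample uncertainty estimate.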
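
And a sketch of the uncertainty-scaled deadzone; the gain constant and the
exact shape are assumptions for illustration, not the actual opentrack code:

    #include <cmath>

    constexpr float deadzone_gain = 1.f;  // assumed tunable constant

    float apply_deadzone(float input, float last_output, float stddev)
    {
        // Deadzone radius grows with the model's predicted uncertainty.
        const float radius = deadzone_gain * stddev;
        const float delta = input - last_output;
        if (std::abs(delta) <= radius)
            return last_output;  // hold the output, e.g. across blinks
        // Outside the zone, follow the input minus the radius so the
        // output stays continuous at the zone boundary.
        return input - std::copysign(radius, delta);
    }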

This reverts commit a67e8630caf20e7f48151024e9e68dd9271d75c7.

This is useful not just to save on complexity at call sites, but also
because I plan on using the Verdigris library to remove needless
`valueChanged()` and `setValue()` overloads from each `value<t>`
instance.

Also fix a bug in `options/tie.hpp` where `QComboBox::setCurrentIndex`
was erroneously connected with `Qt::DirectConnection`.
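
A minimal sketch of the corrected tie, with an illustrative template
parameter standing in for `value<t>` (assumed: the object derives from
QObject and emits `valueChanged(int)`; the real signatures may differ):

    #include <QComboBox>
    #include <QObject>

    template <typename Value>
    void tie_combobox(Value& v, QComboBox* cb)
    {
        // Queued rather than direct: the value may change on a non-GUI
        // thread, and Qt widgets must only be touched from the GUI thread.
        QObject::connect(&v, SIGNAL(valueChanged(int)),
                         cb, SLOT(setCurrentIndex(int)),
                         Qt::QueuedConnection);
    }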

* The preview image is now generated at the dimensions of the widget
  (see the sketch after this list)
* The pose visualization is drawn afterwards, scaled to the preview size
* The fps / inference time readout is moved to the settings dialog
* The resolution actually obtained from the camera is also shown
* The dialog layout is changed

Note: switching to underscores to mark class member variables.
This is not applied consistently yet.
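
A sketch of that render-then-overlay order, with plain OpenCV calls standing
in for the actual widget code (function and parameter names are illustrative):

    #include <opencv2/imgproc.hpp>

    cv::Mat make_preview(const cv::Mat& frame, cv::Size widget_size,
                         cv::Point2f head_center_px)
    {
        cv::Mat preview;
        cv::resize(frame, preview, widget_size);  // render at widget size
        // Scale image-space coordinates into preview space before drawing,
        // so overlays line up regardless of the camera resolution.
        const float sx = float(widget_size.width) / frame.cols;
        const float sy = float(widget_size.height) / frame.rows;
        cv::circle(preview,
                   { int(head_center_px.x * sx), int(head_center_px.y * sy) },
                   5, { 0, 255, 0 }, 2);  // stand-in for the pose gizmo
        return preview;
    }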

Add tooltips, so far for only about half of the settings. They show when
hovering over the actual input boxes.

Regarding tweaks:
* EWA smoothing of the head ROI. The smoothing strength is a UI setting
  (see the sketch after this list).
* Adjustable zoom into the detected face. The predicted ROI is scaled by a
  factor the user can set. There is a sweet spot somewhere near 1.
* Adjustable number of threads.
* The ROI is no longer taken directly from the model output. That was not
  actually needed, except perhaps as an auxiliary training objective for
  the network. The tracker implementation now just uses a square area
  around the head center, sized according to the predicted head size.
* Add a comment and a debug notification for the face ROI model output.
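
The EWA smoothing from the first bullet as a minimal sketch; `alpha`
corresponds to the smoothing-strength setting (names are illustrative):

    #include <opencv2/core.hpp>

    // Exponentially weighted average of the head ROI: alpha = 1 takes the
    // new detection unfiltered, alpha -> 0 makes the ROI very sluggish.
    cv::Rect2f smooth_roi(const cv::Rect2f& prev, const cv::Rect2f& det,
                          float alpha)
    {
        auto lerp = [alpha](float a, float b) { return a + alpha * (b - a); };
        return { lerp(prev.x, det.x), lerp(prev.y, det.y),
                 lerp(prev.width, det.width), lerp(prev.height, det.height) };
    }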

- Added support for MJPEG compression for the neuralnet tracker
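
For illustration only, requesting MJPEG from a camera via OpenCV's
`VideoCapture` (opentrack has its own video layer; this assumes the capture
backend honors the FOURCC property):

    #include <opencv2/videoio.hpp>

    bool open_mjpeg_camera(cv::VideoCapture& cap, int index)
    {
        if (!cap.open(index))
            return false;
        // MJPEG cuts USB bandwidth, letting higher resolutions and frame
        // rates fit through the link than raw YUYV would.
        cap.set(cv::CAP_PROP_FOURCC,
                cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
        return true;
    }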

It crashes later though.

Meaning informative elements like the pose gizmo and bounding boxes

Powered by AI!
Models were generated with code from
https://github.com/DaWelter/neuralnet-tracker-traincode/releases/tag/v0.1