Available Oct 1st
Action Recognition (Experimental, In Development; Expected in Q1 2023)
What it does:
This model attempts to understand the actions happening in a scene and looks for potentially unsafe scenarios, such as smashing, stealing, a hands-up gesture, a person falling or lying down, a panicked run, fighting, or an all-clear status.
Allows you to filter event feeds of people based on the model's best guess at each person's action.
Requires people detection and its camera placement requirements.
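To make the filtering capability above concrete, here is a minimal sketch of what filtering an event feed by predicted action could look like. The `Event` structure, the action labels, and the `filter_events` helper are illustrative assumptions for this sketch, not Survail's actual API.

```python
# Hypothetical sketch: filtering an event feed by predicted action.
# Event fields, labels, and thresholds are assumptions, not Survail's API.
from dataclasses import dataclass

@dataclass
class Event:
    camera_id: str
    action: str        # best-guess action label for the person
    confidence: float  # model confidence, 0.0-1.0

def filter_events(events, wanted_actions, min_confidence=0.5):
    """Keep events whose predicted action is in wanted_actions
    and whose confidence meets the threshold."""
    return [e for e in events
            if e.action in wanted_actions and e.confidence >= min_confidence]

feed = [
    Event("cam-1", "all_clear", 0.98),
    Event("cam-2", "fighting", 0.81),
    Event("cam-3", "falling", 0.42),
]

# cam-3 is dropped because its confidence is below the 0.5 threshold.
unsafe = filter_events(feed, {"fighting", "falling"})
```

A confidence threshold like the one sketched here is one reason the documentation warns against alerting on an experimental model: low-confidence predictions would otherwise flood the filtered feed.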
A Novel Approach:
Traditional live action recognition converts body parts into points and lines and then tries to determine what someone is doing from those shapes rather than from the image itself. This approach is slow, mathematically complex, and nearly impossible when people overlap (in groups) or are only partially visible. Because of these limitations, Survail's action recognition models do not use this common method of action recognition.
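For contrast, the traditional skeleton-based approach described above can be sketched roughly as follows: reduce a body to named keypoints, derive simple geometric features, and classify from those shapes. The keypoint names and rules here are illustrative assumptions (real systems also use temporal sequences of poses, not a single frame), and again, this is the method Survail does not use.

```python
# Sketch of traditional keypoint-based action recognition (per frame).
# Keypoint names and geometric rules are illustrative assumptions.

def classify_pose(keypoints):
    """keypoints: dict of name -> (x, y) in image coordinates,
    where y grows downward (standard image convention)."""
    head = keypoints["head"]
    hips = keypoints["hips"]
    wrists = [keypoints["left_wrist"], keypoints["right_wrist"]]

    # Both wrists above the head suggests a hands-up gesture.
    if all(w[1] < head[1] for w in wrists):
        return "hands_up"

    # A near-horizontal head-to-hips axis suggests a person lying down.
    dx, dy = hips[0] - head[0], hips[1] - head[1]
    if abs(dy) < abs(dx):
        return "lying_down"

    return "unknown"

standing = {"head": (100, 50), "hips": (100, 150),
            "left_wrist": (80, 100), "right_wrist": (120, 100)}
hands_up = {"head": (100, 50), "hips": (100, 150),
            "left_wrist": (80, 30), "right_wrist": (120, 30)}
```

Even this toy version shows the fragility the text describes: if a wrist keypoint is missing because the person is partially occluded, the dictionary lookup fails and no classification is possible, which is exactly the overlapping-people problem.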
This model is experimental. As such, we do not recommend creating alerts from this model at this time.
It is 97% accurate on our internal dataset, but that dataset is currently quite limited because these events are rare. This model's training dataset contains more synthetic data and less real-world surveillance footage than some of our other models. The more examples a machine learning model is given, the more accurate it tends to be.
This model uses a novel lightweight architecture that allows it to process and analyze data much faster than other action recognition models, which cannot run in real time. This architecture enables real-time action predictions, but results may not be reliable in a real-world environment without additional training time.
All that being said, here are some examples of this model succeeding:
This model was made by Survail and will continue to learn.