Unique Approach to Computer Vision
Consensus Machine Learning Architecture. Multi-disciplinary Data Validation. Non-Binary Filtering and Storage. Hyperlocalization.
Yes, this is the technical article.
Consensus Machine Learning Architecture
Most computer vision products are in the autonomous driving space, where speed is of utmost importance. YOLO, a popular framework, stands for “You Only Look Once”; it was named that because applications like autonomous driving require split-second realtime decisions and don’t have reliable access to the cloud. They really only can look once.
Surveillance companies copied this architecture, but Survail didn’t. It is unnecessarily confining. We don’t only look once.
Survail uses a combination of realtime efficient object detection on the edge to identify candidates for further inference by slower, more accurate models in the cloud. These models become just as fast as the realtime ones because they don’t have to do the heavy lifting of evaluating every frame or even evaluating the full image of a frame. We use the edge to identify regions of interest in specific frames for slower, more accurate models to explore further in higher resolution than any realtime application could realistically support.
This also means that we can bypass the typical precision-versus-recall tradeoff of any one model. We start by running a model with very high recall rates (a high recall rate means it rarely misses potential objects), then use its detections to select and crop images for highly precise models.
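The cascade described above can be sketched in a few lines. This is a hypothetical illustration, not Survail's actual pipeline: the function names, score fields, and thresholds are all assumptions standing in for a lightweight edge detector and a heavier cloud model.

```python
# Hypothetical two-stage detection cascade: a fast, high-recall edge model
# proposes regions of interest; a slower, high-precision model re-examines
# only those crops. All names and thresholds here are illustrative.

def edge_detect(frame, recall_threshold=0.2):
    """Fast pass on the edge device: keep every candidate, tolerate noise."""
    return [box for box in frame["candidates"] if box["score"] >= recall_threshold]

def cloud_detect(box, precision_threshold=0.8):
    """Slow pass in the cloud: re-score a single cropped region."""
    return box["refined_score"] >= precision_threshold

def cascade(frame):
    confirmed = []
    for box in edge_detect(frame):      # high recall: don't miss anything
        if cloud_detect(box):           # high precision: discard false positives
            confirmed.append(box)
    return confirmed

frame = {"candidates": [
    {"score": 0.9, "refined_score": 0.95},  # real object, confirmed
    {"score": 0.3, "refined_score": 0.10},  # edge false positive, rejected
]}
print(cascade(frame))  # only the first candidate survives both stages
```

Because the cloud model only ever sees small crops from selected frames, it can afford far higher resolution per pixel than a realtime full-frame detector.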
Also, because this is a consensus model, we aren’t being quiet about our partnerships. We’re using a combination of best-in-class practices, tech, models, and pipelines, some of which we built entirely ourselves and some of which we test against or run additional inference with, using models from Intel, NVIDIA, Facebook, and Google. Survail stands on the shoulders of tech giants, keeping what works in your use case and jettisoning what doesn’t.
- Motion detection
- Object motion and pathing (in development)
- Object speed (in development)
- Object width and height constraints
- Exclusion Zones
- Line Crossing (in development)
- High Priority Zones (in development)
- Timed Zones (in development)
- Interval Zones (in development)
- Counting Zones (in development)
- Object shape distortion (in development)
- Action Recognition (in development)
- Scene Segmentation (in development)
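Two of the simpler filters above, object width/height constraints and exclusion zones, can be sketched as plain geometric checks. This is an illustrative sketch, not Survail's implementation; the box and zone field names are assumptions.

```python
# Illustrative geometric filters: size constraints and exclusion zones.
# Boxes and zones are axis-aligned rectangles with assumed field names.

def passes_size(box, min_w=20, min_h=40, max_w=400, max_h=800):
    """Reject detections that are implausibly small or large for the scene."""
    w, h = box["x2"] - box["x1"], box["y2"] - box["y1"]
    return min_w <= w <= max_w and min_h <= h <= max_h

def in_exclusion_zone(box, zones):
    """Center-point test: excluded if the box center falls inside any zone."""
    cx = (box["x1"] + box["x2"]) / 2
    cy = (box["y1"] + box["y2"]) / 2
    return any(z["x1"] <= cx <= z["x2"] and z["y1"] <= cy <= z["y2"] for z in zones)

zones = [{"x1": 0, "y1": 0, "x2": 100, "y2": 100}]   # e.g. a busy street corner
box = {"x1": 150, "y1": 200, "x2": 200, "y2": 320}   # 50x120 px detection
print(passes_size(box) and not in_exclusion_zone(box, zones))  # True
```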
Multi-disciplinary Data Validation
Machine learning products usually evaluate the accuracy of their models. Survail won't stop at just checking the output of the model; we are evolving to perform a series of checks that confirm the data is accurate in real time. So, if the models say "we detected a person" who is performing an action that looks like "falling," we check whether the object's speed and orientation are changing in the way we would expect: going from tall and skinny to wide and short, for example.
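The tall-to-wide cross-check in the example above amounts to tracking bounding-box aspect ratio over time. The sketch below is a hypothetical simplification of that idea; the thresholds and the track representation are assumptions.

```python
# Hedged sketch: validate a "falling" classification against bounding-box
# geometry. A genuine fall should flip the box from tall to wide.
# Thresholds (1.5 and 0.8) are illustrative, not tuned values.

def aspect(box):
    """Height-to-width ratio; > 1 means taller than wide."""
    return box["h"] / max(box["w"], 1)

def plausible_fall(track):
    """track: chronological list of bounding boxes for one tracked person."""
    start, end = track[0], track[-1]
    # Tall and skinny (ratio > 1.5) becoming wide and short (ratio < 0.8).
    return aspect(start) > 1.5 and aspect(end) < 0.8

track = [{"w": 60, "h": 180},   # upright: ratio 3.0
         {"w": 120, "h": 150},  # mid-fall
         {"w": 180, "h": 70}]   # on the ground: ratio ~0.39
print(plausible_fall(track))  # True
```

A real validator would also consult object speed, as the text notes, so that a person simply lying down slowly could be distinguished from a fall.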
Non-Binary Filtering and Storage
Easily the largest difference in the architecture is the idea that your settings and filters only affect your viewport of the underlying events, not the events themselves or the underlying metadata and video.
Your view is binary: it reflects how you define “yes, this matters to me.” The metadata and video files are non-binary. What this means is that, unlike its predecessors, if Survail observes an event that scores below your desired confidence threshold or object size, it won’t send you an alert or clog your event viewer, but it also doesn’t fail to record that event and its metadata.
Survail records all the time. You decide when something should bubble up to your event viewer or alert you, and let us worry about storage.
This means that if you set your filters too narrowly and later decide they need to be more inclusive, you don’t just affect future recordings: you also change your view of the past. With a few clicks of your mouse, you can forensically recover events that were excluded from your event feed.
The metadata and video files are immutable; your settings change only the presentation layer, your viewport onto that underlying data.
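The non-binary idea reduces to a simple pattern: store everything, and treat user settings as a query over the immutable record rather than a gate in front of it. The sketch below illustrates this under assumed event fields; it is not Survail's storage schema.

```python
# Sketch of "non-binary" storage: every observed event is kept forever, and
# your settings merely filter the view. Loosening settings later resurfaces
# past events without any re-recording. Event fields are illustrative.

EVENTS = [  # immutable store: everything observed, regardless of settings
    {"id": 1, "label": "person", "confidence": 0.92},
    {"id": 2, "label": "person", "confidence": 0.41},
    {"id": 3, "label": "car",    "confidence": 0.77},
]

def viewport(events, min_confidence):
    """Settings filter the view, never the underlying data."""
    return [e for e in events if e["confidence"] >= min_confidence]

strict = viewport(EVENTS, 0.9)    # today's narrow settings: 1 visible event
relaxed = viewport(EVENTS, 0.3)   # lowered later: all 3 events "recovered"
print(len(strict), len(relaxed))  # 1 3
```

Contrast this with a binary recorder, which would have applied the 0.9 threshold at write time and discarded events 2 and 3 irretrievably.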
Survail isn’t just about security. We’re here to help you find and save endangered species, stop workplace sexual harassment, know if your customers are getting good customer service, improve workplace safety, spot manufacturing defects, do your inventory, evaluate your employees, or figure out when a roadway is blocked because of traffic, accidents or car fires.
With Survail you will eventually be able to swap our models out for yours, edit pipelines, or inject custom classifiers to weed out specific problematic scenarios. We're happy to make you something of value (obviously, for a fee).
Security camera installations have a key advantage over computer vision for autonomous vehicles: our backgrounds don’t change. So instead of a model that has to understand every possible background, we can give Survail hard data on what an inactive scene looks like at your specific location right when you onboard. Every Survail box starts with a computer vision model trained on globally collected object detection data plus your site-specific backgrounds, but over time, the edge-based machine learning models become more and more location- and client-specific. Survail boxes retrain their own models with client-protected, secure, localized detection data.
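One simple way to exploit a static background, sketched below, is a running-average background model: blend incoming frames into a long-term estimate of the inactive scene and flag pixels that deviate from it. This is a textbook technique offered as an illustration of the principle, not a description of Survail's models; a production system would use NumPy/OpenCV on full images rather than the flat pixel lists used here.

```python
# Illustrative running-average background model for a fixed camera.
# Pixels are plain grayscale floats; alpha controls how quickly the
# background estimate adapts to slow scene changes (lighting, seasons).

def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the long-term background estimate."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def changed_pixels(background, frame, threshold=30):
    """Indices where the frame departs from the learned inactive scene."""
    return [i for i, (b, f) in enumerate(zip(background, frame))
            if abs(f - b) > threshold]

background = [100.0, 100.0, 100.0, 100.0]  # learned at onboarding
frame      = [102.0, 180.0,  99.0, 101.0]  # one region lit up
print(changed_pixels(background, frame))   # [1]
background = update_background(background, frame)  # adapt slowly over time
```

Because the background is learned per site, the same code becomes more accurate the longer the box watches one location, which is the hyperlocalization effect the paragraph describes.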