In The Trenches Video Series
Fisheye & 360 Security Cameras
Today, we deep dive into 360 degree fisheye cameras
Our speakers today are:
- > Matthew Nederlanden
- > Benjamin Larue
- > Michael Bell
- > James Campbell
Video Transcription
Matthew Nederlanden:
I think an interesting way of putting it might be there's no camera that's better at understanding a scene. Where is everybody in the office? How are they located? Where are they positioned? There's no better camera for that.
Ben Larue:
Hey, thanks again for joining us. I'm Ben with SCW. This is another session of our In the Trenches round table, and today is going to be a product-showcase-specific episode. The product we're going to be showcasing and talking about is the Radius. We have two different models, the 12.0 and the 5.0. This is a 360 degree fisheye, ePTZ camera. Fisheye and multisensor cameras are becoming increasingly popular, and we wanted to take some time today to deep dive and walk through some of the features, benefits, and things you can gain from them. So I'll take some time now to introduce our panel. We've got an awesome panel of experts here. First we've got James Campbell, he's our technology expert.
James Campbell:
Hey everybody.
Ben Larue:
And then we've got Michael Bell, he's our lead of the support team.
Michael Bell:
Hey guys.
Ben Larue:
And then we've got our CEO, Matthew Nederlanden.
Matthew Nederlanden:
Hello, everybody.
Ben Larue:
Beautiful. Super excited to dive into this one. If it's all right with you all, I'd love to just start getting granular.
Michael Bell:
Yeah.
Ben Larue:
So what exactly is a fisheye camera and how does it work?
James Campbell:
So a fisheye camera is a camera that has, as its name implies, a fisheye lens. It's got a lens that warps the image and generally gives you a 360 degree view. So it's a single sensor, single lens, and it creates a big 360 degree view that, if you've ever seen a fish's eye, kind of looks like that, which is where the name comes from. So yeah, that's the basics of how a fisheye works.
Ben Larue:
Awesome.
Matthew Nederlanden:
Wait, wait, what? Hold on a second. What does 360 degrees mean in a three-dimensional space?
James Campbell:
Good question. Yeah, I think 360... So every camera has a field of view that's going to be the angle of view that you see off your camera and the higher the number, the more you're going to see. So with the fisheye camera, you're going to see in every single direction, in a circle basically. And that's going to give you a 360 degree complete view all around the camera. So essentially there's no blind spots for that camera, unlike other cameras.
Ben Larue:
Right, right. Sometimes they're referred to, if I'm not mistaken, as like bird's eye view type of look onto a scene. Explain to me a little bit, what role does a fisheye play in this video surveillance space? When would you want to use one of these cameras versus another model or... Because I hear 360 in field of view and I'm like, oh my gosh, that's the camera I want, covers the most ground.
Michael Bell:
I can tell you coming from the support side, we see a lot of cameras that are put in the center of a room and that camera is able to cover pretty much that entire room of course depending on size. So you're able to see wall to wall and get a really good view of what's going on in that particular area.
James Campbell:
So we have one in our... Go ahead, Matthew.
Matthew Nederlanden:
Yeah, I was just going to say the same thing. We got one right here. I mounted it.
James Campbell:
Yep.
Matthew Nederlanden:
It hangs down from the ceiling, the camera's pointed directly down, and then it sees 360 degrees. So unlike a normal camera, which you'd mount looking away from a wall so it sees something like this out in front of it, this is instead mounted pointing down and looking in every single direction.
James Campbell:
Gotcha. Yeah. And typically the role we see it in is kind of like you said, the bird's eye view or the eye-in-the-sky sort of thing. So it's used to gather the entire scene, especially in an area where you may not have the ability to see. Warehouses are a very common use for these fisheye cameras, because you can mount one very high up and then, even with the shelving and stuff, you're generally able to see everything that's going on around those shelves. And that's an important thing, because you don't want to have to put multiple normal cameras there; you're going to have line-of-sight issues and everything like that. So it can act as something that pieces together a scene with a single camera, which saves you a lot of money, which is obviously very important too.
Michael Bell:
And really good coverage too. Besides the money factor, it does give you really good coverage.
James Campbell:
Yep.
Ben Larue:
That's awesome. I was just going to ask, are there other things that we should consider with these type of cameras, with fisheye cameras specifically?
James Campbell:
Yeah, I'll throw one out there. The biggest thing is that since it is covering 360 degrees, it's covering a ton of ground. I mean, you can see this picture here, it's covering our entire office. And so the reality is, depending on the resolution, and even with the highest resolution fisheyes, it's not the best camera to get details with. Don't expect to be able to see papers very clearly, or what's going on on computer screens, if that's something you're interested in, or-
Matthew Nederlanden:
Even just being able to-
James Campbell:
... small details like that.
Matthew Nederlanden:
Even just being able to see somebody's face. I mean the one that we've got here is directly over my head right now. It's going to look down and see the top of my head.
James Campbell:
Good point.
Matthew Nederlanden:
And so depending on what you're trying to do, figuring out is Matt at his desk? It's absolutely going to tell you that. Is it going to tell you which stranger is which? Certainly not at the center of the image. As you go towards the outside of the image, the angle of view is a little bit more appropriate for being able to tell who somebody is, because you're seeing more of their facial features. But when you're mounting a camera directly on the ceiling and it's looking straight down, for the people that are in the center of the circle, it's going to be seeing more of the top of their heads than their faces. When you start getting to the edges, the angle is great enough to see faces.
James Campbell:
And then it's almost... It's a little bit of a double-edged sword though, because with the fisheye camera, the sharpest part of the image is in the middle. The further out you get into that warp, the softer the details become. So it's a little bit of a double-edged sword.
So in reality, you don't want to expect to necessarily get details from this consistently. You may be able to see when people are coming in who they are, but you really want to use this in combination with the rest of your camera system. So I'll go back to that warehouse example. Imagine you have one fisheye up on the top of the ceiling, but you've got a couple of cameras covering the entrance so you can see people as they come in, a couple covering the general area, and then some covering the major aisles of the warehouse. And somebody's driving their forklift and hits one of the shelves. With that fisheye, you may not be able to exactly tell who that person is by their face, but you can tell from the other cameras who was driving it over there, and so you can piece together that information to make the complete case instead of trying to rely on that one camera. So it's important to make sure you're still covering entrances and everything like that in good detail with a standard 4K camera.
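To put a rough number on that detail trade-off, here's a minimal back-of-the-envelope sketch in Python. The resolutions and lens angles below are assumed round figures for illustration, not the Radius's actual specs, and the math spreads resolution evenly across the field of view and ignores lens distortion:

```python
import math

def pixels_per_foot(h_resolution_px: float, h_fov_deg: float, distance_ft: float) -> float:
    """Approximate horizontal pixels landing on a one-foot-wide target at distance_ft,
    assuming resolution is spread evenly across the field of view."""
    px_per_degree = h_resolution_px / h_fov_deg
    degrees_per_foot = math.degrees(2 * math.atan(0.5 / distance_ft))
    return px_per_degree * degrees_per_foot

# Assumed example numbers, target 15 ft from the camera:
# - fisheye: ~3000 px across the image circle, spanning 180 degrees edge to edge
# - fixed 4K camera: 3840 px across a ~90 degree lens
print(f"fisheye:  ~{pixels_per_foot(3000, 180, 15):.0f} pixels per foot")
print(f"4K fixed: ~{pixels_per_foot(3840, 90, 15):.0f} pixels per foot")
```

With those assumed figures, the fixed 4K camera puts roughly two to three times as many pixels on a face at the same distance, which is why the panel keeps pairing the fisheye with conventional cameras at entrances.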
Ben Larue:
Gotcha. So this can act as an auxiliary camera to a system, in addition to it, not a replacement, right?
James Campbell:
Yeah, yeah, absolutely.
Matthew Nederlanden:
I think an interesting way of putting it might be there's no camera that's better at understanding a scene. Where is everybody in the office, how are they located, where are they positioned? There's no better camera for that. On the flip side, if you're going to use this for let's say facial recognition, half the time you're looking not exactly at the face. So it's not even going to see the face. Just like in the position where I'm at now, it's seeing more of the top of my head than my face.
Ben Larue:
Gotcha. I was just going to ask that. Are there scenarios where we would totally not recommend this? Are there places we shouldn't consider using these cameras?
Matthew Nederlanden:
I would say yes. There's one big scenario that I would bring up. If your intention with the camera system is to connect some sort of facial recognition API after the camera, the fact that it's using that distorted, warped view means you're going to have a hard time doing that. The fisheye's round, 360 degree lens is bending the light coming in from every direction to get it onto the image sensor, and then you have a system in the camera after that which dewarps it and makes it look a little bit more natural. That doesn't always play nice with software that's trying to do traditional machine learning after the fact. Your traditional face detection models are not going to run particularly well on a dewarped image.
James Campbell:
If your application requires you to be able to see somebody's face very clearly throughout the entire facility, then this is not the one either, going back to what we said about the details. So that's the one place where I would say it's not recommended as your singular option, but again it works as that auxiliary option.
Ben Larue:
Gotcha.
Matthew Nederlanden:
Yeah, let me talk about that distinctly within our environment here in the office. We've got one directly above us. It's able to see who's at every single desk, but that didn't stop us from putting a camera directly on the doorway. So we've got a traditional viewing angle camera that's looking at the doorway, providing a natural, non-dewarped view, so that we have a very clear, facial-recognition-appropriate picture of somebody walking in the door. If we're trying to figure out where they are in the building, spatially, the fisheye's really, really good at that.
Ben Larue:
Awesome, that's good. What about over the top of a POS system? You think it would be good there?
James Campbell:
No, because of the lack of details there. It's going to be good covering the entire area, but the lack of details means it's not going to be able to capture what's on the screen, or even clearly see the money changing hands. So there are definitely better cameras for that scenario.
Ben Larue:
Yep, gotcha. Help me understand a little bit more some other applications this could be used in outside of just general coverage. Or is general coverage really the only application that it works well in?
Michael Bell:
Oh, no.
Matthew Nederlanden:
I can think of at least three. Go for it, Michael.
Michael Bell:
Corners. Corners of a building are, they're amazing. So you can put it on a wall mount and have it right there at the corner and you can see all the way around that corner. It's great. You can pick up so much, not really detail, but you have a better understanding of what's going on without having to add two cameras to that particular corner covering a portion of what this camera would be able to do.
James Campbell:
And that's an important thing for a lot of people, because of running cable. If you have to run two or three cameras to cover that corner, you're talking two or three cable runs, you're talking two or three extra cameras, and then you're talking two or three extra spaces on your recorder too. So it's not just the cost of the cameras; you've got to think of the additional hard drive space and everything that comes along with doing that.
Michael Bell:
And also being able to see straight down. That's another thing about a corner that I had a customer compliment that particular camera on: he was able to see straight down rather than having the blind spot he would have had with the two cameras he was going to use. Because unfortunately, in his position there was a lot of illegal activity that he needed to catch and get to the authorities, and that camera was able to help him do that.
Ben Larue:
Like right up closer.
Matthew Nederlanden:
Another-
Ben Larue:
I see.
Matthew Nederlanden:
... Another use case that I like is, let's say you've got a T-shaped hallway. Right at the junction point, you could stick a single fisheye and you're going to be able to see everywhere they go. At no point are they outside of the frame. On the flip side, how would you cover that with traditional cameras? You'd probably need cameras at each end point looking inward, which produces this problem of, when somebody's directly underneath the camera, you don't know what's happening. So let's say you mount a camera right above a doorway looking down the hallway, mounted here and looking down this hallway. When they're directly underneath it, you can no longer see them. So especially if you have, let's say, a classroom hallway where this T might have half a dozen doors on it, you may end up with an inadvertent blind spot, or end up needing six cameras, one looking each way down each hallway, where a fisheye could solve that whole need with a single camera.
And as long as you still have, let's say at the entrance point for the building, where you have your access control system or whatever that's letting people into the hallway, you've got a good traditional camera that's making sure that this is the right person. The fisheye is going to give you the best idea of where they went after that.
James Campbell:
Yeah, I'll echo that one too. That's been one of the largest increases in application I've seen for fisheyes. They used to just be thrown up in a big warehouse or a big space to be able to see things. But the advantage of having one at an intersection of hallways, whether that's a T with two or three of them or even a cross intersection, is everything Matthew said. But going back to the cost, every single camera costs more there. It's going to cost extra hard drive space to record four cameras to cover that similar ground, with some blind spots. So they're really good for those scenarios. And I've seen a lot in multi-level buildings where they'll have a traditional fixed lens camera at the elevator and then basically fisheyes covering the rest of the hallways, because they already know what you look like, they already see everything else. So even if that fisheye isn't giving them the best detail in the world, they're piecing those two cameras together to get the full scene.
Matthew Nederlanden:
Let me dig in on that for a second and talk about the cost. With a fisheye in this scenario we're talking about, right at the center point where the hallways create a junction, when you go to put that in, yeah, you've got a slightly more expensive camera, but the main thing you're saving is the installation cost of running one cable there. You run one cable, you plug it in, you're done. If you do it the other way, where at the junction point you're putting two cameras on each hallway, so six cameras total, you now have six cable drops you forgot to get into the budget, and that's where your real expense starts to come in: the cable drops. It's not necessarily the equipment cost you're saving there; having to run the cable itself is expensive.
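As a rough illustration of where the savings come from, here is a minimal sketch with made-up numbers; the labor, cable, and camera prices below are assumptions for illustration only, not SCW pricing:

```python
# Rough installed-cost comparison for covering a T-shaped hallway junction:
# one fisheye on a single cable drop vs. six conventional cameras on six drops.
# All prices below are illustrative assumptions, not quotes.

LABOR_PER_DROP = 150      # assumed labor cost to run and terminate one cable
CABLE_PER_DROP = 40       # assumed material cost per cable run
FISHEYE_CAMERA = 400      # assumed fisheye camera price
FIXED_CAMERA = 180        # assumed conventional fixed-lens camera price

fisheye_total = FISHEYE_CAMERA + 1 * (LABOR_PER_DROP + CABLE_PER_DROP)
multi_total = 6 * FIXED_CAMERA + 6 * (LABOR_PER_DROP + CABLE_PER_DROP)

print(f"Single fisheye install:     ${fisheye_total}")
print(f"Six fixed cameras install:  ${multi_total}")
print(f"Difference (mostly labor):  ${multi_total - fisheye_total}")
```

Even with generic numbers like these, the gap is driven mostly by the repeated cable drops rather than the camera hardware, which is the point Matthew is making.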
Ben Larue:
Right, right. That's a big factor. It's definitely a big factor, especially if you're talking about these larger facility setups. So that's important. How do these fisheye cameras differ from multisensor cameras?
Matthew Nederlanden:
Hold on a second. I'd like to bring up one more use case that I love personally and that I think is a really interesting application for a fisheye. So let's talk about a mechanic shop. You've got a big square space and you've got cars that might be in the way. A traditional camera mounted on the side starts to really struggle when a car's in the way. You can't see, did the technician accidentally run his tool and scratch the entire side of the car? I can't tell you that he didn't, because the car's in the way. And a fisheye can solve that problem really easily, letting you constantly see every angle of a space, where a camera mounted on a wall or a ceiling looking across the space may end up dealing with an object in the way.
Ben Larue:
That makes sense. The same could potentially apply in places where trucks might block the camera view, or something that might back into a loading dock.
Matthew Nederlanden:
A parking lot.
James Campbell:
Going back to the warehouse analogy, shelves are going to block your angles of view, and they're definitely good there as well.
Ben Larue:
So that actually brings up a good point. So could it make sense in a convenience store? Is there a height limit though of when it isn't able to be used in that scenario? Because I think of a convenience store, the different aisles, most convenience stores are going to have a bullet pointing down each individual aisle. I just wonder if that would be a good application to potentially set up fisheyes or not.
James Campbell:
Yeah, I think that's a great option there. We'd probably have to do some calculations and all that kind of stuff, but as long as it's probably more than four to six feet away from a lot of the shelving there, I think you'll be okay. But probably double-check with us so we can understand your scenario a little bit more.
Matthew Nederlanden:
Yeah, one of the things that you're going to need with a fisheye is to be able to mount it high enough. So in a situation where we're trying to overcome the shelf, you need some space above the shelf so the camera can see around it. If it's too close to the shelf, it's going to be blocked in the same way. So it's going to depend on how high your ceiling is. In a mechanic shop, for example, your ceiling's usually pretty darn high. In a convenience store, it might be a drop ceiling that's so close you can only really see down one aisle. So it's going to depend on your ceiling height.
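A simple similar-triangles sketch of why ceiling height matters here; the heights and distances are assumed example values, and the shelf is treated as a thin wall directly in the camera's line of sight:

```python
# Similar-triangles estimate of the floor "shadow" a shelf casts for a
# ceiling-mounted fisheye looking straight down. Geometry only; ignores lens
# distortion and treats the shelf as a thin wall.

def blind_strip_behind_shelf(camera_height_ft: float,
                             shelf_height_ft: float,
                             shelf_distance_ft: float) -> float:
    """Length of floor (in feet) hidden behind the shelf, measured outward from
    the shelf, for a camera mounted camera_height_ft above the floor."""
    if shelf_height_ft >= camera_height_ft:
        return float("inf")  # camera is not above the shelf at all
    # The sight line over the shelf top reaches the floor at
    # shelf_distance * H / (H - h); the strip from the shelf to that point is hidden.
    return shelf_distance_ft * shelf_height_ft / (camera_height_ft - shelf_height_ft)

# Example: mechanic shop (16 ft ceiling) vs. convenience store (9 ft drop ceiling),
# with 6 ft shelving located 8 ft from the point directly under the camera.
for ceiling in (16.0, 9.0):
    blind = blind_strip_behind_shelf(ceiling, 6.0, 8.0)
    print(f"{ceiling:>4} ft ceiling -> ~{blind:.1f} ft of floor hidden behind the shelf")
```

With these assumed numbers, the high ceiling hides only a few feet of floor behind the shelf, while the low drop ceiling hides most of the next aisle, which matches Matthew's point.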
Ben Larue:
How do these cameras differ from multisensor cameras?
James Campbell:
It's a great question. So there are a couple of types of multi, or I should say 360 degree field of view, cameras out there. Fisheyes are one of the technologies, but there are also multisensor cameras. These, instead of having one lens and one sensor, actually have three or four, sometimes more, lenses arranged in a circle. Sometimes they come through as four or six different camera streams, and sometimes they'll use software to stitch them together to create a 360 view.
So generally the biggest difference, I would say, is that the multisensor cameras tend to be a lot more expensive. There tends to be a lot more software involved if they are stitching them together, because you've got to make sure that stitching is compatible with the NVRs and with your software; a lot goes on with that. But then if it's multiple cameras, and each one of those lenses is actually a different stream coming off the camera, then you run into factoring in the hard drive and all that other stuff we talked about. So multisensor cameras do help you save some installation cost, because it's the same kind of thing: you only have to bring one camera in. But I think a fisheye works, and is a more affordable option, for a lot of the same scenarios, with some exceptions, like on the side of a wall, where maybe a multisensor camera is going to be better because you're going to get more detail out of it. So they sometimes fill similar roles, but with a little bit different reasons for existing.
Matthew Nederlanden:
Do you mind if I interject for a second, just to talk about what a multisensor camera is for people who are unaware? With a multisensor camera, you've got one base, one plastic or metal housing depending on the model, that has individual sensors for each camera, usually somewhere around four to six. And those sensors are usually, again depending on the model a little bit, independently movable, so you can position and focus each one of them. So let's go back to the T example: we might want one to point this way and one to point down each side, but you're still going to have that problem of not being able to see what's directly below the camera. So a multisensor can be a really great camera, can be a really great fit, but you're still going to have the same big drawback that three separate cameras mounted close to each other would have at that hallway. You wouldn't be able to see what's directly underneath you.
And so there's a big part of it: it just doesn't solve one of the biggest needs you want to solve with a fisheye, which is being aware of everything that's happening in the space. The other thing that can be challenging with a multisensor is that, because you can move each lens independently, you may end up where they don't actually create a 360 degree view because of the way you angled each of them. You might have a blind spot you were unaware of. So it takes a lot more work from the person doing the initial install to make sure you're solving your need.
James Campbell:
There are a lot of variations on them too. Some of them I've seen, where all the lenses are horizontal, are definitely going to have more of that blind spot effect, but I've seen ones that go lens, lens, lens and then another lens, and they have less of a blind spot issue. One of the applications I think multisensor cameras can work pretty well for is a parking lot where you're mounting on a pole, because you still have the ability to see more detail. You may have four 4K sensors on that camera, so it's going to give you more detail than a fisheye and cover kind of the same ground, and at that point you're less concerned about the cost of that camera, which is definitely going to be astronomical when you start talking about 4K versions and all that. But the fact that you may only be able to put one camera on that pole, because there's only one connection to it, that's where a multisensor really becomes a good solution and a fisheye may not be the best one there.
Ben Larue:
And over the camera's lifetime, it seems like with multisensor cameras there might be more moving parts, which could cause more maintenance issues, more links in the chain, right? The longer the chain gets, the easier it is to break. At least that's what I'm... Is that somewhat accurate? I just feel like a 360 camera has fewer moving parts, so potentially less maintenance overall.
Michael Bell:
You're relying on that many sensors and that many lenses for one of those multisensor cameras. So if one of those goes down, you're going to have a blind spot until it gets taken care of.
Matthew Nederlanden:
Multisensors that I've played with also have... and this is generally more of the higher-end market for multisensors, but they have motorized varifocal lenses on some of the models, not every model, but the ones that I've played with. And yeah, that's going to drastically increase the risk. When you've got four different motors controlling the viewing angle and level of optical zoom on four different lenses, you're definitely going to have more potential for things to break.
Ben Larue:
And that's such a good point too, about the footprint of the camera. A multisensor camera must have a larger footprint than a fisheye camera, no?
Matthew Nederlanden:
Yeah, I mean, probably, at least for the Axis ones that I played with, twice the radius.
Ben Larue:
Wow.
James Campbell:
Yeah, there are variations, because there are so many different forms of multisensor. And one important thing: not all multisensor cameras, or multi-lens cameras, are 360. A lot of them are 180, intended to be put on a wall and cover a big side of a building with one camera run. That's a lot of them actually, even more so than the 360 ones, I would say. So that's a big difference there. There are 360 ones, and a lot of them are very big, physically bigger, for sure. But just consider that just because it has multiple sensors doesn't mean it's 360 either.
Ben Larue:
That's a really good point.
Michael Bell:
Just to throw this out there, the Radiuses actually have a very low profile, so they're not going to be very obtrusive in the environment where you're putting them, which is really nice.
Matthew Nederlanden:
They're really not much bigger than a standard dome for us. They're very close in size.
James Campbell:
They're even low profile as far as actually how far they hang down.
Ben Larue:
Yeah, and ease of install, I feel like. With a 360 camera with a low profile, there's no clear cover you're pulling off like you would see with a traditional dome-style camera or like some multisensors have. Yeah.
Matthew Nederlanden:
So one benefit of multisensors is that you don't struggle to use them for facial recognition, because there's no dewarping at all. So if your application is looking for some sort of machine learning, computer vision based object detector or classifier, yeah, it's going to be a lot easier to work with.
Michael Bell:
Yeah.
Ben Larue:
Sure.
James Campbell:
I think, just to sum it up, the way I would look at it if you're comparing the two is: fisheyes are going to be smaller, lower profile, and generally significantly more affordable. But their main downside is the detail level; you're just not going to get minute details, and because it's a fisheye, AI or computer vision stuff is not going to work very well with it either. Whereas a multisensor camera is going to be very expensive and generally very big, but you are going to get better detail from that camera, and you have some other options as far as whether you need 360 or 180 or whatever. So there are some options there for sure.
Ben Larue:
Sounds like it's definitely application based.
James Campbell:
Yep.
Ben Larue:
Cool.
Matthew Nederlanden:
We really want to know what somebody's going to be looking to achieve before we make the recommendation.
Ben Larue:
Right. No, and that makes the most sense. Cool. So now that we know the difference between multisensors, fisheyes, the different generic things we should be considering and covering when we talk about fisheye cameras, what does SCW have to offer in this realm?
James Campbell:
So SCW has two different fisheye models now. We call them both the Radius. One's the Radius 12.0, which is a 12 megapixel fisheye, and then we have the Radius 5.0, which, as you can imagine, is a five megapixel. The Radius 5 is a little smaller than the Radius 12 as well, and because it's a lower resolution it's also more affordable. So it just depends on what level of detail you need. If you're covering a very large area, the 12.0 is probably the better option, whereas if you're covering that hallway, the 5.0 is probably the one you want to pick, because you don't necessarily need all that detail.
Ben Larue:
Awesome, yeah. So you mentioned the 12.0; I'd love to dive into that model specifically. Any differences between the two? Maybe start with the 12?
James Campbell:
Yeah, so starting with the Radius 12.0: it is obviously the higher resolution of the two, but it also has a built-in microphone and speaker, whereas the Radius 5.0 only has a dual-mic setup, which is awesome, and we can talk about that in a second. The Radius 12 is probably going to be the better one for the bigger areas, because you are getting that extra detail. I think that's the biggest difference. And then the other major difference, which we can talk about a little bit later, is the dewarping capability. The Radius 12.0 has hardware dewarping, so it can actually split into multiple streams if it wants to, whereas the Radius 5.0 only has software dewarping, and we can talk about that in just a second.
Ben Larue:
Gotcha. So you mentioned that 5.0 too and some of the differences. You said it doesn't have the two-way audio, but it does have a dual mic?
James Campbell:
Yeah. The dual mic is a newer feature for this camera. So you can probably understand the question: why do you need two mics?
Ben Larue:
Yeah, that was my next question. I don't really get it. Why would you need two of them? Is one not good enough?
James Campbell:
So two mics help increase the range, especially on a 360 camera, because they're basically a pancake in a lot of ways. If you have one on one side and one on the other, it can really enhance the range and quality of the recording. With general cameras that have a built-in microphone, it's also a placement thing. Generally when you place, let's say, a bullet camera facing this way, the microphone's facing this way. So you're getting, I think we say, 20 or 30 feet away from the camera to hear a normal conversation. But if you're behind it, that range is much shorter because it's physically blocked.
Matthew Nederlanden:
It's a directional microphone.
James Campbell:
Yeah.
Matthew Nederlanden:
It's got an angle of view just like a camera does.
James Campbell:
And even though the way those microphones pick up audio is technically 360, no matter what, if you put something in front of one it's going to be harder to pick up. So the fact that this is a pancake-style camera that's generally going to be put on a ceiling, with two mics to capture one side of the building and the other, means it's going to give you much clearer audio. We tested it in our office, which is actually an old church building with really tall ceilings and a kind of loud HVAC system, so it's very difficult on microphones. But I was able to hear myself on calls when I was looking back at the recordings, and I'm, I think, about 30 or 40 feet away from the camera, actually even behind a wall, so I know no other camera would've picked me up as well there. So it's probably our best audio camera when it comes to picking up voices.
Ben Larue:
The 5.0, with the dual mic?
Matthew Nederlanden:
I mean, in general, a microphone that's picking up anything in any direction is getting so much ambient noise, it's hearing the HVAC system and the air pressure and everything in the room, that you have all this background noise that makes it not a great listening experience. A directional microphone is going to be way better at cutting out the garbage you didn't want to have to listen to. And that's why it's so much better. I know people are sort of familiar with our industry a lot of times because they watch a TV show and they see some cop using some sort of specialized thing.
And one of the things we often see is this sort of plastic microphone thing that they're aiming somewhere to hear something long range. And that's basically the idea: if we can narrow the band we're trying to listen to, and listen to audio just coming from this area, we're going to be able to hear significantly better without as much ambient noise pushing out the thing we wanted to capture. Just like a varifocal camera, when we focus it on the area we want, we get a lot more clarity. The same thing happens with audio. If we're trying to record everything in a space, 360, we end up getting all these sounds we didn't really want, and then we've got to figure out how to tone them down. It's just better to have a microphone that is directional.
Ben Larue:
Yeah. I just want to quickly pause and I forgot to ask this earlier, but it just came up I'm minute ago when James mentioned the typical mounting locations and how you would mount these cameras. We've talked a lot about the 360 view and we've talked a lot about coming down from a ceiling, looking down on a view. But could you stick this thing on a wall? Would you get a panoramic 180 view or how would that work?
Matthew Nederlanden:
Well, let's talk about what 360 means for a second. That's the horizontal space. If we were to think about looking down upon a floor plan, we would see a circle. But that doesn't mean it's going to see a circle behind the camera; it's not going to see the roof beams, for example. When we say 360, we're talking about the horizontal angle of view. Just in the same way, when we talk about, hey, we've got a bullet camera, it sees 90 degrees, we're not talking about it this way. That wouldn't even really make a lot of sense.
I mean, obviously you could angle the camera that way and it does, but generally you're talking about the horizontal plane when you're talking about an angle of view. When you mount it on the side, what was on the horizontal plane, that 360 degrees, is now on the vertical plane, so, you know, this way. And so you're going to see something really, really different. From the horizontal plane it's going to look more like 180 degrees. It's still 360 for what you're seeing vertically; you're seeing both all the way to the ceiling and all the way to the floor. But on the horizontal plane you're seeing something more like a half circle.
Ben Larue:
Gotcha, gotcha. Makes sense. So they can be mounted on the wall though?
Matthew Nederlanden:
Sure.
Ben Larue:
Or could be.
Matthew Nederlanden:
Sure. That's I think one of the most interesting applications. You've got a hallway that goes and sort of traces the outside of a building for example, and makes a hard corner, 90 degree. You could mount it on the side and be able to see all the way around, 180 degrees, a really interesting application. But the 360 is now vertical.
Ben Larue:
Gotcha, gotcha. Sorry, yeah, didn't mean to get off topic there. It stuck with me and I just wanted to make sure we covered it. So I think it makes a lot of sense, James. When we talk about the difference between the 5.0 and the 12.0, the two models we offer here at SCW, you talked a lot about some of the physical hardware feature differences and then you mentioned some of the software differences. If it's all right with everyone else, I think we should probably spend the remainder of our time talking about those software differences, because that is such a big aspect of these types of cameras.
James Campbell:
So when we talk about the software difference, there's a word we've been using since the beginning: dewarping. What does that even mean? You've got that kind of bulbous-looking image there, and it's not always easy to make sense of it. We don't see that way, so it doesn't make sense to us, whereas other cameras are more similar to how our eyes perceive things. That means somebody's going to look like they're walking oddly in it, and dewarping basically allows you to have different views that make that image clearer in terms of what it is and how to perceive it. Michael, do you want to go over some of the dewarping options? Because there are a bunch of different methods or types of dewarping: you've got ePTZ, fisheye, panoramas, and everything.
Michael Bell:
The panoramic is actually one of my favorites, because it puts one image on top of the other and you're able to see what, to my brain, works out to be a 360 degree image. Yes, the fisheye is a 360, but my brain doesn't see it that way. So having the panoramic view of one on top of the other, so you've got 180 degrees here and 180 degrees here, it just makes sense. And you're able to, like here at the office, you're able to... I can't think of the best word to use, sorry.
James Campbell:
Yeah.
Matthew Nederlanden:
I'll interject for a second.
James Campbell:
Yeah.
Matthew Nederlanden:
The reason it's called a fisheye is because it's similar to the way the eye of a fish works. And a lot of people don't have much connection with fish unless you live near the ocean, but think about a chameleon: the eyes are on the sides of the head, they're really oval, and the light is coming in from every direction. That's very similar to what you see with one of these lenses; it's something they've adapted from nature, and that's how it gets its name. But none of us are used to seeing that, because our eyes are on the front of our face, not on the sides of our head. So when you see this sort of fisheye view, it feels foreign, because none of us are used to looking at things like that. If we had fish that watched security cameras, they would be just as confused by our way of looking at things as we are by a fisheye. That's literally where we're adapting this technology from. But it does feel foreign. It feels very confusing unless you do the dewarping.
Ben Larue:
Right. And Michael, you're saying about the panorama view, that's one of your favorites?
Michael Bell:
Yeah, yeah. Because you just see what the camera sees and your brain just makes sense of it. It's a wonderful, wonderful thing. Another one that I like about the Radius 12 is the option to set an ePTZ. So you're taking a 1080p image, up to four of them actually, of just a portion of what the fisheye is looking at, making it look like a traditional 1080p camera, and you can record that to your NVR up to four different times.
Matthew Nederlanden:
Help me understand here Michael, you're saying ePTZ. Does that mean that I can zoom around and move the camera? How does that work?
Michael Bell:
Software. The easiest way is to just say software. You do have to enable it on the camera side before you can actually start recording it on the NVR side, but once you do, you just pull up your image, control it with the arrows like you would a traditional PTZ camera, and move it to the area you want. James, do you know how far the zoom goes on that? Is it like a 4x zoom?
James Campbell:
Yeah, it seems to be about that. It's virtual so there's no real-
Michael Bell:
Yeah it is.
James Campbell:
... center or lens there, but-
Michael Bell:
It's a digital zoom.
James Campbell:
... I'll say approximate.
Michael Bell:
A digital zoom. Yeah, unfortunately it doesn't zoom very far, but you're able to pick up a little bit more detail that way and just record that particular view. So in your instance of a hallway, you can set one of those up to look straight down the hallway and record that, while also having that 360 degree recording at the same time.
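For a sense of what an ePTZ view is doing under the hood, here's a minimal sketch: treat it as a software crop of the full-resolution frame scaled to a fixed 1080p output. A real fisheye ePTZ also dewarps the cropped region into a perspective view, which is skipped here for brevity, and the file names and coordinates are hypothetical:

```python
# Minimal idea of an ePTZ view: pick a region of the full-resolution frame
# and scale it to a fixed 1080p output. A real fisheye ePTZ also dewarps the
# region into a perspective view; that step is omitted here.
import cv2  # OpenCV; pip install opencv-python
import numpy as np

def eptz_view(frame: np.ndarray, center_xy, zoom: float,
              out_size=(1920, 1080)) -> np.ndarray:
    """Return a 1080p 'virtual camera' crop centered on center_xy.
    zoom=1.0 uses the full frame height; zoom=4.0 is roughly a 4x digital zoom."""
    h, w = frame.shape[:2]
    crop_h = int(h / zoom)
    crop_w = int(crop_h * out_size[0] / out_size[1])  # keep a 16:9 crop
    cx, cy = center_xy
    x0 = int(np.clip(cx - crop_w // 2, 0, max(w - crop_w, 0)))
    y0 = int(np.clip(cy - crop_h // 2, 0, max(h - crop_h, 0)))
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    return cv2.resize(crop, out_size, interpolation=cv2.INTER_LINEAR)

# Usage with a saved snapshot from the camera (file names are hypothetical):
# frame = cv2.imread("fisheye_snapshot.jpg")
# hallway = eptz_view(frame, center_xy=(2100, 900), zoom=4.0)
# cv2.imwrite("hallway_eptz.jpg", hallway)
```

This also shows why the zoom is limited: you're only magnifying pixels the fisheye already captured, so detail tops out at whatever landed on that part of the sensor.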
James Campbell:
So to talk about the dewarping, it's categorized in two different buckets. You've got on-camera, or hardware, dewarping, and then you also have software dewarping, where your NVR, your client software, your mobile app, or the web interface dewarps it there. The Radius 12.0 supports on-camera dewarping, which is nice because then you can split that one stream out and actually record dedicated channels for it. So that ePTZ function, you're going to be able to constantly look at it from live view, playback, and everything like that. That's a pretty nice thing. On the flip side, software dewarping allows you to record just that single fisheye view and then go back, or even during live view on models that support it, view it and manipulate it right there as well.
So there's kind of a mix of advantages and disadvantages to both of them. If you need that dedicated view and you constantly want to have it, having that onboard dewarping is the better option. But if you're somebody who just needs it occasionally and is happy to do it on the different devices that support it, it's a little bit more flexible to have the software dewarping, because with that ePTZ, once you create that stream, it's always going to look at that spot. You can't move it in playback, you can't manipulate it any further; it's always going to record that. Whereas with software dewarping you can actually move it after recording, because all it's done is record the fisheye, and now you're dewarping the playback footage instead of the recording. And then the last thing, the main advantage to it is-
Matthew Nederlanden:
If I'm understanding this correctly, the hardware dewarping is a bit like a varifocal camera, in the sense that the camera itself is doing a thing and then we record that output and it can't be changed after the fact.
James Campbell:
Yeah.
Matthew Nederlanden:
Meanwhile the software dewarping is a bit like digital zoom. It's got a few more limitations. You can't do as much with it, but you can do it after you've recorded it.
James Campbell:
Yeah.
Matthew Nederlanden:
Is that right?
James Campbell:
Exactly, yeah. And we'll have an article connected to this video that you guys will be able to read too, that goes into a little more detail. Honestly, in most situations I would recommend software dewarping just because it gives you that flexibility. And the other major advantage of the software approach is that you're only recording that one fisheye stream, so that's one channel on your recorder and one hard drive slot, instead of the four different ones you're talking about with the hardware, or on-camera, dewarping.
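Here's a minimal sketch of what software dewarping of recorded footage looks like in practice: unwrapping the fisheye circle into a panoramic strip with a polar-to-rectangular remap, then stacking two 180 degree halves like the double-panorama view Michael mentioned. It assumes a centered, roughly circular fisheye image rather than a calibrated lens model, and the file names are hypothetical:

```python
import cv2
import numpy as np

def fisheye_to_panorama(img: np.ndarray, out_h: int = 480) -> np.ndarray:
    """Unwrap a centered fisheye circle into a 360-degree panoramic strip."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0            # assume the image circle is centered
    radius = min(cx, cy)                 # assume it fills the shorter dimension
    out_w = int(2 * np.pi * radius)      # roughly one pixel per circumference pixel

    # For every output pixel, compute which fisheye pixel it comes from.
    angles = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(radius, 0, out_h)             # outer edge of the circle at the top
    ang_grid, rad_grid = np.meshgrid(angles, radii)
    map_x = (cx + rad_grid * np.cos(ang_grid)).astype(np.float32)
    map_y = (cy + rad_grid * np.sin(ang_grid)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

def double_panorama(img: np.ndarray) -> np.ndarray:
    """Split the 360-degree strip into two 180-degree halves stacked vertically."""
    pano = fisheye_to_panorama(img)
    half = pano.shape[1] // 2
    return np.vstack([pano[:, :half], pano[:, half:2 * half]])

# Usage with a frame exported from a recording (file names are hypothetical):
# frame = cv2.imread("radius_fisheye_frame.jpg")
# cv2.imwrite("double_panorama.jpg", double_panorama(frame))
```

Because the remap runs on the recorded fisheye frame, you can regenerate any view after the fact, which is exactly the flexibility James is describing for software dewarping.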
Ben Larue:
Yeah, let's pause there, James. We've used that term a couple of times, creating a stream or creating another channel, but this is one camera with one cable plugging into one port on the back of the unit. So walk me through that. Are we using a virtual channel? How does that...
James Campbell:
Yeah, great question, great clarification. So it is just one cable coming from it. Regardless of whether, on the Radius 12.0, you choose to do just the one fisheye stream or you choose to break it up into panorama, fisheye, and everything else it has, it's still one cable. That's very important to keep in mind. But once you decide to use that on-camera dewarping, each view has to occupy a channel within your NVR. So let's say you decide to do a fisheye and two ePTZs from that Radius 12; that's going to take up three channels of your NVR. So if you have an eight channel, now you only have enough space for five more cameras. So it's an important factor to consider when you're looking at a fisheye because-
Michael Bell:
When I buy a fisheye, I can choose how many channels I want to record, right?
James Campbell:
Yeah.
Michael Bell:
Or am I stuck with three?
James Campbell:
No, you have some flexibility. Yeah.
Michael Bell:
Yeah, you can just have the one and use the software dewarping, which is amazing at what it does. And I do agree with James: by using the software dewarping, you could save a little bit of money by going with an eight channel rather than a 16 when you have other cameras on that particular system, because if you use the ePTZ option on the camera to record those extra views, you might have to up your NVR's camera limit.
James Campbell:
Yeah.
Ben Larue:
And that's one of the advantages of the fisheye versus that multisensor, right? With a multisensor, you're forced to choose at the hardware level; when you start considering hardware, you have to choose at that point what you need. Versus with a fisheye, you choose the fisheye and then you have the ability to decide whether you want just the fisheye view or those broken-out views.
James Campbell:
Yeah, that's exactly right. You have a lot more flexibility when it comes to the fisheye, because you can do it on a per-camera basis. You may have a floor where you go, you know what, I do want that dedicated view there, because my receptionist is watching as people come in the door and I need them to just have a normal-looking view without having to go into the menu and start dewarping. So that is a great point, I think. Much more flexibility with the fisheye when it comes to your options. You can choose to do multiple streams with that Radius 12, or you can just choose that nice 12 megapixel fisheye view, still use the software dewarping, and you're still going to get more detail than the 5.0. So lots of flexibility with the 12.0, because you have both options.
Ben Larue:
Right, yeah. And you're not stuck in the mud, in a sense, after the fact, and I think that's a big factor. Absolutely.
Michael Bell:
And just to put this out there, the software dewarping, it's easy. It is very easy to figure out. It is very easy to set up. Set whatever view you want to look at, leave it there, you're good to go. It's extremely, extremely user friendly.
Ben Larue:
That's a really good point. And like we said a little bit earlier, time is money, especially in terms of the installation of all of this. It seems like fisheyes are practically speaking, easier to install in a sense.
James Campbell:
Yeah.
Michael Bell:
Oh yes, very. Yeah, I've done it. It's so easy.
Matthew Nederlanden:
Yeah, you're talking about several hours difference.
Michael Bell:
Yes.
Matthew Nederlanden:
Several hours.
Ben Larue:
And that can be huge. Imagine if you spread that over five or six cameras, or James's hospital example: 20 cameras, that's a week's worth of time essentially.
James Campbell:
Yeah.
Ben Larue:
You could see it.
Michael Bell:
Yeah. And to kind of piggyback off of James and his multi-floor use in a hospital, one thing that I think is important to discuss is the decoding limits on your NVR and your local viewing. So whenever you're seeing multiple views of cameras on an NVR, say you have nine cameras brought up, or maybe 25 or 32, whatever it is you're looking at, those are shown to you in substream. One thing about the ePTZ is that it records and displays in mainstream only, at that 1080p resolution. So if you want to have multiple ePTZs showing on one screen at the same time, there's a pretty good possibility that the decoding limit on your NVR is not going to be able to support that. So you're going to have to separate your ePTZs onto different pages and have your NVR kind of cycle through them.
Matthew Nederlanden:
Can I pause for a second? We're saying decoding a bunch of times here. What is that? What's like-
Michael Bell:
Sorry.
Matthew Nederlanden:
What's decoding?
Michael Bell:
So the decoding is how many images your NVR is able to display locally over its HDMI or VGA connection. You have your mainstream, which is what your system records at, and you can view that of course, but then you also have a lesser stream, a substream, which is a lower resolution and smaller bit rate, the amount of data being sent out, to give you an idea of what's happening in that image. It's not going to give you the detail of what's being recorded, but you can still see, get an idea, and often even tell who the person is; it just depends on what the limit is for that particular camera. So when you set the ePTZ on a fisheye, it's at mainstream all the time, and that's going to take a lot of the capacity the NVR has available to display an image, or multiple images. James, you might be able to explain it a little better than that.
James Campbell:
I think you did a great job. And this is specific again to the Radius 12.0, because the 5.0 actually has a substream and the software fisheye dewarping, which is another advantage of it, so you don't have to consider this. But when you split the 12.0, you have multiple streams that are now just mainstream, so it's a little more likely to push past the NVR's decoding limit. Each NVR has its own kind of limit when it comes to that. And if that does happen, the consequence is that what you're going to see is probably a screen that says no resource-
Michael Bell:
No resource, yeah.
James Campbell:
The NVR's CPU will cut it off, and it's not going to melt your NVR or anything like that. You're just potentially not going to be able to see it during live view.
Michael Bell:
And it's still recording.
James Campbell:
It still records.
Michael Bell:
It still records, yeah. I wanted to make sure that everybody understood that it is still recording.
Matthew Nederlanden:
The way that I think is easiest to explain it: encoding is the computational requirement of taking what's on the image sensor and turning it into a video file. Decoding is the computational requirement to be able to watch it. So just like any device you would buy, you can only watch a certain number of things at a given time. And with an NVR, or software running on a viewing station, or anything that's playing back multiple video files at the same time, you've got to make sure it's got the computational ability to do that. Decoding takes additional computational ability because it's converting the file in real time into something that's more easily viewable for you.
Michael Bell:
We like to answer like... Go ahead, guys.
James Campbell:
I was going to say that's a great point as far as the hardware decoding goes. Obviously that's a big thing, but since those views are all mainstream, if you've got a lower-quality connection and you're trying to do a lot of remote viewing, there's no substream for that camera to send out. So that's another factor: if having that remote viewing is part of your security plans, you have to consider that those streams are going to use more data as well when it comes to streaming across your network, or through the internet if it's remote.
Michael Bell:
Yeah, if I remember correctly, that 1080p from the ePTZ is using about two megs of data per stream. Plus you have the actual 12 megapixel stream in this case, its bit rate coming down the line as well. So you could be pushing quite a lot of data, and if you don't have the upload at the location where the NVR is, or even the download where you are, you may not be able to view it, or if you can, it's not going to be a great experience, unfortunately, because it's a lot of data.
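To make the remote-viewing math concrete, here's a minimal sketch. Reading Michael's "two megs per stream" as roughly 2 Mbps per ePTZ view is an interpretation, and the bit rate for the 12 MP fisheye mainstream is an assumed ballpark, so substitute your camera's actual encoding settings:

```python
# Back-of-the-envelope remote-viewing bandwidth for a Radius 12.0 with
# on-camera dewarping enabled. The ~2 Mbps per ePTZ figure follows the
# discussion above; the 12 MP mainstream figure is an assumed typical value.

EPTZ_STREAM_MBPS = 2.0       # 1080p ePTZ stream, approximate
FISHEYE_MAIN_MBPS = 8.0      # 12 MP fisheye mainstream, assumed for illustration

def remote_view_mbps(num_eptz_streams: int, include_fisheye: bool = True) -> float:
    total = num_eptz_streams * EPTZ_STREAM_MBPS
    if include_fisheye:
        total += FISHEYE_MAIN_MBPS
    return total

for n in (0, 2, 4):
    print(f"fisheye + {n} ePTZ views: ~{remote_view_mbps(n):.0f} Mbps upload needed")
```

Even with these rough numbers, a full set of dewarped views quickly outgrows the upload speed on many small-business internet connections, which is the limitation Michael and Matthew are describing.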
Matthew Nederlanden:
With a traditional camera you're typically going to have a mainstream and a substream. In our cameras we have three of them, so we have three different potential streams on most of our camera models, not this one though. Each one of those streams is kind of optimized for a viewing requirement that has to do with how much data you have available. Let's say you're in the office, you're on the network, you're on campus: you can watch it in mainstream. If you're, let's say, out and about on your cell phone with pretty good bars, it's going to downgrade the quality so that you get the video in real time. But let's say you're way out in the boonies and you've got one bar; it's going to try to send you a very low quality stream so that you can still get the video even though you don't have good cell reception. These 360s don't really have that ability; they essentially just have the mainstream video. And so that can be a big limitation.
James Campbell:
And just to clarify a little bit there, I think the Radius 12 actually does have a substream when it's in just the fisheye view. It's once you activate those extra streams that you start to lose your, geez, your main substream. You lose your substream once you enable that on-camera dewarping, because it's basically grabbing those streams it was going to use, putting that CPU toward sending you all those views. And the Radius 5.0 does have a normal substream as well, so there's kind of an advantage there. But if you are considering using the on-camera dewarping, the rule of thumb I would say is don't expect to put more than one or two on a page. And by page, we're talking about your page of cameras.
If you're looking at 16 of them, only put two of those dewarped views on there; that's generally enough to stay under the limit for most NVRs. And what that means is the others go on your next page. If you have a 32 channel NVR and four of these streams, put the next two on the next page, and that way at no point are you overloading it. It's a kind of confusing topic, so if you do have questions about it, reach out to us and we'll be able to clarify and give you your best path forward.
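One way to think about that rule of thumb is as a decoding budget check. The budget and per-stream figures below are illustrative assumptions only; real NVRs publish their own decoding capability, so check your model's spec:

```python
# A rough way to sanity-check a live-view page against an NVR's decoding
# budget. The budget and per-stream costs are assumptions for illustration;
# consult your NVR's actual decoding specification.

DECODE_BUDGET_MP = 16           # assumed: NVR can decode ~16 megapixels of live video at once
SUBSTREAM_MP = 0.3              # typical low-resolution substream tile
EPTZ_MAINSTREAM_MP = 2.0        # 1080p ePTZ view (mainstream only)
FISHEYE_MAINSTREAM_MP = 12.0    # 12 MP fisheye mainstream

def page_fits(substream_tiles: int, eptz_tiles: int, fisheye_tiles: int) -> bool:
    load = (substream_tiles * SUBSTREAM_MP
            + eptz_tiles * EPTZ_MAINSTREAM_MP
            + fisheye_tiles * FISHEYE_MAINSTREAM_MP)
    print(f"page load: {load:.1f} / {DECODE_BUDGET_MP} MP decoded")
    return load <= DECODE_BUDGET_MP

page_fits(substream_tiles=14, eptz_tiles=2, fisheye_tiles=0)   # fits comfortably
page_fits(substream_tiles=12, eptz_tiles=4, fisheye_tiles=1)   # likely "No Resource"
```

The exact numbers matter less than the pattern: a handful of mainstream-only dewarped views consumes as much decoding capacity as a whole page of substream tiles, which is why spreading them across pages keeps the local display happy.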
Ben Larue:
Definitely. I was just going to say we could probably have a whole round table about substreams and encoding and decoding. Sounds like that's what's going to be coming down next, so make sure you stay tuned for that.
Michael Bell:
That's going to be a fun one.
Ben Larue:
Absolutely.
Michael Bell:
And I do want to go back to one of the points that you guys made earlier: use this camera in addition to your setup. This is one that's going to answer, where are people in your office? Oh, I can look at this one camera and I can see that. Using it in addition, I like that idea. I think Matt's the one that said that.
Matthew Nederlanden:
Yeah, that's a big thing that I always mention. In our earlier days, we had a lot of times where people would call in and be like, Hey, I want to get a PTZ because I only want to have one camera. And you're like, this isn't how this works. That's not going to be a good... Yes, you can make it look wherever you want to, but do you really want to do that all day? Don't you have a job?
You want to get full coverage without worrying about it. And these sort of cameras are great when you add them to a coverage map that already exists. These are not the substitute for a full system.
Ben Larue:
You're right.
Matthew Nederlanden:
You're not going to be happy if you try and do it that way because you're going to lack the facial recognition data or any of that. Even if you're not running facial recognition, you still kind of have to do that when you hand it to a police officer and say, this is where we can see their face. It's not always going to be possible within a PTZ environment, or excuse me, a fisheye environment.
Michael Bell:
To kind of go off of that, do you know what I do with my PTZ at the house now?
Matthew Nederlanden:
What?
Michael Bell:
I look at squirrels and deer and stuff like that. I don't use it for surveillance anymore. It's just for fun now.
Ben Larue:
That's great. That's great. So true. Well, that's it. I think we've devoted enough time. We've covered all the basics, and we've covered some more granular details on the decoding, encoding, and dewarping features. I think it was great. I appreciate everyone's input, James, Michael, Matt. This is awesome. This was a lot of fun. Now I understand the differences and why I would really want to use a camera like this to supplement my system. There's going to be a ton of resources in the links and the description below, so please be sure to click on those. James's team, Michael's team, my team, we're all here to help you in case you need any of these questions answered, or if you want to dive deeper into your specific scenario or situation. But this is a great one. I love this. Make sure to tune in next week for the next session, and if there's anything else we can do for you, you can always reach us directly by chat, email, or phone. Thanks so much for tuning in today, guys.