A Campus-wide IP Video Network for a State University, Part 2

Jun 8, 2010 12:01 PM



Listen to the Podcasts: Part 1 | Part 2

Editor's note: For your convenience, this transcription of the podcast includes timestamps. If you are listening to the podcast and reading its accompanying transcription, you can use the timestamps to jump to any part of the audio podcast by simply dragging the slider on the podcast player to the time indicated in the transcription.

At North Carolina State University, the campus-wide video system was upgraded to IP video with Haivision's Video Furnace, its zero-footprint InStream player, and Stingray set-top boxes. Peter Maag is here to get into the details on how these features of the system work and how NC State is using them.

OK, Peter, in Part 1 we were talking about the Video Furnace installation at NC State. And there are so many different things on a university campus you can use video over IP for that you could talk for an hour about applications alone. How many channels are they currently using down there? Do you know?
Peter Maag:
I think they have 20 channels lit up, with expansion room for 10 more.

And one of these is what they called the "Wolf Channel." Do you know what that is?
I believe so. I am not very familiar with the content that is being pumped through the system, but that's a very interesting point, because as I was saying in Part 1, many of the universities installed the system straight on the cost benefit of pumping live TV around. But it's so easy for them, once they have an IP delivery system, to add on the real power of video over IP, which is to take prerecorded, created content and launch their own channels against a schedule, or allow content to be accessed via video on demand. So there are really three elements in what I would call push video technology, distributing video: one is live channel distribution, the second is video on demand, and the third is to create your own TV channels and set them up to play on a scheduled program plan. [Timestamp: 2:13]
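To make that third element concrete, here is a minimal Python sketch of a scheduled program plan that lays prerecorded assets onto a channel; the names and the 30-minute slot length are illustrative assumptions, not part of the Video Furnace API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledItem:
    asset: str       # path or ID of a prerecorded asset
    start: datetime  # when the item should begin playing out

def build_program_plan(assets, first_start, slot_minutes=30):
    """Lay prerecorded assets onto a channel schedule, one per slot."""
    return [ScheduledItem(a, first_start + timedelta(minutes=i * slot_minutes))
            for i, a in enumerate(assets)]

# A channel filled with created content for the non-live hours:
plan = build_program_plan(["lecture_101.ts", "campus_news.ts"],
                          datetime(2010, 6, 8, 14, 0))
for item in plan:
    print(item.start.strftime("%H:%M"), item.asset)  # 14:00 ..., 14:30 ...
```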

And I guess some of the sources on this could be video production classes or the university communications people, and, you mentioned before, I believe, network emergency notices. In that case, I guess, it would be public safety who would have to have some sort of input to the server to put these things on.
Yes, for the emergency systems, there would actually be API-level conduits between an EAS system, or emergency alert system, and the Video Furnace. So it would kind of be an automatic push for the warnings, storm warnings, or whatever emergency alert comes across. But the ability to ingest and organize and schedule playouts of content is actually very powerful, because a lot of these institutions want to launch their own TV stations. And some of it is live, for live events, but in the non-live times they need to fill it up with their own content. [Timestamp: 3:07]
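As a rough illustration of such an API-level conduit, here is a hedged Python sketch of an EAS-style alert push; the endpoint URL and JSON payload are invented for illustration, since the actual Video Furnace integration points are not described in the interview.

```python
import json
import urllib.request

# Hypothetical endpoint and payload: the real EAS-to-Furnace conduit is
# proprietary, so every name here is an assumption for illustration only.
FURNACE_ALERT_URL = "http://furnace.example.edu/api/alerts"

def push_emergency_alert(message, severity="warning"):
    """Push an EAS-style alert so the server can overlay it on all players."""
    payload = json.dumps({"message": message, "severity": severity}).encode("utf-8")
    request = urllib.request.Request(
        FURNACE_ALERT_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # 200 would mean the alert was accepted

# push_emergency_alert("Tornado warning for Wake County until 3:45 p.m.", "severe")
```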

At the user end, what kind of bit rates are we talking about at various points of viewing on this?
The bit rates of the video are typically dictated by the encoders, and our encoders support anywhere from about 300Kbps up to 15Mbps; 15Mbps would be very high-quality high definition. Typically with standard-definition H.264, people will set at around 1.5Mbps to 2Mbps, and for high definition, people will set around 4Mbps, 5Mbps, or 6Mbps for traditional TV viewing; different market segments have their different sweet spots depending on the complexity of the content. But yeah, so the typical standard definition would be going out at around maybe 2Mbps. [Timestamp: 3:56]
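The sweet spots Maag lists can be summarized in a small lookup table; here is one possible sketch (the profile names are invented, and the figures are simply those quoted above):

```python
# The quoted "sweet spots," expressed as a lookup; profile names are invented.
BITRATE_PROFILES_KBPS = {
    "minimum":    300,     # low end of the encoders' range
    "sd_h264":    2_000,   # standard definition, 1.5Mbps-2Mbps typical
    "hd_typical": 5_000,   # high definition, 4Mbps-6Mbps for TV viewing
    "hd_maximum": 15_000,  # top of the range: very high-quality HD
}

def starting_bitrate(definition, complexity="normal"):
    """Pick a starting bit rate; complex content pushes HD toward the top."""
    if definition == "sd":
        return BITRATE_PROFILES_KBPS["sd_h264"]
    key = "hd_maximum" if complexity == "high" else "hd_typical"
    return BITRATE_PROFILES_KBPS[key]

print(starting_bitrate("hd"), "Kbps")  # 5000 Kbps
```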

OK, and you mentioned before the server controls the players and the set-top boxes. What is the high/low streaming feature?
Yeah, that's a very interesting feature, and it's something that we introduced. It's almost a controlled version of what the industry would call "adaptive streaming." But there are a lot of circumstances around a university. Let's say you're pumping a high-def channel and you want it received by both the set-top boxes and the soft players. A high-def channel has a comfort area of 4Mbps, 5Mbps, or 6Mbps, and that's the type of bandwidth that you want to direct to dedicated devices such as set-top boxes. But at the same time, you might want to take that live input source and make it available to PCs or Macintoshes that perhaps are older and don't have the horsepower to decode such a large high-definition stream, because decoding takes up CPU power. So you might want to reduce the frame rate a little bit, reduce the resolution a little bit, and hit a 2Mbps high-definition stream, which is very beautiful full-screen on your laptop or even in a window, and have the laptop viewers access the lower-bit-rate stream, which is less intrusive to their device. [Timestamp: 5:29]
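A minimal sketch of the high/low idea follows, assuming two renditions of the same live source; the multicast addresses, rates, and client classes are illustrative only, not how InStream actually negotiates streams.

```python
# The same live source encoded twice; each client class is pointed at the
# rendition it can handle. Addresses and rates are illustrative only.
RENDITIONS = {
    "high": {"bitrate_kbps": 5_000, "url": "udp://239.1.1.1:5000"},  # full HD
    "low":  {"bitrate_kbps": 2_000, "url": "udp://239.1.1.2:5000"},  # reduced
}

def rendition_for(client_type):
    """Dedicated decoders get the high stream; soft players get the low one."""
    return RENDITIONS["high"] if client_type == "set_top_box" else RENDITIONS["low"]

print(rendition_for("laptop")["url"])  # an older PC or Mac pulls the 2Mbps stream
```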

In a university environment like NC State, who is controlling all this?
Both the IT and media departments would be organizing all of that. In many cases, it's in tight cooperation with the curriculum departments as they launch their course reserve material and make it available through video on demand. So you'll have a number of different departments involved. But when push comes to shove, it's a network device. It's a video network, and the IT department is absolutely in full control of that, so they would provide the infrastructure and they would tune the network. And in some cases they would drive the administrative interface, and in other cases they would allow sections of the administrative interface to be driven by other people, perhaps the audiovisual department that wants to capture and log material. So through different rights access, we can segment different administrative zones of the Furnace for the particular user. [Timestamp: 6:31]
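Here is a rough sketch of that rights-based segmentation of administrative zones; the role names and zone names are invented for illustration and do not reflect the Furnace's actual permission model.

```python
# Segmented administrative zones via rights access; names are invented.
ZONES_BY_ROLE = {
    "it_admin":   {"network", "channels", "vod", "recording", "users"},
    "av_staff":   {"recording"},  # capture and log material only
    "curriculum": {"vod"},        # publish course reserve material
}

def can_access(role, zone):
    """True if this role's rights include the requested admin zone."""
    return zone in ZONES_BY_ROLE.get(role, set())

print(can_access("av_staff", "recording"))  # True
print(can_access("av_staff", "network"))    # False: IT keeps the infrastructure
```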

And how does the network video recorder work?
Network video recording is quite a hot topic these days. People really like to know that if they're investing in putting media onto the network, they have the ability to record it, edit it, classify it, associate metadata with it, and make it available for retrieval. Typically this is used for capturing, let's say, classrooms. And the network video recorder is actually quite flexible; it can be triggered a number of different ways. If you have content that's coming in on a schedule, such as programmed content, you can assign network video recorder resources to capture it in the future. Kind of like TiVo: "I want to capture this show next week between 2 p.m. and 3 p.m., or every week going forward on a Wednesday." That's a scheduled recording. You also have the ability to do crash recording, and a lot of that can be done either through third-party devices in the classroom, such as a Crestron or some type of room controller, or through the web interface. That's where you could actually start, stop, or pause a recording. And we actually have a very new feature coming out that's designed to help people retrieve points of interest and areas of interest within their recordings, and we call that feature HotMarks. That's the ability to inject metadata into the recording in real time, while it happens. So you could be going through a class and there could be a particular moment, "OK, we're starting the Q&A period," and the user could bookmark in real time that that is when that section of the class was initiated. So it's some pretty interesting technology that allows video to be captured but also allows video to be organized and searched through with great efficiency going forward. [Timestamp: 8:35]
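The HotMarks idea, injecting labeled metadata into a recording as it happens, could be sketched like this; the class names and fields are hypothetical, not the Furnace's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HotMark:
    at: datetime  # the moment in the recording being bookmarked
    label: str    # user-supplied metadata, e.g. "Q&A period starts"

@dataclass
class Recording:
    channel: str
    marks: list = field(default_factory=list)

    def hot_mark(self, label):
        """Inject a searchable bookmark into the recording as it happens."""
        self.marks.append(HotMark(datetime.now(), label))

rec = Recording("classroom-204")   # e.g. a crash recording started from the web UI
rec.hot_mark("Q&A period starts")  # bookmarked live, retrievable later by search
```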

And that's a pretty big deal too, because that's one key area where print has an advantage over streaming audio or podcasts: certain information within those programs is much easier to zero in on and find in print. So anything that speeds up being able to find things in a video or audio stream is a really good deal.
Well, it's our vision. We're bringing forward a tagline; I'm the marketing guy, so that's what I'm supposed to do, right? The tagline we're bringing forward for the company is intelligent video, and it's certainly my firm belief that all video content going forward is going to be as easy to search as text is. And what we have to do, because there is going to be so much noise generated by that, is add user-generated information on top of it to make the retrieval of the video much more powerful. [Timestamp: 9:28]

And this thing has, I would assume, pretty sophisticated reporting features?
Don't even get me started on that. It's really quite amazing: because of the client-server architecture of InStream that I referred to earlier, the server knows exactly what every user is doing at all times. So from a server perspective, we can collect information on whether the video is minimized, whether it's muted, whether there are windows overlaid on top of it; we can report on who accessed what, when, down to the greatest amount of detail. But with our commanding feature, which is related to that, if there's a campus-wide broadcast, for example, we can make all active players go full-screen and turn up the volume, or set or limit the volume on all of the players. So that's the type of control that people are looking for when they're putting out systems that deliver such a vast amount of media. [Timestamp: 10:33]
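Because the server already tracks every player session, fanning a command out to all active players is conceptually simple; here is an illustrative sketch with an invented message format, not the actual InStream protocol.

```python
import json

# Sessions the server is already tracking for reporting purposes:
active_players = ["player-lab-12", "player-dorm-37"]

def broadcast_command(players, command, **params):
    """Build one command message per connected player, e.g. for an alert."""
    return [json.dumps({"player": p, "command": command, "params": params})
            for p in players]

# Campus-wide broadcast: force every active player full-screen at full volume.
for message in broadcast_command(active_players, "set_display",
                                 fullscreen=True, volume=100):
    print(message)
```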


