Edge to Cloud (to Edge) on the Biggest Stage on Earth

June 29, 2023

Many of us love to watch sports on TV or stream them on a device of our choice. Several of Allyson’s recent conversations on Edge have mentioned CDNs as components of a transforming edge. But her recent conversation with @Carl Moberg of Avassa made me think of the other edge in the broadcast chain. While we, in front of our devices and served by CDNs, represent the consumption edge, there is also a production edge, and it is transforming at a digital pace.

Over the past three years I had the immense fortune to work as part of Intel’s Olympics sponsorship team, deploying innovative technology solutions on the greatest possible stage. One of our focus areas was enabling transformation strategies for the broadcast infrastructure. Among other things, we helped bring 5G-based live camera coverage to sports and helped virtualize live production infrastructure. You can read about some of that work here: OBS CTO Sotiris Salamouris discusses 5G, Virtualized OB Vans and other innovations.

In any major sports event there is usually a host broadcaster. The host broadcaster produces the original content (live coverage, replay clips, highlight clips, and archive material) and feeds it to a network of rights holders, who distribute it to their respective audiences (us). Typically there is a production studio in the event venue, often a broadcast truck. The various camera streams are aggregated to that studio for the live production. The produced content is then distributed, often first through a central broadcast center that is geographically close to the event venues, and from there to global audiences via satellite, telecommunication networks, the internet, and so on. Eventually it arrives at our devices through the rights holders and CDNs.

The distribution side of this chain joined the internet age a long time ago. It leverages IP networking, and much of it has been transforming, along with the rest of the networking world, into software applications running on standard ICT equipment. For example, transcoding functions that used to be implemented in fixed-function appliances are now largely done in software running on standard servers, with or without dedicated accelerators. But the production side of this chain has only begun transforming in that direction in the last few years.
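
To make that concrete, here is a minimal sketch of what software transcoding on a standard server can look like, assuming ffmpeg is installed; the file names, codec, and bitrate are my own illustrative choices, not any particular broadcaster’s workflow.

```python
import subprocess

def transcode(src: str, dst: str, video_bitrate: str = "6M") -> None:
    """Re-encode a contribution-quality input into a lower-bitrate distribution rendition."""
    cmd = [
        "ffmpeg",
        "-i", src,              # contribution-quality input (file or URL)
        "-c:v", "libx264",      # software H.264 encoder running on the CPU
        "-b:v", video_bitrate,  # target video bitrate for distribution
        "-c:a", "aac",          # re-encode audio to AAC
        "-y", dst,              # overwrite the output if it already exists
    ]
    subprocess.run(cmd, check=True)

# Hypothetical file names, for illustration only.
transcode("venue_feed.ts", "distribution_feed.mp4")
```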

Video processing in real time is not for the faint of heart. It involves very large amounts of data (streams of high-resolution video frames, every precious pixel of which must be handled), subject to very stringent timing requirements. Manipulating several such streams simultaneously in real time, as the producer switches camera views, graphics are overlaid, replay clips are inserted, and so forth, places very high demands on the underlying infrastructure. Indeed, even today most venue production setups involve tightly integrated, bespoke, fixed-function appliances.
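
To put rough numbers on “very large amounts of data,” here is a back-of-envelope estimate of the uncompressed bandwidth of a single HD camera feed, and of a hypothetical 40-camera venue; the parameters are illustrative choices of mine, not figures from any specific production.

```python
# Back-of-envelope bandwidth for uncompressed HD video essence.
# Illustrative numbers only; real flows add RTP/UDP/IP header overhead
# plus audio and ancillary-data essences on top.

WIDTH, HEIGHT = 1920, 1080   # 1080p raster
FPS = 60                     # progressive frames per second
BITS_PER_PIXEL = 20          # 4:2:2 chroma subsampling at 10 bits per sample

per_stream_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
print(f"One camera:  {per_stream_bps / 1e9:.2f} Gbit/s")   # ~2.49 Gbit/s

CAMERAS = 40                 # hypothetical large-event camera count
print(f"{CAMERAS} cameras: {CAMERAS * per_stream_bps / 1e9:.0f} Gbit/s")  # ~100 Gbit/s
```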

Essentially, these are embedded systems, made up of embedded subsystems, that are specially put together for the event or event type. Different sports (e.g., athletics is different from basketball is different from car racing) and different production standards (e.g., a pro league vs. a DIII college league) require their own specific configurations. If you peel back the sheet metal from many of these fixed-function appliance subsystems, you’ll see that they are usually implemented these days as “standard” software bundled together with a “standard” server. But they are tightly integrated and sold and operated as black-box appliances.

Historically, video infrastructure used bespoke network protocols and equipment. But in the last decade or so, this network infrastructure has been moving to IP with the advent of standards like SMPTE ST 2110. This enabled IT network infrastructure makers like Arista, Cisco, Juniper, Mellanox (now Nvidia), and the like to offer their very high-speed networking capabilities in this market. Now, with appliances that are really software running on servers, and networks that are just like cloud/IT networks (with a few added features), the sheet metal can start falling away…
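
For a feel of what “just like cloud/IT networks” means in practice: under ST 2110, each essence flow (video, audio, ancillary data) travels as ordinary RTP packets over UDP multicast, which standard data-center switches can carry. The sketch below joins a multicast group and inspects incoming RTP datagrams; the group address and port are invented for illustration, and a real receiver would add SDP session parsing, PTP timing, and packet reordering on top.

```python
import socket
import struct

# Hypothetical multicast group/port for a single video essence flow.
GROUP = "239.1.1.10"
PORT = 50000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Ask the kernel to join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

for _ in range(10):
    packet, _addr = sock.recvfrom(2048)       # one RTP datagram
    version = packet[0] >> 6                  # RTP version field (should be 2)
    seq = int.from_bytes(packet[2:4], "big")  # RTP sequence number
    print(f"RTP v{version}, seq {seq}, {len(packet)} bytes")
```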

Okay, so where does this all lead?

In some cases it could mean that only the cameras need to be in the venue anymore and everything else can be in a cloud. We all read about the boom in remote production when events resumed during the COVID pandemic but on-site staffing had to be minimized. Typically, these are cases where live broadcast requirements can be relaxed, or where the complexity of production is low and only a limited number of raw feeds are needed.

Many other cases require fully functional in-venue production capability. This could be to ensure that high-quality production can continue even in cases of catastrophic network failure between the venue and the broadcast center (or cloud). It could also be because of the complexity of the production scene: the number of simultaneous cameras and feeds, and the corresponding network cost of transmitting them all live to wherever the production takes place. Or it could be because of how much those feeds are used on-site: for in-venue audience engagement, for replay and adjudication, or to enable rights holders to add their own incremental production on-site.

And that’s where the different considerations of far edge (venue), broadcast center, and cloud come into play, as described in Carl and Allyson’s conversation.

· The production setting in the venue (usually a truck or a basement room) is likely to be quite constrained in space, in energy and cooling capabilities, in the available HW infrastructure, and so on.

· The cloud might be too “far” to move all production there, as mentioned above, not to mention the networking costs of doing so for large-scale events.

· The broadcast center sits in the goldilocks zone. It can typically accommodate quite a bit of scale, but it can’t be an unlimited ocean of available infrastructure; the cost overhead of providing that would likely be prohibitive and hard to justify.

As this transformation takes place, I expect it will require developing new methods and heuristics about what fits where and how to decide that. Processes and applications may be repartitioned between edge, broadcast center, and cloud, now that they are “liberated from the sheet metal”. Orchestration will gain a whole new meaning as well, as the same application will need to abide by completely different rules depending on where an instance of it is deployed, and to accommodate different infrastructure configurations in those locations.
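
As a purely hypothetical illustration of the kind of heuristic I mean, the sketch below scores candidate locations (venue, broadcast center, cloud) against a workload’s latency, bandwidth, and compute needs, and picks the most central site that still satisfies them; the numbers and rules are invented, not any real orchestrator’s policy.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    rtt_to_cameras_ms: float   # round-trip latency back to the venue
    capacity_gbps: float       # spare network capacity to ingest raw feeds
    compute_slots: int         # available server capacity

@dataclass
class Workload:
    name: str
    max_rtt_ms: float          # e.g., live switching is latency-critical
    ingest_gbps: float         # aggregate raw feed bandwidth it must receive
    slots: int

def place(workload: Workload, sites: list[Site]) -> str:
    """Pick the most central site that still meets the workload's constraints."""
    feasible = [
        s for s in sites
        if s.rtt_to_cameras_ms <= workload.max_rtt_ms
        and s.capacity_gbps >= workload.ingest_gbps
        and s.compute_slots >= workload.slots
    ]
    if not feasible:
        raise RuntimeError(f"no site can host {workload.name}")
    # Prefer the site with the most headroom, i.e., push work away from the
    # constrained venue edge whenever the constraints allow it.
    return max(feasible, key=lambda s: s.compute_slots).name

# Invented capacities and requirements, for illustration only.
sites = [
    Site("venue truck", rtt_to_cameras_ms=1, capacity_gbps=400, compute_slots=8),
    Site("broadcast center", rtt_to_cameras_ms=15, capacity_gbps=100, compute_slots=200),
    Site("public cloud", rtt_to_cameras_ms=60, capacity_gbps=20, compute_slots=10_000),
]

print(place(Workload("live switching", max_rtt_ms=5, ingest_gbps=100, slots=4), sites))
print(place(Workload("highlight clipping", max_rtt_ms=500, ingest_gbps=5, slots=16), sites))
```

In this toy model, latency-critical live switching lands in the venue truck, while a less time-sensitive highlight-clipping job drifts out to the cloud.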

It will take time to mature into these new methods. But the outcome is certain to be even more sports to watch, in more exciting ways, and much-improved operational models for content makers.
