Next Generation Converged Network (NGCN)


Michel Béland

Michel Béland joined CBC/Radio-Canada in 2005 as a Senior Technologist in the Advanced Systems Development & Integration Group within the Strategy and Planning Department of CBC/Radio-Canada Technology. During his time there, he worked on many Special Events projects such as the Helsinki Games as well as the Turin and Beijing Olympics designing remote production systems. He was also part of the team that designed and built the HD Hockey Night in Canada edit and presentation suites in Toronto. He is now part of the Telecommunications Group, where he is a Senior Manager and responsible for NGCN System Design and Development.

Introduction

CBC/Radio-Canada adheres to a centralcasting model for broadcasting, which means that its regional stations contribute content to a central location, Toronto for English Services and Montreal for French Services. Regional content is assembled along with national content and returned to the regions, so that it can all be sent to the local transmitter for broadcasting.

On the business side, corporate data occupies an important place in CBC/Radio-Canada’s day-to-day operations. This includes email service, Internet access, FTP file transfers, and SAP, amongst others.

An appropriate method of transportation was chosen depending on the nature of the content: this could be Asynchronous Serial Interface (ASI), analogue, or Standard Definition (SD) Serial Digital Interface (SDI) circuits for video exchange locally or between cities, or satellite service for national contribution or collection. Integrated Services Digital Network (ISDN) and analogue audio circuits were used for radio distribution and collection. Telus provided a Multiprotocol Label Switching (MPLS) network for the corporate data and some FTP file transfers.

CBC/Radio-Canada has relied on national and local carriers to provide services for media and data exchange between CBC/Radio-Canada sites, including local loops. With long-term contractual agreements coming to an end, the Corporation wanted to explore the possibility of using a single network to carry audio, video, corporate data, and FTP file transfers, as well as to meet any future needs, such as teleconferencing, IP telephony, and remote productions for special events.

Following this requirement, the Next Generation Converged Network (NGCN) was born. This network offers flexibility and scalability, and it ensures efficient use of available bandwidth. The NGCN currently provides connections between forty CBC/Radio-Canada sites as well as eight data-only sites: five are airports, and the other three are Pippy Place, SSO Carling, and the Network Alarm Centre (NAC). As services expand and new stations are built, the NGCN will cover them whenever possible. Remote regions remain a challenge and will continue to be served by satellite and other alternative means.

Context

Network Implementation

CBC/Radio-Canada’s broadcast model is one of collection or contribution followed by distribution. To better understand the network implementation part of the NGCN, let’s first take a look at how the old model worked. On the collection side, content was sent from regional stations or remote locations through terrestrial and satellite real-time networks to the Toronto Broadcast Centre (TBC) and/or la Maison de Radio-Canada (MRC), as seen in figures 1 and 2.

Figure 1 – Satellite Collection Network

Figure 2 – Satellite and Terrestrial Collection Network

On the distribution side, televised content from the TBC and MRC was sent by satellite to regional stations to reach the off-air transmitter through a Studio Transmitter Link (STL), and distributed to cable and satellite providers as well as isolated transmission sites, as shown in figure 3.

Figure 3 – Satellite Distribution Network

When dealing with radio content, collection and distribution were done mainly through land-based networks. An example of the 1P English Radio collection network is shown in figure 4.

Figure 4 – 1P English Radio One Switched Broadcast Collection Network

Each location could receive content and insert content into the network; this service was provided by Bell.

On the data side, the MPLS Cloud served as the corporate data network and was also used for some FTP file transfers and Internet access services.

Figure 5 – Data Network

The NGCN was designed to replace the satellite collection infrastructure and part of the distribution network, with the goal of freeing up transponders 9A and 12A on the Anik F1 satellite. It also replaces the radio collection network. The new network is composed mostly of a fibre optic network and, in regions where fibre with Synchronous Optical NETwork (SONET) service is not yet available, Ethernet Private Line (EPL) is used. The NGCN also replaces the MPLS Cloud for data services. Multiple services converge onto a single network connecting CBC/Radio-Canada sites across the country, as well as London in the UK and Washington, D.C., in the USA.

Figure 6 shows the configuration of the NGCN for current and future CBC/Radio-Canada sites. The sites are categorised as core, branch, TV/radio, radio-only, and data-only sites, which include airports for the distribution of CBC News Express.

Figure 6 – NGCN Network

The yellow boxes represent the thirteen core sites to which branch sites and all other types of sites connect. The core sites are meshed to allow data to flow through an alternate path in case of failure of the most direct path. The core sites also have protection in the form of diverse routing of a working and alternate path, as well as a diverse entrance of the protected fibre into the site building. This type of protection was also implemented at other non-core sites whenever possible and cost effective. Refer to the legend in figure 6 for the type of protection as well as the bandwidth for every link.

Another detail to point out from figure 6 is the grey boxes around the Toronto and Montreal nodes. They show that both these sites have the equivalent of two nodes per site. The second node is located in a second Central Equipment Room (CER) in each location, and it forms an architecture that is completely redundant to the one in the first CER. Other sites also have redundancy; however, in those cases the main and redundant architectures are located in the same CER. This additional spatial redundancy was added to Toronto and Montreal for disaster recovery purposes, given that they are the network heads for English and French Services respectively.

The network provider is Rogers and its fibre network adheres to the SONET standard. The EPL segments are IP-type networks over fibre or copper.

The protected OC-192 links between Montreal, Toronto, Ottawa, and Quebec City are fully redundant links, meaning that a copy of the data flows through both working and alternate paths of the links simultaneously. All other protected links work in a 1+1 configuration where, if the working path fails, then the data is switched over to the alternate path.
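To make the difference between the two protection modes concrete, here is a minimal Python sketch; the class, names, and failure model are illustrative assumptions, not part of the NGCN implementation. A dual-feed link keeps delivering content as long as either path is up, with no switchover step, while a 1+1 link only moves traffic to the alternate path once the working path fails.

```python
class ProtectedLink:
    """Toy model of a protected link with a working and an alternate path."""

    def __init__(self, dual_feed: bool):
        self.dual_feed = dual_feed          # True: data flows on both paths at once
        self.paths_up = {"working": True, "alternate": True}
        self.active = "working"             # only meaningful for 1+1 switching

    def fail(self, path: str) -> None:
        self.paths_up[path] = False
        # 1+1: switch over only when the currently active path fails
        if not self.dual_feed and path == self.active and self.paths_up["alternate"]:
            self.active = "alternate"

    def receive(self, frame: str):
        if self.dual_feed:
            # Dual feed: pick any path that is still up, no switchover delay
            for path, up in self.paths_up.items():
                if up:
                    return f"{frame} via {path}"
            return None
        # 1+1: traffic only flows on the active path
        return f"{frame} via {self.active}" if self.paths_up[self.active] else None


oc192 = ProtectedLink(dual_feed=True)    # e.g. a fully redundant core link
oc192.fail("working")
print(oc192.receive("frame-1"))          # still delivered, via the alternate path
```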

Bandwidth Requirements

Bandwidth requirements for each site were based on the needs of the TV supper hour show, in terms of the number of audio/video feeds required for the production of the show, plus the Radio requirements, contribution feeds, and the data requirements in terms of corporate data, including Avid and Dalet FTP file transfers, all whilst taking into account the number of users and the production volume at the site.

For video, the HD format was used in the calculation. For good measure, a yearly increase in data traffic over the contractual period was also factored into the bandwidth calculation.

Another factor that went into the bandwidth requirement calculation was the consideration of a total link failure between two core sites. In that eventuality, the traffic needs to be directed through an alternate path, which must be able to handle the increased bandwidth from the core site and all of its branch sites.

For instance, extra links were added between St. John’s, Newfoundland, and Halifax and Moncton to bear the extra load in case of a link or site failure related to one of these two sites; these extra links carry the redirected traffic when needed.
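As a rough illustration of how such a per-site figure could be assembled, the sketch below sums assumed audio/video feeds at the profile bit rates described later in this article, compounds an assumed yearly data growth, and checks what an alternate path would have to absorb if a core site’s traffic were rerouted. All feed counts, the growth rate, and the function names are hypothetical, not the actual figures used in the NGCN design.

```python
# Illustrative per-site bandwidth estimate (all figures are assumptions).
HD_FEED_MBPS = 100      # JPEG 2000 HD profile (see Supported Formats)
SD_FEED_MBPS = 50       # JPEG 2000 SD profile
AES_PAIR_MBPS = 3.4     # uncompressed AES pair
ANNUAL_GROWTH = 0.10    # assumed yearly increase in data traffic

def site_bandwidth_mbps(hd_feeds, sd_feeds, aes_pairs, data_mbps, years):
    """Estimate the bandwidth a site needs at the end of the contract period."""
    av = hd_feeds * HD_FEED_MBPS + sd_feeds * SD_FEED_MBPS + aes_pairs * AES_PAIR_MBPS
    data = data_mbps * (1 + ANNUAL_GROWTH) ** years   # compounded data growth
    return av + data

# Hypothetical branch site: 2 HD feeds, 1 SD feed, 4 AES pairs, 200 Mbps of data
branch = site_bandwidth_mbps(2, 1, 4, 200, years=5)

# A core site's alternate path must also absorb its branch sites' traffic
core_own = site_bandwidth_mbps(6, 2, 8, 600, years=5)
alternate_path_load = core_own + branch
print(round(branch), round(core_own), round(alternate_path_load))
```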

Supported Formats

For video, standard and high definition SDI video with embedded audio are supported as source formats and compressed using the JPEG 2000 format. Two J2K encoding profiles are used: one for SD video, compressed to a bit rate of 50 Mbps with two embedded AES audio pairs, and one for HD video, compressed to a bit rate of 100 Mbps with four embedded AES audio pairs. Special custom profiles are also available for special cases. The resulting signal from J2K compression is an ASI signal. Other ASI signals, from MPEG 2 and H.264 video encoders, can also be transported by the NGCN, as long as these ASI signals are decoded by their respective decoders at the receiving end. An example of this might be the use of the NGCN to transport an MPEG 2 encoded signal originating from a mobile truck at a remote site.

The JPEG 2000 standard uses a wavelet compression technique and compresses each frame individually, rather than operating on a Group of Pictures (GOP) as MPEG 2 and H.264 do. The result is low encoding and decoding latency. Subjective testing helped determine the encoding profiles for SD and HD video that were used in the bandwidth calculations.

Audio uses AES as its source format and is transported as AES without any compression. The bandwidth allotted to the audio is 3.4 Mbps per AES pair. If a plant only supported analogue audio, analogue-to-digital and digital-to-analogue converters were installed.

Data, in the form of IP packets, comprises a mixture of corporate data (GroupWise, SAP, network storage, Internet, etc.), Avid and Dalet file transfers, and network and equipment management traffic, to name a few. At each site, the IP routers are assigned a minimum and a maximum data bandwidth. When audio/video services are in low usage, the extra bandwidth is used by the data services; the routers are throttled up and down accordingly. Priority is given to real-time audio and video, and then to FTP file transfer services.
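The throttling behaviour can be pictured as a simple allocation rule: real-time audio/video is served first, and data gets whatever is left, clamped between its configured floor and ceiling. The sketch below is a minimal illustration with assumed capacity and threshold values, not the actual router configuration.

```python
LINK_CAPACITY_MBPS = 622      # assumed trunk capacity (roughly an OC-12)
DATA_MIN_MBPS = 100           # assumed guaranteed floor for corporate data / FTP
DATA_MAX_MBPS = 400           # assumed ceiling for corporate data / FTP

def data_allowance(av_mbps: float) -> float:
    """Bandwidth available to data services, given current real-time A/V usage.

    Real-time audio/video always gets what it needs first; data is throttled
    up or down between its configured minimum and maximum with the leftovers.
    """
    leftover = LINK_CAPACITY_MBPS - av_mbps
    return max(DATA_MIN_MBPS, min(DATA_MAX_MBPS, leftover))

print(data_allowance(av_mbps=100))   # light A/V usage: data throttled up to 400
print(data_allowance(av_mbps=550))   # heavy A/V usage: data held at its 100 floor
```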

Equipment

The equipment required to interface to the NGCN is divided into three categories: Broadcast Interface Equipment, Network Terminal Equipment, and Network Transport Equipment. The Evertz Advanced optical Transport Platform (ATP) was chosen for this function.

Figure 7 – NGCN Equipment

The Broadcast Interface Equipment is the equipment required at the NGCN demarcation point to convert input and output signals, where necessary, to and from a format handled by the Network Terminal Equipment.

The Network Terminal Equipment consists of video/audio mux and demux devices that handle up to eight signals or channels and of two or eight port IP interface devices. The switch fabric, known as the ALR, routes individual video, audio, datacom, and telecom signals to and from selected sources and destinations on the network. The trunk interfaces connect the ATP to the transport network. Figure 8, courtesy of Evertz, illustrates this concept.

Figure 8 – ATP Hardware Components Shown with Sample Signal Types

Trunk interfaces come in a variety of flavours and are available for SONET/SDH (Synchronous Digital Hierarchy) or IP. The SONET standard is used in North America and the SDH standard is used everywhere else. The following table summarises the designations and bandwidths for these standards.

Table 1 – SONET/SDH Designations and Bandwidths

SONET Optical Carrier Level | SONET Frame Format | SDH Level & Frame Format | Payload Bandwidth (kbit/s) | Line Rate (kbit/s)
OC-1    | STS-1   | STM-0   | 50 112     | 51 840
OC-3    | STS-3   | STM-1   | 150 336    | 155 520
OC-12   | STS-12  | STM-4   | 601 344    | 622 080
OC-24   | STS-24  | –       | 1 202 688  | 1 244 160
OC-48   | STS-48  | STM-16  | 2 405 376  | 2 488 320
OC-192  | STS-192 | STM-64  | 9 621 504  | 9 953 280
OC-768  | STS-768 | STM-256 | 38 486 016 | 39 813 120
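These figures follow directly from the base OC-1/STS-1 rates: OC-n runs at n times the 51 840 kbit/s line rate, and its payload scales the same way from 50 112 kbit/s. A short Python check of the table:

```python
BASE_LINE_RATE_KBPS = 51_840   # OC-1 line rate: 810 bytes/frame x 8 bits x 8000 frames/s
BASE_PAYLOAD_KBPS = 50_112     # OC-1 payload envelope: 783 bytes/frame x 8 bits x 8000 frames/s

def sonet_rates(n: int) -> tuple[int, int]:
    """Return (payload bandwidth, line rate) in kbit/s for OC-n / STS-n."""
    return n * BASE_PAYLOAD_KBPS, n * BASE_LINE_RATE_KBPS

for n in (1, 3, 12, 24, 48, 192, 768):
    payload, line = sonet_rates(n)
    print(f"OC-{n}: payload {payload:,} kbit/s, line rate {line:,} kbit/s")
```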

The Network Transport Equipment is the network switching equipment provided by Rogers for connecting to the local Point of Presence (POP), and it resides at CBC/Radio-Canada’s premises in the case of large and medium-size sites. In the case of smaller sites, the switching equipment resides at the POP.

Wavelength Division Multiplexing (WDM) is used to transport signals to a POP from a CBC/Radio-Canada location requiring connections to multiple cities, a principle that is illustrated in figure 9. The trunk interfaces encapsulate outgoing traffic into SONET frames and de-encapsulate the SONET frames to retrieve incoming traffic. All the trunk interfaces operate with a laser set to a wavelength of 1310 nm. Outbound signals are shifted to a different wavelength and then optically multiplexed onto the transmit fibre of the fibre pair connecting the CBC/Radio-Canada site to the POP. At the POP, a Reconfigurable Optical Add-Drop Multiplexer (ROADM) device extracts each wavelength from the fibre and routes it to the Rogers inter-city core switch, whence it is sent to another inter-city core switch at the POP in the other city as part of one of many signals multiplexed onto a larger pipe.

Figure 9 – Wavelength Division Multiplexing

At the receiver end, the process is reversed. Establishing a connection between Toronto and Montreal, for example, means that Toronto can send to Montreal and vice versa. The full bandwidth of the link is available in both directions.
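The wavelength shifting can be pictured as assigning each 1310 nm trunk its own channel on the shared fibre towards the POP. The sketch below uses an example channel plan; the trunk names and wavelength assignments are illustrative, not Rogers’ actual plan.

```python
# Illustrative channel plan for the site-to-POP fibre (not the actual assignment).
trunks = ["trunk-to-Montreal", "trunk-to-Ottawa", "trunk-to-Halifax"]
dwdm_channels_nm = [1550.12, 1550.92, 1551.72]   # example ITU-grid-style spacing

# Each trunk's 1310 nm output is shifted to its own wavelength before being
# optically multiplexed onto the single transmit fibre towards the POP.
fibre_to_pop = dict(zip(trunks, dwdm_channels_nm))

for trunk, wavelength in fibre_to_pop.items():
    print(f"{trunk}: 1310 nm -> {wavelength} nm on the shared fibre")
```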

System Management

With such a large system deployed across the country, including locations in Washington, D.C. and London (UK), a management tool is a necessity: one that can monitor the devices, modify settings when needed, and upgrade components when new features become available, all from any CBC/Radio-Canada location through a network connection. Evertz provides such a tool, called VistaLink Pro. VistaLink Pro is a client application that runs on a standard PC and connects to the VistaLink Pro central server.

An optional component of VistaLink Pro is the Alarm Server, which manages alarms generated by the components to detect such things as a component removal or insertion in a frame cage, a link loss, loss of audio, loss of video, over-provisioning of a link, and a plethora of other conditions. There is quite a bit of flexibility as to what can be set to trigger an alarm. Media Network Operations (MNO) has also configured Simple Network Management Protocol (SNMP) monitoring to poll ATP devices, IP switches, and routers on the network, and to automatically generate a problem ticket to the Network Alarm Centre (NAC). Problems reported by users are also directed to the NAC and then dispatched to the MNO Network Operations Centre (NOC) team for troubleshooting.
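A minimal sketch of the polling idea is shown below; the device names, the `snmp_get` helper, and the ticketing call are hypothetical stand-ins for the actual MNO tooling, and only the standard sysUpTime object is referenced.

```python
# Hypothetical device names; the real NGCN inventory is managed by MNO.
DEVICES = ["atp-toronto-1", "atp-montreal-1", "rtr-halifax-1"]
SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"   # standard SNMPv2-MIB sysUpTime.0

def snmp_get(host: str, oid: str):
    """Hypothetical helper standing in for a real SNMP library call.

    Returns the polled value, or None if the device does not respond.
    """
    return None   # placeholder: every device appears unreachable in this sketch

def open_ticket(host: str, detail: str) -> None:
    """Hypothetical stand-in for raising a problem ticket with the NAC."""
    print(f"TICKET -> NAC: {host}: {detail}")

def poll_once() -> None:
    for host in DEVICES:
        if snmp_get(host, SYS_UPTIME_OID) is None:
            open_ticket(host, "device not responding to SNMP poll")

if __name__ == "__main__":
    poll_once()   # a real poller would repeat this on a fixed interval
```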

Resources

Resources staff use ScheduLink from ScheduAll, the time and resource management scheduling application, to create bookings. The application has been modified to receive return codes from the Evertz Intelligent Resource Manager (IRM) application, which serves as a third-party application interface to ATP Scheduler, the Evertz scheduling application. These return codes indicate the status of each booking through its lifecycle. If for some reason ScheduLink is not available, bookings may be created directly in IRM.
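Conceptually, each return code simply advances a booking through its lifecycle. The sketch below illustrates this with placeholder status names; the real IRM return codes are not reproduced here.

```python
from dataclasses import dataclass

# Illustrative lifecycle states; the actual IRM return codes differ.
LIFECYCLE = ["REQUESTED", "CONFIRMED", "ACTIVE", "COMPLETED"]

@dataclass
class Booking:
    booking_id: str
    source: str
    destinations: list
    status: str = "REQUESTED"

    def apply_return_code(self, code: str) -> None:
        """Update the booking from a status code returned by IRM to ScheduLink."""
        if code not in LIFECYCLE and code != "FAILED":
            raise ValueError(f"unknown return code: {code}")
        self.status = code

feed = Booking("BK-0001", "London (UK)", ["Toronto", "Montreal"])
for code in ("CONFIRMED", "ACTIVE", "COMPLETED"):
    feed.apply_return_code(code)
print(feed.status)
```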

One interesting aspect of the ATP platform is its ability to multicast, or send a source to as many destinations as required simultaneously, without requiring extra bandwidth per extra destination. Figure 10 shows an example of multicasting as seen in ScheduLink. The London (UK) feed is sent to Toronto and Montreal simultaneously. The London optical connection is with Toronto but, when the signal arrives in Toronto, the ALR switching matrix also routes it towards Montreal without doubling the bandwidth between London and Toronto.

Please note that figure 10 is a logical representation of a multicast in ScheduLink. The signal, as explained above, makes its way to Toronto first and then is routed to Montreal.

Figure 10 – Multicasting
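A minimal sketch of the fan-out idea follows: the London-Toronto link carries the feed once, and the crosspoint in Toronto duplicates it locally towards any additional destinations. The site names, structures, and routing table are illustrative assumptions, not the actual ALR data model.

```python
from collections import defaultdict

# Crosspoint routing table per site: input signal -> set of outputs
routes = {
    "Toronto": defaultdict(set),
    "Montreal": defaultdict(set),
}

def add_destination(signal: str, via: str, dest: str) -> None:
    """Add a destination for a signal; duplication happens at the 'via' site's crosspoint."""
    routes[via][signal].add(dest)

# The London feed lands in Toronto once, then fans out to Toronto and Montreal outputs.
add_destination("London-feed", via="Toronto", dest="Toronto-PCR")
add_destination("London-feed", via="Toronto", dest="Montreal-trunk")

# Bandwidth on the London-Toronto link is counted once, regardless of fan-out.
london_toronto_feeds = {"London-feed"}
print(f"London-Toronto link carries {len(london_toronto_feeds)} copy of the feed")
print(f"Toronto crosspoint fans it out to {len(routes['Toronto']['London-feed'])} destinations")
```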

Conclusion

The NGCN system officially started carrying production signals on September 1, 2011. Since then, it has enabled both the English and French networks to complete the migration of their centralised radio presentation onto the NGCN; consequently, the Bell circuits are no longer used, aside from some local loops that were planned to be kept. The data migration was completed in December 2011.

Moving forward, a new station in Rimouski, Quebec, will come online on the NGCN in April and be ready for full operation by September 2012. An NGCN mobile node was designed and built, and it is being tested to address Special Events needs. It currently connects to a Montreal node where trunks provide audio, video, intercom, and control connections to one of several studios, allowing remote production similar to what was done during the Turin and Beijing Olympic Games, but on a smaller scale.

Some locations that were radio-only have requested the addition of video contribution capability. We have also added ASI send and receive capability in both Toronto and Montreal. A permanent lab is in the design stage for Montreal, Toronto, and the Evertz facilities in Burlington, Ontario. This three-node system, connected through Rogers’ fibre network, will allow us to recreate every situation and condition of the production system, and to test new software and firmware releases before applying them to the production system.

We have seen a marked improvement in video and audio quality and in data exchange rates for file transfers, not to mention the ease of use of the system when it comes to setting up services to send content to a neighbouring city or across the country, to a single destination or to several destinations simultaneously. As such, the NGCN offers unparalleled flexibility and scalability.

There is no doubt that the NGCN is a Next Generation Converged Network that will enable CBC/Radio-Canada to seize great development opportunities in the future.
