Beyond 4G: Video Processing At The Edge
Mobile infrastructure is evolving rapidly from 3G to 4G to cope with the increasing demands of users. But what is the driving force behind the bandwidth explosion? Clearly, it's video. This brings system-wide challenges throughout the network: supporting the throughput and latency needed so users aren't left waiting to download or upload their videos. Enabling a superior video experience over wireless requires reducing system-wide latency (where the system is a network of many basestations and devices), since the objective is to minimize the latency and buffering of a video cached at a datacenter and displayed on many handsets. One way to accomplish this is to put real-time video processing at the edge, which in this case is the PHY, just before transmission. This needs to happen in the infrastructure as well as in the devices. For infrastructure, it means caching video at the basestation, where it can be transrated or transcoded to suit the specific device receiving it. The video processing required at the edge includes encoding and decoding, transcoding and transrating, and even computational imaging. Furthermore, always-on augmented-reality algorithms can deliver a truly value-added experience by overlaying relevant information on the video or image being displayed. All of this is now possible thanks to the high performance and low power of HyperX processors.
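As a rough illustration of the edge-caching and transrating idea above, the following sketch picks which cached rendition a basestation could serve to a device, transrating down only when no cached copy fits the radio link. The bitrate ladder, the 80% link-headroom factor, and the function name are illustrative assumptions, not HyperX specifics:

```python
# Illustrative sketch: choose a cached rendition to serve from a basestation
# edge cache, transrating only when no rendition fits the device's link.
# The bitrate ladder and headroom factor are assumptions for illustration.

CACHED_RENDITIONS_KBPS = [4000, 2000, 1000, 500]  # renditions held at the edge

def select_rendition(link_kbps, renditions=CACHED_RENDITIONS_KBPS):
    """Return (bitrate_kbps, needs_transrate) for a given radio link rate."""
    # Serve the highest cached bitrate the link can sustain with 20% headroom.
    usable = [r for r in renditions if r <= 0.8 * link_kbps]
    if usable:
        return max(usable), False       # a cached copy fits: no transrating
    # Link is slower than every cached rendition: transrate down at the edge.
    return int(0.8 * link_kbps), True

print(select_rendition(3000))   # -> (2000, False)
print(select_rendition(400))    # -> (320, True)
```

The point of the sketch is where the work happens: the rate decision and any transrating run at the basestation, next to the PHY, rather than back at the datacenter.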
From a processing perspective, the best way to reduce latency and enable the explosion of mobile video is to co-locate the video processing and the wireless processing, ideally in the same processor. That is no easy task, since both PHY and video codec processing are compute intensive. Co-location enables a highly flexible, differentiated platform that virtualizes the signal processing, so the platform is not tied to any specific wireless air interface or video codec. The HyperX processor is designed for exactly these problems.
Heterogeneous networks supporting a mix of macrocells, small cells and CloudRAN create new opportunities, as well as new technical problems. New techniques such as CoMP (Coordinated MultiPoint) and eICIC (enhanced inter-cell interference coordination) can add network intelligence and improve the user experience by leveraging the combined signals and information from the multiple RRHs in contact with a UE. However, this requires breaking up the baseband chain, say post-FFT as an example, and sharing information with other baseband signals before finishing the baseband processing based on the intelligence gathered. The current generation of DSP SoCs, FPGAs and ASICs is not designed to solve that problem effectively. The HyperX processor, on the other hand, is completely scalable from both a hardware and a software perspective, which makes it an ideal solution for this problem.
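The post-FFT sharing described above can be made concrete with a toy example: several RRHs see the same UE's subcarriers through different channels, forward their frequency-domain samples and channel estimates over the backhaul, and a joint combiner applies maximum-ratio combining (MRC) per subcarrier. The channel values, symbol values, and noiseless setup below are synthetic assumptions chosen only to show the data flow:

```python
import numpy as np

# Toy illustration of CoMP-style joint reception: post-FFT samples from
# several RRHs serving the same UE are shared over the backhaul and
# combined per subcarrier with maximum-ratio combining (MRC).
# Channels and symbols are fixed, illustrative assumptions (noise omitted).

tx = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)   # 4 QPSK subcarriers

# Per-RRH, per-subcarrier channel gains (3 RRHs x 4 subcarriers).
h = np.array([
    [0.9+0.1j, 0.2-0.4j, 1.1+0.0j, 0.3+0.3j],
    [0.1+0.8j, 1.0+0.2j, 0.2-0.1j, 0.9-0.5j],
    [0.5-0.5j, 0.4+0.9j, 0.7+0.6j, 0.1+0.1j],
])
rx = h * tx                                  # what each RRH sees after its FFT

# Each RRH shares (rx, h); the joint combiner applies MRC across RRHs:
combined = np.sum(np.conj(h) * rx, axis=0) / np.sum(np.abs(h) ** 2, axis=0)

# Hard QPSK decisions on the combined estimate recover the sent symbols.
detected = (np.sign(combined.real) + 1j * np.sign(combined.imag)) / np.sqrt(2)
print(np.allclose(detected, tx))             # -> True
```

Even a subcarrier that is deeply faded at one RRH (e.g. the 0.1+0.1j entry) is recovered, because the combiner weights each RRH's contribution by its channel quality. This per-subcarrier exchange mid-chain is exactly the kind of inter-baseband data sharing the paragraph describes.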
HyperX processors are designed to interface to each other gluelessly through a massive amount of IO, perfect for sharing CoMP data. They also enable high-speed, low-power routing of data through the mesh memory network, supporting both CoMP and virtual migration: moving a carrier to another processor or modem card for load balancing. They are fully software-reprogrammable and flexible, so the baseband chain can easily be broken up to share data at the bit, symbol or sample level. This unique software reprogrammability also makes them upgradable to a different air interface, enabling PHY virtualization that allows a different mix and load of air interfaces depending on the time of day, the number of users on each air interface, and so on. Contact us to learn more about how HyperX processors can provide the ground-breaking technology to bring innovation back to mobile infrastructure.
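The virtual-migration idea, moving carriers between processors for load balancing, can be sketched as a simple greedy reassignment. The processor names, per-carrier load figures, and the 0.85 capacity threshold below are made-up assumptions, not HyperX specifics:

```python
# Sketch of carrier migration for load balancing: when a baseband processor
# exceeds a capacity threshold, move its lightest carrier to the least-loaded
# peer. Names, loads and the threshold are illustrative assumptions.

def rebalance(loads, threshold=0.85):
    """loads: {processor: [carrier_load, ...]}. Returns (carrier, src, dst) moves."""
    moves = []
    total = lambda p: sum(loads[p])
    for proc in sorted(loads):
        while total(proc) > threshold and len(loads[proc]) > 1:
            carrier = min(loads[proc])            # lightest carrier to move
            target = min(loads, key=total)        # least-loaded processor
            if target == proc or total(target) + carrier > threshold:
                break                             # nowhere better to put it
            loads[proc].remove(carrier)
            loads[target].append(carrier)
            moves.append((carrier, proc, target))
    return moves

loads = {"bb0": [0.5, 0.4, 0.2], "bb1": [0.3], "bb2": [0.1]}
print(rebalance(loads))   # -> [(0.2, 'bb0', 'bb2'), (0.4, 'bb0', 'bb1')]
```

A real scheduler would also weigh migration cost and air-interface mix, but the sketch shows the shape of the decision: spare capacity anywhere in the mesh can absorb load from an overloaded card.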