Designed for ease of integration, Phenix's modular platform leverages widely accepted industry standards to reduce time-to-market and accommodate existing infrastructure investments without sacrificing speed or scale.
Source signals are captured live by cameras or taken from post-production and fed to a contribution encoder over USB, SDI, or MPEG-TS. Streams can also be contributed directly from a webcam or mobile device, as sketched below.
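For the webcam path, capture in the browser relies on the standard MediaDevices API; a minimal sketch, in which the element ID and capture constraints are illustrative:

```typescript
// Capture a local webcam/microphone stream in the browser with the
// standard MediaDevices API; the resulting MediaStream is what a
// WebRTC-based contribution would publish.
async function captureWebcam(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720, frameRate: 30 },
    audio: true,
  });
  // Preview locally; the "preview" element ID is illustrative.
  const preview = document.getElementById("preview") as HTMLVideoElement;
  preview.srcObject = stream;
  await preview.play();
  return stream;
}
```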
Signals are encoded into formats such as H.264 AVC before they are contributed to the Phenix network. This is performed by a Phenix encoder, a third-party encoder, a web browser, or a mobile device with the Phenix SDK.
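Browsers and the mobile SDKs perform this encode internally as part of WebRTC publishing; purely to illustrate what an H.264 AVC configuration involves, here is a sketch using the standard WebCodecs API. The codec string, resolution, and bitrate are example values, not Phenix settings.

```typescript
// Illustrative only: configure a browser-side H.264 (AVC) encoder with
// the WebCodecs API. In practice the WebRTC stack or a contribution
// encoder performs this step; all parameters here are example values.
const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // Encoded H.264 chunks would be handed to the transport layer here.
    console.log(`encoded ${chunk.byteLength} bytes (${chunk.type})`);
  },
  error: (e) => console.error("encode error:", e),
});

encoder.configure({
  codec: "avc1.42E01F", // H.264 Constrained Baseline, level 3.1 (example)
  width: 1280,
  height: 720,
  bitrate: 2_500_000,   // 2.5 Mbps example target
  framerate: 30,
});
```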
The Phenix encoder can be deployed as hardware or software within your infrastructure.
The encoded video, audio, and data are sent to the Phenix cloud for processing and global delivery. Supported ingest protocols include Zixi, SRT, MPEG-TS, RTMP, and WebRTC.
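The protocol choice depends on the contribution path and the encoder in use. As a purely hypothetical sketch of the information an ingest configuration carries (the type names, fields, and endpoint format below are illustrative, not the Phenix API):

```typescript
// Hypothetical illustration of an ingest configuration; the type names,
// fields, and endpoint format are examples, not the Phenix API.
type IngestProtocol = "zixi" | "srt" | "mpeg-ts" | "rtmp" | "webrtc";

interface IngestConfig {
  protocol: IngestProtocol;
  endpoint: string;   // e.g. "srt://ingest.example.com:9000" (placeholder)
  streamKey: string;  // credential issued for the channel (placeholder)
}

const config: IngestConfig = {
  protocol: "srt",
  endpoint: "srt://ingest.example.com:9000",
  streamKey: "<stream-key>",
};
```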
Phenix transports the encoded streams over private fiber to the point of presence nearest the end user, anywhere in the world.
If not already done during contribution encoding, the Phenix edge performs multi-bitrate transcoding to ensure smooth delivery of content regardless of the end user's network connection.
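The result is a ladder of renditions that playback can switch between as network conditions change. An illustrative example of such a ladder follows; the specific resolutions and bitrates are assumptions, not Phenix's actual output.

```typescript
// Illustrative multi-bitrate (ABR) ladder; the renditions and bitrates
// are example values, not the ladder Phenix actually produces.
interface Rendition {
  name: string;
  width: number;
  height: number;
  videoBitrateKbps: number;
}

const abrLadder: Rendition[] = [
  { name: "1080p", width: 1920, height: 1080, videoBitrateKbps: 4500 },
  { name: "720p",  width: 1280, height: 720,  videoBitrateKbps: 2500 },
  { name: "480p",  width: 854,  height: 480,  videoBitrateKbps: 1200 },
  { name: "360p",  width: 640,  height: 360,  videoBitrateKbps: 600  },
];
```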
When enabled, Phenix can also package the content as DASH or HLS for non-real-time use cases such as recording for future video-on-demand playback.
Phenix delivers content from the nearest point of presence to end users over WebRTC, HLS, or DASH.
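Real-time WebRTC playback is handled by the Phenix SDKs; for the HLS path, any standards-based player can consume the manifest. A minimal sketch using the open-source hls.js library, where the manifest URL and element ID are placeholders:

```typescript
import Hls from "hls.js";

const video = document.getElementById("player") as HTMLVideoElement;
const manifestUrl = "https://example.com/channel/master.m3u8"; // placeholder

if (Hls.isSupported()) {
  // Use hls.js where Media Source Extensions are available.
  const hls = new Hls();
  hls.loadSource(manifestUrl);
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, () => video.play());
} else if (video.canPlayType("application/vnd.apple.mpegurl")) {
  // Safari plays HLS natively.
  video.src = manifestUrl;
  video.play();
}
```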
End users view the content in any web browser or on iOS and Android mobile devices, Android TV, and Apple TV using the Phenix SDKs.