More and more web sites are using video to convey information, and most of them serve a single resolution for their videos. If you want to use a responsive design approach, that is undesirable. Offering several encoded sizes is a good solution, but there is something even better, at least in theory: HTTP Live Streaming (HLS). Its advantage is that it takes the available bandwidth into account and automatically selects the appropriate video stream, which is especially helpful on mobile.
This way you also only download the parts that you are actually watching, and you can jump forward in the video without having to wait until the complete video has been downloaded. That sounds great, but usage is a bit tricky, so I created an HLS test page to see how good the actual support is on different devices. For the video conversion I use FFmpeg, so I had to figure out how to create the video streams. On recent iOS devices the created video plays without problems, but doing the conversion in a single command produces a choppy stream on Android.
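Since the post does not include the exact commands, here is a minimal sketch of how a single HLS rendition might be scripted around FFmpeg. The file names, bitrate and segment length are illustrative only; `-hls_time` and `-hls_list_size` are standard options of FFmpeg's hls muxer.

```python
def hls_command(src, out_playlist, video_bitrate="800k", segment_seconds=4):
    """Return an FFmpeg argument list that produces one HLS rendition.

    Input/output names and the bitrate are placeholders, not the
    author's actual settings.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-b:v", video_bitrate,  # H.264 video at the target bitrate
        "-c:a", "aac",                             # AAC audio, expected by most HLS clients
        "-hls_time", str(segment_seconds),         # target segment duration in seconds
        "-hls_list_size", "0",                     # keep every segment in the playlist
        "-f", "hls", out_playlist,
    ]

print(" ".join(hls_command("input.mp4", "stream.m3u8")))
```

For a full adaptive set, the same command would be repeated per resolution and the variant playlists tied together in a master playlist.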
Flash playback still has some issues when the bitrate changes, so I still have some work to do.

Written by: Nabil Kanaan, October 26th

Latency is a major challenge for the online video industry. Typical broadcast linear stream delay is only a few seconds, whereas online streaming has historically been anywhere from 30 seconds to over 60 seconds, depending on the viewing device and the video workflow used.
The challenge for the online streaming industry is to reduce this latency to a range closer to linear broadcast latency, or even lower, depending on the application's needs. Therefore, many video providers have taken steps to optimize their live streaming workflows by rolling out new streaming standards such as the Common Media Application Format (CMAF) and by making changes to encoding, CDN delivery and playback technologies to close the latency gap and provide a near-real-time streaming experience for end users.
Typical applications include sports, news, betting and gaming. Another class of latency-sensitive applications includes feedback data as part of the interactive experience; an example is the ClassPass virtual fitness class, as recently announced by Bitmovin. Other interactive applications include game shows and social engagement.
In these use cases, synchronizing latency across multiple devices becomes valuable, so that viewers have a similar chance to answer questions or provide other interactions. When we originally posted our thoughts on CMAF, adoption was still in its infancy, but in recent months we have seen increased adoption of CMAF across the video workflow chain and by device manufacturers. As end-user expectations to stream linear content with latency equivalent to traditional broadcast have continued to increase, and content rights to stream in real time have become more and more commonplace, CMAF has stepped in as a viable solution.
Traditional HTTP streaming delivers media in complete segments, typically several seconds long. This inherently adds a few seconds of delay from transmission to playback, as the segments have to be encoded, delivered, downloaded, buffered and then rendered by the player client, all of which is bounded below by the segment size. With low-latency, or chunked, CMAF, the player can request incomplete segments and render all available chunks instead of waiting for the full segment to become available, thereby cutting latency down significantly.
At the transmit end of the chain, encoders can output each chunk for delivery immediately after encoding it, and the player can reference and decode each one separately. The Bitmovin Player can be configured with low-latency mode turned on, which enables chunk-based decoding and rendering without waiting for the full segment to be downloaded.
The Bitmovin Player optimizes its start-up logic, determines buffer sizes and adjusts the playback rate to achieve near-real-time live streaming latency; in our testing, this can go as low as 1 second. CMAF low latency is compatible with the rest of the features the Bitmovin Player already supports today, for example ads, DRM, analytics and closed captioning.
The diagram above contrasts player buffering and decoding behavior in the standard-latency mode (full segments) with the chunked-segment mode used for low-latency streaming. It shows that with non-chunked segments, a segment size of 4xC (where C is the duration of the lowest-granularity unit, the chunk, measured in milliseconds) and three-segment buffering, a player latency of 14xC is typically achieved.
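The diagram's arithmetic can be reproduced with a toy model. This is my own accounting, not Bitmovin's code: four chunks per segment, a three-segment buffer, and roughly two chunks of in-flight and render delay in both modes.

```python
def standard_latency(c_ms, chunks_per_segment=4, buffered_segments=3):
    """Latency when the player must buffer whole segments.

    Three buffered 4xC segments plus about two chunks of in-flight
    and render delay, matching the 14xC figure in the diagram.
    """
    return buffered_segments * chunks_per_segment * c_ms + 2 * c_ms

def chunked_latency(c_ms):
    """Latency when the player renders chunks as soon as they arrive."""
    return 2 * c_ms

C = 1000  # one-second chunks, for illustration
print(standard_latency(C))                        # 14000 ms = 14xC
print(chunked_latency(C))                         # 2000 ms  = 2xC
print(standard_latency(C) // chunked_latency(C))  # 7x improvement
```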
In contrast, chunked segments with CMAF are shown to achieve a 2xC-second latency as opposed to a 14xC-second latency: a seven-fold improvement.

CMAF and MPEG-DASH – the Holy Grail of OTT?

In short, yes.
There are some considerations and tradeoffs when trying to achieve low latency while still providing a high-quality viewing experience.
Buffer size: ideally, we want to render frames as soon as the player receives them, which means maintaining a very small buffer. But this also introduces instability in the viewing experience, especially when the player encounters unexpected interruptions such as dropped frames or frame bursts due to network or encoder issues.
Without enough locally stored frames, the player stalls or freezes until the buffer refreshes with new frames.
This in turn requires the player to re-sync its presentation timing and leads to perceived distortions in the playback experience. DRM is another factor that can add start-up delay: the license-delivery turnaround time blocks content playback even when low latency is turned on.
In this case, the player adjusts to the latest live frame upon successful license delivery, and the latency is consistent with the set low latency value.
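The buffer-size tradeoff can be illustrated with a toy playback simulation. This is my own sketch, not the Bitmovin Player's actual logic: chunks arrive on a fixed network trace with one late burst, and the smaller initial buffer leaves less slack before a late chunk causes a stall.

```python
def count_stalls(arrival_ms, buffer_ms, chunk_ms=500):
    """Count stalls for a player that starts once buffer_ms is buffered.

    arrival_ms: cumulative arrival time of each chunk; each chunk holds
    chunk_ms of media. A stall occurs whenever a chunk arrives after its
    scheduled playout time, after which playout is pushed back (re-sync).
    """
    stalls = 0
    playhead = buffer_ms  # playback starts after the initial buffer fills
    for i, arrived in enumerate(arrival_ms):
        due = playhead + i * chunk_ms  # when this chunk must be on screen
        if arrived > due:
            stalls += 1
            playhead += arrived - due  # re-sync: shift playout later
    return stalls

# Identical network trace, two buffer sizes; chunk 4 arrives in a late burst.
trace = [400, 900, 1400, 3500, 4000, 4500]
print(count_stalls(trace, buffer_ms=500))   # 1 stall with a small buffer
print(count_stalls(trace, buffer_ms=2500))  # 0 stalls with a larger buffer
```

The larger buffer absorbs the burst at the price of 2 extra seconds of latency, which is exactly the tension described above.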
For all of the above reasons, balancing a robust, scalable online streaming platform with minimal re-buffering and stream interruptions against the time-sensitive behavior of low-latency CMAF streaming can be challenging. The solution is a holistic view of the streaming experience, provided by Bitmovin Analytics, which gives insight into session quality so customers can monitor the performance of low-latency streaming sessions and make real-time decisions to adjust player and encoding configurations.

Today, many publishers deliver the same content twice: HLS with TS segments and MPEG-DASH with fMP4 segments. Although effective, this also complicates their encoding, DRM and deployment workflow.
Content publishers utilising both formats currently need to pay twice to encode and store their content. Furthermore, these files, although representing the same media, compete for space on CDN edges, reducing overall delivery efficiency. Utilising the fMP4 container allows content publishers to encode and store their media once and deliver it with both the HLS and MPEG-DASH segmented adaptive streaming protocols, reducing costs and simplifying their overall workflow.
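The cost argument reduces to simple arithmetic; the unit costs below are made up, and only the ratio matters.

```python
def delivery_cost(renditions, encode_cost, storage_cost, formats):
    """Toy model: encode and storage cost scale with the number of container formats."""
    return formats * renditions * (encode_cost + storage_cost)

# Five renditions, illustrative per-rendition costs.
dual_format = delivery_cost(5, 10.0, 2.0, formats=2)  # TS for HLS + fMP4 for DASH
single_cmaf = delivery_cost(5, 10.0, 2.0, formats=1)  # one shared CMAF fMP4 set
print(dual_format, single_cmaf)  # 120.0 60.0: the single set halves the bill
```

The same factor of two applies to origin storage, and a single set also doubles the effective CDN cache hit rate for that content.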
Moving away from TS containers towards fMP4 will increase the effective bandwidth of the stream.
Furthermore, CMAF specifies a low-latency streaming mode for cases where latency is of crucial importance and every millisecond counts. Maintaining two container formats brings a high cost and reduces overall CDN delivery efficiency. Transport Stream containers were designed for broadcast: they wrap the H.264 elementary stream in fixed 188-byte packets whose headers add overhead that fMP4 avoids.
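The bandwidth claim can be made concrete. MPEG-TS uses fixed 188-byte packets with at least a 4-byte header, which puts a floor of roughly 2% on container overhead before PES and PSI costs are counted, whereas fMP4 pays its header cost once per fragment rather than per packet. A quick lower-bound calculation:

```python
import math

TS_PACKET = 188  # MPEG-TS packets are a fixed 188 bytes
TS_HEADER = 4    # minimum per-packet header size

def ts_overhead_bytes(payload_bytes):
    """Lower bound on TS container overhead for a given payload size.

    Every 188-byte packet spends at least 4 bytes on its header, so at
    most 184 bytes carry payload. Real streams add more overhead still
    (PES headers, PAT/PMT tables, adaptation-field stuffing).
    """
    packets = math.ceil(payload_bytes / (TS_PACKET - TS_HEADER))
    return packets * TS_HEADER

payload = 1_000_000  # 1 MB of elementary-stream data
print(ts_overhead_bytes(payload))                            # 21740 bytes
print(round(100 * ts_overhead_bytes(payload) / payload, 1))  # ~2.2 percent
```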
The use of TS containers was, furthermore, enforced by the original HLS specification. The general expectation is that most browser, operating system and device manufacturers will quickly release updates to their products in order to be compatible with the CMAF standard. The vision for the future is clear: a universal video player and a high-efficiency streaming protocol.

Published on: September 22
[EN] FFmpeg RTSP to HLS live streaming without transcoding howto
On Ubuntu, I would like to use this workflow: FFmpeg as the encoder, Shaka Packager, nginx as the server and the Bitmovin player. I have already configured the Bitmovin player and it is definitely working. Right now I have just tried a simple packager command, but the created manifest does not provide the "availabilityTimeOffset" value, which I believe is needed.
Can you tell me what I am missing? I also have another question: am I able to create low-latency CMAF HLS with the setup I mentioned, using Shaka Packager?

availabilityTimeOffset is only useful if an offset is needed to adjust the segment availability time.
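For context, availabilityTimeOffset is a standard attribute in the DASH MPD (it may appear on SegmentTemplate, among other elements) that tells clients segments become available earlier than the nominal segment duration implies; combined with availabilityTimeComplete="false" it signals that a segment may be requested before it is complete. A hand-written, purely illustrative fragment (all values made up):

```xml
<!-- Illustrative only: 2-second segments whose data starts to become
     available 1.5 s before each segment is complete -->
<SegmentTemplate timescale="1000" duration="2000"
                 availabilityTimeOffset="1.5"
                 availabilityTimeComplete="false"
                 media="chunk-$Number$.m4s"
                 initialization="init.mp4"/>
```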
I'm looking for about 5 seconds of latency. I would also like to test CMAF to reduce encoder and storage costs, not only to reduce latency, so I don't need to achieve anything too low.

These parameters conspire to inform the client that fragments will be ready prior to the completed segment. I suppose the truth of this declaration depends on whether or not shaka-packager writes individual fragments to disk prior to segment completion, and obviously also upon the HTTP server. I may be interested in trying to add support for this.
Are my assumptions about this correct? Do you have any other insights or guidance on implementation?

As far as I remember, Shaka creates fragments under a different name and renames them after completion. Also, I think the HTTP server should be configured to support partial content.

Yes, aleek is correct that the writing behavior needs to be changed too. In summary, here are the items I think need to be implemented to get complete low-latency DASH support (this list may be incomplete):
1. Change the writing behavior to create the new segment once the first fragment (chunk) is generated.
2. Support uploading directly to HTTP for serving.

And jbree, you are welcome and appreciated to work on 1 and 2.
Let us know if you have any questions.

I've made some modifications to MultiSegmentSegmenter to make it write subsegments as they arrive at the base Segmenter. I wonder if you have any thoughts on properly signalling the MuxerListener. This, of course, means that the MPD isn't updated until the segment is completed. I'm not sure whether there is an existing mechanism for predicting the segment duration in the MPD, or a way to update it after the fact if the prediction was wrong.
This is an experimental feature: in the current implementation the lhls option doesn't work with file output, but it will work if you use another protocol such as HTTP.
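Since the original command is not shown, here is a reconstruction of what such an invocation might look like, assembled as an argument list. The output URL is made up; `-lhls`, `-hls_segment_type` and `-method` are options of FFmpeg's hls muxer, with lhls being experimental.

```python
def lhls_command(src, publish_url):
    """FFmpeg arguments for experimental LHLS output over HTTP.

    publish_url must be an HTTP endpoint that accepts uploads; as noted
    above, the lhls option does not work with plain file output.
    """
    return [
        "ffmpeg", "-re", "-i", src,
        "-c:v", "libx264", "-c:a", "aac",
        "-f", "hls",
        "-hls_segment_type", "fmp4",  # CMAF-style fragmented MP4 segments
        "-lhls", "1",                 # experimental low-latency HLS mode
        "-hls_time", "2",
        "-method", "PUT",             # upload playlist and segments via HTTP PUT
        publish_url,
    ]

print(" ".join(lhls_command("input.mp4", "http://example.com/live/out.m3u8")))
```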
Any help would be highly appreciated.

You need a git build; no releases have it at present.

Thanks, do you know which branch to check out?

Get a precompiled binary.

Thanks a lot. I will try it and post my results here.

Gyan, I edited my question with the steps I took to compile FFmpeg from its master branch. I also tried nightly builds.
Unfortunately, I was not able to enable lhls.

I will post my results by tomorrow. I am still trying to find the correct options for HTTP upload. It's just a few lines of code.

The ultimate goal is to reduce the complexity of delivering video online. Today, publishers commonly package the same content twice: TS segments for HLS and ISO fMP4 segments for MPEG-DASH. These files, although representing the same media, cost twice as much to package, twice as much to store on origin, and compete with each other for space on Akamai edge caches, thereby reducing the efficiency with which they can be delivered.
To try to overcome the cache-efficiency problem, the market has produced a myriad of solutions that require complex synthesis of the TS and ISO segments (for HDS) at the edge or in a streaming mid-tier. These servers, which have to build content before they can deliver it, have a lower throughput than those that can simply pass it through.
File container diversity therefore limits the total throughput achievable by a delivery server, as well as contributing significantly to our customers' content preparation, workflow management, and delivery costs. Alternatively, customers could store multiple versions of the content, which impacts total storage cost. Two unlikely collaborators, Microsoft and Apple, came together to plan an end to this inefficiency through a new media file format, which at the time was called the Common Media File Format and is now CMAF.
Microsoft and Apple reached out to Akamai and a number of their closer partners to iterate on the proposal. In February, this group of companies prepared a joint submission to MPEG, which has been accepted onto a standardization track. For the Apple and HLS community, however, it requires parsing a new type of container.
The benefits of encoding once, packaging once, caching once and building a single type of player are too attractive along the delivery chain for TS to persist in the long term. It is not all ice-cream cake, however.
Even though Apple, Android and Microsoft operating systems and devices will quickly support CMAF, there will still be many legacy devices that are not field-upgradeable, for which TS-based HLS will continue to be needed.
Additionally, Common Encryption is not as common as one might think: the industry is split between two AES encryption modes, CTR (used by PlayReady and Widevine) and CBC (used by Apple's FairPlay). While draft CMAF continues to support both of these, the vision of a single content set for all devices remains blurry. Despite these issues, CMAF remains the biggest step forward the industry has taken in many years towards a harmonized and converged future. We can expect market forces to pick winners for codecs, captions, encryption modes and presentation formats, and CMAF to settle quickly as the de-facto OTT media standard.
Shawn Michels is a senior product manager for media at Akamai.

Thank you for laying this out; really well done. Any chance we can get a common DRM key exchange to go along with the encryption? Supporting a matrix of clients and DRM servers is not ideal. We will also need a common watermarking mechanism, as these are now both requirements for premium 4K content.
Having a common client-side key-exchange mechanism among the various DRMs has been requested by many player implementers; however, the DRM vendors regard their key-exchange mechanisms as proprietary and as conferring a market advantage, so there has been little interest on their side in harmonizing them as was done for Common Encryption. Watermarking is in a similar situation.
Having a common implementation means commoditization of the service, which the various vendors would like to avoid.
They consider their implementation mechanisms a market advantage. Additionally, dynamic session-based server-side watermarking is still in its infancy as regards deployment. I have no doubt that the UHD content protection requirements will drive more deployments.