Here's the Emmy that C-Cube Microsystems won back in 1995 for the MPEG-2 (actually unconstrained MPEG-1) encoder chip set used in the roll-out of DirecTV.
The original DirecTV encoder was MPEG-1 at 704x480 using eight CL4000 chips. Then in 1995, when the MPEG-2-capable CL4010 was finished, the encoders were upgraded to MPEG-2 (frame-only encoding). They were upgraded again to a 12-chip AFF (Adaptive Field/Frame) encoder when the firmware was completed.
> AV1 fixed a structural problem in the ecosystem at the time, but the work isn’t finished. Video demand keeps rising, and the next generation of open codecs must remain competitive.
> AOMedia is working on the upcoming release of AV2. It will feature meaningfully better compression than AV1, much higher efficiency for screen/graphical content, alpha channel support, and more.
That's all nice and good, but please make AV1 as widespread as H264, so I can just import it into every editing program instead of having Adobe Premiere Pro complain about not knowing the format (well, I personally prefer DaVinci Resolve, but my editor is on Adobe). But yeah, I think AV1 is great; I'd just like support for it across the board: hardware decoding and encoding on every device, plus Kdenlive and Resolve and all the other editors on the software side.
It's all about hardware support, really. AV1 is pretty new (2018); give it some time. E.g. Nvidia has supported decoding since the 3xxx generation, and encoding only since the 4xxx generation.
I still vividly remember what a clusterfk H264 support was on mobile devices just ten years ago, circa 2010-2015. The AVC spec was published in 2003 and the High profiles were standardised in 2005, but it was universally supported only from ~2015. I personally had a 2011 Tegra 2 tablet which supported H264, but not the High profiles.
The AdobeWebM plugin is adding AV1 support for webm files in Premiere and After Effects, btw. It already has VP9 and VP8 support! But yeah, I'm hoping it becomes ubiquitous in the future.
Some Sony TVs only hardware-accelerate AV1 content through streaming services, not through Blu-ray and USB...
Right now, I have a drive filled with H264 content that I can hook up to any old hotel TV and play back. It's gonna be a while before I switch to AV1. And isn't H264 largely out of patent by now anyway?
Tbf, AOMedia doesn't really make this call. The Steam Deck, for example, doesn't do AV1 natively. It could, but Valve has so far decided not to implement it. I don't know how many other devices and systems exist that could do AV1 but don't, but to get this level of support we really need to pressure these companies.
Are you aware that you're barking up the wrong tree? AOMedia already made AV1 a free and open standard; if Adobe doesn't want to do an engineer's afternoon worth of work to link a C library into their executable, that's on them, not AOMedia.
This is less a plea to AOMedia (it's not like they can't work on AV2 while AV1 is out there without full adoption), and more to the industry as a whole - otherwise we'll still see H264 widespread in 2030 and beyond, because there's not enough interest in modern codecs even when they don't have licensing bullshit going on.
"Shouldn't" will only ever be enforced when it can't be. There's a lot of editing that doesn't require much reverse playback, which is where long-GOP really falls down; to the point that the slight in-session pain is worth it versus delaying the session's start for an I-frame transcode.
The real killer of NLEs[0] is variable framerate. Long GOPs just give you higher playhead latencies, but it's still possible[1] for the NLE to actually edit video in that state. Your computer has to be fast enough or it'll be miserable, but in contrast, variable-framerate footage will immediately cause audio desync.
Of course, this distinction is moot, since I've yet to see a (consumer) video source that provides fixed-framerate footage. If anyone wants to explain why, I'm all ears. As a result, I habitually re-encode everything before taking it into a video editor as a precaution, and once you're doing that, capping the GOP length is a no-brainer.
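For what it's worth, that pre-edit normalization step can be a one-liner around ffmpeg. A minimal sketch, assuming ffmpeg is on PATH; the filenames and the 30 fps / 30-frame GOP targets are just illustrative, and newer ffmpeg builds spell `-vsync cfr` as `-fps_mode cfr`:

```python
import subprocess

def normalize_for_editing(src: str, dst: str, fps: int = 30, gop: int = 30) -> None:
    """Re-encode variable-framerate footage to constant framerate with a
    capped GOP so an NLE can scrub and reverse without long decode runs."""
    subprocess.run([
        "ffmpeg", "-i", src,
        "-vsync", "cfr",      # duplicate/drop frames to hold a constant framerate
        "-r", str(fps),       # target output framerate
        "-c:v", "libx264",
        "-g", str(gop),       # force an I-frame at least every `gop` frames
        "-c:a", "aac",
        dst,
    ], check=True)

normalize_for_editing("camera_clip.mp4", "edit_ready.mp4")
```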
I recently learnt that one can download the WOFF files from any website that uses them and convert them into TTF files (using an online tool like CloudConvert) to use them in, say, an MS Word document or PowerPoint slide deck.
This allowed me to create a custom PowerPoint theme/template that captures the essence of a particular brand.
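If you'd rather not upload fonts to an online converter, the same conversion works locally with Python's fontTools library. A small sketch, assuming `pip install fonttools` (plus `brotli` for WOFF2) and placeholder filenames:

```python
from fontTools.ttLib import TTFont

# Load the downloaded web font; fontTools reads WOFF directly,
# and WOFF2 too if the `brotli` package is installed.
font = TTFont("brand-font.woff")

# Clearing the flavor makes save() write a plain TTF instead of WOFF.
font.flavor = None
font.save("brand-font.ttf")
```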
Worth noting that web fonts are often split up across multiple files for sets of codepoints and font weights/styles, so depending on the language you're writing in, a single WOFF file might be missing a few letters.
I think the compression ratio is not as important as being open-source and patent-free. I would prefer an open codec even if it produced 20-50% larger videos; the difference between 1 and 1.5 Mbit/s isn't that big. And if it matters that much to you, then you should be paying for the patents, not everyone else (for example, by using a codec which is free to decode but whose encoding software is paid).
To you, a 33% drop from 1.5 to 1 is not much, but when you're paying for bandwidth usage, that is a pretty good bit of savings. I doubt anyone legitimate who's pushing that kind of data isn't using a licensed encoder.
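Back-of-the-envelope numbers make the point. A sketch with assumed values (a 2-hour stream and roughly $0.02/GB CDN egress, both illustrative):

```python
# Savings from dropping a stream from 1.5 Mbit/s to 1.0 Mbit/s.
saved_mbit_per_s = 1.5 - 1.0
seconds = 2 * 3600                                         # one 2-hour viewing
gb_saved_per_view = saved_mbit_per_s * seconds / 8 / 1000  # Mbit -> MByte -> GB

cost_per_gb = 0.02      # assumed CDN egress price, USD
views = 10_000_000      # assumed number of views

print(f"{gb_saved_per_view:.2f} GB saved per view")                             # ~0.45 GB
print(f"${gb_saved_per_view * cost_per_gb * views:,.0f} per {views:,} views")   # ~$90,000
```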
Well, VP8 was only released as an open codec in 2010, and was the subject of patent lawsuits until late 2014.
In 2010 the majority of (YouTube and other) videos were still served as H.264, because no major browser supported VP8 back then, and the majority of video playback devices were already smartphones (without VP8 decoding capabilities).
iOS, for example, didn't support VP8 until iOS 12 in 2019; Firefox and MS IE only added it in 2011. Even Google only added VP8 to Chrome in September 2010.
Wrong. Google aggressively enabled VP8 on YouTube even when there was very little hardware decode. It saved a few megabits per stream on their side and nuked everyone's battery, but hey, Google didn't give a hoot, because that was an externalized cost.
It's why the h264ify extension existed, and forced h264 was, at the time, a large part of the reason Safari had vastly superior battery life.
> In 2010... the majority of video playback devices were already smartphones
I find this extremely difficult to believe. In 2010 the only widely used smartphone would have been the iPhone. The Motorola Droid was the first widely marketed Android device in the US and was only launched in late 2009.
There are actually two Engineering Emmy Awards: the 'Primetime Engineering Emmy Awards' are given by ATAS (Academy of Television Arts and Sciences), while the 'Technology and Engineering Emmy Awards' are given by NATAS (National Academy of Television Arts and Sciences). Not confusing at all.
It really is amazing how far compression has come in the last decades. I would love to see a chart showing the progress, as I think quite a bit of it was very recent. At least, I know that the videos I make on a GoPro can't be viewed without effort on a Chromebook.
On a similar note Matt Parker recently released a video about Perlin Noise, which won an Oscar (for Technical Achievement) in 1996: https://www.youtube.com/watch?v=JrLSfSh43oA
What does Netflix have to do with the AV1 codec? While Netflix's Norkin has contributed some minor add-ons like film grain, the Daala folks should have been mentioned, along with the x264 guys who were at the origins of AV1 development, Google's VP9 guys for their contributions, and maybe Intel for HW porting, among others. Basically, whoever pays for the show gets to wear the crown. Nothing to see here…
> In 1990, both Ritchie and Thompson received the IEEE Richard W. Hamming Medal from the Institute of Electrical and Electronics Engineers (IEEE), "for the origination of the UNIX operating system and the C programming language".
> In 1997, both Ritchie and Thompson were made Fellows of the Computer History Museum, "for co-creation of the UNIX operating system, and for development of the C programming language."
> On April 21, 1999, Thompson and Ritchie jointly received the National Medal of Technology of 1998 from President Bill Clinton for co-inventing the UNIX operating system and the C programming language.
Video decoding on a general-purpose CPU is difficult, so most devices that can play video include some sort of hardware video decoding chip. If you want your video to play well, you need to deliver it in a format that can be decoded by that chip, on all the devices that you want to serve.
So it takes a long time to transition to a new codec - new devices need to ship with support for your new codec, and then you have to wait until old devices get lifecycled out before you can fully drop support for old codecs.
To this day no Apple TV boxes support hardware AV1 decode (which essentially means it's not supported). Only the latest Roku Ultra devices support it. So obviously Netflix, for example, can't switch everyone over to AV1 even if they want to.
These days, even phone-class CPUs can decode 4k video at playback rate, but they use a lot of power doing it. Not reasonable for battery-powered devices. For AC-powered devices, the problem might be heat dissipation, particularly for little streaming boxes with only passive cooling.
Would it be possible to just ship video streaming devices with a FPGA that can be updated to support whatever hardware accelerated codec is fashionable?
Hardware acceleration has been a thing since...forever. Video in general is a balancing act between storage, bandwidth, and quality. Video playback on computers is a balancing act between storage, bandwidth, power, and cost.
Video is naturally large. You've got all the pixels in a frame, tens of frames every second, and however many bits per pixel. All those frames need to be decoded and displayed in order and within fixed time constraints. If you drop frames or deliver them slowly no one is happy watching the video.
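To put rough numbers on "naturally large", here's a sketch for 1080p30 with 8-bit 4:2:0 sampling (an average of 12 bits per pixel); the 5 Mbit/s comparison point is an illustrative streaming bitrate:

```python
# Uncompressed bitrate for 1080p30 video with 8-bit 4:2:0 chroma subsampling.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 12     # 8 bits luma + two chroma planes at quarter resolution
raw_bps = width * height * fps * bits_per_pixel

print(f"{raw_bps / 1e6:.0f} Mbit/s uncompressed")                  # ~746 Mbit/s
print(f"~{raw_bps / 5e6:.0f}x compression for a 5 Mbit/s stream")  # ~149x
```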
If you stick to video that can be effectively decoded on a general-purpose CPU with no acceleration, you're never going to keep up with the demands of actual users. It's also going to use a lot more power than an ASIC that is purpose-built to decode the video. And if you decide to use the beefiest CPU in order to handle higher-quality video under some power envelope, your costs are going to increase, making the whole venture untenable.
The whole video/movie industry is rife with mature, hardware-implemented patents. The kind that survive challenges. They are also owned by deep pockets (not fly-by-night patent trolls). Fortunately, the industry is mature enough, that some of the older patents are aging out.
The image processing industry is similar, but not as mature. I hated dealing with patents when I was writing image processing stuff.
For whatever reason, the file sharing community seems to strongly prefer H.265 over AV1. I am assuming that either its compression at a preferred quality or its quality at preferred bitrates is marginally better than AV1's, and that people who don't care about copyright also don't care about patents.
I assume "file sharing community" is the euphemism for "movie pirating community", but I apologize if I made the wrong assumption.
If that's a correct guess -- I think the biggest reason is actually hardware support. When you have pirated movies, where are you going to play them? A TV. Your TV or TV box very likely supports H265, but very few have AV1 support.
What is odd is that the power-seeders, the ones who actually re-encode, don't do both. You see H264 and H265 released alongside each other. I'm surprised it doesn't go H265/AV1 at this point.
From a quick skim of hardware support on Wikipedia, it looks like encoding support for H.265 showed up in NVIDIA, AMD, et al. around 2015, whereas AV1 support didn't arrive until 2022.
So, the apparent preference could simply be 5+ years more time to do hardware-assisted transcoding.
Timing. Patent-encumbered codecs get a foothold through physical media and broadcast first. Then hardware manufacturers license them. Then everyone is forced to license them. Free codecs have a longer path to market, as they need to avoid the patents and then get hardware and software support.
Backwards compatibility. If you host a lot of compressed video content, you probably didn't store the uncompressed versions so any new encoding is a loss of fidelity. Even if you were willing to take that gamble, you have to wait until all your users are on a modern enough browser to use the new codec. Frankly, the winner that takes all is H.264 because it's already everywhere.
AV1 is still worse in practice than H.265 for high-fidelity (high bitrate) encoding. It's being improved, but even at high bitrates it has a tendency to blur.
> AV1 is also the foundation for the image format AVIF, which is deployed across browsers and provides excellent compression for still and animated images
I wish adoption was better. When will Wikipedia support AVIF?
Way wider browser adoption, plus the potential to evolve together with AV#: since AVIF is a container format, it shouldn't be limited to an AV1 base. I.e., sites just need to adopt AVIF, and then I expect a seamless ability to start using AV2 (and onward) without sites needing another wave of adding a new MIME type and so on, which seems to be a huge hurdle.
It doesn't matter that AVIF uses the same container for AV1- or AV2-based encoding; if the browsers don't have the right decoder for it, they can't decode it.
An example of this is MP4: browsers can decode videos encoded with H264 in MP4 containers, but not H265, even though it uses the same container. The container is one thing and the codec is another; they're related, but they aren't the same.
Notably, AVIF uses the HEIF container, like HEIC. HEIF is an extension of ISOBMFF, and MP4 files are another example of an ISOBMFF format. I'm surprised how ubiquitous that container format is becoming; webm uses the Matroska/MKV format, but I bet if it were created today it would likely have been ISOBMFF-derived.
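The container/codec split is easy to see first-hand: every ISOBMFF file starts with an `ftyp` box whose major brand names the flavor, so MP4, HEIC, and AVIF files all begin the same way. A minimal sketch (the filename is a placeholder):

```python
import struct

def read_major_brand(path: str) -> str:
    """Return the major brand from the leading ftyp box of an ISOBMFF file,
    e.g. 'isom'/'mp42' for MP4, 'heic' for HEIC, 'avif' for AVIF."""
    with open(path, "rb") as f:
        size, box_type = struct.unpack(">I4s", f.read(8))  # box header: size + type
        if box_type != b"ftyp":
            raise ValueError("no leading ftyp box; not an ISOBMFF file")
        return f.read(4).decode("ascii")                   # major brand field

print(read_major_brand("image.avif"))  # -> 'avif'
```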
Browser adoption happens way faster than site adoption (as AVIF itself clearly demonstrates), so the same container does matter, by reducing friction on the site-adoption side.
I.e., once browser adoption happens, you'll be able to use AV2 in AVIF without the likes of Wikipedia taking another decade after that to add an additional MIME type to their supported images.
My Fujifilm X100VI shoots HEIC/HEIF, which is like the AVIF of H.265/HEVC. It seems to offer better quality than JPEG at a smaller file size. The iPhone does this too. Why are you calling it an abomination?
On the web? Good luck. AVIF is considered a baseline browser feature as of last year by the W3C; whereas JPEG XL is not fully supported by any stable browser release whatsoever, only Safari has been shipping partial support.
https://www.w6rz.net/DCP_1235.JPG
https://www.w6rz.net/videorisc.png