
The best graphics cards aren't just for playing games. Artificial intelligence training and inference, professional applications, and video encoding and decoding can all benefit from a better GPU. Yes, games still get the most attention, but we like to look at the other aspects as well. Here we'll focus specifically on the video encoding performance and quality you can expect from various generations of GPU.
Generally speaking, the video encoding/decoding blocks for each generation of GPU will all perform the same, with minor variance depending on clock speeds for the video block. We checked the RTX 3090 Ti and RTX 3050 as an example — the fastest and slowest GPUs from Nvidia's Ampere RTX 30-series generation — and found effectively no difference. Thankfully, that leaves us with fewer GPUs to test than would otherwise be required.
We'll test Nvidia's RTX 4090, RTX 3090, and GTX 1650 from team green, which covers the Ada Lovelace, Turing/Ampere (functionally identical), and Pascal-era video encoders. For Intel, we're looking at desktop GPUs, with the Arc A770 as well as the integrated UHD 770. AMD ends up with the widest spread, at least in terms of speed, so we ended up testing the RX 7900 XTX, RX 6900 XT, RX 5700 XT, RX Vega 56, and RX 590. We also wanted to check how the GPU encoders fare against CPU-based software encoding, and for this we used the Core i9-12900K and Core i9-13900K.
Update 3/09/2023: The initial VMAF scores were calculated backward, meaning we used the "distorted" video as the "reference" video, and vice versa. This, needless to say, led to much higher VMAF scores than intended! We've since recalculated all of the VMAF scores. If you're wondering, this is quite computationally intensive and takes about half an hour per GPU on a Core i9-13900K.
The good news is that our charts are now corrected. The bad news (besides the initially incorrect results being published) is that we killed off the initial torrent file and are updating with a new file that includes the logs from the new VMAF calculations. Also, scores got lower, so if you were thinking 4K 60 fps encodes at 16Mbps would mostly match the original quality, that's only selectively the case (depending on the source video).
Video Encoding Test Setup
Most of our testing was done using the same hardware we use for our latest graphics card reviews, but we also ran the CPU test on the 12900K PC that powers our 2022 GPU benchmarks hierarchy. As a more strenuous CPU encoding test, we also ran the 13900K with a higher-quality encoding preset, but more on that in a moment.
TOM’S HARDWARE TEST EQUIPMENT
For our test software, we've found ffmpeg nightly to be the best current option. It supports all the latest AMD, Intel, and Nvidia video encoders, can be configured relatively easily, and it also provides the VMAF (Video Multimethod Assessment Fusion) functionality that we're using to compare video encoding quality. We did, however, have to use the last official release, 5.1.2, for our Nvidia Pascal tests (the nightly build failed on HEVC encoding).
We're doing single-pass encoding for all of these tests, as we're using the hardware provided by the various GPUs and it isn't always capable of handling more complex encoding instructions. GPU video encoding is typically used for things like livestreaming of gameplay, whereas if you want the best quality you'd generally need to opt for CPU-based encoding with a high CRF (Constant Rate Factor) of 17 or 18, though that of course results in much larger files and higher average bitrates. There are still plenty of options worth discussing, however.
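For reference, a CPU-based archival-quality encode of that sort might look like the following sketch. The filenames are hypothetical, and we print the command rather than execute it since the source file is a placeholder — drop the `echo` to actually run it on a real capture (this assumes an ffmpeg build with libx264):

```shell
# Hypothetical input/output names for illustration only.
SRC="gameplay-1080p60.mp4"
OUT="gameplay-crf18.mp4"
# CRF 17-18 targets near-transparent quality; bitrate and file size float accordingly.
echo ffmpeg -i "$SRC" -c:v libx264 -preset medium -crf 18 -c:a copy -y "$OUT"
```

Unlike the fixed-bitrate GPU tests below, a CRF encode lets the encoder spend whatever bits it needs to hit a quality target, which is why it isn't directly comparable to livestreaming scenarios.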
AMD, Intel, and Nvidia all have different "presets" for quality, but what exactly those presets are or what they do isn't always clear. Nvidia's NVENC in ffmpeg uses "p4" as its default, and switching to "p7" (maximum quality) did little for the VMAF scores while dropping encoding performance by anywhere from 30 to 50 percent. AMD opts for a "-quality" setting of "speed" for its encoder, but we also tested with "balanced" — and like Nvidia, the maximum setting of "quality" reduced performance a lot but only improved VMAF scores by 1~2 percent. Finally, Intel seems to use a preset of "medium," and we found that to be a good choice — "veryslow" took almost twice as long to encode with little improvement in quality, while "veryfast" was moderately faster but degraded quality quite a bit.
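As a concrete example, the only difference between our default and maximum-quality NVENC runs is the preset flag. This is a sketch with hypothetical filenames, printed rather than executed:

```shell
# p4 is ffmpeg's default preset for NVENC; p7 is maximum quality (and much slower).
for PRESET in p4 p7; do
  echo ffmpeg -i source.mp4 -c:v h264_nvenc -preset "$PRESET" -b:v 8M -y "out-$PRESET.mp4"
done
```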
Ultimately, we opted for two sets of testing. First, we have the default encoder settings for each GPU, where the only thing we specified was the target bitrate. Even then, there are slight variations in encoded file sizes (about a +/-5% spread). Second, after consulting with the ffmpeg subreddit, we tried to tune the GPUs for slightly more consistent encoding settings, specifying a GOP size equal to two seconds ("-g 120" for our 60 fps videos). AMD was the biggest beneficiary of our tuning, trading speed for roughly 5-10 percent higher VMAF scores. But as you'll see, AMD still trailed the other GPUs.
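The GOP size is just the frame rate multiplied by the desired keyframe interval in seconds, which is where "-g 120" comes from for our 60 fps captures:

```shell
FPS=60
KEYFRAME_SECONDS=2
# 60 fps * 2 s = 120 frames between keyframes, hence "-g 120".
GOP=$((FPS * KEYFRAME_SECONDS))
echo "-g $GOP"
```

For 30 fps footage the same two-second interval would instead give "-g 60".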
There are many other potential tuning parameters, some of which can change things quite a bit, others that seem to accomplish very little. We're not targeting archival quality, so we've opted for faster presets that use the GPUs, but we may revisit things in the future. Sound off in our comments if you have alternative recommendations for the best settings to use on the various GPUs, with an explanation of what the settings do. It's also unclear how the ffmpeg settings and quality compare to other potential encoding schemes, but that's beyond the scope of this testing.
Here are the settings we used, both for the default encoding as well as for the "tuned" encoding.
AMD:
Default: ffmpeg -i [source] -c:v [h264/hevc/av1]_amf -b:v [bitrate] -y [output]
Tuned: ffmpeg -i [source] -c:v [h264/hevc/av1]_amf -b:v [bitrate] -g 120 -quality balanced -y [output]
Intel:
Default: ffmpeg -i [source] -c:v [h264/hevc/av1]_qsv -b:v [bitrate] -y [output]
Tuned: ffmpeg -i [source] -c:v [h264/hevc/av1]_qsv -b:v [bitrate] -g 120 -preset medium -y [output]
Nvidia:
Default: ffmpeg -i [source] -c:v [h264/hevc/av1]_nvenc -b:v [bitrate] -y [output]
Tuned: ffmpeg -i [source] -c:v [h264/hevc/av1]_nvenc -b:v [bitrate] -g 120 -no-scenecut 1 -y [output]
Most of our attempts at "tuned" settings didn't actually improve quality or encoding speed, and some settings seemed to cause ffmpeg (or our test PC) to just break down completely. The main thing about the above settings is that they keep the keyframe interval constant, and potentially provide slightly higher image quality.
Again, if you have some better settings you'd recommend, post them in the comments; we're happy to give them a shot. The main thing is that we want bitrates to stay the same, and we want reasonable encoding speeds of at least real-time (meaning, 60 fps or more) for the latest generation GPUs. That said, for now our "tuned" settings ended up being so close to the default settings, except on the AMD GPUs, that we're just going to show those charts.
With the preamble out of the way, here are the results. We've got four test videos, taken from captured gameplay of Borderlands 3, Far Cry 6, and Watch Dogs Legion. We ran tests at 1080p and 4K for Borderlands 3, and at 4K with the other two games. We also have three codecs: H.264/AVC, H.265/HEVC, and AV1. We'll have two charts for each setting and codec, comparing quality using VMAF and showing the encoding performance.
For reference, VMAF scores follow a scale from 0 to 100, where 20 is "bad," 40 equates to "poor," 60 rates as "fair," 80 as "good," and 100 is "excellent." In general, scores of 90 or above are desirable, with 95 or higher being largely indistinguishable from the original source material.
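For those wanting to reproduce the quality comparison, scoring an encode against its source with ffmpeg's libvmaf filter looks roughly like the following sketch — hypothetical filenames, command printed rather than executed, and assuming an ffmpeg build with libvmaf enabled. Note the input order: the distorted encode comes first, the reference source second.

```shell
DISTORTED="encode-8mbps.mp4"    # the encoded file being scored
REFERENCE="source-100mbps.mp4"  # the original high-bitrate capture
# Swapping these two inputs is exactly the mistake that inflated our initial scores.
echo ffmpeg -i "$DISTORTED" -i "$REFERENCE" -lavfi libvmaf -f null -
```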
Video Encoding Quality and Performance at 1080p: Borderlands 3
Starting out with a video of Borderlands 3 at 1080p, there are a few interesting trends that we'll see elsewhere. We're discussing the "tuned" results, as they're slightly more apples to apples among the GPUs. Let's start with AVC/H.264, which remains the most popular codec. Though it's been around for nearly two decades and has since been surpassed in quality, its effectively royalty-free status made for widespread adoption.
AMD's GPUs rank at the bottom of our charts in this test, with the Polaris, Vega, and RDNA 1 generations of hardware all performing about the same in terms of quality. RDNA 2 improved the encoding a bit, mostly with quality at lower bitrates increasing by about 10%. RDNA 3 appears to use the same logic as RDNA 2 for H.264, with tied scores on the 6900 XT and 7900 XTX that are a bit higher than the other AMD cards at 3Mbps, but the 6Mbps and 8Mbps results are effectively unchanged.
Nvidia's H.264 implementations come out as the top options in terms of quality, with Intel's Arc just a hair behind — you'd be hard pressed to tell the difference in practice. The Turing, Ampere, and Ada GPUs do just a tiny bit better than the older Pascal video encoder, but there are no major improvements in quality.
With our chosen "medium" settings on the CPU encoding, the libx264 encoder ends up roughly at the level of Intel's UHD 770 (Xe-LP) graphics, just a touch behind Nvidia's older Pascal-era encoder, at least in these 1080p tests.
As far as performance goes, which is something of a secondary concern provided you can break 60 fps for livestreaming, the RTX 4090 was fastest at nearly 500 fps, about 15% ahead of the RTX 2080 Ti, RTX 3090, and RX 7900 XTX. Intel Arc comes in third place, then Nvidia's Pascal in the GTX 1650, followed by the RX 6900 XT, and then the GTX 1080 Ti's take on Pascal — note that the GTX 1080 Ti and GTX 1650 had identical quality results, so the difference is just in video engine clocks.
Interestingly, the Core i9-13900K can encode at 1080p faster than the old RX Vega 56, which in turn is about as fast as the UHD integrated graphics in the Core i9-13900K. AMD's old RX 590 brings up the caboose, at just over 100 fps. Then again, that probably wasn't too shabby back in the 2016–2018 era.
Switching to HEVC provides a solid bump in image quality, and it's been around long enough now that almost any modern PC should have decent GPU encoding support. The only thing really hindering its adoption in more applications is royalties, and a lot of software has opted to stick with H.264 to avoid them.
As far as the rankings go, AMD's various GPUs again fall below the rest of the options in this test, but they're quite a bit closer than in H.264. The RDNA 3 video engine is slightly better than the RDNA 1/2 version, but interestingly the older Polaris and Vega GPUs mostly match RDNA 3 here.
Nvidia comes in second, with the Turing, Ampere, and Ada video engines mostly tied on HEVC quality. Intel's previous generation Xe-LP architecture in the UHD 770 also matches the newer Nvidia GPUs, followed by Nvidia's older Pascal generation and then the CPU-based libx265 "medium" results.
But the big winner for HEVC quality ends up being Intel's Arc architecture. It's not a huge win, but it clearly comes out ahead of the other options at every bitrate. With an 8Mbps bitrate, many of the cards also reach the point of being very close (perceptually) to the original source material, with the Arc reaching a VMAF score of 92.1.
For our tested settings, the CPU and Nvidia Pascal results are mostly tied, coming in 3~5% above the 7900 XTX. Intel's Arc A770 just barely edges out the RTX 4090/3090 in pure numbers, with a 0.5–1.9% lead depending on the bitrate. Also note that with scores of 95–96 at 8Mbps, the Arc, Ada, and Ampere GPUs all effectively achieve encodes that would be nearly indistinguishable from the source material.
Performance is a bit interesting, especially looking at the generations of hardware. The older GPU models were often faster at HEVC than H.264 encoding, sometimes by a fairly sizable margin — look at the RX 590, GTX 1080 Ti, and GTX 1650. But the RTX 2080 Ti, RTX 3090, and RTX 4090 are faster at H.264, so the Turing generation and later seem to have deprioritized HEVC speed compared to Pascal. Arc meanwhile delivers top performance with HEVC, while the Xe-LP UHD 770 does much better with H.264.
There's a big shift in overall quality, particularly at the lower 3Mbps bitrate. The worst VMAF results are now in the mid 60s, with the CPU, Nvidia GPUs, and Intel GPUs all staying above 80. But let's see if AV1 can do any better.
Is AV1 better than HEVC on quality? Perhaps, depending on which GPU (or CPU) you're using. Overall, AV1 and HEVC deliver nearly equal quality, at least for the four samples we have. There are some differences, but a lot of that comes down to the various implementations of the codecs.
AMD's RDNA 3, for example, does better with AV1 than with HEVC by 1–2 points. Nvidia's Ada cards achieve their best results so far, with about a 2 point improvement over HEVC. Intel's Arc GPUs go the other way and score 2–3 points lower in AV1 versus HEVC. As for the libsvtav1 CPU results, they're mostly tied with HEVC at 8Mbps, but lead by 1.6 points at 6Mbps and have a relatively substantial 5 point lead at 3Mbps.
Given the price of RX 7900-series and RTX 40-series cards, the only practical AV1 option right now for a lot of people will be the Arc GPUs. The A380, incidentally, delivered identical quality results to the A770 (down to six decimal places), and it was actually slightly faster — probably because the A380 card from Gunnir that we used has a factory overclock. Alternatively, you could opt for CPU-based AV1 encoding, which nearly matches the GPU encoders and can still be done at over 200 fps on a 13900K.
That's assuming you even want to bother with AV1, which at this stage still doesn't have as widespread support as HEVC. (Try dropping an AV1 file into Adobe Premiere, for example.) It's also interesting how demanding HEVC encoding can be on the CPU, where AV1 by comparison appears to be quite "easy." In some cases (at 4K), CPU encoding of AV1 was actually faster than H.264 encoding, though that will obviously depend on the settings used for each, as well as your particular CPU.
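A CPU AV1 encode along the lines of what we tested might look like the following sketch — hypothetical filenames, command printed rather than executed, assuming an ffmpeg build with libsvtav1 (SVT-AV1's "-preset" scale runs 0–13, with higher numbers being faster):

```shell
SRC="gameplay-4k60.mp4"
# Preset 8 is a common speed/quality middle ground for SVT-AV1; tune to taste.
echo ffmpeg -i "$SRC" -c:v libsvtav1 -preset 8 -b:v 8M -g 120 -y "gameplay-av1.mkv"
```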
Video Encoding Quality and Performance at 4K: Borderlands 3
Moving up to 4K encoding, the bitrate demands with H.264 become significantly higher than at 1080p. With 8Mbps, Intel's Arc claims top honors with a score of just 65, while Nvidia's Ada/Ampere/Turing GPUs (which had effectively the same VMAF results) land at around 57, right with the Intel UHD 770. Nvidia's Pascal generation encoder gets 54, then the CPU encoder comes next at 48. Finally, AMD's RDNA 2/3 get 44 points, while AMD's older GPUs are rather abysmal at just 33 points.
Upping the bitrate to 12Mbps and 16Mbps helps quite a bit, naturally, though even the best result for 4K H.264 encoding at 16Mbps is still only 80 — a "good" result but not any better than that. You could get by with 16Mbps at 4K and 60 fps in a pinch, but really you want a modern codec to better handle higher resolutions with reasonable bitrates.
AMD made improvements with RDNA 2, while RDNA 3 appears to use the same core algorithm (identical scores between the 6900 XT and 7900 XTX), just with higher performance. But the gains over the earlier AMD GPUs mostly materialize at the lower bitrates; by 16Mbps, the latest 7900 XTX only scores 1–2 points higher than the worst-case Vega GPU.
Encoding performance at 4K is far more demanding, as expected. Most of the solutions are roughly 1/3 as fast as at 1080p, and in some cases it can be closer to 1/4 the performance. AMD's older Polaris GPUs can't manage even 30 fps based on our testing, while the Vega cards fall well below 60 fps. The Core i9-13900K does plug along at around 80 fps, though if you're playing a game you may not want to dedicate that much CPU horsepower to the task of video encoding.
HEVC and AV1 encoding were designed to scale up to 4K and beyond, and our charts prove the point nicely. HEVC and AV1 at 8Mbps can generally beat the quality of H.264 at 12Mbps. All of the GPUs (and the CPU) can also break 80 on the VMAF scale at 16Mbps, with the top results flirting with scores of 90.
If you're after image quality, AMD's latest RDNA 3 encoder still falls behind. It can be quite fast at encoding, but even Nvidia's Pascal-era hardware generally delivers superior results compared to anything AMD has, at least with our ffmpeg testing. Even the GTX 1650 with HEVC can beat AMD's AV1 scores at every bitrate, though the 7900 XTX does score 2–3 points higher with AV1 than it does with HEVC. Nvidia's RTX 40-series encoder meanwhile wins in absolute overall quality, with its AV1 results edging past Intel's HEVC results at 4K at the various bitrates.
AMD has informed us that it's working with ffmpeg to get some quality improvements into the code, and we'll have to see how that goes. We don't know if it will improve quality a lot, bringing AMD up to par with Nvidia, or if it will only be one or two points. Still, every little bit of improvement is good.
HEVC and AV1 aren't really any better than H.264 from a performance perspective. AMD's 7900 XTX does hit 200 fps with AV1, the fastest of the bunch by a fairly large margin, which again leaves us some hope for quality improvements. Otherwise, for 4K HEVC, you'll want a Pascal or later Nvidia card, an RDNA or later AMD card, or Intel's Arc to handle 4K encoding at more than 60 fps (assuming your GPU can actually generate frames that quickly for livestreaming).
Also notice the CPU results for HEVC versus AV1 when it comes to performance. Some of that will be due to the chosen encoders (libsvtav1 and libx265), as well as the chosen presets. Still, considering how close the VMAF results are for HEVC and AV1 encoding via the CPU, with more than double the throughput you can certainly make a good argument for AV1 encoding here — and it's actually faster than our libx264 results, possibly due to better multi-threaded scaling in the algorithm.
Video Encoding Quality and Performance at 4K: Far Cry 6
The results with our other two gaming videos at 4K aren't significantly different from what we saw with Borderlands 3 at 4K. Far Cry 6 ends up with higher VMAF scores across all three codecs, with some minor differences in performance, but that's about all we have to add to the story.
Given we haven't spent much time looking at it so far, let's take a moment to discuss Intel's integrated UHD Graphics 770 in more detail — the integrated graphics on 12th Gen Alder Lake and 13th Gen Raptor Lake CPUs. The quality is a bit lower than Arc, with the H.264 results generally falling between Nvidia's Pascal and the more recent Turing/Ampere/Ada generations in terms of VMAF scores. HEVC encoding on the other hand is only 1–2 points behind Arc and the Nvidia GPUs in general.
Performance is a different story, and it's a lot lower than we expected — far less than half of what Arc can do. Granted, Arc has two video engines, but the UHD 770 only manages 44–48 fps at 4K with H.264, and that drops to 24–27 fps with HEVC encoding. For HEVC encoding, the 12900K CPU encoding was faster, and with AV1 it was about three times as fast as the UHD 770's HEVC result while still delivering better fidelity.
Intel does support some "enhanced" encoding modes if you have both the UHD 770 and an Arc GPU. These modes are called Deep Link, but we suspect, given the slightly lower quality and performance of the Xe graphics solutions, that they'll mostly be of benefit in situations where maximum performance matters more than quality.
Video Encoding Quality and Performance at 4K: Watch Dogs Legion
Watch Dogs Legion repeats the story, but again with higher VMAF scores compared to Far Cry 6. That's probably because there's less rapid camera movement, so the frame-to-frame changes are smaller, which allows for better compression and thus higher overall quality at the same bitrate.
Performance ends up nearly the same as Far Cry 6 in general, and there's some variability between runs that we're not really taking into account — we'd estimate about a 3% margin of error for the FPS, since we only did each encode once per GPU.
As an alternative view of all the charts, here's the raw performance and quality data summarized in table format. Note that things are grouped by bitrate and then codec, so it's perhaps not that easy to parse, but this will give you all the exact VMAF scores if you're curious — and you can also see how little things changed between some of the hardware generations in terms of quality.
We also have a second table, this time colored based on performance and quality relative to the best result for each codec and bitrate. One interesting aspect that's readily visible is how fast AMD's encoding hardware is at 4K with the RX 7900 XTX, and similarly you can see how fast the RTX 40-series encoder is at 1080p. Quality meanwhile favors the 40-series and the Arc GPUs.
Test Videos and Log Files
If you'd like to see the original video files, I've created this personally hosted BitTorrent magnet link:
magnet:?xt=urn:btih:df31e425ac4ca36b946f514c070f94283cc07dd3&dn=Tom%27s%20Hardware%20Video%20Encoding%20Tests
UPDATE: This is the updated torrent, with corrected VMAF calculations. It also includes the log files from the initial encodes with incorrect VMAF, as those files contain the FPS results from the encoding runs.
In the torrent download, each GPU also has two log files. "Encodes-FPS-WrongVMAF" text files are the logs from the initial encoding tests, with the backward VMAF calculations. "Encodes-CorrectVMAF" files are only the corrected VMAF calculations (with placeholder "—" values for GPUs that don't support AV1 encoding, to make our lives a bit easier when it comes to importing the data used for generating the charts). For the curious, you can see the before and after VMAF results as well as the encoding speed FPS values by issuing the following commands from a command prompt in the appropriate folder:
for /f "usebackq tokens=*" %i in (`dir /b *Encodes*.txt`) do type "%i" | find /i "vmaf score"
for /f "usebackq tokens=*" %i in (`dir /b *Encodes*.txt`) do type "%i" | find /i "fps=" | find /v "N/A"
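On Linux or macOS, the equivalent of those Windows loops is a simple grep over the same log files. This sketch runs against a tiny mock log (with hypothetical contents) so the shape of the output is clear — the real logs from the torrent obviously have many more lines:

```shell
# Create a mock log file matching the torrent's naming convention.
cat > RTX4090-Encodes-CorrectVMAF.txt <<'EOF'
frame= 3600 fps=498 q=22.0 size=   24576kB time=00:01:00.00
VMAF score: 92.1
EOF
# Pull out the VMAF scores and encoding FPS, mirroring the Windows commands above.
grep -ih "vmaf score" ./*Encodes*.txt
grep -ih "fps=" ./*Encodes*.txt | grep -v "N/A"
```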
If you'd like to do your own testing, the above BitTorrent magnet link will be kept active over the coming months — join and share if you can, as it's seeding from my own home network and I'm limited to 20Mbps upstream. If you previously downloaded my Arc A380 testing, note that most of the files are completely new, as I recorded new videos at a higher 100Mbps bitrate for these tests. I believe I also swapped the VMAF comparisons in that review, so these new results take precedence.
The files are all spread across folders, so you can download just the pieces you want if you're not interested in the complete collection. And if you do want to enjoy all 350+ videos for whatever reason, it's a 16.4GiB download. Cheers!
For those who don't want to download all the files, I'll provide a limited set of screen captures from the Borderlands 3 1080p and 4K videos for reference below. I'm only doing screens from recent GPUs, as otherwise this gets to be very time consuming.
Video Encoding Image Quality Screenshots
The above gallery shows screen grabs of just one frame from the AMD RX 7900 XTX, Intel Arc A770, Nvidia RTX 4090, and CPU encodes on the Core i9-13900K. Of course a single frame isn't the whole point, and there can be differences in where keyframes are placed, but that's why we used the "-g 120" option — so that, in theory, all of the encoders are putting keyframes in the same spots. Anyway, if you want to flip through them, we recommend viewing the full images in separate tabs.
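For anyone wanting to grab comparable frames from their own encodes, the screen captures can be produced with ffmpeg as well — a sketch with a hypothetical filename and timestamp, printed rather than executed:

```shell
ENCODE="bl3-1080p-3mbps.mp4"
TIMESTAMP="00:00:30"   # hypothetical; pick any point in the clip
# -ss before -i seeks quickly to the timestamp; -frames:v 1 writes a single frame as PNG.
echo ffmpeg -ss "$TIMESTAMP" -i "$ENCODE" -frames:v 1 "frame-grab.png"
```

Using the same timestamp across every encode of the same source keeps the comparison fair.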
We've only elected to do screen grabs for the lowest bitrates, and that alone took a lot of time. You can at least get some sense of what the encoded videos look like, but the VMAF scores arguably provide a better guide to what the overall experience of watching a stream using the specified encoder would be like.
Because we're only doing the lowest bitrates from the selected resolutions, note that the VMAF results range from "poor" to "fair" — the higher bitrate videos definitely look much better than what we're showing here.
Video Encoding Showdown: Final Thoughts
After running and rerunning the various encodes multiple times, we've done our best to try to level the playing field, but it's still quite bumpy in places. Maybe there are some options that can improve quality without sacrificing speed that we're not familiar with, but if you're not planning on putting a ton of time into figuring such things out, these results should provide a good baseline of what to expect.
For all the hype about AV1 encoding, in practice it really doesn't look or feel that different from HEVC. The one real advantage is that AV1 is supposed to be royalty free, but if you're archiving your own movies you can certainly stick with HEVC and you won't miss out on much if anything. Maybe AV1 will take over going forward, just as H.264 became the de facto video standard for the past decade or more. Certainly it has some big names behind it, and now all three of the major PC GPU manufacturers have accelerated encoding support.
From an overall quality and performance perspective, Nvidia's latest Ada Lovelace NVENC hardware comes out as the winner with AV1 as the codec of choice, but right now it's only available with GPUs that start at $799 — for the RTX 4070 Ti. It should eventually arrive in 4060 and 4050 variants, and those are already shipping in laptops, though there's an important caveat: the 40-series cards with 12GB or more VRAM have dual NVENC blocks, while the other models will only have a single encoder block, which could mean about half the performance compared to our tests here.
Right behind Nvidia in terms of quality and performance, at least as far as video encoding is concerned, Intel's Arc GPUs are also great for streaming purposes. They actually have higher quality results with HEVC than AV1, mostly matching Nvidia's 40-series. You can also use them for archiving, sure, but that's likely not the key draw. Nvidia definitely supports more options for tuning, however, and seems to be getting more software support as well.
AMD's GPUs meanwhile continue to lag behind the competition. The RDNA 3-based RX 7900 cards deliver the highest-quality encodes we've seen from an AMD GPU so far, but that's not saying a lot. In fact, at least with the current version of ffmpeg, quality and performance are about on par with what you'd get from a GTX 10-series GPU back in 2016 — except without AV1 support, naturally, since that wasn't a thing back then.
We suspect very few people are going to buy a graphics card purely for its video encoding prowess, so check our GPU benchmarks and our Stable Diffusion tests to see how the various cards stack up in other areas. Next up, we need to run updated numbers for Stable Diffusion, and we're looking at some other AI benchmarks, but that's a story for another day.