Does AMD support on-die video decoding?

mrK

Do AMD processors support on-die video decoding in the same way as the Intel Core series? If so, can someone point me to an article that compares the two?


3 Answers to the Question

Rich Homolka

I assume you mean on-package, not on-die. The die is a term from chip manufacturing: when you see "on the same die," it means the circuits are printed onto the same piece of silicon.

It really depends on what level you mean. AMD and Intel have both had multimedia extensions since the Pentium days. These help with many math-intensive tasks, including video.

AMD does support some video acceleration. It sits on the same hunk of silicon, although not in the GPU proper. I don't think it's compatible with Intel's approach, though, so I'm not sure whether it matches your "like the Intel Core series" requirement.

mrK seems to be talking about CPUs rather than GPUs (i.e. Intel & AMD, not Nvidia & AMD/ATI). ATI Avivo is a set of technologies found in ATI graphics cards, not in AMD CPUs. Lèse majesté 11 years ago
Sourav

AMD APUs (the Vision series) do, via their built-in ATI GPU.

Lèse majesté

There are currently two main types of on-die video acceleration: APUs and SIMD instruction set extensions. APUs are simply integrated GPUs (IGPs) that sit on the CPU die rather than in the motherboard chipset. Like other IGPs, they share the main system memory, but they are accessed and operate separately from the CPU itself. Both Intel and AMD have processors with APUs.

The other type of on-die video acceleration is the SIMD instruction set extensions that are part of the CPU architecture itself. These are part of the CPU proper and are accessed via ordinary CPU instructions. SIMD instruction sets give CPUs the vector processing capabilities usually found only on GPUs, stream processors, and DSPs.

Specifically, SIMD instructions apply a single operation to a large set of data at once, which is typical of the mathematical operations performed in multimedia processing, 3D modeling, scientific modeling, and so on: problems with a high degree of data parallelism. The reason they were historically left out of CPU ISAs is that they're not useful for most traditional general-purpose computing tasks, like running an OS or a word processor, surfing the web, or reading email, which rely on SISD (or perhaps MISD) processing.
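
As a rough illustration (my own sketch, not part of the original answer), here is the difference between scalar and SIMD code in C using the SSE intrinsics from xmmintrin.h; the single _mm_add_ps instruction adds four single-precision floats at once:

    #include <stdio.h>
    #include <xmmintrin.h>  /* SSE intrinsics */

    int main(void) {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float scalar[4], simd[4];

        /* Scalar (SISD): one addition per loop iteration */
        for (int i = 0; i < 4; i++)
            scalar[i] = a[i] + b[i];

        /* SIMD: one SSE instruction adds all four floats at once */
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(simd, _mm_add_ps(va, vb));

        for (int i = 0; i < 4; i++)
            printf("%.1f  %.1f\n", scalar[i], simd[i]);
        return 0;
    }

Real multimedia code processes millions of samples rather than four floats, which is where doing four (or more) operations per instruction starts to pay off.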

However, as casual computing has evolved to include more gaming and multimedia, CPU manufacturers began adding such instructions to their architectures in order to boost performance without requiring a powerful GPU (either as an IGP or a discrete video card). This began in mainstream computing with MMX, then SSE, and the latest iteration is AVX, introduced by Intel and also supported by AMD in the Bulldozer cores (AMD's earlier SSE5 proposal was reworked into the AVX-compatible XOP and FMA4 extensions).
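
Which of these extensions a given CPU actually supports can be checked at runtime. As a small sketch of my own (assuming GCC or Clang on x86), the compiler builtin __builtin_cpu_supports queries the CPUID feature flags for you:

    #include <stdio.h>

    int main(void) {
        __builtin_cpu_init();  /* populate the CPU feature flags */
        printf("MMX:  %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
        printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
        printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
        return 0;
    }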

The one thing that earlier GPUs (and other dedicated coprocessors) had over CPU instruction set extensions is that GPU architectures are designed for very specific applications like 2D/3D rendering and video encoding/decoding, whereas CPU architectures have to be general enough to handle all types of applications, so even with their SIMD extensions they're no match for dedicated GPUs in terms of speed. Since Intel introduced Quick Sync on some of its CPUs, however, this has changed somewhat: Sandy Bridge CPUs with Quick Sync can actually transcode video much faster than even high-end discrete video cards. The downside is that the results are somewhat lower in quality than pure software transcoding, but that seems to be true of hardware-accelerated video transcoding in general.

And this is perhaps the main problem with hardware-accelerated video. It's easy for developers to support software encoding/decoding because they're only using the standard x86 instruction sets. For hardware encoding/decoding there are no industry standards, only vendor-specific proprietary extensions. So even comparing one hardware solution to another is difficult, because different video encoders/decoders are better adapted to specific hardware. CPU and GPU manufacturers recognize this too, and so they all form close alliances with specific software vendors to ensure there's a leading video transcoding application that performs best on their technology (Nvidia CUDA, AMD APP, or Intel Quick Sync).

If you're interested in comparisons between the leading hardware acceleration technologies for video encoding/decoding, I would suggest this article on Tom's Hardware. But ultimately their conclusion was that (at least in 2011) there's no clear winner. For speed you probably want Quick Sync, but output quality is a different matter, and that's where your choice of transcoder and playback software matters.