Christ is Risen!
For those interested, it is the cheapest solution for hardware AV1 encoding (100+ fps transcoding 4K H.264 8-bit 4:2:0 to 4K AV1 in HandBrake 1.6.1, and roughly 300 fps at 1080p in H.264 and 250+ fps in H.265 on an 8-core CPU for further file compression). You can easily add it to a machine that already has an RTX 2000/3000 or RX 5000/6000 series card (or even older ones) and thus get the most affordable hardware AV1 encoding. It works perfectly in Resolve Studio, where it handles the encoding of the exported video rather than the timeline rendering, and with the appropriate settings in the program it is also a top solution for hardware H.264/H.265 decoding via QuickSync.
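For anyone who wants to try that HandBrake transcode from a script, here is a minimal sketch, assuming HandBrakeCLI 1.6.1+ is on your PATH and the Arc drivers are installed; the file names are placeholders:

```python
# Minimal sketch of the AV1 transcode described above, driven from Python.
# Assumes HandBrakeCLI 1.6.1+ is installed; file names are placeholders.
import subprocess

subprocess.run([
    "HandBrakeCLI",
    "-i", "input_4k_h264.mp4",   # placeholder source (H.264 8-bit 4:2:0)
    "-o", "output_4k_av1.mkv",   # placeholder destination
    "--encoder", "qsv_av1",      # Intel QuickSync AV1 encoder
    "--quality", "30",           # constant-quality target; lower = better
], check=True)
```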
I acquired it exclusively to add AV1 encoding to my system in HandBrake (1.6.1+) and in DaVinci Resolve Studio, where I work, since it is not cost-effective to buy an RTX 4000 card just for AV1 when I already have an RTX 3000 series card for video editing tasks. AMD cards have very mediocre media engines in terms of video quality; see the linked image below with the measurement results on modern hardware.
The theoretical experiment worked perfectly: you get Intel QuickSync AV1 in any application that supports it (OBS 29.0.2+, HandBrake 1.6.1, Resolve Studio 18.1.4+, Filmora 12, Topaz Video AI 3.0.11+) without spending a small fortune, and you get two media engines. For example, one can record in OBS while the other encodes the stream with different quality settings, or both can work on the final video render. In theory you can load each media engine with more than one task, but you risk dropped frames (with a weak CPU) or significantly reduced performance (during video export/rendering). The fact that each A380 media engine handles the "heavy" work doesn't mean the CPU sits idle; it actively contributes to managing the jobs, so you will see high CPU utilization.
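As an illustration of that dual-engine scenario, here is a minimal sketch of two independent AV1 encodes running in parallel, assuming an FFmpeg build with the av1_qsv encoder (oneVPL); the file names are placeholders, and how the jobs are spread across the two engines is up to the driver:

```python
# Sketch of loading both media engines: two independent AV1 QSV encodes
# launched at the same time. File names are placeholders.
import subprocess

jobs = [
    ("obs_recording.mkv", "obs_recording_av1.mkv"),
    ("stream_dump.mkv", "stream_dump_av1.mkv"),
]
procs = [
    subprocess.Popen([
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "av1_qsv",        # Intel QuickSync AV1 encoder
        "-global_quality", "28",  # ICQ-style quality target
        dst,
    ])
    for src, dst in jobs
]
for p in procs:
    p.wait()  # both encodes run concurrently until finished
```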
The card also supports HyperEncode on 12th-gen Intel and newer (activated automatically), which lets it render in parallel with your processor's iGPU. Always choose CPU models with an iGPU rather than the F variants, which lack one and cost only slightly less; the iGPU is worth the money when you eventually need it.
It also works on 3rd-gen and 9th-gen Intel, as well as on AMD Ryzen 5000, but 3rd-gen boards have no Resizable BAR option in the BIOS (9th-gen boards finally got Resizable BAR via BIOS updates), which according to Intel significantly reduces its performance in games. I didn't test gaming at all, but for Intel QuickSync tasks it works perfectly even with Resizable BAR off.
Another major driver issue: on generations older than 10th-gen (as stated by Intel), it is not guaranteed to work as the primary GPU, although it works normally as a secondary GPU in all the applications mentioned above. Other users have reported the same, but I am speaking from my own tests.
The card may be present in the system and show up normally in Task Manager, yet some graphics applications crash on launch. Personally this doesn't bother me, since I bought it to sit in the 2nd/3rd PCIe slot and assist with encoding/decoding tasks, NOT to be the main card. I point this out because your motherboard, in combination with the BIOS and your CPU, may cause issues.
In the end I run it completely fanless: I removed its fan and plastic shroud to make it a single-slot card working exclusively as a QuickSync hardware encoder. Do not use it like this for compute tasks, though, because temperatures rise significantly; power consumption exceeds 20 W even during QuickSync work. I also removed the PCI bracket and installed it standalone in the last (3rd) PCIe slot (PCIe 4.0 x4) to leave full bandwidth to the other two powerful GPUs I use for rendering. It works flawlessly if you want to squeeze the most out of your machine.
The positive side is that Intel keeps working on its drivers to eliminate such issues, which is encouraging; beyond that there is the steadily growing game support, a different matter I can't speak to, since I'm not into video games.
I also did some test video editing on this machine to check that it behaves correctly, and it has no problem with the "hack" in the Resolve Studio settings that lets the card act as the primary GPU (bypassing the GPU in PCIe slot 1) and do everything on its own. It gives you that kind of flexibility. Read below for the answers I gave to someone who asked, so I don't have to rewrite them.
It is the most exceptional and affordable solution for AV1 streaming plus encoding (recording), since even the A380 has two media engines, comparable in quality to the RTX 4070 Ti/4080/4090 at a bargain price, while the fresh RTX 4070 has only one media engine. Many websites have measured the quality of the final video: Intel comes out 1-3% worse than the top NVENC chips in the RTX 4000 series, although there are reports that Intel delivers better quality than the RTX 4000 in AV1 streaming. Either way, it does an excellent job with both 8-bit and 10-bit footage.
Detailed results are in the table I mentioned, where VMAF is a quality metric that compares the compressed video against the original source (presumably uncompressed); 100 is the highest score, and each card is rated on the final result its encoder generation produces. There is also a software-mode (CPU) measurement, which is considered one of the top solutions for quality but is also the slowest.
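For context on how such VMAF numbers are typically produced, here is a minimal sketch using FFmpeg's libvmaf filter, assuming an FFmpeg build compiled with libvmaf; the file names are placeholders, and both inputs must match in resolution and frame rate:

```python
# Sketch of measuring a VMAF score like those in the table. The mean
# VMAF (0-100) is printed in FFmpeg's log at the end of the run.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "encoded_av1.mkv",  # distorted input: the hardware encode
    "-i", "source.mov",       # reference input: the original video
    "-lavfi", "libvmaf",      # compare the two and report the score
    "-f", "null", "-",        # discard output; we only want the metric
], check=True)
```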
EposVox on YouTube has done several comparisons of the performance of each card and vendor (Intel, NVIDIA, AMD); you can see the detailed results in his videos (search for AV1 on his channel, top right corner, to filter for the relevant measurement videos).
Long story short: if you want an affordable card with plenty of VRAM for video editing plus AV1 encoding, the Arc A770 16GB with its dual media engines is the way to go, while this A380 handles 4K H.264/H.265 8/10-bit 4:2:2 decently with its adequate 6GB of VRAM in the most demanding video editing program (and the one that best utilizes GPU resources), which is none other than DaVinci Resolve Studio.
The free version of DaVinci Resolve does NOT use the GPU at render/export time; it goes through the CPU instead, which slows things down in proportion to how many cores you have (or don't have). The equivalent laptop solution is the Arc A370M 4GB, which, despite having less VRAM, performed the same as the desktop A380 in all the tests I ran.
Intel made a smart move by getting into content creation machines through AV1, where breaking NVIDIA's near-exclusive dominance via CUDA cores would otherwise be impossible. Think of it as a hardware acceleration card for media creation and you will not be disappointed at all, especially now that its price has dropped below €150.
P.S. December '23: Besides having one of the top media engines for 4K 10-bit 4:2:2 (which you won't find anywhere else except the 12/13/14th-gen iGPUs; ideal for footage from modern mirrorless cameras instead of creating 1080p H.264 proxies), it also works perfectly for AI subtitling in Greek/English with the large Whisper models (v1/2/3, the biggest ones) thanks to its 6GB of VRAM (a 4GB GPU only manages the medium model), just like NVIDIA does with CUDA. So, one more reason. You get all of this combined in the free and top-notch Subtitle Edit, in its Whisper video-to-text menu, with the Faster-Whisper/whisper.cpp/Const-me engines, while for the same price you will barely get a 4GB NVIDIA card! Happy New Year, I'm telling you!! :)
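For those who prefer the command line over Subtitle Edit, roughly the same workflow can be sketched by calling a whisper.cpp build directly; the binary name, model path, and audio file below are placeholders (newer builds name the binary whisper-cli), and GPU offload depends on how the binary was compiled:

```python
# Rough command-line equivalent of the Subtitle Edit workflow, calling a
# whisper.cpp build directly. Paths and file names are placeholders.
import subprocess

subprocess.run([
    "./main",                          # whisper.cpp example binary
    "-m", "models/ggml-large-v3.bin",  # the "large" model mentioned above
    "-f", "interview.wav",             # placeholder 16 kHz WAV input
    "-l", "el",                        # transcribe in Greek
    "-osrt",                           # write subtitles as an .srt file
], check=True)
```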
P.P.S. Finally, the FREE version of DaVinci Resolve now supports a single graphics card, so the old limitation is gone. With 6GB of VRAM you can comfortably handle a 4K timeline and editing, as long as you don't pile on too many effects (most of which are only available in the Studio version anyway). Still, don't lose sleep over it: the program continuously fills and empties VRAM according to its needs. Ideally, before rendering, reboot and open ONLY Resolve, so that Windows releases the VRAM it consumes on the Arc for driving the display. That way Resolve has the maximum available memory (and therefore the best performance) during the render. Of course, this applies when you use it as the primary card (with your monitor(s) connected to it); if it sits in another slot as I described earlier, there is no such issue with Windows.
P.P.P.S. What I didn't spell out clearly enough earlier: this particular affordable card provides the BEST hardware acceleration for video editing, since out of the box it supports a huge number of video codecs compared to AMD/NVIDIA GPUs. It is therefore the best and most cost-effective way to turn any PC into a video editing station, provided you have the 16GB of RAM that Resolve recommends, because the program also consumes a lot of system memory. Know that the more memory you give it, the more it "spreads out" and uses it.
Finally, for those with the obvious question: in practice the new Intel Core Ultra processors are a CPU plus an Arc iGPU with the best media engine (QuickSync), which does a BETTER job than a dedicated GPU without QuickSync, because dedicated cards lack codec support that iGPUs generally have, and Arc's engine is more complete (and newer) than the UHD 770/710 included in all 12/13/14th-gen Intel CPUs (desktop and laptop). We don't have much information about which codecs the RDNA 2/3 Radeon iGPUs in AMD CPUs support; they do offer acceleration, but QuickSync excels here, which is why all video editors prefer it. The media engine in the brand-new Core Ultra was NOT upgraded compared to the Arc A380/580/750/770, so don't be tempted to buy a laptop for new video editing/transcoding/encoding/decoding capabilities if you have a desktop that can take an Intel Arc dGPU with Resizable BAR support.
In practice, the Arc dGPUs were the groundwork Intel needed to integrate the A380's engine into Meteor Lake, but the performance is not the same, because the dGPU draws many more watts than the entire Meteor Lake package (CPU/GPU/NPU), which is to be expected. I hope I've made myself clear!
Finally, glory be to God!