Just some personal speculation, but, guys, do you think that with the help of artificial intelligence "custom" drivers could be generated, so you could run any graphics card on any system, for example? Maybe all that dark-side machinery inside the machine, which is really hard to understand and crack, is something an AI could handle rather easily? That might unlock some unforeseen possibilities. I'm wondering if anyone has already started tinkering with this.
I'm building my "last" (for now) hackintosh with a 13700K and a 6950 XT. Wish me luck on the build!
I think this is a great philosophical question because it calls out how expectations about the new AI can't be handled with common sense. My point is not that the question lacks common sense, but rather that common sense is no help in understanding the powers of the new AI, and that gap leads to magical thinking.
While the GPU interfaces are public, the way that Apple writes its drivers is not, so where would the AI get its training data?
Moreover, even if it could be trained, the instruction set architecture of GPUs is an area of continual progress and refinement, so adapting to a new GPU may require major structural changes in how work gets done on the device. By analogy with a construction site: you might ask an AI to explain how to convert a project's construction processes from bricklaying to erecting prefabricated reinforced concrete panels, but without access to the blueprints or exposure to the work patterns, what could it possibly say?
Survey this tech article on the latest AMD consumer offerings:
Editor’s Note (6/14/2023): We have a new article that reevaluates the cache latency of Navi 31, so please refer to that article for some new latency data. Late last year, AMD launched high end RDNA…
chipsandcheese.com
From the article:
[GPU] Register files have to deliver exceptionally high bandwidth especially for vector execution. Having a larger register file potentially lets a GPU keep more work in flight, which is critical for hiding latency. However, AMD probably decided that the extra power and die area required to implement a larger register file wasn’t worthwhile for lower end products. Therefore, the RX 7600 has a 128 KB register file per SIMD, compared to the 192 KB register file found on the RX 7900 XTX. A WGP has four SIMDs, so the RX 7600 has 8 MB of vector registers across the entire GPU. For comparison, the 7900 XTX has 36.8 MB of vector registers.
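The arithmetic in the quoted passage can be checked with a quick back-of-the-envelope sketch. Note the per-GPU WGP counts below (16 for the RX 7600, 48 for the RX 7900 XTX) are my own inference chosen to match the article's totals; the excerpt itself does not state them:

```python
# Sanity-check the register-file figures quoted above.
# ASSUMPTION: WGP counts of 16 (RX 7600) and 48 (RX 7900 XTX),
# inferred from the article's totals, not stated in the excerpt.
SIMDS_PER_WGP = 4  # per the quoted passage, a WGP has four SIMDs

def total_vreg_kb(kb_per_simd: int, wgp_count: int) -> int:
    """Total vector register capacity across the whole GPU, in KB."""
    return kb_per_simd * SIMDS_PER_WGP * wgp_count

print(total_vreg_kb(128, 16))  # 8192 KB -> the article's "8 MB"
print(total_vreg_kb(192, 48))  # 36864 KB -> reported as "36.8 MB"
```

The two totals round differently in the article (8192 KB is quoted as a flat "8 MB", while 36864 KB appears as "36.8 MB"), but both are consistent with the per-SIMD figures it gives.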
Imagine the problem of constructing the correct prompt for an LLM to get it to refactor an existing driver to take these parameters into account, even if it has been trained on both the GPU spec and sample driver binaries.
Alohomora!
The systems developer can do it because he has a model of the system in mind, at multiple levels of abstraction.
An LLM has no mind, and therefore no mental model of what is to be accomplished: it applies a probability distribution over a billion interdependent vectors to get from the inputs to the outputs, and ultimately the judgment of whether the output is satisfactory is made by the person who applies the LLM, not by some self-selective trait of the LLM itself. If an LLM transforms in the forest and no one is around to hear, does it make a noise?

As to whether the graph traversed by the LLM includes features that can be likened to concepts, I cannot say. Even to consider this question is to entertain a theory of mind, of which AFAIK there is none. Not just that none has yet been developed, but that likely there can never be any theory of mind, because the mind is, by its nature, substantially unconscious, whereas any theory is by definition an intelligible manifestation. There's a bootstrapping problem: the mind is trapped away from any deep account of itself by virtue of its own precepts. Still, there's reason for us to hope to learn more about the limits of mind, which you might term a "theory of mind" in the same loose way that "artificial intelligence" leans on the weakly defined term "intelligence."
Yet the original question is still a good one! AI might well be able to survey device types and known hack approaches and reveal compatibility opportunities that would otherwise be overlooked.
There's a general problem with the suitability of AI-produced code for any purpose whatsoever. It's oft remarked that AI produces unexpected "hallucinatory" results, but that framing is a misunderstanding from the get-go: there's a far deeper, unexamined, and possibly undecidable concern of knowing when AI is not hallucinating! So far the consensus is amazement that AI output is so intelligible, with no criteria whatsoever for its reliability in context. Every rational domain of AI application is rife with grotesque edge cases and extreme, ungraceful degradations.
It seems to me that AI's value today is the power to inspire, not invent.
But just for grins, try to explain why we are impressed by AI-generated sonnets and haiku. Could macOS device drivers be generated poetically, depending on how you look at it?
I haven't encountered any vocabulary for AI that can make sense of your question, so maybe magic is on the table? But the observation of limits to mind should guide our expectations: imagine you have magical powers, but they can't do just anything. For example, to wield the magic requires an incantation, a caster, arcane rites, and a subject for the spell... "You get three wishes," "the Midas touch," etc.