
Hackintosh in the future

I run Sierra / High Sierra on one disk and Mojave / Catalina on another.
So I guess Sierra / High Sierra on HFS partitions, and Mojave / Catalina each in its own APFS container?
There's no issue or particular trick to having Mojave / Catalina on the same drive?
Do you think it'll be the same for Mojave / Monterey?
 
I've tried this dual-boot scenario on the same drive with these macOS versions, and it does work. The only problems tend to occur when you try to use the latest OC and kexts on a much older OS like Mojave; it doesn't always work seamlessly, and you might need to experiment with different versions of OC. I guess it depends on what hardware you are installing on as well.
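For what it's worth, one way to keep a single OC setup sane across Mojave and Catalina is to gate kexts with the MinKernel / MaxKernel fields in the Kernel -> Add section of config.plist (Darwin 18 = Mojave, 19 = Catalina, 20 = Big Sur, 21 = Monterey). Here's a minimal sketch that just lists which enabled kexts would load on a given release; the EFI mount path is only an example, so adjust it to your own setup:

```python
#!/usr/bin/env python3
"""List which Kernel -> Add kexts in an OpenCore config.plist apply to a
given macOS release, based on their MinKernel / MaxKernel fields."""
import plistlib
import sys

CONFIG = "/Volumes/EFI/EFI/OC/config.plist"  # example path; adjust to your EFI
DARWIN = {"mojave": 18, "catalina": 19, "bigsur": 20, "monterey": 21}

def kernel_major(version_string):
    """Major Darwin version from e.g. '19.0.0', or None if the field is empty."""
    return int(version_string.split(".")[0]) if version_string else None

def kext_applies(entry, major):
    """Per the OpenCore reference, a kext loads when the running kernel falls
    inside [MinKernel, MaxKernel]; an empty string means no bound."""
    lo = kernel_major(entry.get("MinKernel", ""))
    hi = kernel_major(entry.get("MaxKernel", ""))
    if lo is not None and major < lo:
        return False
    if hi is not None and major > hi:
        return False
    return True

def main(target):
    major = DARWIN[target.lower()]
    with open(CONFIG, "rb") as f:
        config = plistlib.load(f)
    for entry in config["Kernel"]["Add"]:
        if not entry.get("Enabled", False):
            continue
        status = "loads" if kext_applies(entry, major) else "skipped"
        print(f'{entry["BundlePath"]:40} {status}')

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "mojave")
```

Run it once per OS name (e.g. `python3 check_kexts.py catalina`, whatever you call the script) to spot kexts that would load on one install but not the other.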
 
Until now I've kept updating OC on 10.14 without any issue, but you're right, OC's settings/kexts might differ between OSes. Anyway, I'm a bit sceptical about being able to update my Z68 to Ventura+ (and about whether it's worth the pain ;) ).
 
Just some personal speculation, but, guys, do you think that "custom" drivers could be generated with the help of Artificial Intelligence, so you could run any graphics card on any system, for example? All that dark side inside the machine is really hard to understand and crack, but maybe, just maybe, an AI could do it rather easily? It might unlock some unforeseen possibilities; I'm wondering if anyone has already started tinkering with this option :p

I'm making my "last" (for now) hackintosh with a 13700K and a 6950 XT. Wish me luck on the build!
 

I think this is a great philosophical question because it calls out how expectations about the new AI can't be handled with common sense. My point is not that the question lacks common sense, but rather that common sense is of no help in understanding the powers of the new AI, which leads to magical thinking.

While the GPU interfaces are public, the way Apple writes its drivers is not, so where would the AI get its training data?

Moreover, even if it could be trained, GPU instruction set architectures are an area of continual progress and refinement, so adapting to a new GPU may require major structural changes in how work gets done on the device. To use a building-site analogy: ask an AI to explain converting a construction process from bricklaying to erecting prefabricated reinforced concrete panels, but without access to the blueprints or any exposure to the work patterns. What will it say?

Survey this tech article on the latest AMD consumer offerings:


From the article:
[GPU] Register files have to deliver exceptionally high bandwidth especially for vector execution. Having a larger register file potentially lets a GPU keep more work in flight, which is critical for hiding latency. However, AMD probably decided that the extra power and die area required to implement a larger register file wasn’t worthwhile for lower end products. Therefore, the RX 7600 has a 128 KB register file per SIMD, compared to the 192 KB register file found on the RX 7900 XTX. A WGP has four SIMDs, so the RX 7600 has 8 MB of vector registers across the entire GPU. For comparison, the 7900 XTX has 36.8 MB of vector registers.
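To make those numbers concrete before going on, here is a quick sanity check of the totals. The per-SIMD sizes are from the quote; the WGP counts (16 for the RX 7600, 48 for the 7900 XTX) are my own assumption and are not stated in the excerpt:

```python
# Rough check of the register-file totals quoted above.
# Per-SIMD sizes come from the article; the WGP counts are assumed.
SIMDS_PER_WGP = 4

def total_vector_registers_mb(kb_per_simd, wgps):
    """Total vector register capacity across the GPU, in MB.
    Uses 1 MB = 1000 KB, which is how the article's 36.8 MB figure comes out."""
    return kb_per_simd * SIMDS_PER_WGP * wgps / 1000

print(total_vector_registers_mb(128, 16))  # RX 7600:  8.192  (~8 MB)
print(total_vector_registers_mb(192, 48))  # 7900 XTX: 36.864 (~36.8 MB)
```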

Imagine the problem of constructing the correct prompt to get an LLM transformer to refactor an existing driver to take these parameters into account, even if it has been trained on both the GPU spec and binary sample driver code.

Alohomora!

The systems developer can do it because he has a model of the system in mind, at multiple levels of abstraction.

An LLM has no mind, and therefore no mental model of what is to be accomplished: it applies a probability distribution over a billion interdependent vectors to get from the inputs to the outputs, and ultimately the judgement of whether the output is satisfactory is made by the person who applies the LLM, not by some self-selective trait of the LLM. If an LLM transforms in the forest and no one is around to hear, does it make a noise?

As to whether the graph traversed by the LLM includes features that can be likened to concepts, I cannot say. Even to consider that question is to entertain a theory of mind, of which AFAIK there is none. Not just that none has been developed yet, but that likely there can never be any theory of mind, because the mind is, of its nature, substantially unconscious, whereas any theory is by definition an intelligible manifestation. There's a bootstrapping problem: the mind is trapped away from any deep account of itself by virtue of its own precepts. There's still reason to hope we can learn more about the limits of mind, which you might term a "theory of mind" in the same way that "artificial intelligence" leans on the weakly defined term "intelligence."

Yet the original question is still a good one! AI might well be able to report a survey of device-type + hack approaches that reveals compatibility opportunities otherwise overlooked.

There's a general problem of the suitability of any AI-produced code for any purpose whatsoever. It's oft remarked that AI produces unexpected "hallucinatory" results, but this is a misunderstanding from the get-go: there's a far deeper, unexamined, and possibly undecidable concern of knowing when AI is not hallucinating! So far the consensus is amazement that AI output is so intelligible, with no criteria whatsoever for its reliability in context. Every rational domain of AI application is rife with grotesque edge cases and extreme ungraceful degradations.

It seems to me that AI's value today is the power to inspire, not invent.

But just for grins, try to explain why we are impressed by AI-generated sonnets and haiku. Could macOS device drivers be generated poetically, depending on how you look at it?

I haven't encountered any vocabulary of AI that can make sense of your question, so maybe magic is on the table? But the observation of limits to mind should guide our expectations: imagine you have magical powers, but they can't do just anything. To wield the magic requires, for example, an incantation, a caster, arcane rites, and a subject for the spell... "You get three wishes," "the Midas touch," etc.
 

I got a very similar answer from one of my friends the other day when I asked him if AI could generate floor plans and building designs for me.

He told me:

"Sure after you program all the data sets it would need to understand what you are telling it. Think of AI as an Infant or a child that needs to be taught whatever it is you want it to be a specialist at."
 

I think that is a hopelessly unsuitable analogy, unless you believe a child is a mere product of training, which even the most cursory examination of human growth and development unequivocally reveals cannot be the case. You can introspect this matter: ask yourself, "At what point was my mind given to me?" QED.

Tabula rasa is archaic and was abandoned centuries ago.

 
Well, in my opinion we're only seeing the tip of the iceberg with all this technology, so yeah, you can program a deep mind, artificial intelligence, data science, some combination of all of the above… you see what I mean? You can train a model to find source code if you know how to program it; it may require some samples, or maybe it can do it from scratch, IDK, but that's a possibility…

But yeah, the most probable outcome is that customization and such becomes less and less easy and we all end up using AIO products… just look at the new Mac Pro: zero upgradeable parts. It was obvious they would end up doing that. I suppose they will argue that you have 8 PCIe slots and a shitload of Thunderbolt ports… but no internal ROM, RAM or CPU can be changed, which is partly ridiculous, but partly not… so yeah, the best we can do is adapt to what we currently have and keep looking forward, because they ain't stoppin', and we ain't stoppin' either ahhahahaha

Good talk!

PS: I hope I can stretch my new computer as far as I can :D This i7-4790K is still running strong, but I just need more horsepower for current workflows. That said, you don't need to stay right at the cutting edge to create stuff and use the computer for amazing ****.
 
To be honest, I don't think it's over YET. Instead of x86 it will be ARM. Microsoft already released their Windows 11 ARM Developer Preview: https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewARM64 so we could still TRY getting the ARM macOS image onto a USB drive and using it on an ARM-based PC. But the x86 Hackintosh will die soon, which should be obvious.
 