Sui Huang
1 min read · Nov 24, 2023


Thank you for this wonderfully crisp exposition of the need to distinguish between AGI and AHI, and for the proposal of a rigorous taxonomy. I am among those who have been negligent in not making this distinction....

But I still do not see how this discussion is helped by the Halting/Entscheidungsproblem, which is such an all-encompassing category of thought that AHI is, inevitably, also subject to these constraints anyway.

I think it is worth putting AHI within your AGI framework of C(L) and Omega(L).

Crudely, where would AHI "fit" in your AGI taxonomy framework? Between Levels 4 and 5?

Here is my take: both the human brain and any AI system based on deep neural networks are subject to the very same math and logic, and the "training" leading to AHI (via phylogenesis, ontogenesis, and schooling) is amazingly similar to that of LLMs... if one considers the biology of the evolution of complex traits. Only the physical implementation is different.

I summarized the equivalence here: https://cancerwarrior.medium.com/on-the-plausibility-and-inevitability-of-artificial-general-intelligence-agi-it-is-in-the-d77dd5d117c4

Thus I think that one cannot simply say: "The Incomprehensible Complexity: Human intelligence, a complexity of cognitive, emotional, and experiential threads, stands as a paradigm far beyond the current computational grasp."

Given your solid AGI taxonomy framework, it may be worth thinking about how to place AHI within it. AHI must be part of AGI in the most universal sense.

