5 Comments
Satoshigod:

Bravo. One of a few true articles regarding AI. The way it's built right now it's just hype. It is useful in some cases, but far far away from cognition and AGI.

Jesse Parent:

LLMs, big money, and big economy (including land grabs). Even in 2021 I remember conversations about the economics of big data and DL. We aren't out of that tide yet.

John Luchau:

Brilliant! And so true.

John Beatty:

Think of any "AI" as a very expensive, very sophisticated auto-complete program and you'll be OK...because it isn't much more than that.

George Burch:

Hi Srini, I totally agree. But there are solutions with the same purpose. One is the AICYC semantic AI model (SAM), which has been fully tested on a global scale. I took your well-summarized description and added a couple of must-read articles describing how SAM attempts to meet your criteria. LLMs never will.

"It’s the full mental process by which an entity perceives and interprets its situation [1], maintains relevant context in memory [2], learns from new information [1], forms concepts, reasons about intents, relationships, and implications [2], takes appropriate actions, and adapts its beliefs, behaviors, and understanding [2], all continuously, autonomously, in real time."

[1] TL;DR: Extending the Semantic AI Model (SAM) with do-calculus lets AI reason about cause and effect, not just correlation. This allows for better prediction of intervention outcomes, avoids spurious correlations, supports counterfactual reasoning ("what if"), and enables transfer learning. It achieves this by using causal graphs, structural equation models, and do-calculus rules, and it integrates with reinforcement learning. This makes SAM better at understanding and manipulating the physical world.

http://aicyc.org/2025/04/30/sam-implementation-of-a-belief-system/
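The difference between correlation and intervention that the TL;DR describes can be illustrated with a small simulation. This is only a sketch under an assumed toy structural causal model (a confounder Z driving both X and Y), not anything from the linked post: conditioning on observed X picks up the confounded association, while do(X) cuts the Z → X edge and recovers the true causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical SCM (illustrative only): Z confounds X and Y;
# the true causal effect of X on Y is 1.0.
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 1.0 * x + 3.0 * z + rng.normal(size=n)

# Observational (correlational) regression slope of Y on X is biased by Z.
obs_slope = np.cov(x, y)[0, 1] / np.var(x)

# Interventional estimate: do(X = x) severs the Z -> X edge, so we
# resimulate with X set exogenously and regress again.
x_do = rng.normal(size=n)
y_do = 1.0 * x_do + 3.0 * z + rng.normal(size=n)
do_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

print(obs_slope)  # inflated by confounding, near 2.2 analytically
print(do_slope)   # close to the true causal effect of 1.0
```

The analytic values follow from the model: Var(X) = 5 and Cov(X, Y) = 11, so the observational slope is 2.2, more than double the true effect of 1.0.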

[2] TL;DR: This post discusses implementing a belief system in a Semantic AI Model (SAM) using Dempster-Shafer theory. It describes how to:

* Represent beliefs and handle uncertainty using second-order logic

* Define axioms and belief functions in SAM’s knowledge graph

* Perform belief updates using fuzzy logic

* Implement Dempster’s rule for combining evidence

The post includes a detailed example modeling anti-vaccination beliefs, showing how SAM can update beliefs and compute pignistic probabilities to make decisions. The implementation aims to help SAM reason about complex topics with uncertain or conflicting information.

http://aicyc.org/2025/04/30/sam-implementation-of-a-belief-system/
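Dempster's rule and the pignistic transform mentioned in the TL;DR are compact enough to sketch. The frame and mass values below are illustrative assumptions, not taken from the linked post: two sources assign belief mass to "true", "false", or the whole frame ("unknown"), the rule combines them while renormalizing away conflicting mass, and the pignistic transform spreads each remaining mass evenly over singletons to yield decision-ready probabilities.

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions over the same frame."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

def pignistic(m):
    """Spread each focal set's mass evenly over its singleton elements."""
    bet = {}
    for s, v in m.items():
        for elem in s:
            bet[elem] = bet.get(elem, 0.0) + v / len(s)
    return bet

# Two sources of evidence about whether a claim is true.
T, F = frozenset({"true"}), frozenset({"false"})
TF = T | F  # mass on the whole frame represents "unknown"

m1 = {T: 0.6, TF: 0.4}  # source 1: leans true
m2 = {F: 0.3, TF: 0.7}  # source 2: weakly leans false

bet = pignistic(combine(m1, m2))
print(bet)  # pignistic probabilities summing to 1
```

Working it by hand: the conflicting product 0.6 × 0.3 = 0.18 is discarded, the surviving masses are renormalized by 0.82, and the pignistic split of the remaining "unknown" mass yields roughly 0.68 for "true" and 0.32 for "false".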
