Most predictions around AGI are based on observable trends:

* Improved reasoning capabilities

* Generalization across tasks

* Multi-modal integration

* Larger context windows

* Faster fine-tuning and adaptation

The assumption is simple: as models get better at more tasks, and require less prompting to perform them, we are approaching general intelligence.

And in some domains, this is true. Models can now:

* Write complex code from vague specs

* Perform at superhuman levels on symbolic benchmarks

* Simulate relational empathy and reflection

* Reason across ambiguous inputs

But none of this proves the presence of integration.

What it proves is that models can perform coherence. That is not the same as being coherent.
