A Wyld Ride 🏴‍☠️ 2
It's Been Wyrd
Part Deux
"It's dangerous to go alone, here take this.
It'll help."
Turns out I was dreaming that the winners had been posted the other day, because they were just posted less than an hour ago.
I was correct that I did not win.
https://x.com/levelsio/status/1915127796097290534
But I'm still okay with that, because I have lots of plans in store for my games and my business. And pursuing that will be as much fun as, or more than, this last month spent making a game for the contest.
Not only that, but I have a platform in place letting me pursue my own experimentation plans for how to reach AGI.
Because I believe wholeheartedly that the ways people are building agents and using LLMs in automation workflows are too primitive for creating actually useful AGI systems.
What do I mean by too primitive? I don't say it to be offensive or to downplay the incredible achievements of the frontier labs and companies. It's just that I see them all repeating the same foundational assumptions, which produce systems that cannot possibly mimic human intelligence and its capacity to grow and self-correct.
Memory is the major issue, but that still simplifies the problem too much.
Because we humans don't have one memory system. We have layers, and imagination.
We have active recall, conscious memories, and subconscious memory systems that recursively review and retrain the system's autonomic responses on each day's experiences while we sleep. We have dreams with their own memory systems, and an imagination that effectively lets us alter reality to suit our will. Why is no one trying to use more than one or two methods to provide memory to an agentic system?
This is one of the problems I think I can solve. And with robust memory tools, an agentic system can then be wired in such a way to use them to facilitate evolving capacities in real-time.
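To make the layering idea concrete, here is a minimal toy sketch of what such a system might look like. All of the names and the structure here are my own illustration, not an existing library or a finished design: a small working-memory buffer standing in for active recall, an episodic log of the day's experiences, and a sleep-like consolidation pass that distills repeated episodes into fast "reflex" responses.

```python
from collections import Counter, deque


class LayeredMemory:
    """Toy sketch of a multi-layer memory for an agent (illustrative only).

    Layers:
      - working: small active-recall buffer, like conscious attention
      - episodic: raw log of each day's experiences
      - reflexes: habits distilled by a sleep-like consolidation pass
    """

    def __init__(self, working_size=5):
        self.working = deque(maxlen=working_size)  # active recall buffer
        self.episodic = []                         # today's raw experiences
        self.reflexes = Counter()                  # long-term trained responses

    def experience(self, event):
        """Record an event in both working and episodic memory."""
        self.working.append(event)
        self.episodic.append(event)

    def sleep(self):
        """Consolidate: replay the day's episodes, reinforcing repeated
        patterns into reflexes, then clear the episodic buffer."""
        for event in self.episodic:
            self.reflexes[event] += 1
        self.episodic.clear()

    def react(self, event):
        """Prefer a trained reflex; otherwise fall back to deliberation."""
        if self.reflexes[event] >= 2:
            return "reflex"
        return "deliberate"


mem = LayeredMemory()
for day in range(2):          # two "days" of the same experience
    mem.experience("greet player")
    mem.sleep()               # nightly consolidation

print(mem.react("greet player"))  # prints "reflex"
print(mem.react("solve puzzle"))  # prints "deliberate"
```

The point of the sketch is only the wiring: separate stores with different lifetimes, connected by a consolidation loop that runs offline, the way the subconscious retraining described above would. A real system would replace the Counter with retrieval over embeddings or fine-tuning passes, but the circuit shape stays the same.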
This is how we make NPCs that are indistinguishable from human players in a game world. And if we can do that, we can easily repurpose them for real-world use cases such as virtual employees with accountability and the capacity to correct and learn from mistakes.
Another issue is modality: it should be omni-directional at all times. Human brains are capable of this, even if they don't default to it in every situation. A similar framework could be built on top of a model that was truly omni-modal.
That being said, I believe there are powerful use cases for many of the existing single-modality and dual-modality architectures. I've seen a few agentic systems take a couple of steps in this direction, but they seem afraid to go deeper because of the rising API costs of running enough thinking loops to produce appreciable improvements. Though I'd wager there are enough optimization strategies available today to run multiple models on-device and accomplish this right now.
There are other circuits that need connecting as well, but these need to be researched and experimented with further first, so that the others are more useful when their time comes. Imagination is one; dreaming is another. There are others.