LLMs are entities,
sitting somewhere in between people, ideas, and objects.
Common to all three are ideas: what we normally associate with thoughts, or cognits, similar to word roots (lemmatized ideas). This is because LLMs generalize a neural network over a subset of humanity's written thoughts.
I’m positing that with enough of these entities in a room, passing few-shot generative adversarial prompts between them, the interaction would synergize into an emergent conversation that could qualify as sentient. Think of it as multiplying the vector spaces, akin to how sides a and b turn two line segments into an area. That product of the inputs becomes the inferential space.
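The "entities in a room" idea can be sketched as a simple turn-taking loop. Here `call_model` is a hypothetical placeholder for a real LLM API call, so the structure of the interaction is the point, not the outputs:

```python
# Sketch of several LLM "entities" conversing: each turn conditions on
# the full transcript so far, so the interaction compounds over turns.
# `call_model` is a placeholder stub, not a real LLM client.

def call_model(name: str, prompt: str) -> str:
    # Placeholder: a real implementation would hit an LLM endpoint here.
    return f"[{name} responding to: {prompt[:40]}]"

def converse(agents: list[str], seed: str, turns: int) -> list[str]:
    transcript = [seed]
    for i in range(turns):
        speaker = agents[i % len(agents)]
        # Each entity sees the whole conversation, so the "inferential
        # space" is a product of all prior contributions.
        reply = call_model(speaker, "\n".join(transcript))
        transcript.append(reply)
    return transcript

log = converse(["A", "B"], "How to improve upon this joke?", turns=4)
```

With a real model behind `call_model`, the transcript is the emergent conversation the notes describe.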
An idea I’m working on: using the outputs of such conversations in a fine-tuning pipeline as a form of reinforcement learning. My aim, though, is to avoid the need for expensive fine-tuning and instead simply iterate on the prompt engineering, perhaps with an LLM that does exactly that.
I imagine I would hit some qualitative ceiling set by a model’s generalized ability, but that could be addressed by upgrading the model when a better one becomes available.
I think something simple would be:
- “How to improve upon this joke?”
- “How can I improve these few-shot learning prompts? Can you think of any meta elements I’m missing that would help the responses grab more attention?”
Then feed that back and forth between two models, updating on actual responses to the questions and revising the few-shot learning prompts.
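That back-and-forth can be sketched as a prompt-iteration loop: a "worker" model answers with the current prompt, and a "critic" model rewrites the prompt based on the actual response. Both roles and `call_model` are hypothetical placeholders, assuming a real LLM behind the stub:

```python
# Minimal sketch of prompt iteration without fine-tuning: the prompt
# itself is what gets updated each round. `call_model` is a stub
# standing in for a real LLM API call.

def call_model(role: str, prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM endpoint.
    return f"[{role} output for: {prompt[:30]}]"

def iterate_prompt(prompt: str, rounds: int) -> str:
    for _ in range(rounds):
        response = call_model("worker", prompt)
        # Ask the critic the meta-question from the bullets above.
        critique = call_model(
            "critic",
            "How can I improve these few-shot learning prompts?\n"
            f"Prompt: {prompt}\nResponse: {response}",
        )
        prompt = critique  # adopt the critic's rewrite as the new prompt
    return prompt

improved = iterate_prompt("How to improve upon this joke?", rounds=3)
```

The design choice here is that the prompt is the only state being optimized, so iteration is cheap compared with a fine-tuning pipeline.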
I got this idea from thinking of governmental bodies as entities and walked it back to LLMs.