As we introduce the ability to be incorrect, we can trade efficiency for robustness (the two are at odds) in order to maintain correctness.
And so we’re left with the Robust-First Computing Creed:
First be robust
Then as correct as possible
Then as efficient as necessary
I think this is going to become much more common as LLM agents and other non-deterministic, non-CEO (correct-and-efficient-only) agents are assembled into algorithms and programs in distributed computing.
Each one will need to check the others, and the whole program will need to be robust to false or hallucinated responses.
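A minimal sketch of that cross-checking idea, assuming the simplest possible scheme (redundant queries plus a majority vote; the agent functions and quorum rule here are hypothetical stand-ins, not any particular framework's API):

```python
import random
from collections import Counter

def majority_vote(agents, query, quorum=None):
    """Ask every agent the same query and keep only an answer that a
    quorum agrees on: robustness first, paid for with redundant work."""
    answers = [agent(query) for agent in agents]
    answer, count = Counter(answers).most_common(1)[0]
    if quorum is None:
        quorum = len(agents) // 2 + 1  # simple majority by default
    if count >= quorum:
        return answer
    raise RuntimeError("no quorum: agent answers too inconsistent to trust")

# Hypothetical agents: two reliable, one that sometimes hallucinates.
def reliable(query):
    return query.upper()

def flaky(query):
    return query.upper() if random.random() < 0.5 else "hallucinated"

# Two of the three agents always agree, so the vote survives the flaky one.
result = majority_vote([reliable, reliable, flaky], "ping")
```

Efficiency clearly suffers (three calls for one answer), which is exactly the creed's ordering: robust first, correct as possible, efficient only as necessary.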
Josh Beckman