sniggyfigbat
Offline
Hmm, my summary is getting a bit long in the tooth. Time to write some bollocks for my personal amusement.
Writing a personal summary poses an interesting question: How does one define a person? Not in the sense of what counts as a person, but of what makes an individual themselves. How can one define their limits, their strengths, their tendencies, their beliefs?
Trying to systematically define a person is an inherently difficult task - and one, of course, that has been tackled by RPGs for decades, with all the sledgehammer subtlety that games trend towards. Any individual metric will inevitably be flawed, any combination of factors will still simplify, and any system too deep risks becoming a Kafkaesque black box. Even the difficulty of the problem is hard to accurately convey.
I tend to think of the challenge in the terms used by AI research, although my understanding of the subject is only comprehensive enough to have emerged from the Dunning–Kruger fog. As we currently understand it, any intelligence must be motivated by maximising a 'utility function'. In the case of an AI, this is usually defined as a points system, wherein producing a cup of tea awards ten points, obtaining the right ratio of milk awards another five, wiping out the planet with thermonuclear weapons subtracts a few million, and so on. The immediate issue, of course, is that Goodhart's law applies: "When a measure becomes a target, it ceases to be a good measure." This leads to unwanted behaviour wherein a rogue super-intelligence wipes out all life by using nanobots to render all the matter in the solar system into its component atoms, then reconstituting it all into trillions of paperclips, because you told it to maximise production of paperclips and that's what it's going to do, dammit!
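The points system above can be sketched in a few lines of Python. This is purely a toy of my own devising - the state keys and the weights are invented for illustration, not drawn from any real AI framework - but it shows how a perfectly literal maximiser ends up preferring the paperclip apocalypse:

```python
# Toy sketch of a utility function as a points system (all values invented).
def utility(state):
    """Score a world-state dict: the agent 'cares' only about these numbers."""
    points = 0
    points += 10 * state.get("cups_of_tea", 0)            # tea is good
    points += 5 if state.get("milk_ratio_correct") else 0  # milk done right
    points -= 3_000_000 * state.get("nuclear_exchanges", 0)  # nukes are bad
    points += 1 * state.get("paperclips", 0)              # seemingly harmless...
    return points

# A pleasant, 'safe' afternoon of tea-making:
tea_time = {"cups_of_tea": 3, "milk_ratio_correct": True}

# Goodhart's law in action: once the paperclip count is the target,
# converting the solar system into paperclips scores astronomically well,
# dwarfing even the multi-million-point nuclear penalty.
paperclip_apocalypse = {"nuclear_exchanges": 1, "paperclips": 10**15}

assert utility(paperclip_apocalypse) > utility(tea_time)
```

The bug isn't in the arithmetic; it's that the measure stopped being a proxy for what we actually wanted the moment the agent started optimising it.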
This concept of a utility function doesn't merely apply to artificial intelligence, however, but to all intelligence, and most especially ourselves. The only reason this isn't immediately obvious is that our utility function is biological: our endorphin release mechanism. People attempt to maximise endorphin production, in the short and long term, by doing things they like, and by planning long term to allow them to continue doing things they like, even if that necessitates some things they don't. This is complicated by the fact that the human utility function is, indeed, a Kafkaesque black box: a series of self-referential multi-variable equations of byzantine complexity that we cannot see, and must infer from the patterns in the level II chaotic system of human experience. Or, in other words, it's a bit tricky.
When framed in those terms, many things become clear. Why is it hard to create an AI that convincingly simulates human intelligence? Because we don't understand how our own utility function works, and thus cannot parse it into code. Why are humans such varied and conflicted creatures? Because we're barely-rational monkeys, whose intelligence is an accidental evolutionary byproduct of complex social hierarchies, trying to satisfy the hidden demands of a convoluted utility function in a way that makes some kind of sense and alludes to some kind of internal consistency. And why is it difficult to create and simulate convincing three-dimensional characters in art? Because to do so we cannot yet rely on rational means, and instead must look to some strange alchemy of empathy, tapping into circuitous patterns of our subconscious to bring to life a person who does not exist.
So, to return to the initial question, how does one define a person, capturing their behaviours and understandings in a comprehensive yet digestible manner? The answer: With great difficulty.
Of course, the most interesting thing is that the reader has likely formed a profile of me, the writer, simply from my musings on the subject. The human mind is surprisingly good at the impossible.