The AI Interaction Design Report 2023-24: Part 3 of 3
on growth, autocompletion, jurisdiction, and organisms
On Growth and Adaptability
If we think of AI as an organism, as something that grows, learns, and adapts, our attitude toward relating to it shifts accordingly. (Consider that even its errors are framed, appropriately, as ‘hallucinations’.)
AI-native devices and software will have malleable functionality, growing and learning new skills based on how people use them. Programming them will involve soft feedback loops from users rather than hard-coded functions. When we make something now, we are creating the infrastructure for systems to develop intuition.
The software we use now also has far more inputs and information to make sense of, enabling fuzzier tasks. Soon everything will have a camera and a microphone; visual and audio recognition will be cheap and valuable enough to become ubiquitous. Combining intuition, purpose, and senses creates the makings of an ideal collaborator: something we could partner with to augment human behavior.
When future products compete, the contest won’t necessarily be just core feature sets and user experience. It will be closer to a war of personalities: who is the most intuitive collaborator?
On Autocompletion
Adaptability shows up on both sides of the equation: the software changes in response to how it is used, and humans grow and extend themselves in turn.
We’ve long been headed toward a state of predictive-everything, with every product we use competing to show that it understands us best. Recent technological advances make us actually capable of extrapolating our identities into activities we don’t have time for, letting us operate beyond the constraints of time and space. Whether it’s our voice, our professional (or even social or romantic) presence, our ability to assimilate more information in a shorter timespan, or our ability to create faster and more efficiently than we could before, there are more tools at our disposal to accurately represent parts of ourselves in a scalable way.
However, just as streaming algorithms carried the threat of homogenizing everyone’s taste in music, does predictive technology risk everyone’s personalities eventually converging?
On Jurisdiction
What remains in the jurisdiction of humans? Large language models are, by design, built to model and predict natural language. When paired with multimodal functionality, they come closer to approximating human capabilities. What still feels purely within the realm of humanity is genuine emotive response, intuition, desire.
The final frontiers for AI to run autonomously may be health and money: situations that demand vulnerability are difficult to hand over entirely to a machine. Currently, AI works as an extrapolation of human ability, augmenting rather than replacing. But for any such system to work fully, that role needs to be transparently communicated in order to build trust.
In one study where AI reviewed medical images, the model alone reported higher accuracy than the radiologists’ evaluations, yet radiologists working in collaboration with the AI produced results less accurate than the radiologists working alone. The radiologists, it turned out, distrusted the AI, almost out of an instinct to prove it wrong.
So who/what is it then?
What is the internet to us now? It’s more a place than a collection of resources. Digital real estate, whether as community spaces or service providers, has value equal to or far surpassing physical real estate. The internet is the enabler of many different realities coexisting, juxtaposed upon one another.
What is AI to us now? I think it’s essentially an incredibly smart person, one you shouldn’t trust blindly, and everyone wants to hang out with the smart person in their own way. It might also be a substrate to soak in, improving our communications and perhaps even making us better people. As multimodal interactions grow, it will resemble an organism more and more; it may become an avenue we turn to for some degree of companionship and collaboration. Optimistically, it might even become the proving ground for how we develop socially and build more awareness of ourselves and the world around us.
The author Daniel Defoe, describing a mechanical “thinking engine” he called the Cogitator, framed its place in civilization as a supportive structure for human cognition, almost an additional layer of thought and desire management. He writes:
“…the main Wheels are turn’d, which wind up according to their several Offices; this the Memory, that the Understanding; a third the Will, a fourth the thinking Faculty; …perfectly uninterrupted by the Intervention of Whimsy, Chimera, and a Thousand fluttering Damons that Gender in the Fancy, but are effectually Lockt out as before, assist one another to receive right Notions, and form just Ideas of the things they are directed to, and from thence the Man is impower’d to make right Conclusions, to think and act like himself, suitable to the sublime Qualities his Soul was originally blest with. There never was a Man went into one of these thinking Engines, but he came wiser out than he was before.”
In Defoe’s telling, the engine is an objective collaborator that quickens the turning of our minds, sharpens our focus, and amplifies our ability to be the best versions of ourselves. But as AI starts to trespass into the most human of behaviors, we will be forced to question our own interpretation of consciousness.
What humans have bestowed upon AI is the ability to create a matrix of meaning: to codify the logical space between words and concepts and reflect it back to us. But how far off is that from the way humans experience the world?
“We feel in one world; we think, we give names to things in another. Between the two we can establish a certain correspondence, but not bridge the gap.”
-Marcel Proust