Public protocols like Uniswap were the first time I really bought into “one protocol, many UIs.” Generative AI pushed that thought even further: open APIs + endless interface variations on the user side.
I wondered: do top-down UIs still matter? Should apps encourage fully open, malleable interfaces so everyone can make the space their own?
I really doubt that’s where this converges. It adds friction in the wrong places. It assumes most people want to make design decisions across every platform they use. I personally don’t. I imagine most people don’t. And the bigger thing we keep ignoring is the interface layer itself: almost everything is still mapped through the phone.
There’s a whole wave of people saying software is cooked (thanks to AI), so hardware is next. I think that’s directionally correct. But that logic often gets coupled with hardware-specific software, which can recreate the same lock-in and dead ends that made people imagine “many UIs” in the first place.
I keep coming back to what I call network interface. The core idea is simple: increase the surface area for how people can interact with a network, and increase throughput into and out of that network.
Make a camera that posts directly to a network.
Make a field recorder that posts directly to a network.
The idea is that there’s more user freedom when we design each interface natively to its medium while aggregating to the same protocol. If people want a screenless, phoneless experience but still want to be connected to a network, they can design those experiences.
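To make the shape of this concrete, here’s a minimal sketch of “many native interfaces, one protocol”: every device speaks the same post format, so the network never cares which hardware produced the signal. All of the names here (`Post`, `Network`, `Camera`, `FieldRecorder`) are hypothetical, invented for illustration, not any real protocol.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    """The shared protocol unit every device emits."""
    medium: str      # e.g. "photo", "audio"
    payload: bytes

@dataclass
class Network:
    """The single aggregation point all interfaces post into."""
    posts: List[Post] = field(default_factory=list)

    def publish(self, post: Post) -> None:
        self.posts.append(post)

class Camera:
    """Screenless camera that posts photos straight to the network."""
    def __init__(self, network: Network):
        self.network = network

    def shoot(self, raw: bytes) -> None:
        self.network.publish(Post(medium="photo", payload=raw))

class FieldRecorder:
    """Field recorder that posts audio clips to the same network."""
    def __init__(self, network: Network):
        self.network = network

    def record(self, raw: bytes) -> None:
        self.network.publish(Post(medium="audio", payload=raw))

# Two very different physical interfaces, one shared protocol.
net = Network()
Camera(net).shoot(b"jpeg-bytes")
FieldRecorder(net).record(b"wav-bytes")
print([p.medium for p in net.posts])  # → ['photo', 'audio']
```

The point of the sketch is that adding a new interface (a pendant, a dashboard, a voice device) means writing one thin adapter, not a new platform.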
That’s the interesting lever imo: not infinite font/theme/layout control inside one app, but multiple meaningful ways to interact with one network. People choose their interaction mode, which opens up better experience design where it actually matters and lets people interface with the networks they care about in whatever way best matches their preferences.
Interface captures the signal. Matching routes the signal. Resonance decides how much it hits.