July 09, 2005

Fridge as Philosophy of Everything: A Manifesto

Someday I'll rewrite this so it makes more sense.    (PRC)

Summary    (PRD)

If you lose track in the ramble below, here's the summary: Architect your hardware, your software and your life so your environment helps you out and you don't have to waste your brain deciding on things that shouldn't need deciding.    (PRE)

The Meat    (PRF)

In an IRC conversation yesterday with my coworker Ryan (I haven't met him in person yet, but will next week), I was trying to convince him of the value of a service oriented architecture in systems design. While not entirely resistant to the notion, Ryan felt perhaps I was trying to impose needless boundaries that didn't provide any gain. So I tried a metaphor I've used before:    (PRG)

part of the goal in having a multi-piece/service sort of solution is so you can write giant swaths of code that you then forget about for the rest of time    (PRH)

because it is, in effect, encapsulated    (PRI)

not in the dogmatic oo sort of way, but in the it's a refrigerator we never have to think about it kind of way    (PRJ)

rking: you have a fridge, yeah?    (PRK)

do you ever think about it?    (PRL)

it does one or two things: it keeps food cool or frozen    (PRM)

if it breaks you buy another and put your food in the new one    (PRN)

you can do just a few things with the fridge: you can put some food in, you can take some food out    (PRO)

smart soa is about creating systems of fridges, both in terms of your code and your hardware    (PRP)

systems that are easy to replace, easy to maintain, and dramatically simple    (PRQ)

My earlier uses of the fridge metaphor described hardware systems, not software systems. In the physical world the metaphor mapped nicely, but it seems to map just as well in the abstract world of software, and maybe in getting things done, too.    (PRR)

If you have the fortune to live in the modern developed world you probably have a fridge and you probably rarely think about it. When you go to the kitchen you have a goal that's not directly related to the fridge. You're hungry or thirsty and want to change that situation. In the kitchen (the context of your experience) there is a suite of tools and appliances to help you achieve your goal. Tools like knives and forks augment you by extending your existing capabilities. Appliances like fridges automate certain processes you either cannot or do not want to do. Elegant tools and appliances, like knives and fridges, do a small number of things (cut food, chill food) and present a small number of clear handles or activities for doing those things.    (PRS)

My earliest experience with a fridge metaphor was as a system administrator at Kiva Networking. We had the time and resources to create a collection of systems, each performing just one service, taking a small number of inputs and providing a small number of outputs. We put them on the network and if we were lucky, forgot about them while they serviced several thousand users.    (PRT)

We didn't call them fridges then. That happened with the re-architecting of the presentation side of Indiana University's Knowledge Base. A small army of cheap replaceable Debian boxes provides just one service: the front end to the KB. Boxes can be automatically installed and added and removed from the network.    (PRU)

Small cheap front ends to data services are nothing new, and the reasoning for them is pretty solid and well accepted. It's harder to convince people that we should use the same logic for building software, and maybe for managing our lives.    (PRV)

In the software realm people develop a resistance to a fridge, or service oriented, mentality because some have made those notions synonymous with the wordy, dogmatic, heavy-seeming worlds of multi-tiered Java and the mess of the WS-* standards.    (PRW)

It doesn't have to be like that. You can be service oriented by having and using components that present narrow interfaces, provide simple and predictable outputs, are easily layered, and (ideally) are location independent. That's it. Nothing's being said here about SOAP, or REST, or J2EE. Perhaps we perceive those things as noise because they are insufficiently mature? Imagine the noise if detailed knowledge of how a computer works at the electronic level were a requirement for use.    (PRX)
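One concrete way to read that list (a minimal Python sketch, with all names invented for illustration): a component is just a callable with a narrow, predictable contract, which is exactly what lets another component layer on top of it without knowing anything about its insides.

```python
def shout(text):
    """A tiny service: one narrow input, one predictable output."""
    return text.upper()

def exclaim(service):
    """Layering: wrap any text-in/text-out service without peeking inside it."""
    def layered(text):
        return service(text) + "!"
    return layered

# The wrapper only relies on the contract, not the implementation,
# so the inner service could be swapped out (or moved to another
# machine behind the same interface) without the wrapper changing.
loud = exclaim(shout)
```

Calling `loud("hello")` yields `"HELLO!"`; swap `shout` for any other component honoring the same contract and `exclaim` keeps working unchanged.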

What matters is the analysis that lets the designer or developer decide which points are the boundaries between chunks of automatable activity in the system. That is, which parts of the system can be described by theory given known inputs and require no additional inputs. For example, a wikitext to html formatter can be an independent automatic system given inputs of wikitext and perhaps a list of existing pages in the wiki system. We can reliably predict the output of the formatter, knowing the input.    (PRY)

We cannot, however, know the input, so gathering input is a process that is augmented by presenting an interface to the outside (to the user through some system). We may know, given a certain set of circumstances, which interface to present, but we don't know what the user will type.    (PRZ)

A wiki formatter is a service we might wish to reuse or layer. Because we can know what it will take in and what it will put out, we can layer and reuse it in a relatively straightforward fashion.    (PS0)
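A hedged sketch of what such a fridge-like formatter might look like (the wikitext rules here are hypothetical, chosen only to show the deterministic inputs-to-output contract):

```python
import re

def format_wikitext(wikitext, existing_pages):
    """Deterministic wikitext-to-HTML formatter: same inputs, same output.

    Hypothetical rules: ''text'' becomes emphasis, and a CamelCase
    word becomes a link only when that page exists in the wiki.
    """
    # Emphasis: ''foo'' -> <em>foo</em>
    html = re.sub(r"''(.+?)''", r"<em>\1</em>", wikitext)

    def link(match):
        word = match.group(0)
        if word in existing_pages:
            return '<a href="/wiki/%s">%s</a>' % (word, word)
        return word

    # CamelCase words (two or more capitalized chunks) become links.
    return re.sub(r"\b(?:[A-Z][a-z]+){2,}\b", link, html)
```

Because the function depends only on its two inputs, a caching layer, a web service, or another formatter can be stacked on top of it without anyone ever needing to look inside again.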

We can make it a fridge. Once that is done, it is possible to stop worrying about it, and move on to the harder and/or less predictable aspects of the system where being smart is required. Places like making an effective user interface or innovating a new exciting feature.    (PS1)

Until the things that ought to be fridges are turned into fridges, they occupy an overly large segment of thinking in a developer's activity. There's a lack of foundation. Imagine how it would be if every time you wanted to eat you had to evaluate the fridge to make sure it was working correctly. All day long you would sit at work wondering, "Has the milk gone off? What am I going to do about dinner?"    (PS2)

Properly made fridges remove fear and uncertainty.    (PS3)

Fridges do create boundaries, but those boundaries are a form of constraint that acts as an external cognitive aid. The boundary lets you know what you don't have to think about: it keeps you in a limited space, focussing your thought on what matters in the current context. The boundary also lets you know what you can think about (related to the fridge): the boundary encapsulates a known and predictable space. You are not removed from your context: the fridge is contained, but not you. Constraints help you think about the stuff you need to think about.    (PS4)

Spacing Out    (PS5)

Hardware and software design don't matter much and do not a manifesto make. What's going on in the background here is a recapitulation of Doug Engelbart's (and others') ideas about augmentation fed through a filter of Landauer's descriptions of the differences between automating and augmenting systems. Those ideas are generally used to describe technical systems, but are really a way of looking at life.    (PS6)

We waste a lot of time making decisions that don't really matter. A monk creates space for contemplation by automating away much of their life into a rigorous routine of daily sameness. Our liberty is experienced in the decisions we make that feel relevant.    (PS7)

So how do we, and should we, identify and automate into a fridge-like state those decisions and concerns that aren't particularly relevant, so as to maximize the experience of being relevant? There's more to it than being like the monk, but there's something there: routinize those things for which you almost always choose the same thing. Sure there's a cost to such behavior, but why endure the existentially alienating experience (for me) of wasting time choosing among a host of nothingness?    (PS8)

Engelbart's goal, all along, has been to figure out ways to solve the complex and urgent problems that face the collective we. Automate (only) those processes which can be correctly automated, and augment (place tools at your hands to navigate the gaps between automated processes) the others, removing the irrelevant, so as to approach problem solving as effectively as possible.    (PS9)

Automating and augmenting must involve a constant process of review and re-composition of the pieces that make up the system: When some collection of variables has become, after rigorous investigation, deterministic, you may, at your option, turn the system into a fridge and remove it from your concern. But only if you know the inputs. Most of the time you don't.    (PSA)

At the individual level finding the meaningful life is a problem to solve. If I make some fridges, I might find some additional relevant space. Room to breathe and think and do what matters.    (PSB)


Apologies to those who can't stomach mixing geeking out about software and hardware with geeking out about life, or who take too seriously the taking of everything too seriously. Everything everywhere is a metaphor and it's all analogies to be learned, compared and connected. We want to remove fear and uncertainty in life at least as much as we want to remove it in software development. We want to think effectively about all sorts of problems.    (PSC)

Thanks to Dave Rolsky and Ryan King for recently stirring this pot and providing some ideas and to Joe Blaylock, Kevin Bohan and Matthew O'Connor.    (PSD)

Posted by cdent at 09:30 PM | Trackback This | Comments (2) | TrackBack (2) | Technorati cosmos | bl

May 03, 2005

Apple Amplifier

Meet Automator by Matt Neuburg at TidBITS opens with:    (PK4)

The history of the Mac is paved with Apple's attempts to enable ordinary users to tap the programmable power of their own computers.    (PK5)

They've done this by giving users more granular access to the operations of the tools and applications on the system. AppleScript and Automator let people manipulate and assemble simple data and actions in ways that create complex systems.    (PK6)

It's like lego. If you have lots of little pieces of lego you can build all kinds of fancy things that are less limited than what you can build with the bigger blocks of Duplo.    (PK7)

This is the argument that's often been used to explain the superiority of Un*x over Windows; the lack of flexibility and real assistance provided by wizards and Clippy; the value of nanotechnology; and, near and dear, the importance of purple numbers and similar systems.    (PK8)

Apple's use of this model may explain why I've never felt particularly insulted by the company. Granting people usable handles to actions and information is a generous and trusting gesture that hopes and assumes the receiving end is a creative and intelligent person who wants to use their tools in a less dumb fashion. There's extra power here if you want it. Headroom.    (PK9)

Posted by cdent at 01:27 AM | Trackback This | Comments (0) | TrackBack (1) | Technorati cosmos | bl

February 07, 2004

Idiom is Important

I've just returned from a lecture given by Douglas Hofstadter here at Indiana University. The title was "Can Computers Understand Language? A ten-year booster shot against Searliomyelitis."    (2LF)

Back in 1980 John Searle published a paper, "Minds, Brains and Programs", describing the now famous Chinese Room problem. Hofstadter believes the paper is full of errors, wrong, and akin to a virus in the way it infects otherwise right-minded folk with poor thinking.    (2LG)

Prior to the lecture I assumed Doug was going to give us a highly controversial refutation of Searle's thesis. I was prepared for an event--a painful inoculation. I didn't get that: at the end of the presentation he simply asked that Searle and his cohorts lighten up and make room in their world for the simple idea that understanding of meaning exists not on an absolute black and white scale, but instead on a continuum along which progress--in the realm of computers understanding language--can be made, even if that progress thus far has been tiny.    (2LH)

I can get behind that. However, I think Hofstadter made some generalizations that were convenient for his argument when not explored but disruptive when considered more deeply.    (2LI)

When we ask if a computer is understanding language, we are asking more than whether it understands what we've said. A computer can give back reasonable responses to queries, so some kind of understanding is going on. The real question is: do we believe there is meaning inside the machine? Are the symbols being manipulated by the computer related to "real things" with "real meaning"?    (2LJ)

Hofstadter asks the same question of humans. When a human uses symbols when are they associated with "real meaning"? If a person uses baseball idioms without ever playing baseball, do they have a right to use the idiom? People use the idioms and we understand them when they do, so meaning is transmitted in some fashion.    (2LK)

I think Hofstadter misses the important question here. The question is not whether the idioms have meaning, but rather how do we know when a particular idiom is the right one to use? How do we judge or interpret the context in which we exist when we are communicating?    (2LL)

Hofstadter seemed to imply, although he may not have intended to do so, that children are programmed in much the same way as computers. Children, he said, know the difference between abstract and literal because we tell them. An audience member disagreed with this, saying there was far more subjectivity involved in the human child's judgment when compared with the computer.    (2LM)

The primary thrust of the criticism of Searle is that Searle's model of understanding is absolute, black and white, and thus smacks of religious sacred-cowness. Hofstadter's alternative is a scale he calls semanticity, from no understanding to complete understanding. This is reasonable, but it too has a degree of absolutism. It assumes that there is only one dimension of understanding, and one ultimate peak of understanding which is the right one. We should certainly strive to improve understanding, and getting away from sacred cows is a good step, but let's not build another in the process.    (2LN)

Finally, throughout the presentation I was constantly reminded of what I consider to be a very important distinction: Behavior that is based on rules, such as programs, no matter how complex, can be decoded, eventually, by stepping backwards (perhaps in multiple dimensions) through the rules that were used to generate the behavior. It may be that the behavior can't be repeated, because of randomness in the system, but it still can be described by what amounts to programs. It's an article of apparent faith in some circles of the AI and/or cognitive science worlds that the complex behavior of the human brain is theoretically describable. I find it advantageous to disagree with this, not because I'm attached to meat as the only possible source of real intelligence but because believing it makes us emphasize the wrong problems, which I hope someday to describe at ClassesVersusCategories.    (2LO)

Literary theorists should read more cognitive science. And vice versa. There's religion on both sides.    (2LP)

Posted by cdent at 12:22 AM | Trackback This | Technorati cosmos | bl