I felt a bit bad writing the last post on artificial intelligence: it's outside my usual area of writing, and as I'd just admitted, there are a number of other points within my area that I haven't got round to properly putting in order.
However, the questions raised in the AI post aren't as far from the debates Anomaly UK routinely deals in as I first thought.
Like the previous post, this falls firmly in the category of "speculations". I'm concerned with telling a consistent story; I'm not even arguing at this stage that what I'm describing is true of the real world today. I'll worry about that when the story is complete.
Most obviously, the emphasis on error relates directly to the Robin Hanson area of biases and wrongness in human thinking. It's not surprising that Aretae jumped straight on it. If my hypothesis is correct, it would mean that Aretae's category of "monkeybrains", while of central importance, is very badly named: the problem with our brains is not their ape ancestry, but their very purpose, which is to reach practical conclusions from vastly inadequate data. That is what we do; it is what intelligence is; and the high error rate is not an implementation bug but an essential aspect of the problem.
(I suppose there are real "monkeybrains" issues, in that we retain too high an error rate even when there actually is adequate data. But that's not the normal situation.)
The AI discussion relates to another of Aretae's primary issues: motivation. Motivation is getting an intelligence to do what it ought to be doing, rather than something pointless or counterproductive. When working with human intelligence, it's the difficult bit. If artificial intelligence is subject to the problems I have suggested, then properly specifying the goals that the AI is to seek will quite likely also turn out to be the difficult bit.
I'm reminded in a vague way of Daniel Dennett's writings on meaning and intentionality. Dennett's argument, if I remember it accurately, is that all "meaning" in human intelligence ultimately derives from the externally imposed "purpose" of evolutionary survival. Evolutionarily successful designs behave as if seeking the goal of producing surviving descendants, and seeking this goal implies seeking subgoals of feeding, defence, reproduction, and so on. In humans, this produces an organ that explicitly and symbolically expresses and manipulates subgoals, but that organ's ultimate goal is implicit in its construction, and not subject to symbolic manipulation.
The hard problem of motivating a human to do something, then, is the problem of getting their brain to treat that something as a subgoal of its non-explicit ultimate goal.
I wonder (in a very handwavy way) whether building an artificial intelligence might involve the same sort of problem of specifying what the ultimate goal actually is, and making the things we want it to do register properly as subgoals.
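To make the handwaving slightly more concrete, here is a toy sketch (not a claim about any real AI architecture; all names are hypothetical) of the distinction above: the ultimate goal is baked into the code and cannot be symbolically manipulated, while subgoals are explicit data the agent can inspect and extend. "Motivation" is then the problem of getting a task registered as a subgoal.

```python
# Toy illustration only: an implicit ultimate goal versus explicit subgoals.

class Agent:
    def __init__(self):
        # Explicit, symbolic layer: the agent can list, add, and reorder these.
        self.subgoals = ["feed", "defend", "reproduce"]

    def _ultimate_goal_score(self, outcome):
        # Implicit layer: hard-coded into the agent's construction, not
        # represented as a symbol it can manipulate. Here it simply rewards
        # outcomes that match an adopted subgoal.
        return 1.0 if outcome in self.subgoals else 0.0

    def adopt_subgoal(self, goal):
        # The "motivation problem": making a task register as a subgoal.
        self.subgoals.append(goal)

    def evaluate(self, outcome):
        return self._ultimate_goal_score(outcome)

agent = Agent()
print(agent.evaluate("file taxes"))   # 0.0: not yet a subgoal
agent.adopt_subgoal("file taxes")
print(agent.evaluate("file taxes"))   # 1.0: now treated as serving the goal
```

The point of the sketch is only that `_ultimate_goal_score` is opaque to the agent: everything it can reason about explicitly lives in `subgoals`, yet the thing doing the real evaluating is fixed in the construction.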
The next issue is what an increased supply of intelligence would do to the economy. Though an apostate libertarian, I have continued to hold to the Julian Simon line that "Human inventiveness is the Ultimate Resource". To doubt that AI will have a revolutionarily beneficial effect is to reject Simon's claim.
Within this hypothesis, the availability of humanlike (but not superhuman) AI is of only marginal benefit, so Simon is wrong. What, then, is the ultimate resource?
Simon is still closer than his opponents: the ultimate resource (that is, the limiting resource, in the sense of the law of the minimum) is not raw materials or land. But if it is not intelligence per se, it is more the capacity of the wider system to absorb that intelligence.
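The law of the minimum referred to above (Liebig's law, originally from agronomy) says that growth is capped by the scarcest input, not the total of inputs. As a minimal sketch, with illustrative resource names:

```python
# Liebig's law of the minimum: output is capped by the scarcest input,
# not by the sum of inputs. The resource names here are illustrative only.

def growth_rate(resources):
    """Return the binding constraint: the minimum available resource."""
    return min(resources.values())

# Plenty of land and raw materials, but integration capacity is scarce:
system = {"raw_materials": 90, "land": 70, "integration_capacity": 5}
print(growth_rate(system))  # 5: growth tracks the scarcest resource
```

On the post's hypothesis, the scarce term is the system's capacity to integrate new intelligence, so adding more of the plentiful inputs changes nothing.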
I write conventional business software. What is it I spend my time actually doing? The hard bit certainly isn't getting the computer to do what I want. With modern programming languages and tools, that's really easy, once I know what it is I want. There used to be people with the job title "programmer" whose job was exactly that, with separate "analysts" who told them what the computer needed to do, but the programmer was already pretty much an obsolete role when I joined the workforce twenty years ago.
Conventional wisdom is that the hard bit is now working out what the computer needs to do — working with users and defining precisely how the computer fits into the wider business process. That certainly is a significant part of my job. But it's not the hardest or most time-consuming bit.
The biggest part of the job is dealing with errors: testing software before release to try to find them; monitoring it after release to identify them; and repairing the damage they cause. The testing is really hard, because the difficult bits of the software interact with multiple outside people and systems, and it's not possible to simulate them fully. New software can be tested against pale imitations of the real world, and, if it's particularly risky, real users can be reluctantly drafted in for "user acceptance" testing. But all of that (simulating the world to test software, having users effectively simulate themselves to test software, and running not-entirely-tested software in the real world with a finger hovering over the kill button) is what accounts for most of the work.
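The "pale imitations of the real world" above are what test doubles are for. A minimal sketch, using the standard library's `unittest.mock` (the gateway and `process_order` function are hypothetical, invented for illustration):

```python
# Sketch of testing against a simulated external system, using the standard
# library's unittest.mock. The payment gateway here stands in for an outside
# system that cannot be fully simulated.
from unittest.mock import Mock

def process_order(order, gateway):
    """Business logic under test: charge the order via an external gateway."""
    result = gateway.charge(order["customer"], order["amount"])
    return "confirmed" if result == "ok" else "failed"

# The test substitutes a mock that merely imitates the gateway's interface.
fake_gateway = Mock()
fake_gateway.charge.return_value = "ok"

print(process_order({"customer": "c-1", "amount": 10}, fake_gateway))
# prints "confirmed"

# The mock records the interaction, but only behaviours we scripted are
# exercised; the real system's surprises stay untested, which is the point.
fake_gateway.charge.assert_called_once_with("c-1", 10)
```

The mock only ever behaves in ways the tester thought to script, which is exactly why testing against imitations is a pale substitute for the production environment.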
This factor is brought out more by the improvements I mentioned in the actual writing of software, but it is by no means new. Fred Brooks wrote in The Mythical Man-Month that making a program part of a system roughly tripled its cost, as did properly productionising it (so that it would run reliably unsupervised), and that these multipliers compound: a productionised, integrated version of a program takes something like ten times as long to produce as a stand-alone, developer-run version.
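Brooks's figures reduce to a simple compounding calculation, sketched here with the tripling factors from the text:

```python
# Brooks's estimate as arithmetic: each step (integration into a system,
# and productionising for reliable unsupervised use) roughly triples the
# effort, and the two multipliers compound.
SYSTEM_FACTOR = 3   # making the program part of a system
PRODUCT_FACTOR = 3  # productionising: testing, documentation, reliability

def full_cost(standalone_days):
    """Effort for a productionised, integrated version of a program."""
    return standalone_days * SYSTEM_FACTOR * PRODUCT_FACTOR

print(full_cost(1))  # 9: "something like ten times" the stand-alone effort
```

Note that the writing step is the factor of one in this product; improving it further barely moves the total.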
Adding more intelligences, natural or artificial, to the system is the same sort of problem. Yes, they can add value, but they can also do damage. Testing them cannot really be done outside the system; it has to be done by the system itself.
If completely independent systems exist, different ideas can be tried out in them. But we don't want those: we want the benefits of the extra intelligence in our system. A separate "test environment" that doesn't actually include us is not a very good copy of the "production environment" that does include us.
All this relates to another long-standing issue in our corner of the blogosphere: education, signalling and credentialism. The argument is that the main purpose of higher education is not to improve the abilities of the students, but merely to identify those students who can first get into and then endure the education system itself. The implication is that there is something very wrong with this. But another way of looking at it is that the major cost is neither producing nor preparing intelligent people, but testing them and safely integrating them into the system. The signalling in the education system is part of that integration cost.
Back on the Julian Simon question, what that means is that neither population nor raw materials are limiting the growth and advance of civilisation. Rather, civilisation is growing and advancing roughly as fast as it can integrate new members and new ideas. There is no ultimate resource.
It is not an original observation that the things that most hurt our civilisation are self-inflicted. The organisation of mass labour that produced industrialisation also produced the twentieth-century world wars. The flexible allocation of capital that drove the rapid development of the last quarter-century gave us the spectacular misallocations whose consequences we're now suffering.
The normal attitude is that these accidents are avoidable; that we can find ways to stop messing up so badly. We can't. As the external restrictions on our advance recede, we approach the limit where the benefits of increases in the rate of advance are wiped out by more and more damaging mistakes.
Twentieth-century science-fiction writers recognised at least the catastrophic-risk aspect of this situation: the idea that intelligence is scarce in the universe because it tends to destroy itself is a recurring suggestion.
SF authors and others emphasised the importance of space travel as a way of diversifying the risk to the species. But even that doesn't initially provide more than one system into which advances can be integrated; at best it reduces the probability that a catastrophe becomes an extinction event. Even if we did achieve diversity, that wouldn't help our system to advance faster, unless it encouraged more recklessness — we could take a riskier path, knowing that if we were destroyed other systems could carry on. I'm not sure I want that; it raises the same sort of philosophical questions as duplicating individuals for "backup" purposes. In any case, I don't think even that recklessness would help: my point is not just that faster development creates catastrophic risk, but that it increases the frequency of more moderate disasters, like the current financial crisis, and so wipes out its own benefits.
Labels: philosophy, technical