Pedantry - Moved to http://pedantry.fistfulofeuros.net

Saturday, August 23, 2003
 
Mediation, Collectivism, Self-Development and Political Theory

Because of the extra workload my boss has dropped on me, along with the length of this essay, I have decided to break this third post on language rights into two posts. I expected to put this up yesterday, but better late than never. The next post will go up sometime early next week.

You can view part 1 and part 2 by scrolling down the page or following the links. I'd also like to thank Language Hat for linking to this discussion.

This post outlines an alternative to normative liberal political theory, something I have promised to do since this surprisingly controversial post on the Middle-East. I've actually started writing it a half dozen times in the last couple of months, and then scrapped it. I guess I need a certain amount of pressure to actually get some kinds of work done, but language rights are a good example application so I'm taking this opportunity to do it.

This post only outlines the theory and offers an example of its application that has nothing to do with language policy. I use it to analyse arguments justifying affirmative action in the United States as a form of slavery reparations. In the next post, I will apply it specifically to language issues.

I'm not a philosopher or a political scientist by training. I grew up immersed in child development and education theory because of my parents and later studied a lot of linguistics and translation theory, ending up - somehow - with a degree in physics and becoming a computer programmer by trade. In a lot of the social sciences, and particularly in philosophy, I'm more or less self-educated, and I realise the limitations that entails. Sometimes it means rediscovering something that's been hashed over decades ago without knowing it. But sometimes, it has advantages in terms of thinking outside of the box. Bits and pieces of my outlook have been advanced by other people, but I don't know anyone else saying quite the same thing. On the other hand, this may all be old news and I just don't know about it.

I want to start by giving you a brief and quite skewed summary of a theory in psychology and child development. It was originally promoted by Lev Vygotsky, a Russian who worked in the era of the revolution. He was a contemporary of Jean Piaget, and for a long time people tended to group them together as if they were largely saying the same things. They weren't, but Vygotskyan thinking is certainly informed by the experiences of Piaget's disciples.

Vygotsky advanced, among other things, a notion he called mediation. He believed that people always interact with the world through culturally constructed artefacts. Mostly, he just called them tools. The most trivial examples of this sort of thing are, in fact, physical tools. Hammers, for instance, are culturally constructed. They require a metalworking industry, mass produced nails and access to lumber, each of which involves a complicated cultural framework of divided labour, market relations, transportation networks and the like. When we want to build something, we don't just construct it with our bare hands, we use carpentry tools, and how we build is determined in large part by the tools and materials we have.

Part of what was unique about Vygotsky was that, first, he claimed that symbolic tools were just as important, and just as much culturally constructed artefacts, as physical tools. Second, he claimed that tools not only affect how we interact with the world, they also affect how we think about it. Mathematical algorithms, categories, philosophies and beliefs constitute, in Vygotsky's thinking, tools in the same sense as a hammer or a car. They have histories, they are instrumental in mediating how we interact with the world, they are supported by a cultural and institutional framework and, just like a hammer, we come into possession of them as is, without direct knowledge of the history and cultural supports all tools have. Furthermore, not only are our tools culturally adapted to us and our needs, we adapt ourselves to them as we use them. Vygotsky considered this the core principle of psychology. He called it cultural-historical activity theory, now often abbreviated as "CHAT."

This school of thought has been very influential in education theory and is beginning to have influence in cognitive science, thanks in large part to work at UC San Diego's Communications department and in the Education department at the University of Helsinki. It has found an especially productive home in the computer industry in recent years, where CHAT has become an important school of thought in the theory of Human-Computer Interface design. Bonnie Nardi, former chief interface wonk at Apple, is one of its better known proponents.

But I want to highlight some of the more philosophical consequences of this kind of thinking. To do that, I'm going to use a semi-famous quote from Gregory Bateson, an anthropologist and I think Margaret Mead's husband:

Suppose I am a blind man, and I use a stick. I go tap, tap, tap. Where do "I" start? Is my mental system bounded at the handle of the stick? Is it bounded by my skin? Does it start halfway up the stick? Does it start at the tip of the stick?

Bateson goes on to argue that cognition - at least, if cognition is understood as information processing - is something that takes place both inside and outside of our bodies. Tools, like the blind man's stick, are as much a part of the human information processing system as a chunk of our brains. But if the stick is part of the blind man's mental processes, isn't the curb he taps it against just as much a part? How about the man who made the curb, or the city planner who put it there in the first place?

Cognition isn't the only place where this sort of "out of body" thinking applies. No one can run a mile in a minute, but I can sure drive a mile in a minute if I have a car. If my legs are a part of my locomotive apparatus, surely so is any vehicle I'm using? But then, so is the road, the road signs, the gas stations and the crews who repave the road every few years. Digestion is the process of turning food into stored energy for my body, so where does my digestive system start? I have to cut food up on the plate before I can eat it, so my hands and my fork and knife must also be part of my digestive system. But then, the cook also has to be a part of my digestive system, since he too plays a role in converting food into metabolic energy.

This more philosophical way of looking at Vygotskyan mediation says that our selves expand out from our bodies to the whole world, permeating the machines, the social structures and even the people all around us. We are not merely our bodies, or our memories, or even some sort of software running in our nervous system. My mind is not just my brain, it's also my Ultra 10 workstation, my books, my job and the whole cultural and intellectual milieu I inhabit, as well as the larger global sphere that it exists in. Each of us encompasses the entire universe.

This "oneness with everything" begins to sound like New Age mysticism, but I want to convince you that there is nothing mystical about it. People chop the world up into categories and things and what I am describing may not be the way that you're used to chopping the world up, but all I'm doing is chopping it up differently. I have no intention of evoking gods, essences or mystical forces. I am not proposing any sort of strange theory of physics, although I might be guilty of proposing a theory of metaphysics. I am saying that the stars affect our destinies, but only in the same sense as I might claim that if the sun went out you'd freeze to death.

Vygotsky was a materialist - both historical and dialectical - and had little use for mysticism. Mediation was, for him, the tool that let him get at the human mind.

As an aside, I've always thought there would be an interesting paper in comparing Vygotskyan mediation to Dennettite memes. At times it seems like they're almost talking about the same thing and at others they seem like they're on different planets. Vygotsky felt that consciousness is constructed by the use of cultural artefacts, while Dennett feels that it is constructed by the action of memes. The big difference is that Dennett says that consciousness is a socially constructed illusion, while Vygotsky claims that it is a socially constructed fact. Therein lies a world of difference.

Now, there are several other alternative ways of looking at cognition and identity, but one frequently discussed alongside CHAT is Actor-Network Theory (ANT), a school of thought most associated with Bruno Latour. Latour is fairly famous, especially in European intellectual culture, primarily because of his work in science studies. He's been pretty heavily attacked for his work, and I think he's done a reasonably good job of defending himself, except that unfortunately some of his books - Pandora's Hope in particular - are too dense and stylistically difficult to explain his ideas very well.

Latour devised ANT in large part to explain the semantic capacity of non-humans, particularly the objects of scientific study and the experiments used to study them. He deploys Greimas' notion of the actant to describe a wide variety of phenomena as actors, and then suggests that a more appropriate way to look at scientific work is to see scientists as people who interpret the acts of their objects of study, doing so within a cultural framework just like all other kinds of acts of interpretation. This view is opposed to a vision of scientific work as the generation of theories and testing of hypotheses. A full discussion is too far from my main topic and a bit too complicated to get into here.

What Latour claims, at some length, is that cognition should be understood not as the actions of individual agents like people, but as a process which takes place over heterogeneous networks of actors held together by different kinds of relations. Thought, for him, is always and everywhere a collective process. It is reasonable and useful to say that a network capable of cognition and action constitutes an agent in the same sense that a person does.

There is quite a lot I find appealing in Latour's approach, although I think Latour would be uninterested and possibly horrified to see ANT used to develop a normative political theory. This notion of collective cognition and action allows me to identify a wide variety of things in the world - companies, countries, institutions of various kinds - as things with the capacity for thought and action in their own right.

People already do this all the time. Every time someone says that "Microsoft is out to destroy Linux" or that "America made a mistake invading Iraq", they are ascribing the capacity for thought and action to an entity which is not a human being. Most people, when pressed on this issue, will say that it's a sort of verbal shorthand for saying that "some of the people who run Microsoft want to use the human and material resources at their disposal as Microsoft's bosses to destroy Linux" and that "George W. Bush and other executive office decision makers made a mistake in ordering individual American soldiers to invade Iraq." I don't think it makes any sense to see the one as a shorthand for the other.

Most of us have had the experience of dealing with some kind of customer support person, or some kind of government agent, only to be told that whatever perfectly reasonable thing we want them to do is "against policy." It is possible to have an outlook which claims that when this happens we are dealing with an individual person who has just told us "no." But, that strikes me as counter-intuitive. We don't usually blame the actual person we're dealing with for failing to do what we requested, no matter how desperate we are to have them do it. Think about it: who precisely is to blame when your local library is closed due to budget cuts? The librarian has the keys; she (it's usually a she) could keep it open if she wanted to. Of course, there would be consequences for her, but do we genuinely attribute the underlying fault to the individuals making those choices? No, we blame the government, and usually we blame them in a quite amorphous and indistinct manner, since bureaucrats and elected officials also work in a context of limited choice.

I am not claiming that anything except individual people makes decisions and takes actions, just that we can attribute outcomes to the whole that we can not necessarily attribute to individuals alone. All I am saying is that "[m]en make their own history, but they do not make it as they please; they do not make it under self-selected circumstances..." The circumstances in which people act are not static and sometimes we do act as a part of something else. To attribute only to individuals the causes of their actions is utterly contrary to the way we usually conceptualise the world and the way we behave towards each other.

Instead, I propose to step back and say that it is perfectly reasonable to identify an action with a country, a firm or another kind of institution. This is very much in line with modern thinking about institutions. The division of labour, for instance, is a perfect example of this sort of collectivist thinking. A firm is not an individual. Its cognition is not the cognition of a single person because it is impossible that a single person could do all the planning, much less execute all the actions, of a large firm. Yet, we are presented with and usually interact with a firm as a whole which singly produces whatever goods and services it sells and is singly compensated. In the same way, a government is never merely one man, even in the most absolute dictatorship. There is always an apparatus of state which cannot be micromanaged from the top. Yet, the whole point in having a state is that it should act with a single mind in those matters that fall within its sphere of activity.

I harp on this point because it is the hardest one to get people to accept. The world does not have to be analysed as if only individuals counted and there are many times when such an analysis is counter-intuitive and misleading. I should also make clear what I am not saying. People are never merely elements of a collective. Nor am I claiming that no one is ever to blame if they are "following orders." Humans have individual powers of agency and I do not seek to deny this.

Furthermore, it is extremely important to recognise that not just any group of people can be called a collective by my definition. That was what got me into so much trouble the last time I brought this topic up. A collective exists where a heterogeneous group of humans and non-human actors exist in a network of relations that creates a capacity for cognition and action as a single thing. This pretty much always means an institution of some kind. America is a collective. The Americans are just a bunch of people. America is not just the 280 million odd US citizens and residents; it is also a mass of land, an industrial plant, an armoury of weapons, a body of law and traditions that have developed over history and a set of social relations, some of which extend outside of US soil and encompass people who have never set foot in the USA. It is reasonable to say that America has invaded Iraq; it is not reasonable to say that the Americans have done so.

The last time I brought this up it was to make a point about Israel's relations with the Palestinians - that it is a category error to claim that the Palestinians are to blame for something or that the Palestinians have to do something in order for there to be peace, while to make the same assertions about Israel is not a category error. The reason is that Israel is a collective in the sense that I have described, and the Palestinians are not. Naturally, it is not a category error to say that the Palestinian Authority or Hamas is to blame for something or must do something. They are collectives and, should there someday be a Palestinian state, it too will qualify as a collective. But, "the Palestinians" will never qualify as a collective, nor will "the Israelis", "the Jews", "the Americans", "the Muslims" or anything else that is just a bunch of people.

Now, I want you to consider my definition of a collective in light of CHAT and mediation. The term collective in the sense I am using it here describes individual people as well as institutions. Cognition is something that happens both inside and outside the brain, through networks of cultural artefacts which may include other people. This approach to cognition, action and identity has the advantage of scaling well. It treats people as just one class of collective.

Here is the first principle I want to put forward: responsibility and intent can only be attributed to collectives. Remember, individuals are collectives by my definition and can be held responsible for their acts, but it is also possible to attribute to collectives responsibility for acts without necessarily attributing them to any specific individual people. This notion is very productive in the discussion of historical injustices, as I will show later.

There is one other element that I need to bring into this discussion. It's a concept that comes down to us from Hegel via several other thinkers (attn: Brad): self-development. I want to advance self-development as the core idea of a sort of humanism. I assert that people have the right to develop themselves as they wish and that enhancing people's ability to do so should be identified as the good thing on which utilitarian discussions of policy should focus. That means that people should be able to become what they want to be; that their thoughts, desires and choices should be able to evolve in as unrestricted a manner as possible. This idea subsumes the notion of "opportunities" in liberal discourse but it is larger than that. It, too, has a sort of new-agey feel to it that I want to dispel.

Norman Geras is one of the few writers I know of pursuing this line of thought. Those interested in self-development as a normative principle tend to eschew any discussion of justice, as if self-development renders it superfluous. Geras argues here that it does not, and I am inclined to agree with him. What I am hoping to do is build up a right to free self-development as a normative theory to compete with liberalism.

Naturally, self-development is not an absolute standard which exists independently of time, place and social context; nor can all developmental efforts be treated equally. If someone wants to develop into a serial murderer, they can't assert the freedom to go around killing people in the name of self-development. Furthermore, what policies specifically enhance or block self-development are always conditioned by the historical circumstances people find themselves in. To someone who is starving, food insecurity is an enormous barrier to self-development even when they have nominal political liberties like freedom of speech. It is possible, under this scheme, to come to the conclusion that a dictatorial regime which grants none of those political rights but which is able to keep people fed may actually be the juster regime. Of course, this is not to say that a regime that offers food security and political rights isn't juster still.

This is a sort of relativism, but it is quite different from the kind of vulgar relativism that serves as a strawman in a lot of arguments. What enhances the freedom of self-development in one time and place may harm it in another. Even the most widely adopted and agreed-upon liberal principles are not necessarily universally applicable. I claim that the freedom of self-development is a universally applicable principle, but that what that means is highly relative.

Asserting a freedom of self-development enables us to get rid of the taxonomy of freedoms that have proliferated under liberalism. Both negative rights (freedoms from something) and positive rights (freedoms that enable people to do things) can be evaluated within the same framework: do they enhance or hinder self-development? A standard of self-development enables us to more rationally judge the classical liberal freedoms, since in practice each is to a significant degree restricted.

We can claim, for example, that freedom of speech is a necessary condition to self-development because it enhances the cognitive abilities of individuals. The principle of mediation means that when we communicate with others, we are in effect taking advantage of cognitive abilities outside of our own brains and enhancing our cognitive powers because of it. But, to do this, we need to be free to communicate with other people. Freedom of speech is, in my analysis, a freedom to think outside of your own head. At the same time, we can identify communicative acts which hinder self-development. The classical instance, of course, is yelling "Fire!" in a crowded theatre, but more realistic examples are acts of fraud and conspiracy.

Also, the freedom of self-development allows us to treat empowerment in the same framework as liberty. Access to an automobile and good roads, or to an efficient public transportation network, empowers people to develop more freely by giving them access to more of the world. Income security and social esteem enhance free self-development. And, access to good education and the freedom to learn what you want are key liberties in a self-development-based notion of rights. This last point in particular makes my philosophy appealing to someone coming from a background in child development and education.

So, let me summarise. I have advanced three principles:

  • Individual identity is not a property of bodies. It is a property of a set of relations between people and things which are centred on the body. We can identify the structures that we live in as parts of ourselves.
  • A collective is any assemblage of things, physical, symbolic or otherwise, which we can identify with a single centre of cognition and action. That definition includes people, according to the first principle. We can, to some degree, treat collectives the same way we treat people, even though not all collectives are people. We can assert responsibility and lay blame on collectives. Assemblages that can not be identified with a centre of cognition and action can not be identified as collectives and can not be treated at all like people.
  • The right to free self-development is the standard for establishing, defending, justifying and limiting all other rights and freedoms. It is the final tool by which policies are to be judged. It is also a context-sensitive right which may mean very different kinds of policies and priorities in different times and places.

The first two are really ontological principles; only the third is genuinely normative. To this, I want to add one more normative principle:

  • Our judgements of collectives that aren't individual human beings must be according to their instrumental value in enhancing the freedom of self-development of individual human beings. These non-human collectives are not people and do not enjoy equal rights with people. They have no intrinsic value. We are free to construct them and terminate them as we see fit, guided only by the needs of free self-development.

I am privileging one kind of collective - the kind we identify as individual human beings - above all others. I have no argument to deploy in favour of this principle, although I doubt that most people will be terribly bothered by it. I don't intend to deduce it from something else. I don't think the universe privileges people in any special way, but I do.

However, not everyone acts as if they agree. Consider carefully what I am saying. I am saying that there is never any need to "give one's life for one's country." It is reasonable to be willing to risk your life to defend your state for its instrumental value in enhancing the freedom of self-development. But, to die for King and Country, for honour, for glory, for the mother or fatherland, for your race, ethnic group, religion, whatever - I am saying that all of that is plain stupid. People possess intrinsic value and institutions only have instrumental value.

This principle is intended in part to undermine the charge of collectivism. This is a collectivist theory in that it recognises the real existence of collectives and assigns value to them. But, I am specifically saying that the worth of the individual is not their worth to the collective; instead, the worth of the collective is only its value to the individual.

I actually have a critique of capitalism based on this principle, but that is for another post.

There is a specific example from outside of language policy that this line of thought works well with: affirmative action as a form of slavery reparations. Most of the people opposed to affirmative action will point out that there is no living slave owner in America and many Americans don't even have ancestors who lived in America when there was slavery. However, even though individual slave owners are all dead, we can still attribute liabilities for slavery to various collectives: the US government, the various state governments, political parties, church organisations, even to America as a collective entity. These collectives are still alive today.

Furthermore, I don't have a problem with the logical consequence: Making a collective responsible, and compelling it to make amends, means that individuals who participate in the collective must bear the costs. I considered writing much of this post a few weeks ago, when Brad Delong had a post on more or less the same subject and justified affirmative action on almost identical grounds to the ones I am using. America is a collective, but it is also a culturally constructed tool - one that is both symbolic and more substantial - through which Americans as individuals interact with the world. To accept the benefits of this tool - to make it a part of yourself - means accepting the costs associated with it. That means paying taxes, but it also means accepting the liability for its past injustices. Cultural artefacts have histories, they do not come into the world as they are, and the artefact and its history are not readily separable things. No individual is liable for slavery because of their ancestors, even those whose ancestors did own slaves. Everyone is liable for America's past because of their acceptance of America's present instrumental value, even those with no history in America until recently.

The problem I have with the idea of slavery reparations - even in as distended a form as affirmative action - is determining just who is owed. I can't identify black people, or people descended from slaves, as a collective by my definition. Were there any individuals who had been slaves still alive today, they would be personally eligible for compensation. But there are no such individuals.

Instead, let me offer an alternative to hereditarian theories about who should be the beneficiary of a collective liability for past policies. People alive today who suffer diminished freedom of self-development due to historical slavery are the ones who ought to be the beneficiaries of whatever America owes.

This has a number of advantages. For example, one of the notions that I've seen in circulation is that contemporary racial inequalities in America have as much to do with a shift in the way labour is employed as it has to do with continuing racism. The idea is that at some point in the fairly recent past - the 1970's in most analyses - the American economy shifted from one that offered a lot of opportunities to unskilled labourers to one that was heavily tilted against them, and that the mechanisms by which children from families of labourers gained skills in the past have disappeared. Black people, having entered this period with poor skills due to past racism, have since tended to stay unskilled and poor even as racism diminishes.

As an educated middle class white guy, this theory strikes a chord with me. I don't claim that there is no racism in America, but members of the class with the most power in America are the people least likely to think that skin colour is a good factor in making decisions about people. I think quite a few Americans are bothered by persistent racial inequality in America even though neither they nor the people they associate with are bothered by having black neighbours, co-workers or friends; and, I think people are hard pressed to understand how this can be. This theory explains how even if no one in America was racist, there could still be racial inequalities.

The logic I'm advancing still justifies assistance specifically for black Americans, as compensation for the present consequences of past injustices. It enables us to compensate black people who may not even be descended from former slaves - immigrants and their offspring - who have diminished present day opportunities for self-development, while at the same time identifying black people who appear to enjoy as much freedom of self-development as everyone else - say, Condoleezza Rice - as people who should not benefit from compensation but who bear the same liability for past injustices as other Americans.

This, I think, takes away the most pernicious problems people see in compensatory policies that make racial distinctions. My logic does not lead to the conclusion that "white people" owe "black people." It justifies targeting compensation in the same way that the injustice we are compensating for is targeted. It also makes liability conditional on benefiting from a collective rather than hereditary criteria or racial classification. It suggests that affirmative action should not merely target people by their race, but also by their social status.

So far, I have not discussed language in this framework. My analysis of affirmative action is emblematic of how I intend to bolster language rights claims on the basis of historical injustice and specify who should benefit from them and what kinds of policies may legitimately serve those ends. But that will have to wait for my next post.
 

Thursday, August 21, 2003
 
French Immersion

I know I need to respond to the comments on the post below and I said I would get the important and somewhat complicated third post in my discussion of language policy up today. It may still happen, maybe. My boss - who really is a nice guy in most respects - has only today found me a copy of the outline for our research grant application, and informed me at the same time that I only have until Tuesday to write the whole thing up because he promised weeks ago we were going to submit it by the end of the month. No pressure.

Folks, if you have people working for you, and you work in a business with deadlines, I implore you to keep your people abreast of their work schedules and not inform them at the last minute when work must be ready.

It looks like part 3 will be up, in all likelihood, Friday instead of today. In the meantime, I want to discuss just one of the comments to part 2 because it has some bearing on language policy in general. Sylvia Li asks if I have "statistics, as well as anecdotal evidence, for saying that French immersion schools are a failure? If so, you'd think there'd be a bunch of very annoyed Anglophone parents in Western Canada."

For lack of university access, I can't claim to have numbers or citations at the tip of my fingers. There are a number of people in Canadian education research who are critical of French immersion. Gilles Bibeau is the most notorious, but I can't recommend him because he believes several things that I think are not merely wrong but also stupid and harmful. Roy Lyster is much more sympathetic to the goals of French immersion and is also cited as someone quite critical of the programme.

One thing you will not find any well-informed advocate of French immersion saying is that children graduate from school with the level of French necessary to genuinely live in the language. They usually graduate with better French than children coming from Core French programmes (mandatory French classes taught in ordinary English-language schools), but they are not at all comparable to native speakers. One paper I remember reading on the subject claimed that a minority of Ontario French immersion graduates who went on to study French at bilingual universities (Laurentian, U of Ottawa, and York) did ultimately develop real fluency. Most did not. Other studies show that children in immersion can develop fairly good passive comprehension skills in French, but that fluent speech and writing rarely develop.

This, to me, constitutes a failure. Now, I should make clear what I am not saying. Children in French immersion - including early immersion and 100% French programmes - do not appear to measurably suffer from the experience. The overwhelming majority feel that it was a positive experience and intend to send their children to French immersion. There is no apparent failure to meet other educational goals. English abilities are as high - and according to some higher - in children graduating from French immersion as in children coming from ordinary English schools. Putting your child in French immersion does not harm them in any way that regular schools won't.

Furthermore, immersion programmes were quite successful in Quebec before they started disappearing in the aftermath of Bill 101. Nowadays, English-language children often graduate from Quebec's ordinary English schools with excellent French, and demand for French immersion in Quebec has dropped because parents identify more and more with their schools as symbols of their Anglo-Quebecois identity. I would not be surprised to discover that French immersion remains quite successful in the Ottawa Valley, northern Ontario and New Brunswick because it is in those areas that a child is most likely to be exposed to French in their daily life.

French immersion is a failure because the majority of 18-year-olds can acquire genuinely fluent French by getting average grades in Honours French in high school and then spending a year in Chicoutimi. To send children to schools where 75% to 100% of the time is spent in French classes for as much as 12 years and still not produce fully functional French speakers does not incline me to think highly of the efficacy of French immersion programmes in Western Canada.

Now, you may be asking yourselves, how can it be that a child can spend all that time in French-language classes, pass, still not be able to communicate in French, and yet have learned just as much as ordinary students? Since I already have a reputation as something of a radical on education policy, let me suggest that it is because most kids don't learn very much in school anyway. But that is a different issue.

However, these problems in Canadian French immersion highlight the practical difficulties that follow from Canada's choice of the personality principle - as Denise Réaume and Alan Patten call it - instead of more territorial principles as the basis for language policy. It is nearly impossible for a Canadian who did not learn French at home to acquire real fluency without living - at least for a while - in a community where French is widely spoken. This, not the government of Quebec and not anti-French sentiment in the west, is the major barrier to Trudeau's vision of coast-to-coast bilingualism.

The problem can, to some degree, be remedied by placing English-speaking children not in French immersion schools filled with other English-speaking children but in native French schools. Unfortunately, this very solution is categorically forbidden by the constitution of Canada in every province except Quebec. Only children with a largely hereditary right to study in French may do so in regular French schools outside of Quebec. The alternative is to promote physical mobility and try to construct large francophone communities with limited English skills across Canada which can serve as real-life immersion environments for children. That option seems unlikely.

Why, then, isn't French immersion a scandal instead of being incredibly popular? Well, first, neither the parents nor the students are usually readily able to judge the French skills taught in immersion. They may think they have quite good French, but they are comparing themselves to their friends from English schools. Since children's overall educational outcomes are not harmed, immersion schools are not producing swarms of English-illiterate graduates who are flunking out of university.

Second, it is a scandal, but only among language education scholars.

But, I think the most important factor is that many people in Canada so strongly identify their nation with a policy of state bilingualism that patriotism keeps the immersion programmes popular. Parents believe that they are doing the right thing, not only for their children but for their country, by sending them to French immersion schools. Attitudes towards francophones and towards official bilingualism are very positive among French immersion students and their parents.

The programme serves an important political purpose. In a nation with very little militarism, French immersion substitutes for sending your young ones off into military service as a demonstration of nationalist pride.
 

Wednesday, August 20, 2003
 
A different kind of language policy

First, let me thank Jacob Levy and Matthew Yglesias for linking to Part 1 of this discussion of language policy. If hits are any measure, you've drawn a fair amount of attention to it, and I appreciate it.

This post tries first to make an argument for linguistic diversity without assigning any intrinsic value to languages, and second introduces an additional element into the debate that I think has been sharply neglected: the economic value of second-language education for speakers of dominant languages. Then, I talk about some policy options that I think are worth considering.

It is after 1am as I post this. I had expected to put on a few finishing touches this afternoon and found instead that I didn't like where it was going very much, so I took a nap and then rewrote it from scratch. I tend to do that a lot. Long format blogging is a kind of seat-of-the-pants exercise for me. If I didn't work this way, every post would take a month to write. The downside is that I always find myself rereading these posts and cringing at things I would have said differently if I could edit them a week later.

My boss is back from Sweden, which means I have a paying writing gig tomorrow, extracting research funding from the Flemish Council for Industrial Research. So, although it's more than half done, I expect the third post to go up Thursday rather than tomorrow. It will cover a different normative political theory, one derived in large part from child development theory rather than traditional political or economic principles.




There are really two somewhat separate arguments that have to be resolved in debates over language policy. First, is linguistic diversity something worth supporting? Second, what goals should a language policy try to meet?

Most of the people who write about language politics either place value on linguistic diversity or at least don't think that there's a good reason to be opposed to it. Like most linguists, I certainly tend to place value on it. But there are, of course, people opposed to linguistic diversity. They tend to be anglophones nowadays and far too many seem to regard the existence of multiple languages as the "curse of Babel." Arguments about the inherent superiority of one language over another are, thankfully, no longer very fashionable, but in their place English-speakers will tend to say that theirs is the only viable candidate for universal common tongue, shrug their shoulders and say that there's nothing to be done and it would be better to not resist the inevitable Anglicisation of the world.

It is hard enough to get past this barrier, much less actually advocate support for multiple languages in a single community. It is hard to convince people of arguments in favour of linguistic diversity when they do not feel that their languages are threatened. But, such arguments practically go without saying for those whose languages are threatened.

The authors in Language Rights and Political Theory who do see intrinsic value in language diversity appear to be bothered by the weakness of their arguments, and rightly so. I think some of their arguments can be presented more strongly. Linking different languages with different cultures is helpful, since people are generally more at ease with the case for cultural diversity. It is difficult to deny the aesthetic and economic value of cultural diversity when the dominant popular artistic forms in America derive overwhelmingly from its minority cultures.

Arguments from justice are stronger when integration into a dominant language community is viewed as an expense borne by the minority rather than a privilege granted by the majority. We can, in fact, make an extended version of this case on purely economic grounds. Although people usually invoke the vocabulary of "greater opportunities" in the abstract, this term is in almost every case a synonym for "more money." Employment opportunities are generally greater for speakers of more dominant languages, but this is not something that has happened in isolation from language policy. The American state, through the public education system, subsidises companies by providing them with English-speaking employees at no additional cost. If the state did not undertake this form of subsidy, businesses would have to offer more opportunities to non-English speakers and would sometimes have to operate through bilingual intermediaries in order to most productively deploy labour. We can even view the lost opportunities to minority language speakers as a cost to the economy as a whole rather than simply a burden on individuals. Multilingualism can be justified on the grounds that it results in more productive use of labour.

This highlights the impossibility of language neutral policies, but it also points to a serious problem in policies towards cultural minorities on the whole: Policies designed to help minorities, often policies designed with only the highest of ideals in mind and with a very genuine intent to improve the lives of real people, can have the opposite effect. I suspect that if there was less concern in the US about how well Latin American immigrants were integrating, their socio-economic status would actually be a good deal better.

I think Canadian and Belgian histories support the validity of my case, although the order of events is somewhat reversed. Before WWII, French Canadians were, in the words of one Québecois activist, les nègres blancs d'Amérique - the white negroes of the Americas. They suffered from all the same patterns of poverty and discrimination that have to some degree characterised Spanish-speaking Americans and at one time the Flemish. During the war, the British needed labour to build weapons, and since conscription did not apply to Québec, the province had a large available labour pool far out of range of German bombers. Hundreds of thousands of young French Canadians were enticed off their farms and into the cities, primarily to Montreal, to work in the factories. The needs of war meant that if factories had to operate in French to get things done, they operated in French. It is this economic shift, and its continuation in the post-war period, that led to the Quiet Revolution and the rise of francophone activism and Québec nationalism.

Belgian history is in some respects similar. In the late 19th century, Wallonia - the southern, French-speaking half of Belgium - was what Silicon Valley was in the 1990's: a global high-tech centre, where standards of wealth were higher than virtually everywhere else. Belgium was a major global player in the coal and steel industry - an industry as central to growth in the 19th century as electronics is today. After WWII, during the years of the German "economic miracle", there was an enormous demand for labour in manufacturing, and Flanders was conveniently located near large German industrial centres. Germans had no particular preference for French over Dutch, so Flemish industries operated in the language of Flemish workers. At the same time, Wallonia's engines of wealth were failing. The steel industry was moving to Japan, and coal didn't fetch the price it used to. Wallonia became poor while Flanders grew rich. It is this economic shift which made Flemish nationalism and linguistic equality feasible.

Alan Patten makes a distinction that I think is genuinely useful in this sort of instance. He labels certain language groups as ones able to support a "societal culture." I think his terminology is atrocious, but that the idea is sound. This enables us to distinguish between the minority language rights we might extend to a relatively small immigrant community from those we extend to a much larger and better established community. Where a language community exists in sufficient numbers and concentration that it is only policy and prejudice which prevents people from having as full and complete a life within their own community as the majority has within its community, I don't see any good reason why that language shouldn't enjoy full legal and social status wherever numbers merit. Insisting on linguistic integration into the majority community serves neither their best interests nor a more general economic interest.

This does not mean restricting anyone's access to education in the majority language, and need not deny anyone whatever limited choice they may realistically have over what language they want to live in or raise their children with. It need not even mean failing to learn the majority language well enough to participate in public life.

Would you believe me if I told you that in Canada there are schools that are entirely in French, where enrolment is, in effect, conditioned on being a member of a specific ethno-linguistic minority and the schools themselves are completely segregated from English language students, yet where graduates on the average score higher in English than the graduates of neighbouring English-only schools? This is routinely the case in French schools across Western Canada. It is not because the French schools are superior. The more likely explanation is, in fact, their exclusivity. Having no immigrants in the school means having no children who don't already have fair English knowledge. It also means that more students come from socially secure middle class homes.

There are other informative examples. In Scandinavia, the level of fluency in English is extraordinary, often better than among second-generation Latin American immigrants in the US, although many Scandinavians in my experience - especially engineers - believe their English to be better than it actually is.

If these things are happening elsewhere, why is it so hard to improve English knowledge among Spanish-speaking Americans? It is traditional to claim that American schools are terrible, that they are failures, that they can't teach anything, etc. This is not exactly true, or rather it is true but not in the way or for the reasons most people think it is. As is regularly pointed out by anti-bilingualism activists, some immigrants have fewer problems with the schools than others. I'll give long odds that second generation Swedish-American children do quite well in America's schools.

So, let me beat on a traditional leftist drum: social inequality is the reason why Spanish-speaking children do poorly both in immersion and bilingual programmes. It's all about class. For many Spanish-speaking Americans, there is a vicious cycle where poor English and a certain amount of old-fashioned prejudice leads to poverty, poverty leads to poor outcomes from public education, and poor response to schools leads to poor English. Someone who immigrates from Sweden to the US, in contrast, is probably white, probably comfortable in English, and probably a professional with a decent income.

The single most important justice-motivated argument for better language policies ought to be the breaking of just this sort of vicious cycle. By creating a viable, respectable, Spanish-speaking culture in the US, one which is equal to anglophone culture in esteem if not in numbers, not only is the inequity that arises from ignorance of the majority language reduced, but actual knowledge of English may improve. This is more or less what happened in Canada and might have happened in Belgium in an alternate universe where early proposals for personality principle based bilingualism had been accepted.

This line of argument has some limitations. It really only applies to the kinds of language conflicts in generally well-developed countries where there is a clear dominant language, and even then only for those minority languages that are relatively well-established. At the limit, it might serve Inuktitut language activists and perhaps Cree/Montagnais speakers, but it is of little use to those seeking to promote Navajo, Welsh, Basque, Breton or other languages where there are few if any monolingual speakers left and fluency in the dominant language is at least as great as in the minority language. These are the hard cases, where one must rely on weaker arguments for diversity per se, or else on what I consider the weak grounds of historical injustice.

However, I think there are a few principles that can help, and a few fallacies that need to be swept away.

I would offer the language activist the following advice: If there is insufficient local political will to support a minority language, radical efforts to support it will fail. This principle is particularly important to the indigenous minority languages of the United Kingdom and Ireland. Although the Irish public has repeatedly expressed its support for the Irish language, the political will to make it thrive simply does not exist. There is no longer anyone who fears that the Irish will cease to be Irish if they just speak English, and few people in Ireland are willing to accept the costs of making knowledge of Irish economically necessary. The same, to greater and lesser degrees, applies to Welsh and Scots Gaelic. This is what distinguishes them from the Basques, for example. The Basques have shown a good deal more political will because they much more strongly identify their language as a core element of their identity. Many Spanish monolingual ethnic Basques send their children off to Basque language schools, while few Welsh are willing to do the same for their children.

If communities with diminished status have the political will to rehabilitate their languages, they should have the right to try. They should even have the right to moderately coercive territorial measures, like mandatory bilingualism for certain classes of work, restrictions on the use of particular languages on outdoor signs, and mandatory education in their language for children who come under their jurisdiction. However, I don't think this entails a right to prevent children from learning the more dominant language, or even to throw up excessive barriers to acquiring that knowledge. It is not even incompatible with mandating bilingualism. Certainly, people should be free to choose to leave the community for any reason they like. No language is worth saving at the cost of diminished opportunities, but unlike many, I do not think saving endangered or minority languages needs to entail any such risk.

This brings me to the thing that I feel is most lacking in discussions of language policy, not just in Language Rights and Political Theory but in the field in general: The failure to consider minority language rights in the same context as second-language education for dominant language speakers. People in these debates tend not to assign much value to multilingualism for speakers of secure, more dominant languages. Countries spend billions of dollars trying to eradicate immigrant languages in the name of integration, and then spend billions more teaching many of those same languages to speakers of the majority language. Surely, I am not the only person to wonder why this should be?

Jacob Levy is the only person I can think of who even mentions this issue in passing: "A native French-speaker who learns Breton instead of German as a second language trades more options (people to talk with, books to read, job opportunities, and so on) for fewer..." Although on the surface this appears to be a reasonable assumption, it is frequently untrue in practice, especially in the case of large, hegemonic languages like French and English.

I have been through high school and college language studies in America, and very few people emerge from those programs fluent in a second language unless they take more intensive immersion studies in addition to their classwork. The same is true to a very significant degree in France and Germany and much less true of English studies in Scandinavia and the Low Countries. I am convinced, after living in Belgium, that this is primarily because people living in Scandinavia and in Dutch speaking countries have far greater access to English-language media and English speakers than people in France and Germany.

By comparison, look at Canada's French immersion schools. These are special schools where English-speaking children are enrolled in a fully French-language programme. Their investment of time in learning French is as large as it could possibly be. French immersion education started as an option in the Quebec anglophone school system in the 1960's, where it was phenomenally successful. Yet, transplanting this programme to other parts of Canada has been a failure. Children rarely emerge from these schools fluent in French. When I was a student at the University of Montreal, my programme had several anglophone students from other parts of Canada, students who had mastered French well enough to attend a French university. Not one of them came from an immersion school.

We can conclude that an investment of time in second language studies does not produce fluency in proportion to the time spent. Local language access is a significant if not determining factor in actual probability of acquisition.

This uncomfortable fact is uniquely annoying for me. In one year in France, I went from almost useless French to good enough to gain access to the university. In one year in Quebec, I went from good enough to study at the university to good enough to pass as a native. (And in nine years outside the French-speaking world, I have gone from near perfect French to awkward, but still functional French.) In contrast, in two years in Flanders, I have gone from no Dutch to awful Dutch. These situations are, of course, not identical. My year in France was spent almost entirely in language courses, and my first year in Quebec I almost never spoke English except on the phone to my mother. In Flanders, I have spent one year in an exclusively English-language university programme and one year in a full time job in a firm where French and English actually reach more of the employees than Dutch. As Philippe van Parijs puts it, stubbornness counts. The legendary intransigence of francophones (which has causes quite different from mere personal cussedness) actually makes French easier to acquire.

Still, this suggests to me that a child living in Brittany will likely acquire more real fluency in Breton than they would acquire in German while living in most parts of France, given the same investment of time and effort. Investment in the larger language (whether larger is interpreted in terms of population or gross economic importance) does not always offer a higher rate of return. It is better, in my opinion, to spend a few years learning Breton and actually be able to use the language than to spend the same time studying German and have little to show for it.

This argument is important to debates over Spanish in the USA. A child in school in New Mexico is far more likely to successfully acquire Spanish than French. Furthermore, if this child continues to live in New Mexico as an adult, his or her economic opportunities are almost certainly substantially more advanced by Spanish bilingualism than by French bilingualism. This is not only because of the demographic weight of Spanish-speaking New Mexicans but also the proximity and economic importance of Mexico to the local economy. I fail to see how a state that claims that studying algebra is in the best interest of children, even though very few of them will remember or ever use it, is making an unreasonable imposition by requiring them to study the native language of roughly one in ten of their fellow Americans and the official language of several of America's nearest neighbours, especially for those children living near Mexico and Cuba and in areas where the demographic weight of Spanish-speakers is greatest.

I think this claim is important because it opens the way to a more symmetric notion of language rights and duties. There may be some obligation on the part of minorities to learn the majority language, and certainly if the state is to set curricula and requirements on the basis of what most promotes economic opportunities for children, then teaching everyone the dominant language (although not necessarily to the exclusion of their own languages) is perhaps reasonable. But I should think this same obligation ought to be equally imposed on speakers of dominant languages. If a substantial portion of your community speaks a language other than your own, you ought to feel as much obligation to be able to communicate with your neighbours as they do. Your economic opportunities are certainly enhanced by a knowledge of the languages in use in your community. And, if the state is to decide what is best for children to learn, it is certainly reasonable for it to require the study of their own area's major languages.

This sort of language education is not a pipe dream. It can be accomplished, and the proof comes from the very same bilingual educators so derided in the US in recent years. The first big bilingual education programme in the United States was founded in Texas in the 1960's. It took Spanish and English-speaking children and put them together in roughly equal numbers, in bilingual classrooms with bilingual teachers. The intent was not merely that Spanish-language children should learn English, but that the English-language children should learn Spanish. This programme was very successful at achieving both goals and had no apparent negative consequences for other educational outcomes. It is this kind of education which is feasible in genuinely multilingual communities, and which at once sweeps away most of the arguments against multilingualism.

This leads to some radical ideas. As someone who has previously advocated some genuinely counter-intuitive education proposals on this blog, let me advance a very different language education policy: All schools should be bilingual schools. Local linguistic dominance and arguments from economic opportunities may be enough to fix one of the two languages, but the other language ought to be any language where there is sufficient community interest. For large minority languages, the economic advantages associated with knowing them ought to be enough to get dominant-language parents to enrol their children in those schools instead of in schools teaching distant but perhaps more globally important languages. Failing that, a quota system - where seats in schools for some languages are numerically limited - ought to be enough inducement. If it proves difficult to find majority-language speakers willing to enrol their children to learn smaller community languages, then perhaps minority-language parents should be encouraged to pay a small tuition fee used to bribe majority-language parents to send their children to those schools. For small languages that enjoy political support within some community - cases like Scots Gaelic or Navajo - simply offering majority-language parents money to enrol their children in bilingual schools with these smaller languages is probably the least coercive way to sustain them.

The very smallest languages in a community will probably be unable to get their own schools, or will have to pay the majority some significant amount to ensure that their school remains bilingual. Otherwise, I don't see how such a school system is linguistically unequal. All students are subject to the same requirements: you must master your own language and another in order to graduate. No one needs to feel more linguistically repressed - at least at school - than the speakers of the dominant language. Freedom of choice would certainly be more secure than under monolingual regimes, and there is no reason to think that any child is being deprived of the opportunity to learn something more profitable to them. There is no reason why such schools have to perform worse on any other educational measure.

This sort of system rests, however, on a different social foundation than the one most frequently found in English-speaking countries. Local access is essential in second language education, and the success of Spanish-English bilingual programmes is likely to be conditional on placing the two languages on something closer to an equal footing in the community. The same logic applies to languages like Welsh or Breton. In order for this to work, people have to be exposed to far more culture in other languages.

A case for promoting minority language cultures is unusually hard to make in the English-speaking world because, however much minority culture may be the engine of popular arts in America and the UK, anglophones are only barely exposed to foreign language culture. This may sound like liberal elite carping about American provincialism - which is pretty much what it is - but that doesn't mean it isn't true.

As I write this paragraph I am listening to my (fully legally acquired) MP3 collection. Looking over the music I've listened to over the last couple of hours, it includes one song in Irish (Chicane - Saltwater), two in French (Mylène Farmer - Désenchantée, one of my all-time favourite pieces of French pop; and Noir Désir - Un Jour en France, the French band whose lead singer murdered his actress girlfriend in Lithuania a few weeks ago), one in Icelandic (Björk - Hriti Bjorn), one in Japanese (the ending theme to my all-time favourite piece of animation, Key the Metal Idol), one in German (Rammstein - Du hast) and right now I'm listening to a song in Punjabi (Panjabi MC - Mundian To Bach Ke). Now, most of my music is in English, and I'm the first to admit that I'm not an especially typical person, but none of this music is very obscure here in Belgium or abroad. Except for Mylène Farmer and Noir Désir, I doubt I would have very much difficulty getting music by these artists in the US. But I think Saltwater is the only thing on that list that I've ever heard on the radio in America. Du hast was on the soundtrack to The Matrix. Mundian To Bach Ke is fairly recent stuff, so it may be better known in the States than I think it is, and it surely gets airplay in the UK, but somehow I suspect that Punjabi rap music is not a growth market in North America. I saw Key the Metal Idol on PBS in California, but it was fully dubbed - even the music was translated. I don't think any of Björk's music in Icelandic is ever sold in the US. People miss out.

That is, to my way of thinking, the all too often neglected element of language policy. Monolingualism has costs for dominant language speakers too. It makes it harder for them to learn languages which clearly expand their own opportunities, and it cuts them off from the currents of culture elsewhere. This is not a uniquely anglophone problem. It applies to a significant if lesser degree to French and German as well, and applied even more to them in the past.

I am convinced that the most effective way to attack this sort of cultural isolationism is through local multilingualism. I want to see countries using the native languages of immigrant and minority cultures as resources. Imagine the impact on America's so-called "war on terrorism" if New York and Detroit were dotted with Arabic language schools full of Anglo kids. The military, the CIA, the FBI and other wings of the American state are constantly complaining about the language barriers they face in the Middle-East. To have on hand a community - not just of immigrants but of fully integrated Americans - who are not only fluent in Middle-Eastern languages but for whom the people and cultures of the Middle-East just aren't terribly foreign or scary - it seems to me that has some real value. The same logic applies to doing business in China, or for that matter in France.

But to do this means rethinking not just schools. It means rethinking the whole way we identify and deal with things that are foreign. As someone with a long history of regularly changing countries, I place more importance on freedom of movement than most people do. This gives me a somewhat unusual perspective on multiculturalism and multilingualism. I want everyone to be free to go where they want, and I don't want them to have to be afraid either that they will be rejected as foreign or forced to adopt arbitrary cultural norms in order to avoid the charge of being a bad immigrant. I want people not to have to live in fear of foreign languages, either in their own community or elsewhere.

At the same time, I don't want people who speak and live in smaller languages to be afraid every time an outsider moves into their community or a young person moves out. In my perfect world, people in Wales would be encouraging immigrants from India and teaching them to speak Welsh rather than living in fear that summer people from London are going to buy up their homes and make them all speak English. It's true that not all the world's small languages can be saved. Too many aboriginal American and Australian languages are already dead. But many can still be saved if there is both the will to do it in those communities that identify with them, and a reduction in fear and arrogance from others who live with them.

It is a radical vision, but I don't think it's a utopian one. I do think it would be a better world, and that is what is hardest to demonstrate to most people. I think my policy prescriptions for larger languages make sense even if you place no intrinsic value on language diversity, so long as you think that a monolingual world is simply not feasible. In order to justify defending the smaller and politically weaker languages, I have to actually articulate reasons why a multilingual world is a better place than a monolingual one. That means finding a different answer to the first question I posed at the beginning of this post. To do that, I have to delve a little deeper into philosophy and language, and that will be the subject of my next post.
 

Monday, August 18, 2003
 
The fit hits the shan

Via Mac-a-ro-nies, I see the first real attempt to put the story of the Great Blackout of 2003 into a single narrative. I have been behind on the news, and this article is already a couple of days old.

Blackout probe eyes failure near Cleveland

US electric industry officials said last night they had strong indications that the massive power outage that shut down New York City and much of the Northeast began with the failure of a high-voltage line near Cleveland.

That failure was the first in a 60-minute series of breakdowns that spread blackouts across eight states and Ontario, affecting about 50 million people. The North American Electric Reliability Council, a group originally formed to prevent a recurrence of the massive 1965 Northeastern blackout, said the crisis began at 3:06 p.m. Eastern time on Thursday on a line in the "Lake Erie loop."

Michehl R. Gent, president of the electric council, said it could take days or even months to come up with a detailed explanation of what went wrong. But Gent said the Erie loop and a gaggle of power plants that feed into it -- 22 nuclear reactors and 80 conventional plants -- are "the center of the focus" of council investigations.

Over the space of 9 to 10 seconds at about 4:10 p.m. Thursday, Gent said, at least 12 high-voltage power lines in the loop failed, and 100 power plants almost simultaneously shut down under standard emergency precautions intended to prevent generators from swamping the crippled grid.

As they shut down, 800 megawatts of electricity -- an amount comparable to the power used by 600,000 homes -- that had been flowing from west to east suddenly surged in the other direction, sucked into the growing vacuum in Ohio and Michigan. [...]

It does read a bit like Bruce Sterling in The Hacker Crackdown, doesn't it? The ass-covering and suspicion-casting is already underway according to Newsday:

Ohio Company Defends Itself

The Ohio-based company whose failed power lines have been targeted preliminarily as the starting point of Thursday's massive blackout defended itself Sunday, insisting that the wider electric grid was faltering hours before its lines went out of service.

Kristen Baird, a spokeswoman for Akron, Ohio-based FirstEnergy, said that as early as noon Thursday operators noted fluctuations in the frequency and voltage of electricity traveling on lines in the Eastern Interconnection, a grid that includes all the United States east of the Rocky Mountains and north of Texas. She declined to specify where the anomalies occurred. [...]

"Our position is that what happened Thursday is much more complex than a few tripped transmission lines in our system," Baird said.

On Saturday, the North American Electric Reliability Council said that five high-voltage lines in northern Ohio failed during a period of an hour, beginning at 3:06 p.m. Moments later, Canada and the Northeast United States experienced huge power swings that quickly cut electricity to an estimated 50 million people.

FirstEnergy owns or co-owns four of the five lines that failed, the company said.

This is, to the best of my knowledge, the first time I have heard a company deploy the "it's more complex than that" defense. But wait, there's more:

Sunday, Michehl Gent, president and chief operating officer of the [North American Electric Reliability Council], said Thursday's blackout "is exactly what we are supposed to prevent."

"We have a problem here where we either have a bad design, or we have bad following [of] the rules," Gent said in an interview on ABC News' "This Week with George Stephanopoulos."

Long before last week's blackout, Gent's organization was lobbying Congress to put teeth into the voluntary reliability rules. The changes are needed, the council says, because electric utilities that generated their own power and were regulated by the government have given way to a free-market energy system in which kilowatts are bought and sold by hundreds of disparate players.

"The users and operators of the transmission system, who used to cooperate voluntarily on reliability matters, are now competitors without the same incentives to cooperate with each other or to comply with voluntary reliability rules," the council said in a statement on its Web site, which was posted before the blackout. "As a result, there has been a marked increase in the number and seriousness of violations of these rules."

Long-delayed energy legislation pending in Congress would give the council the authority to enforce compliance with reliability standards among all market participants.

So, it is all going to turn out to be deregulation's fault, isn't it?

Speaking of power failures...

GOP Candidates Under Pressure to Support Arnold

Republican gubernatorial candidate Bill Simon, under considerable pressure by the party to withdraw from the recall election and endorse front-runner Arnold Schwarzenegger, wouldn't rule out that option on Sunday.

"I'm running hard," Simon told NBC's "Meet the Press" when asked a second time if he would stay in the governor's race to the bitter end. "Where's Mr. Schwarzenegger stand on the issues? This has to be about the future."

When asked if there was a situation where he could imagine endorsing Schwarzenegger, Simon told interviewer Brian Williams: "I need to hear people's vision."

Simon wouldn't say if he planned to talk with White House political director Karl Rove about his future in the race or if any appeals from the Bush administration might sway his decision.

Tea-leaf reading on the status of Simon's campaign and that of Republican challenger state Sen. Tom McClintock has become a daily obsession throughout GOP political circles. [...]

In the latest Field Poll, Simon registered 8 percent and McClintock 9 percent. Schwarzenegger led the GOP field with 22 percent, trailing only Democratic Lt. Gov. Cruz Bustamante's 25 percent. GOP strategists, however, said the biggest number in the poll was the cumulative support for the best-known Republican candidate. [...]

The Schwarzenegger campaign has also encouraged the other party candidates to consider dropping out. "Without a doubt, with one Democratic candidate and multiple Republican candidates it's obvious that there is a factor that could diminish Republican votes for the front-runner but there's not a factor diminishing the Democratic votes," Schwarzenegger spokesman Rob Stutzman told Fox News.

It's nice to see Democrats united for once and Republicans divided.
 




 
More stuff missed while I've been busy

I suppose I should put more smilies in my text. Jurgen over at No Cameras has posted a response to some earlier posts of mine. As for my post containing a few less than methodologically sound ways of comparing the US military budget to the rest of the world, I am quite guilty. I did note some of this in the original post.

However, I think my point still holds. It costs relatively little to force your enemy to pay a lot to invade you, and the expense of good equipment and a relatively high level of casualty aversion do make it far more expensive for America to go on the offensive than for others to defend themselves. I did not put military spending statistics in terms of percentage of GDP, but I suspect that it would be more politically intolerable for the US to spend 3% of its GDP on military action than for a moderately industrialised dictatorship to spend 10% on its defense.
 

 
Language Rights and Political Theory - Chapter Summaries and Specific Criticisms

Welcome to part 1 of what I'm planning as a three or four part discussion of language policy, starting with my long awaited review of Kymlicka, Patten, et al's Language Rights and Political Theory. Part 2 is almost finished, and I'm part of the way through part 3. It's long, folks. Pull up a pew an' set a spell.

If you're Jim over at Uncle Jazzbeau's Gallimaufrey, you can follow along with your own copy. I have included spoilers, so be warned. For everybody else, let me reveal the surprise ending in invisible text: The British did it, the French helped them and the Americans covered it up.

Okay, now on to the serious stuff.




Language Rights and Political Theory brings together a number of authors, primarily working within a mostly Rawlsian liberal framework, to investigate issues in language policy. There are a number of things that strike me about this work in contrast to other efforts to flesh out a theory of language policy.

First, it is abundantly clear that the authors have only a handful of instances of language contact in mind as they write. The arguments and principles advanced in this volume derive overwhelmingly from just four regimes: Canada, the United States, Belgium and Spain. There is mention of other places and cases - it is not the work of 12 authors with blinders to the rest of the world and Jacob Levy is one of the few to give anything close to equal time to language issues outside the west - but there is almost nothing here of value to people interested in post-colonial language policy and there is little sense in this volume of the diversity of linguistic contact situations.

Still, these four flagship cases - each involving linguistic conflicts that have come to a boil in the last 50 years in well connected, reasonably wealthy, occidental liberal democratic states - are informative. A focus on the most powerful states is not, per se, a criticism. The powerful are, obviously, powerful, and their conflicts tend to colour everyone's politics, even those quite culturally and politically remote.

Second, with the exception of Stephen May, I don't think any of the authors are particularly trained in or aware of linguistics. I can't blame them - the most visible school of linguistics in the English-speaking world is almost completely without value to a discussion of language policy. Still, there are places where this lacuna is especially unfortunate.

However, the book does offer some valuable points for debate and clues in the search for a more productive theory of language policy. I will review each chapter in turn, and then put forward a more general critique in the second post. In the third part, I'm going to fulfill my long running promise to put up a post sketching an alternative to liberalism as a normative political theory.

Chapter Summaries and Individual Critiques

I. Language Rights and Political Theory: Contexts, Issues and Approaches

This introductory chapter, from the Canadian co-editors Will Kymlicka and Alan Patten, outlines some of the challenges language policy poses for liberalism and some of the specific issues a liberal theory of language policy has to face.

Language simply can not be handled by analogy with those areas where liberals are more at home: race, class, religion, ethnicity, sexual orientation and other traditional concerns. We have no difficulty envisioning collective institutions which are indifferent to those things, but we are hard pressed to imagine institutions which do not, either de jure or de facto, favour some small set of languages over others. Language rights are essentially collective rights - to conceive of them as rights individuals can exercise independently of their community is to seriously misunderstand the nature of language.

Kymlicka and Patten go on to describe the various fields of policy that are most frequently subject to linguistic prescription. This list includes access to government services, participation in public discourse, employment rights, access to education, the situation of indigenous minorities, historical oppression, the problems posed by immigration, and state language policies as a tool of constructive nationalism. They also take an initial stab at classifying language policies by their scope and nature, but this sort of policy distinction is, regrettably, strictly limited to European and North American states.

II. Language Rights: Exploring the competing rationales

Ruth Rubio-Marín places a great deal of emphasis on the distinction between instrumental and non-instrumental language rights. This seems - if I am reading her correctly - to represent the distinction between language rights granted to individuals in order to enable them to enjoy political liberties and rights designed to offer security to language communities, ensuring that their language is able to continue to exist. An example of instrumental rights is the requirement - fixed by precedent in the US and codified under the European Charter of Rights - that people brought before a court be able to understand the charges against them and be able to defend themselves, even if that means employing the services of interpreters and translators. In contrast, an example of a non-instrumental language right is the right to schools in your language of choice, even if it is not the dominant language in your community.

Rubio-Marín goes on to investigate the different kinds of measures this distinction entails, and advances the idea that language policies should properly be placed in a framework of legal rights rather than mere regulation.

III. A Liberal Democratic approach to Language Justice

David Laitin and Rob Reich offer a contrast to Rubio-Marín's advocacy of a rights-based framework for understanding language policy. They first attack this rights-based conception by dividing liberal normative approaches to language policy into three categories: compensatory justice, nationalism and liberal culturalism. They argue against each one in turn.

Compensatory justice is identified with the idea that linguistic minority communities are or have been the victims of unjust policies and that language rights are justified on the basis of compensation. The example they use is Catalonia, where the rhetoric of historical injustice has been used to gain the help of the state in re-establishing the linguistic security of Catalan. This is problematic for Laitin and Reich because few minority language speakers are willing to accept compensation in order to integrate into the majority community. Therefore, they must envision their language as something of intrinsic value. This undermines claims for compensatory justice in their view.

The archetypical instances of nationalist language policies are in Eastern Europe, where most of the current states are less than a century old and their national language came into being in conjunction with the demand for a nation-state. The language served as proof of the existence of a unified nation and the desire for a nation served to promote the language. Liberal nationalism therefore envisions language policy as a mechanism for reclaiming cultural sovereignty or national territorial rights. Laitin and Reich regard this position as foundationally incompatible with liberalism, since it entails state authority over people's freedom to live in the language they choose.

Liberal culturalism is the position Laitin and Reich associate with Will Kymlicka, but it is one I would associate with an uncritical sort of multi-culturalism. It is a position which tends to regard groups which share an identity - be it ethnic, religious, racial or linguistic - as a single entity possessed of rights that merit protection. Laitin and Reich point out the difficulties this presents for the individualistic focus of liberal theory. These groups do not speak with one mouth, nor do they have a common view of what they want or need.

They offer an alternative: the prospect of politically negotiated language rights. Where a language community is able to mobilise within a system of essentially democratic decision-making to secure its language rights, they should be secured. Like all but the least liberal monolingualism advocates, they deplore the beatings children once received for using their own languages in school, but otherwise do not see any particular liberal interdiction against monolingualist policies. They explicitly advocate the politicisation of language issues, limited only by general liberal principles of just and unjust behaviour towards individuals. I think they are rightly critical of liberal theorists for distrusting democratic processes to decide on what rights are appropriate for which communities. We are, after all, able to advance more sophisticated notions of the democratic process nowadays than mere majority rule.

It is, at times, hard to get a bead on where Laitin and Reich are coming from. On the one hand, they are critical of the efficacy of bilingual education and on the other seem to deplore the way in which the wealthy in Catalonia are able to purchase private Spanish language educations while the poor are stuck in Catalan-language schools. They are deeply hostile to Stephen May's promotion of minority political rights in terms of power relationships, but I do not see how they expect any linguistic minority to promote its rights in a politicised framework without such advocacy.

I am inclined to attribute to Laitin and Reich a sin worse than the distrust of politics that they attribute to other liberal thinkers: the development of a political theory that serves no purpose but to justify the status quo. They point to Quebec and Spain as places where political negotiation ultimately secured significant language rights, but it does not seem to occur to them that bilingual regimes in schooling and government in the US are also the product of the same kind of political mobilisation.

IV. Accommodation Rights for Hispanics in the United States

Thomas Pogge offers the least universalist perspective on language policy, restricting his arguments to the Spanish language in the United States. He is particularly critical of Will Kymlicka's advocacy of minority language rights, and defends a quite resolutely monolingual nationalist policy.

Pogge argues that historical injustices are irrelevant to Spanish language policy, since it is impossible to segregate from the descendants of recent immigrants that part of the Hispanic community descended from those present in the United States at the time that its borders were extended. Second, he makes the baffling claim that linguistic inequality does not entail any sort of injustice as understood by liberals. He supports this claim, as far as I can tell, only with the idea that if Hispanics choose to live among their own, it is by choice and therefore of value to them.

Pogge goes on to offer us a red herring: He raises a strawman argument against teaching English to Spanish-speaking Americans - an unlikely position that he attributes to Kymlicka, but which Kymlicka does not claim in Pogge's quotes. As far as I know, forbidding English education for children in American schools, or even failing to mandate it, is not a position advocated by any mainstream political force. Thus, Pogge's attack on it is quite irrelevant to the actual context of the United States. Had he attempted to generalise his position to Belgium, Switzerland or even Canada, where it has far more bearing on matters, he would have been compelled to generalise his case to a far more complicated context.

To justify monolingual English education, Pogge advances the notion that the best education for children is the education which is best for each child. That's fine, as far as it goes, but there is an enormous gap between this postulate and a policy of English-only education which Pogge makes no effort to bridge. He neither makes empirical claims about what form of education is best for children, nor does he defend himself from the charge that he wants the government to decide in lieu of parents. Given what I presume to be a liberal preference for freedom of choice, this deserves some explanation.

This "English for the children" sort of rhetoric is uncompelling to me. Consider an alternative form of the same argument. In post-9/11 America, it is likely that Muslim children, especially those of more visible and conservative sects, face significant disadvantages in education and employment. They are taunted at school and almost certainly have a harder time getting a job, especially in the sorts of unskilled trades that many immigrants need to survive in a new country. Are we, therefore, for the sake of the children, justified in Christianising them or at least pressing them to adopt a more secular and less visible form of Islam? I should think the liberal answer to be no. Pogge proposes nothing to explain why this is less true of language than of religion.

V. Misconceiving Minority Language Rights: Implications for Liberalism

Stephen May is a sociolinguist who I associate primarily with Maori language issues. In some ways, I am more comfortable with May than the other authors here, because he does not speak the language of Rawlsian liberalism, opting instead for the language of cultural criticism. He is particularly hostile to the explicit monolingual nationalism of Thomas Pogge, and the more hidden form he sees in Laitin and Reich.

First, he is critical of the magic link between the nation-state and the identification of a single official language. There is a reason for that link and May makes no mention of it: the belief that a common citizenship and a common political space is difficult to sustain without a common language. However, May is still on fairly firm ground pointing out that this is a post-facto justification of national monolingualism. The historical foundation of states, especially America, is far less simple.

May also highlights the asymmetry of claims about the importance of reinforcing the dominant language over minority ones. He points to the either/or nature of many language claims as representative of this problem. I, too, noticed how the authors of many of the chapters in this book seem to think that bilingualism is simply impossible, or assume that any bilingualism is simply a step towards assimilation into the dominant language and culture. There is no inherent reason why this should be true. Although May does not make this case, in the era before the modern nation state, whole multilingual communities persisted for generations, and in many places they were the norm, not the exception. Even today, large parts of the Balkans have communities where universal or near-universal bilingualism is the norm, and in the most Anglophilic nations of Europe - the Low Countries and Scandinavia - near universal bilingualism has become a stable situation.

May goes on to criticise the notion that language must define identity as an essentialist and reductionist view - fighting words for the cultural critic. One can be American while still speaking Spanish, Spanish while speaking Catalan and British while still speaking Welsh. He is in my opinion on the right track here. It was once considered unthinkable that one could be Irish without being Catholic, and to claim that to be American requires being Anglophone is just as pernicious a position unless it can be supported by some stronger claim than the presumption that one nation must have just one language.

VI. Linguistic Justice

Philippe van Parijs is, I assume, largely kidding with his contribution to this volume. Deploying the notion of distributive justice, he proposes to use cash to compensate minority language speakers for the effort they must expend in learning the majority language, since he deems this an effort which benefits the majority at a cost to the minority. This resembles Swift's famous proposal for resolving Ireland's overpopulation problems in the 18th century.

However, let us for a moment take van Parijs seriously. This makes some sense in light of the history of van Parijs' native country: Belgium. The history of language politics in Belgium was, until 1989, a history of Dutch speakers learning French, while French speakers saw no particular need to reciprocate since Flemings were largely able to understand and express themselves in French. This persisted even after Dutch-speakers became a majority of the population. Flemish bilingualism was largely beneficial to French-speakers, who were therefore able to expend less effort learning and using a non-native language.

Consider, however, the effect of guaranteeing every Spanish speaker in the US a regular payment from the government. What would this do for Spanish retention rates among Latin American immigrants? It would have the distorting effect of making it profitable to retain a native knowledge of Spanish, undermining the very effect so earnestly sought after by integrationist policies. Money has secondary effects, and offering money to Spanish speakers creates a moral hazard for the whole community, discouraging their language from behaving as it should by dying off.

VII. Diversity as a paradigm, analytical device and policy goal

François Grin takes a long hard look at the logic and consequences behind support for social diversity and finds them lacking.

One paradox that Grin identifies is the distinction most countries make between "indigenous" minorities and "immigrant" ones. The United Kingdom has more Gujarati speakers than Scots Gaelic speakers, yet Scots Gaelic enjoys some legal status in the UK, while Gujarati has none. The goal of fostering diversity would presumably be just as well served by support for the Gujarati community as for Scots Gaelic.

Grin recognises that our natural sense of justice leads us to grant more support to these "indigenous" communities than to other communities, but asks whether making time the deciding factor in language rights isn't problematic. Where does one draw the line? Spanish, French and German have been spoken in the United States for as long as or longer than English. Each predates the founding of the United States by a considerable time. Should support for language rights in the US only include languages spoken before 1492? If so, how does one transplant this decision to the rest of the world? Europe's ethnic distribution is the product of millennia of migration, assimilation and remigration where no magic date separates some previously just distribution from the present. Grin does not have an answer.

VIII. Global Linguistic Diversity, Public Goods, and the Principle of Fairness

Idil Boran is, to me anyway, the most sympathetic author in this volume. She considers arguments in favour of biodiversity to see if they can inform arguments for linguistic diversity. As Boran points out, she is not the first to consider this train of thought. There are a number of similarities between language diversity and biodiversity. The most diverse ecosystems tend to be fairly small, and advocating biodiversity means protecting relatively small territories. In the same way, the world's hundred most common languages are spoken by some 90% of the world's population, while thousands of other languages are spoken by small communities.

Furthermore, the very places with the richest biodiversity also tend to be the places with the richest linguistic diversity. This is not a coincidence. Biodiversity and linguistic diversity are generally greatest in areas that have not been fully colonised by agricultural civilisations. Just as farmers bring with them their own organisms to the detriment of local flora and fauna, they bring with them their languages and tend to liquidate or assimilate less efficient users of fertile land. Biodiversity and linguistic diversity also tend to be greatest in areas that are heavily partitioned by geographical barriers. The same mechanisms that limit the movement of species limit the movement of cultures.

Discourse on biodiversity tends to be centred on the notion of a public good. A public good, in liberal discourse, usually means something which is identified as beneficial to at least most people, but where it is difficult to exclude anyone from enjoying the good if it exists. This undermines voluntarist and market-driven solutions to distributing the good and theorists most often treat the identification of a public good as something which justifies an exception to the liberal predisposition towards freedom of choice.

Boran rehearses many of the arguments in favour of viewing linguistic diversity as a public good. First is the argument from aesthetic value so often favoured by classical humanists. Language is not exclusively an instrument of communication. It is also a medium for artistic works. To lose a language means to lose all the arts which are only accessible in that language - its poetry, its literature, its songs, etc. However, she finds this argument weak. There are ample disputes over the recognition of artistic ventures as public goods, and what policy implications this entails. Look, for example, at the constant griping in the US over state funding for controversial artists, like the display of Robert Mapplethorpe photos in public museums. Adding language issues to this conflicting mess seems ill-considered.

She also confronts arguments from scientific value. Although local cultures do contain a variety of useful information about the world - information which is often far less self-evident to occidental scientists - we should not overestimate the value of this knowledge. In my estimate, Boran is right to think this is also a weak argument.

She also identifies an individual's freedom of choice as grounds for supporting language diversity. However, this is difficult to accept at face value. An individual's freedom to live in a particular language is conditioned on access to a substantial community of speakers. This can not be guaranteed in the same manner as an individual freedom to hold particular political views or religious beliefs. The essentially collective nature of language rights makes this entire line of thinking problematic.

Instead, she offers us a principle of fairness which can be interpreted as a more serious effort to apply the logic of just compensation advanced by Philippe van Parijs. If we identify linguistic diversity as a public good, it is appropriate to accept its maintenance as a public cost borne by linguistic majorities.

IX. Language Death and Liberal Politics

Michael Blake claims that language rights can only be understood by embracing what he feels is a paradox. He contrasts two hypothetical situations: In the first, a language changes over time until its speakers no longer understand the earlier form of the language; in the second, a language changes over time until it becomes indistinguishable from some other language which was earlier clearly distinct. Is it not appropriate, in both cases, to claim that a language has died? Why then do we object so forcefully to the second case but are unbothered by the first?

Blake's example is a case where a more complete knowledge of linguistics would have been very useful, because while Blake wants us to understand the second to correspond to what happens in unjust language death, what he describes in fact virtually never occurs.

I say "virtually never" because whether it really occurs at all remains the subject of some controversy. In linguistics, this process is called decreolisation, and it is exceedingly rare if it ever actually happens. The study of language contact is complex and somewhat disorganised. There are still vast gaps in our knowledge and plenty of controversy over what happens when languages come into contact. One of the things that can happen is creolisation. This corresponds, in some respects at least, to what Blake is describing.

There is no controversy over the idea that sometimes elements from one language are adopted into another. The current thinking is that this process is pervasive and forms a part of the past and present of nearly every language in the world. The elements that are most frequently and obviously adopted are lexical. Languages borrow words from each other. However, there are ample well-documented instances of morphological and syntactic borrowing as well. The school of linguistics that I more or less adhere to does not even make very sharp distinctions between lexical items, morphological rules and syntactic structures, so for me this poses no difficulties at all.

The problem is the other half of what Blake is claiming: borrowing foreign elements can turn two languages into one. This idea is one of the theories about the origin of Black English. (Also known as African American Vernacular English, but when I call it AAVE, I'm saying that this is a matter for linguists, and if you aren't a linguist you shouldn't be talking about it. When I say "Black English" people are quite clear on what I am talking about. So I stick to "Black English.") The decreolisation hypothesis says that non-standard speech patterns among African Americans came into being because African language patterns persisted among early American slaves, who spoke a creole instead of standard English. In this view, the language of African American communities has been converging with the standard language ever since.

This hypothesis is not highly regarded among linguists. Historical records of slave language in the US do not support this account. Furthermore, arguments from historical reconstruction - claiming that copula dropping in Black English is evidence of African origin because of pervasive copula dropping in Bantu languages - are not convincing. Russian is also a copula dropping language, yet we would not call this fact evidence of the African origin of Russian. Black English appears to have originated as a dialect of colloquial American English which grew away from the standard due to low levels of literacy and segregation.

There are a few other borderline cases. Hawaiian Creole English speakers clearly manipulate a variety of intermediate levels of language between a completely basilectal (= incomprehensible to outsiders) creole and standard English. The same is true to some degree among the Caribbean creoles. However, in each of those cases, the people who speak mesolectal (= may be more comprehensible to outsiders) forms enjoy some mastery of the standard language. It is not clear whether the underlying creole languages are being progressively transformed into the standard language, or if growing bilingualism with the standard language isn't simply creating mesolectal forms among the already bilingual.

Unfortunately, the whole of Blake's argument is built on this base. He demands that before a linguistic right can be established, we must show that the second situation has occurred due to a historical injustice rather than happenstance. He believes that progressive assimilation can occur in an entirely just, voluntary manner. But this process describes no real situation. In every case that might in some way resemble Blake's description, we have a community which has been compelled, by more or less coercive means, to become bilingual in some more dominant language. Without extensive bilingualism in the minority community and unequal access to power, there is never assimilation, and even in cases where there is widespread bilingualism, social inequality and extensive borrowing, there is not always linguistic assimilation.

Blake's core argument - that language death is not always the consequence of coercion so we must look to historical factors in assigning language rights - collapses entirely on this matter of historical record. He might have made the case that either extensive bilingualism or unequal access to power occurs for reasons that are, if not just, then at least difficult to remedy without creating more injustice. That is the case Jacob Levy makes in the next chapter, and I am far more sympathetic to that kind of claim.

X. Language Rights, Literacy and the Modern State

Jacob Levy, like Blake in the previous chapter, claims that the death of a language can not necessarily be identified with an injustice. Levy, however, uses a somewhat novel approach in making this claim - the costs associated with acquiring literacy. He is correct to say that literacy does not play an important part in discussions of multilingualism. Modern linguistics, which has since the era of de Saussure eschewed literacy as a subject of study, is unfortunately the main culprit. It is part of a general trend in theoretical linguistics - a particularly pronounced one in the era of the structuralists - to ignore any area of language study that might actually prove useful to someone.

Levy recognises, unlike many other commentators on language issues, that multilingualism is a feature of many language communities and claims that a major engine of linguistic assimilation is the cost of becoming literate in multiple languages rather than the cost of becoming conversant in a foreign tongue. I found this claim surprising, because it is quite contrary to most people's experience in learning languages. Developing true verbal fluency - the ability to follow conversations in diverse local accents under noisy conditions using local idioms - is quite a bit more difficult than developing basic literacy in the more standard form of a language.

Then the logic of it came to me. This claim is true for a set of languages. Chinese, Japanese, English and French are the prototype examples of languages where even native speakers have a great deal of difficulty acquiring literacy and second language speakers are still more disadvantaged. Otherwise, this claim is simply false for the overwhelming majority of the world's languages, particularly its smaller and more threatened ones.

Literacy in Inuktitut, which is written using an unusual and moderately complicated writing scheme unique to Canada, spread spontaneously after its introduction by a Methodist missionary in the 19th century. Inuit children, who are hard-pressed to develop fluency and literacy in English, often enter school already literate in their native language. This situation is also common in Africa. Among my father's four native languages was Kituba, a trade language spoken in Bandundu province in the Democratic Republic of Congo. He developed fluency through exposure as a young child, but became literate in a matter of minutes after he was introduced to its largely phonetic writing scheme.

I do not find arguments from the added burden of literacy terribly convincing. The creation of written forms for languages is not, in fact, usually the realm of "linguistic activists and outside preservationists" as Levy claims. It is in most cases the work of either the state in some guise or of missionaries. Missionary linguistic work nowadays is carried out primarily by an organisation called Wycliffe Bible Translators and its more secular wing, the Summer Institute of Linguistics. One of the most common features of missionary linguists' stories is the speed and ease with which literacy spreads once it has been introduced. It is unheard of for linguistic assimilation to outpace the spread of literacy when a reasonably phonetic writing system is introduced to a community. In many instances, literacy spreads faster than the missionaries themselves can travel. In the case of Inuktitut, missionaries would sometimes arrive in new villages prepared to teach people how to read only to find that the written language had preceded them, and this in a culture that could only write in the snow because they had no paper.

Levy is on firmer ground when he points out that one of the key advantages of literacy is access to a wider society. Many modern languages were constructed, some more explicitly than others, as unions of diverse dialects. Building a competitive linguistic community is a form of cultural self-defence. However, it is better understood as a sort of compromise measure for linguistic communities. Consider the case of Inuktitut. Although partitioned into a number of partially intercomprehensible dialects, there is a growing degree of standardisation on the phonologically conservative dialect of Iglulik. Although this means that some Inuktitut communities' unique language forms may be lost, this standard Inuktitut is a far better vehicle for their culture and traditions than English. By choosing this strategy, Inuit are accepting the loss of smaller group identities in return for preserving some of what is valuable to them.

Personally, I wish this strategy were more widespread. I know of no comparable movement among Canada's Cree and Montagnais communities, who are numerically superior to the Inuit and who could even more effectively take advantage of a common linguistic strategy. Unfortunately, the political barriers to doing so are much larger for them, since they are divided by two scripts, several churches, two different preferred European languages, and spread across six provinces and one of the territories. However, regardless of its necessity or justice, this phenomenon of language construction by merging dialects is rarely if ever spontaneous. It is, almost without exception, the result of a policy designed to sacrifice some linguistic diversity in return for some good. It is true that it is not in all cases the result of brutally unjust policies imposed from the outside, but it is unlikely to occur unless there is some perceived threat to a language community. Levy's claim that language death through this kind of process is spontaneous is difficult to support.

Levy finally returns to what is the best argument against linguistic diversity and the only one I think actually has enough merit to be worth discussing. Living one's entire life in a language of limited scope is an expensive proposition. It cuts its speakers off from opportunities for personal advancement. Language should not be a prison, and I am largely in agreement with Levy's statement that children should not be tools in the maintenance of unsustainable sociological divisions. However, they have no choice but to be tools in the maintenance of sustainable ones, and distinguishing the lost causes from the viable languages is not an easy task, nor one that the designers of language policies can so easily avoid.

XI. The Antinomy of Language Policy

Daniel Weinstock rehearses many of the same issues in language policy described at length by previous authors, but goes on to describe a vision of a more just kind of language policy. It is composed of three principles:

  1. Minimalism. The only language-dependent goal states should be allowed to pursue is effective communication. Language policies which serve other goals - nation-building, cultural preservation, political unity - are to be rejected.
  2. Anti-symbolism. The selection of a particular language by the state should not have symbolic significance. It would, under Weinstock's principles, be wrong for the United States to declare its official language to be English so that non-English speakers can be identified as un-American.
  3. Revisability. The state should be prepared to change its language choices in the face of demographic change. It should be committed to effective communication, and if a change in language policy serves this goal it should be adopted.

Weinstock concedes that this set of policy prescriptions will generally favour the dominant language, but at least it will do so for pragmatic reasons, and without any sudden deprivation of reasonable linguistic rights to communities of any size. I find myself in substantial agreement with Weinstock, although I think there are many cases where these principles do not form an adequate decision procedure for language policy.

I fear that Weinstock's prescriptions, as good as they are, are too little, too late. Had these principles been in place in Canada and the United States since their respective foundings, it is unlikely that either state would have English-speaking majorities today. In the era before mass media and rapid transportation, they would in fact have constituted a relatively just and economically efficient basis for language policy. However, the instrumental value of mass languages today is so great that to imagine that any sort of minimalist language policy can be economically efficient may be an unreasonable assumption.

XII. Beyond Personality: The Territorial and Personality Principles of Language Policy Reconsidered

Denise Réaume contrasts two general classes of language policy and the justifications behind them. The "territorial principle" attaches language rights to particular geographic territories, constructing for each language a place where it can be dominant. Your right to use your language in all parts of your life may be restricted if you are not resident in a territory where your own language is legally established. The "personality principle", in contrast, guarantees language rights without respect to location.

Réaume is right to consider the territorial principle suspect. It is little more than a weak extension of territorial ethno-linguistic nationalism, a principle responsible for more than its fair share of the world's ills. There is nothing special about an existing set of national borders or administrative divisions that makes them worth entrenching as linguistic frontiers. Furthermore, creating these territorial divisions always creates new linguistic minorities by stranding minorities of both languages on the wrong side of the line.

However, she goes a step further, pointing out that a personality principle may justify no more protection for language than any other kind of social division, like religion. Clearly, this is inadequate. Religions can generally be practised individually and privately without losing their value to those who adopt them. Languages can not.

Territorial solutions do have the vexing property of actually working, while the alternatives often do not, and that is where Réaume finds herself in a pickle. She wants to use the personality principle to advocate radical policies designed to promote minority languages and is hard pressed to do it. Even Canada, champion of the personality principle, has a very different situation on the ground than the Trudeauist vision of coast-to-coast bilingualism. Québec and New Brunswick are the only places in Canada where French is genuinely thriving, and they are the only places where the legal code genuinely favours French.

Réaume takes a very Canadian approach to justifying radical minority language support. Language rights are, to her, justified on the basis of collective, rather than individual rights. The constitution of Québec is one of the few in the West to recognise any notion of collective rights by that name and collective rights form the basis of the native claims that are so vexing for Canadian politics right now. However, her argument is subject to the criticisms advanced by Laitin and Reich against collective rights. Réaume will convince no one outside of Canada because her position is so utterly remote from the traditional liberal embrace of individual rights.

XIII. What kind of bilingualism?

Alan Patten picks up many of the same themes as Réaume in the final chapter of the book. He distinguishes a number of arguments for multilingualism and attempts to discern what sort of language regime - territorial or personality based - each tends to favour.

Patten argues that a concern for language rights based on access to public institutions favours the personality principle rather than regionalist language policies. This is not inherently true, because the resources to support bilingualism are not unlimited, and the group which most needs support in accessing public institutions may vary from place to place.

Patten goes on to visit arguments from social mobility, which he deems more likely to favour a territorial principle. Where there are millions of speakers of some language living in close proximity, it is possible to have a reasonably complete set of social institutions in that language, ensuring that members of that language community do not face diminished opportunities. Where a language community has insufficient numbers, there is no prospect of equal opportunity except by acquiring a more dominant language. Therefore, it makes the most sense to promote minority languages where they are viable, and to promote integration elsewhere.

Patten also treats arguments from social cohesion, although he does so under the name "democratic participation." He views this argument as supportive of territoriality, but curiously I am inclined to come to the opposite conclusion. If people do not share a common language, it is even more damaging to their cohesion to segregate them geographically. The difference, I suppose, follows from a different set of assumptions. If you presume linguistic disunity to be the norm no matter how you cut a territory up, you will not support a principle of one state - one language. If you make the opposite assumption, you might well conclude that it is better to have two monolingual states than one bilingual one.

Patten's last argument is from intrinsic identity. To whatever extent language is constitutive of identity, people ought to have the right to the identity they like no matter where they are. This tends to favour a personality-based language policy.

Unlike Réaume, Patten does not come out in favour of unalloyed personality principles in language policy. He finds that arguments from social mobility and social solidarity are good arguments, even if they do not trump the case for the personality principle.

Tomorrow: A more general critique