Wednesday, February 21, 2007



INSTRUCTIONS: Fill in the blank with the name of any group of people, nation, substance, or activity that threatens to destroy the American way of life and everything America stands for.

Americans must come together and give government the tools and funding needed to end the menace of (BLANK). The Special Commission for the Study of (BLANK) made up of former government officials working at a prestigious Washington think tank recently issued a report that reveals the true danger of (BLANK). The report clearly states that if government does not act now to deal with (BLANK), millions of Americans who work hard and play by the rules will suffer needlessly and our way of life will be put into jeopardy.

The Special Commission's report calls for a war on (BLANK) and provides a blueprint to deal with eliminating the threats posed by (BLANK). The report recommends that the President should immediately appoint a task force to form a network with various local, state, and federal law enforcement agencies, federal intelligence agencies, and the Pentagon in order to mount a well organized attack on (BLANK).

The Special Commission suggests forming an international coalition with friends and allies abroad who share our concern over the (BLANK) menace. Of course, the war on (BLANK) must be fought most vigorously on American soil. This means giving authorities special powers to deal with the elusive and dangerous people involved with (BLANK). The Commission's report suggests specific legislation needed to give the government the powers to deal with (BLANK).

Freedoms that Americans have become used to will have to be temporarily curtailed in order for government to mount an effective campaign to eradicate the threat of (BLANK). The government must have the ability to identify and target anyone who is involved in any manner with (BLANK).

This may mean that Americans will have to give up some of their privacy until the war on (BLANK) is won. The war on (BLANK) requires authorities to be able to conduct random searches, monitor financial transactions, monitor communications, detain suspects, and interrogate detainees.

Americans must also be informed of the financial costs of the war on (BLANK). Fighting the gathering dangers of (BLANK) will not come cheap. But, no cost is too great to prevent the destruction of the American way of life.

Experts predict that it may take many years to win the war on (BLANK) and it may cost hundreds of billions of dollars to eliminate the threat (BLANK) poses to America. The Commission's report suggests that the war on (BLANK) can be financed through borrowing in order to protect Americans from costly tax increases. The report notes that most Americans have been supportive of deficit spending for other wars.

Americans working together and supporting the war on (BLANK) will ensure that the war on (BLANK) will be won. There can be no question that if the blueprint outlined in the report issued by the Special Commission for the Study of (BLANK) is followed, we will win the war on (BLANK) and guarantee future generations the same security and freedom that we all cherish.

powered by performancing firefox


737 U.S. Military Bases = Global Empire

By Chalmers Johnson, Metropolitan Books.

With more than 2,500,000 U.S. personnel serving across the planet and military bases spread across each continent, it's time to face up to the fact that our American democracy has spawned a global empire.

Once upon a time, you could trace the spread of imperialism by counting up colonies. America's version of the colony is the military base; and by following the changing politics of global basing, one can learn much about our ever more all-encompassing imperial "footprint" and the militarism that grows with it.

It is not easy, however, to assess the size or exact value of our empire of bases. Official records available to the public on these subjects are misleading, although instructive. According to the Defense Department's annual inventories from 2002 to 2005 of real property it owns around the world, the Base Structure Report, there has been an immense churning in the numbers of installations.

The total of America's military bases in other people's countries in 2005, according to official sources, was 737. Reflecting massive deployments to Iraq and the pursuit of President Bush's strategy of preemptive war, the trend line for numbers of overseas bases continues to go up.

Interestingly enough, the thirty-eight large and medium-sized American facilities spread around the globe in 2005 -- mostly air and naval bases for our bombers and fleets -- almost exactly equals Britain's thirty-six naval bases and army garrisons at its imperial zenith in 1898. The Roman Empire at its height in 117 AD required thirty-seven major bases to police its realm from Britannia to Egypt, from Hispania to Armenia. Perhaps the optimum number of major citadels and fortresses for an imperialist aspiring to dominate the world is somewhere between thirty-five and forty.

Using data from fiscal year 2005, the Pentagon calculated that its overseas bases were worth at least $127 billion -- surely far too low a figure but still larger than the gross domestic products of most countries -- and an estimated $658.1 billion for all of them, foreign and domestic (a base's "worth" is based on a Department of Defense estimate of what it would cost to replace it). During fiscal 2005, the military high command deployed to our overseas bases some 196,975 uniformed personnel as well as an equal number of dependents and Department of Defense civilian officials, and employed an additional 81,425 locally hired foreigners.

The worldwide total of U.S. military personnel in 2005, including those based domestically, was 1,840,062 supported by an additional 473,306 Defense Department civil service employees and 203,328 local hires. Its overseas bases, according to the Pentagon, contained 32,327 barracks, hangars, hospitals, and other buildings, which it owns, and 16,527 more that it leased. The size of these holdings was recorded in the inventory as covering 687,347 acres overseas and 29,819,492 acres worldwide, making the Pentagon easily one of the world's largest landlords.

These numbers, although staggeringly big, do not begin to cover all the actual bases we occupy globally. The 2005 Base Structure Report fails, for instance, to mention any garrisons in Kosovo (or Serbia, of which Kosovo is still officially a province) -- even though it is the site of the huge Camp Bondsteel built in 1999 and maintained ever since by the KBR corporation (formerly known as Kellogg Brown & Root), a subsidiary of the Halliburton Corporation of Houston.

The report similarly omits bases in Afghanistan, Iraq (106 garrisons as of May 2005), Israel, Kyrgyzstan, Qatar, and Uzbekistan, even though the U.S. military has established colossal base structures in the Persian Gulf and Central Asian areas since 9/11. By way of excuse, a note in the preface says that "facilities provided by other nations at foreign locations" are not included, although this is not strictly true. The report does include twenty sites in Turkey, all owned by the Turkish government and used jointly with the Americans. The Pentagon continues to omit from its accounts most of the $5 billion worth of military and espionage installations in Britain, which have long been conveniently disguised as Royal Air Force bases. If there were an honest count, the actual size of our military empire would probably top 1,000 different bases overseas, but no one -- possibly not even the Pentagon -- knows the exact number for sure.

In some cases, foreign countries themselves have tried to keep their U.S. bases secret, fearing embarrassment if their collusion with American imperialism were revealed. In other instances, the Pentagon seems to want to play down the building of facilities aimed at dominating energy sources, or, in a related situation, retaining a network of bases that would keep Iraq under our hegemony regardless of the wishes of any future Iraqi government. The U.S. government tries not to divulge any information about the bases we use to eavesdrop on global communications, or our nuclear deployments, which, as William Arkin, an authority on the subject, writes, "[have] violated its treaty obligations. The U.S. was lying to many of its closest allies, even in NATO, about its nuclear designs. Tens of thousands of nuclear weapons, hundreds of bases, and dozens of ships and submarines existed in a special secret world of their own with no rational military or even 'deterrence' justification."

In Jordan, to take but one example, we have secretly deployed up to five thousand troops in bases on the Iraqi and Syrian borders. (Jordan has also cooperated with the CIA in torturing prisoners we deliver to them for "interrogation.") Nonetheless, Jordan continues to stress that it has no special arrangements with the United States, no bases, and no American military presence.

The country is formally sovereign but actually a satellite of the United States and has been so for at least the past ten years. Similarly, before our withdrawal from Saudi Arabia in 2003, we habitually denied that we maintained a fleet of enormous and easily observed B-52 bombers in Jeddah because that was what the Saudi government demanded. So long as military bureaucrats can continue to enforce a culture of secrecy to protect themselves, no one will know the true size of our baseworld, least of all the elected representatives of the American people.

In 2005, deployments at home and abroad were in a state of considerable flux. This was said to be caused both by a long overdue change in the strategy for maintaining our global dominance and by the closing of surplus bases at home. In reality, many of the changes seemed to be determined largely by the Bush administration's urge to punish nations and domestic states that had not supported its efforts in Iraq and to reward those that had. Thus, within the United States, bases were being relocated to the South, to states with cultures, as the Christian Science Monitor put it, "more tied to martial traditions" than the Northeast, the northern Middle West, or the Pacific Coast. According to a North Carolina businessman gloating over his new customers, "The military is going where it is wanted and valued most."

In part, the realignment revolved around the Pentagon's decision to bring home by 2007 or 2008 two army divisions from Germany -- the First Armored Division and the First Infantry Division -- and one brigade (3,500 men) of the Second Infantry Division from South Korea (which, in 2005, was officially rehoused at Fort Carson, Colorado). So long as the Iraq insurgency continues, the forces involved are mostly overseas and the facilities at home are not ready for them (nor is there enough money budgeted to get them ready).

Nonetheless, sooner or later, up to 70,000 troops and 100,000 family members will have to be accommodated within the United States. The attendant 2005 "base closings" in the United States are actually a base consolidation and enlargement program with tremendous infusions of money and customers going to a few selected hub areas. At the same time, what sounds like a retrenchment in the empire abroad is really proving to be an exponential growth in new types of bases -- without dependents and the amenities they would require -- in very remote areas where the U.S. military has never been before.

After the collapse of the Soviet Union in 1991, it was obvious to anyone who thought about it that the huge concentrations of American military might in Germany, Italy, Japan, and South Korea were no longer needed to meet possible military threats. There were not going to be future wars with the Soviet Union or any country connected to any of those places.

In 1991, the first Bush administration should have begun decommissioning or redeploying redundant forces; and, in fact, the Clinton administration did close some bases in Germany, such as those protecting the Fulda Gap, once envisioned as the likeliest route for a Soviet invasion of Western Europe. But nothing was really done in those years to plan for the strategic repositioning of the American military outside the United States.

By the end of the 1990s, the neoconservatives were developing their grandiose theories to promote overt imperialism by the "lone superpower" -- including preventive and preemptive unilateral military action, spreading democracy abroad at the point of a gun, obstructing the rise of any "near-peer" country or bloc of countries that might challenge U.S. military supremacy, and a vision of a "democratic" Middle East that would supply us with all the oil we wanted. A component of their grand design was a redeployment and streamlining of the military. The initial rationale was for a program of transformation that would turn the armed forces into a lighter, more agile, more high-tech military, which, it was imagined, would free up funds that could be invested in imperial policing.

What came to be known as "defense transformation" first began to be publicly bandied about during the 2000 presidential election campaign. Then 9/11 and the wars in Afghanistan and Iraq intervened. In August 2002, when the whole neocon program began to be put into action, it centered above all on a quick, easy war to incorporate Iraq into the empire. By this time, civilian leaders in the Pentagon had become dangerously overconfident because of what they perceived as America's military brilliance and invincibility as demonstrated in its 2001 campaign against the Taliban and al-Qaeda -- a strategy that involved reigniting the Afghan civil war through huge payoffs to Afghanistan's Northern Alliance warlords and the massive use of American airpower to support their advance on Kabul.

In August 2002, Secretary of Defense Donald Rumsfeld unveiled his "1-4-2-1 defense strategy" to replace the Clinton era's plan for having a military capable of fighting two wars -- in the Middle East and Northeast Asia -- simultaneously. Now, war planners were to prepare to defend the United States while building and assembling forces capable of "deterring aggression and coercion" in four "critical regions": Europe, Northeast Asia (South Korea and Japan), East Asia (the Taiwan Strait), and the Middle East, be able to defeat aggression in two of these regions simultaneously, and "win decisively" (in the sense of "regime change" and occupation) in one of those conflicts "at a time and place of our choosing." As the military analyst William M. Arkin commented, "[With] American military forces ... already stretched to the limit, the new strategy goes far beyond preparing for reactive contingencies and reads more like a plan for picking fights in new parts of the world."

A seemingly easy three-week victory over Saddam Hussein's forces in the spring of 2003 only reconfirmed these plans. The U.S. military was now thought to be so magnificent that it could accomplish any task assigned to it. The collapse of the Baathist regime in Baghdad also emboldened Secretary of Defense Rumsfeld to use "transformation" to penalize nations that had been, at best, lukewarm about America's unilateralism -- Germany, Saudi Arabia, South Korea, and Turkey -- and to reward those whose leaders had welcomed Operation Iraqi Freedom, including such old allies as Japan and Italy but also former communist countries such as Poland, Romania, and Bulgaria. The result was the Department of Defense's Integrated Global Presence and Basing Strategy, known informally as the "Global Posture Review."

President Bush first mentioned it in a statement on November 21, 2003, in which he pledged to "realign the global posture" of the United States. He reiterated the phrase and elaborated on it on August 16, 2004, in a speech to the annual convention of the Veterans of Foreign Wars in Cincinnati. Because Bush's Cincinnati address was part of the 2004 presidential election campaign, his comments were not taken very seriously at the time. While he did say that the United States would reduce its troop strength in Europe and Asia by 60,000 to 70,000, he assured his listeners that this would take a decade to accomplish -- well beyond his term in office -- and made a series of promises that sounded more like a reenlistment pitch than a statement of strategy.

"Over the coming decade, we'll deploy a more agile and more flexible force, which means that more of our troops will be stationed and deployed from here at home. We'll move some of our troops and capabilities to new locations, so they can surge quickly to deal with unexpected threats. ... It will reduce the stress on our troops and our military families. ... See, our service members will have more time on the home front, and more predictability and fewer moves over a career. Our military spouses will have fewer job changes, greater stability, more time for their kids and to spend with their families at home."

On September 23, 2004, however, Secretary Rumsfeld disclosed the first concrete details of the plan to the Senate Armed Services Committee. With characteristic grandiosity, he described it as "the biggest re-structuring of America's global forces since 1945." Quoting then undersecretary Douglas Feith, he added, "During the Cold War we had a strong sense that we knew where the major risks and fights were going to be, so we could deploy people right there. We're operating now [with] an entirely different concept. We need to be able to do [the] whole range of military operations, from combat to peacekeeping, anywhere in the world pretty quickly."

Though this may sound plausible enough, in basing terms it opens up a vast landscape of diplomatic and bureaucratic minefields that Rumsfeld's militarists surely underestimated. In order to expand into new areas, the Departments of State and Defense must negotiate with the host countries such things as Status of Forces Agreements, or SOFAs, which are discussed in detail in the next chapter. In addition, they must conclude many other required protocols, such as access rights for our aircraft and ships into foreign territory and airspace, and Article 98 Agreements. The latter refer to article 98 of the International Criminal Court's Rome Statute, which allows countries to exempt U.S. citizens on their territory from the ICC's jurisdiction.

Such immunity agreements were congressionally mandated by the American Service-Members' Protection Act of 2002, even though the European Union holds that they are illegal. Still other necessary accords are acquisitions and cross-servicing agreements or ACSAs, which concern the supply and storage of jet fuel, ammunition, and so forth; terms of leases on real property; levels of bilateral political and economic aid to the United States (so-called host-nation support); training and exercise arrangements (Are night landings allowed? Live firing drills?); and environmental pollution liabilities.

When the United States is not present in a country as its conqueror or military savior, as it was in Germany, Japan, and Italy after World War II and in South Korea after the 1953 Korean War armistice, it is much more difficult to secure the kinds of agreements that allow the Pentagon to do anything it wants and that cause a host nation to pick up a large part of the costs of doing so. When not based on conquest, the structure of the American empire of bases comes to look exceedingly fragile.

From the book NEMESIS: The Last Days of the American Republic by Chalmers Johnson. Reprinted by arrangement with Metropolitan Books, an imprint of Henry Holt and Company, LLC. Copyright (c) 2006 by Chalmers Johnson. All rights reserved.


Saturday, February 17, 2007


Mystery: How Wealth Creates Poverty in the World

By Michael Parenti

There is a “mystery” we must explain: How is it that as corporate investments and foreign aid and international loans to poor countries have increased dramatically throughout the world over the last half century, so has poverty? The number of people living in poverty is growing at a faster rate than the world’s population. What do we make of this?

Over the last half century, U.S. industries and banks (and other western corporations) have invested heavily in those poorer regions of Asia, Africa, and Latin America known as the “Third World.” The transnationals are attracted by the rich natural resources, the high return that comes from low-paid labor, and the nearly complete absence of taxes, environmental regulations, worker benefits, and occupational safety costs.

The U.S. government has subsidized this flight of capital by granting corporations tax concessions on their overseas investments, and even paying some of their relocation expenses---much to the outrage of labor unions here at home who see their jobs evaporating.

The transnationals push out local businesses in the Third World and preempt their markets. American agribusiness cartels, heavily subsidized by U.S. taxpayers, dump surplus products in other countries at below cost and undersell local farmers. As Christopher Cook describes it in his Diet for a Dead Planet, they expropriate the best land in these countries for cash-crop exports, usually monoculture crops requiring large amounts of pesticides, leaving less and less acreage for the hundreds of varieties of organically grown foods that feed the local populations.

By displacing local populations from their lands and robbing them of their self-sufficiency, corporations create overcrowded labor markets of desperate people who are forced into shanty towns to toil for poverty wages (when they can get work), often in violation of the countries’ own minimum wage laws.

In Haiti, for instance, workers are paid 11 cents an hour by corporate giants such as Disney, Wal-Mart, and J.C. Penney. The United States is one of the few countries that has refused to sign an international convention for the abolition of child labor and forced labor. This position stems from the child labor practices of U.S. corporations throughout the Third World and within the United States itself, where children as young as 12 suffer high rates of injuries and fatalities, and are often paid less than the minimum wage.

The savings that big business reaps from cheap labor abroad are not passed on in lower prices to customers elsewhere. Corporations do not outsource to far-off regions so that U.S. consumers can save money. They outsource in order to increase their margin of profit. In 1990, shoes made by Indonesian children working twelve-hour days for 13 cents an hour cost only $2.60 but still sold for $100 or more in the United States.

U.S. foreign aid usually works hand in hand with transnational investment. It subsidizes construction of the infrastructure needed by corporations in the Third World: ports, highways, and refineries.

The aid given to Third World governments comes with strings attached. It often must be spent on U.S. products, and the recipient nation is required to give investment preferences to U.S. companies, shifting consumption away from home produced commodities and foods in favor of imported ones, creating more dependency, hunger, and debt.

A good chunk of the aid money never sees the light of day, going directly into the personal coffers of sticky-fingered officials in the recipient countries.

Aid (of a sort) also comes from other sources. In 1944, the United Nations created the World Bank and the International Monetary Fund (IMF). Voting power in both organizations is determined by a country’s financial contribution. As the largest “donor,” the United States has a dominant voice, followed by Germany, Japan, France, and Great Britain. The IMF operates in secrecy with a select group of bankers and finance ministry staffs drawn mostly from the rich nations.

The World Bank and IMF are supposed to assist nations in their development. What actually happens is another story. A poor country borrows from the World Bank to build up some aspect of its economy. Should it be unable to pay back the heavy interest because of declining export sales or some other reason, it must borrow again, this time from the IMF.

But the IMF imposes a “structural adjustment program” (SAP), requiring debtor countries to grant tax breaks to the transnational corporations, reduce wages, and make no attempt to protect local enterprises from foreign imports and foreign takeovers. The debtor nations are pressured to privatize their economies, selling at scandalously low prices their state-owned mines, railroads, and utilities to private corporations.

They are forced to open their forests to clear-cutting and their lands to strip mining, without regard to the ecological damage done. The debtor nations also must cut back on subsidies for health, education, transportation and food, spending less on their people in order to have more money to meet debt payments. Required to grow cash crops for export earnings, they become even less able to feed their own populations.

So it is that throughout the Third World, real wages have declined, and national debts have soared to the point where debt payments absorb almost all of the poorer countries’ export earnings---which creates further impoverishment as it leaves the debtor country even less able to provide the things its population needs.

Here then we have explained a “mystery.” It is, of course, no mystery at all if you don’t adhere to trickle-down mystification. Why has poverty deepened while foreign aid and loans and investments have grown? Answer: Loans, investments, and most forms of aid are designed not to fight poverty but to augment the wealth of transnational investors at the expense of local populations.

There is no trickle down, only a siphoning up from the toiling many to the moneyed few.

In their perpetual confusion, some liberal critics conclude that foreign aid and IMF and World Bank structural adjustments “do not work”; the end result is less self-sufficiency and more poverty for the recipient nations, they point out. Why then do the rich member states continue to fund the IMF and World Bank? Are their leaders just less intelligent than the critics who keep pointing out to them that their policies are having the opposite effect?

No, it is the critics who are stupid, not the western leaders and investors who own so much of the world and enjoy such immense wealth and success. They pursue their aid and foreign loan programs because such programs do work. The question is, work for whom? Cui bono?

The purpose behind their investments, loans, and aid programs is not to uplift the masses in other countries. That is certainly not the business they are in. The purpose is to serve the interests of global capital accumulation, to take over the lands and local economies of Third World peoples, monopolize their markets, depress their wages, indenture their labor with enormous debts, privatize their public service sector, and prevent these nations from emerging as trade competitors by not allowing them a normal development.

In these respects, investments, foreign loans, and structural adjustments work very well indeed.

The real mystery is: why do some people find such an analysis to be so improbable, a “conspiratorial” imagining? Why are they skeptical that U.S. rulers knowingly and deliberately pursue such ruthless policies (suppress wages, rollback environmental protections, eliminate the public sector, cut human services) in the Third World? These rulers are pursuing much the same policies right here in our own country!

Isn’t it time that liberal critics stop thinking that the people who own so much of the world---and want to own it all---are “incompetent” or “misguided” or “failing to see the unintended consequences of their policies”? You are not being very smart when you think your enemies are not as smart as you. They know where their interests lie, and so should we.



Use Community: Smaller Footprints, Cooler Stuff and More Cash

Alex Steffen

If we want to build a society which is both prosperous and sustainable, we're going to need to innovate ways of delivering the material goods which underpin that prosperity at a small fraction of the ecological cost they exact today. We must learn to live large while leaving tiny ecological footprints.

Our footprints today are enormous. If every person lived as the average wealthy American does today, we'd need almost ten planets' worth of resources to sustain ourselves, while the gap between our consumption and the capacities of the planet's natural systems has already crossed into overshoot, threatening mass extinctions and catastrophic climate change.

If we're going to have a bright green future -- if we want to avoid living out the rest of our lives in one long emergency, a kind of constant Katrina -- we need to reinvent our lives now, immediately, on a radical scale. British researchers found that in order to reach sustainable prosperity, Londoners would have to shrink their ecological impacts 80% in the next four decades. For affluent Americans, the number may be more like 90%. And the more we learn about the extent of the damage we're causing the planet, the shorter our timeframes for change become. I suspect that we need to be thinking more along the lines of cutting our impact in half in the next ten years.
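The targets above can be made concrete with a little arithmetic. The percentages and timeframes are the article's; the assumption that the cut happens at a constant yearly rate is mine, added purely to illustrate how steep each path is:

```python
def annual_cut(total_reduction: float, years: int) -> float:
    """Constant yearly fractional cut needed so that impact falls by
    `total_reduction` (e.g. 0.80 = 80%) over `years` years.
    Solves (1 - r)**years == 1 - total_reduction for r."""
    remaining = 1.0 - total_reduction
    return 1.0 - remaining ** (1.0 / years)

# London target from the article: 80% lower over four decades
london = annual_cut(0.80, 40)   # roughly 3.9% per year
# Affluent-American target: 90% lower over the same period
american = annual_cut(0.90, 40)  # roughly 5.6% per year
# The author's tighter goal: impact halved within ten years
tight = annual_cut(0.50, 10)    # roughly 6.7% per year
```

The comparison makes the point of the passage visible: halving impact in a decade demands a faster yearly pace of change than even the steeper forty-year target.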

Impossible, you say? I think not.

I believe that three main barriers present themselves.

First, we must learn to see the damage we already do. Most of the ecological devastation we cause happens in ways and places which are obscured from our eyes. You might say it happens off-stage: when we turn the ignition key, we don't see the glaciers of Greenland melting; when we throw out our old television, we don't see its toxic chemicals and heavy metals seeping from the landfill into the groundwater; when we install a new hardwood floor, we don't see the rainforest disappearing in a cloud of chainsaw smoke.

But we ought to see these things. We ought to know the backstory. I believe the next decade will see a lot of artists, activists and culture-jammers finding new ways of highlighting the negative backstories of the goods and services we buy (especially when other choices with better stories exist).

Observation changes behavior. Telling the history of the stuff in our lives is a great way to induce us to change, of course -- for instance, most people will never again want a fur coat once they know what happens to the animals who were wearing that fur before -- but there are even more powerful ways to harness the force of sustained observation. Congestion taxes, for instance, can dramatically alter driving behavior in a very short time. Simply installing home energy meters often leads to a drop in energy use: when we can see immediately the consequences of leaving a light bulb burning unnecessarily, we have an added incentive to switch it off.

Second, we need to make better things. We can shrink our footprints quite a bit through better design and engineering of the products in our lives, by making things which use no raw materials, function at near-optimal energy efficiency, are non-toxic and can be completely recycled or re-used at the end of their lives. That may sound utterly utopian, but we may actually be able to accomplish much of this redesign in the next couple of decades, as better tools for designing more sustainably (like computer-aided design programs that take into account not only the strength and function of the materials a designer is using, but their ecological and social impacts) meet emerging technologies and materials. Indeed, some of us are already much farther ahead in this race than others -- the Japanese, for example, have created an extremely prosperous society with an ecological footprint less than half as large as that of most Americans. And there are extremely encouraging signs that designers, engineers and architects around the world are taking the need for transformative change seriously.

Sometimes, we need to see the system in which a good is embedded in a fresh light. Take Netflix. Most of us don't think of it this way, but this DVD-by-mail service is actually a great model of sustainability innovation. Consider: when many North Americans want to watch a movie at home, they get in their cars, drive to a big-box store, park in a huge parking lot, shop for an available title under the hot lights with the HVAC whooshing air around above them, pay for their film, drive home, watch their film and then repeat the process. When I watch a Netflix movie, though, I drive nowhere. The postal carrier is already coming to my house to drop off my other mail, so the added effort to get me my movie is negligible. I still get to see Lethal Smoking Gun With a Vengeance 4 or whatever, but my drives to and from the store, and even the store itself, have been dematerialized. The DVD itself is unchanged, yet my movie sits more lightly on the planet.

Third, we need a revolution in how we think about the things we have. We've focused quite a bit here on the concept of product-service systems, and for good reason: transforming one's relationship with objects from one of ownership to one of use offers perhaps the greatest immediately available leverage point for greening our lives.

Take power drills. Reportedly, the average power drill is used for somewhere between six and twenty minutes in its entire lifetime -- and yet almost half of all American households own one. If you think of all the energy and materials it takes to make, store and then dispose of those drills -- all the plastic and metal parts; all the trucks used to ship them and stores built to sell them; all the landfills they wind up in -- the ecological cost of each minute of drilling is absurdly large, and thus each hole we put in the wall comes with a chunk of planetary destruction already attached.
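
The amortization arithmetic behind that claim is simple enough to sketch. The six-to-twenty-minute lifetime range comes from the figure quoted above; the embodied-impact number used here is an invented placeholder, not a real estimate.

```python
# Toy amortization of a drill's embodied ("backstory") impact per minute of use.
# The 6-to-20-minute lifetime range is the figure quoted in the text; the
# 25 kg CO2e embodied impact per drill is assumed purely for illustration.

def impact_per_minute(embodied_kg_co2e: float, lifetime_minutes: float) -> float:
    """Spread a tool's make/ship/sell/landfill impact over its minutes of use."""
    return embodied_kg_co2e / lifetime_minutes

EMBODIED = 25.0  # kg CO2e per drill (assumed)

for minutes in (6, 20):
    print(f"{minutes} min of lifetime use -> "
          f"{impact_per_minute(EMBODIED, minutes):.2f} kg CO2e per minute of drilling")
```

Whatever embodied number you plug in, dividing it by so few minutes of actual use is what makes each hole in the wall so expensive.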

But what we want is the hole, not the drill. That is, most of us, most of the time, would be perfectly happy not owning the drill itself if we had the ability to make that hole in the wall in a reasonably convenient manner when the need arose. What if we could substitute, in other words, a hole-drilling service for owning a drill?

We can. Already there are tool libraries, tool-sharing services, and companies that will rent you a drill when you want one. Other models are possible as well, and such product-service systems are not limited to hand tools.

Car sharing offers a great example. With mobile phones, swipe cards and walkshed technologies, it's easy to find the nearest car, quickly make a reservation, walk over and swipe your way inside. Indeed, in sufficiently dense neighborhoods, using a shared car is significantly easier than owning your own car. It can also save you serious cash. It fits perfectly with an urban, high-tech lifestyle.

Even better, car sharing offers major ecological benefits. Because as much as half the energy a car ever uses (and almost all of its material resources) goes not into operating the car but into its manufacture and disposal, sharing cars has an immediate and major ecological benefit. If three people share one car to do the same amount of driving they used to do in three separate cars, they have roughly one-third the backstory impact on those trips that they used to.

And it turns out that a lot of people can use the same few cars. Zipcar founder Robin Chase told me that they have found that every efficiently-used shared car can replace as many as 20 private cars (that is, cars which users either sell or decide not to buy in the first place). That means that the backstory impacts of all those trips drops to as little as 5% of what it once was.
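
The scaling claim in the last two paragraphs reduces to a one-line fraction. A minimal sketch, using the replacement ratios quoted above:

```python
def backstory_fraction(private_cars_replaced: int) -> float:
    """Fraction of the original manufacturing-and-disposal impact that remains
    when one shared car does the driving formerly done by N private cars."""
    return 1.0 / private_cars_replaced

print(backstory_fraction(3))   # three people sharing one car: roughly one-third
print(backstory_fraction(20))  # Zipcar's one-shared-car-replaces-twenty figure: 0.05
```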

But the beneficial impacts of car-sharing don't stop there. Because car-sharers' driving time is limited and measured (most pay by the hour), they tend to use it more efficiently, making fewer trips and planning routes more effectively, all of which means they tend to use less fuel to accomplish the same tasks. Also, because the cars are being used more, they spend less time sitting in parking lots, and as car-sharing becomes more common, we can slash the number of parking spaces in our cities [anyone have a good number for parking-spaces-per-auto in the U.S.?], greatly reducing the amount of space we need to cover with asphalt. (If shared cars and carpools were given priority access to the remaining spaces, this would have the additional advantage of disincentivizing driving alone. We may not go car-free anytime soon, but we could go car-sensible tomorrow.) Perhaps the PARK(ing) kids have the right idea after all. Overall, though it may not be right for everyone, car sharing delivers most of the comfort and utility of driving one's own car for less money and a fraction of the footprint.

What's more, why stop with drills and cars? We already share exercise equipment (gyms), books (libraries), outdoor space (parks) and short-haul rides (taxis); what kind of a scenario might present itself if we took the concept one step further?

Like many people, I want less clutter and hassle in my life. I already have too much stuff I have to store, too many things I have to maintain and keep track of; I even have, I've decided, too much space (despite loving my home, the first I've ever owned, I find that I could easily, perhaps even more happily, live in half the square footage). All of these things take up much of the time, energy and money I might otherwise apply to having the experiences I want in my life. I want an institutional tool for owning less and doing more.

Let's call it a use community. Imagine a member-owned facility located in the heart of a dense urban neighborhood where I could not only access a tool library, a laundry room, a gym and a shared car, or what-have-you, but access a whole suite of services designed to outsource my responsibility for owning or buying things.

For instance, I love to entertain, and so it is a real pleasure to have a dining room and a decent kitchen. But the reality is that I entertain more than a couple guests at most once a month. And I am told that in New York a company already offers studio dwellers access to a professional kitchen and well-appointed dining room, for a fee. If I had access to a place I could throw bigger dinner parties, I could easily live in a much smaller home and not worry that my kitchen stove only has four burners (and two of those don't work so well).

In a similar way, I have a home office. Now that Worldchanging is both so all-consuming and headquartered in a great, funky space, I spend almost no time working at home, but as someone who's often made my living freelancing and consulting, a home office was long an essential. Or was it? Already there are some amazing groups out there offering shared offices: WorkSpace in Vancouver is a fabulous example (they hosted our Vancouver book tour event), but there are other cool models as well, like the Hub and Aula.

Like a lot of urban people, I love third places like cafes, bars and art spaces, but often wrestle with the discomforting reality that in most third places I have limited ability to influence my surroundings. This is the problem rich people solve by joining exclusive private clubs and our grandparents solved by joining fraternal organizations (like Fred Flintstone's Loyal Order of Water Buffaloes), but those aren't the only models for sharing social space. Take for instance the McLeod Residence, an experimental project here in Seattle which aims to create a member-driven art/social space where everyone can have a voice in creating something cool out of the raw materials of square footage and fun allies.

One could also overlay this base of shared space and shared objects with systems for informal sharing -- like Sharer! or RentAThing, or even a place-based FreeCycle -- so that my fellow members and I could function as one large, informal, distributed product-service system on top of the formal program. Heck, we could even go the whole nine yards and host various neighborly technologies like yellow chairs.

Combined purchasing power and shared facilities could also make the best available sustainable products more accessible. Services like CSAs would be a snap, but that's only the beginning. If I as an individual buy a super-green washing machine, it may take years to "earn out" (to have saved me more in water and energy costs than the difference in price between the green machine and cheaper, more wasteful alternatives). Ten people using that same machine, however, would earn out much more quickly (as well as reducing their individual backstory footprints), meaning they could live more sustainably, more cheaply. Similarly, with a shared facility, pushing the building itself to reflect cutting-edge best practices would become more cost-effective. Why shouldn't my use community's facility be something like the Jubilee Wharf? The money we saved would be our own.
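
The "earn out" comparison can be made concrete. The price premium and per-user savings below are invented for illustration; the point is only that payback time shrinks in proportion to the number of users.

```python
def payback_years(price_premium: float, annual_savings_per_user: float, users: int) -> float:
    """Years until a greener machine's extra purchase cost is repaid,
    assuming water/energy savings scale with the number of people using it."""
    return price_premium / (annual_savings_per_user * users)

PREMIUM = 400.0          # extra cost of the efficient washer vs. a cheap one (assumed)
SAVINGS_PER_USER = 40.0  # water and energy savings per user per year (assumed)

print(payback_years(PREMIUM, SAVINGS_PER_USER, 1))   # one owner: 10.0 years
print(payback_years(PREMIUM, SAVINGS_PER_USER, 10))  # ten sharers: 1.0 year
```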

I'd bet that a comprehensive survey of both my ecological impact now and the life I'd like to be living would reveal a ton of ways in which I could give up things I now own or purchase, replace them with things I use and share, and in the process not only greatly reduce my impact on the planet but actually get more life through the energy and money I'd save. (Indeed, an interesting subject I won't pursue here is the sudden explosion of financial models through which people can act to their mutual benefit -- not only what are called Mutual Benefit Corporations here in the US, but Tenancy-in-Common arrangements, joint ownership agreements and various forms of time-shares and cooperatives. Wealthy people already understand this principle well, creating corporations to share things like hunting lodges and golf courses -- what if a community of users did the same? I am pretty intrigued by the possibilities such mechanisms offer people looking to create innovative new systems of sharing.)

Building passion for such an institution would take creating some serious service envy, but that might be easier than old-school marketers think, especially if the execution of the idea led visibly to the bright green trifecta of having cooler stuff, more money and less impact on the planet.

The impacts might be broader still. One of our goals here must be the redefinition of stylish affluence, not only because the affluent of the Global North are directly responsible for a fairly large share of global pollution, but because it is their lifestyle which is being emulated and adopted by the affluent in the emerging economies. If we can change the way we deliver affluence here, we can share affluence there without losing the great wager. That seems worth some experimentation.


Friday, February 02, 2007


The Mystery of Consciousness

The young woman had survived the car crash, after a fashion. In
the five months since parts of her brain had been crushed, she could
open her eyes but didn't respond to sights, sounds or jabs. In the
jargon of neurology, she was judged to be in a persistent vegetative
state. In crueler everyday language, she was a vegetable.

Picture the astonishment of British and Belgian scientists as they
scanned her brain using a kind of MRI that detects blood flow to active
parts of the brain. When they recited sentences, the parts involved in
language lit up. When they asked her to imagine visiting the rooms of
her house, the parts involved in navigating space and recognizing
places ramped up. And when they asked her to imagine playing tennis,
the regions that trigger motion joined in. Indeed, her scans were
barely different from those of healthy volunteers. The woman, it
appears, had glimmerings of consciousness.

Try to comprehend
what it is like to be that woman. Do you appreciate the words and
caresses of your distraught family while racked with frustration at
your inability to reassure them that they are getting through? Or do
you drift in a haze, springing to life with a concrete thought when a
voice prods you, only to slip back into blankness? If we could
experience this existence, would we prefer it to death? And if these
questions have answers, would they change our policies toward
unresponsive patients--making the Terri Schiavo case look like child's play?

The report of this unusual case last September was just
the latest shock from a bracing new field, the science of
consciousness. Questions once confined to theological speculations and
late-night dorm-room bull sessions are now at the forefront of
cognitive neuroscience. With some problems, a modicum of consensus has
taken shape. With others, the puzzlement is so deep that they may never
be resolved. Some of our deepest convictions about what it means to be
human have been shaken.

It shouldn't be surprising that research
on consciousness is alternately exhilarating and disturbing. No other
topic is like it. As René Descartes noted, our own consciousness is the
most indubitable thing there is. The major religions locate it in a
soul that survives the body's death to receive its just deserts or to
meld into a global mind. For each of us, consciousness is life itself,
the reason Woody Allen said, "I don't want to achieve immortality
through my work. I want to achieve it by not dying." And the conviction
that other people can suffer and flourish as each of us does is the
essence of empathy and the foundation of morality.

To make
scientific headway in a topic as tangled as consciousness, it helps to
clear away some red herrings. Consciousness surely does not depend on
language. Babies, many animals and patients robbed of speech by brain
damage are not insensate robots; they have reactions like ours that
indicate that someone's home. Nor can consciousness be equated with
self-awareness. At times we have all lost ourselves in music, exercise
or sensual pleasure, but that is different from being knocked out cold.


Consciousness presents two distinct problems, which the philosopher David Chalmers has dubbed the Easy Problem and the Hard
Problem. Calling the first one easy is an in-joke: it is easy in the
sense that curing cancer or sending someone to Mars is easy. That is,
scientists more or less know what to look for, and with enough
brainpower and funding, they would probably crack it in this century.

What exactly is the Easy Problem? It's the one that Freud made famous, the
difference between conscious and unconscious thoughts. Some kinds of
information in the brain--such as the surfaces in front of you, your
daydreams, your plans for the day, your pleasures and peeves--are
conscious. You can ponder them, discuss them and let them guide your
behavior. Other kinds, like the control of your heart rate, the rules
that order the words as you speak and the sequence of muscle
contractions that allow you to hold a pencil, are unconscious. They
must be in the brain somewhere because you couldn't walk and talk and
see without them, but they are sealed off from your planning and
reasoning circuits, and you can't say a thing about them.

The Easy Problem, then, is to distinguish conscious from unconscious mental
computation, identify its correlates in the brain and explain why it evolved.

The Hard Problem, on the other hand, is why it feels
like something to have a conscious process going on in one's head--why
there is first-person, subjective experience. Not only does a green
thing look different from a red thing, remind us of other green things
and inspire us to say, "That's green" (the Easy Problem), but it also
actually looks green: it produces an experience of sheer greenness that
isn't reducible to anything else. As Louis Armstrong said in response
to a request to define jazz, "When you got to ask what it is, you never
get to know."

The Hard Problem is explaining how subjective
experience arises from neural computation. The problem is hard because
no one knows what a solution might look like or even whether it is a
genuine scientific problem in the first place. And not surprisingly,
everyone agrees that the hard problem (if it is a problem) remains a mystery.

Although neither problem has been solved,
neuroscientists agree on many features of both of them, and the feature
they find least controversial is the one that many people outside the
field find the most shocking. Francis Crick called it "the astonishing
hypothesis"--the idea that our thoughts, sensations, joys and aches
consist entirely of physiological activity in the tissues of the brain.
Consciousness does not reside in an ethereal soul that uses the brain
like a PDA; consciousness is the activity of the brain.


Scientists say this not because they are mechanistic killjoys but because they have amassed evidence that every
aspect of consciousness can be tied to the brain. Using functional MRI,
cognitive neuroscientists can almost read people's thoughts from the
blood flow in their brains. They can tell, for instance, whether a
person is thinking about a face or a place or whether a picture the
person is looking at is of a bottle or a shoe.

What's more, consciousness can be pushed around by physical manipulations.
Electrical stimulation of the brain during surgery can cause a person
to have hallucinations that are indistinguishable from reality, such as
a song playing in the room or a childhood birthday party. Chemicals
that affect the brain, from caffeine and alcohol to Prozac and LSD, can
profoundly alter how people think, feel and see. Surgery that severs
the corpus callosum, separating the two hemispheres (a treatment for
epilepsy), spawns two consciousnesses within the same skull, as if the
soul could be cleaved in two with a knife.

And when the
physiological activity of the brain ceases, as far as anyone can tell
the person's consciousness goes out of existence. Attempts to contact
the souls of the dead (a pursuit of serious scientists a century ago)
turned up only cheap magic tricks, and near death experiences are not
the eyewitness reports of a soul parting company from the body but
symptoms of oxygen starvation in the eyes and brain. In September, a
team of Swiss neuroscientists reported that they could turn out-of-body
experiences on and off by stimulating the part of the brain in which
vision and bodily sensations converge.


ANOTHER STARTLING CONCLUSION FROM the science of consciousness is that the
intuitive feeling we have that there's an executive "I" that sits in a
control room of our brain, scanning the screens of the senses and
pushing the buttons of the muscles, is an illusion. Consciousness turns
out to consist of a maelstrom of events distributed across the brain.
These events compete for attention, and as one process outshouts the
others, the brain rationalizes the outcome after the fact and concocts
the impression that a single self was in charge all along.

Take the famous cognitive-dissonance experiments. When an experimenter got
people to endure electric shocks in a sham experiment on learning,
those who were given a good rationale ("It will help scientists
understand learning") rated the shocks as more painful than the ones
given a feeble rationale ("We're curious.") Presumably, it's because
the second group would have felt foolish to have suffered for no good
reason. Yet when these people were asked why they agreed to be shocked,
they offered bogus reasons of their own in all sincerity, like "I used
to mess around with radios and got used to electric shocks."

It's not only decisions in sketchy circumstances that get rationalized but
also the texture of our immediate experience. We all feel we are
conscious of a rich and detailed world in front of our eyes. Yet
outside the dead center of our gaze, vision is amazingly coarse. Just
try holding your hand a few inches from your line of sight and counting
your fingers. And if someone removed and reinserted an object every
time you blinked (which experimenters can simulate by flashing two
pictures in rapid sequence), you would be hard pressed to notice the
change. Ordinarily, our eyes flit from place to place, alighting on
whichever object needs our attention on a need-to-know basis. This
fools us into thinking that wall-to-wall detail was there all along--an
example of how we overestimate the scope and power of our own consciousness.

Our authorship of voluntary
actions can also be an illusion, the result of noticing a correlation
between what we decide and how our bodies move. The psychologist Dan
Wegner studied the party game in which a subject is seated in front of
a mirror while someone behind him extends his arms under the subject's
armpits and moves his arms around, making it look as if the subject is
moving his own arms. If the subject hears a tape telling the person
behind him how to move (wave, touch the subject's nose and so on), he
feels as if he is actually in command of the arms.

The brain's
spin doctoring is displayed even more dramatically in neurological
conditions in which the healthy parts of the brain explain away the
foibles of the damaged parts (which are invisible to the self because
they are part of the self). A patient who fails to experience a
visceral click of recognition when he sees his wife but who
acknowledges that she looks and acts just like her deduces that she is
an amazingly well-trained impostor. A patient who believes he is at
home and is shown the hospital elevator says without missing a beat,
"You wouldn't believe what it cost us to have that installed."

Why does consciousness exist at all, at least in the Easy Problem sense in
which some kinds of information are accessible and others hidden? One
reason is information overload. Just as a person can be overwhelmed
today by the gusher of data coming in from electronic media, decision
circuits inside the brain would be swamped if every curlicue and muscle
twitch that was registered somewhere in the brain were constantly being
delivered to them. Instead, our working memory and spotlight of
attention receive executive summaries of the events and states that are
most relevant to updating an understanding of the world and figuring
out what to do next. The cognitive psychologist Bernard Baars likens
consciousness to a global blackboard on which brain processes post
their results and monitor the results of the others.
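
Baars's blackboard metaphor can be caricatured in a few lines of code. This is strictly a toy: the process names and salience scores are invented, and real attention is nothing like a single `max()` call.

```python
# A toy "global blackboard": many unconscious processes post results with a
# salience score, and only the most salient one reaches working memory.
# All names and scores below are invented for illustration.

posts = {
    "heart-rate-control": 0.1,  # stays unconscious
    "syntax-ordering":    0.2,  # stays unconscious
    "pain-in-left-foot":  0.7,
    "plan-for-the-day":   0.9,  # outshouts the others
}

conscious = max(posts, key=posts.get)  # the process that wins the competition
print(conscious)  # -> plan-for-the-day
```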


Another reason may be strategic. Evolutionary biologist Robert Trivers has noted that people
have a motive to sell themselves as beneficent, rational, competent
agents. The best propagandist is the one who believes his own lies,
ensuring that he can't leak his deceit through nervous twitches or
self-contradictions. So the brain might have been shaped to keep
compromising data away from the conscious processes that govern our
interaction with other people. At the same time, it keeps the data
around in unconscious processes to prevent the person from getting too
far out of touch with reality.

What about the brain itself? You
might wonder how scientists could even begin to find the seat of
awareness in the cacophony of a hundred billion jabbering neurons. The
trick is to see what parts of the brain change when a person's
consciousness flips from one experience to another. In one technique,
called binocular rivalry, vertical stripes are presented to the left
eye, horizontal stripes to the right. The eyes compete for
consciousness, and the person sees vertical stripes for a few seconds,
then horizontal stripes, and so on.

A low-tech
way to experience the effect yourself is to look through a paper tube
at a white wall with your right eye and hold your left hand in front of
your left eye. After a few seconds, a white hole in your hand should
appear, then disappear, then reappear.

Monkeys experience
binocular rivalry. They can learn to press a button every time their
perception flips, while their brains are impaled with electrodes that
record any change in activity. Neuroscientist Nikos Logothetis found
that the earliest way stations for visual input in the back of the
brain barely budged as the monkeys' consciousness flipped from one
state to another. Instead, it was a region that sits further down the
information stream and that registers coherent shapes and objects that
tracks the monkeys' awareness. Now this doesn't mean that this place on
the underside of the brain is the TV screen of consciousness. What it
means, according to a theory by Crick and his collaborator Christof
Koch, is that consciousness resides only in the "higher" parts of the
brain that are connected to circuits for emotion and decision making,
just what one would expect from the blackboard metaphor.


Neuroscientists have long known that consciousness depends on certain
frequencies of oscillation in the electroencephalogram (EEG). These
brain waves consist of loops of activation between the cortex (the
wrinkled surface of the brain) and the thalamus (the cluster of hubs at
the center that serve as input-output relay stations). Large, slow,
regular waves signal a coma, anesthesia or a dreamless sleep; smaller,
faster, spikier ones correspond to being awake and alert. These waves
are not like the useless hum from a noisy appliance but may allow
consciousness to do its job in the brain. They may bind the activity in
far-flung regions (one for color, another for shape, a third for
motion) into a coherent conscious experience, a bit like radio
transmitters and receivers tuned to the same frequency. Sure enough,
when two patterns compete for awareness in a binocular-rivalry display,
the neurons representing the eye that is "winning" the competition
oscillate in synchrony, while the ones representing the eye that is
suppressed fall out of synch.

So neuroscientists are well on the
way to identifying the neural correlates of consciousness, a part of
the Easy Problem. But what about explaining how these events actually
cause consciousness in the sense of inner experience--the Hard Problem?


Consider how you could ever know whether you see colors the same way that I do. Sure, you and
I both call grass green, but perhaps you see grass as having the color
that I would describe, if I were in your shoes, as purple. Or ponder
whether there could be a true zombie--a being who acts just like you or
me but in whom there is no self actually feeling anything. This was the
crux of a Star Trek plot in which officials wanted to reverse-engineer
Lieut. Commander Data, and a furious debate erupted as to whether this
was merely dismantling a machine or snuffing out a sentient life.

No one knows what to do with the Hard Problem. Some people may see it as
an opening to sneak the soul back in, but this just relabels the
mystery of "consciousness" as the mystery of "the soul"--a word game
that provides no insight.

Many philosophers, like Daniel
Dennett, deny that the Hard Problem exists at all. Speculating about
zombies and inverted colors is a waste of time, they say, because
nothing could ever settle the issue one way or another. Anything you
could do to understand consciousness--like finding out what wavelengths
make people see green or how similar they say it is to blue, or what
emotions they associate with it--boils down to information processing
in the brain and thus gets sucked back into the Easy Problem, leaving
nothing else to explain. Most people react to this argument with
incredulity because it seems to deny the ultimate undeniable fact: our
own experience.

The most popular attitude to the Hard Problem
among neuroscientists is that it remains unsolved for now but will
eventually succumb to research that chips away at the Easy Problem.
Others are skeptical about this cheery optimism because none of the
inroads into the Easy Problem brings a solution to the Hard Problem
even a bit closer. Identifying awareness with brain physiology, they
say, is a kind of "meat chauvinism" that would dogmatically deny
consciousness to Lieut. Commander Data just because he doesn't have the
soft tissue of a human brain. Identifying it with information
processing would go too far in the other direction and grant a simple
consciousness to thermostats and calculators--a leap that most people
find hard to stomach. Some mavericks, like the mathematician Roger
Penrose, suggest the answer might someday be found in quantum
mechanics. But to my ear, this amounts to the feeling that quantum
mechanics sure is weird, and consciousness sure is weird, so maybe
quantum mechanics can explain consciousness.

And then there is
the theory put forward by philosopher Colin McGinn that our vertigo
when pondering the Hard Problem is itself a quirk of our brains. The
brain is a product of evolution, and just as animal brains have their
limitations, we have ours. Our brains can't hold a hundred numbers in
memory, can't visualize seven-dimensional space and perhaps can't
intuitively grasp why neural information processing observed from the
outside should give rise to subjective experience on the inside. This
is where I place my bet, though I admit that the theory could be
demolished when an unborn genius--a Darwin or Einstein of
consciousness--comes up with a flabbergasting new idea that suddenly
makes it all clear to us.

Whatever the solutions to the Easy and
Hard problems turn out to be, few scientists doubt that they will
locate consciousness in the activity of the brain. For many
nonscientists, this is a terrifying prospect. Not only does it strangle
the hope that we might survive the death of our bodies, but it also
seems to undermine the notion that we are free agents responsible for
our choices--not just in this lifetime but also in a life to come. In
his millennial essay "Sorry, but Your Soul Just Died," Tom Wolfe
worried that when science has killed the soul, "the lurid carnival that
will ensue may make the phrase 'the total eclipse of all values' seem tame."


MY OWN VIEW IS THAT THIS IS backward: the biology of consciousness offers a sounder
basis for morality than the unprovable dogma of an immortal soul. It's
not just that an understanding of the physiology of consciousness will
reduce human suffering through new treatments for pain and depression.
That understanding can also force us to recognize the interests of
other beings--the core of morality.

As every student in
Philosophy 101 learns, nothing can force me to believe that anyone
except me is conscious. This power to deny that other people have
feelings is not just an academic exercise but an all-too-common vice,
as we see in the long history of human cruelty. Yet once we realize
that our own consciousness is a product of our brains and that other
people have brains like ours, a denial of other people's sentience
becomes ludicrous. "Hath not a Jew eyes?" asked Shylock. Today the
question is more pointed: Hath not a Jew--or an Arab, or an African, or
a baby, or a dog--a cerebral cortex and a thalamus? The undeniable fact
that we are all made of the same neural flesh makes it impossible to
deny our common capacity to suffer.

And when you think about it,
the doctrine of a life-to-come is not such an uplifting idea after all
because it necessarily devalues life on earth. Just remember the most
famous people in recent memory who acted in expectation of a reward in
the hereafter: the conspirators who hijacked the airliners on 9/11.

Think, too, about why we sometimes remind ourselves that "life is short." It
is an impetus to extend a gesture of affection to a loved one, to bury
the hatchet in a pointless dispute, to use time productively rather
than squander it. I would argue that nothing gives life more purpose
than the realization that every moment of consciousness is a precious
and fragile gift.

Steven Pinker is Johnstone Professor of
Psychology at Harvard and the author of The Language Instinct, How the
Mind Works and The Blank Slate.

