Sunday, October 21, 2012

Tempering Iron Dome: US may spend $680 million on Israeli missile shield

Published: 21 April, 2012, 21:45

An Iron Dome short-range missile defence system (AFP Photo / Jack Guez)

The US could fork out $680 million to strengthen Israel's Iron Dome rocket shield. To help Israelis defend themselves, the Republicans are seeking to stretch an already enormous US military budget a little bit more.

The plan to push for funding for Israel’s Iron Dome was disclosed by two congressional staff members, Reuters reports. The Obama administration earlier announced that an “appropriate” level of funding will be provided for this program, but did not request any specific sum for 2012, and thus Congress has not appropriated funds for it.

Top Republicans criticized Obama for his lack of support for this “vital defense cooperation program,” as it was called by Howard McKeon (R-CA) and Ileana Ros-Lehtinen (R-FL).

The Iron Dome is Israel’s most modern and efficient short-range anti-rocket interception system.

It has reportedly intercepted over 80 per cent of approximately 300 targets in March. Mobile units were moved around the country to bring down rockets fired by Gaza militants at Israeli-populated areas. At the moment Israel has three of these systems deployed, and by 2013 it is expected to have at least nine units in operation.

The Israeli military says the country needs up to 15 of these locally-manufactured units to adequately protect the largest urban areas.

The US has already spent over $200 million co-financing Iron Dome deployment in 2011. However, for the election year of 2012, the US only approved $235 million to finance the lesser-known Israeli anti-missile systems Arrow and Magic Wand (also known as David's Sling), leaving the Iron Dome behind.

US military help to Israel would come as part of a $30 billion, 10-year military-aid agreement signed by the Bush administration back in 2007. From 2009 through 2018 the US is granting Israel approximately $3 billion per year, 26 per cent of which Israel can spend on locally-manufactured equipment. Moreover, Israel is the only recipient of US military aid granted permission to use the funds on local development and the procurement of military hardware.
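The arithmetic behind those aid figures can be checked in a few lines. This is an illustrative sketch using only the numbers reported in the paragraph above; the actual terms of the 2007 memorandum are more detailed.

```python
# Rough arithmetic behind the aid figures cited above (illustrative only;
# all inputs are the article's reported numbers).
TOTAL_AID = 30e9      # $30 billion over the 10-year agreement
YEARS = 10            # 2009 through 2018
LOCAL_SHARE = 0.26    # portion spendable on Israeli-made equipment

annual_aid = TOTAL_AID / YEARS          # about $3 billion per year
local_spend = annual_aid * LOCAL_SHARE  # about $780 million per year

print(f"annual aid: ${annual_aid / 1e9:.1f}B")
print(f"spendable on local equipment: ${local_spend / 1e6:.0f}M per year")
```

This matches the article's "approximately $3 billion per year" figure and implies roughly $780 million annually that can go to locally-manufactured systems such as Iron Dome.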

About 60 per cent of total US foreign military financing in 2012 (excluding spending on “the global war on terror,” or Overseas Contingency Operations) will be received by Israel.

The Obama administration, as well as its predecessors, considers Israel to be the protector of peace in the Middle East, whose "strength and superiority in the region is critical to regional stability."

"Israel is a long-term democratic ally and we share a special bond," the US assistant secretary of state for political-military affairs, Andrew J. Shapiro, said in November 2011, expressing the cabinet's commitment to preserving Israel's military superiority. "We don't just support Israel because of a long standing bond; we support Israel because it is in our national interests to do so."

Besides the annual money transfers and hardware supplies, the US has established munitions stockpiles in Israel, which are not part of the military aid agreement. Although these stockpiles belong to the US military and are intended for its use, in emergency situations they can also be used by Israel. The overall cost of the US munitions stored in Israel is estimated at around $1 billion in 2012.

US aid represents about 20 per cent of Israel’s total military budget. The aid is spent mostly on US-built military hardware and on building up inventories of cluster munitions, smart bombs and bunker-buster bombs. Israel also announced it would spend $2.75 billion of US aid on 19 of the notorious F-35s, the fifth-generation stealth aircraft whose development had already cost US taxpayers some $382 billion.

The National Defense Authorization Act (NDAA) for Fiscal Year 2012 authorized $662 billion in funding "for the defense of the United States and its interests abroad." With the "Overseas Contingency Operations," Department of Defense spending will exceed $700 billion. And with all indirect expenses and interest on debt incurred in past wars, the total military budget will exceed $1 trillion.

Responding to the nation’s anxiety about debt and deficit problems, US President Barack Obama has vowed to stop the frenzied rise in military spending. The defense budget is expected to be cut by some $487 billion over the next decade. And even Defense Secretary Leon Panetta believes the budget can be reduced without posing a risk to national security.


Rafael Teams with Raytheon to Offer Iron Dome in the U.S.

Posted by Tamir Eshel

Raytheon Company and Rafael Advanced Defense Systems Ltd have teamed to market the combat-proven Iron Dome weapon system in the United States. Rafael developed the original Iron Dome to provide protection against rocket, artillery and mortar attacks. “Iron Dome complements other Raytheon weapons that provide intercept capabilities to the U.S. Army’s Counter Rocket, Artillery, and Mortar initiative at forward operating bases,” said Mike Booen, vice president of Raytheon Missile Systems’ Advanced Security and Directed Energy Systems product line. “Iron Dome can be seamlessly integrated with Raytheon’s C-RAM systems to complete the layered defense.”

Raytheon and Rafael are also teaming on the David's Sling Weapon System, a mobile, land-based missile defense program, and on the Blue Sparrow missile defense targets program. Raytheon is already marketing the Centurion point defense system, an operational, combat-proven system employing the Phalanx close-in weapon system to protect forward operating bases in Iraq and Afghanistan against mortars and short-range rockets.

The Iron Dome has also been proven in combat, intercepting short range rockets fired at Israeli population centers in South-central Israel. The Iron Dome program awarded to Rafael in 2007 has completed flight test trials, and the weapon system is currently used in Israeli population centers to protect against terrorist rocket attacks based on an Israeli Ministry of Defense decision.

“The Iron Dome teaming builds on our decade-long, ongoing cooperation with Raytheon Missile Systems to provide air and missile defense solutions,” said David Stemer, Rafael executive vice president and general manager of Rafael’s Missile Division. “Iron Dome delivers a leap-ahead, affordable capability for future customers.”


Raytheon-Rafael get boost for Iron Dome

by Staff Writers
Tel Aviv, Israel (UPI) Aug 23, 2011

Raytheon's partnership with Rafael Advanced Defense Systems to market the Israeli company's Iron Dome anti-rocket system got a major boost in recent clashes in southern Israel when two batteries downed at least 15 rockets aimed at populated areas.

The batteries were deployed to protect the southern cities of Beersheba, in the Negev Desert, and Ashkelon on the coast from Soviet-designed Grad rockets fired from the Gaza Strip, which is ruled by the Palestinian Hamas movement.

More than 80 rockets were unleashed from Gaza between Thursday and Monday after Palestinian guerrillas infiltrated Israel's southern border with Egypt.

Iron Dome, which became operational in March, has radars built by Elta Systems, a subsidiary of state-owned Israel Aerospace Industries, that compute the trajectories of incoming rockets and can determine where they will land.

The battle management and weapon control center, developed by mPrest Systems, engages only rockets heading for populated areas, firing Tamir interceptor missiles built by Rafael.

Each battery consists of three launchers equipped with 20 Tamirs and is reported to be able to protect an area of around 60 square miles. The system is designed to defend against rockets and artillery shells at ranges of 2-45 miles.
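The per-battery figures reported above can be tallied in a short sketch. These are the article's reported numbers, not official specifications.

```python
# Back-of-the-envelope tally of the battery figures reported above
# (the article's numbers, not official specifications).
launchers_per_battery = 3
tamirs_per_launcher = 20
coverage_sq_miles = 60              # reported protected area per battery
min_range_miles, max_range_miles = 2, 45  # reported engagement envelope

interceptors_per_battery = launchers_per_battery * tamirs_per_launcher

print(f"ready interceptors per battery: {interceptors_per_battery}")
print(f"coverage per battery: ~{coverage_sq_miles} sq miles, "
      f"engagement range {min_range_miles}-{max_range_miles} miles")
```

So each battery fields 60 ready interceptors; at the 10 to 15 batteries commanders say they need, that would mean 600 to 900 Tamirs loaded at any one time.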

The successful interceptions of the last few days confirmed the system's operational capabilities, which got their baptism of fire on April 7, when Iron Dome made its first kill, downing a Grad heading for Beersheba.

That was the first time a short-range rocket had been intercepted in flight.

In the April action, the system destroyed nine Hamas rockets and missed one.

In the latest fighting, Iron Dome had its first operational failure. On Saturday, it shot down a volley of six rockets, but a seventh evaded the system and hit Beersheba, killing one man.

Following the April interceptions, militants in Gaza changed tactics in an effort to evade the two Iron Dome batteries deployed in southern Israel.

One tactic was to unleash volleys of 122mm Grads almost simultaneously, rather than individual launches, seeking to overwhelm the system. But, as far as can be determined, Iron Dome was able to handle the rocket swarms.

"This is the first system of its kind anywhere in the world, it's in its first operational test and we've already intercepted a large number of rockets targeting Israeli communities," Brig. Gen. Doron Gavish, the Air Defense Corps commander, said Sunday.

But he stressed, "We said in advance that this wasn't a hermetic system."

Military commanders say they need 10-15 Iron Dome batteries to effectively cover the main population centers and key military installations. Other estimates range as high as 20 batteries.

The Defense Ministry has accelerated the Iron Dome production timetable.

A third battery is to be deployed in early October, with a fourth due for delivery in six months. The air force is expected to get another two by the end of 2012.

Most of the budget for the four new batteries is covered by a special allocation of $205 million authorized by the U.S. Congress in May.

The military has said it will be investing around $1 billion to produce more Iron Dome batteries. Defense Minister Ehud Barak says he's working on an emergency plan to have nine batteries operational by late 2013.

Iron Dome is the lower tier of a four-layer missile defense network planned by Israel.

The Arrow 2 high-altitude, long-range system to counter ballistic missiles has been operational since 2000 and an upgraded version, Arrow 3, extending its range and altitude, is currently being developed. It was successfully flight-tested in July.

Medium-range missiles will be countered by a system known as David's Sling, which is under development by Rafael in Haifa.

At least two Asian states are reported to be "actively examining" the system. One is believed to be Singapore, which has had military links to Israel since the 1960s.

South Korea, which faces missile and artillery threats from the north, has also shown interest in recent months.

The Jerusalem Post reported recently the Israeli Defense Ministry is talking to several European countries about acquiring Iron Dome to protect their forces in Afghanistan.


Saturday, October 20, 2012

Air America's Black Helicopter | Military Aviation | Air & Space Magazine

Air America's Black Helicopter

The secret aircraft that helped the CIA tap phones in North Vietnam.

  • By James R. Chiles
  • Air & Space magazine, March 2008

The Quiet One had a forward-looking infrared (FLIR) camera on its belly that helped the pilots navigate at night.
Shep Johnson

BLACK HELICOPTERS ARE A FAVORITE FANTASY when conspiracy theorists and movie directors conjure a government gone bad, but in fact, the last vehicle a secret organization would choose for a stealthy mission is a helicopter. A helicopter is a one-man band, its turbine exhaust blaring a piercing whine, the fuselage skin's vibration rumbling like a drum, the tail rotor rasping like a buzzsaw.

In the last dark nights of the Vietnam War, however, a secret government organization did use a helicopter for a single, sneaky mission. But it was no ordinary aircraft. The helicopter, a limited-edition model from the Aircraft Division of Hughes Tool Company, was modified to be stealthy. It was called the Quiet One—also known as the Hughes 500P, the "P" standing for Penetrator.

Just how quiet was the Quiet One? "It was absolutely amazing just how quiet those copters were," recalls Don Stephens, who managed the Quiet One's secret base in Laos for the CIA. "I'd stand on the [landing pad] and try to figure out the first time I could hear it and which direction it was coming from. I couldn't place it until it was one or two hundred yards away." Says Rod Taylor, who served as project engineer for Hughes, "There is no helicopter today that is as quiet."

The Quiet One grew out of the Hughes 500 helicopter, known to aviators in Vietnam as the OH-6A "Loach," after LOH, an abbreviation for "light observation helicopter." The new version started with a small research-and-development contract from the Advanced Research Projects Agency (now the Defense Advanced Research Projects Agency) in 1968. The idea of using hushed helicopters in Southeast Asia came from the CIA's Special Operations Division Air Branch, which wanted them to quietly drop off and pick up agents in enemy territory. The CIA bought and then handed over two of the top-secret helicopters to a firm—by all appearances, civilian—called Air America. Formed in 1959 from assets of previous front companies, Air America was throughout its life beholden to the CIA, the Department of State, and the Pentagon.

The Quiet One's single, secret mission, conducted on December 5 and 6, 1972, fell outside Air America's normal operations. The company's public face—what spies might call its "legend"—was that of a plucky charter airline delivering food and supplies to civilians in Laos, and flying occasional combat evacuation missions in Laos and South Vietnam. While it did substantially more than that, and at considerable peril (217 of its employees died in Laos), Air America crews did not make it a practice to fly deep into North Vietnam.

The mission was intended to fill an information gap that had been galling Henry Kissinger, national security advisor to President Richard Nixon. Negotiations to end the 11-year war had begun in March 1972 but stalled in part because South Vietnamese leaders feared that North Vietnam would invade not long after U.S. troops left. A five-month Air Force and Navy bombing campaign called Operation Linebacker had brought the North Vietnamese to the negotiating table in Paris that October, but even that campaign could not force a deal. Kissinger wanted the CIA to find out whether the North Vietnamese were following the peace terms or just using them as a smokescreen for attack plans.

From its intelligence work a year earlier, the CIA knew about a weak point in the North Vietnamese wall of security: a telephone line used by the country's military commanders, located near the industrial city of Vinh. A patrolled bicycle path ran alongside the string of telephone poles, but at one spot, about 15 miles southwest of Vinh and just east of the Cau River, the phone line went straight up a bluff, over a ridge, and down the other side. The terrain was too steep for bikes, so the path followed the river, which flowed around the bluff, rejoining the telephone poles on the bluff's far side (see hand-drawn map, p. 67). This would be the best place to drop off commandos to place a wiretap.

Because the Vinh tap would be sending its intercepts out of North Vietnam, across Laos, and into Thailand, it would need a solar-powered relay station that could catch and transmit the signal, broadcasting from high ground. The station would be within earshot of enemy patrols, so both the tap and relay would have to be dropped in by helicopter—a very quiet one.

Disturbing the peace

The Hughes Tool Aircraft Division had started working on such a helicopter in 1968; that year an affluent suburb of Los Angeles had bought two piston-powered Hughes 269 helicopters for police patrols. Citizens soon called to complain about the noise of the low-flying patrols, and the city told Hughes to either make them quieter or take them back. An emerging market for police patrols was at stake. Engineers at Hughes identified one of the worst of the noisemakers: the tail rotor. By doubling the number of blades to four, Hughes was able to cut the speed of the rotor in half, which reduced the helicopter's noise.

Coincidentally, the Advanced Research Projects Agency was hunting for contractors who could cut noise from military helicopters of all sizes. After hearing about Hughes' work on the police helicopters, ARPA offered the company $200,000 in 1968 to work similar magic on a Hughes OH-6A light helicopter. Hughes Tool made a short movie about the modifications, which included a new set of gears to slow the tail rotor, and showed it to ARPA. "ARPA came back and offered a blank check to do a Phase Two of the program with no holds barred," recalls Taylor, the project engineer. "Each and every noise source in the helicopter was to be addressed in an attempt to reduce the signature to an absolute minimum." ARPA gave the project the code name Mainstreet. Even before work was fully under way, the CIA ordered two (later registered as N351X and N352X) for use in the field. Test flights began at Culver City, California, in 1971, followed by a brisk training program for the U.S. instructor-pilots who would later train mission pilots.

Flights of the Quiet One included low-level work at the secret Air Force base Area 51 in Nevada and touchdowns on peaks in California to familiarize pilots with close-quarters maneuvering and landing in darkness. Pilots needed at least eight hours to get comfortable with steering by sole reference to the comparatively narrow view of the forward-looking infrared (FLIR) camera, which was mounted just above the skids. Says Allen Cates, an Air America pilot who flew one in 1973: "When you saw a person, it was like looking at a photo negative. Or you'd see just the hood of a car, glowing from heat off the engine block…. And when you were landing, a blade of grass looked as big as a tree."

The slapping noise that some helicopters produce, which can be heard two miles away or more, is caused by "blade vortex interaction," in which the tip of each whirling rotor blade makes tiny tornadoes that are then struck by oncoming blades. The Quiet One's modifications included an extra main rotor blade, changes to the tips on the main blades, and engine adjustments that allowed the pilot to slow the main rotor speed, making the blades quieter (see "How To Hush a Helicopter," p. 68). The helicopter also had extra fuel tanks in the rear passenger compartment, an alcohol-water injection system to boost the Allison engine's power output for short periods, an engine exhaust muffler, lead-vinyl pads to deaden skin noise, and even a baffle to block noise slipping out the air intake.

The extensive alterations did not blank out all noise, Taylor says. Rather, they damped the kinds of noise that people associate with a helicopter. "Noise is very subjective," he says. "You can reduce the overall noise signature and an observer will still say, 'I can hear it as well as before.' It's related to the human ability to discriminate different sounds. You don't hear the lawnmower next door, but a model airplane is easily heard. It has a higher frequency and seems irritating."

Hughes shipped the two Quiet Ones to Taiwan in October 1971. Under the CIA's original plans, the Vinh wiretap mission would be flown by pilots from the Taiwanese air force's 34th Squadron. This would offer the United States some deniability, however flimsy, if any of the helicopters were captured. The pilots' U.S. instructors included two veteran helicopter pilots with experience flying low-level missions in Vietnam: Lloyd George Anthony Lamothe Jr. and Daniel H. Smith. The two had joined Air America six months earlier for that purpose.

The decoys arrive

Meanwhile, Air America's fleet in Thailand accepted delivery of two more Hughes 500 models—standard ones—and used them for air taxi operations. The job of these plain-vanilla Loaches was to distract attention from the Quiet Ones before they even landed in Laos. Loaches were common in Vietnam but not in Laos, so Air America needed to start using them in full view of North Vietnamese sympathizers. That way, if an enemy observer later saw the modified Loaches flitting past on a moonlit night, he might not consider the event worthy of comment.

Initial flight training on the Quiet Ones, conducted in Taiwan, was complete by June 1972. The two helicopters and their gear traveled on a C-130 transport to an isolated airstrip in Thailand called LS-05. Mechanics pulled them out, swung the rotor blades for flight, and filled the tanks, and the two helicopters flew by night to an even more obscure base, a secret one in southwest Laos known to insiders as PS-44. PS stood for "Pakse Site," a reference to the garrison town of Pakse, 18 miles to the southeast. PS-44 had been built to house Laotian commandos and the aircraft that flew them around. Its dirt strip and three tin-roof buildings sat on the edge of a plateau, surrounded on three sides by steep ground that was unusual for its expanses of bright beach-like sand, eroded from nearby cliffs of white sandstone.

It appeared to be far away from everything, but it was not far from the enemy. By late 1972, units of the North Vietnamese army were ensconced 20 miles to the north. To offer some peace of mind, the CIA had Air America keep a turbine transport helicopter, the Sikorsky S-58T "Twin Pack," handy for evacuations. More reassuring, the terrain was so steep and overgrown that the enemy could have stormed it from only one direction: the west. The base also relied on a perimeter of six guard posts staffed by Laotian soldiers, and reinforcements could have been called in from a base lying southwest, along the Mekong River.

No pictures allowed

Cameras were discouraged at PS-44, and photographing the Quiet One was strictly forbidden. Crews already knew the risk of telling tales in the bars and brothels of Southeast Asia, but even inside the base, the code of silence persisted. "You just didn't come up and introduce yourself at PS-44," says Dick Casterlin, an Air America pilot who came to the base often. "Nobody talked about their personal background or where they were from." Men who worked closely for months knew each other only by first names or nicknames. The CIA itself had its own nickname at PS-44: The men called it simply "the Customer."

Casterlin flew an S-58T helicopter during some of the wiretap attempts, accompanying the Quiet One in order to rescue the wiretap teams if that became necessary. Casterlin had a security clearance for special missions, but even he wasn't told where the CIA had hidden the Quiet One.

According to base manager Stephens, the Quiet One was kept out of sight about 600 yards northwest of PS-44's main building, reachable down an unmarked, narrow forest trail. Because of the distance, the forests, and the quieting gear, the helicopter couldn't be heard from the porch of the base's main building unless it was flying overhead. Even then, at night, it sounded like a far-off airplane. The helicopter had its own hangar so Soviet spy planes and satellites could not get a look at the peculiar profile produced by the extra main rotor blade, a tail rotor with blades in an odd scissored configuration, and a big muffler on the rear fuselage.

Between June and September, Lamothe and Smith tried to train the Taiwanese crews to fly the mission, but after months of poor performance by the trainees—including a botched night landing that demolished one of the two Quiet Ones—and bickering over who would be the chief pilot, the CIA managers got fed up and sent the whole contingent home. Lamothe and Smith prepared to fly the mission themselves.

At the same time, the agency placed the project under new management. James Glerum arrived in Pakse to direct operations. Glerum had been the CIA's assistant base chief at Udorn Royal Thai Air Force Base when the Quiet Ones landed in Laos. The new assignment demonstrated how urgently the State Department wanted the wiretapped information, according to Air America chief helicopter pilot Wayne Knight. Glerum, he says, was a CIA "super-grade," outranking many careerists at headquarters.

Soon after his arrival, Glerum quizzed Smith and Lamothe on their cover story. When he realized they had none, he provided them with false identities and a story to go with them in case of capture.

More help came from Air America, which was offering up its best aircraft (the term used was "gold-plated") and its most experienced men to support the mission. One was Thomas "Shep" Johnson, a rangy Idahoan with a background in smoke-jumping. Johnson had started with Air America in its first year, 1959, rigging bundles with parachutes and pushing them out of aircraft. A year before, he had been one of only three men to survive a North Vietnamese attack at another Laotian air base. Johnson's main responsibility was to train a squad of eight Laotian commandos for the Vinh wiretap mission. For years, the commandos had been fighting communist forces and had reported on enemy traffic along the Ho Chi Minh Trail in eastern Laos. A group of 100, they lived in a separate part of PS-44 and manned the perimeter.

The CIA had hoped to get the wiretap in place before monsoon season, but a series of mishaps and equipment malfunctions, compounded by the monsoons starting early, delayed the mission. "We had a string of unbelievably bad weather," says Glerum. "Normally, November to January is the rainy season. It had started right as I got there [in October]." Twice Lamothe and Smith took off from PS-44 to fly the wiretap mission, refueling in eastern Thailand and heading into enemy territory, only to turn back after running into clouds in the passes or fog at the wiretap site. "The preparation for the mission was a very hectic time," says Stephens, "but it also seemed like it dragged on forever."


Hughes technicians toiled over the troublesome infrared camera; problems with it had forced cancellation of an October 21 attempt. "The FLIR [forward-looking infrared] required a lot of work," recalls Glerum. Other gadgetry included SU-50 night-vision goggles (their first use in Laos), which worked only when the moon was a quarter to a half full. The helicopter also had a long-range navigation system (LORAN-C).

Any mishap during the night flight into North Vietnam, particularly while the crew maneuvered among trees and telephone poles, would doom the mission and probably its participants. By day Lamothe and Smith studied photos and maps marking the stealthiest route to the target. By night they practiced by using LORAN to navigate from the hangar to a nearby training ground they called the Hole. The topography of the Hole was an "astonishingly accurate duplicate" of the actual wiretap site, according to Glerum. Flying into and out of it was "no problem in the daytime, [but] it could be a bugger at night," recalls Casterlin. Smith and Lamothe dropped the commandos near a simulated telephone pole (a tree stripped of branches and equipped with a cross arm) and flew to a pre-selected tree, where they laid out the radio rig called the spider relay.

The spider relay was to be deployed as the helicopter hovered over a tree. With its solar panels, electronics boxes, and antennas sprung open to a width of almost 10 feet, the relay perched atop the branches with a fishnet-like webbing. It was nearly impossible to see from the ground. The relay could be folded into a compact package that fit between the helicopter skids, but there was so little ground clearance left after it was attached, the pilots could land only on a hard, flat surface.

When each night's practice was complete, Lamothe and Smith flew back through the darkness to the concrete landing pad, which was shaped like an old-fashioned keyhole. The approach to landing was memorable because the Quiet One used no landing lights; it relied on an infrared floodlight on the nose. The light cast an eerie, ruddy glow.

Some of the biggest threats to mission success came not from North Vietnamese army spies but from plain bad luck. One flight opportunity was lost when a scorpion bit a wiretap team commando, setting off an allergic reaction. On one of the training flights at the Hole, after Lamothe and Smith deployed the spider relay used for practice, it slid off the branches and crashed to the ground, with pieces scattering. Training for the mission could not proceed without the relay, and joyful speculation spread among the ranks: It would be a month or more until a new spider could come from the States, so the men could go on leave.

But no: Stephens flew to the spot by helicopter, slid down a rope, and helped technician Bob Lanning bag up the pieces. Back at camp, Lanning laid them out on a floor and said he could get the relay working if he had some new parts. "Jim Glerum sent a cable," says Stephens, "and in three days we had the parts by courier. Bob worked two and a half days, almost nonstop, and put it back together. So we only lost a few days."

With the moon entering the favorable phase, the rescue crews moved to a forward staging base in eastern Thailand while Lamothe, Smith, and the Quiet One remained at PS-44. An attempt was scheduled for the night of December 5, amid rising doubts among Air America veterans as to whether the scheme would ever work.

That night, the Quiet One flew to a refueling base at the Thai-Laotian border, where it met a de Havilland DHC-6 Twin Otter with the Laotian commandos. Two commandos with guns and the wiretap equipment climbed aboard the Quiet One, and the rest stayed on the Otter with parachutes and more guns in case they were needed for a rescue. Accompanied by an armed Twin Pack flown by Casterlin and Julian "Scratch" Kanach, the Quiet One set course for the northeast. The Twin Pack broke away at the North Vietnamese border and took up a slow orbit over Laos, out of radar range but on call if needed. Despite the Twin Pack's readiness to play the rescue role, security was as tight as ever. "I did the LORAN navigation, but I didn't have the coordinates of the wiretap location," Casterlin says. "I assumed they'd tell me if I needed to know, or maybe Scratch knew."

Leaving the Ho Chi Minh Trail, and without being targeted by the anti-aircraft defenses along it, Lamothe and Smith climbed to cross the Annamese mountains, then dropped to follow the nap of the earth, following streambeds when possible. When the pilots identified the wiretap spot, they hovered, and the two Laotian commandos jumped a few feet to the ground.

Lamothe and Smith then flew west across the Cau River to a 1,000-foot-high mountain to set the spider relay. Finding the ideal tree for the relay had taken months of intense photo-reconnaissance work. The tree had to be tall, on high ground with a clear view of the western horizon, and flat at the crown. An Otter orbited over a receiver relay, which was already in place atop another mountain halfway into Laos. Inside the Otter, technicians were watching an oscilloscope measure a test signal from the spider relay.

Meanwhile, the Laotian commandos at the wiretap site found that the poles were concrete rather than wood, so they couldn't use their pole-climbing boots to get up them or a stapler to attach the antenna. The men shinnied up instead. After splicing into the phone wires, they put the tap in place; it was concealed in a glass insulator of the same color used on the French-built line. The commandos began taping up the short-range antenna and installing narrow solar panels atop the pole's cross-arm. This would power the tap's transmitter.

When Lamothe and Smith heard from the Otter that the oscilloscope was getting a clear signal from the spider relay's transmitter, they threw a switch that released the last cables connecting the spider relay to the helicopter and flew the Quiet One to a streambed to wait for the commandos to finish attaching the solar panels. At the scheduled time, Smith restarted the helicopter's turbine; he picked up the commandos at the wiretap site and the team returned to Laos without incident. Those listening to progress reports at PS-44, Udorn, and the Lima 40A refueling site were pleasantly startled to hear that the crew was on its way back and the tap was in place without a firefight, recalls Wayne Knight.

"What makes the Vinh tap so special is that they pulled it off," Knight says. "It had to be right the first time."


Lamothe and Smith left the Quiet One at PS-44 and flew to the CIA's regional office at Udorn by conventional aircraft. Much celebration ensued there—perhaps too much. During the subsequent R&R, someone at the Wolverine Night Club in town bit off part of Smith's ear. If a reprimand for attracting attention was ever entered in Smith's secret personnel file, it didn't matter: The CIA had no plans to send the Quiet One up again, and within a week all the Americans connected with the mission and their equipment were on their way out of Laos.

Recollections differ on how long the Vinh tap worked—perhaps one to three months—and why it went silent. But allegedly it yielded enough inside information from the North Vietnamese high command to help nudge all parties to sign a peace pact in late January 1973. Exactly what Kissinger eavesdropped on remains classified.

"I was not aware of any specifics Kissinger and company were looking for," Glerum says. "Since the land line [at Vinh] was understood to hold the command channel, virtually anything would have been welcome."

The one flyable Quiet One relocated to California. Air America pilots Allen Cates and Robert Mehaffey trained on it at Edwards Air Force Base, achieving proficiency in early 1973. Then, before any special-mission training began, and with no explanation, Cates and Mehaffey were sent back to their old piloting jobs at Air America. Mechanics pulled most of the special features out of the Quiet One, and its trail of insurance and registration papers ends in 1973, after it was transferred to Pacific Corporation of Washington, D.C., a holding company used as a screen for CIA-backed companies and assets.

"The agency got rid of it because they thought they had no more use for it," says Glerum. At least one of the ex-Quiet Ones surfaced years later at the Army's Night Vision & Electronic Sensors Directorate in Fort Belvoir, Virginia.

But according to the participants, no more were built. It's puzzling why the CIA did not keep a stable of Quiet Ones, at least while the technology remained under wraps. And it remained a secret for more than two decades, until Ken Conboy and James Morrison told the story in their 1995 book Shadow War.

But there were valid reasons for dropping the Quiet One from the spymasters' catalog.

"In the long run, the 500P was not the best for setting wiretaps," says Casterlin. "It was not good for high-altitude work." It was a light helicopter and had to be loaded with gear that cut into its payload capability and operating altitude. The Twin Pack was much louder but also simpler to run and more powerful, so Air America used it for later wiretap missions in North Vietnam. At least one tap, placed on the night of March 12-13, 1973, was successful.

Some of the Quiet One's innovations did show up on later helicopters, including the Hughes AH-64 Apache, which has a scissor-style tail rotor. And Hughes engineers' interest in modifying the tips of the main rotor blades to cut the slapping noise caused by blade vortices has been taken up by other experts. Aerospace engineer Gordon Leishman and his team at the University of Maryland, for example, are developing a blade with curved tubes at the tip to divert the air, thereby countering vortex formation. But, thanks to its many unusual modifications, the 500P still holds the title that Hughes gave it in April 1971: "the world's quietest helicopter."

Pilots viewed the terrain imaged by the FLIR on screens in the cockpit.
Shep Johnson

 At a secret base in Laos, Air America's Thomas "Shep" Johnson trained local commandos to set a wiretap. U.S. pilots flew them to the wiretap site, far behind enemy lines.
Shep Johnson

 A Sikorsky H-34, here about to hoist a wingless Cessna O-1 Bird Dog, was a multi-mission helicopter in South Vietnam. One of the H-34 models, called the S-58T, escorted the Quiet One partway on its wiretap mission, in case a rescue was needed.
Department of Defense

A Laotian commando practices for the secret operation.
Shep Johnson

It was rare for the Quiet One, designed for flight at night, to see the light of day.
Shep Johnson

Friday, October 12, 2012

National Security and the Internet - 21st Century Project



Gary Chapman
The 21st Century Project
LBJ School of Public Affairs
Drawer Y, University Station
University of Texas
Austin, TX 78713
(512) 471-8326
(512) 471-1835 (fax)

July, 1998
This paper was presented at the annual convention of the Internet Society in July, 1998, in Geneva, Switzerland.

The modern concept of "national security" and the electronic digital computer are roughly the same age, both products of World War II. ENIAC, the world's first digital electronic computer, went into service at the University of Pennsylvania in 1946. The U.S. government's Central Intelligence Agency was launched a year later, authorized by the National Security Act of 1947; the National Security Agency followed in 1952.

Until relatively recently, national security and computers enjoyed a symbiotic relationship, too. Until the mid-1960s, perhaps even later, the chief U.S. government agencies responsible for national security were also the chief catalysts and funders for computer research, and also the largest customers of the computer industry. Indeed, the appearance of the digital computer even shaped the strategy of national security in the United States, as more and more national security planning became dependent on computer-based models using techniques of systems analysis and operations research. One might even argue that this symbiotic relationship between computers and national security is the primary bearer and symbol of U.S. power in the latter half of the twentieth century, even more so than nuclear weapons.

Computer technology is still important to national security, perhaps of paramount importance. Without computers, modern arsenals and "battle management" and communications would be impossible. The future appears to belong to so-called "smart" weapons, complex systems of command and control, telecommunications, satellites, electronic surveillance, and split-second information processing. The end of the Cold War has appeared to speed up the process of integrating advanced computers into weapons and command systems, rather than slow it down. The United States' overwhelming superiority in information technologies is the key to its superpower status for the foreseeable future.

But a new phenomenon is the threat to national security posed by networked computers, particularly through the Internet. This is accompanied by more than a small amount of irony, as the Internet was, for decades, a project of the U.S. Department of Defense. For a long time, during the period when the Internet was used almost exclusively by scientists, engineers, academics, and a handful of military personnel, the Internet was viewed by experts mainly as a benign and interesting research project, one with modest and limited application to national security objectives. But in the 1990s, and especially in the past two to three years, the Internet has increasingly been regarded by national security officials as a new playing field for international conflict, a new medium in which national security will take on new forms, and one in which the U.S. government agencies responsible for national security have a growing stake. High officials of the CIA, the National Security Agency, the FBI, the White House, and other, less well-known agencies now believe that the Internet is a "critical national asset" that requires their attention and protection. This may signal a new era in the development of the Internet, equal in importance to its commercial potential. In fact, the commercial use of the Internet may be influenced by national security controversies as much as by consumer response to new Internet applications.

This paper will review this controversy, looking first at the history of the Internet's relationship to national security, then providing an overview of the new landscape now that the Internet is increasingly embedded in "critical national infrastructure." The concept of "infowar," or "cyberwar," will be described, along with the attendant difficulties of assessing computer-based threats to national assets. Finally, the paper will offer some thoughts on what this new phenomenon might mean for the future development of the Internet, what strategies policymakers and technology experts should consider, and what dangers lie ahead for democracy and public policy.
The Internet and the Military in Historical Context

As is common knowledge by now, the Internet was first launched as a research project funded and managed by the U.S. Department of Defense Advanced Research Projects Agency (ARPA) in the late 1960s. In 1983, the Defense Communications Agency split the network into two parts, ARPANET and MILNET, the former for the research community and the latter for nonclassified military communications. The expanding research network came to be known as the Internet, and management of its civilian backbone was eventually turned over to the National Science Foundation. It was also in 1983 that the network adopted TCP/IP, perhaps the most important technical decision in the history of the Internet to date, allowing a vast expansion of the Internet that continues at an amazing rate of growth today.

There is a persistent myth surrounding the history of the Internet that it was designed to "survive a nuclear attack," and that this was the chief research interest of the Internet's Pentagon sponsors. As described in the definitive history of the Internet, Where Wizards Stay Up Late, by Katie Hafner and Matt Lyon, the story that lies behind this myth is somewhat complicated [Hafner and Lyon, 1996].

Paul Baran, a RAND Corporation researcher who joined that Air Force-sponsored think tank in Santa Monica, California, in 1959, "developed an interest in the survivability of communications systems under nuclear attack," write Hafner and Lyon. "He was motivated primarily by the hovering tensions of the cold war, not the engineering challenges involved. . . . Baran knew, as did all who understood nuclear weapons and communications technology, that the early command and control systems for missile launch were dangerously fragile" [54]. At this time, during the late 1950s and early 1960s, the RAND Corporation was the primary source of strategic thinking for U.S. nuclear policy, and the institution was already heavily dependent on computer technology, producing many of the earliest computer models of nuclear war.

RAND researchers were working on sustainable communications systems before Baran joined their ranks, without much success. It was Baran's theoretical work on distributed networked systems that pointed toward a solution. Baran came up with three theoretical innovations that became fundamental to the development of the Internet: a distributed network, network redundancy, and message disaggregation. This was a radical departure from the then universal model of communications based on centralized switching and open, direct circuits.
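The essence of message disaggregation can be illustrated with a toy sketch (my own illustration, not period code; the function names `disaggregate` and `reassemble` are invented for the example): a message is broken into small numbered blocks that can travel independently over a redundant, distributed network, and the destination rebuilds the message no matter what order the blocks arrive in.

```python
import random

def disaggregate(message: bytes, block_size: int = 8):
    """Split a message into (offset, block) pairs -- small independent blocks."""
    return [(i, message[i:i + block_size])
            for i in range(0, len(message), block_size)]

def reassemble(blocks):
    """Rebuild the original message regardless of arrival order."""
    return b"".join(block for _, block in sorted(blocks))

msg = b"survivable distributed communications"
packets = disaggregate(msg)
random.shuffle(packets)   # blocks may arrive via different redundant routes
print(reassemble(packets) == msg)   # prints True
```

Because each block carries its own position, no single path through the network is indispensable, which is precisely what made the scheme survivable.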

Baran's work was understood by only a handful of communications experts in the United States, and it was poorly received by the people in charge of improving defense communications, most of whom came from careers rooted in the more conventional model. He halted his work in 1964, convinced that the agencies responsible for military communications would botch the job even if they adopted his ideas. "So I told my friends in the Pentagon to abort this entire program -- because they wouldn't get it right," he told Hafner and Lyon [64]. Instead, he decided to wait for the right moment, with some different kind of organization.

His opportunity emerged a few years later, when Larry Roberts, one of the ARPA officials in charge of investigating computer networks in the late 1960s, discovered Baran's RAND papers. However, note Hafner and Lyon, "Nuclear war scenarios, and command and control issues, weren't high on Roberts' agenda" [77]. Roberts was intrigued by Baran's theoretical ideas of a distributed network from a purely research point of view. Roberts was also interested in a network that would tie together several of ARPA's chief research sites, universities and other institutions conducting experiments funded by the agency. It was Roberts who laid the first foundations of the Internet, relying on contributions from many different sources, including Baran, who became a consultant to the project.

Thus, while Baran's work was motivated by the goal of building a communications network that could survive a nuclear war, this motivation was only a small part of the flow of ideas that built the technical foundations of the Internet.

Even more important is the fact that the Internet was never linked to any critical military application or system. The Internet never played a role in controlling nuclear weapons, for example. The communications network that connected U.S. nuclear facilities, such as between the North American Air Defense Command in Cheyenne Mountain, Colorado -- the hub of the country's "early warning" system -- and the launch control headquarters of the Strategic Air Command in Omaha, Nebraska, was deliberately isolated from the Internet. The scenario portrayed in the popular movie "War Games," in which a teenage computer whiz taps into the nation's nuclear arsenal from his home computer, was never possible in real life. The Defense Department built its own global communications network, the Worldwide Military Command and Control System (WWMCCS, pronounced "Wimex"), which shared little with the Internet and was not connected to it; indeed, WWMCCS was notoriously unreliable and was eventually abandoned.

For a variety of reasons, the development of the Internet, even when it was funded by the Pentagon, scarcely attracted the attention of military planners or national security officials. In the 1960s and 1970s, ARPA was an agency nearly unto itself, run primarily by and for academic researchers who were distant from military culture. ARPA's character began to change in the 1980s, but in the early days of the Internet, the system was viewed almost universally as a research program, not as a precursor to a communications network tied to national security. In fact, it was this research character that contributed to the ease with which the Internet was absorbed by the civilian sector and now by commercial enterprises. The Internet was not burdened with security classifications, black budgets, or secret technical specifications. And, ironically enough, it was this very openness of the Internet's development that reduced its importance in the eyes of career military officers and high national security officials, who were conditioned to believe that anything significant in their fields must be classified and secret.

In short, while the Internet and the concept of "national security" share common roots in history, they developed along separate and divergent paths. This makes it all the more interesting that these paths are now converging again, but in a way that makes the Internet problematic and even threatening to national security.
The New Intersection of National Security and the Internet

In September of 1997, the President's Commission on Critical Infrastructure Protection released a preliminary report calling for a vast increase in funding to protect eight key elements of U.S. infrastructure: electric power distribution, telecommunications, banking and finance, water, transportation, oil and gas storage and transportation, emergency services, and government services.

"These are the life support systems of the nation," said the Commission's chairman, retired Air Force General Robert T. Marsh. "They're vital, not only for day-to-day discourse, they're vital to national security. They're vital to our economic competitiveness world wide, they're vital to our very way of life."

"The Internet provides an access point into all these infrastructures," said Marsh. Commission member John T. Davis, representing the National Security Agency, said the government should develop a secure "Next Generation Internet" for official use.

The commission recommended doubling the current federal R&D budget of $250 million for protecting these systems, with increases of $100 million each year after 1999 to $1 billion per year by 2004.

In February, 1998, U.S. Attorney General Janet Reno unveiled a $64 million plan to build a new "command center" to fight "cyber attacks" against U.S. computer systems. This new "command center" is called the National Infrastructure Protection Center, a Justice Department response to the report from the President's Commission on Critical Infrastructure Protection [Glave, 1998].

These are just some of the more recent and visible results of concern over "cyber war," "infowar," "cyberterrorism," and other, related threats now perceived by law enforcement personnel and national security officials as new and important terrain. And these authorities commonly view the Internet as the "highway" upon which these threats will be borne.

The character of the Internet has been dramatically transformed over the past five years, as everyone knows. What began as a communications network for scientists, academics, engineers, and specialists is now a vast, global communications medium that rivals the public telephone network, television broadcasting, and even radio. The Internet-using population, worldwide, is now over 60 million people, and Matrix Internet Data Services, an Internet demographic consulting company in Austin, Texas, has predicted that, by the year 2002, there could be more than 700 million people using the network. Senior executives in large telecommunications companies, an industry that is now the largest in the world, routinely report that data traffic will soon surpass voice traffic, and that packet-switched networks, like the Internet, may eventually supersede the circuit-switched telephone network worldwide. The Internet "model," of packet-switching, distributed communication, and unmanned digital nodes, appears to be the bedrock for nearly all future communications.
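The contrast between the two models can be made concrete with a minimal sketch using Python's standard socket module (the loopback address and toy payloads are illustrative only): unlike a circuit-switched call, no end-to-end path is reserved in advance; each datagram carries its own addressing and is delivered independently.

```python
import socket

# Receiver: bind a UDP socket on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
receiver.settimeout(5.0)
addr = receiver.getsockname()

# Sender: no connection setup, no reserved circuit -- just send datagrams.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for chunk in [b"each", b"datagram", b"stands", b"alone"]:
    sender.sendto(chunk, addr)

received = [receiver.recvfrom(64)[0] for _ in range(4)]
print(sorted(received))

sender.close()
receiver.close()
```

On the loopback interface the four datagrams arrive reliably, but nothing in the protocol guarantees delivery or ordering; that "best effort" property is exactly what distinguishes packet switching from a dedicated circuit.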

Of particular importance to those charged with national security is the fact that increasing levels of international commerce are conducted over the Internet, and also increasing levels of government service. International funds transfers, now surpassing a trillion dollars a day, are carried by computer networks. Power grids, banks, government databases, large corporate enterprises, news networks, transportation facilities, and many other essential components of civilized life are increasingly "on the net," delivering services or conducting critical communications over the Internet.

Disruption of such services or communications could, someday it is feared, resemble or approach in severity an actual physical attack such as a military strike or a major terrorist incident. At present, the potential for a computer attack that would produce a major national calamity is controversial. Most computer attacks documented so far have been merely intrusions or annoyances. In many cases, vulnerability to computer attack is shrouded in secrecy or proprietary prudence. In other cases, vulnerability may be exaggerated to enhance the status and commercial value of computer security firms or to improve the negotiating position of government agencies that are seeking more funding or clout.

What is important now, however, is that officials of the U.S. government and experts in the private sector are arguing, persistently, that the growth of the Internet, and its expanding capabilities, combined with the fact that it is increasingly embedded in "critical national infrastructures," makes protection of computers on the Internet a matter of national security. In other words, regardless of the current threat, the future indicates growing vulnerability and thus a growing urgency for protection and vigilance.

Jamie Gorelick, U.S. Deputy Attorney General, told the host of TV's "Nightline" news talk show, Forrest Sawyer, in December of 1997, "My own assessment, Forrest, is that we have a couple of years before there is a really serious threat. We have seen indications in criminal activity, in the plans of foreign nations, in the plans of terrorist groups that lead us to believe that we should be about the process of hardening our computers against attack" [Nightline, 1997].

In yet another irony, what may contribute to the threat of computer attack in the U.S. is the country's unrivaled military superiority. General Marsh said on the same "Nightline" program, "Nobody around the world today would attempt to defeat us on the battlefield. Instead, they will be seeking means to find vulnerabilities in our systems that they can exploit and do serious harm without having to confront us in the conventional armed way of the past."

If the Internet does prove to be a viable means for nations to attack one another, nations capable of such threats will be able to afford a credible threatening status far more cheaply than if they needed vast arsenals of missiles and tanks. A relatively modest investment in the skills of a handful of network trespassers and hackers would become a substitute for immense investments in weaponry. As such, the sources of credible threats could proliferate.

This "new terrain" of computer warfare or cyberterrorism poses some serious and unfamiliar challenges to national security authorities.

First, all forms of warfare in the past have involved a threat to geographically specific assets by equally geographically specific threats -- such as massed armies or ballistic missiles. One of the chief characteristics of computer attacks is their ambiguity in nearly every dimension: it's difficult to ascertain where the attack is coming from, who is behind it, what the motive is, whether it is the work of a determined enemy or merely a curious trespasser, etc. Penetrations that come from trespassers inside the U.S. may not be benign or "domestic." Before the war in the Persian Gulf, for example, there was a report of a U.S. hacker breaking into Pentagon computers and then offering to sell the information to Saddam Hussein (who didn't buy it because he didn't believe it was genuine) [Nightline, 1997].

It's not even clear what the term "cyberwar" describes. If it means an organized and coordinated attack on computer systems by another state government, that may be too high a threshold; it's unlikely we'll see an unequivocal example of this soon, except perhaps by the U.S. attacking an enemy's computers. "Cyberterrorism" may be more likely, but, as in the distinction between war and terrorism by other means, this prospect might call for solutions different than protection from "cyberwar."

If a computer attack were to occur in the midst of some other crisis of national security, says Roger Molander, an expert now at the RAND Corporation, the very ambiguity of the attack may complicate decision-making tremendously. This is a murky world for national security officials.

Second, the United States has historically avoided major military attacks because of its relative isolation from belligerents, a kind of "continental defense." Most of the country's history in military strategy has been to keep conflict as far from the U.S. mainland as possible. But the Internet poses a new dilemma: its global character, and the way it works, allows easy access to almost any networked computer inside the United States, including those running critical systems, from nearly anywhere else in the world. For a determined adversary, there are now millions of entry points to the U.S. heartland, none of which requires the logistical effort that confronted adversaries in the past.

Third, because computer attacks can come from both inside and outside the U.S., and because the origins of such attacks are difficult to identify promptly, jurisdictional controversies and overlap among law enforcement and national security agencies are already rampant. The U.S. has had a long tradition, for fifty years at least, of separating the jurisdictions of agencies responsible for domestic threats from those responsible for foreign threats. If the Internet is factored into their responsibilities, these jurisdictional boundaries are rendered exceedingly vague and arbitrary, leading to confusion and conflicting interests.

Finally, the biggest issue of all: for the most part, in the past, the U.S. military and its national security allies, such as intelligence agencies, have been charged with protecting military assets first, and using these as offensive weapons or deterrents against enemies. In a "cyberwar" scenario, however, conventional military assets will be useless, and there may be no appropriate offensive weapons available. The military and law enforcement and national security agencies are increasingly faced with protecting private assets, such as corporate computer systems, or other information systems far outside the jurisdiction of the federal government. Given the nature of U.S. democracy, the federal government's powers for forcing protection schemes on private companies or other governmental entities are limited. And, as demonstrated by the ongoing debate over encryption restrictions, the government may have interests quite different from those of private companies, especially those that compete in the global marketplace. Indeed, given the evolving nature of global enterprise, it's commonly unclear where a U.S. company stops and a foreign counterpart or partner begins. The Internet does tend to erase national borders, as does global commerce. The U.S. defense establishment has traditionally been able to circumscribe what constitutes a "national asset," but this is getting more and more difficult to do.

For all these reasons, many of which have emerged only in the past half decade, the Internet is a new factor in national security assessment. And, given the significant influence of national security agencies in setting national political agendas, and in shaping technological trends, this new friction between the Internet and national security is likely to affect the way the Internet develops for the foreseeable future. At stake is whether the Internet can retain its democratic, global, and egalitarian features, or whether it will be absorbed into older patterns of national competition for power and status.
Netwar and Threats to Nations

How big a threat to national security is the Internet?

While the question is obvious, the answer, unfortunately, is not. While advancing technology has made assessing all threats to national security increasingly difficult, assessments of the threat of "cyberwar" or "cyberterrorism" via the Internet may be the most difficult of all, for a variety of reasons.

First, of course, the Internet is constantly changing. Indeed, it may be the most rapidly evolving entity in human history. It is difficult, if not impossible, to fix a "moment" on the Internet to make an assessment that would last more than a few weeks, at most. This is very different from assessing other kinds of vulnerabilities or threats, which change or accumulate much more slowly. During the Cold War, U.S. intelligence sources had a reasonably good idea of the capabilities of the Soviet Union, at least in terms of the raw numbers of its military assets. It's difficult to imagine how the same sources could "count" the threat of Russian hackers, for example, some of whom have penetrated deep into the computers of U.S.-based banks, such as Citicorp in New York. The Internet has also extended and deepened its reach so broadly over the past few years that it's almost certainly impossible for anyone, or any group of people, to "know" everything it touches at any given time. Not only is the system vast, involving tens of millions of computers, but it is characterized by rapid change, contingency, complexity, innovation, and constant "churn," or the birth and death of new features almost overnight. This is, in short, a risk assessment team's worst nightmare.

Next, even if one were able to narrow one's focus to "critical" systems connected to the Internet, there are no public or even readily available data on how vulnerable such systems might be. Defense computers are buried under layers of secrecy and classification, and private companies are not likely to volunteer such information. We typically only hear about computer vulnerabilities after a break-in, and even then we learn little about the incident, and sometimes the descriptions of break-ins are not accurate, either. A New Jersey State Trooper once told the press that a teenage hacker he had arrested was altering the orbital paths of U.S. defense satellites, which was not only untrue but absurd.

People who reveal computer break-ins often have ancillary reasons for such revelations. Responding to a recent rash of reported break-ins in Pentagon computers, Peter Neumann, one of the world's leading computer security experts, told HotWired News, "Perhaps this is a con game. . . .You put out a system with miserable protection and hope that someone breaks it," he said. "Then you can ask for millions of dollars more to perform further palliative protections, rather than getting to the core of the problem -- significantly ratcheting up the security of the infrastructure" [Glave, 1998].

When officials like General Robert Marsh tell the press that the Internet provides access to many, if not all, of the critical infrastructure systems of the United States, it's difficult to assess this claim, except to suspect that he's right. The nature of the problem is one in which it's unlikely that we'll see detailed government reports on the levels and sources of current risks. Accumulating evidence based on anecdotal reports is likely to be the only information available.

Because of the paucity of hard data, and the difficulty of assessing what computer systems are vulnerable because of being connected to the Internet, it is ipso facto difficult to assess whether there is in fact an Internet-related threat that compares to other kinds of threats. Kevin Poulsen, who appeared on the "Nightline" TV program mentioned earlier, and who was described on that program as a "former computer hacker," said, "I've heard so much talk about the coming info war. I'll be more worried when somebody can actually show me a single case of a hacker doing something that malicious. So far they haven't."

He went on, "The most heinous, coordinated, planned-out, conspiratorial, hacker attack imaginable wouldn't come close to a single bombing of a building. Nobody's ever going to die from anything that happens electronically. The government has held me up as an example of a hacker that had reached the very top. If I wasn't anywhere near having that kind of capability, then what reason is there to think that anybody is?" [Nightline, 1997]

Most examples of computer break-ins have been annoyances and cause for alarm, not serious threats to critical systems. Gene Spafford, another computer security expert, likened hacker break-ins to "being pecked to death by ducks. No one of these instances is really serious. ... But if you've got 10,000 people doing that, it's a huge problem" [Glave].

There have been some worrisome computer attacks, such as the attempt by German hackers to secure classified information and sell it to the Soviets, chronicled in Clifford Stoll's book The Cuckoo's Egg [Stoll, 1989]. There is some evidence of an attempt to disable Croatian computers during Croatia's war with Serbia, and Croatian security experts suspected Serbian programmers, although there was no definitive proof [Pale, 1998]. The 1988 Morris "worm" that brought down thousands of Internet computers, and the 1998 virus that affected Windows computers on the net, highlighted the vulnerability of the system as a whole. The Pentagon has admitted that its computers have been penetrated hundreds of thousands of times. Federal officials have hinted, in press briefings, that they have classified information about far more serious hacking attempts, successes, or penetration capabilities in other countries. And, of course, the Pentagon is busy building an offensive "info-war" capability of its own [Aviation Week and Space Technology, 1998].

But the overall problem facing national security authorities is that this threat of Internet-based terrorism or attack, however grave it might be, is to date not at all tangible to the average citizen, nor is it likely to become more so in the near future unless a catastrophe occurs for "demo" purposes. Their current strategy is to request vast sums of money to prevent something from occurring -- not unlike the Year 2000 problem, which the public also barely understands, if at all.

This is, again, far different from the world of the recent past, in which the specter of Hiroshima was lodged quite vividly in the minds of most U.S. citizens. It's considerably more difficult to persuade the public that there is a large potential for threat to the nation via the Internet, when the entire country is on a big campaign to get everyone online, especially schoolchildren, and there is no obvious way to quantify or even nail down the full nature of the threat. Once again, this is "new terrain" for national security advisors.

As everyone knows, "national security" is largely a game of perceptions, a combination of both real and imagined threats and assets. Even during the Cold War, it was controversial how big a threat the Soviet Union was to the U.S.; this controversy continues even today, nearly ten years after the end of that conflict. So it's not surprising that it is controversial whether there is a national security threat posed by the Internet or whether this is a paranoid frame of mind; or, cynically, whether this is related primarily to institutions hoping to increase their budgets and their longevity. This controversy is fueled by the sparse information available about the true level of risk at hand, especially with respect to "critical" systems. Because we can expect that this dearth of information will continue, the controversy about the nature of the threat will no doubt extend far into the future as well. In the digital era, the very nature of the technology paradoxically makes perceptions more important, because tangible facts are harder to come by.

What do we know? When computer security experts are asked whether Internet-networked computers are secure, their answers are almost always along the lines of "not enough," or "not yet." One might discount such answers as self-interest and still conclude that more needs to be done about computer security. The explanation given by security experts about why we don't do more is that the public has not yet demanded more security for computers, and, without significant public demand, companies are not providing it. It's also expensive and sometimes troublesome to secure a computer and to keep it secure, imposing discipline on users and system administrators who would rather not be disciplined. It is common to hear of people learning about computer security the hard way, in a "trial by fire," absorbing a lesson after something nasty has occurred. Obviously if there is a real threat to national security via the Internet, such lessons are not an adequate substitute for prudent policymaking. It is the job of national security officials to prevent catastrophes, not to say "I told you so."

The vexing issue is how we might feel safer without seriously compromising the best features of the Internet, trampling on democracy, or turning into a surveillance society. These are not new concerns; they were not introduced into public debate by the appearance of the Internet. But they have been made rather dramatically more complicated by the first truly significant supra-national sphere of discourse and politics. They are further complicated by the dual role of national security agencies, which is both to protect national assets and to penetrate the defenses of enemies. It is this dual role, embedded in the traditions and histories of national security agencies, that lies at the heart of the intense debates about a possible solution to computer-based threats: widespread digital encryption.
The Internet and National Security Agencies

By now, nearly every federal agency within the U.S. government has some department or division responsible for computer security. But the preeminent agencies of the field are still the agencies charged with national security, such as the National Security Agency, the Central Intelligence Agency, the Federal Bureau of Investigation, and, to a lesser extent, the Justice Department and the Secret Service within the Treasury Department. It is important to acknowledge that it is the character of these agencies, their histories and their other responsibilities that give the subject of computer security in the United States a particular kind of atmosphere, largely that of the military and national security community itself. Thus, the "command and control" model of computer security has tended to dominate the U.S. government's approach.

In 1987, the U.S. Congress passed the Computer Security Act, which apportioned responsibilities for computer security to the National Institute of Standards and Technology (NIST), of the U.S. Department of Commerce, for non-classified computer systems, and to the National Security Agency (NSA) for classified systems. This law was the result of a certain level of alarm, on the part of Congress and civil libertarians, during the Reagan administration because of a pair of White House national security directives that pointed toward NSA control over all computers in the U.S. The Computer Security Act was an attempt to mark a boundary for civilian control of unclassified information systems.

However, since the Computer Security Act was passed, the U.S. National Security Agency has worked diligently to regain and secure its supremacy over computer security policy. A 1989 "Memorandum of Understanding" between NSA and NIST shifted power back to NSA, and in 1994 President Clinton issued Presidential Decision Directive 29, establishing the Security Policy Board, which has since recommended that all computer security functions for the government be merged under NSA control [EPIC].

At the same time that NSA has attempted to impose its own standards on computer security in the U.S., the Justice Department's Federal Bureau of Investigation has tended to extend its responsibilities beyond domestic law enforcement to international crime, counter-terrorism, and counter-intelligence. While officials of the NSA are largely unknown to the public, FBI Director Louis Freeh is a common face in the news, often called upon to testify and make the government's case for control over encryption and computer security in the name of national security. The 1993 bombing of the World Trade Center in New York City, which was apparently connected to a foreign conspiracy in the Middle East, strengthened the FBI's role in monitoring international terrorism immeasurably. Freeh also points to international drug cartels, new foreign sources of organized crime, the international terrorist activities of so-called "rogue states," the frightening potential for the uncontrolled proliferation of small nuclear weapons, and other threats to make the case that the FBI now has a host of new targets.

These two phenomena of recent years have tended to blur the line between domestic security and national security, a blurring that has produced constitutional crises in the past. But Congress has been persuaded by the federal law enforcement and national security agencies. The Intelligence Authorization Act of 1997 states:

. . . elements of the Intelligence Community may, upon the request of a United States law enforcement agency, collect information outside the United States about individuals who are not United States persons. Such elements may collect such information notwithstanding that the law enforcement agency intends to use the information collected for purposes of a law enforcement investigation or counterintelligence investigation.

Whitfield Diffie, the co-inventor of public key encryption, and his co-author Susan Landau, in their 1998 book Privacy on the Line, comment: "This wording carefully steers clear of permitting the intelligence community to spy on Americans directly, but opens the way for unprecedented collaboration between the intelligence and law enforcement communities" [Diffie and Landau, 1998, 123].

The boundary-free character of the Internet will likely intensify the merging of conventional law enforcement activities inside the U.S. and the activities of national security agencies. The front line in the debate about this trend is encryption policy. In September of 1997, the U.S. Congress' House National Security Committee voted to strengthen controls on the export of digital encryption, reversing a trend toward relaxing such controls, one of the chief goals of the high tech industry. Committee members cited the warnings they received in "classified briefings" as the main reason for their vote.

Congressman Mike Oxley, a Republican of Ohio, told The New York Times, "I would find it difficult to believe that a member who heard the briefing could walk away not committed to addressing security issues. Frankly, I wish everyone interested in this issue could have heard for themselves the alarming briefing that members of our committee heard" [Wayner, 1997].

Constraints of space prevent a comprehensive or even adequate review of the encryption debate here. In summary, the U.S. government's position is that law enforcement and national security authorities need to retain the ability to intercept and interpret communications, including digital data, in order to fulfill their responsibilities to protect the United States from crime and foreign threats. As such, the federal government has proposed a series of arrangements that have all included the concept of "escrowed keys," meaning the ability of authorized officials to acquire an encryption key to unlock scrambled data. Opponents of this scheme, which is the traditional approach in military cryptography, argue that public key encryption, without escrowed keys, is the only safeguard for privacy and authenticated electronic commerce. These opponents also argue that criminals and foreign adversaries will have no incentive to use encryption with escrowed keys -- i.e., the "key escrow" approach will provide keys to the communications of people who obey the law, while others will have easy access to unbreakable encryption algorithms. The ready availability of public key encryption, argue government critics, means that "the horse is out of the barn" already. Moreover, they point out, the task of security in the computer age is for each person, or each computer administrator, to be responsible for computer security, because the task is too immense and complex for bureaucratic oversight. The argument has been framed as one in which the protective schemes are characterized as a choice between armies or locks, and each of these has its attendant interest group.
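The distinction at the center of this debate can be sketched concretely. The following toy Diffie-Hellman key agreement (the technique co-invented by Diffie, mentioned below; shown here with deliberately tiny, insecure numbers purely for illustration) shows how two parties can arrive at a shared secret that no third party ever holds -- which is exactly the property that "escrowed keys" proposals would remove.

```python
# Toy Diffie-Hellman key agreement -- illustrative only.
# Real systems use primes hundreds of digits long; these tiny
# parameters exist solely to make the arithmetic visible.

P = 23   # public prime modulus, agreed in the open
G = 5    # public generator, agreed in the open

# Each party picks a private number and publishes only G^x mod P.
alice_private = 6
bob_private = 15

alice_public = pow(G, alice_private, P)   # sent over the wire
bob_public = pow(G, bob_private, P)       # sent over the wire

# Each side combines the other's public value with its own secret.
alice_shared = pow(bob_public, alice_private, P)
bob_shared = pow(alice_public, bob_private, P)

# Both sides compute the same secret key; an eavesdropper who saw
# only the public values cannot (easily) recover it, and no escrow
# agent was ever given a copy.
assert alice_shared == bob_shared
print(alice_shared)  # prints 2
```

The point of the sketch is structural: nothing in the exchange produces a key copy for anyone but the two parties, so government access under an escrow regime must be engineered in as an additional, separate mechanism.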

While the Congress has been largely sympathetic to and supportive of the federal law enforcement and national security agencies, another front has opened up in the U.S. courts.

In early 1997, in a case heard before the United States District Court for the Northern District of California, Bernstein v. United States Department of State, U.S. District Court Judge Marilyn Hall Patel ruled that national security considerations cannot be used to censor cryptographic schemes on the Internet. Daniel Bernstein, a graduate student, created an encryption algorithm called "Snuffle." After four years of correspondence with the U.S. State Department, Bernstein learned that his source code and all other material about "Snuffle" except a research paper were in violation of export control laws and could not be posted to the Internet. Bernstein sued the State Department, claiming that this ban violated his First Amendment rights. Judge Patel agreed with Bernstein, and wrote a stinging rebuke of the government's position. Patel ruled that computer source code is protected as free speech by the First Amendment, and that the government's attempt to ban such speech amounted to "prior restraint," which is unconstitutional. Judge Patel specifically prevented the U.S. government from using claims of a breach of national security to impose prior restraint on the distribution of encryption source code [Cummins, 1997].

The U.S. government has appealed the Bernstein decision, and a final ruling is ultimately expected from the U.S. Supreme Court. If the Supreme Court upholds Judge Patel's ruling, this may close the encryption debate for the foreseeable future; after such a ruling, all encryption source code could be posted to the Internet without intervention by national security or law enforcement agencies. This would effectively kill the means by which such agencies now control the distribution of non-escrow encryption algorithms. On the other hand, if the Supreme Court overturns the Bernstein decision, this will reinforce the role of national security agencies in shaping the future of the Internet. Because of this, the Bernstein case is being watched very closely by both sides of the encryption debate.

The dispute over encryption has put into stark relief the dual nature of the national security mission: the responsibility of such agencies to protect national technological assets and also to retain the ability to intercept and interpret digital communications. In the era of the Internet, these two responsibilities are in conflict with each other, posing significant dilemmas for national security officials. On the one hand, these agencies are urging businesses, other government agencies, and individuals to protect their computer data from attack. On the other hand, these agencies seek to control how these people protect such data, in order to preserve law enforcement and national security access to it. Not surprisingly, both businesses and individuals are hesitant to implement encryption schemes that require turning over keys to people they don't know or whose motives they don't fully comprehend. Because of this hesitancy, the first mission of the security agencies, that of increasing protection for U.S. computer systems, is stymied.

General Marsh, the chairman of the President's Commission on Critical Infrastructure Protection, has told the press he hopes the dispute about encryption policy is resolved soon, because he believes, with justification, that this ongoing dispute is obstructing the implementation of greater security for computer systems. Members of the commission have hinted at their support for a loosening of encryption controls, but have also leaked the information, to The New York Times, that "they are 'under fairly strict orders' to fall in line with the FBI's push for key recovery" [Wayner, 1997].

Business executives are understandably reluctant to invest in security systems that may be superseded by other technologies or blocked by court rulings. The U.S. federal government's rapid transition from one policy recommendation to another -- such as from DES to the "Clipper chip" to key escrow to "key recovery" -- has not helped foster confidence in the business community. And of course, some influential business organizations, such as the Business Software Alliance, are allied with civil libertarians and other opponents of U.S. government policy on encryption standards.

Despite the hopes of General Marsh and others, at present the encryption debate in the United States is so polarized that compromise solutions are neither visible nor likely to emerge soon. This polarization has been exacerbated by some ideological and political trends in the U.S. Republican conservatives who now control the U.S. Congress are more sympathetic to law enforcement and national security arguments than was the Congress of just a few years ago. For example, for many years the U.S. House Subcommittee on Civil and Constitutional Rights was chaired by Rep. Don Edwards, a Democrat and a former FBI agent who was highly critical of any law enforcement encroachment on civil liberties. But Congressman Edwards has retired, and conservative Republicans have taken his place as committee chairmen.

Another new phenomenon is the emergence of a new breed of "cyberlibertarians" -- typically young, talented technologists who reject most of the assumptions of the national security establishment. Some of the more radical of these ideologues have argued that the Internet is the beginning of the end of the nation-state, let alone the end of the national security state. This perspective is not just an isolated intellectual discourse, either: the "cypherpunk" movement, for example, has complex and extensive ties to outlaw hackers, most of them young men, some of whom have adopted the intellectual framework of "cyberlibertarianism" as an ideological justification for criminal penetrations of government computers -- casting themselves as "Thomas Paines" of the digital revolution. Such "counterculture" attitudes are widely shared by educated young people all over the world, perhaps a natural attitude of youth rebelling against authority. But this attitude is combined with the fact that these same people are the most technically adept in the world, and a number of them are affluent or even wealthy because of this skill. Once again, national security officials are confronting a new and alien environment, dramatically different from eras past, when business leaders and skilled technologists were typically undisturbed by the alleged imperatives of national security. Now, when confronting young leaders of the digital revolution, national security authorities are in hostile territory.

The end of the Cold War has given new impetus to calls for a dismantling of national security institutions, and the Internet, with its idealistic potential for global communications between planetary citizens, has come along at just the right time to fuel such ideas. Widespread use of phrases like "the digital revolution" and "Third Wave civilization" (lifted from the work of the Tofflers, and adopted by U.S. Speaker of the House Newt Gingrich) reinforces the popular notion that the "information age" entails an overturning of old regimes, including, perhaps, the centuries-old competition between nation-states.

For these reasons, in addition to the prosaic clash of interests between the government and pragmatic corporations that are part of a global marketplace, the encryption debate is the leading edge of a much larger philosophical debate about the role of the state in the information age. It is unfair, of course, to characterize national security authorities as "dinosaurs" due for extinction, the way they are characterized by some of the "cyberlibertarians" -- even a small dose of the daily news is enough to convince most people that there are in fact real threats that continue to justify the need for some national security protections. On the other hand, there is no compelling reason to assume that the missions and structures and traditions and size of national security institutions created during the decades of the Cold War need persist into eternity. Many critics of national security agencies argue reasonably and persuasively that the Cold War should be regarded as an anomaly in U.S. history, a struggle that imposed sacrifices in democratic values that need not be sustained or repeated in the absence of a threat equivalent to the Soviet Union. These critics have quite rightly put on the table for debate the question of whether the "national security state" is an essential or necessary political form for nations in the 21st century, particularly in an era in which the Internet is challenging many assumptions and norms inherited from a pre-Internet period, that of World War II and the decades of the Cold War.

It may seem grandiose to suggest that complex technical debates such as those surrounding digital encryption or the prospect of "cyberwar" are the most important political debates of our time. But this is in fact the case. This is not actually all that surprising, when one considers the catalyst of such debates: the Internet itself, one of the most remarkable, promising, and at the same time vexing creations of human enterprise in the history of the world.
What Should We Do?

National security authorities, like everyone else, are confronted with a world far different than the familiar one of just a few years ago -- the tripartite combination of the end of the Cold War, the new intensity in global commerce and competition, and the information revolution has served to upend almost all previous cognitive models about how the world works. National security bureaucracies, of course, are notoriously resistant to change. But they also have many arguments on their side, as new threats have appeared simultaneously with new ways of communicating and doing business.

National security experts are facing several frustrating dilemmas. First is the need to secure U.S. computer systems while retaining some ability to intercept and interpret digital communications. As many people have pointed out, this dual mission may not only be irreconcilable, but the effort may produce some absurdities and unacceptable impositions on people using computers or developing information technologies, particularly software. Peter Wayner, a reporter for The New York Times who wrote about the Security and Freedom Through Encryption (SAFE) bill considered last year in the U.S. Congress, wrote:

The bill would force developers of new software to seek approval for their products from the United States government even if the products did not explicitly include encryption features. Such approval would be the only way to escape prosecution, [a Congressional staff member] said. While admitting that this language would add a six- to nine-month delay in releasing new products, the staff member asserted that the computer industry would simply have to build this time into product development cycles [Wayner, 1997].

Given the life-cycles of computer software today, and the prospect of international competition unburdened by such delays, the imposition of a nine-month delay in releasing software products appears fatally ill-advised. The idea that computer software companies might need to have their products reviewed by the government, like prescription drugs, is alarming and bizarre; the task would likely prove impossible.

Thus the implications of several initiatives by the national security community are so onerous, and so out of touch with the imperatives of the digital economy, that their chances of becoming law in the United States appear to be slim. Lawmakers are loath to alienate national security authorities, but in this case they may have no choice -- the economic health of the United States could be seriously damaged by several of the proposals now on the table.

Moreover, it appears inevitable that uncontrolled public key encryption algorithms will proliferate, despite the resistance of the U.S. government. "Key escrow" systems such as those advocated by U.S. government officials are too vulnerable to compromise, and once keys are released into circulation, all encrypted data is compromised. Public key encryption schemes are already available in a wide variety of products and on the Internet, and there doesn't appear to be much that the government can do about these programs. If the Bernstein decision is upheld by the Supreme Court (and this Court has been consistently vigilant about challenges to the First Amendment), it will be unconstitutional to block the distribution of source code, rendering all the efforts at government control moot. It is also significant, of course, that foreign governments do not share the U.S. government position on encryption, which creates a vast "safe harbor" for alternative encryption schemes that, because of the way the Internet works, would be merely a "click away."
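The structural objection to escrow -- that one breach compromises everyone -- can be sketched in a few lines. This is a deliberately simplified illustration, not any proposed escrow design: a trivially weak XOR cipher stands in for real encryption, and all names are hypothetical.

```python
# Toy sketch of the key-escrow objection: a single repository holds
# a copy of every user's key, so one breach of the escrow agent
# exposes all users' traffic at once. XOR stands in for a real cipher.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric toy cipher: the same operation encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

escrow_database = {}  # the mandated central copy of every key

def register_user(name: str) -> bytes:
    key = secrets.token_bytes(16)
    escrow_database[name] = key   # the escrow requirement
    return key

alice_key = register_user("alice")
ciphertext = xor_cipher(b"meet at noon", alice_key)

# A single compromise of the escrow agent unlocks everyone's data,
# past and future -- no per-user attack is needed.
stolen = dict(escrow_database)
print(xor_cipher(ciphertext, stolen["alice"]))  # prints b'meet at noon'
```

The contrast with the earlier point about public key systems is the point of the sketch: without escrow, there is no single database whose theft is catastrophic for every user at once.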

The arguments of proponents of public key encryption, such as Diffie and Landau, are generally persuasive. They maintain that communications intercept is a "low-value" activity of law enforcement and national security agencies, far outweighed by the value of more secure computer systems throughout society. They point out that surveillance of foreign communications depends on foreign targets of surveillance agreeing to "escrow" keys with the U.S. government, a rather improbable scenario. Diffie and Landau ask what the consequences would be if policymakers were to "make a mistake" by leaving encryption unregulated.
If cryptography comes to present such a problem that there is popular consensus for regulating it, regulation will be just as possible in a decade as it is today. The laws will change, strong cryptography will not be made part of new products, and the ready availability that government claims to fear will decline again quickly. If, on the other hand, we set the precedent of building government surveillance capabilities into our security equipment, we risk the very survival of democracy [Diffie and Landau, 244].

They go on to say, ". . . government efforts to keep honest citizens from using cryptography to protect privacy continue. Such efforts are unlikely to achieve what governments claim to want, but very likely to cause serious damage to both business and democracy in the process" [245].

Widespread use of public key encryption appears to be the only viable and cost-effective means for truly securing computer systems essential to the functioning of modern society. The task then becomes one of adjusting the roles, activities, and "mind-sets" of national security officials to this fact. This needs to be a collaborative process, as opposed to the current one, characterized by polarization, hostility, suspicion, and even the sense that each side wishes the other would simply die off and fade into obscurity. Collaborative work between citizens and national security agencies in the United States is unprecedented, too. There is a long history within such agencies that can be summed up in the phrase, "If only you knew what we know, you'd agree with us." But then, of course, the knowledge referred to is out of bounds, unavailable, secret, incapable of being assessed except by those deemed trustworthy enough to possess such knowledge, which typically means people who already agree with the assumptions of the intelligence and national security communities. This has to change, somehow. The end of the Cold War opens up a historic new opportunity for change.

The U.S. Congress should take the lead. Members of Congress should understand the stakes -- in the case of the intersection of the Internet and national security, the perspective of national security agencies is only one side of the coin, perhaps vastly outweighed by the potential damage to business and democracy. Unfortunately, Congress has a history of being cowed and frightened by national security briefings. The Congress needs another leader like Don Edwards, who was not intimidated by officials of the FBI or the CIA. Whether this occurs remains to be seen; high technology executives need to understand the need for such a leader, even if such a person doesn't agree with other features of the high tech sector's public policy agenda.

The public is not likely to be a major player in this debate. The subjects of national security and technology have always been reserved for elites, and, while this situation may be regrettable for democracy, it should not be expected to change soon. Consequently, there needs to be intense work on the part of the business community to persuade the White House and the Congress that the world is now a different place, that the changes recommended by government authorities are dangerous and unworkable, and that business stands ready to cooperate with national security officials in finding new solutions. To a certain extent this is already going on, but the dialogue with national security officials could be improved significantly if their Cold War rhetoric were attenuated or even abandoned.

President Clinton could be the leader this issue needs, but unfortunately he doesn't appear up to the task. He is a President more than usually shaped by the demands of law enforcement and national security authorities -- the Clinton administration has been one of the worst in recent memory for civil liberties in the U.S. The President is also famously averse to friction and confrontation, despite the omnipresence of these qualities during his service. He's probably not going to do anything to alienate either the national security community, upon whose approval much of his stature as a "law and order" President depends, or the high tech community, many of whom helped get him elected. The President's dithering on the issue of encryption may thus set the terms of the debate -- a kind of policy fibrillation -- until someone else holds his office or some other leader finds the means for a real breakthrough.

In the meantime, proponents of both sides will continue to find opportunities for strategic advantage. Professional societies like the Internet Society, the Association for Computing Machinery, and IEEE should probably increase their efforts to find a workable solution. They might also consider undertaking efforts to educate the public about what's at stake, such as through sponsored television programs or a national campaign of public debate and community meetings. The high tech industry has every reason to help fund such public outreach efforts.

Finally, professional societies need to do more to educate their members that the most fundamental interests of the computing profession are not primarily about technical issues, but are tied up in public policy controversies. The computer industry has demonstrated time and time again that technical obstacles can be overcome, even with astonishing, disorienting rapidity. What hangs up progress in the information age are social and political controversies that have fewer, if any, black and white answers. People in the computer industry need to become far more sophisticated about policy issues, political participation, and how technology affects basic values in society. For too long, public policy has been considered a field separate from technology, and of only marginal importance. This may be changing, but, if so, it is changing too slowly to keep pace with the issues confronting us now. Other technical and scientific fields, particularly physics, do a much better job of integrating public policy work into their professional activities. There are lessons in the experiences of physicists that might help computer professionals, especially because of the tight coupling of the work of physicists with the field of national security.

The global extension of the Internet -- a natural and predictable development of computer networking -- was destined to clash with traditional principles of national security. Admitting this with the benefit of hindsight, of course, does not relieve the pressures of this clash that exist today, many of which are so vexing as to seem nearly insoluble. Two immense forces of great momentum are at odds: technological progress, which takes a million different forms, emerging from countless points around the world; and national security, the gravest and most fundamental public responsibility of the world's richest and most powerful nation. How the frictions between these forces will be resolved is not yet clear. Neither is likely to go away or even fade in strength.

What seems to be required is a new concept of national security that can accommodate the Internet. This does not have to be a radically libertarian utopia, one in which the nation-state itself withers and dies, as seems to be the hope of some young cyber-activists. Nor does it need to be an accommodation in which national security and police surveillance and enforcement are the rulers of the Internet. Any new accommodation would probably need to take uncontrolled, public key, so-called "strong" encryption for granted, as this seems to be inevitable. National security authorities were once faced with another technological revolution of comparable significance -- intercontinental ballistic missiles with nuclear warheads -- and security policy managed to adjust, for better or worse, to this new technology. The same kind of adjustment will now be required. The "national security state" that was a product of the Cold War may no longer be recognizable in ten or twenty years, but neither will any other institutions of modern society, because of the changes tied to the Internet. Because of this, national security officials need to start thinking in fresh ways. Right now they're on the wrong side of history, as noble as their aims might be.
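The "strong" public-key encryption the essay treats as inevitable can be sketched with a miniature RSA example. This is purely illustrative -- the primes and message below are hypothetical toy values, and real systems use keys hundreds of digits long, which is precisely what makes wholesale interception impractical:

```python
# Toy RSA sketch: anyone may encrypt with the public key, but only the
# holder of the private key can decrypt. Parameters are far too small
# for real security; they only illustrate the mechanism.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def make_keys(p, q, e=17):
    # Public key is (n, e); private key is (n, d), where d is the
    # inverse of e modulo (p - 1) * (q - 1).
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi"
    return (n, e), (n, d % phi)

def encrypt(public_key, m):
    n, e = public_key
    return pow(m, e, n)  # m^e mod n

def decrypt(private_key, c):
    n, d = private_key
    return pow(c, d, n)  # c^d mod n

public, private = make_keys(61, 53)      # tiny primes, illustration only
ciphertext = encrypt(public, 42)
assert decrypt(private, ciphertext) == 42
```

The asymmetry is the policy sticking point: because decryption requires a private key that never travels over the wire, a third party who intercepts the ciphertext -- whether a criminal or a government agency -- cannot recover the message without either the key or a deliberately built-in backdoor.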

Gary Chapman is director of The 21st Century Project at the LBJ School of Public Affairs at the University of Texas at Austin.


Aviation Week and Space Technology, special issue on "Information Warfare," January 19, 1998.

Cummins, Guylyn R., "National Security Alone Can't Be Used To Censor Cryptographic Speech On The Internet," Daily Transcript, March 12, 1997, available at

Diffie, Whitfield and Susan Landau, Privacy on the Line: The Politics of Wiretapping and Encryption, The MIT Press, 1998.

EPIC, Electronic Privacy Information Center, "The Computer Security Act of 1987," at

Glave, James, "Critics Bash Reno's Cyberwar Plan," HotWired News, February 27, 1998, available at

Hafner, Katie and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet, Simon and Schuster, 1996.

Nightline (ABC News TV talk show), December 9, 1997. Transcript available at

Pale, Predrag, personal interview with the Croatian Deputy Minister of Science and Technology, Predrag Pale, March 16, 1998, Zagreb, Croatia.

Stoll, Clifford, The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Doubleday, 1989.

Wayner, Peter, "Computer Privacy: Your Shield? Or a Threat to National Security?" The New York Times, September 24, 1997.