
2024-12-04 operators on the front

At the very core of telephone history, there is the telephone operator. For a lot of people, the vague understanding that an operator used to be involved is the main thing they know about historic telephony. Of course, telephone historians, as a group, tend to be much more inclined towards machinery than people. This shows: websites with information on, say, TD-2, seldom tell you much about the operators as people.

Fortunately, telephone operators have merited more than just a bit of discussion in the social sciences. It was a major field of employment, many ideas in management were tested out in the telephone companies, and moreover, telephone operators were women.

It wasn't always that way. The first central exchange telephone system, where you would meaningfully "place a call" to a specific person, arose in 1877. It was the invention not of Bell, but of Edwin Holmes, a burglar alarm entrepreneur. His experience with wiring burglar alarms back to central stations for monitoring may have made the idea of a similarly-wired telephone exchange more obvious to Holmes than to those in other industries. Holmes initially staffed his telephone exchange the same way he had staffed his alarm and telegraph businesses: with boys. The disposition of these young and somewhat unruly staff members became problematic when they spoke directly with customers, though, so Holmes tried something else. He hired Emma Nutt, the first female telephone operator.

Women telephone operators were a hit. Holmes quickly hired more and customers responded positively. The matter probably had less to do with their gender itself than with the cultural norms expected of young men and young women at the time, though it takes a certain perspective to differentiate the two (for example, it cannot be ignored that the switch from boys to women as telephone operators also involved the other change implied by the terms: most women telephone operators were hired as young adults, not at 12 or 13 as telegraph boys often were). The way the decision was reported at the time, and sometimes still today, is simply that women were better for the job: calmer, more friendly, more professional, more obedient.

With her extreme youth, her gentle voice, musical as the woodsy voices of a summer day, her always friendly way of answering calls, she is a sensible little thing, tranquilly serene through all the round of jollies, kicks and nerve-racking experiences which are the result of a day's labor. She likes her place, she knows her work, and she is prepared with quick-witted, instinctive readiness for every emergency which comes her way. [1]

Alexander Graham Bell was very much aware of the goings-on at Holmes' company, which AT&T would purchase in 1905. So the Holmes Telephone Despatch Co., the first telephone exchange, became the pattern for many to follow. During the last decades of the 19th century, the concept of exchange telephone service rapidly spread, as did the role of the operator. Virtually all of these new telephone workers were women, building a gender divide in the telephone industry that would persist for as long as the operator.

Operators stood out not just for being women, but also for their constant direct interaction with customers. To a telephone user, the operator was part of the machine. The operator's diminished humanity was not unintentional. The early telephone industry was obsessed with the appearance of order and reliability. The role of fallible humans in such a core part of the system would undermine that appearance, and with the social mores of the time, the use of women would do so even more. The telephone companies were quick to emphasize to customers that operators were precisely trained, tightly managed, and a model of efficiency. The virtues of telephone operators, as described by the Bell companies, reflect the semi-mechanical nature of their professional identities: a good operator was fast, precise, efficient, reliable.

Within the Bell System, new operators attended training schools that taught both the technical skills of telephony (operation of the exchange, basic understanding of telephone technology, etc.) and the behavior and manner expected from operators. Operators were not expected to bring their personalities to the workplace: they followed a detailed standard practice, and any deviation from it would be seen as inefficiency. In many companies, they were identified by number.

There was, of course, a tension underlying the role of the operator: operators were women, chosen for their supposed subservience and then trained to follow an exact procedure. At the same time, operators were women, in the workforce in a time when female employment remained unusual. The job of telephone operator was one of few respectable professions available to women in the late 19th and early 20th centuries, alongside nursing. It seemed to attract the more ambitious and independent-minded women, and modern studies have noted that telephone operators were far more likely to be college-educated heads of households than women in nearly any other field.

A full examination of telephone operators, their role in the normalization of working women, the suffrage movement, and so on, would require a much better education in the liberal arts than I have. Still, I plan to write a few articles which will lend some humanity to the telephone industry's first, and most important, switching system: first, I will tell you of a few particularly famous telephone operators. Second, I plan to write on the technical details of the work of operators, which will hopefully bring you to appreciate the unusual and often very demanding career---and the women that took it up.

We will begin, then, with one of my favorite telephone operators: Susie Parks. Parks grew up in Kirkland, Washington, at the very turn of the 20th century. After a surprising amount of family relocation for the era, she found herself in Columbus, New Mexico. At age 17, she met a soldier assigned to a nearby camp, and they married. He purchased the town newspaper, and the two of them worked together operating the press. Columbus was a small town, then as well as now, and Parks wore multiple hats: she was also a telephone operator for the Columbus Telephone Company.

The Columbus Telephone Company seems to have started as an association around 1914, when the first telephone line was extended from Deming to a series of telephones located along the route and in Columbus itself. An exchange must have been procured by 1915, when Henry Burton moved to Columbus to serve as the young telephone company's full-time manager. Burton purchased land for the construction of a new telephone office and brought on his sister as the first operator.

Rural telephone companies were a world apart from the big-city exchanges of the era. Many were staffed only during the day; emergency service was often provided at night by dint of the manager living in a room of the telephone office. Operators at these small exchanges had wide-ranging duties, not just connecting calls but giving out the time, sending help to those experiencing emergencies, and troubleshooting problematic lines.

By 1916, Susie Parks sat at a 75-line common battery manual exchange. Unlike the long line multiple boards used at larger exchanges, this one was compact, a single cabinet. When a nearby lumberyard burned in January and the fire damaged the telephone office, Parks stepped in with a handy solution: the telephone exchange was temporarily moved to the newspaper office, where she lived.

The temporary relocation of the exchange would prove fortuitous. Unknown to Parks and everyone else, in February or March, Mexican revolutionary Pancho Villa sent spies into Columbus. His army was in a weakened state, traveling between temporary camps in northern Mexico and attempting to gather the supplies to resume their campaign against the Federal forces of President Carranza.

The exact reason for Villa's attack on Columbus remains disputed; perhaps they hoped to capture US Army weaponry from the nearby fort or perhaps they intended to destroy an ammunition depot to deter US advance into Mexican territory. We also don't know if Villa directed his spies to locate the communications facilities in Columbus, but it's said that they failed to identify the telephone exchange because of its temporary relocation. The spies were evidently not that good at their jobs anyway, as they significantly undercounted the number of US infantry stationed at Columbus.

On March 9th, 1916, Villa's army mounted what might be considered the most recent land invasion of the United States [2]. Almost 500 of Villa's men moved into downtown Columbus in the early morning, setting fire to buildings and looting homes. Susie Parks awoke to screams, gunfire, and the glow of a burning town. A day before, her husband had left town for the homestead the two worked. Parks was alone with their baby, and bullets flew through the modest building.

At the nearby infantry camp, two machine gun units came together to mount a hasty defense. While much more formidable than Villa had believed, they were nonetheless outnumbered and caught off guard, some of them barefoot as they advanced towards town with a few M1909s.

Susie Parks was barefoot, too, as gunfire shattered a window of the newspaper office. Keeping her head down, she maneuvered in the dark, knowing that a light would no doubt attract the attention of the raiders. Parks found her way to the exchange and, cord in hand, tried their few long distance leads. El Paso was no good: Villa's forces had cut the line. The line to the north, though, to Deming, had escaped damage. The Deming operator must have had her own fright as Parks described the violence around her. In short order, the message was passed to Captain A. W. Brock of the National Guard.

Somewhere along the way, a bullet or at least a fragment hit her in the throat. Unsure if she would survive, she hid her baby under the bed. According to most accounts, she stayed with the switchboard, keeping a low profile until the battle ended. According to her son, in an obituary, she took up a rifle of her own and made way for the Army camp. I suspect there are elements of the truth in both: she probably did get a gun, but I think she was more intending to defend the baby than the soldiers, who were apparently able to take care of themselves.

The Battle of Columbus ended as quickly as it began, and the exact order of events is told in different ways. Villa may have already given the order to retreat, seeing his substantial losses against the increasingly organized machine gunners from the Columbus camp. In a version more complimentary to our hero, it was the arrival of Brock's company, spotted coming into town, that led to the withdrawal. In any case, the sunrise appearance of the National Guard in Columbus decisively ended the invasion.

The raid began a series of campaigns against Villa, culminating in the assignment of General John Pershing to oversee a six-month "Punitive Expedition." They didn't find Villa, but they did prove out the use of air support and truck transport for a wide-ranging expedition through northern Mexico. The experience gathered in the expedition would be invaluable in the First World War soon to follow.

Susie Parks is remembered as a hero. Charlotte Prince, a former first lady of the New Mexico Territory, and the Daughters of the American Revolution presented her with a gold watch and silverware set at a celebration in Columbus's small theater. General Pershing, on his arrival to begin the Punitive Expedition, paid her a visit to commend her for keeping her post through a raging battle [3].

The original Columbus telephone exchange, and other memorabilia of the Columbus Telephone Company and Susie Parks, are on display in the top floor of the Telephone Pioneer Museum of New Mexico.

Parks had set a high standard for her fellow telephone operators, not just in New Mexico but beyond.

The next year, the United States would enter the First World War. Major General Fred Funston, an accomplished military leader and veteran of the Spanish-American War, was favored to lead the US Army into Europe. By bad luck, he died of a heart attack just a couple of months before the declaration of war. Funston was replaced by the General he had sent into Mexico. John Pershing traveled to France as commander of the American Expeditionary Forces.

Upon Pershing's arrival, he found Europe in disarray. Communications in France were tremendously more difficult than the high standard the Army maintained at home. There were both technical and organizational challenges: telephone lines and exchanges had been damaged by fighting, and the Army Signal Corps lacked the personnel to improve service.

The idea to dispatch American telephone operators to Europe likely originated in the Army Signal Corps and AT&T, with whom they already maintained a close relationship. But I like to think that Pershing remembered the bravery of Susie Parks when he signed on to the plan, cabling the US to send "a force of Woman telephone operators."

At the time, women had been admitted to the military only as nurses, and those nurses were kept far from the front. There was substantial doubt about the fortitude of these women, especially as they would be called on to staff exchanges near combat. The Secretary of War allowed the plan to go forward only on the condition that men would be hired preferentially and women would be carefully selected and closely supervised.

Operators were selected by the Army Signal Corps in cooperation with AT&T. It was initially thought that they would be found among the staff of the many Bell operating companies, but the practicalities of the AEF (which was headquartered and primarily fought in France in collaboration with French units) required that operators speak both French and English fluently. There were few French-speaking telephone operators, so AT&T expanded their search, hiring women with no telephone experience as long as they were fluent in French and passed AT&T's standardized testing process for telephone proficiency. These recruits were sent to the Bell System's operator training schools, and all selectees attended the Army Signal Corps' training center at what is now Fort Meade.

The first unit of the Signal Corps Female Telephone Operators Unit [4] consisted of 33 operators under the leadership of Chief Operator Grace Banker, who had learned French at Barnard College before finding work at AT&T as an instructor in an operator training school. Their 1918 journey to France was a long and difficult one, as transport ships were in short supply early in the war and subject to German attack. The ferry crossing of the English Channel, not a long voyage by any means, turned into a 48-hour ordeal as the ship was stuck in dense fog in a vulnerable position. Despite the cold and damp conditions, the operators waited two days on deck in preparation to take to the life boats if necessary. Two men on the ship died; at one point French forces mistook its faint outline for an attacker and surrounded it. As Banker would tell the story later, her operators were in good spirits.

Their cheerful disposition in the face of the harsh journey served as good preparation for the conditions the operators faced in the field. There was hardly a barracks or telephone exchange for the women that wasn't plagued by leaks, rats, fleas, or disarray as the AEF scrambled to find facilities for their use. The simple mechanics of the telephone system required that exchanges be located fairly close to concentrations of command staff and, thus, fairly close to the fighting. The operators were constantly in motion, moving from camp to camp, and ever closer to the front.

Banker's first unit of 33 women quickly proved themselves invaluable, providing faster and more reliable telephone service as they leveraged their French to handle all allied traffic and developed directories and route guides to keep up with the rapid work of the Signal Corps' men in building out new telephone lines. The Female Telephone Operators had proven themselves, and Pershing called for more. A few months later, hundreds more were in France or on their way. Despite the War Department's concern about the willingness of women to work in wartime conditions, telephone operators turned out to be as ready to fight as anyone: when AT&T solicited applications from among the Bell companies, they were swamped by thousands of postcard forms.

While some sectors of the military were clear that the women operators were brought to Europe for their technical proficiency, there remained a clear resistance to recognition of their work as part of the military art. "Even telephone operators were persistently told that their presence and their girlish American voices would benefit the war effort by comforting home-sick soldiers and lifting their morale" [5]. The operators were, at times, regarded in the same light as the women "morale volunteers" fielded by organizations like the YWCA.

The military was so quick to categorize them as such that, shortly after their arrival, the YWCA was made responsible for their care. Operators were accompanied by YWCA chaperones, furnished to protect their moral virtues from the soldiers they worked alongside. Despite their long shifts at the exchanges, the YWCA expected them to attend military dances and keep up appearances at social functions. Many of the women associated with the AEF, telephone operators and nurses alike, took to cutting their hair short---no doubt a practical decision given the poor housing and inconsistent access to washrooms, but one that generated complaints from the Army and the YWCA.

In September of 1918, the AEF and French troops---a quarter million men in all---took on their first great offensive. The logistics of supporting and organizing such a large fighting force proved formidable, and the Signal Corps relied on the telephone to coordinate a coherent assault. The thunder of artillery was heard over the chatter of telephone calls. For the duration of the offensive, a system of field phones and hastily laid long-distance connections, known as the "fighting lines," fell under the control of Grace Banker and five operators she hand-picked to move up to the front with her. They donned helmets and coats, toted gas masks, and took up their positions at temporary exchanges, some of them in trenches. Infantry orders, emergency calls for supply, and even artillery fire control passed through their plugboards as the allies took Saint-Mihiel.

As a reward, they moved forward once again, taking up a new "telephone office" at the allied advance headquarters in Bar-le-Duc. There, they camped in old French army buildings and weathered German bombing as they provided 24/7 telephone service for the Meuse-Argonne offensive. Military service was demanding, but still subject to the "scientific management" trend of the time and the particular doctrine of the Bell System. Their long shifts were carefully supervised, subject to performance evaluations and numerical scoring. There was a certain subtext that the women operators had to perform better than the Signal Corps' men who they had replaced.

Fighting ended in November of 1918, although many of the operators were assigned to various post-war duties in Europe (including Grace Banker's assignment to the French residence of President Wilson) during 1919. The first 33 operators had spent 20 months in France before they returned to the United States, where Banker would complain of the low stakes of civilian work.

After the war, the Female Telephone Operators received numerous commendations. Major General H. L. Rogers of the Signal Corps spoke of their efficiency and the quality of the telephone service under their watch. The Chief Signal Officer reported that "a large part of the success of the communications of this Army is due to... a competent staff of women operators." Pershing personally signed letters of commendation to a number of the operators, referring to their "exceptionally meritorious and conspicuous services." Operators who had worked near the front received ribbons and clasps for their involvement in the offensives. Grace Banker, for her own part, was awarded the Army's Distinguished Service Medal. Of 16,000 officers of the Signal Corps in the First World War, only 18 received such an honor.

Considering the decorations these women wore on their Signal Corps jackets as they returned to the United States, it is no wonder that modern accounts often style them as the "first women soldiers." The female nurses of the Red Cross, while far more numerous, were never as close to the front or as involved in combat operations as the operators. The operators were unique in the extent to which they considered themselves---and they were often seen by others---to be members of the Army.

After the war, they would learn, at the same time as many of their commanding officers, that they were not. The Army had quietly determined them to be contracted civilian employees. None other than General Pershing himself had ordered them to be inducted into the Army in his original letter to headquarters, and recruiting materials explicitly used the terms "enlistment" and "regular Army," even introducing the term "women soldiers." But even before the first 33 shipped out for France, Army legal counsel had determined that military code prohibited the involvement of women. None of the women were told; instead, they were issued uniforms.

450 members of the Female Telephone Operators Unit worked 12-hour shifts, handling 150,000 telephone calls per day, often not only making connections but serving as interpreters between French and American officers. The Signal Corps' male telephone operators, more experienced in the Army, were of such noticeably poorer performance that they were restricted to night shifts---and even then, only in safe territory well behind the front. Two operators, Corah Bartlett and Inez Crittenden, died in the service of the United States and were buried in France with military honors. Years later, it was noted that because of their critical role in military logistics, the operators were among the first Americans to reach the combat theater and among the last to leave.

They were discharged as civilians---or rather, they were not discharged at all. Because of the Army's legal determination, the women received no Army papers and were deemed ineligible for veteran's benefits or even to receive the Victory Medal which the Signal Corps had promised them.

Despite its recognition of their exceptional service, the military was slow to admit women's role outside of wartime exigency, or even in it. The United States as a whole was even slower to recognize the work of the telephone operators. Despite the introduction of 24 bills to Congress, starting in 1927, it was not until 1977 that the operators were declared regular members of the Army and granted military benefits. By the time the act was put into effect in 1979, only 33 operators lived to receive their discharge papers and the Victory Medal.

AEF telephone operator Olive Shaw, who tirelessly lobbied for military recognition of her fellow women, was the first burial at the new Massachusetts National Cemetery in 1980. Her wartime uniform, fitted as always with the brass devices of the Signal Corps and the letters "U.S.," was presented to Congress as evidence of their rightful role as veterans in 1977 and cited again, in 2024, when all of the members of the Army Signal Corps Female Telephone Operators Unit were awarded the Congressional Gold Medal. It is now on display at the National World War I Museum in Kansas City.

The Female Telephone Operators Unit laid the groundwork for the induction of women during World War II---the Women's Army Auxiliary Corps and the United States Navy's Women's Reserve, or WAVES, which is remembered today for its exceptional contributions in the fields of cryptography and computer science. It is fitting, of course, that the achievements of the WAVES would be exemplified by another Grace, Rear Admiral Grace Hopper.

"Women's work," far from being frivolous, was now defined as essential to the war effort, and the U.S. military found itself in the uncomfortable position of being dependent on female labor to meet the structural needs of the war economy. Ironically, then, it was the logic of sex segregation in the civilian economy that compelled the U.S. government to grant women entry into the armed services, the ultimate masculine preserve. [5]

[1] "A Study of the Telephone Girl," Telephony magazine (1905).

[2] A 1918 conflict at Nogales, AZ, involving similar combatants, might also lay claim to that description. I will argue in favor of the Battle of Columbus, which was an unprovoked invasion, as compared to the Battle of Ambos Nogales which was more of a border security conflict in reaction to years of rising tensions.

[3] Parks was an interesting figure for the rest of her life. She and her husband continued to move around, buying the Clackamas News in Oregon. Her husband's condition declined, a result of surgical complications and a morphine addiction, and they split up. During the Second World War, Parks found herself back in wartime service, as a sheet metal worker at the Seattle-Tacoma Shipbuilding Company. In 1981, the Deming Headlight, closest newspaper to Columbus, reprinted her obituary from the Seattle Post-Intelligencer. It recounts a half dozen careers, two husbands, and 36 grandchildren.

[4] The members of the Female Telephone Operators Unit are frequently referred to as the "hello girls," but this is a more generic term for telephone operators that would also come to refer to other groups, be used as the title of works about telephone operators, etc. I prefer to stick to something a little more precise.

[5] Susan Zeiger, "In Uncle Sam's Service: Women Workers with the American Expeditionary Force, 1917-1919" (1999).

2024-11-23 cablesoft

As an American, I often feel an intense jealousy of Ceefax, one of several commercially successful teletext services in the UK and Europe. "Teletext" is sometimes a confusing term because of its apparent relation to telecom-industry technologies like the teletypewriter and telegram, but it refers specifically to a "broadcast text" technology usually operated over TV networks. Teletext could be considered the first form of "interactive television," a reimagining of traditional television as a more WWW-like service that allows viewers to navigate through information and retrieve content on-demand.

Despite many, many attempts, interactive television was never particularly successful in the US. Nor, I believe, did it fare well in Europe after the retirement of teletext. It was an artifact of a specific time and place; once PC ownership and internet access expanded, they handily filled the niche of interactive text. That feels a little surprising, the television being a big screen that many consumers already had in their homes, but offerings like MSN TV sucked to use compared to PCs. The technology for interacting with PC software from a couch honestly still isn't quite there [1], and it was even worse in the '90s.

Despite its general failure to launch, interactive TV was, for a time, a big field with big ideas. For example, you've heard of MPEG-4, but what about MHEG-5? That's the Multimedia and Hypermedia Experts Group's effort towards an object-oriented, television-native hypermedia environment, and it's exactly as terrible and fascinating as that description would lead you to believe. But I'm not going to talk about that today. Here's what's on my mind: what if I told you that MSN TV was Microsoft's second attempt at interactive television?

In 1994, Microsoft formed a partnership with two cable television carriers to launch Cablesoft. It was such a big hit that Microsoft spent most of its brief life trying not to talk about it.

You might remember the days when a television with a standard QAM tuner could often pick up on-demand content being watched by other people. And that's the basic interactive television model at work: as a cable customer with on-demand features, your STB presents a set of menus to select a program. When you start it, a video server at the head end selects an unused digital television channel and plays back the content on it. Your STB is sent a command to tune to the correct channel, and all of your playback control actions (play/pause, etc.) are sent to the video server. Today the video downlink is always encrypted, but in the heyday of CATV on-demand encryption was inconsistent and some providers didn't use it at all, leaving these downlink channels viewable by anyone with a tuner not configured to hide them.
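The head-end flow described above can be sketched as a toy session allocator. This is purely illustrative: the class, command names, and channel numbers are all invented for the sake of the example, not drawn from any real head-end protocol.

```python
# Toy model of the on-demand flow: the head end owns a pool of unused
# digital channels, assigns one per session, and tells the STB to tune.
# All names and channel numbers here are hypothetical.

class HeadEnd:
    def __init__(self, channels):
        self.free = set(channels)   # digital channels not currently in use
        self.sessions = {}          # stb_id -> channel assigned to that box

    def start_session(self, stb_id, program):
        """Allocate an unused channel, start playback on it, and return
        the tune command to send down to the set-top box."""
        if not self.free:
            raise RuntimeError("no free channels at the head end")
        channel = self.free.pop()
        self.sessions[stb_id] = channel
        # a real video server would begin streaming `program` here
        return {"cmd": "TUNE", "channel": channel, "program": program}

    def control(self, stb_id, action):
        """Playback controls (play/pause, etc.) are relayed upstream to
        the video server feeding this box's channel."""
        channel = self.sessions[stb_id]
        return f"{action} on channel {channel}"

    def end_session(self, stb_id):
        """Return the channel to the free pool when playback ends."""
        self.free.add(self.sessions.pop(stb_id))

head_end = HeadEnd(channels=[701, 702, 703])
tune = head_end.start_session("stb-42", "some-movie")
print(tune["cmd"])  # TUNE
```

Note that the pre-encryption "leak" the paragraph describes falls straight out of this model: the assigned channel carries ordinary digital video, so any tuner that lands on it sees the stream, whether or not it belongs to the session.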

On-demand isn't particularly exciting, and perhaps only barely counts as "interactive television," but it is the most substantial thing to come out of US efforts. The late '80s and early '90s saw plenty of interactive television ideas, which tended to envision the TV as the main way consumers would get information. Because of the limitations of the technology, this exciting world was mostly text. Text on the TV. Nothing would really catch on, even to the trial phase, until interactive TV started to become synonymous with multimedia.

The 1990s were an exciting time in computer multimedia. Larger storage devices (namely CD-ROMs), faster processors, and better display controllers all meant that computers were becoming more and more practical for audio and video. In the video game industry, the period famously led to a surge of "full motion video" (FMV) games that used live-action cutscenes or elaborate pre-rendered 3D scenes. Of course, most enthusiasts of video game history also wince at the thought of figuring out which exact version of Apple QuickTime will work right with any given title.

Despite its surging popularity, computer multimedia was also in its infancy. Audio and video encoding were dominated by proprietary systems like QuickTime and RealMedia. Over time, these products and their underlying codecs would largely converge into the relatively consistent and consumer-friendly ecosystem of media formats we use today (i.e. everything is MPEG and consumers don't care about the rest, mostly, as long as they don't want H.265 in a web browser on Linux or something wild like that).

Some of that convergence happened because of vendors actively contributing to standardization and promoting licensing pools, but some of it also happened because one of the biggest players in the PC software industry saw Apple's success with QuickTime and didn't want to fall behind. Microsoft developed a major focus on multimedia, leading to their own family of codecs, containers, and protocols, some of which remain in common use today. What's more, Microsoft had a long-running fascination with the television distribution industry, which it tended to view as the future of media delivery due to its very high capacity compared to telephone lines. Microsoft itself, and its executives as individuals, had a variety of interests in cable TV starting in the '90s. Perhaps most prominently, Paul Allen was controlling owner of Charter for a decade, and Microsoft invested a billion dollars in Comcast in 1997 to support their effort to pivot towards data.

So, the application of Microsoft technology to cable television was inevitable. Microsoft brought Tele-Communications Inc (TCI, now part of Comcast) and Time Warner (somehow not part of Comcast yet) on board as CATV partners and set about building Microsoft Media for CATV. Or, perhaps, TCI and Time Warner formed a joint initiative to develop an interactive TV platform and selected Microsoft as a partner. The history is a little fuzzy, but somehow, these three companies ended up in high-level talks about a new, standard platform for technology-enabled TV. Cablesoft, as they called it, would include an electronic program guide, TV shopping, and, most importantly, on-demand streaming media.

I'm trying not to say this over and over again, but Microsoft and Bill Gates and Paul Allen were all kind of obsessed with streaming media and on-demand delivery in the 1990s. It's hard to keep track of all the failed ventures they either launched or invested in; there were several each year. If you read into the history of the TV distribution industry, Paul Allen especially just keeps popping up in weird places. It's fascinating to me because our modern experience shows that they were very much right, in that on-demand streaming delivery via computers would become the dominant way media is distributed. But they were also pathologically ahead of their time; Paul Allen was basically trying to do Netflix in 1993 and all of these efforts just sucked. The infrastructure was simply not there and the many companies trying to build it tripped over each other as often as they made progress.

To be fair, Microsoft was not the only faction making repeated stabs at streaming media, and by 1994 investors were already starting to tire of it. A 1994 News Tribune (Tacoma) article on Cablesoft's announcement captures the attitude with this spectacular quote from industry newsletter editor Denise Caruso: "Anybody who bothers to get excited about another interactive TV trial at this point deserves everything they get in terms of disappointment." In 1994! All These Goddamned Streaming Services is a complaint pretty much as old as computer multimedia. I wonder what Denise Caruso would have to say about Tubi.

As you know, things didn't get a whole lot better. In 1996, the Boston Globe's "FastTrack" technology column began: "Interactive television is the Loch Ness Monster of the information age---much talked about but rarely seen." And it's not hard to find these quotes, those are like the first two search results in the archive I use. The consumer internet was barely a thing and industry commentators were already rolling their eyes at each new streaming service.

The difference, of course, is this: back in the '90s, these streaming multimedia efforts were collapsing fast, generally before they signed up actual consumers. Now they collapse very slowly, after producing about a hundred original TV series that none of us will ever hear of. Say what you will of Cablesoft, at least they didn't make Tall Girl and Tall Girl 2.

So what did they make? Look, Cablesoft didn't get very far, and there's not a lot of historical information about them. You have to be careful not to confuse Cablesoft with CableSoft, a completely separate company that was working on the exact same thing at pretty much the same time (CableSoft had spun off of television technology giant General Instrument and thus had a considerable advantage, but it didn't work out for them either) [2].

By early 1994, Microsoft was already involved in other interactive TV ventures, leading to a somewhat critical interview of future Microsoft CTO and cookbook author Nathan Myhrvold by the Seattle Post-Intelligencer. "In the long run," he opined, "it's very likely there will be some form of a smart TV... it's not very input intensive, you don't have a keyboard for your TV." 100% correct!

But then the interviewer, Jim Erickson, asks something along the lines of "what's with these three different interactive TV things that Microsoft is doing at once?" Myhrvold answers that "there is more uniformity and more synergy than may meet the eye with the series of things that we have done so far," which sounds like a comedy sketch of a Google exec explaining the difference between Duo and Allo. Erickson digs a little deeper, asking what's going on with Cablesoft, prompting Myhrvold to say "it's a funny thing to give status on something you never announced and never admitted to." And that is a very interesting response indeed.

The Wikipedia article is an absolute stub, giving us just one tantalizing factoid that has me practically foaming at the mouth: "...a custom version of the Windows NT operating system known as NTAS, which was essentially a series of fine-tuning efforts to drive ATM switches." We'll get back to that. But the Wikipedia article also says that Cablesoft was announced in 1994, which isn't wrong, but is a little misleading. As far as I can tell, Microsoft "announced" Cablesoft in March 1994 only under duress. Rumors of Cablesoft started to swirl about nine months earlier, in 1993, and the media did not look on it very positively. The most widely published article quoted then-chairman of Apple John Sculley accusing Microsoft of an anticompetitive move to corner the interactive TV market.

There is, of course, nothing more quintessentially Microsoft than an anticompetitive move to capture a market that would never actually emerge.

The first widespread mention of Cablesoft ran under the headline "Big Software-Cable Deal Criticized," which does a bit to explain Microsoft's odd caginess about Cablesoft: the company repeatedly denied that any final deal had been signed and even downplayed the likelihood of the product launching. TCI and Time Warner refused to talk about it. Charmingly, a Phil Rowe of Battle Creek, Michigan wrote in to the editor of the Battle Creek Enquirer that Microsoft, TCI, and AT&T (I think Rowe was just confused? AT&T had its own interactive television efforts going on) would soon monopolize the interactive TV market, and that to hold them off, Battle Creek should swiftly franchise wireless cable.

It seems that Cablesoft died under the same cloud from which it emerged. No one is really that clear on what happened. A trial program apparently launched where TCI and Microsoft employees in the Puget Sound area could try it out. It must not have lasted very long; by 1995 an article about Microsoft's antitrust woes listed Cablesoft as one of the ventures that Microsoft had abandoned due to the scrutiny. "Everyone backed off," an anonymous Microsoft employee told another reporter. "They were all afraid that this thing would be regulated out of existence."

Cablesoft didn't make much of a contribution to the business, but was it technically significant? And what about that customized version of Windows NT? Denise Caruso comes up again: an archived version of her personal website is the Wikipedia article's main source. She wrote:

Code-named Tiger and now called the Microsoft Media Server, the innovative design is based on a version of the Windows NT operating system, called NTAS, that uses standard PCs and cutting-edge ATM (asynchronous transfer mode) networking products to deliver video, audio, animation and information services into the home.

Streaming media was difficult in 1999; it was very difficult in 1993, when Microsoft's efforts began. Hard disks were slow, and head contention meant that it was very hard to serve multiple video streams from one disk. Networks were slow and, worse, had high levels of latency and jitter compared to what we are used to today. Feasibly providing real-time unicast streams to a large set of users would require some sort of large, very high performance storage system---or, in a strategic move that has repeatedly revolutionized the server side, a lot of consumer hardware and a clever system of coordination.

Microsoft technical report 96-09 describes the Tiger Video Fileserver. It was released after the Cablesoft project had faltered and never mentions it by name, but it clearly describes the head-end equipment for an on-demand video streaming system. Its authors, Myhrvold among them, number a half dozen people with long careers in distributed systems and high-performance storage.

A Tiger fileserver consists of multiple consumer PCs running Windows NT. Each of these nodes, called "cubs," has multiple hard disks. Files are separated into blocks (64kB-1MB) which are distributed across disks in the cluster; there are no constraints on the nature of the files except that they must be at the same playback bitrate. This constraint exists because the entire Tiger system operates on a synchronized timeslot schedule, consisting of block service times which are equal to the time required to read one block from disk, plus some margin for transient events and error recovery.

When a viewer requests a video, a controller node allocates the viewer to slots in a schedule of block service times and cubs. This is done such that each successive block of a given video will be handled by a different disk, and such that no one disk will be needed by more than one viewer in a given block service time. In other words, video playback consists of a series of cubs each delivering a single block of the file in order, and each disk retrieves only one block at a time. Because the block service time (and thus the rate at which the schedule is executed) is appreciably shorter than the time viewers spend playing back that same block, the cubs are able to support multiple viewers and still deliver blocks on time.
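
The core of this scheme can be sketched in a few lines of Python. This is a toy model of my own, not Microsoft's code; the striping rule and the conflict check are invented for illustration, though they follow the constraints described in the report:

```python
# Toy model of Tiger's slot schedule (illustrative only; all names invented).
# A file striped from `start_disk` places block i on disk
# (start_disk + i) % ndisks, so consecutive blocks always land on
# different disks.

def disk_for_block(start_disk: int, block_index: int, ndisks: int) -> int:
    """Which disk holds block `block_index` of a file striped from `start_disk`."""
    return (start_disk + block_index) % ndisks

def schedule_is_valid(viewers: list, ndisks: int) -> bool:
    """Each viewer is (start_disk, current_block). Within one block service
    time, no disk may be asked to serve more than one viewer."""
    disks_in_use = set()
    for start_disk, block in viewers:
        d = disk_for_block(start_disk, block, ndisks)
        if d in disks_in_use:
            return False  # two viewers would contend for the same disk
        disks_in_use.add(d)
    return True

# Three viewers of the same 8-disk stripe, each offset by one block:
viewers = [(0, 5), (0, 4), (0, 3)]
print(schedule_is_valid(viewers, ndisks=8))  # True: blocks 5, 4, 3 hit disks 5, 4, 3
```

Because each viewer advances one block (and thus one disk) per slot, viewers at different offsets march around the disks in lockstep and never collide once admitted.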

Because disks were not fast enough to reliably perform two block reads within a block service time, and aggregation of multiple viewers into one logical stream remained an elusive challenge, Tiger used a simple hack to avoid noisy neighbor problems: the controller ensured that there was only one viewer of a given file at a given time index. In practice, if two users were to hit "play" on the same movie at the exact same time, the controller would slightly delay the beginning of streaming to one of the viewers in order to introduce a time offset. Combined with the striping of each file across multiple nodes, this naturally distributed load to allow a large number of simultaneous viewers of the same media without having to create additional replicas of the media.

The controller determines and distributes the schedule in advance, and each cub is permitted (and expected) to retrieve blocks early as its I/O allows. But cubs are required to send that block in the correct schedule slot, so that storage buffering occurs only at the cub level and the outgoing network stream is in perfect realtime. When the file is distributed across cubs and disks, extra copies of each block are stored for redundancy, in case of a disk or cub failure. Extra slack in the block service time allows a failed block retrieval to be moved to a different cub. Secondary blocks are allocated by organizing the cubs into a ring. Each primary block has secondary copies stored on one or more cubs "to the right," and each cub is responsible for monitoring the liveness of its neighbor "to the left" and assuming its schedule if required.
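
As a toy illustration of that ring layout (again my own sketch, with invented names, not code from the report), secondary placement and failure monitoring look something like this:

```python
# Toy sketch of Tiger's ring of cubs (illustrative only; names invented).
# Secondary copies of each cub's blocks are spread over the next
# `nsecondary` cubs "to the right", and each cub watches the liveness
# of its neighbor "to the left".

def secondary_cubs(primary: int, ncubs: int, nsecondary: int = 2) -> list:
    """Cubs holding backup copies of blocks whose primary lives on `primary`."""
    return [(primary + k) % ncubs for k in range(1, nsecondary + 1)]

def watched_by(cub: int, ncubs: int) -> int:
    """Each cub monitors its left-hand neighbor and assumes that
    neighbor's schedule if it stops responding."""
    return (cub - 1) % ncubs

print(secondary_cubs(4, ncubs=5))  # [0, 1]: cub 4's backups wrap around the ring
print(watched_by(0, ncubs=5))      # 4: cub 0 watches cub 4
```

Spreading the secondaries across several right-hand neighbors, rather than mirroring one cub onto one other cub, means a failed cub's load is divided among multiple survivors instead of doubling the work of a single machine.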

The physical layout of files on each disk is optimized for fault load; the "primary" copy of each block is stored on the (physical) outer part of the disk surface while the inner part of the disk surface is used for secondary (backup) copies. Because the outer surface of the disk moves physically faster, it can be read more quickly. By placing primary blocks on the outer half, the disk's normal "primary" workload runs at nearer the disk's maximum read speed, leaving spare time for retrieval of secondary blocks when block sizes and bitrates are optimized for the disk's average read speed. When primary blocks are lost due to disk failure, they are automatically restored to a new disk as soon as one is available.

Besides real-time streaming of files, Tiger also supported ad-hoc read and write operations. These were performed on an opportunistic basis, sent to cubs as "extra" jobs to execute when they were ahead on carrying out scheduled reads. When viewers fast-forwarded or rewound playback, these opportunistic jobs were used to jump-start playback at other points in the file, with the caveat of reduced reliability.

During the course of normal video playback, individual blocks will come from an arbitrary sequence of different cubs. There are several approaches to the network design, and Tiger supports UDP over both Ethernet and ATM, but ATM is preferred. ATM is Asynchronous Transfer Mode, a network protocol that originated in the telephone industry as part of the ISDN stack. Unlike Ethernet, ATM was designed for real-time data streams, using small fixed-size cells and connection admission control to provide guaranteed-bandwidth virtual circuits over a switched fabric. This made ATM inherently more suited to streaming media than Ethernet, a difference that Ethernet only made up for with quality of service protocols and, mostly, just getting to be so fast that streaming media mostly worked out despite having only intermittent, opportunistic access to the network medium.

Microsoft further enhanced ATM for the Tiger application by introducing the ATM "funnel," a multipoint-to-point networking mode that allows many cubs to send packets into a single virtual circuit. ATM subdivides packets into multiple cells, meaning that if two cubs were to send packets too close together, their cells may become interleaved (both violating the design of IP-over-ATM and complicating the work of the viewer). To resolve this problem, Tiger uses a token-passing scheme where each cub transmits its block and then passes a token to the next cub in the schedule for that viewer. The implementation of this token-passing ATM variant is one of two customizations to the NT kernel involved in Tiger.
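
The funnel's ordering guarantee can be modeled in a few lines. This is a simplified simulation of my own, not Microsoft's protocol code; the function names and the round-robin cub assignment are invented for illustration:

```python
# Toy model of the ATM "funnel": many cubs feed one virtual circuit to a
# viewer, and a token passes along the schedule so only one cub transmits
# at a time, keeping cells from different blocks from interleaving.

def play_stream(blocks: list, cub_for_block) -> list:
    """Deliver blocks in order; the token moves to the cub holding the
    next block after each send."""
    delivered = []
    token_holder = cub_for_block(0)       # token starts with the first cub
    for i, block in enumerate(blocks):
        sender = cub_for_block(i)
        assert sender == token_holder, "cub transmitted without the token"
        delivered.append(block)           # one whole block, no interleaving
        token_holder = cub_for_block(i + 1)  # pass token to the next sender
    return delivered

blocks = ["block%d" % i for i in range(6)]
out = play_stream(blocks, cub_for_block=lambda i: i % 3)  # 3 cubs, round robin
print(out == blocks)  # True: blocks arrive in order, one sender at a time
```

The point of the token is that serialization happens among the cubs themselves, with no per-cell arbitration needed at the switch or the viewer.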

The other will be familiar to readers in the modern high-performance networking industry: Tiger implemented a basic form of kernel-bypass networking in which the network interface read the file block directly from the read buffer via DMA. Tiger thus required some special kernel-mode code to implement this DMA-to-UDP mode and ensure that video data passed over the bus only twice, once from the disk controller to memory, and once from memory to the network controller.

These kernel features, which I believe were implemented less as kernel modifications than as device drivers, seem to be the "customizations" that Caruso referred to. To my frustration, the connection to the term "NTAS" seems to be mistaken. I cannot find any instance of Microsoft using it in relation to Cablesoft or Tiger. Most likely it arose from confusion with the branding of NT 3.1's server edition as NT Advanced Server; NT 3.1 must have been the basis for the Tiger system since it was released in 1993 and had considerable emphasis on performance as a network file server to compete with Netware.

The technical report describes a model Tiger deployment: Five Gateway Pentium 133MHz machines served as cubs, each with 48MB of RAM, three 2GB Seagate drives, and an OC-3 ATM adapter. Larger than usual 2KB sectors were used on the hard disks for better throughput when handling large files (this is an interesting detail since support for non-default sector sizes was apparently rather cantankerous in the PCs and storage interfaces of the time). A Gateway 486/66 machine served as the controller for the cluster. Ten 486/66s, each attached to the ATM network by 100Mbps fiber, served as test clients.

The controller, on the other hand, used 10Mbps ethernet to communicate with the five cubs, while the cubs communicated among themselves using the ATM network. The paper explains this by noting that the 486-based controller was very slow compared to the Pentium-based cubs; perhaps no benefit was seen in the added cost of extending the ATM network to the controller.

This system, with a total of 15 data drives, stored a little over 6 hours of media at 6Mbps. 0.75MB blocks were used, for a block playback time of roughly one second. Based on the authors' calculations, the OC-3 interfaces were the bottleneck of the system, allowing the five cubs to provide a total of 68 simultaneous streams. The 486-based viewer machines actually weren't fast enough to decode that many streams, so some of them requested videos and simply discarded the packets, while others actually checked the received blocks for correctness. Based on this sampling method, a lost or late block rate of only 0.02% was observed. Performance data collected from the cubs indicated that, with faster network controllers, they would have kept up with additional viewers.
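
The arithmetic here is easy to check. This is my own back-of-the-envelope math, not figures taken from the report:

```python
# Sanity-checking the test deployment's numbers (my arithmetic, not the
# report's). Bitrates are decimal megabits; block size is decimal megabytes.
stream_mbps = 6.0                    # per-stream playback bitrate
block_bytes = 0.75 * 1e6             # 0.75 MB blocks
block_seconds = block_bytes * 8 / (stream_mbps * 1e6)
print(block_seconds)                 # 1.0: "roughly one second" of playback per block

streams, cubs = 68, 5
print(streams * stream_mbps)         # 408.0 Mbps aggregate across the cluster
print(streams / cubs * stream_mbps)  # 81.6 Mbps average per cub's OC-3 adapter
```

Note that 81.6 Mbps per cub is well under OC-3's nominal 155 Mbps line rate, which suggests (and this is my inference, not the report's claim) that the limit was what the ATM adapters and host buses of the era could actually sustain rather than the optical link itself.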

Despite Cablesoft's failure to ever reach the market, Tiger appears to have included novel work in several areas: real-time scheduling of demand across a cluster, and distributed storage with striping and replication for both fault-tolerance and performance. At the bottom line, Tiger demonstrated the use of a cluster of commodity PCs to perform a task that, at the time, was seen as requiring specialized and costly high-performance machines from SGI or Oracle.

In the end, it wasn't enough. Caruso again:

Unlike the PC business, where it has ultimate leverage over all the links of a relatively short chain, Microsoft has no native influence in the large and existing infrastructures it wants to penetrate (cable networks, telcos and content providers)---except that it is Microsoft.

Cablesoft ended so quietly that we will probably never know for sure, but I think that Caruso's larger argument in her article about Cablesoft is right: licensing software for STBs and cable headends would never bring in Bill Gates money. Microsoft's real play was for revenue share: wrestling matches and hotel pornography had proven that on-demand content could move a lot of money, and as the operator of the technical platform, Microsoft stood to impose royalties. 5% of pay-per-view revenue is the kind of thing that motivates Microsoft. When Microsoft went to major cable operators to try to standardize them on its platform, the cable oligopoly saw the software monopoly coming in for a slice of the pie. They probably thought they could do it just fine on their own.

And they turned out to be right. Interactive TV would die off as a buzzword in the US market, but not that much later, in the '00s, General Instrument successor Motorola would partner with the manufacturers of some of those specialized costly machines and quietly introduce video on demand to the American cable consumer as a standard offering of digital TV packages.

I think they met more success because they didn't try as hard: '00s on-demand infrastructure involved some formidable machines, like the Broadbus (later Motorola) B-1 that served thousands of simultaneous video streams by storing the entire active video library in RAM. But no one was talking about hypertext or smart TVs or the TV as the center of digital family life. The aspects of interactive TV that were familiar to the television industry, electronic program guides and more convenient pay-per-view, did just fine once they became technically feasible.

What of Tiger?

As part of NT4 in 1996, Microsoft introduced NetShow. NetShow would later be known as Microsoft Media Server, and then Windows Media Server, before fading into obscurity. It was a real-time media server that used a proprietary protocol to deliver Advanced Streaming Format media, competing directly with streaming pioneer RealNetworks. The details are fuzzy enough that I have a hard time saying this for sure, and Tiger as a distributed system definitely didn't make it past Cablesoft, but it does seem very likely that NetShow is a descendant of Tiger. Over time, the Windows Media family moved over to industry standard protocol RTSP, and then the streaming server was merged into IIS. Perhaps nothing of Tiger survived into the 21st century. But, you know how Microsoft is. Maybe there's a bit of Tiger inside of Windows to this day.

[1] As a dedicated computer-on-TV person, I use a ThinkPad Trackpoint Keyboard II as the "remote control." It's the best I've found so far but still large and clunky compared to a traditional remote. More recently, my husband also added an HDMI-CEC adapter that allows the TV remote to control the computer via a daemon that generates keyboard events. This is pretty slick for applications like Plex and Steam Big Picture that were designed with television use in mind, but in the web browser the experience leaves much to be desired. We're basically facing all the same struggles as similar dweebs did thirty years ago.

[2] And neither should be confused with Cablesoft Ltd., an English company that developed software for the telephone and television industry during the same years, or CableSoft, a company that made a stock quotation and analytics product a couple of years later.

2024-11-09 iron mountain atomic storage

I have quipped before about "underground datacenters," and how they never succeed. During the late decades of the Cold War and even into the '00s, the military and (to a lesser extent) the telecommunications industry parted ways with a great number of underground facilities. Missile silos, command bunkers, and hardened telephone exchanges were all sold to the highest bidder or---often in the case of missile silos---offered at a fixed price to the surrounding land owner. Many of them ended up sealed, the new owner only being interested in the surface property. But others...

There are numerous examples of ex-defense facilities with more ambitious owners. There ought to be some commercial interest in a hardened, underground facility, right? After all, the investment to build them was substantial. Perhaps a data center?

There are several ways this goes wrong. First, there are not actually that many data center clients who will pay extra to put their equipment underground. That's not really how modern disaster recovery plans work. Second, and probably more damning, these ventures often fail to anticipate the enormous cost of renovating an underground facility. Every type of construction is more expensive when you do it underground, and hardened facilities have thick, reinforced concrete walls that are difficult to penetrate. Modernizing a former hardened telecom site or, even worse, missile site for data center use will likely cost more than constructing a brand new one. Indeed, the military knows this: that's why they just sold them, often at rock-bottom prices.

Even if these "secure datacenters" almost never succeed (and rarely even make it to a first paying client), they've provided a lot of stories over the years. CyberBunker, one of the more unusual European examples (a former NATO facility), managed to become entangled in cybercrime and the largest DDoS attack ever observed at the time, all while claiming to be an independent nation. They were also manufacturing MDMA, and probably lying about most of the equipment being in a hardened facility to begin with.

So that's obviously a rather extreme example, sort of a case study in the stranger corners of former military real estate and internet crime. But just here in New Mexico I know of at least two efforts to adopt Atlas silos as secure datacenters or document storage facilities, neither of which got off the ground (or under the ground, as it were). It seems like a good idea until, you know, you actually think about it. You might recall that I wrote about a secure data center claiming to be located in a hardened facility with CIA and/or SDI ties. That building doesn't even appear to have been hardened at all, and they still went bankrupt.

What if I told you that they were all barking up the wrong tree? If you really want to make a business out of secure underground storage, you need something bigger and with better access. You need a mine.

It will also be very important to make this play in the early Cold War, when there was a much clearer market for hardened facilities, as evidenced by the military spending that period building them rather than selling them off. The 1980s and on were just too late.

There are actually several rather successful businesses built on the premise of secure, hardened storage, and they are distinctively pre-computer. The best known of them is diversified "information management" firm Iron Mountain. You know, with the shredding trucks. And an iron mountain, or rather, an iron mine by that name.

Like most of the large underground facilities that are still commercially successful today, the story of Iron Mountain involves mushrooms. Efficient cultivation of mushrooms requires fairly tightly controlled conditions that are not dissimilar to those you already find underground: cool temperatures, high humidity, and low light. Culinary mushrooms are widely produced in large caves and former mines, which often provide more convenient access, having been designed for bulk transportation of ores.

This might be a little surprising, because we tend to think of underground mines as being small, cramped spaces. That's true of many mines, often for precious metals, for example. But there are also mines that extract ores that are present over large areas and have relatively low value. This requires an efficient method of removing a very large quantity of rock. Modern mines employ some very clever techniques, like "block caving," where a very large rock mass is intentionally collapsed into a prepared chamber that it can be scooped out of like the bottom of a hopper. One of the most common methods, though, and one that has been in use for a very long time, is "room and pillar" mining.

The idea is pretty simple: you excavate a huge room, leaving pillars to hold up the material above. Depending on the economics, geology, etc., you might then "retreat," digging out the pillars behind you as you work your way out of the room. This causes the room to collapse, ideally away from the working area, but not always. Retreat mining is dangerous and doesn't always produce ore of that much value, so a lot of mines didn't do it. They just left the rooms, and their pillars: huge underground chambers, tens and hundreds of thousands of square feet, empty space. Many were dug back into a hill or mountain, providing direct drive-in access via adits. Most, almost all, successful underground storage facilities are retired room and pillar mines.

In the first half of the 20th century, mushroom cultivation was the most common application of underground space. That's what led "Mushroom King" Herman Knaust to purchase the disused Mt. Thomas iron mine near Livingston, NY in 1936. Knaust's company, Knaust's Cavern Mushrooms, was the largest producer of culinary mushrooms in the world. Ten crops of mushrooms were produced each year in the Livingston mine, and as Knaust continued to grow his operations, the mushroom mine became one of the area's principal employers. Knaust dubbed it Iron Mountain.

By 1950, things had changed. Knaust had at least two motivations for his pivot: first, the US mushroom industry was rapidly weakening, upset by lower-cost mushrooms imported from Europe and Asia. Second, WWII had come to a close, and the Cold War was beginning.

In 1952, Knaust told reporters of his experience working with refugees from Europe resettled in New York. Most of them had lost everything to bombing, and they told Knaust how they had attempted to hide their most valuable possessions, and their paperwork, in safer places. The Germans, Knaust read, had come up with the best hiding place of all: disused mines. During the course of the war, the Nazi administration stored valuables ranging from gold bullion to original works of art in former mines throughout their occupied territories. Some of them were large-scale logistical operations, with rail access and organized archival systems.

Now, in the age of nuclear weapons, Knaust thought that this kind of protection would be in more demand than ever. In 1951, he renovated the old mine and installed new ventilation equipment. Most importantly, he bought a 28-ton vault door secondhand from a failed bank. A generator and a security force of armed former police officers rounded out Knaust's new venture: Iron Mountain Atomic Storage.

The bank vault door was mostly for show, and Knaust's description of the mine as "A-bomb proof" and "radiation proof" somewhat stretch the science. But Knaust was a born marketer; his version of nuclear alarmism drew the attention of corporate America like the Civil Defense Administration's pamphlets gripped the public. The entrance to his mine was a sturdy stone block building with iron bars over the windows and "Atomic Storage Vaults" inscribed at the top. He was sure to tell reporters of his estate, home to the world's only mushroom-shaped swimming pool. Over the years of newspaper coverage, the bank vault door at the front of the mine got heavier and heavier.

In the event of a nuclear attack, Knaust reasoned, banks could lose their records of deposits. Insurers could lose track of their policies. Companies of all kinds could lose inventory sheets. The resulting economic chaos would be as destructive as the bomb that started it. If we were to shelter lives, we also needed to shelter information. By the time Iron Mountain Atomic Storage was open for business he had already signed up the first customers, who shipped copies of their records for storage in individual vaults constructed within chambers of the mine. The East River Savings Bank, of New York, proudly described how each of their branches microfilmed new deposits daily for transport to the Mountain.

Iron Mountain sold its services to individuals as well. For $2.50 a year, a consumer or small business could pack records into a tin can for storage in the mine. The cans could be stored or retrieved with local agents stationed in major cities, who sealed them in the presence of the customer and shipped them to the mine by courier.

By 1954, Iron Mountain boasted over 400 customers, mostly large corporations and institutions. It was a surprising hit with newspapers: about 150 rented space to store their archives. Recordak, a subsidiary of Kodak that provided microfilming services, set up a branch office at the mine with representatives who could convert records to microfilm or turn them back into full-sized versions on demand. The consumer part of the business was reorganized in partnership with the Railway Express Agency, one of the descendants and ancestors of our modern American Express and Wells Fargo. Individuals and small businesses could deposit records with any Railway Express agent and request records sent back to their nearest train station.

They quickly faced competition. Perhaps the most interesting was Western States Atomic Storage Vaults, who purchased a disused South Pacific Coast Railroad tunnel in the Santa Cruz mountains. The railroad right of way, near Zayante, California, received a similar conversion to caged storage units. At least a half dozen underground storage companies would be organized between 1950 and 1960.

The atomic storage industry was not always an easy one. Iron Mountain had a slow start, signing up few customers after their initial set of banks and newspapers. The Cuban Missile Crisis gave a considerable boost to sales, though, and revenue almost doubled in 1963 alone. Iron Mountain's inventory expanded from paper and microfilm records to original works of art, and they purchased a second mine, a former limestone mine nearby at Kingston, NY, to expand. They added office and dormitory facilities at both sites, to protect both their extensive staff of clerks and representatives of their customers in the event of nuclear war. "What good are the records if everyone in the firm is blown up," Iron Mountain's executive vice president offered.

Inside of Iron Mountain, behind the vault door, steel doors with combination locks protected individual vault suites ranging from closets to hundreds of square feet. Racks held boxes and cans of individual records deposited by smaller customers. The whole facility was maintained at 70 degrees Fahrenheit and 50% humidity, a task greatly eased by the surrounding earth.

They were dedicated to the privacy of their customers but also had a hard time passing up an opportunity for promotion, telling reporters of some of the publishers and television companies that stored their archives at Iron Mountain, and hinting at a "major New York City art museum" that leased space for its collection. Individual customers included doctors, stamp collectors, and "a whole lot of people who aren't talking as long as the outside world lasts."

The mid-1960s were the apex of the Cold War in the popular consciousness, and Iron Mountain's luck would not last through the broader decline in planning for all-out Soviet attack. The company, now called Iron Mountain Security Storage, shifted the focus of its newspaper quotes towards civil unrest. In a world of campus riots, president James Price said, universities were moving their academic records underground.

Iron Mountain's good (for business) and bad (for society) mood must have been infectious, because competitors flourished even in 1970. Bekins, the moving and storage company, purchased 200 acres in the Diablo mountains of California. They intended to open the first underground storage facility specifically built for that purpose, and plans included a hotel and heliport for convenient customer access. The July 11th, 1970 edition of The Black Panther's Black Community News Service contains perhaps the most blunt assessment of the Bekins plan, one that would prove prescient.

The possibility of World War III is not as much an immediate threat to the life and well being of America's greedy capitalists as is the strong probability of the more severe "political consequences" that might be meted out by the masses, the people, for selfish crimes committed against them.

A year later, the plan had expanded to shelter for 1,000 "executives and office workers" for up to 30 days, an airport that could serve business jets, and a computer and communications center. Bekins said that it would help corporations survive a nuclear war, but was even more important in the event of rioting or terrorism. Bekins specifically called out unrest at UC Berkeley, and damage it had caused to academic records, as evidence of the need.

Blaine L. Paris, number one stooge manager of Bekins Company..., acknowledged that the hideaway, hideout survival shelter's main draw is the widespread fear, on the business executive level, of bombings, the random tossing of molotov cocktails, possibilities of kidnapping...

Some large companies, Bekins said, were planning to set up alternate corporate headquarters in the facility as soon as it opened. It would be something like the Mount Weather of the corporate world. Two years passed, and Bekins reimagined the facility as a regular-use business park rather than a contingency site, but still underground. Joseph Raymond, Bekins' director of Archival Services, quipped that employees might be more productive underground where they'd be "free from the distractions of the surface."

Bekins had bad news coming. The era of atomic storage had come to an end. Corporate fear of popular revolution proved insufficient to fund the ten million dollar project. The Bekins facility would never break ground. Iron Mountain, quietly and under cover of their typical boosterism, had run out of money.

In 1971, a group of investors formed Schooner Capital and bought them out. Their strategy: to focus on business records management and compliance, and largely drop the "underground" and "atomic" part. Beginning in 1978, Iron Mountain built dozens of new storage facilities that were normal, above-ground warehouses. At the same time, they shifted the focus of their sales from security to "information management." New legal requirements and tax regulations meant that records retention had become a complex and costly part of many businesses; Iron Mountain offered to outsource the entire matter. Their clerks collected records from businesses, filed them away, and destroyed them when retention was no longer required.

Iron Mountain remains the largest company in the business today. Most US cities have an expansive Iron Mountain warehouse somewhere on their outskirts, and their mobile shredding trucks are a regular sight in business districts. Still, a certain portion of the Cold War attitude remains. Unshredded records are said to be transported in unmarked vehicles, to avoid attracting attention. Iron Mountain facilities are not exactly hidden, but their locations are not well publicized, and they continue to use armed guards. Distinctive red "Restricted Area" signs surround each one.

And they still have plenty underground.

When you look into the history of Iron Mountain, you will see frequent reference to the Corbis Collection. The story of Corbis would easily make its own article, but the short version is that Corbis was founded by Bill Gates as a sort of poorly-thought-out electronic picture frame company. Over the span of decades, they amassed one of the world's largest private collections of historic photographs and media, and then collapsed into an influencer marketing firm. It is often noted that the Corbis collection, of over 15 million photographs spanning 150 years, is stored at Iron Mountain. This isn't quite correct, but it's wrong in an interesting enough way to make it worth unpacking.

In the 1950s, the Northeast Pennsylvania Industrial Development Commission (NPIDC) formed a task force to investigate opportunities for the reuse of the state's growing number of abandoned coal mines. Coal is mined almost entirely by the room and pillar method, and while there are practical challenges in reusing coal mines in particular, the amount of space involved was considerable. The NPIDC's first proposal was right in line with the cold war: they proposed that the Civil Defense Administration use the mines to store their stockpiles of equipment and supplies.

The Civil Defense Administration wasn't interested; they were worried that firedamp (flammable coal gases) would make the mines dangerous and high humidity would cause stored equipment to rust. Still, the idea rattled around the state of Pennsylvania for years, and sometime around 1953 one such mine near Boyers, PA was purchased by the newly formed National Storage Company. National Storage became one of Iron Mountain's key competitors.

Iron Mountain has become as large as it is by following a fine American economic tradition: monopolization. It outlasted its erstwhile atomic storage competitors by buying them. Western States Atomic Storage Vaults and their railroad tunnel, National Storage and their coal mine, and at least two other similar ventures became part of Iron Mountain in the 1990s.

It is the former National Storage facility in Boyers that holds the Corbis collection. It has a notable neighbor: the largest tenant at Boyers is the United States Office of Personnel Management, which famously holds both clearance investigation files and federal employee retirement records down in the old mine. In 2014, the Washington Post called the Boyers mine a "sinkhole of bureaucracy", describing the 600 OPM employees who worked underground manually processing retirement applications. These employees, toiling away in a literal paperwork mine, were the practical result of a decades-long failed digitization program.

Underground storage is still a surprisingly large business. Some readers may be familiar with "SubTropolis," an extensive limestone mine near Kansas City, which offers 55 million square feet of underground space. SubTropolis has never particularly marketed itself as a hardened or secure facility. Instead, it offers very cost-effective storage space with good natural climate control. Tenants include refrigerated logistics companies and the National Archives. There are a number of facilities like it, particularly in parts of the eastern United States where the geography has been amenable to room and pillar mining.

That's the irony of Iron Mountain: their original plan was a little too interesting. Iron Mountain continues to operate multiple underground facilities, both their own and those they have acquired. Some of them, including Boyers, even have datacenters. The clients are mostly media companies with original materials they cannot easily duplicate, along with holders of legacy government and financial records that would be too costly to digitize. Sony Music stores their studio masters with Iron Mountain, a big enough operation that some of Iron Mountain's underground sites have small recording studios to allow for restoration without removing the valuable originals from safekeeping. Miles of film are stored alongside miles of pension accounts. No one talks about nuclear war. The bigger fear is fire, which is more difficult to contain and fight in these old mines than in purpose-built archival warehouses.

There are only so many masters to store, and the physical volume of corporate records is quickly declining. Atomic vaults hit a limit to their growth. The total inventory of underground corporate storage facilities in the United States today is much the same as it was in the 1960s, with more closing than opening. Offsite records storage is shrinking overall, and Iron Mountain is effectively in the process of a pivot towards (above-ground) datacenters and services.

Still, when you read about Mark Zuckerberg's 5,000 square foot bunker in Hawaii, or Peter Thiel's planned underground project in New Zealand, you can't help but wonder if the predictions of Bekins, and the Black Panthers, were just ahead of their time.


I hope you enjoy this kind of material on Cold War defense and culture. It's one of my greatest interests besides, you know, anything underground. For those of you who support me on Ko-Fi, in the next day or two my supporter newsletter EYES ONLY will be a short followup to this piece. It will discuss underground storage facilities of a slightly different kind: the records vaults constructed by the Church of Scientology and the Latter-Day Saints, and the extent to which these facilities also reflect Cold War concerns.

I am also working on something about waste-to-energy facilities that will probably be an EYES ONLY article, as a companion to an upcoming CAB article on the history of an experimental Department of Energy biomass power plant in Albany, Oregon. But first, I will write something about computers. I have to every once in a while.

2024-10-26 buy payphones and retire

PAYPHONES at High Volume

Existing sites! Earn BIG $$. Money Back Guarantee!

Dropshipping AliExpress watches, AI-generated SEO spam websites... marginally legal and ethical passive income schemes, that serve to generate that income mostly for their promoters, can feel like a modern phenomenon. The promise of big money for little work is one of the fundamental human weaknesses, though, and it has been exploited by "business coaches" and "investment promoters" for about as long as the concept of investment has existed. We used to refer mostly to the "get rich quick" scheme, but fashions change with the times, and at the moment "passive income" is the watchword of business YouTubers and Instagram advertising.

And what income is more passive than vending machine coin revenue? Automated vending has had a bit of a renaissance, with social media influencers buying old machines and turning them into a business. The split of their revenue between vending machine income and social media sponsorship is questionable, but it's definitely brought some younger eyes to an industry that is as rife with passive income scams as your average spam folder. Perhaps it's the enforcement efforts of the SEC, or perhaps today's youth just need a little more time to advance their art, but I haven't so far seen a vending machine hustle quite as financialized as the post-divestiture payphone industry.

For much of the history of the telephone system, payphones were owned and operated by telephone carriers. As with the broader telephone monopoly, there were technical reasons for this integration. Payphones, more specifically called coin operated telephones, were "dumb" devices that relied on the telephone exchange for control. In the case of a manual exchange, you would pick up a payphone and ask the operator for your party---and they would advise you of the price and tell you to insert coins. The coin acceptor in the payphone used a simple electrical signaling scheme to notify the operator of which and how many coins you had inserted, and it was up to the operator to check that it was correct and connect the call. If coins needed to be returned after the call, the operator would signal the phone to do so.

With the introduction of electromechanical and then digital exchanges, coin control became automated, but payphones continued to use specialized signaling schemes to communicate with the coin control system. They had to be connected to special loops, usually called "coin lines," with the equipment to receive and send these signals. The payphone itself was a direct extension of the telephone system, under remote control of the exchange, much like later devices like line concentrators. It was only natural that they would be operated by the same company that operated the control system they relied on.

Well, a lot of things have changed about the payphone industry. The 1968 Carterfone decision revolutionized the telephone industry by allowing the customer to connect their own device. Coin operated telephones in the traditional sense were unaffected, but Carterfone opened the door to a whole new kind of payphone.

In 1970, burglar alarm manufacturer Robotguard blazed the trail into a new telephone business. They imported a Japanese payphone that was a little different from the American models of the time: it implemented coin payment internally. Robotguard connected the payphone through one of their burglar alarm autodialers, a device that was already fully compliant with telephone industry regulations, and then hooked it up to a Southwestern Bell telephone line in a department store in St. Louis. Inserting a dime enabled the phone for a local call (the autodialer was used, in part, to limit dialing to 7 digits to ensure that only local calls were made).

Robotguard had done their homework, consulting the same law firm that represented Carterfone in the 1968 case. They believed the scheme to be legal, since the modified Japanese payphone behaved, to the telephone company, just like any other customer-owned phone. The New York Times quotes Southwestern Bell, whose attitude is perhaps best described as resignation:

Spokesmen for the Southwestern Bell Telephone Company, the operating company in that area, acknowledge that the equipment is in the store, that it is working as described and that it appears completely legal. There is nothing they can do about it at this time, they say.

There was, indeed, nothing that they could do about it. Robotguard had introduced the Customer-Owned Coin-Operated Telephone, or COCOT, to the United States. Payphones were now a competitive business.

Despite a certain air of inevitability, COCOTs had a slow start. First, there would indeed be an effort by telephone companies to legally restrict COCOTs. This was never entirely successful, but did result in a set of state regulations (and to a lesser extent, federal regulations related to long-distance calls) that made the payphone business harder to get into. More importantly, though, the technical capabilities of COCOTs were limited. The Robotguard design could charge only a fixed fee per call, which made it a practical necessity to limit the payphone to local calls. Telephone company payphones, which allowed long-distance calls at a higher rate, had an advantage. Long-distance calls were also typically billed by minute, which made it important for a payphone to impose a time limit before charging more. These capabilities were difficult to implement in a reasonably compact, robust device in the 1970s.

A number of articles will tell you that COCOTs became far more common as a result of payphone deregulation stemming from the 1984 breakup of AT&T. I would love to hear evidence to the contrary, but from my research I believe this is a misconception, or at least not the entire story. In fact, payphones were deregulated by the Telecommunications Act of 1996, but that was done in large part because COCOTs were already common and telephone companies were unhappy that conventional payphones were subject to rate regulation while COCOTs were not [1].

Divestiture did definitely open the floodgates of COCOTs, although I think that the advances in electronics around that time were also a significant factor in their proliferation. In any case, several manufacturers introduced COCOTs in 1984 and 1985.

These later-generation COCOTs were significantly more sophisticated than the mechanical system used by Robotguard. To the user, they were pretty much indistinguishable from carrier-operated payphones, charging varying rates based on call duration and local or long distance. This local simulation of the telephone exchange's charging decisions required that each COCOT have, in internal memory, a prefix and rate table to determine charges. Early examples used ROM chips shipped by their manufacturer, but over time the industry shifted to remote programming via modems. These sophisticated, electronically-controlled coin operated phones that did not rely on an exchange-provided coin line came to be known as "smart payphones" and even, occasionally, as "smartphones."
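To make the charging logic concrete, here is a minimal sketch of how a smart payphone's internal prefix and rate table might have worked. Everything here is hypothetical and invented for illustration: the table structure, the prefixes, and the rates are not taken from any real payphone firmware, which varied by manufacturer. The core idea is a longest-prefix match against the dialed digits, the same basic trick used throughout telephone routing.

```python
# Hypothetical prefix/rate table for a "smart" COCOT, in integer cents
# to avoid floating-point rounding. More specific prefixes win.
RATE_TABLE = {
    "1":    {"initial": 25, "initial_min": 1, "per_min": 25},  # default long distance
    "1314": {"initial": 10, "initial_min": 3, "per_min": 10},  # illustrative local NPA
}

def charge_cents(dialed: str, minutes: int) -> int:
    """Charge for a call: longest-prefix match, initial period plus per-minute."""
    for length in range(len(dialed), 0, -1):
        rate = RATE_TABLE.get(dialed[:length])
        if rate is not None:
            extra = max(0, minutes - rate["initial_min"])
            return rate["initial"] + extra * rate["per_min"]
    raise ValueError("no rate entry for dialed number")

print(charge_cents("13145551234", 5))  # local: 10c for 3 min + 2 extra min at 10c = 30
print(charge_cents("12125551234", 5))  # long distance: 25c + 4 extra min at 25c = 125
```

Remote programming via modem, in this picture, just means replacing the table; the ROM-chip generation shipped it baked into the phone.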

Smart payphones greatly simplified payphone operations and were even adopted by the established telephone companies, where they could save money compared to the more complex exchange-controlled system. But they also made COCOTs completely practical, as good to the consumer as any other payphone. As COCOTs became remotely programmable, the payphone business started to feel like a way to generate---dare I say it---passive income. All you had to do was collect the coins. Well, that and keep the phone in working order, which would become a struggle for the thinly staffed and overleveraged Payphone Service Providers (PSPs) that would come to dominate the industry.

One of the new entrants into the payphone business was a company that specialized in exactly the kind of remote management these new smart payphones required: Jaroth Inc., which would do business as Pacific Telemanagement Services or PTS. Today, PTS is the largest PSP in the United States, but that isn't saying a whole lot. They enjoyed great success in the 1990s, though, and were so well-positioned as a PSP in the '00s that they often purchased the existing payphone fleet from former Bell Operating Companies that decided to abandon the payphone business.

The 1990s were a good time for payphones, and they were also a good time for investment scams. Loose enforcement of regulations around investment offerings, the Dot Com Boom, and a generally strong economy created a lot of opportunities for "telecom entrepreneurs" that were more interested in moving money than information.


The problem of 1990s telecommunications companies funded in unscrupulous ways is not at all unique to payphones, although it did reach a sort of apex there. I will take this opportunity to go on a tangent, one of those things that I have always wanted to write an article about but have never quite had enough material for: MMDS, the Multichannel Multipoint Distribution Service.

MMDS was, essentially, cable television upconverted to a microwave band and then put through directional antennas. It was often marketed as "Wireless Cable," sort of an odd term, but it was intended as a direct competitor to conventional cable television. I think it's fair to call it an ancestor of what we now call WISPs, using small roof-mounted parabolic antennas as an alternative to costly CATV outside plant. Some MMDS installations literally were early WISPs: MMDS could carry a modified version of DOCSIS.

Wireless cable got a pretty bad rap, though. If you pay attention to WISPs, you will no doubt have noticed that while the low capital investment required can enable beneficial competition, it also enables a lot of companies that you might call "fly by night." Some start out with good intentions and just aren't up to the task, while some come from "entrepreneurs" with a history of fraud, but either way they end up collecting money and then disappearing with it.

MMDS had a huge problem with shady operators, and more often of the "history of fraud" type. Supposed MMDS startups would take out television and newspaper ads nationwide offering an incredible opportunity to invest in this exciting new industry. The scam took different forms in the details, but the most common model was to sell "shares" of a new MMDS company for sums in the four-to-five-digit range. Investors were told that the company was using the capital to build out their network and would shortly have hundreds of customers.

In practice, most of these "MMDS startups" were in cities with powerful incumbent cable companies and, even worse, preexisting MMDS operators using the limited spectrum available for such a wideband service. They never had any chance of getting a license, and didn't have anyone with the expertise to actually build an MMDS system even if they got one. They just pocketed the money and were next seen on a beach in Mexico or in prison, depending on the whims of fortune.

These wireless cable schemes became so common, and so notorious, that if you asked a lot of people what wireless cable was, the two answers you'd get would probably be "no idea" and "an old scam."


It only takes a brief look at newspaper archives to find that the payphone industry was a little sketchy. There are constant, nationwide, near-identical classified ads with text like "buy and retire now" and "$150k yearly potential" and "CALL NOW!". Sometimes more than one appears back to back, and they're still nearly identical. None of these ads give a company name or really anything but a phone number, and the phone numbers repeat so infrequently that I suspect the advertisers were intentionally rotating them. This was pretty much the Craigslist "work from home" post of the era.

To understand payphone economics better, let's talk a little about how the payphone business operated. Telephone companies had long run payphones on the same basic payment model: finding a location for the payphone (or being contacted by the proprietor of a location) and then offering the location a portion of revenue. In the case of incumbent telcos, this was often a fixed rate per call. So someone owned the location, and the payphone operator paid them in the form of a royalty.

COCOTs enabled a somewhat more complex model. A COCOT might be located in a business, connected to a telephone company line, and remotely programmed by a service provider, all of which were different companies from the person that actually collected the money. The revenue had to get split between all of these parties somehow, but COCOTs weren't regulated and that was all a matter of negotiation.

Much like the vending machine industry today, one of the most difficult parts of making money with a payphone was actually finding a good location---one that wasn't already taken by another operator. As more and more PSPs spread across the country, this became more and more of a challenge. So you can imagine the appeal of getting into the payphone hustle without having to do all that location scouting and negotiation. Thus all the ads for payphone routes for sale... ostensibly a turnkey business, ready to go.

Ah, but people with turnkey, profitable businesses don't tend to sell them. Something is up.

Not all of these were outright scams, or at least I assume some of them weren't. There probably were some PSPs that financed expansion by selling or leasing rights to some of their devices. But there were also a lot of... well, let's talk about the second largest PSP of the late '90s.

Somewhere around 1994, Charles Edwards of Atlanta, Georgia had an idea. His history is obscure, but he seems to have been an experienced salesman, perhaps in the insurance industry. He put his talent for sales to work raising capital for ETS Payphones, Inc., which would place and operate payphones on behalf of investors.

The deal was something like this: ETS identified locations for payphones and negotiated an agreement to place them. Then, they sold the payphone itself, along with rights to the location, to an investor for five to seven thousand dollars a pop. ETS would then operate and maintain the payphone while paying a fixed monthly lease to the investor who had purchased it---something like $83 a month.
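It's worth doing the arithmetic on that pitch. The $83/month lease and the $5,000 to $7,000 purchase price come from the deal described above; the yields and payback periods below are my own back-of-the-envelope computation on those figures, which show both why the offer looked so attractive and why it should have raised eyebrows.

```python
# ETS's pitch, by the numbers: a fixed lease on a purchased payphone.
monthly_lease = 83
for price in (5000, 7000):
    annual_income = monthly_lease * 12      # $996 per year, guaranteed
    annual_yield = annual_income / price    # ~20% at $5k, ~14% at $7k
    payback_years = price / annual_income
    print(f"${price:,}: {annual_yield:.1%} annual yield, "
          f"{payback_years:.1f} years to recoup principal")
```

A guaranteed 14 to 20% annual return, with no work required and a money-back promise on top, is exactly the shape of bait a Ponzi scheme dangles.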

It was a great deal for the investors---they didn't need any expertise or really to do any work, since ETS arranged the location, installed the phones, and even collected the coins. In fact, most investors purchased phones in cities far from where they lived, such was the convenience of the ETS model. There was virtually no risk for investors, either. ETS promised a monthly payment up front, and the contract said that they would refund the investor if the payphone didn't work out.

The ETS network was far larger than just Edwards could manage. Most of the investment deals were sold by independent representatives, the majority of them insurance agents, who could pick it up as a side business to earn some commission. Edwards sold nearly 50,000 payphones on this basis, many of them in deals of over $100,000. Small-time investors convinced of the value by their insurance agents, many of them retirees, put over $300 million into ETS from 1996 to 2000.

There was, as you might have guessed, a catch. One wonders if the payphones were even real. I think that at least many of them were; ETS ran job listings for payphone technicians in multiple cities and occasionally responded to press inquiries and complaints about malfunctioning payphones bearing their logo. Besides, the telecom industry recognized ETS as a huge PSP in terms of both installed base and call volume.

What definitely wasn't real was the revenue. ETS was a Ponzi scheme. In 2000, the SEC went after Charles Edwards, showing that ETS had never been profitable. Edwards sponsored a NASCAR team and directed millions of dollars in salary and consulting fees to himself, but in the first half of 2000 ETS lost $33 million. The monthly lease payments to investors were being made from the capital put in by newer investors, and even that was drying up.

SEC v. ETS went on for six years, in good part due to an appeal to the Supreme Court based on ETS' theory that a contract that paid a fixed, rather than variable, monthly rate could not be considered a security. In 2006, Charles Edwards was convicted of 83 counts of wire fraud and sentenced to thirteen years in prison.

Edwards was far from the only coin-op fraudster. ETS was not unusual except in that it managed to be the largest. When a class-action firm and several state attorneys general went after ETS, their press releases almost always mentioned a few other similar payphone schemes facing similar legal challenges. Remember all of those classified ads? I suspect some of them were ETS, but ETS also had a more sophisticated sales operation than two-line classifieds. Most of them were probably from competitors.

The payphone industry crashed alongside ETS; ETS almost certainly would have collapsed (albeit likely more slowly) even if it had been above board. Increasing cellphone ownership from the '90s to '00s made payphones largely obsolete, and more and more established telcos and PSPs decided to drop them. One of the reasons for PTS's ascent was its willingness to buy out operators who wanted out: in 2008, PTS bought most of AT&T's fleet. In 2011, they bought most of Verizon's fleet. Almost every incumbent telephone company got out of the payphone business and most of them sold to PTS.

Given all that, you might think that payphone scams were only a thing of the '90s. And they mostly were, but you can imagine that there was an opportunity for anyone who could adapt the ETS model to the internet age.

Pantheon Holdings did just that. It's even more difficult to untangle the early days of Pantheon than those of ETS. Pantheon operated through a variety of shell companies and brands, but "the Internet Machine Company" was perhaps the most to the point. Around 2005, Pantheon built "internet kiosks" where customers could check their email, print documents, and even make phone calls for a nominal cash or credit card payment. Sometimes called "global business centers," these kiosks were presented as an exciting business opportunity to mostly elderly investors who were given the opportunity to buy one for just $18,000.

Once again, the kiosks were real, but the revenue was not. Pantheon placed the machines in low-traffic locations and did nothing to market them. By 2009, more than a dozen people had been convicted of fraud in relation to the Internet Machines.

Pantheon kiosks still turn up on the junk market.

[1] I spent quite a bit of time researching the history of payphone regulation to try to understand exactly what did change in 1984, how many COCOTs operated and on what legal basis from 1970-1984, etc. I did not have much success. What I can tell is that COCOTs were very rare prior to 1984 (so rare that the FCC apparently didn't know of any, according to a 1984 memo, despite the 1970 example), and by the late '80s were very common. The FCC seems to have taken the view, in 1984, that COCOTs had always been legal, and just weren't being made or used on any significant scale. That's somewhat inconsistent, though, with the fact that suddenly after 1984 divestiture a bunch of companies started making COCOTs for the first time. My best guess right now is that from 1970-1984 COCOTs were probably legal but were something of a gray area because of the lack of any regulations specifically applying to them. Some combination of divestiture broadly "shaking up" the phone industry, electronics making COCOTs much more feasible, and who knows what else led multiple companies to get into the COCOT business in the mid-'80s. That led the FCC to issue a series of regulatory opinions on COCOTs that consistently upheld them as legal, culminating in the 1996 act dropping payphone regulation entirely.

2024-10-19 land art and isolation

Prescript: I originally started writing this with the intent to send it out to my supporter's newsletter, EYES ONLY, but it got to be long and took basically all day so I feel like it deserves wider circulation. You will have to tolerate that it begins in the more conversational tone I use for the supporters newsletter. I am going to write a bit about some related local works of art and send that to EYES ONLY instead.

Over on pixelfed I posted these two photos.

Turrell installation at The Crystals

Turrell installation at The Crystals

It's Saturday morning, I have coffee and the cat is here and the work thing I was planning to do has mercifully turned out to not need to be done today, so I have time to kill. Let's talk a little bit about Art before I take on a real project for the day.

So let's talk about the photos first, and then we'll sort of widen our view to the big picture. Neither photo is that good, tbh; I really enjoy architectural photography, but I rarely have more than my phone. It's actually rather difficult to find any good angles to photograph this particular corner from anyway, which is one of the common criticisms of it as a work of art. But I have to actually say what it is! It's a corner of the upper floor of The Shops at Crystals, an upscale shopping center attached to the Aria on the Las Vegas strip. It opened in '09, and the building was designed by prominent architect Daniel Libeskind. It is, in my opinion, not really that interesting of a building. It has a combination of dead mall vibes and Las Vegas energy that is not all that compelling, and it doesn't have the surrealism of The Forum Shops at Caesars, the Las Vegas destination I recommend if you want to see a real shopping mall situation. But, it is connected to an APM (the Aria Express), and I will take the slimmest of excuses to ride an APM.

There aren't a lot of good things I can say about Las Vegas architecture, but one compliment I can extend is that the casino developers have mostly retained a tradition of commissioning fine art for their buildings. Nothing can really unseat Dale Chihuly's "Fiori di Como," the 2,000 square foot, 40,000 pound glass sculpture that occupies the ceiling of the Bellagio's lobby. But the Shops at Crystals took a pleasingly modernist direction by commissioning James Turrell. Turrell is probably the most prominent member of the "Light and Space movement," which can be simply described as an installation art view of architecture with a particular emphasis on architectural lighting. Turrell's portfolio includes an array of works he calls "Skyspaces," often vaguely gazebo-like structures with apertures in their ceilings intended to frame the sky as if it were a canvas. Most examples are in private ownership; the only one I have seen is "Dividing the Light" at Pomona College (Turrell's alma mater) in Claremont, California. I do recommend a visit.

There's something important to understand about the Skyspaces: the idea is more or less that the art is the sky, and the space is only there to structure your perception of it. Turrell has described them as naked-eye observatories. The naked-eye observatory has a long tradition, perhaps unsurprisingly, since it was the only kind of observatory until the development of the telescope. Stonehenge is a rather famous one (I am referring, of course, to the Stonehenge of Odessa, Texas. What else?). A lot of Turrell's work is like this, creating a space to structure your view of the world beyond it, and he is particularly interested in the sky as a subject. Stick a pin in that.

So back to the shopping mall. Turrell did a scattering of different projects for the Crystals (I am henceforth dropping "The Shops At" for sanity), including the monorail station. Now, I know you are thinking, what could possibly appeal to me more than a James Turrell monorail station? Why have I not up and moved to Las Vegas to devote my life to the preservation and interpretation of this remarkable artifact?

Well, because it kind of sucks, is why. Almost all of the work Turrell did at the Crystals feels badly hampered by the design of the larger building and the practical necessities of an upscale shopping mall. It's hard to produce that remarkable of a perceptual experience in a wide, crowded hallway. The monorail station doesn't even always have Turrell's lighting turned on, in my experience. What stands out more is the space I photographed, basically an awkward mezzanine floor that exists mainly to be a hallway to the monorail station. You can't help but feel that it was handed over to Turrell because they realized they'd made a mistake and there wasn't really anything else they could do with the square footage. If you're in Las Vegas I would go to the Crystals and take a look, because it is something, but it's a very long ways from a masterpiece. Pretty much every view of it is intermediated by escalators or food court signage; it's jarringly out of context, and it exists in a broadly uncomfortable part of the building that is too far from the ground floor to appeal to shops. Instead, it hosts a traveling exhibition on Princess Diana. And if that's not dead mall energy, I don't know what is.

So why are we talking about this, besides that I get to gush a little about Turrell? Well, I really wanted to talk a little bit about land art, and I think these smaller Turrell works are a good inroad. Land art can be succinctly described as art that makes use of, or consists of, the landscape. One of the prototypical works of land art is the Spiral Jetty, built by Robert Smithson on the shores of the Great Salt Lake. It is what it says on the tin, a rock jetty that reaches out into the lake before turning in on itself in a tightening spiral. It was installed in 1970; as the lake rose, it spent many years completely submerged below the water, eroding all the while. But the water is now receding, and it is revealed once again, as a faint rock spiral in a dry plain. There is some active debate over whether or not it would be appropriate to make repairs. It is often an accepted part of land art that it will change over time due to natural processes. At the same time, the Spiral Jetty's exact fate (to be submerged and then later beached, as it were) was not foreseen and is largely a result of human disruption of the ecosystem. So you can make arguments either way.

In any case, it is an interesting aspect of the work (which merits a discussion that makes up more than half of the Wikipedia page) that it exists, today, mostly in the form of photographs. For much of its history it was entirely invisible, and more recently it has reemerged, but in a severely eroded state. If you are killing time in Utah on the wrong side of the lake, say because you have been at the nearby Golden Spike National Historical Park, you should go see it. Or perhaps not see it, as it no longer resembles Smithson's original creation. This is, ultimately, the fate of all land art, as it is the fate of the land. Think about that while you look at it.

One of the most famous works of landscape art is Walter De Maria's "The Lightning Field," in Catron County, New Mexico. The Lightning Field has something in common with the Spiral Jetty and several other prominent works of land art: the Dia Art Foundation. Dia was founded in New York City in the '70s with Schlumberger money, and had a rather explicit mission of funding art works that are particularly expensive. This focus on large projects and the timing of their heyday lead them naturally to land art. I would wager, just off the top of my head, that roughly half of the prominent works of land art in the United States are owned by Dia.

Land art really is big, and that can be its undoing. Turrell's Skyspace at Pomona ran over $2 million, and it's not even really land art, I'm just making a connection there in service of what I originally set out to talk about but still haven't gotten to. I'm not sure if the total original cost of The Lightning Field has been publicized, but it relied on grant funding from multiple art foundations and the state, and a refurbishment about a decade ago ran nearly half a million.

But, well, let's be real, it's not that unusual for original works of fine art to run into the millions, and that's especially true of sculpture which often requires fairly sophisticated fabrication and installation techniques. Land art at its best tends to rely on sophisticated construction techniques, as well. The Lightning Field, for example, required five months of extensive surveying by land and air. Part of this was to produce what we would now call a digital elevation map, in order to create the field's flat top despite the varying terrain.

I am sort of purposely not describing The Lightning Field here. It's not that it defies explanation, it's actually very easy to describe. In an article that De Maria wrote himself for "Artforum" to describe the project, he says that "the sum of the facts does not constitute the work or determine its esthetics." This is sort of a pretentious thing for a sculptor to say before rattling off a bunch of large numbers, but he has a point: The Lightning Field is literally a bunch of poles stuck in the ground, which is easy to tell you, but gives you very little idea of what it is actually like.

This is a common and important aspect of land art. When I worked for Meow Wolf, we talked a lot about "immersive art," which is pretty much the term that has come to describe "whatever Meow Wolf is." If we allow ourselves some rose-colored glasses, most land art projects were a form of immersive art, an earlier form that seems to predate the forthright commercialization of the projects I got to work on, like Meow Wolf's Omega Mart. To be fair, the commercialization is part of the work, but it sure is on display. Immersive art means that "you have to be there," and "you have to be there" is a great opportunity for the real estate developer.

Ah, but this is indeed a rosy view of the past. There is another very interesting trait of prominent works of land art. Since I have done such a paltry job of describing The Lightning Field, go ahead and look up some photos. They're worth a thousand words, and so you'll get... several thousand words of information. You will quickly notice that there are very few photos of The Lightning Field, and some of them turn out to be crops or recolors of the others. The Dia Art Foundation, apparently according to Walter De Maria's wishes, prohibits photography, or even the presence of cameras.

I have never been to The Lightning Field. This is also something of a surprise; I'm in Catron County reasonably often. But it costs either $250 or $150 to visit depending on the season, and the larger issue I have encountered is that the visiting nights sell out immediately.

There is no public access to The Lightning Field. You have to make arrangements with Dia to stay the night at a cabin on the property. Lots of people do this and end up writing travelogues about how the experience changed them, and God knows that I probably will too, some day. But after reading enough of these travelogues you start to go a little mad. You are reading someone else's account of an experience---something that is fundamentally an experience, not a sight or a sound, not even space or light---that you have not had and probably never will. It's like when you meet someone and the main and only thing they have to talk about is their nomadic travels of Europe, but instead of the routine lifestyle of people with Silicon Valley salaries and few attachments, it is supposedly a great work of art. Like photographs of the Spiral Jetty, personal essays about The Lightning Field describe something that hardly exists.

The magazine articles replace the art.

The charmingly-named "Art World Follies" issue of the journal October featured an essay about The Lightning Field by art historian John Beardsley. It is titled "Art and Authoritarianism." It's an exercise in self-control to not quote nearly the entire thing here, but it is available on JSTOR if you have just three pages worth of time to kill. Let me take just this, from the introduction:

...The directive posture assumed toward the viewer by De Maria and Dia suggests that both artist and patron lack confidence in either the quality of the work or the discernment of the viewer.

I can respect that Dia has certain practical concerns that encourage them to limit access to the site, such as preservation of both the artwork and the delicate desert ecosystem that it incorporates. But Beardsley points out that, like most landscape art, The Lightning Field is in a remote location. Distance and unimproved roads provide a natural limiting effect on visitation to these sites; I've been to the Spiral Jetty several times and seldom seen more than one other visitor around---fewer than are permitted to stay at The Lightning Field each night. And that's another Dia-owned site, although not one originally commissioned by them. They clearly do not apply such restrictive measures to everything.

It might be tempting to attribute it to finances, especially after seeing Beardsley complain about the required $30 donation, which has since at least quintupled. I do think that Dia has some financial struggles, but no doubt they could raise more revenue by accepting more visitors to the site.

So, while it is tempting to blame Dia for their restrictive attitude, it's clear that Walter De Maria shares in the fault. Many of the restrictions are in place at his request; the whole notion that you can only see the artwork through a 24-hour stay in a remote cabin to which you were transported by Dia staff was apparently part of his vision.

Artist's vision or not, it is an affront to the public.

There are no doubt some regional politics at play. I cannot help but view Dia with skepticism. Dia is a high-society NYC institution with galleries in New York and, incongruously, several of its most prominent holdings in remote parts of Utah and New Mexico. That land art requires land is self-evident, but it also tends to require a degree of isolation. The most prominent and ambitious land art projects, even when conceptualized in the more populous East, tend to be actualized in the West. Here, land is our most important asset, and in places like Catron County, it often seems to be our only asset.

So you can see the appeal to land artists. But you can also see the indignation when those land artists claim the very land as their art, and keep it for themselves.

To say that The Lightning Field is a betrayal of Western values is probably a little over the top. Besides, it takes only a casual reference to the Bundys to show that those values are not universal. But it is fair to say that the kind of person who travels the west and is interested in land art is the type of person who is interested in the land itself, and holds it dearly. There is something unbearable about Walter De Maria, an artist from and in New York City, making his most famous work out of a slice of our desert and then narrowly dictating the terms on which we can see it.

The earth did not set forth one hundred thousand acres of lava across El Malpais and then establish an elaborate booking policy; the only price of admission is the effort it takes to cross such difficult terrain. The beautiful and unique Quebradas are remote but open to anyone willing to make the trip. De Maria seems to view his work as part of this tradition: "the land is not the setting for the work but a part of the work." And yet he apparently thinks of himself as being far above it.

Much of the beauty of the land is in the discovery. De Maria writes that "the sky-ground relationship is central to the work." Anyone who loves the desert can tell you that the sky-ground relationship is different everywhere you look, that it does not photograph well but must be experienced, that the most important and striking examples of it are found by chance, or by the dedication of long hours spent looking. Yet The Lightning Field, ironically, offers no such experience. It is intensely curated, guarded by Dia's many restrictions, relentlessly interpreted for its scant audience by the demands its now-dead creator makes of them. "Isolation is the essence of Land Art," he said.

Smithson built the Spiral Jetty and then walked away. It is technically owned by Dia but you would be hard-pressed to tell it apart from the public land that surrounds it. It is, nonetheless, truly isolated. At The Lightning Field, they make a show of leaving visitors on their own, but in a way that sends the exact opposite message: you are being allowed to glimpse something special, but only on its creator's terms, a creator who has made very certain that his presence will not be forgotten.

I shouldn't be too hard on the Dia foundation. Not only the Spiral Jetty, but also Nancy Holt's "Sun Tunnels" are owned by Dia and open to the public in the way typical of things found in the desert. Like the land itself, they leave it to the visitor to have an experience of their own.

Unfortunately, this demand for isolation that turns to isolationism has become too typical of land art.

We started, you might remember, with James Turrell. His work at the Crystals is open to the public in the most commercial sense possible, perhaps to its detriment. Even there, he has made a concession to isolationism. By far the best part of his multi-installation work there, titled "Akhob," was upstairs in the Louis Vuitton. Access required not only a reservation but passing muster with the high-end store's doorman. Unfortunately, it doesn't seem to have survived COVID: Louis Vuitton stopped advertising Akhob in 2021 and, today, its fate is unknown besides that there is no access. Like the submerged Spiral Jetty, we can experience it only by photos, unless perhaps some shift in the economy causes the waters to recede.

Turrell, as you might suspect, can be viewed as a land artist. The Skyspaces have a natural connection to the land art movement, and as Turrell became more ambitious, his projects became larger, taking on the scale of landforms. Since 1979, Turrell has been working on "Roden Crater." It is a cinder cone near Flagstaff that may one day become the greatest of the Skyspaces, excavated and reconstructed as a naked-eye observatory.

It is a huge vision that has faced a great deal of struggle. Despite many announced opening dates, it remained "under construction," closed to the public for 45 years. More recently, a $10 million donation from Kanye West and a partnership program with Arizona State University have brought in renewed funding, but a "tentative opening date" of 2024 looks set to pass just like the last five. In the meantime, it is mainly ASU students (a few of whom have been entitled to visit the site by ASU's partnership agreement) and celebrities who have been found worthy. Besides Kanye West, with Kim Kardashian as a +1, Drake recently used Roden Crater as an Instagram backdrop. Clearly, the mandatory donation to experience the isolation of Roden Crater is a great deal more than $250. Perhaps I shouldn't be so cynical, but the main benefactor of Roden Crater is the Dia Art Foundation, and plans call for a set of cabins by which visitors will experience it.

Massive land art projects have a tendency towards vaporware, but that's not to say that they never escape. Michael Heizer's "City" is in a similar vein to Roden Crater, also initiated in the 1970s, also blowing through its proposed opening dates, also admitting no visitors due to its incompleteness. But City made it out: in 2022, it was finally declared open. It is managed by the Triple Aught Foundation, which has apparently learned a thing or two from Dia.

Triple Aught Foundation, the 501(c)(3) that oversees and operates Michael Heizer's City, has complete discretion as to the acceptance of any visitor request. City is located on private property and only invited guests are permitted on the property. All other visitors will be denied access to the property. Invited guests must advise Triple Aught Foundation of any medical conditions. The sculpture City is a registered work, protected by federal copyright law. Triple Aught Foundation has a strict copyright enforcement policy regarding unauthorized photographing or filming of the work. No unauthorized reproductions, public display or distribution of copies of the work, in whole or in part are permitted. Anyone violating this policy will be immediately asked to leave.

Only six visitors are allowed per day, three days a week, for a maximum of three hours, weather permitting, for a fee of $150, from May to November. Reservations are available on a strictly first-come-first-served basis.

You, I can say with a fair degree of confidence, will only ever experience City in the form of the few photographs the Triple Aught Foundation has seen fit to release. The Lightning Field, Roden Crater, they are all submerged, not by the waters of the Great Salt Lake but by the inability of their creators to let land art be like the land itself: unrestricted, unconfined, unassuming.

In the essay collection "LAND/ART New Mexico," curator Lucy Lippard writes that "I've come to the reluctant conclusion that Land Art is for city people."

I live between inhabited and mostly uninhabited areas---which makes this essay a kind of NIMBY rant: not in my backyard, not on my back forty. Given the fact that I have spent my life writing about art (sometimes Land Art), and ranting about the importance of public art, this sounds like a kind of betrayal. But it's hard to imagine what kind of art would work here, at the edge of a tiny village in north central New Mexico, looking out across a highway to private ranchlands and distant mountains. When I was a citydweller, I might have welcomed the sight of some visual extravagance, or oddity, or subtle highlights to my daily surroundings. But the fact remains that even semi-rural New Mexico is hard to improve upon.

Perhaps that's the problem, perhaps land art as a movement is fundamentally at odds with appreciation of the land itself. We might view Turrell, Smithson, De Maria, Heizer as entitled for thinking that the land needed their help. It is already art, and it always has been.

But we are humans, and we have always been inclined to interpret the land in the context of its impact on us, and our impact on it. So often that impact is happenstance, and more often for the worse than for the better. There must be some room to manipulate the land entirely by intent, in service of aesthetics and meaning rather than commercial exploitation. Indeed, the Land Art movement viewed itself in part as an anti-commercial backlash to the museums and galleries that held, and confined, so many forms of fine art. And yet, some of the greatest works of the movement are displayed in conditions more restrictive, more removed from the nature of the land, than the upper floor of a Las Vegas Louis Vuitton.

"Isolation is the essence of Land Art." I am inclined to agree. But isolation is not made, it is found. De Maria went to Catron County to seek it out, but somehow left thinking that he had created it. This is not New York City, you do not find space to think and experience behind a rope stanchion and a guest list. The land is already there, and land artists should trust their audience to experience it.

Spiral Jetty, 2021

2024-10-12 commercial HF radio

According to a traditional system of classification, "high frequency" or HF refers to the radio spectrum between 3 and 30 MHz. The label now seems anachronistic, as HF is among the lowest ranges of radio frequencies that see regular use. This setting of the goalposts in the early days of radio technology means that modern communications standards like 5G are pushing major applications into the EHF or "extremely high frequency" band. The frontiers of basic radio technology now lie in the terahertz range, where the demarcation between radio waves and light is blurred and the known techniques for both only partially apply. HF, by contrast, is ancient technology. HF emissions can be generated by simple, brute-force means. Ironically, this makes HF a bit difficult: the incredible miniaturization and energy efficiency of modern electronics make HF radio hard to receive and transmit in a reasonable footprint, one of several reasons that HF radio sees little consumer use.
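If you want a sense of scale here, the relationship between frequency and wavelength is just λ = c/f, and it goes a long way toward explaining why HF equipment resists miniaturization. A quick sketch (the 28 GHz figure is a representative 5G millimeter-wave frequency I've picked for contrast):

```python
# Wavelength in meters from frequency in hertz: lambda = c / f.
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

# HF spans 3-30 MHz, so wavelengths run from roughly 100 m down to 10 m.
# Efficient antennas are sized in fractions of a wavelength, hence the
# large footprint of HF installations.
for label, f in [("3 MHz (HF low edge)", 3e6),
                 ("30 MHz (HF high edge)", 30e6),
                 ("28 GHz (5G mmWave)", 28e9)]:
    print(f"{label}: {wavelength_m(f):.4g} m")
```

Even a quarter-wave antenna at the bottom of the band is on the order of 25 meters tall, while a 28 GHz antenna element fits on a chip.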

Let's briefly consider the propagation characteristics of the HF band, which are its most remarkable property. HF wavelengths are long enough that the signals can reflect and refract off of layers in the earth's atmosphere, most importantly the ionosphere. Somewhat like the skin effect observed with AC electricity or surface tension in liquids, HF emissions have a tendency to bend their path to follow dielectric boundaries. All of these effects are mercurial and difficult to predict; the reliability of sky or ground wave HF propagation can depend on the time of day, the weather, the number of sunspots. All of this makes HF radio a bit of a pain in the ass, but it can be worth it to achieve a feat that higher radio bands cannot: propagation beyond the line of sight.

As a rule of thumb, radio emissions in the VHF band and above behave much like light. Many materials are more transparent to RF than they are to light, but still, most modern radio communications will not propagate beyond the horizon, over a hill, or even past a sturdy building. An HF transmission, by contrast, can be received around the globe in good conditions.

HF radio thus appeals mostly to users that desire long-range communications with minimal infrastructure, and that have the sophistication (of operating practice or technology) to handle the vagaries of HF. The usual suspects are militaries, who fall more on the side of technical sophistication by using computer-driven link establishment systems, and amateurs, who enjoy the complexity of the operating practice. Other major HF radio applications include international broadcasting (often of either national or religious propaganda), intelligence and law enforcement agencies, and communication with ships and aircraft at sea. Nearly all of these applications are giving way to satellite communications, but the relative simplicity and low cost of maintaining HF equipment, and its independence from vulnerable satellites, give it enduring appeal to government users.

Let's consider a few interesting examples of these government applications, although they are not the focus of this article. Military radio is a curious combination of secretive and well-documented. The US military possesses multiple major radio systems, but is theoretically consolidating onto the Joint Tactical Radio System (JTRS). JTRS is an infamous military acquisition debacle and has run vastly overbudget and behind schedule while failing to deliver on many of its promises, but is now in daily use and consists of a diverse lineup of software-defined radios that can operate a variety of modes across many bands. In other words, it is a conceptually fairly simple system that hides enormous, combinatorial complexity in its software. This is modern military tradition.

Non-military systems are more recognizable to those without a background in 1990s military technology programs. One of the largest civilian government HF radio systems is COTHEN, the Customs Over-The-Horizon Enforcement Network. COTHEN was constructed, as the name suggests, by Customs and Border Protection. It is now widely used by other federal law enforcement agencies with remote field operations, as well as by the military when cooperating with law enforcement agencies. The principal day-to-day business on COTHEN is drug interdiction by CBP and the Coast Guard. COTHEN employs second-generation Automatic Link Establishment (ALE), a popular system developed by the military to allow HF radios to "discover" working frequencies between two locations. Besides various mobile radios, COTHEN has fixed radio sites throughout the country, including one in the high plains east of Albuquerque.

There are many smaller HF radio systems operated by executive agencies, mostly for continuity of operations. Many are integrated into SHARES, a joint system sponsored by the Department of Homeland Security. For a more specific example, the Department of Energy has installed HF radio equipment at many of its facilities including national laboratories, infrastructure sites, and the historic AEC campus at Germantown. The DoE operates its own ALE network, but most (and very probably all) of its sites are also seconded to SHARES. DoE also makes use of HF radio for communications with Office of Secure Transportation vehicles.

We understand that there are government applications of HF radio; that's probably no surprise to anyone. But what about commercial applications? The complexity of HF operations, the size of the antennas, and ready availability of other communications options (like the internet) limit the appeal of HF to business users. Still, there must be some? Obtaining a license to use HF radio is a reasonably simple procedure: the FCC has allocated a number of HF ranges to the IG (industrial/business pool) service. As is usual in the IG service, frequency allocations are not exclusive and there may be other users. Interestingly, FCC regulations place a significant break point at 25MHz: 25MHz to 30MHz is technically HF, but FCC rules don't really differentiate between these frequencies and the more common business pool uses in VHF. Below 25MHz, though, special rules apply.

47 CFR 90.266(b):

Only in the following circumstances will authority be extended to stations to operate on the frequencies below 25 MHz:

(1) To provide communications circuits to support operations which are highly important to the national interest and where other means of telecommunication are unavailable;

(2) To provide standby and/or backup communications circuits to regular domestic communications circuits which have been disrupted by disasters and/or emergencies.

As is often the case in federal regulation, there are some additional terms that require a little closer reading. 47 CFR 90.35, which governs the Industrial/Business Pool radio service, has a table of possible frequency allocations. For the range below 25MHz, the table specifies the following restriction, in part:

(c)(i) Only entities engaged in the following activities are eligible to use this spectrum, and then only in accordance with § 90.266:

(A) Prospecting for petroleum, natural gas or petroleum products;

(B) Distribution of electric power or the distribution by pipeline of fuels or water;

(C) Exploration, its support services, and the repair of pipelines; or

(D) The repair of telecommunications circuits.

Available bandwidth at these low frequencies is already rather constrained by the allocations, but the situation is worse than it appears. HF propagation behavior means that radio operators seldom have their choice of anywhere in the HF spectrum; usually there are only limited "windows" in which propagation is good. It would not take very many users to create congestion in this valuable long-range spectrum, so licenses are limited to users without other options.

Unsurprisingly, special restrictions below 25MHz mean that frequencies just above 25MHz are quite popular. For example, from 25MHz-25.5MHz you can find a veritable who's-who of the petroleum industry, who use HF radio for communications with off-shore oil platforms. You also find some other peculiar licenses in this range. For example, Ritron is a manufacturer of radio equipment and holds a license for the use of 25-50MHz for the purpose of demonstrating and testing that equipment. This 25-50MHz range is more or less "low band VHF," which was formerly in reasonably common business use, but is now becoming rare. Online discussion leads me to believe that it may not be possible (or is at least very difficult) to obtain IG licenses for the low band today, but some people that had them continue to renew them.

When I say "people" here, I mean it. One of the more common types of licensee for this range is... people, filling out their application with a title of "self" or "person." Although it is public record I am hesitant to give names or addresses of these people, but it explains a lot about them that you can almost always (invariably, from my spot-checking) find an amateur radio license for the same person. I have known some amateur radio operators that obtained their General Radiotelephone Operator license, a broad license for maintaining certain types of commercial radio equipment, more or less for the hell of it. I suppose obtaining a low-band IG license is similar, and adds some more bands to your potential operations.

Let's limit our consideration further, then, to the rarefied frequencies below 25MHz. Besides special justification, applicants for these frequencies are limited to certain emission types (generally narrow 2.8kHz emissions), must use equipment capable of tuning across the entire range, must submit their written communications plan, and are prohibited from testing or exercises that exceed seven hours per week. There are relatively few such licenses, the ULS returns just 61.

I totaled up the licenses by user type. For companies that provide radio services, I categorized them with the industry they normally serve, except when that industry was emergency communications itself. This sometimes required a bit of a subjective call, as we'll see when looking at some specific licenses. But here are license counts by type:

  • Telecom providers: 23
  • Electrical utilities: 15
  • Emergency communications/disaster relief contractors: 11
  • Petroleum: 7
  • Railroads: 2
  • Ranches: 1
  • Weird: 1
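For what it's worth, the tally itself was just a hand count over ULS search results; in code it would amount to a Counter over the category assignments. The records below are placeholder stand-ins, not the real ULS data:

```python
# Count license records by assigned category. These records are
# placeholders; the real input was the 61 licenses returned by ULS.
from collections import Counter

licenses = [
    {"licensee": "AT&T", "category": "Telecom providers"},
    {"licensee": "Pacific Bell dba AT&T California", "category": "Telecom providers"},
    {"licensee": "National Grid USA Service Company", "category": "Electrical utilities"},
    {"licensee": "CSX", "category": "Railroads"},
]

counts = Counter(record["category"] for record in licenses)
for category, n in counts.most_common():
    print(f"{category}: {n}")
```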

Let's discuss these a bit by category. Telecom providers are a fairly obvious user group, as are electrical utilities. Both types of organizations operate infrastructure over large areas and will be expected to begin recovery quickly after a natural disaster or another event that disrupts conventional communications infrastructure. These licensees include AT&T, with the single largest license count (9), Pacific Bell dba AT&T California at the second largest (4), down to smaller entities like the Grand River Dam Authority. A notable selection is National Grid USA Service Company, which despite its generic and almost sketchy name holds a fixed location license for the Nine Mile Point Atomic Power Station among other power plants.

Verizon New York Inc. holds a license for a number of fixed locations including an inconspicuous brick building (reminiscent in its design of early telephone infrastructure but likely today a remote exchange) in Philmont, NY and a mountaintop site near Schenectady. The license seems to cover multiple telephone exchanges, only some of which have apparent HF antennas... some of the locations listed may be historic; nearly all commercial HF licenses are old, with renewal histories stretching back to the beginning of online records in 2001. Besides, virtually all of these licenses include either a nationwide mobile or nationwide temporary fixed location, making the listed locations less important than they might otherwise be.

Most licenses are also nonspecific as to frequency. It is the nature of HF radio that any given frequency will not reliably provide good propagation. The frequency lists on commercial HF licenses routinely stretch on for multiple pages of 20 ranges each, giving operators ample choice. Besides, the CFR requires these licensees to be frequency-agile across the band, in part because the FCC may require them to stop using a given frequency at any time.
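As a toy illustration of that frequency agility requirement---the range list below is invented, not copied from any actual grant---operating software might validate a candidate frequency against the license's authorized ranges before tuning:

```python
# Hypothetical range list; real grants enumerate pages of such ranges.
LICENSE_RANGES_MHZ = [(2.000, 2.107), (2.194, 2.495), (4.438, 4.650)]

def authorized(freq_mhz, ranges=LICENSE_RANGES_MHZ):
    """Return True if the candidate frequency falls within any
    authorized range on the (invented) license grant."""
    return any(lo <= freq_mhz <= hi for lo, hi in ranges)

print(authorized(2.3))   # True: inside 2.194-2.495
print(authorized(3.0))   # False: between authorized ranges
```

Since the FCC can order a licensee off a given frequency at any time, real station software needs exactly this kind of check before every transmission, not just at configuration time.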

The next largest category of licensees is companies that provide communications services to relatively nonspecific customers. Some, such as L3Harris, are principally in the defense industry but likely also provide services to infrastructure providers and regional governments. A few are tiny, like Hazard Zone Technology LLC, which does business from a residential address. Judging by forum discussions of the FCC's approach to HF licensing, it is possible that some of these smaller licensees are largely fictitious entities created to entice the FCC to issue a sub-25MHz license despite the restriction to emergency communications. While not conclusive, the fact that the same individual holds both a general radiotelephone operator license and an amateur extra license with a vanity call sign is certainly suggestive of a certain personality. There are several such examples, not an inconsiderable portion of the 61 total.

Others are familiar names in an unfamiliar context. Cisco Systems holds a license that lists their San Jose and Research Triangle Park campuses, besides a nationwide mobile location. This license is likely intended to cover the use of HF equipment as a backup point-to-point link for customers of fully managed industrial communications services, but it's hard to say exactly. They have apparently demonstrated a mobile communications trailer for emergency response coordination at conferences.

Past those three categories, we get into a long tail. A railroad, CSX, holds two licenses. A ranch holds one; it's unclear how the ranch would qualify under the requirements, but they may have held the license long enough to be grandfathered. The justification listed on the license is simply farm operations, not a 90.266 justification statement as found on most of these. The sub-25MHz restrictions have applied since 1983, making it difficult but certainly not impossible to hold a grandfathered license. One license is held by a missionary transportation group that lists a location in Guam; they, too, may have held the license since before 1983. They likely use HF to communicate with facilities on outlying islands.

There is exactly one license that I have described as "Weird." It was issued to the Bran Ferren Corporation in 2000. Bran Ferren is a bit of a character, a former executive at Walt Disney Imagineering who apparently has a side business of building off-road vehicles. This helps explain the justification statement, "MANUFACTURER AND DEVELOPER OF ALL TERRAIN COMMUNICATIONS VEHICLE." I would like to know exactly which vehicle this relates to, but the Bran Ferren Corporation keeps a low profile as compared to Ferren's main business venture, Applied Minds. Bran Ferren Corporation holds an active USDOT motor carrier number, but has had zero vehicles inspected in the last two years. That may not mean anything; they are listed as a private (not for hire) carrier, and inspections may not be required. It's all a bit of a head-scratcher.

These 61 licenses for sub-25MHz commercial radio represent only a tiny fraction of the activity in the HF band. Besides amateur radio, a long list of government users are authorized to use HF radio by the NTIA, rather than the FCC. Indeed, the NTIA master file includes at least hundreds of entries under 25MHz; more detail will have to wait on my finally finishing the parser for the more recent format they have used for FOIA disclosures (universal rule: if you spend many hours writing tools to parse a PDF export of a relational database back into a relational database, they will change the format of the PDF).

High-frequency traders have created renewed interest in HF radio, because of its low latency for global communications and the increasing ease of implementing automatic link negotiation with SDRs. Sniper in Mahwah has some well-known writing on this topic. To date, the FCC has authorized this activity only on an experimental basis. In 2023, a group of HFT firms organized as the Shortwave Modernization Coalition (SMC) submitted a petition for rulemaking to allow regular use of the 2-25MHz range. The petition opened docket RM-11953, which remains open. Most recently, the FCC conducted a series of meetings between the SMC and federal spectrum users to discuss the impacts of such a new radio service.

Various documents filed by SMC do contain interesting details. SMC members are operating at least fourteen experimental HF sites, with one operating since 2016 and most since 2020. The median transmit power (EIRP) is 21.5kW, and while the experimental licenses authorized 5.06-30MHz, SMC members conducted most activity between 6.675 and 21MHz. The ARRL and various amateur radio operators have filed comments opposing the change; a competing HFT radio operator has discussed a counter-proposal that would impose performance obligations on commercial HF radio operators ("use it or lose it" rules). The FCC has not yet produced a Notice of Proposed Rulemaking, the next step in the process. They are not obligated to, and it is possible the proposal will not reach that stage. For its part, the federal government has weighed in (in the form of comment from the NTIA), requesting that the FCC either extensively study how interference with federal users would be mitigated, or exclude the use of any frequencies allocated to federal use. This includes certain bands, like the Radio Astronomy service allocation at 13MHz, that exist as a matter of federal policy due to foreign agreements. For my own part, I am skeptical that the FCC will act on the SMC petition unless the scope of the SMC's proposed use is reduced.

Finally, it is worth noting that these commercial HF licenses do not represent the full extent of private industry use of HF radio for continuity of operations. Some critical infrastructure operators have been sponsored by federal agencies as operators of SHARES stations. However, SHARES documents suggest that there are relatively few of these stations (around 100), and that they almost completely overlap with commercial HF licensees. State and municipal governments also operate HF radio stations, which are generally licensed under public safety radio services. A curious exception is the City of Lafayette, Louisiana, which holds an IG commercial license for HF frequencies. I suspect that license is actually for the use of the publicly-owned Lafayette Utilities System (it lists electrical distribution as justification), and was issued in the name of the City of Lafayette for bureaucratic reasons.

2024-09-26 the GE switched services network

We currently find ourselves in something of a series, working our way from private lines to large private line systems like the four-wire private-line national warning system. Let's continue to build on the concept of the private line into large corporate systems.

In principle, a large organization in want of a private telephone system could build one out of a set of private lines and switches, such as under a Centrex CU (Customer Unit) arrangement. And this did happen: one common type of private line was the tie line, a private line used to link two switches together (which could be PABXs or Centrex) so that users could call from one to the other without using a conventional dial telephone line. This could save money, if usage of the tie line was heavy enough and especially if the two switches were far enough apart that a standard call would be long-distance.

Consider a corporation with two large offices, each with a PABX. If they are in different local calling areas, calls between them placed by dial line would be long distance. If employees at the two offices call each other often, the long distance bills would add up to more than the fixed monthly cost of a tie line from one office to the other. There are a few different ways to solve this problem, such as getting WATS (wide area telephone service) at one or both offices, but it illustrates the general idea that getting a fixed private line can sometimes be a cost-saving measure compared to placing a lot of calls over standard dial service.

But what about a bigger organization, with many offices? You can imagine that getting a huge number of tie lines between different offices, planning where those tie lines should be located and how many were needed on each link, could become a feat of traffic engineering on par with the telephone company's own work. It might be easier to just pay the telephone company to work it out, and indeed, that's what large organizations often did.

So let's say the telephone company meets this request by designing a scheme of tie lines and Centrex exchanges. It's not so far off the mark to say that this describes AUTOVON. AUTOVON was a complete system of tandem exchanges and at least semi-private telephone lines provisioned by AT&T [1] for use by the military. The problem with this arrangement is that it is very expensive: the customer is paying the telephone companies to purchase, install, and maintain a huge amount of hardware, just for the customer's private use.

Now, evaluating AUTOVON on a price basis is both difficult and unfair. Difficult, because AUTOVON was paid for in a somewhat complex way: the military paid a central, cumulative rate for the entire system and then performed cost recovery from individual user agencies and installations using a non-trivial cost assignment calculation. It is also often said that the Bell System themselves did not recover the full cost of AUTOVON from the military and that it was, to some extent, subsidized by other telephone services.

And I say unfair, because AUTOVON was more than just a private telephone network. It was a hardened private telephone network, with four-wire service and a precedence capability that required the development of novel equipment. It wasn't really expected to save money compared to the public telephone system, because it was acquired in order to provide capabilities that the public telephone system did not.

Still, we can safely say that AUTOVON was expensive. A 1979 study by the Defense Communications Agency, responsible for AUTOVON cost recovery at that time, comes out to an impressive total of $255,492,000 in AUTOVON operating costs for FY 1978. By way of example, the report puts the monthly service cost of a two-way capability from CONUS to Europe with priority precedence at $1,182. Obviously this example case is one of the most expensive, but I still shudder to imagine a monthly phone bill of over eleven hundred dollars in 1979 money. The military was willing to swing the stiff cost of AUTOVON because, first, it was the military and they were willing to swing the stiff cost of many things, and second because AUTOVON's military capabilities would be very expensive to build by any means. It was the Cold War, after all, and it could be said that outspending the Soviet Union was a military objective.

The situation was rather different when it came to non-military communications. The civilian federal government ran up some enormous telephone bills between its many offices, and initially considered purchasing an AUTOVON-like system to serve as a private network between federal offices. The concept simply wasn't cost effective; it likely would have increased the cost of federal telephone calls overall. The Federal Telecommunications System or FTS would eventually come to be, but not in the form of a private switched system. It is, after all, intuitive that cost savings would not come from installing a great deal of dedicated hardware. Rather, the Bell System would have to find a way to serve these large institutional customers with less investment. And that was the Common Control Switching Arrangement, or CCSA.

It is very tempting to draw an analogy between the CCSA and virtualization in contemporary computing, though it is probably more accurate to draw an analogy to LPARs on IBM mainframes. Let's stop indulging the temptation and explain it more directly: a CCSA is created by configuring existing telephone switches to treat a subset of their lines as part of a separate network.

The technical details by which this was achieved varied significantly by the switch. CCSAs were introduced in the early 1960s and could be configured on the #5 crossbar exchange, where "configuration" consisted of strapping or jumpering certain components of the switch to operate independently of the others. CCSAs continued just about to the modern era, where configuration became a matter of selecting the appropriate lines in the business office system that generates configurations for computer-controlled exchanges.

I think that it's most interesting to examine the CCSA by way of example---by looking at a specific, real CCSA. BSP 310-200-007 I2 (1966) conveniently provides a directory of the code numbers that were used to identify CCSAs within the telephone system. Number 02 is FTS, the Federal Telecommunications System. I didn't bring it up without reason: the concept of the CCSA was developed in large part in order to bring the cost of FTS under control. We can ponder what happened to number 01, but I'm guessing that AT&T reserved that code for testing, or maybe even to identify the public telephone system.

One could think of the normal, public telephone system as just another CCSA, although as I understand it this was not the nature of the actual implementation. Another appealing analogy for the CCSA is the VLAN: we could think of these CCSA network numbers as VLAN tags. In this analogy, the public telephone system is the default or native VLAN, sometimes called 0 and sometimes called 1 at the whim of vendors. If you are familiar with VLANs, that somewhat illuminates why I say that the public telephone network is not just another CCSA: it is the "untagged" network in which equipment not capable of CCSAs and lines not attached to a CCSA are presumed to exist. Anyway, that's all beside the point; what other CCSAs existed?

04 General Electric; 05 New York Central (railroad); 06 Lockheed; 07 State of California; 08 AUTOVON (used to facilitate expansions of AUTOVON over non-AUTOVON telephone infrastructure, as a more cost-effective way to provide AUTOVON lines at smaller installations); 09 American Airlines; 10 Boeing; 11 Westinghouse; 12 Western Electric; 13 IBM; 14 North American Aviation. That's the complete list as of 1966, and while short, it is a who's-who of the industrial giants of the post-war United States. Plus the State of California. Most state governments used large Centrex-and-WATS arrangements, but some combination of the large size of California and GTE's different approach to the network steered them in the CCSA direction.

Of these CCSAs, I will focus on General Electric. There are two reasons: first, GE had an early and large CCSA---the largest CCSA outside of the federal government, at the time. Second, I was an intern at a failing GE business in a large, half-abandoned corporate campus [2] during the summer that the last vestige of the GE Switched Services Network, as AT&T called it, was retired. Among my scattershot duties was working on the decommissioning of the campus's Nortel PABX in favor of Cisco UCM. GE SSN would go with it, replaced by IP trunking between UCM sites.

In 1963, the GE SSN spanned fifteen central offices ranging from New York to Los Angeles, all #5 crossbars. It was intended to provide voice as well as data at 1200bps. Unlike some (mostly federal) CCSAs, it was designed to provide standard two-wire dial service only, without station-to-station four-wire connections or call precedence. In other words, it was a standard telephone network, but intended to make calls between GE offices more reliable and less costly than calls over the long-distance telephone network.

One of the complications of the GE SSN, and of CCSAs in general, is the diversity of telephone equipment in use across the different corporate offices. The GE of 1963 had Centrex service, step-by-step PABXs, crossbar PABXs, key systems, and manually operated PBXs. All of these were integrated into a 7-digit dialing scheme for GE SSN, with the NNX prefix (different from NXX used in the public telephone network by prohibiting a 0 or 1 in the second digit position) identifying a location on the network such as a PBX, and the four-digit subscriber number generally being the telephone's local extension, padded with arbitrary digits as needed to be four digits long. Of course, the details were less tidy, with smaller locations sharing prefixes and some locations acting almost like toll stations with single telephones on the GE SSN and selecting it by key.
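To make the numbering scheme concrete, here is a sketch with invented prefix and extension values; the choice of '0' as the padding digit is my assumption, since the scheme only required the padding to be arbitrary:

```python
import re

# Public network prefixes were NXX: first digit 2-9, rest 0-9.
NXX = re.compile(r"^[2-9][0-9][0-9]$")
# GE SSN prefixes were NNX: no 0 or 1 in the first OR second position.
NNX = re.compile(r"^[2-9][2-9][0-9]$")

def ge_ssn_number(prefix, extension):
    """Form a 7-digit GE SSN number from an NNX location prefix and a
    local extension, left-padded to four digits ('0' chosen here
    arbitrarily; real padding digits varied)."""
    if not NNX.match(prefix):
        raise ValueError(f"{prefix!r} is not a valid NNX prefix")
    if not (extension.isdigit() and 1 <= len(extension) <= 4):
        raise ValueError(f"{extension!r} is not a 1-4 digit extension")
    return prefix + extension.zfill(4)

print(ge_ssn_number("235", "71"))   # -> 2350071
# "212" is a valid public NXX but not a valid GE SSN NNX,
# because of the 1 in the second position:
print(bool(NXX.match("212")), bool(NNX.match("212")))  # -> True False
```

The NNX restriction matters because it keeps SSN prefixes from colliding with digits reserved for other purposes, the same reason the public network restricted the first digit.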

In general, though, at PABX-served locations, extension users had a choice as to how to place their call. The configuration wasn't the same at every office, but the recommended practice was to use a 9 prefix (or "exit code") to dial on the public telephone network, and an 8 prefix to dial on the GE SSN. Most PABXs have some version of this capability: specific trunks can be selected for outgoing calls based on the dialing prefix.

At locations with manual exchanges and locations without compatible PABXs, GE SSN calls had to be placed with the assistance of the local PBX operator. Still other locations used a small PABX connected via tie line to a larger PABX at a larger office; in this case, the dialing prefix "18" was recommended, the "1" seizing a trunk from the satellite PABX to the main PABX and the "8" then seizing a trunk from the main PABX to the telephone exchange providing GE SSN service.
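The prefix-based trunk selection described over the last two paragraphs can be sketched as a simple dispatch function. The trunk group names and exact digit handling here are my invention; real PABX configurations varied by office:

```python
def route(dialed):
    """Pick an outgoing trunk group from the dialing prefix, in the
    manner recommended for GE SSN PABXs. Returns (trunk group, digits
    to pass along). Trunk group names are purely illustrative."""
    if dialed.startswith("18"):
        # Satellite PABX: '1' seizes a tie line to the main PABX,
        # which then handles the '8' as its own SSN exit code.
        return ("tie-to-main-pabx", dialed[1:])
    if dialed.startswith("9"):
        return ("public-network", dialed[1:])
    if dialed.startswith("8"):
        return ("ge-ssn", dialed[1:])
    return ("local-extension", dialed)

print(route("82350071"))   # -> ('ge-ssn', '2350071')
print(route("182350071"))  # -> ('tie-to-main-pabx', '82350071')
print(route("4101"))       # -> ('local-extension', '4101')
```

The satellite case shows why chained prefixes work at all: each switch in the chain strips only the prefix it understands and outpulses the rest.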

Indeed, let's reflect a bit on the wiring scheme involved.

CCSAs were served by Offices, like the fifteen I mentioned for GE. FTS and AUTOVON had more offices (AUTOVON's CCSA office list tellingly includes the proper AUTOVON exchanges), but most CCSAs had fewer, sometimes only a handful. Between these offices, trunk capacity could be shared with normal telephone traffic, giving CCSAs a significant cost advantage. Individual phones (or PABXs, etc.) on a CCSA needed to be connected to an actual CCSA office, though, in order to have access to the CCSA at all. In practice, this made CCSAs something of a hybrid arrangement.

A given GE office might have lines running directly to the serving office, for example in New York City where there were two offices on the GE SSN. Offices that weren't near a 5XB included in the scheme, though, would need to somehow be connected up to one. The BSPs are not replete with details, but presumably this was done using the fairly conventional foreign exchange service.

I think I have mentioned this before but I will provide a very short summary. In the VoIP industry, it is extremely common to identify the "ends" of a telephone subscriber loop using the terms FXO and FXS, for Foreign Exchange Office and Foreign Exchange Station. Confusingly, the terms refer to where a given connection "goes" rather than where it "is," with the result that a Foreign Exchange Office port is what you would plug into a telephone line (it behaves like a phone), and a Foreign Exchange Station port is what you would plug a phone into---on a device like an ATA, it is the port that provides talk battery, ringing, etc., the traditional role of the telephone office.

So the "Office" and "Station" part of those terms makes sense, besides the fact that they are arguably the opposite way around from what you would first think. But what about the Foreign Exchange? Well, these terms predate VoIP by decades, and were originally used to identify the ends of a foreign exchange service.

Foreign exchange was a specific type of private line that allowed a phone to be connected to a different central office from the one that physically served it. There are different reasons that this was useful, but a common one had to do with long-distance rates and suburban areas: it was historically common in large metro areas that suburbs could call into the city at local rates, but suburbs could not call into other suburbs at local rate... that would be a long distance call. You can see that this provides a bit of an economic advantage to city phones. So if you are, say, a plumber with your shop located in a suburb, you might pine after a big-city telephone line that would allow more of your prospective customers to call you for free. Foreign exchange could solve that problem.

When you ordered foreign exchange service, the telco connected your phone line, at the distribution frame of your local central office, directly to a private line. The private line went to a central office in the Big City, where it was connected at the distribution frame to a local line served by the switch. In practice there were complications and details to how this was set up, but this description gives you the idea: your telephone was now connected to the switch in a different central office from the one your local loop was actually connected to.

Foreign exchange service was expensive, because it took up private line capacity, so various combinations of WATS, InWATS, zenith numbers, toll-free numbers, etc. have pretty much replaced it. Some of the terminology got stuck in our modern telephone parlance, though. FXO and FXS were designations used by the telco to keep track of which ends of the private line needed to connect to what equipment. Why was I talking about this, though? Oh, right, because large CCSAs in practice also relied on what was basically foreign exchange service in order to connect outlying locations to CCSA offices.

It's a good thing, here, that most of these GE offices had PABXs. This limited the number of outside lines that needed to actually go to a CCSA office. What I am calling foreign exchange lines could also be viewed as tie lines, not really serving phones but providing trunks from PABXs to CCSA offices.

The nature of the CCSA is pretty much in its name: it is a switched private line service, but it makes use of common control equipment to minimize costs. I have tried to make terminology a little simpler here, but I have kept saying "GE SSN." Switched Services Network, or SSN, seems to have been a term used by the Bell System to refer to any private switched network. A CCSA was one of the ways of implementing an SSN, and seems to have been the most common throughout telephone history. There were not many truly private switched systems. AUTOVON could be considered an example, although it had requirements above and beyond typical telephone service. That leaves the FAA as a purer example, as the FAA used a significant number of private line services for both switched and unswitched communications between air traffic control sites and equipment.

Incidentally, a precursor to AUTOVON was called SCAN, the Switched Circuit Automatic Network. SCAN was a US Army four-wire system, because four-wire service was required for the cryptographic equipment of the era to function. AUTOVON seems to have inherited its four-wire nature directly from SCAN. Skimming through a telephone tariff almost always turns up some interesting details, one of them being that a few state telephone tariffs describe Switched Service Networks as being private line service based on either CCSA or SCAN. Given that these same 2024 telephone tariffs define SCAN as a federal government service for secure communications, this definition (and the presence of an entry for SCAN at all) seems to be purely a holdover from some fifty-year-old tariff documents. It does go to show that non-common control switched services networks were uncommon enough that telcos viewed AUTOVON as the odd exception to the CCSA rule.

I am trying not to get too tied up in the history of AUTOVON, because it is easily its whole own article. I do think it is fair to say that the CCSA emerged largely as a response to the high price of AUTOVON, as building FTS based on the pattern of AUTOVON was deemed completely unrealistic on a cost basis. FTS launched in 1963, not far behind AUTOVON at all, but consisted of CCSA service for long-distance calling with PABXs in government offices furnished under lease agreements. Once the heavy lifting had been done for FTS, it was natural to extend CCSA as an offering to large private companies.

FTS also holds some of the seeds of the Bell System's undoing. High costs and lackluster service plagued FTS in its early years. During the 1970s the General Services Administration, which was responsible for FTS, decided to introduce a competitive bidding process for long-distance capacity. That led to companies like Western Union and MCI joining the network, and the introduction of least-cost routing to select the carrier. These ideas would also spread from government to private industry, helping to set up the industry-wide tensions that culminated in the 1982 breakup of AT&T.

A common term in the telephone industry is "Universal Service." In the modern world, universal service is understood to be the goal of providing telephone service to all customers. The Universal Service Fund, for example, levies a fee on telephone lines to subsidize the provision of telephone service to those who would otherwise be unable to afford it. This is a recent invention. 1960s documentation on CCSAs makes repeated reference to Universal Service, a very different form of the concept championed by AT&T more or less until its breakup: that everyone would be served by one, unified telephone system, the Bell System. During the early days of the telephone, before AT&T's monopoly was cemented, Universal Service was a rallying cry against competing telephone companies, whose independent networks interfered with the ability of any telephone to call any other. In the mid-century, it became a term somewhat like public switched telephone network (PSTN). CCSAs were capable of Universal Service, where desired, on a somewhat limited basis, in that CCSA exchanges could be configured to allow calling outside of the CCSA, into the public system.

It is amusing, then, that AT&T was so willing to abandon the ideal of universal service when their customers offered to pay for it. But that's business, and in the mid-century, AT&T was one of the biggest businesses in the world. CCSAs show us some of the ups and downs of the age of the telephone monopoly: CCSAs were an innovative concept that was rapidly developed and delivered, first to the government and then to private customers, over a span of just a few years. They were also frightfully expensive, and offered a new level of lock-in that kept customers from using competitive carriers.

It's hard to find any contemporary information about GE's private telephone system. It has almost entirely vanished from history, except in the form documented by the early BSPs. I don't even quite remember what it was called when I was at GE, I think it might have had "star" in the name. There was a printed directory to help you figure out the correct office code and means of transforming an extension in order to dial over the system. I don't think the copy I saw was recent, I'm not sure if a recent copy even existed. The system was barely used at all. It was replaced by a common modern arrangement, least cost routing with IP trunks.

When calling another GE office, the Cisco Call Manager installation would connect the call over the data network to the other office's Call Manager. Very practical, very easy to use, kind of boring.

[1] Even prior to divestiture, the practical construction and operation of these telephone systems was split between AT&T Long Lines and the telephone operating companies. Many, but not all, of these were subsidiaries of AT&T. In the case of AUTOVON, for example, we must consider that non-AT&T subsidiary GTE built and operated part of the system. I'm trying not to get bogged down in this complexity, but I'm also trying not to keep writing "AT&T" when referring to work done by multiple companies, some of them independent. Please do me the kindness of understanding that when I use terms like "the telephone company" or even "the Bell system" I am trying to encompass all of the parties, AT&T, Bell Operating Companies, independents, etc. that were involved in this work. The term I use may not be exactly correct.

[2] It was GE Intelligent Platforms, and the beautiful but poorly maintained corporate estate in Charlottesville, itself the remains of a failed joint venture, might have been a more suitable exterior for Severance than Bell Labs Holmdel.

2024-09-14 the national warning system

Previously on Deep Space Nine, we discussed the extensive and variable products that AT&T and telephone operating companies sold as private lines. One of the interesting properties of private line systems is that they can be ordered as four-wire. Internally, the telephone network handles calls as four-wire with separate talk and listen pairs (or at least, it did before digitization). For cost reasons, though, service to individual customers is virtually always two-wire, with talk and listen combined onto a single pair via hybrid transformers. Four-wire private lines are just about the only exception.

Why? Well, one of the major advantages of four-wire service to the telephone instrument is that it avoids the echo and sidetone that normally occur within the hybrid transformers. On a call between two telephones, this effect is acceptable and even desirable. In conference systems, though, with many phones attached, echo accumulates until the line is almost unusable. Prior to the introduction of DSP technology to "clean up" the audio, multiparty conferences were a lot more limited than we take for granted today... except for the four-wire private line systems specifically built for large conference calls. The most notable of these is the National Warning System, or NAWAS, operated by AT&T for the Federal Emergency Management Agency (FEMA).

FEMA has an interesting history. It is most directly a product of the Department of Housing and Urban Development, where it was originally established in 1973 with the responsibility for coordinating reconstruction after natural disasters. Over time, a series of federal reorganizations expanded FEMA and added additional roles. Most notably, in 1979 President Jimmy Carter instituted a major reorganization of federal emergency agencies that dissolved the Office of Civil Defense, made FEMA an independent agency, and placed all civil defense responsibilities within FEMA. As a result, part of FEMA is a direct descendant of the civil defense efforts at the peak of the Cold War. FEMA operates the government relocation bunker at Mt. Weather, for example, and by the same token is responsible for the dissemination of attack warnings to the contiguous United States.

The origins of NAWAS can be traced back to the Civil Defense Warning System (CDWS), often known as the "Bell and Lights System," which was introduced in the 1960s and is itself an extension of some earlier precedents. The various iterations and renaming of NAWAS make it a little bit difficult to trace its history exactly. Wikipedia says that NAWAS was formed in 1978, a reasonable claim given that FEMA organized around that same time. But NAWAS cannot have been new in '78: AT&T published a BSP covering the "OCD NAWAS," prior to FEMA's existence, a full decade earlier in 1968. Indeed, NAWAS and the CDWS or "Bell and Lights" must have operated in parallel, as both had BSPs issued that same year.

That's not actually that surprising: one of the reasons that the nation's emergency communications networks kept being replaced is because the requirements kept changing. CDWS was designed primarily as an automated or machine-to-machine system, capable of activating air raid sirens and sounding local alarms automatically. During the height of the Cold War this was a good fit for the intent. The emergency scenario was nuclear attack, and a warning would need to be disseminated as rapidly as possible for optimum lifesaving effect. The Office of Civil Defense once targeted a 30-second timeline from declaration of an alert to the American public becoming aware.

Even as CDWS was put into service, though, OCD was aware that a more extensive communications capability would be required to distribute information on attack outcomes, evacuation and recovery efforts, and to enable continuity of government even in a possible scenario of devolution of control to local emergency response authorities. That need would be served by NAWAS: not an alarm system, but a voice communications system, ready for two-way use on state, regional, and national scales.

Over time, FEMA has shifted its emergency alert programs away from nuclear conflict and towards the "All-Hazards" model, in which the scope of the systems is interpreted broadly and the primary use tends to be natural disaster and weather alerts. The "All-Hazards" concept came about mostly because of a sense that FEMA and the National Weather Service were pointlessly duplicating capabilities; so because of All-Hazards thinking, FEMA NAWAS distributes critical weather alerts and NWS's weather radio network distributes FEMA alerts. NAWAS has thus changed its identity from a wartime system to a more general emergency management system.

Let's see how NAWAS actually works. NAWAS is, at its core, a network of interconnected four-wire conference circuits. A national line connects the Warning Centers to the Regional Warning Centers, eight regional loops connect the Regional Warning Centers to states and other federal warning points, and 48 state circuits connect warning points within each state. Each of these circuits or loops is essentially a party-line or conference line. If you pick up one of the phones, you'll hear if someone is talking on any of the others.

Nationwide messages generally originate in the National Warning Center or its alternate. Historically, the National Warning Center was at Cheyenne Mountain, Colorado (colocated with NORAD, where alerts would most likely originate), and the Alternate National Warning Center was at Denton, Texas. There was a second alternate facility in 1968 at Olney, Maryland. At some point, FEMA seems to have "promoted" the remaining sites, removing Cheyenne Mountain and making Denton the primary and Olney the alternate.

Denton and Olney were both the locations of FEMA regional headquarters, which have gone by various names over time like Special Facilities or Federal Support Centers. They were originally built by Civil Defense or FEMA (depending on the year) to coordinate the recovery from nuclear attack, and as such they were hardened. Perhaps the most famous is the Region 8 "FEMA Bunker" at the Denver Federal Center. Other FEMA regional headquarters tended to be either in more remote areas or were built by repurposing existing military facilities (such as that at Olney, a reused Nike missile site), and as such kept a lower profile.

It's hard to know what exactly is going on today. In the 2016 NAWAS Operating Manual, the most recent version that seems to have been made public, the Alternate Warning Center is directly named as Olney, MD but the location of the primary is left unsaid. The Olney facility has been transferred to the Naval Surface Warfare Center and is no longer in use by FEMA. A document suggests that the Alternate Warning Center may have moved to Thomasville Federal Center, near Atlanta. Denton, TX remains a major FEMA site and may still be the primary.

The primary control circuit connected the (originally) three warning centers with the eight regional headquarters, but considering that some of the regional headquarters were themselves warning centers, it had fewer points than it might sound like. That circuit served as one of several ways (including the HF radio system FNARS) that these major FEMA sites could communicate with each other in a major disaster, but it was less important for alert dissemination.

The more important parts are the regional circuits, which are configured as loops to provide redundancy against a line break. The regional circuits connect a FEMA Regional Headquarters, at least one Warning Center for redundancy, federal alerting points (an AT&T document lists Coast Guard stations as an example), and a primary and alternate warning point within each state. FEMA would use these circuits as the main way to distribute a nationwide alert, reading it directly to the state Warning Points, which are generally the offices of the state's emergency management agency. For example, in New Mexico, the Primary Warning Point is the Department of Public Safety office in Santa Fe; the Alternate Warning Point is the state Emergency Operations Complex at the National Guard complex near Santa Fe.

There are likely some additional levels of redundancy in most cases. For example, state emergency planning documents imply that the Department of Energy has a NAWAS site at Kirtland Air Force Base that is connected to at least the regional and state networks, and possibly the national control circuit as well. It thus serves as an additional contingency for distribution of alerts on the state network, were the Primary and Alternate Warning Points to be lost.

Each state circuit is left in part to the discretion of the state, with criteria stated by FEMA. Generally, it includes the state warning points along with county emergency operations centers and major infrastructure facilities like hospitals and power plants.

Along with the specialized purpose of NAWAS comes specialized equipment. NAWAS relies on a system that AT&T calls SS1, presumably Signaling System 1, but it is more similar to a selective calling scheme than to a more general signaling system. SS1 appears fairly similar to the control pulses used by CDWS, but repurposed to ring phones on NAWAS to get a user's attention. On the national control circuit, for example, the National Warning Centers have SS1 transmitters that can be used to signal SS1 receivers at the Regional Headquarters to ring. Keep in mind that this is a conference system, so there is no real sense of "placing a call" or "hanging up." The telephones are always connected. The provision of selective calling is just to get another user's attention so that they pick up the phone. In practice, they likely won't even pick up the phone, as most NAWAS sets are equipped with an always-on speaker to monitor activity on the circuit.

The SS1 selective ringing system also allows the National Warning Centers to selectively call state warning points on the regional circuits. State warning points, at least those operated by the state itself, are able to selectively call other sites on the state network. And that's actually nearly the limit of the SS1 capabilities. All other selective calling is done by "voice paging," basically yelling into the phone in the hopes that the party you want is listening to their speaker.

There is one other interesting capability of SS1, which requires understanding the structure of the network, beyond just the circuits. At each state primary warning point, the regional NAWAS circuit is actually bridged to the state warning circuit, so that any traffic on the regional circuit will also be heard on the state circuit. Essentially, in its "normal" state, the whole state circuit is just a leg of the regional circuit, and all sites on it hear regional traffic. This ensures that a warning read by one of the National Warning Centers will be heard as quickly as possible through the over 2,200 phones in the total NAWAS system. This connection is only present on one pair, though, so it's one way: the state circuit hears traffic on the regional circuit, but the regional circuit does not normally hear traffic on the state circuit.

As originally designed, a foot switch in each state Primary Warning Point disconnects the two networks when depressed, allowing the Primary Warning Point to "speak" on the state circuit only. The foot switch basically selects which of the two networks the Primary Warning Point phone will transmit onto, but also disconnects the bridge so that the Primary Warning Point can speak on the state network even if the regional network is busy. You can imagine, though, that this would pose a problem when a critical alert needs to be distributed. To guard against a stuck footswitch or just a particularly chatty state emergency manager, the National Warning Centers can send an SS1 code that will throw a relay to bypass the footswitch and reconnect the regional and state networks. This code would be sent just before any critical nationwide warning.

The use of four-wire conference systems usually requires a slightly different type of telephone set, anyway. One of the goals of four-wire systems is often to function as a "squawk box" or "hotline" (one of the many definitions of the word), an always-on system that can be heard on a speaker at every connected location. As a result, every NAWAS warning point has a loudspeaker. There is also a telephone handset, used to speak into the system. Because there are many phones on each NAWAS circuit and you know how large Zoom calls tend to get, each NAWAS handset has a push-to-talk button. You don't see these very often today, but Western Electric offered telephone handsets with a PTT button mounted on the inside of the handle as a standard product. Apparently depending on the preference of the installation site, the loudspeaker is automatically disabled either when the handset is picked up or only when the PTT button is depressed. This prevents feedback or echo. It might seem a little odd to hear a conversation on the speaker but speak into a handset, but this was very common in mid-century telephony, as the echo and feedback problems of a true "speakerphone" proved hard to solve.

At State Warning Points, an indicator light attached to the telephone set helps the operator understand what circuit they are connected to. Green means that the footswitch has been depressed to disconnect the state network from the regional network. White means that another location, presumably the State Alternate Warning Point, has depressed their footswitch to disconnect the two systems (the bridge was duplicated at the two sites, but one site pressing their footswitch would disconnect the bridge at the other site as well). Red indicates that the National Warning Center has bypassed the footswitches to distribute an alert.
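The bridge and lamp behavior described above amounts to a small state machine, and it can be sketched in a few lines. This is purely an illustrative model, not anything from a BSP; the function names and the "off" state are my own invention.

```python
# Hypothetical model of the state Primary Warning Point bridge logic
# described in the text. Names and structure are illustrative only.

def indicator_light(local_footswitch: bool, remote_footswitch: bool,
                    nwc_bypass: bool) -> str:
    """Return the lamp shown at a state Primary Warning Point."""
    if nwc_bypass:
        # National Warning Center forced the bridge closed for an alert.
        return "red"
    if local_footswitch:
        # This site disconnected the state circuit to talk locally.
        return "green"
    if remote_footswitch:
        # The other warning point disconnected the two circuits.
        return "white"
    return "off"

def bridge_closed(local_footswitch: bool, remote_footswitch: bool,
                  nwc_bypass: bool) -> bool:
    """The regional-to-state bridge carries traffic unless a footswitch
    at either site is depressed, and the NWC bypass overrides both."""
    return nwc_bypass or not (local_footswitch or remote_footswitch)
```

The bypass case is the interesting one: even with a footswitch stuck down, `bridge_closed(True, False, True)` is true, which is exactly the guarantee the National Warning Centers needed before reading a critical warning.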

BSP 310-530-901 LL Issue A (1968) provides a general description and operating information on NAWAS as originally built. Much of the BSP relates to the bureaucracy of operating a special service on a nationwide telephone system: it lists, for example, the specific plant control offices responsible for the testing and maintenance of each circuit. The Cheyenne Mountain Central Office must have been an interesting place, with primary responsibility to handle trouble reports and network management on the national and regional circuits. Each regional circuit was assigned to a CO within the region as well, typically one in a major city near the corresponding FEMA regional headquarters.

These plant control offices are informed that they must report all outages and work done on the circuits to OCD, and that they must take special precautions to ensure uptime on the NAWAS circuits. 8.05 states, in characteristically dry AT&T tone, that "the circuit may be required by the customer at a moment's notice due to the nature of the business for which it is used." I have heard anecdotes that switchmen working in central offices were sometimes used to hearing routine chatter on the NAWAS, and I see that the BSP suggests (but does not require) that control offices responsible for the system have a dedicated monitoring speaker. Proving that customer service is always the hard part, the BSP requires that these control offices make regular visits to customer sites and emphasize the importance of keeping the loudspeaker volume turned up, using the PTT button on the handset, and promptly reporting any trouble.

The BSP also lays out a process for routine maintenance on this critical, high-uptime system, which AT&T refers to as "circuit line-up." Such line-ups are performed only on Saturday, and must be authorized by the National Warning Center and announced to all Warning Points. Once started, the line-up time allows the test rooms at each control office to perform routine quality measurements on the circuits and detect any problems requiring maintenance. The test room must keep the circuit audible on a speaker for the duration, so that they can immediately stop their testing should the National Warning Center transmit that they need the circuit returned to carry emergency traffic. Following each line-up, the National Warning Center would place test calls to each State Warning Point to verify correct functioning, and the test room was expected to stay on the line to monitor this process. One can imagine that the whole thing felt like a hassle to the test room personnel who had to stay perpetually on edge.

The modern Operations Manual for NAWAS is available via FOIA, or at least a version from 2016. It mostly describes the details of operating the station equipment, which have not changed all that much since the 1968 BSP. The original Western Electric sets have been replaced by NAWAS terminals manufactured by Comlabs, a small company focused on emergency communications. They appear to be modified AT&T or Lucent phones with a new dialpad (including the elusive DTMF ABCD "digits"), an alert light, speaker volume knob, and presumably the electronics swapped out for four-wire operation. Phones used in local warning points have no keypad at all, since they are not intended to issue alerts. The pulse-based SS1 system has been replaced by DTMF; selective calling is done using four-digit numbers while commands like linking and unlinking regional and state networks use the DTMF "A" digit.
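The DTMF scheme boils down to a simple dispatch on received digits: four-digit codes address a station, and an "A" prefix marks a command. Here is a sketch of that logic; the specific code "4101" and the command text are invented for the example, since the manual's actual assignments are not reproduced here.

```python
# Illustrative sketch of DTMF selective calling as described in the
# text: four-digit codes ring a specific station, and sequences led
# by the "A" digit carry commands such as linking/unlinking circuits.

MY_CODE = "4101"  # hypothetical selective-call code for this station

def handle_digits(digits: str) -> str:
    """Decide what a NAWAS terminal does with a received digit string."""
    if digits.startswith("A"):
        # Command sequence, e.g. bridge or unbridge regional/state.
        return f"command: {digits[1:]}"
    if digits == MY_CODE:
        # Our selective-call code: sound the ringer/alert light.
        return "ring"
    # A call for some other station on the circuit: stay quiet.
    return "ignore"
```

A receiver like this never "answers" anything, of course; on a permanently-connected conference circuit, ringing is only ever a request for attention.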

Besides some modernization of the terminals, the sites on the system have expanded along with its scope. NWS forecast offices, NOAA facilities like the Hurricane Center, and FEMA's "MERS" mobile response teams have been added to the regional circuits. The NWS runs its own round of tests to ensure it can distribute weather alerts to State Warning Points.

You might be somewhat familiar with NAWAS. I have mentioned it before, and in general, it's one of the best known of the government's various emergency communications systems. No small part of this is due to a compelling bit of drama available in the documents. The manual has scripts.

This is the FEMA (Alternate) Operations Center. A nuclear weapon detonated in (city, county, state) at _____ Zulu. Radioactive fallout is possible! Persons in (city, county, state) should be advised to remain under cover and await further instructions from state or local authorities. Residents are advised to take protective actions in accordance with local community shelter plans and to be alert for further instructions from state or local authorities. Residents in all other areas are advised that protective action is not required at this time.

This is actually somewhat lengthy considering that the audience for the message is mostly state emergency management agencies. Alerts of this type on NAWAS would be preceded by an audible alert tone to attract attention, and followed by a roll-call of all state warning points to ensure that the message was received. Local authorities would do... something. In the event of an attack, they will theoretically activate local sirens with a wavering tone for 3-5 minutes. FEMA procedures dictate that this signal exclusively indicates a confirmed impending or in-progress attack. In practice, it is either the same as or not readily distinguishable from the signal used for tornadoes in many tornado-prone areas of the country---generally the only areas with sirens at all. FEMA procedures have often coped poorly with the use of warning systems for purposes other than those that were apparent during the Cold War.

The manual provides scripts for other scenarios, ranging from a detection by NORAD (probably actually by intelligence community assets) of possible fires to reentering space debris, errant weapon launches (nuclear or conventional), and various natural disasters. One of the newer additions to the procedure is use by the Pacific Tsunami Warning Center to disperse warnings of tsunamis affecting the west coast. This is one of the newest disaster scenarios to prompt serious investment in mass notification, and large parts of the west coast are now equipped with sirens in case of tsunami.

NAWAS has a close connection to IPAWS, the Integrated Public Alert and Warning System. Most NAWAS alerts would simultaneously be issued as IPAWS alerts, distributed across radio and television stations and via the newer Wireless Emergency Alert system. Much like IPAWS, only federal authorities can issue national NAWAS alerts, but regional authorities like state governors are permitted and even encouraged to use state NAWAS circuits to disseminate local alerts. Indeed, most state governors' residences have NAWAS terminals installed for ready access.

NAWAS has provided reliable service for many decades, but now shows its age. Private-line telephone systems like NAWAS are fundamentally challenging to harden against attack and disaster due to their fixed routing. Besides, the statewide conference line capability of NAWAS now seems rather limited compared to the popularity of text messaging in emergency management. In 2022, FEMA awarded AT&T a $167 million contract to modernize multiple FEMA communications systems, including NAWAS. The plans for NAWAS are vague:

AT&T will transition the NAWAS legacy technologies to newer services available via EIS through a well-planned, phased, cost-effective, and non-disruptive approach to the new solution with government oversight.

Sounds great. Can't wait for the new solution.

So there we have it, a large, nationwide communications network made up of four-wire private lines. There have actually been a number of these used over time, including notably the Strategic Air Command's C2 network which involved both radio links and private lines terminating at telco-furnished turrets at SAC bases. Some smaller-scale four-wire systems remain in use today for local emergency management purposes.

Later, possibly as soon as next week, we will take a look at a much more complex type of private line service: Common Carrier Switching Arrangements. But I might get sidetracked on the road to CCSAs once again, and post first about the federal private line services that led to the invention of the CCSA: AUTOVON and the Federal Telephone System (FTS).

2024-09-08 private lines

I have been meaning, for some time, to write about common carrier switching arrangements (CCSAs). These could be considered an early form of products like "virtual private ethernet:" a private telephone network that was served by the same switching machines that handled the public telephone system. A CCSA is, in effect, a "virtual telephone network." AT&T operated a number of these for both government agencies and large private organizations, and they might be viewed in a way as precursors to the large CENTREX-and-WATS arrangements that became a common fixture of state governments and school districts.

The problem is that I fear I am putting the cart before the horse. CCSAs, and even the fully private telephone systems they were intended to replace, are basically the extreme extension of the private line. Besides, private lines are an important part of the history of computing, as well: they were the pattern for the digital "leased line" services that formed the bulk of computer network connections through the early days of the internet.

Let's ease into it by starting with an important source in telephone history: the Bell System Practices.

Large organizations tend to function like religions. This is more or less overt, depending on the organization and the fashions of the time. For example, in the period we will mostly discuss in this article, a number of large companies published song books. IBM is the best known for this type of corporate spirit, ranging from their de facto hymnal to the "Think" signs customarily installed in IBM workers' offices. The television series "Severance" frequently referred to this aspect of post-war corporate culture, depicting a corporation that unified its employees through songs, quotations, and an internal museum. This type of pseudoreligious corporate culture is, of course, taken to an extreme in "Severance," but nonetheless resembles real practices that still echo through corporate America today.

Severance depicts its ominous corporation with exterior shots of the Bell Laboratories offices at Holmdel, a prestige design by Eero Saarinen that dates to an era in which the architecture of corporate offices was often an idealistic representation of their intended culture [1]. This is just a little bit ironic, as Severance is clearly patterned more after the computer industry than AT&T, which was comparatively subtle in its corporate religion. Still, there were aspects of the sublime: Saul Bass's pitch film for AT&T's 1960s rebrand devotes much of its length to a hagiography of the Telephone Men. Universal Service, AT&T's stated goal, was both as ambitious and ever-changing as "redemption," personified as the enormous, golden "Spirit of Communication" depicted in sculpture and mural at former AT&T offices. And AT&T had signs! Of course they had signs; the best known being the "Bell System Safety Creed" that hung in most work areas.

This is all to preface my use of the word "doctrine." The kind of shared culture that companies attempted to establish through song books is also an important part of practical operations. We associate doctrine mostly with religion, but more broadly, a doctrine is a set of codified, shared beliefs and practices that define the work of an organization. The military has doctrine in this sense, and AT&T has doctrine as well. Much of that doctrine was compiled into a lengthy, almost sacred text: the Bell System Practices, or BSPs.

BSPs are an invaluable source of information, and fortunately organizations like TCI have put a lot of effort into finding and preserving the remaining copies. Unlike publications like the Bell System Technical Journal (BSTJ), BSPs were for internal use only, and so the collection available today is scattered and incomplete. That's not the only challenge when using BSPs for research. Fitting the analogy to holy texts, the organization of the BSPs is arcane and has undergone a number of changes. For the latter half of the 20th century, BSPs were identified by nine digit numbers, separated into three groups of three digits. The first group identified the Division, the second the Section, and the third the Document.

BSP 000-000-000 provides an index to the Divisions. Each division begins with a section index, numbered NNN-000-000. Sections are typically grouped by their first digit, producing topics referred to by NNN-N. NNN-0 is devoted to indexes, NNN-1 is usually a "general information" topic. There are loose conventions for document numbering. For the weightier topics, -100 is often the overview or general procedures. There are as many exceptions as examples of these rules. Further complicating document lookup, BSPs were published roughly from 1930 to 1990, numbering conventions were changed and sections reorganized, and documents were replaced by updates identified by an issue number. The aspect of time can make it a frustrating and time-consuming process to match up a set of BSPs that not only cover a topic but represent a single point in time, rather than being confused by a decade of changes from one to the next.
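For the programmers in the audience, the nine-digit scheme is easy to mechanize. A trivial sketch in Python follows; the field names simply mirror the description above, and the "topic" shorthand is the NNN-N form mentioned in the text.

```python
# Sketch of the BSP numbering scheme described above: a nine-digit
# identifier like 310-300-100 splits into Division, Section, Document.
from typing import NamedTuple

class BSPNumber(NamedTuple):
    division: str
    section: str
    document: str

def parse_bsp(number: str) -> BSPNumber:
    """Split an identifier like '310-300-100' into its three fields."""
    division, section, document = number.split("-")
    return BSPNumber(division, section, document)

def topic(bsp: BSPNumber) -> str:
    """Sections group by first digit, giving the NNN-N topic shorthand
    (e.g. 310-3 for the 3xx sections of division 310)."""
    return f"{bsp.division}-{bsp.section[0]}"
```

So 310-300-100 parses as Division 310, Section 300, Document 100, falling under topic 310-3. What a parser cannot capture, of course, is the dimension of time: which issue of that document you are holding.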

We can hope that, one day, a monastic order of telephone enthusiasts will take up the compilation and, while they're at it, illumination of the BSPs. They will produce a Xanadu-esque compendium of AT&T doctrine navigable along dimensions of both time and space; a Grand Enfilade of communications technology. For my own part, I am perhaps alarmingly close but not quite ready to take a vow of silence, move yet further into the desert, and devote my life to the task. For the time being we'll have to settle for search engines, the indexing efforts of a few dedicated people, and no small amount of sweat.

Enough of that, let's take a look at divisions 310 through 312, which cover private lines for voice and data, as well as special switched services. A discussion of private lines starts rather naturally with 310-300-100, Two Point Private Line Systems: Two Point Private Line Telephone Circuits, Voice Only---Description.

Of course, we will not start there, because of the joy of BSPs: naturally, I have searched several archives and not been able to find a copy of 310-300-100. We can fill in some of this gap from 310-300-300 and 310-300-500, though: Test Objectives and Test Procedures, respectively. 310-300-300 I1 (1975) 1.04:

The circuits discussed in this section are served over nonswitched facilities with no access to the message network. A two-point private line involves a channel between two terminal locations. The channel may or may not be entirely a metallic path.

A private line is, at its essence, a pair of wires that runs from one location to another. The locations are customer-specified, and the private line is not connected to anything else, only to the two service points. Some version of this service has existed for pretty much the entire history of the telephone, although it has often, as you would imagine, been very expensive.

The private lines covered in this section are for voice use only. This is important, because voice use implies a specific set of requirements. The topic of private lines, leased lines, and related services is actually a very difficult one to discuss succinctly. First, because there is a 100-year history of these services, and the technology, capabilities, and use-cases have all changed over time. Second, because there is a very diverse set of uses for private lines, each with a different set of requirements.

Voice use, for example, implies a customary telephone voice passband of about 300-3400Hz, a limit later cemented by the 8kHz sampling rate of digital telephony, which caps the representable bandwidth below 4kHz. This might strike you as surprisingly narrow, but it's enough for reliable speech intelligibility. It really is quite narrow though, part of the reason that "HD Voice" codecs now offered by VoIP technology sound startlingly different from traditional calls.
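The relationship between the sampling rate and the passband is just the Nyquist limit, and the arithmetic is quick to check:

```python
# Nyquist check on the standard telephony numbers from the text.
SAMPLE_RATE_HZ = 8_000            # standard digital telephony rate
NYQUIST_HZ = SAMPLE_RATE_HZ / 2   # highest representable frequency

PASSBAND = (300, 3400)            # customary voice passband, Hz

# The passband fits under the Nyquist limit, with some margin left
# over for the rolloff of the anti-aliasing filters.
assert PASSBAND[1] < NYQUIST_HZ
print(NYQUIST_HZ - PASSBAND[1])   # prints 600.0 (Hz of margin)
```

That 600Hz of headroom is not waste; real filters need room to roll off before aliasing sets in.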

The fact that the circuit may not be entirely metallic is important as well. It might be a little unclear what a "nonmetallic" path would even mean, but remember that we are looking at a 1975 revision. By that time digital telephony was in full swing; a customer's "pair from one place to another" may very well be digitized, carried by the TDM switching network, and converted back to analog (analogized?) near the other end. The possibility of carrying private line service over the digital switching system radically reduced the cost of private lines, and enabled the proliferation of "leased lines" for use with computers.

But, it did have limitations. A private line of the 1930s would have been able to carry polarity reversal signaling, a common scheme at the time for alarm monitoring and remote control. A nonmetallic path was not electrically continuous, so it could not be used for any type of electrical signaling. It would only work for the signaling scheme it was provisioned for, namely, voice.

The BSPs describe a set of tests intended to ensure suitability for that purpose. There are standards for loss, frequency response, and noise. We also learn that private lines for voice use can be configured for ringing, so that one end can signal the other to pick up the phone. Elsewhere in the BSPs the supported types of ringing are enumerated, although they no doubt changed over time: ringing by a ring key (today we'd call that a ring button), by a hand-crank generator, by central office equipment, or no ringing at all.

I would like to cover this topic without getting too stuck in terminology, but it's hard to avoid because there is a lot of terminological confusion. For example, what is the difference between a "private line" and a "leased line" anyway? Well, it's mostly a matter of who's talking. AT&T seems to have always used the term "Private Line" except when discussing data services in the context of switched data networks, in which case they use "Leased Line" to refer to a fixed-capacity data connectivity arrangement. We'll get to that in a bit.

Another problem is how to describe the configuration of private lines. A private line provisioned for voice use, as described in 310-300, is not connected to any switching equipment but would be connected to battery power (AT&T called this "common battery," meaning that the phones did not require their own batteries to function). Private lines with common battery were sometimes referred to as "wet," while private lines without common battery were "dry." The latter were mostly used for non-telephone signaling applications.

Private lines might also be connected to other types of equipment to provide useful features. For example, you might wonder about how ringing applied by the central office would work. A popular type of private line for voice, especially later on, was the "Private Line Automatic Ringdown" (PLAR), also just called a "Ringdown" circuit. These private lines are configured so that picking up the phone at either end automatically applies ringing voltage to the other end.


Shameless cross-promotion: I uploaded a YouTube video yesterday where, among other things, I briefly mention that "hotline" is a heavily overloaded term that means different things in different contexts. One of the things that "hotline" often refers to is a PLAR. Mostly, though, I try out a very cheap PABX I bought on the internet and find out that, while not exactly great, it is certainly pretty good for the price.


Another option on private lines was whether they were two-wire or four-wire. A typical telephone uses only two wires, one pair. If you think about it, that's a little bit magical. The trick is a device called a hybrid transformer, installed in each telephone, that basically "subtracts" the signal it is transmitting from the signal on the line in order to isolate the "receive" signal. At a modern telephone exchange, another hybrid transformer splits the two directions apart once again for handling by the exchange equipment. This is a two-wire circuit, and has the advantage of lower-cost wiring. On private lines, you can also get four-wire service, with dedicated pairs for transmit and receive (also called, for obvious reasons, talk and listen). The major advantage of four-wire circuits is that they make "conference calling" much easier. When you put a lot of telephones onto one two-wire circuit, it becomes very difficult to manage the cumulative echo produced by the imperfection of the hybrid transformers. A full four-wire system avoids this problem.
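The hybrid's "subtraction" and its imperfection can be shown with a toy numeric model. This is a deliberately simplified sketch; the balance factor is an invented stand-in for how well the hybrid's balance network matches the line impedance.

```python
# Toy model of a hybrid transformer: the two-wire line carries the sum
# of both directions, and the hybrid recovers "receive" by subtracting
# its own transmit signal. The balance factor is illustrative only.

def hybrid_receive(line_signal: float, transmit: float,
                   balance: float = 1.0) -> float:
    """Recover the far-end signal from the shared two-wire line.
    balance=1.0 is a perfect hybrid; anything less leaks echo."""
    return line_signal - balance * transmit

far_end = 0.5
near_end = 1.0
line = far_end + near_end  # both directions share the one pair

perfect = hybrid_receive(line, near_end, balance=1.0)    # 0.5: no echo
imperfect = hybrid_receive(line, near_end, balance=0.9)  # 0.6: 0.1 of
                                                         # our own voice
```

One phone leaking a tenth of its own transmit back is merely sidetone; a dozen imperfect hybrids bridged onto one conference circuit is how the cumulative echo problem arises.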

What were these private lines for? Well, you can probably imagine a few applications for voice private lines, but keep in mind that for much of their history they were very expensive... prior to the use of TDM digital networks, you were basically paying AT&T for the installation and upkeep of however many miles of telephone wire were required to get between the two locations... plus carrier equipment, line conditioning, etc.

One of the cool things about the BSPs, and one of the reasons they were kept internal, is that it's not at all unusual for them to go into detail on specific customers. We can get a feel for the use-cases for these service offerings from the specific customers that used them heavily enough to merit BSPs. For two-point private line telephone service, that customer is the FAA. The FAA has long had a tight relationship with AT&T, rivaled only by the military for reliance on dedicated telephone infrastructure. Air-traffic controllers used voice private lines both to talk to each other across facilities, and to remotely operate the radios they used to communicate with aircraft.

Voice private lines were also popular in other situations where people across multiple locations needed to communicate in real-time. For example, television and radio networks often had private lines between studios and network control centers to aid the staff in coordinating the start and stop of programs from different locations. The telephone system was an obvious choice, because for the early history of radio and television most networks used "broadcast-grade" private lines (with larger bandwidths than voice lines, especially for television!) to distribute their programming to the member stations. The radio and television networks were a huge business for AT&T prior to the use of satellite transponders for the same purpose.

I know of some more exotic applications as well. Some critical infrastructure or large industrial facilities, and in particular nuclear power plants with their extensive emergency preparation requirements, had four-wire private line systems that linked their control rooms to contingency sites, disaster response agencies, and even the homes of their senior staff. These were configured as conference lines for use in coordinating an emergency response. Indeed, emergency management was another major application of private lines, and I will eventually write about FEMA's nationwide four-wire private line system, still used for dissemination of emergency warnings to state and regional emergency management agencies.

These conference lines are, as you have likely suspected, not really two-station private line systems. They are multistation voice systems, described in 310-405-100. TCI has a much older issue of this BSP, from 1957, and it describes the electrical resistor networks used to achieve the "conference line" configuration in which the talk pairs are mixed and sent on the listen pairs of all stations. 310-405-100 I1 (1957) 1.03:

One of the most important requirements of a multistation private line circuit is that the volume delivered to each receiver be substantially the same no matter which station is talking. The above holds true even when two or more circuits are switched together.

Indeed, and we are still struggling with this today!

1.04 is interesting:

Various signaling systems and combinations of signaling systems can be used to "call" the various stations on a private line circuit. Some types of signaling systems are: loudspeaker signaling, manual code ringing, ringdown signaling, 600-1500-cycle (2-tone) selective signaling, and code selective signaling.

Explaining all of those would be an article of its own, but it is an interesting note that multistation systems challenge the typical sense of "ringing" on a telephone line, and conventional ringing doesn't seem to have been very common on four-wire systems. A common replacement is what is described as "loudspeaker signaling" here, where the stations have an always-on speaker and you get someone's attention by... yelling at them, basically.

AT&T has never limited themselves to voice. There have long been various types of control or signaling equipment available, and 310-435 describes the SC2 Selective Control System available for use with multistation private lines. This telegraph-like system involves a control station sending a series of coded pulses, which are detected by satellite stations that open and close relays in response. There are still a lot of products like this today, but the SC2, as a private line offering, is a good occasion to make a point about the historic phone system: it used to be that all telephone equipment was the property of the telephone company. That included the SC2 control and satellite stations, and they had a terminal strip on the outside of the cabinet that exposed the relay contacts and served as the demarcation point between telco and customer property.

That has an interesting implication for these private line systems: the operating company needed a way to test and diagnose them. As a result, larger or higher-criticality private line systems usually included some kind of terminal equipment at the customer location that provided test functionality. On four-wire systems, a loopback test was the norm: some sort of signal sent on the line would cause a relay to shunt the talk pair to the listen pair at the customer premises, allowing the test board at the exchange to send signals all the way "around" the private line. The SC2 had a pretty complicated piece of terminal equipment, since it had to decode the signal pulses, and as a result there is an extensive test procedure with parts that can be performed remotely and parts that must be done on-site. The BSP cautions that arrangements must be made with the customer to disconnect their equipment before a telephone technician starts opening and closing relays to test.

A final common application of voice private lines was for "tie lines," or any of the other names they went by. These were lines that ran between two switching systems, providing something like a long-distance trunk between them. Imagine a corporation with two offices, each with a private branch exchange. The corporation could contract for a private line between the two PBXs that served as a tie line, and the operators at each location could then use it to directly connect calls between the two offices. Besides sparing the operators from dialing those calls over the public network, tie lines could also save money over long-distance calling, if utilization was high enough. A number of services offered by AT&T and operating companies basically amounted to different ways of using tie lines, so Division 311 "Switched Special Services Systems" covers tie line configurations including WATS and operating company-managed PBXs.

Of course, if you have read this far, you are probably wondering about data. In the contemporary computer industry we associate this kind of private service entirely with the leased lines of the '70s and '80s. Division 312 covers Private Line Data Systems and Services, beginning with Electronic Telegraph Loops and teletypewriters. In the 1950s, "data" pretty much meant teletypewriter, and there is a dedicated customer section on Western Union, which used AT&T circuits to extend their own network. The first half-dozen topics are devoted to various types of telegraph systems, including DC telegraphy (over continuous metallic circuits) and carrier telegraphy (combining multiple telegraph channels onto one private line using frequency division muxing).

312-8 is where we get to the good stuff, Data Sets. Data set was the term used by AT&T for what we now know as a modem; the Bell 103 modem, for example, is more properly the 103 Data Set. The term "data set" predates acoustic modems and is somewhat more general, though, having been used by AT&T to refer to simple relay closure systems as well. Still, by the 1978 312-000-000 index there are a range of different data sets available, covering different speeds and applications. The systems covered in this section are intended for use with a complete private line data system, that is, the lines themselves were part of the system. They were typically private lines that terminated at dedicated equipment in the telephone exchange, where a data bus was used to carry signals between different lines.

It is in Division 313 that we find what we'll recognize today: Voice and Voiceband Data Circuits. These are data systems that operate over voice-type lines, using acoustic modem methods to encode data within the frequency response of those lines. This division is actually surprisingly short, for a reason. 313-100-100 I2 (1982) 1.01:

Telephone company testing of circuits that terminate in customer premises outlined in these sections includes only that portion of the circuit up to the network interface. The circuit testing procedure does not include customer premises terminal equipment.

In practice, a lot of voiceband equipment would be provided by AT&T, but they are establishing a clear separation of concerns between a voice-type private line and the modem used with it. It's important to understand that "network interface" as used here has a specific meaning within the telephone industry: it is the demarcation point between the telephone network and the customer's equipment.

Higher-speed telephone modems like the Bell 201, capable of 2400bps, could be used on four-wire circuits for full-duplex operation. Indeed, four-wire circuits were quite common for data use as well because they enabled full duplex operation. AT&T was also not the only option for voiceband modems. Division 314, Digital and Analogue Data Transmission Systems, is dedicated to a variety of modems and I49 (1983) includes familiar names like IBM and Data General.

This practice of using voiceband equipment to put data over private lines, without requiring extensive special equipment at the central office, started a shift in data communications practices that greatly blurred the lines between different types of service. Some of the confusion of terms we encounter today comes from these gray areas. The Bell 103 is a data set and can be used on private lines, but it can also be used on a conventional dial line. The distinction between voice and data blurred as well: SAGE incorporated a nationwide computer network, often considered the first precursor to the modern internet, that operated over voiceband modems. The digitization of the telephone network would further complicate definitions.
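The Bell 103's modulation itself is simple enough to sketch: binary FSK at 300 baud, with the originate side sending 1270 Hz for mark and 1070 Hz for space. Below is a toy modulator and non-coherent demodulator in Python; it makes no attempt at the real data set's filtering, phase continuity, or timing recovery, and the bit pattern is arbitrary:

```python
import math

SAMPLE_RATE = 8000        # samples per second
BAUD = 300                # Bell 103 signals at 300 baud
MARK, SPACE = 1270, 1070  # originate-side frequencies in Hz

def modulate(bits):
    """One sine burst per bit; a real data set keeps phase continuous."""
    spb = SAMPLE_RATE // BAUD  # samples per bit
    samples = []
    for bit in bits:
        freq = MARK if bit else SPACE
        samples.extend(math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                       for n in range(spb))
    return samples

def demodulate(samples):
    """Non-coherent detection: compare correlation energy at both tones."""
    spb = SAMPLE_RATE // BAUD

    def energy(chunk, freq):
        i = sum(s * math.cos(2 * math.pi * freq * n / SAMPLE_RATE)
                for n, s in enumerate(chunk))
        q = sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                for n, s in enumerate(chunk))
        return i * i + q * q

    return [1 if energy(samples[i:i + spb], MARK) >
                 energy(samples[i:i + spb], SPACE) else 0
            for i in range(0, len(samples) - spb + 1, spb)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(bits)) == bits
```

Because both tone pairs sit comfortably inside the 300-3400 Hz voiceband, exactly the same scheme works on a private line or a conventional dial line, which is part of why the service categories blurred.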


I have a long list of telephone-related topics to cover and I will probably never get to all of them, although I sure will do my best. It takes quite a bit of my free time to write these articles, and I'm also starting work on a more ambitious project around telephone history in more of a reference format. I'd appreciate your support in pursuing these projects---consider supporting me on ko-fi, which will also get you my subscribers-only newsletter EYES ONLY.


During the 1960s, AT&T introduced TDM digital trunks to the telephone network. Using digital technology, a large number of telephone calls could be digitized into samples and those samples multiplexed onto a single high-speed data connection between two telephone exchanges. This method of multiplexing was more reliable and less prone to noise than analog FDM methods, and it could be adapted to a wide variety of carriers. Over the following decades the telephone network underwent a wholesale conversion to digital, and it is now typical that the only "analog" parts of an analog telephone call are the last mile connections between the exchange and the customer premises. Modern telephone exchanges digitize calls at the line card, and the call remains digital until it reaches a line card on the other end. We were doing data over voiceband, and then we were doing voiceband over data.

The implied result, data over voiceband over data, was in fact very common, and the apex of dial-up internet standards, v.90/v.92, assumes that the underlying telephone connection is digital. The oddity of stacking multiple layers of digitization was far more apparent on private lines, though, where there was no requirement to retain compatibility with a standard telephone loop.

And thus the leased line was introduced. The closest thing I know of to a technical difference between a "private line" and a "leased line" is that a private line is assumed to be private over the entire span, while a "leased line," in practice, refers only to the last-mile. Since the actual telephone network is digital, using TDM or even packet-switching methods, there is no need for a dedicated physical connection between two central offices to carry data. The leased line just gives a customer a way to insert data into the telephone network.

While v.92 assumes a bidirectional digital connection, it is limited to 56kbps because of properties of standard phone lines, including the use of companding. Leased lines didn't have to put up with this limitation: they could omit the functions of a telephone line card and instead deliver a digital signal all the way to the customer premises. While the telephone network itself went through several major iterations of digital media (including the charmingly named "plesiochronous" network), the last-mile digital connection to customers has been well-standardized for a surprisingly long time: the T-carrier.
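Companding is worth a quick illustration. North American line cards encode each sample with the μ-law curve, which spends the 8 bits of a sample more densely near zero, where speech energy lives. The sketch below uses the continuous μ-law formula with a naive rounding step, not the exact segmented encoding a real codec uses:

```python
import math

MU = 255  # North American mu-law parameter

def mulaw_compress(x):
    """Continuous mu-law curve; x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize_8bit(x, compand=True):
    """Round to 8-bit resolution, with or without companding."""
    levels = 127
    if compand:
        return mulaw_expand(round(mulaw_compress(x) * levels) / levels)
    return round(x * levels) / levels

# A quiet sample survives companded quantization with far less error
# than it would under plain uniform quantization
quiet = 0.01
assert abs(quantize_8bit(quiet) - quiet) < \
       abs(quantize_8bit(quiet, compand=False) - quiet)
```

The flip side is coarser steps near full scale, and it's exactly this nonuniform step size that a modem fighting for every last bit of capacity has to contend with.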

Let's talk quickly about the Digital Hierarchy, the scheme around which the TDM telephone network was designed. TDM involves packing samples into time slots in a round-robin fashion, so the digital hierarchy is similarly organized around cycles. As a hierarchy, those cycles get larger and larger. A telephone call consists of 8-bit samples at 8kHz, which works out to 64kbps of data. That 64kbps channel, in the digital hierarchy, is referred to as DS0 or Digital Signal 0. By the somewhat arbitrary but pragmatic design of the digital hierarchy, 24 DS0s are multiplexed to form a DS1. These DS designations refer to the actual payload, not to the carrier technology used to transmit it. DS1, in the United States, was most commonly carried by Transmission System 1, commonly called T1 or T-carrier. 24 64kbps channels add up to 1.536mbps of payload; add one framing bit to each 193-bit frame, at 8,000 frames per second, and you get the 1.544mbps that a T1 delivered.
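The arithmetic of the hierarchy is easy to check, including the framing bit that takes the 1.536mbps DS1 payload up to the familiar 1.544mbps T1 line rate:

```python
# DS0: one voice channel, 8 bits per sample at 8,000 samples per second
ds0_bps = 8 * 8000                # 64,000 bps

# 24 DS0s make up the DS1 payload
payload_bps = 24 * ds0_bps        # 1,536,000 bps

# A T1 frame is 24 channels x 8 bits + 1 framing bit = 193 bits,
# transmitted 8,000 times per second
t1_bps = (24 * 8 + 1) * 8000      # 1,544,000 bps
```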

This connection was digital the entire way through, with no need for conversion to analog and the resulting quantization noise and bandwidth limitations. T1 was originally designed for trunk connections between telephone equipment, but it became quite natural to extend T1 connections to customer sites as a form of high-speed (for the time) data. T1 was the typical format of the leased line even into the '00s, and most people my age probably remember having dial-up service and coveting the remarkable speed of a T1 connection.

This is not to say that leased lines were limited to T1. There were higher-speed systems used within the telephone network that carried higher steps on the digital hierarchy, DS2, DS3, and so on, and these could be extended to customer premises as well. At the high end, larger businesses would be placed directly on a SONET fiber-optic ring via add-drop multiplexers, an arrangement capable of multi-gigabit speeds.

It's a little odd, actually, that the "leased line" we think of has very little to do with actual telephones. It's a digital network connection much like we use today, except that it functions on top of the provisioned-bandwidth, synchronous, TDM network originally built in order to carry telephone calls. There are probably still organizations today running ethernet over SDH for their internet connection, and it won't feel much different from anything else we use.

[1] The use of architecture as a symbol of corporate power, once a fundamental part of the computer and telecommunications industry, is largely lost today. There are more than a few reasons, but one of them is the development history of Silicon Valley. It is remarkable how underwhelming the offices of today's most powerful corporations are, consisting of scattered low-rise office parks with no identity besides the earth tones and angles of the 1970s. Of course, when modern tech companies do build prestige headquarters, they tend to be unspeakably ugly. I'm speaking mostly of Meta, I will give Apple's effort a mediocre grade.

2024-08-31 ipmi

I am making steady progress towards moving the Computers Are Bad enterprise cloud to its new home, here in New Mexico. One of the steps in this process is, of course, purchasing a new server... the current Big Iron is getting rather old (probably about a decade!) and here in town I'll have the rack space for more machines anyway.

In our modern, cloud-centric industry, it is rare that I find myself comparing the specifications of a Dell PowerEdge against an HP ProLiant. Because the non-hyperscale server market has increasingly consolidated around Intel specifications and reference designs, it is even rarer that there is much of a difference between the major options.

This brings back to mind one of those ancient questions that comes up among computer novices and becomes a writing prompt for technology bloggers. What is a server? Is it just, like, a big computer? Or is it actually special?

There's a lot of industrial history wrapped up in that question, and the answer is often very context-specific. But there are some generalizations we can make about the history of the server: client-server computing originated mostly as an evolution of time-sharing computing using multiple terminals connected to a single computer. There was no expectation that terminals had a similar architecture to computers (and indeed they were usually vastly simpler machines), and that attitude carried over to client-server systems. The PC revolution instilled a WinTel monoculture in much of client-side computing by the mid-'90s, but it remained common into the '00s for servers to run entirely different operating systems and architectures.

The SPARC and Solaris combination was very common for servers, as were IBM's minicomputer architectures and their numerous operating systems. Indeed, one of the key commercial contributions of Java was the way it allowed enterprise applications to be written for a Solaris/SPARC backend while enabling code reuse for clients that ran on either stalwarts like Unix/RISC or "modern" business computing environments like Windows/x86. This model was sometimes referred to as client-server computing with "thick clients." It preserved the differentiation between "server" and "client" as classes of machines, and the universal adherence of serious business software to this model led to an association between server platforms and "enterprise computing."

Over time, things have changed, as they always do. Architectures that had been relegated to servers became increasingly niche and struggled to compete with the PC architecture on cost and performance. The general architecture of server software shifted away from vertical scaling and high-uptime systems to horizontal scaling with relaxed reliability requirements, taking away much of the advantage of enterprise-class computers. For the most part, today, a server is just a big computer. There are some distinguishing features: servers are far more likely to be SMP or NUMA, with multiple processor sockets. While the days of SAS and hardware RAID are increasingly behind us, servers continue to have more complex storage controllers and topologies than clients. And servers, almost by definition, offer some sort of out of band management.

Out-of-band management, sometimes also called lights-out management, identifies a capability that is almost unheard of in clients. A separate, smaller management computer allows for remote access to a server even when it is, say, powered off. The terms out-of-band and in-band in this context emerge from their customary uses in networking and telecom, meaning that out of band management is performed without the use of the standard (we might say "data plane") network connection to a machine. But in practice they have drifted in meaning, and it is probably better to think of out-of-band management as meaning that the operating system and general-purpose components are not required. This might be made clearer by comparison: a very standard example of in-band management would be SSH, a service provided by the software on a computer that allows you to interact with it. Out-of-band management, by contrast, is provided by a dedicated hardware and software stack and does not require the operating system or, traditionally, even the CPU to cooperate.

You can imagine that this is a useful capability. Today, out-of-band management is probably best exemplified by the remote console that most servers offer. It's basically an embedded IP KVM, allowing you to interact with the machine as if you were at a locally connected monitor and keyboard. A lot of OOB management products also offer "virtual media," where you can upload an ISO file to the management interface and then have it appear to the computer proper as if it were a physical device. This is extremely useful for installing operating systems.

OOB management is an interesting little corner of computer history. It's not a new idea at all; in fact, similar capabilities can be found through pretty much the entire history of business computing. If anything, it's gotten simpler and more boring over time. A few evenings ago I was watching a clabretro video about an IBM p5 he's gotten working. As is the case in most of his videos about servers, he has to give a brief explanation of the multiple layers of lower-level management systems present in the p5 and their various textmode and web interfaces.

If we constrain our discussion of "servers" to relatively modern machines, starting say in the late '80s or early '90s, there are some common features:

  • Some sort of local operator interface (this term itself being a very old one), like an LCD matrix display or grid of LED indicators, providing low-level information on hardware health.
  • A serial console with access to the early bootloader and a persistent low-level management system.
  • A higher-level management system, with a variable position in the stack depending on architecture, for remote management of the machine workload.

A lot of this stuff still hangs around today. Most servers can tell you on the front panel if a redundant component like a fan or power supply has failed, although the number of components that are redundant and can be replaced online has dwindled with time from "everything up to and including CPUs" on '90s prestige architectures to sometimes little more than fans. Serial management is still pretty common, mostly as a holdover of being a popular way to do OS installation and maintenance on headless machines [1].

But for the most part, OOB management has consolidated in the exact same way as processor architecture: onto Intel IPMI.

IPMI is confusing to some people for a couple of reasons. First, IPMI is a specification, not an implementation. Most major vendors have their own implementation of IPMI, often with features above and beyond the core IPMI spec, and they call them weird acronyms like HP iLO and Dell DRAC. These vendor-specific implementations often predate IPMI, too, so it's never quite right to say they are "just IPMI." They're independent systems with IPMI characteristics. On the other hand, more upstart manufacturers are more likely to just call it IPMI, in which case it may just be the standard offering from their firmware vendor.

Further confusing matters is a fair amount of terminological overlap. The IPMI software runs on a processor conventionally called the baseboard management controller or BMC, and the terms IPMI and BMC are sometimes used interchangeably. Lights-out management or LOM is mostly an obsolete term but sticks around because HP(E) is a fan of it and continues to call their IPMI implementation Integrated Lights-Out. The BMC should not be confused with the System Management Controller or SMC, which is one of a few terms used for a component present in client computers to handle tasks like fan speed control. These have an interrelated history and, indeed, the BMC handles those functions in most servers.

IPMI also specifies two interfaces: an out-of-band interface available over the network or a serial connection, and an in-band interface available to the operating system via a driver (and, in practice, I believe communication between the CPU and the baseboard management controller via the low-pin-count or LPC bus, which is a weird little holdover of ISA present in most modern computers). The result is that you can interact with the IPMI from a tool running in the operating system, like ipmitool on Linux. That makes it a little confusing what exactly is going on, if you don't understand that the IPMI is a completely independent system that has a local interface to the running operating system for convenience.

What does the IPMI actually do? Well, like most things, it's mostly become a webapp. Web interfaces are just too convenient to turn down, so while a lot of IPMI products do have dedicated client software, they're porting all the features into an embedded web application. The quality of these web interfaces varies widely but is mostly not very good. That raises a question, of course, of how you get to the IPMI web interface.

Most servers on the market have a dedicated ethernet interface for the IPMI, often labelled "IPMI" or "management" or something like that. Most people would agree that the best way to use IPMI is to put the management network interface onto a dedicated physical network, for reasons of both security and reliability (IPMI should remain accessible even in case of performance or reliability problems with your main network). A dedicated physical network costs time, space, and money, though, so there are compromises. For one, your "management network" is very likely to be a VLAN on your normal network equipment. That's sort of like what AT&T calls a common-carrier switching arrangement, meaning that it behaves like an independent, private network but shares all of the actual equipment with everything else, the isolation being implemented in software. That was a weird comparison to make and I probably just need to write a whole article on CCSAs like I've been meaning to.

Even that approach requires extra cabling, though, so IPMI offers "sideband" networking. With sideband management, the BMC communicates directly with the same NIC that the operating system uses. The implementation is a little bit weird: the NIC will pretend to be two different interfaces, mixing IPMI traffic into the same packet stream as host traffic but using a different MAC address. This way, it appears to other network equipment as if there are two different network interfaces in use, as usual. I will leave judgment as to how good of an idea this is to you, but there are obvious security considerations around reducing the segregation between IPMI and application traffic.

And yes, it should be said, a lot of IPMI implementations have proven to be security nightmares. They should never be accessible to any untrusted person.

Details of network features vary between IPMI implementations, but there is a standard interface on UDP 623 that can be used for discovery and basic commands. There's often SSH and a web interface, and VNC is pretty common for remote console.

There are some neat basic functions you can perform with the IPMI, either over the network or locally using an in-band IPMI client. A useful one, if you are forgetful and keep poor records like I do, is listing the hardware modules making up the machine at an FRU or vendor part number level. You can also interact with basic hardware functions like sensors, power state, fans, etc. IPMI offers a standard watchdog timer, which can be combined with software running on the operating system to ensure that the server will be reset if the application gets into an unhealthy state. You should set a long enough timeout to allow the system to boot and for you to connect and disable the watchdog timer; ask me how I know.
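These functions map onto ipmitool subcommands. A few representative invocations as a sketch; the BMC hostname and credentials below are placeholders, and the exact output varies by vendor:

```shell
# In-band, via the OS driver (run on the server itself, as root):
ipmitool fru print            # list hardware modules by FRU/part number
ipmitool sensor list          # temperatures, voltages, fan speeds
ipmitool mc watchdog get      # current watchdog timer state

# Out-of-band, over the network to the BMC (hostname and credentials
# here are made up):
ipmitool -I lanplus -H bmc.example.com -U admin -P hunter2 \
    chassis power status
ipmitool -I lanplus -H bmc.example.com -U admin -P hunter2 \
    chassis power cycle
```

The `-I lanplus` interface is the RMCP+ protocol running over that standard UDP 623 port; the in-band invocations go through the OS driver instead, which is the "local interface for convenience" mentioned above.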

One of the reasons I thought to write about IPMI is its strange relationship to the world of everyday client computers. IPMI is very common in enterprise servers but very rare elsewhere, much to the consternation of people like me that don't have the space or noise tolerance for a 1U pizzabox in their homes. If you are trying to stick to compact or low-power computers, you'll pretty much have to go without.

But then, there's kind of a weird exception. What about Intel ME and AMD ST? These are essentially OOB management controllers that are present in virtually all Intel and AMD processors. This is kind of an odd story. Intel ME, the Management Engine, is an enabling component of Intel Active Management Technology (Intel AMT). AMT was pretty much an attempt at popularizing OOB management for client machines, and offers most of the same capabilities as IPMI. It has been considerably less successful. Most of that is probably due to pricing: Intel has limited almost all AMT features to use with their very costly enterprise management platforms. Perhaps there is some industry in which these sell well, but I am apparently not in it. There are open-source AMT clients, but the next problem you will run into is finding a machine where AMT is actually usable.

The fact that Intel AMT has sideband management capability, and that therefore the Intel ME component on which AMT runs has sideband management capability, was the topic of quite some consternation in the security community. Here is a mitigating factor: sideband management is only possible if the processor, motherboard chipset, and NIC are all AMT-capable. Options for all three devices are limited to Intel products with the vPro badge. The unpopularity of Intel NICs in consumer devices alone means that sideband access is rarely possible. vPro is also limited to relatively high-end processors and chipsets. The bad news is that you will have a hard time using AMT in your homelab, although some people certainly do. The upside is that the widely-reported "fact" that Intel ME is accessible via sideband networking on consumer devices is typically untrue, and for reasons beyond Intel software licensing.

That leaves an odd question around Intel ME itself, though, which is certainly OOB management-like but doesn't really have any OOB management features without AMT. So why do nearly all processors have it? Well, this is somewhat speculative, but the impression I get is that Intel ME exists mostly as a convenient way to host and manage trusted execution components that are used for things like Secure Boot and DRM. These features all run on the same processor as ME and share some common technology stack. The "management" portion of Intel ME is thus largely vestigial, and it's part of the secure computing infrastructure.

This is not to make excuses for Intel ME, which is entirely unauditable by third parties and has harbored significant security vulnerabilities in the past. But, remember, we all use one processor architecture from one of two vendors, so Intel doesn't have a whole lot of motivation to do better. Lest you respond that ARM is the way, remember that modern ARM SOCs used in consumer devices have pretty much identical capabilities.

It is what it is.

[1] The definition of "headless" is sticky and we have to not get stuck on it too much. People tend to say "headless" to mean no monitor and keyboard attached, but keep in mind that slide-out rack consoles and IP KVMs have been common for a long time and so in non-hyperscale environments truly headless machines are rarer than you would think. Part of this is because using a serial console is a monumental pain in the ass, so your typical computer operator will do a lot to avoid dealing with it. Before LCD displays, this meant a CRT and keyboard on an Anthro cart with wheels, but now that we are an enlightened society, you can cram a whole monitor and keyboard into 1U and get a KVM switching fabric that can cover the whole rack. Or swap cables. Mostly swap cables.

2024-08-19 mining for meteors

Billboards

Route 66 is often viewed through the lens of its billboards. The Jack Rabbit Trading Post, a small store a few miles out of Joseph City, would hardly be remembered were it not for its billboards spanning four states. The tradition of far-advance billboards is still observed today. Albuquerque's roadside stop operator Bowlin puts billboards six hours and two freeway exchanges out from its combined gas station-Dairy Queens. One can ponder the mystery of "The Thing" (near Benson, Arizona) throughout nearly the entire state of New Mexico.

So, if you have driven anywhere over a several-hundred-mile span of Interstate 40 (the modern-day successor to Route 66 in this area), you are probably aware of the Meteor Crater [1]. At Meteor, Arizona, between Flagstaff and Winslow, the Meteor Crater earns its nonspecific name: it was the first crater definitively shown to have been the result of a meteor impact. It is a spectacular sight, almost 4,000 feet wide and 600 feet deep. It is also known by another name: Barringer Crater, for the family that has owned it for over one hundred years.

Today, Meteor Crater is one of the few traditional Route 66 roadside attractions to have held on to much of its vitality. A steady flow of visitors pay admission to see the crater and its attached visitors center, somewhat ostentatiously styled as the Barringer Space Museum. The museum focuses on the crater's two major connections to space: first, that it was formed by a meteor that came from there. Second, and more importantly, that the crater has been used as a training site for astronauts since the Apollo era.

I assume it has been calculated that these are the two topics that draw visitors, because the museum devotes almost no space at all to what I consider the most fascinating of the crater's many stories: that the crater was a mine. Forget space; the Meteor Crater is the greatest artifact of a fascinating, but brief, chapter of American mining history.

Meteorites

But first, as dismissive as I may be, we must talk a bit about space. During the 19th century, detailed observations of meteor showers established that the bright trails of light seen in the night sky---previously assumed to be some atmospheric phenomenon---must in fact be the collision of objects from space with the atmosphere. The idea that rocks or something were flying out of space and into the upper reaches of our world naturally suggests that some of them might make it all the way down. Indeed, as early as 1803 such a meteorite [2] had been found on the ground in France following a meteor shower, although the idea that it had fallen from above was not universally accepted until much later.

Chemists analyzed a number of meteorites recovered during that era and found that almost all of them contained significant amounts of iron, and their geological oddities (such as the inclusion of spherical globules of metal) supported the idea that they had formed in space. By the turn of the 20th century, it was generally understood that meteors were chunks of mostly iron that came from somewhere out there and collided with our planet, and that some of them made it all the way to the ground.

What was not well understood by that time was the fate of those meteorites. Craters on the moon had been known since the 17th century, and the idea that they had been formed by impacts is nearly as old. Even three hundred years later, though, it was far from a settled matter. Volcanic activity was also a promising explanation, and one that many felt to be more comfortably within the bounds of reason. In the year 1900, many astronomers would have been quite dismissive of the moon-meteorite theory, as the opinion of the day favored a moon with an active volcanic core.

The idea that impacts had formed vast craters on Earth must have been even more far-fetched. Besides, the known meteorites were quite small, more prone to knocking holes in roofs than craters in the desert. It was in this context that the Meteor Crater was first examined.

Okay, we're done with that boring space stuff. Let's talk about mines!

In 1891, a small mining firm based out of Albuquerque received a sample from a prospector in Arizona. The prospector had found an ore vein in a remote part of that territory that he believed to be quite valuable. Indeed, an assayer put the included sample at a remarkable 77% iron, with a bit of lead, silver, and gold to boot. Observing its purity and unusual structure, the assayer figured that it had been melted in a furnace.

The mining firm, amazed by this new ore and skeptical of the assayer's attribution to a furnace, circulated parts of their sample among a number of business leaders---men who might put up the money for a large-scale mining operation. One of them, James Williamson of Civil War fame and by then an executive of the Atlantic and Pacific Railroad, sent his sample to mineralogist Albert E. Foote of Philadelphia for his thoughts on its commercial value and exploitability. Foote, recognizing the sample's unusual structure, knew immediately that the prospector's claim of a vein two miles long and up to forty yards wide could not be entirely true [3]. It was a meteorite.

In truth, the "ore" was one of a number of small fragments (where "small" still often surpassed 100 pounds) that could be found over a large area near Cañon Diablo and a feature known to locals as "Crater Mountain." Foote described the "so-called 'crater'", "the sides of which are so steep that animals that have descended into it have been unable to escape and have left their bleached bones at the bottom." Foote carefully searched the rim, but found no evidence of volcanic activity, leaving the crater's origin a mystery. Foote's blindness to the crater's connection with the meteorite fragments is almost hard to believe, but he seems to have taken it as a coincidence:

The remarkable quantity of oxidized black fragmental material that was found at those points where the greatest number of small fragments of meteoric iron were found, would seem to indicate that an extraordinarily large mass of probably 500 or 600 pounds had become oxidized while passing through the air and was so weakened in its internal structure that it had burst into pieces not long before reaching the earth.

Indeed, that same year, the chief geologist of the USGS visited the crater and reached the same conclusion. The crater's proximity to the meteorites seemed to be chance. The crater itself must be the result of a vast steam explosion, which was, after all, not an unknown phenomenon in that part of the Arizona territory.

The late 19th century saw the formation of the Division of Forestry (of the Department of Agriculture) and a related reorganization of the General Land Office, precursors to today's Forest Service and Bureau of Land Management. Large areas of public land would be withdrawn from the GLO's management (which consisted mostly of sale to private owners) and reserved as National Forests. In eastern Arizona, this controversial task fell to GLO agent S. J. Holsinger. He traveled the region extensively in studying forest issues and establishing the boundaries of what would become the Coconino National Forest. Although he never saw it for himself, he heard stories of an enormous crater, near what he called Coon Mountain or Coon Butte. The stories held that fragments of iron, from a meteor, had been found within and around it.

The record is unclear on where or how, but one day in October of 1902, Holsinger found himself in a casual conversation with a mine engineer by the name of Daniel Barringer. Perhaps they sat by each other in a saloon or a train station. Perhaps Barringer told Holsinger that he worked in iron mining, and Holsinger said something along the lines of "I'll tell you about some iron." In any case, Holsinger described the meteorites, and the crater---as they had been described to him. Some of the locals, Holsinger said, had a theory: that the crater had actually been formed by the meteorites. That a single meteorite larger than ever seen before had crashed into the earth, burying itself far below, and leaving behind the feature known as Coon Mountain, or Crater Mountain, or Sunset Knoll as the USGS had labeled it on maps.

Barringer was hooked. He wrote back to Holsinger for more information, he researched the area, and he involved his friend Benjamin Tilghman, the inventor of sandblasting and a general scientific type in the pattern of the era, in the pursuit. They became convinced that the USGS had been wrong, and the locals right: that it was not just a crater, but a Meteor Crater. So, they bought it.

Iron

Although Barringer was well-qualified as a geologist and was clearly fascinated by the crater, his interest in it was not purely scientific. In a 1905 paper making his argument for its meteoritic origin, Barringer reports that the iron fragments from the meteorite had already been commercially exploited. A nearby merchant had hired laborers to search the area around the crater for the "iron ore," and they had found some pieces as heavy as 1,000 pounds. Of the smaller pieces that they collected, the merchant estimated that between his efforts and those of another businessman, perhaps 15 tons of the iron-rich material had been shipped away for smelting.

Barringer and his business partners had found thousands of fragments, ranging from over 200 pounds to less than an ounce. These "Cañon Diablo siderites" could be over 90% iron, the rest being mostly nickel.

Among the evidence he cites for the meteoritic theory, Barringer observes that every single one of these fragments had been found immediately on the surface, and only a handful had ever been found inside the crater. They were not themselves buried by impact; they seemed to lie right where they had landed after being ejected from the crater with great force.

Barringer's tone in the 1905 paper is, well, critical, particularly when it comes to the USGS geologist who had declared the crater a product of a steam explosion, Gilbert. Barringer suggests that Gilbert's rejection of the meteoritic theory could only be the result of a profound failure to notice geological inconsistencies that ought to have been obvious. The presence of a great amount of fine silica in every direction from the crater (sometimes called "rock flour," this very fine sand is the result of impact forces instantaneously shattering a large area of bedrock), the distribution of iron fragments neatly centered on the crater, and the upturned and sideways geological layers found in the rim all indicated that the rim had been "heaved out" of the crater itself by a great force.

To definitively prove his theory, though, Barringer relied on trenches in search of buried fragments. If Barringer was correct, there should be fragments of the meteorite mixed randomly within the other ejecta of the crater. It took some effort, but eventually Barringer reported clear cases, including a large iron fragment found underneath a slab of sandstone that must have come from 400 feet below the surface.

Still, Barringer had bigger plans: he intended to find the original meteorite itself---the core from which all of the fragments had broken. With a horse-driven drill, his employees sank numerous shafts from 200 to over 1,000 feet deep. Many seemed to strike iron material, but Barringer believed them to be mere fragments, not the great cluster of broken iron that must be present somewhere beneath the crater.

His belief in this huge iron core explains the mining patent he took out on behalf of his company, Standard Iron. He planned to find it and sell it. By his estimate, based on a crude analysis he developed, it would have a market value in the range of one billion dollars.


If you have made it this far, you are either fairly committed to my writing or very bored. Either way, would you consider supporting my work with a monthly contribution? Contributors receive EYES ONLY, a special newsletter on special topics (but mostly computer trivia).

https://ko-fi.com/jbcrawford


Shafts

Barringer was enthusiastic about his meteor theory in a charmingly turn-of-the-century way. He had invented, he said, the field of "meteoritics," or the study of meteors. A central concern of meteoritics was the ballistics of a meteorite: what happened when a meteorite struck the ground.

Today, thanks in large part to World War II munitions research and the nuclear weapons program, we have considerable theoretical and experimental information on how solid objects penetrate soil and rock---a topic that Sandia Laboratory, operating with similar zeal for the new frontiers of science, dubbed "terradynamics." This work began in the 1930s; Barringer was too early to benefit. He had to develop a similar theory on his own.

Understanding the geology of the crater involved a huge effort, particularly in such a remote location. Barringer had to build out a considerable operation, and he had to do so from afar. He was already a known figure in Arizona, having launched a particularly successful mine, and he worried that anyone who got word of his interest in the crater would attempt to jump his claim. Instead of traveling to Arizona, he brought in relatives and friends as business partners and even hired Holsinger away from the Division of Forestry to act as land manager. Barringer and Holsinger chartered a railroad to the site, established mill sites on the Little Colorado River and Oak Creek, and constructed a water reservoir. A camp was established at the base of the rim, and an expanding workforce started on holes, trenches, and a shaft straight down in the center of the crater.

Barringer first reached the crater in 1904, where he learned that the central shaft had been abandoned at 200 feet of depth due to the fine silica forming a quicksand that quickly filled the excavation. He opted for a different approach, buying a 4" drill and sinking five smaller shafts in the crater floor. Most of these shafts ended when they struck meteorite fragments too hard for the drill. One, managing to dodge any large pieces of iron, found no new meteoritic material after 550 feet, and reached intact bedrock at 1000 feet. These observations led Barringer to conclude that the crater had originally been deeper but had been partially filled back in by the material that was thrown into the air.

Barringer thought that a steam hoist might allow them to excavate the quicksand more quickly than it filled, allowing progress on the larger shaft in the crater's center. Barringer called off the drilling effort and shifted focus once again to the shaft, spending 1905 excavating a larger clean shaft to the level of the quicksand. In 1906, the race against the soil was attempted, but failed. The next year, drilling resumed at an accelerated pace, with sixteen new boreholes completed in 1907.

Each bore found meteor fragments down to nearly 600 feet, but there was no evidence of the meteorite itself. Barringer began to develop a new theory: the meteorite was not directly below the crater.

He conducted a series of simple ballistic experiments: he shot a rifle into the ground. The impact of the bullet into the desert soil, he reasoned, would behave similarly to the impact of a meteorite into the same. His easy target shooting led to an interesting finding. Regardless of the angle at which he shot the ground, the resulting hole appeared round. The meteor, he realized, likely didn't come straight down; it struck at an angle.

Analyzing the rim, he came to believe that the meteorite would be found somewhere beneath the south rim. More fragments were found to the north, and the south rim had been lifted higher than in other directions. He had a new target, but there was a problem... he was out of money. Barringer had started the project with considerable wealth and several partners, but he had spent $100,000 searching and Tilghman, his friend and investor in the project, decided to back out.

Progress at Meteor Crater was slow for the next decade as Barringer marketed the project to prospective investors. It was not until 1918 that he signed up a new partner, the United States Smelting and Refining Company. USSRC agreed to put $75,000 into exploratory drilling, but $60,000 was spent installing supporting infrastructure, including a ten-mile water pipe, before drilling began from near the top of the south rim.

Progress was slow, and stopped entirely at 282 feet when the drill became jammed and the $75,000 exhausted. Barringer seems to have pulled off some feat of salesmanship by convincing USSRC to continue to support the project, and in 1921 a horizontal tunnel was dug from the interior of the crater to the end of the drill, revealing the two iron balls that had stopped the work. While the tunnel plan was innovative, it was not particularly effective, as the bore turned out to be in poor shape and difficult to continue.

Drilling practically started over again, reaching 600 feet at the end of 1921. Barringer's theory appeared to pan out: at 1,100 feet, well below the crater floor, meteor fragments started to appear. The drill stuck again at 1,300 feet, but not before finding a span dense with iron fragments. Drilling to that depth had cost $200,000, and while promising, none of the material recovered had been of any significant value. USSRC pulled out.

Barringer was back to fundraising, courting various mining companies including one, United Verde Extension, who rejected the project based on their conclusion that the crater was the result not of a meteor, but of a steam explosion. You see, even in 1924, Barringer's theory of the crater's origin came off as crackpot. The steam explosion theory was still the accepted one, and the crater had attracted surprisingly little attention from professional scientists. Barringer's abrasive response to the USGS survey was no doubt a factor; a USGS geologist by the name of N. H. Darton who had worked for Gilbert during the original survey took the cause to heart.

Throughout Barringer's mineral exploration of the crater, Darton published papers arguing for a steam explosion and dismissing the meteorite theory. He refused to change the name of the site on USGS maps away from Coon Butte until 1916, when he begrudgingly accepted "Crater Mound." Barringer's son, Brandon Barringer, wrote that a USGS geologist once told him that he thought the meteor theory to be correct but "it would cost me my job if I was heard saying so."

Drilling resumed again that year, the effort of a new stockholder corporation that leased the crater from Standard Iron and worked on the advice of Barringer. This corporation, the Meteor Crater Exploration and Mining Company, was backed in part by Boston business magnate Quincy Adams Shaw. Shaw would bring about Barringer's vindication, and his downfall.

One of Daniel Barringer's sons, Daniel Barringer Jr., followed his father into the study of meteoritics. He would make a discovery similar to his father's: a letter to a mining journal described the discovery, in 1921, of a large iron meteorite near a hole in the area of Odessa, Texas.

Barringer Jr. arranged a deal with G. M. Colvocoresses, a smelter executive and one of the investors in the Exploration Company, to inspect a set of mines in Texas on his behalf. This provided a convenient excuse to travel through Odessa, where he examined the area and found signs much like those near Cañon Diablo. In 1926, he discovered the world's second known meteor impact crater, the Odessa Crater.

Incidentally, something else had been discovered near Odessa: oil. The Odessa Crater turned out to be owned by an oil interest, which was not interested in selling.

Fragments

True to the pattern, the Exploration Company completed a new shaft near the USSRC effort, which struck water at 600 feet and about $200,000 in expense. Shaw, beginning to question the project, engaged the services of an expert: astronomer Forest Ray Moulton. Moulton was a distinguished scientist, a professor at the University of Chicago, and even better, an expert in ballistics, having been in charge of ballistics research at the Aberdeen Proving Ground during the First World War.

Moulton seemed to accept the meteorite theory from the start. After all, despite the objections of the USGS, General Electric cofounder and MIT president Elihu Thomson had visited the crater and reached the same conclusion. Still, it gives some of the flavor of the debate that Barringer sometimes referred to his scientific supporters as "converts."

Moulton's report, published in 1929, was a mixed result for Barringer. He concluded that the crater was indeed the result of an impact by a meteorite, perhaps 50,000 to 3,000,000 tons. He also concluded that the meteorite would never be found. The impact energy was more than enough to vaporize it entirely, leaving only the fragments scattered across the plain.

Three months later, Daniel Barringer died.

His dreams of one billion dollars of iron and nickel buried beneath the crater died with him. While his family carried on the effort, the Great Depression ensured that only two further shafts would be drilled, after which mining exploration ended.

By 1930, the passage of time, the discovery of the Odessa Crater, and Moulton's report had solidified the meteorite theory to such a degree that the Meteor Crater became generally accepted as just that. Meteor mining, though, was over before it had started.

Billboards

If not valuable, meteor craters remain unique and fascinating geological features. The Odessa Crater was leased by Ector County as a tourist site, and efforts by various parties including the Texas Memorial Museum led to the discovery of a six-ton meteorite at the bottom of a second, smaller crater nearby. Efforts by the Works Progress Administration and later the Barringer Family to locate the meteorite that formed the larger main crater failed, leaving a covered shaft still visible today.

At Meteor Crater itself, the mission has changed in a classic Route 66 fashion: from mining to tourism. In 1953, the Standard Iron Company renamed itself to the Barringer Crater Company. The Company continues to operate Meteor Crater as a tourist attraction and scientific resource, hosting NASA training efforts and geological experiments. Ongoing research at Meteor Crater produced many conclusions about meteorites and impact events, including extensive research by Eugene Shoemaker.

It was Shoemaker who directed NASA to the site for training purposes: he had been considered as an astronaut himself, but excluded for medical reasons. Instead, he took to Meteor Crater as a moon of his own. It is due in large part to Shoemaker's comparisons between Meteor Crater and the lunar surface that we now know the craters of the Moon to be a result of meteorites as well.

Today, Meteor Crater appears as a roadside stop, not that different from Jack Rabbit or "The Thing." Billboards precede it by hours, ignored by their audience of long-haul truckers. It is dusty, and minimally staffed, and has the feeling of a forgotten place; a spacesuited mannequin and a 4D Experience only add to the impression of a tourist trap with a brighter past. The museum, with so much focus on the only tangentially related Apollo program, forms a stark contrast with brass plaques devoted to a hagiography of the Barringers. They bring you out to see the crater, but then they say very little about what is in the crater. You have to read between the lines to find the reason why: Barringer never found what he was looking for.

Still, there are things that you cannot see anywhere else. The crater itself, bigger than the Sedan Crater excavated by a nuclear weapon, can only hint at the unthinkable energy released some 50,000 years ago. A huge fragment of the meteorite, weighing 3/4 ton and named for Holsinger, is prominently displayed in the museum. Apartments built for staff remind us of just how remote "between Flagstaff and Winslow" once was.

Barringer's obsession with meteorites was sincere. Based on his writings, it predated the discovery of the Meteor Crater, and he devoted as much of his life to better understanding the crater as he did to mining it. His family and the Barringer Crater Company continue to make grants in meteoritics research, and each year they give the Barringer Medal. Its first honoree was Shoemaker, its most recent Canadian geology professor John Spray, who studies deformation and friction at extreme speeds and pressures.

Like so much of Route 66, it feels dated, and more than a little tacky. And like Route 66 as well, it is a fascinating chapter in the story of the American West. An eastern businessman, as eccentric as he was passionate, left for Arizona in pursuit of a wild idea. He was only half right.

[1] Curiously, the most recent billboards for Meteor Crater mainly feature an anthropomorphic rabbit, apparently a character designed for a "4D experience" at the museum. The connection between the rabbit and the crater is left unexplained; an odd mascot that you might be tempted to ascribe to furries except that if the furries were in charge it would have looked better. I cautiously speculate that it may have been intended as a reference to the Jack Rabbit.

[2] A meteor is seen in the sky, a meteoroid is the actual object that burned up to be seen as a meteor, and a meteorite hits the ground. It is said that you can remember this by recalling that meteorites are "right" in that they successfully made it all the way. As with so many memory devices, I think you could also argue this one the other way, so it's not really that helpful.

[3] Foote's paper to the AAAS about the discovery makes it clear that this kind of wild exaggeration was not unusual when coming from prospectors. "There were some remarkable mineralogical and geological features which together with the character of the iron itself, would allow of a good deal of self-deception in a man who wanted to sell a mine."

2024-08-12 a pedantic review of the las vegas loop

Did you hear that Elon Musk dug a tunnel under the Las Vegas Convention Center?

I think it is pretty universally known by now that the "Las Vegas Loop" is impractical, poorly thought out, and generally an embarrassment to society and industry. I will spare an accounting of the history and future of the system, but I will give a bit of context for the unfamiliar reader. The Las Vegas Loop is a (supposed) mass-transit system built and operated by The Boring Company for the Las Vegas Convention and Visitors Authority at the Las Vegas Convention Center. Besides four (ish) stations in the Convention Center, it has been expanded to serve Resorts World as well. It will, according to plan, be expanded to as many as 93 stops throughout the Las Vegas metropolitan area, despite the mayor of Las Vegas calling it "impractical" and "unsafe and inaccessible." This odd contradiction comes about because The Boring Company is footing a very large portion of the construction cost, while much of the rest is coming from casinos and resorts, making it extremely inexpensive for regional government agencies.

In practice, the Loop consists of a set of mostly double-bore tunnels of small diameter, which are traversed by Tesla Model 3 and Tesla Model X vehicles manually driven by humans at up to 40 mph. They have more recently switched to Model Y, but the operations manual I have predates that change, so let's stick with the older models for consistency. Each vehicle seats up to four. The system is nominally a PRT, or personal rapid transit, as the drivers take you to the specific station you request. The tunnel to Resorts World is single bore, and can admit vehicles in only one direction. A simple signaling scheme serves to prevent vehicles meeting head-on in single tunnels. While Loop and Boring Company marketing focuses heavily on the single underground station, all other stations are above ground. In the current state, I think it is actually somewhat generous to call the Loop an underground system, as most maneuvers and operations occur at surface level. It is perhaps best thought of as a taxi system that makes use of underground connectors to bypass traffic. Future expansion plans involve significantly more tunnel length and more underground stations, which will probably cause the system overall to feel more like a below-surface transit system and less like an odd fleet of hotel courtesy cars.

I am not going to provide a general review of the system, because many others have, and you can probably already guess what I think of it. Instead, I want to focus on some aspects that have not been as heavily discussed in other reporting: detailed operational practices, and safety and communications technology.

We are fortunate that, as part of its fire safety permitting, the Loop has been required to file its operations manual with Clark County. Unfortunately, the newest revision I can find online is 2021's Revision 7, which predates the Resorts World station and may be out of date in other ways as well. Still, it appears to be substantially correct, and much of what I will discuss is based on Revision 7 of the manual alongside several trips I have taken in the system.

Interestingly, the operations manual refers to the system only as the "Campus-Wide People Mover" or CWPM. This term seems to date to the original solicitations by LVCVA, but is not used in marketing.

Rules and Discipline

Like most detailed policies, the operations manual is an interesting read for the pedantic. Some parts are odd in a classically Elon Musk way, like the manual's use of "What's Elon like?" as the first example of a question that passengers might ask a driver. Other parts are weird in a more conventional way, like a paragraph that says first that the tunnels are connected to the operations control center (OCC) by single-mode fiber, second by two redundant fibers, and third by two single-mode fibers taking separate paths. I am pretty sure the correct interpretation of this paragraph is that there are two fiber routes and they just said that three times to pad for length, but it's hard to be totally sure. Why it's so important to clarify that it's single-mode is anyone's guess, perhaps because during the review stage a regulator asked. You find this kind of thing a lot in these sorts of policies, which are usually edited extremely carefully for regulatory compliance and not at all for plain reading.

One interesting aspect of the Loop, that has been rather heavily reported on, is that the whole thing feels remarkably cheap. Perhaps that's not surprising, as The Boring Company's central claim is to be able to construct underground transit on a tight budget. They have indeed delivered on this promise in the construction of the Loop, but it's hard not to feel like they did so more by pinching pennies and eating development costs than by innovation. Nothing speaks to this more than the photo of the OCC the manual provides, which depicts two cheap office chairs at a long desk in a room with worn linoleum floors and distinctly portable-building vibes. It is, by far, the most underwhelming mass-transit control room I have ever seen. I strongly suspect that the OCC is a reuse of the old on-site construction office. To be fair, they will surely have to build something more sophisticated for future expansion, as the current space only accommodates two operators. We will assume it is temporary.

Early in the manual, hiring and training requirements are discussed. They are fairly standard for transportation drivers, with hiring requirements mostly amounting to a clean driving record and a clean drug test. Drivers must undergo 10 hours of in-vehicle training, including a four hour half-shift endurance exercise with mock passengers. There is classroom (or more likely computer-delivered) training as well, but the manual doesn't enumerate it. Drivers are required to wear a provided uniform with plain black shoes and no jewelry or accessories, and are prohibited from initiating conversations with passengers.

Which leads us somewhat naturally to the next section of the manual, on rules and discipline. There is usually a wide gap between rules written into policy and rules followed in practice, with "real" rules being determined by enforcement behavior---what practically can be enforced, and what supervisors choose to enforce. For example, the manual prohibits the drivers listening to the radio in the cars, something that zero percent of the drivers I have had complied with. There is no discipline outlined in the manual for this infraction, so, is it even really a rule?

The speed limits for the system are 40 mph in straight tunnels, 30 mph in turns, 15 mph on ramps, and 10 mph in stations. Most of this was unsurprising except for the "15 on ramps" part, as my drivers have consistently taken full advantage of the electric vehicle's torque, hitting 30 before the end of the descent ramp. This would appear to be a violation of policy. But, it's interesting to note, discipline (a "demerit") is only listed for a speed excursion of at least 5 seconds. Because of the short length of the ramps, it is likely not actually possible to incur a demerit for violating the 15 mph ramp speed limit. I wonder if the authors of this policy realized that.
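Read literally, that demerit rule is a duration test, not a speed test. Here's a sketch of that reading in code (the 5-second threshold and the limits come from the manual; the function and the speed trace are invented for illustration):

```python
# Sketch of the demerit rule as the manual describes it: a speed limit
# violation only becomes a demerit if it is sustained for at least 5
# continuous seconds. Samples are (seconds, mph) pairs.

def incurs_demerit(samples, limit_mph, threshold_s=5.0):
    """Return True if speed exceeds limit_mph for >= threshold_s continuously."""
    over_since = None
    for t, mph in samples:
        if mph > limit_mph:
            if over_since is None:
                over_since = t
            if t - over_since >= threshold_s:
                return True
        else:
            over_since = None
    return False

# A ramp descent: about 3 seconds over the 15 mph ramp limit, then the
# ramp ends and the car is back under a higher limit.
ramp_run = [(0, 10), (1, 18), (2, 25), (3, 30), (4, 12)]
print(incurs_demerit(ramp_run, limit_mph=15))
```

On a ramp only a few seconds long, the over-limit window closes before the clock runs out, which may be why my drivers were so relaxed about it.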

Drivers are strictly prohibited from using any assistive driving features. This is sort of a moot point in practice, as maintenance staff are required to disable the assistive driving features of the vehicles before they are put into service in the Loop. This isn't at all surprising considering the highly regulated nature of transit operations, but it is pretty funny considering that The Boring Company originally promised automation, and that closely related Tesla has made self-driving a key part of their marketing.

Emergency Procedures

So, with the rules stuff out of the way, let's talk about emergency procedures. One of the problems with underground transit is that tunnels can be very dangerous: they are enclosed spaces where exits may be far away, and in a fire they can quickly fill with toxic and opaque smoke. Many historic incidents have illustrated the inherent danger of tunnels, and so modern tunnel designs incorporate extensive safety measures which typically include smoke extraction systems, evacuation guidance, emergency exits or refuges, and increasingly, fire suppression systems.

The Loop has been widely criticized for incorporating very few of these features. It does have a basic smoke evacuation system, but there is no evacuation guidance in the tunnels (no signage to indicate the nearest exit in low visibility conditions), and no evacuation points or refuges except for, oddly enough, marked refuges at the end of some tunnels that seem to be largely an ADA compliance measure because the ramps are too steep to be considered ADA-compliant egress (they are remarkably steep!).

To be fair, I think some of these criticisms are somewhat overblown. It does appear to be possible to open the car doors just about fully within the tunnels, although I think they are likely to strike the walls. And the thing is, the tunnels are very short! Like, really short! Some tunnels seem to be shorter than the typical interval between refuge points in modern highway tunnels, and those that are longer probably aren't longer by much. The expansion system may incorporate more extensive safety measures due to longer tunnel runs.

Evacuation procedures basically consist of driving the car out of the tunnel, via the next station. If evacuation must be made in the opposite direction, the manual says the driver must await instructions from the OCC, as drivers are normally prohibited by policy from driving in reverse; this is probably an accommodation for the vehicles' poor rear visibility. The OCC would likely have to coordinate vehicles reversing out by track warrant (tunnel warrant?) to avoid collisions. This is a common pain point with evacuation of train tunnels, for example, where there may not be a cab on the rear, and even if there is, there may not be enough time for the operator to switch ends.

In the event it becomes necessary to abandon the vehicle, the driver is to have passengers get out, and then lead them to the closest exit. The driver will presumably have to know the nearest exit by heart, since there isn't clear evacuation guidance in the tunnel. The manual addresses difficulty opening the vehicle doors, a common concern with Teslas that have electrically operated door releases. My understanding is that both the Model 3 and the Model X do have a mechanical release for all passenger doors, although it's pretty hidden on the rear doors. Oddly enough, the manual doesn't seem to know that: it strongly implies that there is no manual release for the rear doors of a Model 3. I would think that authorities would have immediately noticed that implication, so it makes me wonder if the Model 3 rear door release (a wire loop hidden under a panel) was simply ruled out as infeasible to use in an emergency scenario. Of course, that's odd, because the operations manual... just doesn't tell you what to do with rear passengers in a Model 3. You are basically SOL, as far as the operations manual is concerned. Only passengers of a Model X are allowed to escape in a scenario where the vehicle loses power. To my eyes, that is by far the biggest unresolved problem with the emergency operations plans. Perhaps a later revision of the manual addresses it; it seems like more of a documentation error than a real problem.

The manual does not address procedures for rescuing passengers from a vehicle disabled in the tunnel. There may be a document with that information that did not make it to the internet. At at least two points in the tunnels I spotted golf carts (the type with four rows of seats) stashed in corners, and I suspect they would be used if there was a need to retrieve passengers from a disabled car. My husband pointed out that due to their larger seating capacity and faster boarding/deboarding, the Loop would likely achieve a higher capacity if they just shifted operations entirely to the Club Cars.

Customer Service

There is a section of the manual on customer service and interactions with customers. I actually don't think it's that unusual for this kind of policy, so I don't want to mock it too much, but I will tell you this: drivers are told to keep conversations as short as possible and give as little information about themselves as possible. They are not to tell passengers how long they have worked for The Boring Company, how old they are, their last names, or information about TBC employee counts or pay rates, even if asked. There is a surprisingly long (in comparison to other items) script for answering questions about the flamethrower. Drivers are told to tell passengers that the Loop operates at "about 35 average and 50 maximum," an interesting answer since 40 is the maximum speed in any segment and exceeding it for more than 5 seconds would lead to a demerit (exceeding 50 for more than 5 seconds would lead to suspension).

The correct answer to "What's Elon like?" is "He's awesome [inspiring / motivating / etc.]", and drivers are to say that they are not sure how often he is around. There is a whole section about how to answer questions about Elon Musk, including how to respond to questions about his tweets. I enjoy that Elon Musk is a person such that every employee of one of his companies needs some basic press training on how to deal with his social media habits. Drivers are, if we take this script more literally than its authors probably thought out, to say that they do not have personal experience of Elon Musk smoking weed.

The operations manual spends more page length on answering questions about Elon Musk and The Boring Company than it does on fires.

Communications Technology

So, what communications technology does the Loop employ? Communications in tunnels is an interesting problem, especially in a life-safety critical environment. Unfortunately, The Boring Company has opted for a pretty boring approach that also seems... questionably safe?

From the manual and various public press, we can infer that the tunnel has some type of LTE. "Leaky" cables are an interesting RF technique often used for tunnels, but the Loop tunnels are short enough that directional antennas at the end might be sufficient. Still, it seems very plausible that there is leaky feeder embedded in the overhead light trough.

Loop operators are equipped with an iPad and a bluetooth headset. The iPad runs a very basic looking app with a "call" button the operator can use to reach the OCC. It's probably using a very straightforward third-party library to either make a VoIP (e.g. WebRTC) call or to use the iOS dialer to do the same (does iPad OS have a dialer? I don't know how these newfangled Apples work).

An interesting note: you might notice that Tesla vehicles characteristically include a big touchscreen in the middle. The Las Vegas Loop vehicles do run modified software, but based on the 2021 operations manual and the scant more recent information I can find, it has been modified only to allow the OCC to remotely access vehicle status information and cameras. There don't appear to be any extra driver-facing features, leaving a need for the iPad.

But what about a failure? Well, there is an auxiliary system, one that is very similar to that used in other types of underground transportation systems. At regular intervals in each tunnel, as well as at stations and other critical points, there is a "blue light station." The blue light station consists of a blue fire pull device that presumably reports an emergency to the OCC (but probably does not automatically trigger an evacuation, as that can be very bad), as well as a phone. The phone is configured as a "hotline" in the modern-traditional sense, meaning it automatically dials the OCC when taken off hook. This appears to be the only emergency communications system, although it seems unlikely to me that they don't have a public safety repeater system in the tunnels (e.g. for 900MHz P25), as fire authorities often require them.

Mass notification can serve as an important secondary communications system, either when there are problems with primary systems or in emergencies that require action as quickly as possible. Here, the Loop takes an interesting approach. The operations manual includes a screenshot of the OCC operator interface, and the amount of screen space devoted to controlling the tunnel's RGB gamer lights seems odd (given that they use white lighting except for during special events, it seems like this is more of a marketing concern than an operations task). That makes a lot more sense when you discover that the mass notification plan is all based on the lighting: in an evacuation scenario, the lights will flash red and white in the intended direction of travel, and remain solid red in the direction of danger. I honestly think this is a clever use of the lighting equipment and I like how it indicates the direction of evacuation, but I do worry a little about whether the color of the lighting is set over a life-safety-grade network or just via the LTE or something.
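As a toy illustration of how simple that signaling logic is (segment numbering, state names, and the function are all invented; only the two light behaviors come from the manual):

```python
# Toy model of the evacuation lighting scheme described above: segments on
# the safe side of the danger point flash red/white toward the exit, while
# the danger side (including the danger segment itself) goes solid red.

def light_states(n_segments, danger_segment, evac_direction):
    """evac_direction is +1 to evacuate toward higher-numbered segments,
    -1 to evacuate toward lower-numbered ones."""
    states = []
    for seg in range(n_segments):
        on_safe_side = (seg - danger_segment) * evac_direction > 0
        states.append("FLASH_RED_WHITE" if on_safe_side else "SOLID_RED")
    return states

# Fire in segment 1 of a 5-segment tunnel, evacuating "up" the numbering:
print(light_states(5, danger_segment=1, evac_direction=+1))
```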

Accessibility

A quick side-note about accessibility. Transit enthusiasts probably know that ADA requirements for public transportation are quite strict, and there's not much of a way around offering wheelchair service. The Boring Company seems to mostly address this question by saying that passengers are expected to transfer from wheelchair into sedan, which... sucks and isn't going to pass ADA review. So they have a secret: a GEM cart. GEM is a brand of Polaris, the parent company of RZR and Indian Motorcycle and assorted other small vehicle brands, aimed at institutional customers. GEM carts are golf-cart or neighborhood electric vehicle (NEV) class electric vehicles with 72-volt systems, and they make a wheelchair accessible version. Apparently the Las Vegas Loop now uses one to ferry around wheelchair customers.

The punch line here is... remember what my husband said about the golf cart we saw? GEM makes carts that seat five in addition to the driver, with a higher seating position and open sides or optionally large doors for faster board/deboard. Even with the 25mph stock speed limiter for NEV/LSV regulatory compliance (and believe me, with some adjustments to the motor controller they can go faster), I suspect that switching the Loop entirely to GEMs would increase its total capacity. And the GEMs honestly suck, in the world of light electric vehicles. They just kind of pulled off a regulatory capture move and got the NEV rules written to pretty much require something that sucks as much as they do for street legality.

Subjective Experience

So as I said, this is not a review, just trying to focus on some things of interest to transit, communications, and policy dweebs. Which I assume pretty much describes my core readers. But I do want to point out a couple of oddities that add to the "wow, this is cheap" sensation:

The ride is surprisingly rough, even in a Model Y with highway-grade suspension. I am concerned that they may not be able to do much better when paving in the confined tunnels, given that I don't think standard paving equipment would fit in the loading gauge. The ride experience was not "oooo electric car luxury," it was more on par with the Orlando Airport APM100s with sketchy steering gear.

For the segment that requires tickets (to Resorts World), the ticketing system is based on a QR code. The customer-side implementation is fine enough, but the ticket checking is laughable. It's an iPad where you have to show a QR code to the front-facing camera, meaning you have to present the QR code with your phone facing away from you, looking at the image on the iPad for alignment. It is very awkward and there is no reason for it besides cheapness. Plus there's not really any way for the attendant to see if the ticket is valid without standing awkwardly close to you to look at the same iPad screen you are, and indeed, I accidentally scored a free ride by virtue of the attendant's inability to see the actual result of the ticket check.

The stations are not especially well thought out. People walking in and out of the stations have to cross the path of the Loop vehicles in some places. The attendants are supposed to direct people and, for trips to Resorts World, collect fare, but the design of some stations lacks a chokepoint at which to do so. The attendants have to kind of chase people down after they've already walked straight to a vehicle.

The tunnel to Resorts World is one-way. Its portal is connected to the West LVCC station by a tunnel, but the station and the Resorts World portal are actually in the same parking lot. They seem to have adopted a practice of cars one way going through the tunnel, and cars the other way just driving... across the parking lot. This is very funny to experience and contributes a lot to the feeling that the Loop is only marginally an underground system. I doubt the original designers intended this outcome; it seems like the money spent on the connecting tunnel was completely wasted, but I'm assuming that eliminating one segment of single-track tunnel helped with throughput. Their approach to managing traffic at the Resorts World portal also involves a sort of approach-pattern-esque architecture where every car has to drive in a circle around the portal before entering, which is funny.

This stuff matters in my mind because it gets to the question of what the Loop... is for? The capacity of the Loop is very low. The expansion plan calls for a lot of tunnels, doubled up for capacity in places, but targets only 90k passengers per day. That would put it at around 8x the current daily ridership of the monorail, but with a vastly larger network of stations. Presumably they will expand fare collection, and I would have to think that tickets will actually become fairly expensive. So it's probably not intended to be a high-capacity, low-cost option.

So what else could it be? Well, some press and discussion around the Loop figures it as more of a luxury option: something that casinos can comp for high rollers, that will spare people dealing with the general disaster of getting around the strip. But it also doesn't feel like that. The outdoor stations, the need to quickly board and deboard a sedan, and the general chaos level of the stations (i.e. the attendant chasing you down for a ticket) make it feel more "courtesy car" than "black car."

I don't know, they could totally dress it up a bit and make it feel fancier. Some paint here and there, train the attendants better, do more to direct traffic. They could! But right now I think the best way to describe the Las Vegas Loop is... "cheap and amateurish." Surprisingly fitting with the Las Vegas vibe, in a way.

2024-07-31 just disconnect the internet

So, let's say that a security vendor, we'll call them ClownStrike, accidentally takes down most of their Windows install base with a poorly tested content update. Rough day at the office, huh? There are lots of things you could say about this, lots of reasons it happens this way, lots of people to blame and not to blame, etc., etc., but nearly every time a major security incident like this hits the news, you see a lot of people repeating an old refrain:

these systems shouldn't be connected to the internet.

Every time, I get a little twitch.

The idea that computer systems just "shouldn't be connected to the internet," for security or reliability purposes, is a really common one. It's got a lot of appeal to it! But there's not really that many environments where it's done. In this unusually applied and present-era article, I want to talk a little about the real considerations around "just not connecting it to the internet," and why I wish people wouldn't bring it up if they aren't ready for some serious considerations.

We Live in a Society

In the abstract, computers can perform valuable work by doing, well, computation. In practice, the computation is rarely that important. In industry, there is a lot more "information technology" than there is "computation." Information technology inherently needs to ingest and produce information, and while that was once facilitated by a department of Operators loading tapes, we have found the whole Operator thing to be costly and slow compared to real-time communications.

In other words, the modern business computer is, first and foremost, a communications device.

There are not that many practical line-of-business computer systems that produce value without interconnection with other line-of-business computer systems. These interconnections often cross organizational and geographical boundaries.

I am thinking, for example, of the case of airline reservation and scheduling systems disabled by the CrowdStrike, er, sorry, whatever I called them incident. These are fundamentally communications systems, and have their origins as replacements for the telephone and telegraph. It is not possible to simply not internetwork them, because networking is inherent to their function.

Networking is important to maintenance and operations

But let's consider systems that don't actually require real-time communications to perform their business purpose. Network connectivity still tends to be really valuable for these.

For one, consider maintenance: how does a system obtain software updates if you have no internet connection? How is that system monitored?

And even if you think you can avoid those requirements by declaring a system "complete" and without the need for any updates or real-time monitoring or intervention, business requirements have the frustrating habit of changing over time, and network connectivity reduces the cost of handling those changes tremendously.

What does it mean for a system to not be connected to the internet?

First, we need to consider the fact that there are as many forms of "not connected to the internet" as there are ways of being connected to the internet. For this reason alone, proposing that a system shouldn't be internet-connected is usually too nonspecific to really discuss. Let's consider a menu of possibilities:

List 1:

  1. A single device with no network connection at all.
  2. A system of devices that is "air-gapped" in the strictest sense, with no connection to any network other than its private local-area one, where data never crosses the security boundary.
  3. That same system, but someone carries DVD-Rs across the security boundary to introduce new data to the private network.
  4. That same system, but a cross-domain solution or "data diode" allows movement of data from a wider (or lower-security) network into the private (or higher-security) network.
  5. That same system, where the cross-domain solution does not have a costly and difficult to obtain NSA certification.

List 2:

  1. A system of devices which interconnect over a private wide-area network using fully independent physical infrastructure with physical precautions against tampering.
  2. That same system, but the independent physical infrastructure is run through commodity shared ducts.
  3. That same system, but the infrastructure is leased dark fiber.
  4. That same system, but the infrastructure is wavelengths on lit fiber.
  5. That same system, but the infrastructure is "virtual private ethernet" implemented by the provider using, let's say, MPLS.
  6. That same system, but the infrastructure is "virtual private ethernet" implemented by the provider using a tunneling solution with encryption and authentication.

List 3:

  1. A system of devices which interconnect over a common-carrier network (such as, we might even dare say, the internet), where private network traffic is tunneled through encryption and authentication performed by hardware devices.
  2. That same system, but the hardware devices do not have a costly and difficult to obtain NSA certification.
  3. That same system, but the tunneling is performed by a software solution that is well-designed such that it configures the operating system network stack, at a low level, to prevent any traffic bypassing the tunnel, and this has been validated by someone much smarter than me.
  4. That same system, but not so well designed and validated by someone like me.
  5. That same system, but the "software solution" is like Wireguard and an iptables script that has been "thoroughly tested" by someone on Reddit.

List 4:

  1. A system of devices which interconnect on a private network that has interconnection to the internet that is strictly limited by policy-based routing or other reliable methods, such that only very narrowly defined traffic flows are possible.
  2. That same system, but the permissible network flows are documented in some old Jira tickets and some of them were, you know, just thrown in to make it work.
  3. That same system, but it's basically protected by a firewall that's pretty liberal about outbound flows (maybe with IPS or something), and pretty restrictive about inbound flows.

List 5:

  1. An AWS private VPC without any routing elsewhere.
  2. An AWS private VPC with PrivateLink and other AWS networking baubles that allow it to communicate with other private VPCs.
  3. That same system, but some of the interconnected VPCs can route traffic to/from the internet.
  4. An AWS private VPC with NAT GW and IGW but the security groups are set up pretty tight in both directions.

These are all things that I have seen described as non-internet-connected. Take a moment to work through each list and mark the point at which you think that is no longer a reasonable claim. It's okay, I'll wait.

I'm not going to provide threat modeling for all of these scenarios because it would go on for pages, but you can probably see that pretty much every option is at least slightly different in terms of attack surface and risk.

This might seem like an annoying or pedantic argument, but this is actually the biggest reason I get irritated when people say that something should never be connected to the internet. What do they mean by that? When someone says that an airline reservation system shouldn't be internet-connected, they clearly don't actually mean the strictest form of that contention (no network connection at all) unless their name is Adama and they liked when airline reservation centers had big turntables of paper cards they spun around to check off your seat. They must mean one of the midpoints presented above, which are pretty much all coherent positions, but all positions with different practical considerations.

This ambiguity makes it hard to actually, seriously consider the merits of dropping internet connectivity.

Non-internet connected systems are so very, very annoying

In my day job, I work with a wide variety of clients with a wide variety of cultures, IT architectures, and so on. Some of them are in highly regulated industries or defense or whatever, and so they actually conduct software operations in networks with either no internet connectivity or tightly restricted internet connectivity.

When I discover this to be the case, I mentally multiply all of the schedule/cost estimates by a factor of, I would say, 3 to 10, depending on where they fall on the above lists (usually 3x to 5x for list 5 and 10x to a bajillion times forever for list 1, just rule of thumb).

Here's the thing: virtually the entire software landscape has been designed with the assumption of internet connectivity. Your operating system wants to obtain its updates from online servers. If you are paying for expensive licenses for your operating system, the vendor probably offers additional expensive licenses for infrastructure to perform updates within your private network. If you are getting your operating system for free-as-in-beer, there's a good bet you can figure it out yourself, but if you're using anything too new and cutting-edge it might be a massive hassle.

But that just, you know, scratches the surface. You probably develop and deploy software using a half dozen different package managers with varying degrees of accommodation for operating against private, internal repositories. Some of them make this easy, some of them don't, but the worst part is that you will have to figure it out about fifty times because of the combinatorial complexity of multiple package managers, multiple ways of invoking them, and multiple environments in which they are invoked.
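To pick on just one of those package managers: pip can be pointed wholesale at an internal mirror with a couple of config keys (`index-url` and `trusted-host` are real pip options; the hostname here is a placeholder for your mirror):

```ini
# pip.conf (per-user or system-wide): route all installs through an
# internal mirror instead of pypi.org.
[global]
index-url = https://pypi.internal.example/simple
trusted-host = pypi.internal.example
```

And that's one of the easy ones. Now repeat the exercise for npm, Maven, Go modules, container registries, and friends, each with its own configuration syntax, its own scoping rules, and its own surprises.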

If you are operating a private network, your internal services probably don't have TLS certificates signed by a popular CA that is in root programs. You will spend many valuable hours of your life trying to remember the default password for the JRE's special private trust store and discovering all of the other things that have special private trust stores, even though your operating system provides a perfectly reasonable trust store that is relatively easy to manage, because of Reasons. You will discover that in some tech stacks this is consistent but in others it depends on what libraries you use.
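(The JRE trust store password, for the record, is "changeit.") Python's standard library is an example of a stack that does defer to the operating system here: `ssl.create_default_context()` loads the system trust store, so an internal CA installed at the OS level just works, with verification left on. Libraries that ship their own CA bundle (requests' certifi, the JRE's cacerts) are the ones that bite you. A quick sketch:

```python
# The standard library's default TLS context defers to the OS trust store:
# an internal CA installed at the OS level is trusted without any special
# per-application configuration, and certificate verification stays on.
import ssl

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # verification enabled
print(ctx.check_hostname)                    # hostname checking enabled
```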

A bunch of the software you use will want to perform cloud licensing and get irritated when it cannot phone home for entitlements. You will have to go back and forth with your vendors to figure out a workaround somewhere between "add these ninety seven /16s to your firewall exceptions" and "wait six months while we figure out the internal process to issue you a bespoke licensing scheme."

All of your stuff that requires updates or content updates will have some different process you have to follow to obtain those updates and then provide them internally. Here's a not at all made up example, but a real one I have personally lived through: you will find that a particular (and particularly hated) enterprise software vendor provides content updates for offline use only through a customer support portal that is held over from three acquisitions ago, and that it is only possible to get an account in that customer support portal by getting an entitlement manually added in a different customer support portal held over from two acquisitions ago. It will take over three months of support tickets and escalations through your named account executive to get accounts opened in successively older customer support portals until you can finally get into the right one, which incidentally has an invalid TLS cert you are reassured is not something to worry about. Once you download your offline content update, you will find that the documented process to apply it no longer works, and it will take a long email chain with one of the engineers to get the right instructions. You paid a five-figure sum for a 1-year license to this software and it has now nearly elapsed while you figured out how to use it. You will of course get an extension on that license pro bono, because this is enterprise software sales and what is a quarter worth of my salary between friends, but they won't manage to issue the extension license until after your original one has already expired, causing a painful interruption in CI pipelines and a violent revolution by the developers.

I am sorry, you are not my therapist, I will try to stop remembering that dark time in my career. Don't worry, the software in question seems to have fallen out of favor and cannot hurt you.

So, like, that's an over-the-top example (but seriously, a real one!), but you get the point. It's not really that any individual part of operating in an offline environment is hard---I mean some of them are, but most of them aren't. It's a death by a thousand cuts. Every single thing you ever do is harder when you do not have internet connectivity, and you will pay for it in money and time.

The largest problem by far is that almost everyone who develops software assumes that their product will not need to operate in an offline environment, and if they find out that it does they will fix that with duct tape and shell scripts because it only matters for a small portion of their customers. You, the person with the offline environment, will become the proud owner of their technical debt.

None of this really needs to be that way, it's just how it is! There are not really that many offline environments, and they tend to be found in big institutions that have adapted to the fact that they make everything cost more and take longer, and are surprisingly tolerant of vendors who perform a three stooges routine every time you say "air-gap," because that's what pretty much every vendor does. Except for like Red Hat, I genuinely think Red Hat is pretty good about this, but you betcha what you save in time you are paying in cash.

Not many people do this

That's kind of the point, right? The problem with non-internet-connected environments is that they are rare. The stronger versions, things from List 1 and List 2, are mostly only seen in defense and intelligence, although I have also seen some banks with pretty impressive practices. You will note that defense and intelligence, and even banks, are also famously industries where everything costs way too much and takes way too long. These correlations are probably not coincidences.

Even the weaker forms tend to be limited to highly-regulated industries (finance and healthcare are the big ones), although you see the occasional random software company that just takes security really seriously and keeps things locked down. Occasionally.

Okay, let's stop just complaining

Here's the thing: I genuinely do not think that "fewer systems should be connected to the internet" is a bad idea. I really wish that things were different, and that every part of the software industry was more prepared and more comfortable operating in environments with no or limited internet connectivity. But that is not the world that we currently live in! So let's get optimistic, what should we be doing right now?

  1. Apply restrictive network policy on as much of your stuff as possible. Cloud providers generally make this easier than it has ever been before, it's not all that easy but it's also not all that hard to operate a practical non-internet-routed environment in AWS. If you stay within the lanes of all the AWS managed services, it's mostly pain-free. You will pay for this, but, you know, AWS always gets their check anyway.

  2. Build software with offline environments in mind. Any time that you need to phone home to get something, provide a way to disable it (if practical) or a way to override the endpoint that will be used. If the latter, keep in mind that you will also need to come up with a way for a customer to feasibly host their own endpoint. If you keep to simple static files, that's really easy, just nginx and a directory or whatever. If it's an API or something, well, you're probably going to have to ship your internal implementation. Brace yourself for the maintenance overhead.

  3. Try to think about the little assumptions that go into connecting to other services that become more complex in an offline environment. Please, for the love of God, do not assume you can reach Let's Encrypt. But that's not the only TLS problem: offline environments virtually always imply internal certificate authorities. Use the system trust store. Please. I am begging you.

  4. Avoid fetching any kind of requirements or dependencies at deploy time. One of the advantages Docker supposedly brought us was making all of the requirements of a given package self-contained, but then I still run into Docker containers that can't start if they can't reach the npm repos or something. And now there is yet another place to fix configuration and trust stores and so on: your stupid Docker container. It has made things more difficult instead of less.
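To make point 2 concrete, here's a minimal sketch of what an overridable, disableable phone-home might look like. Everything in it, the product name, the environment variables, the URL, is hypothetical; the point is just the shape of the escape hatches:

```python
import os

# Hypothetical default; a real product would likely read this from its
# config file rather than hardcode it.
DEFAULT_UPDATE_URL = "https://updates.example.com/v1/manifest"

def resolve_update_endpoint():
    """Return the update URL to use, or None if update checks are disabled.

    Customers in offline environments can either turn the phone-home off
    entirely or point it at a mirror they host themselves (which, if the
    content is static files, can be as simple as nginx and a directory).
    """
    if os.environ.get("MYAPP_DISABLE_UPDATE_CHECK") == "1":
        return None
    return os.environ.get("MYAPP_UPDATE_URL", DEFAULT_UPDATE_URL)
```

The design choice that matters is that both knobs exist and are documented, so the offline customer never has to reverse-engineer where the software is trying to go.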

Have I mentioned that Docker, paradoxically, actually makes offline environments more difficult to manage? Yeah, because virtually every third-party Docker container has at least a TLS trust store you'll have to modify. Docker is, itself, a profound example of how the modern software industry simply assumes that everything is running On The Internet.
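On the trust store point: in Python, at least, honoring the system store is the default behavior if you don't go out of your way to ship your own CA bundle. A small illustration:

```python
import ssl

# create_default_context() loads certificates from OpenSSL's default
# verify paths, which on most Linux distributions point at the system
# trust store. An internal CA installed through the OS mechanism
# (update-ca-certificates, update-ca-trust, etc.) is then honored with
# no per-application fiddling.
ctx = ssl.create_default_context()

# Sanity checks: hostname verification and certificate validation are
# both on by default.
assert ctx.check_hostname
assert ctx.verify_mode == ssl.CERT_REQUIRED

# Counts of certificates currently loaded into this context's store.
print(ctx.cert_store_stats())
```

Libraries that bundle their own CA list (requests with certifi is the classic case) bypass all of this, and each one grows its own override knob (REQUESTS_CA_BUNDLE, NODE_EXTRA_CA_CERTS, and so on), which is exactly the proliferation that makes offline environments painful.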

Anyway

I wrote this out in a bit of a huff because I have seen "why were they connected to the internet at all?" like four times in response to the CrowdStrike incident. I know, I am committing the cardinal sin of taking things that people on the internet say seriously, but I feel obligated to point out: internet connectivity is pretty much completely orthogonal to what happened. CrowdStrike content updates are the kind of thing that, in a perfect world, you would promptly make available in your offline environment. In practice, an internal CrowdStrike update mirror would probably lag days, weeks, months, or years behind, because that's what usually ends up happening in "hard" offline environments, but that's a case of two wrongs making a right.

Which they do, more often than you would think, in the world of information technology.

Don't worry, I'll be back next time with something more carefully written and less relevant to the world we live in. I just got in a mood, you know? I just spent like half the day copying Docker images into an offline environment and then fixing them all. I have to find something to occupy the time while a certain endpoint security agent pegs the CPU and makes every "docker save" take ten minutes.

2024-07-20 minuteman missile communications

A programming note: I am looking at making some changes to how I host things like Computers Are Bad and wikimap that are going to involve a lot more recurring expense. For that and other reasons, I want to see if y'all would be willing to throw some money my way. If everyone reading this gave $3 a month, we could probably buy Jimbo Wales a nice lunch or something.

I do not intend to paywall anything that I post here. Instead, I'm going to take some of the things that tend to be very long Mastodon threads (how this article originally started!) and send them out to supporters, probably about once a month. They'll be things that I wouldn't post here, usually because they're too short or don't quite have a hook to make a whole article interesting.

So consider supporting me on ko-fi---or don't, it's your decision, and I respect that. I'll probably work a plug into each article but I promise not to be annoying about it. I'm also going to be making some new YouTube videos and I'll probably make those available to supporters first, so that's something else to look forward to.

Speaking of annoying, this is kind of a long and dry one. I started looking into something called HICS after visiting a historic site, posted about it a bit on Mastodon, and realized that there just wasn't a lot of good historical information about it in general. And I felt like if I was going to talk about communications in Minuteman missile fields, I also had to cover how they would get their war orders.

Look on the bright side: it's got pictures!

Blast door of LCC

Minuteman Missiles

Since the early days of the Cold War, the United States has maintained a nuclear triad: independent capabilities to deliver nuclear weapons from land, sea, and air. The sea and air components are straightforward, consisting of submarines carrying submarine-launched ballistic missiles (SLBMs) and long-range strategic bombers. The land leg is more often forgotten: the intercontinental ballistic missiles, or ICBMs.

The US ICBM arsenal currently consists of 400 Minuteman III missiles emplaced throughout the midwest: the Air Force's 90th Missile Wing, 150 missiles, in Wyoming, Nebraska, and Colorado; the 91st Missile Wing, 150 missiles, in North Dakota; and the 341st Missile Wing, 100 missiles, in Montana. Historically, there were as many as 1,000 active Minuteman missiles, to say nothing of retired missile programs like Titan and Atlas. At least three Minuteman missile facilities are now historic sites open to the public, a somewhat incongruous experience considering their broad similarity to the facilities still in active use. Many others are abandoned, typically in various states of permanent destruction to satisfy treaty obligations. Fifty Minuteman IIIs of the 341st are currently held in an inactive "reserve" state, out of service but ready for future emplacement, a notable situation given the strict limits Cold War treaties place on stockpiled ICBMs.

Minuteman employed a significantly different launch configuration from earlier ICBM programs. Large facilities were difficult to protect from a first strike; distance was the only effective protection from increasingly accurate Soviet weapons. Scattered single facilities were difficult and costly to staff. Minuteman selected a compromise point: clusters of ten independent Launch Facilities (LFs), spaced miles apart and called a "flight," are remotely monitored and operated from a single Missile Alert Facility (MAF). Groups of four to five missile flights constitute a squadron, and about four squadrons compose a wing, which is supported by an Air Force Base.

In each Missile Alert Facility, a Missile Combat Crew Commander (MCCC) and Deputy Missile Combat Crew Commander (DMCCC) lock themselves into an underground capsule called a Launch Control Center (LCC) for each 24 hour watch. Originally designed by Boeing, the LCC resembles the interior of an aircraft more than a building, fitting its crew of Air Force officers. The LCC is an isolated, self-contained system with all of the equipment needed to monitor, configure, and launch the missiles. A surface building above contains a security control center and quarters for the security force, responsible not only for the MAF but also for the ten LFs under its supervision. Still, the surface building is both powerless over and largely unneeded by the LCC beneath it. In the event of nuclear war, it was assumed, surface structures in these sparsely populated but strategically critical parts of the country would be wiped cleanly away from the earth. Only the hardened infrastructure would remain: the LCCs, the LFs, and communications infrastructure.

Missile Alert Facility

Minuteman missile combat crews had more duties than just to wait. They performed remote tests on the missiles and launch control equipment; they monitored alarm systems that reported malfunctions and remotely supervised the work of maintenance crews; and they monitored the security systems that protected the unmanned LFs, authorizing access and dispatching security forces on the surface to any unknown intrusion. Still, their primary responsibility, the one for which we all know them, was the disposition of Emergency War Orders (EWO).

In fine Air Force tradition, actions that may very well presage the end of the world as we know it are presented in the form of a checklist. EWOs are authenticated against codes and secrets. Two keys, just like in the movies, are inserted. The missiles are enabled by remote command. Targeting information is transmitted to the missiles. The keys are turned, the launch code sent, and the rest is automated, under the control of the missiles themselves. The missile crew are miles away, so the Real Thing, the Shit Hitting the Fan, must feel pretty anticlimactic. The only apocalyptic horsemen they'll see are a series of indicator lights on the MCCC's console: LCH CMD. LCH IN PROC. MISSILE AWAY.

The realities of ICBM operation are fascinating and incline one towards drama. Somehow the work of submarine and bomber crews seems more ordinary; they are at least "out there," in or near enemy territory. Missile crews are sealed in a very small room buried below a small building in a corner cut out of a farm field near, but not too near, to a highway for logistical convenience. They are entirely dependent on electronic communications, not only to receive their orders, but even to use their weapons. They have the original email job: since 1962, they have served primarily to send and receive messages, mostly by text.

With the rather purple introduction complete, I am going to talk about this communications technology. But first, just a little more preface.

Most of the reduction of the Minuteman force has been a direct response to treaties, which imposed progressively lower caps on the nuclear stockpile. Some reductions were more of a historical accident. In the 1980s, political considerations led to a decision to "temporarily" deploy the Peacekeeper missile, with 10 MIRV (multiple independent reentry vehicle) warheads, to a set of fifty silos of the 90th Missile Wing's 400th Missile Squadron, in Wyoming. The Peacekeeper, fielded late in the Cold War, was a profoundly controversial program. The fifty Peacekeepers retrofitted into Minuteman silos would be the only ones ever installed, and their temporary homes became permanent. Most of the missile's warheads were removed for compliance with the START II treaty, which Russia never ratified and the United States withdrew from. Still, for cost-savings reasons, the odd-duck Peacekeeper program was terminated. The last Peacekeeper missiles were retired in 2005.

I got it into my head to write a detailed description of the missile field communications system because of my visit to QUEBEC-01, one of the five MAFs associated with these Peacekeeper missiles, and now a Wyoming State Historical Site. For that reason, I will most closely describe the communications system as installed in the 90th Missile Wing, the remainder of which is still active today. Minuteman missile fields were built over a period of years by different contractors and have since been through multiple modernization programs. Each change has introduced inconsistencies. While I will point out some of the more interesting variations between Minuteman installations, this is best taken as a description of the "average" Minuteman squadron, one that is typical of the others but does not exactly exist.

Although I am not exactly aiming for academic rigor, this information is based mostly on documents available through the Defense Technical Information Center, which include both original documentation from the Minuteman program and more recent documents related to modernization programs, proposed changes, and the retirement of many Minuteman facilities. I have supplemented those documents with recollections by former Air Force personnel when available, and as always, I welcome any corrections or additional information. One of the pleasures of writing about military history is the tendency of veterans to reach out to me with corrections and stories; I apologize that I am not always good about getting back to people, particularly phone calls.

The Minuteman III, a fairly direct evolution of the original Minuteman design, is still in active service. Many detailed materials about the Minuteman program are probably still classified; most of the others were classified until relatively recently and thus have not consistently made their way to archives. Certain basic questions remain frustratingly unanswered. That's just how it goes.

I'm also going to try really hard not to be too annoying with the acronyms, but it's not easy.

Personnel

While there were originally three-person crews, Minuteman LCCs have had two crew members for many decades, the MCCC and DMCCC. The MCCC is superior to the DMCCC, but the nature of missile operations and such a small crew mean that their roles are somewhat more complex than commander and deputy. Missile crews operate according to the "two-person concept," a general prohibition on any person working alone. This rule is intended to improve safety, reduce mistakes, and most importantly, mitigate the risk of an unauthorized launch. There are additional safeguards against unauthorized launch in the Minuteman system which will be discussed later. Both the MCCC and the DMCCC are required to initiate a launch.

The MCCC sits at a console that is focused around monitoring and control of the launch facilities. They have ready access to procedures and documentation. The DMCCC sits at a separate console that is focused on communications. Their chair slides on rails, allowing them to access the equipment racks and teleprinters to the sides of the DMCCC position. The DMCCC is primarily responsible for communications, so we are most interested in the equipment under their control.

Missile Alert Facilities were designed with a goal of self-containment for survivability. Most communications equipment is within the LCC itself, readily available to the DMCCC so that they can at least diagnose problems, if not make a repair. To this end, DMCCCs receive significant training on technical details of the communications and computer systems.

Should a problem occur outside of the LCC, the MCC would request assistance from the Air Force Base. Several Air Force ratings had expertise in communications equipment, ranging from communications technicians that would investigate problems within the LF to cable splicers that would repair damage in the outside plant.

The Minuteman program is somewhat unusual in the extensive construction of long-distance communications equipment by the Air Force. AT&T's role in the missile fields was surprisingly limited; most communications followed routes fully under the control of the Air Force.

External Communications

We can generally divide Minuteman communications systems into two categories: external and internal. External communications systems are primarily used by the Missile Combat Crew (MCC) to receive orders, including Emergency War Orders authorizing the use of nuclear weapons. Internal communications systems are used within the missile field, primarily to allow the MCC to communicate with the launch facilities under their control. Some details blur the lines: for example, there are communications systems which allow the MCC to contact their Air Force Base, where support facilities and maintenance crews are found. I will consider these internal systems, but you could argue for the opposite.

The external communications systems available to Minuteman crews have varied over time. Perhaps the most exotic was the Survivable Low Frequency Communications System (SLFCS), based on the low-frequency radio equipment used by the Navy for communications with submarines. Missile facilities are not underwater, but nuclear detonations cause significant disruption to the atmosphere that greatly interferes with radio propagation in the HF range. Low-frequency communications are expected to be less affected in a nuclear combat environment. SLFCS specifically operated between 14kHz and 60kHz. Some, but not all, Minuteman MAFs were equipped with a magnetic loop antenna, about 6' in diameter, buried shallowly underground.

All MAFs were equipped with HF antennas, although they were decommissioned in the 1980s. The HF antennas are described as hardened, but it is not feasible to truly harden an HF antenna. They must be fairly large, and HF does not penetrate the ground well, making it impractical to bury them. Instead, hardened HF antennas are perhaps better described as "hidden" HF antennas. The typical design is a monopole that stores in a long, narrow silo underground, awaiting post-attack deployment. MAFs had two separate hardened HF antennas, one for transmit, and one for receive.

The receive antenna was the most critical, as it would be needed to receive war orders over the High Frequency Global Communications System (HFGCS). HFGCS is one of the primary ways that an Emergency War Order would be distributed to Air Force units including both missiles and bombers. The hardened HF receive antenna assembly actually included six 160' monopoles: one was extended for normal use, while the other five were stored telescoped in a silo about 30' deep, ready to be deployed by a small explosive charge in the event of nuclear attack. There were, in the parlance apparently used by the Air Force, five "reloads."

The HF transmit antenna, being less important in an attack scenario, had only one replacement. A "soft" HF transmit antenna of conventional design was in normal use but backed up by a single 120' hardened antenna stored in a separate silo. A 50' radius buried ground plane surrounded the hardened antenna.

Near the MAF surface building, a small, white metal cone protrudes from the ground. The cone consists of a huge cast steel blast deflector with a depression in the center, which is covered by a fiberglass cone. The cone houses a compact UHF antenna. This antenna, a 1970s upgrade, can receive war orders via several satellite systems or directly from an aircraft such as the E-6. The E-6 airborne command post can serve in various roles, including as a "Looking Glass" airborne command post (taking control of nuclear forces in the event of a loss of ground-based command posts) or a "TACAMO" Take Charge and Move Out relay, transmitting a war order from elsewhere to forces in the field.

Hardened UHF antenna

UHF communications are essentially line of sight, especially with the use of a partially in-ground hardened antenna. In a scenario with extensive loss of communications infrastructure, particularly with ASAT warfare disabling military communications satellites, an E-6 or another similarly equipped aircraft would fly over Minuteman missile fields and deliver an emergency war order directly to each LCC.

Complementing this capability, in-service Minuteman launch facilities have themselves been equipped with a similar UHF antenna. Looking Glass and TACAMO aircraft actually have Air Force missileers on board who serve as an Airborne Launch Control Center (ALCC). In the event of a loss of most of the LCCs in a missile field, an ALCC can issue launch instructions directly to each LF without any need for the regular missile crews or internal communications infrastructure.

During the early '90s, a super high frequency (SHF) small satellite terminal was installed at each active MAF. It is housed in a small, white radome at the top of a pole near the surface building. SHF is widely used by modern military satellite systems, such as the Air Force's Wideband Global SATCOM.

Satellite terminal

Each of these antennas is connected by buried conduits to radio equipment in the LCC's radio racks. At the DMCCC's position, the telephone console allows the DMCCC to talk or listen on each radio by selecting it with a pushbutton. Over time, the radios were also attached to more modern digital systems. Depending on the year, teleprinters or computer displays would receive text messages via the UHF and SHF radio systems.

The external radio systems were actually all backup or secondary. The primary means of nuclear C2 within the Strategic Air Command and, later, Global Strike Command, has long been a digital computer network. While Minuteman installations began with just a teleprinter to receive orders via leased telephone line, the late '60s saw the introduction of the Strategic Automated Command and Control System (SACCS), which was itself replaced by the Strategic Air Command Digital Network (SACDIN). A small (for the era) computer in a rack to the right of the DMCCC's station allowed two-way messaging with SAC headquarters over leased telephone lines. Reportedly, these and other Bell System telephone lines to LCCs were carried by buried telephone cable to small hardened exchange buildings serving each missile field. I have not yet researched this topic closely.

In Peacekeeper LCCs as well as Minuteman LCCs from the '70s to the '90s, the specific computer used for this purpose was called the Command Data Buffer (CDB). The CDB was connected to both SACCS/SACDIN and the internal communications network, in order to accurately relay targeting information to the missiles. This will be discussed later in the context of rapid retargeting. In the 1990s, the REACT system was installed for a similar purpose.

Quebec-01 LCC was equipped with a teleprinter with a selector between SACDIN and AFSAT (UHF satellite) receivers. I'm not sure why the teleprinter was retained after the CDB upgrade; very possibly just for redundancy.

SACDIN Teleprinter

Somewhere between external and internal, each LCC had access to two dial telephone lines. These connected directly to the PSTN. Some circumstantial evidence leads me to think these dial lines were shared with the surface building, which probably explains the need for two. The dial lines were mostly used to contact support crews at the Air Force Base for routine maintenance issues.

HICS Digital Communications

HICS cable splice

I was most interested in the internal communications systems, which have garnered less historical documentation than the external links. In most cases, there is only really one: the Hardened Intersite Cable System, HICS.

HICS consists of multipair pressurized telephone cables trenched between Minuteman facilities. HICS carries digital traffic for C2, and many Air Force documents use the term "HICS" exclusively to mean the digital channel, but the same cables carried multiple voice pairs. Let's consider the digital capability first, though.

HICS, as originally installed, operated at 1.3Kbps. Details on the actual encoding are hard to come by, but given the time period I assume it was generally similar to the AFSK schemes used by other early telephone data links. The punch line, of course, is that the 1.3Kbps stuck---based on some Air Force journal articles on options for upgrades, it seems that contemporary Minuteman III fields still communicate over HICS at 1.3Kbps. Remember that when we get to retargeting.
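To put 1.3Kbps in perspective, a bit of back-of-the-envelope arithmetic. The message sizes and framing overhead below are invented for illustration; actual HICS message formats aren't publicly documented:

```python
def transmit_seconds(payload_bytes, rate_bps=1300, overhead=1.25):
    """Rough time to move a payload over the channel.

    overhead is a made-up allowance for framing and addressing bits;
    the real HICS message format is not public.
    """
    return payload_bytes * 8 * overhead / rate_bps

# A hypothetical 1 KB message takes on the order of eight seconds...
print(round(transmit_seconds(1024), 1))
# ...and a full megabyte would take over two hours. Hold that thought
# for the discussion of retargeting.
print(round(transmit_seconds(1024 * 1024) / 3600, 2))
```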

The topology of the digital HICS network is rather interesting. It was designed for redundancy and reliability, but prior to most of our modern understanding of computer networking. There's a mix of a few different ideas.

One of the things I'm not completely confident of is the size of the collision domain within HICS, or how much of the cable network was a common bus. From reading between the lines of some different reports and considering the overall design, I'm fairly confident that the entire digital HICS network was a single shared bus within each flight, and I think it is likely that it was a shared bus within each squadron. This bus would be tens of miles long with multiple branches, a challenging electrical situation that perhaps explains why the Air Force has repeatedly found it to be infeasible to make HICS faster.

HICS cable map

Thanks to minutemanmissile.com for this image of the Warren AFB/90th Missile Wing HICS map, which is far more legible than the photo I had taken.

Each of the LCCs, denoted in the map by open rectangles, is connected to four "loops" of HICS cable. The legs of adjacent loops to the LCC are shared, though, so it's perhaps easier to describe this way: each LCC is surrounded by a ring of HICS cable, to which it is connected by four legs spread roughly 90 degrees apart. This design gives a fair amount of redundancy: a break anywhere in the ring, or even breaks of more than one of the LCC's legs, would still leave a working path.
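A quick way to see why the four-legged ring layout holds up is to model it as a graph and check connectivity under cable cuts. This is a toy model, not the actual 90th's topology:

```python
from itertools import combinations

# Toy model: the LCC sits inside a four-node ring, attached to it by
# four legs spaced around the ring. Edges are (node, node) pairs.
ring = [("R1", "R2"), ("R2", "R3"), ("R3", "R4"), ("R4", "R1")]
legs = [("LCC", "R1"), ("LCC", "R2"), ("LCC", "R3"), ("LCC", "R4")]
edges = ring + legs

def connected(edges, a, b):
    """Is there any path from a to b over the surviving edges?"""
    frontier, seen = {a}, {a}
    while frontier:
        nxt = set()
        for u, v in edges:
            for x, y in ((u, v), (v, u)):
                if x in frontier and y not in seen:
                    seen.add(y)
                    nxt.add(y)
        frontier = nxt
    return b in seen

# A facility hanging off ring node R3 keeps a path to the LCC under
# every possible combination of two simultaneous cable breaks:
survives_all = all(
    connected([e for e in edges if e not in cut], "LCC", "R3")
    for cut in combinations(edges, 2)
)
print(survives_all)
```

In this toy version it takes at least three simultaneous breaks, all of them on the cables touching one facility, to cut it off, which matches the intuition that the ring mostly protects the LCC rather than any individual leg.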

This latter scenario was probably one of the designers' greatest concerns, as the LCCs would be obvious targets for inbound nuclear attacks. The cable layout provides four-times redundancy on the cables to the LCC, but no redundancy at all on the cables to individual LFs. That tells you a lot about their threat modeling. Facilities were spaced far enough apart that a precision strike on an LF would probably disable only that single LF; a precision strike on an LCC, though, could potentially disable ten LFs at once. As the accuracy and power of nuclear warheads improved, it seemed more likely that a first strike would succeed in disabling at least some LCCs. A lot of the complexity of HICS is intended to account for that possibility.

Each of the LFs is connected to the ring via a single leg. In some cases, multiple LFs are along the same leg. These are often the same long runs that connect the rings of two different LCCs together. In general, each LCC ring is connected to two of its neighbors, although sometimes it will instead have two redundant connections to a single neighbor. Based on the map and situation on the ground, these inter-flight connections don't seem to have required any active equipment, only a splice case. That supports the theory that an entire squadron was a shared bus, although it's possible that a separate pair to the "foreign" LCC ring would home-run to the LCC to allow separately sending messages to either. The network doesn't seem to have that kind of selective routing capability, though, so I find it unlikely.

Digital messages do seem to have been packetized, and were distributed through the network on a "flood fill" basis. That is, every active node on the digital network repeated every message it received. You might wonder about flow control and the avoidance of cycles; only a very primitive method was used. Each node, after transmitting a message, would "lock out" the cable it was transmitted on for a long enough period for the node on the other end to finish repeating the message.
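The flood-fill scheme described above can be sketched in a few lines. This is purely illustrative; the real protocol details aren't public, and for simplicity the sketch suppresses loops by remembering message IDs rather than by the timed cable lockout the actual system used:

```python
class Node:
    """A HICS-style node that repeats every new message on every cable
    except the one it arrived on (a simplified stand-in for the real
    system's timed lockout after transmitting)."""

    def __init__(self, name):
        self.name = name
        self.cables = {}     # cable id -> neighbor Node
        self.seen = set()    # message ids already repeated
        self.delivered = []

    def receive(self, msg_id, payload, via_cable):
        if msg_id in self.seen:
            return           # already repeated; don't flood again
        self.seen.add(msg_id)
        self.delivered.append(payload)
        for cable, neighbor in self.cables.items():
            if cable != via_cable:   # lock out the arrival cable
                neighbor.receive(msg_id, payload, cable)

def link(a, b, cable):
    a.cables[cable] = b
    b.cables[cable] = a

# A little four-node ring, like a stretch of HICS cable:
nodes = {n: Node(n) for n in "ABCD"}
link(nodes["A"], nodes["B"], "c1")
link(nodes["B"], nodes["C"], "c2")
link(nodes["C"], nodes["D"], "c3")
link(nodes["D"], nodes["A"], "c4")

nodes["A"].receive("msg-1", "ENABLE", via_cable=None)
print(sorted(n.name for n in nodes.values() if "ENABLE" in n.delivered))
```

The appeal of flooding is exactly what the Air Force needed: a message injected anywhere reaches every surviving node over whatever cables remain, with no routing tables to maintain or to lose in an attack.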

This explanation is a little more difficult to understand, though, when applied to the actual layout of the HICS system. What exactly constitutes a node? You will note that the map distinctly shows intersquadron connections, with both thicker lines and open circles where they connect to an LCC ring. My theory is that these intersquadron connections were the only places where active repeating of messages was required. Whether or not repeating messages between squadrons was selective is unclear. Did a message from an LCC to one of its nearby LFs get repeated across squadrons to the opposite end of the field? If repeating had originally been completely non-selective, I suspect that was changed as part of the work done to facilitate retargeting.

We can infer certain things about the HICS network from the equipment in Quebec-01. For example, HICS must have had a fair number of active repeaters along any given path.

HICS diagnostic panel

Inside of the LCC, we find something like a tiny long-distance telephone test desk. A rack includes pressurization alarms for five cables (we know of four legs to the LCC ring; is the fifth perhaps a cable to the local telephone exchange?), and a fault isolation panel. When a cable seemed to have been lost, the DMCCC could use this panel to locate the problem along the cable. This probably relied on a loopback test feature of the repeaters, but I'm not sure of the exact operating principle. Further down in the rack is what appears to be a cable power supply.

Repeaters were definitely installed inside the LCCs, but the number of selections on this test panel makes me suspect that there were also in-line repeaters on the cable, perhaps taking power from that power supply. This is entirely speculative, but "A" repeaters may have been located along the LCC ring and "B" repeaters on legs, making the two-knob selector arrangement useful to test a specific repeater on a specific leg.

HICS termination point

In the equipment side of the LCC, where the generator and chiller are located, we also find the terminations of the HICS cables. Note the mostly empty rack that would have held repeater equipment, and the air dryer and flow gauges for cable pressurization.

Finally, I should talk a bit about the exception to all of this: the 321st Missile Wing, in North Dakota, was built later than the 90th and 91st and by a different contractor. Sylvania, not Boeing, won the bid to build the LCCs and LFs. Much of the equipment is the same, but Sylvania did inject a few of their own ideas, and one of them was radio redundancy for HICS. The 321st apparently had a simplified HICS topology; I'm not sure of the details but I would guess that they may not have provided the four redundant cables to each LCC.

To make up for it, each LCC and LF in the 321st is equipped with a large, buried antenna, a grid-like arrangement of crossing dipoles that took up an area similar to the sewage lagoons outside of the fence. These antennas made up a medium-frequency, ground-wave communications network that could be used as an alternative to HICS. The 321st's redundant radio system, apparently called "Deuce" at the time, could be viewed as a precursor to the later nationwide GWEN radio C2 network. It seems to have carried the same digital messages as HICS, and the DMCCC had selectors to choose whether messages would be sent by cable or radio.

HICS Voice Communications

Now, let's take a close look at the DMCCC's communications console, which tells us a lot about the voice capabilities.

DMCCC communications console

They get a lot of buttons! There appear to be two separate busses; I'm not completely sure of the significance of that layout. I know from an airman's anecdote that the DMCCCs could conference together LF phones and the dial telephone lines, and sometimes did, so that maintenance crews stuck at an LF overnight could make apologies to their families. This makes me think that it is not a matter of "one selection per bus," but rather probably indicates that lines can only be joined within a bus. The logic behind that design is not clear to me.

Anyway, let's see if we know enough to explain all of these buttons. Some are easy: for the speaker and handset, there are selectors for each of the radios. The "LF Lines" correspond to each of the ten LFs, numbered 2-11 since the LCC is numbered as site 1. We see the two dial lines, regular telephone lines provided at each site, and they are even labeled with their phone numbers. The five-digit notation dates this hand-written addition to the 2L-5N era, which probably persisted unusually late in rural Wyoming.

The rest of the buttons correspond to specific pairs that would emerge in different places in the HICS network. The "SCC" button likely allows communications with the security command center in the surface building, just up the elevator from the LCC. The "LCC" button I am less sure of; perhaps it was a party line of other LCCs in the squadron? The "LCC Ring" selections must correspond to the four HICS cable rings extending from the LCC, but I'm not sure which devices would be found on those pairs. They may be "order wires," available in the splice boxes and as jacks at sites and normally used only by maintenance crews working on cables outside of the LFs.

The EWO buttons are interesting. EWO is, of course, Emergency War Order, but in the context of Minuteman was also the term used for party lines connecting the Air Force Base to the LCCs. These could be used, of course, as a redundant way to deliver EWOs, as well as for general communications across the missile field. There are two for redundancy: one was routed via AT&T infrastructure, following a cable from the LCC to a telephone exchange. The other was routed via HICS. I am not sure why only one merits a "RNG" button, which could apply ringing voltage to get the attention of other stations.

I am assuming, by the magic of speculation, that the "OPR" button on the left of each bus probably selected which bus the headset was connected to. There is also a dial, for use with the dial lines.

These voice connections within the field were of critical importance because of Minuteman's strict security posture. The unattended LFs were equipped with intrusion alarms for physical security, initially a bistatic radar system more similar to that used at Titan, and later a DSP-based monostatic radar system called the IMPSS. Any personnel or, reportedly, large rabbits approaching an LF would cause an alarm, and security forces were dispatched to investigate unless the intruder used a HICS voice circuit to authenticate themselves to the LCC. The process of "penetrating" a secure LF could take a missile crew thirty minutes or more, and involved multiple calls to the LCC as different alarms were triggered.

HICS Outside Plant

Old aerial images and historical documents from the Air Force give us some insight into the construction of HICS. HICS cables were installed in open trenches and then covered, rather than placed directly with a vibratory plow as would become common later. Splices were done in large holes with scotch-lok connectors and cast iron splice casings. Over time, many of the splice casings had to be replaced due to premature corrosion, and different materials were tried before settling on brass. A cathodic protection system was installed as a permanent solution to the problem.

I have done my best to trace some of the HICS cables along their routes. The holes used for splicing are sometimes visible as scars, but it does not appear that any manholes were installed; instead, splices were made near RoW markers and must be excavated for repairs. The lack of manholes suggests that there may not be active equipment along the cable routes, but I'm not really sure.

The RoW markers used by the Air Force are substantially similar to the style used by AT&T at the time. They are round wooden posts, about 6' tall, with metal bands around the top. Unlike AT&T, the Air Force used white bands, and it seems that there are always two. Shorter markers are used in some places, I suspect where there are splice cases not near road crossings. Where the cables cross roads, the Air Force usually installed gates with sturdy metal posts in the roadside fences. Sometimes these gates are the easiest evidence to find in aerial photos.

One of my biggest questions is about the inter-squadron relays. The map depicts them as nodes, but they aren't located at LFs or other facilities. I wondered if there might be active equipment, but I found one of the locations where an inter-squadron cable takes off from an LCC ring and there is no indication of even a manhole. In case you might be interested, here is a KML with the cable tracks I have worked out so far.

Cross-Flight Communications

HICS served primarily to allow an LCC to communicate with the ten missiles in its flight. However, the entire squadron and then the entire missile wing were interconnected, and Minuteman took advantage of this capability for several purposes.

First, a specific LCC in each squadron was designated as the Squadron Command Post (SCP). The SCP was capable of sending launch orders to any missile within the squadron, and of countermanding launch orders issued by any of the squadron's LCCs. This provided a measure of protection against a destroyed or compromised LCC.

Over time, the security of Minuteman missiles was further enhanced by the addition of a "vote to launch" system. Minuteman missiles can only be launched if launch orders are sent by at least two LCCs, requiring a total of four individuals.

In some Minuteman fields (and all current fields), a Wing command post serves in a role similar to the SCP but across the entire wing. It provides one central point where the entire wing's missile inventory can be monitored and, if necessary, controlled.

Alarms

Besides missile C2 and voice communications, one of the main functions of the HICS was the reporting of alarms within unattended LFs to the LCC. Some alarms, particularly related to the missile itself, would be sent by the missile guidance computer over the HICS digital network. These alarms would be printed by the teleprinter below the DMCCC's desk, along with confirmations of commands received and other routine traffic.

There was also a second, dedicated alarm system called the Voice Reporting Status Assembly or VRSA. VRSA seems to have relied on its own pairs in the HICS cables, and resembled the simple alarm reporting telephones coming into use by AT&T. The DMCCC could select an LF and press a button to send a tone, which triggered a device at the LF to "read back" any status alarms via voice recordings. At the time this almost certainly involved some interesting magnetic tape equipment, but I haven't found much information on the LF end of the VRSA. Toggle switches on the VRSA console allowed the DMCCC to reset the alarm device in the LF, clearing any recorded alarms that weren't active.
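The interrogate-and-reset behavior described above can be modeled as a simple latching alarm recorder. This toy sketch is entirely illustrative (the class and alarm names are invented, not from any Air Force documentation): an alarm is "recorded" when it occurs and stays recorded, even after the underlying condition clears, until the DMCCC's reset drops any alarms that are no longer active.

```python
class VrsaUnit:
    """Toy model of an LF-side VRSA recorder: latches alarms until reset."""

    def __init__(self):
        self.latched = []    # alarms recorded since the last reset
        self.active = set()  # conditions currently present at the site

    def raise_alarm(self, name):
        # A fault both becomes active and is latched for later readback
        self.active.add(name)
        if name not in self.latched:
            self.latched.append(name)

    def clear_condition(self, name):
        # The underlying fault goes away, but the recording remains
        self.active.discard(name)

    def interrogate(self):
        # Respond to the DMCCC's tone with recorded alarms
        # (historically, voice recordings played back over the pair)
        return list(self.latched)

    def reset(self):
        # Reset toggle: drop latched alarms that are no longer active
        self.latched = [a for a in self.latched if a in self.active]


lf = VrsaUnit()
lf.raise_alarm("OUTER ZONE SECURITY")
lf.raise_alarm("GENERATOR FAULT")
lf.clear_condition("OUTER ZONE SECURITY")
print(lf.interrogate())  # both alarms still latched
lf.reset()
print(lf.interrogate())  # only the still-active alarm remains
```

The latching behavior is the point: a transient fault at an unattended LF would still be reported the next time the DMCCC interrogated the site, which is exactly what you want from an alarm recorder with no one around to see the fault happen.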

The VRSA was an upgrade over the original Minuteman installations, which used a very similar panel to send a safe/arm tone to the LFs as part of the launch process. Since part of standard testing practice was for DMCCCs to flip the toggle switch for an LF to remove the "safe" tone and arm the site, it was a fairly obvious evolution to have the site report any faults in response to a tone. Reportedly, the VRSA panels were the original safe/arm panels with modifications.

Retargeting

When Minuteman was originally installed, each missile's targeting data was loaded from tape using equipment in the LF. To retarget a missile, a maintenance crew had to travel to the site, access it, run the new target tape through equipment in the LF that sent the data to the guidance computer, and complete a recalibration of the inertial reference platform in the missile. This was something like a 12-hour process overall, and retargeting a squadron would take weeks.

Fixed targets were practical when the "enemy" was self-evidently the Soviet Union and any attack would be all-out. Over the 70-year lifespan of the Minuteman program, though, the geopolitical and military environment has changed. There are now other nuclear adversaries, and their military assets are increasingly mobile. The biggest challenge to Minuteman's targeting was the Soviet Union's development of road-mobile ICBMs like the RT-21. To eliminate the USSR's nuclear capability, we would have to fire on these mobile systems wherever they were located. Aerial and satellite surveillance could be surprisingly effective in keeping track of these large, slow-moving TELs, but the Minuteman missiles could not be retargeted to keep up with that intelligence.

In response, a series of enhancements were made (often as part of the Minuteman II program) to introduce "rapid retargeting." Rapid retargeting allowed the missiles to be retargeted from within the LCC. During the 1970s, a computer system called the Command Data Buffer (CDB) was installed in each LCC. The CDB could receive targeting parameters from SAC and then transmit them to the LFs. It was theoretically possible to retarget missiles shortly before launch. In practice, the "shortly" wasn't very achievable.

DMCCC station with CDB

The HICS network was capable of 1.3Kbps, and because of the "flood" design of the network, that was essentially 1.3Kbps of total capacity across a single collision domain. In other words, only 1.3Kbps of total traffic could be handled, with far less available from point to point when the network was under heavy use. Further, enhancements to the Minuteman system added cryptographic authentication of messages over HICS and, later, encryption of the messages themselves. The added overhead of the cryptographic system further reduced network capacity.

Retargeting a squadron of Minuteman missiles via the CDB took over 20 hours. Retargeting a single missile could take 30 minutes.
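A quick back-of-envelope check shows how these numbers hang together. Assuming the figures above (1.3 kbps shared across the whole collision domain, 30 minutes per missile) and the standard squadron size of 50 missiles (five flights of ten), sessions on a shared bus are effectively serialized:

```python
# Back-of-envelope sketch using the figures above; the per-missile
# payload bound is an upper limit, not a documented message size.
BITS_PER_SECOND = 1300        # total shared HICS capacity
SINGLE_MISSILE_MINUTES = 30   # reported CDB retargeting time per missile
MISSILES_PER_SQUADRON = 50    # five flights of ten LFs each

# Upper bound on data moved per retargeting session at the full link rate
max_bytes = BITS_PER_SECOND * SINGLE_MISSILE_MINUTES * 60 // 8
print(f"<= {max_bytes / 1024:.0f} KiB per missile at full rate")

# If sessions run one at a time on the shared bus:
squadron_hours = MISSILES_PER_SQUADRON * SINGLE_MISSILE_MINUTES / 60
print(f"serialized squadron retarget: {squadron_hours:.0f} hours")
```

Thirty minutes per missile, serialized across fifty missiles, comes out to 25 hours, which lines up with the reported "over 20 hours" per squadron. Cryptographic overhead and routine C2 traffic sharing the same 1.3 kbps only make it worse.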

CDB represented a major step forward in Minuteman C2, particularly with its real-time messaging capability. Retargeting was still a severely limited capability, though.

During the 1990s, active Minuteman sites were upgraded to the Rapid Execution and Combat Targeting System, or REACT. More than just an upgrade for retargeting, REACT brought a completely new control system that significantly changed the layout of LCCs. Instead of sitting at opposite ends of the tube, REACT put the MCCC and DMCCC directly alongside each other and centralized almost all control functionality onto computer displays.

It also further refined retargeting: retargeting an entire squadron now takes only ten hours. More radically, though, a single missile can be retargeted in only a couple of minutes, making it feasible to retarget a missile just before firing in a limited attack scenario.

The future of HICS

While both over budget and behind schedule, the Sentinel program is expected to replace the Minuteman missiles. Sentinel will likely be an in-place upgrade, installing new missiles and control systems in the existing Minuteman silos. It has been clear for decades now that HICS isn't capable of meeting modern expectations, so Sentinel will include a complete replacement.

Various options including DSL over HICS cables and radio were considered, but the current plan is to trench new fiber-optic cables across the launch fields. They're less interesting, but fiber optic cables have both capacity and reliability advantages over telephone cables, and could easily remain in service for the life of the Sentinel program.

2024-07-13 the contemporary carphone

Cathode Ray Dude, in one of his many excellent "Quick Start" videos, made an interesting observation that has stuck in my brain: sophisticated computer users, nerds if you will, have a tendency to completely ignore things that they know are worthless. He was referring to the second power button present on a remarkably large portion of laptops sold in the Windows Vista era. Indeed, I owned at least two of these laptops and never gave it any real consideration.

I think the phenomenon is broader. As consumers in general, we've gotten very good at completely disregarding things that don't offer us anything worthwhile, even when they want to be noticed. "Banner blindness" is a particularly acute form of this adaptation to capitalism. Our almost subconscious filtering of our perception to things that seem worth the intellectual effort allows a lot of ubiquitous features of products to fly under the radar. Buttons that we just never press, because sometime a decade ago we got the impression they were useless.

I haven't written for a bit because I've been doing a lot of traveling. Somewhere in the two thousand miles or so we covered, my husband gestured vaguely at the headliner of our car. "What is that button for?" He was referring to a button that I have a learned inability to perceive: the friendly blue "information" button, right next to the less friendly red "SOS" button. Most cars on the US market today have these buttons, and in Europe they're mandatory (well, at least the red one, but I suspect the value-add potential of the blue one is not one that most automakers would turn down). And there's a whole story behind them.

It all started in 1996 at General Motors. Wikipedia tells us that it actually started with a collaboration of General Motors (GM), Electronic Data Systems (EDS), and Hughes Electronics. That isn't incorrect, but misses the interesting point that EDS and Hughes were both subsidiaries of GM at the time. GM was a massive company, full of what you might call vim and vigor, and it happened to own both a major IT services firm (EDS) and a major communications technology company (Hughes). It was sort of inevitable that GM would try integrating these into some sort of sophisticated car-technology-communications platform. They went full-steam ahead on this ambitious project, and what they delivered is OnStar.

But first, a brief tangent into corporate history. I won't say much about GM because I am not an automotive history person at all, but I will say a bit about EDS and Hughes. Hughes was obviously the product of a notable and often enigmatic figure of history, Howard Hughes. GM owned Hughes Electronics because Howard Hughes had cleverly placed his business ventures under the ownership of a massive and brazen tax shelter called the Howard Hughes Medical Institute. Howard Hughes died without leaving a will or any successor as trustee of HHMI, putting HHMI into an awkward legal and organizational struggle as it abruptly pivoted from "Howard Hughes' personal tax scheme" to "independent foundation that incidentally owned a major defense contractor." HHMI ultimately made the decision to turn Hughes' business empire into an endowment, and sold Hughes Aircraft. General Motors was the high bidder. A set of confusing details of the Hughes amalgamation, like the fact that Hughes Aircraft didn't own all of the Hughes aircraft, led to the whole thing becoming the Hughes Electronics subsidiary of GM.

After GM essentially stripped it for parts, Hughes Electronics lives on today under the name DirecTV. The satellite internet company that actually markets under the name Hughes is, oddly enough, one of the parts that GM stripped off. Hughes Communications became part of EchoStar, operator of Hughes Electronics competitor DISH Network, which then spun Hughes Communications out, and then bought it back again. You can't make this stuff up. The point is that the strange legacy of Howard Hughes, the HHMI, and GM's ownership of Hughes Aircraft mean that the name "Hughes" is now sort of randomly splashed across the satellite communications industry. It's sort of like how Martin Marietta still paves freeways in Colorado.

Electronic Data Systems is not quite as interesting, but it was run by two-time minor presidential candidate Ross Perot, so that's something. GM dumped EDS almost immediately after launching OnStar. EDS eventually became part of Hewlett Packard, which, by that time, had become a sort of retirement home for enterprise technology companies. It more or less survives today as part of various large companies that you've never heard of but have nonetheless secured 9-digit contracts to do ominous things for the Department of Defense.

What a crowd, huh? It's a good thing that nothing strange and terrible happened to General Motors in approximately 2008.

So anyway, OnStar. OnStar was, basically, a straightforward evolution of the carphone backed by a concierge-like telephone service center. In that light, it's an unsurprising development: the carphone was just on its way out in the mid-'90s, falling victim to increasingly portable handheld phones. Hughes, by its division Hughes Network Systems, was an established carphone manufacturer but seems to have had few or no offerings in the mobile phone space [1]. To Hughes long-timers, OnStar was probably an obvious way to preserve the popularity of carphones: build them into the car at the factory, with factory-quality finish.

GM had their own goals. Ironically, it is in large part due to GM's efforts that built-in telephony is so common (and yet so ignored) in cars today. The situation was much different in 1996: OnStar was a new offering, only available from GM. It had the promise of competitive differentiation from other automakers, but for that to work, GM would have to differentiate it from the carphones widely available on the aftermarket. This tension, the conflict between "we built a carphone in at the factory" and "carphones are going out of style," probably explains why OnStar marketing focused on safety and security.

"General Motors has come up with the ultimate safety system" led a '96 newspaper article. Marketing materials prominently positioned roadside assistance, automatic emergency calls on airbag deployment, remote door unlock, and locating stolen cars. These were features that your average carphone couldn't offer, because they required closer integration with the vehicle itself. OnStar was more than a carphone, it was a telematics system.

"Telematics" is one of those broad, cross-discipline concepts that we don't really talk about any more because it's become so ubiquitous as to be uninteresting. Like Cybernetics, but without a tantalizing but lost historical promise in Chile. Telematics has often been more or less synonymous with "putting phones into cars," but is more broadly concerned with communications technology as it applies to moving vehicles. There is a particular emphasis on the vehicle part, and telematics has always been interested in vehicle-specific concerns like positioning, navigation, and the collection of real-time data.

Telematics was already a developed field by the '90s, although the high cost and large size of communications equipment made it less universal than it is today. OnStar would lead one of the biggest changes in the modern automotive industry: the extension of telematics from commercial and industrial equipment to consumer automobiles. In doing so, it would introduce select GM drivers to an impressive set of benefits, almost a form of ambient computing. It would also start a cascade of falling dominoes that led, rather directly, to a remarkable lack of privacy in modern vehicles and getting an email that something mysterious is wrong with your car two to four hours after the tire pressure light comes on. The computer gives and takes.

And what of EDS? EDS provided the other half of OnStar's differentiation from a mere carphone. OnStar was not only integrated into your vehicle, it was backed by a team of Service Advisors with training and tools to use that integration. The OnStar equipment included a GPS receiver, still a fairly cutting-edge technology at the time, and continuously provided your location to the OnStar service center in Michigan. Advisors had access to maps and travel directories and the ability to dispatch tow trucks and emergency responders. They could even send a limited set of remote commands to OnStar vehicles. The infrastructure to support this modern telematic call center was built by EDS, and the staff of human advisors provided a friendly face and a level of flexibility that was difficult to achieve by automation alone.

Besides emergencies and roadside assistance, the advisors could solve one of the most formidable problems in automotive technology: navigation. When GM's advertising and press coverage strayed from emergency assistance, they focused on concierge-like services built around navigation. OnStar could direct you to gas or food. They could not only reserve a hotel room, but get you to the hotel. If you have seen the wacky turn-by-turn navigation technology that proliferated in the late 20th century, you might wonder how exactly that worked. Did an advisor stay awkwardly on the line? No, of course not, that would be both awkward and costly. They read out driving directions, which the OnStar equipment recorded for playback.

I really wish I could find a complete description of the user experience, because I suspect it was bad. The basic idea of recording spoken guidance and playing it back for reference is a common feature in aviation radios, but that's mostly for dealing with characteristically terse and fast-talking ground controllers, and usually consists of a short playback buffer that always starts from the beginning. Given the technology available, I suspect the OnStar approach was similar, but just with a... longer playback buffer. Thinking about listening through the directions over and over again to find one turn gives me anxiety, but it was 1996.

Technology advanced like it always does, and by the mid '00s at least some GM vehicles had the ability to display turn-by-turn instructions, provided by OnStar, as the driver needed them. Fortunately there are videos from this era, so I know that the UX was... better than expected, but strange. It's odd to see an LCD-matrix radio display, with no promise of navigation features, start displaying large turn arrows and distances after an OnStar call. One of the interesting things about OnStar is that the "human in the loop" nature of OnStar features makes it sort of a transitional stage between cassette tapes and Apple CarPlay. OnStar allowed human operators and remote computer systems to do the hard parts, allowing cars to behave in a way that seemed very ahead of their time.

One of the interesting things about OnStar, given the constant mention of satellites in its marketing, was the lack of actual satellite communications. Hughes, a satellite technology company, was involved. Articles about OnStar coyly refer to satellite technology, or say it's "powered by satellites." Of course, OnStar cost $22.50 a month in 1996, and $22.50 a month didn't entitle you to so much as look at a satellite phone in 1996. The satellite technology was limited to the GPS receiver; all voice communications were cellular. AMPS, specifically. The first several generations of OnStar, into the early '00s, relied on AMPS.

Telematics, telemetry, and the applications we now call "IoT" often struggle with the realities of communications networks. AMPS, often just referred to as "analog," was the first cellular communications standard to reach widespread popularity. For over a decade, everything cellular used AMPS. Then CDMA and GSM and even, may we all shed a tear, iDEN took over. These were digital standards with improved capacity and capabilities. It was inevitable that they would replace AMPS, and with the short lifespan of a consumer cellular phone, devices without support for digital networks naturally faded away... except for a bunch of them. OnStar and burglar alarms are two famous AMPS-retirement scandals. The deactivation of AMPS networks in 2008 left cars and alarm communicators across the country unable to communicate, and prompted a series of replacement programs, lawsuits, trade-in deals, lawsuits, and more lawsuits that shaped how cellular networks are retired today (meaning: as rarely as possible).

The obsolescence of OnStar equipment in older vehicles by AMPS retirement left a black mark on OnStar's history that still hangs over it today. It was, I think, a vanguard of the larger impacts of fast-changing technology being integrated into cars. While vehicles have indeed become more reliable over time, there is an ever-present anxiety that new cars are more like consumer electronics, built for a three-year replacement cycle. The forced retirement of half a million OnStar buttons is probably one of the most visible examples of automotive equipment failing due to industry change rather than age.

In 2022, 2G cellular service was retired in the United States. With it went another generation of OnStar-equipped vehicles. For a combination of reasons, though, both a more conservative approach to 2G retirement in the cellular industry and likely GM's planning further ahead, only two model years were impacted.

Incidentally, Ford also had an offering very much like OnStar, called RESCU and introduced in 1996 as well. It was pretty universally agreed by automotive journalists at the time that RESCU was more primitive than OnStar and amounted mostly to a knee-jerk "we also have one of those" response to GM's launch. RESCU is perhaps worth mentioning, though, for its contribution to the lineage of Ford's SYNC platform, at least in the form of gratuitous all-caps.

In 2002, GM offered OnStar for licensing to other automotive manufacturers. Subarus, among others, began to sprout blue buttons in the overhead. But what had happened to competitive differentiation? Well, automotive technology tends to go through two phases: First, it differentiates. Second, it's mandated. The originating manufacturer can make quite a bit of money off of both.

In 1995, a year prior to the launch of OnStar, the National Highway Traffic Safety Administration (NHTSA) was already investigating the possibility of an Automated Collision Notification (ACN) system. ACN would automatically call 911 in the event of a dangerous crash, improving driver safety. As far as I can tell, GM is not the origin of the ACN concept. NHTSA's work on ACN started with the National Automated Highway System (NAHS), an ambitious technology development program launched in 1991 that imagined a very different self-driving car from the ones that we see today. The NAHS involved mesh networking between automated vehicles to form "platoons," close-following cars (for fuel efficiency) that synchronized their control actions. The mesh network would extend to road-side signaling systems, and would lead eventually to the end of traffic signals as cars automatically negotiated intersection time slots.

The NAHS never came to be and probably never will, but the NHTSA's retro-futuristic graphics of '90s sedans linked by blue waves echo through my childhood like they do through the pages of Popular Mechanics and the academic literature on self-driving. Or, they did, until a new generation of Silicon Valley companies coopted self-driving for their own purposes. This is not an entirely fair take on the history, I am certainly applying rose-colored (or is it cerulean blue?) glasses to the NAHS, but I think it is hard to argue that there has not been a loss of ambition in our vision of the self-driving future. For one, we stopped drawing blue waves on everything.

Anyway, GM may not have created ACN. If anyone, I think that honor might fall on Johns Hopkins University. But they sure did get involved: by 1996, the year OnStar launched, Delco Electronics was building ACN prototypes for NHTSA. Delco Electronics was a division of GM (its history is closely intertwined with that of Hughes in this period; parts of Delco were and would be parts of Hughes and vice-versa). Over the following years, GM really jumped in: OnStar was ACN, and ACN should be mandatory.

Here's the thing: it's never really worked. The move of introducing a technology and then pushing for it to become mandatory is a fairly well-known one in the automotive industry, and to its credit, has led to numerous safety advancements in consumer cars and no doubt a meaningful reduction in fatalities (to its discredit, it is often cited as one of the reasons for steeply rising prices on new cars).

Universal OnStar has come tantalizingly close. Europe mandated "eCall," functionally identical to ACN, in 2018. I'm not sure how directly GM was involved, but there are GM patents in the licensing pool required to implement eCall, so it's at least more than "not at all." But despite its increasing presence, ACN isn't required in the US. Automakers aren't even consistent on whether it's standard or a paid add-on.

GM is still hacking away at this one... as recently as last year, GM was taking federal grants to study ACN and propose standards. In collaboration with CDC, GM developed a system called AACN that uses accelerometer data to predict the severity of injury to occupants and difficulty of rescue. It's installed in newer OnStar vehicles, and Ford has even licensed it for Ford SYNC, but the data rarely goes anywhere at all... 911 PSAPs that receive the calls from ACN systems aren't equipped to receive the extra metadata; extensions to E911 to facilitate AACN data exchange are another thing that GM is actively involved in.

GM really seems to have put a 30-year effort into mandating OnStar in the US, but they just can't get it over the finish line. In the meantime, OnStar has stopped mattering.

GM's program to license OnStar to other automakers was short-lived. I'm not sure exactly why, but GM also gave up on their "OnStar For My Vehicle" aftermarket product. Even as OnStar continued to gain features, its ambition waned. I think that the problem was simple: by the mid-2000s, putting a phone into a car was becoming pretty easy. Besides, the "connected car" offered too many advantages for any automaker to turn down. Can you imagine the benefits of storing location history for the entire fleet of vehicles you've sold? You can sell that to the insurance industry! You know GM did, of course they did, and of course it's the subject of an ongoing class-action lawsuit.

OnStar just stopped being special. I was actually a little surprised to notice that the blue button in the overhead of my modern Subaru isn't an OnStar button; Subaru stopped licensing OnStar in '06. It's just another manifestation of Subaru StarLink, a confusing menagerie of vaguely-telematic features that are mostly built on contract by Samsung. Once the car has an LTE modem for remote start and maintenance telemetry and selling your driving habits to LexisNexis, throwing in a button that makes a phone call is hardly an engineering achievement.

You know, sometimes it feels like smartphones can only incidentally make phone calls. With the move to VoLTE, it's not even really a deeply-embedded functionality any more. "Phone" is just an application on the thing that, for reasons of habit, we call a "phone."

The legacy of OnStar is much the same: of course your car can make phone calls, GM shoved a carphone in the trunk in 1996 and it's still in there somewhere. It's just one of a million things modern vehicle telematics do, and frankly, it's one of the least interesting ones. Ironically, GM is taking the carphone back out: in 2022, GM discontinued the OnStar telephone service. It's no longer possible to have a phone number assigned to your car and use OnStar for routine calls. Everyone uses an app on their phone for that.

[1] I am excluding here their satellite phones, although they were surprisingly advanced for the mid-'90s and probably would have competed well with cellular phones if the service wasn't so costly.

2024-06-08 dmv.org

The majority of US states have something called a "Department of Motor Vehicles," or DMV. Actually, the universality of the term "DMV" seems to be overstated. A more general term is "motor vehicle administrator," used for example by the American Association of Motor Vehicle Administrators to address the inconsistent terminology.

Not happy with merely noting that I live in a state with an "MVD" rather than a "DMV," I did the kind of serious investigative journalism that you have come to expect from me. Of These Fifty United States plus six territories, I count 28 DMVs, 5 MVDs, 5 BMVs, 2 OMVs, 2 "Driver Services," and the remainder are hard to describe succinctly. In fact, there's a surprising amount of ambiguity across the board. A number of states don't seem to formally have an agency or division called the DMV, but nonetheless use the term "DMV" to describe something like the Office of Driver Licensing of the Department of Transportation.
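The tally above can be checked with a few lines of Python, using the counts as stated (and 50 states plus 6 territories as the total):

```python
# Counts taken straight from the survey above; "remainder" is the set of
# jurisdictions whose agency name defies a tidy acronym.
from collections import Counter

tally = Counter({"DMV": 28, "MVD": 5, "BMV": 5, "OMV": 2,
                 "Driver Services": 2})
jurisdictions = 50 + 6  # fifty states plus six territories

remainder = jurisdictions - sum(tally.values())
print(f"{remainder} jurisdictions are hard to describe succinctly")
```

That leaves 14 jurisdictions in the "hard to describe succinctly" bucket, which is, frankly, a lot of institutional creativity for one country.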

Indeed, the very topic of where the motor vehicle administrator is found is interesting. Many exist within the Department of Transportation or Department of Revenue (which goes by different names depending on the state, such as DTR or DFA). Some states place driver's licensing within the Department of State. One of the more unusual cases is Oklahoma, which recently formed a new state agency for motor vehicle administration but with the goal of expanding to other state customer service functions... leaving it with the generic name of Service Oklahoma.

The most exceptional case, as you'll find with other state government functions as well, is Hawaii. Hawaii has deferred motor vehicle administration to counties, with the Honolulu CSD or DCS (they are inconsistent!) the largest, alongside others like the Hawaii County VRL.

So, the point is that DMV is sort of a colloquialism, one that is widely understood since the most populous states (CA and TX for example) have proper DMVs. Florida, third most populous state, actually has a DHSMV or FLHSMV depending on where you look... but their online services portal is branded MyDMV, even though there is no state agency or division called the DMV. See how this can be confusing?

Anyway, if you are sitting around on a Saturday morning searching for the name of every state plus "DMV" like I am, you will notice something else: a lot of... suspicious results. guamtax.com is, it turns out, actually the website of the Guam Department of Revenue and Taxation. dmvflorida.org is not to be confused with the memorable flhsmv.gov, and especially not with mydmvportal.flhsmv.gov. You have to put "portal" in the domain name so people know it's a portal, it's like how "apdonline.com" has "online" in it so you know that it's a website on the internet.

dmvflorida.org calls itself the "American Safety Council's Guide to the Florida Department of Motor Vehicles." Now, we have established that the "Florida Department of Motor Vehicles" does not exist, but the State of Florida itself seems a little confused on that point, so I'll let it slide. But that brings us to the American Safety Council, or ASC.

ASC is... It's sort of styled to sound like the National Safety Council (NSC) or National Sanitation Foundation (NSF), independent nonprofits that publish standards and guidance. ASC is a different deal. ASC is a for-profit vendor of training courses. Based on the row of badges on their homepage, ASC wants you to know not only that they are "Shopper Approved" and "Certifiably Excellent (The Stats To Prove It)," but also that they have won a "5-Star Excellence Award" (from whom, not specified), and that the Orlando Business Journal included their own John Comly on its 2019 list of "CEOs of the Year."

This is the most impressive credential they have on offer besides accreditation by IACET, an industry association behind the "continuing education units" used by many certifications, and which is currently hosting a webinar series on "how AI is reshaping learning landscapes from curriculum design to compliance." This does indeed mean that, in the future, your corporate sexual harassment training will be generated by Vyond (formerly GoAnimate) based on a script right out of ChatGPT. The quality of the content will, surprisingly, not be adversely affected. "As you can see, this is very important to Your Company. Click Here [broken link] to read your organization's policy."

In reality, ASC is a popular vendor of driver safety courses that businesses need their employees to take in order to get an insurance discount. Somewhere in a drawer I have a "New Mexico Vehicle Operator's Permit," a flimsy paper credential issued to state employees in recognition of their having completed an ASC course that consisted mostly of memorizing that "LOS POT" stands for "line of sight, path of travel." Years later, I am fuzzy on what that phrase actually means, but expanding the acronym was on the test.

We can all reflect on the fact that the state's vehicle insurance program is not satisfied with merely possessing the driver's license that the state itself issues, but instead requires you to pass a shorter and easier exam on actually driving safely. Or knowing about the line of sight and the path of travel, or something. I once took a Motorcycle Safety Foundation course that included a truly incomprehensible diagram of the priorities for scanning points of conflict at an intersection, a work of such information density that any motorcyclist attempting to apply it by rote would be entirely through the intersection and to the next one before completing the checklist. We were, nonetheless, taught as if we were expected to learn it that way. Driver's education is the ultimate test of "Adult Learning Theory," a loose set of principles influential on the design of Adobe Captivate compliance courses, and the limitations of its ability to actually teach anyone anything.

This is all a tangent, so let's get back to the core. ASC sells safety courses and... operates dmvflorida.org?

Here's the thing: running DMV websites is a profitable business. Very few people look for the DMV website because they just wanted to read up on driver's license endorsements. Almost everyone who searches for "<state name> DMV" is on the way to spending money: they need to renew their license, or their registration, or get a driving test, or ideally, a driver's ed course or traffic school.

The latter are ideal because a huge number of states have privatized them, at least to some degree. Driver's ed and traffic school are both commonly offered by competitive for-profit ventures that will split revenue in exchange for acquiring a customer. I would say that dmvflorida.org is a referral scam, but it's actually not! It's even better: it's owned by ASC, one of the companies that competes to offer traffic school courses! It's just a big, vaguely government-looking funnel into ASC's core consumer product.

In some states, the situation is even better. DMV services are partially privatized or "agents" can submit paperwork on behalf of the consumer. Either of these models allows a website that tops Google results to submit your driver's license renewal on your behalf... and tack on a "convenience fee" for doing so. Indeed, Florida allows private third parties to administer the written exam for a driver's license, and you know dmvflorida.org offers such an online exam, for just $24.95.

You can, of course, renew your driver's license online directly with the state, at least in the vast majority of cases. So how does a website that does the same thing, at the same rates, plus their own fee, compete? SEO. Their best bet is to outrank the actual state website, grabbing consumers and funneling them towards profitable offerings before they find the actual DMV website.

There's a whole world of DMV websites that operate in a fascinating nexus of SEO spam, referral farm, and nearly-fraudulent imitation of official state websites. This has been going on since, well, I have a reliable source that claims since 1999: dmv.org.

dmv.org is an incredible artifact of the internet. It contains an enormous amount of written content, much of it of surprisingly high quality, in an effort to maintain strong search engine rankings. It used to work: for many years, dmv.org routinely outranked state agency websites for queries that were anywhere close to "dmv" or "renew driver's license" or "traffic school." And it was all in the pursuit of referral and advertising revenue. Take it from them:

Advertise with DMV.ORG

Partner with one of the most valuable resource for DMV & driving - driven by 85% organic reach that captures 80% of U.S drivers, DMV.ORG helps organize the driver experience across the spectrum of DMV and automotive- related information. Want to reach this highly valued audience?

dmv.org claims to date back to 1999, and I have no reason to doubt them, but the earliest archived copies I can find are from 2000 and badly broken. By late 2001 the website has been redesigned, and reads "Welcome to the Department of Motor Vehicles Website Listings." If you follow the call to action and look up your state, it changes to "The Department of Motor Vehicles Portal on the Web!"

They should have gone for dmvportal.org for added credibility.

In 2002, dmv.org takes a new form: before doing pretty much anything, it asks you for your contact information, including an AOL, MSN, or Yahoo screen name. They promise not to sell your address to third parties but this appears to be a way to build their own marketing lists. They now prominently advertise vehicle history reports, giving you a referral link to CarFax.

Over the following months, more banner ads and referral links appear: vehicle history reports, now by edriver.com, $14.99 or $19.99. Driving record status, by drivingrecord.org, $19.99. Traffic School Online, available in 8 states, dmv-traffic-school.com and no price specified. The footer: "DMV.ORG IS PRIVATELY OPERATED AND MAINTAINED FOR THE BENEFIT OF ITS USERS."

In mid-2003, there's a rebranding. The header now reads "DMV Online Services." There are even more referral links. Just a month later, another redesign, a brief transitional stage, before in September 2003, dmv.org achieves the form familiar to most of us today: A large license-plate themed "DMV.ORG" logotype, referral links everywhere, map of the US where you can click on your state. "Rated #1 Site, CBS Early Show."

This year coincides, of course, with rapid adoption of the internet. Suddenly consumers really are online, and they really are searching for "DMV." And dmv.org is building a good reputation for itself. A widely syndicated 2002 newspaper article about post-marriage bureaucracy (often appearing in a Bridal Guide supplement) sends readers to dmv.org for information on updating their name. The Guardian, of London, points travelers at dmv.org for information on obtaining a handicap placard while visiting the US.

You also start to see the first signs of trouble. Over the following years, an increasing number of articles both in print and online refer to dmv.org as if it is the website of the Department of Motor Vehicles. We cannot totally blame them for the confusion. First, the internet was relatively new, and reporters had perhaps not learned to be suspicious of it. Second, states themselves sometimes fanned the flames. In a 2005 article, the director of driver services for the Mississippi Department of Transportation tells the reporter that you can now renew your driver's license online... at dmv.org.

dmv.org was operated by a company called eDriver. It's hard to find much about them, because they have faded into obscurity and search results are now dominated by the lawsuit that you probably suspected is coming. The "About Us" page of the dmv.org of this period is a great bit of copywriting, complete with dramatic stories, but almost goes out of its way not to name the people involved. "One of our principals likes to say..."

eDriver must not have been very large; their San Diego office address was a rented mailbox. Whether or not it started out that way is hard to say, but by 2008 eDriver was a subsidiary of Biz Groups Inc., along with Online Guru Inc and Find My Specialist Inc. These corporate names all have intense "SEO spam" energy, and they seem to have almost jointly operated dmv.org through a constellation of closely interlinked websites. In 2008, eDriver owned dmv.org but didn't even run it: they contracted Online Guru to manage the website.

Biz Groups Inc was owned by brothers Raj and Ravi Lahoti. Along with third brother David, the Lahotis were '00s domain name magnates. They often landed on the receiving end of UDRP complaints, ICANN's process for resolving disputes over the rightful ownership of domain names. Well, they were in the business: David Lahoti owns UDRP-tracking website udrpsearch.com to this day.

Their whole deal was, essentially, speculating on domain names. Some of them weren't big hits. An article on a dispute between the MIT Scratch project and the Lahotis (as owners of scratch.org) reads "Ravi updated the site at Scratch.org recently to includes news articles and videos with the word scratch in them. It also has a notice that the domain was registered in 1998 and includes the dictionary definition of scratch."

Others were more successful. In 2011, Raj Lahoti was interviewed by a Korean startup accelerator called beSuccess:

My older brother Ravi was the main inspiration behind starting OnlineGURU. Ravi owned many amazing domain names and although he didn't build a website on every one of his domains, he DID build a small website at www.DMV.org and this website started doing well. Well enough that he saw an opportunity to do something bigger with it and turn it into a bigger business.

And he is clear on how the strategy evolved to focus on SEO farming:

Search Engine Marketing and Search Engine Optimization has definitely been most effective in my overall marketing strategy. The beautiful thing about search engines is that you can target users who are looking for EXACTLY what you offer at the EXACT moment they are looking for it. Google Adwords has so many tools, such as the Google Keyword Tool where you can learn what people are searching for and how many people are searching the same thing. This has allowed me to learn about WHAT the world wants and gives me ideas on how I can provide solutions to help people with what they are looking for.

Also, San Diego Business Journal named Raj Lahoti "among the finalists of the publication's Most Admired CEO award" in 2011. So if he ever meets John Comly, they'll have something to talk about.

The thing is, the relationship between dmv.org and actual state motor vehicle administrators became blurrier over time... perhaps not coincidentally, just as dmv.org ascended to a top-ranking result across a huge range of Google queries. It really was a business built entirely on search engine ranking, and they seemed to achieve that ranking in part through a huge amount of content (that is distinctly a cut above the nearly incoherent SEO farms you see today), but also in part through feeding consumer confusion between them and state agencies. I personally remember ending up on dmv.org when looking for the actual DMV's website, and that was probably when I was trying to get a driver's license to begin with. It was getting a bit of a scammy reputation, actual DMVs were sometimes trying to steer people away from it, and in 2007 they were sued.

A group of website operators in basically the same industry, TrafficSchool.com Inc and Driver's Ed Direct, LLC, filed a false advertising suit against the Online Guru family of companies. They claimed not that dmv.org was fraudulent, but that it unfairly benefited from pretending to be an official website.

Their claim must have seemed credible. At the beginning of 2008, before the lawsuit made it very far, dmv.org's tagline changed from "No need to stand IN LINE. Your DMV guide is now ON LINE!" to "Your unofficial guide to the DMV." This became the most prominent indication that dmv.org was not an official website, supplementing the small, grey text that had been present in the footer for years.

The judge was not satisfied.

See, the outcome of the lawsuit was sort of funny. The court agreed that dmv.org was engaging in false advertising under the Lanham Act, but then found that the plaintiffs were doing basically the same thing, leaving them with "unclean hands." Incidentally, they would appeal and the appeals court would disagree on some details of the "unclean hands" finding, but the gist of the lower court's ruling held: the plaintiffs would not receive damages, since they had been pursuing the same strategy, but the court did issue an injunction requiring dmv.org to add a splash screen clearly stating that it was not an official website.

The lawsuit documents are actually a great read. The plaintiffs provided the court with a huge list of examples of confusion, including highlights like a Washington State Trooper emailing dmv.org requesting a DUI suspect's Oregon driving records. dmv.org admitted to the court that they received emails like this on "a daily basis," many of them being people attempting to comply with mandatory DUI reporting laws by reporting their recent DUI arrest... to Online Guru.

The court noted the changes made to dmv.org in early 2008, including the "Unofficial" heading and changing headings from, for example, "California DMV" to "California DMV Info." But those weren't sufficient: going forward, users would have to click "acknowledge" on a page warning them.

It is amusing, of course, that the SEO industry of the time interpreted the injunction mainly in the SEO context. This was, after all, a website that lived and died by Google rankings, part of a huge industry of similar websites. Eric Goldman's Technology and Marketing Law Blog wrote that "My hypothesis is that such an acknowledgment page wrecks DMV.org’s search engine indexing by preventing the robots from seeing the page content."

The takeaway:

This suggests a possible lesson to learn from this case. The defendants had a great domain name (DMV.org) that they managed to build nicely, but they may have been too aggressive about stoking consumer expectations about their affiliation with the government.

It's wild that "get a good domain name and pack it with referral links" used to be a substantial portion of the internet economy. Good thing nothing that vapid survives today! Speaking of today, what happened to dmv.org?

Well, the court order softened over time, and the acknowledgment page ultimately went away. It was replaced by a large, top-of-page banner, almost comically reminiscent of those appearing on cigarettes. "DMV.ORG IS A PRIVATELY OWNED WEBSITE THAT IS NOT OWNED OR OPERATED BY ANY STATE GOVERNMENT AGENCY." Below that, the license plate dmv.org logotype, same as ever.

Besides, they reformed. At sustainablebrands.com we read:

Over our 10-year history, DMV.org’s mission has shifted entirely from profit to purpose. We not only want to bring value to our users by making their DMV experience easier, we ultimately want to reduce transportation-related deaths, encourage eco-friendly driving habits, and influence other businesses to reduce their carbon footprints and become stewards of change themselves.

This press-release-turned-article says that they painted "the company’s human values on our wall, to remind ourselves every day what we’re here for and why" and that, curiously, dmv.org "potentially aim[s] to" "eliminate Styrofoam from local eateries." The whole thing is such gross greenwashing, bordering on incoherent, that I might accuse it of being AI-generated were it not a decade old.

dmv.org lived by Google and, it seems, it will die by Google. Several SEO blogs report that, sometime in 2019, Google seems to have applied aggressive manual adjustments to a list of government-agency-like domain names that includes irs.com (a whole story of its own) and dmv.org. Their search traffic almost instantaneously dropped by 80%.

dmv.org is still going today, but I'm not sure that it's relevant any more. I tried a scattering of Google queries like "new mexico driver's license" and "traffic school," the kind of thing where dmv.org used to win the top five results, and they weren't even on the first page. Online Guru still operates dmv.org, and "dmv.org is NOT your state agency" might as well be the new tagline. Phrases like that one constantly appear in headings and sidebars.

They advertise auto insurance, and will sell you online practice tests for $10. Curiously, when I look up how to renew my driver's license in New Mexico, dmv.org sends me to the actual NM MVD website. That's sort of a funny twist, because New Mexico does indeed allow renewal through private service providers that are permitted to charge a service fee. I don't think dmv.org makes enough money to manage compliance with all these state programs, though, so it's actually returned to its roots, in a way: just a directory of links to state websites.

Also, there's a form you can fill out to become a contributor! Computers Are Bad has been fun, but I'm joining the big leagues. Now I write for dmv.org.

2024-06-02 consumer electronics control

In a previous episode, I discussed audio transports and mentioned that they have become a much less important part of the modern home theater landscape. One reason is the broad decline of the component system: most consumers aren't buying a television, home theater receiver, several playback devices, and speakers. Instead, they use a television and perhaps (hopefully!) a soundbar system, which often supports wireless satellites if there are satellites at all. The second reason for the decline of audio transports is easy to see when we examine these soundbar systems: most connect to the television by HDMI.

This is surprising if you consider that soundbars are neither sources nor sinks for video. But it's not so surprising if you consider the long-term arc of HDMI [1], towards being a general-purpose consumer AV interconnect. HDMI has become the USB of home theater, and I mean that as a compliment and an insult. So, like USB, HDMI comes in a confusing array of versions with various mandatory, optional, and extension features. The capabilities of HDMI vary by the devices on the ends, and in an increasing number of cases, even by the specific port you use on the device.

HDMI actually comes to this degree of complexity more honestly than USB. USB started out as a fairly pure and simple serial link, and then more use-cases were piled on, culminating in the marriage of two completely different interconnects (USB and Thunderbolt) in one physical connector. HDMI has always been a Frankenstein creation. At its very core, HDMI is "DVI plus some other things with a smaller connector."

DVI, or really its precursors that established the actual video format, was intended to be a fairly straightforward step from the analog VGA. As a result, the logical design of DVI (and thus HDMI) video signals is pretty much the same as the signals that have been used to drive CRT monitors for almost as long as they've existed. There are four TMDS data lines on an HDMI cable, each a differential pair with its own dedicated shield. The four allow for three color signals (which can be used for more than one color space) and a clock. Two data pins plus a shield, times four, means 12 pins. That's most, but not all, of the 19 pins on an HDMI connector.

A couple of other pins are used for an I2C connection, to allow a video source to query the display for its specifications. A couple more are used for the audio return channel or ethernet (you can't do both at the same time) feature of HDMI. There's a 5V and a general signal ground. And then there's the CEC pin.
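That I2C link is the Display Data Channel (DDC), and what the source reads over it is the display's EDID: a 128-byte base block that opens with a fixed 8-byte magic header and packs, among other things, a three-letter manufacturer code into two bytes. As a rough illustration (a sketch of the EDID base-block layout, not anything specific to HDMI), decoding the vendor ID looks like this:

```python
def edid_vendor(edid: bytes) -> str:
    """Decode the three-letter manufacturer ID from an EDID base block."""
    # Every EDID base block starts with this fixed 8-byte header.
    if edid[0:8] != b"\x00\xff\xff\xff\xff\xff\xff\x00":
        raise ValueError("not an EDID block")
    # Bytes 8-9 pack three 5-bit letters (1 = 'A'), big-endian.
    word = (edid[8] << 8) | edid[9]
    return "".join(
        chr(ord("A") + ((word >> shift) & 0x1F) - 1) for shift in (10, 5, 0)
    )

# 0x4C2D is the PNP vendor ID registered to Samsung ("SAM").
sample = b"\x00\xff\xff\xff\xff\xff\xff\x00" + b"\x4c\x2d"
print(edid_vendor(sample))  # SAM
```

On Linux systems, the raw block for a connected display can often be read straight out of sysfs (somewhere under /sys/class/drm/), which makes this an easy thing to poke at.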

The fact that CEC merits its own special pin suggests that it is an old part of the standard, and indeed it is. CEC was planned from the very beginning, although it didn't get a full specification as part of the HDMI standard until HDMI 1.2a. Indeed, CEC is older than HDMI, dating to at least 1998, when it was standardized as part of SCART. But let's take a step back and consider the application.

One of the factors in the decline of component stereo systems is the remote control. In the era of vinyl, when you had to get off the couch to start a record anyway, remote controls weren't such an important part of the stereo market. The television changed everything about the way consumers interact with AV equipment: now we all stay on the couch.

I think we all know the problem, because we all lived through it: the proliferation of remotes. When your TV, your VCR, and your home theater receiver all have remote controls, you end up carrying around a bundle of cheap plastic. You will inevitably drop them, and the battery cover will pop off, and the batteries will go under the couch. This was one of the principal struggles faced by the American home for decades.

There are, of course, promised solutions on the market. Many VCR remotes had the ability to control a TV, and often the reverse as well. If you bought your TV and VCR from the same manufacturer, this worked. If you didn't, it might not, or at least setup would be more complex. This is because the protocols used by IR remotes are surprisingly unstandardized. Surprisingly unstandardized in that curious way where there are few enough IR transceiver ICs that a lot of devices actually are compatible (consider the ubiquitous Philips protocol), but no one documents it and detailed button codes often vary in small and annoying ways.

So we got the universal remote. These remotes, often thrown in with home theater receivers as a perk, have some combination of a database of remote protocols pre-reverse-engineered by the manufacturer and a "learn" mode in which they can record unknown protocols for naive playback. Results were... variable. I heard that some of the expensive universal remotes like Logitech Harmony (dead) and Control4 (still around) were actually pretty good, but they required some emphasis on the word "expensive." Universal remotes were sort of a mixed bag, but they were fiddly enough to set up and keep working that consumer adoption doesn't seem to have been high.

So, another approach came to us from the French. In the Europe of the 1970s, there was not yet a widely accepted norm for connecting a video source to a TV (besides RF modulation). France addressed the matter by legislation, mandating SCART in 1980. Over the following years, SCART became a norm in much of Europe. SCART is a bit of an oddity to Americans, as it never achieved a footprint on this continent. That's perhaps a bit disappointing, because SCART was ahead of its time.

For example, much like HDMI, SCART carried bidirectional audio. It supported multiple video formats over one cable. Most notably, though, SCART was designed for daisy chaining. Some simple aspects of the SCART design provided a basic form of video routing, where the TV could bidirectionally exchange video signals with one of several devices in a chain. The idea of daisy-chainable video interconnects continuously reappears but seldom finds much success, so I'd call this one of the more notable aspects of SCART.

That's not why we're here, though. Another interesting aspect of SCART was its communications channel between devices. The core SCART specification included a basic system of voltage signaling to indicate which device was active, but in 1998 CENELEC EN 50157-1 was standardized as a flexible serial link between devices over the SCART cable. Most often called AV.link, this channel could be used for video format negotiation, but also promised a solution to multiplying remotes: the AV.link channel can transmit remote control commands between devices. For example, your TV remote can have play/pause buttons, and when you push them the TV can send AV.link play/pause commands to whichever video source is active.

AV.link is a very simple design. A one-wire (plus ground) serial bus operates at a slow (but robust) 333 bps with collision detection. Devices are identified by four-bit addresses chosen at random (but checked for collision). Messages have a simple format: a one-byte header with the sending and receiving addresses, a one-byte opcode, and then whatever bytes are expected as parameters to the opcode.
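That framing is simple enough to sketch in a few lines of Python. One caveat: while CEC keeps the same frame shape, it assigns fixed logical addresses by device type (the TV is 0, 15 is broadcast) rather than picking them at random, and the opcode shown here, 0x36 for <Standby>, is from the CEC tables. This is an illustrative sketch of the message layout, not a bit-accurate bus implementation:

```python
def build_frame(initiator: int, destination: int,
                opcode: int, params: bytes = b"") -> bytes:
    """Pack an AV.link/CEC-style message: header byte, opcode, parameters."""
    if not (0 <= initiator <= 0xF and 0 <= destination <= 0xF):
        raise ValueError("addresses are four bits")
    # Header byte: sender in the high nibble, receiver in the low nibble.
    header = (initiator << 4) | destination
    return bytes([header, opcode]) + params

# A TV (logical address 0) broadcasting <Standby> (0x36, no parameters):
frame = build_frame(0x0, 0xF, 0x36)
print(frame.hex())  # 0f36
```

Two bytes on the wire to turn off every device in the house: the 1998 designers got a lot of mileage out of very little.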

AV.link is one of those standards that never quite got its branding together. Unlike, say, USB, where a consistent trademark identity is used, AV.link goes by different names from different vendors. Wikipedia offers the names nexTViewLink (horrible), SmartLink (mediocre), Q-Link (lazy), and EasyLink (mediocre again). One wonders if consumers were confused by these different vendor brands for the same thing; it's not a situation that happens very often with consumer interconnects.

When HDMI was developed, the provision of a pin for AV.link was pretty much copied over from SCART. Originally, the functionality wasn't even really specified, and just assumed to be similar to SCART. Later HDMI versions included a much more complete description of CEC as a supplement. Hardware support for CEC is mandated for devices like TVs as part of the HDMI certification process, but curiously, software support isn't really included. As a result, it is very common, but not universal, that TVs fully support CEC. Other AV devices like home theater receivers almost universally have CEC support. Computers almost universally do not, as cost and licensing considerations mean that GPUs do not provide a CEC transceiver.

Inconsistent implementations are not the only way that CEC is a little sketchy. Remember how different vendors referred to SCART AV.link by different names? CEC has the same problem. I won't bother with the whole list, but the names you're more likely to have seen include Samsung Anynet+, LG SimpLink, and... well, Philips EasyLink is still with us. In practice, a lot of people seem to ignore these names, and CEC is a lot more common than Anynet+ when discussing Samsung TVs. That doesn't stop Samsung from pushing their own branding in their menus and port labeling, though.

Because CEC inherits the AV.link features designed for SCART, it has a surprisingly rich featureset. For example, if you have an HDMI switch with real CEC support (these don't seem to be that common!) and a TV with software support, the TV can discover the topology of connected devices and remote control the switch to use the switch inputs as an extension of its own input selection menu.

Most CEC features are more prosaic, though. Considering the list of high-level features in the specification, "One Touch Play" means that a device can indicate that it has video to show (causing a TV to turn on and select that input) while "System Standby" means that a device being turned off can tell the other devices on the bus to turn off as well. "One Touch Record," "Deck Control," "Tuner Control," "Device Menu Control," and "System Audio Control" are all variations on devices forwarding simple remote commands (play, pause, up, down, etc) to other devices that might care about them more. For example, when you use a TV with a stereo receiver or soundbar, it should forward volume up/down commands from the remote to the audio device via System Audio Control.
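System Audio Control is mostly a matter of re-wrapping a button press: the TV takes the volume key from its own remote and emits a <User Control Pressed> message addressed to the audio system. The values below are the standard CEC ones as I understand them (opcode 0x44, UI command operands 0x41/0x42/0x43 for volume up/down/mute, logical addresses 0 for the TV and 5 for the audio system), but treat this as a hedged sketch of the idea rather than a conformant implementation:

```python
# Sketch of a TV forwarding remote volume keys over CEC.
# CEC logical addresses: TV = 0, audio system = 5.
USER_CONTROL_PRESSED = 0x44
UI_COMMANDS = {"volume_up": 0x41, "volume_down": 0x42, "mute": 0x43}

def forward_volume(key: str) -> bytes:
    """Wrap a remote key press as a CEC <User Control Pressed> frame."""
    header = (0x0 << 4) | 0x5  # initiator TV, destination audio system
    return bytes([header, USER_CONTROL_PRESSED, UI_COMMANDS[key]])

print(forward_volume("volume_up").hex())  # 054441
```

(A real device would follow up with <User Control Released> when the button comes back up, but the forwarding itself really is this thin.)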

Considering the decline of component systems, there are basically two common scenarios where CEC is used today. These are really the same scenario in a lot of ways, but they vary in the details.

  • The connection of a television to a home theater receiver. In these kinds of configurations, the home theater receiver is often used for at least some video switching. That means that the receiver is sending video to the TV, receiving audio from the TV via audio return channel (ARC), and both are exchanging commands. For example, the receiver can use CEC to turn on the TV when a video input is selected. Conversely, the TV can turn on the receiver when it is turned on.
  • The connection of a television to a soundbar. In this case, ARC is used to send audio from the TV to the soundbar. There really is no video involved in this scenario, so in a sense the HDMI cable is mostly unused. CEC is used by the TV to control the soundbar. Because soundbars don't often have a remote that the user cares to keep around, this control tends to be mostly unidirectional, used by the TV to turn the soundbar on and off.

It is interesting, isn't it, that an interconnect with four very-high-speed serial video channels is often put into use in a scenario where those channels are useless. Instead, the much lower-rate ARC and CEC channels are the important ones. Well, think about USB as a power connector... these things happen.

CEC could be used in much more complex scenarios. For example, if you had a DVR connected to your TV via CEC, you could browse the electronic program guide (EPG) on your TV and choose a program to record. This would cause the TV to use CEC Timer Programming to send the program details from the TV EPG to the DVR to schedule the recording. How widely was this ever used? I don't know, I suspect not very, because these days DVRs are almost invariably provided by the cable or satellite company, who expect you to use the DVR's EPG rather than your TV's anyway.

This is actually one of the scenarios where you see ARC used for reasons other than synchronizing control of an audio output: set-top boxes (STBs). Media companies that distribute STBs, mostly cable and satellite operators, tend to be in a bit of a war to own your television watching experience. They face stiff competition from "Smart TVs." I have a suspicion that the complete proliferation of smart TVs is largely an artifact of the television manufacturers trying to win advertising surface area away from the STB manufacturers, who have traditionally held most of it via the EPG.

As some evidence of this fight, consider the case of Xfinity Xumo (formerly Flex), the compact STB that Xfinity offers to its internet customers for free. Since it's advertised to people who don't necessarily have any TV service from Xfinity, it's not really a conventional STB. It's more of a slightly-weird-but-free Roku or Amazon Fire Stick. It doesn't really offer anything that your TV doesn't already, but unlike your TV, it's controlled by Comcast. This gives them the opportunity to upsell you on IPTV services, but Comcast never seems to have pursued this route that far. Mostly it gives them the opportunity to advertise to you, and to grab some partner revenue from various streaming apps.

Anyway, that was a bit of a digression. The point is that Comcast and Dish Network and all of their compatriots don't want you using your TV, they want you using your STB. So they give you a big chunky remote ("With Voice Control!") and the STB attempts to use CEC to control the TV so you never have to touch its small, svelte remote ("With Voice Control!") and split their sponsored content revenue with LG.

That's an interesting detail of this whole landscape, isn't it? CEC was developed as a solution to a technical problem: people had multiple devices, and hauling around multiple remotes was frustrating. Over the decades since, it has evolved into a strategy to address a business problem: everyone that sells you AV equipment prefers that you passionately navigate their on-screen menus while completely forgetting about those of your other components.

That's pretty much what's happening with the audio devices as well. TV manufacturers want to capture as much of your entertainment attention and budget as possible, so ideally they sell you a TV and their matching soundbar system (which can be fairly inexpensive since it is closely coupled to the TV and needs very little of its own control logic). CEC here is an under-the-hood implementation detail, something that happens behind the scenes to make your soundbar do the few things it does.

Say you're a higher-end customer, though, with a home theater receiver. The AV receiver industry has been surprisingly unambitious about capturing Platform Revenue, probably because soundbars have pretty much eliminated everything but higher-end, "audiophile"-focused brands. These companies either lack the technical resources to develop a good Entertainment Platform or don't think their customers will respond well to yet another remote with a Pluto TV button. I would like to say it's mostly the latter, but given my experience with the on-screen design and mobile apps of several leading AV receiver manufacturers, I suspect it's mostly the former.

So CEC functions perhaps the most as it was originally intended: you can mainly interact with your TV, and CEC carries control messages to the receiver as needed so that you don't need to find its remote to select the right input. Conceptually you can even use the TV to control non-video functions. For example, my particular combination of a Samsung TV and Yamaha receiver implements CEC completely enough that I can turn on the receiver, select the turntable preamp input, and control the volume via the TV if I want to. Then I still have to get up to actually put a record onto the turntable, and now the TV is just on the whole time, so this isn't that appealing in practice. I am still rummaging for the receiver's own remote, that or using its terrible Android app.

In the STB scenario, something like Xfinity X1 or Dish Hopper, we have an inversion of control: the only remote you'll need, they hope, is the STB's remote. It will remote control the TV via CEC as needed. This inevitably sets up a power struggle where your Smart TV gets lonely and wants attention. I am mostly kidding about this emotional interpretation of the situation, but obviously the TV manufacturer does have an incentive to distract your attention from the STB, which probably has something to do with the tendency of Smart TVs to pop up a lot of on-screen chrome whenever you turn them on.

The coolest thing about CEC, in my mind, is that unlike HDMI's point-to-point video channels, it is multi-drop. That is, when you connect a bunch of HDMI sources to a multi-input TV or receiver or another switching device, they can all be connected to the same unified CEC bus. That means that HDMI devices can communicate with each other via CEC even when there's no active video or audio connection.

CEC even has a fairly complex addressing mechanism to take advantage of this. CEC physical addresses are assigned based on bus topology, and a mapping protocol is used to advertise a correspondence between new physical addresses and logical addresses. Logical addresses, the same 4-bit addresses from AV.link, are assigned based on capabilities. Typically logical address 0 will be the TV, 1 and 2 will be recorders, 3 will be a tuner, 4 a playback device, 5 a home stereo receiver. You can have a fairly large component setup where everything is controllable by sending CEC messages to standard logical addresses.
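As a rough illustration (a sketch of my own, not taken from any real CEC stack), the scheme fits in a few lines of Python: physical addresses are four nibbles packed into 16 bits, reflecting which port each device hangs off of, while logical addresses come from the spec's device-type table.

```python
# Illustrative sketch of CEC addressing. Logical addresses follow
# the CEC device-type table; physical addresses are four nibbles
# packed into a 16-bit value (the TV root is 0.0.0.0).

LOGICAL_ADDRESSES = {
    0: "TV",
    1: "Recording Device 1",
    2: "Recording Device 2",
    3: "Tuner 1",
    4: "Playback Device 1",
    5: "Audio System",
}

def format_physical(addr: int) -> str:
    """Render a 16-bit CEC physical address as dotted nibbles."""
    return ".".join(str((addr >> shift) & 0xF) for shift in (12, 8, 4, 0))

# A player on HDMI input 1 of a receiver that is itself on the
# TV's HDMI input 2 gets physical address 2.1.0.0:
print(format_physical(0x2100))  # -> 2.1.0.0
```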

And other aspects of CEC are designed to accommodate these kinds of more complex networks. For example, when the user selects a device to watch on their TV, the TV can send a "Set Stream Path" message (opcode 0x86). The parameter on this message is the physical CEC address of the desired device, and any CEC switches in the path are expected to see the message and select the appropriate input to form a path from the selected device to the TV. It's a little bit of centrally-controlled circuit switching right in your entertainment center. Neat!
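To make the framing concrete, here is a hypothetical sketch of what a Set Stream Path frame looks like on the wire: a CEC frame is a header byte carrying the initiator and destination nibbles, followed by the opcode and its parameters. Set Stream Path is a broadcast message (destination 0xF), and the function name here is my own invention.

```python
# Sketch of composing a Set Stream Path frame. The header byte
# packs the initiator's logical address in the high nibble and
# the destination in the low nibble; opcode 0x86 carries the
# 16-bit physical address of the device to route to the TV.

def set_stream_path(initiator: int, physical_addr: int) -> bytes:
    header = (initiator << 4) | 0xF  # 0xF = broadcast destination
    return bytes([header, 0x86, physical_addr >> 8, physical_addr & 0xFF])

# TV (logical address 0) asking switches to route device 2.1.0.0:
frame = set_stream_path(0, 0x2100)
print(frame.hex())  # -> 0f862100
```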

You can even do broadcast messaging across the entire CEC topology. TVs often use this to discover what devices they're connected to, saving the user some menu setup. That's about the only time you'd notice it, though: like CEC's other more advanced capabilities, routing and multi-device messaging are rarely used outside of its very simplest application.

I want to pass some sort of tidy moral judgment, but CEC is a hard case. It's kind of a mixed bag. The more basic functionality tends to work well and adds convenience. The more complex functionality tends to either not work or be buried deeply enough in configuration menus that no one uses it. It inevitably leads to some weird, inelegant behavior. My husband will put a cab ride video on the TV and then Spotify Cast to the receiver. But then what if you want to listen to the video audio? Easiest way to get the receiver switched back to the TV audio is, of course, to turn the TV off and on again. When you turn it on, it uses CEC "One Touch Play" to signal the receiver to select it again. The particular convergence of technologies here leads to a strange tic, sort of a superstitious behavior, that works fine but feels bad.

If you're a weirdo like me, you use your TV heavily as a monitor for a computer. You might find the gap here rather conspicuous: when I wiggle the mouse to wake the computer up, the TV doesn't turn on. HDMI keeps gaining features, video games are a big driver of high-end PC and television sales, there is an inevitable convergence happening between "monitor" and "TV," and between "video source" and "computer." But the computer video industry is, well, a little slow to catch on.

You might remember that it took an awkwardly long time for PC GPUs to have consistent support for HDMI audio, and then it was still weird and sketchy for a good few years. Well, we haven't even quite made it to that point on the CEC front. I don't think any conventional PCs have CEC transceivers. The solution, if you are mad enough to want one, is a USB CEC adapter. They're basically passthrough devices for HDMI, they just tap the CEC pin and hook it up to a UART. Not many companies make them but they're cheap enough. Software support is... minimal, but it'll let Kodi turn your TV on.

It's fun to think about, though. You know how CEC is multi-drop? You could hook up multiple computers to an HDMI switch and they could talk to each other with CEC. You could use some vendor-specific opcodes to convey IP. You could log onto the internet over HDMI, at 333bps. You could put OpenSC over IP over HDMI CEC and turn your lights on via your stereo receiver. What a dream! I was going to say you could do DMX-512 over CEC but actually at CEC's slow speed the register-broadcast model of DMX would become a pretty significant problem.

You could also log onto the internet over HDMI at 100Mbps, but that's using different pins, your GPU definitely doesn't support it, and I don't even know of a way to do HDMI Ethernet from a PC. CEC may be a bit of an awkward cousin but at least it's more popular than HDMI Ethernet.

[1] pun not intended

2024-05-25 grc spinrite

I feel like I used to spend an inordinate amount of time dealing with suspect hard drives. I mean, like, back in high school. These days I almost never do, or on the occasion that I have storage trouble, it's a drive that has completely stopped responding at all and there's little to do besides replacing it. One time I had two NVMe drives in two different machines do this to me the same week. Bad luck or quantum phenomenon, who knows.

What accounts for the paucity of "HDD recovery" in my adult years? Well, for one, no doubt HDD technology has improved over time and modern drives are simply more reliable. The well-aged HDDs I have running without trouble in multiple machines right now support this theory. But probably a bigger factor is my buying habits: back in high school I was probably getting most of the HDDs I used second-hand from the Free Geek thrift store. They were coming pre-populated with problems for my convenience.

Besides, the whole storage industry has changed. What's probably more surprising about my situation is how many "spinning rust" HDDs I still own. Conventional magnetic storage only really makes sense in volume. These days I would call an 8TB HDD a small one. The drives that get physical abuse, say in laptops, are all solid state. And solid state drives... while there is no doubt performance degradation over their lifetimes, failure modes tend to be all-or-nothing.

I was thinking about all of this as I ruminated on one of the "holy grail" tools of the late '00s: SpinRite, by Gibson Research Corporation.

The notion that HDDs aren't losing data like they used to is supported by the dearth of data recovery tools on the modern shareware market. Well, maybe that's more symptomatic of the complete hollowing out of the independent software industry by the interests of capitalism, but let's try to dwell on the positive. Some SEO-spam blog post titled "Best data recovery software of 2024" still offers some classic software names like "UnDeleteMyFiles Pro," but some items on the list are just backup tools, and options like Piriform Recuva and the open-source PhotoRec still rank prominently... as they did when I was in high school and my ongoing affection for Linkin Park was less embarrassing [1].

Back in The Day, freeware, shareware, and commercial (payware?) data recovery software proliferated. It was advertised in the back of magazines, the sidebar banner ads of websites, and even appeared in the electronics department of Fred Meyer's. You also saw a lot of advertisements for services that could perform more intensive methods, like swapping an HDD's controller for one from another unit of the same model. These are all still around today, just a whole lot less prominent. Have you ever seen an Instagram ad for UnDeleteMyFiles Pro?

First, we should talk a bit about the idea of data recovery in general. There are essentially two distinct fields that we might call "data recovery": consumers or business users trying to recover their Important Files (say, accounting spreadsheets) from damaged or failed devices, and forensic analysts trying to recover Important Files (say, the other accounting spreadsheets) that have been deleted.

There is naturally some overlap between these two ventures. Consumers sometimes accidentally delete their Important Files and want them back. Suspects sometimes intentionally damage storage devices to complicate forensics. But the two different fields use rather different techniques.

Let's start by examining forensics, both to set up contrast to consumer data recovery and because I know a lot more about it. One of the quintessential techniques of file system forensics is "file carving." A file carving tool examines an arbitrary sequence of bytes (say, from a disk image) and looks for the telltale signs of known file formats. For example, most common file formats have a fixed prefix of some kind. ZIP files start with 0x504B0304, the beginning of which is the ASCII "PK" for Phil Katz who designed the format. Some formats also have a fixed trailer, but many more have structure that can be used to infer the location of the end of the file. For example, in ZIP files the main header structure, the "central directory," is actually a trailer found at the end of the file.
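The signature-matching idea can be sketched in a few lines. This toy carver only handles a contiguous ZIP with no archive comment; real tools like PhotoRec or Scalpel handle many formats, fragmentation, and false positives.

```python
# Minimal file-carving sketch: scan a raw byte stream for the ZIP
# local-file-header magic and the end-of-central-directory magic,
# then carve the span between them.

ZIP_HEADER = b"PK\x03\x04"  # 0x504B0304, local file header
ZIP_EOCD = b"PK\x05\x06"    # end of central directory record

def carve_zip(image):
    """Return the bytes of the first contiguous ZIP found, or None."""
    start = image.find(ZIP_HEADER)
    if start == -1:
        return None
    eocd = image.rfind(ZIP_EOCD)
    if eocd == -1 or eocd < start:
        return None
    # The EOCD record is 22 bytes plus an optional comment;
    # this sketch assumes no comment.
    return image[start:eocd + 22]
```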

If you can find the beginning and end of a file, and it's stored sequentially, you've now got the whole file. When the file is fragmented in the byte stream (commonly the case with disk images), the problem is a little tougher, but still you can find a lot of value. A surprising number of files are stored sequentially because they are small, some filetypes have internal structure that can be used to infer related blocks and their order, and even finding a single block of a file can be useful if it happens to contain a spreadsheet row starting "facilitating payments to foreign officials" or, I don't know, "Fiat@".

You end up doing this kind of thing a lot because of a detail of file systems that all of my readers probably know. It's often articulated as something like "when you delete a file, it's not deleted, just marked as having been deleted." That's not exactly wrong but it's also an oversimplification in a way that makes it more difficult to understand why that is the case. There's a whole level of indirection due to block allocation, updating the bitmap on every file delete is a relatively time-consuming process that offers little value, actually overwriting blocks would be even more time consuming with even less value, etc. Read Brian Carrier for the whole story.

Actually, screw Brian Carrier, I've written before about the adjacent topic of secure erasure of computer media.

My point is this: these forensic methods are performed on a fully functional storage device (or more likely an image of one), where "recovery" is necessary and possible because of the design of the file system. The storage device, as hardware, is not all that involved. Well, that's really an oversimplification, and points to an important consideration in modern data recovery: storage devices have gotten tremendously more complex, and that's especially true of SSDs.

Even HDDs tend to have their own thoughts and feelings. They can have a great deal of internal logic dedicated to maintaining the disk surface, optimizing performance, working around physical defects on the surface, caching, encryption, etc. Pretty much all of this is proprietary to the manufacturer, undocumented, and largely a mystery to the person performing recovery. Thinking of the device as a "sequence of bytes" throws out a lot of what's really going on, but it's a necessary compromise.

SSDs have gone even further. Flash storage is less durable than magnetic storage but also more flexible. It requires new optimizations to maximize life and facilitates optimizations for access time and speed. Some models of SSDs vary from each other only by their software configuration (this has long been suspected of some HDDs as well, but I have no particular insight into Western Digital color coding). Even worse for the forensic analyst, the TRIM command creates a whole new level of active management by the storage device: SSDs know which blocks are in use, allowing them to constantly remap blocks on the fly. It is impossible, without hardware reverse engineering techniques, to produce a true image of an SSD. You are always working with a "view" of the SSD mediated by its firmware.

So let's compare and contrast forensic analysis to consumer data recovery. The problem for most consumers is sort of the opposite: they didn't delete the file. If they could get the sequence of bytes off the storage device, they could just access the file through the file system. The problem is that the storage device is refusing to produce bytes at all, or it's producing the wrong ones.

Techniques like file carving are not entirely irrelevant to consumer data recovery because it's common for storage devices to fail only partially. There are different ways of referring to the physical geometry of HDDs, and besides, modern storage devices (HDDs and SSDs alike) abstract away their true geometry. Different file systems also use different terminology for their own internal system of mapping portions of the drive to logical objects. So while you'll find people say things like "bad cluster" and "bad sector," I'm just going to talk about blocks. The block is the smallest elementary unit by which your file system interacts with the device. The size of a block is typically 512B for smaller devices and 4k for larger devices.

A common failure mode for storage devices (although, it seems, not so much today) is the loss of a specific block: the platter is damaged, or some of the flash silicon fails, and a specific spot just won't read any more. The storage device can, and likely will, paper over this problem by moving the block to a different area in the storage medium. But, in the process, the contents of the block are probably lost. The new location just contains... whatever was there before [2]. Sometimes the bad block is in the middle of a file, and that sucks for that file. Sometimes the bad block is in the middle of a file system structure like the allocation table, and that sucks for all of the files.

More complicated file systems tend to incorporate precautionary measures against this kind of thing, so the blast radius is mostly limited to single files. For example, NTFS keeps a second copy of the allocation table as a backup. Journaling can also provide a second source of allocation data when the table is damaged.

Simpler file systems, like the venerable FAT, don't have any of these tricks. They are, after all, old and simple. But old age and simplicity gives FAT a "lowest common denominator" status that sees it widely used on removable devices. PhotoRec, while oriented towards the consumer data recovery application, is actually a file carving tool. It's no coincidence that it's called PhotoRec. Removable flash devices like SD cards have simple controllers and host simple file systems. They are, as a result, some of the most vulnerable devices to block failures that render intact files undiscoverable.

What about the cases where the file isn't intact, though? Where the block that has become damaged is part of the file that we want? What about cases where a damaged head leaves an HDD unable to read an entire surface?

Well, the news isn't that great. Despite this being one of the most common types of consumer storage failure for a decade or more, and despite the enormous inventory of software that promises to help, your options are limited. A lot of the techniques that software packages used in these situations lack supporting research or are outright suspect. Let's start on solid ground, though, with the most obvious and probably safest option.

One of the problems you quickly encounter when working with a damaged storage device is the file system and operating system. File systems don't like damaged storage devices, and operating systems don't like file systems that refuse to give up a file they say exists. So you try to copy files off of the bad device and onto a good one using your daily-driver file browser, and it hits a block that won't read and gets stuck. Maybe it hangs almost indefinitely, maybe you get an obscure error and the copy operation stops. Your software is working against you.

One of the best options for data recovery from suspect devices is an open-source tool called ddrescue. ddrescue is very simple and substantially similar to dd. It has one critical trick up its sleeve: when reading a block fails, ddrescue retries a limited number of times and then moves on. With that little adaptation, you can recover all of the working blocks from a device and so likely recover all of the files but a few.
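The essence of that trick, reduced to a toy: read block by block, retry a bounded number of times, and record the failure rather than aborting. This is my own illustration of the approach, not ddrescue's implementation (which also does clever things like reading the easy regions first); `read_block` is a stand-in for the actual device read.

```python
# Toy version of ddrescue's core loop: on a read failure, retry a
# few times, then log the bad block and move on instead of
# aborting the whole copy.

def rescue(read_block, n_blocks, retries=3):
    image, bad = {}, []
    for blk in range(n_blocks):
        for _attempt in range(retries):
            try:
                image[blk] = read_block(blk)
                break
            except IOError:
                continue
        else:
            bad.append(blk)  # give up on this block, keep going
    return image, bad
```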

Besides, just retrying a few times has value. Especially on magnetic devices, the result of reading the surface can be influenced by small perturbances. An unreadable sector might be readable every once in a while. This doesn't seem to happen as much with SSDs due to the dynamics of flash storage and preemptive correction of weak or ambiguous values, but I'm sure it still happens every once in a while.

At the end of the day, though, this method still means accepting the loss of some data. Losing some data is better than losing all of it, but it might not be good enough. Isn't there anything we can do?

HDDs used to be different. For one, they used to be bigger. But there's more to it than that. Older hard drives used stepper motors to position the head stack, and so head positioning was absolute but subject to some mechanical error. Although this was rarely the case on the consumer market, early hard drives were sometimes sold entirely uninitialized, without the timing marks the controller used to determine sector positions. You had to use a special tool to get the drive to write them [3]. It was common for older drives to come with a report (often printed on the label) of known bad sectors to be kept in mind when formatting.

We now live in a different era. Head stacks are positioned by a magnetic coil based on servo feedback from the read head; mechanical error is virtually absent and positioning is no longer absolute but relative to the cylinder being read. Extensive low-level formatting is required but is handled completely internally by the controller. Controllers passively detect bad blocks and reallocate around them. Honestly, there's just not a lot you can do. There are too many levels of abstraction between even the ATA interface and the actual storage to do anything meaningful at the level of the magnetic surface. And all of this was pretty much true in the late '00s, even before SSDs took over.

So what about SpinRite?

SpinRite dates back to 1987 and is apparently still under development by its creator Steve Gibson. Gibson is an interesting figure, one of the "Tech Personalities" that contemporary media no longer creates (insert comment about decay in the interest of capitalism here). Think Robert Cringely or Leo Laporte, with whom Gibson happens to cohost a podcast. In my mind, Gibson is perhaps most notable for his work as an early security researcher, which had its misses but also had its hits. Through the whole thing he's run Gibson Research Corporation. GRC offers a variety of one-off web services, like a password generator (generated, erm, server-side) and something that displays the TLS fingerprint of a website you enter. There's a user-triggered port scanner called ShieldsUp, which might be interesting were it not for the fact that its port list seems limited to the Windows RPC mapper and some items of that type... things that were major concerns in the early '00s but rarely a practical problem today.

It's full of some gems. Consider the password generator...

What makes these perfect and safe? Every one is completely random (maximum entropy) without any pattern, and the cryptographically-strong pseudo random number generator we use guarantees that no similar strings will ever be produced again. Also, because this page will only allow itself to be displayed over a snoop-proof and proxy-proof high-security SSL connection, and it is marked as having expired back in 1999, this page which was custom generated just now for you will not be cached or visible to anyone else. ... The "Techie Details" section at the end describes exactly how these super-strong maximum-entropy passwords are generated (to satisfy the uber-geek inside you).

You know I'm reading the Techie Details. They describe a straightforward approach using AES in CBC mode, fed by a counter and its own output. It's unremarkable except that just about any modern security professional would have paroxysms at the fact that he seems to have implemented it himself. Sure, there are better methods (like AES CTR), but this is the kind of thing where you shouldn't even really be using methods. "I read it from /dev/urandom" is a far more reassuring explanation than a block diagram of cryptographic primitives. /dev/urandom is a well-audited implementation, whatever is behind your block diagram is not. Besides, it's server side!
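For contrast, the reassuring version is a few lines in any language with an audited CSPRNG binding. In Python, the `secrets` module draws from the OS entropy source (effectively /dev/urandom on Linux), so there is no homemade block diagram to audit:

```python
# Password generation backed by the platform CSPRNG rather than a
# hand-rolled construction of cryptographic primitives.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def password(length=24):
    """Generate a random password from the OS entropy source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(password())  # different every run
```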

My point is not so much to criticize Gibson's technical expertise, although I certainly think you could, but to say that he doesn't seem to have updated his website in some time. A lot of little details like references to WEP and the fact that the PDFs are Corel Ventura output support this theory. By association, I suspect that GRC's flagship product, SpinRite, doesn't get a lot of active maintenance either.

Even back around 2007 when I first encountered SpinRite it was already a little questionable, and I remember a rough internet consensus of "it likely doesn't do anything but it probably doesn't hurt to try." A little research finds that "is SpinRite snake oil?" threads date back to the Usenet era. It doesn't help that Steve Gibson's writing is pervaded by a certain sort of... hucksterism. A sort of ceaseless self-promotion that internet users associate mostly with travel influencers selling courses about how to make money as a travel influencer.

But what does SpinRite even claim? After a charming disclaimer that GRC is opposed to software patents but nonetheless involved in "extensive ongoing patent acquisition" related to SpinRite, a document titled "SpinRite: What's Under the Hood" gives some details. It's undated but has metadata pointing at 1998. That's rather vintage, and I see several reasons to think that there have been few or no functional changes in SpinRite since that time.

SpinRite is a bootable tool based on FreeDOS. It originated as an interleaving tool, which I won't really explain because it's quite irrelevant to modern storage devices and really just a historic detail of SpinRite. It also "introduc[ed] the concept of non-destructive low-level reformatting," which I won't really explain because I don't know what it means, other than it seems to fall into the broad category of no one really knowing what "low level formatting" means. It's a particularly amusing example, because most modern software vendors use "low level formatting" to refer explicitly to a destructive process.

SpinRite "completely bypasses the system's motherboard BIOS software when used on any standard hard disk system." I assume this means that SpinRite directly issues ATA commands, which probably has some advantages, although the specific ones the document calls out seem specious.

In reference to SpinRite's data recovery features, we read that "The DynaStat system's statistical analysis capability frequently determines a sector's correct data even when the data could never be read correctly from the mass storage medium." This is what I remember as the key claim of SpinRite marketing over a decade ago: that SpinRite would attempt rereading a block a very large number of times and then determine on a bit-by-bit basis what the most likely value is. It seems reasonable on the surface, but it wouldn't make much sense with a drive with internal error correction. That's universal today but I'm not sure how long that's been true, presumably in the late '90s this was a better idea.
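For what it's worth, the bit-wise majority vote the marketing describes is easy to sketch, assuming you could somehow get raw, uncorrected reads out of the drive, which is exactly the part a modern drive's internal ECC makes implausible. This is my own reconstruction of the claimed technique, not GRC's code:

```python
# Bit-level majority vote over many reads of the same sector:
# each output bit takes the value seen in more than half of the
# reads. Only meaningful if the reads are raw and uncorrected.

def majority_vote(reads):
    out = bytearray(len(reads[0]))
    for i in range(len(out)):
        for bit in range(8):
            ones = sum((r[i] >> bit) & 1 for r in reads)
            if ones * 2 > len(reads):
                out[i] |= 1 << bit
    return bytes(out)
```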

That's probably the high point of this document's credibility. Everything from there gets more suspect. It claims that SpinRite has a proprietary system that models the internal line coding used by "every existing proprietary" hard drive, an unlikely claim in 1998 and an impossible one today without a massive reverse engineering effort. Consider also "its second recovery strategy of deliberately wiggling the drive's heads." It seems to achieve this by issuing reads to cylinders on either side of the cylinder in question, but it's questionable if that would even work in principle on a modern drive. You must then consider the use of servo positioning on modern drives, which means that the head will likely oscillate around the target cylinder before settling on it anyway.

This gives the flavor of the central problem with SpinRite: it claims to perform sophisticated analysis at a very low level of the drive's operation, but it claims to do that with hard drives that intentionally abstract away all of their low level details.

A lot of the document reads, to modern eyes, like pure flimflam, written by someone who knew enough about HDDs to sound technical but not enough to really understand the implications of what they were saying. The thing is, though, this document is from '98 and the software was already a decade old at the time! The document does note that SpinRite 3.0 was a complete rewrite, but I suspect it was the last complete rewrite and probably carried a lot of its functionality over from the first two versions.

I think that SpinRite probably does implement the functionality that it claims and that those features might have been of some value in the late '80s and much of the '90s. Then technology moved on and SpinRite became irrelevant. Probably the only thing that SpinRite does of any value on a modern drive is just rewriting the entire addressable area, which gives the controller an opportunity to detect bad blocks and remap them. That should also happen in the course of normal operation, though, and even tools dedicated to that purpose (like the open-source badblocks) are becoming rather questionable in comparison to the preemptive capabilities of modern HDDs. This type of bad-block-detecting rewrite pass is probably only useful in pathological cases on older devices, but it's also the only real claim of the vast majority of modern "hard drive repair" software.

It seems a little mean-spirited to go after GRC for their old software, but they continue to promote it at a cost of $89. The FAQ tells us that "SpinRite is every bit as necessary today as it ever was β€” maybe even more so since people store so much valuable personal 'media' data on today's massive drives." I resent the implication of the scare-quoted "media," Mr. Gibson, but what I do with my hard drives in my own home is none of your business.

The FAQ tells us "SpinRite is often credited with performing 'true miracles' of data recovery," but is oddly silent on the topic of SSDs. Some dedicated Wikipedia editor rounded up a number of occasions on which Gibson said that SpinRite was of limited or no use with SSDs, and yet the GRC website currently includes the heading "Amazingly effective for SSDs!" There is no technical explanation offered for how SpinRite's exceptionally platter-centric features affect an SSD, nor mention of any new functionality targeting flash storage. Instead, there are just anecdotal claims that SpinRite made SSDs faster and a suggestion that the reader google a well-known behavior of flash storage for which SSD controllers have considerable mitigations.

It is an odd detail of the GRC website that most of the new information about the product is provided in the form of video. Specifically, videos excerpted from recent episodes of Gibson and Laporte's podcast "Security Now." Security Now is weekly, so I don't think that SpinRite promotional material makes up a large portion of it, but it does seem conspicuous that Gibson uses the podcast as a platform for 15 minute stories about how SpinRite worked miracles. These segments, and their mentions of how SpinRite is a very powerful tool that one shouldn't run on SSDs too often, absolutely reek of the promotional techniques behind Orgone accumulators, Hulda Clark's "Zapper," and color therapy. It is, it seems, quack medicine for the hard drive.

I don't think SpinRite started as a scam, but I sure think it ended as one.

A lot of this was already apparent back in the late '00s, and I can't honestly say that bootleg copies of SpinRite ever improved anything for me. So why did I love it so much? The animations!

SpinRite's TUI was truly a work of art. Just watch it go!

[1] I recently bought the 20th anniversary vinyl box set of Meteora, which emphasizes that (1) 20 years have passed and (2) I am still a loser.

[2] This kind of visible failure seems uncommon with SSDs, likely because SSD controllers tend to read out the flash in a critical, suspicious way and take preemptive action when the physical state is less than perfectly clear cut. In a common type of engineering irony, the fact that flash storage is less reliable than magnetic media requires aggressive management of the problem that makes the overall system more reliable. Or at least that's what I tell myself when another SSD has gone completely unresponsive.

[3] Honestly this doesn't seem to have been typical with any hard drives by the microcomputer era, which makes perfect sense if you consider that these hard drives were sold with bad sector lists and therefore must have been factory tested. The whole "low level formatting" thing has been 70% a scam and 30% confusion with the very different technical tradition of magnetic diskettes, since probably 1990 at least.

2024-05-15 catalina connections

Some things have been made nearly impossible to search for. Say, for example, the long-running partnership between Epson and Catalina: a query that will return pages upon pages of people trying to use Epson printers with an old version of MacOS.

When you think of a point of sale printer, you probably think of something like the venerable Epson TM-T88. A direct thermal printer that heats small sections of specially coated paper, causing it to turn black. Thermal paper of this type is made in various widths, but the 80mm or 3 1/8" used by the TM-T88 is the most common. The thermally-reactive coating on the paper incorporates some, umm, questionable chemicals, but moreover, the durability of direct thermal prints is poor. The image tends to fade over not that long of a timespan. Besides, the need for special paper is an irritation.

So, there are other technologies available. Thermal transfer, in which a ribbon of ink (I suspect actually a thermoplastic) is pressed against the paper and heated to cause the ink to stick, is often used for more durability-sensitive applications like warehouse labeling. The greater flexibility of paper (or plastic) stock sees thermal transfer used in specialty applications as well, like conference attendee badges. Thermal transfer printers tend to be more expensive and more complex than direct thermal, though, and are rarely used at the POS.

Impact printers are actually fairly common in a POS-adjacent application. These printers punch metal pins against an inked ribbon, pushing it against the paper to leave a mark. Impact printers were actually the norm for receipt printing prior to the development of inexpensive thermal printers. They remain popular in restaurant kitchens: the plain paper they use is less readily damaged by oils, and won't turn entirely black if exposed to too much heat, as might happen when a ticket is clipped above a grill. Impact receipt printers today are often referred to as kitchen ticket printers as a result.

Impact receipt printers, and many impact printers in general, have a neat trick: you can manufacture an ink ribbon in two colors, say, black on one half and red on the other. By either using two sets of impact pins or shifting the position of the impact head, either black or red can be printed. Dual-color printers with black and red ribbons became ubiquitous for kitchen tickets, although the red doesn't tend to reproduce well from an old, dry ribbon.

The ability of impact printers to use plain paper had another advantage: slip printing. A slip printer is a device intended to print characters on a small piece of paper inserted into it. Historically they were often used by bank tellers to print account and reference numbers onto deposit slips, for later auditing. In other applications they functioned as more sophisticated "received" stamps, adding not just the time and date but customer account or transaction numbers to received paperwork. The legal profession has a tradition of "Bates numbering," which traces its history to a rather different printing device, but Bates numbers could be applied by slip printers as well. In this case, of course, we would need to refer to them as Generic Sequential Page Numbers, Compare to Bates (TM).

A variant of the slip printer, really a receipt printer (often thermal) and slip printer (often impact) married into one box, is known as a check validator. Very common in grocery stores until recently, these printers both produced receipts and printed an audit number and endorsement on the back of the check a customer might offer in payment. It's difficult to imagine paying for groceries with a check, but it used to be a common practice. For many years, the practicalities of accepting checks were a major driver of POS technology. When a cashier rang you up, there were two options: they pushed the cash button, and the POS "bumped" the cash drawer open, or they pushed the check button, and the POS sent an endorsement to the check validator. The close coupling of these two features means that cash drawer bumping is traditionally the task of the receipt printer, and cash bump outputs are common to this day.
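Those bump outputs are still driven through the receipt printer's control language. On Epson-style printers, the ESC/POS "ESC p" command pulses the drawer-kick connector. A minimal sketch of the byte sequences involved (the pulse timings here are typical defaults, not values from any specific manual):

```python
ESC, GS = b"\x1b", b"\x1d"

def drawer_kick(pin: int = 0, on_ms: int = 50, off_ms: int = 500) -> bytes:
    # ESC p m t1 t2: pulse drawer-kick connector pin m; t1/t2 are in 2 ms units.
    return ESC + b"p" + bytes([pin, on_ms // 2, off_ms // 2])

def receipt(text: str) -> bytes:
    return (
        ESC + b"@"              # ESC @: initialize the printer
        + text.encode("ascii")
        + b"\n" * 4             # feed the paper past the cutter
        + GS + b"V\x00"         # GS V 0: full cut
        + drawer_kick()         # fire the cash drawer bump output
    )

print(receipt("TOTAL  $4.20").hex())
```

In practice you'd write these bytes straight to the printer's serial, USB, or network interface; the drawer itself is just a solenoid wired to an RJ jack on the back of the printer.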

But where, exactly, is this tour of POS printing technology taking us? Well, you might notice the absence of the humble inkjet. It might seem surprising: inkjet mechanisms can actually be quite compact, and they tend to be a natural evolution of impact printing. Well, there are indeed inkjet printers in the receipt printer class, but there are some practical considerations. Moving a smaller print head across the paper in bands requires a more complex mechanism, and it's slow compared to printing in one pass. Inkjet heads large enough to span the whole width of the receipt tape are fairly expensive.

And after all that, inkjet seems high maintenance compared to the almost bulletproof reliability of direct thermal printers. Consider the state of the average gas pump "CRIND" (Card Reader In Dispenser) receipt, and then consider that the small thermal mechanism is still managing to produce that output after many years in the harsh conditions of the outdoors. Inkjets tend to quickly malfunction without some sort of automated mechanical cleaning, and that's under office conditions.

So, to put it succinctly, inkjet receipt printers just aren't popular.

You could make similar comments about office printers, where inkjet suffers in many ways when compared to laser or LED printers. But they have been a tremendous success at the lower end of the market. There are a few reasons for this outcome, but one of the bigger ones is color: for a laser or LED printer to produce color used to be rather complicated. In the '00s, many inexpensive color laser printers were "four-pass" printers: the page had to be looped through the print engine four times, one for each color! It saved a lot of parts but made printing more than four times slower. Inkjets never had this problem. It's a fairly simple matter to make an inkjet print head that serves multiple colors in one assembly!

The same ideas are applicable to receipt printers. If you, for some reason, want a full-color receipt, inkjet is the way to go. But no one wanted a full-color receipt. Even dual-color impact printers disappeared into the kitchen.

And then a company called Catalina came along. Catalina keeps a somewhat low profile among consumers, certainly lower than the MacOS release. Search results suggest lower even than the island off of Los Angeles, for which the company, and the MacOS release, are named. There's no Wikipedia article about Catalina, and their own About Us is brief and made up mostly of nonsense like this:

Transforming data into insights, and insights into action through a seamless consumer experience that drives results.

Catalina is one of those companies that you never think about, but that is constantly thinking about you. Today we would call it ad-tech.

Catalina is tough to research. Obviously they did not intentionally choose a name that would become a MacOS release; they were using the Catalina name many years earlier. But it does seem like they have participated in a bit of obfuscation. Today, they continue to advertise a charming phone number: 1-800-8-COUPON. This "translates," of course, to 1-800-826-8766. During the 1990s they ran numerous classified ads using this phone number, but the numeric version instead of the easier to remember "vanity" representation. The ads were for advertising associate positions, but curiously did not mention the name of the company at all.
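The "translation" is just the standard telephone keypad letter mapping, which is easy enough to check:

```python
# Standard E.161 telephone keypad letter groups.
KEYPAD = {"2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
          "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ"}
LETTER = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def to_digits(number: str) -> str:
    # Map letters to their key's digit, keep digits, drop punctuation.
    return "".join(LETTER.get(c, c) for c in number.upper() if c.isalnum())

print(to_digits("1-800-8-COUPON"))  # -> 18008268766
```

C-O-U-P-O-N lands on 2-6-8-7-6-6, giving 826-8766 after the leading 8.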

Actually, some of these ads give a slightly different phone number, 1-800-826-8768. It is quite conceivable that both phone numbers were issued to the company, given the state of the toll-free number industry in the '90s. But the fact that OCR frequently confuses these two numbers leads one to suspect that some of the 8768 ads may have been a copy mistake.

Even better, a few of the ads for the 8768 number, and one ad with the 8766 number, do give the name of a company, but an unfamiliar one: Aquarius Enterprises.

Aquarius Enterprises was a "register tape advertising" or "receipt back advertising" venture. In other words, they sold advertising on the backs of receipts. Curiously, while Catalina mentions their 40-year history, Aquarius Enterprises calls themselves "the most successful register tape advertising" for "over 25 years"... in 1993. Are they the same company? Well, they used the same phone number. Catalina is headquartered in St. Petersburg, Florida today, but seems to have moved, as early articles describe them as Anaheim-based... rather closer to the El Segundo address often used by Aquarius Enterprises.

Perhaps it is a coincidence of similar phone numbers and similar industries, but I strongly suspect that Catalina was a spin-out of Aquarius Enterprises. I tried finding shared employees, but there is remarkably little information about Aquarius Enterprises outside of their classified ads for sales associates. But then, once again, it's not an easy name to search for.

Whatever its origins, Catalina launched in 1985 with "Coupon $olutions." Besides the cringeworthy name, this venture was remarkably similar to what consumers will know them for today: Coupon $olutions consisted of software that recorded a consumer's purchases at the POS, and then printed on-demand targeted coupons.

Early articles about Catalina describe the system as relatively simple. Coupons would be printed for "complimentary items." For example, the purchase of baby food would result in a coupon for diapers. The coupons themselves were also simple: printed in monochrome on tape with a distinctive printed edge.

Coupon $olutions debuted at two Boys markets in Los Angeles. It grew fast. By 1990, Catalina's coupon printers were installed in 3,300 grocery stores nationwide. Newspaper coverage started to mention privacy concerns in the 1990s, waving them away with Catalina's assurances that there was no privacy concern because they tracked only purchases and not the shopper's identity. Of course, in the late '80s Catalina had trialed a shopper loyalty card program that would rather change that situation, but it seems to have been unsuccessful.

As time passed, Catalina expanded further into retail technology. They opened their own clearinghouse service for coupons, and marketed their on-demand coupon system to stores as an analytics product, since it provided real-time reporting on purchases (in this era even large retailers would often not have granular, fast reporting from their POS system).

The 1990s treated Catalina well, but they seem to have flown a little too close to technology, and the dot com bust hit them as well. In the early '00s, they weathered layoffs, an accounting probe, and a stock dive. Still, 2005 brought a big step forward: color.

Yes, we're finally back to the point. Catalina Marketing partnered with Epson to introduce a special variant of the TM-C610 color receipt printer, called the TM-C600. Called the CMC-6 by Catalina, the printer uses a full-width inkjet head to produce 360 DPI full color on 57.5mm paper.

Lately, though, you may have noticed these printers yielding unsatisfactory results. When I've gotten Checkout Coupons at all, they've been barely legible or, increasingly, completely blank. Curious.

Catalina went bankrupt in 2018, and underwent a reorganization. The company emerged, but apparently not much healthier, as it went bankrupt once again in 2023. Catalina offers a fully managed service, meaning that they ship stores new ink cartridges when remote monitoring of the printers indicates that it will be needed. I have a suspicion that Catalina's second bankruptcy has introduced some disruptions. And yet, in an article they claim:

Catalina is assuring clients and shoppers that it’s still business as usual, and ongoing promotions won’t be affected. β€œThere will be no interruption in Catalina’s ability to serve its customers or any impact on how it works with them,” Catalina says.

I'm not sure that this is working out, even a year into the bankruptcy process. Safeway/Albertsons has apparently decided to remove the Catalina printers entirely. Smith's (Kroger) doesn't seem to maintain them at all. Walgreens is apparently more committed to the cause, as they are with the cooler screens, but even there checkout coupons have become inconsistent.

Besides, I don't think even Catalina views the printers as very important any more. They're relegated to a small corner of Catalina's website, with the vast majority of their marketing material dedicated to analytics, targeting, and digital marketing. Catalina seems to be a major player in the in-app digital coupons now emphasized by a lot of grocers, although I've personally found the system to be laughably unusable. But it's not surprising that you get a laughably unusable app from an industry that churns out this kind of copy:

84.51Β° currently delivers personalized promotional offers to Kroger’s digitally engaged shoppers via its website, mobile app, and more broadly via its Loyal Customer Mailer. Catalina Reach Extender is a complementary solution to the way current offers are delivered and will expand the impact of promotional offers by aligning those offers to the way customers shop – in-store, online or both.

As far as I can tell, this press release is just describing making digital coupons (managed by a company that is, improbably, called 84.51Β°) also print out on the Catalina printers. The ones that barely work any more. Well, that was January of '23, they didn't know about the second bankruptcy yet.

Catalina may date to 1985, but it's sort of a case study in the advertising industry. It's a huge, publicly traded company, with a market cap that's reached at least $1.7 billion, and two bankruptcies. They write such obtuse copy that it's hard to understand what exactly they do these days, which is probably mainly a way to distract from the fact that their main business is now collecting and selling consumer data. And I would say that no one likes them... subreddits of retail employees are full of comments expressing relief when the Catalina printers would break, since unplugging them would result in multiple phone calls a day from Catalina investigating the "problem."

BUT: there are couponers.

That's right, there's a whole internet subculture that is obsessed with these checkout coupons. They catalog the coupons on offer, and document the process for requesting a replacement coupon from Catalina when the one you expected failed to print. So very strange to me, a reminder of the many people out there and their many strange hobbies.

Why would you ever waste your time on these coupons? I have real things to do, like collecting thermal printers.

2024-05-06 matrix

For those of you who are members of the Matrix project, I wanted to let you know that I am running for the Governing Board, and a bit about why. For those of you who are not, I hope you will forgive the intrusion. Maybe you'll find my opinions on the topic interesting anyway.

I am coming off of a period of intense involvement in an ill-fated government commission, and I wanted to find another way to meaningfully contribute to the governance of something I care about. Auspiciously, the newly constituted Matrix foundation is forming a governing board. I am up for one of the individual member seats.

Why do I care?

Instant messaging is a fascinating case study in the history of technology. It is nearly as old as networked computing, and you could make a decent argument that it is older, an argument that runs only into dithering over definitions. We've always wanted to communicate, and text has always been an obvious option. It is probably because of the obviousness of instant messaging that it has repeatedly been coopted by commercial interests.

You don't have to be very old to have lived through several iterations of this process. I'm not quite the right person to remember ICQ fondly; for me it was AIM. But what I remember most fondly is more obscure: XFire. It had an in-game overlay and killcounter integration, both critical features for my computer habits that consisted heavily of Jedi Knight: Jedi Academy. That isn't actually important, I'm just reminiscing, but I think most people have a story like this.

If you have read much of my back catalog you know that I am not always optimistic about federated systems. They face a lot of challenges, which range from the technical complexity of changing federated protocol specifications to a whole category of opposing forces that can be vaguely chalked up as capitalism. And yet, textual communications bring us what is probably federation's greatest and most enduring success: email. Email is also a cautionary tale in a lot of ways, but it gives us a cause for optimism.

The history of federated messaging is rather more varied. XMPP was, in its heyday, nearly on track to mass adoption. High quality clients emerged, XMPP was adopted by grassroots projects and then, as at least an implementation detail, by Facebook and Google. We all know what happened. I think most people today are too quick to blame XMPP's downfall on inconsistent implementation of protocol extensions (XEPs) rather than complete cooption by two of the era's largest internet companies, but to be clear, inconsistent implementation was indeed a problem.

Matrix and Me

I have used Matrix as my main, day-to-day messaging solution since 2016. I have also operated a homeserver with open registration for that entire span. In some ways this has been a rather passive venture, but as the user count of that homeserver has grown I've struggled more with performance and moderation issues. A few months ago things tipped over and I had to spend a weekend doing some serious work on both fronts. This led me to pay a lot more attention to the Matrix project and the state of the art.

I wish that I had been more involved in the Matrix project to date, but I try very hard to avoid software engineering, and Matrix governance and community efforts, the area that matters to me most, have often been hard for me to follow. This situation has improved significantly recently, and I think that the Matrix foundation deserves enormous credit for the work they have done to pick up the level of community engagement.

Of course I come to the topic with some opinions. Who would expect anything less?

Polish over Features

The Matrix project, especially as personified by Element, has added a huge number of new features. It's hard to call this a bad thing, and some of them have been notable successes. For example, E2E is a challenging feature to deliver, but has indeed become table stakes for a messaging product that attracts a privacy-minded userbase.

Still, there is one criticism of Matrix that has remained constant over its entire lifespan, and it's one that needs to be attended to: the level of consistency, usability, and polish.

Polish is tricky in a federated system. It's more the domain of clients than the protocol, but the protocol directly affects the situation by determining how easy it is to develop and maintain high-quality clients. For many years it was clear that the change rate in the Matrix protocol made it difficult to develop a good client. Element often felt like the only complete client, and even it was pretty rocky. Fortunately there has been a lot of progress; Element has greatly improved and the stable of third-party clients like my own choice, Nheko, has a lot to offer.

Still, there's a lot of progress to be made. Matrix competes directly with commercial products that come from vendors with a heavy focus on usability and user experience. It only takes one instance of the dreaded "Unable to decrypt" for casual users to bounce. Element continues to be a de facto "primary" implementation that can make the road more difficult for others.

I think that protocol changes should be evaluated conservatively, with an eye towards providing a level of stability that enables multiple top-tier clients. The Matrix Foundation should actively seek ways to support the enhancement and maintenance of clients beyond Element, supporting the healthy ecosystem of independent implementations that are required for an open protocol to be sustainable.

Moderation

Moderation is one of the great struggles of the internet, if not the greatest. Some advocates of federated systems opine that they make moderation easier or more tractable. I disagree; while federation enables more flexibility in how users experience moderation it makes many of the underlying problems more difficult. Moderation decisions across the system are made in an ad-hoc, distributed way. The rich network of homeservers presents many opportunities for bad actors, including every poorly maintained (or unmaintained) node.

Matrix imposes a moderation challenge at two levels: within communities and within homeservers. Relatively good tools exist at the community level, but still, too many basic functions require introducing the Mjolnir moderation bot. At the level of the homeserver, moderation tools are frustratingly limited. The administration API is minimal in severely limiting ways and there do not appear to be any complete implementations of a client for it.
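For what it's worth, the Synapse admin API is plain HTTP, authenticated with an admin user's ordinary access token. A sketch of composing a request against the v2 user-list endpoint (the homeserver URL and token here are placeholders):

```python
from urllib.request import Request

def list_users_request(homeserver: str, token: str, limit: int = 10) -> Request:
    # Synapse serves its admin API under /_synapse/admin/; the v2 user list
    # supports paging via the "from" and "limit" query parameters. Auth is
    # an admin user's access token sent as a Bearer header.
    url = f"{homeserver}/_synapse/admin/v2/users?from=0&limit={limit}"
    return Request(url, headers={"Authorization": f"Bearer {token}"})

# Placeholder homeserver and token; urlopen(req) would perform the call.
req = list_users_request("https://matrix.example.org", "syt_admin_token")
print(req.full_url)
```

That's about the level of tooling on offer: raw endpoints, and the expectation that operators will script against them themselves.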

I applaud the various efforts that have popped up, things like the community moderation initiative's blocklist effort and the "awesome technologies" Synapse administration tool. But we need more, and we need more in two ways.

First, we need technical progress. The in-protocol moderation capabilities of Matrix should be improved over time with a north-star vision of eliminating Mjolnir, an approach to community moderation that was carried over from IRC but probably should have stayed there. The Synapse admin API should be improved and better tooling around it developed.

Second, we need progress in governance. I would like to see an open initiative to develop best practices for moderation of communities and homeservers. This can include the development of shared blocklists through a documented, auditable process (although not necessarily an open one, for reasons of user privacy). I would like to see a sincere effort to advance the state of the art in distributed moderation, bringing together diverse users to learn their concerns and developing tools to make consistent and active moderation the default.

The number of independently operated homeservers in Matrix can be a strength, but in this area it can be a weakness. ActivityPub, with its heavier orientation towards public discussion, has served as a laboratory for abuse and moderation issues. Matrix could learn a lot from the efforts going on in the Mastodon community, for example, towards practical means of moderating across instances.

For homeserver operators, moderation is an immense practical concern due to risks from load and CSAM. The volume of CSAM traffic on Matrix, while not a problem beyond solving, seems badly under-discussed and particularly calls for some sort of distributed moderation program to relieve public homeserver operators of ongoing whac-a-mole. Sometimes a graph is only as strong as its weakest node---this is the kind of hard problem we have to take on to build a sustainable future for federated systems, and we should take it on enthusiastically.

I would like to see the Matrix project boldly take on moderation at multiple levels. First, improving the moderation tools and capabilities of the Matrix protocol should always be part of the discussion. Second, I would like to see the Matrix Foundation support the development of improved moderation and abuse tools, preferably including them as part of Synapse or providing a very easy setup process so that good abuse management can be the norm rather than the exception. Third, I would like the Matrix foundation to facilitate community discussion around best practices, tools, and techniques for moderation.

Not everyone will agree on the way to perform moderation, or even the goals of moderation. That's the nature of the internet, and more broadly of communications. We can't let it stop us from trying. This can be one of the hardest areas to build consensus, but that will always be the case, and so we need to include the inherent social complexity of moderation as part of the technical requirements. Once again: we need to be bold and take on the hard problems, and this might be the hardest.

Chat, First and Mostly

One of the concerning trends I have seen in a lot of adjacent nonprofit tech projects lately is dilution of mission. We could also call this "distractions." Unfortunately, Matrix has not been immune. The most obvious example is Third Room, the Matrix metaverse project. I want to temper my criticism by saying that the level of effort devoted to Third Room has evidently been low, but I think that the optical problem created by Third Room (the appearance that Matrix has been captured, one might even say Zuck'd, into a distracting focus on the latest trend) is certainly real. For a community venture, appearances are important, and this means applying discipline in how side projects are presented, especially in this era of so many projects presaging their downfall with some buzzword-reaction initiative.

I might go just a bit further. I don't think that the VoIP features of Matrix (voice and video communications) are a bad idea per se, but I think that that's a complex problem space and the current landscape of instant messaging products suggests that it's not a particularly important one. In other words, people seem happy to do their voice/video chat in a different product than their text chat. You could say that this presents an opportunity for Matrix: to double down on providing a best-in-class textual messaging experience, without having to expend significant resources on real-time media.

I wouldn't want to see existing features removed, but I think that features other than core instant messaging should be deprioritized, at least in the short term.

Onboarding

Sometimes caring a lot about onboarding can be kind of gross. It has the scent of focusing on conversions. But it's a really important issue for IM, when the onboarding experience of a lot of the other options is "you already have it." The Matrix Foundation is well-positioned to demonstrate leadership in the onboarding experience, across the protocol, clients, and public communications. Let's make Matrix easy to get into.

A Consistent Direction

I don't want to dwell too long on how many times a certain prominent Matrix client has been renamed, launched new App Store listings, etc. It's old news and fortunately things seem to have settled down. Still, I think a lot of reputational damage happened that has not fully been forgotten. This history serves as a reminder that significant user-facing changes need to be made carefully. New social applications in general, and especially federated ones, have a bad reputation for churn. The most successful are often the most boring. Let's think carefully about things, and look before we leap.

What Do You Think?

I have a lot of opinions and of course all of them are correct, but usually only in my eccentric construction of reality. Your experience may vary. Please feel free to reach out with your thoughts on the Matrix project, an offer that stands whether I'm elected or not, because I love to talk about it.

And that concludes my stump speech. I'll be back again soon with a normal post about some useless trivia. I think it might be about a specific kind of printer that you've probably seen but not thought much about, other than slight irritation. I'm also spending some time right now playing video games^w^w^w working on a more ambitious writing project that is out of my normal lane but you might still enjoy. It's about dogs. It's also very sad and I'm not entirely sure what to think about it. You'll see what I mean if I ever finish.

2024-04-26 microsoft at work

I haven't written anything for a bit. I'm not apologizing, because y'all don't pay me enough to apologize, but I do feel a little bad. Part of it is just that I've been busy, with work and travel and events. Part of it is that I've embarked on a couple of writing projects only to have them just Not Work Out. It happens sometimes: I'll notice something interesting, spend an evening or two digging into it, but find that I just can't make a story out of it. There isn't enough information; it's not really that interesting; the original source turned out to just be wrong. Well, this one is a bit of all three. Join me, if you will, on a journey to nowhere in particular.

One of the things I am interested in is embedded real-time operating systems. Another thing I am interested in is Unified Communications. Yet another is failed Microsoft research projects. So if you've ever heard of Microsoft At Work, you probably won't be surprised that it has repeatedly caught my eye. Most likely, you haven't heard of it. Few have; even the normal sources of information on these kinds of things appear to be inaccurate or at least confused about the details.

Microsoft went to work in the summer of 1993, or at least that's when they announced Microsoft At Work. This kind of terrible product naming was rampant in the mid-'90s, perhaps more from Microsoft than usual. MAW, as I and a few others call it, was marketed with a healthy dose of software sales obfuscation. What was it, exactly? An Architecture, Microsoft said. It would enable all kinds of new applications. With MAW, one would be able to seamlessly access the wealth of information on their personal computers. Some reporters called it an Environment. Try this for a lede: "Microsoft Corp. unveils integrated computer program."

The announcement included a demo that got a lot more to the point: a fax machine that ran Windows.

Even this was strangely obfuscated: enough newspaper reports described it as a "fax like product" that I think this verbiage was sincerely used in the announcement. Today, we would refer to MAW as an effort towards "smart" office machines, but in 1993 we hadn't quite learned that vocabulary yet. Microsoft must have been worried that it would be dismissed as "just a fax machine." It couldn't be that, it had to be something more. It had to be a "fax like product," built with "Windows architecture."

I am being a bit dismissive for effect. MAW was more ambitious than just installing Windows on a grape. The effort included a unified communications protocol for the control of office machines, including printers, for which a whole Microsoft stack was envisioned. This built on top of the Windows Printing System, a difficult-to-search-for project that apparently predated MAW by a short time, enough so that Windows Printing System products were actually on the market when MAW was announced---MAW products were, we will learn, very much not.

Windows Printing System modules were sold for at least the HP LaserJet II and III. If you did not experience them, these printers placed their actual rasterization logic onto a modular card that could be swapped out, usually to switch between PCL or PostScript "personalities." The PostScript module was offered mostly for MacOS compatibility, Apple having selected PostScript as a common printer control language. The Windows Printing System module took this operating system specialization a step further, using Windows' simple GDI graphics protocol to draw output to the printer.

I am actually a little unclear on whether or not the Windows Printing System led directly to the cheap "WinPrinters" that are also associated with the idea of GDI-based printing. "WinPrinters," so-called by analogy to WinModems, are entirely dependent on the host computer to perform rasterization. While extremely irritating from the perspective of software support, this was an important cost-savings measure in consumer printers. Executing a capable printer control language was rather demanding; the Apple LaserWriter famously had a faster processor than the Macintosh computers it was a peripheral to. Printers with independent rasterization, particularly the more complex PostScript, came at a substantial price premium to those that required the host to perform rasterization.

While some details of reporting on the Windows Printing System make me worry that it was in fact rasterizing on device (like the curiously specific limit of "up to 79" TrueType fonts), I'm fairly sure it was indeed a precursor to the later inexpensive designs. Rather than a cost-savings measure, though, Microsoft seems to have marketed it as a premium feature. Because of the Windows Printing System's higher level of integration with the operating system, it brought numerous new features, many of which we take for granted today. TrueType font support at all, for example, a cutting-edge feature in '93. Duplex control from the print dialog rather than the printer's own display, and for that matter, the ability to see printer status messages (like "PC LOAD LETTER") on the computer you just printed from.

And at the end of the day, offloading rasterization from the printer had an advantage: the Windows Printing System was faster than PCL or PostScript.

Even if it did become the dominant printing method years later, the Windows Printing System of the MAW era doesn't seem to have fared very well. Because it took the position of an add-on cartridge (like a font cartridge), it would have been an added-cost option for printer buyers---an added cost of $132.99, according to a period advertisement. The dearth of available documentation or even post-launch advertising for the Windows Printing System cartridge suggests disappointing sales numbers.

The fortunes of the Windows Printing System would turn a year later, though, as Lexmark introduced their WinWriter series: "With the Microsoft Windows Printing System Built In!" Speaking of the Lexmark WinWriter series, this whole printing thing is kind of a tangent. What about MAW? The Windows Printing System, it seems, was not really a part of MAW. It was just generally related and available when MAW was announced, so it was rolled into the press conference. It is a bit ironic that the Lexmark WinWriter, truly the Printer for Windows, was not a MAW device despite shipping well after MAW was announced.

So, back to the course: MAW was not just Windows on a fax machine, not just the Windows Printing System, but an integrated system of Windows on a fax machine, the Windows Printing System, a generalized network protocol, and apparently a page description language. This was all, as you can see, rather document-focused. MAW would allow Windows users to easily, seamlessly interact with these common office machines, sending and receiving documents like it was 1999.

And later, it would do more: Microsoft was clear from the beginning that MAW had a higher vision, one that is remarkably similar to the later concept of Unified Communications. Microsoft envisioned Windows on a phone, bringing desk phones into the same architecture, or environment, or whatever. Remember the phone part, it comes back.

In practice, MAW would do nothing. It was a complete and total failure. It took two years for the first MAW office machine to reach the market, a Ricoh fax machine. Fortunately, a television commercial has been preserved, giving us a small window into the Windows on a Fax Machine experience. "Microsoft's At Work Still Loafing on the Job," is how the Washington Post put it in 1995.

They call it "the first real step toward the paperless digital office," a nod towards the promise of Microsoft's document-messaging vision, before noting that virtually no products had shipped, everything was behind schedule, and Microsoft had reorganized the At Work team out of existence. Microsoft At Work was seldom spoken of again. Few products ever launched, those that did sold poorly (the Windows licensing fee imposed on them being one of several factors contributing to noncompetitive price tags), and by the time Windows gained proper USB support few would remember it had ever happened.

In other words, a classic Microsoft story.

But I'm not here to chronicle Microsoft's foibles, there are other writers for that. I'm here to chronicle their weird operating system projects. And that's what got me reading into MAW: the promise of not just one, but two weird operating system projects.

Regard that promise with suspicion.

Wikipedia tells us that MAW included "Microsoft At Work Operating System, a small RTOS to be embedded in devices." That's very interesting. I love a small RTOS to be embedded in devices! Tell us more.

Researching this MAW embedded operating system turns out to be a challenge. You see, it is not the better known of the operating systems produced by the MAW initiative. That would be WinPad, curiously not mentioned at all in the MAW Wikipedia article, but instead in the Windows CE article, as a precursor to CE. Windows CE gets a lot more affection than MAW, and so we know quite a bit more about WinPad. It was an early attempt at an operating system for a touchscreen mobile device, one that, in classic Microsoft fashion, competed internally with another project to build an operating system for a touchscreen mobile device (called Pegasus) and died out along with the rest of MAW.

It was based on 16-bit Windows 3.1, using a stripped-down UI layer that resembled Windows 95. Probably not coincidentally, there seems to have been an effort to port WinPad onto Windows 95, and fortunately developer releases of WinPad have been preserved. With some effort, you can get them running on top of appropriate Windows versions in an emulator.

WinPad was envisioned as a core part of MAW, the key enabler of that paperless office. With MAW and WinPad, you could synchronize documents, emails, and faxes, everything you could ever want in 1995, onto your handheld device and then carry it with you. WinPad also didn't work. Evidently the performance was lousy and it required entirely unrealistic battery capacities. Not a surprising outcome when one ports a mid-'90s desktop operating system to a tablet. How charming! But not exactly my target. What about this RTOS?

If you dig into these things for too long, you start to question your life, or at least reality. References to this MAW embedded operating system are so sparse that I quickly started to wonder if it existed at all, or if it was simply confused with WinPad. This MAW OS would run directly on the office machines. Is it possible that it was, in fact, WinPad that ran on a fax machine? Or at least that whatever ran on the fax machine was a direct precursor to WinPad, an earlier new UI layer on top of 16-bit Windows?

The nagging thing that kept me on the hunt for this MAW embedded OS was, oddly enough, the Sega Saturn. A series of newspaper archives, many gathered by Mega Drive Shock, tell an interesting story. Microsoft, it seemed, had been contracted to provide the operating system for the Sega Saturn. Well, this seems to have been a misconception, although clearly a period one. As the news cycle carried on, the scope of this Microsoft-Sega partnership (at first denied by Microsoft!) was reduced to Microsoft providing some sort of firmware related to the Saturn's CD drive.

There is, though, a tantalizing detail. The Electronics Times reported that "Microsoft looks set to port its Microsoft At Work operating system to Hitachi's new SH series of microprocessors." The article explicitly linked the porting to the Saturn effort, but also mentioned that the MAW operating system was being ported to Motorola 68000.

Do you know what never ran on the Hitachi SH or Super-H architecture? 16-bit Windows.

Do you know what did? Windows CE.

Is it possible? Do you think? Is Windows CE a derivative of Windows for Fax Machines?

I'm pretty sure the answer is no. A reader pointed me at John Murray's 1998 book Inside Windows CE, which provides a brief and presumably authoritative history of the platform. It specifically discusses Windows CE as a follow-on project to the failed WinPad, which it describes as 16-bit Windows 3.1, and goes on to say it "was designed for office equipment such as copiers and fax machines."

It is, of course, possible that the book is incorrect. But given the dearth of references to this MAW embedded RTOS, I think this is the more likely scenario:

MAW devices like the Ricoh IFS77 ran 16-bit Windows 3.1 with a new GUI intended to appear more modern while reducing resource requirements. Some reporters at the time noted that Microsoft was cagey about the supported architectures; I suspect they were waiting on ports to be completed. The fax machine was probably x86, though, as there's little evidence MAW actually ran on anything else.

This operating system was extended for the WinPad project, and efforts were made to port it to architectures more common in the embedded devices of the time like SH and 68000. Microsoft may have reached some level of completion on that project and sold it to Sega for the Saturn's complicated storage controller, but it's also possible that the connection between the Saturn and MAW is mistaken and the software Microsoft delivered to Sega was a simple, from-scratch effort. The strange arc of media reporting on the Microsoft-Sega relationship offers the tantalizing possibility that Microsoft was intended to deliver a complete OS for the Saturn but had to pare it back as a result of problems with porting WinPad, but it seems more likely it just results from an overeager electronics industry press and the Sega NDA that a Microsoft spokesperson admitted to being subject to.

MAW failed to win the market, and WinPad failed to win a BillG review. The project was canceled. From the ashes of WinPad and the similarly failed Pegasus, some of the same people started work on a brand new project, Pulsar, which would become Windows CE.

MAW didn't survive the '90s.

Well, some things are like that. I still got 240 lines out of it.

Update: Alert reader abrasive (James Wah) writes in that they had previously dumped the CD-ROM firmware from the Saturn and performed some reverse engineering. Several things suggest that it was not developed by Microsoft, including a Hitachi copyright notice. It seems likely, then, that the supposed Microsoft-Sega partnership never produced anything or was never real in the first place.

2024-04-05 the life of one earth station

Sometimes, when I am feeling down, I read about failed satellite TV (STV) services. Don't we all? As a result, I've periodically come across a company called AlphaStar Television Network. PrimeStar may have had a rough life, but AlphaStar barely had one at all: it launched in 1996 and went bankrupt in 1997. All told, AlphaStar's STV service only operated for 13 months and 6 days.

AlphaStar is sort of an interesting story on its own. Much like the merchant marine, satellites are closely tied to the identity of their home state. Many satellites are government owned and operated, and several prominent satellite communications networks were chartered by governments or intergovernmental organizations. Consider the example of Inmarsat, a pioneer of private satellite communications born of a UN agency, or Telesat, originally a Crown corporation of Canada. As space technology became more proven, private investors started to fund their own satellite projects, but they continued to operate with the imprimatur of their licensing state.

AlphaStar was sort of an oddity in that sense: a subsidiary of a Canadian company set up to offer an STV service in the United States. Understanding this situation seems to require some background in the Canadian STV industry. 1995 saw the announcement of Expressvu, a satellite television service by telecom company BCE and satellite receiver manufacturer Tee-Comm. Canadian satellite operator Cancom would provide the space segment, and Tee-Comm the ground segment.

Expressvu looked to be headed directly for monopoly: despite attempts by a coalition of Montreal company Power and Hughes/DirecTV to launch a competing service, only Expressvu could meet a regulatory requirement that Canadian broadcast services be served by Canadian satellites. Power's efforts to change the rules involved considerable political controversy as politicians up to the prime minister became involved in the back-and-forth between the two hopeful STV operators.

Foreshadowing AlphaStar, both potential Canadian STV operators struggled. Neither Expressvu nor PowerDirecTV would ever begin operations as originally planned. While regulatory uncertainty contributed to schedule delays, and the complexity of still relatively new satellite TV technology drove up costs, one of the biggest problems was a lack of satellite capacity. Most Canadian communications satellites were launched and operated by Telesat, and in the mid '90s Telesat's fleet fit onto a small list. Expressvu had been slated to use a set of transponders on Telesat's Anik E1, but in successive events Anik E1 lost a solar panel and then several of its transponders.

The lack of Canadian satellite capacity created a regulatory conundrum for Canadian STV: Industry Canada was requiring that operators show they had access to satellite capacity in order to obtain an STV license. No capacity was available on Canadian satellites, though. For STV to become available at all in Canada, some compromise needed to be found.

PowerDirecTV and a new satellite venture by Shaw Communications applied for an exception, allowing them to use US satellites until transponders were available on Canadian satellites. Industry Canada was reluctant to approve the arrangement, considering the uncertainty over what satellites could be used and when.

As Expressvu failed to get off the ground, several of the partners in the project backed out, and Tee-Comm decided to set off on their own. Considering the licensing situation in Canada, they devised a clever plan: they would launch an STV service in the United States. Such a service, delivering US-made content to US customers, could clearly be served by US-owned satellites according to Canadian policy. But it would also secure long-term satellite carriage agreements and fund the construction of infrastructure. When Tee-Comm later returned to apply for an STV license in the Canadian market, they would have fully operational infrastructure and an existing customer base. They could make a far stronger argument that they would be a reliable, affordable service that could transition to Canadian satellites when capacity allowed.

So Tee-Comm started AlphaStar.

AlphaStar carried over several signs of their Canadian origin, including the basic broadcast technology. They would broadcast DVB-S, the norm overseas but new to the United States where DirecTV and the Dish Network used their own protocols. With DVB-S and more powerful Ku-band transponders on AT&T's Telstar 402R satellite, AlphaStar customers needed a 30" dish---smaller than the C-band TVRO dishes associated with earlier STV, but still larger than the 24" and smaller dishes used with DirecTV's DSS.

Of course, satellite feeds have to come from somewhere. AlphaStar purchased an existing earth station in the town of Oxford, Connecticut and adapted it for television use, adding TVRO antennas to receive programming alongside the large steerable dishes used to transmit to the satellite. An on-site network control center ensured the quality and reliability of their television service; corporate headquarters were located nearby in Stamford.

They never signed up many customers. There may have been a high point of around 40,000, but that wasn't enough to cover the cost of operations. Tee-Comm had barely received authorization to launch the Canadian version of the service (AlphaStar Canada) when they went belly-up in both countries. AlphaStar in the US managed over a year, but AlphaStar Canada only made it a few months. In the meantime, the old Expressvu project, minus Tee-Comm, had finally lurched to life. Expressvu went live in 1997, and the AlphaStar story was forgotten.

During the bankruptcy proceedings in the US and Canada, the courts solicited bids to take over AlphaStar's assets. These included, according to a document prepared by AlphaStar, their Oxford earth station which had been built for the Strategic Defense Initiative and hardened to withstand nuclear attack.

See, this is where I really got interested. An SDI satellite earth station in Oxford? What part of SDI was it built for? I started hunting for the location of this earth station. Not far from Oxford I found an obvious candidate, an isolated facility with a half dozen large, steerable antennas. But no, it was built by Inmarsat and is operated today by Comsat (also originally government-chartered).

Finally, digging through FCC rulings, I found an address: 66 Hawley Road. There was nothing to see there, though, just a tilt-up warehouse for a bearing company that showed no signs of satellite communications heritage. It's funny, Google Maps itself intermittently shows images from before or after the bearing company moved in, but I never noticed that. It took Department of Agriculture aerials from the '90s for me to realize the address was correct; the earth station was demolished just a few years ago.

There are few photos of the building. The best I've seen, from a marketing presentation from one of AlphaStar's successors, is only a partial view. The building doesn't look to be nuclear-hardened, though. It has a glass-walled lobby, and no sign of blast deflectors on its ventilation openings. It seemed like it had been renovated, though. Perhaps they tore out its original hardened features?


Historic aerial imagery tells a story. The facility was first built sometime in the 1980s, and in the early '90s featured two large, likely steerable antennas. They were in the open, not enclosed by radomes, an observation that points away from a military application. It is a fairly simple matter to estimate the altitude and azimuth of a satellite antenna from aerial photographs, so antennas used for military and intelligence purposes are almost always kept under inflatable cover.
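The geometry behind that inference is straightforward. A minimal sketch of the look-angle calculation for a geostationary satellite, assuming a spherical Earth; the station coordinates approximate Oxford, CT, and the satellite longitude here is purely illustrative:

```python
import math

EARTH_R = 6378.137  # km, Earth radius (spherical approximation)
GEO_R = 42164.0     # km, geostationary orbit radius

def look_angles(lat_deg, lon_deg, sat_lon_deg):
    """Azimuth/elevation from a ground station to a geostationary
    satellite, via Earth-centered vectors rotated into the station's
    local east/north/up frame."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    slon = math.radians(sat_lon_deg)
    # Station position in Earth-centered coordinates
    sx = EARTH_R * math.cos(lat) * math.cos(lon)
    sy = EARTH_R * math.cos(lat) * math.sin(lon)
    sz = EARTH_R * math.sin(lat)
    # Vector from station to satellite (satellite sits in the
    # equatorial plane, so its z component is zero)
    dx = GEO_R * math.cos(slon) - sx
    dy = GEO_R * math.sin(slon) - sy
    dz = -sz
    # Rotate into east/north/up at the station
    e = -math.sin(lon) * dx + math.cos(lon) * dy
    n = (-math.sin(lat) * math.cos(lon) * dx
         - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    u = (math.cos(lat) * math.cos(lon) * dx
         + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    az = math.degrees(math.atan2(e, n)) % 360
    el = math.degrees(math.atan2(u, math.hypot(e, n)))
    return az, el

# Oxford, CT (~41.4 N, 73.1 W) looking at a hypothetical bird at 89 W:
az, el = look_angles(41.43, -73.12, -89.0)
print(f"azimuth {az:.0f} deg, elevation {el:.0f} deg")  # southwest, mid-sky
```

Run in reverse, an analyst who reads an antenna's azimuth off an aerial photo and its elevation off a shadow can solve for the satellite longitude it serves, which is exactly why sensitive antennas go under radomes.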

In the mid-'90s, around when AlphaStar moved in, small antennas proliferated on the site, peaking at probably a dozen. By the turn of the millennium the antennas receded, dwindling in number as the largest were demolished.

AlphaStar's remains were purchased out of bankruptcy by Egyptian telecom entrepreneur Mahmoud Wahba, who operated them as Champion Telecom Platform. Champion was a general-purpose satellite communications company, but took advantage of the network control center and television equipment at the Oxford facility to focus on television distribution. Making the record a bit confusing, Champion advertised many of its services under the AlphaStar name. They seem to have been reasonably successful, but never attracted much press.

Still, there were interesting aspects to the business. They offered a service where Champion used their small network of earth stations to receive international channels, streaming them over IP to cable television operators who could beef up their lineup without the cost of added headend receivers. At one point, it seems, they even provided infrastructure for a nascent direct-to-consumer IPTV service. They offered the Oxford network control center as an amenity to their earth station customers, and had relationships with a few national television networks, likely as a backup site.

Champion had a better run than AlphaStar but still faded away. Their "remote cable headend" service was innovative in the worst way; in the 2000s the model was widely adopted by the increasingly monopolized cable industry. "Virtual headends" became the norm, with each cable network operating central receivers and network control in-house. IPTV was quite simply a commercial failure, but perhaps we can give them the credit of saying that they were ahead of their time. Earth stations became more available and affordable, and the fees Champion could extract from television networks must have gotten thinner.

Champion Telecom shut down sometime in the '00s. Through their holding company, JJT&M Inc., Champion and Wahba held onto the building and leased it to a tenant, SteelVault Data Centers. For several years, SteelVault operated the building as a colocation center. In their marketing materials, they said "The data center building was originally built for [the] CIA in the early 1980's" [1].

Oh? Now the CIA is involved.

At one point, I felt the trail had gone cold on the history of the Oxford earth station. It clearly predated AlphaStar, and it seemed likely that it was built sometime in the early '80s as several sources claimed. But by whom, and for what? Newspaper archives turned up very little. Ironically, any search with the word "satellite" in the 1980s turns up an unlimited number of articles on the Strategic Defense Initiative, but none have any relation to Oxford.

I put down the case for a month or more. I must have looked into property records, but to be honest, I think I was thrown off the case by Connecticut's curious convention of putting tax assessors and clerks in city government rather than the county. Oxford is in New Haven County, but the New Haven assessor works for the city by that name. Of course they have nothing on parcels in Oxford.

It pays to return with fresh eyes, and today I found what should have been obvious: the Oxford assessor has record of the parcel. The Oxford clerk, in a feat rare in my part of the country, has digitized their books. I didn't even have to brave a phone call, just a frustrating web application. It was a simple trail to follow from the current deed to the survey that first described the parcel---in 1982.


In the era of SteelVault, 66 Hawley takes a strange turn. Like most "secure data centers," a sector of the market that often claims to have renovated a government bunker, SteelVault did not flourish. In 2013, SteelVault was bankrupt and left the building. Of course, that doesn't stop numerous data center directories from repeating their CIA claims today.

JJT&M, too, was bankrupt, and the building at least seemed to be tied up in the matter. There was a lien, then a foreclosure, then a tax auction; unpaid property taxes of over one million dollars.

Then, there was a twist: the Oxford tax collector went to prison. She had been pocketing property tax payments. JJT&M sued the Town of Oxford, alleging the unpaid taxes had, in fact, been paid to begin with. They also sued the town marshal, who conducted the auction, alleging that he failed to tell the bidders that JJT&M might still hold title.

None of these attempts were successful: there were various technical problems with JJT&M's claims, but the larger finding was that JJT&M had been given ample notice of the unpaid taxes, the foreclosure, and the tax auction, but had failed to object until after the whole thing was done. Wahba had a number of business ventures in the television industry and elsewhere, and he must have been an absentee owner. A good reminder for us all to check the mail every once in a while.

The auction purchaser transferred the building to a holding LLC, probably as an investment, and then a few years later sold it to the Roller Bearing Company of America. They tore it down and built a new warehouse, and that's the end of the story.

But what about the beginning?

Several of the deeds on the property, which is variously listed with an address on Hawley or on the adjacent Willenbrock Road, include the same metes-and-bounds description. It ends: "Being the premises shown and described on a certain map entitled 'Survey & Topographical Map Prepared for G.T.E. Satellite Corp, Oxford.'"

In 1981, the Southern Pacific Railroad, owner of Sprint, launched a satellite communications business under the name Southern Pacific Communications Corporation (SPCC). In 1983, GTE acquired both Sprint and SPCC, rebranding SPCC as GTE Satellite and then shortly after as GTE Spacenet. In 1994, GTE sold Spacenet to GE, where it became GE Capital Spacenet Services, who sold the Oxford earth station to AlphaStar in 1995.

Before AlphaStar, it was a commercial earth station for satellite data network Spacenet, who had built the property to begin with. So what about the SDI? The CIA? AlphaStar had, I think, stretched the truth.

Spacenet was a major satellite data operator in the '90s. They had many commercial customers, but also government customers, and so it is not inconceivable that they held defense contracts. GTE Government Systems had definitely been involved in the SDI, contributing to computer systems and radar technology. But GTE was a huge company with many divisions, and the jump from its Government Systems arm to Spacenet being built for the SDI is not one that I can find any backing for. Besides, it doesn't make much sense: SDI was, itself, a satellite program. Why would they use a commercial teleport built for civilian communications satellites?

And what of the CIA? As soon as those three letters are invoked, any claim takes on the odor of urban legend. The CIA has been accused of a great many things, and certainly has done some of them, but I can find nothing to substantiate any connection to Oxford.

It seems more likely that the Oxford earth station fits into the history of satellite communications in the obvious way. GTE Satellite was rapidly growing. From its beginning as SPCC, it had ordered the construction of two satellites that would launch in 1984. In 1982, they were making preparations, purchasing property in Oxford CT and completing a survey and zoning approvals. Over the following year the Oxford Earth Station was constructed, and when Spacenet 1 reached orbit in May 1984 it was ready for service. Oxford was just one of a half dozen earth stations built from 1982-1984 by GTE.

But there's a little more: the Oxford earth station has always had an affinity for television. Paul Allen's Skypix, a spectacularly failed satellite pay-per-view movie service, used GTE's Oxford earth station to uplink its 80 channels of video feeds in the early '90s. Perhaps this was the origin of the site's television equipment, or perhaps there had been a TV venture with GTE even earlier.

What we know for sure is that the Oxford earth station didn't make the cut when GE acquired Spacenet. They sold the earth station shortly after the acquisition. A few years later, in the words of a bankrupt company looking to sell its assets, GTE became the SDI. In the eyes of a failing data center, it became the CIA. And now those claims are rattling around in Wikipedia.

[1] The original just says "built for CIA," which has charming echoes of Arrested Development's "going to Army."

2024-03-27 telephone cables

two phone cables, terminated opposite ways

So let's say you're working on a household project and need around a dozen telephone cables---the ordinary kind that you would use between your telephone and the wall. It is, of course, more cost effective to buy bulk cable, or simply a long cable, and cut it to length and attach jacks yourself. This is even mercifully easy for telephone cable, as the wires come out of the flat cable jacket in the same order they go into the modular connector. No fiddly straightening and rearranging, you can just cut off the jacket and shove it into the jack.

But, wait, what's up with that whole thing anyway? And are telephone cables really as simple as stripping the jacket and shoving them in?

There's a lot of weirdness about modular cables. I use modular cable to refer to a cable assembly that is terminated in modular connectors, a standard type of multipin connector developed by the Bell System in the 1960s and now widely used for telephones, Ethernet, and occasionally other applications. These types of connectors are often referred to as RJ connectors, although that's a bit problematic for the pedantic. The modular connector itself is more properly designated in terms of its positions and contacts. Telephone connections predominantly use a 6P4C modular connector: the connector has six positions, but only four are populated with actual contacts. Ethernet uses an 8P8C modular connector, a bit larger with eight positions, all of which are used. The handset of a telephone typically connects to the base with a 4P4C connector: smaller than the 6P4C, but still with four contacts.
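The positions/contacts naming is mechanical enough to capture in a few lines of code. A toy parser, purely illustrative (the function name is mine):

```python
import re

def parse_modular(designation: str):
    """Split a modular connector designation like '6P4C' into
    (positions, contacts): positions is the width of the plug body,
    contacts is how many of those positions actually carry metal."""
    m = re.fullmatch(r"(\d+)P(\d+)C", designation.upper())
    if not m:
        raise ValueError(f"not a modular designation: {designation!r}")
    positions, contacts = int(m.group(1)), int(m.group(2))
    if contacts > positions:
        raise ValueError("cannot have more contacts than positions")
    return positions, contacts

# The connectors discussed above:
for d in ["6P4C", "8P8C", "4P4C"]:
    print(d, parse_modular(d))  # e.g. 6P4C (6, 4)
```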

Why? And what do the RJ designations actually have to do with it?

Well, historically, telephones would be hardwired to the wall by the telephone installer. This proved inconvenient, and so the connection between the telephone and wall started to be connectorized. Telephones of the early 20th century were unlike the ones we use today, though, and were not fully self-contained. A "desk set," the part of the telephone that sat on your desk, would be connected to an electrical box, usually mounted on the wall. The box was often called the ringer box, because it contained the ringer, but in many cases it also contained the hybrid transformer that achieved the telephone's key feat of magic: the combination of bidirectional signals onto one wire pair.

The hybrid transformer performed the conversion between a two-wire (one pair) signal and a four-wire (two-pair) signal with 'talk' and 'listen' on separate circuits. Since the hybrid was in the box on the wall, the telephone needed to be connected to the box by four wires. Thus the first standard telephone connector, a chunky block with protruding pins, had four contacts. These connectors were in use even after the end of separate ringer boxes, making two of the four wires vestigial. They were still in use into the 1960s, and so you might still find them in older houses.

As you will gather from the fact that the hybrid may have been in the phone or in a box on the wall, and thus the telephone connection to the wall may require four or two wires, the interface between telephone and wall was poorly standardized. This wasn't much of a problem in practice: at the time, you did not own a telephone, you rented it. When you rented a phone, an installer would be sent to your house, and if any wiring was already present they would check it and adjust the connections as required. Depending on the specific type of service you had, the type of phone you had, and when it was all installed, there were a number of ways things might actually be connected.

By the 1950s, as the Model 500 telephone became the norm, a separate hybrid became very unusual: the Model 500 had a hybrid built into its base and only needed the two wires, which could be connected directly to the exchange without an intermediary box. So what of the other two wires? Just about anyone will tell you that the other two wires are present to allow for a second telephone line. This isn't wrong in the modern context, but it is ahistorical to the origin of the wiring convention. The four wires originated with the use of an external hybrid, and when they became vestigial, other uses were sometimes found for them.

For example, the "Princess" phone, a rather slick phone introduced as more of a consumer-friendly product in 1959, had a cool new feature: a lighted dial. The Princess phone was advertised specifically for home use, and particularly as a bedside telephone, so the lighted dial was a convenient feature if you wanted to make a telephone call at night. I realize that might sound a bit strange to the modern reader, but a lot of people used to put a phone extension on their nightstand. If you wanted to place a call after you had turned out the lights, wouldn't it be nice to not have to get up and turn them back on just to see the dial? Anyway, the whole concept of the Princess phone was this kind of dialing-in-bed luxury, and the glowing dial was a nice touch.

There's a problem, though: how to power the dial light? It could potentially be powered by the loop current, but the loop current is very small, likely to be split across multiple extensions, and the exchange would not appreciate the increased load of a lot of tiny dial lights. Instead, Princess phones were installed with a transformer that produced 6VAC from wall power for the dial light. That power was delivered to the phone using the two unused wires in its wall connection. This sounds rather slick in the era of DECT phones that require a separate power cable to the wall, and was one of the upsides of the complete integration of the telephone system. One of the downsides was, of course, that you were paying a monthly rental rate for all of this convenience.

In the late 1960s, the nature of telephone ownership radically changed. A series of judicial and regulatory decisions, culminating in the Carterfone decision, unleashed the telephone itself from the phone company. In the 1970s, consumers gained the ability to purchase their own phone and connect it to the telephone network without a rental fee. Increasingly, they chose to do so. Suddenly, the loose standardization of the telephone-to-wall interface became a very real problem, and one that impeded the ability of consumers to choose their own telephone.

The solution was the Registered Jack, originally a set of standardized wiring configurations developed within the Bell System and later a matter of federal regulation. Wiring installed by telephone companies was required to provide a standard Registered Jack so that consumers could easily connect their own device. It is important to understand that the Registered Jack standards are really about wiring, not connectors. They describe the way that connectors should be wired to meet specific standard applications.

The most straightforward is number 11, RJ11, which specifies a 6P2C connector with a single telephone pair. But what of the 6P4C connector we use today? Well, that's RJ14, a 6P4C connector with two telephone lines. The problem is that neither consumers nor the telephone cable industry has much of an appetite for these distinctions, and so today the RJ standards have been misunderstood to such a degree that they are now just poor synonyms for modular connector configurations.

Cables with 6P4C connectors are routinely advertised as RJ11 or RJ14, sometimes RJ11/RJ14. Most of the time RJ11 is manifestly incorrect, as these cables do, in fact, contain four wires and provide 6P4C connectors. Actual 6P2C telephone cables are uncommon, as they don't really cost any less than 6P4C (manufacturing cost far dominates the cost of the small-gauge copper) and consumers tend to expect any telephone cable to work with a two-line phone. Even RJ14 is incorrect, as there really is no such thing as an RJ14 cable. It's in the name, Registered Jack: RJ14 describes the jack you plug the cable into, the electrical interface presented on the wall. Any 6P4C cable could be used with any RJ that specifies a 6P4C connector. Incidentally, this is only academic, as RJ14 is the only 6P4C jack. This is, of course, much of why the terminology has become confused: most of the time it doesn't matter! If the connector fits, it will work.
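The jack-versus-connector distinction can be made concrete with a small table. This is an illustrative sketch covering only the designations discussed here, not an exhaustive or authoritative list of the Registered Jack standards:

```python
# The RJ number names a jack *wiring configuration*; "6P2C"/"6P4C" names
# the physical connector. A few designations from the discussion above:
REGISTERED_JACKS = {
    "RJ11": {"connector": "6P2C", "lines": 1},        # one pair on the center pins
    "RJ14": {"connector": "6P4C", "lines": 2},        # two pairs
    "RJ45": {"connector": "8P8C keyed", "lines": 1},  # one line plus programming resistor
}

def describe(rj):
    spec = REGISTERED_JACKS[rj]
    return f"{rj}: {spec['connector']} connector carrying {spec['lines']} line(s)"

print(describe("RJ14"))  # RJ14: 6P4C connector carrying 2 line(s)
```

Note that nothing in the table describes a cable: a cable is just wires and connectors, and any 6P4C cable works with any jack that takes a 6P4C plug.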

This whole thing becomes famously complex with Ethernet. It is common, but entirely incorrect, to refer to the 8P8C connector used for Ethernet as RJ45. This terminology is purely the result of confusion: a real RJ45 connector is keyed differently from (and thus incompatible with) the non-keyed 8P8C connector used for Ethernet. They just look similar, if you don't look too closely. A true RJ45 jack provides one telephone line and a resistor with a value that tells a modem what transmit power it should use. In practice this jack was rarely used and it is entirely obsolete today.

In fact, Ethernet is wired according to a standard called TIA 568, which famously has two different variants, A and B. A and B are electrically identical and differ only in the mapping of color pairs to pins. The origin of this standard, and its two variants, is arcane and basically a result of awkwardly shoehorning Ethernet into telephone wiring while trying not to interfere with the telephone lines, or the RJ45 resistor if present. The connectors are wired strangely in order to provide crossover of transmit and receive while using the pins not used by the RJ45 standard: ironically, Ethernet is very intentionally incompatible with RJ45. It's sort of the inverse, plus a twist to swap RX and TX.

So you want to know why? Well, on any modular wiring, the center pins (4 and 5 for an 8P connector) are almost guaranteed to carry a telephone line. That's what modular wiring was for! Additionally, the RJ45 standard that closely resembles Ethernet uses pins 7 and 8 for the resistor. For these reasons, Ethernet originally avoided those pins, using only pins 1, 2, 3, and 6. Pins 3 and 6 would likely already be a pair, as they are the conventional position for either a second telephone line or a key system control circuit. That maintains, of course, the symmetry that is standard for telephone wiring. But that leaves pins 1 and 2 to be used for the other pair. And this is where we get the weird, inconsistent wiring pattern: 10/100 Ethernet used pins 1 and 2 for one pair, and pins 3 and 6 for the other. When gigabit Ethernet came around and needed four pairs, pins 4 and 5 were obvious, since they were already going to be a telephone pair, and pins 7 and 8 were what was left. Ethernet connectors grew like tree rings: the middle is symmetric according to telephone convention, the outside is weird, according to Ethernet convention.

And as for why there are two different color conventions... well, the "A" variant was identical to the telephone industry convention for the two center pairs, which was very convenient for any installation that reused or coexisted with telephone wiring. The "B" pattern was actually included only for backwards compatibility with a pre-Ethernet, pre-TIA-568 structured wiring system called SYSTIMAX. SYSTIMAX was widely installed for a variety of applications in early business networking, carrying everything from analog voice to token ring, but particularly emphasized serial terminal connections. Since both telephone wiring and SYSTIMAX wiring were widely installed, using different color conventions for mapping pairs to 8P8C connectors, TIA-568 decided to encompass both.
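Laid out pin by pin, the two variants make the "electrically identical" point obvious. This sketch assumes the standard T568A/T568B color assignments; running it shows that only the pins carrying the green and orange pairs differ:

```python
# TIA-568 pin-to-color assignments. The two variants swap which colored
# pair lands on pins 1/2 versus pins 3/6; everything else is the same.
T568A = {1: "white/green",  2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",   6: "orange", 7: "white/brown",  8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown",  8: "brown"}

# Which pins differ between the two conventions?
differing = sorted(pin for pin in T568A if T568A[pin] != T568B[pin])
print(differing)  # [1, 2, 3, 6]
```

The "tree ring" history is visible here too: the blue pair sits on the symmetric center pins 4/5 in both variants, exactly where a telephone line would be.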

It is ironic, of course, that SYSTIMAX was originally an AT&T product, and so AT&T created the whole confusion themselves. Today, it is the legalistic view that TIA-568A is "correct" as the standard says it is preferred. TIA-568B, despite being included in the standard for backwards compatibility, is nonetheless extremely common. People will tell you various rules of thumb, like "government uses A and business uses B," or "horizontal wiring uses A and patch cables use B," but really, you just have to check.

But that's not what I meant to talk about here, and I don't think I even explained it very well. Ethernet is weird, that's the point. It's the odd one out, because it was shoehorned into a wiring convention originally designed for another purpose, and in many cases it had to coexist with that other purpose. It's some real legacy stuff. And yes, Ethernet was originally used with coaxial cables, I know; that's why it only needed one pair to begin with, but then we wanted full duplex.

So that's the great thing about phone cables: they're actually using the cable and modular connector the way they were intended to be used, so they fit right into each other. So quick and easy, and there's nothing to think about.

Except...

With Ethernet, there used to be this confusion about whether or not RX and TX were swapped by the cable. Today, because of automatic crossover detection (originally called auto-MDIX, and made a standard part of gigabit Ethernet), we rarely have to worry about this. But with older 10/100 equipment, there was a wiring convention for one end and a wiring convention for the other, and if you tried to connect two things that were wired to be the same end, you had to swap RX and TX in the cable. This was called a crossover cable, and is directly analogous to the confusingly named "null modem" serial cable.
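The crossover cable can be written out as a simple pin mapping, assuming the 10/100 convention of pins 1/2 for transmit and 3/6 for receive; the telephone-style center pins and the 7/8 pair pass straight through:

```python
# A 10/100 crossover cable: the TX pair (1, 2) on one end lands on the
# RX pair (3, 6) on the other, and vice versa. Remaining pins are straight.
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

def far_end_pin(pin):
    return CROSSOVER[pin]

print([far_end_pin(p) for p in (1, 2, 3, 6)])  # [3, 6, 1, 2]
```

The mapping is its own inverse, which is why a crossover cable works the same way around regardless of which end you plug in first.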

Telephone cables are... well, if you go shopping for RJ11 or RJ14 telephone cables, you might run into something odd. Some sellers, typically the more knowledgeable ones, may identify their cables as "straight" or "reverse." Even more confusingly, you will often read that "straight" is for data applications (like fax machines!) while "reverse" is for voice applications. If you consider that the majority of fax machines provide a telephone handset and are, in fact, capable of voice, this is particularly confusing.

See, the thing is, a reverse cable has the two ends mirrored relative to each other. It's not like Ethernet: the RX and TX pairs aren't swapped, because there are no such pairs. Remember, the two pairs of a 6P4C telephone cable are used as two separate circuits. Instead, the polarity is swapped within each pair.

Telephone cables are wired in such a way that this is easy: in a 6P4C connector, the "first" pair is the middle two pins (3 and 4), while the "second" pair is the next two pins out (2 and 5). That makes them symmetric, so you can swap the polarity of all of the pairs by simply putting one of the modular jacks on the other way around. With Ethernet, not coincidentally, the "inner" two pairs still work this way. It's the outer ones that buck convention.

When the jacks are connected such that the pins are consistent---that is, pin 1 on one connector is connected to pin 1 on the other---we could call that a straight cable. If the ends are mirrored, that is, pin 1 on one end is connected to pin 6 on the other, we could call it a reverse cable.
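These two definitions can be sketched as a classifier over the end-to-end pin mapping, assuming a 6-position connector. It also shows the point about the symmetric pairs: mirroring keeps each pair together but flips its polarity.

```python
# Classify a cable by its end-to-end pin mapping: "straight" preserves
# pin numbers, "reverse" mirrors them (pin n connects to pin 7 - n).
def classify(mapping, positions=6):
    if all(mapping[p] == p for p in range(1, positions + 1)):
        return "straight"
    if all(mapping[p] == positions + 1 - p for p in range(1, positions + 1)):
        return "reverse"
    return "neither"

straight = {p: p for p in range(1, 7)}
reverse = {p: 7 - p for p in range(1, 7)}
print(classify(straight), classify(reverse))  # straight reverse

# Mirroring keeps the symmetric telephone pairs intact but swaps polarity
# within each: the first pair (pins 3, 4) comes out as (4, 3).
print(reverse[3], reverse[4])  # 4 3
```

This is exactly why the flat-cable construction works: because the pairs are arranged symmetrically around the center, a mirrored cable still delivers both lines, just with tip and ring exchanged.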

With a telephone, we already talked about the hybrid situation: the two directions are not separated on the telephone line. We don't need to swap out RX and TX. So... why? why are there straight and reverse cables? Why do they have different applications?

Telephone lines have a distinct polarity, because of the DC battery voltage. For historic reasons, the two "sides" of a telephone pair are referred to as "tip" and "ring," referring to where they would land on the 1/4" connector that we no longer call a "phone" connector and instead associate mostly with electric guitars and expensive headphones. The ring is the negative side of the battery power, and the tip is the positive side. As standard, these are identified as -48v and 0v, because the exchange equipment is grounded on the positive side. Both sides should be regarded as floating at the subscriber end, though, so the voltages and positive or negative aren't that important. It's just tip and ring.

There is a correct way to connect a phone, but older phones with entirely analog wiring wouldn't notice the difference. When touch-tone phones introduced active electronics, polarity suddenly mattered, but you can imagine how this went over with consumers: some people had telephone jacks wired the wrong way around, and had for years, without any problems. When they upgraded to a touch-tone phone and it didn't work, the phone was clearly at fault, not the wiring. So, quite a few touch-tone phones were made with circuitry to "fix" a reverse-wired telephone connection. And, just to keep things complex, there were some types of pre-touch-tone phones that required that tip and ring be correctly connected to bias the magnetic ringer.

But wait... why, then, would so many sources assert that reverse-wired cables are appropriate for voice use? Well, there is a major problem of internet advice here. Look carefully at the websites that are the top results for the question of straight vs. reverse telephone cables, and you will find that they don't actually agree on what those terms mean. There are, in fact, two ways to look at it: you could say that a straight cable is a cable with the same correspondence of color to pin, or you could say that a straight cable has the two modular connectors installed the same way up.

If you think about it, you will realize that these conflict: if you attach both modular connectors with the latch on the same side of the cable, they will have mirrored pinouts and thus opposite polarity. To have a 1:1 pin correspondence that preserves polarity, you must attach the connectors such that one has the latch up and the other has the latch down. Now, this only makes sense if you lay your cable out perfectly flat, and for a round cable (like the twisted pair cables used for Ethernet) you still wouldn't be able to tell. But telephone cables are flat, and what's more, the manufacturing process leaves a distinct ridge on one side that makes it obvious which way the connector is oriented. Latch on the ridge side, or latch on the smooth side?

There's another way to look at it: put two 6P4C connectors face-to-face, like you are trying to plug the two into each other. You will notice that, if the wiring is pin-to-pin, they don't match each other. Pin 2 on one connector is a different color from the adjacent pin 5 on the other connector. This isn't all that surprising, because we're basically doing the same thing: we're focusing on the physical orientation of the connectors instead of the electrical connection.

Whether "straight" refers to the wiring or the connector orientation varies from author to author. I will confidently assert that the correct definition of "straight" is a cable where a given pin on one end corresponds to the same pin on the other, but there are certainly some that will disagree with me!

Diagrams of two ways of terminating

Here's the thing: as far as I can tell, the entire issue of straight vs. reverse telephone cables comes from this exact confusion. Oddly enough, non-pin-consistent wiring (e.g. with pin 2 on one connector going to pin 5 on the other) seems to have been the historical convention. Many manufactured telephone cables are made this way, even today. I am not sure, but I will speculate it might be an artifact of the manufacturing technique, or at least of the desire of those manufacturing telephone cables to have an easy, consistent way to put the connector on. Non-pin-consistent cables are often described as placing the connector latch on the ridge side of the cable at both ends. Which makes sense, in a way!

The thing is, these cables, standard though they apparently are, will reverse the polarity of the telephone line. If you connect two of them with a mating connector, the second one might reverse it back to the way it was before... but it might not! Mating connectors are made in both straight and reverse variants, although in this case straight seems much more common.

And I believe this is the whole origin of the "data" vs "voice" advice: telephones, the voice application, rarely care about line polarity. Data applications, because of the diversity of the equipment in use, are more likely to care about polarity. Indeed, for true digital applications like T-carrier, the cable must be straight. The whole thing is perhaps more succinctly described as "straight vs. don't care" rather than "straight vs. reverse," because as far as I can tell, there is no true application for what I am calling a reverse cable (one that does not preserve pin consistency). They're just common because of the applications in which polarity need not be maintained.

But I would love to hear if anyone knows otherwise! Truthfully I am very frustrated by this whole thing. The inconsistency of naming conventions, confusion over applications and the history, and argumentative forum threads about this have all deeply unsettled my belief in the consistency of telecommunications wiring.

Also, if you're making telephone cables, just make them straight (pin-consistent). It seems to be the safer way. I've never had it not work!

two phone cables, terminated opposite ways

2024-03-17 wilhelm haller and photocopier accounting

In the 1450s, German inventor Johannes Gutenberg designed the movable-type printing press, the first practical method of mass-duplicating text. After various other projects, he applied his press to the production of the Bible, yielding over one hundred copies of a text that previously had to be laboriously hand-copied.

His Bible was a tremendous cultural success, triggering revolutions not only in printed matter but also in religion. It was not a financial success: Gutenberg had apparently misspent the funds loaned to him for the project. Gutenberg lost a lawsuit and, as a result of the judgment, lost his workshop. He had made printing vastly cheaper, but it remained costly in volume. Sustaining the revolution of the printing press evidently required careful accounting.

For as long as there have been documents, there has been a need to copy. The printing press revolutionized printed matter, but setting up plates was a labor-intensive process, and a large number of copies needed to be produced at once for the process to be feasible. Into the early 20th century, it was not unusual for smaller-quantity business documents to be hand-copied. It wasn't necessarily for lack of duplicating technology; if anything, there were a surprising number of competing methods of duplication. But all of them had considerable downsides, not least among them the cost of treated paper stock and photographic chemicals.

The mimeograph was the star of the era. Mimeograph printing involved preparing a wax master, which would eventually be done by typewriter but was still a frustrating process when you only possessed a printed original. Photographic methods could be used to reproduce anything you could look at, but required expensive equipment and a relatively high skill level. The millennial office's proliferation of paper would not fully develop until the invention of xerography.

Xerography is not a common term today, first because of the general retreat of the Xerox corporation from the market, and second because it specifically identifies an analog process not used by modern photocopiers. In the 1960s, Xerox brought about a revolution in paperwork, though, mass-producing a reprographic machine that was faster, easier, and considerably less expensive to operate than contemporaries like the Photostat. The photocopier was now simple and inexpensive enough that they ventured beyond the print shop, taking root in the hallways and supply rooms of offices around the nation.

They were cheap, but they were costly in volume. Cost per page for the photocopiers of the '60s and '70s could reach $0.05, approaching $0.40 in today's currency. The price of photocopies continued to come down, but the ease of photocopiers encouraged quantity. Office workers ran amok, running off 30, 60, even 100 pages of documents to pass around. The operation of photocopiers became a significant item in the budget of American corporations.

The continued proliferation of the photocopier called for careful accounting.

Illustration


Wilhelm Haller was born in Swabia, in Germany. Details of his life, in the English language and seemingly in German as well, are sparse. His Wikipedia biography has the tone of a hagiography; a banner tells us that its neutrality is disputed.

What I can say for sure is that, in the 1960s, Haller found the start of his career as a sales apprentice for Hengstler. Hengstler, by then nearly a hundred years old, had made watches and other fine machinery before settling into the world of industrial clockwork. Among their products were a refined line of mechanical counters, of the same type we use today: hour meters, pulse counters, and volume meters, all driving a set of small wheels printed with the digits 0 through 9. As each wheel rolled from 9 to 0, a peg pushed a lever to advance the next wheel by one digit. They had numerous applications in commercial equipment and Haller must have become quite familiar with them before he moved to New York City, representing Hengstler products to the American market.

Perhaps he worked in an office where photocopier expenses were a complaint. I wish there was more of a story behind his first great invention, but it is quite overshadowed by his later, more abstract work. No source I can find cares to go deeper than to say that, along with Hengstler employee Paul Buser, he founded an American subsidiary of Hengstler called the Hecon Corporation. I can speculate somewhat confidently that Hecon was short for "Hengstler Counter," as Hecon dealt entirely in counters. More specifically, Hecon introduced a new application of the mechanical counter invented by Haller himself: the photocopier key counter.

Xerox photocopiers already included wiring that distributed a "pulse per page" signal, used to advance a counter used for scheduled maintenance. The Hecon key counter was a simple elaboration on this idea: a socket and wiring harness, furnished by Hecon, was installed on the photocopier. An "enable" circuit for the photocopier passed through the socket, and had to be jumpered for the photocopier to function. The socket also provided a pulse per page wire.

Photocopier users, typically each department, were issued a Hecon mechanical counter that fit into the socket. To make photocopies, you had to insert your key counter into the socket to enable the photocopier. The key counter was not resettable, so the accounting department could periodically collect key counters and read the number displayed on them like a utility meter. Thus the name key counter: it was a key to enable the photocopier, and a counter to measure the keyholder's usage.
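The enable-and-count mechanism described above fits in a few lines. This is a toy model (the class and method names are my own invention, not Hecon's): the copier only runs when a counter closes the enable circuit, and each page pulse advances that counter's non-resettable total.

```python
# Toy model of the Hecon key counter: a socket interrupts the copier's
# enable circuit, and a per-department counter totalizes page pulses.
class KeyCounter:
    def __init__(self, department):
        self.department = department
        self.count = 0  # non-resettable, read like a utility meter

class Copier:
    def __init__(self):
        self.socket = None  # empty socket: enable circuit open

    def insert(self, counter):
        self.socket = counter  # counter jumpers the enable circuit

    def copy(self, pages):
        if self.socket is None:
            raise RuntimeError("enable circuit open: no key counter inserted")
        self.socket.count += pages  # one pulse per page

sales = KeyCounter("sales")
copier = Copier()
copier.insert(sales)
copier.copy(30)
print(sales.count)  # 30
```

The later KCC keypad system is essentially the same idea with the counters moved into a microcontroller: a table of department totals instead of a physical counter per department.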

Key counters were a massive success and proliferated on office photocopiers during the '70s. Xerox, and then their competitors, bought into the system by providing a convenient mounting point and wiring harness connector for the key counter socket. You could find photocopiers that required a Hecon key counter well into the 1990s. Threads on office machine technician forums about adapting the wiring to modern machines suggest that there were some users into the 2010s.


Hecon would not allow the technology to stagnate. The mechanical key counter was reliable but had to be collected or turned in for the counter to be read. The Hecon KCC, introduced by the mid-1990s, replaced key counters with a microcontroller. Users entered an individual PIN or department number on a keypad mounted to the copier and connected to the key counter socket. The KCC enabled the copier and counted the page pulses, totalizing them into a department account that could be read out later from the keypad or from a computer by serial connection.

Hecon was not only invested in technological change, though. At some point, Hecon became a major component of Hengstler, with more Hengstler management moving to its New Jersey headquarters. "Must have good command of German and English," a 1969 newspaper listing for a secretarial job stated, before advising applicants to call a Mr. Hengstler himself.

By 1976, the "Liberal Benefits" in their job listing had been supplemented by a new feature: "Hecon Corp, the company that pioneered & operates on flexible working hours."

During the late '60s, Wilhelm Haller seems to have returned to Germany and shifted his interests beyond photocopiers to the operations of corporations themselves. Working with German management consultant Christel Kammerer, he designed a system for the mechanical recording of employees' working hours.

This was not the invention of the time clock. The history of the time clock is obscure but they were already in use during the 19th century. Haller's system implemented a more specific model of working hours promoted by Kammerer: flexitime (more common in Germany) or flextime (more common in the US).

Flextime is a simple enough concept and gained considerable popularity in the US during the 1970s and 1980s, making it almost too obvious to "invent" today. A flextime schedule defines "core hours," such as 11a-3p, during which employees are required to be present in the office. Outside of core hours, employees are free to come and go so long as their working hours total eight each day. Haller's time clock invention was, like the key counter, a totalizing counter: one that recorded not when employees arrived and left, but how many hours they were present each day.
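The flextime rule above has a simple shape: presence throughout the core hours, plus an eight-hour daily total. A toy check, using the 11a-3p core hours from the example (the helper names are hypothetical, and this is of course a software restatement of what Haller's counter did mechanically):

```python
from datetime import time

# Flextime rule: be present for all of the core hours, and total 8 hours.
CORE_START, CORE_END = time(11, 0), time(15, 0)

def hours(t1, t2):
    """Duration in hours between two times on the same day."""
    return ((t2.hour * 60 + t2.minute) - (t1.hour * 60 + t1.minute)) / 60

def day_ok(intervals):
    """intervals: list of (arrive, leave) pairs for one day."""
    total = sum(hours(a, b) for a, b in intervals)
    # Simplification: require a single interval to span the core hours.
    covers_core = any(a <= CORE_START and b >= CORE_END for a, b in intervals)
    return covers_core and total >= 8

# Arrive at 7, step out after core hours, return for a final half hour:
print(day_ok([(time(7, 0), time(15, 30)), (time(16, 30), time(17, 0))]))  # True
```

Note that the check never asks *when* the hours happened outside the core window, only that they totalize to eight: that is the sense in which Haller's device was a totalizing counter rather than a conventional time clock.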

It's unclear if Haller still worked for Hengstler, but he must have had some influence there. Hecon was among the first, perhaps the first, companies to introduce flextime in the United States.


Photocopier accounting continued apace. Dallas Semiconductor and Sun Microsystems popularized the iButton during the late 1990s, a compact and robust device that could store data and perform cryptographic operations. Hecon followed in the footsteps of the broader stored value industry, introducing the Hecon Quick Key system, which used iButtons for user authentication at the photocopier. Copies could even be "prepaid" onto an iButton, ideal for photocopiers with a regular cast of outside users, like those in courthouses and county clerk's offices.

The Quick Key had a distinctive, angular copier controller apparently called the Base 10. It had the aesthetic vibes of a '90s contemporary art museum, all white and geometric, although surviving examples have yellowed to the pallor of dated office equipment.

As the Xerographic process was under development, British Bible scholar Hugh Schonfield spent the 1950s developing his Commonwealth of World Citizens. Part micronation, part NGO, the Commonwealth had a mission of organizing its members throughout many nations into a world community that would uphold the ideals of equality and peace while carrying out humanitarian programs.

Adopting Esperanto as its language, it renamed itself to the Mondcivitan Republic, publishing a provisional constitution and electing a parliament. The Mondcivitan Republic issued passports; some of its members tried to abandon citizenship of their own countries. It was one of several organizations promoting "world citizenship" in the mid-century.

In 1972, Schonfield published a book, Politics of God, describing the organization's ideals. Those politics were apparently challenging. While the Mondcivitan Republic operated various humanitarian and charitable programs through the '60s and '70s, it failed to adopt a permanent constitution and by the 1980s had effectively dissolved. Sometime around then, Wilhelm Haller joined the movement and established a new manifestation of the Mondcivitan Republic in Germany. Haller applied to cancel his German citizenship; he would be a citizen of the world.

As a management consultant and social organizer, he founded a series of progressive German organizations. Haller's projects reached their apex in 2004, with the formation of the "International Leadership and Business Society," a direct extension of the Mondcivitan project. That same year, Haller passed away, a victim of thyroid cancer.


A German progressive organization, Lebenshaus Schwäbische Alb eV, published a touching obituary of Haller. Hengstler and Hecon are mentioned only as "a Swabian factory"; his work on flextime earns a short paragraph.

In translation:

He was able to celebrate his 69th birthday sitting in a wheelchair with a large group of his family and the circle of friends from the Reconciliation Association and the Life Center. With a weak and barely audible voice, he took part in our discussion about new financing options for the local independent Waldorf school from the purchasing power of the affected parents' homes.

Haller is, to me, a rather curious type of person. He was first an inventor of accounting systems, second a management consultant, and then a social activist motivated by both his Christian religion and belief in precision management. His work with Hengstler/Hecon gave way to support and adoption programs for disadvantaged children, supportive employment programs, and international initiatives born of unique mid-century optimism.

Flextime, he argued, freed workers to live their lives on their own schedules, while his timekeeping systems maintained an eight-hour workday with German precision. The Hecon key counter, a footnote of his career, perhaps did the same on a smaller scale: duplication was freed from the print shop but protected by complete cost recovery. Later in his career, he would set out to unify the world.

But then, it's hard to know what to make of Haller. Almost everything written about him seems to be the work of a true believer in his religious-managerial vision. I came for a small detail of photocopier history, and left with this strange leader of West German industrial thought, a management consultant who promised to "humanize" the workplace through time recording.

For him, a new building in the great "city on a hill" required only two things: careful commercial accounting with the knowledge of our own limited possibilities, and a deep trust in God, who knows how to continue when our own strength has come to an end.

Illustration

2024-03-09 the purple streetscape

Across the United States, streets are taking on a strange hue at night. Purple.

Purple streetlights have been reported in Tampa, Vancouver, Wichita, Boston. They're certainly in evidence here in Albuquerque, where Coal through downtown has turned almost entirely to mood lighting. Explanations vary. When I first saw the phenomenon, I thought of fixtures that combined RGB elements and thought perhaps one of the color channels had failed.

Others on the internet offer more involved explanations. "A black light surveillance network," one conspiracist calls them, as he shows his mushroom-themed blacklight poster fluorescing on the side of a highway. I remain unclear on what exactly a shadowy cabal would gain from installing blacklights across North America, but I am nonetheless charmed by his fluorescent fingerpainting demonstration. The topic of "blacklight" is a somewhat complex one with LEDs.

Historically, "blacklight" referred to long-wave UV lamps, also called UV-A. These lamps emit light around 400nm, just beyond violet light, thus the term ultraviolet. This light is close to, but not quite in, the visible spectrum, which is ideal for observing the effect of fluorescence. Fluorescence is a fascinating but also mundane physical phenomenon in which many materials absorb light, becoming excited, and then re-emit it as they relax. The process is not completely efficient, so the re-emitted light is longer in wavelength than the absorbed light.

Because of this loss of energy, a fluorescent material excited by a blacklight will emit light down in the visible spectrum. The effect seems a bit like magic: the fluorescence is far brighter, to the human eye, than the ultraviolet light that incited it. The trouble is that the common use of UV light to show fluorescence leads to a bit of a misconception that ultraviolet light is required. Not at all: fluorescent materials will re-emit just about any light at a slightly longer wavelength. The emitted light is relatively weak, though, and under broad spectrum lighting is unlikely to stand out against the ambient light. Fluorescence always occurs, it's just much more visible under a light source that humans can't see.
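The energy bookkeeping here is just the photon energy formula, E = hc/λ: since the re-emitted photon carries less energy, it must have a longer wavelength. A quick arithmetic check, using an illustrative 400nm excitation and 500nm (green) emission:

```python
# Photon energy E = hc / wavelength. Using hc ~= 1239.84 eV*nm lets us
# work directly in nanometers and electron-volts.
H_C_EV_NM = 1239.84

def photon_ev(wavelength_nm):
    return H_C_EV_NM / wavelength_nm

absorbed = photon_ev(400)  # near-UV "blacklight" photon, ~3.1 eV
emitted = photon_ev(500)   # green fluorescence photon, ~2.5 eV

# The Stokes shift: the emitted photon has less energy, hence the
# longer wavelength. The difference is lost to the material as heat.
assert emitted < absorbed
print(round(absorbed - emitted, 2), "eV lost per photon")
```

The same arithmetic explains why a blue LED "blacklight" works at all: 450nm photons still carry more energy than the green, yellow, or red light that most fluorescent pigments emit.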

When we consider LEDs, though, there is an economic aspect to consider. The construction of LEDs that emit UV light turns out to be quite difficult. There are now options on the market, but only relatively recently, and they run a considerable price premium compared to visible wavelength LEDs. The vast majority of "LED blacklights" are not actually blacklights; they don't actually emit UV. They're just blue. Human eyes aren't so sensitive to blue, especially the narrow emission of blue LEDs, and so these blue "blacklights" work well enough for showing fluorescence, although not as well as a "real" blacklight (still typically gas discharge).

This was mostly a minor detail of theatrical lighting until COVID, when some combination of unknowing buyers and unscrupulous sellers led to a wave of people using blue LEDs in an attempt to sanitize things. That doesn't work: long-wave UV already barely has enough energy to have much of a sanitizing effect, and blue LEDs have none at all. For sanitizing purposes you need short-wave UV, or UV-C, which has so much energy that it is almost ionizing radiation. The trouble, of course, is that this energy damages most biological things, including us. UV-C lights can quickly cause mild (but very unpleasant) eye damage called flashburn or "welder's eye," and more serious exposure can cause permanent damage to your eyes and skin. Funny, then, that all the people waving blue LEDs over their groceries on Instagram reels were at least saving themselves from an unpleasant learning experience.

You can probably see how this all ties back to streetlights. The purple streetlights are not "blacklights," but the clear fluorescence of our friend's psychedelic art tells us that they are emitting energy mostly at the short end of the visible spectrum, allowing the longer wave light emitted by the poster to appear inexplicably bright to our eyes. We are apparently looking at some sort of blue LED.

Those familiar with modern LED lighting probably easily see what's happening. LEDs are largely monochromatic light sources: they emit a single wavelength, which results in very poor color rendering, both aesthetically unpleasing and bad for driver perception. While some fixtures do indeed combine LEDs of multiple colors to produce white output, there's another technique that is less expensive, more energy efficient, and produces better quality light. Today's inexpensive, good quality LED lights have been enabled by phosphor coatings.

Here's the idea: LEDs of a single color illuminate a phosphorescent material. Phosphorescence is actually a closely related phenomenon to fluorescence, but involves kicking an electron up to a different spin state. Fewer materials exhibit this effect than fluorescence, but chemists have devised synthetic phosphors that can sort of "rearrange" light energy within the spectrum.

Blue LEDs are the most energy efficient, so a typical white LED light uses blue LEDs coated in a phosphor that absorbs a portion of the blue light and re-emits it at longer wavelengths. The resulting spectrum, the combination of some of the blue light passing through and red and green light emitted by the phosphor, is a high-CRI white light ideal for street lighting.

Incidentally, one of the properties of phosphorescence that differentiates it from fluorescence is that phosphors take a while to "relax" back to their lower energy state. A phosphor will continue to glow after the energy that excited it is gone. This effect has long been employed for "glow in the dark" materials that continue to glow softly for an extended period of time after the room goes dark. During the Cold War, the Civil Defense Administration recommended outlining stair treads and doors with such phosphorescent tape so that you could more safely navigate your home during a blackout. The same idea is still employed aboard aircraft and ships, and I suppose you could still do it to your house; it would be fun.

Phosphor-conversion white LEDs use phosphors that minimize this effect but they still exhibit it. Turn off a white LED light in a dark room and you will probably notice that it continues to glow dimly for a short time. You are observing the phosphor slowly relaxing.

So what of the purple streetlights? The phosphor has failed, at least partially, and the lights are emitting the natural spectrum of their LEDs rather than the "adjusted" spectrum produced by the phosphor. The exact reason for this failure doesn't seem to have been publicized, but judging by the apparently rapid onset most people think the phosphor is delaminating and falling off of the LEDs rather than slowly burning away or undergoing some sort of corrosion. They may have simply not used a very good glue.

So we have a technical explanation: white LED streetlights are not white LEDs but blue LEDs with phosphor conversion. If the phosphor somehow fails or comes off, their spectrum shifts towards deep blue. Some combination of remaining phosphor on the lights and environmental conditions (we are not used to seeing large areas under monochromatic blue light) causes this to come off as an eerie purple.

There is also, though, a system question. How is it that so many streetlights across so many cities are demonstrating the same failure at around the same time?

The answer to that question is monopolization.

Virtually all LED street lighting installed in North America is manufactured by Acuity Brands. Based in Atlanta, Acuity is a hundred-year-old industrial conglomerate that originally focused on linens and janitorial supplies. In 1969, though, Acuity acquired Lithonia: one of the United States' largest manufacturers of area lighting. Acuity gained a lighting division, and it was on the warpath. Through a huge number of acquisitions, everything from age-old area lighting giants like Holophane to VC-funded networked lighting companies have become part of Acuity.

In the meantime, GE's area lighting division petered out along with the rest of GE (they recently sold their entire lighting division to a consumer home automation company). Directories of street lighting manufacturers now list Acuity followed by a list of brands Acuity owns. Their closest competitors for traditional street lighting are probably Cree and Cooper (part of Eaton), but both are well behind Acuity in municipal sales.

Starting around 2017, Acuity started to manufacture defective lights. The exact nature of the defect is unclear, but it seems to cause abrupt failure of the phosphor after around five years. And here we are, over five years later, with purple streets.

The situation is not quite as bad as it sounds. Acuity offered a long warranty on their street lighting, and the affected lights are still covered. Acuity is sending contractors to replace defective lights at their expense, but they have to coordinate with street lighting operators to identify defective lights and schedule the work. It's a long process. Many cities have over a thousand lights to replace, but finding them is a problem on its own.

Most cities have invested in some sort of smart streetlighting solution. The most common approach is a module that plugs into the standard photocell receptacle on the light and both controls the light and reports energy use over a municipal LTE network. These modules can automatically identify many failure modes based on changes in power consumption. The problem is that the phosphor failure is completely nonelectrical, so the faulty lights can't be located by energy monitoring.
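To make the limitation concrete, here's a hypothetical sketch of threshold-based power monitoring (the wattages and threshold are invented): electrical faults show up as deviations from nominal draw, but a phosphor failure doesn't change the draw at all.

```python
# Hypothetical sketch: why power telemetry misses phosphor failure.
# A monitoring module can flag electrical faults as deviations from the
# fixture's nominal draw, but a light whose phosphor has fallen off still
# draws normal power -- it just emits the wrong spectrum.
NOMINAL_WATTS = 100.0
TOLERANCE = 0.15  # flag anything +/-15% from nominal (made-up threshold)

def electrically_faulty(measured_watts):
    return abs(measured_watts - NOMINAL_WATTS) / NOMINAL_WATTS > TOLERANCE

print(electrically_faulty(0.0))    # dead driver: flagged
print(electrically_faulty(140.0))  # shorted module: flagged
print(electrically_faulty(99.5))   # purple light, phosphor gone: looks fine
```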

So, while I can't truly rule out the possibility of a blacklight surveillance network, I'd suggest you report purple lights to your city or electrical utility. They're likely already working with Acuity on a replacement campaign, but they may not know the exact scale of the problem yet.


While I'm at it, let's talk about another common failure mode of outdoor LED lighting: flashing. LED lights use a constant-current power supply (often called a driver in this context) that regulates the voltage applied to the LEDs to achieve their rated current. Unfortunately, several failure modes can cause the driver to continuously cycle. Consider the common case of an LED module that has failed in such a way that it shorts at high temperature. The driver will run until the faulty module gets warm enough to short, at which point the driver shuts off on overcurrent protection. Once the module cools, the cycle repeats indefinitely. Some drivers have a "soft start" feature and some failure modes cause current to rise beyond limits over time, so it's not unusual for these faulty lights to fade in before shutting off.
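A toy model of that thermal cycle, with every number invented for illustration, shows the oscillation: heat while on, trip at the short threshold, cool while off, restart.

```python
# Toy model of a cycling LED driver (all numbers invented for illustration).
# A module that shorts when hot makes the driver oscillate: it heats up while
# on, trips on overcurrent, cools down while off, and then restarts.
TRIP_TEMP = 80.0     # module shorts above this temperature (deg C)
RESTART_TEMP = 60.0  # driver retries once the module has cooled
AMBIENT = 25.0

temp, on, transitions = AMBIENT, True, []
for step in range(200):
    temp += 2.0 if on else -1.5   # heat while on, cool while off
    temp = max(temp, AMBIENT)
    if on and temp >= TRIP_TEMP:
        on = False
        transitions.append(step)  # light shuts off
    elif not on and temp <= RESTART_TEMP:
        on = True
        transitions.append(step)  # light comes back on
print(f"{len(transitions)} on/off transitions in 200 time steps")
```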

It's actually a very similar situation to the cycling that gas discharge street lighting used to show, but as is the way of electronics, it happens faster. Aged sodium bulbs would often cause the ballast to hit its current limit over the span of perhaps five minutes, cycling the light on and off. Now it often happens twice in a second.

I once saw a parking lot where nearly every light had failed this way. I would guess that lightning had struck, creating a transient that damaged all of them at once. It felt like a silent rave; only a little color could have made it better. Unfortunately they were RAB, not Acuity, and the phosphor was holding on.

2024-03-01 listening in on the neighborhood

Last week, someone leaked a spreadsheet of SoundThinking sensors to Wired. You are probably asking "What is SoundThinking," because the company rebranded last year. They used to be called ShotSpotter, and their outdoor acoustic gunfire detection system still goes by the ShotSpotter name.

ShotSpotter has attracted a lot of press and plenty of criticism for the gunfire detection service they provide to many law enforcement agencies in the US. The system involves installing acoustic sensors throughout a city, which use some sort of signature matching to detect gunfire and then use time of flight to determine the likely source.
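As an illustration of the time-of-flight idea only (not ShotSpotter's actual algorithm, which is not public), you can recover a source location from arrival times at known sensor positions with a brute-force least-squares search over candidate points:

```python
# Illustrative sketch of acoustic source localization from arrival times.
# Since the emission time is unknown, we match pairwise arrival-time
# differences rather than absolute times.
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, at 20 C

def locate(sensors, arrival_times, extent=1000, step=10):
    """Grid-search the point whose predicted time differences best match."""
    pairs = list(itertools.combinations(range(len(sensors)), 2))
    best, best_err = None, float("inf")
    for x in range(0, extent, step):
        for y in range(0, extent, step):
            pred = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
                    for sx, sy in sensors]
            err = sum((pred[i] - pred[j]
                       - (arrival_times[i] - arrival_times[j])) ** 2
                      for i, j in pairs)
            if err < best_err:
                best, best_err = (x, y), err
    return best

sensors = [(0, 0), (800, 0), (0, 800), (800, 800)]   # hypothetical layout, m
true_source = (300, 450)
times = [math.hypot(true_source[0] - sx, true_source[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate(sensors, times))  # lands at or near the true source
```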

One of the principal topics of criticism is the immense secrecy with which they operate: ShotSpotter protects information on the location of its sensors as if it were a state secret, and does not disclose them even to the law enforcement agencies that are its customers. This secrecy attracts accusations that ShotSpotter's claims of efficacy cannot be independently validated, and that ShotSpotter is attempting to suppress research into the civil rights impacts of its product.

I have encountered this topic before: the Albuquerque Police Department is a ShotSpotter customer, and during my involvement in police oversight it was evasive in response to any questions about the system and resisted efforts to subject its surveillance technology purchases to more outside scrutiny. Many assumed that ShotSpotter coverage was concentrated in disadvantaged parts of the city, an unsurprising outcome but one that could contribute to systemic overpolicing. APD would not comment.

I have always assumed that it would not really be that difficult to find the ShotSpotter sensors, at least if you have my inclination to examine telephone poles. While the Wired article focuses heavily on sensors installed on buildings, it seems likely that in environments like Albuquerque with city-operated lighting and a single electrical utility, they would be installed on street lights. That's where you find most of the technology the city fields.

The thing is, I didn't really know what the sensors looked like. I've seen pictures, but I know they were quite old, and I assumed the design had gotten more compact over time. Indeed it has.

ShotSpotter sensor on light pole

An interesting thing about the Wired article is that it contains a map, but the MapBox embed produced with Flourish Studio had a surprisingly low maximum zoom level. That made it more or less impossible to interpret the locations of the sensors exactly. I'm concerned that this was an intentional decision by Wired to partially obfuscate the data, because it is not an effective one. It was a simple matter to find the JSON payload the map viewer was using for the PoI overlay and then convert it to KML.
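The conversion is the sort of thing a few lines of Python will do. The field names here are assumptions for the sake of illustration, since the actual payload structure isn't reproduced in this post:

```python
# Hypothetical sketch: turn a GeoJSON-ish list of point features into a
# minimal KML document. Field names are assumed, not taken from the leak.
import json

def points_to_kml(payload_text):
    data = json.loads(payload_text)
    placemarks = []
    for feat in data["features"]:
        lon, lat = feat["geometry"]["coordinates"]  # GeoJSON order: lon, lat
        name = feat.get("properties", {}).get("name", "sensor")
        placemarks.append(
            f"<Placemark><name>{name}</name>"
            f"<Point><coordinates>{lon},{lat}</coordinates></Point>"
            f"</Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

sample = json.dumps({"features": [
    {"geometry": {"coordinates": [-106.65, 35.08]},
     "properties": {"name": "example"}}]})
print(points_to_kml(sample))
```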

I worried that the underlying data would be obscured; it was not. The coordinates are exact. So, I took the opportunity to enjoy a nice day and went on an expedition.

ShotSpotter sensor in a neighborhood

The sensors are pretty much what I imagined, innocuous beige boxes clamped to street light arms. There are a number of these boxes to be found in modern cities. Some are smart meter nodes, some are base stations for municipal data networks, others collect environmental data. Some are the police, listening in on your activities.

This is not as hypothetical of a concern as it might sound. Conversations recorded by ShotSpotter sensors have twice been introduced as evidence in criminal trials. In one case the court allowed it, in another the court did not. The possibility clearly exists, and depending on interpretation of state law, it may be permissible for ShotSpotter to record conversations on the street for future use as evidence.

ShotSpotter sensor in a neighborhood

This ought to give us pause, as should the fact that ShotSpotter has been compellingly demonstrated to manipulate their "interpretation" of evidence to fit a prosecutor's narrative---even when ShotSpotter's original analysis contradicted it.

But pervasive surveillance of urban areas and troubling use of that evidence is nothing new. Albuquerque already has an expansive police-operated video surveillance network connected to the Real-Time Crime Center. APD has long used portable automated license plate readers (ALPR) under cover of "your speed is" trailers, and more recently has installed permanent ALPR at major intersections in the city.

All of this occurs with virtually no public oversight or even public awareness.

ShotSpotter sensor in a neighborhood

What most surprised me is the density of ShotSpotter sensors. In my head, I assumed they were fairly sparse. A Chicago report on the system says there are 20 to 25 per square mile. Density in Albuquerque is lower, probably reflecting the wide streets and relative lack of high rises. Still, there are a lot of them. 721 in Albuquerque, a city of about 190 square miles. At present, only parts of the city are covered.
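The arithmetic from those figures, keeping in mind that the citywide average understates density in the covered areas since coverage is partial:

```python
# Back-of-the-envelope from the figures above: 721 sensors over a city of
# about 190 square miles, versus the 20-25 per square mile a Chicago report
# describes for that system's covered areas.
sensors = 721
area_sq_mi = 190
print(f"{sensors / area_sq_mi:.1f} sensors per square mile, citywide average")
```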

Map of ShotSpotter sensors in Albuquerque

And those coverage decisions are interesting. The valley (what of it is in city limits) is well covered, as is the west side outside of Coors/Old Coors. The International District, of course, is dense with sensors, as is the inner NE bounded roughly by the freeways to Louisiana and Montgomery.

Conspicuously empty is the rest of the northeast, from UNM's north campus area to the foothills. Indian School Road runs almost its entire east-side length without a single sensor.

ShotSpotter sensor in a neighborhood

The reader can probably infer how this coverage pattern relates to race and class in Albuquerque. It's not perfect, but the distance from your house to a ShotSpotter sensor correlates fairly well with your household income. The wealthier you are, the less surveilled you are.

The "pocket of poverty" south of Downtown where I live, the historically Spanish Barelas and historically Black South Broadway, are predictably well covered. All of the photos here were taken within a mile, and I did not come even close to visiting all of the sensors. Within a one mile radius of the center of Barelas, there are 31 sensors.

ShotSpotter sensor in a neighborhood

Some are conspicuous. Washington Middle School, where 13-year-old Bennie Hargrove was shot by another student, has a sensor mounted at its front entrance. Another sensor is in the cul de sac behind the Coors and I-40 Walmart, where a body was found in a burned-out car. Perhaps the deep gulch of the freeway poses a coverage challenge; there are two more less than a thousand feet away.

In the Downtown Core, buildings were preferred to light poles. The PNM building, the Anasazi condos, and the Banque building are all feeding data into the city's failing scheme of federal prosecutions for downtown gun crime.

The closest sensor to the wealthy Heights is at Embudo Canyon, and coverage stops north of Central in the affluent Nob Hill residential area. Old Town is almost completely uncovered, as is the isolationist Four Hills.

Highland High School has a sensor on its swimming pool building. The data says there are two at the intersection of Gibson and Chavez, probably an error; it also says there are two sensors on "Null Island." Don't worry about coverage in the south campus area, though. There are 16 in the area bounded by I-25 to Yale and Gibson to Coal.

Detail of a ShotSpotter sensor

KOB quotes APD PIO Gallegos saying "We don't know, technically, where all the sensors are." Well, I suppose they do now, the leak has been widely reported on. APD received about 14,000 ShotSpotter reports last year. The accuracy of these reports, in terms of their correctly identifying gunfire, is contested. SoundThinking claims impressive statistics, but has actively resisted independent evaluation. A Chicago report found that only 11.3% of ShotSpotter reports could be confirmed as gunfire. APD, for its part, reports a few hundred suspects or victims identified as a result of ShotSpotter reports.

APD has used a local firearms training business, Calibers, to fire blanks around the city to verify detection. They say the system performed well.

But, if asked, they provide a form letter written by ShotSpotter. Their contract prohibits the disclosure of any actual data.

2024-02-25 a history of the tty

It's one of those anachronisms that is deeply embedded in modern technology. From cloud operator servers to embedded controllers in appliances, there must be uncountable devices that think they are connected to a TTY.

I will omit the many interesting details of the Linux terminal infrastructure here, as it could easily fill its own article. But most Linux users are at least peripherally aware that the kernel tends to identify both serial devices and terminals as TTYs, assigning them filesystem names in the form of /dev/tty*. Probably a lot of those people remember that this stands for teletype or perhaps teletypewriter, although in practice the term teleprinter is more common.

Indeed, from about the 1950s (the genesis of electronic computers) to the 1970s (the rise of video display terminals/VDTs), teleprinters were the most common form of interactive human-machine interface. The "interactive" distinction here is important; early computers were built primarily around noninteractive input and output, often using punched paper tape. Interactive operation was a more advanced form of computing, one that took almost until the widespread use of VDTs to mature. Look into the computers of the 1960s especially, the early days of interactive operation, and you will be amazed at how bizarre and unfriendly the command interface is. It wasn't really intended for people to use; it was for the Computer Operator (who had attended a lengthy training course on the topic) to troubleshoot problems in the noninteractive workload.

But interactive computing is yet another topic I will one day take on. Right now, I want to talk about the heritage of these input/output mechanisms. Why is it that punched paper tape and the teleprinter were the most obvious way to interact with the first electronic computers? As you might suspect, the arrangement was one of convenience. Paper tape punches and readers were already being manufactured, as were teleprinters. They were both used for communications.

Most people who hear about the telegraph think of Morse code keys and rhythmic beeping. Indeed, Samuel Morse is an important figure in the history of telegraphy. The form of "morse code" that we tend to imagine, though, a continuous wave "beep," is mostly an artifact of radio. For telegraphs, no carrier wave or radio modulation was required. You can transmit a message simply by interrupting the current on a wire.

This idea is rather simple to conceive and even to implement, so it's no surprise that telegraphy has a long history. By the end of the 18th century inventors in Europe and Great Britain were devising simple electrical telegraphs. These early telegraphs had limited ranges and even more limited speeds, though, a result mostly of the lack of a good way to indicate to the operator whether or not a current was present. It is an intriguing aspect of technical history that the first decades of experimentation with electricity were done with only the clumsiest means of measuring or even detecting it.

In 1820, three physicists or inventors (these were vague titles at the time) almost simultaneously worked out that electrical current induced a magnetic field. They invented various ways of demonstrating the effect, usually by deflecting a magnetic needle. This innovation quickly led to the "electromagnetic telegraph," in which a telegrapher operates a key to switch current, which causes a needle or flag to deflect at the other end of the circuit. This was tremendously simpler than previous means of indicating current and was applied almost immediately to build the first practical telegraphs. During the 1830s, the invention of the relay allowed telegraph signals to be repeated or amplified as the potential weakened (the origin of the term "relay"). Edward Davy, one of the inventors of the relay, also invented the telegraph recorder.

From 1830 to 1850, so many people invented so many telegraph systems that it is difficult to succinctly describe how an early practical telegraph worked. There were certain themes: for non-recording systems, a needle was often deflected one way or the other by the presence or absence of current, or perhaps by polarity reversal. Sometimes the receiver would strike a bell or sound a buzzer with each change. In recording systems, a telegraph printer or telegraph recorder embossed a hole or left a small mark on a paper tape that advanced through the device. In the first case, the receiving operator would watch the needle, interpreting messages as they came. In the second case, the operator could examine the paper tape at their leisure, interpreting the message based on the distances between the dots.

Recording systems tended to be used for less time-sensitive operations like passing telegrams between cities, while non-recording telegraphs were used for more real-time applications like railroad dispatch and signaling. Regardless, it is important to understand that the teleprinter is about as old as the telegraph. Many early telegraphs recorded received signals onto paper.

The interpretation of telegraph signals was as varied as the equipment that carried them. Samuel Morse popularized the telegraph in the United States based in part on his alphabetic code, but it was not the first. Gauss famously devised a binary encoding for alphabetic characters a few years earlier, which resembles modern character encodings more than Morse's scheme. In many telegraph applications, though, there was no alphabetic code at all. Railroad signal telegraphs, for example, often used application-specific schemes that encoded types of trains and routes instead of letters.

Morse's telegraph system was very successful in the United States, and in 1861 a Morse telegraph line connected the coasts. It surprises some that a transcontinental telegraph line was completed some fifty years before the transcontinental telephone line. Telegraphy is older, though, because it is simpler. There is no analog signaling involved; simple on/off or polarity signals can be amplified using simple mechanical relays. The tendency to view text as more complex than voice (SMS came after the first cellphones, for one) has more to do with the last 50 years than the 50 years before.

The Morse telegraph system was practical enough to spawn a large industry, but suffered a key limitation: the level of experience required to key and copy Morse quickly and reliably is fairly high. Telegraphers were skilled and, thus, fairly well paid and sometimes in short supply [1]. To drive down the cost of telegraphy, there would need to be more automation.

Many of the earliest telegraph designs had employed parallel signaling. A common scheme was to provide one wire for each letter, and a common return. These were impractical to build over any meaningful distance, and Morse's one-wire design (along with one-wire designs by others) won out for obvious reasons. The idea of parallel signaling stayed around, though, and was reintroduced during the 1840s with a simple form of multiplexing: one "logical channel" for each letter could be combined onto one wire using time division muxing, for example by using a transmitter and receiver with synchronized spinning wheels. Letters would be represented by positions on the wheel, and a pulse sent at the appropriate point in the revolution to cause the teleprinter to produce that letter. With this alphabetic teleprinter, an experienced operator was no longer required to receive messages. They appeared as text on a strip of paper, ready for an unskilled clerk to read or paste onto a message card.
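A toy model of this letter-per-slot scheme (much simplified from the real apparatus) makes the synchronization idea clear: a pulse means something only because both ends agree on which letter's time slot it falls in.

```python
# Toy model of letter-per-slot time division multiplexing, in the spirit of
# the spinning-wheel telegraphs described above (details much simplified).
# Both ends step through the alphabet in lockstep; a pulse in slot n means
# "print letter n".
import string

ALPHABET = string.ascii_uppercase  # one time slot per letter

def transmit(message):
    """Yield (slot, pulse) pairs as the wheel revolves once per letter.
    The message must use uppercase letters only in this toy version."""
    for letter in message:
        for slot, slot_letter in enumerate(ALPHABET):
            yield slot, slot_letter == letter  # pulse only in the right slot

def receive(line):
    """The synchronized receiver prints the letter for each pulsed slot."""
    return "".join(ALPHABET[slot] for slot, pulse in line if pulse)

print(receive(transmit("HELLO")))
```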

This system proved expensive but still practical to operate, and a network of such alphabetic teleprinters was built in the United States during the mid 19th century. A set of smaller telegraph companies operating one such system, called the Hughes system after its inventor, joined together to become the Western Union Telegraph Company. In a precedent that would be followed even more closely by the telephone system, practical commercial telegraphy was intertwined with a monopoly.

The Hughes system was functional but costly. The basic idea of multiplexing across 30 channels was difficult to achieve with mechanical technology. Émile Baudot was employed by the French telegraph service to find a way to better utilize telegraph lines. He first developed a proper form of multiplexing, using synchronized switches to combine five Hughes system messages onto one wire and separate them again at the other end. Likely inspired by his close inspection of the Hughes system and its limitations, Baudot went on to develop a more efficient scheme for the transmission of alphabetic messages: the Baudot code.

Baudot's system was similar to the Hughes system in that it relied on a transmitter and receiver kept in synchronization to interpret pulses as belonging to the correct logical channel. He simplified the design, though, by allowing for only five logical channels. Instead of each pulse representing a letter, the combination of all five channels would be used to form one symbol. The Baudot code was a five-bit binary alphabetic encoding, and most computer alphabetic encodings to the present day are at least partially derived from it.
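To illustrate the five-bit idea, and only the idea, here's a sketch with a made-up code assignment; the real Baudot and ITA2 bit patterns are different. Five bits give 32 code points, enough for 26 letters with a few left over for controls.

```python
# The concept of a five-bit alphabetic code, with a made-up assignment --
# the actual Baudot/ITA2 bit patterns differ from this illustration.
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
encode = {ch: i + 1 for i, ch in enumerate(LETTERS)}  # values 1..26 fit in 5 bits
decode = {v: k for k, v in encode.items()}

def to_bits(ch):
    """Render one character as its five-bit code."""
    return format(encode[ch], "05b")

message = "BAUDOT"
wire = [to_bits(ch) for ch in message]      # what goes down the line
print(wire)
print("".join(decode[int(b, 2)] for b in wire))  # what the receiver prints
```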

One of the downsides of Baudot's design is that it was not quite as easy to operate as telegraphy companies would hope. Baudot equipment could sustain 30 words per minute with a skilled operator who could work the five-key piano-style keyboard in good synchronization with the mechanical armature that read it out. This took a great deal of practice, though, and pressing keys out of synchronization with the transmitter could easily cause incorrect letters to be sent.

In 1901, during the early days of the telephone, Donald Murray developed an important enhancement to the Baudot system. He was likely informed by an older practice that had been developed for Morse telegraphs, of having an operator punch a Morse message into paper tape to be transmitted by a simple tape reader later. He did the same for Baudot code: he designed a device with an easy to use typewriter-like keyboard that punched Baudot code onto a strip of paper tape with five rows, one for each bit. The tape punch had no need to be synchronized with the other end, and the operator could type at whatever pace they were comfortable.

The invention of Murray's tape punch brought about the low-cost telegram networks that we are familiar with from the early 20th century. A clerk would take down a message and then punch it onto paper tape. Later, the paper tape would be inserted into a reader that transmitted the Baudot message in perfect synchronization with the receiver, a teleprinter that typed it onto tape as text once again. The process of encoding and decoding messages for the telegraph was now fully automated.

The total operation of the system, though, was not. For one, the output was paper tape that had to be cut and pasted to compose a paragraph of text. For another, the transmitting and receiving equipment operated continuously, requiring operators to coordinate on the scheduling of sending messages (or they would tie up the line and waste a lot of paper tape).

In a wonderful time capsule of early 20th century industrialism, the next major evolution would come about with considerable help from the Morton Salt Company. Joy Morton, its founder, agreed to fund Frank Pearne's efforts to develop an even more practical printing telegraph. This device would use a typewriter mechanism to produce the output as normal text on a page, saving considerable effort by clerks. Even better, it would use a system of control codes to indicate the beginning and end of messages, allowing a teleprinter to operate largely unattended. This was more complex than it sounded, as it required finding a way for the two ends to establish clock synchronization before the message.

There were, it turned out, others working on the same concept. After a series of patent disputes, mergers, and negotiations, the Morkrum-Kleinschmidt Company would market this new technology. A fully automated teleprinter, lurching into life when the other end had a message to send, producing pages of text like a typewriter with an invisible typist.

In 1928, Morkrum-Kleinschmidt adopted a rather more memorable name: the Teletype Corporation. During the development of the Teletype system, the telephone network had grown into a nationwide enterprise and one of the United States' largest industrial ventures (at many points in time, the country's single largest employer). AT&T had already entered the telegraph business by leasing its lines for telegraph use, and work had already begun on telegraphs that could operate over switched telephone lines, transmitting text as if it were a phone call. The telephone was born of the telegraph but came to consume it. In 1930, the Teletype Corporation was purchased by AT&T and became part of Western Electric.

That same year, Western Electric introduced the Teletype Model 15. Receiving Baudot at 45 baud [2] with an optional tape punch and tape reader, the Model 15 became a workhorse of American communications. By some accounts, the Model 15 was instrumental in the prosecution of World War II. The War Department made extensive use of AT&T-furnished teletype networks and Model 15 teleprinters as the core of the military logistics enterprise. The Model 15 was still being manufactured as late as 1963, a production record rivaled by few other electrical devices.

It is difficult to summarize the history of the networks that teleprinters enabled. The concept of switching connections between teleprinters, as was done on the phone network, was an obvious one. The dominant switched teleprinter network was Telex, not really an organization but actually a set of standards promulgated by the ITU. The most prominent US implementation of Telex was an AT&T service called TWX, short for Teletypewriter Exchange Service. TWX used Teletype teleprinters on phone lines (in a special class of service), and was a very popular service for business use from the '40s to the '70s.

Incidentally, TWX was assigned the special purpose area codes 510, 610, 710, 810, and 910, which contained only teleprinters. These area codes would eventually be assigned to other uses, but for a long time ranked among the "unusual" NPAs.

Western Union continued to develop their telegraph network during the era of TWX, acting in many ways as a sibling or shadow of AT&T. Like AT&T, Western Union developed multiplexing schemes to make better use of their long-distance telegraph lines. Like AT&T, Western Union developed automatic switching systems to decrease operator expenses. Like AT&T, Western Union built out a microwave network to increase the capacity of their long-haul network. Telegraphy is one of the areas where AT&T struggled despite their vast network, and Western Union kept ahead of them, purchasing the TWX service from AT&T. Western Union would continue to operate the switched teleprinter network, under the Telex name, into the '80s when it largely died out in favor of the newly developed fax machine.

During the era of TWX, encoding schemes changed several times as AT&T and Western Union developed better and faster equipment (Western Union continued to make use of Western Electric-built Teletype machines among other equipment). ASCII came to replace Baudot, and so a number of ASCII teleprinters existed. There were also hybrids. For some time Western Union operated teleprinters on an ASCII variant that provided only upper case letters and some punctuation, with the benefit of requiring fewer bits. The encoding and decoding of this reduced ASCII set was implemented by the Bell 101 telephone modem, designed in 1958 to allow SAGE computers to communicate with one another and then widely included in TWX and Telex teleprinters. The Bell 101's descendants would bring about remote access to time-sharing computer systems and, ultimately, one of the major forms of long-distance computer networking.

You can see, then, that the history of teleprinters and the history of computers are naturally interleaved. From an early stage, computers operated primarily on streams of characters. This basic concept is still the core of many modern computer systems and, not coincidentally, also describes the operation of teleprinters.

When electronic computers were under development in the 1950s and 1960s, teleprinters were near the apex of their popularity as a medium for business communications. Most people working on computers probably had experience with teleprinters; most organizations working on computers already had a number of teleprinters installed. It was quite natural that teleprinter technology would be repurposed as a means of input and output for computers.

Some of the very earliest computers, for example those of Konrad Zuse, employed punched tape as an input medium. These were almost invariably repurposed or modified telegraphic punched tape systems, often in five-bit Baudot. Particularly in retrospect, as more materials have become available to historians, it is clear that much of the groundwork for digital computing was laid by WWII cryptological efforts.

Newly devised cryptographic machines like the Lorenz cipher were essentially teleprinters with added digital logic. The machines built to attack these codes, like Colossus, are now generally recognized as the first programmable computers. The line between teleprinter and computer was not always clear. As more encoding and control logic was added, teleprinters came to resemble simple computers.

The Manchester Mark I, a pioneer of stored-program computing built in 1949, used a 5-bit code adopted from Baudot by none other than Alan Turing. The major advantage of this 5-bit encoding was, of course, that programs could be read and written using Baudot tape and standard telegraph equipment. The addition of a teleprinter allowed operators to "interactively" enter instructions into the computer and read the output, although the concept of a shell (or any other designed user interface) had not yet been developed. EDSAC, a contemporary of the Mark I and precursor to a powerful tea logistics system that would set off the development of business computing, also used a teleprinter for input and output.

Many early commercial computers limited input and output to paper tape, often 5-bit for Baudot or 8-bit for ASCII with parity, as in the early days of computing preparation of a program was an exacting process that would not typically be done "on the fly" at a keyboard. It was, of course, convenient that teleprinters with tape punches could be used to prepare programs for entry into the computer.

Business computing is most obviously associated with IBM, a company that had large divisions building both computers and typewriters. The marriage of the two was inevitable considering the existing precedent. Beginning around 1960 it was standard for IBM computers to furnish a teleprinter as the operator interface, but IBM had a distinct heritage from the telecommunications industry and, for several reasons, was intent on maintaining that distinction. IBM's teleprinter-like devices were variously called Data Communications Systems, Printer-Keyboards, Consoles, and eventually Terminals. They generally operated over proprietary serial channels.

Other computer manufacturers didn't have typewriter divisions, and typewriters and teleprinters were actually rather complex mechanical devices and not all that easy to build. As a result, they tended to buy teleprinters from established manufacturers, often IBM or Western Electric. Consider the case of a rather famous non-IBM computer, the DEC PDP-1 of 1960. It came with a CRT graphics display as standard, and many sources will act as if this was the primary operator interface, but it is important to understand that early CRT graphics displays had a hard time with text. Text is rather complex to render when you are writing point-by-point to a CRT vector display from a rather slow machine. You would be surprised how many vertices a sentence has in it.

So despite the ready availability of CRTs in the 1960s (they were, of course, well established in the television industry), few computers used them for primary text input/output. Instead, the PDP-1 was furnished with a modified IBM typewriter as its console. This scheme of paying a third-party company (Soroban Engineering) to modify IBM typewriters for teleprinter control was apparently not very practical, and later DEC PDP models tended to use Western Electric Teletypes as user terminals. These had the considerable advantage that they were already designed to operate over long telephone circuits, making it easy to install multiple terminals throughout a building for time sharing use.

Indeed, time sharing was a natural fit for teleprinter terminals. With a modem-equipped teleprinter, you could "call in" to a time sharing computer over the telephone from a remote office. Most of the first practical "computer networks" (term used broadly) were not actually networks of computers, but a single computer with many remote terminals. This architecture evolved into the BBS and early Internet-like services such as CompuServe. The idea was surprisingly easy to implement once time sharing operating systems were developed; the necessary hardware was already available from Western Electric.

While I cannot swear to the accuracy of this attribution, many sources suggest that the term "tty" as a generic reference to a user terminal or serial I/O channel originated with DEC. It seems reasonable; DEC's software was very influential on the broader computer industry, particularly outside of IBM. UNIX originally targeted a PDP-11 with teleprinters. While I can't prove it, it seems quite believable that the tty terminology was adopted directly from RT-11 or another operating system that Bell Labs staff might have used on the PDP-11.

Computers were born of the teleprinter and would inevitably come to consume them. After all, what is a computer but a complex teleprinter? Today, displaying text and accepting it from a keyboard is among the most basic functions of computers, and computers continue to perform this task using an architecture that would be familiar to engineers in the 1970s. They would likely be more surprised by what hasn't changed than what has: many of us still spend a lot of time in graphical software pretending to be a video display terminal built for compatibility with teleprinters.

And we're still using that 7-bit ASCII code a lot, aren't we? At least Baudot died out and we get to enjoy lower case letters.

[1] Actor, singer, etc. Gene Autry had worked as a telegrapher before he began his career in entertainment. This resulted in no small number of stories of a celebrity stand-in at the telegraph office. Yes, this is about to be a local history anecdote. It is fairly reliably reported that Gene Autry once volunteered to stand in for the telegrapher and station manager at the small Santa Fe Railroad station in Socorro, New Mexico, as the telegrapher had been temporarily overwhelmed by the simultaneous arrival of a packed train and a series of telegrams. There are enough of these stories about Gene that I think he really did keep his Morse sharp well into his acting career.

[2] Baud is a somewhat confusing unit derived from Baudot. Baud refers to the number of symbols per second on the underlying communication medium. For simple binary systems (and thus many computer communications systems we encounter daily), baud rate is equivalent to bit rate (bps). For systems that employ multi-level signaling, the bit rate will be higher than the baud rate, as multiple bits are represented per symbol on the wire. Methods like QAM are useful because they result in bit rates that are many multiples of the baud rate, reducing the bandwidth required on the wire for a given bit rate.
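The relationship is simple enough to sketch numerically. The figures below are illustrative, not drawn from any particular modem standard:

```python
import math

def bit_rate(baud: float, levels: int) -> float:
    """Bit rate in bps for a line carrying `levels` distinct
    symbol values at `baud` symbols per second."""
    return baud * math.log2(levels)

# Simple binary signaling: baud and bps coincide.
print(bit_rate(300, 2))    # 300.0

# A 16-point constellation (e.g. QAM-16) carries 4 bits per
# symbol, so 2400 baud on the wire yields 9600 bps.
print(bit_rate(2400, 16))  # 9600.0
```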

2024-02-11 the top of the DNS hierarchy

In the past (in fact two years ago, proof I have been doing this for a while now!) I wrote about the "inconvenient truth" that structural aspects of the Internet make truly decentralized systems infeasible, due to the lack of a means to perform broadcast discovery. As a result, most distributed systems rely on a set of central, semi-static nodes to perform initial introductions.

For example, Bitcoin relies on a small list of volunteer-operated domain names that resolve to known-good full nodes. Tor similarly uses a small set of central "directory servers" that provide initial node lists. Both systems have these lists hardcoded into their clients; coincidentally, both have nine trusted, central hostnames.

This sort of problem exists in basically all distributed systems that operate in environments where it is not possible to shout into the void and hope for a response. The internet, for good historic reasons, does not permit this kind of behavior. Here we should differentiate between distributed and decentralized, two terms I do not tend to select very carefully. Not all distributed systems are decentralized, indeed, many are not. One of the easiest and most practical ways to organize a distributed system is according to a hierarchy. This is a useful technique, so there are many examples, but a prominent and old one happens to also be part of the drivetrain mechanics of the internet: DNS, the domain name system.

My reader base is expanding and so I will provide a very brief bit of background. Many know that DNS is responsible for translating human-readable names like "computer.rip" into the actual numerical addresses used by the internet protocol. Perhaps a bit fewer know that DNS, as a system, is fundamentally organized around the hierarchy of these names. To examine the process of resolving a DNS name, it is sometimes more intuitive to reverse the name, and instead of "computer.rip", discuss "rip.computer" [1].

This name is hierarchical, it indicates that the record "computer" is within the zone "rip". "computer" is itself a zone and can contain yet more records, we tend to call these subdomains. But the term "subdomain" can be confusing as everything is a subdomain of something, even "rip" itself, which in a certain sense is a subdomain of the DNS root "." (which is why, of course, a stricter writing of the domain name computer.rip would be computer.rip., but as a culture we have rejected the trailing root dot).
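The hierarchy is easy to see mechanically: split on the dots and read the labels from the right. A toy sketch, with the trailing dot restored to make the root explicit:

```python
def hierarchy(name: str) -> list[str]:
    """Return the chain of zones enclosing `name`, from the
    root down to the name itself. A toy: ignores edge cases
    like the root name alone or escaped dots."""
    labels = name.rstrip(".").split(".")
    # Walk from the root "." down to the full name.
    return [".".join(labels[i:]) + "." for i in range(len(labels), -1, -1)]

print(hierarchy("computer.rip"))
# ['.', 'rip.', 'computer.rip.']
```

A resolver with an empty cache walks this list left to right, asking the servers for each zone who is authoritative for the next one down.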

Many of us probably know that each level of the DNS hierarchy has authoritative nameservers, operated typically by whoever controls the name (or their third-party DNS vendor). "rip" has authoritative DNS servers provided by a company called Rightside Group, a subsidiary of the operator of websites like eHow that went headfirst into the great DNS land grab and snapped up "rip" as a bit of land speculation, alongside such attractive properties as "lawyer" and "navy" and "republican" and "democrat", all of which I would like to own the "computer" subdomain of, but alas such dictionary words are usually already taken.

"computer.rip", of course, has authoritative nameservers operated by myself or my delegate. Unlike some people I know, I do not have any nostalgia for BIND, and so I pay a modest fee to a commercial DNS operator to do it for me. Some would be surprised that I pay for this; DNS is actually rather inexpensive to operate and authoritative name servers are almost universally available as a free perk from domain registrars and others. I just like to pay for this on the general feeling that companies that charge for a given service are probably more committed to its quality, and it really costs very little and changing it would take work.

To the observant reader, this might leave an interesting question. If even the top-level domains are subdomains of a secret, seldom-seen root domain ".", who operates the authoritative name servers for that zone?

And here we return to the matter of even distributed systems requiring central nodes. Bitcoin uses nine hardcoded domain names for initial discovery of decentralized peers. DNS uses thirteen hardcoded root servers to establish the top level of the hierarchy.

These root servers are commonly referred to as a.root-servers.net through m.root-servers.net, and indeed those are their domain names, but remember that when we need to use those root servers we have no entrypoint into the DNS hierarchy and so are not capable of resolving names. The root servers are much more meaningfully identified by their IP addresses, which are "semi-hardcoded" into recursive resolvers in the form of what's often called a root hints file. You can download a copy, it's a simple file in BIND zone format that BIND basically uses to bootstrap its cache.
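The root hints file looks like any other zone data. A fragment, abbreviated to a single server (the addresses here are the well-known ones for root server A, but check a current copy of the hints file before relying on them):

```
.                        3600000      NS    A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
A.ROOT-SERVERS.NET.      3600000      AAAA  2001:503:ba3e::2:30
```

A recursive resolver only uses these records to send its first query for the "." NS set; the authoritative answer it receives then supersedes the hints, which is why stale hints files keep working for years as long as at least one listed address still responds.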

And yes, there are other DNS implementations too, a surprising number of them, even in wide use. But when talking about DNS history we can mostly stick to BIND. BIND used to stand for Berkeley Internet Name Domain, and it is an apt rule of thumb in computer history that anything with a reference to UC Berkeley in the name is probably structurally important to the modern technology industry.

One of the things I wanted to get at, when I originally talked about central nodes in distributed systems, is the impact it has on trust and reliability. The Tor project is aware that the nine directory servers are an appealing target for attack or compromise, and technical measures have been taken to mitigate the possibility of malicious behavior. The Bitcoin project seems to mostly ignore that the DNS seeds exist, but of course the design of the Bitcoin system limits their compromise to certain types of attacks. In the case of DNS, much like most decentralized systems, there is a layer of long-lived caching for top-level domains that mitigates the impact of unavailability of the root servers, but still, in every one of these systems, there is the possibility of compromise or unavailability if the central nodes are attacked.

And so there is always a layer of policy. A trusted operator can never guarantee the trustworthiness of a central node (the node could be compromised, or the trusted operator could turn out to be the FBI), but it sure does help. Tor's directory servers are operated by the Tor project. Bitcoin's DNS seeds are operated by individuals with a long history of involvement in the project. DNS's root nodes are operated by a hodgepodge of companies and institutions that were important to the early internet.

Verisign operates two, of course. A California university operates one, of course, but amusingly not Berkeley. Three are operated by various arms of US defense. The rest go to internet industry associations, the RIPE NCC, another university, and ICANN, which runs one itself. It's pretty random, though, and just reflects a set of organizations prominently involved in the early internet.

Some people, even some journalists I've come across, hear that there are 13 name servers and picture 13 4U boxes with a lot of blinking lights in heavily fortified data centers. Admittedly this description was more or less accurate in the early days, and a couple of the smaller root server operators did have single machines until surprisingly recently. But today, all thirteen root server IP addresses are anycast groups.

Anycast is not a concept you run into every day, because it's not really useful on local networks where multicast can be used. But it's very important to the modern internet. The idea is this: an IP address (really a subnetwork) is advertised by multiple BGP nodes. Other BGP nodes can select the advertisement they like the best, typically based on lowest hop count. As a user, you connect to a single IP address, but based on the BGP-informed routing tables of internet service providers your traffic could be directed to any number of sites. You can think of it as a form of load balancing at the IP layer, but it also has the performance benefit of users mostly connecting to nearby nodes, so it's widely used by CDNs for multiple reasons.
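The selection step can be caricatured in a few lines. This is a crude stand-in for BGP best-path selection (which weighs much more than path length), with made-up site names and AS numbers from the private-use range:

```python
def best_route(advertisements: dict[str, list[str]]) -> str:
    """Pick the advertising site with the shortest AS path,
    a rough proxy for how a BGP speaker breaks the tie between
    identical anycast prefixes heard from different neighbors."""
    return min(advertisements, key=lambda site: len(advertisements[site]))

# The "same" prefix advertised from three anycast sites, each
# reached through a different AS path from our vantage point:
routes = {
    "site-tokyo":  ["AS64496", "AS64511"],
    "site-denver": ["AS64500"],
    "site-madrid": ["AS64496", "AS64497", "AS64499"],
}
print(best_route(routes))  # site-denver: the shortest path wins
```

The point is that the choice is made by the routing system, not the client; every client addresses the same IP, and topology decides which site answers.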

For DNS, though, where we often have a bootstrapping problem to solve, anycast is extremely useful as a way to handle "special" IP addresses that are used directly. For authoritative DNS servers like 192.5.5.241 [2001:500:2f::f] [2] (root server F) or recursive resolvers like 8.8.8.8 [2001:4860:4860::8888] (Google public DNS), anycast is the secret that allows a "single" address to correspond to a distributed system of nodes.

So there are thirteen DNS root servers in the sense that there are thirteen independently administered clusters of root servers (with the partial exception of A and J, both operated by Verisign, due to their acquisition of former A operator Network Solutions). Each of the thirteen root servers is, in practice, a fairly large number of anycast sites, sometimes over 100. The root server operators don't share much information about their internal implementation, but one can assume that in most cases the anycast sites consist of multiple servers as well, fronted by some sort of redundant network appliance. There may only be thirteen of them, but each of the thirteen is quite robust. For example, the root servers typically place their anycast sites in major internet exchanges distributed across both geography and provider networks. This makes it unlikely that any small number of failures would seriously affect the number of available sites. Even if a root server were to experience a major failure due to some sort of administration problem, there are twelve more.

Why thirteen, you might ask? No good reason. The number of root servers basically grew until the answer to an NS request for "." hit the 512 byte limit on UDP DNS responses. Optimizations over time allowed this number to grow (actually using single letters to identify the servers was one of these optimizations, allowing the basic compression used in DNS responses to collapse the matching root-servers.net part). Of course IPv6 blew DNS response sizes completely out of the water, leading to the development of the EDNS extension that allows for much larger responses.
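A back-of-the-envelope accounting shows roughly how the arithmetic works out. This is my own simplified model, assuming maximal name compression and ignoring various details of real priming responses, so treat the exact totals as illustrative:

```python
HEADER, QUESTION = 12, 5     # fixed DNS header; question for ". NS"
RR_OVERHEAD = 12             # compressed owner name + type/class/TTL/rdlength
FIRST_NS = 20                # "a.root-servers.net." spelled out once in full
LATER_NS = 4                 # one letter + compression pointer to the shared suffix
A_GLUE = RR_OVERHEAD + 4     # IPv4 glue record
AAAA_GLUE = RR_OVERHEAD + 16 # IPv6 glue record

def priming_response(servers: int, with_ipv6: bool = False) -> int:
    """Estimated size in bytes of a root priming response."""
    size = HEADER + QUESTION
    size += (RR_OVERHEAD + FIRST_NS) + (servers - 1) * (RR_OVERHEAD + LATER_NS)
    size += servers * A_GLUE
    if with_ipv6:
        size += servers * AAAA_GLUE
    return size

print(priming_response(13))        # 449: squeaks in under 512
print(priming_response(13, True))  # 813: IPv6 glue blows past the limit
```

By this crude count the IPv4-only response fits with little room to spare, while adding AAAA glue sails well past 512 bytes, which is roughly the situation EDNS was created to relieve.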

13 is no longer the practical limit, but with how large some of the 13 are, no one sees a pressing need to add more. Besides, can you imagine the political considerations in our modern internet environment? The proposed operator would probably be Cloudflare or Google or Amazon or something and their motives would never be trusted. Incidentally, many of the anycast sites for root server F (operated by ISC) are Cloudflare data centers used under agreement.

We are, of course, currently trusting the motives of Verisign. You should never do this! But it's been that way for a long time, we're already committed. At least it isn't Network Solutions any more. I kind of miss when SRI was running DNS and military remote viewing.

But still, there's something a little uncomfortable about the situation. Billions of internet hosts depend on thirteen "servers" to have any functional access to the internet.

What if someone attacked them? Could they take the internet down? Wouldn't this cause a global crisis of a type seldom before seen? Should I be stockpiling DNS records alongside my canned water and iodine pills?

Wikipedia contains a great piece of comedic encyclopedia writing. In its article on the history of attacks on DNS root servers, it mentions the time, in 2012, that some-pastebin-user-claiming-to-be-Anonymous (one of the great internet security threats of that era) threatened to "shut the Internet down". "It may only last one hour, maybe more, maybe even a few days," the statement continues. "No matter what, it will be global. It will be known."

That's the end of the section. Some Wikipedia editor, no doubt familiar with the activities of Anonymous in 2012, apparently considered it self-evident that the attack never happened.

Anonymous may not have put in the effort, but others have. There have been several apparent DDoS attacks on the root DNS servers. One, in 2007, was significant enough that four of the root servers suffered---but there were nine more, and no serious impact was felt by internet users. This attack, like most meaningful DDoS, originated with a botnet, one with its footprint primarily in Korea but its C2 in the United States. The motivation for the attack, and who launched it, remains unknown.

There is a surprisingly large industry of "booters," commercial services that, for a fee, will DDoS a target of your choice. These tend to be operated by criminal groups with access to large botnets; the botnets are sometimes bought and sold and get their tasking from a network of resellers. It's a competitive industry. In the past, booters and botnet operators have sometimes been observed announcing a somewhat random target and taking it offline as, essentially, a sales demonstration. Since these demonstrations are a known behavior, any time a botnet targets something important for no discernible reason, analysts have a tendency to attribute it to a "show of force." I have little doubt that this is sometimes true, but as with the tendency to attribute monumental architecture to deity worship, it might be an overgeneralization of the motivations of botnet operators. Sometimes I wonder if they made a mistake, or maybe they were just a little drunk and a lot bored, who is to say?

The problem with this kind of attribution is evident in the case of the other significant attack on the DNS root servers, in 2015. Once again, some root servers were impacted badly enough that they became unreliable, but other root servers held on and there was little or even no impact to the public. This attack, though, had some interesting properties.

In the 2007 incident, the abnormal traffic to the root servers consisted of large, mostly-random DNS requests. This is basically the expected behavior of a DNS attack; using randomly generated hostnames in requests ensures that the responses won't be cached, making the DNS server exert more effort. Several major botnet clients have this "random subdomain request" functionality built in, normally used for attacks on specific authoritative DNS servers as a way to take the operator's website offline. Chinese security firm Qihoo 360, based on a large botnet honeypot they operate, reports that this type of DNS attack was very popular at the time.

The 2015 attack was different, though! Wikipedia, like many other websites, describes the attack as "valid queries for a single undisclosed domain name and then a different domain the next day." In fact, the domain names were disclosed, by at least 2016. The attack happened on two days. On the first day, all requests were for 336901.com. The second day, all requests were for 916yy.com.

Contemporaneous reporting is remarkably confused on the topic of these domain names, perhaps because they were not widely known, perhaps because few reporters bothered to check up on them thoroughly. Many sources make it sound like they were random domain names perhaps operated by the attacker, one goes so far as to say that they were registered with fake identities.

Well, my Mandarin isn't great, and I think the language barrier is a big part of the confusion. No doubt another part is a Western lack of familiarity with Chinese internet culture. To an American in the security industry, 336901.com would probably look at first like the result of a DGA or domain generation algorithm. A randomly-generated domain used specifically to be evasive. In China, though, numeric names like this are quite popular. Qihoo 360 is, after all, domestically branded as just 360---360.cn.

As far as I can tell, both domains were pretty normal Chinese websites related to mobile games. It's difficult or maybe impossible to tell now, but it seems reasonable to speculate that they were operated by the same company. I would assume they were something of a gray market operation, as there's a huge intersection between "mobile games," "gambling," and "target of DDoS attacks." For a long time, perhaps still today in the right corners of the industry, it was pretty routine for gray-market gambling websites to pay booters to DDoS each other.

In a 2016 presentation, security researchers from Verisign (Weinberg and Wessels) reported on their analysis of the attack based on traffic observed at Verisign root servers. They conclude that the traffic likely originated from multiple botnets or at least botnet clients with different configurations, since the attack traffic can be categorized into several apparently different types [3]. Based on command and control traffic from a source they don't disclose (perhaps from a Verisign honeynet?), they link the attack to the common "BillGates" [4] botnet. Most interestingly, they conclude that it was probably not intended as an attack on the DNS root: the choice of fixed domain names just doesn't make sense, and the traffic wasn't targeted at all root servers.

Instead, they suspect it was just what it looks like: an attack on the two websites the packets queried for, that for some reason was directed at the root servers instead of the authoritative servers for that second-level domain. This isn't a good strategy; the root servers are a far harder target than your average web hosting company's authoritative servers. But perhaps it was a mistake? An experiment to see if the root server operators might mitigate the DDoS by dropping requests for those two domains, incidentally taking the websites offline?

Remember that Qihoo 360 operates a large honeynet and was kind enough to publish a presentation on their analysis of root server attacks. Matching Verisign's conclusions, they link the attack to the BillGates botnet, and also note that they often observe multiple separate botnet C2 servers send tasks targeting the same domain names. This probably reflects the commercialized nature of modern botnets, with booters "subcontracting" operations to multiple botnet operators. It also handily explains Verisign's observation that the 2015 attack traffic seems to have come from more than one implementation of a DNS DDoS.

360 reports that, on the first day, five different C2 servers tasked bots with attacking 336901.com. On the second day, three C2 servers tasked for 916yy.com. But they also have a much bigger revelation: throughout the time period of the attacks, they observed multiple tasks to attack 916yy.com using several different methods.

360 concludes that the 2015 DNS attack was most likely the result of a commodity DDoS operation that decided to experiment, directing traffic at the DNS roots instead of the authoritative server for the target to see what would happen. I doubt they thought they'd take down the root servers, but it seems totally reasonable that they might have wondered if the root server operators would filter DDoS traffic based on the domain name appearing in the requests.

Intriguingly, they note that some of the traffic originated with a DNS attack tool that had significant similarities to BillGates but didn't produce quite the same packets. We will probably never know, but a likely explanation is that some group modified the BillGates DNS attack module or implemented a new one based on the method used by BillGates.

Tracking botnets gets very confusing very fast, there are just so many different variants of any major botnet client! BillGates originated, for example, as a Linux botnet. It was distributed to servers, not only through SSH but through vulnerabilities in MySQL and ElasticSearch. It was unusual, for a time, in being a major botnet that skipped over the most common desktop operating system. But ports of BillGates to Windows were later observed, distributed through an Internet Explorer vulnerability---classic Windows. Why someone chose to port a Linux botnet to Windows instead of using one of the several popular Windows botnets (Conficker, for example) is a mystery. Perhaps they had spent a lot of time building out BillGates C2 infrastructure and, like any good IT operation, wanted to simplify their cloud footprint.

High in the wizard's tower of the internet, thirteen elders are responsible for starting every recursive resolver on its own path to truth. There's a whole Neal Stephenson for Wired article there. But in practice it's a large and robust system. The extent of anycast routing used for the root DNS servers, to say nothing of CDNs, is one of those things that challenges our typical stacked view of the internet. Geographic load balancing is something we think of at high layers of the system, it's surprising to encounter it as a core part of a very low level process.

That's why we need to keep our thinking flexible: computers are towers of abstraction, and complexity can be added at nearly any level, as needed or convenient. Seldom is this more apparent than it is in any process called "bootstrapping." Some seemingly simpler parts of the internet, like DNS, rely on a great deal of complexity within other parts of the system, like BGP.

Now I'm just complaining about pedagogical use of the OSI model again.

[1] The fact that the DNS hierarchy is written from right-to-left while it's routinely used in URIs that are otherwise read left-to-right is one of those quirks of computer history. Basically an endianness inconsistency. Like American date order, to strictly interpret a URI you have to stop and reverse your analysis part way through. There's no particular reason that DNS is like that, there was just less consistency over most significant first/least significant first hierarchical ordering at the time and contemporaneous network protocols (consider the OSI stack) actually had a tendency towards least significant first.

[2] The IPv4 addresses of the root servers are ages old and mostly just a matter of chance, but the IPv6 addresses were assigned more recently and allowed an opportunity for something more meaningful. Reflecting the long tradition of identifying the root servers by their letter, many root server operators use IPv6 addresses where the host part can be written as the single letter of the server (i.e. root server C at [2001:500:2::c]). Others chose a host part of "53," a gesture at the port number used for DNS (i.e. root server J, [2001:7fe::53]). Others seem more random, Verisign uses 2:30 for both of their root servers (i.e. root server A, [2001:503:ba3e::2:30]), so maybe that means something to them, or maybe it was just convenient. Amusingly, the only operator that went for what I would call an address pun is the Defense Information Systems Agency, which put root server G at [2001:500:12::d0d].

[3] It really dates this story that there was some controversy around the source IPs of the attack, originating with none other than deceased security industry personality John McAfee. He angrily insisted that it was not plausible that the source IPs were spoofed. Of course botnets conducting DDoS attacks via DNS virtually always spoof the source IP, as there are few protections in place (at the time almost none at all) to prevent it. But John McAfee has always had a way of ginning up controversy where none was needed.

[4] Botnets are often bought, modified, and sold. They tend to go by various names from different security researchers and different variants. I'm calling this one "BillGates" because that's the funniest of the several names used for it.

2024-01-31 multi-channel audio part 2

Last time, we left off at the fact that modern films are distributed with their audio in multiple formats. Most of the time, there is a stereo version of the audio, and a multi-channel version of the audio that is perhaps 5.1 or 7.1 and compressed using one of several codecs that were designed within the film industry for this purpose.

But that was all about film, in physical form. In the modern world, films go out to theaters in the form of Digital Cinema Packages, a somewhat elaborate format that basically comes down to an encrypted motion JPEG 2000 stream with PCM audio. There are a lot of details there that I don't know very well and I don't want to get hung up on anyway, because I want to talk about the consumer experience.

As a consumer, there are a lot of ways you get movies. If you are a weirdo, you might buy a Blu-Ray disc. Optical discs are a nice case, because they tend to conform to a specification that allows relatively few options (so that players are reasonable to implement). Blu-Ray discs are allowed to encode their audio as linear PCM [1], Dolby Digital, Dolby TrueHD, DTS, DTS-HD, or DRA.

DRA is a common standard in the Chinese market but not in the US (that's where I live), so I'll ignore it. That still leaves three basic families of codecs, each of which have some variations. One of the interesting things about the Blu-Ray specification is that PCM audio can incorporate up to eight channels. The Blu-Ray spec allows up to 27,648 Kbps of audio, so it's actually quite feasible to do uncompressed, 24-bit, 96 kHz, 7.1 audio on a Blu-Ray disc. This is an unusual capability in a consumer standard, and it makes sense of the terribly named Blu-Ray High Fidelity Pure Audio standard for Blu-Ray audio discs. Stick a pin in that, though, because you're going to have a tough time actually playing uncompressed 7.1.
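
The arithmetic behind that claim is easy to check. A quick sketch, with Kbps meaning 1,000 bits per second, the way the Blu-Ray figure is usually quoted:

```python
# Does uncompressed 7.1 PCM fit in Blu-Ray's audio budget?
# The figures are the ones quoted above, not taken from the spec text itself.

def pcm_bitrate_kbps(channels: int, bits: int, sample_rate_hz: int) -> float:
    """Bitrate of linear PCM in Kbps (1 Kbps = 1,000 bits/s)."""
    return channels * bits * sample_rate_hz / 1000

BUDGET_KBPS = 27_648  # Blu-Ray's maximum audio bitrate

rate = pcm_bitrate_kbps(channels=8, bits=24, sample_rate_hz=96_000)
print(f"24-bit/96 kHz 7.1 PCM: {rate:.0f} Kbps of a {BUDGET_KBPS} Kbps budget")
```

24-bit, 96 kHz, 7.1 works out to 18,432 Kbps, comfortably under the cap; pushing the sample rate to 192 kHz (36,864 Kbps) is where it stops fitting.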

On the other hand, you might use a streaming service. There's about a million of those and half of them have inane names ending in Plus, so I'm going to simplify by pretending that we're back in 2012 and Netflix is all that really matters. We can infer from Netflix help articles that Netflix delivers audio as AAC or Dolby Digital.

Or, consider the case of video files that you obtained by legal means. I looked at a few of the movies on my NAS to take a rough sampling. Most older films, and some newer ones, have stereo AAC audio. Some have what VLC describes as A52 aka AC3. A/52 is an ATSC standard that is equivalent to AC3, and AC-3 (hyphen inconsistent) is sort of the older name of Dolby Digital or the name of the underlying transport stream format, depending on how you squint at it. Less common, in my hodgepodge sample, is DTS, but I can find a few.

VLC typically describes the DTS and Dolby Digital streams as 3F2M/LFE, which is a somewhat eccentric (and I think specific to VLC) notation for 5.1 surround. An interesting detail is that VLC differentiates 3F2M/LFE and 3F2R/LFE, both 5.1, but with the two "surround" channels assigned to either side or rear positions. While 5.1 configurations with the surround channels to the side seem to be more standard, you could potentially put the two surround channels to the rear. Some formats have channel mapping metadata that can differentiate the two.

Because there is no rest for the weary, there is some inconsistency between "5.1 side" and "5.1 rear" in different standards and formats. At the end of the day, most applications don't really differentiate. I tend to consider surround channels on the side to be "correct," in that movie theaters are configured that way and thus it's ostensibly the design target for films. One of few true specifications I could find for general use, rather than design standards specific to theaters like THX, is ITU-R BS.775. It states that the surround channels of a 5.1 configuration should be mostly to the side, but slightly behind the listener.

That digression aside, it's unsurprising that a video file could contain a multi-channel stream. Most video containers today can support basically arbitrary numbers of streams, and you could put uncompressed multichannel audio into such a container if you wanted. And yet, multi-channel audio in films almost always comes in the form of a Dolby Digital or DTS stream. Why is that? Well, in part, because of tradition: they used to be the formats used by theaters, although digital cinema has somewhat changed that situation and the consumer versions have usually been a little different in the details. But the point stands, films are usually mastered in Dolby or DTS, so the "home video" release goes out with Dolby or DTS.

Another reason, though, is the problem of interconnections.

Let's talk a bit about interconnections. In a previous era of consumer audio, the age of "hi-fi," component systems dominated residential living rooms. In a component system, you had various audio sources that connected to a device that came to be known as a "receiver" since it typically had an FM/AM radio receiver integrated. It is perhaps more accurate to refer to it as an amplifier since that's the main role it serves in most modern systems, but there's also an increasing tendency to think of their input selection and DSP features as part of a preamp. The device itself is sometimes referred to as a preamp, in audiophile circles, when component amplifiers are used to drive the actual speakers. You can see that in these conventional component systems you need to move audio signals between devices. This kind of setup, though, is not common in households with fewer than four bathrooms and one swimming pool.

Most consumers today seem to have a television and, hopefully, some sort of audio device like a soundbar. Sometimes there are no audio interconnections at all! Often the only audio interconnection is from the TV to the soundbar via HDMI. Sometimes it's wireless! So audio interconnects as a topic can feel a touch antiquated today, but these interconnects still matter a lot in practice. First, they are often either the same as something used in industry or similar to something used in industry. Second, despite the increasing prevalence of 5.1 and 7.1 soundbar systems with wireless satellites, the kind of people with a large Blu-Ray collection are still likely to have a component home theater system. Third, legacy audio interconnects don't die that quickly, because a lot of people have an older video game console or something that they want to work with their new TV and soundbar, so manufacturers tend to throw in one or two audio interconnects even if they don't expect most consumers to use them.

So let's think about how to transport multi-channel audio. An ancient tradition in consumer audio says that stereo audio will be sent between components on two sets of two-conductor cables terminated by RCA connectors. The RCA connector dates back to the Radio Corporation of America and, apparently, at least 1937. It remains in widespread service today. There are a surprising number of variations in this interconnect, in practice.

For one, the audio cables may be coaxial or just zipped up in a common jacket. Coaxial audio cables are a bit more expensive and a lot less flexible but admit less noise. There is a lot of confusion in this area because a particular digital transport we'll talk about later specified coaxial cables terminated in RCA connectors, but then is frequently used with non-coaxial cables terminated in RCA connectors, and for reasonable lengths usually still works fine. This has led to a lot of consumer confusion and people thinking that any cable with RCA connectors is coaxial, when in fact, most of them are not. Virtually all of them are not. Unless you specifically paid more money to get a coaxial one, it's not, and even then sometimes it's not, because Amazon is a hotbed of scams.

Second, though these connections are routinely described as "line level" as if that means something, there is remarkably little standardization of the actual signaling. There are various conventions like 1.7v peak-to-peak and 2v peak-to-peak and about 1v peak-to-peak, and few consumer manufacturers bother to tell you which convention they have followed. There are also a surprising number of ways of expressing signaling levels, involving different measurement bases (peak vs RMS) and units (dBV vs dBu), making it a little difficult to interpret specifications when they are provided. This whole mess is just one of the reasons you find yourself having to make volume adjustments for different sources, or having to tune input levels on receivers with that option [2].

But that's all sort of a tangent, the point here is multi-channel audio. You could, conceptually, move 5.1 over six RCA cables, or 7.1 over eight RCA cables. Home theater receivers used to give you this option, but much like analog HDTV connections, it has largely disappeared.

There is one other analog option: remember Pro Logic, from the film soundtracks, which matrixed four channels into the analog stereo? Some analog formats like VHS and LaserDisc often had a Pro Logic soundtrack that could be "decoded" (really dematrixed) by a receiver with that capability, which used to be common. In this case you can transport multi-channel audio over your normal two RCA cables. The matrixing technique was always sort of cheating, though, and produces inferior results to actual multichannel interconnects. It's no longer common either.

Much like video, audio interconnects today have gone digital. Consumer digital audio really took flight with the elegantly named Sony/Philips Digital Interface, or S/PDIF. S/PDIF specifies a digital format that is extremely similar to, but not quite the same as, a professional digital interconnect called AES3. AES3 is typically carried on a three-conductor (balanced) cable with XLR connectors, though, which are too big and expensive for consumer equipment. In one of the weirder decisions in the history of consumer electronics, one that I can only imagine came out of an intractable political fight, S/PDIF specified two completely different physical transports: one electrical, and one optical.

The electrical format should be transmitted over a coaxial cable with RCA connectors. In practice it is often used over non-coaxial cables with RCA connectors, which will usually work fine if the length is short and nothing nearby is too electrically noisy. S/PDIF over non-coaxial cables is "fine" in the same way that HDMI cables longer than you are tall are "fine." If it doesn't work reliably, try a more expensive cable and you'll probably be good.

The optical format is used with cheap plastic optical cables terminated in a square connector called Toslink, originally for Toshiba Link, after the manufacturer that gave us the optical variant. Toslink is one of those great disappointments in consumer products. Despite the theoretical advantages of an optical interconnect, the extremely cheap cables used with Toslink mean it's mostly just worse than the electrical transport, especially when it comes to range [3].

But the oddity of S/PDIF's sibling formats isn't the interesting thing here. Let's talk about the actual S/PDIF bitstream, the very-AES3-like format the audio actually needs to get through.

S/PDIF was basically designed for CDs, and so it comfortably carries CD audio: two channels of 16 bit samples at 44.1kHz. In fact, it can comfortably go further, carrying 20 (or with the right equipment even 24) bit samples at the 48 kHz sampling rate more common in digital audio other than CDs. That's for two channels, though. Make the leap to six channels for 5.1 and you are well beyond the capabilities of an S/PDIF transceiver.
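
To put rough numbers on that; the exact ceiling depends on the transceiver hardware, so treat the capacity figure here as illustrative rather than normative:

```python
# Why stereo fits on S/PDIF but 5.1 PCM does not. The S/PDIF frame
# carries exactly two subframes of up to 24 bits each, so a faster
# clock buys sample rate, not more discrete channels. Using a
# 24-bit/48 kHz stereo link as the assumed ceiling:

def pcm_bps(channels, bits, rate_hz):
    return channels * bits * rate_hz

link_capacity = pcm_bps(2, 24, 48_000)  # two 24-bit subframes at 48 kHz

cd       = pcm_bps(2, 16, 44_100)  # CD audio: fits comfortably
five_one = pcm_bps(6, 16, 48_000)  # even modest 5.1 PCM: double the budget

print(cd, link_capacity, five_one)
```

CD audio needs about 1.4 Mbps against a roughly 2.3 Mbps link, while even 16-bit 5.1 needs about 4.6 Mbps, and that's before you consider that the frame format simply has no place to put channels three through six.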

You see where this is going? Compression.

See, the problems that Dolby Digital and DTS solved, of fitting multichannel audio onto the limited space of a 35mm film print, also very much exist in the world of S/PDIF. CDs brought us uncompressed digital audio remarkably early on, but also set sort of a constraint on the bitrate of digital audio streams that ensured the opposite in the world of multi-channel theatrical sound. It sort of makes sense, anyway. DTS soundtracks came on CDs!

Of course even S/PDIF is looking rather long in the tooth today. I don't think I use it at all any more, which is not something I expected to be saying this soon. Today, though, all of my audio sources and sinks are either analog or have HDMI. HDMI is the de facto norm for consumer digital audio today.

HDMI is a complex thing when it comes to audio or, really, just about anything. Details like eARC and the specific HDMI version have all kinds of impacts on what kind of audio can be carried, and the same is true for video as well. I am going to spare a lengthy diversion into the many variants of HDMI, which seem almost as numerous as those of USB, and talk about HDMI 2.1.

Unsurprisingly, considering the numerous extra conductors and newer line coding, HDMI offers a lot more bandwidth for audio than S/PDIF. In fact, you can transport 8 channels of uncompressed 24-bit PCM at 192kHz. That's about 37 Mbps, which is not that fast for a data transport but sure is pretty fast for an audio cable. Considering the bandwidth requirements for 4K video at 120Hz, though, it's only a minor ask. With HDMI, compression of audio is no longer necessary.
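
That figure is easy to verify:

```python
# Checking the "about 37 Mbps" claim: 8 channels of 24-bit PCM at 192 kHz.
bps = 8 * 24 * 192_000
print(f"{bps / 1e6:.3f} Mbps")
```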

But we still usually do it.

Why? Well, basically everything can handle Dolby Digital or DTS, and so films are mostly mastered to Dolby Digital or DTS, and so we mostly use Dolby Digital or DTS. That's just the way of things.

One of the interesting implications of this whole thing is that audio stacks have to deal with multiple formats and figure out which format is in use. That's not really new, with Dolby Pro Logic you either had to turn it on/off with a switch or the receiver had to try to infer whether or not Pro Logic had been used to matrix a multichannel soundtrack to stereo. For S/PDIF, IEC 61937 standardizes a format that can be used to encapsulate a compressed audio stream with sufficient metadata to determine the type of compression. HDMI adopts the same standard to identify compressed audio streams (and, in general, HDMI audio is pretty much in the same bitstream format as good old S/PDIF, but you can have a lot more of it).
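
IEC 61937 wraps the compressed data in "bursts" that begin with a pair of sync words followed by a burst-info word identifying the codec. Here's a hedged sketch of the detection step; the sync values and the data-type field match the standard as I understand it, but the type table is abbreviated and real parsing has to handle zero padding and byte-order details omitted here:

```python
# Simplified IEC 61937 burst header parse. Real streams interleave
# bursts with zero padding to maintain the nominal sample rate, and
# byte order varies by transport; none of that is handled here.

SYNC = (0xF872, 0x4E1F)  # Pa, Pb sync words
DATA_TYPES = {1: "AC-3 (Dolby Digital)", 11: "DTS type I"}  # abbreviated table

def parse_burst_header(words):
    """words: 16-bit values from the S/PDIF payload, burst-aligned."""
    if tuple(words[0:2]) != SYNC:
        return None  # no burst preamble: treat the stream as plain PCM
    pc, pd = words[2], words[3]
    data_type = pc & 0x1F  # low bits of Pc identify the compression type
    return {"codec": DATA_TYPES.get(data_type, f"type {data_type}"),
            "length_bits": pd}

print(parse_burst_header([0xF872, 0x4E1F, 0x0001, 0x0600]))
```

A receiver doing this continuously on the incoming bitstream is, in essence, how "transparent" format switching works, and why a sink that briefly loses the preambles falls back to treating the data as PCM (the full-volume static you may have heard when this goes wrong).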

In practice, there are a lot of headaches around this format switching. For one, home theater receivers have to switch between decoding modes. They mostly do this transparently and without any fuss, but I've owned a couple that had occasional issues with losing track of which format was in use, leading to dropouts. Maybe related to signal dropouts but my current receiver has the same problem with internal sources, so it seems more like a software bug of some sort.

It's a lot more complicated when you get out of dedicated home theater devices, though. Consider the audio stack of a general-purpose operating system. First, PCs rarely have S/PDIF outputs, so we are virtually always talking about HDMI. For a surprisingly long time, common video cards had no support for audio over HDMI. This is fortunately a problem of the past, but unfortunately ubiquitous audio over HDMI means that your graphics drivers are now involved in the transport of audio, and graphics drivers are notoriously bad at reliably producing video, much less dealing with audio as a side business. I shudder to think of the hours of my life I have lost dealing with defects of AMD's DTS support.

Things are weird on the host software side, though. The operating system does not normally handle sound in formats even resembling Dolby Digital or DTS. So, when you play a video file with audio encoded in one of those formats, a "passthrough" feature is typically used to deliver the compressed stream directly to the audio (often actually video) device, without normal operating system intervention. We are reaching the point where this mostly just works but you will still notice some symptoms of the underlying complexity.

On Linux, it's possible to get this working, but in part because of licensing issues I don't think any distros will do it right out of the box. My knowledge may be out of date as I haven't tried for some time, but I am still seeing Kodi forum threads about bash scripts to bypass PulseAudio, so things seem mostly unchanged.

There are other frustrations, as well. For one, the whole architecture of multichannel audio interconnection is based around sinks detecting the mode used by the source. That means that your home theater receiver should figure out what your video player is doing, but your video player has no idea what your home theater receiver is doing. This manifests in maddening ways. Consider, for example, the number of blog posts I ran across (while searching for something else!) about how to make Netflix less quiet by disabling surround sound.

If Netflix has 5.1 audio they deliver it; they don't know what your speaker setup is. But what if you don't have 5.1 speakers? In principle you could downmix the 5.1 back to stereo, and a lot of home theater receivers have DSP modes that do this (and in general downmix 5.1 or 7.1 to whatever speaker channels are active, good for people with less common setups like my own 3.1). But you'd have to turn that on, which means having a receiver or soundbar or whatever that is capable, understanding the issue, and knowing how to enable that mode. That is way more than your average Netflix watcher wants to think about any of this. In practice, setting the Netflix player to only ever provide stereo audio is an easier fix.
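
The downmix itself is simple arithmetic. A minimal sketch using the commonly cited ITU-style coefficients (center and surrounds folded in at -3 dB); real receivers also have to decide what to do with the LFE, often just dropping it, and manage overall level to avoid clipping:

```python
# 5.1-to-stereo downmix of the kind a receiver's DSP mode applies.
# Coefficients are the commonly cited ITU-style values; the LFE is
# simply dropped here, which many downmixes also do.

ATT = 0.707  # -3 dB

def downmix_51_to_stereo(l, r, c, lfe, ls, rs):
    left  = l + ATT * c + ATT * ls
    right = r + ATT * c + ATT * rs
    return left, right

# Dialog mixed only into the center channel still reaches both ears:
print(downmix_51_to_stereo(0.0, 0.0, 1.0, 0.0, 0.0, 0.0))
```

Note that the center channel comes back 3 dB down in each output, which is one reason naive handling of 5.1 content on stereo setups so often reads as "the dialog is too quiet."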

The use of compressed multichannel formats that are decoded in the receiver rather than the computer playing back introduces other problems as well, like source equalization. If you have a computer connected to a home theater receiver (which is a ridiculous thing to do and yet here I am), you have two completely parallel audio stacks: "normal" audio that passes through the OS sound server and goes to the receiver as PCM, and "surround sound" that bypasses the OS sound server and goes to the receiver as Dolby Digital or DTS. It is very easy to have differences in levels, adjustments, latency, etc. between these two paths. The level problem here is just one of the several factors in the perennial "Plex is too quiet" forum threads [4].

Finally, let's talk about what may be, to some readers, the elephant in the room. I keep talking about Dolby Digital and DTS, but both are 5.1 formats, and 5.1 is going out of fashion in the movie world. Sure, there's Dolby Digital Plus which is 7.1, but it's so similar to the non-plus variant that there isn't much use in addressing them separately. Insert the "Plus" after Dolby Digital in the preceding paragraphs if it makes you feel better.

But there are two significantly different formats appearing on more and more film releases, especially in the relatively space-unconstrained Blu-Ray versions: lossless surround sound and object-based surround sound.

First, lossless is basically what it sounds like. Dolby TrueHD and DTS-HD are both formats that present 7.1 surround with only lossless compression, at the cost of a higher bitrate than older media and interconnects support. HDMI can easily handle these, and if you have a fairly new setup of a Blu-Ray player and recent home theater receiver connected by HDMI you should be able to enjoy a lossless digital soundtrack on films that were released with one. That's sort of the end of that topic, it's nothing that revolutionary.

But what about object-based surround sound? I'm using that somewhat lengthy term to try to avoid singling out one commercial product, but, well, there's basically one commercial product: Dolby Atmos. Atmos is heralded as a revolution in surround sound in a way that makes it sort of hard to know what it actually is. Here's the basic idea: instead of mastering a soundtrack by mixing audio sources into channels, you master a soundtrack by specifying the physical location (in Cartesian coordinates) of each sound source.

When the audio is played back, an Atmos decoder then mixes the audio into channels on the fly, using whatever channels are available. Atmos allows the same soundtrack to be used by theaters with a variety of different speaker configurations, and as a result, makes it practical for theaters to expand into much higher channel counts.
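
To make the idea concrete, here's a toy renderer. This is not Dolby's algorithm, just an illustration of mixing an "object" into whatever channels exist, using constant-power panning between the two nearest speakers on a single left-right axis (a real object renderer works in three dimensions and handles many more cases):

```python
# Toy object renderer: given an object's position, compute per-speaker
# gains for whatever speaker layout is present. Constant-power panning
# between the two nearest speakers on a 1-D axis; purely illustrative.
import math

def render_object(position, speakers):
    """position: -1.0 (left) .. 1.0 (right); speakers: {name: position}."""
    ordered = sorted(speakers.items(), key=lambda kv: kv[1])
    gains = {name: 0.0 for name, _ in ordered}
    for (n1, p1), (n2, p2) in zip(ordered, ordered[1:]):
        if p1 <= position <= p2:
            t = (position - p1) / (p2 - p1)        # 0 at n1, 1 at n2
            gains[n1] = math.cos(t * math.pi / 2)  # constant-power pair
            gains[n2] = math.sin(t * math.pi / 2)
            break
    return gains

# The same object position renders to whatever layout is present:
print(render_object(0.5, {"L": -1.0, "C": 0.0, "R": 1.0}))
print(render_object(0.5, {"L": -1.0, "R": 1.0}))
```

The same soundtrack data produces sensible gains for a three-speaker room and a two-speaker room alike, which is exactly the property that lets one Atmos master serve theaters with wildly different speaker counts.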

Theaters aren't nearly as important a part of the film industry as they used to be, though, and unsurprisingly Atmos is heavily advertised for consumer equipment as well. How exactly does that work?

Atmos is conveyed on consumer equipment as 7.1 Dolby Digital Plus or Dolby TrueHD with extra metadata.

If you know anything about HDR video, also known as SDR video with extra metadata, you will find this unsurprising. But some might be confused. The thing is, the vast majority of consumers don't have Atmos equipment, and with lossless compression soundtracks are starting to get very large so including two complete copies isn't very appealing. The consumer encoding of Atmos was selected to have direct backward compatibility to 7.1 systems, allowing normal playback on pre-Atmos equipment.

For Atmos-capable equipment, an extra PCM-like subchannel (at a reduced bitrate compared to the audio channels) is used to describe the 3D position of specific sound sources. Consumer Atmos decoders cannot support as many objects as the theatrical version, so part of the process of mastering an Atmos film for home release is clustering nearby objects into groups that are then treated as a single object by the consumer Atmos decoder. One way to think about this is that Atmos is downmixed to 7.1, and in the process a metadata stream is created that can be used to upmix back to Atmos mostly correctly. If it sounds kind of like matrix encoding it kind of is, in effect, which is perhaps part of why Dolby's marketing materials are so insistent that it is not matrix encoding. To be fair it is a completely different implementation, but has a similar effect of reducing the channel separation compared to the original source.

Also I don't think Atmos has really taken off in home setups? I might just be out of date here; half the soundbars on the market today claim Atmos support and amazing feats with their five channels, two of which are pointed up. I'm just pretty skeptical of the whole "we have made fewer, smaller speakers behave as if they were more, bigger speakers" school of audio products. Sorry Dr. Bose, there's just no replacement for displacement.

[1] The term Linear PCM or LPCM is used to clarify that no companding has been performed. This is useful because PCM originated for the telephone network, which uses companding as standard. LPCM clarifies that neither μ-law companding nor A-law companding has been performed. I will mostly just use PCM because I'm talking about movies and stuff, where companding digital audio is rare.

[2] There is also the matter of magnetic sources like turntables and microphones that produce much lower output levels than a typical "line level." Ideally you need a preamplifier with adjustable gain for these, although in the case of turntables there are generally accepted gain levels for the two common types of cartridges. A lot of preamplifiers either let you choose from those two or give you no control at all. Traditionally a receiver would have a built-in preamplifier to bring up the level of the signal on the turntable inputs, but a lot of newer receivers have left this out to save money, which leads to hipsters with vinyl collections having to really crank the volume.

[3] I don't feel like I should have to say this, but in the world of audio, I probably do: if it works, it doesn't matter! The problem with optical is that it develops reliability problems over shorter lengths than the electrical format. If you aren't getting missing samples (dropouts) in the audio, though, it's working fine and changing around cables isn't going to get you anything. In practice the length limitations on optical don't tend to matter very much anyway, since the average distance between two pieces of a component home theater system is, what, ten inches?

[4] Among the myriad other factors here is the more difficult problem that movies mix most of the dialog into the center channel while most viewers don't have a center channel. That means you need to remix the center channel into left and right to recover dialog. So-called professionals mastering Blu-Ray releases don't always get this right, and you're in even more trouble if you're having to do it yourself.

2024-01-21 multi-channel audio part 1

Stereophonic or two-channel audio is so ubiquitous today that we tend to refer to all kinds of pieces of consumer audio reproduction equipment as "a stereo." As you might imagine, this is a relatively modern phenomenon. While stereo audio in concept dates to the late 19th century, it wasn't common in consumer settings until the 1960s and 1970s. Those were very busy decades in the music industry, and radio stations, records, and film soundtracks all came to be distributed primarily in stereo.

Given the success of stereo, though, one wonders why larger numbers of channels have met more limited success. There are, as usual, a number of factors. For one, two-channel audio was thought to be "enough" by some, considering that humans have two ears. Now it doesn't quite work this way in practice, and we are more sensitive to the direction from which sound comes than our binaural system would suggest. Still, there are probably diminishing returns, with stereo producing the most notable improvement in listening experience over mono.

There are also, though, technical limitations at play. The dominant form of recorded music during the transition to stereo was the vinyl record. There is a fairly straightforward way to record stereo on a record, by using a cartridge with coils on two opposing axes. This is the limit, though: you cannot add additional channels as you have run out of dimensions in the needle's free movement.

This was probably the main cause of the failure of quadraphonic sound, the first music industry attempt at pushing more channels. Introduced almost immediately after stereo in the 1970s, quadraphonic or four-channel sound seemed like the next logical step. It couldn't really be encoded on records, so a matrix encoding system was used in which the front-rear difference was encoded as phase shift in the left and right channels. In practice this system worked poorly, and especially early quadraphonic systems could sound noticeably worse than the stereo version. Wendy Carlos, an advocate of quadraphonic sound but harsh critic of musical electronics, complained bitterly about the inferiority of so-called quadraphonic records when compared to true four-channel recordings, for example on tape.

Of course, four-channel tape players were vastly more expensive than record players in the 1970s, as they ironically remain today. Quadraphonic sound was in a bind: it was either too expensive or too poor of quality to appeal to consumers. Quadraphonic radio using the same matrix encoding, while investigated by some broadcasters, had its own set of problems and never saw permanent deployment. Alan Parsons famously produced Pink Floyd's "Dark Side of the Moon" in quadraphonic sound; the effort was a failure in several ways but most memorably because, by the time of the album's release in 1973, the quadraphonic experiment was essentially over.

Three-or-more-channel sound would have its comeback just a few years later, though, by the efforts of a different industry. Understanding this requires backtracking a bit, though, to consider the history of cinema prints.

Many are probably at least peripherally aware of Cinerama, an eccentric-seeming film format that used three separate cameras, and three separate projectors, to produce an exceptionally widescreen image. Cinerama's excess was not limited to the picture: it involved not only the three 35mm film reels for the three screen panels, but also a fourth 35mm film that was entirely coated with a magnetic substrate and was used to store seven channels of audio. Five channels were placed behind the screen, effectively becoming center, left, right, left side, and right side. The final two tracks were played back behind the audience, as the surround left and surround right.

Cinerama debuted in 1952, decades before 35mm films would typically carry even stereo audio. Like quadraphonic sound later, Cinerama was not particularly successful. By the time stereo records were common, Cinerama had been replaced by wider film formats and anamorphic formats in which the image was horizontally compressed by the lens of the camera, and expanded by the lens of the projector. Late Cinerama films like 2001: A Space Odyssey were actually filmed in Super Panavision 70 and projected onto Cinerama screens from a single projector with a specialized lens.

There's a reason people talk so much about Cinerama, though. While it was not a commercial success, it was influential on the film industry to come. Widescreen formats, mostly anamorphic, would become increasingly common in the following decades. It would take years longer, but so would seven-channel theatrical sound.

"Surround sound," as these multi-channel formats came to be known in the late '50s, would come and go in theatrical presentations throughout the mid-century even as the vast majority of films were presented monaurally, with only a single channel. Most of these relied on either a second 35mm reel for audio only, or the greater area for magnetic audio tracks allowed by 70mm film. Both of these options were substantially more expensive for the presenting theater than mono, limiting surround sound mostly to high-end theaters and premiers. For surround sound to become common, it had to become cheap.

1971's A Clockwork Orange (I will try not to fawn over Stanley Kubrick too much but you are learning something about my film preferences here) employed a modest bit of audio technology, something that was becoming well established in the music industry but was new to film. The magnetic recordings used during the production process employed Dolby Type A noise reduction, similar to what became popular on compact cassette tapes, for a slight improvement in audio quality. The film was still mostly screened in magnetic mono, but it was the beginning of a profitable relationship between Dolby Labs and the film industry. Over the following years a number of films were released with Dolby Type A noise reduction on the actual distribution print, and some theaters purchased decoders to use with these prints. Dolby had bigger ambitions, though.

Around the same time, Kodak had been experimenting with the addition of stereo audio to 35mm release prints, using two optical tracks. They applied Dolby noise reduction to these experimental prints, and brought Dolby in to consult. This presented the perfect opportunity to implement an idea Dolby had been considering. Remember the matrix encoded quadraphonic recording that had been a failure for records? Dolby licensed a later-generation matrix decoder design from Sansui, and applied it to Kodak's stereo film soundtracks, allowing separation into four channels. While the music industry had placed the four channels at the four corners of the soundstage, the film industry had different tastes, driven mostly by the need to place dialog squarely in the center of the field. Dolby's variant of quadraphonic audio was used to present left, right, center, and a "surround" or side channel. This audio format went through several iterations, including much improved matrix decoding, and along the way picked up a name that is still familiar today: Dolby Stereo.
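
The basic 4-2-4 matrix idea can be sketched in a few lines. This is a deliberate simplification: the real encoder applies 90-degree phase shifts to the surround channel, and Pro Logic's later improvement was active "steering" logic layered on top of this math, none of which is modeled here:

```python
# Much-simplified 4-2-4 matrix encode/decode in the style of Dolby
# Stereo. Omits the surround phase shifts and all steering logic;
# shows only the sum/difference math and why separation suffers.

A = 0.707  # -3 dB

def encode(l, c, r, s):
    lt = l + A * c + A * s
    rt = r + A * c - A * s  # surround enters the two channels out of phase
    return lt, rt

def passive_decode(lt, rt):
    return {
        "L": lt, "R": rt,    # fronts taken directly
        "C": A * (lt + rt),  # in-phase content steers to center
        "S": A * (lt - rt),  # out-of-phase content steers to surround
    }

# Round-trip a source mixed only into the center channel:
print(passive_decode(*encode(0.0, 1.0, 0.0, 0.0)))
```

The round trip shows the cost: a center-only source comes back with substantial energy leaked into left and right, which is the loss of channel separation that matrix systems are known for and that the later "logic" decoders worked to suppress.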

That Dolby Stereo is, in fact, a quadraphonic format reflects a general atmosphere of terminological confusion in the surround sound industry. Keep this in mind.

One of Dolby Stereo's most important properties was its backwards compatibility. The two optical tracks could be played back on an ordinary two-channel stereo system and still sound alright. They could even be placed on the print alongside the older magnetic mono audio, providing compatibility with mono theaters. This compatibility with fewer channels became one of the most important traits in surround sound systems, and somewhat incidentally served to bring them to the consumer. Since the Dolby Stereo soundtrack played fine on a two-channel system, home releases of films on formats like VHS and Laserdisc often included the original Dolby Stereo audio from the print. A small industry formed around these home releases, licensing the Dolby technology to sell consumer decoders that could recover surround sound from home video.

For cost reasons these decoders were inferior to Dolby's own in several ways, and to avoid the hazard of damage to the Dolby Stereo brand, Dolby introduced a new marketing name for consumer Dolby Stereo decoders: Dolby Surround.

By the 1980s, Dolby Stereo, or Dolby Surround, had become the most common audio format on theatrical presentations and their home video releases. Even some television programs and direct-to-video material were recorded in Dolby Surround. Consumer stereo receivers, in the variant that came to be known as the home theater receiver, often incorporated Dolby Surround decoders. Improvements in consumer electronics brought the cost of proper Dolby Stereo decoders down, and so the home systems came to resemble the theatrical systems as well. Seeking a new brand to unify the whole mess of Dolby Stereo and Dolby Surround (which, confusingly, were often 4 and 3 channel, respectively), Dolby seems to have turned to the "Advanced Logic" and "Full Logic" terms once used by manufacturers of quadraphonic decoders. Dolby's theatrical sound solution came to be known as Dolby Pro Logic. A Dolby Pro Logic decoder processed two audio channels to produce a four-channel output. According to a modern naming convention, Dolby Pro Logic is a 4.0 system: four full-bandwidth channels.

This entire thing, so far, has been a preamble to the topic I actually meant to discuss. It's an interesting preamble, though! I'll just apologize that, since I didn't set out to write a history of multi-channel audio distribution, this one isn't especially complete. I left out a number of interesting attempts at multi-channel formats, of which the film industry produced a surprising number, and instead focused on the ones that were influential and/or used for Kubrick films [1].

Dolby Pro Logic, despite its impressive name, was still an analog format, based on an early '70s technique. Later developments would see an increase in the number of channels, and the transition to digital audio formats.

Recall that 70mm film provided six magnetic audio channels, which were often used in an approximation of the seven-channel Cinerama format. Dolby experimented with the six-channel format, though, confusingly also under the scope of the Dolby Stereo product. During the '70s, Dolby observed that the ability of humans to differentiate the source of a sound is significantly reduced as the sound becomes lower in frequency. This had obvious potential for surround sound systems, enabling something analogous to chroma subsampling in video. The lower-frequency component of surround sound does not need to be directional, and for a sense of directionality the high frequencies are most important.
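The chroma-subsampling analogy can be demonstrated with a toy crossover: split a signal into a low band, which can be shared non-directionally, and a high band, which carries the directional cues. This is a naive one-pole filter for illustration only, not any real bass-management design.

```python
def one_pole_lowpass(samples, alpha=0.02):
    """Naive one-pole IIR low-pass; alpha sets the cutoff.
    (Illustration only, not a production crossover filter.)"""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)  # output chases the input slowly
        out.append(y)
    return out

def split_band(samples):
    """Split into bass (shareable as one non-directional channel)
    and highs (directional). By construction, low + high
    reconstructs the input exactly."""
    low = one_pole_lowpass(samples)
    high = [x - l for x, l in zip(samples, low)]
    return low, high
```

A sustained (low-frequency) signal ends up almost entirely in the low band, while rapid sample-to-sample changes land in the high band, which is the whole idea behind splitting off a shared bass channel.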

Besides, bassheads were coming to the film industry. The long-used Academy response curve fell out of fashion during the '70s, in part due to Dolby's work, in part due to generally improved loudspeaker technology, and in part due to the increasing popularity of bass-heavy action films. Several 70mm releases used one or more of the audio channels as dedicated bass channels.

For the 1979 film Apocalypse Now in its 70mm print, Dolby premiered a 5.1 format in which three full-bandwidth channels were used for center, left, and right, two channels with high-pass filtering were used for surround left and surround right, and one channel with low-pass filtering was used for bass. Apocalypse Now was not, in fact, the first film to use this channel configuration, but Dolby promoted it far more than the studios had.

Interestingly, while I know less about live production history, the famous cabaret Moulin Rouge apparently used a 5.1 configuration during the 1980s. Moulin Rouge was prominent enough to give the 5.1 format a boost in popularity, perhaps particularly important because of the film industry's indecision on audio formats.

The seven-channel concept of the original Cinerama must have hung around in the film industry, as there was continuing interest in a seven-channel surround configuration. At the same time, the music industry widely adopted eight-channel tape recorders for studio use, making eight-channel audio equipment readily available. The extension to 7.1 surround, adding left and right side channels to the 5.1 configuration, was perhaps obvious. Indeed, what I find strangest about 7.1 is just how late it was introduced to film. Would you believe that the first film released (not merely remastered or mixed for Blu-Ray) in 7.1 was 2010's Toy Story 3?

7.1 home theater systems were already fairly common by then, a notable example of a modern trend afflicting the film industry: the large installed base and cost avoidance of the theater industry means that consumer home theater equipment now evolves more quickly than theatrical systems. Indeed, while 7.1 became the gold standard in home theater audio during the 2000s, 5.1 remains the dominant format in theatrical sound systems today.

Systems with more than eight channels are now in use, but haven't caught on in the consumer setting. We'll talk about those later. For most purposes, eight-channel 7.1 surround sound is the most complex you will encounter in home media. The audio may take a rather circuitous route to its 7.1 representation, but, well, we'll get to that.

Let's shift focus, though, and talk a bit about the actual encodings. Audio systems up to 7.1 can be implemented using analog recording, but numerous analog channels impose practical constraints. For one, they are physically large, making it infeasible to put even analog 5.1 onto 35mm prints. Prestige multi-channel audio formats like that of IMAX often avoided this problem by putting the audio onto an entirely separate film reel (much like Cinerama back at the beginning), synchronized with the image using a pulse track and special equipment. This worked well but drove up costs considerably. Dolby Stereo demonstrated that it was possible to matrix four channels into two channels (with limitations), but considering the practical bandwidth of the magnetic or optical audio tracks on film you couldn't push this technique much further.

Remember that the theatrical audio situation changed radically during the 1970s, going from almost universal mono audio to four channels as routine and six channels for premiers and 70mm. During the same decade, the music reproduction industry, especially in Japan, was exploring another major advancement: digital audio encoding.

In 1982, the Compact Disc launched. Numerous factors contributed to the rapid success of CDs over vinyl and, to a lesser but still great extent, the compact cassette. One of them was the quality of the audio reproduction. CDs were a night and day change: records could produce an excellent result but almost always suffered from dirt and damage. Cassette tapes were better than most of us remember but still had limited bandwidth and a high noise floor, requiring Dolby noise reduction for good results. The CD, though, provided lossless digital audio.

Audio is encoded on an audio CD in PCM format. PCM, or pulse code modulation, is a somewhat confusing term that originated in the telephone industry. If we were to reinvent it today, we would probably just call it digital modulation. To encode a CD, audio is sampled (at 44.1 kHz for historic reasons) and quantized to 16 bits. A CD carries two channels, stereo, which was by then the universal format for music. Put together, those add up to 1.4 Mbps. This was a very challenging data rate at the time, and indeed, practical CD players relied on the fact that the data did not need to be read perfectly (error correcting codes were used) and did not need to be stored (going directly to a digital-to-analog converter). These were conveniently common traits of audio reproduction systems, and the CD demonstrated that digital audio was far more practical than the computing technology of the time would suggest.
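The arithmetic behind that figure is simple enough to check:

```python
sample_rate = 44_100     # samples per second, per channel
bits_per_sample = 16     # quantization depth
channels = 2             # stereo

bitrate = sample_rate * bits_per_sample * channels
print(bitrate)  # 1411200 -- about 1.4 Mbps, before any of the
                # error-correction overhead added on the disc itself
```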

The future of theatrical sound would be digital. Indeed, many films would be distributed with their soundtracks on CD.

There remained a problem, though: a CD could encode two channels. Even four channels wouldn't fit within the data rate CD equipment was capable of, much less six or eight. The film industry would need formats that could encode six or eight channels of audio into either the bandwidth of a two-channel signal or into precious unused space on 35mm film prints.

Many ingenious solutions were developed. A typical 35mm film print today contains three distinct representations of the audio: a two-channel optical signal outside of the sprocket holes (which could encode Dolby Stereo), a continuous 2D barcode between the frame and sprocket holes which carries the SDDS (Sony Dynamic Digital Sound) digital signal, and individual 2D barcodes between the sprocket holes which encode the Dolby digital signal. Finally, a small pulse pattern at the very edge of the film provides a time code used for synchronization with audio played back from a CD, the DTS system.

But then, a typical 35mm film print today wouldn't exist, as 35mm film distribution has all but disappeared. Almost all modern film is played back entirely digitally from some sort of flexible stream container. You would think, then, that the struggles of encoding multi-channel audio are over. Many media container formats can, after all, contain an arbitrary number of audio channels.

Nothing is ever so simple. Much like a dedicated audio reel adds cost, multiple audio channels inflate file sizes, media cost, and in the era of playback from optical media, could stress the practical read rate. Besides, constraints of the past have a way of sticking around. Every multichannel audio format to find widespread success in the film industry has done so by maintaining backwards compatibility with simple mono and stereo equipment. That continues to be true today: modern multi-channel digital audio formats are still mostly built as extensions of an existing stereo encoding, not as truly new arbitrary-channel formats.

At the same time, the theatrical sound industry has begun a transition away from channel-centric audio formats and towards a more flexible system that is much further removed from the actual playback equipment.

Another trend has emerged since 1980 as well, which you probably already suspected from the multiple formats included in 35mm prints. Dolby's supremacy in multi-channel audio was never as complete as I made it sound, although they did become (and for some time remained) the most popular surround sound solution. They have always had competition, and that's still true today. Just as 35mm prints came with the audio in multiple formats, current digitally distributed films often do as well.

In Part 2, I'll get to the topic I meant to write about today before I got distracted by history: the landscape of audio formats included in digitally distributed films and common video files today, and some of the ways they interact remarkably poorly with computers. We're going to talk about:

  • Dolby Digital/AC-3/AC-4
  • DTS
  • Dolby Atmos
  • MPEG Surround/MPEG-H 3D
  • HDMI (ugh)
  • And more!

Postscript: Film dweebs will of course wonder where George Lucas is in this story. His work on the Star Wars trilogy led to the creation of THX, a company that will long be remembered for its distinctive audio identity. The odd thing is that THX was never exactly a technology company, although it was closely involved in sound technology developments of the time. THX was essentially a certification agency: THX theaters installed equipment by others (Altec Lansing, for much of the 20th century), and used any of the popular multi-channel audio formats.

To be a THX-certified theater, certain performance requirements had to be met, regardless of the equipment and format in use. THX certification requirements included architectural design standards for theaters, performance specifications for audio equipment, and a specific crossover configuration designed by Lucasfilm.

In 2002, Lucasfilm spun out THX and it essentially became a rental brand, shuffled into the ownership of gamer headphone manufacturer Razer today. THX certification still pops up in some consumer home theater equipment but is no longer part of the theatrical audio industry.

Read part 2 >

[1] Incidentally, Kubrick did not adapt to Dolby Stereo. Despite his early experience with Dolby noise reduction, all of his films would be released in mono except for 2001 (six-channel audio only in the Cinerama release) and Eyes Wide Shut (edited in Dolby Stereo after Kubrick's death).

2024-01-16 the tacnet tracker

Previously on Deep Space Nine, I wrote that "the mid-2000s were an unsettled time in mobile computing." Today, I want to share a little example. Over the last few weeks, for various personal reasons, I have been doing a lot of reading about embedded operating systems and ISAs for embedded computing. Things like the NXP TriMedia (Harvard architecture!) and pSOS+ (ran on TriMedia!). As tends to happen, I kept coming across references to a device that stuck in my memory: the TacNet Tracker. It features prominently on Wikipedia's list of applications for the popular VxWorks real-time operating system.

It's also an interesting case study in the mid-2000s field of mobile computing, especially within academia (or at least the Department of Energy). You see, "mobile computing" used to be treated as a field of study, a subdiscipline within computer science. Mobile devices imposed practical constraints, and they invited more sophisticated models of communication and synchronization than were used with fixed equipment. I took a class on mobile computing in my undergraduate, although it was already feeling dated at the time.

Today, with the ubiquity of smartphones, "mobile computing" is sort of the normal kind. Perhaps future computer science students will be treated to a slightly rusty elective in "immobile computing." The kinds of strange techniques you use when you aren't constrained by battery capacity. Busy loop to blink the cursor!

Sometime around 2004, Sandia National Laboratory's 6452 started work on the TacNet Tracker. The goal: to develop a portable computer device that could be used to exchange real-time information between individuals in a field environment. A presentation states that an original goal of the project was to use COTS (commercial, off-the-shelf) hardware, but it was found to be infeasible. Considering the state of the mobile computing market in 2004, this isn't surprising. It's not necessarily that there weren't mobile devices available; if anything, the opposite. There were companies popping up with various tablets fairly regularly, and then dropping them two years later. You can find any number of Windows XP tablets; but the government needed something that could be supported long-term. That perhaps explains the "Life-cycle limitations" bullet point the presentation wields against COTS options.

The only products with long-term traction were select phones and PDAs like the iPaq and Axim. Even this market collapsed almost immediately with the release of the iPhone, though Sandia engineers couldn't have known that was coming. Still, the capabilities and expandability of these devices were probably too limited for the Tracker's features. There's a reason all those Windows XP tablets existed. They weighed ten pounds, but they were beefy enough to run the data entry applications that were the major application of commercial mobile computing at the time.

The TacNet Tracker, though, was designed to fit in a pocket and to incorporate geospatial features. Armed with a Tracker, you could see the real-time location of other Tracker users on a map. You could even annotate the map, marking points and lines, and share these annotations with others. This is all very mundane today! At the time, though, it was an obvious and yet fairly complex application for a mobile device.

The first question, of course, is of architecture. The Tracker was built around the XScale PXA270 SoC. XScale, remember, was Intel's marketing name for their ARMv5 chips manufactured during the first half of the '00s. ARM was far less common back then, but was already emerging as a leader in power-efficient devices. The PXA270 was an early processor to feature speed-stepping, decreasing its clock speed when under low load to conserve power.

The PXA270 was attached to 64MB of SDRAM and 32MB of flash. It supported more storage on CompactFlash, had an integrated video adapter, and a set of UARTs that, in the Tracker, would support a serial interface, a GPS receiver, and Bluetooth.

A rechargeable Li-Poly pack allowed the Tracker to operate for "about 4 hours," but the presentation promises 8-12 hours in the future. Battery life was a huge challenge in this era. It probably took about as long to charge as it did to discharge, too. There hadn't been much development in high-rate embedded battery chargers yet.

The next challenge was communication. 802.11 WiFi was achieving popularity by this time, but suffered from a difficult and power-intensive association process even more than it does today. Besides, in mobile applications like those the Tracker was intended for, conventional WiFi's requirement for network infrastructure was impractical. Instead, Sandia turned to Motorola. The Tracker used a PCMCIA WMC6300 Pocket PC MEA modem. MEA stands for "Mesh Enabled Architecture," which seems to have been the period term for something Motorola later rebranded as MOTOMESH.

Marketed primarily for municipal network and public safety applications, MOTOMESH is a vaguely 802.11-adjacent proprietary radio protocol that provides broadband mesh routing. One of the most compelling features of MEA and MOTOMESH is its flexibility: MOTOMESH modems will connect to fixed infrastructure nodes under central management, but they can also connect directly to each other, forming ad-hoc networks between adjacent devices. 802.11 itself was conceptually capable of the same, but in practice, the higher-level software to support this kind of use never really emerged. Motorola offered a complete software suite for MOTOMESH, though, and for no less than Windows CE.

Yes, it really enforces the period vibes that the user manual for the WMC6300 modem starts by guiding you through using Microsoft ActiveSync to transfer the software to an HP iPaq. One did not simply put files onto a mobile device at the time; you had to sync them. Microsoft tried to stamp out an ecosystem of proprietary mobile device sync protocols with ActiveSync. Ultimately none of them would really see much use; PDAs were always fairly niche.

Sandia validated performance of the Tracker's MEA modem using an Elektrobit Propsim C2. I saw one of these at auction once (possibly the same one!), and sort of wish I'd bid on it. It's a chunky desktop device with a set of RF ports and the ability to simulate a wide variety of different radio paths between those ports, introducing phenomena like noise, fading, and multipath that will be observed in the real world. The results are impressive: in a simulated hilly environment, Trackers could exchange a 1MB test image in just 13.6 seconds. Remember that next time you are frustrated by LTE; we really take what we have today for granted.

But what of the software? Well, the Tracker ran VxWorks. Actually, that's how I ran into it: it seems that Wind River (developer of VxWorks) published a whitepaper about the Tracker, which made it onto a list of featured applications, which was the source a Wikipedia editor used to flesh out the article. Unfortunately I can't find the original whitepaper, only dead links to it. I'm sure it would have been a fun read.

VxWorks is a real-time operating system mostly used in embedded applications. It supports a variety of architectures, provides a sophisticated process scheduler with options for hard real-time and opportunistic workloads, offers network, peripheral bus, and file system support, and even a POSIX-compliant userspace. It remains very popular for real-time control applications today, although I don't think you'd find many UI-intensive devices like the Tracker running it. A GUI framework is actually a fairly new feature.

The main application for the Tracker was a map, with real-time location and annotation features. It seems that a virtual whiteboard and instant messaging application were also developed. A charmingly cyberpunk Bluetooth wrist-mounted display was pondered, although I don't think it was actually made.

But what was it actually for?

Well, federal R&D laboratories have a tendency to start a project for one application and then try to shop it around to others, so the materials Sandia published present a somewhat mixed message. A conference presentation suggests it could be used to monitor the health of soldiers in-theater (an extremely frequent justification for grants in mobile computing research!), for situational awareness among security or rescue forces, or for remote control of weapons systems.

I think a hint comes, though, from the only concrete US government application I can find documented: in 2008, Sandia delivered the TacNet Tracker system to the DoE Office of Secure Transportation (OST). OST is responsible for the over-road transportation of nuclear weapons and nuclear materials in the United States. Put simply, they operate a fleet of armored trucks and accompanying security escorts. There is a fairly long history, back to at least the '70s, of Sandia developing advanced radio communications systems for use by OST convoys. Many of these radio systems seemed ahead of their time or at least state of the art, but they often failed to gain much traction outside of DoE. Perhaps this relates to DoE culture, perhaps to the extent to which private contractors have captured military purchasing.

Consider, for example, that Sandia developed a fairly sophisticated digital HF system for communication between OST convoys and control centers. It seemed rather more advanced than the military's ALE solution, but a decade or so later OST dropped it and went to using ALE like everyone else (likely for interoperability with the large HF ALE networks operated by the FBI and CBP for domestic security use, although at some point the DoE itself also procured its own ALE network). A whole little branch of digital HF technology that just sort of fizzled out in the nuclear weapons complex. There's a lot of things like that, it's what you get when you put an enormous R&D capability into a particularly insular and secretive part of the executive branch.

Sandia clearly hoped to find other applications for the system. A 2008 Sandia physical security manual for nuclear installations recommends that security forces consider the TacNet Tracker as a situational awareness solution. It was pitched for several military applications. It's a little hard to tell because the name "TacNet" is a little too obvious, but it doesn't seem that the Sandia device ever gained traction in the military.

As it does with many technical developments that don't go very far, Sandia licensed the technology out. A company called Homeland Integrated Security Systems (HISS) bought it, a very typical name for a company that sells licensed government technology. HISS partnered with a UK-based company called Arcom to manufacture the TacNet Tracker as a commercial product, and marketed it to everyone from the military to search and rescue teams.

HISS must have found that the most popular application of the Tracker was asset tracking. It makes sense: the Tracker device itself lacked a display, under the assumption that it would be in a dock or used with an accessory body-worn display. By the late 2000s, HISS had rebranded the TacNet Tracker as the CyberTracker, and re-engineered it around a Motorola iDEN board. I doubt they actually did much engineering on this product; it seems to have been pretty much an off-the-shelf Motorola iDEN radio that HISS just integrated into their tracking platform. It was advertised as a deterrent to automotive theft and a way to track hijacked school buses in real time---the Chowchilla kidnapping was mentioned.

And that's the curve of millennial mobile computing: a cutting-edge R&D project around special-purpose national security requirements, pitched as a general purpose tactical device, licensed to a private partner, turned into yet another commodity anti-theft tracker. Like if LoJack had started out for nuclear weapons. Just a little story about telecommunications history.

Sandia applied for a patent on the Tracker in 2009, so it's probably still in force (ask a patent attorney). HISS went through a couple of restructurings but, as far as I can tell, no longer exists. The same goes for Arcom; a company by the same name that makes cable TV diagnostic equipment seems to be unrelated. Like the OLPC again, all that is left of the Tracker is a surprising number of used units for sale. I'm not sure who ever used the commercial version, but they sure turn up on eBay. I bought one, of course. It'll make a good paperweight.

2024-01-06 usb on the go

USB, the Universal Serial Bus, was first released in 1996. It did not achieve widespread adoption until some years later; for most of the '90s RS-232-ish serial and its awkward sibling the parallel port were the norm for external peripherals. It's sort of surprising that USB didn't take off faster, considering the significant advantages it had over conventional serial. Most significantly, USB was self-configuring: when you plugged a device into a host, a negotiation was performed to detect a configuration supported by both ends. No more decoding labels like 9600 8N1 and then trying both flow control modes!

There are some significant architectural differences between USB and conventional serial that come out of autoconfiguration. Serial ports had no real sense of which end was which. Terms like DTE and DCE were sometimes used, but they were a holdover from the far more prescriptive genuine RS-232 standard (which PCs and most peripherals did not follow) and often inconsistently applied by manufacturers. All that really mattered to a serial connection is that one device's TX pin went to the other device's RX pin, and vice versa. The real differentiation between DCE and DTE was the placement of these pins: in principle, a computer would have them one way around, and a peripheral the other way around. This meant that a straight-through cable would result in a crossed-over configuration, as expected.

In practice, plenty of peripherals used the same DE-9 wiring convention as PCs, and sometimes you wanted to connect two PCs to each other. Some peripherals used 8p8c modular jacks, some peripherals used real RS-232 connectors, and some peripherals used monstrosities that could only have emerged from the nightmares of their creators. The TX pin often ended up connected to the TX pin and vice versa. This did not work. The solution, as we so often see in networking, was a special cable that crossed over the TX and RX wires within the cable (or adapter). For historical reasons this was referred to as a null modem cable.

One of the other things that was not well standardized with serial connections was the gender of the connectors. Even when both ends featured the PC-standard DE-9, there was some inconsistency over the gender of the connectors on the devices and on the cable. Most people who interact with serial with any regularity probably have a small assortment of "gender changers" and null-modem shims in their junk drawer. Sometimes you can figure out the correct configuration from device manuals (the best manuals provide a full pinout), but often you end up guessing, stringing together adapters until the genders fit and then trying with and without a null modem adapter.

You will notice that we rarely go through this exercise today. For that we can thank USB's very prescriptive standards for connectors on devices and cables. The USB standard specifies three basic connectors, A, B, and C. There are variants of some connectors, mostly for size (mini-B, micro-B, even a less commonly used mini-A and micro-A). For the moment, we will ignore C, which came along later and massively complicated the situation. Until 2014, there was only A and B. Hosts had A, and devices had B.

Yes, USB fundamentally employs a host-device architecture. When you connect two things with USB, one is the host, and the other is the device. This differentiation is important, not just for the cable, but for the protocol itself. USB prior to 3, for example, does not feature interrupts. The host must poll the device for new data. The host also has responsibility for enumeration of devices to facilitate autoconfiguration, and for flow control throughout a tree of USB devices.
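The host-driven model is easy to caricature in a few lines: the device can never speak first, it can only answer when polled. A toy sketch of the control flow (not real USB framing, and the class and method names here are purely illustrative):

```python
import collections

class ToyDevice:
    """Caricature of a USB 1.x/2.0 device: it cannot initiate a
    transfer, it only answers the host's polls. Events queue up
    until the host asks for them."""
    def __init__(self):
        self.reports = collections.deque()

    def press_key(self, code):
        self.reports.append(code)  # event waits for the next poll

    def poll(self):
        """Host-initiated IN transfer: a pending report if there
        is one, otherwise a NAK (modeled here as None)."""
        return self.reports.popleft() if self.reports else None

dev = ToyDevice()
dev.press_key("A")
received = [dev.poll() for _ in range(3)]  # ["A", None, None]
```

Even so-called "interrupt" endpoints work this way: the host simply commits to polling them on a fixed schedule, which is why the host ends up responsible for scheduling and flow control across the whole bus.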

This architecture makes perfect sense for USB's original 1990s use-case of connecting peripherals (like mice) to hosts (like PCs). In fact, it worked so well that once USB1.1 addressed some key limitations it became completely ubiquitous. Microsoft used the term "legacy-free PC" to identify a new generation of PCs at the very end of the '90s and early '00s. While there were multiple criteria for the designation, the most visible to users was the elimination of multiple traditional ports (like the game port! remember those!) in favor of USB.

Times change, and so do interconnects. The consumer electronics industry made leaps and bounds during the '00s and "peripheral" devices became increasingly sophisticated. The introduction of portables running sophisticated operating systems pushed the host-device model to a breaking point. It is, of course, tempting to talk about this revolution in the context of the iPhone. I never had an iPhone though, so the history of the iDevice doesn't have quite the romance to me that it has to so many in this space [1]. Instead, let's talk about Nokia. If there is a Windows XP to Apple's iPhone, it's probably Nokia. They tried so hard, and got so far, but [...].

The Nokia 770 Internet Tablet was not by any means the first tablet computer, but it was definitely a notable early example. Introduced in 2005, it premiered the Linux-based Maemo operating system beloved by Nokia fans until iOS and Android killed it off in the 2010s. The N770 was one of the first devices to fall into a new niche: with a 4" touchscreen and OMAP/ARM SoC, it wasn't exactly a "computer" in the consumer sense. It was more like a peripheral, something that you would connect to your computer in order to load it up with your favorite MP3s. But it also ran a complete general-purpose operating system. The software was perfectly capable of using peripherals itself, and MP3s were big when you were storing them on MMC. Shouldn't you be able to connect your N770 to a USB storage device and nominate even more MP3s as favorites?

Obviously Linux had mainline USB mass storage support in 2005, and by extension Maemo did. The problem was USB itself. The most common use case for USB on the N770 was as a peripheral, and so it featured a type-B device connector. It was not permitted to act as a host. In fact, every PDA/tablet/smartphone type device with sophisticated enough software to support USB peripherals would encounter the exact same problem. Fortunately, it was addressed by a supplement to the USB 2.0 specification released in 2001.

The N770 did not follow the supplement. That makes it fun to talk about, both because it is weird and because it is an illustrative example of the problems that need to be solved.

The N770 featured an unusual USB transceiver on its SoC, seemingly unique to Nokia and called "Tahvo." The Tahvo controller exposed an interface (via sysfs in the Linux driver) that allowed the system to toggle it between device mode (its normal configuration) and host mode. This worked well enough with Maemo's user interface, but host mode had a major limitation. The N770 wouldn't provide power on the USB port; it didn't have the necessary electrical components. Instead, a special adapter cable was needed to provide 5v power from an alternate source.

So there are two main challenges for a device that wants to operate as either USB host or USB device:

  • The USB controller needs a way to determine if it should behave in host or device mode. Ideally, the user shouldn't have to think about this.
  • The USB controller needs to be able to supply power when in host mode, and in most practical situations also needs to accept power (e.g. for charging) when in device mode.

Note the "special cable" involved in host mode for the N770. You might think this was the ugliest part of the situation. You're not wrong, but it's also not really the hack. For many years to follow, the proper solution to this problem would also involve a special cable.

As I mentioned, since 2001 there has been a supplement USB specification called USB On-The-Go, commonly referred to as USB OTG, perhaps because On-The-Go is an insufferably early '00s name. It reminds me of, okay, here goes a full-on anecdote.

Anecdote

I attended an alternative middle school in Portland that is today known as the Sunnyside Environmental School. I could tell any number of stories about the bizarre goings-on at this school that you would scarcely believe, but it also had its merits. One of them, which I think actually came from the broader school district, was a program in which eighth graders were encouraged to "job shadow" someone in a profession they were interested in pursuing. By good fortune, a friend's father was an electrical engineer employed at Intel's Jones Farm campus, and agreed to be my host. I had actually been to Jones Farm a number of times on account of various extracurricular programs (in that era, essentially every STEM program in the Pacific Northwest operated on the largess of either Intel or Boeing, if not both). This was different, though: this guy had a row of engraved brass patent awards lining his cubicle wall and showed me through labs where technicians tinkered with prototype hardware. Foreshadowing a concerning later trend in my career, though, the part that stuck with me most was the meetings. I attended meetings, including one where this engineering team was reporting to leadership on the status of a few of their projects.

I am no doubt primed to make this comparison by the mediocre movie I watched last night, but I have to describe the experience as Wonka-esque. These EEs demonstrated a series of magical hardware prototypes to some partners from another company. Each was more impressive than the last. It felt like I was seeing the future in the making.

My host demonstrated his pet project, a bar that contained an array of microphones and used DSP methods to compare the audio from each and directionalize the source of sounds. This could be used for a sophisticated form of noise canceling in which sound coming from an off-axis direction could be subtracted, leaving only the voice of the speaker. If this sounds sort of unremarkable, that is perhaps a reflection of its success, as the same basic concept is now implemented in just about every laptop on the market. Back then, when the N770 was a new release, it was challenging to make work and my host explained that the software behind it usually crashed before he finished the demo, and sometimes it turned the output into a high pitched whine and he hadn't quite figured out why yet. I suppose that meeting was lucky.

But that's an aside. A long presentation, and then a debate with skeptical execs, revolved around a new generation of ultramobile devices that Intel envisioned. One, which I got to handle a prototype of, would eventually become the Intel Core Medical Tablet. It featured a chunky, colorful design that is clearly of the same vintage as the OLPC. It was durable enough to stand on, which a lab technician demonstrated with delight (my host, I suspect tired of this feat, picked up some sort of lab interface and dryly remarked that he could probably stand on it too). The Core Medical Tablet shared another trait with the OLPC: the kind of failure that leaves no impact on the world but a big footprint at recyclers. Years later, as an intern at Free Geek, I would come across at least a dozen.

Another facet of this program, though, was the Mobile Metro. The Metro was a new category of subnotebook, not just small but thin. A period article compares its 18mm profile to the somewhat thinner Motorola Razr, another product with an outsize representation in the Free Geek Thrift Store. Intel staff were confident that it would appeal to a new mobile workforce, road warriors working from cars and coffee shops. The Mobile Metro featured SideShow, a small e-ink display (in fact, I believe, a full Windows Mobile system) on the outside of a case that could show notifications and media controls.

The Mobile Metro was developed around the same time as the Classmate PC, but seems to have been even less successful. It was still in the conceptual stages when I heard of it. It was announced, to great fanfare, in 2007. I don't think it ever went into production. It had WiMax. It had inductive charging. It only had one USB port. It was, in retrospect, prescient in many ways both good and bad.

The point of this anecdote, besides digging up middle school memories while attempting to keep others well suppressed, is that the mid-2000s were an unsettled time in mobile computing. The technology was starting to enable practical compact devices, but manufacturers weren't really sure how people would use them. Some innovations were hits (thin form factors). Some were absolute misses (SideShow). Some we got stuck with (not enough USB ports).

End of anecdote

As far as I can tell, USB OTG wasn't common on devices until it started to appear on Android smartphones in the early 2010s. Android gained OTG support in 3.1 (codenamed Honeycomb, 2011), and it quickly appeared in higher-end devices. Now OTG support seems nearly universal for Android devices; I'm sure there are lower-end products where it doesn't work but I haven't yet encountered one. Android OTG support is even admirably complete. If you have an Android phone, amuse yourself sometime by plugging a hub into it, and then a keyboard and mouse. Android support for desktop input peripherals is actually very good and operating mobile apps with an MX Pro mouse is an entertaining and somewhat surreal experience. On the second smartphone I owned, which I hazily remember as a Samsung from 2012-2013, I used to take notes with a USB keyboard.

iOS doesn't seem to have sprouted user-exposed OTG support until the iPhone 12, although it seems like earlier versions probably had hardware support that wasn't exposed by the OS. I could be wrong about this; I can't find a straightforward answer in Apple documentation. The Apple Community Forums seem to be... I'll just say "below average." iPads seem to have gotten OTG support a lot earlier than the iPhone despite using the same connector, making the situation rather confusing. This comports with my general understanding of iOS, though, from working with bluetooth devices: Apple is very conservative about hardware peripheral support in iOS, and so it's typical for iOS to be well behind Android in this regard for purely software reasons. Ask me about how this has impacted the Point of Sale market. It's not positive.

But how does OTG work? Remember, USB specifies that hosts must have an A connector, and devices a B connector. Most smartphones, besides Apple products and before USB-C, sported a micro-B connector as expected. How OTG?

The OTG specification decouples, to some extent, the roles of A/B connector, power supply, and host/device role. A device with USB OTG support should feature a type AB socket that accommodates either an A or a B plug. Type AB is only defined for the mini and micro sizes, typically used on portable devices. The A or B connectors are differentiated not only by the shape of their shells (preventing a type-A plug being inserted into a B-only socket), but also electrically. The observant among you may have noticed that mini and micro B sockets and plugs feature five pins, while USB 2.0 only uses four. This is the purpose of the fifth pin: differentiation of type A and B plugs.

In a mini or micro type B plug, the fifth pin is floating (disconnected). In a mini or micro type A plug, it is connected to the ground pin. When you insert a plug into a type AB socket, the controller checks for connectivity between the fifth pin (called the ID pin) and the ground. If connectivity is present, the controller knows that it must act as an OTG A-device---it is on the "A" end of the connection. If there is no continuity, the more common case, the controller will act as an OTG B-device, a plain old USB device [2].
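The ID-pin decision above is simple enough to sketch as a toy function (illustrative Python, not real firmware; the function name is my own):

```python
def otg_role_from_id_pin(id_pin_grounded: bool) -> str:
    """Decide the initial OTG role from the ID pin, per the OTG
    supplement: a mini/micro-A plug ties the ID pin to ground,
    while a B plug leaves it floating."""
    if id_pin_grounded:
        # A-device: supplies VBUS power and starts out as the host.
        return "A-device (host, supplies VBUS)"
    # B-device: behaves as a plain old USB device.
    return "B-device (peripheral)"
```

The point of the sketch is that the cable, not the user, makes the decision: the controller never has to be told which end it's on.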

The OTG A-device is always responsible for supplying 5v power (see exception in [2]). By default, the A-device also acts as the host. This provides a basically complete solution for the most common OTG use-case: connecting a peripheral like a flash drive to your phone. The connector you plug into your phone identifies itself as an A connector via the ID pin, and your phone thus knows that it must supply power and act as host. The flash drive doesn't need to know anything about this, it has a B connection and acts as a device as usual. This simple case only became confusing when you consider a few flash drives sold specifically for use with phones that had a micro-A connector right on them. These were weird and I don't like them.

In the more common situation, though, you would use a dongle: a special cable. A typical OTG cable (enough Android phones of the era included one in the box that I have a couple in a drawer without ever having purchased one) provides a micro-A connector on one end and a full-size A socket on the other. With this adapter, you can plug any USB device into your phone with a standard USB cable.

Here's an odd case, though. What if you plug two OTG devices into each other? USB has always had this sort of odd edge-case. Some of you may remember "USB link cables," which don't really have a technical name but tend to get called Laplink cables after a popular vendor. Best Buy and Circuit City used to be lousy with these things, mostly marketed to people who had bought a new computer and wanted to transfer their files. These special USB cables had two A connectors, which might create the appearance that they connected two hosts, but in fact the cable (usually a chunky bit in the middle) acted as two devices to connect to two different hosts. The details of how these actually worked varied from product to product, but the short version is "it was proprietary." Most of them didn't work unless you found the software that came with them, but there are some pseudo-standard controllers supported out of the box by Windows or Linux. I would strongly suggest that you protect your mental state by not trying to use one.

OTG set out to address this problem more completely. First, it's important to understand that this in no way poses an exception to the rule that a USB connection has an A end and a B end. A USB cable you use to connect two phones together might, at first glance, appear to be B-B. But, if you inspect closer, you will find that one end is mini or micro A, and the other is mini or micro B. You may have to look close, the micro connectors in particular have a similar shell!

If you are anything like me, you are most likely to have encountered such a cable in the box with a TI-84+. These calculators had a type AB connector and came with a micro A->B cable to link two units. You might think, by extension, that the TI-84+ used USB OTG. The answer is kind of! The USB implementation on the TI-84+ and TI-84+SE was very weird, and the OS didn't support anything other than TIConnect. Eventually the TI-84+CE introduced a much more standard USB controller, although I think support for any OTG peripheral still has to be hacked on to the OS. TI has always been at the forefront of calculator networking, and it has always been very weird and rarely used.

This solves part of the problem: it is clear, when you connect two phones, which should supply power and which should handle enumeration. The A-device is, by default, in charge. There are problems where this interacts with common USB device types, though. One of the most common uses of USB with phones is mass storage (and its evil twin MTP). USB mass storage has a very strong sense of host and device at a logical level; the host can browse the device's files. When connecting two smartphones, though, you might want to browse from either end. Another common problem case here is that of the printer, or at least it would be if printer USB host support was ever usable. If you plug a printer into a phone, you might want to browse the phone as mass storage on the printer. Or you might want to use conventional USB printing to print a document from the phone's interface. In fact you almost certainly want to do the latter, because even with Android's extremely half-assed print spooler it's probably a lot more usable than the file browser your printer vendor managed to offer on its 2" resistive touchscreen.

OTG adds Host Negotiation Protocol, or HNP, to help in this situation. HNP allows the devices on a USB OTG connection to swap roles. While the A-device will always be the host when first connected, HNP can reverse the logical roles on demand.
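The key property of HNP is that the logical host/device roles can swap while the electrical responsibilities stay put. A toy model (illustrative Python, not the spec's actual state machine; the class and method names are made up):

```python
class OtgPort:
    """Toy model of one end of an OTG connection. The A-device
    always starts as host; HNP lets the two ends swap host/device
    roles, but the A-device keeps supplying VBUS throughout."""

    def __init__(self, is_a_device: bool):
        self.is_a_device = is_a_device
        # Initial roles are fixed by the ID pin at connect time.
        self.role = "host" if is_a_device else "device"

    def hnp_swap(self):
        # Role reversal is logical only: power responsibility
        # does not move with the host role.
        self.role = "device" if self.role == "host" else "host"


a_end, b_end = OtgPort(True), OtgPort(False)
a_end.hnp_swap()
b_end.hnp_swap()  # the B end now enumerates the A end
```

In the printer scenario above, this is what would let the phone start out as device (so the printer can browse it) and then take over as host to submit a print job, all without re-plugging the cable.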

This all sounds great, so where does it fall apart? Well, the usual places. Android devices often went a little off the script with their OTG implementations. First, while the specification makes the A-device responsible for bus power, many early phones simply couldn't supply it. Fortunately that seems to have been a pretty short-lived problem, only common in the first couple of generations of OTG devices. This wasn't the only limitation of OTG implementations; I don't have a good sense of scale but I've seen multiple reports that many OTG devices in the wild didn't actually support HNP: they just determined a role when connected based on the ID pin and could not change after that point.

Finally, and more insidiously, the whole thing about OTG devices having an AB connector didn't go over as well as intended. We actually must admire TI for their rare dedication to standards compliance. A lot of Android phones with OTG support had a micro-B connector only, and as a result a lot of OTG adapters use a micro-B connector.

There's a reason this was common; since A and B plugs are electrically differentiable regardless of the shape of the shell, the shell shape arguably doesn't matter. You could be a heavy OTG user with such a noncompliant phone and adapter and never notice. The problem only emerges when you get a (rare) standards-compliant OTG adapter or, probably more common, OTG A-B cable. Despite being electrically compatible, the connector won't fit into your phone. Of course this behavior feeds itself; as soon as devices with an improper B port were common, manufacturers of cables were greatly discouraged from using the correct A connector.

The downside, conceptually, is that you could plug an OTG A connector (with a B-shaped shell) into a device with no OTG support. In theory this could cause problems, in practice the problems don't seem to have been common since both devices would think they were B devices and (if standards compliant) not provide power. Essentially these improper OTG adapters create a B-B cable. It's a similar problem to an A-A cable but, in practice, less severe. Like an extension cord with two female ends. Home Depot might even help you make one of those.

While trying to figure out which iPhones had OTG support, I ran across an Apple Community thread where someone helpfully replied "I haven't heard of OTG in over a decade." Well, it's not a very helpful reply, but it's not exactly wrong either. No doubt the dearth of information on iOS OTG is in part because no one ever really cared. Much like the HDMI-over-USB support that a generation of Android phones included, OTG was an obscure feature. I'm not sure I have ever, even once, seen a human being other than myself make use of OTG.

Besides, it was completely buried by USB-C.

The thing is that OTG is not gone at all, in fact, it's probably more popular than ever before. There seems to be some confusion about how OTG has evolved with USB specifications. I came across more than one article saying that USB 3.1 Dual Role replaced OTG. This assertion is... confusing. It's not incorrect, but there's a good chance of it leading you in the wrong direction.

Much of the confusion comes from the fact that Dual-Role doesn't mean anything that specific. The term Dual-Role and various resulting acronyms like DRD and DRP have been applied to multiple concepts over the life of USB. Some vendors say "static dual role" to refer to devices that can be configured as either host or device (like the N770). Some vendors use dual role to identify chipsets that detect role based on the ID pin but are not actually capable of OTG protocols like HNP. Some articles use dual role to identify chipsets with OTG support. Subjectively, I think the intent of the changes in USB 3.1 was mostly to formally adopt the "dual role" term that was already the norm in informal use---and hopefully standardize the meaning.

For USB-C connectors, it's more complicated. USB-C cables are symmetric, they do not identify a host or device end in any way. Instead, the USB-C ports use resistance values to indicate their type. When either end indicates that it is only capable of the device role, the situation is simple and behaves basically the same way OTG did: the host detects that the other end is a device and behaves as the host.

When both ends support the host role, things work differently: the Dual Role feature of USB-C comes into play. The actual implementation is reasonably simple; a dual-role USB-C controller will attempt to set up a connection both ways and go with whichever succeeds. There are some minor complications on top of this, for example, the controller can be configured with a "preference" for host or device role. This means that when you plug your phone into your computer via USB-C, the computer will assume the host role, because although it's capable of either role, the phone is configured with a preference for the device role. That matches consumer expectations. When both devices are capable of dual roles and neither specifies a preference, the outcome is random. This scenario is interesting but not all that common in practice.
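The preference behavior can be sketched like this (a toy Python model; the function and its resolution rules are my own simplification of the spec's try-and-see behavior, not a real controller algorithm):

```python
import random


def drp_resolve(pref_a=None, pref_b=None):
    """Toy resolution of a USB-C connection between two dual-role
    ports. Returns (role_a, role_b). A port may prefer "host" or
    "device" (phones typically prefer "device", computers "host");
    with no preference on either side the outcome is effectively
    random. Conflicting preferences are resolved arbitrarily here;
    a real controller keeps retrying until the ends agree."""
    if pref_a == "device" or pref_b == "host":
        return ("device", "host")
    if pref_a == "host" or pref_b == "device":
        return ("host", "device")
    # Neither end cares: the coin flip stands in for whichever
    # controller happens to win the connection attempt first.
    return random.choice([("host", "device"), ("device", "host")])
```

So `drp_resolve(pref_a="device")` models the phone-to-laptop case: the phone ends up as device no matter which way the cable went in.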

The detection of host or device role by USB-C is based on the CC pins, basically a more flexible version of OTG's ID pin. There's another important difference between the behavior of USB-C and A/B: USB-C interfaces provide no power until they detect, via the CC pins, that the other device expects it. This is an important ingredient to mitigate the problem with A-A cables, that both devices will attempt to power the same bus.
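Roughly, the CC scheme works like this (a toy Python sketch; the resistance values are the commonly cited Type-C termination figures, recalled from memory and approximate, and the function is purely illustrative):

```python
# A host-capable (downstream-facing) port pulls CC up through Rp;
# a device (upstream-facing) port pulls CC down through Rd
# (~5.1 kOhm). The chosen Rp value doubles as an advertisement of
# how much current the source can supply once attached.
RP_CURRENT_ADVERT = {  # approximate values for a 5 V pull-up
    56_000: "default USB power",
    22_000: "1.5 A",
    10_000: "3.0 A",
}


def detect_attach(rp_ohms, sink_presents_rd):
    """What the host-side controller concludes from its CC pin."""
    if not sink_presents_rd:
        # No Rd detected: nothing attached, so VBUS stays off.
        # This is what prevents the A-A-cable failure mode.
        return "nothing attached: do not drive VBUS"
    advert = RP_CURRENT_ADVERT.get(rp_ohms, "unknown Rp value")
    return "sink attached: drive VBUS, advertising " + advert
```

The comment in the middle is the payoff: unlike type-A ports, which drive 5 V unconditionally, a USB-C source doesn't energize the bus until it has positively detected a sink on the other end.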

The USB-C approach of using CC pins and having dual role controllers attempt one or the other at their preference is, for the most part, a much more elegant approach. There are a couple of oddities. First, in practice cables from C to A or B connectors are extremely common. These cables must provide the appropriate values on the CC pins to allow the USB-C controller to correctly determine its role, both for data and power delivery.

Second, what about role reversal? For type A and B connectors, this is achieved via HNP, but HNP is not supported on USB-C. Application notes from several USB controller vendors explain that, oddly enough, the only way to perform role reversal with USB-C is to implement USB Power Delivery (PD) and use the PD negotiation protocol to change the source of power. In other words, while OTG allows reversing host and device roles independently of the bus power source, USB-C does not. The end supplying power is always the host end. This apparent limitation probably isn't that big of a deal, considering that the role reversal feature of OTG was reportedly seldom implemented.

That's a bit of a look into what happens when you plug two USB hosts into each other. Are you confused? Yeah, I'm a little confused too. The details vary, and they depend a lot more on the capabilities of the individual devices than on the USB version in use. This has been the malaise of USB for a solid decade now, at least: the specification has become so expansive, with so many non-mandatory features, that it's a crapshoot what capabilities any given USB port actually has. The fact that USB-C supports a bevy of alternate modes like Thunderbolt and HDMI only adds further confusion.

I sort of miss when the problem was just inappropriate micro-B connectors. Nonetheless, USB-C dual role support seems ubiquitous in modern smartphones, and that's the only place any of this ever really mattered. Most embedded devices still seem to prefer to just provide two USB ports: a host port and a device port. And no one ever uses the USB host support on their printer. It's absurd, no one ever would. Have you seen what HP thinks is a decent file browser? Good lord.

[1] My first smartphone was the HTC Thunderbolt. No one, not even me, will speak of that thing with nostalgia. It was pretty cool owning one of the first LTE devices on the market, though. There was no contention at all in areas with LTE service and I was getting 75+Mbps mobile tethering in 2011. Then everyone else had LTE too and the good times ended.

[2] There are actually several additional states defined by fixed resistances that tell the controller that it is the A-device but power will be supplied by the bus. These states were intended for Y-cables that allowed you to charge your phone from an external charger while using OTG. In this case neither device supplies power, the external charger does. The details of how this works are quite straightforward but would be confusing to keep adding as an exception, so I'm going to pretend the whole feature doesn't exist.

2023-12-23 ITT Technical Institute

Programming note/shameless plug: I am finally on Mastodon.

The history of the telephone industry is a bit of an odd one. For the greatest part of the 20th century, telephony in the United States was largely a monopoly of AT&T and its many affiliates. This wasn't always the case, though. AT&T held patents on their telephone implementation, but Bell's invention was not the only way to construct a practical telephone. During the late 19th century, telephone companies proliferated, most using variations on the design they felt would fall outside of Ma Bell's patent portfolio. AT&T was aggressive in challenging these operations but not always successful. During this period, it was not at all unusual for a city to have multiple competing telephone companies that were not interconnected.

Shortly after the turn of the 20th century, AT&T moved more decisively towards monopoly. Theodore Newton Vail, president of AT&T during this period, adopted the term "Universal Service" to describe the targeted monopoly state: there would be one universal telephone system. One operated under the policies and, by implication, the ownership of AT&T. AT&T's path to monopoly involved many political and business maneuvers, the details of which have filled more than a few dissertations in history and economics. By the 1920s the deal was done: there would be virtually no (and in a legal sense literally no) long-distance telephone infrastructure in the United States outside of The Bell System.

But what of the era's many telephone entrepreneurs? For several American telephone companies struggling to stand up to AT&T, the best opportunities were overseas. A number of countries, especially elsewhere in the Americas, had telephone systems built by AT&T's domestic competitors. Perhaps the most neatly named was ITT, the International Telephone and Telegraph company. ITT was formed from the combination of Puerto Rican and Cuban telephone companies, and through a series of acquisitions expanded into Europe.

Telefónica, for example, is a descendant of an early ITT acquisition. Other European acquisitions led to wartime complications, like the C. Lorenz company, which under ITT ownership functioned as a defense contractor to the Nazis during WWII. Domestically, ITT also expanded into a number of businesses outside of the monopolized telephone industry, including telegraphy and international cables.

ITT had been bolstered as well by an effect of AT&T's first round of antitrust cases during the 1910s and 1920s. As part of one of several settlements, AT&T agreed to divest several overseas operations to focus instead on the domestic market. They found a perfect buyer: ITT, a company which already seemed like a sibling of AT&T and through acquisitions came to function as one.

ITT grew rapidly during the mid-century, and in the pattern of many industrial conglomerates of the time ITT diversified. Brands like Sheraton Hotels and Avis Rent-a-Car joined the ITT portfolio (incidentally, Avis would be spun off, conglomerated with others, and then purchased by previous CAB subject Beatrice Foods). ITT was a multi-billion-dollar American giant.

Elsewhere in the early technology industry, salesman Howard W. Sams worked for the P. R. Mallory Company in Indianapolis during the 1930s and 1940s. Mallory made batteries and electronic components, especially for the expanding radio industry, and as Sams sold radio components to Mallory customers he saw a common problem and a sales opportunity: radio technicians often needed replacement components, but had a hard time identifying them and finding a manufacturer. Under the auspices of the Mallory company Sams produced and published several books on radio repair and electronic components, but Mallory didn't see the potential that Sams did in these technical manuals.

Sams, driven by the same electronics industry fervor as so many telephone entrepreneurs, struck out on his own. Incorporated in 1946, the Howard W. Sams Company found quick success with its Photofact series. Sort of the radio equivalent of Haynes and Chilton in the auto industry, Photofact provided schematics, parts lists, and repair instructions for popular radio receivers. They were often found on the shelves of both technicians and hobbyists, and propelled the Sams Company to million-dollar revenues by the early 1950s.

Sams would expand along with the electronics industry, publishing manuals on all types of consumer electronics and, by the 1960s, books on the use of computers. Sams, as a technical press, eventually made its way into the ownership of Pearson. Through Pearson's InformIT, the Sams Teach Yourself series remains in bookstores today. I am not quite sure, but I think one of the first technical books I ever picked up was an earlier edition of Sams HTML in 24 Hours.

The 1960s were an ambitious era, and Sams was not content with just books. Sams had taught thousands of electronics technicians through their books. Many radio technicians had demonstrated their qualifications and kept up to date by maintaining a membership in the Howard Sams Radio Institute, a sort of correspondence program. It was a natural extension to teach electronics skills in person. In 1963, Sams opened the Sams Technical Institute in Indianapolis. Shortly after, they purchased the Acme Institute of Technology (Dayton, Ohio) and the charmingly named Teletronic Technical Institute (Evansville, Indiana), rebranding both as Sams campuses.

In 1965, the Sams Technical Institute had 2,300 students across five locations. Sams added the Bramwell Business College to its training division, signaling a move into the broader world of higher education. It was a fast growing business; it must have looked like a great opportunity to a telephone company looking for more ways to diversify. In 1968, ITT purchased the entire training division from Sams, renaming it ITT Educational Services [1].

ITT approached education with the same zeal it had overseas telephone service. ITT Educational Services spent the late '60s and early '70s on a shopping spree, adding campus after campus to the ITT system. Two newly constructed campuses expanded ITT's business programs, and during the '70s ITT introduced formal curriculum standardization programs and a bureaucratic structure to support its many locations. Along with expansion came a punchier name: the ITT Technical Institute.

"Tri-State Businessmen Look to ITT Business Institute, Inc. for Graduates," reads one corner of a 1970 full-page newspaper ad. "ITT adds motorcycle repair course to program," 1973. "THE ELECTRONICS AGE IS HERE. If your eyes are on the future, ITT Technical institute can prepare you for a HIGH PAYING, EXCITING career in... ELECTRONICS," 1971. ITT Tech has always known the value of advertising, and ran everything from full-page "advertorials" to succinct classified ads throughout their growing region.

During this period, ITT Tech clearly operated as a vocational school rather than a higher education institution. Many of its programs ran as short as two months, and they were consistently advertised as direct preparation for a career. These sorts of job-oriented programs were very attractive to veterans returning from Vietnam, and ITT widely advertised to veterans on the basis of its approval (clearly by 1972 based on newspaper advertisements, although some sources say 1974) for payment under the GI Bill. Around the same time ITT Tech was approved for the fairly new federal student loan program. Many of ITT's students attended on government money, with or without the expectation of repayment.

ITT Tech flourished. By the mid-'70s the locations were difficult to count, and ITT had over 1,000 students in several states. ITT Tech was the "coding boot camp" of its day, advertising computer programming courses that were sure to lead to employment in just about six months. Like the coding boot camps of our day, these claims were suspect.

In 1975, ITT Tech was the subject of investigations in at least two states. In Indiana, three students complained to the Evansville municipal government after ITT recruiters promised them financial aid and federally subsidized employment during their program. ITT and federal work study, they were told, would take care of all their living expenses. Instead, they ended up living in a YWCA off of food stamps. The Indiana board overseeing private schools allowed ITT to keep its accreditation only after ITT promised to rework its entire recruiting policy---and pointed out that the recruiters involved had left the company. ITT refunded the tuition of a dozen students who joined the complaint, which no doubt helped their case with the state.

Meanwhile, in Massachusetts, the Boston Globe ran a ten-part investigative series on the growing for-profit vocational education industry. ITT Tech, they alleged, promised recruits to its medical assistant program guaranteed post-graduation employment. The Globe claimed that almost no students of the program successfully found jobs, and the Massachusetts Attorney General agreed. In fact, the AG found, the program's placement rate didn't quite reach 5%. For a settlement, ITT Tech agreed to change its recruiting practices and refund nearly half a million dollars in tuition and fees.

ITT continued to expand at a brisk pace, adding more than a dozen locations in the early '80s and beginning to offer associate's degrees. Newspapers from Florida to California ran ads exhorting readers to "Make the right connections! Call ITT Technical Institute." As the 1990s dawned, ITT Tech enjoyed the same energy as the computer industry, and aspired to the same scale. In 1992, ITT Tech announced their "Vision 2000" master plan, calling for bachelor's programs, 80 locations, and 45,000 students by the beginning of the new millennium. ITT Tech was the largest provider of vocational training in the country.

In 1993, ITT Tech was one of few schools accepted into the first year of the Direct Student Loan program. The availability of these new loans gave enrollment another boost, as ITT Tech reached 54 locations and 20,000 students. In 1994, ITT Tech started to gain independence from its former parent: an IPO sold 17% ownership to the open market, with ITT retaining the remaining 83%. The next year, ITT itself went through a reorganization and split, with its majority share of ITT Tech landing in the new ITT Corporation.

As was the case with so many diversified conglomerates of the '90s (see Beatrice Foods again), ITT's reorganization was a bad portent. ITT Hartford, the spun-out financial services division, survives today as The Hartford. ITT Industries, the spun-out defense contracting division, survives today as well, confusingly renamed to ITT Corporation. But the third part of the 1995 breakup, the ITT Corporation itself, merged with Starwood Hotels and Resorts. The real estate and hospitality side-business of a telephone and telegraph company saw the end of its parent.

Starwood had little interest in vocational education, and over the remainder of the '90s sold off its entire share of ITT Tech. Divestment was a good idea: the end of the '90s hit hard for ITT Tech. Besides the general decline of the tech industry as the dot com bubble burst, ITT Tech's suspect recruiting practices were back. This time, they had attracted federal attention.

In 1999, two ITT Tech employees filed a federal whistleblower suit alleging that ITT Tech trained recruiters to use high-pressure sales tactics and outright deception to obtain students eligible for federal aid. Recruiters were paid a commission for each student they brought in, and ITT Tech obtained 70% of its revenue from federal aid programs. A federal investigation moved slowly, apparently protracted by the Department of Education's nervous approach following the criticism it received for shutting down Computer Learning Centers, a similar operation. In 2004, federal agents raided ITT Tech campuses across ten states, collecting records on recruitment and federal funding.

During the early 2000s ITT Tech students defaulted on $400 million in federal student loans. The result---a large portion of ITT Tech's revenue effectively coming from federal loans that went into default---attracted ongoing attention. ITT Tech was deft in its legal defense, though, and through a series of legal victories and, more often, settlements, ITT Tech stayed in business.

ITT Tech aggressively advertised throughout its history. In the late '90s and early '00s, ITT Tech's constant television spots filled a corner of my brain. "How Much You Know Measures How Far You Can Go," a TV spot proclaims, before ITT's distinctive block letter logo fades on screen in metallic silver. By the year 2000, International Telephone and Telegraph, or rather its scattered remains, no longer had any relationship with ITT Tech. Starwood agreed to license the name and logo to the independent public ITT Technical Institutes corporation, though, and with the decline of ITT's original business the ITT name and logo became associated far more with the for-profit college than the electronics manufacturer.

For-profit universities attracted a lot of press in the '00s---the wrong kind of press. ITT Tech was far from unique in suspicious advertising and recruiting, high tuition rates, and frequent defaults on the federal loans that covered that tuition. For-profit education, it seemed, was more of a scam on the taxpayer dollar than a way to secure a promising new career. Publicly traded colleges like DeVry and the University of Phoenix had repeated scandals over their use, or abuse, of federal aid, and a 2004 criminal investigation into ITT Tech for fraud on federal student aid made its future murky.

ITT Tech was a survivor. The criminal case fell apart, the whistleblower lawsuit led to nothing, and ITT Tech continued to grow. In 2009, ITT Tech acquired the formerly nonprofit Daniel Webster College, part of a wave of for-profit conversions of small colleges. ITT Tech explained the purchase as a way to expand their aeronautics offerings, but observers suspected other motives, ones that had more to do with the perceived legitimacy of what was once a nonprofit, regionally accredited institution. A series of suspect expansions of small colleges into large for-profit organizations during the '00s led to a tightening of the rules; today, regional accreditors re-investigate institutions that are purchased.

ITT Tech, numerically, achieved an incredible high. In 2014, ITT Tech reported a total cost of attendance of up to $85,000. I didn't spend that much on my BS and MS combined. Of course, I attended college in impoverished New Mexico, but we can make a comparison locally. ITT Tech operated here as well, and curiously, New Mexico tuition is specially listed in an ITT Tech cost estimate report because it is higher. At its location in Albuquerque's Journal Center office development, ITT Tech charged more than $51,000 in tuition alone for an Associate's in Criminal Justice. The same program at Central New Mexico Community College would have cost under $4,000 over the two years [2].

That isn't even the most remarkable figure, though. A Bachelor's in Criminal Justice would run over $100,000---more than the cost of a JD at UNM School of Law, for an out-of-state student, today.

In 2014, more than 80% of ITT Tech's revenue came from federal student aid. Their loan default rate was the highest even among for-profit programs. With their extreme tuition costs and notoriously poor job placement rates, ITT Tech increasingly had the appearance of an outright fraud.

Death came swiftly for ITT Tech. In 2016, they were a giant with more than 130 campuses and 40,000 students. The Consumer Financial Protection Bureau sued. State Attorneys General followed, with New Mexico's Hector Balderas one of the first two. The killing blow, though, came from the Department of Education, which revoked ITT Tech's eligibility for federal student aid. Weeks later, ITT Tech stopped accepting applications. The next month, they filed for bankruptcy, chapter 7, liquidation.

Over the following years, the ITT Tech scandal would continue to echo. After a series of lawsuits, the Department of Education agreed to forgive the federal debt of ITT Tech attendees, although a decision by Betsy DeVos to end the ITT Tech forgiveness program produced a new round of lawsuits over the matter in 2018. Private lenders faced similar lawsuits, and made similar settlements. Between federal and private lenders, I estimate almost $4.5 billion in loans to pay ITT Tech tuition were written off.

The Department of Education decision to end federal aid to ITT Tech was based, in part, on ITT Tech's fraying relationship with its accreditor. The Accrediting Council for Independent Colleges and Schools (ACICS), a favorite of for-profit colleges, had its own problems. That same summer in 2016, the Department of Education ended federal recognition of ACICS. ACICS accreditation reviews had been cursory, and it routinely continued to accredit colleges despite their failure to meet even ACICS's lax standards. ITT Tech was not the only large ACICS-accredited institution to collapse in scandal.

Two years later, Betsy DeVos reinstated ACICS to federal recognition. Only 85 institutions still relied on ACICS, including such august names as the Professional Golfers Career College and certain campuses of the Art Institutes that were suspect even by the norms of the Art Institutes (the Art Institutes folded just a few months ago following a similar federal loan fraud scandal). ACICS lost federal recognition again in 2022. Only time will tell what the next presidential administration holds for the for-profit college industry.

ITT endured a long fall from grace. A leading electronics manufacturer in 1929, a diversified conglomerate in 1960, scandals through the 1970s. You might say that ITT is distinctly American in all the best and worst ways. They grew to billions in revenue through an aggressive program of acquisitions. They were implicated in the CIA coup in Chile. They made telephones and radios and radars and all the things that formed the backbone of the mid-century American electronics industry.

The modern ITT Corporation, descended from spinoff company ITT Industries, continues on as an industrial automation company. They have abandoned the former ITT logo, distancing themselves from their origin. The former defense division became Exelis, later part of Harris, now part of L3, doomed to slowly sink into the monopolized, lethargic American defense industry. German tool and appliance company Kärcher apparently holds a license to the former ITT logo, although I struggle to find any use of it.

To most Americans, ITT is ITT Tech, a so-called college that was actually a scam, an infamous scandal, a sink of billions of dollars in federal money. Dozens of telephone companies around the world, tracing their history back to ITT, are probably better off distancing themselves from what was once a promising international telephone operator, a meaningful technical competitor to Western Electric. The conglomeration of the second half of the 20th century put companies together and then tore them apart; they seldom made it out in as good condition as they went in. ITT went through the same cycle as so many other large American corporations. They went into hotels, car rentals, then into colleges. They left thousands of students in the lurch on the way out. When ITT Tech went bankrupt, everyone else had already started the semester. They weren't accepting applicants. They wouldn't accept transfer credit from ITT anyway; ITT's accreditation was suspect.

"What you don't know can hurt you," a 1990s ITT Tech advertisement declares. In Reddit threads, ITT Tech alums debate if they're better off telling prospective employers they never went to college at all.

[1] Sources actually vary on when ITT purchased Sams Training Institute, with some 1970s newspaper articles putting it as early as 1966, but 1968 is the year that ITT's involvement in Sams was advertised in the papers. Further confusing things, the former Sams locations continued to operate under the Sams Technical Institute name until around 1970, with verbiage like "part of ITT Educational Services" inconsistently appearing. ITT may have been weighing the value of its own brand recognition against that of Sams, but apparently made a firm decision during 1970, after which ads virtually always use the ITT name and logo above any other.

[2] Today, undergraduate education across all of New Mexico's public universities and community colleges is free for state residents. Unfortunately 2014 was not such an enlightened time. I must take every opportunity to brag about this remarkable and unusual achievement in our state politics.

2023-12-05 vhf omnidirectional range

VORTAC site

The term "VHF omnidirectional range" can at first be confusing, because it includes "range"---a measurement that the technology does not provide. The answer to this conundrum is, as is so often the case, history. The "range" refers not to the radio equipment but to the space around it, the area in which the signal can be received. VOR is an inherently spatial technology; the signal is useless except as it relates to the physical world around it.

This use of the word "range" is about as old as instrument flying, dating back to the first radionavigation devices in the 1930s. We still use it today, in the somewhat abstract sense of an acronym that is rarely expanded: VOR.

This is Truth or Consequences VOR. Or, perhaps more accurately, the transmitter that defines the center of the Truth or Consequences VOR, which extends perhaps two hundred miles around this point. The range can be observed only by instruments, but it's there, a phase shift that varies like terrain.

The basic concept of VOR is reasonably simple: a signal is transmitted with two components, a 30Hz tone in amplitude modulation and a 30Hz tone in frequency modulation. The two tones are out of phase by an amount that is determined by your position in the range, and more specifically by the radial from the VOR transmitter to your position. This apparent feat of magic, a radio signal that is different in different locations, is often described as "space modulation."
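The phase comparison can be sketched numerically. This is a toy model with made-up values (a 115 degree radial), not an actual receiver implementation: the variable tone is modeled as the reference tone delayed by the radial, and the bearing is recovered by correlating against the reference and its quadrature.

```python
import math

F_TONE = 30.0               # both tones are 30 Hz
RADIAL = math.radians(115)  # hypothetical bearing from the station

N = 3000                    # samples over exactly one tone period
dt = (1.0 / F_TONE) / N

# Variable tone: the 30 Hz reference delayed by the radial. In a real
# receiver this phase offset is what "space modulation" imposes.
var = [math.cos(2 * math.pi * F_TONE * i * dt - RADIAL) for i in range(N)]

# Recover the phase difference by I/Q correlation against the
# reference tone and its quadrature.
i_sum = sum(v * math.cos(2 * math.pi * F_TONE * i * dt) for i, v in enumerate(var))
q_sum = sum(v * math.sin(2 * math.pi * F_TONE * i * dt) for i, v in enumerate(var))
recovered = math.atan2(q_sum, i_sum) % (2 * math.pi)

print(round(math.degrees(recovered), 1))  # ≈ 115.0
```

The same correlation, run continuously against the received signal, is essentially what the needle on a course deviation indicator reflects.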

The first VOR transmitters achieved this effect the obvious way, by rapidly spinning a directional antenna in time with the electronically generated phase shift. Spinning anything quickly becomes a maintenance headache, and so VOR was quickly transitioned to solid-state techniques. Modern VOR transmitters are electronically rotated, by one of two techniques. Either way, they rotate in the same sense as images on a screen: a set of discrete changes in a solid-state system produces the effect of rotation.

Warning sign

The Truth or Consequences VOR operates on 112.7 MHz, near the middle of the band assigned for this use. Patterned after the nearby Truth or Consequences Airport, KTCS, it identifies itself by transmitting "TCS" in Morse code. Modern charts give this identifier in dots and dashes, an affordance to the poor level of Morse literacy among contemporary pilots.

In the airspace, it defines the intersection of several airways. They all go generally north-south, unsurprising considering that the restricted airspace of White Sands Missile Range prevents nearly all flight to the east. Flights following the Rio Grande, most north-south traffic in this area, will pass directly overhead on their way to VOR transmitters at Socorro or Deming or El Paso, where complicated airspace leads to two such sites very nearby.

This is the function that VORs serve: for the most part, you fly to or from them. Because the radial from the VOR to you remains constant, they provide a reliable and easy to use indication that you are still on the right track. A warning sign, verbose by tradition, articulates the significance:

This facility is used in FAA air traffic control. Loss of human life may result from service interruption. Any person who interferes with air traffic control or damages or trespasses on this property will be prosecuted under federal law.

The sign is backed up by a rustic wooden fence. Like most VOR transmitters, this one was built in the late 1950s or 1960s. The structure has seen only minimal changes since then, although the radio equipment has been improved and simplified.

Antennas

The central, omnidirectional antenna of a VOR transmitter makes for a distinctive silhouette. You have likely noticed one before. I must admit that I have somewhat simplified; most of the volume of the central antenna housing is actually occupied by the TACAN antenna. Most VOR sites in the US are really VORTAC sites, combining the civilian VOR and military TACAN systems into one facility. TACAN has several minor advantages over VOR for military use, but one big advantage: it provides not only a radial but a distance. The same system used by TACAN for distance information, based on an unusual radio modulation technique called "squitter," can be used by civilian aircraft as well in the form of DME. VORTAC sites thus provide VOR, DME, and TACAN service.

True VOR sites, rare in the US but plentiful across the rest of the world, have smaller central antennas. If you are not used to observing the ring of radial antennas, you might not recognize them as the same system.

The radial antennas are placed in a circle some distance away, leaving open space between them. This reduces, but does not eliminate, the effect of each antenna's radiated power being absorbed by its neighbors. They are often on the roof of the equipment building, and may be surrounded by a metallic ground plane that extends still further. Most US VORTAC sites, originally built before modern RF technology, rely on careful positioning on suitable terrain rather than a ground plane.

Intriguingly, the radial antennas are not directional designs. In a modern VOR site, they all transmit an in-phase signal, and the transmitter rapidly switches which omnidirectional antenna in the ring is active. The space modulation is created not by rotating a directional antenna, but by moving the effective antenna through a circular path and allowing the Doppler effect to vary the apparent phase of the received signal.
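A toy simulation of the Doppler idea, with assumed dimensions (the ring radius, receiver distance, and 70 degree bearing are all made up): for a receiver far from the station, the path length to the circling antenna varies sinusoidally at the rotation rate, and the phase of that variation is the receiver's azimuth.

```python
import math

ROTATION_HZ = 30.0        # antenna effectively circles 30 times per second
RING_RADIUS = 6.7         # meters; assumed, roughly DVOR-ring scale
RECEIVER_DIST = 50_000.0  # meters, far from the station
AZIMUTH = math.radians(70)  # hypothetical receiver bearing

rx = (RECEIVER_DIST * math.cos(AZIMUTH), RECEIVER_DIST * math.sin(AZIMUTH))

N = 2000
dt = (1.0 / ROTATION_HZ) / N  # samples spanning one full revolution

# Path length from the moving antenna to the receiver at each instant.
dists = []
for i in range(N):
    ang = 2 * math.pi * i / N
    ax, ay = RING_RADIUS * math.cos(ang), RING_RADIUS * math.sin(ang)
    dists.append(math.hypot(rx[0] - ax, rx[1] - ay))

# The path length varies sinusoidally at the rotation rate; the phase
# of that variation encodes the receiver's azimuth.
mean = sum(dists) / N
s = [mean - d for d in dists]  # ≈ RING_RADIUS * cos(ang - AZIMUTH)
i_sum = sum(v * math.cos(2 * math.pi * i / N) for i, v in enumerate(s))
q_sum = sum(v * math.sin(2 * math.pi * i / N) for i, v in enumerate(s))
recovered = math.atan2(q_sum, i_sum) % (2 * math.pi)

print(round(math.degrees(recovered), 1))  # ≈ 70.0
```

The varying path length is exactly what the Doppler effect turns into a varying apparent phase, so no antenna in the ring ever needs to be directional.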

Central Antenna

The lower part of the central antenna, the more cone shaped part, is mostly empty. It encloses the structure that supports the cylindrical radome that houses the actual antenna elements. In newer installations it is often an exposed frame, but the original midcentury sites all provide a conical enclosure. I suspect the circular metallic sheathing simplified calculation of the effective radiation pattern at the time.

An access door can be used to reach the interior to service the antennas; the rope holding this one closed is not standard equipment but is perhaps also not very unusual. These are old facilities. When this cone was installed, adjacent Interstate 25 wasn't an interstate yet.

Monitor antennas

Aviation engineers leave little to chance, and almost never leave a system without a spare. Ground-based infrastructure is no exception. Each VOR transmitter is continuously tested by a monitoring system. A pair of antennas mounted on a post near the fence line feed redundant monitoring systems that verify the correct radial is received at that fixed position. If a failure or a bad fix is detected, the system switches the transmit antennas over to a second, redundant set of radio equipment. The problem is reported to the FAA, and Tech Ops staff are dispatched to investigate.

Occasionally, the telephone lines VOR stations use to report problems are, themselves, unreliable. When Tech Ops is unable to remotely monitor a VOR station, they issue a NOTAM that it should not be relied upon.

Rear of building

The rear of the building better shows its age. The wall is scarred where old electrical service equipment has been removed; the weather-tight light fixture is a piece of incandescent history. It has probably been broken for longer than I have been alive.

A 1000 gallon propane tank to one side will supply the generator in the enclosure in case of a failure. Records of the Petroleum Storage Bureau of the New Mexico Environment Department show that an underground fuel tank was present at this site but has been removed. Propane is often selected for newer standby generator installations where an underground tank, no longer up to environmental safety standards, had to be removed.

It is indeed in its twilight years. The FAA has shut down about half of the VOR transmitters in the country. TCS was spared this round, as were all but one of the VOR transmitters in sparsely covered New Mexico; it is part of the "minimum operational network." It remains to be seen how long VOR's skeleton crew will carry on. A number of countries have now announced the end of VOR service. Another casualty of satellite PNT, joining LORAN wherever dead radio systems go.

Communications tower

The vastness and sparse population of southern New Mexico pose many challenges. One the FAA has long had to contend with is communications. Very near the Truth or Consequences VOR transmitter is an FAA microwave relay site. This tower is part of a chain that relays radar data from southern New Mexico to the air route traffic control center in Albuquerque.

When it was first built, the design of microwave communications equipment was much less advanced than it is today. Practical antennas were bulky and often pressurized for water tightness. Waveguides were expensive and cables were inefficient. To ease maintenance, shorten feedlines, and reduce tower loading, the actual antennas were installed on shelves near the bottom of the tower, pointing straight upwards. At the top of the tower, two passive reflectors acted like mirrors to redirect the signal into the distance. This "periscope" design was widely used by Western Union in the early days of microwave data networking.

Today, this system is partially retired, replaced by commercial fiber networks. This tower survives, maintained under contract by L3Harris. As the compound name suggests, half of this company used to be Harris, a pioneer in microwave technology. The other half used to be L3, which split off from Lockheed Martin, which had bought it when it was called Loral. Loral was a broad defense contractor, but had its history and focus in radar, another application of microwave RF engineering.

Two old radio sites, the remains of ambitious nationwide systems that helped create today's ubiquitous aviation. A town named after an old radio show. Some of the great achievements of radio history are out there in Sierra County.

2023-11-25 the curse of docker

I'm heading to Las Vegas for re:Invent soon, perhaps the most boring type of industry extravaganza there could be. In that spirit, I thought I would write something quick and oddly professional: I'm going to complain about Docker.

Packaging software is one of those fundamental problems in system administration. It's so important, so influential on the way a system is used, that package managers are often the main identity of operating systems. Consider Windows: the operating system's most alarming defect in the eyes of many "Linux people" is its lack of package management, despite Microsoft's numerous attempts to introduce the concept. Well, perhaps more likely, because of the number of those attempts. And still, in the Linux world, distributions are differentiated primarily by their approach to managing software repositories. I don't just mean the difference between dpkg and rpm, but rather more fundamental decisions, like opinionated vs. upstream configuration and stable repositories vs. a rolling release. RHEL and Arch share the vast majority of their implementation and yet have very different vibes.

Linux distributions have, for the most part, consolidated on a certain philosophy of how software ought to be packaged, if not how often. One of the basic concepts shared by most Linux systems is centralization of dependencies. Libraries should be declared as dependencies, and the packages depended on should be installed in a common location for use by the linker. This can create a challenge: different pieces of software might depend on different versions of a library, which may not be compatible. This is the central challenge of maintaining a Linux distribution, in the classical sense: providing repositories of software versions that will all work correctly together. One of the advantages of stable distributions like RHEL is that they are very reliable in doing this; one of the disadvantages is that they achieve that goal by packaging new versions very infrequently.

Because of the need to provide mutually compatible versions of a huge range of software, and to ensure compliance with all kinds of other norms established by distributions (which may range from philosophical policies like free software to rules on the layout of configuration files), putting new software into Linux distributions can be... painful. For software maintainers, it means dealing with a bunch of distributions using a bunch of old versions with various specific build and configuration quirks. For distribution and package maintainers, it means bending all kinds of upstream software into compliance with distribution policy and figuring out version and dependency problems. It's all a lot of work, and while there are some norms, in practice it's sort of a wild scramble to do the work to make all this happen. Software developers that want their software to be widely used have to put up with distros. Distros that want software have to put up with software developers. Everyone gets mad.

Naturally there have been various attempts to ease these problems. Naturally they are indeed various and the community has not really consolidated on any one approach. In the desktop environment, Flatpak, Snap, and AppImage are all distressingly common ways of distributing software. The images or applications for these systems package the software complete with its dependencies, providing a complete self-contained environment that should work correctly on any distribution. The fact that I have multiple times had to unpack flatpaks and modify them to fix dependencies reveals that this concept doesn't always work entirely as advertised, but to be fair that kind of situation usually crops up when the software has to interact with elements of the system that the runtime can't properly isolate it from. The video stack is a classic example, where errant OpenGL libraries in packages might have to be removed or replaced for them to function with your particular graphics driver.

Still, these systems work reasonably well, well enough that they continue to proliferate. They are greatly aided by the nature of the desktop applications for which they're used (Snapcraft's system ambitions notwithstanding). Desktop applications tend to interact mostly with the user and receive their configuration via their own interface. Limiting the interaction surface mostly to a GUI window is actually tremendously helpful in making sandboxing feasible, although it continues to show rough edges when interacting with the file system.

I will note that I'm barely mentioning sandboxing here because I'm just not discussing it at the moment. Sandboxing is useful for security and even stability purposes, but I'm looking at these tools primarily as a way of packaging software for distribution. Sandboxed software can be distributed by more conventional means as well, and a few crusty old packages show that it's not as modern of a concept as it's often made out to be.

Anyway, what I really wanted to complain a bit about is the realm of software intended to be run on servers. Here, there is a clear champion: Docker, and to a lesser degree the ecosystem of compatible tools like Podman. The release of Docker led to a surprisingly rapid change in what are widely considered best practices for server operations. While Docker images as a means of distributing software first seemed to appeal mostly to large scalable environments with container orchestration, the idea sort of merged together with ideas from Vagrant and others to become a common means of distributing software for developer and single-node use as well.

Today, Docker is the most widespread way that server-side software is distributed for Linux. I hate it.

This is not a criticism of containers in general. Containerization is a wonderful thing with many advantages, even if the advantages over lightweight VMs are perhaps not as great as commonly claimed. I'm not sure that Docker has saved me more hours than it's cost, but to be fair I work as a DevOps consultant and, as a general rule, people don't get me involved unless the current situation isn't working properly. Docker images that run correctly with minimal effort don't make for many billable hours.

What really irritates me these days is not really the use of Docker images in DevOps environments that are, to some extent, centrally planned and managed. The problem is the use of Docker as a lowest common denominator, or perhaps more accurately lowest common effort, approach to distributing software to end users. When I see open-source, server-side software offered to me as a Docker image or---even worse---a Docker Compose stack, my gut reaction is irritation. These sorts of things usually take longer to get working than equivalent software distributed as a conventional Linux package or to be built from source.

But wait, how does that happen? Isn't Docker supposed to make everything completely self-contained? Let's consider the common problems, something that I will call my Taxonomy of Docker Gone Bad.

Configuration

One of the biggest problems with Docker-as-distribution is the lack of consistent conventions for configuration. The vast majority of server-side Linux software accepts its configuration through an ages-old technique of reading a text file. This certainly isn't perfect! But, it is pretty consistent in its general contours. Docker images, on the other hand...

If you subscribe to the principles of the 12-factor-app, the best way for a Docker image to take configuration is probably via environment variables. This has the upside that it's quite straightforward to provide them on the command line when starting the container. It has the downside that environment variables aren't great for conveying structured data, and you usually interact with them via shell scripts that have clumsy handling of long or complicated values. A lot of Docker images used in DevOps environments take their configuration from environment variables, but they tend to make it a lot more feasible by avoiding complex configuration (by assuming TLS will be terminated by "someone else" for example) or getting a lot of their configuration from a database or service on the network.
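A minimal sketch of the pattern, with hypothetical variable names: simple scalars map naturally onto environment variables, while anything structured ends up crammed into ad hoc mini-formats.

```python
import os

# Hypothetical 12-factor-style service configuration. These variable
# names are illustrative, not from any real image.
os.environ.setdefault("APP_LISTEN_PORT", "8080")
os.environ.setdefault("APP_DEBUG", "false")
# Structured data gets squeezed into an improvised comma/colon format:
os.environ.setdefault("APP_UPSTREAMS", "10.0.0.5:9000,10.0.0.6:9000")

port = int(os.environ["APP_LISTEN_PORT"])
debug = os.environ["APP_DEBUG"].lower() in ("1", "true", "yes")
upstreams = [u.split(":") for u in os.environ["APP_UPSTREAMS"].split(",")]

print(port, debug, upstreams)
```

The scalars are fine; the `APP_UPSTREAMS` line is where this approach starts to creak, since every image invents its own delimiter conventions and quoting rules.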

For most end-user software though, configuration is too complex or verbose to be comfortable in environment variables. So, often, they fall back to configuration files. You have to get the configuration file into the container's file system somehow, and Docker provides numerous ways of doing so. Documentation on different packages will vary on which way it recommends. There are frequently caveats around ownership and permissions.

Making things worse, a lot of Docker images try to make configuration less painful by providing some sort of entry-point shell script that generates the full configuration from some simpler document provided to the container. Of course this level of abstraction, often poorly documented or entirely undocumented in practice, serves mostly to make troubleshooting a lot more difficult. How many times have we all experienced the joy of software failing to start, referencing some configuration key that isn't in what we provided, leading us to have to find the Docker image build materials and read the entrypoint script to figure out how it generates that value?
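A toy illustration of that indirection (hypothetical keys, not any real image's entrypoint): the generator expands a couple of environment variables into the "real" config file and silently injects values you never wrote down.

```python
import os
import tempfile

# Toy stand-in for an image's entrypoint config generator. The keys
# and defaults here are invented for illustration.
def generate_config(env: dict) -> str:
    lines = [
        f"listen_port = {env.get('APP_PORT', '8080')}",
        # Opinionated: always injected, with no way to override it
        # from outside the container.
        "tls_enabled = false",
        f"worker_count = {env.get('APP_WORKERS', '4')}",
    ]
    return "\n".join(lines) + "\n"

# The user only ever set APP_PORT...
config = generate_config({"APP_PORT": "9000"})

# ...but the file the software actually reads contains more.
path = os.path.join(tempfile.mkdtemp(), "app.conf")
with open(path, "w") as f:
    f.write(config)

print(config)
```

When the software later complains about `worker_count` or `tls_enabled`, keys that appear nowhere in what you provided, you end up reading the entrypoint to discover where they came from.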

The situation with configuration entrypoint scripts becomes particularly acute when those scripts are opinionated, and opinionated is often a nice way of saying "unsuitable for any configuration other than the developer's." Probably at least a dozen times I have had to build my own version of a Docker image to replace or augment an entrypoint script that doesn't expose parameters that the underlying software accepts.

In the worst case, some Docker images provide no documentation at all, and you have to shell into them and poke around to figure out where the actual configuration file used by the running software is even located. Docker images ought to provide at least some basic README information on how the packaged software is configured.

Filesystems

One of the advantages of Docker is sandboxing or isolation, which of course means that Docker runs into the same problem that all sandboxes do: sandbox isolation does not interact well with Linux file systems. You don't even have to get into UID behavior to have problems here; just a Docker Compose stack that uses named volumes can be enough to drive you to drink. Everyday operations tasks like backups, to say nothing of troubleshooting, can get a lot more frustrating when you have to use a dummy container to interact with files in a named volume. The porcelain around named volumes has improved over time, but seemingly simple operations can still be weirdly inconsistent between Docker versions and, worse, other implementations like Podman.

But then, of course, there's the UID thing. One of the great sins of Docker is having normalized running software as root. Yes, Docker provides a degree of isolation, but from a perspective of defense in depth running anything with user exposure as root continues to be a poor practice. Of course this is one thing that often leads me to have to rebuild containers provided by software projects, and a number of common Docker practices don't make it easy. It all gets much more complicated if you use hostmounts because of UID mapping, and slightly complex environments with Docker can turn into NFS-style puzzles around UID allocation. Mitigating this mess is one of the advantages of named volumes, of course, despite the pain points they bring.
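When an image tolerates it, the fix is one line of Compose; whether the image actually works under a non-root user is the real question. The service name, image, and UID here are all illustrative:

```yaml
services:
  app:
    image: example/app:latest    # hypothetical image
    # Run as an unprivileged host UID/GID instead of root. This only works
    # if the entrypoint doesn't assume root for chown/setup steps.
    user: "1000:1000"
    volumes:
      - ./data:/var/lib/app      # the hostmount now needs to be owned by 1000
```

When the entrypoint does assume root, you're back to rebuilding the image.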

Non-portable Containers

The irony of Docker for distribution, though, and especially Docker Compose, is that there are a lot of common practices that negatively impact portability---ostensibly the main benefit of this approach. Doing anything non-default with networks in Docker Compose will often create stacks that don't work correctly on machines with complex network setups. Too many Docker Compose stacks like to assume that default, well-known ports are available for listeners. They enable features of the underlying software without giving you a way to disable them, and assume common values that might not work in your environment.

One of the most common frustrations, for me personally, is TLS. As I have already alluded to, I preach a general principle that Docker containers should not terminate TLS. Accepting TLS connections means having access to the private key material. Even if 90-day ephemeral TLS certificates and a general atmosphere of laziness have eroded our discipline in this regard, private key material should be closely guarded. It should be stored in only one place and accessible to only one principal. You don't even have to get into these types of lofty security concerns, though. TLS is also sort of complicated to configure.

A lot of people who self-host software will have some type of SNI or virtual hosting situation. There may be wildcard certificates for multiple subdomains involved. All of this is best handled at a single point or a small number of dedicated points. It is absolutely maddening to encounter Docker images built with the assumption that they will individually handle TLS. Even with TLS completely aside, I would probably never expose a Docker container with some application directly to the internet. There are too many advantages to having a reverse proxy in front of it. And yet there are Docker Compose stacks out there for end-user software that want to use ACME to issue their own certificate! Now you have to dig through documentation to figure out how to disable that behavior.
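What I want instead is for the container to speak plain HTTP on loopback and let a single reverse proxy terminate TLS for everything. A minimal nginx config to that effect, with hostname, port, and certificate paths all as placeholders:

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # Private key material lives here, and only here.
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
        # The container listens on plain HTTP, bound to loopback only.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

One place for certificates, one place for SNI and virtual hosting, and the containers never see a key.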

The Single-Purpose Computer

All of these complaints are most common with what I would call hobby-tier software. Two examples that pop into my mind are HomeAssistant and Nextcloud. I don't call these hobby-tier to impugn the software, but rather to describe the average user.

Unfortunately, the kind of hobbyist that deploys software has had their mind addled by the cheap high of the Raspberry Pi. I'm being hyperbolic here, but this really is a problem. It's absurd the number of "self-hosted" software packages that assume they will run on dedicated hardware. Having "pi" in the name of a software product is a big red flag in my mind; it immediately makes me think "they will not have documented how to run this on a shared device." Call me old-fashioned, but I like my computers to perform more than one task, especially the ones that are running up my power bill 24/7.

HomeAssistant is probably the biggest offender here, because I run it in Docker on a machine with several other applications. It actively resists this, popping up an "unsupported software detected" maintenance notification after every update. Can you imagine if Postfix whined in its logs if it detected that it had neighbors?

Recently I decided to give Nextcloud a try. This was long enough ago that the details elude me, but I think I burned around two hours trying to get the all-in-one Docker image to work in my environment. Finally I decided to give up and install it manually, to discover it was a plain old PHP application of the type I was regularly setting up in 2007. Is this a problem with kids these days? Do they not know how to fill in the config.php?

Hiding Sins

Of course, you will say, none of these problems would be widespread if people just made good Docker images. And yes, that is completely true! Perhaps one of the problems with Docker is that it's too easy to use. Creating an RPM or Debian package involves a certain barrier to entry, and it takes a whole lot of activation energy for even me to want to get rpmbuild going (advice: just use copr and rpkg). At the core of my complaints is the fact that distributing an application only as a Docker image is often evidence of a relatively immature project, or at least one without anyone who specializes in distribution. You have to expect a certain amount of friction in getting these sorts of things to work in a nonstandard environment.

It is a palpable irony, though, that Docker was once heralded as the ultimate solution to "works for me" and yet seems to just lead to the same situation existing at a higher level of configuration.

Last Thoughts

This is of course mostly my opinion and I'm sure you'll disagree on something, like my strong conviction that Docker Compose was one of the bigger mistakes of our era. Fifteen years ago I might have written a nearly identical article about all the problems I run into with RPMs created by small projects, but what surprises me about Docker is that it seems like projects can get to a large size, with substantial corporate backing, and still distribute in the form of a decidedly amateurish Docker Compose stack. Some of it is probably the lack of distribution engineering personnel on a lot of these projects, since Docker is "simple." Some of it is just the changing landscape of this class of software, with cheap single-board computers making Docker stacks (just a little less specialized than a VM appliance image) more palatable than they used to be. But some of it is also that I'm getting older and thus more cantankerous.

2023-11-19 Centrex

I have always been fascinated by the PABX - the private automatic branch exchange, often shortened to "PBX" in today's world where the "automatic" is implied. (Relatively) modern small and medium business PABXs of the type I like to collect are largely solid-state devices that mount on the wall. Picture a cabinet that's maybe two feet wide, a foot and a half tall, and five inches deep. That's a pretty accurate depiction of my Comdial hybrid key/PABX system, recovered from the offices of a bankrupt publisher of Christian home schooling materials.

These types of PABX, now often associated with Panasonic on the small end, are affordable and don't require much maintenance or space. They have their limitations, though, particularly in terms of extension count. Besides, the fact that these compact PABX are available at all is the result of decades of development in electronics.

Not that long ago, PABX were far more complex. Early PBX systems were manual, and hotels were a common example of a business that would have a telephone operator on staff. The first PABX were based on the same basic technology as their contemporary phone switches, using step-by-step switches or even crossbar mechanisms. They no longer required an operator to connect every call, but were still mostly designed with the assumption that an attendant would handle some situations. Moreover, these early PABX were large, expensive, and required regular maintenance. They were often leased from the telephone company, and the rates weren't cheap.

PABX had another key limitation as well: they were specific to a location. Each extension had to be home-run wired to the PABX, easy in a single building but costly at the level of a campus and, especially, with buildings spread around a city. For organizations with distributed buildings like school districts, connecting extensions back to a central PABX could be significantly more expensive than connecting them to the public telephone exchange.

This problem must have been especially common in a city the size of New York, so it's no surprise that New York Telephone was the first to commercialize an alternative approach: Centrex.

Every technology writer must struggle with the temptation to call every managed service in history a precursor to "the Cloud." I am going to do my very best to resist that nagging desire, but it's difficult not to note the similarity between Centrex service and modern cloud PABX solutions. Indeed, Centrex relied on capabilities of telephone exchange equipment that are recognizably similar to mainframe computer concepts like LPARs and virtualization today. But we'll get there in a bit. First, we need to talk about what Centrex is.

I've had it in my mind to write something about Centrex for years, but I've always had a hard time knowing where to start. The facts about Centrex are often rather dry, and the details varied over years of development, making it hard to sum up the capabilities in short. So I hope that you will forgive this somewhat dry post. It covers something that I think is a very important part of telephone history, particularly from the perspective of the computer industry today. It also lists off a lot of boring details. I will try to illustrate with interesting examples everywhere I can. I am indebted, for many things but here especially, to many members of the Central Office mailing list. They filled in a lot of details that solidified my understanding of Centrex and its variants.

The basic promise of Centrex was this: instead of installing your own PABX, let the telephone company configure their own equipment to provide the features you want to your business phones. A Centrex line is a bit like a normal telephone line, but with all the added capabilities of a business phone system: intercom calling, transfers, attendants, routing and long distance policies, and so on. All of these features were provided by central telephone exchanges, but your lines were partitioned to be interconnected within your business.

Centrex was a huge success. By 1990, a huge range of large institutions had either started their telephone journey with Centrex or transitioned away from a conventional PABX and onto Centrex. It's very likely that you have interacted with a Centrex system before and perhaps not realized. And now, Centrex's days are numbered. Let's look at the details.

Centrex is often explained as a reuse of the existing central office equipment to serve PABX requirements. This isn't entirely incorrect, but it can be misleading. It was not all that unusual for Centrex to rely on equipment installed at the customer site, but operated by the telco. For this reason, it's better to think of Centrex as a managed service than as a "cloud" service, or a Service-as-a-Service, or whatever modern term you might be tempted to apply.

Centrex existed in two major variants: Centrex-CO and Centrex-CU. The CO case, for Central Office, entailed this well-known design of each business telephone line connecting to an existing telco central office, where a switch was configured to provide Centrex features on that line group. CU, for Customer Unit, looks more like a very large PABX. These systems were usually limited to very large customers, who would provide space for the telco to build a new central office on the customer's site. The exchange was located with the customer, but operated by the telco.

These two different categories of service led to two different categories of customers, with different needs and usage patterns. Centrex-CO appealed to smaller organizations with fewer extensions, but also to larger organizations with extensions spread across a large area. In that case, wiring every extension back to the CO using telco infrastructure was less expensive than installing new wiring to a CU exchange. A prototypical example might be a municipal school district.

Centrex-CU appealed to customers with a large number of extensions grouped in a large building or a campus. In this case it was much less costly to wire extensions to the new CU site than to connect them all over the longer distance to an existing CO. A prototypical Centrex-CU customer might be a university.

Exactly how these systems worked varied greatly from exchange to exchange, but the basic concept is a form of partitioning. Telephone exchanges with support for Centrex service could be configured such that certain lines were grouped together and enabled for Centrex features. The individual lines needed to have access to Centrex-specific capabilities like service codes, but also needed to be properly associated with each other so that internal calling would indeed be internal to the customer. This concept of partitioning telephone switches had several different applications, and Western Electric and other manufacturers continued to enhance it until it reached a very high level of sophistication in digital switches.

Let's look at an example of a Centrex-CO. The State of New Mexico began a contract with Mountain States Telephone and Telegraph [1] for Centrex service in 1964. The new Centrex service replaced 11 manual switchboards distributed around Santa Fe, and included Wide-Area Telephone Service (WATS), a discount arrangement for long-distance calls placed from state offices to exchanges throughout New Mexico. On November 9th, 1964, technicians sent to Santa Fe by Western Electric completed the cutover at the state capitol complex. Incidentally, the capitol phones of the day were being installed in what is now the Bataan Memorial Building: construction of the Roundhouse, today New Mexico's distinctive state capitol, had just begun that same year.

The Centrex service was estimated to save $12,000 per month in the rental and operation of multiple state exchanges, and the combination of WATS and conference calling service was expected to produce further savings by reducing the need for state employees to travel for meetings. The new system was evidently a success, and led to a series of minor improvements including a scheme later in 1964 to ensure that the designated official phone number of each state agency would be answered during the state lunch break (noon to 1:15). In 1965, Burns Reinier resigned her job as Chief Operator of the state Centrex to launch a campaign for Secretary of State. Many state employees would probably recognize her voice, but that apparently did not translate to recognition on the ballot, as she lost the Democratic party nomination to the Governor's former secretary.

The late 1960s saw a flurry of newspaper advertisements giving new phone numbers for state and municipal agencies, Albuquerque Public Schools, and universities, as they all consolidated onto the state-run Centrex system. Here we must consider the geographical nature of Centrex: Centrex service operates within a single telephone exchange. To span the gap between the capitol in Santa Fe, state offices and UNM in Albuquerque, NMSU in Las Cruces, and even the State Hospital in Las Vegas (NM), a system of tie lines was installed between Centrex facilities in each city. These tie lines were essentially dedicated long distance trunks leased by the state to connect calls between Centrex exchanges at lower cost than even WATS long-distance service.

This system was not entirely CO-based: in Albuquerque, a Centrex exchange was installed in state-leased space at what was then known as the National Building, 505 Marquette. In the late '60s, 505 Marquette also hosted Telepak, an early private network service from AT&T. It is perhaps a result of this legacy that 505 Marquette houses one of New Mexico's most important network facilities, a large carrier hotel now operated by H5 Data Centers. The installation of the Centrex exchange at 505 Marquette saved a lot of expense on new local loops, since a series of 1960s political and bureaucratic events led to a concentration of state offices in the new building.

Having made this leap to customer unit systems, let's jump almost 30 years forward to an example of a Centrex-CU installation... one with a number of interesting details. In late 1989, Sandia National Laboratories ended its dependence on the Air Force for telephony services by contracting with AT&T for the installation of a 5ESS telephone exchange. The 5ESS, a digital switch and a rather new one at the time, brought with it not just advanced calling features but something even more compelling to an R&D institution at the time: data networking.

The Sandia installation went nearly all-in on ISDN, the integrated digital telephony and data standard that largely failed to achieve adoption for telephone applications. Besides the digital telephone sets, though, Sandia made full use of the data capabilities of the exchange. Computers connected to the data ports on the ISDN user terminals (the conventional term for the telephone instrument itself in an ISDN network) could make "data calls" over the telephone system to access IBM mainframes and other corporate computing resources... all at a blistering 64 kbps, the speed of an ISDN basic rate interface bearer channel. The ISDN network could even transport video calls, by combining multiple BRIs for 384 kbps aggregate capacity.

The 5ESS was installed in a building on Air Force property near Tech Area 1, and the 5ESS's robust support for remote switch modules was fully leveraged to place an RSM in each Tech Area. The new system required renumbering, always a hassle, but allowed for better matching of Sandia's phone numbers on the public network to phone numbers on the Federal Telecommunications System or FTS... a CCSA operated for the Federal Government. But we'll talk about that later. The 5ESS was also equipped with ISDN PRI tie lines to a sibling 5ESS at Sandia California in Livermore, providing inexpensive calling and ISDN features between the two sites.

This is a good time to discuss digital Centrex. Traditional telephony, even today in residential settings, uses analog telephones. Business systems, though, made a transition from analog to digital during the '80s and '90s. Digital telephone sets used with business systems provided far easier access to features of the key system, PABX, or Centrex, and with fewer wires. A digital telephone set on one or two telephone pairs could offer multiple voice lines, caller ID, central directory service, busy status indication for other phones, soft keys for pickup groups and other features, even text messaging in some later systems (like my Comdial!). Analog systems often required as many as a half dozen pairs just for a simple configuration like two lines and busy lamp fields; analog "attendant" sets with access to many lines could require a 25-pair Amphenol connector... sometimes even more than one.

Many of these digital systems used proprietary protocols between the switch and telephones. A notable example would be the TCM protocol used by the Nortel Meridian, an extremely popular PABX that can still be found in service in many businesses. Digital telephone sets made the leap to Centrex as well: first by Nortel themselves, who offered a "Meridian Digital Centrex" capability on their DMS-100 exchange switch that supported telephone sets similar to (but not the same as!) ordinary Meridian digital systems. AT&T followed several years later by offering 5ESS-based digital Centrex over ISDN: the same basic capability that could be used for computer applications as well, but with the advantage of full compatibility with AT&T's broader ISDN initiative.

The ISDN user terminals manufactured by Western Electric and, later, Lucent, are distinctive and a good indication that digital Centrex is in use. They are also lovely examples of the digital telephones of the era, with LCD matrix displays, a bevy of programmable buttons, and pleasing Bellcore distinctive ringing. It is frustrating that the evolution of telephone technology has seemingly made ringtones far worse. We will have to forgive the oddities of the ISDN electrical standard that required an "NT1" network termination device screwed to the bottom of your desk or, more often, underfoot on the floor.

Thinking about these digital phones, let's consider the user experience of Centrex. Centrex was very flexible; there were a large number of options available based on customer preference, and the details varied between the Centrex host switches used in the United States: Western Electric's line from the 5XB to the 5ESS, Nortel's DMS-100 and DMS-10, and occasionally the Siemens EWSD. This all makes it hard to describe Centrex usage succinctly, but I will focus on some particular common features of Centrex.

Like PABXs, most Centrex systems required that a dialing prefix (conventionally nine) be used for an outside line. This was not universal; "assumed nine" could often be enabled at customer request, but it created a number of complications in the dialplan and was best avoided. Centrex systems, because they mostly belonged to larger customers, were more likely than PABXs to offer tie lines or other private routing arrangements, which were often used by dialing calls with a prefix of 8. Like conventional telephone systems, you could dial 0 for the operator, but on traditional large Centrex systems the operator would be an attendant within the Centrex customer organization.

Centrex systems enabled internal calling by extension, much like PABXs. Because of the large size of some Centrex-CU installations in particular you are probably much more likely to encounter five-digit extensions with Centrex than with a PABX. These types of extensions were usually designed by taking several exchange prefixes in a sequence, and using the last digit of the exchange code as the first digit of the extension. For that reason the extensions are often written in a format like 1-2345. A somewhat charming example of this arrangement was the 5ESS-based Centrex-CU at Los Alamos National Laboratories, which spans exchange prefixes 662-667 in the 505 NPA. Since that includes the less desirable exchange prefix 666, it was skipped. Of course, that didn't stop Telnyx from starting to use it more recently. Because of the history of Los Alamos's development, telephones in the town use these same prefixes, generally the lower ones.

With digital telephones, Centrex features are comparatively easy to access, since they can be assigned to buttons on the telephones. With analog systems there are no such convenient buttons, so Centrex features had to be awkwardly bolted on much like advanced features on non-Centrex lines. Many features are activated using vertical service codes starting with *, although in some systems (especially older systems for pulse compatibility) they might be mapped to codes that look more like extensions. Operations that involve interrupting an active call, like transfer or hold, involve flashing the hookswitch... a somewhat antiquated operation now more often achieved with a "flash" button on the telephone, when it's done at all.

Still, some analog Centrex systems used electrical tricks on the pair (similar to many PABX) to provide a message waiting light and even an extra button for common operations.

While Centrex initially appealed mainly to larger customers, improvements in host switch technology and telephone company practices made it an accessible option for small organizations as well. Verizon's "CustoPAK" was an affordable offering that provided Centrex features on up to 30 extensions. These small-scale services were also made more accessible by computerization. Configuration changes to the first crossbar Centrex service required exchange technicians climbing ladders to resolder jumpers. With the genesis of digital switches, telco employees in translation centers read customer requirements and built switch configuration plans. By the '90s, carriers offered modem services that allowed customers to reconfigure their Centrex themselves, and later web-based self-service systems emerged.

So what became of Centrex? Like most aspects of the conventional copper phone network, it is on the way out. Major telephone carriers have mostly removed Centrex service from their tariffs, meaning they are no longer required to offer it. Even in areas where it is present on the tariff it is reportedly hard to obtain. A report from the state of Washington notes that, as a result particularly of CenturyLink removing copper service from its tariffs entirely, CenturyLink has informed the state that it may discontinue Centrex service at any time, subject to six months notice. Six months may seem like a long time but it is a very short period for a state government to replace a statewide telephone system... so we can anticipate some hurried acquisitions in the next couple of years.

Centrex had always interacted with tariffs in curious ways, anyway. Centrex was the impetus behind multiple lawsuits against AT&T on grounds varying from anti-competitive behavior to violations of the finer points of tariff regulation. For the most part AT&T prevailed, but some of these did lead to changes in the way Centrex service was charged. Taxation was a particularly difficult matter. There were excise taxes imposed on telephone service in most cases, but AT&T held that "internal" calls within Centrex customers should not be subject to these taxes due to their similarity to untaxed PABX and key systems. The finer points of this debate varied from state to state, and it made it to the Supreme Court at least once.

Centrex could also have a complex relationship with the financial policies of many institutional customers. Centrex was often paired with services like WATS or tie lines to make long-distance calling more affordable, but this also encouraged employees to make their personal long-distance calls in the office. The struggle of long-distance charge accounting led not only to lengthy employee "acceptable use" policies that often survive to this day, but also schemes of accounting and authorization codes to track long distance users. Long-distance phone charges by state employees were a perennial minor scandal in New Mexico politics, leading to some sort of audit or investigation every few years. Long-distance calling was often disabled except for extensions that required it, but you will find stories of public courtesy phones accidentally left with long-distance enabled becoming suddenly popular parts of university buildings.

Today, Centrex is generally being replaced with VoIP solutions. Some of these are fully managed, cloud-based services, analogous to Centrex-CO before them. IP phones bring a rich featureset that leave eccentric dialplans and feature codes mostly forgotten, and federal regulations around the accessibility of 911 have broadly discouraged prefix schemes for outside calls. On the flip side, these types of phone systems make it very difficult to configure dialplan schemes on endpoints, leading office workers to learn a new type of phone oddity: dialing pound after a number to skip the end-of-dialing timeout. This worked on some Centrex systems as well; some things never change.

[1] Later called US West, later called Qwest, now part of CenturyLink, which is now part of Lumen.

2023-11-04 nuclear safety

Nuclear weapons are complex in many ways. The basic problem of achieving criticality is difficult on its own, but deploying nuclear weapons as operational military assets involves yet more challenges. Nuclear weapons must be safe and reliable, even with the rough handling and potential of tampering and theft that are intrinsic to their military use.

Early weapon designs somewhat sidestepped the problem by being stored in an inoperable condition. During the early phase of the Cold War, most weapons were "open pit" designs. Under normal conditions, the pit was stored separately from the weapon in a criticality-safe canister called a birdcage. The original three nuclear weapons stockpile sites (Manzano Base, Albuquerque NM; Killeen Base, Fort Hood TX; Clarksville Base, Fort Campbell KY) included special vaults to store the pit and assembly buildings where the pits would be installed into weapons. The pit vaults were designed not only for explosive safety but also to resist intrusion; the ability to unlock the vaults was reserved to a strictly limited number of Atomic Energy Commission personnel.

This method posed a substantial problem for nuclear deterrence, though. The process of installing the pits in the weapons was time consuming, required specially trained personnel, and wasn't particularly safe. Particularly after the dawn of ICBMs, a Soviet nuclear attack would require a rapid response, likely faster than weapons could be assembled. The problem was particularly evident when nuclear weapons were stockpiled at Strategic Air Command (SAC) bases for faster loading onto bombers. Each SAC base required a large stockpile area complete with hardened pit vaults and assembly buildings. Far more personnel had to be trained to complete the assembly process, and faster. Opportunities for mistakes that made weapons unusable, killed assembly staff, or contaminated the environment abounded.

As nuclear weapons proliferated, storing them disassembled became distinctly unsafe. It required personnel to perform sensitive operations with high explosives and radioactive materials, all under stressful conditions. It required that nuclear weapons be practical to assemble and disassemble in the field, which prevented strong anti-tampering measures.

The W-25 nuclear warhead, an approximately 220 pound, 1.7 kT weapon introduced in 1957, was the first to employ a fully sealed design. A relatively small warhead built for the Genie air-to-air missile, several thousand units would be stored fully assembled at Air Force sites. The first version of the W-25 was, by the AEC's own admission, unsafe to transport and store. It could detonate by accident, or it could be stolen.

The transition to sealed weapons changed the basic model of nuclear weapons security. Open weapons relied primarily on the pit vault, a hardened building with a bank-vault door, as the authentication mechanism. Few people had access to this vault, and two-man policies were in place and enforced by mechanical locks. Weapons stored assembled, though, lacked this degree of protection. But the advent of sealed weapons presented a new possibility: the security measures could be installed inside the weapon itself.

Safety elements of nuclear weapons protect against both unintentional and intentional attacks on the weapon. For example, from early on in the development of sealed implosion-type weapons "one-point safety" became common (it is now universal). One-point safe weapons have their high explosive implosion charge designed so that a detonation at any one point in the shell will never result in a nuclear yield. Instead, the imbalanced forces in the implosion assembly will tear it apart. This improper detonation produces a "fizzle yield" that will kill bystanders and scatter nuclear material, but produces orders of magnitude less explosive force and radiation dispersal than a complete nuclear detonation.

The basic concept of one-point safety is a useful example to explain the technical concepts that followed later. One-point safety is in some ways an accidental consequence of the complexity of implosion weapons: achieving a full yield requires an extremely precisely timed detonation of the entire HE shell. Weapons relied on complex (at the time) electronic firing mechanisms to achieve the required synchronization. Any failure of the firing system to produce a simultaneous detonation results in a partial yield because of the failure to achieve even implosion. One-point safety is essentially just a product of analysis (today computer modeling) to ensure that detonation of a single module of the HE shell will never result in a nuclear yield.

This one-point scenario could occur because of outside forces. For example, one-point safety is often described in terms of enemy fire. Imagine that, in combat conditions, anti-air weapons or even rifle fire strike a nuclear weapon. The shock forces will reach one side of the HE shell first. If they are sufficient to detonate it (not an easy task as very insensitive explosives are used), the one-point detonation will destroy the weapon with a fizzle yield.

We can also examine one-point safety in terms of the electrical function of the weapon. A malfunction or tampering with a weapon might cause one of the detonators to fire. The resulting one-point detonation will destroy the weapon. Achieving a nuclear yield requires that the shell be detonated in synchronization, which naturally functions as a measure of the correct operation of the firing system. Correctly firing a nuclear weapon is complex and difficult, requiring that multiple components are armed and correctly functioning. This itself serves as a safety mechanism since correct operation, difficult to achieve by intention, is unlikely to happen by accident.

Like most nuclear weapons, the W-25 received a series of modifications or "mods." The second, mod 1 (they start at 0), introduced a new safety mechanism: an environmental sensing device. The environmental sensing device allowed the weapon to fire only if certain conditions were satisfied, conditions that were indicative of the scenario the weapon was intended to fire in. The details of the ESD varied by weapon and probably even by application within a set of weapons, but the ESD generally required things like moving a certain distance at a certain speed (determined by inertial measurements) or a certain change in altitude in order to arm the weapon. These measurements ensured that the weapon had actually been fired on a missile or dropped as a bomb before it could arm.
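
The general idea can be sketched in a few lines of code. This is a purely hypothetical illustration: the quantities sensed, the thresholds, and the function names are all invented, not taken from any real ESD design.

```python
# Toy environmental sensing logic: permit arming only if recorded flight
# data shows sustained speed and a large descent, a profile consistent
# with delivery but not with storage or transport. Illustrative only.
def esd_permits_arming(samples, min_speed, min_altitude_drop):
    """samples: time-ordered (speed, altitude) readings."""
    if not samples:
        return False
    speeds = [speed for speed, _ in samples]
    altitudes = [alt for _, alt in samples]
    sustained_speed = all(v >= min_speed for v in speeds)
    descended = (altitudes[0] - altitudes[-1]) >= min_altitude_drop
    return sustained_speed and descended
```

A weapon sitting in a bunker never produces readings that satisfy both conditions, so the arming circuit stays locked no matter what else happens.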

The environmental sensing device provides one of two basic channels of information that weapons require to arm: indication that the weapon is operating under normal conditions, like flying towards a target or falling onto one. This significantly reduces the risk of unintentional detonation.

There is a second possibility to consider, though, that of intentional detonation by an unauthorized user. A weapon could be stolen, or tampered with in place as an act of terrorism. To address this possibility, a second basic channel of input was developed: intent. For a weapon to detonate, it must be proven that an authorized user has the intent to detonate the weapon.

The implementation of these concepts has varied over time and by weapon type, but from unclassified materials a general understanding of the architecture of these safety systems can be developed. I decided to write about this topic not only because it is interesting (it certainly is), but also because many of the concepts used in the safety design of nuclear weapons are also applicable to other systems. Similar concepts are used, for example, in life-safety systems and robotics, fields where unintentional operation or tampering can cause significant harm to life and property. Some of the principles are unsurprisingly analogous to cryptographic methods used in computer security, as well.

The basic principle of weapons safety is called the strong link, weak link principle, and it is paired to the related idea of an exclusion zone. To understand this, it's helpful to remember the W-25's sealed design. For open weapons, a vault was used to store the pit. In a sealed weapon, the vault is, in a sense, built into the weapon. It's called the exclusion zone, and it can be thought of as a tamper-protected, electrically isolated chamber that contains the vital components of the weapon, including the electronic firing system.

In order to fire the weapon, the exclusion zone must be accessed, in that an electrical signal needs to be delivered to the firing system. Like the bank vaults used for pits, there is only one way into the exclusion zone, and it is tightly locked. An electrical signal must penetrate the energy barrier that surrounds the exclusion zone, and the only way to do so is by passing through a series of strong links.

The chain of events required to fire a nuclear weapon can be thought of like a physical chain used to support a load. Strong links are specifically reinforced so that they should never fail. We can also look at the design through the framework of information security, as an authentication and authorization system. Strong links are strict credential checks that will deny access under all conditions except the one in which the weapon is intended to fire: when the weapon is in suitable environmental conditions, has received an authorized intent signal, and the fuzing system calls for detonation.

One of the most important functions of the strong link is to confirm that correct environmental and intent authorization has occurred. The environmental sensing device, installed in the body of the weapon, sends its authorizing signal when its conditions are satisfied. There is some complexity here, though. One of the key concerns in weapons safety was the possibility of stray electrical signals, perhaps from static or lightning or contact with an aircraft electrical system, causing firing. The strong link needs to ensure that the authorization signal received really is from the environmental sensing device, and not a result of some electrical transient.

This verification is performed by requiring a unique signal. The unique signal is a digital message consisting of multiple bits, even when only a single bit of information (that environmental conditions are correct) needs to be conveyed. The extra bits serve only to make the message complex and unique. This way, any transient or unintentional electrical signal is extremely unlikely to match the correct pattern. We can think of this type of unique signal as an error detection mechanism, padding the message with extra bits just to verify the correctness of the important one.
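The concept reduces to a simple exact-match check. In this sketch the pattern, its width, and the function names are arbitrary stand-ins; real unique-signal formats are classified.

```python
# Illustrative sketch: a strong link accepts exactly one multi-bit pattern.
# The extra bits carry no information beyond making an accidental match
# vanishingly unlikely. The 24-bit pattern below is arbitrary.
UNIQUE_SIGNAL = 0b1011_0100_1110_0101_1010_0101

def strong_link_accepts(received: int) -> bool:
    """Unlock only on an exact match; anything else is treated as noise."""
    return received == UNIQUE_SIGNAL
```

A random electrical transient that happens to clock bits into the mechanism matches a 24-bit pattern with probability 1 in 2^24; real designs can make the pattern as long as needed to drive that probability down further.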

Intent is a little trickier, though. It involves human input. The intent signal comes from the permissive action link, or PAL. Here, too, the concept of a unique signal is used to enable the weapon, but this time the unique signal isn't only a matter of error detection. The correct unique signal is a secret, and must be provided by a person who knows it.

Permissive action links are fascinating devices from a security perspective. The strong link is like a combination lock, and the permissive action link is the key or, more commonly, a device through which the key is entered. There have been many generations of PALs, and we are fortunate that a number of older, out of use PALs are on public display at the National Museum of Nuclear Science and History here in Albuquerque.

Here we should talk a bit about the implementation of strong links and PALs. While newer designs are likely more electronic, older designs were quite literally combination locks: electromechanical devices where a stepper motor or solenoid had to advance a clockwork mechanism in the correct pattern. It was a lot like operating a safe lock by remote. The design of PALs reflected this. Several earlier PALs are briefcases that, when opened, reveal a series of dials. An operator has to connect the PAL to the weapon, turn all the dials to the correct combination, and then press a button to send the unique signal to the weapon.

Later PALs became very similar to the key loading devices used for military cryptography. The unique signal is programmed into volatile memory in the PAL. To arm a weapon, the PAL is connected, an operator authenticates themselves to the PAL, and then the PAL sends the stored unique signal. Like a key loader, the PAL itself incorporates measures against tampering or theft. A zeroize function is activated by tamper sensors or manually and clears the stored unique key. Too many failures by an operator to authenticate themselves also results in the stored unique signal being cleared.
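The behavior described above, volatile storage released only after authentication and zeroized on tamper or repeated failure, can be modeled in miniature. Everything here is a toy: the class, the three-attempt limit, and the method names are invented for illustration.

```python
class PALStore:
    """Toy model of a PAL's volatile key storage. The stored unique
    signal is released only after operator authentication and is
    zeroized on tamper or after too many failed attempts."""
    MAX_FAILURES = 3  # illustrative limit, not from any real device

    def __init__(self, operator_code, unique_signal):
        self._code = operator_code
        self._signal = unique_signal  # "volatile" storage
        self._failures = 0

    def zeroize(self):
        self._signal = None  # key irrecoverably cleared

    def tamper_detected(self):
        self.zeroize()

    def release_signal(self, code):
        if self._signal is None:
            raise RuntimeError("unique signal has been zeroized")
        if code != self._code:
            self._failures += 1
            if self._failures >= self.MAX_FAILURES:
                self.zeroize()
            raise PermissionError("operator authentication failed")
        self._failures = 0
        return self._signal
```

The key property is that after zeroization there is nothing left to extract: an attacker who steals the PAL and guesses wrong a few times holds an empty box.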

Much like key loaders, PALs developed into more sophisticated devices over time with the ability to store and manage multiple unique signals, rekey weapons with new unique signals, and to authenticate the operator by more complex means. A late PAL-adjacent device on public display is the UC1583, a Compaq laptop docked to an electronic interface. This was actually a "PAL controller," meaning that it was built primarily for rekeying weapons and managing sets of keys. By this later era of nuclear weapons design, the PAL itself was typically integrated into communications systems on the delivery vehicle and provided a key to the weapon based on authorization messages received directly from military command authorities.

The next component to understand is the weak link. A strong link is intended to never fail open. A weak link is intended to easily fail closed. A very basic type of weak link would be a thermal fuse that burns out in response to high temperatures, disconnecting the firing system if the weapon is exposed to fire. In practice there can be many weak links and they serve as a protection against both accidental firing of a damaged weapon and intentional tampering. The exclusion zone design incorporates weak links such that any attempt to open the exclusion zone by force will result in weak links failing.

A special case of a weak link, or at least something that functions like a weak link, is the command disable feature on most weapons. Command disable is essentially a self-destruct capability. Details vary but, on the B61 for example, the command disable is triggered by pulling a handle that sticks out of the control panel on the side of the weapon. The command disable triggers multiple weak links, disabling various components of the weapon in hard-to-repair ways. An unauthorized user, without the expertise and resources of the weapons assembly technicians at Pantex, would find it very difficult to restore a weapon to working condition after the command disable was activated. Some weapons apparently had an explosive command disable that destroyed the firing system, but from publicly available material it seems that a more common design involved the command disable interrupting the power supply to volatile storage for unique codes and configuration information.

There are various ways to sum up these design features. First, let's revisit the overall architecture. Critical components of nuclear weapons, including both the pit itself and the electronic firing system, are contained within the exclusion zone. The exclusion zone is protected by an energy barrier that isolates it from mechanical and electrical influence. For the weapon to fire, firing signals must pass through strong links and weak links. Strong links are designed to never open without a correct unique signal, and to fail open only in extreme conditions that would have already triggered weak links. Weak links are designed to easily fail closed in abnormal situations like accidents or tampering. Both strong links and weak links can receive human input, strong links to provide intent authorization, and weak links to manually disable the weapon in a situation where custody may be lost.
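The summary above can be restated as a one-line rule: the firing signal passes only if every weak link is still intact and every strong link independently authorizes. The sketch below is a deliberately simplified model with invented names.

```python
# Toy restatement of the strong link / weak link architecture.
def firing_signal_passes(strong_links, weak_links):
    """strong_links: callables that return True only when authorized.
    weak_links: booleans that become False permanently once a link fails."""
    return all(weak_links) and all(check() for check in strong_links)
```

In use, the strong links would be checks like "environmental conditions satisfied," "correct intent signal received," and "fuzing system calls for detonation," while the weak links would be states like "thermal fuse intact" and "tamper loop unbroken." Any single False anywhere keeps the exclusion zone sealed.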

The physical design of nuclear weapons is intricate and incorporates many anti-tamper and mechanical protection features, and high explosives and toxic and radioactive materials lead to hazardous working conditions. This makes the disassembly of modern nuclear weapons infamously difficult; a major challenge in the reduction of the nuclear stockpile is the backlog of weapons waiting for qualified technicians to take them apart. Command disable provides a convenience feature for this purpose, since it allows weapons to be written off the books before they can be carefully dismantled at one of very few facilities (often just one) capable of doing so. As an upside, these same properties make it difficult for an unauthorized user to circumvent the safety mechanisms in a nuclear weapon, or repair one in which weak links have failed.

Accidental arming and detonation of a nuclear weapon should not occur because the weapon will only arm on receipt of complex unique signals, including an intent signal that is secret and available only to a limited number of users (today, often only to the national command authority). Detonation of a weapon under extreme conditions like fire or mechanical shock is prevented by the denial of the strong links, the failure of the weak links, and the inherent difficulty of correctly firing a nuclear weapon. Compromise of a nuclear weapon, or detonation by an unauthorized user, is prevented by the authentication checks performed by the strong links and the tamper resistance provided by the weak links. Cryptographic features of modern PALs enhance custodial control of weapons by enabling rotation and separation of credentials.

Modern PALs particularly protect custodial control by requiring keys unknown to the personnel handling the weapons before they can be armed. These keys must be received from the national command authority as part of the order to attack, making communications infrastructure a critical part of the nuclear deterrent. It is for this reason that the United States has so many redundant, independent mechanisms of delivering attack orders, ranging from secure data networks to radio equipment on Air Force One capable of direct communication with nuclear assets.

None of this is to say that the safety and security of nuclear weapons is perfect. In fact, historical incidents suggest that nuclear weapons are sometimes surprisingly poorly protected, considering the technical measures in place. The widely reported story that the enable code for the Minuteman warhead's PAL was 00000000 [1] is unlikely to be true as it was originally reported, but that's not to say that there are no questions about the efficacy of PAL key management. US weapons staged in other NATO countries, for example, have raised perennial concerns about effective custody of nuclear weapons and the information required to use them.

General military security incidents endanger weapons as well. Widely reported disclosures of nuclear weapon security procedures by online flash card services and even Strava do not directly compromise these on-weapon security measures but nonetheless weaken the overall, multi-layered custodial security of these weapons, making other layers more critical and more vulnerable.

Ultimately, concerns still exist about the design of the weapons themselves. Most of the US nuclear fleet is very old. Many weapons are still in service that do not incorporate the latest security precautions, and efforts to upgrade these weapons are slow and endangered by many programmatic problems. The entire arsenal was equipped with PALs only in 1987, and all weapons received cryptographic rekeying capability only in 2004.

PALs, or something like them, are becoming the international norm. The Soviet Union developed similar security systems for their weapons, and allies of the United States often use US-designed PALs or similar under technology sharing agreements. Pakistan, though, remains a notable exception. There are still weapons in service in various parts of the world without this type of protection. Efforts to improve that situation are politically complex and run into many of the same challenges as counterproliferation in general.

Nuclear weapons are perhaps safer than you think, but that's certainly not to say that they are safe.

[1] This "popular fact" comes from an account by a single former missileer. Based on statements by other missile officers and from the Air Force itself, the reality seems to be complex. The 00000000 code may have been used before the locking mechanism was officially placed in service, during a transitional stage when technical safeguards had just been installed but missile crews were still operating on procedures developed before their introduction. Once the locking mechanism was placed in service and missile crews were permitted to deviate from the former strict two-man policy, "real" randomized secret codes were used.
