Articles about A.I. and meta-prompts; a Bed of Nails; goodbye to the black hole on my block.


What I’m obsessing over this week. Week #10:

1) I finally got around to reading the pair of New Yorker feature stories about A.I. that I had dog-eared as must-reads a few weeks ago. Specifically, the stories were about: 1) Microsoft’s partnership with (and $13 billion investment in/49% ownership of) 2023’s breakout A.I. tech start-up, ChatGPT maker OpenAI, and 2) Nvidia, the company that makes the unique and powerful processors that run ChatGPT.

I had been riveted last month, during Thanksgiving week, by the big-deal headlines when OpenAI’s board fired CEO Sam Altman, and then when Microsoft turned around and hired Altman as OpenAI’s 700 employees clamored to join his exit.

What drama! Involving what is apparently shaping up to be our century’s defining technology. I also liked that a Microsoft return to glamour could be excellent news for Sound Transit and our 2 Line opening next spring; the line goes right to the company’s Redmond HQ.

The article about Nvidia, How Jensen Huang’s Nvidia Is Powering the A.I. Revolution, was written by Stephen Witt, a reporter who wrote one of my favorite non-fiction books, How Music Got Free, about the history of MP3s. With his knack for getting super anecdotes (“Sometimes, when Huang was crossing the bridge, the local boys would grab the ropes and try to dislodge him”) and perfect quotes (“There’s a war going on out there in A.I., and Nvidia is the only arms dealer”), Witt tells the story of Nvidia’s game-changing G.P.U. (Graphics Processing Unit) technology, which the company initially sold in video game cards marketed to gamers who simply wanted better on-screen graphics. The cards blew up when A.I. academics got hold of them.

In 2012, Krizhevsky and his research partner, Ilya Sutskever, working on a tight budget, bought two GeForce cards from Amazon. Krizhevsky then began training a visual-recognition neural network on Nvidia’s parallel-computing platform, feeding it millions of images in a single week. “He had the two G.P.U. boards whirring in his bedroom,” Hinton said. “Actually, it was his parents who paid for the quite considerable electricity costs.”

Sutskever and Krizhevsky were astonished by the cards’ capabilities. Earlier that year, researchers at Google had trained a neural net that identified videos of cats, an effort that required some sixteen thousand C.P.U.s. Sutskever and Krizhevsky had produced world-class results with just two Nvidia circuit boards. “G.P.U.s showed up and it felt like a miracle,” Sutskever told me.

Witt, who also has a gift for making technology intelligible with clear analogies, goes on to explain: “Unlike general-purpose C.P.U.s (Central Processing Units), the G.P.U. breaks complex mathematical tasks apart into small calculations, then processes them all at once, in a method known as parallel computing. A C.P.U. functions like a delivery truck, dropping off one package at a time; a G.P.U. is more like a fleet of motorcycles spreading across a city.”
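If you want the delivery-truck-versus-motorcycle-fleet analogy in concrete terms, here’s a minimal sketch of my own (not from Witt or Nvidia) that runs the same batch of work one task at a time and then fans it out across a pool of workers; the heavy_task function and the job sizes are made-up stand-ins.

```python
# Hypothetical illustration of serial vs. parallel processing of the same workload.
import math
import time
from multiprocessing import Pool

def heavy_task(n):
    """A stand-in for one small calculation in a much larger batch."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 16  # sixteen identical "packages" to deliver

    start = time.perf_counter()
    serial = [heavy_task(n) for n in jobs]  # the delivery truck: one package at a time
    print(f"serial:   {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    with Pool() as pool:  # the motorcycle fleet: tasks spread across CPU cores
        parallel = pool.map(heavy_task, jobs)
    print(f"parallel: {time.perf_counter() - start:.2f}s")
```

(A real G.P.U. takes the same idea to an extreme, running thousands of these small calculations simultaneously rather than spreading them across a handful of CPU cores.)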

In its very next issue, the New Yorker ran what felt like Pt. 2, an article about Microsoft and OpenAI, The Inside Story of Microsoft’s Partnership with OpenAI. The story used the Thanksgiving week drama as a news peg to tell the history of Microsoft’s pivotal and emergent relationship with OpenAI.

The article revolves around Microsoft’s chief technology officer, Kevin Scott, an idealistic populist with a formative rags-to-riches biography. Scott convinced Microsoft CEO Satya Nadella that the company’s A.I. division must be driven by serving the masses, not stealing their jobs, and it was Scott who forged Microsoft’s partnership with OpenAI.

He began looking at various startups, and one of them stood out: OpenAI. Its mission statement vowed to insure that “artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” … In March, 2018, Scott arranged a meeting with some employees at the startup, which is based in San Francisco. He was delighted to meet dozens of young people who’d turned down millions of dollars from big tech firms in order to work eighteen-hour days for an organization that promised its creations would not “harm humanity or unduly concentrate power.”

Before making its $13 billion commitment to OpenAI, Microsoft started out with a $1 billion investment in the young company. That early bet paid off thanks to GitHub, another promising indie start-up like OpenAI, but one that Microsoft had actually acquired outright and brought onto campus. The Microsoft flex, however, didn’t signal the same old enervating death knell for this cult favorite among software engineers. In a change of thinking, Microsoft left GitHub, now an independent division on the Redmond campus, alone to flourish as is under its own CEO.

GitHub was working on a product called Copilot, intended to help techies finish code. Fortuitously, OpenAI had an earlier, separate, and stunning success testing A.I. to do just that. Shazam! Microsoft paired GitHub’s product with OpenAI’s technology. It worked. Copilot, released in 2021 on a limited trial to other tech companies, was a smash: “When the GitHub Copilot was released, it was an immediate success. ‘Copilot literally blew my mind,’ one user tweeted hours after it was released. ‘it’s witchcraft!!!’ another posted. Microsoft began charging ten dollars per month for the app; within a year, annual revenue had topped a hundred million dollars.”

Looking for a mass-market angle, Microsoft then coupled Copilot, now powered by OpenAI’s latest ChatGPT upgrade, with Microsoft Office to help general users.

The release of the Copilots—a process that began this past spring with select corporate clients and expanded more broadly in November—was a crowning moment for the companies, and a demonstration that Microsoft and OpenAI would be linchpins in bringing artificial intelligence to the wider public. ChatGPT, launched in late 2022, had been a smash hit, but it had only about fourteen million daily users. Microsoft had more than a billion.

The Copilots let users pose questions to software as easily as they might to a colleague—“Tell me the pros and cons of each plan described on that video call,” or “What’s the most profitable product in these twenty spreadsheets?”—and get instant answers, in fluid English. The Copilots could write entire documents based on a simple instruction. (“Look at our past ten executive summaries and create a financial narrative of the past decade.”) They could turn a memo into a PowerPoint. They could listen in on a Teams video conference, then summarize what was said, in multiple languages, and compile to-do lists for attendees.

Earlier this fall, the company gave me a demonstration of the Word Copilot. You can ask it to reduce a five-page document to ten bullet points. (Or, if you want to impress your boss, it can take ten bullet points and transform them into a five-page document.) You can “ground” a request in specific files and tell the Copilot to, say, “use my recent e-mails with Jim to write a memo on next steps.” Via a dialogue box, you can ask the Copilot to check a fact, or recast an awkward sentence, or confirm that the report you’re writing doesn’t contradict your previous one. You can ask, “Did I forget to include anything that usually appears in a contract like this?,” and the Copilot will review your previous contracts. None of the interface icons look even vaguely human. The system works hard to emphasize its fallibility by announcing that it may provide the wrong answer.

The Office Copilots seem simultaneously impressive and banal. They make mundane tasks easier, but they’re a long way from replacing human workers. They feel like a far cry from what was foretold by sci-fi novels. But they also feel like something that people might use every day.

This story on Microsoft and OpenAI also explains one of the key concepts that make A.I. products work in practice: meta-prompts. Meta-prompts are a series of hyper-discreet, behind-the-curtain nudges that guide users’ often unwieldy prompts, giving them the most germane and fine-tuned results. Originally, meta-prompts were intended to steer users away from illegal or nefarious paths.

A series of commands—known as meta-prompts—would be invisibly appended to every user query. The meta-prompts were written in plain English. Some were specific: “If a user asks about explicit sexual activity, stop responding.” Others were more general: “Giving advice is O.K., but instructions on how to manipulate people should be avoided.” Anytime someone submitted a prompt, Microsoft’s version of GPT-4 attached a long, hidden string of meta-prompts and other safeguards—a paragraph long enough to impress Henry James.
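Mechanically, the idea is simple enough to sketch. Here’s a hypothetical illustration of mine, not Microsoft’s actual pipeline, of plain-English safeguards being invisibly attached to whatever the user typed before the combined text goes to the model; the build_hidden_prompt function is a made-up name, and the example meta-prompts simply echo the two quoted above.

```python
# Hypothetical sketch: hidden meta-prompts attached to a user's visible question.
META_PROMPTS = [
    "If a user asks about explicit sexual activity, stop responding.",
    "Giving advice is O.K., but instructions on how to manipulate people should be avoided.",
]

def build_hidden_prompt(user_query: str) -> str:
    """Attach the invisible safeguards to the user's query before it reaches the model."""
    hidden = "\n".join(META_PROMPTS)
    return f"{hidden}\n\nUser: {user_query}"

if __name__ == "__main__":
    # The user only ever sees their own question; the model sees the whole string.
    print(build_hidden_prompt("Summarize the pros and cons of each plan from that video call."))
```

The point, as the article emphasizes, is that the user never sees any of this; the safeguards ride along silently with every query.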

Crafting meta-prompts, which seems like something a computer whisperer from a 1990s William Gibson novel would specialize in, does appear to be the superpower one needs to master to become an A.I. pioneer. Take “promptographer” Boris Eldagsen, for example, who was profiled by the tech website TheVerge.com earlier this month. The site’s recent video story on Eldagsen began:

He inputs highly specific and deliberate text prompts into generative AI programs like DALL-E or Midjourney, and tweaks their outputs repeatedly to create thought-provoking photographs…or at least, what look like photographs. Senior video producer Becca Farsace flies to Berlin to investigate how exactly Boris’ process works, how he’s fooled award shows, and what her final thoughts are on this new age of generative AI art.

Ultimately, the parallel stories about Nvidia’s niche video game sundries (G.P.U.-powered gaming cards) morphing into A.I.’s secret ingredient, and GitHub’s niche coding tool (Copilot) morphing into the starring feature of Microsoft Office, are larger stories about the way reconfiguring intended uses and hacking preordained narratives creates the path to game-changing technologies.

Similarly, the way Microsoft’s relationship with GitHub foreshadows its relationship with OpenAI sets up a telling parallel story. These adjacent threads about a notable change in Microsoft strategy—the willingness to give its young partners autonomy—reflect a defining aspect of technological breakthroughs: Important shifts in thinking don’t seem meaningful until the Eureka! finale gives us the lens to look back and identify all those necessary precursor moments.

Coupled with Neuromancer details like meta-prompting and promptography, these New Yorker features captured the future mid-stream.


2) Happy Hanukkah to me. My bestie ECB got me a magical Hanukkah present: Bed of Nails’ BON Mat, a soft mat covered with 8,820 plastic nails that mimics the soothing effects of acupuncture, or, according to the woo-woo brochure: “the mystical bed of nails originated over 1,000 years ago… used by gurus in the practice of meditation and healing.”

In my case, this means releasing an ocean of DOSE inside my body (dopamine, oxytocin, serotonin, and endorphins) as I lie down on it every night.

“Use your Bed of Nails when needed, preferably daily for 10 to 20 minutes, or as long as you desire … even to fall asleep.”

That’s me.

3) Capitol Hill Seattle blog has the news: Bounty Kitchen, the black hole that’s devouring the centerpiece ground-floor space below the modern-age apartment building on my block’s (otherwise lively) corner intersection, is finally disappearing.

With lights ablaze at the other corner businesses—the crowded Vietnamese restaurant, the noisy taco place, and the pizza joint—this energy vacuum (I’d taken to calling it Empty Kitchen shortly after it opened three-and-a-half years ago) was always pitch-black by dinnertime and on into the evening. You might see some random customers there on weekdays, inevitably looking a bit confused and lunching alone in the capacious dining room; apparently, they never came back.

With its rigid flow-chart vibe and the staff’s utter bewilderment at the idea that customers might want to linger and chill at a restaurant, Bounty Kitchen’s awkward, forced business plan was a mismatch for the neighborhood from Day 1. Despite the large mod space, high ceiling, and leafy patio (all inherited from the previous groovy tenant, Tallulah’s restaurant), it was never a place that made you feel welcome to chat with a friend or telecommute solo over a long lunch.

They were also oblivious to the fact that Tallulah’s, which had live jazz on Thursday nights, had actually seated people at the long and gorgeous mahogany bar while the warm staff catered to the lively tables with a flirty expectation that patrons wanted more food and second rounds.

Bounty Kitchen’s last day—which has felt inevitable for its entire uninspired tenure on the block—is this Saturday.

Zoned Neighborhood Commercial 1 (NC-1)—meaning multi-story, mixed-use apartment buildings and convenient retail—my aspirational block, which also has a community health clinic, a boisterous kindergarten, a coffee shop, affordable housing, an ice cream place, a yuppie grocery mart, a yoga studio, and an art gallery, can finally get on with our city planning.

No news about what’s moving in yet, but hopefully at the new place, they’ll ask if you’d like another coffee at lunch or a glass of wine after dinner.
