“The true challenge of software development is not writing code—it is achieving shared understanding among humans, encoding it in rigid logic, and doing so in a constantly shifting landscape.” - Fred Brooks, The Mythical Man-Month
This essay lays out the following argument: a world transformed by AI will be boring, or at least as boring as the one we currently inhabit. This isn’t to say AI is not an important technology. Rather, it’s to reframe the conversation around the intractable human problems of technological scalability and diffusion, and how the collision of those problems with this new technology will shape the workflows of organizations. It’s an essay about AI and management, which is markedly different from an essay about automating individual human beings’ tasks.
So first, by way of an intro to how technology development still runs up against intractable human problems: fifty years ago, Fred Brooks, a computer architect and software engineer, sat down and wrote the “Bible” of software management, The Mythical Man-Month.
The most famous formulations from Brooks’ book are Brooks’ Law, that “adding manpower to a late software project makes it later,” and the “No Silver Bullet” rule: “there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity.”
What Brooks highlights flies largely in the face of our assumptions about the relationship between individual productivity and the success of a software project. Brooks’ reasoning behind his eponymous law is that “Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them.” Onboarding new people and plugging them into the various modules under development, without the full context of the project, leads to bloated timelines and to more bugs that must be fixed later on.
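Brooks’ intuition about coordination overhead can be made concrete. If every member of a team must coordinate with every other, the number of pairwise communication channels grows quadratically, n(n−1)/2, a figure Brooks himself cites. A minimal sketch:

```python
# Brooks: if each part of a task must be separately coordinated with each
# other part, effort increases as n(n-1)/2 while output grows only with n.
def communication_channels(team_size: int) -> int:
    """Pairwise communication paths in a fully connected team."""
    return team_size * (team_size - 1) // 2

# Doubling a ten-person team nearly quadruples the coordination burden.
for n in (2, 5, 10, 20):
    print(f"{n:>2} people -> {communication_channels(n):>3} channels")
```

This is why adding people to a late project can slow it down: each new hire adds a handful of hands but dozens of conversations.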
As for the Silver Bullet, Brooks highlights that certain parts of software development are “essential difficulties” – struggles baked into the nature of the process itself. The four difficulties Brooks identifies are:
Complexity: software, made of “pure thought stuff,” is more complicated than any other human construct because of its interrelated parts and because its complexity grows nonlinearly (small changes can affect the entire system).
Conformity: because software doesn’t obey the laws of nature that govern physical objects, it must instead conform to arbitrary, inconsistent, and changing external systems (business regulations, bureaucracy, interfaces, policy) that can’t be engineered around.
Changeability: again, due to its lack of a concrete physical presence, software is expected to change constantly, which introduces new bugs, requires ongoing testing, and increases system entropy.
Invisibility: software has no ready visual representation that maps human processes onto computing processes. This lack of representability makes it difficult to communicate and understand the overarching system and its dependencies.
These can’t be willed away, and affect how a program must be managed as it scales in complexity.
What is interesting here is that Brooks’ book is representative of a larger phenomenon wherein people with engineering backgrounds arrive at similar organizational principles when put into management roles, because they understand, intuitively, that what they’re dealing with is an information-processing task, not a technological one.
Coordination, Communication, Conceptual Integrity

In the upper left-hand corner of Brooks’ well-known diagram is a program: the kind of thing that is built by “two programmers in a remodeled garage,” a solution to a specific problem with a defined set of inputs. But as you cross the vertical and horizontal lines, into a Programming Product or a Programming System respectively, the costs multiply (Brooks estimated each crossing roughly triples the cost) and team sizes need to expand due to issues of scale, maintenance, and systems integration.
The barriers to building a program are falling, especially with AI coding copilots. AI has accelerated production tasks: connecting back end to front end, generating unit tests, handling some API integrations, and even writing the documentation that no one usually wants to write. But the cost and complexity of production-readiness remains a real problem. Scalability, maintenance, and reliability come from code reviews and shared conventions within a team. And it remains to be seen how resilient AI-generated code will be when it comes to error handling.
This is where the two-person team in a garage ceases to be a useful touchstone for the kind of robust software development that requires management and shared conventions within a team. And management requires creating processes for dealing with the “essential difficulties” Brooks outlined above, which are ultimately issues of coordination, communication, and maintaining conceptual integrity.
The perspective change I want to introduce, then, is to think of AI less as an individual productivity-enhancer, the silver bullet, and more as a tool that will reshape the issues and processes Brooks identified. Those issues apply to software development, but they extend to any management process whose success depends on the three cornerstones above: coordination, communication, conceptual integrity.
Normal Technology
In April, Arvind Narayanan and Sayash Kapoor published a paper for the Knight Foundation called “AI as Normal Technology.” Narayanan and Kapoor do something eminently reasonable by labelling AI a “normal” technology. This is not to understate its impact as a “transformative, general-purpose” technology (other normal technologies they identify are the internet and electricity) but to cut a middle path between the utopian and dystopian visions of AI by refusing to treat it as an autonomous, potentially superintelligent entity.
There is sound logic behind this. For all the discussion about the arms race toward AGI, no one can really explain what “AGI” is, with some tech leaders resorting to shrugging their shoulders and saying “we’ll know it when we see it.” This points to an important fact: we don’t have any reliable way to measure intelligence even in humans, given the sheer diversity of modes of intelligence.
The fact that an LLM can score perfectly on the bar exam tells us remarkably little about its ability to practice law, because the exam under-emphasizes the real-world skills that are harder to measure. Strong performance on self-contained benchmarks fails to take into account that, in most human practices, there is no single correct answer. And the more easily a task can be measured by benchmarks, the less likely it is to correspond to the real world.
And even as models become increasingly good at probabilistic reasoning, there’s still a staged process by which they move from a new tool to the transformation of broader social processes as adoption increases.

What’s important to note here is that the metric by which to judge adoption is not the availability of the software but its use. This is an important distinction because the speed of diffusion is limited by the ability of individuals, organizations, and institutions to adapt to the new workflows the technology might introduce. Diffusion is measured in decades.
For example, for nearly 40 years after Edison’s first generating station, electric dynamos were “everywhere but in the productivity statistics.” It wasn’t resistance to change that accounted for the four-decade lag between electricity’s discovery as a productive force and its wide-scale adoption; it was that factory owners needed to completely redesign factory layout, workplace organization, process control, and hiring and training practices. This could only happen as the result of long, drawn-out experiments across industries.
This is a wide-scale misunderstanding that I would like to blame on the (potentially misattributed) Henry Ford quote about faster horses (among other common business dictums that sound smart but are wrong). The gap between innovation and diffusion doesn’t exist because we are stubborn, hate change, and are not particularly inventive in imagining new ways to use technology. It exists because there is an entire world of regulations, bureaucratic processes, and workplace norms that you can’t just snap your fingers and change to fall in line with whatever is new. This is a familiar problem in a genre like fantasy fiction: “it’s a bit ridiculous to just drop magic spells into a medieval setting, unless you have some explanation of why the existence of magic hasn’t completely changed the whole basis of society.”
“Normal Technology” lets us see the gradual process of AI as a series of technological jumps, yes, but also as a challenge to management norms. In the same way electricity required a reshaping of management infrastructure and workflows before it became useful, LLMs will need to pass a similar hurdle to reach broad scale diffusion. That’s what I want to talk about next.
Management Infrastructure and Communication Challenges
Does anyone know how an organization functions?
No. Not really. Once an organization reaches a certain level of complexity, it effectively acts like a black box. No one, not even the CEO, can explain the inner workings of every single system and role that keeps the lights on. But a successful organization needs different levels, ways to communicate between those levels, and systems to regulate each of them.
It’s helpful to think of these communication, translation, and regulation functions as an orchestra, following the five-level scheme of Stafford Beer’s Viable System Model.
- System One is the people who do the thing: they play the music.
- System Two regulates System One: it is the piece of music System One is playing, dictating what notes to play and when.
- System Three regulates System Two (you’re noticing a pattern here): it is the conductor, in charge of interpreting the piece, its dynamics, its tempo.
- System Four is an intelligence function: the higher-order work of the orchestra, deciding the repertoire, where and when to play. This could be the orchestra’s artistic director or the conductor; it doesn’t matter. The point is that someone needs to be looking ahead and making future-facing decisions about the orchestra.
- System Five is what might be called the philosophy, or identity, of the orchestra. It dictates what type of orchestra it is, and therefore what types of information are relevant to it. Does it play Mozart, or classical covers of Guns n’ Roses songs? Without identifying its core identity, an organization has no idea what is relevant to its functioning. And that’s a bad thing.
The point of mapping organizational functions like this is that a successful organization can communicate changes in the environment without needing to go from the lowest levels to the highest levels. A broken string doesn’t need to throw the entire orchestra off. And the concertmaster shouldn’t be booking venues. At each level, there are ways to account for change and the transmission of information that are attuned to the complexity of what that level does.
Management and LLMs
What has been less discussed in conversations about enormous LLM-based productivity gains is which concrete management practices will need to change, given that we have a new tool that produces certain kinds of information.
As the replacement theory of labor weakens (the big AI companies are struggling to produce more advanced models, and the dream of a fully autonomous software engineer is proving to be much more complicated than its super-boosters are willing to admit), it’s wise to shift our attention to how LLMs are changing the glue that holds organizations together: the technologies that translate one type of information into another.
The role of management is the management of information and complexity: deciding what kinds of information are important and how to translate that information into action. How we present and organize information is critical to the functioning of any company. The largest problem any company faces is that it knows more than it knows. Making the right strategic decisions depends on what types of information are considered valuable. And the type of information used to make these decisions, the “glue,” is malleable.
The invention of the spreadsheet is a useful analogue here. If you’re interested in the transition from VisiCalc to Lotus 1-2-3 to Excel, you can read about the history and intricacies of spreadsheet functionality here; suffice it to say that by 1985, and on into the ’90s and beyond, Excel was firmly entrenched in the global financial system as the analysis and forecasting tool du jour.
Now, Excel let you do two very important things that working by hand did not. It allowed you to create much bigger and more detailed financial models. And it allowed you to work iteratively. “Rather than thinking about what assumptions made the most business sense, then sitting down to project them, Excel encouraged you to just set out the forecasts, then sit around tweaking the assumptions up and down until you got an answer you could live with.”
This transformed the kinds of management relationships one could have. One could give answers to business questions in the exact format that finance textbooks said you should, whether they made sense or not. More saliently, it changed what information was deemed important, and thus worth acting upon. Information that wasn’t the product of a spreadsheet was set to the side, seen as less important and less robust.
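The “tweak until you can live with it” loop is easy to sketch. Below is a toy, entirely hypothetical five-year revenue forecast (the numbers are invented for illustration) whose net present value swings with a single growth assumption, the spreadsheet-era equivalent of nudging one cell until the bottom line looks right:

```python
# A toy discounted-cash-flow forecast: one "cell" (the growth assumption)
# drives the whole answer, which is exactly what made iterative tweaking
# in a spreadsheet so seductive.
def npv(base_revenue: float, growth: float, discount: float, years: int = 5) -> float:
    """Net present value of a geometric revenue forecast."""
    total = 0.0
    revenue = base_revenue
    for t in range(1, years + 1):
        revenue *= 1 + growth                    # project next year's revenue
        total += revenue / (1 + discount) ** t   # discount it back to today
    return total

# Nudge the growth assumption up and down; watch the "answer" move.
for g in (0.02, 0.05, 0.10):
    print(f"growth={g:.0%}  NPV={npv(100.0, g, 0.08):,.1f}")
```

The model is mechanically correct either way; what changed with the spreadsheet was how cheap it became to search assumption-space for a congenial result.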
We need a way to manage what we know, decide what’s important, and give that important thing the resources it needs. This is, as Brooks noticed from a different angle, an essentially human problem. There is no magical productivity wand once something becomes big enough to require large-scale collaboration. We need management, which is essentially to say we need to manage flows of information up and down the chain. And these management structures are malleable. As in the case of Excel, I can see a future in which management workflows are fundamentally restructured by LLMs to manage complexity.
Concrete Strategies for Management Efficacy
The through line of this entire essay thus far can be summed up as follows:
- Technological developments can’t be treated as silver bullets, given the ever-persistent challenge of communicating and coordinating across teams, across an organization, and eventually out into the world, where diffusion happens slowly. There is no silver bullet, but there are practices for managing coordination and communication more effectively, and while LLMs aren’t a silver bullet either, they will shape how we coordinate and communicate information.
- The central problem of all organizations is that they know more than they know. The way to deal effectively with a glut of information that can, at any time, completely change the nature of an organization is to have systems in place to regulate and communicate information between different levels. Management is one of the tools we have for navigating the ever-persistent challenge of communicating information; in fact, that is its main function.
- The types of possible management structures are dictated by the type of information we are able to create and thus what types of information are deemed important. LLMs are a new type of information and therefore we can expect to see a concomitant change in management workflows.
In the (new) philosophical tradition of “Normal Technology” and in the absence of a silver bullet, these are the concrete AI strategies that I envision will be most helpful for big organizations.
Coordinate at Scale
AI is good at summarizing large swaths of information and translating it into different terms. Coordination, for example, requires both common protocols and a way to translate those protocols into terms that each subcomponent of an organization can understand: “So, taking goals and procedures that are expressed in the language of Overall Management System A, and translating them into the terms and objectives of Sub-Management Systems B, C, and D or for that matter, at giving Sub-Management System C a better idea of what those people in Sub-Management B are actually on about, when they use those weird words and keep on pushing incomprehensible goals.”
Understand the Format of the Information That Will Matter
The same way Excel supercharged the importance of quantifiable knowledge, organizational decisions will increasingly be made with the aid of some sort of LLM-abetted technology. This might mean every decision must first be run past an LLM, or it might mean that decisions made by an LLM are valued more highly than those made by an employee. I am wary of both of these scenarios, having seen versions of them play out across organizations of all sizes, so treat this as a warning sign. Be thoughtful about how you arrive at your decisions! Just as it is easy to get an Excel spreadsheet to say what you want it to say, LLMs are not neutral, third-party observers.
Create Knowledge Maps
One time a potential client told me a story about how a potential anthrax-related disaster was averted because someone who had worked at the organization for 30 years vaguely remembered an experiment they’d conducted for just this type of scenario. They dug up the technology and documentation, quickly got it where it needed to go, and the crisis was averted. But what happens when there isn’t someone who embodies the institutional memory of the organization? How can a company tap valuable repositories of information that sit on some dusty shelf, unaccessed and undeployed? You could imagine a scaled-up version of NotebookLM providing an imperfect, albeit useful, summary of an organization’s internal knowledge base, along with links to its source material. Innovation rarely comes from a flash of divine inspiration. More often it involves combining past knowledge and experimentation with present infrastructure to arrive at a new way to frame a problem or solve a challenge.
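As a sketch of what such a knowledge map might look like underneath, here is a minimal keyword-overlap search over an in-memory archive. The document names and contents are invented for illustration, and a production system would use embeddings and an LLM summarizer rather than bare word overlap; the point is only the shape of the idea, retrieval plus a pointer back to the source:

```python
# Toy institutional-knowledge map: score each archived document by word
# overlap with a query and return the best matches (document IDs double
# as pointers back to the source material). All documents are hypothetical.
def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def search(archive: dict[str, str], query: str, top_k: int = 2) -> list[str]:
    """Return the top_k document IDs ranked by shared-word count."""
    query_words = tokenize(query)
    ranked = sorted(
        archive,
        key=lambda doc_id: len(query_words & tokenize(archive[doc_id])),
        reverse=True,
    )
    return ranked[:top_k]

archive = {
    "1994-anthrax-response": "containment protocol for anthrax exposure scenarios",
    "2001-hvac-retrofit": "ventilation system retrofit and maintenance notes",
    "1988-adhesive-demo": "low-tack reusable adhesive demo for internal review",
}
print(search(archive, "anthrax containment protocol"))
```

Even this crude sketch surfaces the thirty-year-old experiment without relying on a long-tenured employee happening to remember it.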
The Post-It note – a product close to IDEO’s heart – was invented this way. An attempt by 3M to build adhesives for aircraft led instead to a glue that stuck lightly to surfaces but could be easily removed without leaving residue. Six years later, a different 3M employee stumbled across a product demo of the glue and saw in it a solution to a bookmarking problem in his church hymnals. And thus the Post-It was born.
Define What is Information and What is Noise
Although I’ve yet to see evidence of an LLM creating a version of a company identity that truly functions as such, the summarization of lots of different data points lessens the time needed to understand which of these signals are worth paying attention to, and which can be ignored. Understanding what information actually matters to your organization is the only way to stay ahead of large-scale shifts.
Take the housing bubble crisis, for example. What was surprising is that the central banks were surprised. During a period they termed “the Great Moderation,” a giant debt bubble was building as housing prices spiralled out of control and debt financing tipped into bankruptcy. But this was ignored because private-sector debt was deemed outside the purview of the central banks; only government deficits were monitored. This was a monumental failure, and a lesson in how information gets ignored when an institution lacks a coherent understanding of its own role, and therefore of what types of information it needs to know.
Greater summarization would not have stopped the global economy from melting down. But, at a smaller scale, access to a greater wealth of information can help your organization absorb shocks from the outside world, insofar as it understands how to filter that information into what matters and what doesn’t.
A New Conversation
Despite the breathless coverage of AI as an individual productivity enhancer, I’ve only seen fringe pieces on how it will affect the way large organizations are managed (the actual details, not PR thought pieces about how “X company is using AI to supercharge its future”). And that matters because we deal with large organizations every day. Some of them are responsible for managing business cycles so the economy doesn’t crash. Hollowing out our ability to manage information and act accordingly exposes us to more unforeseen shocks with real-world consequences.
So we need to start changing the tenor in which we discuss this technology. A benevolent or malevolent superintelligence (the existence of which is doubtful) creating slop on different social media platforms is not a future we want. Nor is one in which a model’s “ethics” are trained by effectively using slave labor while environmental damage piles up and people lose their jobs.
By conceptualizing AI as “Normal Technology,” we can begin to have this discussion: one that stops waiting around for a silver bullet that will never come, and starts thinking about the entrenched human and organizational problems we’ll need to navigate alongside AI if we are ever to make headway toward a future we’d like to inhabit.