
Every GC I know is preparing AI updates for their board. Some ship quarterly decks. Others prepare deep-dive strategy sessions. Many scramble to distill fast-moving regulatory developments into digestible talking points. But the same problem keeps surfacing. Boards walk away overwhelmed, underinformed, or unsure what to do next.
The problem is not expertise. The problem is the communication structure.
AI is not a single topic. It is a category of technologies, risks, opportunities, and governance challenges that shift every quarter. Boards expect GCs to make sense of that ambiguity, but most legal teams still present AI the way they present other legal updates. They start with the complexity and hope the board can extract the insight.
That is backward. Boards need a narrative spine that orients them. They need a clear answer to three questions before anything else. What is happening? Why does it matter? What should they do?
That is where many well-intentioned updates fall apart.
How Complexity Crowds Out Board-Level Judgment
Boards do not have infinite cognitive bandwidth. When legal teams walk in with dense memos, long lists of risks, or technical descriptions of model behavior, directors lose the plot. They stop hearing what they need and start trying to reconcile details that are irrelevant to their role.
This is not a board failure. It is a communication failure.
The board's job is not to understand every model parameter, regulatory nuance, or implementation detail. Their job is to understand what the company is trying to achieve with AI, the level of exposure that strategy creates, and the quality of the decision-making process behind it. Anything that does not help them do that job becomes noise.
The challenge for GCs is that AI produces a lot of noise. Without a disciplined way to structure the conversation, the board gets a firehose instead of a signal. That is how misalignment builds. It is how companies end up with boards that either overreact to AI risk or treat it as a passing technical curiosity.
Both outcomes hurt the business.
A Simple Principle: Don't Communicate AI Until You Know The Story
Before briefing the board, the GC must answer one foundational question. What is the story of AI in this company right now? Are you using AI to improve internal efficiency? Are you integrating AI into customer-facing products? Are you navigating heightened regulatory scrutiny? Are you trying to get ahead of competitors who are moving quickly?
If you cannot articulate the story in a single sentence, the board will not grasp it either.
Once the story is clear, you can translate it into the governance conversation. But most lawyers never get that far, because they start with information instead of meaning. They walk the board through the activity instead of the direction. They overindex on what legal teams are seeing rather than what directors need to understand.
That is where the What, So What, Now What model becomes indispensable.
Why The What, So What, Now What Model Works For AI
The model forces clarity. It requires you to explain the situation, the significance, and the next steps without drowning the board in unnecessary complexity. It aligns the GC's instinct for thoroughness with the board's need for strategic focus. And it forces the GC to make a judgment call, not a data dump.
AI is moving too quickly for meandering updates. Boards want to understand whether AI is creating opportunity, introducing exposure, or reshaping operational assumptions. They want to know where the company is positioned relative to peers. They want to feel confident that management is not only aware of the risks but is actively shaping the company's future.
The model helps you do that by building a pathway from information to meaning to action. It guides the conversation so the board can govern with clarity rather than react with confusion.
How GCs Lose Credibility Without a Framework
Even sophisticated legal teams unintentionally overwhelm the board when they treat AI like a traditional compliance topic. They include too much detail about regulations that have not been finalized, too many definitions, or too many examples of model failure. They confuse breadth with credibility. They assume thoroughness builds trust.
It does the opposite.
Boards trust clarity. They trust judgment. They trust the GC who can walk into a meeting and say something simple and true. Here is what is changing. Here is why it matters. Here is what we recommend. That kind of communication signals maturity and strategic leadership. It also demonstrates that the GC understands what the board needs, not only what the legal team knows.
That is the difference between a legal update and a governance moment.
AI Requires A Governance Lens, Not A Technical Lens
AI is transforming business models, cost structures, customer expectations, and competitive dynamics. It is also attracting political scrutiny, regulatory fragmentation, and public uncertainty. Boards want to know how these changes affect the company's risk profile and long-term health. They do not need a technical seminar. They need a governance frame.
The What, So What, Now What model shifts the GC into that governance posture. It distills complexity down to the elements directors use to exercise oversight. It keeps the conversation grounded in business impact, not technical curiosity. It also helps the GC anticipate the kinds of questions directors will ask. How exposed are we? How does this affect our strategy? What safeguards have we built? What decisions require board-level engagement?
Without this structure, conversations drift. With it, conversations sharpen.
A Better Path Forward For AI Board Communication
The resource, What, So What, Now What: Effectively Communicating With Your Board About Transformative Technology Such as Artificial Intelligence, lays out this communication strategy in depth. It gives GCs a repeatable way to brief boards with clarity, conciseness, and credibility. It breaks down how to diagnose the core message, separate operational detail from governance insight, and close the conversation with a clear recommendation.
It is a practical tool for every in-house lawyer navigating AI conversations with directors, and you can access it here.
The model does not oversimplify AI. It helps you explain it in a way that empowers better oversight. For boards, that is the real value.
The GC's Role Is Evolving, And Communication Is Now A Core Skill
AI is accelerating the evolution of the GC role. Directors expect legal leaders to steward risk, influence strategy, and communicate with clarity in environments that lack stable answers. Frameworks like What, So What, Now What help GCs deliver that clarity consistently.
They also help legal teams build stronger, more confident relationships with their boards. And they prepare the organization for a future in which the pace of technological change will continue to accelerate.
If you want to sharpen your ability to communicate about complex emerging technologies, start with a structure that makes meaning out of complexity instead of amplifying it.
Boards do not need more information. They need a signal. And the GC who delivers it becomes indispensable.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business situations. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology, delivered six TEDx talks, and her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
The post The Board Briefing Mistake Even The Best GCs Still Make appeared first on Above the Law.