The wolf of AI: ‘I’ll huff and I’ll puff and I’ll blow your house down!’
Here is a story about The Three Little Pigs and how they used artificial intelligence.
The first little pig built a house of straw: closed-source + frontier AI.
Impressive results can arrive quickly when closed-source frontier AI models are used through public, vendor-run interfaces. “Closed source” means you can use the model but cannot inspect or control how it was built or how it may change over time; “frontier” reflects the model’s advanced capabilities, though even these models can still produce unexpected outputs. These tools seem easy and quick, but they offer limited visibility and limited controls, and they pose real risk if sensitive client information enters the workflow.
The second little pig built a house of sticks: enterprise-approved AI tools (closed-source + frontier AI).
Here, companies start using AI tools that have been vetted and approved for business use, with controls in place. Teams can run pilots, learn faster and demonstrate value while operating with stronger controls. Some open‑source models may be used, but typically only in secure, limited environments where data is carefully protected and monitored.
The third little pig built a house of bricks: proprietary + frontier AI.
At this stage, AI becomes part of the company’s core infrastructure. The company owns the system — including how the data is used, who has access to it and how outputs are reviewed — and can track what the system produces. Strong controls — such as audit trails, monitoring and documentation — are built in to meet regulatory and governance requirements. It takes longer to build, but it is safer, more reliable and easier to scale.
The first pig: The straw house
The first pig, who built his house of straw, resembles the early adopter who eagerly experimented with ChatGPT in November 2022. This was not an enterprise move; it was a bottom-up experiment driven by curiosity and accessibility.
Closed-source frontier models suddenly invited exploration beyond traditional programming techniques, opening a new door for those with decades of applied learning in computer science, software engineering and rule-based logic. “Frontier” refers to a leading-edge model with strong general capabilities, but one that can still produce unexpected outputs and requires testing and controls. An example of this would be a model that generates clean, convincing prose for a claims summary but quietly inserts an incorrect coverage detail, misreads a date or cites a policy clause that doesn’t apply.
For many testers, late 2022 marked a shift away from if-then automation toward systems in which individuals with limited or largely traditional AI engineering skill sets could engage frontier-level intelligence through a simple interface — without building models, managing infrastructure or understanding the mechanics beneath the surface.
The straw house went up quickly. It was impressive and seemingly transformative. Productivity surged, experimentation flourished and the promise of intelligence felt immediate. But what the first pig built was a structure optimized for speed, not durability — constructed on borrowed strength.
It stood just fine until pressure arrived.
In this straw house, compliance felt deceptively simple because governance was effectively reduced to vendor terms of service. Policies, guardrails and safety controls were inherited rather than engineered, allowing rapid experimentation without internal oversight, audit trails or formal model governance.
Risk was almost entirely outsourced; accountability was not. Model behavior could change without warning. Failures were opaque, and there was little visibility into why output drifted, hallucinations appeared or prior results failed to remain consistent.
The house someone else built was sufficient for early experimentation so long as the wind blew gently.
When the first gust of wind — the wolf — arrived, it did not come as a threat; it arrived as an email. Workplaces began circulating guidance on the “responsible use of AI,” reminding employees that experimental tools were not neutral playgrounds but extensions of the corporate environment.
What had felt like personal productivity quietly crossed into organizational exposure. Knowledge workers were warned against pasting sensitive information into public interfaces, relying on unverified outputs, or using AI-generated content in client-facing or regulated workflows.
This marked the moment when informal experimentation met institutional risk awareness. The wind was still light, but it was directional. The straw house began to creak as the gap between individual experimentation and organizational responsibility became visible.
As Hasan Davulcu, computer science professor at Arizona State University, emphasizes in discussions of socio-technical systems, when capability outpaces accountability, the “first failure” is organizational.
The wolf had not yet blown the house down — but for the first time, the pig noticed the weather changing.
The second pig: The house of sticks
The second pig chose a different material. Instead of straw, he built his house from sticks. When the wolf arrived, the house bent, but it did not collapse.
This stage reflects the shift from individual experimentation to business-led pilots, where local adopters and internal change agents began shaping AI use inside real work environments. Curiosity matured into intention. Teams were no longer simply using AI; they were testing it, inspecting it and evaluating its behavior under controlled conditions.
Frontier capability was still present, now combined with greater transparency, testing and the possibility of governance. In practice, this often appeared under the protective veil of approved, vetted and secure enterprise vendors.
What mattered was not just the copilot itself, but what was built around it: teams experimenting with prompts, workflows, plug-ins and task-specific augmentations while remaining within corporate security boundaries. Inspectability improved, and testing became feasible, bringing governance into the conversation — not as a mandate but as a choice.
In insurance terms, this is often where copilots begin to touch real workflows (e.g., claims intake summaries, policy quality assurance support, underwriting research assist), while still being constrained by approved environments and monitored use.
Risk was no longer fully outsourced; it depended on internal discipline, documentation and the pace at which teams matured their practices. Compliance could succeed in this environment, but only if organizations matched technical experimentation with operational rigor.
The house of sticks held — not because it was unbreakable, but because it was continuously reinforced while the wind was already blowing.
Tip: The strongest business use cases are designed within existing enterprise constraints, not around them. Start by understanding approved vendors, security guidelines and the core technology stack. Then frame use cases that extend what is already trusted — solving real workflow problems today while remaining inspectable, governable and scalable tomorrow.
The third pig: The house of bricks
The third pig built with bricks — not to slow progress but to make progress durable.
This is the proprietary + frontier strategy, where AI is treated less like a tool and more like institutional infrastructure. When the wolf — operational risk, regulatory scrutiny and the steady drumbeat of responsible use — arrives here, the brick house prevails not by being invincible but by being accountable.
Decision ownership is clear. We evolve from “systems of record” that merely store the what (policy data, claim status) to “systems of meaning” that capture the why. Outputs are traceable, logged and reviewable. Controls are expressed in language that compliance teams already understand: recordkeeping, audit trails, data classification, access management and role-based oversight.
Change moves more deliberately than it does with straw or sticks, but outcomes are more predictable. In regulated environments, predictability is not a constraint; it is the foundation that prevents innovation from collapsing under its own velocity. It allows us to capture “decision traces” — such as a senior underwriter’s judgment to override a risk score — turning momentary exceptions into permanent, teachable assets.
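To make the idea of a “decision trace” concrete, here is a minimal sketch of what such a record could look like in code. This is an illustration only, not the author’s implementation: the class name, fields and identifiers (DecisionTrace, model_risk_score, override_reason, the sample claim and policy IDs) are hypothetical, chosen to show how an override — who decided, what the model recommended, why the human departed from it and under which guidelines — can be captured as an append-only, reviewable log entry.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import uuid


@dataclass(frozen=True)
class DecisionTrace:
    """Hypothetical audit record for one human override of a model output."""
    claim_id: str
    model_risk_score: float     # what the model recommended
    final_decision: str         # what the underwriter actually decided
    override_reason: str        # the "why" a system of meaning captures
    decided_by: str             # role-based accountability, not just a login
    policy_version: str         # which guidelines were in force at the time
    trace_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize to a single JSON line for an append-only, reviewable log."""
        return json.dumps(asdict(self), sort_keys=True)


# Example: a senior underwriter overrides a high risk score with a documented reason.
trace = DecisionTrace(
    claim_id="CLM-2024-0042",
    model_risk_score=0.87,
    final_decision="approve",
    override_reason="Prior water-damage claim already remediated; inspection on file.",
    decided_by="senior_underwriter",
    policy_version="UW-guidelines-v12",
)
print(trace.to_log_line())
```

Stored this way, a momentary exception becomes a permanent, queryable asset that compliance teams can review with the tools they already trust: recordkeeping, audit trails and role-based oversight.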
In Davulcu’s framing, this is the point where AI stops being “a clever assistant” and becomes an accountable socio-technical system — instrumented, auditable and aligned with how institutions actually make decisions.
Leaders responsible for business technology solutions at large insurance carriers often describe this stage as the moment when AI use cases stop being experiments and start becoming systems.
This is where cautious optimism belongs. The brick house is an invitation to scale responsibly.
The most effective proprietary frontier strategies do not treat compliance as a gatekeeper to bypass, but as a partner in design. When builders engage early with legal, risk, security and business stakeholders, constraints begin to function less like roadblocks and more like architectural supports.
The overlap between innovation goals and governance goals becomes the load-bearing center of the strategy — where use cases are compelling and approvable, clever and defensible, fast and sustainable. From that center, organizations earn the ability to move faster over time because they have built a house that can withstand the wind — and continue to rise.
Tip: The most scalable AI use cases are designed with governance, not after it. Don’t just automate rules; build a “context graph” that scales human wisdom. Identify business problems where innovation, compliance and stakeholder incentives already overlap. Co-design solutions with legal, risk, security and business leaders from the outset so accountability, auditability and operational value are built in from day one.
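One possible way to picture the “context graph” mentioned in the tip above is sketched below. The term is the author’s; the structure is an assumption for illustration only, with hypothetical class and node names. The point it demonstrates is simply that nodes (claims, human judgments, guidelines) are linked by edges that record the reason they are connected, so that documented exceptions remain discoverable context for future decisions.

```python
from collections import defaultdict

class ContextGraph:
    """Minimal, hypothetical context graph: nodes are business facts or judgments,
    edges carry the reason two facts are related."""

    def __init__(self):
        self.nodes = {}                  # node_id -> attributes
        self.edges = defaultdict(list)   # node_id -> [(related_id, reason)]

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def relate(self, src, dst, reason):
        """Link two nodes and record the 'why' alongside the relationship."""
        self.edges[src].append((dst, reason))

    def context_for(self, node_id):
        """Return everything connected to a node, with the recorded reasons."""
        return [
            (related, reason, self.nodes.get(related, {}))
            for related, reason in self.edges.get(node_id, [])
        ]


# Example: an underwriter's override becomes queryable context for future decisions.
graph = ContextGraph()
graph.add_node("claim:CLM-2024-0042", type="claim", line="homeowners")
graph.add_node("override:7781", type="human_judgment", decided_by="senior_underwriter")
graph.add_node("guideline:UW-12.3", type="rule", text="Water damage risk weighting")
graph.relate("claim:CLM-2024-0042", "override:7781",
             "risk score overridden after inspection")
graph.relate("override:7781", "guideline:UW-12.3",
             "documented exception to this guideline")

for related, reason, attrs in graph.context_for("claim:CLM-2024-0042"):
    print(related, "-", reason, attrs)
```

Whether the graph lives in a database, a knowledge base or a simple log, the design choice is the same: capture the relationship and the reason together, so human wisdom scales with the system rather than evaporating after each decision.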
Sue Kuraja has been in the financial services industry for 20 years, with more than 15 years of experience in business development, scaling insurance and financial services product distribution. She is an avid researcher of emerging trends in the tech space and their ability to modernize the insurance industry. Sue is dedicated to transforming the insurance industry and growing tech-ed knowledge within the broader insurance marketplace. She may be contacted at [email protected].