Wealth Management, Finance & Investing Blog
The New Duty of Care: Navigating AI in Purpose-Driven Portfolios

March 03, 2026

Content adapted from an article originally written by Kevin O’Connell, published by, and used with permission from, Fiducient Advisors LLC.

Artificial intelligence (AI) is reshaping the fiduciary role, challenging investment committees to balance innovation with mission integrity and traditional stewardship responsibilities.

Twenty years ago, the world ran on an entirely different operating system. The iPhone had not yet transformed global communication, and platforms such as YouTube hadn't yet democratized media. Bitcoin didn't exist, and people were limited to paying for their pizza with cash or credit.1 Can you imagine that? Today, we live in an era defined by exponential innovation, and investment stewards must navigate these shifts with clarity, discipline, and intentionality. Moreover, as this technological wave accelerates, demographic realities appear to be extending time horizons. With life expectancy already reaching upwards of 80 years in developed nations, many institutions may be forced to rethink how they steward resources meant to last across multiple generations. A changing world places new demands on what it means to oversee capital prudently and in alignment with a long-term mission.

The Paradox: More Data, Less Clarity

Modern fiduciaries face an uncomfortable reality: they have access to more financial data than ever, yet arguably less clarity about how to act on it. This could be one of the driving factors behind the growth in outsourced investment management services. Outsourcing allows trustees and committee members to focus on strategic initiatives and oversight, rather than sifting through reams of portfolio data and market minutiae, while also reducing the operational burden on staff.

At the same time, nonprofit board members increasingly confront technologies they haven't grown up with: AI models, digital assets, dynamic risk systems, and new forms of operational and reputational exposure. Investment oversight that once focused primarily on asset allocation, manager selection, and liquidity to address traditional risk/return objectives must now incorporate data privacy risks, algorithmic bias, cybersecurity, and the implications of automated decision systems. As noted in "Fiduciary Fun Facts," fiduciary duty dates back to ancient Rome2 and has weathered previous challenges that forced it to evolve.

However, these aren't simply technical developments; they reshape the context in which fiduciary stewardship occurs. The speed of innovation also means committees cannot rely on historical frameworks alone. Failing to incorporate responsible AI and technology considerations is no longer a passive choice; it's an active risk.

AI’s Promise and Potential Peril

Artificial intelligence is more than just another tool; it is a structural shift affecting research, risk management, and strategic planning. AI’s greatest contribution may be transforming large, unstructured datasets into decision-ready insights, helping investment teams filter noise and detect patterns earlier and more accurately.

However, AI introduces new risks. Three-quarters of institutional investors now consider the negative externalities of data and AI technology a significant long-term investment risk.3 These concerns extend beyond market volatility to include data governance failures, harmful algorithmic impacts on society, reputational risks from controversial AI applications, and overreliance on opaque models that committees cannot easily explain.

This tension between AI's potential and its risks defines the current moment and demands careful navigation. Microsoft is among several major companies at the forefront of AI development and has established its own principles and guidelines to help ensure systems remain "transparent, reliable, and worthy of trust."4

A Committee’s Dilemma: The Cost of Advice in an AI Era

Consider a recent conversation with the board representing a mid-sized educational foundation. A committee member raised a pointed question, paraphrasing: “If AI can deliver real-time analysis and data-driven decisions by constantly learning and adapting while at the same time avoiding bias, fatigue, or ego, can we assume it will inevitably outperform human-managed portfolios? And if every dollar should advance our mission and AI promises precision and cost efficiency, should we question whether paying for human advice still aligns with fiduciary responsibility?”

It's a reasonable question, and one that reflects a fundamental misinterpretation of what fiduciary responsibility actually requires. Yes, AI has advanced remarkably and offers powerful tools for efficiency and analysis. But fiduciary responsibility isn't just about data and execution; it's about judgment, governance, and accountability. Again, context matters: AI can process numbers brilliantly, but it doesn't truly understand an organization's mission (nor will it ever). It is unlikely to ever appreciate the sensitivity of spending needs or drawdown tolerance with the nuance that a human advisor provides. Committees remain legally and ethically responsible for investment decisions, and a fiduciary partner helps ensure compliance, documentation, and alignment with policy: areas where algorithms cannot assume liability.

The most effective approach isn’t “AI or advisor”; it’s integrating both. Leading advisors already leverage AI-driven insights to enhance decision-making and continually seek best practices.5 The value lies in combining these tools with experience, strategic planning, and governance oversight tailored to each organization’s unique objectives.

Redefining Fiduciary Duty for the AI Age

As AI reshapes markets, fiduciaries must reinterpret long-standing obligations, such as the duties of care, loyalty, and prudence, within this new environment.

Duty of Care: Understanding Emerging Risks. Investment committees are not expected to be AI engineers, but they must understand how AI affects markets, managers, and operational risk. When AI risks are raised, committees too often respond with, "What's the problem? What are you talking about?" This knowledge gap represents a fiduciary vulnerability. Simply put, fiduciaries cannot oversee risks that they do not understand.

Duty of Loyalty: Ensuring Oversight and Accountability. As one technology leader put it: “People should be accountable for AI systems. How can we create oversight so that humans can be accountable and in control?”6 Accountability is not optional. Delegating decisions to opaque algorithms without governance represents a failure of fiduciary duty.

Duty of Prudence: Documented, Disciplined Decision-Making. Prudence does not require predicting the future; rather, it requires applying a structured, transparent process. AI's increasing influence requires committees to document how they evaluate technology-driven risks, question managers using AI-based strategies, and help ensure decisions remain aligned with mission and long-term goals. Prudence in the AI era starts with better questions.

Mission Alignment in a Technology-Driven World

Nonprofits are more than investors. They are mission-driven stewards whose portfolios often aspire to reflect both financial goals and institutional values. AI adds a new dimension: evaluating how technologies influence communities, equity, fairness, and social well-being.

Mission alignment must involve evaluating the impact of emerging technologies and investing responsibly in innovation instead of simply deciding to exclude certain sectors. As one industry leader notes, “We need to bring the lens of impact and responsibility into investment decisions to help ensure focus on driving positive outcomes while managing downside risks [related to AI].”

Moreover, responsible technology governance can be a long-term competitive differentiator. Operating this way helps drive long-term success for companies and their investors. It’s not a trade-off; it’s an investment in resilience.

What Investment Committees Can Do Next

Build Literacy, Not Mastery. Committees need not become AI engineers, but they must understand AI’s implications for risk, operations, and mission alignment.

Strengthen Governance and Documentation. Committees should consider updating Investment Policy Statements to address the use of AI-influenced strategies, data privacy expectations, and model oversight requirements.

Integrate Responsible Technology Questions into Due Diligence. Committees should ask managers to document how AI-enabled processes are validated, request transparency on model training data and governance, and incorporate responsible technology questions into RFPs and manager reviews.

Embrace Transparency as a Fiduciary Requirement. Transparency bridges innovation and accountability. Committees should demand clear explanations from managers using AI-enabled systems.

Prioritize Mission Integrity. AI should enhance a mission, not compromise it. Principles emphasizing that AI systems should empower and engage all people, regardless of background, provide a north star for aligning investment decisions with institutional purpose.

Stewardship in an Age of Acceleration

Technological advances are taking place faster than in any other period in modern history. AI is transforming how investment strategies are built, how risks are detected, and how global markets function. These shifts don't diminish the importance of fiduciary duty; they elevate it. As CIO Brad Long reminded us in this year's Outlook: "[as] stewards of capital, it is our duty to protect capital and not speculate with assets that have been placed in our care."7

We believe committees that integrate responsible AI principles, deepen their understanding, ask better questions, and anchor decisions in mission will be best positioned to steward capital wisely in the decades ahead. Responsible oversight does not mean slowing innovation. Instead, responsible oversight should focus on helping to ensure innovation serves people, planet, and purpose.

In this moment of transformation, the role of the fiduciary has never been more essential or more powerful. Contact us for guidance grounded in governance, accountability, and long-term mission alignment.

1 Bitcoin Pizza Day: Why Bitcoiners Are Celebrating Today By Eating Pizza
2 Fiduciary Fun Facts: From Roman Trusts to Employee Retirement Plans
3 VentureESG, "Pushing Forward: LP White Paper," February 2025
4 Responsible AI Principles and Approach | Microsoft AI
5 AI in Investment Management: 5 Lessons from the Front Lines – CFA Institute Enterprising Investor
6 Responsible AI Principles and Approach | Microsoft AI
7 2026 Outlook: The Discipline Dividend

Wealthspire Advisors LLC, Fiducient Advisors LLC, Wealthspire Retirement, LLC dba Wealthspire Retirement Advisory, and certain other affiliates are separately registered investment advisers. © 2026 Wealthspire

This material should not be construed as a recommendation, offer to sell, or solicitation of an offer to buy a particular security or investment strategy. The information provided is for informational purposes only and should not be relied upon for accounting, legal, or tax advice. While the information is deemed reliable, Wealthspire cannot guarantee its accuracy, completeness, or suitability for any purpose, and makes no warranties with regard to the results to be obtained from its use.

