The Human Edge in the AI Era Starts at McDonough
Georgetown’s McDonough School of Business is preparing principled leaders for an AI-driven world, not by teaching tools, but by building the judgment, accountability, and moral seriousness that no machine can replicate.
The case looked straightforward. Market data, competitive analysis, a clean AI-generated recommendation: increase production by 22%, enter three new regions. The logic made sense, the charts were clear. The students nodded.
Then the professor revealed what the data had not shown. The supplier was a front for child labor. The growth opportunity would displace a vulnerable community. The algorithm was optimized for profit; it had not considered human dignity.
At Georgetown McDonough, that moment in the classroom is deliberate, designed to surface the most consequential leadership question of the AI era: not whether to use the technology but whether the person deploying it has the formation to recognize when it is wrong, when it is harmful, and when the only responsible answer is to stop.
“The real question with AI is not what the technology can do, it’s what leaders choose to do with it,” said Prashant Malaviya, vice dean for programs at Georgetown McDonough. “Our responsibility as a school is to develop leaders who can evaluate these systems critically, understand their consequences, and make decisions that balance performance with responsibility.”
That responsibility extends across every program at McDonough, shaping how students – from undergraduates to executives – learn to navigate the opportunities and risks of AI in real-world contexts.
What Happens When Leaders Stop Questioning AI
When AI produces confident, instant answers, the temptation to accept them at face value and move on is powerful and understandable. It is also one of the most dangerous habits a leader can cultivate, because leadership has never been about speed alone. It is about making choices, owning tradeoffs, and being answerable for what follows.
Many organizations are already failing this test, treating AI governance as a compliance exercise rather than a leadership obligation. Alberto Rossi, director of the AI, Analytics, and Future of Work Initiative at Georgetown McDonough, has watched the pattern repeat across industries and sectors.
“The most common mistake I see is companies treating AI governance as a rubber stamp,” he said. “They publish a responsible AI policy, check the box, and move on.”
The organizations that earn the right to call themselves responsible AI leaders do something different. They embed human oversight into their workflows from the start, build cultures where employees can raise concerns without fear of consequence, and hold AI outcomes to the same rigorous, transparent accountability they apply to financial results. Everything else, Rossi argues, is theater.
Georgetown’s Emphasis on Values Over Algorithms
Many business schools are teaching students how to use AI. Georgetown McDonough is teaching them when to challenge it, when to override it, and how to bear full responsibility for what happens next.
Georgetown is grounded in a values-based approach to business, which centers on a core concept of the Jesuit tradition: human dignity is not a variable to be optimized.
“The Jesuit tradition asks us to see the whole person, not just the data point,” said Rossi. “Every system a student builds or evaluates must be assessed for its effect on the dignity of those it touches, especially those with the least power to push back.”
Cura personalis, a Latin phrase meaning “care for the whole person,” is a foundational element embedded in the curriculum, in the classroom, and in the standard by which every graduate is measured. That conviction shapes three questions students are trained to ask before any AI system goes live: Who benefits? Who bears the risk? And who is accountable when something goes wrong? This practice of principled leadership stays with graduates long after commencement.
“The leaders who will thrive in the AI era are the ones who know when and how to make decisions – who have the moral seriousness to stop, ask who is affected, and refuse to let speed override responsibility,” said Nick Lovegrove, academic director of the Executive MBA program at McDonough. “That’s what Georgetown has always stood for, and in today’s world it matters more than ever.”
When the world is anxiously debating bias, privacy, accountability, and the human cost of algorithmic decision-making, Georgetown does not have to retrofit a values framework onto a technical curriculum because it’s already ingrained into the institution. And situated in Washington, D.C., at the center of the policy conversations that will define how AI is governed at a global scale, McDonough is positioned to lead the debate and drive these discussions forward.
A Curriculum Built for a Changing World
Beginning in fall 2026, every Georgetown MBA student will encounter AI not as a technical elective but as an essential leadership skill woven through the full arc of their education.
The entry point is an AI bootcamp in the opening term, though the word bootcamp understates what it demands. Students will not arrive to memorize prompts; instead, they will be asked to understand what AI systems actually are, how to read their outputs critically, what responsible deployment looks like when bias, privacy, and governance are genuinely at stake, and how to use these tools appropriately in their academic work and future careers.
“By the end of that bootcamp, every student, regardless of industry or background, should be able to ask better questions, write better prompts, interpret outputs thoughtfully, and understand the ethical dimensions of AI,” said Sudipta Dasmohapatra, senior associate dean of MBA programs.
Students also work through real AI case studies, including high-profile failures, developing the analytical instincts to understand not just why AI adoption succeeds but why so many initiatives collapse after the pilot phase, and what organizational and governance failures precede that collapse.
Students across the school are learning how to evaluate these technologies in the context of real business decisions. For students in the Executive MBA program, these lessons apply directly to their daily work.
“These are senior professionals running teams, managing P&Ls, making decisions with real consequences,” said Lovegrove. “For us, AI isn’t just a standalone course focused on learning its capabilities. From the opening residency onwards, we’re building it into the fabric of every course – from strategy to finance to organizational leadership. That’s how it shows up in our students’ lives – not as a separate technology question but as a leadership question embedded in everything they do.”
Babak Zafari, academic director of the M.S. in Business Analytics program at Georgetown McDonough, is designing a new AI course built on a philosophy that cuts through the noise around AI education.
“There is no other way you should teach AI in 2026,” said Zafari. “People talk about AI, which makes you curious. But eventually, you need to be able to show what you did with AI. Did you build anything?”
The course is structured as an AI laboratory in practice, moving MSBA students from the foundations of large language models through building and testing real prototypes, evaluating the strategic decision to build, buy, or integrate through an application programming interface (API), and developing the technical fluency to lead data science teams with genuine authority.
“You won’t be the data scientist on the team,” said Zafari, “but you might be the manager making decisions for data science teams. You need enough fluency to understand when they tell you this model is 80% accurate and that one is 90%, what that extra 10% really means, and whether it is worth the cost.”
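Zafari’s accuracy-versus-cost point can be made concrete with a back-of-the-envelope calculation. The sketch below uses hypothetical figures (the decision volume and per-error cost are illustrative assumptions, not from the article) to show what the jump from 80% to 90% accuracy is actually worth in expected annual error cost:

```python
def expected_error_cost(accuracy, decisions_per_year, cost_per_error):
    """Expected annual cost of the model's mistakes:
    error rate x decision volume x cost of each bad decision."""
    return (1 - accuracy) * decisions_per_year * cost_per_error

# Hypothetical inputs: 10,000 automated decisions per year,
# $50 lost on each incorrect decision.
decisions, cost_per_error = 10_000, 50

model_a = expected_error_cost(0.80, decisions, cost_per_error)  # 80% accurate
model_b = expected_error_cost(0.90, decisions, cost_per_error)  # 90% accurate

# The "extra 10%" halves the error rate, and therefore the error cost:
savings = model_a - model_b
print(f"Model A expected error cost: ${model_a:,.0f}")
print(f"Model B expected error cost: ${model_b:,.0f}")
print(f"Upgrading is worth up to ${savings:,.0f}/year")
```

The managerial judgment Zafari describes is the last line: the more accurate model is worth buying only if its additional cost is below that expected savings, which is a business decision, not a modeling one.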
Malaviya raises a strategic question that most organizations have not yet asked. The overwhelming focus of AI adoption today is cost reduction, operational efficiency, and headcount. The more consequential question, which almost no one is asking yet, is this: how does AI build the top line?
“There is only one Walmart in the world,” said Malaviya. “Everybody else has to think about distinctiveness.” Georgetown McDonough graduates will enter organizations equipped to reframe that question, to lead AI adoption not merely as an efficiency measure but as a genuine engine of competitive differentiation and long-term growth.
Protecting Your Thinking Muscle
There is a threat inside the promise of AI that most institutions are not confronting honestly. If AI does the cognitive work, students stop developing the capacity to do it themselves.
Professor Karen Kitching, chair of the AI in the Classroom Committee and a member of the accounting faculty at Georgetown McDonough, found through a faculty survey that nearly 80% of professors were already using AI in their teaching or research. But those uses varied so widely in purpose, transparency, and depth that students were experiencing AI entirely differently across courses, with no shared standard for what thoughtful, responsible integration looked like.
The committee moved quickly on three fronts: securing enterprise AI licenses to ensure equitable, institutionally supported access; launching faculty workshops to build shared understanding and surface best practices; and developing a framework for ethical, intentional AI integration across the curriculum.
Kitching’s own response has been practical and rooted in the school’s values. She developed what she calls AI Learning Assistants – structured, course-specific tools that undergraduate students engage with outside class to work through material step by step and receive immediate feedback tied directly to the curriculum. Credit is awarded for completion rather than correctness, which removes the pressure that drives students toward shortcuts and restores focus to genuine understanding and effort.
“This approach reflects cura personalis by meeting students where they are and allowing them to move at an appropriate pace,” said Kitching. “It supports reflection by helping students see what they understand and where they are still confused before coming to class.”
The results have been measurable and meaningful. Students arrive to class better prepared and more confident, and class time shifts from clarifying basics to more enriching discussions of the concepts.
Malaviya also frames the deeper pedagogical challenge of AI: “What AI does in many cases is dull the sharpness of the student’s mind by doing the hard work,” he said. “AI can generate ideas in seconds. What it cannot do is teach a student to evaluate those ideas rigorously, to interrogate the assumptions behind them, to ask whether the answer is the right one for this specific context with these specific people and these specific stakes. Psychologists call this metacognition: thinking about thinking.”
Malaviya believes this is precisely where business education must now concentrate its energy and its design. “Where we need to sharpen critical thinking is in evaluating ideas, challenging assumptions, and asking: in this context, would you recommend something else?”
Training Leaders to Say No
Knowing when to say no to an AI recommendation requires practice and the willingness to fail before the stakes are real.
In one Georgetown simulation, students received the same AI-generated hiring recommendation flagging high-potential candidates. Half the class was given the algorithm’s training data; half was not. The group without the data consistently asked harder, better questions: Whose resumes trained the model? What definition of potential was embedded in the system at the outset? Which candidates never surfaced for consideration, and why?

In other exercises, the AI is deliberately wrong, generating authoritative, confident recommendations that would lead to disastrous outcomes. Students must diagnose the flaw, identify the assumption the model got wrong, and defend the decision to override it before a skeptical audience.
“We teach students to be adversarial readers of AI outputs,” said Rossi. “That means systematically asking: What data shaped this? Whose interests does it reflect? What would change if we stress-tested it?”
The goal is not skepticism for its own sake; it is the intellectual rigor and moral seriousness that a good lawyer brings to a witness, the kind of interrogation that protects institutions, people, and the integrity of the decisions that shape both.
Zafari elevates the urgency of that formation by pointing to where AI is headed. The field is moving fast toward agentic AI systems, tools that do not merely recommend but act, that make decisions and execute them on behalf of humans in real time.
“For you to be able to tell those systems how to make decisions,” he said, “you need to have had that training. You need that knowledge. Otherwise, the gap would be vast and the consequences would be substantial.”
The leader who has been trained to ask who is harmed and who is accountable will be equipped to govern those systems. The one who has only been trained to deploy them will not. That distinction, according to Zafari, will define careers and organizations in the decade ahead.
What Must Remain Human
Bloomberg Businessweek surveyed thousands of employers worldwide and asked a single, revealing question: Where do you find the best-trained MBA graduates? Georgetown McDonough ranked first. When researchers pressed on what “best trained” actually meant, the answer was not analytical power or technical depth. Employers said something more fundamental: Georgetown graduates know how to work inside an organizational context. They read culture quickly, bring teams into alignment, build trust across differences, and make the whole greater than the sum of its parts.
Malaviya believes that leadership strength has never been more important for McDonough students, and that AI is actively working against it.
“AI is creating a fear of competition,” he said, “that I am competing against AI, that if I do not do something, AI is going to steal my job. That is a competitive mindset, and in a competitive mindset, we tend to isolate ourselves from each other.”
The leaders who navigate the AI era with integrity will be the ones who refuse that isolation, who use technology to amplify human collaboration rather than circumvent it, and who understand that the measure of an organization is not how much it can automate but how well it serves the people inside and around it.
“I would like to see, five years from now, organizations where AI is moving toward supporting humans and serving the common good, and Georgetown at the forefront of that thinking,” Malaviya said.
Anne Kilby, associate dean of MBA Admissions, sees this play out in every admissions cycle. When she evaluates candidates for the MBA program, she listens for something no data set can capture and no algorithm can rank: the capacity for genuine human connection.
“Only humans can cultivate genuine relationships and trust,” she said. “My hope is that individuals will use AI to free up their time in order to focus more on the personal elements of relationship building. Because building trust is the foundation of successful collaboration, and without that foundation, everything else becomes fragile.”
That is the Georgetown standard the school holds for every graduate who carries the McDonough name into an organization. Not proficiency with AI tools, though that is expected and built into the curriculum. The deeper expectation is the habits of mind and character that make technology serve human ends rather than replace human judgment.
“What we teach above all is the judgment to evaluate any tool,” said Lovegrove. “Our approach is to give executives a durable framework: How do I assess what this technology actually does versus what it claims to do? Where does it create value and where does it create risk? When should I trust it and when should I override it? If you build that muscle, you can adapt to whatever comes next. The goal is to make our graduates pilots of AI, not passengers – people who are actively directing the technology rather than being carried along by it.”
Each year, every MSBA student receives a book at the start of the program, and this past year they received Malcolm Gladwell’s Outliers. The book’s central argument is that success is not purely individual. It is shaped, more than anything else, by the environment that forms you, the standards held around you, the expectations placed upon you, and the community that holds you to them.
Georgetown McDonough is building that environment deliberately: one where human dignity is a design constraint before it is a discussion point, where students are trained to interrogate what they cannot see and to refuse what they should not accept, and where the formation of conscience is treated as seriously as the development of competence.
“For more than two centuries, Georgetown has sought to educate the whole person and to form leaders who serve others and the common good. The arrival of AI has reinforced the importance of Jesuit education,” said Paul Almeida, dean and William R. Berkley Chair at Georgetown McDonough. “Fluency in AI will be essential, and our students will be prepared to lead with this. Just as much, we will ensure that our graduates grow along the essentially human dimensions that will complement AI. These areas of human flourishing include critical thinking, creativity and innovation, exercising ethical judgment, making decisions under uncertainty, and so much more. These characteristics, combined with AI fluency, will ensure that our students lead in the future.”


