Buying and Selling: Mind the Data
Jennifer Logg researches how people can improve the accuracy of their judgments by infusing data analytics with psychology to enhance the use of algorithms. Organizations are increasingly using algorithms to sort through big data and produce advice, but how do they maximize the benefits of those insights?
According to Logg, assistant professor of management, researchers have documented for decades that algorithms consistently make more accurate predictions than people. The key to leveraging this accuracy is to understand how people respond to algorithmic judgment in the first place.
Through her research and theoretical framework, Theory of Machine, Logg has found that people are more likely to listen to identical advice when they think it comes from an algorithm than from a person.
But despite this tendency to trust algorithms, Logg suggests people can use them to their fullest by thinking carefully about algorithmic auditing: how we evaluate algorithms for bias, especially where people's lives are affected by those judgments (e.g., policing, credit scoring, and other consequential contexts), and, most importantly, where the biased data originates.
Here, Logg shares her thoughts on debiasing algorithmic output, and how organizations can go one step further by using algorithms to detect bias in our own judgment.
BUYING: Algorithms as Magnifying Glasses. As society continues to use algorithms for more tasks, especially with the rise of ChatGPT, decision-makers need to understand that the information fed to algorithms (input) can be biased.
While auditing algorithms is useful, most biased output from an algorithm can ultimately be traced back to biased input. Input data is often historical data: decisions made by people. You can improve decision-making not only by auditing algorithms but also by using algorithms to audit the human decisions behind them.
Human judgment is not perfectly accurate; decades of research document the predictable ways that people's judgments are prone to bias or inaccuracy. There is an opportunity to use algorithms as magnifying glasses: not just auditing an algorithm, but using that same algorithm to examine the human-generated data behind its biased judgments. Companies will then be better equipped to learn when and how a bias occurs and to prevent recurring patterns at their source.
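As a minimal sketch of what "using an algorithm as a magnifying glass" might look like in practice, the short script below (with invented data and a hypothetical helper, `selection_rates`) scans historical hiring decisions for a disparity in selection rates between two applicant groups. The 0.8 threshold is a common screening heuristic, not a legal standard, and the records shown are fabricated for illustration only.

```python
# Hypothetical sketch: audit historical hiring decisions (human-made
# judgments) for a selection-rate disparity between applicant groups.

def selection_rates(records):
    """Return the fraction of applicants selected, per group."""
    totals, selected = {}, {}
    for group, was_hired in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + (1 if was_hired else 0)
    return {g: selected[g] / totals[g] for g in totals}

# Invented historical decisions: (group label, hired?)
history = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", False), ("B", True), ("B", False), ("B", False)]

rates = selection_rates(history)

# Flag a pattern worth investigating when one group's selection rate
# falls well below another's (here, below 80% of the highest rate).
ratio = min(rates.values()) / max(rates.values())
flagged = ratio < 0.8
```

The point is not the arithmetic, which is trivial, but where it is aimed: the same aggregation an algorithm performs on its training data can be turned back on the human decisions that generated that data, surfacing a pattern no single decision reveals.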
SELLING: Algorithmic Auditing. Algorithms have received a lot of attention for producing biased decisions. Rightfully so, people are outraged over predictive policing and sentencing that disproportionately penalizes people of color. Likewise, people are upset when recruiting algorithms overlook female job applicants. This biased output is especially upsetting because race and gender were not included as inputs into these algorithms. We should be outraged by bias reflected in algorithmic output but should not stop there. Algorithmic auditing is just the start.
If people only scapegoat the algorithm for biased output, companies will simply revert to having people make decisions alone. We do need to audit algorithms, but we should take it one step further and use them as magnifying glasses because, after all, algorithms by definition aggregate individual data points. Using them to unearth patterns allows people to see trends that are otherwise difficult to detect. When algorithms surface biases, companies should seize the opportunity to understand where that bias originates. Then we can truly improve our judgments and decision-making.
This story was originally featured in the Georgetown Business Fall 2023 Magazine.