Is AI Less Biased than Humans?

An evaluation of AI decision making

Welcome to the Property AI Tools Newsletter!

Today I’ll be exploring:

  • The differences between AI and Human Bias

  • Examples of Bias in real estate

  • How to mitigate bias in AI systems

LATEST TECH NEWS

📰 OpenAI Launches Free Academy
Expert and community led learning resources

📰 The New WhatsApp AI that can’t be turned off
WhatsApp users across Europe now have a built-in AI assistant that cannot be disabled.

📰 Google Launches AI AgentSpace
Create a powerful knowledge resource connected to your Google workspace and data

NEW TOOLS

🛠️ AICHITECT
AICHITECT is an AI platform designed to redefine construction for developers, architects, consultants, government entities, and homeowners. Use their innovative platform to de-risk your property deals, remove unknowns and maximise returns with AI-powered insights and streamlined project management.

🛠️ Huzi
Huzi is a trainable AI assistant designed to supercharge the productivity of modern real estate agents. Streamline your workflow, reclaim valuable time, and elevate your client service with Huzi's comprehensive suite of AI-powered tools.

Is AI actually capable of being less biased than humans? Human biases are shaped by our personal experiences, cultural influences and ingrained prejudices, whereas AI reflects the patterns present in its training data. That data can inherit biases from past events which unfairly excluded certain demographics. Let’s look into some of the differences between the two.

Bias in Humans

Examples of Human bias include:

Name Bias - Studies have shown that identical resumes can receive different responses based purely on the candidate’s name. Agents might unknowingly associate names with stereotypes which can lead to unfair outcomes.

Affinity Bias – This is the act of favouring individuals with similar backgrounds, interests or traits such as going to the same university, coming from the same town or sharing the same religious beliefs. This is something we do naturally as humans, whether we are aware of it or not.

Confirmation Bias – Once an initial impression is made, there is the tendency to interpret all future information in a way that confirms this initial belief. For example, if a resume is submitted with a minor spelling mistake, the hiring manager may view that candidate less favourably regardless of their suitability for the role.

With all of these factors in mind, it’s important to understand that these biases can greatly impact the fairness and quality of decisions made by humans.

If these patterns present themselves in human behaviour, imagine the impact of that bias being carried over to a Large Language Model (LLM).

Bias in AI

AI bias mirrors human biases and can even amplify them due to limitations in training data and algorithms. Let’s define what AI bias looks like in more detail:

Training Data Issues - AI learns from historical data, so if the data is biased, the AI system is biased: the model perpetuates the patterns within its training data. This bias can creep in through unrepresentative data, incomplete data and historical data collected during a time of societal inequality. The issue is particularly important in real estate, where fair pricing decisions and finance approvals depend on this data.

Team Conversation Point

“Can you think of a few examples of where biased training data can cause issues with mortgage approvals, property valuations and recruitment?”

– Reply to this email with your thoughts on this.

Algorithm Design – Machine learning developers play a role in writing, fine-tuning and influencing the output of AI systems. This human input can lead to bias ‘by design’. To get a better understanding of AI systems, visit What is an AI system?

Feedback Loops – This is when an AI system reviews its own performance, then uses that feedback to improve itself, forming a cycle of continuous learning and improvement. The downside is that this cycle can also reinforce existing biases, embedding them deep within the model.

Examples of Bias in Real Estate


Ways to Mitigate Bias

1. Auditing and Testing

Conduct regular audits to identify and reduce bias in AI systems. Stress test your system to see how it responds in adverse scenarios, regularly review data sources to ensure they represent current real estate demographics, and put a process in place to rectify inaccuracies and biases.

2. Demographic Parity Assessment

This is a fairness metric which measures whether an AI’s decisions produce similar outcomes across demographic groups, regardless of their characteristics or traits. It can be used as part of a comprehensive multi-step audit to determine how fairly each group is being treated by the algorithm.
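As a minimal sketch of the idea (not tied to any specific auditing tool, and using entirely hypothetical group names and outcomes), demographic parity can be checked by comparing approval rates across groups and measuring the gap between the highest and lowest:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Given (group, approved) pairs, return the largest difference
    in approval rates between any two groups, plus the per-group rates."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical mortgage-approval outcomes for two applicant groups
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
gap, rates = demographic_parity_gap(decisions)
# group_a is approved 75% of the time, group_b only 25%: a gap of 0.5
```

A gap near zero suggests similar treatment across groups; a large gap, as here, is a signal to investigate further rather than proof of discrimination on its own.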

3. Diverse Human Audits

Periodic audits by third parties or diversity panels can assess whether decisions are consistent and non-discriminatory. Pairing this with data analysis ensures outcomes remain trustworthy, equitable and well informed.

4. Failsafe / AI Peer Review

Implement a ‘failsafe’ model that monitors the output of the primary AI system in real time, scoring and moderating it for bias before it reaches the end user. This adds another level of security to prevent undesirable outcomes.
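One way to picture this pattern is a wrapper that scores each output and withholds anything over a bias threshold. This is a hedged sketch: the keyword scorer, flagged phrases and threshold below are hypothetical stand-ins for a real reviewer model.

```python
def failsafe_review(primary_output, bias_score_fn, threshold=0.3):
    """Score the primary model's output for bias; block it before
    it reaches the end user if the score exceeds the threshold."""
    score = bias_score_fn(primary_output)
    if score > threshold:
        return {"released": False, "reason": f"bias score {score:.2f} exceeds threshold"}
    return {"released": True, "output": primary_output}

# Toy scorer: flags phrases sometimes seen as steering certain buyers
def toy_bias_score(text):
    flagged = ("ideal for young professionals", "perfect for families")
    hits = sum(phrase in text.lower() for phrase in flagged)
    return min(1.0, hits * 0.5)

blocked = failsafe_review("Spacious flat, ideal for young professionals.", toy_bias_score)
released = failsafe_review("Two-bedroom flat with a south-facing garden.", toy_bias_score)
```

In practice the scoring function would be a second model or moderation service, but the control flow stays the same: nothing reaches the user until the reviewer has passed it.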

5. Fairness Framework

By encompassing all of the methods above, you can create a fairness framework which addresses and mitigates bias from all angles, whilst maintaining a level of human oversight over AI systems. Setting and maintaining these standards within your organisation can ensure AI outcomes remain balanced and ethical.

Conclusion

Yes, it is possible for AI to be less biased than humans, but achieving this requires improved practices and intentional, ethical oversight. Both humans and AI have flaws that can perpetuate biases, but AI presents opportunities to address these biases through improved training, moderation and auditing. Ultimately, if we want to live in an equitable and fair world, AI may give us the best shot of getting there, but only if that’s what we really want.

Signing out!

Danielle 
Founder | AI Consultant @ Caique

Would you like to sponsor this newsletter?
Email [email protected] to request our media kit