Responsible AI in hiring: Why humans still need to make the call

October 15, 2025

The hiring landscape has changed, fast

Let’s start with a scenario that’s becoming all too common:

A candidate uses artificial intelligence (AI) to generate a resume tailored to a job description, also written by AI. That resume is then parsed by another AI tool, scored, and maybe even ranked—all to hire a person who will work with other humans.

This isn’t a hypothetical. It’s happening now. I recently spoke with a talent leader in high tech/financial services who’s hiring AI/machine learning (ML) engineers with very specialized skills. One requisition pulled in 3,000 applicants in a day. That’s not a typo, and it’s not unusual. So, as a candidate, how do you stand out? And as a recruiter, how do you make sense of the noise?

The rise of AI in talent acquisition

AI has transformed recruiting, from sourcing and screening to scheduling and onboarding. Tools promise faster decisions, reduced bias, and better outcomes. And yes, AI has made my job easier. I use it to:

  • Draft emails
  • Edit copy
  • Create visuals
  • Analyze data
  • Brainstorm

However, here’s the thing: I still make the final call because I don’t want to sound or feel robotic, and neither should your hiring process.

When AI goes too far

The problem isn’t AI itself; it’s how we use it.

I’ve always loved this story from the 2017 Grandmaster Competition, which pitted humans against machines in talent evaluation. In the end, the machines were faster, but humans took first place in accuracy. That distinction matters.

Then there’s the infamous story of a very large US employer that designed its own AI solution to help determine who should be hired. The answer? People were more likely to succeed if their first name was “Jerod” and they played lacrosse in high school. This raises all kinds of red flags around AI’s unintentional bias against minorities.

Now, we’re seeing the legal consequences. A quick search for “AI hiring lawsuit” shows that candidates are starting to call out employers for discrimination because AI algorithms are screening them out before a human even has a chance to form an impression, or even review them, for that matter. In one case, there has even been a request for the vendor to release the full list of companies that have used its AI feature for hiring.

The ethical dilemma: Who’s making the decision?

Let’s be clear: AI should not be making hiring decisions.

It should help humans make decisions better, more efficiently, more fairly, and more consistently. However, when AI becomes the decision-maker, we risk:

  • Unintended bias: Algorithms can learn and amplify existing biases
  • Lack of transparency: Many AI models are black boxes
  • Legal exposure: Without proper validation, companies are vulnerable

Questions every talent acquisition leader should be asking

If you are a vice president, director, or manager in talent acquisition (TA), here are the questions you need to ask your vendors and your team:

  • How can you prove your AI doesn’t create adverse impact?
  • How can you validate that your AI actually predicts job success?
  • Can you demonstrate that your AI is explainable and auditable?
  • Are you using AI to assist decisions or to make them?
  • What happens when the AI gets it wrong?
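The adverse-impact question, at least, has a well-known operational test: the EEOC’s “four-fifths rule” of thumb flags any group whose selection rate falls below 80% of the highest group’s rate. Here is a minimal sketch using hypothetical screening numbers (this is an illustration of the rule, not any vendor’s actual methodology):

```python
def selection_rate(selected, applicants):
    """Fraction of applicants who passed the screen."""
    return selected / applicants

def four_fifths_check(rates):
    """Return True per group if its selection rate is at least 80%
    of the highest group's rate (the EEOC four-fifths rule of thumb)."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes for two demographic groups
rates = {
    "group_a": selection_rate(selected=120, applicants=400),  # 0.30
    "group_b": selection_rate(selected=45, applicants=250),   # 0.18
}

result = four_fifths_check(rates)
# group_b's rate (0.18) is only 60% of group_a's (0.30), so it is
# flagged as potential adverse impact and warrants a closer look
```

A passing ratio doesn’t prove a model is fair, and a failing one doesn’t prove discrimination, but it’s the kind of concrete, auditable evidence a vendor should be able to produce on demand.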

These aren’t just technical questions; they are strategic ones. They impact your brand, your compliance posture, and your ability to attract top talent.

Where does Infor fit into all of this?

Infor™ acquired PeopleAnswers in 2014 and rebranded it as Talent Science. PeopleAnswers was a pioneer in predictive talent assessment and:

  • Offered multi-tenant access before “cloud” was a must-have for businesses
  • Delivered industry and customer impact analyses before “big data” was a buzzword
  • Used ML to analyze millions of data points to build predictive profiles before ChatGPT and generative AI were household terms

And today?

  • Zero successful legal challenges or out-of-court settlements since being founded in 2001
  • Clients have switched to Talent Science after facing lawsuits with other vendors

We validate our models. We back our clients. We help you avoid risk while improving hiring outcomes that lead to better performance and higher retention.

Real-world impact: What responsible AI looks like

Responsible AI isn’t just about avoiding lawsuits; it’s about fair opportunities for people seeking employment.

Here’s what it looks like in practice:

  • Transparency: Candidates understand how they’re being evaluated
  • Fairness: Models are tested (and retested) for bias across gender, race, age, and more
  • Validation: Predictive models are backed by science and regularly updated
  • Human Oversight: Recruiters and hiring managers make the final call

At Talent Science, we provide:

  • Continuous validation to keep models accurate, fair, and compliant
  • Predictive profiles tailored to each role, and regularly updated to reflect your current reality

The future of hiring is human-centered

AI is here to stay. However, it’s not a replacement for human judgment; it’s a tool to enhance it.

As TA leaders, we have a responsibility to:

  • Use AI ethically
  • Stay informed about legal and regulatory changes
  • Choose partners who prioritize fairness and transparency
  • Keep the human in human resources

Final thoughts: Don’t let the robots run the show

Used responsibly, AI can help you move faster, analyze more data, and reduce bias. But when we let machines make decisions about people, we risk losing the very thing that makes great hiring possible: human connection.

Hiring is about people. Let’s keep it that way.
