3 core skills workers need to succeed with generative AI

Why employers should nurture and recruit for abilities like creativity, critical thinking, and ethical decision-making

Blog
Howard Rabinowitz, The Works contributor

Jan 17, 2024 · 5 min read

Could generative AI tools weaken, rather than strengthen, a worker’s job performance?

The answer is yes—especially when workers don’t understand what gen AI can and cannot do. 

A recent study by Boston Consulting Group posed a series of business problems to two groups of knowledge workers—one armed with GPT-4, the other without it. Despite being warned that it wasn’t designed for their use case and might produce wrong answers, the participants who used generative AI performed 23% worse than those who completed the same tasks without the tool. BCG noted a paradox: “People seem to mistrust the technology in areas where it can contribute massive value and to trust it too much in areas where the technology isn’t competent.”

That insight serves as a reminder that employees and managers shouldn’t blindly trust the new tools. It also underscores the value of human skills such as critical thinking in assessing who uses these tools and how.

“Since generative AI can generate coherent nonsense, we need to have people asking the hard questions around its outputs,” says Heide Abelli, co-founder and CEO of educational technology company SageX. 

HR experts like Abelli say companies looking to embrace generative AI to boost productivity must not overlook the soft skills workers will need to leverage these tools for business value.

Here are the three critical soft skills that talent leaders say companies should seek out and nurture in current and future employees.

Creative thinking (and prompting)

Generative AI promises to end the scourge of writer’s block. With the right prompts, it can produce serviceable first drafts of marketing emails, newsletters, blog posts, quarterly financial reports, and software code.


It can also draw on the visual data it was trained on to design advertisements or event signage. But because generative AI iterates on its training data, in some cases repeating it verbatim, both its inputs and its outputs require creative touches to make them unique, accurate, and targeted to the right audience.

On the input side, “how you engineer prompts will have a tremendous impact on the output you receive. Real creative thought has to go into that,” Abelli says. That’s one reason why some companies are already hiring for the new role of prompt engineer. Every knowledge worker will have to become a prompt engineer of some sort, but the most creative will have an advantage. 

“The more creative people are, the more successful they will be at using generative AI,” Abelli says.

For example, a graphic designer in marketing might create an image for an ad from a simplistic prompt, say, “foggy London street.” A more creative approach, which would generate a more elaborate and less generic image, would be to feed the model a rich passage from a Dickens novel that describes a foggy London street.
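To make the contrast concrete, here is a minimal Python sketch of the two prompting styles. The `build_prompt` helper is hypothetical, and the text-to-image API call itself is omitted, since any image model would simply accept these strings. The enriching passage is the famous fog description that opens Dickens’s Bleak House.

```python
# A minimal sketch of the prompting contrast described above.
# build_prompt is an illustrative helper, not part of any real API;
# the actual text-to-image call is omitted.

def build_prompt(subject: str, source_passage: str = "") -> str:
    """Combine a subject with optional descriptive source text."""
    if source_passage:
        return f"{subject}, in the style evoked by: {source_passage.strip()}"
    return subject

# The simplistic prompt: terse, likely to yield a generic image.
simple = build_prompt("foggy London street")

# The creative prompt: seeded with rich prose (from Bleak House),
# it gives the model far more visual detail to work from.
rich = build_prompt(
    "foggy London street",
    "Fog everywhere. Fog up the river, fog down the river, "
    "fog creeping into the cabooses of collier-brigs.",
)

print(simple)
print(rich)
```

The point is not the helper itself but the payload: the second string carries concrete imagery (river, ships, creeping fog) that the model can render, where the first leaves every visual decision to chance.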

“Generative AI is essentially a stochastic parrot of what has been done before,” says Tomas Chamorro-Premuzic, chief innovation officer of global hiring firm Manpower. “But by parroting the familiar, it may spark humans to do something genuinely creative.”

Critical analysis

In a now-infamous blunder, a New York lawyer was sanctioned by a judge for submitting a brief written by ChatGPT that was riddled with fake legal citations. It’s a potent reminder that still-nascent generative AI occasionally generates inaccurate or outright false content.

But even without the need to discern the factual from the fantastic, critical thinking is a skill whose importance will rise along with workplace adoption of generative AI. The technology’s ability to automate tasks leaves workers with more time to use their critical know-how, says Kathi Enderes, senior vice president of research at the Josh Bersin Company.

Even among IT workers, she notes, companies today are screening for critical thinking as much as tech skills. 

“Coding is becoming less valuable because generative AI can write code,” Enderes says.

While GPT-4 can generate software code in seconds, engineers who treat that code skeptically and test it produce software with fewer security bugs and vulnerabilities, according to a recent study by Stanford University researchers.


Beyond security needs, critical analysis is invaluable to improved decision-making. For instance, generative AI may summarize a startup’s market potential, but a financial analyst still needs to apply analytic acumen before recommending an investment. In HR, GPT-4 can sift through thousands of résumés to find the one “perfect match” for a job, but an HR manager would be foolish not to conduct an in-depth interview before making a hire.

As useful as generative AI can be in crunching data, human judgment is more important than ever in leveraging its output. “Critical thought is not about translating data into insights,” explains Chamorro-Premuzic. “It’s about the ability to act on your own insights and to ignore the output that should be ignored.”


Ethical decision-making 

“AI ethicist” has been a specialized job title for nearly as long as AI has existed, but the rise of generative AI is creating the need for widespread ethical grounding and the ability to deploy that judgment in how the technology is used, talent experts argue.

Abelli contends that workers need moral grounding when using generative AI outputs. 

“There’s a danger in blindly leveraging that output for organizational decision-making,” she argues. “Every employee needs to think carefully and holistically about all the stakeholders impacted by this output. The risks are only going to grow.”

Ethical dangers range from bias in training data to the dissemination of potentially harmful misinformation. A docket of class-action lawsuits alleging privacy violations and copyright infringement related to generative AI has already been filed.

While companies can and should put guardrails and training in place, Abelli believes they should also make sure they’re bringing individuals with ethical decision-making skills into the organization. They can start this during the interview process by posing ethical challenges, such as those suggested by the Society for Human Resources Management, and evaluating applicants’ reasoning. 

Beyond avoiding legal and regulatory risks, companies that hire employees who have the ability to make ethical decisions around AI are positioning themselves to meet not just this moment of transformation but whatever comes next, Abelli says. 

“Things are moving so fast with AI,” she says. “You’ve got to start to prepare the organization for the future. We need to hire for these skills now.” 

We want to hear from you! Please send us your feedback, and get informed about exciting updates from The Works. Drop us a line: theworks@freshworks.com.