
The AI Skills You Should Be Building Now


ALISON BEARD: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Alison Beard.

Generative AI has been heralded as a breakthrough technology for workers in a wide variety of industries, and many companies are encouraging employees to experiment with, if not wholeheartedly embrace, it. Beyond playing around with ChatGPT, Claude, Adobe’s AI tools or Microsoft’s Copilot, what can you do to understand and get comfortable with these tools? How do you make them work for you? How do you master them?

Whether you’re really excited about gen AI or worried it will make your job obsolete, it’s time to learn what our guests today call fusion skills, ways of working with large language models that will help you get the most out of them and set you up for success in this new world of work.

H. James Wilson is the global managing director of technology research and thought leadership at Accenture, and Paul R. Daugherty is Accenture’s chief technology and innovation officer.

Together, they wrote the HBR book, Human + Machine: Reimagining Work in the Age of AI, which is now available in a new and expanded edition, as well as the HBR article, Embracing Gen AI at Work. Paul, Jim, thanks so much for joining me.

PAUL R. DAUGHERTY: Yeah, Alison, looking forward to the discussion.

H. JAMES WILSON: Great to be here, Alison.

ALISON BEARD: This might seem obvious, but why does gen AI, as opposed to the AI that’s been around for years, promise to be so revolutionary for all kinds of workplaces?

PAUL R. DAUGHERTY: What’s new about generative AI now is that it has the ability to create new things. Previously, you could do diagnostics with AI, you could do predictions, and it’s been applied in business for many years and people have been using it every day. If you use your mobile phone, if you use maps, if you use ecommerce sites, you’re using AI every day in what you do.

What’s different is that ability to create and approximate more human-like capability. It’s the first technology that we’ve had that really has that ability to do things and simulate the way we as humans do them. That’s why I think of it this way: we’ve been talking about AI, artificial intelligence, but with generative AI you can almost flip the script and talk about IA, intelligence augmentation, and how we as humans can use the technology to advance our own capabilities.

ALISON BEARD: I know that a lot of workers may be focused on the worry that they’ll lose their jobs to AI, but I think experts like you often say the real worry is losing your job to someone who knows how to use AI, and particularly gen AI, better than everyone else. Are you seeing evidence of that already?

H. JAMES WILSON: Yeah, absolutely. I think it’s important to not look at AI as a zero-sum game, as human versus machine. When workers use it effectively, they see that it’s really about collaboration. Collaboration with an intelligent new colleague in a lot of ways. It’s about the advertising writer using AI to augment her research, her brainstorming and so on. It’s really about amplifying that writer’s work in two or three of her jobs, maybe 10 or 15 tasks. It’s not about wholesale replacement.

We did a large survey a bit earlier this year and we found that 95% of workers see potential value in working with these systems. Moreover, about that same number, 94% of professionals tell us that they’re ready to learn new skills to work with these gen AI systems.

I think it is really important to consider that the pace of technology innovation – of technology improvements – is really picking up. Workers are going to need to develop advanced AI collaboration skills continuously across their careers. It’s not something that can be done annually or every couple of years or every couple of months. There really needs to be a fusion of human and machine both in work processes, but at the same time, I think there’s an increasing need for a fusion of working and learning at the same time, and that’s part of the reason why we call these fusion skills.

ALISON BEARD: Okay, let’s dig into these fusion skills. As an overview, which are the most important when it comes to working with gen AI?

PAUL R. DAUGHERTY: Well, there’s three that we’ve really singled out that we think are particularly important for all of us to focus on. The first is intelligent interrogation, and this is how you use different techniques in terms of how you work with these generative AI models to get better results out of them, so it’s how you interact, how you work with the models.

The second fusion skill that’s important is judgment integration. This is integrating the judgment that we as humans have along with the results that come out of the models to make sure that the right results are presented with the right context and in the right way and you draw the right conclusions.

Then the third one is a really interesting one, we call it reciprocal apprenticing: how do you combine the human and the AI capabilities so that you’re improving the capabilities of the models in the way that you’re working with them and also learning more yourself. It’s the concept of helping the AI do a little bit better and learning yourself as you do it.

ALISON BEARD: Okay, so let’s start with intelligent interrogation, thinking with AI. What do you mean by that?

H. JAMES WILSON: You know, one of the really interesting features of generative AI is that natural language is the new programming language in a sense, so you don’t need to be a data scientist or a software engineer to get these systems to run really complex tasks. This is really opening up all sorts of new ways that AI can be brought into your knowledge work routines, your creative work routines, like market analysis or brainstorming.

There’s this business theory going back to the mid-1960s, I think Michael Polanyi first coined the phrase that “we know more than we can tell.” Polanyi called this the tacit dimension of organizational knowledge, since our professional thought processes and imaginings about how to, say, run a marketing campaign couldn’t be efficiently coded into computers, so you really couldn’t amplify many of the cognitive tasks of knowledge work, of that marketer’s job, for example.

That’s really changing, that tacit dimension, those hidden mental activities that we perform in our day-to-day work can be jotted down into Claude or Gemini’s chat window. So that tacit dimension can be made explicit in a sense with large language model interactions. And moreover, we’re discovering now that there are really more robust prompting tools and techniques for thinking and creating and reasoning with machines. We call this first skill intelligent interrogation because it’s really the skill of thinking with AI in a way that produces measurably better reasoning outcomes.

ALISON BEARD: What are some ways to get better at this skill of intelligent interrogation?

H. JAMES WILSON: What you’re really trying to do is bring out what’s in the back of your head, this tacit dimension, that is, by writing your thoughts and your queries into the prompt window in a practical and incisive way. The thing is, without the skill, you’re winging it. We see a lot of this going on these days in our research, so your prompt is somewhat improvisational or ad hoc and probably isn’t going to be all that effective.

But we’re starting to see, both in practice and in the empirical research, that human-gen AI interaction can really be improved with some of the techniques like chain-of-thought prompts, for instance. That is when you break down your reasoning into steps, and when prompting gen AI, you need to break down the process it should follow into its constituent parts and then strive to optimize each step. It’s important to prompt the machine to know that you’re thinking step-by-step with the machine, and literally you write, “let’s think step-by-step,” into the prompt, or something along those lines.

ALISON BEARD: That was cool to me that you could just say something as simple as that, like do this in a step-by-step fashion, and the AI would come back at you with an answer that was more clearly thought through or more transparently thought through. There does seem to be this common theme in intelligent interrogation of breaking down your work. I think my hesitation with that is it feels like more work for me, but ultimately, the productivity gains are astronomical, right?

H. JAMES WILSON: Exactly. I think there are two sides to this. One is there are certain work activities that you can break down, like a supply chain manager who’s looking to understand the right inventory levels would ask the model to consider steps like demand forecasting and lead times for components or storage costs or even potential supply chain disruptions. Then for each step she could ask the model to consider the impact of each of those steps on the final inventory decision. That’s a really structured way of thinking and thinking in advance because I think there has to be some of that before you go into a prompt.

But the thing is, you can also apply this approach for more open-ended and exploratory types of work activities like strategic planning. And skillful chain of thought prompting can really help you become a much better forecaster of say, future trends or potential economic shocks that might come to your core market and that sort of thing.

And so by working with AI in a disciplined, skillful way, a recent business graduate can leap up the experience curve to co-create really sophisticated market scenarios with the AI, so that new MBA can punch above her weight to develop almost a CSO-level vision of future market trends. It’s both with really structured work, but also that open-ended type of work as well, that we see the benefit.
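To make the chain-of-thought idea concrete, here is a minimal sketch of what a prompt like the supply chain example above might look like when sent to a model programmatically. The OpenAI Python client is used purely as an illustration; the model name, the step breakdown, and the figures are hypothetical, and the same prompt could just as well be typed into any chat window.

```python
# A minimal sketch of the "let's think step by step" chain-of-thought prompt
# described above, framed around the supply chain inventory example.
# The model name, steps, and figures are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

prompt = """You are helping a supply chain manager set the inventory level for a component.

Let's think step by step:
1. Forecast next quarter's demand from the sales history below.
2. Factor in the supplier lead time of six weeks.
3. Factor in a storage cost of $2 per unit per month.
4. Consider the risk of potential supply chain disruptions.
5. Recommend a target inventory level and explain how each step affected it.

Sales history (units per month): 1200, 1350, 1100, 1500, 1600, 1450
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is less the specific API than the structure of the prompt: the reasoning is broken into explicit steps, and the model is asked to show how each step feeds the final recommendation.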

PAUL R. DAUGHERTY: One of the areas where we see it being applied a lot, and it’s really foundational, is an area that’s getting a lot of focus right now, which is software development and coding itself. It is a very stepwise process. If you’re writing software, you’re gathering data, you’re analyzing it a certain way, you’re writing code to do certain things and you’re testing the code. This kind of process of intelligent interrogation is very consistent with the way that process works and is allowing software developers using models tuned this way to get much greater results.

Another interesting twist on the story is what we’re doing in our own company at Accenture with a tool that we have called Amethyst, which is our own knowledge agent that any employee can interact with. Amethyst will actually prompt me every morning with, did you know this? Did you know that? Here are some things to think about. That prompts or suggests for me things to do and things to ask Amethyst to help me with to get better information. We’re finding that by using Amethyst, our employees are spending 98% less time searching for information than they were previously, because they’re learning how to get better information by using the tool in the appropriate way.

For example, we’re in a season where we’re doing a lot of performance reviews and feedback for our employees, so there’s certain things that it’s suggesting to me, have you done this yet? Have you thought about this yet? Things like that, that are really helpful. Part of this is developing the culture of teaching people to be comfortable and interact in the right way with the generative AI technology.

ALISON BEARD: That does seem like a good transition to reciprocal apprenticing. I do think that most people can see the value add in some of the more basic tasks that you described, but the real idea, as you’re talking about, is for the AI to get better and better at helping you and for you to get better at teaching the AI how to help you, and it becomes this virtuous loop of clearer communication and increasing productivity. So what recommendations do you have for improving on this front of apprenticing?

H. JAMES WILSON: A lot of the tacit parts of our work, a lot of that tacit dimension is about context. Your company wants to grow its product line in say, Southeast Asia, but there’s also this other context in the back of your head that your company also wants to streamline its product portfolio in Northern Europe. Your company is going to be very culturally and strategically and operationally different than its industry peers. With generative AI, you can now bring that really critical context back from in your head into the chat window in everyday language. That just wasn’t feasible for most non-technical workers in the earlier eras of AI and analytics.

ALISON BEARD: What’s a good real-world business example of how someone is using reciprocal apprenticing to create that virtuous loop?

PAUL R. DAUGHERTY: One is something that we’ve implemented at our company for one of our clients, which is a large consumer goods company, and this is an application to help their sales professionals be more effective. It’s a generative AI model that has a lot of capabilities in it. It understands the business: let’s say you’re selling chocolate-based food products to a distributor. As a sales professional, you want to know what’s in stock, what the inventory looks like, which means needing access to your supply chain. You want to know what’s in demand from your customers and who you have sold to recently, because you’re planning out your day and who you’re going to call, who you’re going to try to sell to.

The model has got all this information in it, and the sales agent, by interacting with the model, is giving it a sense of how these decisions are developed and how the sales process works. On the other side, the sales agent can ask for help and say, “Hey, I’m trying to sell to this customer and they’ve been a difficult buyer. Do you have any insights or advice from other best practices that might help me?” He could actually do sales simulation and training sessions before he actually calls the customer. It’s very much, again, this interaction and interplay that’s helping the sales agents be much more effective in the work they do, which is estimated to increase sales significantly in the organization, and the productivity of all the salespeople in the process of doing so.

ALISON BEARD: Okay, so let’s get to judgment integration. This, given concerns about AI, seems to be maybe the most critical, the whole human-in-the-loop idea, but what does it look like in practice? Is it setting up good parameters to start, recognizing when it’s time to interrupt and correct work? Where are those touch points where human judgment comes in and works effectively?

H. JAMES WILSON: So we’re moving from earlier machine learning systems that were effective at diagnostic and predictive tasks into this new generative AI era. Machines aren’t just predicting with high accuracy, they’re also creating content and offering personalized suggestions through natural language and so on. In this new era, there’s a skill premium in a sense on the ability to judge the novelty, the usefulness, and the trustworthiness of generative AI outputs and responses. This goes beyond just fact checking for machine hallucinations.

Obviously, that’s a must-have and a baseline requirement, but really creating value in this gen AI era also requires bringing your expert human judgment in areas like law or product design or science into the way that you collaborate with an LLM.

Just to give a quick example, if you think about sectors like biotech or pharmaceuticals, there are all sorts of judgment calls for those industry professionals to make that involve trust and patient experience, population privacy and those data sets, and the safety and reliability of the information that’s going to be shared with patients and with doctors. And the integration of judgment into those AI interactions, in this case, has to happen widely across the enterprise, not just in the R&D lab.

One company that we’ve talked to in our research right now is aiming to launch 15 new products over the next five years. They’re really aiming to turn the whole company into a product innovation lab from marketing to sales to service and so on. They’ve really invested in giving their non-technical experts the right training to make effective judgment calls, and also, the time to learn through experimenting with these tools in their roles. Really, the path to strategic innovation has to run widely across the enterprise, not just in the R&D team or the IT department.

ALISON BEARD: Or the individual using the AI.

H. JAMES WILSON: Exactly.

ALISON BEARD: Assuming that those ethical codes or parameters the company is using regarding consumer privacy, or avoiding bias in algorithmic decisions, or the information that’s being given are in place, are there things individuals can do to make sure that as they’re collaborating with AI, the AI is giving them good outputs?

H. JAMES WILSON: Well, in the article we talk about some basic principles and technologies that, if your organization doesn’t have them, it probably should look into getting, such as RAG, retrieval-augmented generation, which brings a database of facts into that loop as well so that the outputs are verified against that database. Another thing is just being careful, if you’re using an open-source model or a model freely available off the web, not to use any proprietary data or private information, that sort of thing.
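For illustration, here is a minimal sketch of the RAG pattern Wilson mentions. The knowledge base entries, the keyword-overlap retriever, and the example question are toy stand-ins; a production system would use an embedding-based retriever over a vetted internal document store, with the assembled prompt then sent to the model.

```python
# A toy sketch of retrieval-augmented generation (RAG): retrieve verified facts
# from an internal store and put them in front of the model so its output is
# grounded in that database. Everything here is a hypothetical stand-in.

KNOWLEDGE_BASE = [
    "Policy PX-210 covers water damage up to $50,000 per claim.",
    "Standard supplier lead time for component C-7 is six weeks.",
    "Refunds over $500 require manager approval before processing.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple word overlap with the query; return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Augment the user's question with retrieved facts before calling the model."""
    facts = retrieve(question, KNOWLEDGE_BASE)
    context = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Answer the question using ONLY the verified facts below. "
        "If the facts are insufficient, say so.\n\n"
        f"Verified facts:\n{context}\n\nQuestion: {question}"
    )

# The assembled prompt would then be sent to whichever LLM you are using.
print(build_prompt("What is the standard lead time for component C-7?"))
```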

PAUL R. DAUGHERTY: I think a really key thing here too is deciding the point where you need to put the judgment in, and this is really what we’re seeing companies look at as they apply generative AI, and it really requires stepping back and rethinking the current business practices and processes you have and looking at where gen AI should play a role and where the individuals need to step in with the judgment integration.

Just one example is in insurance, where we’re seeing a lot of companies look at generative AI and large language models for things like underwriting, which is a hugely document-based, language-based process, with lots of information, contracts, policies, et cetera, that needs to be analyzed to underwrite an insurance policy. That’s a great job for generative AI, but in terms of the policy and pricing and risk decisions, you want the judgment of individuals to be applied to that. The benefits you can get are pretty amazing.

Historically, at a lot of companies, the underwriters have only had time to read about 10% of the inputs coming in, because they just didn’t have time to get through it all. Now with generative AI, you can process 100% of it to distill the key facts for the underwriters so that they can make better decisions and ultimately make the right judgment. Because in this case, generative AI alone wouldn’t be able to do it.

That’s one example. We’re seeing similar examples in customer care, customer service applications where yes, the generative AI can understand the customer’s intent, what they might be calling about in a customer service situation, but you need the customer rep to really understand the nature of the types of solutions that they might have to the customer’s problem and really make that decision on how to best satisfy the customer’s need.
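As a sketch of where that judgment step can sit, here is a minimal human-in-the-loop pattern modeled loosely on the underwriting example: the model reads the documents and distills key facts plus a draft recommendation, but the final pricing decision is gated on a human reviewer. The data class and the summarize function are hypothetical placeholders, not an actual underwriting system.

```python
# A minimal human-in-the-loop sketch loosely modeled on the underwriting
# example: the model reads all documents and distills key facts plus a draft
# premium, but the final decision is always made by the human underwriter.
# The dataclass and summarize_submission() are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class DraftAssessment:
    key_facts: list[str]      # distilled by the model from 100% of the documents
    suggested_premium: float  # the model's draft only, never applied automatically

def summarize_submission(documents: list[str]) -> DraftAssessment:
    """Stand-in for a gen AI call that reads every document and distills key facts."""
    facts = [doc[:80] for doc in documents]  # placeholder "distillation"
    return DraftAssessment(key_facts=facts, suggested_premium=1250.0)

def underwrite(documents: list[str]) -> float:
    draft = summarize_submission(documents)
    print("Key facts distilled from all documents:")
    for fact in draft.key_facts:
        print(" -", fact)
    print(f"Model's draft premium: ${draft.suggested_premium:,.2f}")
    # Judgment integration: the human underwriter reviews the distilled facts
    # and sets the final premium; the model's output is advisory only.
    decision = input("Enter approved premium (or press Enter to accept the draft): ")
    return float(decision) if decision.strip() else draft.suggested_premium

if __name__ == "__main__":
    final = underwrite([
        "Policy application for warehouse coverage...",
        "Prior claims history: two water-damage claims in 2022...",
    ])
    print(f"Final premium set by underwriter: ${final:,.2f}")
```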

ALISON BEARD: And what advice do you have for leaders who are trying to get their people to embrace AI? It seems like from your research, most people are interested and want to, but surely a few are resistant to change or just not sure how to go about it. What should managers do?

PAUL R. DAUGHERTY: I think it requires a different era or different type of leader going forward. I think to be successful with generative AI, this really is about changing business and reinventing the way you do things, but it requires bringing the whole company along with it. At the leading companies, we’re seeing leaders really dive in and embrace it, at the CEO level, at the board level, at the C-suite level, not just the technology executives. I think that’s important.

I think the second leadership quality that’s important is learning with humility and continuing to learn the new techniques and new ways that you apply the technology. Finally, there’s a culture and learning element of it where the more you can help the people, all the people in the organization understand, embrace the technology, the more you’ll be successful in applying it to transform the business. And so democratizing the access to this technology, applying and building the learning platforms at scale that Jim and I have been talking about are going to be really important for the leading successful organizations as we go further into the generative AI era.

H. JAMES WILSON: I think leaders need a vision of the future workforce, and many don’t have that yet. A really important insight that we see in our research and work is that as workers develop these skills, they’re going to be able to do more sophisticated and innovation-focused and economically valuable work. Gen AI really gives skilled people a way to move up the knowledge work ladder in a sense.

One company that we looked at in our research, a telecom company, had been scaling generative AI while also upskilling its people. They’ve been identifying new career pathways for the people as they’ve been doing that, and they’ve also been redesigning and expanding roles inside the company. Those frontline customer service agents who acquire these fusion skills are really moving into potentially more lucrative roles like technical support and digital marketing and so on. We’re really seeing early evidence that companies that get this dynamic right of redesigning and improving career opportunity around skillful human-machine interaction have a real opportunity to leapfrog in their industry and to really help enable industry reinvention.

ALISON BEARD: This space is evolving so rapidly though. How much time should we as individuals, and should organizations, spend sort of experimenting with what’s new and what can change versus doing our jobs and keeping the companies running?

PAUL R. DAUGHERTY: Well, from a company perspective, I think the job for companies is understanding that this is the biggest transformation that we’ve seen to date in terms of exponential technology, allowing companies to really transform what they do and redefine leadership. For companies, it’s incumbent on them to really understand how this will allow them to drive their business differently, both at a strategic and operational level, and to make the appropriate investments, form the right partnerships to do so, and prepare the organization accordingly.

For individuals, I think, again, it is always changing, and I think that can be daunting because you hear every week about new models, new advances, new startups that are coming, but I wouldn’t let that dissuade anyone from really using what’s there now and developing the right skills. Because as you develop skills and become familiar with the early versions of ChatGPT, you’re able to use more powerful versions of the models as they come about. That’ll help you then learn Claude and other models that are out there. I think it’s this continuous learning environment we’re in that we’ve been talking about for years, but it’s even more important than ever. With generative AI, the good news is we have some technologies that can help us with that learning and informational process more so than ever.

ALISON BEARD: Thank you all so much for being with me today.

H. JAMES WILSON: Thanks, Alison.

PAUL R. DAUGHERTY: Thanks, Alison.

ALISON BEARD: That’s H. James Wilson and Paul R. Daugherty, both of Accenture. Together, they wrote the HBR book, Human + Machine: Reimagining Work in the Age of AI, now available in a new and expanded edition, as well as the HBR article, Embracing Gen AI at Work.

We have more episodes and more podcasts to help you manage your team, your organization, and your career. Find them at hbr.org/podcasts or search HBR in Apple Podcasts, Spotify, or wherever you listen.

Thanks to our team, Senior Producer Mary Dooe, Associate Producer Hannah Bates, Audio Product Manager Ian Fox, and Senior Production Specialist Rob Eckhardt, and thanks to you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Alison Beard.


