EY Utilizes Generative AI in Business Operations

Everyone is talking about generative artificial intelligence these days. And chief innovation officers are certainly paying attention.

Jeff Wong, EY’s global chief innovation officer, advises CEOs and EY employees on emerging technologies and how to integrate them into business operations. Wong also serves on the board of AI4ALL, a nonprofit organization dedicated to increasing diversity and inclusiveness in artificial intelligence.

Like many other companies, EY is working with generative AI and fine-tuning models on datasets for specific business needs. Last year, the multinational accounting and consulting firm began building on OpenAI’s GPT technology. However, EY, which has a quarter-million clients and works with the majority of the Fortune Global 500, says it is approaching the revolutionary technology with caution and deliberation.

Wong recently spoke with Quartz about EY’s use of generative AI, how AI is transforming occupations, the ethical implications for organizations that use the technology, and how to tell the difference between hype and reality in the tech sector. The conversation has been lightly edited for length and clarity.

Quartz: How is EY using generative AI today, and what are your plans for it going forward?

Jeff Wong: The first is that we’re using it for our own purposes. The second is that there are obviously a number of clients all over the world who are really interested in the space, so we’re developing generative AI tools for them.

There are hundreds of different projects being proposed internally. We’ve taken great care in deciding which ones to pursue because we want to make sure we’re doing this in a safe and considered way. EY payroll is one example. People have a lot of questions about payroll, especially when they are working abroad and are unsure which tax filings they should make. So we fed all of the payroll tax legislation into a generative AI system running on top of one of the mega-platforms. Then, instead of waiting on staff who are not available 24 hours a day, employees can ask those payroll questions directly in a chat interface, much like ChatGPT. We are still in the early stages of testing, but the time savings and accuracy gains are tremendous.
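Wong doesn’t describe the implementation, but a system like the one he outlines is commonly built as retrieval-augmented generation: the legislation is indexed, the passages most relevant to a question are retrieved, and the model is asked to answer only from those passages. Below is a minimal sketch of that pattern, assuming the OpenAI Python client and a hypothetical folder of plain-text legislation files; the model name, file layout, and retrieval logic are illustrative placeholders, not EY’s actual system.

```python
# Illustrative sketch of a retrieval-augmented payroll Q&A flow (not EY's actual system).
# Assumes OPENAI_API_KEY is set and plain-text legislation files live in ./payroll_law/.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the corpus: each file is one piece of payroll tax legislation or guidance.
corpus = {p.name: p.read_text(encoding="utf-8") for p in Path("payroll_law").glob("*.txt")}


def retrieve(question: str, k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings or a search index."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [f"[{name}]\n{text[:2000]}" for name, text in scored[:k]]


def answer(question: str) -> str:
    """Ask the model to answer strictly from the retrieved legislation excerpts."""
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Answer payroll tax questions using only the provided excerpts. "
                           "If the excerpts do not cover the question, say so.",
            },
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(answer("Which filings do I need to make while on a two-year assignment abroad?"))
```

Grounding answers in retrieved excerpts, rather than in whatever the model memorized during training, is one common way to make the accuracy Wong mentions plausible for a domain as specific as payroll tax.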

How does EY protect data privacy and employ generative AI ethically in its operations and client services?

So, you know, this isn’t a question about generative AI. This is a data policy question. And we have a data policy that includes a client clause permitting us to use client data, in a privacy-preserving way, for things like training our AI systems. We have explicit authorization to do so, not implied consent. The second point is that we have a thorough information security review process and team, which means we have a complete technical and legal grasp of the actual approach [in terms of] data, information, security, and privacy.

We have said that we will not pursue anything if we are not comfortable with the bias associated with the outcome. So if we can’t figure out how to account for bias, we won’t use AI for it this early on. In the case of generative AI, you can pose objective questions, such as: Does this law apply in this situation? That’s an objective question, so bias is less of an issue. But if bias were a bigger concern, we would be hesitant to take on the initiative.

What is EY’s position on artificial intelligence regulation? How do you stay on top of the ever-changing regulatory landscape?

Because we are a regulated business, we maintain close contact with all of the regulators. And what’s been remarkable about advanced technologies, including AI, is that governments all around the world have invited us to share our thoughts and theories about how advanced technology, in general, should be regulated. We do this for blockchain, quantum, and artificial intelligence.

Regulatory frameworks are not optional for us; they are required. We have teams that monitor regulatory regimes all across the world, including those covering advanced technologies such as artificial intelligence. So we are keeping a close eye on it everywhere.

Is it reasonable to fear that generative AI will automate white-collar jobs, causing severe workforce disruption?

We believe this has an incredible ability to automate many of the jobs and job responsibilities we have around the world, and we expect, and intend, to use it to the fullest extent possible.

Throughout my time at EY, we have used AI and automation technology. Every time we do, our teams are excited and welcoming of these new technologies. When it doesn’t arrive quickly enough, they ask for it and demand it. The same thing is happening with generative AI. We’re getting a lot of requests from people all around the world who say, “Hey, we think we can do our part of the business better and faster; we think we can use it in this way to answer better questions for our clients.”

Is there a possibility that [AI] may trigger employment shifts and job loss? We have yet to see it. But do we recognize the risks that come with it? Yes, we do, which is why we’ve always felt obligated to give our staff the training resources they need to continually upskill themselves. That’s everything from our badges system, which lets them learn about AI at various levels, blockchain at various levels, and quantum at various levels, to a couple of different master’s degrees that we offer for free.

Companies all around the world, in my opinion, need to think carefully about how they approach this. And, to be honest, I believe governments all across the world should be thinking about this as well. How do they constantly upskill, and give their citizens the opportunity to upskill?

You serve on the board of a nonprofit dedicated to increasing diversity and inclusion in the field of artificial intelligence. How do you rate D&I efforts in AI so far? And, given recent attacks on D&I in education and elsewhere, do these initiatives to build D&I into AI risk being derailed?

Yes, there should be more diversity in crucial technologies, including, and perhaps especially, AI. We’ve found in our own recruiting efforts that it’s quite tough to find a diverse set of engineers, product people, policy experts, and artificial intelligence experts who understand and have that context.

We recognized that to make this pipeline work, we needed to go all the way back to high school to ensure the pipeline was robust enough to bring a broad range of voices into the sector. Without that diversity, we’ve seen problems in the past where people don’t recognize when their algorithm produces skewed results. We have to find ways to encourage people of many backgrounds, nationalities, genders, and ways of identifying themselves to participate in these game-changing innovations. Otherwise, we risk repeating some of history’s fundamental biases.

What do you think it will be like to work in an office a year from now, when apps like Microsoft Office and Slack have been updated with generative AI?

I believe that a year from now, certain activities will work incredibly well, such as systems that can listen to this conversation, take notes, pick out the essential parts, and summarize it for you so you don’t have to do it yourself. Some of the other scenarios we’ve seen, where I can tell my machine to pull all the anomalies in this dataset, show me where income dropped, or explain why, will require more work. So in a year, I believe we will see some incredibly good tools working for us, but not necessarily the broad-based promise and change.

I’ll also emphasize that this is quite typical. The excitement around what a technology can do is nearly always ahead of reality, which is good. I just believe it will take more than a year for this to come together. Five or eight years from now, the office will most likely look very different, with AI capable of writing and responding to emails properly. Or perhaps your email system and mine have a complete discussion in which your system recognizes, “Hey, there’s something in your article; you’re missing a point, and you should ask this question,” and it asks my system. And they go back and forth multiple times, and we may only look at the final agreement. You may see that kind of progress, but I believe the time frame will be more in the five-year range than the one-year range.

How do you tell the difference between genuine value and marketing hype in generative AI products?

That is actually quite simple. The hype curve reflects people’s ideas about what’s possible, and it’s incredible. I enjoy things that are hyped up. The reality curve consists of products that I have actually seen and handled. Because people like to show us things, in my role we get to see nearly everything out there. As a result, I can see what’s real.

Now, let’s conduct a short lightning round of (hopefully!) entertaining questions. Tell me whether you believe the following are overhyped, underhyped, or correctly hyped, beginning with AI chatbots.

The promise is overstated, yet it is quite cool. The reality is also fascinating.

Cryptocurrencies

Under-hyped. Overhyped in the past, but now under-hyped.

Drone delivery

This is a great one. Overhyped in the sense that I’m not sure we’ll see them for a while. But we can do them right now.

Portugal (as a sabbatical, holiday, or relocation destination)

Under-hyped. Portugal is fantastic.

Accepting Failure

Under-hyped. Failure is an essential part of the learning process. Everything must be about ongoing learning and growth, and I’m not sure how much you can progress if you’re not ready to risk failure.

Quantum computing

Oh, this is a good one. It’s under-discussed given the repercussions, so I’d say it’s under-hyped. It is a disastrous change in a world where even 1% of all encryption is broken. So if you’re not thinking about it right now, I believe you’re not aware of the risks associated with these black swan, catastrophic-change events, and these are things you should be considering.
