A team of researchers from University College Maastricht recently published a study exploring the use of GPT-3 as an email manager. As someone with an inbox that can only be described as ridiculous, color me intrigued.
The big idea: We spend hours a day reading and responding to email. What if an AI could automate both processes?
The Maastricht team explored the idea of letting GPT-3 loose on our messaging systems from a pragmatic point of view. Rather than focusing on GPT-3's exact ability to respond to specific emails, the team considered whether the idea would even be worth trying.
Their article (read here) breaks down the potential effectiveness of GPT-3 as a messaging secretary by examining its usefulness compared to existing automated systems, its financial viability compared to human workers, and the impact of machine-generated errors on both senders and recipients.
Context: The quest to create a better email client is endless, but at the end of the day we're talking about letting GPT-3 reply to incoming emails. According to the researchers:
Our research indicates that there is a market for GPT-3-based email rationalization in several sectors of the economy, of which we will explore just a few. Across industries, the damage from a small wording error appears minor as the content usually involves neither large sums of money nor human security.
The authors then describe use cases in the insurance, energy, and public administration sectors.
Objections: First of all, it should be pointed out that this is a preprint, meaning the paper has not yet completed peer review. Often the underlying science is sound even so, but this particular paper is currently a bit of a mess. Three separate sections contain the same information, for example, so it's hard to discern the purpose of the study.
The study seems to indicate that GPT-3 could save us time and money if applied to the task of answering our business emails. But that's a gigantic "if."
GPT-3 lives in a black box. A human would need to reread every email it sends, because there's no way to be sure it won't say something that invites litigation. Aside from concerns that the machine will generate offensive or bogus text, there's also the problem of figuring out what good a general-knowledge bot would be for this task.
GPT-3 was trained on the internet, so it may be able to tell you the wingspan of an albatross or who won the 1967 World Series, but it certainly can't decide whether you want to chip in on a birthday card for a coworker or whether you're interested in leading a new subcommittee.
The point is, GPT-3 would probably be worse at responding to general emails than a simple chatbot trained to select a pre-generated response.
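To make that comparison concrete, here's a minimal sketch of the kind of template-selection chatbot the article has in mind: instead of generating free text, it matches an incoming email against hand-written intents and returns a pre-approved canned reply. All intent names, keywords, and reply strings below are hypothetical examples, not anything from the study.

```python
# A toy template-based auto-responder: every reply it can send was
# written and approved by a human in advance, so it can never produce
# offensive or litigation-inviting text the way a free-text generator can.

CANNED_REPLIES = {
    "invoice": "Thanks for your invoice. Accounts payable will process it within 30 days.",
    "meeting": "Thanks for the invitation. I'll check my calendar and confirm shortly.",
    "unknown": "Thanks for your email. A member of our team will get back to you soon.",
}

# Keyword sets that signal each intent (hypothetical examples).
KEYWORDS = {
    "invoice": {"invoice", "payment", "billing", "receipt"},
    "meeting": {"meeting", "call", "schedule", "calendar"},
}

def pick_reply(email_body: str) -> str:
    """Return the canned reply whose keyword set best matches the email,
    falling back to a safe generic acknowledgement."""
    words = set(email_body.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in KEYWORDS.items():
        score = len(words & keywords)  # count overlapping keywords
        if score > best_score:
            best_intent, best_score = intent, score
    return CANNED_REPLIES[best_intent]
```

The design choice is the point: by constraining output to a fixed menu of responses, the worst-case failure is an irrelevant-but-harmless reply, rather than the unbounded failure modes of a generative model.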
Quick take: A little Googling tells me that the landline wasn't ubiquitous in the United States until 1998. And now, decades later, only a tiny fraction of American homes still have one.
I can't help but wonder whether email will remain the standard of communication for much longer, especially if the latest line of innovation is finding ways to keep ourselves out of our own inboxes. Who knows how far away we might be from a hypothetical version of OpenAI's GPT that is reliable enough to be worth using at any commercial scale.
The research here is commendable and the article makes for an interesting read, but ultimately the usefulness of GPT-3 as an email responder is purely academic. There are better solutions for inbox filtering and automated replies than a brute-force text generator.
Published February 8, 2021 – 20:17 UTC