A Question-Answering Bot Powered by Wikipedia Coupled to GPT-3
Still fascinated by the power and possibilities of GPT-3, here I couple it to Wikipedia
If you follow me, you’ve seen that I’m fascinated with GPT-3 both as a productivity tool and as a tool for information retrieval through natural-language questions. You’ve also seen that while GPT-3 often answers a question correctly, sometimes it does not, and it can even be misleading or confusing because its answer sounds confident despite being wrong. In some cases, though not always, when it cannot find a reasonable completion (i.e. it “doesn’t know” the answer) it tells you so, or it simply doesn’t answer at all. I showed you that factual accuracy can be improved by fine-tuning the model or, more easily, by few-shot learning. But it isn’t easy to decide what information to use in these procedures, let alone how to apply it. Here I present a rather simple way to enhance your bot with information it retrieves directly from Wikipedia. As you will see by reading on, it works quite well.
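To preview the idea before diving in, here is a minimal TypeScript sketch of the retrieval-plus-prompting pattern. It assumes Node 18+ (for the built-in fetch), an OpenAI API key in the OPENAI_API_KEY environment variable, and the GPT-3 completions endpoint and model names available at the time of writing; it is a sketch of the general pattern, not the exact code behind my bot.

```typescript
// Fetch a plain-text summary of a Wikipedia page via the public REST API.
async function getWikipediaExtract(title: string): Promise<string> {
  const url =
    "https://en.wikipedia.org/api/rest_v1/page/summary/" +
    encodeURIComponent(title);
  const res = await fetch(url);
  const data = await res.json();
  return data.extract ?? "";
}

// Answer a question by grounding the GPT-3 prompt in the retrieved text.
async function answerWithWikipedia(
  topic: string,
  question: string
): Promise<string> {
  const context = await getWikipediaExtract(topic);
  const prompt =
    "Answer the question using only the text below. " +
    'If the text does not contain the answer, reply "I don\'t know."\n\n' +
    `Text: ${context}\n\nQuestion: ${question}\nAnswer:`;

  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003", // a GPT-3 completion model of the time
      prompt,
      max_tokens: 150,
      temperature: 0, // deterministic output favors factual answers
    }),
  });
  const data = await res.json();
  return data.choices[0].text.trim();
}

// Example usage:
// answerWithWikipedia("Photosynthesis", "Which pigment captures light?")
//   .then(console.log);
```

The key design choice is that the Wikipedia extract, not the model’s internal memory, is declared the source of truth in the prompt, and the instruction to reply “I don’t know” gives the model an explicit way out when the retrieved text doesn’t contain the answer.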
Introduction
GPT-3 is powering many projects that were unthinkable until just a year or so ago. Just look at the articles I wrote presenting various example applications, all with the twist that they are web-based and run on the client, easily achieving things as futuristic-looking as holding a natural conversation with the computer:
Need for more accurate information
Although there’s a good chance that GPT-3 will answer a question correctly given the right settings, sometimes it will reply that it doesn’t know, or not reply at all. Worse, it will often give incorrect answers that can be very misleading or confusing because they are delivered with seemingly high confidence. As we saw, this can be mitigated with fine-tuning or, more easily, with few-shot learning. But how, exactly?
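As a reminder of what few-shot learning looks like in practice, here is a hypothetical prompt: a handful of solved question–answer pairs prepended to the user’s question, so the model imitates both the format and the factual style. The pairs below are purely illustrative; choosing which facts to include is exactly the hard part noted earlier.

```typescript
// Hypothetical few-shot prompt: the example Q/A pairs are invented for
// illustration, and the user's question is appended at the end.
const userQuestion = "What is the capital of Australia?";

const fewShotPrompt = `Q: What is the chemical symbol for gold?
A: Au

Q: In what year did Apollo 11 land on the Moon?
A: 1969

Q: ${userQuestion}
A:`;
```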