Use Hugging Face AI Models To Generate Dialog


About

This tutorial was inspired by this.

In this tutorial, you will learn how to interact with AI models hosted on the Hugging Face platform to generate random NPC dialog. We will be using the GPT-2 model for our dialog generation.

Steps

First, let’s look at the model we will be using. GPT-2 has been pretrained on a large corpus of English text, which makes it suitable for open-ended sentence completion.

Next, let’s write our script. Make sure it runs on the server (a Script, not a LocalScript) and that HTTP requests are enabled in your game settings. At the top, let’s initialize HttpService, as we’ll need it to make an HTTP request.

local HttpService = game:GetService("HttpService")

Our input for the model will be a half-completed sentence, such as “Hello! My name is”. Since we are using HTTP requests, we also have to encode the data in JSON format by calling HttpService:JSONEncode.

local HttpService = game:GetService("HttpService")

local data = { ["inputs"] = "Hello! My name is" }
local json = HttpService:JSONEncode(data)

The API endpoint for Hugging Face models is https://api-inference.huggingface.co/models/<MODEL_ID>, with MODEL_ID in this case being gpt2. Now that we have sufficient information to make our request, let’s see what it looks like:

local HttpService = game:GetService("HttpService")

local data = { ["inputs"] = "Hello! My name is" }
local json = HttpService:JSONEncode(data)

-- PostAsync throws on network/HTTP errors, so wrap it in pcall
local success, response = pcall(function()
	return HttpService:PostAsync("https://api-inference.huggingface.co/models/gpt2", json)
end)

if success then
	-- REMEMBER: We have to decode our data, since it's being returned in JSON format!
	local decoded = HttpService:JSONDecode(response)
end

From here, you can now use your dialog by fetching the generated text using decoded[1].generated_text. Remember to filter the text before showing it to any players!
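As a sketch of that filtering step (the helper function and its error handling here are our own illustration, not part of the original tutorial), the generated text can be run through TextService before being shown to players:

```lua
local TextService = game:GetService("TextService")

-- Hypothetical helper: filter AI-generated text before displaying it.
-- userId identifies the player the text will be shown on behalf of.
local function filterForBroadcast(text, userId)
	local ok, result = pcall(function()
		return TextService:FilterStringAsync(text, userId)
	end)
	if ok then
		return result:GetNonChatStringForBroadcastAsync()
	end
	return nil -- Filtering failed; never show unfiltered text as a fallback
end
```

If filtering fails, it is safer to discard the dialog line entirely than to display the raw model output.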

RESULT

[Screenshot of the generated dialog output]


Disclaimer: My personal experience with the GPT-2 model has been a bit rough; the generated text sometimes stops making sense as it progresses.

Closing

Congrats! You’ve successfully used the Hugging Face API to generate NPC dialog. Keep in mind, however, that you can use any public model in the Hugging Face model library (Models - Hugging Face) in place of gpt2 to get whatever the model generates - it doesn’t just have to be for dialog.

Happy developing!


Is there a way we can use GPT-2 or GPT-Neo to take in a certain question as the prompt and return an answer? If so, can I see an example from GPT-2 or GPT-Neo?

Just use https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B as the API endpoint.
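As a rough sketch (the prompt wording here is just an illustration), the same request code from the tutorial works against the gpt-neo endpoint. Since these models only continue text, frame the input so the completion reads as an answer:

```lua
local HttpService = game:GetService("HttpService")

-- The model continues the prompt, so ending it with "Answer:" nudges
-- the completion toward answering the question.
local data = { ["inputs"] = "Question: What is the capital of France?\nAnswer:" }
local json = HttpService:JSONEncode(data)

local response = HttpService:PostAsync(
	"https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B",
	json
)
local decoded = HttpService:JSONDecode(response)
print(decoded[1].generated_text)
```

Keep in mind the output is still free-form text, so there is no guarantee the completion is a correct (or even coherent) answer.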


Do you have any examples of this?

Yes, there are multiple ways to prompt the AI through the Inference API. You can read more about it with code examples below:
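For instance, the Inference API accepts an optional parameters table alongside inputs. A sketch (the specific values here are arbitrary, and which parameters are honored varies by model):

```lua
local HttpService = game:GetService("HttpService")

-- Optional generation parameters can be sent with the prompt.
local data = {
	["inputs"] = "The villager greeted the adventurer and said,",
	["parameters"] = {
		["max_new_tokens"] = 40, -- limit the length of the completion
		["temperature"] = 0.9,   -- higher values give more varied output
	},
}
local json = HttpService:JSONEncode(data)

local response = HttpService:PostAsync("https://api-inference.huggingface.co/models/gpt2", json)
local decoded = HttpService:JSONDecode(response)
print(decoded[1].generated_text)
```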

I’m having some trouble with the error “HTTP 429 (Too Many Requests)”… I did just one request. I am using the example code you provided.