Natural language AI in your next project? It’s easier than you think

Want your next project to trash talk? To dynamically rewrite annoying log messages as sci-fi technobabble? To answer questions cheerfully (or grumpily)? This sort of thing and much more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably much easier to use than you might think.

In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this capability to your next project. It’s not free in the long run, although initial use is free with signup, but the cost for hobbyist-scale projects is very low.

The basic idea

OpenAI has an API that provides access to GPT-3, a machine learning model capable of performing just about any task that involves understanding or generating natural-sounding language.

OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that, the door is open.

Creating an account also comes with a number of free credits that can be used to experiment with ideas. Once the free trial is used up or has expired, using the API costs money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing ranges from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credit, of which I have so far managed to spend only about 5%.
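To put the pricing in perspective, here is a quick back-of-the-envelope sketch based on the figures above (the estimate_cost helper is purely for illustration, not part of any API):

```python
# Rough cost math using the numbers above: ~750 words per 1000 tokens,
# and $0.0008 to $0.06 per 1000 tokens depending on the engine.

def estimate_cost(words: float, price_per_1k_tokens: float) -> float:
    """Estimate the dollar cost of processing a given number of words."""
    tokens = words * 1000 / 750  # roughly 750 words per 1000 tokens
    return tokens / 1000 * price_per_1k_tokens

# A full 750-word exchange on the most expensive tier costs about 6 cents:
print(round(estimate_cost(750, 0.06), 4))
```

Remember that both the prompt and the completion count toward the token total.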

Let’s take a closer look at how it works and what can be done with it!

How it works

The API accepts requests in a variety of ways, so if you can craft a curl statement, use the command line, or write some simple Python (or node.js) code, good news! You have everything you need to start trying out ideas!

I will describe using the API in its most basic way: completion. That means presenting the API with a prompt, from which it will provide a text completion that attempts to match that prompt. All of this is done entirely in text, formatted as natural language.

Using the API this way can be thought of as a kind of highly capable autocomplete. The engine itself can be thought of as a sort of fantastically-complex state machine, while at the same time not being quite like anything else.

What kind of stuff is possible?

Here is a very simple example:

Prompt (sent to the API): Write a witty error message for a low battery condition.
Completion (response from the API): “You’re running on empty!”

The prompt is how one “programs” the model, and designing a good prompt is a big part of getting good results.

Because the model is capable of understanding language structure and intent, writing a prompt can be a lot like giving a writing assignment to a middle-schooler.

Here’s a slightly more complicated example that does just that:

Prompt (sent to the API):

Provide an exciting comment about an event in a tabletop game about trains.

Event: A player named Rob sells a company.
Comment:

Completion (response from the API):

“Rob has decided to sell their company. This is a big move and will have a big effect on the game. Who will buy the company? What will they do with it? Only time will tell.”

There are a few things worth noting in the example above. The first line of the prompt describes what is to be done. The next two lines (“Event:” and “Comment:”) set up a pattern for how the statement should be completed. Note that the “Comment:” line is left blank in the prompt; the model completes what was started.
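A prompt like this can easily be assembled in code. Here is a minimal Python sketch (the build_prompt helper is hypothetical) that constructs the train-game prompt above, leaving the final “Comment:” line blank for the model to complete:

```python
# Hypothetical helper that assembles the train-game prompt, leaving the
# final "Comment:" line blank so the model completes what was started.

def build_prompt(event: str) -> str:
    return (
        "Provide an exciting comment about an event in a tabletop game "
        "about trains.\n"
        "\n"
        f"Event: {event}\n"
        "Comment:"
    )

print(build_prompt("A player named Rob sells a company."))
```

The resulting string would then be sent as the prompt field of an API request.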

The practice of creating effective prompts is called prompt design and is the key to getting all kinds of useful and varied responses from the API. Let’s take a closer look at how prompt design affects results.

Getting better responses

If one has trouble getting the model to respond in a useful way, it may help to include multiple examples of what is being requested in the prompt itself, giving the model a better idea of what is expected.

The API documentation has an excellent demonstration of this concept and I will briefly describe it below:

Prompt (sent to the API): Suggest three names for a horse that is a superhero.
Completion (response from the API): Superman, Batman, Spiderman

This is not really the result we want. However, the same request gives significantly better results when the prompt includes a few examples:

Prompt (sent to the API):

Suggest three names for an animal that is a superhero.

Animal: Cat
Names: Captain Sharpclaw, Agent Fluffball, The Incredible Feline
Animal: Dog
Names: Ruff the Protector, Wonder Canine, Sir Barks-a-Lot
Animal: Horse
Names:

Completion (response from the API):

Mighty Steed, Blaze the Wonder Horse, Thunderhoof

Doing this increases costs (keep in mind that one pays per token, in both the prompt and the completion), but providing multiple examples in the prompt can be the key to getting the best results in some cases, because it makes clear to the model exactly what is being requested and how the response should be formatted.
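Few-shot prompts like the one above follow a simple pattern that lends itself to being generated. Here is a hypothetical sketch (the few_shot_prompt helper is illustrative only) that builds the superhero-animal prompt, leaving the final “Names:” line for the model to fill in:

```python
# Hypothetical few-shot prompt builder: each example pair shows the model
# the expected format; the final "Names:" line is left for it to fill in.

EXAMPLES = [
    ("Cat", "Captain Sharpclaw, Agent Fluffball, The Incredible Feline"),
    ("Dog", "Ruff the Protector, Wonder Canine, Sir Barks-a-Lot"),
]

def few_shot_prompt(instruction: str, query: str) -> str:
    lines = [instruction, ""]
    for animal, names in EXAMPLES:
        lines.append(f"Animal: {animal}")
        lines.append(f"Names: {names}")
    lines.append(f"Animal: {query}")
    lines.append("Names:")
    return "\n".join(lines)

print(few_shot_prompt(
    "Suggest three names for an animal that is a superhero.", "Horse"
))
```

Swapping in different example pairs is an easy way to experiment with how the format of the examples affects the completions.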

Again, it is helpful to think of the prompt as a writing assignment for a middle-schooler; a middle-schooler who can ultimately be thought of as a fantastically-complex and somewhat variable state machine.

Same prompt, different completions

For a given prompt, the API does not necessarily return the same completion every time. While the nature of the prompt and the data the model has been trained on play a role, variety in the responses can also be affected by the temperature setting of a request.

Temperature is a value between 0 and 1, and is an expression of how deterministic the model should be when predicting a valid completion to a prompt. A temperature of 0 means that submitting the same prompt will yield the same (or very similar) response every time. A temperature above zero will yield different completions each time.

Put another way, a lower temperature means the model takes fewer risks, resulting in more deterministic completions. This is useful when one wants completions that can be accurately predicted, such as responses that are factual in nature. Raising the temperature, on the other hand (0.7 is a common default value), yields more variation in the completions.
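In a request, temperature is just one field in the payload (the fields below mirror the curl example later in this article; the make_payload helper is hypothetical). A sketch contrasting the two use cases:

```python
# Sketch of two request payloads; the only difference is the temperature.

def make_payload(prompt: str, temperature: float) -> dict:
    assert 0.0 <= temperature <= 1.0, "temperature is defined on [0, 1]"
    return {
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": 64,
    }

# Near-deterministic: good for factual answers that should repeat reliably.
factual = make_payload("What is the capital of France?", 0.0)

# More adventurous: good for creative text that should vary between calls.
creative = make_payload("Write a limerick about robots.", 0.7)
```

Everything else about the two requests is identical; only the temperature changes how much the completions will wander.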

Fine-tuning the model

The natural language model behind the API is pre-trained, but it is still possible to customize the model with a separate dataset created for a specific application.

This process, called fine-tuning, effectively allows one to provide the model with far more examples than would be practical to include in every prompt. In fact, once a fine-tuning dataset has been provided, one no longer needs to include examples in the prompt at all. Requests will also be processed faster.

This probably isn’t necessary unless one has a fairly narrow application, but if you find that getting solid results for your project depends on a large prompt and you’d like it to be more efficient, fine-tuning is where you need to look. OpenAI provides tools to make this process as easy as possible, should you need it.
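For a sense of what a fine-tuning dataset looks like: OpenAI’s fine-tuning tools at the time of writing accepted a JSONL file of prompt/completion pairs. A minimal sketch (the filename and example pairs are purely illustrative):

```python
import json

# Sketch of a fine-tuning dataset: one JSON object per line, each holding
# a prompt/completion pair that would otherwise have to be repeated in
# every request. The pairs below are illustrative only.

pairs = [
    {"prompt": "Animal: Cat\nNames:",
     "completion": " Captain Sharpclaw, Agent Fluffball, The Incredible Feline"},
    {"prompt": "Animal: Dog\nNames:",
     "completion": " Ruff the Protector, Wonder Canine, Sir Barks-a-Lot"},
]

with open("superhero_names.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

A file like this, with many more entries, is what gets uploaded to create a fine-tuned model.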

What does the code look like?

There is an interactive web tool (the playground, account required) that lets one use the model to test ideas without writing any code, but it also has the handy feature of generating a code snippet on demand, for easy copy-and-pasting into projects.

Here is the first example in this article, formatted as a simple curl request:

curl https://api.openai.com/v1/engines/text-davinci-002/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "prompt": "Write a witty error message for a low battery condition.",
  "temperature": 0.7,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}'

And the same, this time in Python:


import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Write a witty error message for a low battery condition.",
    temperature=0.7,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
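The generated text comes back inside the response’s “choices” list. Here is a sketch using a mocked response dict with the same shape as the API’s JSON (the error message shown is invented for illustration):

```python
# The completion text lives in the response's "choices" list. A mocked
# response dict with the same shape as the API's JSON (the text shown
# is made up for illustration):

mock_response = {
    "choices": [
        {"text": "\n\nError 404: Motivation not found. Please recharge.",
         "index": 0}
    ]
}

completion = mock_response["choices"][0]["text"].strip()
print(completion)
```

Completions often begin with newline characters, hence the strip() before using the text.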

Installing the Python package also installs a utility that can be used directly from the command line for maximum convenience:


$ openai api completions.create -e text-davinci-002 -p "Write a simple poem about daisies." --stream -M 128

(Note: --stream displays results as they are received, and -M 128 limits the response to a maximum of 128 tokens.)

The “Write a simple poem about daisies” prompt generated the following text for me, which will be different each time:


The daisy is a beautiful flower 
That grows in the meadow and in the pasture 
It has a yellow center and white petals 
That make it look like the sun 
The daisy is a symbol of innocence 
And purity and is loved by all

All of the examples above work in the same way: they send a prompt to the OpenAI API (using one’s API key for access, which the examples above assume is set in an environment variable named OPENAI_API_KEY), and receive a completion in response.

Responsible use

It is worth highlighting OpenAI’s commitment to responsible use, including guidelines on safety best practices for applications. That link contains a lot of thought-provoking information, but the short version is to always remember that this is a tool that is:

  1. Capable of making things up in a very believable way, and
  2. Able to interact with people.

It is not hard to see that this combination could be harmful if used irresponsibly. As with most tools, one should be mindful of the potential for abuse, but tools can also be wonderful things.

Getting ideas yet?

Using the API is not free in the long run, but creating an account comes with a set of free credits that can be used to play around and try out a few ideas, and even the most expensive engine costs very little to use for personal projects. All of my enthusiastic testing so far has spent only about 2 USD of my free trial.

Need some inspiration? We have already covered a few projects heading in this direction. This robotic putting game uses natural language AI to generate trash talk, and the Deep Dreams Podcast consists entirely of machine-generated fairy tales intended as a sleep aid, created with the OpenAI API.

Now that you know what kinds of things are possible and how easy it is to get started, maybe you already have some ideas? Let us know about them in the comments!
