ProNextJS
    Professional Next.js Course

    Add Interactivity with Next.js Server Actions

    Jack Herrington

    NOTE: OpenAI has a free tier, but you do need to register a credit card with the account, even at the free tier. If this is not acceptable to you, the ai library can integrate with a number of AI providers, as well as local-only AI solutions like Ollama. (If you choose the local route, the AI will not work in production.)

    Let's add the interactive ChatGPT functionality to our app. We want users to be able to input questions and get responses from an AI. We'll need two things for this: an input control, and a way to connect to OpenAI.

    Installing Dependencies

    The first thing to do is add the input control from shadcn. The command is similar to what we've used before:

    npx shadcn-ui@latest add input
    

    Next, we need to install the libraries for connecting to OpenAI. For this, we need the OpenAI library and the ai package from Vercel:

    pnpm add ai openai
    

    The Plan

    The idea for the app goes something like this:

    The UI will have an input field and a submit button. When the user submits a query, the app will send a request to the server. The server will then return some responses, which we can display on the client side.

    One of the easiest ways to do this in Next.js is to use a Server Action.

    Server Actions are special functions you define that run specifically on the server. Whenever you call a server action from a client component, Next.js handles the fetch and data retrieval from the server for you.

    Setting Up a Server Action

    Let's set up a server action for connecting to OpenAI.

    First, create a directory in your app for server actions, along with a getCompletion.ts file at src/server-actions/getCompletion.ts. The file is named this way because a "completion" is what it's called in AI land when you chat with a bot.

    There are two different ways to define a server action:

    Adding "use server"; at the top of the file indicates that every function inside the module is a server action. Alternatively, adding "use server"; inside the definition of a function indicates that the function is a server action.
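    As a quick sketch of the two styles (the function names here are made up for illustration, not part of our app):

    ```typescript
    // Variant 1: a module-level directive as the first line of the file.
    // In a real project this file would live at e.g. src/server-actions/example.ts,
    // and every exported async function in it becomes a server action.
    "use server";

    export async function echoQuestion(question: string) {
      return `You asked: ${question}`;
    }

    // Variant 2: a function-level directive as the first statement of the body.
    // Normally this is used in a file *without* the module-level directive
    // (shown here in the same file just for illustration).
    export async function recordClick(buttonId: string) {
      "use server";
      return `Clicked: ${buttonId}`;
    }
    ```

    Either way, the function must be async, since it will always be invoked over the network when called from the client.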

    For the getCompletion file, we'll put "use server"; at the top of the file. Next, we'll need to create an OPENAI_API_KEY environment variable, which we'll use to initialize the OpenAI library:

    // inside server-actions/getCompletion.ts
    
    "use server";
    import OpenAI from "openai";
    
    const openai = new OpenAI({
      apiKey: process.env.OPENAI_API_KEY,
    });
    

    When storing the environment variable, you can put it in either .env.local or .env.development.local.
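    For example, if you go with .env.local, the file would contain a single line like this (the value shown is a placeholder, not a real key):

    ```shell
    # .env.local — loaded automatically by Next.js, and should be git-ignored
    OPENAI_API_KEY=your-api-key-here
    ```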

    Next, we'll define the getCompletion function. It will take in a messageHistory that is an array of objects that act as a transcript of the messages between the user and the AI assistant. This is nice because it gives the AI some history of the conversation so far:

    // inside server-actions/getCompletion.ts
    
    export async function getCompletion(
      messageHistory: {
        role: "user" | "assistant";
        content: string;
      }[]
    ) {
      // function implementation will be here
    }
    

    The function will send the request off to the server using the gpt-3.5-turbo model along with the messageHistory. There are several models to choose from, but this is the fastest and cheapest:

    // inside the getCompletion function
    
    const response = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: messageHistory,
    });
    

    Once we get the response, we concatenate it with the old message history and return an object with the messages:

    const messages = [
      ...messageHistory,
      response.choices[0].message as unknown as {
        role: "user" | "assistant";
        content: string;
      },
    ];
    
    return { messages };
    

    We'll do more stuff with the messages as we continue building the application, but for now we'll move on to building the chat component.

    Building the Chat Component

    Create a new file at components/Chat.tsx. Obviously, this will be a client component, so we'll add "use client"; at the top of the file. Since the component will need state, we'll import the useState Hook from React. We'll also import the Input and Button components from shadcn:

    // inside components/Chat.tsx
    "use client";
    import { useState } from "react";
    
    import { Input } from "@/components/ui/input";
    import { Button } from "@/components/ui/button";
    

    The state of this component will consist of a transcript of the messages so far, where each message has a role and content, as well as the current message being typed. We'll create a Message interface that matches the message shape:

    interface Message {
      role: "user" | "assistant";
      content: string;
    }
    

    The Chat component will be the default export for the file. It will hold state for the message history and the current message being typed:

    export default function Chat() {
      const [messages, setMessages] = useState<Message[]>([]);
      const [message, setMessage] = useState("");
    }
    

    Next, we'll add some markup for the actual chat UI.

    We'll format it so that AI stuff is on the left and user stuff is on the right:

    // inside the Chat component return
    
    return (
      <div className="flex flex-col">
        {messages.map((message, i) => (
          <div
            key={i}
            className={`mb-5 flex flex-col ${
              message.role === "user" ? "items-end" : "items-start"
            }`}
          >
            <div
              className={`${
                message.role === "user" ? "bg-blue-500" : "bg-gray-500 text-black"
              } rounded-md py-2 px-8`}
            >
              {message.content}
            </div>
          </div>
        ))}
    

    Below this, we'll add in the Input control. Its value will be the current message, and its onChange will call the setMessage function with the new value. We'll also add an onKeyUp handler that will call the onClick function when the user presses the Enter key:

    <div className="flex border-t-2 border-t-gray-500 pt-3 mt-3">
      <Input
        className="flex-grow text-xl"
        placeholder="Question"
        value={message}
        onChange={(e) => setMessage(e.target.value)}
        onKeyUp={(e) => {
          if (e.key === "Enter") {
            onClick();
          }
        }}
      />
    </div>
    

    Next we'll import the getCompletion server action and create the onClick handler.

    The onClick handler is an async function that will call getCompletion, await the result, and give the current messages and the new message as input. When the response comes back, it empties the current message and sets the list of messages to whatever came back from the AI:

    // at the top of the file
    
    import { getCompletion } from "@/app/server-actions/getCompletion";
    
    // inside the Chat component above the return
      const onClick = async () => {
        const completions = await getCompletion([
          ...messages,
          {
            role: "user",
            content: message,
          },
        ]);
        setMessage("");
        setMessages(completions.messages);
      };
    

    Finally, we'll add the Button component for sending the message below the Input, with its onClick set to the onClick handler:

    <Button onClick={onClick} className="ml-3 text-xl">
      Send
    </Button>
    

    Adding the Chat Component to the App

    Now that we've written the Chat component, we need to add it to our app:

    Inside of app/page.tsx, import the component and add it to the page below the h1:

    import Chat from "@/components/Chat";
    
    export default function Home() {
      return (
        <main>
          <h1 className="text-4xl font-bold">Welcome to GPT Chat</h1>
          <Chat />
        </main>
      );
    }
    

    At this point, we can test our application.

    Testing Our App

    Back in the browser, we should see the input control and the send button. When we type in a question and press the send button, we should get a response back from the AI:

    The app is working

    In this case, the AI answers our question that 1 + 2 = 3, so the app is working!

    What's happening is that we are calling the getCompletion server action automatically from the client. It's doing a fetch, getting the data back, and populating the messages array, which is then formatted for the UI.

    It's working great, but there are some improvements we can make.

    Make the UI More Visually Pleasing

    In order to make the UI a bit more visually pleasing, we'll bring in the Separator component from shadcn. First, install the component in the terminal:

    npx shadcn-ui@latest add separator
    

    Then we'll import it into the homepage and add it between the Chat component and the heading:

    // at the top of app/page.tsx
    import { Separator } from "@/components/ui/separator";
    
    // inside the Home component's return
    <main>
      <h1 className="text-4xl font-bold">Welcome to GPT Chat</h1>
      <Separator className="my-5" />
      <Chat />
    </main>
    

    The UI looks better now, but more importantly we need to make sure that the chat functionality only shows if the user is logged in.

    Require Authentication

    The Home component inside of page.tsx is an RSC (React Server Component), because we didn't specify "use client"; at the top of the file. This means that we can't use useSession.

    Instead, we need to use getServerSession from NextAuth:

    import { getServerSession } from "next-auth";
    

    Because getServerSession is async, we need to update the Home component to be an async function. Inside of the component, we'll create a new session variable that we'll get by awaiting the result of getServerSession:

    export default async function Home() {
      const session = await getServerSession();
    
      ...
    

    We'll then add a conditional to check if the user is logged in by looking at session?.user?.email. If they are, we'll show the Chat component. If they're not, we'll show a message telling them to log in:

    export default async function Home() {
      const session = await getServerSession();
    
      return (
        <main className="p-5">
      <h1 className="text-4xl font-bold">Welcome to GPT Chat</h1>
          {!session?.user?.email && <div>You need to log in to use this chat.</div>}
          {session?.user?.email && (
            <>
              <Separator className="my-5" />
              <Chat />
            </>
          )}
        </main>
      );
    }
    

    Refreshing the browser, we should be able to log in and see the chat functionality. If we log out, the chat functionality should disappear.

    Deployment and Next Steps

    Before you push to deployment, make sure to add the OPENAI_API_KEY environment variable to your production environment on Vercel.
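    If you prefer the command line over the Vercel dashboard, one way to do this (assuming the Vercel CLI is installed and your project is linked) is:

    ```shell
    # Prompts for the value and stores it for the production environment
    vercel env add OPENAI_API_KEY production
    ```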

    Then you can push your changes to production by adding the new files and committing and pushing your changes in git:

    git add -A && git commit -m "Interactivity" && git push -u origin main
    

    Upon successful deployment, your app should be able to communicate with OpenAI and return responses to user queries.

    Now that we have our application talking to OpenAI and getting all this done, we would want to save our conversations. So, for our next step, we're going to learn how to use a database, both locally and in production.

    Transcript

    All right, so let's go take a look at our app so far. We got our homepage. Now, this is where we want to put some ChatGPT functionality. We want to put an input where you can ask a question of the AI. And then we want to go and make a request to the AI, get that data back and show it. So let's go and start by at least giving ourselves an input control in ShadCN.

    So again, we'll use npx shadcn-ui. That's the module name. And we'll add the input. That's going to give us a really nice looking input control. Next up, we need to add our library so we can connect to OpenAI. For that, we're going to bring in OpenAI as the main library that we're going to talk to. But we're also going to bring in

    the nice Vercel wrapper for AI called AI. So let's kind of talk about how this is going to work. Now, over in our UI, we're going to have an input field, maybe a submit button on it. And you're going to type in a query, and then that's going to go off to the server and get some responses back that we're then going to show on the client. So how are we going to do that?

    Well, one of the easiest ways in Next.js to talk from the client to the server is to use what's called a server action. It's a special type of function you define in your application. You say that it is a server-only function. And then anytime you call that server function from a client component, it is Next.js that will actually handle

    doing the fetch to the server and getting back the data. It is really slick. So let me show you how to do this. So I'm going to create a new directory in our app called server actions. And within that, get completion. So this is going to be our server action

    that connects to OpenAI and gets back a completion for a prompt. That's what they call it in AI land when you do a chat with a bot. Now, there's two different ways to define a server action. You can put use server at the top of the file. That's going to say that every single function inside of that module is a server action,

    or you can put use server inside of the definition of a function. And that's going to say that this particular function is a server function. So I'm going to choose to put it at the top of this particular file. And the next thing I'm going to do is bring in OpenAI and initialize it. Now you do need an OpenAI API key. Thankfully, those are free.

    And you want to put it in an environment variable. You can put that in .env.development.local, if you want, or I have it just in my local environment of my computer. Then after that, we're going to define our function. In this case, it's getCompletion. That's the name of the function. It's going to take a message history. That is a transcript of the messages

    to and from the user and the AI. So that message history is going to be an array of objects. Each object is going to say who said it. It's either going to be user, me, or you, and then assistant, which would be the AI, and then the content. Nice thing here is it gives that AI

    some history of your conversation so far. Then we're going to send that request off to the server. I'm going to use the GPT 3.5 turbo model. There are lots of different models from OpenAI. This is actually just one of the cheapest and the fastest. We also give it our message history. That's going to have the new prompt from the user.

    Then we await the response from ChatGPT. Then we create a new array called messages that takes the old message history and concatenates onto it the new result from our ChatGPT request. And at the end, we then return an object that has the messages on it. We're actually going to add some more stuff to that object as we go.

    So at the moment, we're just going to have the messages on there. All right, next up, let's build our chat component. So we'll go over here into Components, create a new chat component. It's obviously going to be a client component. It's going to have some state, which would be the text that we're going to type into. We need an input and a button, so we'll bring those in.

    And then we'll define our chat component. Now let's talk about the state we need. Well, we need messages. So let's define a type for a message. It's going to have a role and content. And then we're going to find some state for this component. We're going to have two pieces of state. We're going to have the message history. That's your transcript that you have so far. And then we're going to have the message, assuming you're currently typing in message.

    So now let's go and do some formatting of this. Right at the top here, we'll format our messages. We'll have AI stuff on the left and user stuff on the right. Then down below it, in a div, we're going to have the input control. Then just use the value and the on change, just like you would in any normal React component.

    And then the send button is going to send this off to the AI. So now we need to go and bring in that server action. To do that, we simply import the server action into this component. That's it. Get completion. Now we need to call it, so let's create an onClickHandler for that.

    So this asynchronous click handler is going to call getCompletion, and it's going to await it and get that result back. And as input, it's going to give the current messages plus the new message that is in that message state. And then once it gets that response back, it's going to empty out that message and then set the list of messages to whatever we got back from the AI.

    So now we need to call it, of course. Let's go take that down to our button, give it an onClick there, and just make it a little nicer on the key up. And we'll see if we have an enter, and if we do, then we'll call onClick. Let's go bring this chat component into our app.

    To do that, we go over to our page, we import chat, and then down here, we just invoke it. Let's give it a try. Go over to our localhost, go back to main. Well, that looks pretty good. What does one plus two equal?

    I'll hit Enter. Wow, that's really great. So it's doing basic math for us via AI. That is really good, though. So what's happening is we are calling that getCompletion server function automatically from the client. It's doing a fetch, getting that data back,

    populating that messages array, and we are formatting it. That looks great. OK, a couple of little things, though. I want to clean this up. So one, I want to put a separator between this line and this chat section. So I'm going to go bring in one more ShadCN component. So bring in the separator component.

    And I'll just import that in here and put it above my chat. OK, cool. One more thing, though. We don't want to show this functionality unless we're logged in. And we're in an RSC right now, right? We haven't said "use client" inside of here. So we can't use useSession.

    We need a different way to get the session on the server. Thankfully, we have getServerSession. So we can bring in getServerSession from NextAuth. Now, we do need to await getServerSession.

    So that means that this needs to be an async function. So now we can wrap the separator in this chat. In a conditional, it says only when we have a session will we show separator in chat. And the only way you can get a session is if you log in. And the only way you can log in is if you are me, or in this case, you, in your application.

    So let's go and put up a little warning that says you need to be logged in if you want to use this. But if you are logged in, then we'll show the separator and the chat. All right, looks pretty good.

    Let's give it a try. All right, let's hit Refresh. We got our separator. That looks good. We can still use this. Great. Now let's log out and see what happens. And now we say, from the server side, you need to be logged in to use this chat. So there's no way to defeat this. Really nice.

    All right, let's sign in again. All right, before we push this to production, make sure to go and add your OpenAI API key to your Vercel environment variables. Fantastic. Now I'm going to go and do our deployment. To do that, I'm again going to add all our local changes, commit it, and then push it to main.

    All right, our deployment is building. OK, looks like it's built. Let's give it a try. Refresh. Awesome. It already knows that we're logged in. I'll ask a new question. And we get a response back, which

    means that my OpenAI API key is set in my Vercel instance. If you are getting an error here, it's probably because that is not set. So you'll want to go and set it, and then redeploy the application to make sure that that environment variable gets set. OK, now that we have our application talking to OpenAI and getting all this stuff back, we

    want to actually keep our conversations around. So we have to go and store them somewhere. So in the next section, we're going to learn how to use a database, both locally and in production. It's really cool. I'll see you in the next session.