Melony

Quickstart

Build your first AI chat interface in 30 seconds.

Basic Chat Component

Here's a complete example of a chat component with streaming support:

"use client";
import {
  MelonyProvider,
  useMelonyMessages,
  useMelonySend,
  useMelonyStatus,
} from "melony";

function ChatMessages() {
  const messages = useMelonyMessages();
  const send = useMelonySend();
  const status = useMelonyStatus();

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role}:</strong>
          {message.parts.map((part, i) => (
            <div key={i}>{part.type === "text" && part.text}</div>
          ))}
        </div>
      ))}
      <button onClick={() => send("Hello!")} disabled={status === "streaming"}>
        {status === "streaming" ? "Sending..." : "Send"}
      </button>
    </div>
  );
}

export default function Chat() {
  return (
    <MelonyProvider endpoint="/api/chat">
      <ChatMessages />
    </MelonyProvider>
  );
}

Step by Step

1. Wrap your app with MelonyProvider

The MelonyProvider manages the chat state and handles server communication.

<MelonyProvider endpoint="/api/chat">
  <YourChatComponent />
</MelonyProvider>

2. Use hooks to access chat data

Use the provided hooks to get messages, send new messages, and check status.

const messages = useMelonyMessages(); // Get all messages
const send = useMelonySend(); // Function to send messages
const status = useMelonyStatus(); // Current status: "idle" | "streaming" | "error"
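Based on the rendering code in the example above, a message returned by `useMelonyMessages` can be assumed to look roughly like the shape below (field names are inferred from the example, not taken from an official melony type). A small helper that flattens a message's text parts is often handy:

```typescript
// Message shape inferred from the Quickstart example (id, role, parts) —
// an assumption for illustration, not melony's exported type.
interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  parts: Array<{ type: string; text?: string }>;
}

// Concatenate the text parts of a message, skipping non-text parts.
function getText(message: ChatMessage): string {
  return message.parts
    .filter((part) => part.type === "text" && typeof part.text === "string")
    .map((part) => part.text ?? "")
    .join("");
}
```
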

3. Render messages and handle user input

Map over messages and their parts to display the conversation.

{messages.map((message) => (
  <div key={message.id}>
    <strong>{message.role}:</strong>
    {message.parts.map((part, i) => (
      <div key={i}>{part.type === "text" && part.text}</div>
    ))}
  </div>
))}
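The example sends a fixed "Hello!"; for user-typed input, you would typically trim the text and ignore empty submissions before calling `send`. A minimal sketch of that logic (wiring it to a controlled `<input>` is left to your UI code; `send` is the function returned by `useMelonySend`):

```typescript
// Returns a submit handler that validates input before sending.
// `send` is assumed to accept a plain string, as in the example above.
function makeSubmitHandler(send: (text: string) => void) {
  return (input: string): boolean => {
    const text = input.trim();
    if (text === "") return false; // nothing to send
    send(text);
    return true; // caller can clear the input field on success
  };
}
```
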

Server Setup

You'll need a server endpoint that returns a streaming response. Here's a complete example using the AI SDK:

API Route (app/api/chat/route.ts)

import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { message } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: [
      {
        role: "user",
        content: message,
      },
    ],
  });

  return result.toUIMessageStreamResponse();
}
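The route above handles a single message per request. If your client also posts prior turns (a `{ history, message }` request shape this sketch assumes — check what melony actually sends), you could fold them into the `messages` array before calling `streamText`:

```typescript
type ChatRole = "user" | "assistant";
interface ChatTurn {
  role: ChatRole;
  content: string;
}

// Append the incoming user message to any prior history from the request body.
// The { history, message } payload shape is an assumption for illustration.
function buildMessages(history: ChatTurn[], message: string): ChatTurn[] {
  return [...history, { role: "user", content: message }];
}
```
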

Customization

The example above is minimal. You can customize the UI, add styling, and use additional features:

Add custom headers

<MelonyProvider 
  endpoint="/api/chat"
  headers={{ "Authorization": "Bearer your-token" }}
>
  <YourChatComponent />
</MelonyProvider>
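In practice the token usually comes from your auth setup rather than a hard-coded string. A small sketch of assembling the `headers` object from a token that may be absent (where the token is stored is up to you; melony only documents the `headers` prop itself):

```typescript
// Build the headers object passed to MelonyProvider. Returns an empty
// object when no token is available, so no Authorization header is sent.
function buildAuthHeaders(token: string | null): Record<string, string> {
  return token ? { Authorization: `Bearer ${token}` } : {};
}
```
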

Handle different message types

{message.parts.map((part, i) => (
  <div key={i}>
    {part.type === "text" && <p>{part.text}</p>}
    {part.type === "image" && <img src={part.imageUrl} alt="" />}
    {part.type === "tool_call" && (
      <div>Tool: {part.toolName}</div>
    )}
  </div>
))}
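If you're using TypeScript, the part shapes in this example can be modeled as a discriminated union so the compiler enforces exhaustive handling. The field names below are taken from the rendering snippet above; they are not an official melony type:

```typescript
// Part variants inferred from the example: text, image, and tool_call.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "image"; imageUrl: string }
  | { type: "tool_call"; toolName: string };

// Turn a part into a plain-text label; the switch narrows each branch,
// and a missing case becomes a compile error.
function describePart(part: MessagePart): string {
  switch (part.type) {
    case "text":
      return part.text;
    case "image":
      return `[image: ${part.imageUrl}]`;
    case "tool_call":
      return `[tool: ${part.toolName}]`;
  }
}
```
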

Next Steps

Text Delta Handling

Learn how melony automatically handles streaming text updates for smooth UX.

Learn more →

Custom Message Types

Extend melony with your own message structures and types.

Learn more →

Advanced Usage

Explore advanced features and hook combinations.

Learn more →

API Reference

Complete reference for all components and hooks.

View API →