AI Integration
Add AI-powered content generation to your editor.
Blokhaus provides a provider-agnostic AI integration system. You supply an AIProvider that returns a ReadableStream of text, and Blokhaus handles the streaming UI, accept/discard flow, and history management.
Core concepts
The AI system has three pieces:
- AIProvider -- an interface you implement to connect to any AI backend.
- AIPlugin -- a React component that registers the AI commands and renders the prompt input.
- AIPreviewNode -- a Lexical DecoratorNode that manages streaming in local React state and commits to the AST only on accept.
This architecture guarantees that streaming adds no entries to the undo stack, and that accepting the generated content adds exactly one.
The AIProvider interface
interface AIProvider {
/** Human-readable name (e.g., "OpenAI", "Mistral", "Ollama") */
name: string;
/**
* Called with a prompt, surrounding editor context, and optional model config.
* Must return a ReadableStream of text chunks.
* The library consumes this stream -- it does not care about the underlying provider.
*/
generate: (params: AIGenerateParams) => Promise<ReadableStream<string>>;
}
interface AIGenerateParams {
/** The user's prompt text. */
prompt: string;
/** Surrounding editor content serialized as Markdown. */
context: string;
/** Optional model-level configuration. */
config?: AIGenerateConfig;
}
interface AIGenerateConfig {
/** Sampling temperature (0-2). Lower = more deterministic. */
temperature?: number;
/** Maximum tokens to generate. */
maxTokens?: number;
/** System prompt / persona for the model. */
systemPrompt?: string;
}
Basic setup
"use client";
import { EditorRoot, AIPlugin } from "@blokhaus/core";
import type { AIProvider } from "@blokhaus/core";
const myProvider: AIProvider = {
name: "My AI",
generate: async ({ prompt, context, config }) => {
const response = await fetch("/api/ai/generate", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ prompt, context, ...config }),
});
if (!response.ok) {
throw new Error(`AI request failed: ${response.status}`);
}
// The response body is a ReadableStream<Uint8Array>.
// Transform it to ReadableStream<string>.
const reader = response.body!.getReader();
const decoder = new TextDecoder();
return new ReadableStream<string>({
async pull(controller) {
const { done, value } = await reader.read();
if (done) {
// Flush any bytes still buffered in the decoder before closing.
const tail = decoder.decode();
if (tail) controller.enqueue(tail);
controller.close();
return;
}
controller.enqueue(decoder.decode(value, { stream: true }));
},
cancel(reason) {
// Propagate cancellation (e.g., the user discarding mid-stream).
return reader.cancel(reason);
},
});
},
};
export default function EditorPage() {
return (
<EditorRoot namespace="my-editor">
<AIPlugin provider={myProvider} />
</EditorRoot>
);
}
Building a custom provider
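Providers that call SSE endpoints (such as the OpenAI-compatible route in this section) receive `data: {...}` frames rather than plain text, so the client-side provider must unwrap the frames before enqueuing tokens. A sketch of such a parser, assuming the standard chat-completions delta shape (`choices[0].delta.content`); a production parser would also buffer frames split across network chunks:

```typescript
// Extract text tokens from a chunk of OpenAI-style SSE lines.
// Example frame:  data: {"choices":[{"delta":{"content":"Hi"}}]}
// Terminal frame: data: [DONE]
function parseSSEChunk(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    try {
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (typeof delta === "string") tokens.push(delta);
    } catch {
      // A frame split across network chunks fails to parse here; a real
      // implementation buffers it and retries on the next chunk.
    }
  }
  return tokens;
}
```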
Example: OpenAI-compatible API route
import { NextRequest } from "next/server";
export async function POST(request: NextRequest) {
const { prompt, context, systemPrompt } = await request.json();
const response = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
},
body: JSON.stringify({
model: "gpt-4o",
stream: true,
messages: [
{
role: "system",
content: systemPrompt ?? "You are a helpful writing assistant.",
},
{
role: "user",
content: `Context:\n${context}\n\nTask:\n${prompt}`,
},
],
}),
});
// Forward OpenAI's SSE stream directly to the client.
// Note: the body keeps its SSE framing ("data: {...}" lines), so the
// client-side provider must parse those frames rather than treating
// the bytes as plain text.
return new Response(response.body, {
headers: {
"Content-Type": "text/event-stream",
"Cache-Control": "no-cache",
},
});
}
Example: Ollama (local) provider
const ollamaProvider: AIProvider = {
name: "Ollama",
generate: async ({ prompt, context }) => {
const response = await fetch("http://localhost:11434/api/generate", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
model: "llama3",
prompt: `Context:\n${context}\n\nTask:\n${prompt}`,
stream: true,
}),
});
const reader = response.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";
return new ReadableStream<string>({
async pull(controller) {
const { done, value } = await reader.read();
if (done) {
controller.close();
return;
}
// Ollama streams newline-delimited JSON; one network chunk may
// hold several complete lines plus a partial trailing line.
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split("\n");
// Keep the trailing partial line for the next pull.
buffer = lines.pop() ?? "";
for (const line of lines) {
if (!line.trim()) continue;
const json = JSON.parse(line);
if (json.response) {
controller.enqueue(json.response);
}
if (json.done) {
controller.close();
return;
}
}
},
cancel(reason) {
return reader.cancel(reason);
},
});
},
};
The streaming flow
When the user triggers AI generation (via the /ai slash menu item or programmatically), the following sequence occurs:
1. The AIPlugin reads the surrounding editor nodes (configurable via contextWindowSize) and serializes them to Markdown.
2. An AIPreviewNode is inserted into the AST in a single editor.update() call.
3. The AIPreviewNode's React component calls provider.generate() and begins consuming the stream.
4. Tokens arrive and update the node's local React state (useState). No editor.update() calls are made during streaming, which keeps the undo stack clean.
5. When the stream completes, "Accept" and "Discard" buttons appear.
6. Accept: the final Markdown content is parsed into Lexical nodes, and the AIPreviewNode is replaced with those nodes in a single editor.update() call. This creates exactly one undo history entry.
7. Discard: the AIPreviewNode is removed from the AST in a single editor.update() call. The document is unchanged.
AIPluginConfig options
Pass a config prop to AIPlugin to customize behavior:
<AIPlugin
provider={myProvider}
config={{
generate: {
temperature: 0.7,
maxTokens: 2000,
systemPrompt: "You are a technical writing assistant. Use Markdown.",
},
labels: {
header: "AI Assistant",
streaming: "Writing...",
accept: "Insert",
discard: "Cancel",
retry: "Try again",
dismiss: "Close",
defaultError: "Something went wrong. Please try again.",
},
retry: {
maxRetries: 3,
},
contextWindowSize: 5,
onError: (error) => {
console.error("AI error:", error);
// Send to your error tracking service
},
onAccept: (content) => {
// Track analytics
console.log("AI content accepted:", content.length, "chars");
},
onDiscard: () => {
console.log("AI content discarded");
},
}}
/>
Full config reference
| Property | Type | Default | Description |
|---|---|---|---|
| generate.temperature | number | (provider default) | Sampling temperature (0-2) |
| generate.maxTokens | number | (provider default) | Maximum tokens to generate |
| generate.systemPrompt | string | (provider default) | System prompt / persona |
| labels.header | string | "AI" | Header label on the preview node |
| labels.streaming | string | "generating..." | Status text while streaming |
| labels.accept | string | "Accept" | Accept button label |
| labels.discard | string | "Discard" | Discard button label |
| labels.retry | string | "Retry" | Retry button label |
| labels.dismiss | string | "Dismiss" | Dismiss button label (error state) |
| labels.defaultError | string | "An error occurred" | Fallback error message |
| retry.maxRetries | number | Infinity | Maximum retry attempts; set to 0 to disable |
| contextWindowSize | number | 3 | Number of preceding blocks to include as context |
| onError | (error: Error) => void | -- | Called when the stream encounters an error |
| onAccept | (content: string) => void | -- | Called when the user accepts generated content |
| onDiscard | () => void | -- | Called when the user discards generated content |
| renderPrompt | (props: AIPromptInputRenderProps) => ReactElement | -- | Custom prompt input component |
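The "(provider default)" entries above mean that unset generate options are left to the provider. A provider can supply its own fallbacks with a small wrapper; this is an illustrative helper, not a library API (the interfaces are repeated inline from the sections above for self-containment):

```typescript
interface AIGenerateConfig {
  temperature?: number;
  maxTokens?: number;
  systemPrompt?: string;
}
interface AIGenerateParams {
  prompt: string;
  context: string;
  config?: AIGenerateConfig;
}
interface AIProvider {
  name: string;
  generate: (params: AIGenerateParams) => Promise<ReadableStream<string>>;
}

// Wrap a provider so every call carries fallback config values.
// Per-call config still wins on conflicts (spread order).
function withDefaults(provider: AIProvider, defaults: AIGenerateConfig): AIProvider {
  return {
    ...provider,
    generate: (params) =>
      provider.generate({ ...params, config: { ...defaults, ...params.config } }),
  };
}
```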
Custom prompt input
Replace the built-in prompt input with your own component using renderPrompt:
<AIPlugin
provider={myProvider}
config={{
renderPrompt: ({ position, onSubmit, onClose }) => (
<div
style={{
position: "fixed",
top: position.top,
left: position.left,
zIndex: 60,
}}
>
<input
autoFocus
placeholder="What should I write?"
onKeyDown={(e) => {
if (e.key === "Enter") {
onSubmit(e.currentTarget.value);
}
if (e.key === "Escape") {
onClose();
}
}}
/>
</div>
),
}}
/>
The AIPromptInputRenderProps interface:
interface AIPromptInputRenderProps {
/** Current position for fixed positioning */
position: { top: number; left: number };
/** Call with the prompt text to submit */
onSubmit: (prompt: string) => void;
/** Call to close/cancel the prompt */
onClose: () => void;
}
The /ai slash menu item
When AIPlugin is included, a /ai item automatically appears in the slash menu. Typing /ai opens the prompt input at the cursor position. The user types a prompt, presses Enter, and the AIPreviewNode appears with streaming content.
Programmatic AI insertion
You can trigger AI generation programmatically using Lexical commands:
import {
OPEN_AI_PROMPT_COMMAND,
INSERT_AI_PREVIEW_COMMAND,
} from "@blokhaus/core";
// Open the prompt input
editor.dispatchCommand(OPEN_AI_PROMPT_COMMAND, undefined);
// Or insert directly with a prompt
editor.dispatchCommand(INSERT_AI_PREVIEW_COMMAND, {
prompt: "Write a summary of the content above",
});
Context serialization
Before any AI call, the relevant Lexical nodes are serialized to Markdown using the serializeNodesToMarkdown utility. This is token-efficient and preserves structural semantics. The raw Lexical JSON is never sent to the model.
The number of preceding blocks included as context is controlled by contextWindowSize (default: 3).
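The window selection can be pictured as taking the last contextWindowSize serialized blocks before the cursor and joining them. A hypothetical sketch of that step (buildContext is not a Blokhaus export; it only illustrates the described behavior):

```typescript
// Join the last `contextWindowSize` preceding blocks (already
// serialized to Markdown) into a single context string.
function buildContext(precedingBlocks: string[], contextWindowSize = 3): string {
  return precedingBlocks.slice(-contextWindowSize).join("\n\n");
}
```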