# AIPlugin
AI content generation with streaming preview.
## Import

```ts
import { AIPlugin } from "@blokhaus/core";
```

## Overview
The `AIPlugin` provides AI-powered content generation with an isolated streaming preview. When the user triggers an AI prompt (via the `/ai` slash menu item or programmatically), an `AIPreviewNode` is inserted into the AST. The streamed response is rendered in a visually distinct block using the `--blokhaus-ai-stream` CSS token. The preview uses local React state during streaming -- it never calls `editor.update()` until the user explicitly accepts or discards the result.
This architecture guarantees zero undo history entries during streaming, and exactly one history entry when the user accepts.
## Props
| Prop | Type | Default | Description |
|---|---|---|---|
| `provider` | `AIProvider` | (required) | The AI streaming provider. See below. |
| `config` | `AIPluginConfig` | `{}` | Optional configuration for behavior, labels, and callbacks. |
### The AIProvider interface

```ts
interface AIProvider {
  /** Human-readable name (e.g., "OpenAI", "Mistral", "Ollama") */
  name: string;
  /**
   * Called with a prompt, surrounding editor context, and optional model config.
   * Must return a ReadableStream of text chunks.
   * The library consumes this stream -- it does not care about the underlying provider.
   */
  generate: (params: AIGenerateParams) => Promise<ReadableStream<string>>;
}

interface AIGenerateParams {
  /** The user's prompt text. */
  prompt: string;
  /** Surrounding editor content serialized as Markdown. */
  context: string;
  /** Optional model-level configuration. */
  config?: AIGenerateConfig;
}

interface AIGenerateConfig {
  /** Sampling temperature (0-2). Lower = more deterministic. */
  temperature?: number;
  /** Maximum tokens to generate. */
  maxTokens?: number;
  /** System prompt / persona for the model. */
  systemPrompt?: string;
}
```

### The AIPluginConfig interface

```ts
interface AIPluginConfig {
  /** Model-level configuration (temperature, maxTokens, systemPrompt). */
  generate?: AIGenerateConfig;
  /** Custom labels for the preview node UI. */
  labels?: AIPreviewLabels;
  /** Retry configuration. */
  retry?: AIRetryConfig;
  /** Number of preceding blocks to include as context. Default: 3 */
  contextWindowSize?: number;
  /** Called when the AI stream encounters an error. */
  onError?: (error: Error) => void;
  /** Called when the user accepts generated content. */
  onAccept?: (content: string) => void;
  /** Called when the user discards generated content. */
  onDiscard?: () => void;
  /** Custom prompt input component. Replaces the built-in prompt input. */
  renderPrompt?: (props: AIPromptInputRenderProps) => React.ReactElement;
}
```
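If you need full control over the prompt UI, `renderPrompt` replaces the built-in input. The shape of `AIPromptInputRenderProps` is not reproduced in this reference, so the sketch below is hypothetical -- the `submit` and `cancel` prop names are assumptions, not confirmed API; verify them against the exported type before relying on them.

```tsx
import { useState } from "react";

// Hypothetical sketch: assumes AIPromptInputRenderProps exposes `submit`
// and `cancel` callbacks. Check the actual type definition.
function MyPromptInput({ submit, cancel }: {
  submit: (prompt: string) => void;
  cancel: () => void;
}) {
  const [value, setValue] = useState("");
  return (
    <form onSubmit={(e) => { e.preventDefault(); submit(value); }}>
      <input
        autoFocus
        value={value}
        onChange={(e) => setValue(e.target.value)}
        placeholder="Ask the AI..."
      />
      <button type="submit">Generate</button>
      <button type="button" onClick={cancel}>Cancel</button>
    </form>
  );
}
```

You would then pass it through the config, e.g. `config={{ renderPrompt: (props) => <MyPromptInput {...props} /> }}`.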
## Usage

### Basic setup

```tsx
"use client";

import { EditorRoot, AIPlugin, InputRulePlugin, SlashMenu } from "@blokhaus/core";
import type { AIProvider } from "@blokhaus/core";

const myProvider: AIProvider = {
  name: "My AI",
  generate: async ({ prompt, context }) => {
    // Proxy through your own backend so provider API keys stay server-side.
    const response = await fetch("/api/ai/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt, context }),
    });
    if (!response.ok) throw new Error(`AI request failed: ${response.status}`);

    // Adapt the raw byte stream into the ReadableStream<string> the plugin expects.
    const reader = response.body!.getReader();
    const decoder = new TextDecoder();
    return new ReadableStream<string>({
      async pull(controller) {
        const { done, value } = await reader.read();
        if (done) {
          controller.close();
          return;
        }
        controller.enqueue(decoder.decode(value, { stream: true }));
      },
    });
  },
};

export default function EditorPage() {
  return (
    <EditorRoot namespace="my-editor">
      <InputRulePlugin />
      <SlashMenu />
      <AIPlugin provider={myProvider} />
    </EditorRoot>
  );
}
```

### With full configuration
```tsx
<AIPlugin
  provider={myProvider}
  config={{
    generate: {
      systemPrompt: "You are a helpful writing assistant. Respond in Markdown.",
      maxTokens: 2048,
      temperature: 0.7,
    },
    labels: {
      header: "AI Assistant",
      streaming: "Writing...",
      accept: "Insert",
      discard: "Cancel",
      retry: "Try again",
    },
    retry: {
      maxRetries: 3,
    },
    contextWindowSize: 5,
    onAccept: (content) => {
      analytics.track("ai_content_accepted", { length: content.length });
    },
    onDiscard: () => {
      analytics.track("ai_content_discarded");
    },
    onError: (error) => {
      toast.error("AI generation failed. Please try again.");
      console.error("[AI Error]", error);
    },
  }}
/>
```

## Registered commands
| Command | Payload | Description |
|---|---|---|
| `OPEN_AI_PROMPT_COMMAND` | `void` | Opens the inline AI prompt input at the current cursor position. |
| `INSERT_AI_PREVIEW_COMMAND` | `{ prompt: string; context: string }` | Inserts an `AIPreviewNode` and begins streaming. |
### Dispatching programmatically

```ts
import { OPEN_AI_PROMPT_COMMAND } from "@blokhaus/core";

// Open the AI prompt from a custom button
editor.dispatchCommand(OPEN_AI_PROMPT_COMMAND, undefined);
```
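You can also skip the prompt UI and start streaming directly with `INSERT_AI_PREVIEW_COMMAND`. The payload shape matches the table above; the sketch below assumes the command is exported from the same entry point as `OPEN_AI_PROMPT_COMMAND`. Building the `context` string by hand is unusual -- the plugin normally serializes it for you -- so treat this as illustrative rather than the typical path.

```ts
import { INSERT_AI_PREVIEW_COMMAND } from "@blokhaus/core";

// Insert an AIPreviewNode and begin streaming immediately.
// The context is supplied manually here for illustration; normally the
// plugin gathers and serializes the surrounding blocks itself.
editor.dispatchCommand(INSERT_AI_PREVIEW_COMMAND, {
  prompt: "Summarize the section above in two sentences.",
  context: "## Background\n\nOur Q3 numbers...",
});
```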
## Streaming flow

```
User types /ai --> OPEN_AI_PROMPT_COMMAND dispatched
        |
        v
Inline prompt input opens (Radix Popover)
        |
        v
User submits prompt
        |
        v
Read surrounding nodes (contextWindowSize blocks)
        |
        v
Serialize context to Markdown via serializeNodesToMarkdown()
        |
        v
Insert AIPreviewNode -- single editor.update()
        |
        v
AIPreviewNode mounts, calls provider.generate()
        |
        v
Tokens stream in --> AIPreviewNode local React state updates
        |            (NO editor.update() calls during streaming)
        v
Stream completes --> "Accept" / "Discard" buttons appear
        |
        +--[Accept]---> Parse Markdown into Lexical nodes,
        |               replace AIPreviewNode -- single editor.update()
        |               (creates exactly 1 undo history entry)
        |
        +--[Discard]--> Remove AIPreviewNode -- single editor.update()
                        (AST unchanged from before insertion)
```

## Context serialization
Before any AI call, surrounding nodes are serialized to Markdown using `serializeNodesToMarkdown()`. Raw Lexical JSON is never sent to the model -- it is token-inefficient and the model does not understand it. Plain text is not used either, because it destroys structural semantics (headings, lists, links).
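For a sense of what this step looks like, here is a rough sketch of the gathering logic, written against Lexical's selection API. The exact signature of `serializeNodesToMarkdown` is not documented in this reference, so the call below (passing an array of top-level nodes) is an assumption.

```ts
import { $getSelection, $isRangeSelection, type LexicalNode } from "lexical";
import { serializeNodesToMarkdown } from "@blokhaus/core"; // assumed export

const contextWindowSize = 3; // mirrors the config default

// Collect up to `contextWindowSize` top-level blocks preceding the cursor,
// then serialize them to Markdown for the model.
editor.getEditorState().read(() => {
  const selection = $getSelection();
  if (!$isRangeSelection(selection)) return;

  const anchorBlock = selection.anchor.getNode().getTopLevelElementOrThrow();
  const blocks: LexicalNode[] = [];
  let node = anchorBlock.getPreviousSibling();
  while (node !== null && blocks.length < contextWindowSize) {
    blocks.unshift(node); // keep document order
    node = node.getPreviousSibling();
  }

  const context = serializeNodesToMarkdown(blocks); // assumed signature
  // `context` is what ends up in provider.generate({ prompt, context }).
});
```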
The `AIPreviewNode` must never call `editor.update()` during streaming. All streamed tokens are held in local React state (`useState`). This protects the undo history from being polluted with intermediate streaming states. After accepting AI content, a single Cmd+Z undo removes all accepted content in one step -- not token by token.
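This guarantee is straightforward to verify, for example in an integration test. A minimal sketch, assuming the editor has Lexical's standard history plugin attached:

```ts
import { UNDO_COMMAND } from "lexical";

// Accepting AI content creates exactly one history entry, so a single
// undo dispatch removes the entire accepted block -- not token by token.
editor.dispatchCommand(UNDO_COMMAND, undefined);
```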
## Related
- AI Integration Guide -- End-to-end tutorial with Vertex AI setup
- Serialization Guide -- How nodes are serialized to Markdown