# SDK · Adapters
Plug Foundry into your existing stack in three lines.
Foundry ships three first-class adapters. Each one is a thin translation layer over the same OpenAI-compatible HTTP proxy — so the inference path, receipts, and on-chain revenue routing are identical regardless of which adapter you choose.
## Vercel AI SDK
Implements the `LanguageModelV1` interface. Works with `generateText`, `streamText`, and `generateObject`.
```ts
import { generateText } from "ai";
import { foundry } from "@foundryprotocol/sdk/adapters/vercel-ai";

const { text } = await generateText({
  model: foundry("ingot:0x8e2af4a…"),
  prompt: "Translate to Konkani: hello, how are you?",
});

console.log(text);
```

### Streaming
```ts
import { streamText } from "ai";
import { foundry } from "@foundryprotocol/sdk/adapters/vercel-ai";

const result = await streamText({
  model: foundry("ingot:0x8e2af4a…"),
  prompt: "Stream a haiku about co-owned models.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

## LangChain
Implements a `BaseChatModel`-compatible class. Plugs into LCEL chains, agents, and RAG pipelines.
```ts
import { FoundryChat } from "@foundryprotocol/sdk/adapters/langchain";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const llm = new FoundryChat({
  ingotId: "ingot:0x8e2af4a…",
  temperature: 0.6,
});

const res = await llm.invoke([
  new SystemMessage("You are a Konkani translation assistant."),
  new HumanMessage("Translate: where is the train station?"),
]);

console.log(res.content);
console.log(res.additional_kwargs.foundry.receipt);
```

### LangChain is an optional peer dependency
We declare `@langchain/core` as an optional peer in `package.json`. If you don't install it, only the adapter module is unavailable — the rest of the SDK works identically.
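A minimal sketch of how a consumer can guard against the missing peer at runtime — `tryImport` is an illustrative helper, not part of the SDK:

```typescript
// Sketch: guard an optional peer dependency with a dynamic import.
// Swap the specifier for "@foundryprotocol/sdk/adapters/langchain"
// in a real app; a missing peer surfaces as a rejected import.
async function tryImport<T = unknown>(specifier: string): Promise<T | null> {
  try {
    return (await import(specifier)) as T;
  } catch {
    return null; // peer dependency (or the module itself) not installed
  }
}

const adapter = await tryImport("@foundryprotocol/sdk/adapters/langchain");
if (adapter === null) {
  console.log("LangChain adapter unavailable — install @langchain/core to enable it");
}
```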
## OpenAI-compatible HTTP
Any tool that speaks the OpenAI API can call a Foundry Ingot. Point the base URL at `api.foundryprotocol.xyz/v1` and pass the Ingot ID as `x-foundry-ingot-id`.
```sh
curl https://api.foundryprotocol.xyz/v1/chat/completions \
  -H "content-type: application/json" \
  -H "x-foundry-ingot-id: 0x8e2af4a000000000000000000000000000000001" \
  -d '{
    "messages": [
      { "role": "user", "content": "Translate to Konkani: hello" }
    ],
    "stream": false
  }'
```

OpenAI's own SDK works out of the box:
```ts
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.foundryprotocol.xyz/v1",
  apiKey: "not-required-but-the-sdk-insists",
  defaultHeaders: {
    "x-foundry-ingot-id": "0x8e2af4a000000000000000000000000000000001",
  },
});

const res = await client.chat.completions.create({
  model: "ingot:0x8e2af4a…",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(res.choices[0].message.content);
```

### Streaming
All three adapters stream tokens via Server-Sent Events in the OpenAI delta format. The final frame includes a `foundry` block with the inference + revenue tx hashes.
```text
data: {
  "id": "chatcmpl-foundry-…",
  "object": "chat.completion.chunk",
  "choices": [{ "index": 0, "delta": {}, "finish_reason": "stop" }],
  "foundry": {
    "ingotId": "0x8e2af4a…",
    "inferenceTxHash": "0x4a7c…",
    "revenueTxHash": "0x6f12…"
  }
}

data: [DONE]
```

## Headers & receipts
Request headers:

| Header | Value | Notes |
| --- | --- | --- |
| `x-foundry-ingot-id` | `0x…` | required: which Ingot to call |
| `authorization` | `Bearer …` | optional: integrator API key for rate limiting |
| `content-type` | `application/json` | |

Response headers:

| Header | Value | Notes |
| --- | --- | --- |
| `x-foundry-ingot-id` | `0x…` | echo of the requested Ingot |
| `x-foundry-stub` | `1` | present on stub responses (Sprint 2/3); absent in prod |
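The request headers above can be assembled by a small helper — `buildFoundryHeaders` is an illustrative sketch, not an SDK export:

```typescript
// Sketch: build the request headers for a raw HTTP call to a Foundry Ingot.
function buildFoundryHeaders(
  ingotId: string,
  apiKey?: string,
): Record<string, string> {
  const headers: Record<string, string> = {
    "content-type": "application/json",
    "x-foundry-ingot-id": ingotId, // required: which Ingot to call
  };
  if (apiKey) {
    headers["authorization"] = `Bearer ${apiKey}`; // optional integrator key
  }
  return headers;
}
```

Pass the result as `headers` to `fetch` or your HTTP client, and check `x-foundry-stub` on the response if you need to detect stub mode.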