Start here · Quickstart

Three lines. One inference call. Revenue on-chain.

By the end of this page, you will have called a Foundry Ingot, received a response, and seen the inference and revenue transaction hashes that prove the call paid out to the model's co-owners. Target time: three minutes on a clean machine.

1 · Install

The SDK has one runtime dependency (viem) and is fully tree-shakeable.

terminal
pnpm add @foundryprotocol/sdk
# or: npm install @foundryprotocol/sdk
# or: yarn add @foundryprotocol/sdk

2 · Call an Ingot

Three steps. The first creates the client; the second fires the inference call; the last logs the output and the receipt.

hello-foundry.ts
import { Foundry } from "@foundryprotocol/sdk";

const foundry = new Foundry({ contracts: "aristotle" });
const { output, receipt } = await foundry.inference.run(
  "ingot:0x8e2af4a000000000000000000000000000000001",
  { input: "Translate to Konkani: hello, how are you?" },
);

console.log(output);
console.log(receipt);
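The first argument is an Ingot ID: the `ingot:` prefix followed by a 20-byte hex address. If you want to sanity-check IDs before dispatching a call, a minimal sketch of a parser is below — the helper is ours for illustration, not an export of `@foundryprotocol/sdk`:

```typescript
// Hypothetical helper, not part of @foundryprotocol/sdk.
// Extracts the underlying hex address from an Ingot ID of the
// form `ingot:0x` + 40 hex characters, throwing on anything else.
function ingotAddress(id: string): `0x${string}` {
  const match = /^ingot:(0x[0-9a-fA-F]{40})$/.exec(id);
  if (!match) throw new Error(`not a valid Ingot ID: ${id}`);
  return match[1] as `0x${string}`;
}
```

Failing fast on a malformed ID gives you a clear local error instead of a rejected inference call.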

No wallet required to read

Inference calls go through the OpenAI-compatible HTTP proxy. You do not need a wallet to call an Ingot — revenue accrues to its co-owners regardless of who initiated the call. You only need a wallet when you want to claim revenue or contribute to a Forge.
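Because the proxy speaks the OpenAI chat-completions wire format, any OpenAI-compatible client can target it. A sketch of the request body, assuming the Ingot ID is passed as the `model` field — the endpoint URL is a placeholder, so check the Adapters page for the real one:

```typescript
// Sketch only: assumes the proxy accepts the Ingot ID as the
// OpenAI `model` field. The URL in the comment is a placeholder.
function buildProxyRequest(ingotId: string, input: string) {
  return {
    model: ingotId,
    messages: [{ role: "user", content: input }],
  };
}

const body = buildProxyRequest(
  "ingot:0x8e2af4a000000000000000000000000000000001",
  "Translate to Konkani: hello, how are you?",
);

// POST it as JSON to the proxy's /v1/chat/completions route:
// await fetch("<proxy-base-url>/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```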

3 · Read the receipt

The receipt makes the on-chain settlement visible:

receipt shape
{
  ingotId: "ingot:0x8e2af4a…",
  receipt: {
    requestId:        "chatcmpl-foundry-d8e2…",
    inferenceTxHash:  "0x4a7c…",      // 0G Compute dispatch
    revenueTxHash:    "0x6f12…",      // RevenueSplitter deposit
    latencyMs:        842,
  },
}

Both tx hashes land on 0G Aristotle within ~4 seconds. The Forge in Public dashboard reflects them in real time.
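If you persist receipts or act on the tx hashes, a light runtime check of the inner `receipt` object shown above can catch malformed responses early. A minimal sketch — the interface and guard are ours, not SDK exports:

```typescript
// Hypothetical type guard, not part of @foundryprotocol/sdk:
// narrows an unknown value to the inner receipt shape documented above.
interface InferenceReceipt {
  requestId: string;
  inferenceTxHash: string; // 0G Compute dispatch
  revenueTxHash: string;   // RevenueSplitter deposit
  latencyMs: number;
}

function isInferenceReceipt(value: unknown): value is InferenceReceipt {
  if (typeof value !== "object" || value === null) return false;
  const r = value as Record<string, unknown>;
  return (
    typeof r.requestId === "string" &&
    typeof r.inferenceTxHash === "string" &&
    r.inferenceTxHash.startsWith("0x") &&
    typeof r.revenueTxHash === "string" &&
    r.revenueTxHash.startsWith("0x") &&
    typeof r.latencyMs === "number"
  );
}
```

With the guard in place, TypeScript narrows the value inside an `if (isInferenceReceipt(x))` branch, so the hashes can be used without casts.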

Where to go next

  • Want a typed adapter for your existing AI stack? See Adapters — Vercel AI SDK, LangChain, and the OpenAI-compatible HTTP proxy.
  • Want to contribute data to a Forge and earn shares? See Build on Foundry.
  • Want to understand how shares are computed? See Verifiable attribution.