client.models
The models resource is the catalog surface — list every spec the
engine knows about, fetch one by slug, get a typed ModelHandle to
run predictions against.
client.models.list()
```python
handles = client.models.list()

for h in handles:
    print(h.slug, "active" if h.active else "", h.action_dim)
```

Returns list[ModelHandle]. Hits GET /v1/models once. Each handle
carries the spec metadata locally; subsequent attribute access is free.
Currently exactly one handle per server has active=True — the spec
that server is currently serving. Phase 1 supports single-model
deployments; multi-model routing lands in a later phase.
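Since exactly one handle per server is active, a small helper can pick it out of the list() result and fail loudly if the invariant is ever violated. This is a sketch, not part of the SDK; `active_handle` is a hypothetical name.

```python
def active_handle(handles):
    """Return the single handle with active=True from a list() result.

    Raises RuntimeError if zero or several handles are active, which
    would violate the single-model-deployment invariant described above.
    """
    active = [h for h in handles if h.active]
    if len(active) != 1:
        raise RuntimeError(f"expected exactly one active spec, found {len(active)}")
    return active[0]
```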
client.models.get(slug)
```python
model = client.models.get("dreamdojo-2b-gr1")
```

Returns ModelHandle. Hits GET /v1/models/{slug}. Raises
dream.ModelNotFoundError if the slug isn't in the catalog.
The slug is URL-encoded automatically; safe for unusual slugs.
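When a missing slug should not be fatal, the lookup can be wrapped so the error becomes a None. A hedged sketch: `get_model_or_none` is a hypothetical helper, and the exception class is passed in (in practice you'd pass dream.ModelNotFoundError).

```python
def get_model_or_none(models, slug, not_found_error):
    """Fetch a handle by slug, or return None when the catalog lacks it.

    `models` is the client.models resource; `not_found_error` is the
    exception the SDK raises for unknown slugs (dream.ModelNotFoundError).
    """
    try:
        return models.get(slug)
    except not_found_error:
        return None
```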
ModelHandle
A handle bundles the spec metadata with a reference to the parent client.
Identity
```python
model.slug    # "dreamdojo-2b-gr1"
model.active  # True if this server serves this spec
model.spec    # full ModelSpec dataclass (see below)
```

Flat accessors (most common path)
The spec has nested architectural knobs at model.spec.arch.*. The
flat accessors save the second .arch hop:
```python
model.action_dim  # 384
model.resolution  # (480, 640) — (H, W)
model.chunk_size  # 12
model.model_size  # "2B"
```

Methods
```python
model.predict(start_frame=..., actions=...)        # → Rollout
model.predict_batch(start_frame=..., actions=...)  # → BatchRollout
```

Both raise dream.ModelNotActiveError if the handle's slug isn't the
active spec on the server. See predict and predict_batch.
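A defensive pattern is to check the handle's local active flag before issuing a request, failing fast client-side instead of round-tripping into a ModelNotActiveError. A sketch under the assumption that model.active mirrors the server's runtime flag; `ensure_active` is a hypothetical helper, not an SDK method.

```python
def ensure_active(model):
    """Raise locally, with a clear message, if this handle isn't the
    spec the server is currently serving; otherwise return the handle
    so the call can be chained into predict()."""
    if not model.active:
        raise RuntimeError(
            f"{model.slug!r} is not the active spec on this server; "
            "re-fetch the catalog or point the client at the right deployment"
        )
    return model
```

Usage would be `ensure_active(model).predict(start_frame=..., actions=...)`.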
ModelSpec
The dataclass returned by the catalog endpoint.
```python
@dataclass
class ModelSpec:
    slug: str                            # "dreamdojo-2b-gr1"
    name: str                            # "DreamDojo 2B · GR-1"
    provider: str                        # "Dream Engines"
    upstream_provider: UpstreamProvider  # NVIDIA logo, HF org, …
    category: str                        # "Robotics"
    summary: str
    description: str
    intended_use: str
    training_data: str
    license: str
    links: dict[str, str]                # {"hf": "...", "paper": "..."}
    price_per_frame_usd: float           # 0.0005
    is_flagship: bool
    is_benchmarked: bool
    active: bool                         # per-server runtime flag
    arch: ModelSpecData                  # architectural knobs
    benchmark: Benchmark | None          # headline numbers if benchmarked
```

arch: ModelSpecData carries action_dim, chunk_size, resolution,
runner, default_num_steps, default_guidance, etc.
benchmark: Benchmark | None is populated when is_benchmarked is
true; it has speedup_x, psnr_db, ssim, lpips,
cost_per_rollout_usd, etc.
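Reading the benchmark block therefore needs a guard for unbenchmarked specs. A minimal sketch using the field names listed above; `benchmark_line` is a hypothetical helper, not part of the SDK.

```python
def benchmark_line(spec):
    """One-line summary of a spec's headline benchmark numbers, or
    None for specs where is_benchmarked is false / benchmark is None."""
    if not spec.is_benchmarked or spec.benchmark is None:
        return None
    b = spec.benchmark
    return f"{spec.slug}: {b.speedup_x}x faster, PSNR {b.psnr_db} dB, SSIM {b.ssim}"
```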
Static catalog access (no network)
If you want the catalog metadata without a network call, the SDK ships
no offline copy — client.models.list() is the canonical path, and the
data round-trips from the server's web/data/catalog.json on every
deploy. If you need offline access, save the response yourself:
```python
import json

with open("catalog.json", "w") as f:
    json.dump([h.spec.__dict__ for h in client.models.list()],
              f, default=str)
```

Lazy-property note
Client.models is itself lazy — the Models resource isn't
instantiated until first access. So dream.Client(...) is genuinely
free; the first network call only happens when you reach for .list(),
.get(), or anything else.
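The lazy-resource pattern described here can be sketched with functools.cached_property. This is illustrative only, not the SDK's actual internals; `LazyClient` and `ModelsResource` are stand-in names.

```python
from functools import cached_property


class ModelsResource:
    """Stand-in for the models resource; real list()/get() calls
    would go through the owning client."""
    def __init__(self, client):
        self._client = client


class LazyClient:
    """Stand-in for dream.Client: the constructor does no network or
    resource work, so construction is genuinely free."""
    def __init__(self, api_key: str):
        self.api_key = api_key

    @cached_property
    def models(self):
        # Built on first attribute access, then cached on the instance.
        return ModelsResource(self)
```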