Plug brickster.ai into your LLM.
brickster.ai's MCP gives Claude Desktop, Cursor, and any other Model Context Protocol client typed tools over the curated archive — semantic search across everything, plus recency-sorted reads of releases, news, videos, and the reading list. Use it hosted (no key, no install) or self-hosted via @brickster/mcp-server.
Install
Two ways to run it. Drop the relevant snippet below into your client's MCP config and restart the client. Per-client instructions live in the MCP docs.
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Hosted
Recommended. Point your client at the HTTP endpoint. No install, no key, nothing to keep updated.
{
"mcpServers": {
"brickster": {
"type": "http",
"url": "https://brickster.ai/api/mcp"
}
}
}
Works in clients that support remote MCP transports natively (Claude Desktop's custom connectors, Cursor, etc.). Stdio-only clients can wrap it via npx mcp-remote https://brickster.ai/api/mcp.
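For a stdio-only client, the mcp-remote wrapper mentioned above translates into a config along these lines (a sketch — the server name "brickster" is arbitrary, and the exact config shape depends on your client):

```json
{
  "mcpServers": {
    "brickster": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://brickster.ai/api/mcp"]
    }
  }
}
```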
Self-hosted
Run @brickster/mcp-server locally via npx. Pick this if you want to swap in your own Gemini key, modify the tools, or stay fully off our infra.
{
"mcpServers": {
"brickster": {
"command": "npx",
"args": ["-y", "@brickster/mcp-server"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key"
}
}
}
}
search_archive embeds your query before similarity search and needs a Gemini key — get a free one at aistudio.google.com/apikey. The other four tools work without any key.
Tools
Five tools, all read-only. Backed by the same Postgres archive the on-site assistant uses, served via Supabase's public-anon endpoint with row-level security on writes.
search_archive(query, limit?, source_types?) — Semantic search over the entire archive: videos, releases, news, projects, community Q&A. Returns ranked matches with title, URL, source kind, and excerpt.
list_recent_releases(days?, limit?, repo?) — Recent GitHub releases for repos in the Databricks ecosystem (databricks-sdk-*, dbt-databricks, mlflow, delta-io/delta, the Terraform provider, etc.).
list_recent_news(days?, limit?, source?) — Recent articles from the Databricks blog and ecosystem feeds, with LLM summaries when available.
list_recent_videos(days?, limit?, channel_handle?) — Recent YouTube uploads from the official Databricks channel and a curated set of community creators.
recommend_books(topic?) — Curated reading list for the Databricks ecosystem, optionally filtered by topic tag.
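Under the hood, each of these is an ordinary MCP tools/call JSON-RPC request, which your client issues for you. As an illustration, a filtered search would look roughly like this on the wire (the request id and transport framing vary by client):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_archive",
    "arguments": { "query": "Photon vectorisation", "source_types": ["video"], "limit": 5 }
  }
}
```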
What you can ask
“What's new in MLflow this week?”
→ list_recent_releases(repo: "mlflow/mlflow", days: 7)
“Find videos that explain Photon's vectorisation.”
→ search_archive(query: "Photon vectorisation", source_types: ["video"])
“Recommend a book for getting started with Delta Lake.”
→ recommend_books(topic: "Delta Lake")
Resources
Same retrieval surface as the on-site AI assistant. The hosted endpoint applies a per-IP rate limit to search_archive (15/hour, 40/day) — generous for normal use; you'll only hit it if you're scripting a flood. The other four tools aren't rate-limited, and the self-hosted server has no limits at all.