From Idea to Moderation at Scale, Simplified
Sutro takes the pain out of testing and scaling the LLM batch moderation jobs that protect your platform and users.
import sutro as so
from pydantic import BaseModel

class ReviewClassifier(BaseModel):
    sentiment: str
user_reviews = 'user_reviews.csv'
system_prompt = 'Classify the review as positive, neutral, or negative.'
results = so.infer(user_reviews, system_prompt, output_schema=ReviewClassifier)
Progress: 1% | 1/514,879 | Input tokens processed: 0.41m, Tokens generated: 591k
█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
Rapidly Prototype
Start small and iterate fast on your content moderation workflows. Accelerate experimentation by testing prompts and models on Sutro before committing to large jobs.
Scale
Scale your moderation workflows to process billions of tokens in hours, not days, with no infrastructure headaches or exploding costs.
Integrate
Seamlessly connect Sutro to your existing data workflows. Sutro's Python SDK is compatible with popular data orchestration tools, like Airflow and Dagster.
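As one illustration, a batch moderation call can slot into an orchestrated pipeline as an ordinary Python task. The sketch below is hypothetical and framework-agnostic: the `run_moderation_batch` function and the injected `infer` callable are illustrative stand-ins (in production `infer` might wrap `so.infer`; in tests, a stub), so the same body could serve as an Airflow `PythonOperator` callable or the core of a Dagster asset.

```python
# Hypothetical pipeline step. The `infer` callable is injected so the task
# can wrap a Sutro batch call in production and a stub in local tests.
def run_moderation_batch(
    reviews,
    infer,
    system_prompt="Classify the review as positive, neutral, or negative.",
):
    """Classify a batch of reviews and group them by predicted label."""
    labels = infer(reviews, system_prompt)
    grouped = {}
    for review, label in zip(reviews, labels):
        grouped.setdefault(label, []).append(review)
    return grouped
```

In Airflow, this function could be passed as a `python_callable`; in Dagster, it could be called from inside an asset definition, keeping the batch-inference step testable in isolation.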

Scale Effortlessly
Confidently handle millions of user-generated posts, comments, and reviews at a time without the pain of managing infrastructure.
Get moderation results faster and reduce costs by parallelizing your LLM classification calls through Sutro.
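Parallelizing classification boils down to splitting a large corpus into independently processable chunks and fanning them out. A minimal, framework-free sketch of that pattern, with illustrative names and defaults (this is not Sutro's internal implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Yield successive fixed-size slices of a document list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def classify_in_parallel(reviews, infer, chunk_size=1000, max_workers=8):
    """Fan a corpus out across worker threads, one infer call per chunk.

    `infer` takes a list of documents and returns one label per document;
    `pool.map` preserves chunk order, so labels line up with the input.
    """
    chunks = list(chunked(reviews, chunk_size))
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_chunk_labels = pool.map(infer, chunks)
    return [label for labels in per_chunk_labels for label in labels]
```

A managed batch service moves this fan-out (and the accompanying retry, rate-limit, and infrastructure concerns) off your plate, but the chunk-then-gather shape of the work is the same.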

Ship Faster
Automatically organize your content into meaningful categories without involving your ML engineer. Shorten development cycles by getting feedback from large batch jobs in minutes.
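Since results conform to the Pydantic model passed as `output_schema`, tightening that model is how you define your categories. For example, a `Literal` field constrains outputs to a fixed label set; the category names below are illustrative:

```python
from typing import Literal

from pydantic import BaseModel

class ModerationLabel(BaseModel):
    # Only these labels validate; anything else raises a ValidationError.
    category: Literal["safe", "spam", "harassment"]
```

Swapping the label set, or adding fields like a confidence score or rationale, changes the shape of every result in the batch without any downstream parsing code.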