Batch LLM Inference is better with Sutro

Run LLM Batch Jobs in Hours, Not Days, at a Fraction of the Cost.

RAG data preparation

Build Better RAG Systems, Faster

Transform massive corpuses of unstructured text into clean, analytics-ready datasets and vector representations to improve your RAG system's performance, all at a fraction of the cost and complexity.

Generate a question/answer pair for the following chunk of vLLM documentation

Inputs

Outputs

Intro to vLLM

vLLM is a fast and easy-to-use library for LLM inference and serving. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.

Loading Models

vLLM models can be loaded in two different ways. To pass a loaded model into the vLLM framework for further processing and inference without reloading it from disk or a model hub, first start by generating…


Using the OpenAI Server

Run:ai Model Streamer is a library for reading tensors concurrently while streaming them to GPU memory. Further reading can be found in the Run:ai Model Streamer documentation.

vLLM supports loading weights in Safetensors format using the Run:ai Model Streamer. You first need to install the vLLM RunAI optional dependency:

Question: Is vLLM compatible with all open-source models? ...

Question: How do I load a custom model from HuggingFace? ...

Question: Can I use the OpenAI compatible server to replace calls...

+128 more…
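
The workflow shown above maps naturally onto Sutro's Python SDK. Below is a minimal sketch of how such a question/answer generation job might look, modeled on the review-classification example later on this page; the QAPair schema and the chunk file name are illustrative assumptions rather than part of the demo.

import sutro as so
from pydantic import BaseModel

# Illustrative output schema for one generated question/answer pair.
class QAPair(BaseModel):
    question: str
    answer: str

# Hypothetical file containing the vLLM documentation chunks shown above.
doc_chunks = 'vllm_doc_chunks.csv'

system_prompt = 'Generate a question/answer pair for the following chunk of vLLM documentation.'

# Same call shape as the review-classification example further down the page.
qa_pairs = so.infer(doc_chunks, system_prompt, output_schema=QAPair)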

From Raw Documents to RAG-Ready Data, Simplified

Sutro takes the pain out of preparing data for your Retrieval-Augmented Generation systems. Iterate on your strategy, scale to production, and integrate with your existing stack.

import sutro as so
from pydantic import BaseModel

# Output schema: each review gets a sentiment label.
class ReviewClassifier(BaseModel):
    sentiment: str

# Reviews to classify (the demo also shows User_reviews-1.csv, -2.csv, -3.csv).
user_reviews = 'User_reviews.csv'

system_prompt = 'Classify the review as positive, neutral, or negative.'

results = so.infer(user_reviews, system_prompt, output_schema=ReviewClassifier)

# Progress: 1% | 1/514,879 | Input tokens processed: 0.41m, Tokens generated: 591k

Prototype

Start small and iterate fast on your data preparation workflows. Accelerate experiments by testing different chunking, extraction, or summarization strategies on Sutro before committing to large jobs.
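
As a sketch of what prototyping might look like in practice, the snippet below runs a summarization prompt over a small sample of chunks before scaling up; the chunking helper, the list input to so.infer, and the file name are assumptions for illustration, not documented API.

import sutro as so
from pydantic import BaseModel

# Illustrative output schema for a chunk summary.
class ChunkSummary(BaseModel):
    summary: str

# Hypothetical helper: split a document into fixed-size character chunks.
def chunk_text(text: str, size: int = 1000) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

with open('vllm_docs.txt') as f:
    chunks = chunk_text(f.read())

# Prototype on the first 20 chunks before committing to the full corpus.
results = so.infer(
    chunks[:20],
    'Summarize this documentation chunk in two sentences.',
    output_schema=ChunkSummary,
)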

Scale

Scale your LLM workflows to process billions of tokens in hours, not days. Get your data ready for your vector database with no infrastructure headaches or exploding costs.

Integrate

Seamlessly connect Sutro to your existing LLM workflows. Sutro's Python SDK is compatible with popular data orchestration tools, like Airflow and Dagster.
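
As an illustration of that kind of integration, here is a minimal sketch of calling so.infer from an Airflow task using standard TaskFlow decorators; the DAG wiring and file name are illustrative assumptions, and the Sutro call simply mirrors the example above.

import pendulum
import sutro as so
from airflow.decorators import dag, task
from pydantic import BaseModel

class ReviewClassifier(BaseModel):
    sentiment: str

@dag(schedule='@daily', start_date=pendulum.datetime(2024, 1, 1), catchup=False)
def classify_reviews():
    @task
    def run_batch_job():
        # Same Sutro call as the example above; the file name is illustrative.
        so.infer(
            'User_reviews.csv',
            'Classify the review as positive, neutral, or negative.',
            output_schema=ReviewClassifier,
        )

    run_batch_job()

classify_reviews()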

Improve RAG Performance

Improve your RAG retrieval performance by generating high-quality, diverse, and representative synthetic data or by easily converting large corpuses of free-form text into vector representations for semantic search.

Reduce Costs by 10x

Get results faster and reduce costs by 10x or more. Prepare millions of documents for your RAG pipeline by parallelizing LLM calls through Sutro.

Scale Effortlessly

Confidently process millions of documents and billions of tokens at a time without the pain of managing infrastructure. Scale your data preparation as your needs grow.

Embedding Generation

Convert large corpuses of free-form text into vector representations to power semantic search and retrieval.

Structured Extraction

Transform unstructured data into structured insights that drive business decisions.

Document Summarization

Condense large documents into concise summaries to quickly extract key information.

Website Data Extraction

Crawl millions of web pages and extract analytics-ready datasets for your company or your customers.

Synthetic Data Generation

Generate high-quality, diverse, and representative synthetic data to improve model performance.

LLM Performance Evaluation

Benchmark your LLM outputs to continuously improve workflows, agents and assistants.

FAQ

What is Sutro?

What kinds of tasks can Sutro perform?

How does Sutro reduce costs?

How do I use Sutro?

Does Sutro integrate with other data tools?

What Will You Scale with Sutro?