
Evaluation Quick Start

This quick start will get you up and running with our evaluation SDK and Experiments UI.

1. Install LangSmith

pip install -U langsmith

2. Create an API key

To create an API key, head to the Settings page, then click Create API Key.

3. Set up your environment

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
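
If you prefer to configure these in Python rather than in your shell, you can set the same variables with os.environ before creating the client. A minimal sketch (keep the placeholder until you substitute your actual key):

import os

# Equivalent to the shell exports above; set these before instantiating the Client
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"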

4. Run your evaluation

from langsmith import evaluate, Client

# 1. Create and/or select your dataset
client = Client()
dataset = client.clone_public_dataset(
    "https://smith.langchain.com/public/a63525f9-bdf2-4512-83e3-077dc9417f96/d"
)

# 2. Define an evaluator
def is_concise(outputs: dict, reference_outputs: dict) -> bool:
    return len(outputs["answer"]) < (3 * len(reference_outputs["answer"]))

# 3. Define the interface to your app
def chatbot(inputs: dict) -> dict:
    return {"answer": inputs["question"] + " is a good question. I don't know the answer."}

# 4. Run an evaluation
evaluate(
    chatbot,
    data=dataset.name,
    evaluators=[is_concise],
    experiment_prefix="my first experiment",
)
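
evaluate() also returns a results object you can work with in code, in addition to the UI link it prints. A minimal sketch, assuming a recent langsmith SDK where the results object exposes to_pandas() and that pandas is installed:

results = evaluate(
    chatbot,
    data=dataset.name,
    evaluators=[is_concise],
    experiment_prefix="my first experiment",
)

# Inspect per-example outputs and evaluator scores as a DataFrame
print(results.to_pandas().head())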

5. View Experiments UI

Click the link printed by your evaluation run to open the LangSmith Experiments UI and explore your results.

