# AI Model Evaluation Tools

The Evaluation Playground lets users test different AI models and see which one produces the best results for their use case. It helps answer the question, “Which AI model should I pick?” by allowing side-by-side comparisons of model outputs.

Users can run a single prompt across multiple AI models, or test a set of prompts across several models, to evaluate performance and consistency.

---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.wonderchat.io/setup-guides/adding-chatbot-workflows-1.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
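
As an illustration, here is a minimal sketch in Python of how an agent might issue such a query. The URL and the `ask` parameter come from this page; the `ask_docs` helper name is our own, and the response is assumed here to be a plain UTF-8 text body rather than any documented format.

```python
import urllib.parse
import urllib.request

def ask_docs(question: str) -> str:
    """Query this documentation page with a natural-language question."""
    base_url = "https://docs.wonderchat.io/setup-guides/adding-chatbot-workflows-1.md"
    # URL-encode the question so spaces and punctuation are safe in the query string.
    url = f"{base_url}?{urllib.parse.urlencode({'ask': question})}"
    # Assumption: the endpoint returns the answer as a plain UTF-8 text body.
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

answer = ask_docs("How do I compare two models in the Evaluation Playground?")
print(answer)
```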

Use this mechanism when:

- the answer is not explicitly present on the current page,
- you need clarification or additional context, or
- you want to retrieve related documentation sections.
