Evaluation
vespa.evaluation
VespaEvaluatorBase(queries, relevant_docs, vespa_query_fn, app, name='', id_field='', write_csv=False, csv_dir=None)
Bases: ABC
Abstract base class for Vespa evaluators providing initialization and interface.
run()
abstractmethod
Abstract method to be implemented by subclasses.
__call__()
Make the evaluator callable.
VespaEvaluator(queries, relevant_docs, vespa_query_fn, app, name='', id_field='', accuracy_at_k=[1, 3, 5, 10], precision_recall_at_k=[1, 3, 5, 10], mrr_at_k=[10], ndcg_at_k=[10], map_at_k=[100], write_csv=False, csv_dir=None)
Bases: VespaEvaluatorBase
Evaluate retrieval performance on a Vespa application.
This class:
- Iterates over queries and issues them against your Vespa application.
- Retrieves the top-k documents per query (where k is the maximum k across your configured IR metrics).
- Compares the retrieved documents with a set of relevant document ids.
- Computes IR metrics: Accuracy@k, Precision@k, Recall@k, MRR@k, NDCG@k, MAP@k.
- Logs Vespa search times for each query.
- Logs/returns these metrics.
- Optionally writes out to CSV.
Note: The 'id_field' needs to be marked as an attribute in your Vespa schema, so filtering can be done on it.
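For reference, such a field can be declared as an attribute with pyvespa's package API roughly as follows (the field name here is an assumption for illustration):

```python
from vespa.package import Field

# Hypothetical document-id field; indexing it as an attribute lets the
# evaluator filter and recall on it.
id_field = Field(
    name="id",
    type="string",
    indexing=["attribute", "summary"],
)
```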
Example usage
```python
from vespa.application import Vespa
from vespa.evaluation import VespaEvaluator

queries = {
    "q1": "What is the best GPU for gaming?",
    "q2": "How to bake sourdough bread?",
    # ...
}
relevant_docs = {
    "q1": {"d12", "d99"},
    "q2": {"d101"},
    # ...
}
# relevant_docs can also be a dict of query_id => single relevant doc_id
# relevant_docs = {
#     "q1": "d12",
#     "q2": "d101",
#     # ...
# }
# Or, relevant_docs can be a dict of query_id => map of doc_id => relevance
# relevant_docs = {
#     "q1": {"d12": 1, "d99": 0.1},
#     "q2": {"d101": 0.01},
#     # ...
# }
# Note that for non-binary relevance, the relevance values should be in [0, 1],
# and that only the nDCG metric will be computed.

def my_vespa_query_fn(query_text: str, top_k: int) -> dict:
    return {
        "yql": 'select * from sources * where userInput("' + query_text + '");',
        "hits": top_k,
        "ranking": "your_ranking_profile",
    }

app = Vespa(url="http://localhost", port=8080)

evaluator = VespaEvaluator(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    name="test-run",
    accuracy_at_k=[1, 3, 5],
    precision_recall_at_k=[1, 3, 5],
    mrr_at_k=[10],
    ndcg_at_k=[10],
    map_at_k=[100],
    write_csv=True,
)

results = evaluator()
print("Primary metric:", evaluator.primary_metric)
print("All results:", results)
```
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| queries | Dict[str, str] | A dictionary where keys are query IDs and values are query strings. | required |
| relevant_docs | Union[Dict[str, Union[Set[str], Dict[str, float]]], Dict[str, str]] | A dictionary mapping query IDs to their relevant document IDs. Can be a set of doc IDs for binary relevance, a dict of doc_id to relevance score (float between 0 and 1) for graded relevance, or a single doc_id string. | required |
| vespa_query_fn | Callable[[str, int, Optional[str]], dict] | A function that takes a query string, the number of hits to retrieve (top_k), and an optional query_id, and returns a Vespa query body dictionary. | required |
| app | Vespa | An instance of the Vespa application. | required |
| name | str | A name for this evaluation run. Defaults to "". | '' |
| id_field | str | The field name in the Vespa hit that contains the document ID. If empty, it tries to infer the ID from the 'id' field or 'fields.id'. Defaults to "". | '' |
| accuracy_at_k | List[int] | List of k values for which to compute Accuracy@k. Defaults to [1, 3, 5, 10]. | [1, 3, 5, 10] |
| precision_recall_at_k | List[int] | List of k values for which to compute Precision@k and Recall@k. Defaults to [1, 3, 5, 10]. | [1, 3, 5, 10] |
| mrr_at_k | List[int] | List of k values for which to compute MRR@k. Defaults to [10]. | [10] |
| ndcg_at_k | List[int] | List of k values for which to compute NDCG@k. Defaults to [10]. | [10] |
| map_at_k | List[int] | List of k values for which to compute MAP@k. Defaults to [100]. | [100] |
| write_csv | bool | Whether to write the evaluation results to a CSV file. Defaults to False. | False |
| csv_dir | Optional[str] | Directory to save the CSV file. Defaults to None (current directory). | None |
run()
Executes the evaluation by running queries and computing IR metrics.
This method:
1. Executes all configured queries against the Vespa application.
2. Collects search results and timing information.
3. Computes the configured IR metrics (Accuracy@k, Precision@k, Recall@k, MRR@k, NDCG@k, MAP@k).
4. Records search timing statistics.
5. Logs results and optionally writes them to CSV.
Returns:
| Name | Type | Description |
|---|---|---|
| dict | Dict[str, float] | A dictionary containing IR metrics with names like "accuracy@k", "precision@k", etc., and search time statistics ("searchtime_avg", "searchtime_q50", etc.). Metric values are floats between 0 and 1; timing values are in seconds. |
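As a quick illustration, the returned dictionary can be consumed like this (the key names shown follow the naming pattern described above but are assumptions, not guaranteed exact keys):

```python
results = evaluator.run()  # equivalent to calling evaluator()

# Assumed key names following the "<metric>@<k>" pattern described above:
print("nDCG@10:", results.get("ndcg@10"))
print("MAP@100:", results.get("map@100"))
print("Average search time (s):", results.get("searchtime_avg"))
```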
VespaMatchEvaluator(queries, relevant_docs, vespa_query_fn, app, name='', id_field='', rank_profile='unranked', write_csv=False, write_verbose=False, csv_dir=None)
Bases: VespaEvaluatorBase
Evaluate recall in the match-phase over a set of queries for a Vespa application.
This class:
- Iterates over queries and issues them against your Vespa application.
- Sends one query with limit 0 to get the number of matched documents.
- Sends one query with the recall parameter set according to the provided relevant documents.
- Compares the retrieved documents with a set of relevant document ids.
- Logs Vespa search times for each query.
- Logs/returns these metrics.
- Optionally writes out to CSV.
Note: If you care about the speed of the evaluation run, it is recommended to use a rank profile without any first-phase (or second-phase) ranking. If you do so, make sure that the rank profile still defines the same inputs as the profile you would otherwise use. For example, if you want to evaluate a YQL query that includes the nearestNeighbor operator, your rank profile needs to define the corresponding query tensor input, and your Vespa query function must either provide the query tensor or define it as an input (e.g. 'input.query(embedding)=embed(@query)'). A sketch of such a match-only rank profile is shown after the example below.

Also note that the 'id_field' needs to be marked as an attribute in your Vespa schema so that filtering can be done on it.

Example usage:
```python
from vespa.application import Vespa
from vespa.evaluation import VespaMatchEvaluator

queries = {
    "q1": "What is the best GPU for gaming?",
    "q2": "How to bake sourdough bread?",
    # ...
}
relevant_docs = {
    "q1": {"d12", "d99"},
    "q2": {"d101"},
    # ...
}
# relevant_docs can also be a dict of query_id => single relevant doc_id
# relevant_docs = {
#     "q1": "d12",
#     "q2": "d101",
#     # ...
# }
# Note: graded relevance (query_id => map of doc_id => relevance score) is not
# supported for match-phase evaluation.

def my_vespa_query_fn(query_text: str, top_k: int) -> dict:
    return {
        "yql": 'select * from sources * where userInput("' + query_text + '");',
        "hits": top_k,
        "ranking": "your_ranking_profile",
    }

app = Vespa(url="http://localhost", port=8080)

evaluator = VespaMatchEvaluator(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    name="test-run",
    id_field="id",
    write_csv=True,
    write_verbose=True,
)

results = evaluator()
print("Primary metric:", evaluator.primary_metric)
print("All results:", results)
```
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| queries | Dict[str, str] | A dictionary where keys are query IDs and values are query strings. | required |
| relevant_docs | Union[Dict[str, Union[Set[str], Dict[str, float]]], Dict[str, str]] | A dictionary mapping query IDs to their relevant document IDs. Can be a set of doc IDs for binary relevance, or a single doc_id string. Graded relevance (dict of doc_id to relevance score) is not supported for match evaluation. | required |
| vespa_query_fn | Callable[[str, int, Optional[str]], dict] | A function that takes a query string, the number of hits to retrieve (top_k), and an optional query_id, and returns a Vespa query body dictionary. | required |
| app | Vespa | An instance of the Vespa application. | required |
| name | str | A name for this evaluation run. Defaults to "". | '' |
| id_field | str | The field name in the Vespa hit that contains the document ID. If empty, it tries to infer the ID from the 'id' field or 'fields.id'. Defaults to "". | '' |
| write_csv | bool | Whether to write the summary evaluation results to a CSV file. Defaults to False. | False |
| write_verbose | bool | Whether to write detailed query-level results to a separate CSV file. Defaults to False. | False |
| csv_dir | Optional[str] | Directory to save the CSV files. Defaults to None (current directory). | None |
run()
Executes the match-phase recall evaluation.
This method:
1. Sends a query with limit 0 to get the number of matched documents.
2. Sends a recall query with the relevant documents.
3. Computes recall metrics and match statistics.
4. Logs results and optionally writes them to CSV.
Returns:
| Name | Type | Description |
|---|---|---|
| dict | Dict[str, float] | A dictionary containing recall metrics, match statistics, and search time statistics. |
mean(values)
Compute the mean of a list of numbers without using numpy.
percentile(values, p)
Compute the p-th percentile of a list of values (0 <= p <= 100). This approximates numpy.percentile's behavior.
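For intuition, a linearly interpolated percentile can be sketched as follows (an assumed approximation of the behavior, not necessarily the exact implementation):

```python
def percentile_sketch(values: list, p: float) -> float:
    """Approximate numpy.percentile with linear interpolation between ranks."""
    if not values:
        return 0.0  # assumption: the real helper may handle empty input differently
    ordered = sorted(values)
    rank = (p / 100.0) * (len(ordered) - 1)
    lower = int(rank)
    upper = min(lower + 1, len(ordered) - 1)
    fraction = rank - lower
    return ordered[lower] + (ordered[upper] - ordered[lower]) * fraction
```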
validate_queries(queries)
Validate and normalize queries. Converts query IDs to strings if they are ints.
validate_qrels(qrels)
Validate and normalize qrels. Converts query IDs to strings if they are ints.
validate_vespa_query_fn(fn)
Validates the vespa_query_fn function.
The function must be callable and accept either 2 or 3 parameters:
- (query_text: str, top_k: int)
- or (query_text: str, top_k: int, query_id: Optional[str])
It must return a dictionary when called with test inputs.
Returns True if the function takes a query_id parameter, False otherwise.
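For example, a query function using the three-parameter form (the ranking profile name is a placeholder) might look like this:

```python
from typing import Optional

def my_vespa_query_fn(query_text: str, top_k: int, query_id: Optional[str] = None) -> dict:
    # query_id is optional and can be used for per-query logic such as logging.
    return {
        "yql": "select * from sources * where userQuery();",
        "query": query_text,
        "hits": top_k,
        "ranking": "your_ranking_profile",
    }
```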
filter_queries(queries, relevant_docs)
Filter out queries that have no relevant docs.
extract_doc_id_from_hit(hit, id_field)
Extract document ID from a Vespa hit.
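A rough sketch of what this lookup typically amounts to for a Vespa hit dictionary (the fallback order mirrors the id_field description above; the real implementation may parse the full Vespa document id differently):

```python
def extract_doc_id_sketch(hit: dict, id_field: str = "") -> str:
    fields = hit.get("fields", {})
    if id_field:
        # An explicit id_field is read directly from the hit's fields.
        return str(fields[id_field])
    # Otherwise fall back to 'fields.id' or the hit-level 'id'.
    doc_id = fields.get("id") or hit.get("id")
    return str(doc_id)
```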
calculate_searchtime_stats(searchtimes)
Calculate search time statistics.
execute_queries(app, query_bodies)
Execute queries and collect timing information. Returns the responses and a list of search times.
write_csv(metrics, searchtime_stats, csv_file, csv_dir, name)
Write metrics to CSV file.
log_metrics(name, metrics)
Log metrics with appropriate formatting.