Evaluation

vespa.evaluation

RandomHitsSamplingStrategy

Bases: Enum

Enum for different random hits sampling strategies.

  • RATIO: Sample random hits as a ratio of relevant docs (e.g., 1.0 = equal number, 2.0 = twice as many)
  • FIXED: Sample a fixed number of random hits per query
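
For intuition, a minimal sketch of how the two strategies translate into a number of random hits; the names and rounding below are illustrative, and the library exposes this logic via VespaFeatureCollector.calculate_random_hits_count (documented further down):

from typing import Optional

def random_hits_for_query(num_relevant: int, strategy: str, value: float,
                          max_per_query: Optional[int] = None) -> int:
    # RATIO: scale with the number of relevant docs (1.0 = equal, 2.0 = twice as many).
    # FIXED: a constant number of random hits per query.
    if strategy == "ratio":
        n = int(num_relevant * value)
        return min(n, max_per_query) if max_per_query is not None else n
    return int(value)

random_hits_for_query(10, "ratio", 2.0)                     # 20
random_hits_for_query(10, "ratio", 2.0, max_per_query=15)   # 15
random_hits_for_query(10, "fixed", 50)                      # 50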

VespaEvaluatorBase(queries, relevant_docs, vespa_query_fn, app, name='', id_field='', write_csv=False, csv_dir=None)

Bases: ABC

Abstract base class for Vespa evaluators providing initialization and interface.

run() abstractmethod

Abstract method to be implemented by subclasses.

__call__()

Make the evaluator callable.

VespaEvaluator(queries, relevant_docs, vespa_query_fn, app, name='', id_field='', accuracy_at_k=[1, 3, 5, 10], precision_recall_at_k=[1, 3, 5, 10], mrr_at_k=[10], ndcg_at_k=[10], map_at_k=[100], write_csv=False, csv_dir=None)

Bases: VespaEvaluatorBase

Evaluate retrieval performance on a Vespa application.

This class:

  • Iterates over queries and issues them against your Vespa application.
  • Retrieves top-k documents per query (with k = max of your IR metrics).
  • Compares the retrieved documents with a set of relevant document ids.
  • Computes IR metrics: Accuracy@k, Precision@k, Recall@k, MRR@k, NDCG@k, MAP@k.
  • Logs vespa search times for each query.
  • Logs/returns these metrics.
  • Optionally writes out to CSV.

Note: The 'id_field' needs to be marked as an attribute in your Vespa schema, so filtering can be done on it.

Example usage
from vespa.application import Vespa
from vespa.evaluation import VespaEvaluator

queries = {
    "q1": "What is the best GPU for gaming?",
    "q2": "How to bake sourdough bread?",
    # ...
}
relevant_docs = {
    "q1": {"d12", "d99"},
    "q2": {"d101"},
    # ...
}
# relevant_docs can also be a dict of query_id => single relevant doc_id
# relevant_docs = {
#     "q1": "d12",
#     "q2": "d101",
#     # ...
# }
# Or, relevant_docs can be a dict of query_id => map of doc_id => relevance
# relevant_docs = {
#     "q1": {"d12": 1, "d99": 0.1},
#     "q2": {"d101": 0.01},
#     # ...
# }
# Note that for non-binary relevance, the relevance values should be in [0, 1],
# and that only the nDCG metric will be computed.

def my_vespa_query_fn(query_text: str, top_k: int) -> dict:
    return {
        "yql": 'select * from sources * where userInput("' + query_text + '");',
        "hits": top_k,
        "ranking": "your_ranking_profile",
    }

app = Vespa(url="http://localhost", port=8080)

evaluator = VespaEvaluator(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    name="test-run",
    accuracy_at_k=[1, 3, 5],
    precision_recall_at_k=[1, 3, 5],
    mrr_at_k=[10],
    ndcg_at_k=[10],
    map_at_k=[100],
    write_csv=True
)

results = evaluator()
print("Primary metric:", evaluator.primary_metric)
print("All results:", results)

Parameters:

Name Type Description Default
queries Dict[str, str]

A dictionary where keys are query IDs and values are query strings.

required
relevant_docs Union[Dict[str, Union[Set[str], Dict[str, float]]], Dict[str, str]]

A dictionary mapping query IDs to their relevant document IDs. Can be a set of doc IDs for binary relevance, a dict of doc_id to relevance score (float between 0 and 1) for graded relevance, or a single doc_id string.

required
vespa_query_fn Callable[[str, int, Optional[str]], dict]

A function that takes a query string, the number of hits to retrieve (top_k), and an optional query_id, and returns a Vespa query body dictionary.

required
app Vespa

An instance of the Vespa application.

required
name str

A name for this evaluation run. Defaults to "".

''
id_field str

The field name in the Vespa hit that contains the document ID. If empty, it tries to infer the ID from the 'id' field or 'fields.id'. Defaults to "".

''
accuracy_at_k List[int]

List of k values for which to compute Accuracy@k. Defaults to [1, 3, 5, 10].

[1, 3, 5, 10]
precision_recall_at_k List[int]

List of k values for which to compute Precision@k and Recall@k. Defaults to [1, 3, 5, 10].

[1, 3, 5, 10]
mrr_at_k List[int]

List of k values for which to compute MRR@k. Defaults to [10].

[10]
ndcg_at_k List[int]

List of k values for which to compute NDCG@k. Defaults to [10].

[10]
map_at_k List[int]

List of k values for which to compute MAP@k. Defaults to [100].

[100]
write_csv bool

Whether to write the evaluation results to a CSV file. Defaults to False.

False
csv_dir Optional[str]

Directory to save the CSV file. Defaults to None (current directory).

None

run()

Executes the evaluation by running queries and computing IR metrics.

This method:

  1. Executes all configured queries against the Vespa application.
  2. Collects search results and timing information.
  3. Computes the configured IR metrics (Accuracy@k, Precision@k, Recall@k, MRR@k, NDCG@k, MAP@k).
  4. Records search timing statistics.
  5. Logs results and optionally writes them to CSV.

Returns:

Name Type Description
dict Dict[str, float]

A dictionary containing:

  • IR metrics with names like "accuracy@k", "precision@k", etc.
  • Search time statistics ("searchtime_avg", "searchtime_q50", etc.).

Metric values are floats between 0 and 1; timing values are in seconds.

Example
{
    "accuracy@1": 0.75,
    "ndcg@10": 0.68,
    "searchtime_avg": 0.0123,
    ...
}

VespaMatchEvaluator(queries, relevant_docs, vespa_query_fn, app, id_field, name='', rank_profile='unranked', write_csv=False, write_verbose=False, csv_dir=None)

Bases: VespaEvaluatorBase

Evaluate recall in the match-phase over a set of queries for a Vespa application.

This class:

  • Iterates over queries and issues them against your Vespa application.
  • Sends one query with limit 0 to get the number of matched documents.
  • Sends one query with recall-parameter set according to the provided relevant documents.
  • Compares the retrieved documents with a set of relevant document ids.
  • Logs vespa search times for each query.
  • Logs/returns these metrics.
  • Optionally writes out to CSV.

Note: If you care about the speed of the evaluation run, it is recommended to use a rank profile without any first-phase (or second-phase) ranking. If you do so, you need to make sure that this rank profile defines the same inputs. For example, if you want to evaluate a YQL query including the nearestNeighbor operator, your rank profile needs to define the corresponding input tensor. You must also either provide the query tensor or define it as an input (e.g. 'input.query(embedding)=embed(@query)') in your Vespa query function; a sketch of such a query function follows the example below. Also note that the 'id_field' needs to be marked as an attribute in your Vespa schema, so that filtering can be done on it.

Example usage:

from vespa.application import Vespa
from vespa.evaluation import VespaMatchEvaluator

queries = {
    "q1": "What is the best GPU for gaming?",
    "q2": "How to bake sourdough bread?",
    # ...
}
relevant_docs = {
    "q1": {"d12", "d99"},
    "q2": {"d101"},
    # ...
}
# relevant_docs can also be a dict of query_id => single relevant doc_id
# relevant_docs = {
#     "q1": "d12",
#     "q2": "d101",
#     # ...
# }
# Note: graded relevance (query_id => map of doc_id => relevance) is not
# supported by VespaMatchEvaluator; use binary relevance as shown above.

def my_vespa_query_fn(query_text: str, top_k: int) -> dict:
    return {
        "yql": 'select * from sources * where userInput("' + query_text + '");',
        "hits": top_k,
        "ranking": "your_ranking_profile",
    }

app = Vespa(url="http://localhost", port=8080)

evaluator = VespaMatchEvaluator(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    name="test-run",
    id_field="id",
    write_csv=True,
    write_verbose=True,
)

results = evaluator()
print("Primary metric:", evaluator.primary_metric)
print("All results:", results)

Parameters:

Name Type Description Default
queries Dict[str, str]

A dictionary where keys are query IDs and values are query strings.

required
relevant_docs Union[Dict[str, Union[Set[str], Dict[str, float]]], Dict[str, str]]

A dictionary mapping query IDs to their relevant document IDs. Can be a set of doc IDs for binary relevance, or a single doc_id string. Graded relevance (dict of doc_id to relevance score) is not supported for match evaluation.

required
vespa_query_fn Callable[[str, int, Optional[str]], dict]

A function that takes a query string, the number of hits to retrieve (top_k), and an optional query_id, and returns a Vespa query body dictionary.

required
app Vespa

An instance of the Vespa application.

required
name str

A name for this evaluation run. Defaults to "".

''
id_field str

The field name in the Vespa hit that contains the document ID. It must be marked as an attribute in your Vespa schema so that filtering can be done on it.

required
write_csv bool

Whether to write the summary evaluation results to a CSV file. Defaults to False.

False
write_verbose bool

Whether to write detailed query-level results to a separate CSV file. Defaults to False.

False
csv_dir Optional[str]

Directory to save the CSV files. Defaults to None (current directory).

None

create_grouping_filter(yql, id_field, relevant_ids) staticmethod

Create a grouping filter to append to a Vespa YQL query so that results are limited to the relevant documents. The appended clause has the form | all( group(id_field) filter(regex("<pattern>", id_field)) each(output(count())) ), where the regex pattern is built from relevant_ids.

Parameters:

  yql (str): The base YQL query string.
  id_field (str): The field name in the Vespa hit that contains the document ID.
  relevant_ids (list[str]): List of relevant document IDs to include in the filter.

Returns:

  str: The modified YQL query string with the grouping filter applied.
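
A hypothetical call, to illustrate how the method is used; the exact regex pattern in the appended clause is generated from relevant_ids and may differ:

yql = "select * from sources * where userQuery()"
filtered_yql = VespaMatchEvaluator.create_grouping_filter(
    yql, id_field="id", relevant_ids=["d12", "d99"]
)
# filtered_yql is the input YQL with a grouping clause appended, roughly:
#   ... | all( group(id) filter(regex("d12|d99", id)) each(output(count())) )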

extract_matched_ids(resp, id_field) staticmethod

Extract matched document IDs from Vespa query response hits.

Parameters:

  resp (VespaQueryResponse): The Vespa query response object.
  id_field (str): The field name in the Vespa hit that contains the document ID.

Returns:

  Set[str]: A set of matched document IDs.

run()

Executes the match-phase recall evaluation.

This method:

  1. Sends a grouping query to see which of the relevant documents were matched, and to get totalCount.
  2. Computes recall metrics and match statistics.
  3. Logs results and optionally writes them to CSV.

Returns:

Name Type Description
dict Dict[str, float]

A dictionary containing recall metrics, match statistics, and search time statistics.

Example
{
    "match_recall": 0.85,
    "total_relevant_docs": 150,
    "total_matched_relevant": 128,
    "avg_matched_per_query": 45.2,
    "searchtime_avg": 0.015,
    ...
}

VespaCollectorBase(queries, relevant_docs, vespa_query_fn, app, id_field, name='', csv_dir=None, random_hits_strategy=RandomHitsSamplingStrategy.RATIO, random_hits_value=1.0, max_random_hits_per_query=None, collect_matchfeatures=True, collect_rankfeatures=False, collect_summaryfeatures=False, write_csv=True)

Bases: ABC

Abstract base class for Vespa training data collectors providing initialization and interface.

Initialize the VespaFeatureCollector.

Parameters:

Name Type Description Default
queries Dict[str, str]

Dictionary mapping query IDs to query strings

required
relevant_docs Union[Dict[str, Union[Set[str], Dict[str, float]]], Dict[str, str]]

Dictionary mapping query IDs to relevant document IDs

required
vespa_query_fn Callable[[str, int, Optional[str]], dict]

Function to generate Vespa query bodies

required
app Vespa

Vespa application instance

required
id_field str

Field name containing document IDs in Vespa hits (must be defined as an attribute in the schema)

required
name str

Name for this collection run

''
csv_dir Optional[str]

Directory to save CSV files

None
random_hits_strategy Union[RandomHitsSamplingStrategy, str]

Strategy for sampling random hits - either "ratio" or "fixed" - RATIO: Sample random hits as a ratio of relevant docs - FIXED: Sample a fixed number of random hits per query

RATIO
random_hits_value Union[float, int]

Value for the sampling strategy - For RATIO: Ratio value (e.g., 1.0 = equal, 2.0 = twice as many random hits) - For FIXED: Fixed number of random hits per query

1.0
max_random_hits_per_query Optional[int]

Optional maximum limit on random hits per query (only applies when using RATIO strategy to prevent excessive sampling)

None
collect_matchfeatures bool

Whether to collect match features

True
collect_rankfeatures bool

Whether to collect rank features

False
collect_summaryfeatures bool

Whether to collect summary features

False
write_csv bool

Whether to write results to CSV file

True

collect() abstractmethod

Abstract method to be implemented by subclasses.

__call__()

Make the collector callable.

VespaFeatureCollector(queries, relevant_docs, vespa_query_fn, app, id_field, name='', csv_dir=None, random_hits_strategy=RandomHitsSamplingStrategy.RATIO, random_hits_value=1.0, max_random_hits_per_query=None, collect_matchfeatures=True, collect_rankfeatures=False, collect_summaryfeatures=False, write_csv=True)

Bases: VespaCollectorBase

Collects training data for retrieval tasks from a Vespa application.

This class:

  • Iterates over queries and issues them against your Vespa application.
  • Retrieves top-k documents per query.
  • Samples random hits based on the specified strategy.
  • Compiles a CSV file with query-document pairs and their relevance labels.

Important: If you want to sample random hits, you need to make sure that the rank profile you use in your vespa_query_fn has a ranking expression that reflects this (see the docs for an example, and the sketch below). In that case, be aware that the relevance_score values in the returned results (or CSV) are not meaningful; they are only meaningful if you use this class to collect features for relevant documents only.
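
For example, a minimal sketch (using pyvespa's package API, with a hypothetical profile name) of a rank profile whose first-phase expression is the random rank feature, so that sampled hits are drawn independently of query relevance:

from vespa.package import RankProfile

# Rank hits by Vespa's "random" rank feature so the non-relevant hits
# returned for each query form an (approximately) uniform random sample.
random_profile = RankProfile(name="random_sampling", first_phase="random")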

Example usage
from vespa.application import Vespa
from vespa.evaluation import VespaFeatureCollector

queries = {
    "q1": "What is the best GPU for gaming?",
    "q2": "How to bake sourdough bread?",
    # ...
}
relevant_docs = {
    "q1": {"d12", "d99"},
    "q2": {"d101"},
    # ...
}

def my_vespa_query_fn(query_text: str, top_k: int) -> dict:
    return {
        "yql": 'select * from sources * where userInput("' + query_text + '");',
        "hits": 10,  # Do not make use of top_k here.
        "ranking": "your_ranking_profile", # This should have `random` as ranking expression
    }

app = Vespa(url="http://localhost", port=8080)

collector = VespaFeatureCollector(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    id_field="id",  # Field in Vespa hit that contains the document ID (must be an attribute)
    name="retrieval-data-collection",
    csv_dir="/path/to/save/csv",
    random_hits_strategy="ratio",  # or RandomHitsSamplingStrategy.RATIO
    random_hits_value=1.0,  # Sample equal number of random hits to relevant docs
    max_random_hits_per_query=100,  # Optional: cap random hits per query
    collect_matchfeatures=True,  # Collect match features from rank profile
    collect_rankfeatures=False,  # Skip traditional rank features
    collect_summaryfeatures=False,  # Skip summary features
)

collector()

Alternative Usage Examples:

# Example 1: Fixed number of random hits per query
collector = VespaFeatureCollector(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    id_field="id",  # Required field name
    random_hits_strategy="fixed",
    random_hits_value=50,  # Always sample 50 random hits per query
)

# Example 2: Ratio-based with a cap
collector = VespaFeatureCollector(
    queries=queries,
    relevant_docs=relevant_docs,
    vespa_query_fn=my_vespa_query_fn,
    app=app,
    id_field="id",  # Required field name
    random_hits_strategy="ratio",
    random_hits_value=2.0,  # Sample twice as many random hits as relevant docs
    max_random_hits_per_query=200,  # But never more than 200 per query
)

Parameters:

Name Type Description Default
queries Dict[str, str]

A dictionary where keys are query IDs and values are query strings.

required
relevant_docs Union[Dict[str, Union[Set[str], Dict[str, float]]], Dict[str, str]]

A dictionary mapping query IDs to their relevant document IDs. Can be a set of doc IDs for binary relevance, a dict of doc_id to relevance score (float between 0 and 1) for graded relevance, or a single doc_id string.

required
vespa_query_fn Callable[[str, int, Optional[str]], dict]

A function that takes a query string, the number of hits to retrieve (top_k), and an optional query_id, and returns a Vespa query body dictionary.

required
app Vespa

An instance of the Vespa application.

required
id_field str

The field name in the Vespa hit that contains the document ID. This field must be defined as an attribute in your Vespa schema.

required
name str

A name for this data collection run. Defaults to "".

''
csv_dir Optional[str]

Directory to save the CSV file. Defaults to None (current directory).

None
random_hits_strategy Union[RandomHitsSamplingStrategy, str]

Strategy for sampling random hits. Can be "ratio" (or RandomHitsSamplingStrategy.RATIO) to sample as a ratio of relevant docs, or "fixed" (or RandomHitsSamplingStrategy.FIXED) to sample a fixed number per query. Defaults to "ratio".

RATIO
random_hits_value Union[float, int]

Value for the sampling strategy. For RATIO strategy: ratio value (e.g., 1.0 = equal number, 2.0 = twice as many random hits). For FIXED strategy: fixed number of random hits per query. Defaults to 1.0.

1.0
max_random_hits_per_query Optional[int]

Maximum limit on random hits per query. Only applies to RATIO strategy to prevent excessive sampling. Defaults to None (no limit).

None
collect_matchfeatures bool

Whether to collect match features defined in rank profile's match-features section. Defaults to True.

True
collect_rankfeatures bool

Whether to collect rank features using ranking.listFeatures=true. Defaults to False.

False
collect_summaryfeatures bool

Whether to collect summary features from document summaries. Defaults to False.

False
write_csv bool

Whether to write results to CSV file. Defaults to True.

True

get_recall_param(relevant_doc_ids, get_relevant)

Adds the recall parameter to the Vespa query body based on relevant document IDs.

Parameters:

Name Type Description Default
relevant_doc_ids set

A set of relevant document IDs.

required
get_relevant bool

Whether to retrieve relevant documents.

required

Returns:

Name Type Description
dict dict

The updated Vespa query body with the recall parameter.

calculate_random_hits_count(num_relevant_docs)

Calculate the number of random hits to sample based on the configured strategy.

Parameters:

Name Type Description Default
num_relevant_docs int

Number of relevant documents for the query

required

Returns:

Type Description
int

Number of random hits to sample

collect()

Collects training data by executing queries and saving results to CSV.

This method:

  1. Executes all configured queries against the Vespa application.
  2. Collects the top-k document IDs and their relevance labels.
  3. Optionally writes the data to a CSV file for training purposes.
  4. Returns the collected data as a single dictionary with results.

Returns:

Type Description
Dict[str, List[Dict]]

Dict containing:

Dict[str, List[Dict]]
  • 'results': List of dictionaries, each containing all data for a query-document pair (query_id, doc_id, relevance_label, relevance_score, and all extracted features)
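
A sketch of consuming the returned data, using the field names listed above:

data = collector.collect()   # the collector is also callable: collector()
for row in data["results"]:
    print(row["query_id"], row["doc_id"], row["relevance_label"])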

VespaNNParameters

Collection of nearest-neighbor query parameters used in nearest-neighbor classes.

VespaNNUnsuccessfulQueryError

Bases: Exception

Exception raised when trying to determine the hit ratio or compute the recall of an unsuccessful query.

VespaNNGlobalFilterHitratioEvaluator(queries, app, verify_target_hits=None)

Determine the hit ratio of the global filter in ANN queries. This hit ratio determines the search strategy used to perform the nearest-neighbor search and is essential to understanding and optimizing the behavior of Vespa on these queries.

This class:

  • Takes a list of queries.
  • Runs the queries with tracing.
  • Determines the hit ratio by examining the trace.

Parameters:

Name Type Description Default
queries Sequence[Mapping[str, str]]

List of ANN queries.

required
app Vespa

An instance of the Vespa application.

required

run()

Determines the hit ratios of the global filters in the supplied ANN queries.

Returns:

Type Description

List[List[float]]: List of lists of hit ratios, which are values from the interval [0.0, 1.0], corresponding to the supplied queries.
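
A minimal usage sketch, assuming app is a connected Vespa instance and ann_queries is a list of query bodies containing a nearestNeighbor operator:

hitratio_evaluator = VespaNNGlobalFilterHitratioEvaluator(queries=ann_queries, app=app)
hit_ratios = hitratio_evaluator.run()   # lists of hit ratios in [0.0, 1.0] per supplied query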

VespaNNRecallEvaluator(queries, hits, app, **kwargs)

Determine the recall of ANN queries. The recall of an ANN query with k hits is the fraction of its returned hits that are among the exact k nearest neighbors of the query vector.

This class:

  • Takes a list of queries.
  • First runs the queries as is (with the supplied HTTP parameters).
  • Then runs the queries with the supplied HTTP parameters and an additional parameter enforcing an exact nearest neighbor search.
  • Determines the recall by comparing the results.

Parameters:

Name Type Description Default
queries Sequence[Mapping[str, Any]]

List of ANN queries.

required
hits int

Number of hits to use. Should match the parameter targetHits in the used ANN queries.

required
app Vespa

An instance of the Vespa application.

required
**kwargs dict {}

run()

Computes the recall of the supplied queries.

Returns:

Type Description
List[float]

List[float]: List of recall values from the interval [0.0, 1.0] corresponding to the supplied queries.
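
A minimal usage sketch under the same assumptions as above, with targetHits in the queries assumed to be 100:

recall_evaluator = VespaNNRecallEvaluator(queries=ann_queries, hits=100, app=app)
recalls = recall_evaluator.run()   # one recall value in [0.0, 1.0] per supplied query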

VespaQueryBenchmarker(queries, app, repetitions=10, max_concurrent=10, **kwargs)

Determine the searchtime of queries by running them multiple times and taking the average. Using the searchtime has the advantage of not including network latency.

This class:

  • Takes a list of queries.
  • Runs the queries multiple times.
  • Determines the average searchtime of these runs.

Parameters:

Name Type Description Default
queries Sequence[Mapping[str, Any]]

List of queries.

required
app Vespa

An instance of the Vespa application.

required
repetitions int

Number of times to repeat the queries.

10
**kwargs dict {}

run()

Runs the benchmark (including a warm-up run not included in the result).

Returns:

Type Description
List[float]

List[float]: List of searchtimes, corresponding to the supplied queries.
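
A minimal usage sketch with an arbitrary example query body:

benchmarker = VespaQueryBenchmarker(
    queries=[{"yql": "select * from sources * where userQuery()", "query": "gpu"}],
    app=app,
    repetitions=10,
)
searchtimes = benchmarker.run()   # one averaged searchtime per supplied query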

BucketedMetricResults(metric_name, buckets, values, filtered_out_ratios)

Stores aggregated statistics for a metric across query buckets.

Computes mean and various percentiles for values grouped by bucket, where each bucket contains multiple measurements (e.g., response times or recall values).

Parameters:

Name Type Description Default
metric_name str

Name of the metric being measured (e.g., "searchtime", "recall")

required
buckets List[int]

List of bucket indices that contain data

required
values List[List[float]]

List of lists containing measurements, one list per bucket

required
filtered_out_ratios List[float]

Pre-computed filtered-out ratios for each bucket

required

to_dict()

Convert results to dictionary format.

Returns:

Type Description
Dict[str, Any]

Dictionary containing bucket information and all statistics

VespaNNParameterOptimizer(app, queries, hits, buckets_per_percent=2, print_progress=False, max_concurrent=10)

Get suggestions for configuring the nearest-neighbor parameters of a Vespa application.

This class:

  • Sorts ANN queries into buckets based on the hit-ratio of their global filter.
  • For every bucket, can determine the average response time of the queries in this bucket.
  • For every bucket, can determine the average recall of the queries in this bucket.
  • Can suggest a value for postFilterThreshold.
  • Can suggest a value for filterFirstThreshold.
  • Can suggest a value for filterFirstExploration.
  • Can suggest a value for approximateThreshold.

Parameters:

Name Type Description Default
app Vespa

An instance of the Vespa application.

required
queries Sequence[Mapping[str, Any]]

Queries to optimize for.

required
hits int

Number of hits to use in recall computations. Has to match the parameter targetHits in the used ANN queries.

required
buckets_per_percent int

How many buckets are created for each percentage point; this sets the "resolution" of the suggestions. Defaults to 2.

2
print_progress bool

Whether to print progress information while determining suggestions. Defaults to False.

False

get_bucket_interval_width()

Gets the width of the interval represented by a single bucket.

Returns:

Name Type Description
float float

Width of the interval represented by a single bucket.

get_number_of_buckets()

Gets the number of buckets.

Returns:

Name Type Description
int int

Number of buckets.
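
For intuition, a small sketch of the relationship between buckets_per_percent, the bucket interval width, and the bucket count, consistent with the example output of run() further below (the exact formulas are an assumption):

buckets_per_percent = 2
bucket_interval_width = 1.0 / (100 * buckets_per_percent)   # 0.005
number_of_buckets = 100 * buckets_per_percent               # 200 buckets covering hit ratios in [0.0, 1.0]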

get_number_of_nonempty_buckets()

Counts the number of buckets that contain at least one query.

Returns:

Name Type Description
int int

The number of buckets that contain at least one query.

get_non_empty_buckets()

Gets the indices of the non-empty buckets.

Returns:

Type Description
List[int]

List[int]: List of indices of the non-empty buckets.

get_filtered_out_ratios()

Gets the (lower interval ends of the) filtered-out ratios of the non-empty buckets.

Returns:

Type Description
List[float]

List[float]: List of the (lower interval ends of the) filtered-out ratios of the non-empty buckets.

get_number_of_queries()

Gets the number of queries contained in the buckets.

Returns:

Name Type Description
int

Number of queries contained in the buckets.

bucket_to_hitratio(bucket)

Gets the hit ratio (upper endpoint of interval) corresponding to the given bucket index.

Parameters:

Name Type Description Default
bucket int

Index of a bucket.

required

Returns:

Name Type Description
float float

Hit ratio corresponding to the given bucket index.

bucket_to_filtered_out(bucket)

Gets the filtered-out ratio (1 - hit ratio, lower endpoint of interval) corresponding to the given bucket index.

Parameters:

Name Type Description Default
bucket int

Index of a bucket.

required

Returns:

Name Type Description
float float

Filtered-out ratio corresponding to the given bucket index.

buckets_to_filtered_out(buckets)

Applies bucket_to_filtered_out to list of bucket indices.

Parameters:

Name Type Description Default
buckets List[int]

List of bucket indices.

required

Returns:

Type Description
List[float]

List[float]: Filtered-out ratios corresponding to the given bucket indices.

filtered_out_to_bucket(percent)

Gets the index of the bucket containing the given filtered-out ratio.

Parameters:

Name Type Description Default
percent float

Filtered-out ratio.

required

Returns:

Name Type Description
int int

Index of bucket containing the given filtered-out ratio.

distribute_to_buckets(queries_with_hitratios)

Distributes the given queries to buckets according to their given hit ratios.

Parameters:

Name Type Description Default
queries_with_hitratios List[Tuple[Dict[str, str], float]]

Queries with hit ratios.

required

Returns:

Type Description
List[List[str]]

List[List[str]]: List of buckets.

determine_hit_ratios_and_distribute_to_buckets(queries)

Distributes the given queries to buckets by determining their hit ratios.

Parameters:

Name Type Description Default
queries Sequence[Mapping[str, Any]]

Queries.

required

Returns:

Type Description
List[List[str]]

List[List[str]]: List of buckets.

query_from_get_string(get_query) staticmethod

Parses a query in GET format.

Parameters:

Name Type Description Default
get_query str

Query as a single-line GET string.

required

Returns:

Type Description
Dict[str, str]

Dict[str,str]: Query as a dict.

distribute_file_to_buckets(filename)

Distributes the queries from the given file to buckets according to their given hit ratios.

Parameters:

Name Type Description Default
filename str

Name of file with GET queries (one per line).

required

Returns:

Type Description
List[List[str]]

List[List[str]]: List of buckets.

has_sufficient_queries()

Checks whether the given queries are deemed sufficient to give meaningful suggestions.

Returns:

Name Type Description
bool bool

Whether the given queries are deemed sufficient to give meaningful suggestions.

buckets_sufficiently_filled()

Checks whether all non-empty buckets have at least 10 queries.

Returns:

Name Type Description
bool bool

Whether all non-empty buckets have at least 10 queries.

get_query_distribution()

Gets the distribution of queries across all buckets.

Returns:

Type Description

List[float]: List of filtered-out ratios corresponding to the non-empty buckets.

List[int]: List of the number of queries in each of those buckets.

benchmark(**kwargs)

For each non-empty bucket, determine the average searchtime.

Parameters:

Name Type Description Default
**kwargs dict {}

Returns:

Name Type Description
BucketedMetricResults BucketedMetricResults

The benchmark results.

compute_average_recalls(**kwargs)

For each non-empty bucket, determine the average recall.

Parameters:

Name Type Description Default
**kwargs dict {}

Returns:

Name Type Description
BucketedMetricResults BucketedMetricResults

The recall results.

suggest_filter_first_threshold(**kwargs)

Suggests a value for filterFirstThreshold based on performed benchmarks.

Parameters:

Name Type Description Default
**kwargs dict

Additional HTTP request parameters. See: https://docs.vespa.ai/en/reference/document-v1-api-reference.html#request-parameters. Should contain ranking.matching.filterFirstExploration!

{}

Returns:

Name Type Description
dict dict[str, float | dict[str, List[float]]]

A dictionary containing the suggested value for filterFirstThreshold and the benchmarks used to determine it.

suggest_approximate_threshold(**kwargs)

Suggests a value for approximateThreshold based on performed benchmarks.

Parameters:

Name Type Description Default
**kwargs dict

Additional HTTP request parameters. See: https://docs.vespa.ai/en/reference/document-v1-api-reference.html#request-parameters. Should contain ranking.matching.filterFirstExploration and ranking.matching.filterFirstThreshold!

{}

Returns:

Name Type Description
dict dict[str, float | dict[str, List[float]]]

A dictionary containing the suggested value for approximateThreshold and the benchmarks used to determine it.

suggest_post_filter_threshold(**kwargs)

Suggests a value for postFilterThreshold based on performed benchmarks and recall measurements.

Parameters:

Name Type Description Default
**kwargs dict

Additional HTTP request parameters. See: https://docs.vespa.ai/en/reference/document-v1-api-reference.html#request-parameters. Should contain ranking.matching.filterFirstExploration, ranking.matching.filterFirstThreshold, and ranking.matching.approximateThreshold!

{}

Returns:

Name Type Description
dict dict[str, float | dict[str, List[float]]]

A dictionary containing the suggested value for postFilterThreshold along with the benchmarks and recall measurements used to determine it.

suggest_filter_first_exploration()

Suggests a value for filterFirstExploration based on benchmarks and recall measurements performed on the supplied Vespa app.

Returns:

Name Type Description
dict dict[str, float | dict[str, List[float]]]

A dictionary containing the suggested value, benchmarks, and recall measurements.

run()

Determines suggestions for all parameters supported by this class.

This method:

  1. Determines the hit ratios of the supplied ANN queries.
  2. Sorts these queries into buckets based on the determined hit ratio.
  3. Determines a suggestion for filterFirstExploration.
  4. Determines a suggestion for filterFirstThreshold.
  5. Determines a suggestion for approximateThreshold.
  6. Determines a suggestion for postFilterThreshold.
  7. Reports the determined suggestions and all benchmarks and recall measurements performed.

Returns:

Name Type Description
dict Dict[str, Any]

A dictionary containing the suggested values, information about the query distribution, performed benchmarks, and recall measurements.

Example
{
    "buckets": {
        "buckets_per_percent": 2,
        "bucket_interval_width": 0.005,
        "non_empty_buckets": [
            2,
            20,
            100,
            180,
            190,
            198
        ],
        "filtered_out_ratios": [
            0.01,
            0.1,
            0.5,
            0.9,
            0.95,
            0.99
        ],
        "hit_ratios": [
            0.99,
            0.9,
            0.5,
            0.09999999999999998,
            0.050000000000000044,
            0.010000000000000009
        ],
        "query_distribution": [
            100,
            100,
            100,
            100,
            100,
            100
        ]
    },
    "filterFirstExploration": {
        "suggestion": 0.26953125,
        "benchmarks": {
            "0.0": [
                3.739,
                3.771000000000001,
                3.4500000000000006,
                2.838,
                2.3980000000000015,
                1.7650000000000008
            ],
            "1.0": [
                3.6299999999999977,
                3.6859999999999995,
                3.432000000000002,
                4.166999999999999,
                5.185999999999999,
                7.606999999999999
            ],
            "0.5": [
                3.573,
                3.543999999999999,
                3.535000000000001,
                3.8410000000000006,
                3.9800000000000004,
                5.522999999999999
            ],
            "0.25": [
                3.4939999999999998,
                3.345,
                3.341,
                3.011999999999999,
                2.5979999999999994,
                2.4250000000000007
            ],
            "0.375": [
                3.5869999999999993,
                3.4060000000000006,
                3.252999999999999,
                3.318,
                3.2269999999999994,
                3.7120000000000015
            ],
            "0.3125": [
                3.6000000000000005,
                3.401,
                3.2300000000000004,
                3.089,
                2.845999999999999,
                2.986
            ],
            "0.28125": [
                3.5709999999999993,
                3.606000000000001,
                3.2519999999999993,
                3.005,
                2.728000000000001,
                2.6400000000000006
            ],
            "0.265625": [
                3.613999999999999,
                3.381,
                3.3209999999999997,
                3.059,
                2.7200000000000006,
                2.5120000000000005
            ],
            "0.2734375": [
                3.588999999999998,
                3.399999999999999,
                3.3000000000000016,
                3.017,
                2.695,
                2.5850000000000013
            ]
        },
        "recall_measurements": {
            "0.0": [
                0.8736999999999999,
                0.8717999999999994,
                0.8905000000000004,
                0.9441999999999999,
                0.9026000000000005,
                0.6339999999999995
            ],
            "1.0": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9669999999999997,
                0.9856999999999995,
                0.9954999999999997
            ],
            "0.5": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9656999999999997,
                0.9764999999999998,
                0.9904999999999995
            ],
            "0.25": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9526999999999994,
                0.9297999999999998,
                0.8329
            ],
            "0.375": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9611999999999995,
                0.9592999999999998,
                0.9623999999999998
            ],
            "0.3125": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9573999999999995,
                0.9425000000000003,
                0.9082999999999997
            ],
            "0.28125": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9555999999999993,
                0.9365000000000001,
                0.8779000000000002
            ],
            "0.265625": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9542999999999995,
                0.9322999999999999,
                0.8537
            ],
            "0.2734375": [
                0.8741999999999999,
                0.8717999999999994,
                0.8907000000000005,
                0.9544999999999995,
                0.9342,
                0.8662000000000001
            ]
        }
    },
    "filterFirstThreshold": {
        "suggestion": 0.46,
        "benchmarks": {
            "hnsw": [
                2.818,
                2.6899999999999995,
                3.1060000000000008,
                7.0150000000000015,
                11.572000000000003,
                32.068999999999996
            ],
            "filter_first": [
                3.5249999999999995,
                3.383,
                3.4959999999999973,
                3.1700000000000004,
                2.7319999999999998,
                2.5430000000000006
            ]
        }
    },
    "approximateThreshold": {
        "suggestion": 0.015,
        "benchmarks": {
            "exact": [
                33.882999999999996,
                32.803999999999995,
                24.368000000000002,
                9.366000000000001,
                6.071999999999998,
                2.1279999999999997
            ],
            "filter_first": [
                2.797000000000001,
                2.638000000000001,
                3.1540000000000017,
                2.977,
                2.745,
                2.56
            ]
        }
    },
    "postFilterThreshold": {
        "suggestion": 0.49,
        "benchmarks": {
            "post_filtering": [
                1.9979999999999996,
                2.248,
                3.0170000000000003,
                7.204,
                12.673999999999996,
                11.397999999999993
            ],
            "filter_first": [
                2.8349999999999995,
                3.0579999999999994,
                3.105999999999999,
                3.375999999999999,
                2.8720000000000017,
                2.157999999999999
            ]
        },
        "recall_measurements": {
            "post_filtering": [
                0.828,
                0.8335000000000005,
                0.8943000000000001,
                0.9527999999999994,
                0.9508999999999996,
                0.1917
            ],
            "filter_first": [
                0.828,
                0.8359000000000002,
                0.8980999999999998,
                0.9543999999999996,
                0.9338,
                1.0
            ]
        }
    }
}
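
A minimal usage sketch tying the pieces together; ann_queries and the targetHits value of 100 are placeholders:

optimizer = VespaNNParameterOptimizer(
    app=app,
    queries=ann_queries,
    hits=100,                 # must match targetHits in the queries
    buckets_per_percent=2,
    print_progress=True,
)
if optimizer.has_sufficient_queries():
    suggestions = optimizer.run()
    print(suggestions["filterFirstThreshold"]["suggestion"])
    print(suggestions["postFilterThreshold"]["suggestion"])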

mean(values)

Compute the mean of a list of numbers without using numpy.

percentile(values, p)

Compute the p-th percentile of a list of values (0 <= p <= 100). This approximates numpy.percentile's behavior.
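
A minimal sketch of such a percentile computation (linear interpolation between the two closest ranks, like numpy's default method); the library's own implementation may differ in edge-case handling:

def percentile_sketch(values, p):
    # Sort, locate the fractional rank for p, and interpolate linearly.
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

percentile_sketch([0.010, 0.012, 0.015, 0.020], 50)   # 0.0135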

validate_queries(queries)

Validate and normalize queries. Converts query IDs to strings if they are ints.

validate_qrels(qrels)

Validate and normalize qrels. Converts query IDs to strings if they are ints.

validate_vespa_query_fn(fn)

Validates the vespa_query_fn function.

The function must be callable and accept either 2 or 3 parameters:
  • (query_text: str, top_k: int)
  • or (query_text: str, top_k: int, query_id: Optional[str])

It must return a dictionary when called with test inputs.

Returns True if the function takes a query_id parameter, False otherwise.
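
For illustration, both of these shapes are accepted (the bodies are placeholders):

from typing import Optional

def fn_without_query_id(query_text: str, top_k: int) -> dict:
    return {"yql": "select * from sources * where userQuery()",
            "query": query_text, "hits": top_k}

def fn_with_query_id(query_text: str, top_k: int, query_id: Optional[str] = None) -> dict:
    return {"yql": "select * from sources * where userQuery()",
            "query": query_text, "hits": top_k}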

filter_queries(queries, relevant_docs)

Filter out queries that have no relevant docs.

extract_doc_id_from_hit(hit, id_field)

Extract document ID from a Vespa hit.

get_id_field_from_hit(hit, id_field)

Get the ID field from a Vespa hit.

calculate_searchtime_stats(searchtimes)

Calculate search time statistics.

execute_queries(app, query_bodies, max_concurrent=100)

Execute queries and collect timing information. Returns the responses and a list of search times.

write_csv(metrics, searchtime_stats, csv_file, csv_dir, name)

Write metrics to CSV file.

log_metrics(name, metrics)

Log metrics with appropriate formatting.

extract_features_from_hit(hit, collect_matchfeatures, collect_rankfeatures, collect_summaryfeatures)

Extract features from a Vespa hit based on the collection configuration.

Parameters:

Name Type Description Default
hit dict

The Vespa hit dictionary

required
collect_matchfeatures bool

Whether to collect match features

required
collect_rankfeatures bool

Whether to collect rank features

required
collect_summaryfeatures bool

Whether to collect summary features

required

Returns:

Type Description
Dict[str, float]

Dict mapping feature names to values
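
A hypothetical hit, shaped like a Vespa query-response hit with a matchfeatures summary field, to illustrate the call; how feature names are prefixed in the returned dict is up to the implementation:

hit = {
    "id": "index:content/0/abc123",
    "relevance": 0.73,
    "fields": {
        "id": "d12",
        "matchfeatures": {"bm25(title)": 2.4, "closeness(field,embedding)": 0.61},
    },
}

features = extract_features_from_hit(
    hit,
    collect_matchfeatures=True,
    collect_rankfeatures=False,
    collect_summaryfeatures=False,
)
# features maps feature names to float values, e.g. the bm25(title) score above.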