| Developed by | Aryn |
| --- | --- |
| Date of development | Feb 15, 2024 |
| Validator type | Summarization |
| Blog | |
| License | Apache 2 |
| Input/Output | Output |
This validator checks whether a summary generated by an LLM is an extractive summary of the original document. An extractive summary is built from words and phrases selected directly from the original document.

This validator works by performing a fuzzy match between the sentences in the summary and the sentences in the document. Each sentence in the summary must be similar to at least one sentence in the document. After validation, the summary is updated to use the matched document sentences, and citations for those sentences are appended to the end of the summary.

This validator is only useful when performing extractive summarization. If the summary is accurate but abstractive (paraphrased rather than quoted), this validator will produce false negatives.
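To make the check concrete, the sketch below approximates the matching step with thefuzz, the fuzzy-matching dependency listed below. The function name `is_extractive`, the pre-split sentence lists, and the sample sentences are illustrative assumptions, not the validator's actual internals:

```python
from thefuzz import fuzz

def is_extractive(summary_sentences, document_sentences, threshold=85):
    """Illustrative check: every summary sentence must fuzzy-match at least
    one document sentence with a ratio at or above the threshold."""
    citations = []
    for sentence in summary_sentences:
        # Score this summary sentence against every document sentence.
        best_score, best_idx = max(
            (fuzz.ratio(sentence, doc_sentence), idx)
            for idx, doc_sentence in enumerate(document_sentences)
        )
        if best_score < threshold:
            # An unmatched (likely abstractive) sentence fails the check.
            return False, citations
        citations.append(best_idx)
    return True, citations

document = ["The quarterly report was released on Monday.",
            "Revenue grew by 12 percent."]
summary = ["Revenue grew by 12 percent."]
print(is_extractive(summary, document))  # (True, [1])
```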
- Dependencies:
    - guardrails-ai>=0.4.0
    - thefuzz

```bash
$ guardrails hub install hub://aryn-ai/extractive_summary
```
In this example, we apply the validator to a string output generated by an LLM.
```python
# Import Guard and Validator
from guardrails.hub import ExtractiveSummary
from guardrails import Guard

# Setup the validator; filepaths for the source documents are supplied
# via metadata at parse time (see the API reference below)
val = ExtractiveSummary(threshold=90)

# Create Guard with Validator
guard = Guard.from_string(validators=[val])

metadata = {"filepaths": ["/path/to/original/documents"]}

guard.parse("Summarized text", metadata=metadata)  # Validator passes
guard.parse("Incorrect summary", metadata=metadata)  # Validator fails
```
In this example, we apply the validator to a string field of a JSON output generated by an LLM.
```python
# Import Guard and Validator
from pydantic import BaseModel, Field
from guardrails.hub import ExtractiveSummary
from guardrails import Guard

# Setup the validator; source-document filepaths are passed as metadata
val = ExtractiveSummary(threshold=90)

# Create Pydantic BaseModel
class ArticleSummary(BaseModel):
    title: str
    summary: str = Field(
        description="Summary of text", validators=[val]
    )

# Create a Guard to check for valid Pydantic output
guard = Guard.from_pydantic(output_class=ArticleSummary)

# Run LLM output generating JSON through guard
guard.parse(
    """
    {
        "title": "Using Guardrails Hub",
        "summary": "To use Guardrails Hub, download individual validators using the CLI and compose them together into guards."
    }
    """,
    metadata={"filepaths": ["/path/to/original/documents"]},
)
```
__init__(self, threshold=85, on_fail="noop")

Initializes a new instance of the Validator class.

Parameters

- `threshold` (int): The minimum fuzz ratio between a summary sentence and a document sentence for the summary sentence to be considered extracted. Defaults to 85.
- `on_fail` (str, Callable): The policy to enact when a validator fails. If `str`, must be one of `reask`, `fix`, `filter`, `refrain`, `noop`, `exception` or `fix_reask`. Otherwise, must be a function that is called when the validator fails.
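For example, a stricter configuration can raise an exception, rather than take the default `noop` action, when a summary sentence has no close match in the source documents. This is a minimal sketch using only the documented arguments; the threshold value is an illustrative choice:

```python
from guardrails.hub import ExtractiveSummary

# Accept only near-verbatim summary sentences and raise on failure
# instead of the default "noop" policy.
strict_summary_check = ExtractiveSummary(threshold=95, on_fail="exception")
```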
__call__(self, value, metadata={}) -> ValidationOutcome

Validates the given `value` using the rules defined in this validator, relying on the `metadata` provided to customize the validation process. This method is automatically invoked by `guard.parse(...)`, ensuring the validation logic is applied to the input data.

Note:

- This method should not be called directly by the user. Instead, invoke `guard.parse(...)` where this method will be called internally for each associated Validator.
- When invoking `guard.parse(...)`, ensure to pass the appropriate `metadata` dictionary that includes keys and values required by this validator. If `guard` is associated with multiple validators, combine all necessary metadata into a single dictionary.

Parameters

- `value` (Any): The input value to validate.
- `metadata` (dict): A dictionary containing metadata required for validation. Keys and values must match the expectations of this validator.

| Key | Type | Description | Default |
| --- | --- | --- | --- |
| filepaths | list[str] | A list of strings that specifies the filepaths for any documents that should be used for asserting the summary's similarity. | N/A |
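To illustrate the note about combining metadata, the sketch below passes the required `filepaths` key alongside a hypothetical key needed by some other validator attached to the same guard; `some_other_key` is a placeholder, not part of this validator:

```python
metadata = {
    # Required by ExtractiveSummary (see the table above)
    "filepaths": ["/path/to/original/documents"],
    # Hypothetical key required by a different validator on the same guard
    "some_other_key": "example value",
}

# Reusing a guard built as in the usage examples above
guard.parse("Summarized text", metadata=metadata)
```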