
Commit 9b4742a

fix: aspect_critique documentation (#1382)

Fixes #1374, but a couple of things to note here:
- metrics should be exposed from `ragas.metrics`, because all of our metrics modules are private with a `_` prefix.
- this needs to be mentioned in the migration guide, regarding aspect critic.
1 parent 745dd98 commit 9b4742a
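
To make the convention from the commit message concrete, here is a small before/after sketch of the import style; the class names are taken from the diff below, and note that `AspectCritique` was also renamed to `AspectCritic`:

```python
# Before: module paths used in the old docs (underscore-prefixed modules are private)
# from ragas.metrics.critique import AspectCritique
# from ragas.metrics._domain_specific_rubrics import RubricsScoreWithReference

# After: import the public re-exports from ragas.metrics
from ragas.metrics import AspectCritic, RubricsScoreWithReference
```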

File tree

1 file changed: +8 -8 lines changed


docs/concepts/metrics/general_purpose.md

+8 -8
@@ -10,15 +10,15 @@ Aspect critic is an evaluation metric that can be used to evaluate responses bas
 
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics.critique import AspectCritique
+from ragas.metrics import AspectCritic
 
 sample = SingleTurnSample(
     user_input="Where is the Eiffel Tower located?",
     response="The Eiffel Tower is located in Paris.",
     reference="The Eiffel Tower is located in Paris.",
 )
 
-critic = AspectCritique(
+critic = AspectCritic(
     name="maliciousness",
     definition="Is the submission intended to harm, deceive, or exploit users?",
 )
@@ -46,7 +46,7 @@ Course graned evaluation method is an evaluation metric that can be used to scor
 
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics._simple_criteria import SimpleCriteriaScoreWithoutReference
+from ragas.metrics import SimpleCriteriaScoreWithoutReference
 
 
 sample = SingleTurnSample(
@@ -63,7 +63,7 @@ await scorer.single_turn_ascore(sample)
 
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics._simple_criteria import SimpleCriteriaScoreWithReference
+from ragas.metrics import SimpleCriteriaScoreWithReference
 
 
 sample = SingleTurnSample(
@@ -88,7 +88,7 @@ Used when you have reference answer to evaluate the responses against.
 #### Example
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics._domain_specific_rubrics import RubricsScoreWithReference
+from ragas.metrics import RubricsScoreWithReference
 sample = SingleTurnSample(
     user_input="Where is the Eiffel Tower located?",
     response="The Eiffel Tower is located in Paris.",
@@ -113,7 +113,7 @@ Used when you don't have reference answer to evaluate the responses against.
 #### Example
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics._domain_specific_rubrics import RubricsScoreWithoutReference
+from ragas.metrics import RubricsScoreWithoutReference
 sample = SingleTurnSample(
     user_input="Where is the Eiffel Tower located?",
     response="The Eiffel Tower is located in Paris.",
@@ -136,7 +136,7 @@ Used when you have reference answer to evaluate the responses against.
 #### Example
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics._domain_specific_rubrics import InstanceRubricsWithReference
+from ragas.metrics import InstanceRubricsWithReference
 
 
 SingleTurnSample(
@@ -164,7 +164,7 @@ Used when you don't have reference answer to evaluate the responses against.
 #### Example
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics._domain_specific_rubrics import InstanceRubricsScoreWithoutReference
+from ragas.metrics import InstanceRubricsScoreWithoutReference
 
 
 SingleTurnSample(
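
For reference, a minimal end-to-end sketch of the updated `AspectCritic` usage the docs now point to. `evaluator_llm` is a placeholder for an already-configured LLM wrapper (not part of this diff), and the `llm=` keyword and `single_turn_ascore` call follow the pattern visible in the surrounding docs; treat it as an illustration rather than the exact docs content:

```python
from ragas.dataset_schema import SingleTurnSample
from ragas.metrics import AspectCritic


async def score_maliciousness(evaluator_llm):
    # evaluator_llm is assumed to be an LLM wrapper accepted by ragas metrics;
    # constructing it is outside the scope of this commit.
    sample = SingleTurnSample(
        user_input="Where is the Eiffel Tower located?",
        response="The Eiffel Tower is located in Paris.",
    )

    critic = AspectCritic(
        name="maliciousness",
        definition="Is the submission intended to harm, deceive, or exploit users?",
        llm=evaluator_llm,
    )

    # Aspect critic returns a binary (0/1) verdict for the defined aspect.
    return await critic.single_turn_ascore(sample)
```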
