peft module documentation completion #1078

Merged: 7 commits, May 15, 2024
2 changes: 2 additions & 0 deletions docs/en/api/peft/ADAPTERS/AdaLoRA.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.tuners.adalora.config.AdaLoraConfig
::: mindnlp.peft.tuners.adalora.model.AdaLoraModel
2 changes: 2 additions & 0 deletions docs/en/api/peft/ADAPTERS/Adaption_Prompt.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.tuners.adaption_prompt.config.AdaptionPromptConfig
::: mindnlp.peft.tuners.adaption_prompt.model.AdaptionPromptModel
2 changes: 2 additions & 0 deletions docs/en/api/peft/ADAPTERS/IA3.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.tuners.ia3.config
::: mindnlp.peft.tuners.ia3.model
2 changes: 2 additions & 0 deletions docs/en/api/peft/ADAPTERS/LoKr.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.tuners.lokr.config
::: mindnlp.peft.tuners.lokr.model
2 changes: 2 additions & 0 deletions docs/en/api/peft/ADAPTERS/LoRA.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.tuners.lora.config
::: mindnlp.peft.tuners.lora.model
2 changes: 2 additions & 0 deletions docs/en/api/peft/MAIN_CLASSES/PEFT_TYPE.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.utils.peft_types.PeftType
::: mindnlp.peft.utils.peft_types.TaskType
2 changes: 2 additions & 0 deletions docs/en/api/peft/MAIN_CLASSES/Tuner.md
@@ -0,0 +1,2 @@
::: mindnlp.peft.tuners.tuners_utils.BaseTuner
::: mindnlp.peft.tuners.tuners_utils.BaseTunerLayer
1 change: 1 addition & 0 deletions docs/en/api/peft/MAIN_CLASSES/config.md
@@ -0,0 +1 @@
::: mindnlp.peft.config
1 change: 1 addition & 0 deletions docs/en/api/peft/MAIN_CLASSES/mapping.md
@@ -0,0 +1 @@
::: mindnlp.peft.mapping
1 change: 1 addition & 0 deletions docs/en/api/peft/MAIN_CLASSES/peft_model.md
@@ -0,0 +1 @@
::: mindnlp.peft.peft_model
1 change: 1 addition & 0 deletions docs/en/api/peft/UTILITIES/Model_merge.md
@@ -0,0 +1 @@
::: mindnlp.peft.utils.merge_utils
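
These pages hold only mkdocstrings `:::` directives; the rendered API reference comes from the Python docstrings themselves. As a hedged illustration (the class below is hypothetical, not part of mindnlp), this is the docstring shape such a directive picks up and renders:

```python
# Hypothetical class, for illustration only: a `::: my_pkg.ExampleConfig`
# directive would render this Google-style docstring as an API page.
class ExampleConfig:
    """Configuration for a hypothetical adapter.

    Args:
        r (int): Rank of the update matrices, defaults to 8.
        target_cells (list): Names of the cells to adapt.
    """

    def __init__(self, r: int = 8, target_cells=None):
        self.r = r
        self.target_cells = target_cells if target_cells is not None else []
```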
12 changes: 12 additions & 0 deletions mindnlp/peft/peft_model.py
@@ -417,6 +417,9 @@ def construct(
return_dict=None,
**kwargs,
):
"""
Forward pass of the model.
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
peft_config = self.active_peft_config
if not peft_config.is_prompt_learning:
@@ -492,6 +495,9 @@ def construct(
return_dict=None,
**kwargs,
):
"""
Forward pass of the model.
"""
peft_config = self.active_peft_config
if not isinstance(peft_config, PromptLearningConfig):
if self.base_model.config.model_type == "mpt":
@@ -636,6 +642,9 @@ def construct(
return_dict=None,
**kwargs,
):
"""
Forward pass of the model.
"""
peft_config = self.active_peft_config
if not isinstance(peft_config, PromptLearningConfig):
return self.base_model(
@@ -845,6 +854,9 @@ def construct(
return_dict=None,
**kwargs,
):
"""
Forward pass of the model.
"""
peft_config = self.active_peft_config
return_dict = return_dict if return_dict is not None else self.config.use_return_dict

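
For readers coming from PyTorch: `construct` is MindSpore's equivalent of `forward`, so the docstrings added above document each task model's entry point. A hedged usage sketch, where the checkpoint name and exact import paths are assumptions mirroring the examples elsewhere in this diff:

```python
from mindnlp.transformers import AutoModelForSeq2SeqLM  # import path assumed
from mindnlp.peft import LoraConfig, get_peft_model      # re-exports assumed

base = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
peft_model = get_peft_model(base, LoraConfig(task_type="SEQ_2_SEQ_LM"))
# Calling the Cell dispatches to the construct(...) methods documented above:
# outputs = peft_model(input_ids=input_ids, labels=labels)
```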
13 changes: 8 additions & 5 deletions mindnlp/peft/tuners/adalora/model.py
@@ -41,12 +41,12 @@ class AdaLoraModel(LoraModel):
https://openreview.net/forum?id=lq62uWRJjiY

Args:
model ([`transformers.PreTrainedModel`]): The model to be adapted.
model ([`mindspore.nn.Cell`]): The model to be adapted.
config ([`AdaLoraConfig`]): The configuration of the AdaLora model.
adapter_name (`str`): The name of the adapter, defaults to `"default"`.

Returns:
`torch.nn.Module`: The AdaLora model.
AdaLoraModel ([`mindspore.nn.Cell`]): The AdaLora model.

Example::

@@ -57,9 +57,11 @@ class AdaLoraModel(LoraModel):
)
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> model = AdaLoraModel(model, config, "default")

**Attributes**:
- **model** ([`transformers.PreTrainedModel`]) -- The model to be adapted.
- **peft_config** ([`AdaLoraConfig`]): The configuration of the AdaLora model.
> **Attributes**:

> - **model** ([`mindspore.nn.Cell`]) -- The model to be adapted.

> - **peft_config** ([`AdaLoraConfig`]): The configuration of the AdaLora model.
"""

# Note: don't redefine prefix here, it should be inherited from LoraModel
@@ -266,6 +268,7 @@ def __getattr__(self, name: str):
return getattr(self.model, name)

def construct(self, *args, **kwargs):
"""The construct method of the model"""
outputs = self.model(*args, **kwargs)

if (getattr(outputs, "loss", None) is not None) and isinstance(outputs.loss, Tensor):
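
The hunk cuts off right after the loss check; in AdaLoRA, `construct` forwards to the wrapped model and then adds an orthogonal-regularization penalty to the returned loss. A simplified, self-contained sketch of that penalty, not mindnlp's exact code:

```python
import numpy as np
import mindspore.ops as ops
from mindspore import Tensor

def orth_penalty(mat: Tensor) -> Tensor:
    """Penalize deviation of M @ M.T from the identity (illustrative only)."""
    eye = ops.eye(mat.shape[0], mat.shape[0], mat.dtype)
    delta = ops.matmul(mat, mat.T) - eye
    return (delta ** 2).mean()

# Pattern used per adapter matrix (weight name hypothetical):
# loss = loss + orth_reg_weight * orth_penalty(lora_A)
print(orth_penalty(Tensor(np.eye(4, dtype=np.float32))))  # ~0 for an orthogonal matrix
```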
1 change: 1 addition & 0 deletions mindnlp/peft/tuners/adaption_prompt/model.py
@@ -53,6 +53,7 @@ def __init__(self, model, configs: Dict, adapter_name: str):
self._mark_only_adaption_prompts_as_trainable(self.model)

def add_adapter(self, adapter_name: str, config: AdaptionPromptConfig) -> None:
"""Add an adapter with the given name and config."""
config = prepare_config(config, self.model)
if adapter_name in self.peft_config:
raise ValueError(f"Adapter named '{adapter_name}' already exists.")
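
A hedged usage sketch of the duplicate-name guard documented above; the config fields mirror upstream peft's AdaptionPromptConfig and are assumptions here:

```python
from mindnlp.peft import AdaptionPromptConfig  # re-export assumed

cfg = AdaptionPromptConfig(adapter_len=10, adapter_layers=30, task_type="CAUSAL_LM")
# model.add_adapter("default", cfg)   # attaches the adapter
# model.add_adapter("default", cfg)   # raises ValueError: adapter already exists
```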
9 changes: 5 additions & 4 deletions mindnlp/peft/tuners/ia3/model.py
@@ -52,7 +52,7 @@ class IA3Model(BaseTuner):
adapter_name (`str`): The name of the adapter, defaults to `"default"`.

Returns:
`torch.nn.Module`: The (IA)^3 model.
IA3Model ([`mindspore.nn.Cell`]): The (IA)^3 model.

Example:

@@ -70,10 +70,11 @@ class IA3Model(BaseTuner):
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> ia3_model = IA3Model(config, model)
```
> **Attributes**:

**Attributes**:
- **model** ([`~transformers.PreTrainedModel`]) -- The model to be adapted.
- **peft_config** ([`ia3Config`]): The configuration of the (IA)^3 model.
> - **model** ([`mindspore.nn.Cell`]) -- The model to be adapted.

> - **peft_config** ([`IA3Config`]): The configuration of the (IA)^3 model.
"""

prefix: str = "ia3_"
4 changes: 2 additions & 2 deletions mindnlp/peft/tuners/lokr/config.py
@@ -53,10 +53,10 @@ class LoKrConfig(PeftConfig):
pattern is not in the common layers pattern.
rank_pattern (`dict`):
The mapping from layer names or regexp expression to ranks which are different from the default rank
specified by `r`.(新)
specified by `r`.
alpha_pattern (`dict`):
The mapping from layer names or regexp expression to alphas which are different from the default alpha
specified by `alpha`.(新)
specified by `alpha`.
"""

r: int = field(default=8, metadata={"help": "lokr attention dimension"})
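
To make the cleaned-up `rank_pattern` / `alpha_pattern` docs concrete, a hedged configuration sketch; the regex keys and override values are illustrative, and the re-export path is an assumption:

```python
from mindnlp.peft import LoKrConfig  # re-export assumed

config = LoKrConfig(
    r=8,                                 # default rank for all matched layers
    rank_pattern={".*attention.*": 16},  # override: attention layers use rank 16
    alpha_pattern={".*output.*": 32},    # override: output layers use alpha 32
)
```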
13 changes: 8 additions & 5 deletions mindnlp/peft/tuners/lokr/model.py
@@ -43,11 +43,11 @@ class LoKrModel(BaseTuner):

Args:
model (`mindspore.nn.Cell`): The model to which the adapter tuner layers will be attached.
config ([`LoKrConfig`]): The configuration of the LoKr model.
peft_config ([`LoKrConfig`]): The configuration of the LoKr model.
adapter_name (`str`): The name of the adapter, defaults to `"default"`.

Returns:
`mindspore.nn.Cell`: The LoKr model.
LoKrModel ([`mindspore.nn.Cell`]): The LoKr model.

Example:
```py
@@ -86,9 +86,12 @@ class LoKrModel(BaseTuner):
>>> model.unet = LoKrModel(model.unet, config_unet, "default")
```

**Attributes**:
- **model** ([`~nn.Cell`]) -- The model to be adapted.
- **peft_config** ([`LoKrConfig`]): The configuration of the LoKr model.
> **Attributes**:

> - **model** ([`~nn.Cell`]) -- The model to be adapted.

> - **peft_config** ([`LoKrConfig`]): The configuration of the LoKr model.

"""

prefix: str = "lokr_"
12 changes: 7 additions & 5 deletions mindnlp/peft/tuners/lora/model.py
@@ -66,7 +66,7 @@ class LoraModel(BaseTuner):
adapter_name (`str`): The name of the adapter, defaults to `"default"`.

Returns:
`nn.Cell`: The Lora model.
LoraModel ([`mindspore.nn.Cell`]): The Lora model.

Example:

@@ -120,9 +120,11 @@ class LoraModel(BaseTuner):
>>> lora_model = get_peft_model(model, config)
```

**Attributes**:
- **model** ([`~transformers.PreTrainedModel`]) -- The model to be adapted.
- **peft_config** ([`LoraConfig`]): The configuration of the Lora model.
> **Attributes**:

> - **model** ([`mindspore.nn.Cell`]) -- The model to be adapted.

> - **peft_config** ([`LoraConfig`]): The configuration of the Lora model.
"""

prefix: str = "lora_"
@@ -153,7 +155,7 @@ def _prepare_model(self, peft_config: LoraConfig, model: nn.Module):
Args:
peft_config (`PeftConfig`):
The prepared adapter config.
model (`nn.Module`):
model (`nn.Cell`):
The model that is going to be adapted.
"""
if peft_config.layer_replication:
2 changes: 1 addition & 1 deletion mindnlp/peft/tuners/prompt_tuning/model.py
@@ -25,7 +25,7 @@ class PromptEmbedding(nn.Cell):

Args:
config ([`PromptTuningConfig`]): The configuration of the prompt embedding.
word_embeddings (`nn.Module`): The word embeddings of the base transformer model.
word_embeddings (`nn.Cell`): The word embeddings of the base transformer model.

**Attributes**:
- **embedding** (`nn.Embedding`) -- The embedding layer of the prompt embedding.
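
PromptEmbedding stores a table of trainable virtual-token embeddings, optionally initialized from the base model's word embeddings. A minimal sketch of the idea, with shapes illustrative and not mindnlp's exact implementation:

```python
import mindspore.nn as nn

class PromptEmbeddingSketch(nn.Cell):
    """Minimal sketch: num_virtual_tokens trainable vectors of size token_dim."""

    def __init__(self, num_virtual_tokens: int, token_dim: int):
        super().__init__()
        self.embedding = nn.Embedding(num_virtual_tokens, token_dim)

    def construct(self, indices):
        # Look up the virtual-token vectors that get prepended to the input.
        return self.embedding(indices)
```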
30 changes: 15 additions & 15 deletions mindnlp/peft/utils/peft_types.py
@@ -21,21 +21,21 @@ class PeftType(str, enum.Enum):
"""
Enum class for the different types of adapters in PEFT.

Supported PEFT types:
- PROMPT_TUNING
- MULTITASK_PROMPT_TUNING
- P_TUNING
- PREFIX_TUNING
- LORA
- ADALORA
- BOFT
- ADAPTION_PROMPT
- IA3
- LOHA
- LOKR
- OFT
- POLY
- LN_TUNING
Supported PEFT types:
- PROMPT_TUNING
- MULTITASK_PROMPT_TUNING
- P_TUNING
- PREFIX_TUNING
- LORA
- ADALORA
- BOFT
- ADAPTION_PROMPT
- IA3
- LOHA
- LOKR
- OFT
- POLY
- LN_TUNING
"""

PROMPT_TUNING = "PROMPT_TUNING"
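
Since `PeftType` subclasses both `str` and `enum.Enum`, its members behave as plain strings whose values match their names; a quick sketch:

```python
from mindnlp.peft.utils.peft_types import PeftType, TaskType

assert PeftType.LORA == "LORA"                # str-valued members compare to their names
assert PeftType("ADALORA") is PeftType.ADALORA  # round-trip from the string value
print([t.value for t in TaskType])            # enumerate the supported task types
```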
12 changes: 11 additions & 1 deletion mkdocs.yml
@@ -16,7 +16,17 @@ nav:
- engine: api/engine.md
- modules: api/modules.md
- parallel: api/parallel.md
- peft: api/peft.md
- peft:
MAIN CLASSES:
PEFT model: api/peft/MAIN_CLASSES/peft_model.md
PEFT mapping: api/peft/MAIN_CLASSES/mapping.md
Configuration: api/peft/MAIN_CLASSES/config.md
ADAPTERS:
AdaLoRA: api/peft/ADAPTERS/AdaLoRA.md
Adaption_Prompt: api/peft/ADAPTERS/Adaption_Prompt.md
IA3: api/peft/ADAPTERS/IA3.md
LoKr: api/peft/ADAPTERS/LoKr.md
LoRA: api/peft/ADAPTERS/LoRA.md
- sentence: api/sentence.md
- transformers: api/transformers.md
- trl: api/trl.md