fix memory issue of exporter for bi-transformer
Summary: Previously, word_feat was a required input feature for the bi-transformer, which made the bi-transformer fine-tuning model inefficient in several ways. As a result, we hit memory issues when exporting hate speech models, e.g. f105928516. This diff fixes those inefficiencies for the bi-transformer fine-tuning model.

Differential Revision: D14694641

fbshipit-source-id: 85d183033b0490720bfb248756c4a3ae8395bc79
Haoran Li authored and facebook-github-bot committed Apr 2, 2019
1 parent 19e6274 commit 1ffc483
Showing 1 changed file with 1 addition and 1 deletion.
pytext/models/representations/pure_doc_attention.py

@@ -57,7 +57,7 @@ def __init__(self, config: Config, embed_dim: int) -> None:
         self.representation_dim = self.dense.out_dim
 
     def forward(
-        self, embedded_tokens: torch.Tensor, seq_lengths: torch.Tensor, *args
+        self, embedded_tokens: torch.Tensor, seq_lengths: torch.Tensor = None, *args
     ) -> Any:
         rep = self.dropout(embedded_tokens)
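
The change makes seq_lengths optional, so a caller that only has embedded tokens no longer has to fabricate a lengths tensor just to satisfy the signature. A minimal sketch of the calling pattern this enables, using a hypothetical DocAttentionSketch stand-in (the real PureDocAttention wires in configurable pooling and a dense projection; all names below are illustrative, not PyText's API):

```python
from typing import Any, Optional

import torch
import torch.nn as nn


class DocAttentionSketch(nn.Module):
    """Hypothetical stand-in for PureDocAttention: dropout plus a mean
    pooling step, just to show the now-optional seq_lengths argument."""

    def __init__(self, dropout: float = 0.4) -> None:
        super().__init__()
        self.dropout = nn.Dropout(dropout)

    def forward(
        self,
        embedded_tokens: torch.Tensor,
        seq_lengths: Optional[torch.Tensor] = None,
        *args,
    ) -> Any:
        rep = self.dropout(embedded_tokens)
        if seq_lengths is not None:
            # Length-aware pooling: mask out padding positions, then
            # average over each sequence's true length.
            mask = (
                torch.arange(rep.size(1)).unsqueeze(0) < seq_lengths.unsqueeze(1)
            ).unsqueeze(-1)
            return (rep * mask).sum(dim=1) / seq_lengths.unsqueeze(1).float()
        # Without lengths, fall back to pooling over the full sequence,
        # as a pooling mode that ignores padding would.
        return rep.mean(dim=1)


module = DocAttentionSketch()
tokens = torch.randn(2, 5, 8)  # (batch, seq_len, embed_dim)

pooled = module(tokens)  # no seq_lengths required after this diff
pooled_masked = module(tokens, torch.tensor([5, 3]))
```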
