Commit

Update localization for zh-TW
hinablue committed Feb 15, 2024
1 parent 62fbae6 commit aa51809
Showing 1 changed file with 11 additions and 4 deletions.
localizations/zh-TW.json: 15 changes (11 additions & 4 deletions)
@@ -24,6 +24,8 @@
     "network dim for conv layer in fixed mode": "固定模式下卷積層的網路維度",
     "Sparsity for sparse bias": "稀疏偏差的稀疏度",
     "path for the file to save...": "儲存檔案的路徑...",
+    "Verify LoRA": "驗證 LoRA",
+    "Verify": "驗證",
     "Verification output": "驗證輸出",
     "Verification error": "驗證錯誤",
     "New Rank": "新維度 (Network Rank)",
@@ -137,14 +139,17 @@
     "(Optional) eg: 0.5": " (選填) 例如:0.5",
     "(Optional) eg: 0.1": " (選填) 例如:0.1",
     "Specify the learning rate weight of the down blocks of U-Net.": "指定 U-Net 下區塊的學習率權重。",
-    "Specify the learning rate weight of the mid blocks of U-Net.": "指定 U-Net 中區塊的學習率權重。",
+    "Specify the learning rate weight of the mid block of U-Net.": "指定 U-Net 中區塊的學習率權重。",
     "Specify the learning rate weight of the up blocks of U-Net. The same as down_lr_weight.": "指定 U-Net 上區塊的學習率權重。與 down_lr_weight 相同。",
     "If the weight is not more than this value, the LoRA module is not created. The default is 0.": "如果權重不超過此值,則不會創建 LoRA 模組。預設為 0。",
     "Blocks": "區塊",
     "Block dims": "區塊維度",
     "(Optional) eg: 2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2": " (選填) 例如:2,2,2,2,4,4,4,4,6,6,6,6,8,6,6,6,6,4,4,4,4,2,2,2,2",
     "Specify the dim (rank) of each block. Specify 25 numbers.": "指定每個區塊的維度 (Rank)。指定 25 個數字。",
     "Specify the alpha of each block. Specify 25 numbers as with block_dims. If omitted, the value of network_alpha is used.": "指定每個區塊的 Alpha。與區塊維度一樣,指定 25 個數字。如果省略,則使用網路 Alpha 的值。",
+    "Conv": "卷積",
+    "Conv dims": "卷積維度 (dims)",
+    "Conv alphas": "卷積 Alphas",
     "Extend LoRA to Conv2d 3x3 and specify the dim (rank) of each block. Specify 25 numbers.": "將 LoRA 擴展到 Conv2d 3x3,並指定每個區塊的維度 (Rank)。指定 25 個數字。",
     "Specify the alpha of each block when expanding LoRA to Conv2d 3x3. Specify 25 numbers. If omitted, the value of conv_alpha is used.": "將 LoRA 擴展到 Conv2d 3x3 時,指定每個區塊的 Alpha。指定 25 個數字。如果省略,則使用卷積 Alpha 的值。",
     "Weighted captions": "加權標記文字",
@@ -203,8 +208,8 @@
     "Dreambooth/LoRA Folder preparation": "Dreambooth/LoRA 準備資料夾",
     "Dropout caption every n epochs": "在每 N 個週期 (Epoch) 丟棄標記",
     "DyLoRA model": "DyLoRA 模型",
-    "Dynamic method": "動態方法",
-    "Dynamic parameter": "動態參數",
+    "Dynamic method": "壓縮演算法",
+    "Dynamic parameter": "壓縮參數",
     "e.g., \"by some artist\". Leave empty if you only want to add a prefix or postfix.": "例如,\"由某個藝術家創作\"。如果你只想加入前綴或後綴,請留空白。",
     "e.g., \"by some artist\". Leave empty if you want to replace with nothing.": "例如,\"由某個藝術家創作\"。如果你想用空值取代,請留空白。",
     "Enable buckets": "啟用資料桶",
@@ -227,6 +232,8 @@
     "Flip augmentation": "翻轉增強",
     "float16": "float16",
     "Folders": "資料夾",
+    "U-Net and Text Encoder can be trained with fp8 (experimental)": "U-Net 與 Text Encoder 可以使用 fp8 訓練 (實驗性功能)",
+    "fp8 base training (experimental)": "使用 fp8 基礎訓練 (實驗性功能)",
     "Full bf16 training (experimental)": "完整使用 bf16 訓練 (實驗性功能)",
     "Full fp16 training (experimental)": "完整使用 fp16 訓練 (實驗性功能)",
     "Generate caption files for the grouped images based on their folder name": "根據圖片的資料夾名稱生成標記文字檔案",
@@ -498,4 +505,4 @@
     "Training comment": "訓練註解",
     "Train a TI using kohya textual inversion python code…": "使用 kohya textual inversion Python 程式訓練 TI 模型",
     "Train a custom model using kohya finetune python code…": "使用 kohya finetune Python 程式訓練自定義模型"
-}
+}
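A commit like this can silently break the UI if it leaves the JSON unparsable or introduces a duplicate key (Python's `json.load` would otherwise keep only the last occurrence). The sketch below is not part of the commit; it is a minimal hypothetical sanity check for a localization file such as `localizations/zh-TW.json`, with the function name `check_localization` chosen here for illustration:

```python
import json


def check_localization(path):
    """Parse a localization JSON file, rejecting duplicate keys.

    json.load silently keeps the last value for a repeated key, so we
    hook into pair construction to catch duplicates explicitly.
    """
    def no_dupes(pairs):
        seen = set()
        for key, _ in pairs:
            if key in seen:
                raise ValueError(f"duplicate key: {key!r}")
            seen.add(key)
        return dict(pairs)

    with open(path, encoding="utf-8") as f:
        return json.load(f, object_pairs_hook=no_dupes)
```

Run against the repository checkout, e.g. `check_localization("localizations/zh-TW.json")`; it returns the translation mapping on success and raises on malformed JSON or a repeated key.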
