
🌐 [i18n-KO] Translated philosophy.md to Korean #25010

Merged: 4 commits, Aug 10, 2023
Changes from 2 commits
75 changes: 31 additions & 44 deletions docs/source/ko/philosophy.md
@@ -14,66 +14,53 @@ rendered properly in your Markdown viewer.

-->

# Philosophy [[philosophy]]

🤗 Transformers is an opinionated library built for:

- machine learning researchers and educators seeking to use, study or extend large-scale Transformers models.
- hands-on practitioners who want to fine-tune those models or serve them in production, or both.
- engineers who just want to download a pretrained model and use it to solve a given machine learning task.

The library was designed with two strong goals in mind:

1. Be as easy and fast to use as possible:

   - We strongly limited the number of user-facing abstractions to learn; in fact, there are almost no abstractions,
     just three standard classes required to use each model: [configuration](main_classes/configuration),
     [models](main_classes/model), and a preprocessing class ([tokenizer](main_classes/tokenizer) for NLP, [image processor](main_classes/image_processor) for vision, [feature extractor](main_classes/feature_extractor) for audio, and [processor](main_classes/processors) for multimodal inputs).
   - All of these classes can be initialized in a simple and unified way from pretrained instances by using a common
     `from_pretrained()` method, which downloads (if needed), caches and
     loads the related class instance and associated data (configurations' hyperparameters, tokenizers' vocabulary,
     and models' weights) from a pretrained checkpoint provided on the [Hugging Face Hub](https://huggingface.co/models) or your own saved checkpoint.
   - On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly
     using a model for inference on a given task, and [`Trainer`] to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with `Keras.fit`).
   - As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to
     extend or build upon the library, just use regular Python, PyTorch, TensorFlow, Keras modules and inherit from the base
     classes of the library to reuse functionalities like model loading and saving. If you'd like to learn more about our coding philosophy for models, check out our [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) blog post.
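The `from_pretrained()` behavior described above — download if needed, cache, then load — can be sketched in a few lines of plain Python. This is a toy illustration only, not the transformers implementation: `FAKE_HUB`, `CACHE`, and `ToyConfig` are hypothetical stand-ins for the Hub, the local cache, and a configuration class.

```python
# Toy sketch of the `from_pretrained()` pattern: look up a checkpoint,
# "download" it if it is not cached yet, then build the class from the
# stored data. All names here are illustrative.
FAKE_HUB = {
    "toy-bert": {"num_layers": 2, "hidden_size": 8},
}
CACHE = {}

class ToyConfig:
    def __init__(self, num_layers, hidden_size):
        self.num_layers = num_layers
        self.hidden_size = hidden_size

    @classmethod
    def from_pretrained(cls, checkpoint):
        # Download (if needed) and cache, then load the class instance.
        if checkpoint not in CACHE:
            CACHE[checkpoint] = FAKE_HUB[checkpoint]  # stands in for a download
        return cls(**CACHE[checkpoint])

config = ToyConfig.from_pretrained("toy-bert")
print(config.num_layers, config.hidden_size)  # -> 2 8
```

The real method follows the same shape for configurations, tokenizers, and models alike, which is what makes the initialization story uniform across the library.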

2. Provide state-of-the-art models with performances as close as possible to the original models:

   - We provide at least one example for each architecture which reproduces a result provided by the official authors
     of said architecture.
   - The code is usually as close to the original code base as possible, which means some PyTorch code may not be as
     *pytorchic* as it could be as a result of being converted from TensorFlow code, and vice versa.

A few other goals:

- Expose the models' internals as consistently as possible:

  - We give access, using a single API, to the full hidden-states and attention weights.
  - The preprocessing classes and base model APIs are standardized to easily switch between models.

- Incorporate a subjective selection of promising tools for fine-tuning and investigating these models:

  - A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning.
  - Simple ways to mask and prune Transformer heads.

- Easily switch between PyTorch, TensorFlow 2.0 and Flax, allowing training with one framework and inference with another.
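The "single API for hidden-states and attention weights" goal above can be sketched with a toy model whose forward pass always returns one output object carrying them. The class names here are hypothetical; the real library returns `ModelOutput` subclasses in the same spirit.

```python
# Toy sketch: a model that exposes its internals through one output object,
# rather than requiring different calls per model. Illustrative only.
from dataclasses import dataclass

@dataclass
class ToyOutput:
    last_hidden_state: list
    hidden_states: tuple  # one entry per layer
    attentions: tuple     # one entry per layer

class ToyModel:
    def __init__(self, num_layers):
        self.num_layers = num_layers

    def forward(self, tokens):
        hidden_states, attentions = [], []
        state = list(tokens)
        for _ in range(self.num_layers):
            state = [x + 1 for x in state]  # stand-in for a layer's computation
            hidden_states.append(state)
            # stand-in for that layer's attention weights
            attentions.append([[1.0 / len(state)] * len(state)])
        return ToyOutput(state, tuple(hidden_states), tuple(attentions))

out = ToyModel(num_layers=2).forward([1, 2, 3])
print(out.last_hidden_state)  # -> [3, 4, 5]
```

Because every model returns the same shape of output object, downstream code that inspects hidden states or attention weights does not need to change when you swap models.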

## Main concepts [[main-concepts]]

The library is built around three types of classes for each model:

- **Model classes** can be PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)), Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) or JAX/Flax models ([flax.linen.Module](https://flax.readthedocs.io/en/latest/api_reference/flax.linen.html)) that work with the pretrained weights provided in the library.
- **Configuration classes** store the hyperparameters required to build a model (such as the number of layers and hidden size). You don't always need to instantiate these yourself. In particular, if you are using a pretrained model without any modification, creating the model will automatically take care of instantiating the configuration (which is part of the model).
- **Preprocessing classes** convert the raw data into a format accepted by the model. A [tokenizer](main_classes/tokenizer) stores the vocabulary for each model and provides methods for encoding and decoding strings into a list of token embedding indices to be fed to a model. [Image processors](main_classes/image_processor) preprocess vision inputs, [feature extractors](main_classes/feature_extractor) preprocess audio inputs, and a [processor](main_classes/processors) handles multimodal inputs.

All these classes can be instantiated from pretrained instances, saved locally, and shared on the Hub with three methods:

- `from_pretrained()` lets you instantiate a model, configuration, and preprocessing class from a pretrained version either
  provided by the library itself (the supported models can be found on the [Model Hub](https://huggingface.co/models)) or
  stored locally (or on a server) by the user.
- `save_pretrained()` lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using
  `from_pretrained()`.
- `push_to_hub()` lets you share a model, configuration, and preprocessing class to the Hub, so it is easily accessible to everyone.
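The save/load round trip these methods describe can be sketched with a hypothetical toy class: `save_pretrained()` writes the object's data to a directory, and `from_pretrained()` rebuilds it from that directory. The real classes also handle weights, vocabularies, and Hub uploads, which this sketch omits.

```python
# Minimal sketch of the save_pretrained()/from_pretrained() round trip,
# using a JSON file in a directory. MyConfig is a hypothetical stand-in.
import json
import os
import tempfile

class MyConfig:
    def __init__(self, num_layers, hidden_size):
        self.num_layers = num_layers
        self.hidden_size = hidden_size

    def save_pretrained(self, save_directory):
        # Persist the hyperparameters so they can be reloaded later.
        os.makedirs(save_directory, exist_ok=True)
        with open(os.path.join(save_directory, "config.json"), "w") as f:
            json.dump(self.__dict__, f)

    @classmethod
    def from_pretrained(cls, save_directory):
        # Rebuild the instance from the saved data.
        with open(os.path.join(save_directory, "config.json")) as f:
            return cls(**json.load(f))

with tempfile.TemporaryDirectory() as tmp:
    MyConfig(num_layers=4, hidden_size=16).save_pretrained(tmp)
    reloaded = MyConfig.from_pretrained(tmp)

print(reloaded.num_layers, reloaded.hidden_size)  # -> 4 16
```

`push_to_hub()` extends the same idea by uploading the saved files to a remote repository instead of (only) a local directory.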
