CLI to upload arbitrary huge folder #2254

Merged · 97 commits · Aug 29, 2024
Changes from 94 commits
1004434
still an early draft
Wauplin Apr 12, 2024
5a8605e
this is better
Wauplin Apr 12, 2024
68a6cf1
fix
Wauplin Apr 12, 2024
9b25f38
Merge branch 'main' into 1738-revampt-download-local-dir
Wauplin Apr 24, 2024
8e903f8
revampt/refactor download process
Wauplin Apr 24, 2024
5f610ee
resume download by default + do not upload .huggingface folder
Wauplin Apr 24, 2024
5a9762f
compute sha256 if necessary
Wauplin Apr 24, 2024
283977d
fix hash
Wauplin Apr 24, 2024
e909022
add tests + fix some stuff
Wauplin Apr 24, 2024
39cfef4
fix snapshot download tests
Wauplin Apr 24, 2024
dbece97
fix test
Wauplin Apr 24, 2024
0206964
lots of docs
Wauplin Apr 24, 2024
82b46b3
add secu
Wauplin Apr 24, 2024
3300b28
as constant
Wauplin Apr 24, 2024
c606a94
dix
Wauplin Apr 24, 2024
95171ef
fix tests
Wauplin Apr 24, 2024
7180746
remove unused code
Wauplin Apr 24, 2024
4e664d4
don't use jsons
Wauplin Apr 24, 2024
7bb263e
style
Wauplin Apr 24, 2024
3595042
Apply suggestions from code review
Wauplin Apr 25, 2024
3401880
Apply suggestions from code review
Wauplin Apr 25, 2024
9210648
Warn more about resume_download
Wauplin Apr 25, 2024
fb477e5
fix test
Wauplin Apr 25, 2024
0eacbc9
Add tests specific to .huggingface folder
Wauplin Apr 25, 2024
1a4320a
remove advice to use hf_transfer when downloading from cli
Wauplin Apr 25, 2024
8c9dc8b
fix torhc test
Wauplin Apr 25, 2024
6260a17
more test fix
Wauplin Apr 25, 2024
c788e2d
Merge branch 'main' into 1738-revampt-download-local-dir
Wauplin Apr 25, 2024
3a45f4b
feedback
Wauplin Apr 25, 2024
30abe07
First draft for large upload CLI
Wauplin Apr 26, 2024
4317f5a
Fixes + CLI
Wauplin Apr 26, 2024
94951e4
verbose by default
Wauplin Apr 26, 2024
0061401
ask for report
Wauplin Apr 26, 2024
8d31fc0
line
Wauplin Apr 26, 2024
4f6f531
suggested changes
Wauplin Apr 26, 2024
41c8ae3
more robust
Wauplin Apr 26, 2024
84f55ff
Apply suggestions from code review
Wauplin Apr 29, 2024
f5d1faa
comment
Wauplin Apr 29, 2024
edc9790
commen
Wauplin Apr 29, 2024
a768426
Merge branch 'main' into 1738-revampt-download-local-dir
Wauplin Apr 29, 2024
d0ea3ea
robust tests
Wauplin Apr 29, 2024
dffa539
fix CI
Wauplin Apr 29, 2024
d414825
ez
Wauplin Apr 29, 2024
0a5605c
rules update
Wauplin Apr 29, 2024
9e6d569
more ribust?
Wauplin Apr 29, 2024
e6fe766
allow for 1s diff
Wauplin Apr 29, 2024
28991c9
don't raise on unlink
Wauplin Apr 29, 2024
a0b61a1
style
Wauplin Apr 29, 2024
fccabe0
robustenss
Wauplin Apr 29, 2024
c089477
Merge branch '1738-revampt-download-local-dir' into large-upload-cli
Wauplin Apr 29, 2024
6295ead
Merge branch 'main' into large-upload-cli
Wauplin Apr 29, 2024
06c9dec
tqdm while recovering
Wauplin Apr 29, 2024
d6e4163
Merge branch 'main' into large-upload-cli
Wauplin May 22, 2024
33be5f0
Merge branch 'main' into large-upload-cli
Wauplin Jun 10, 2024
e48d525
Merge branch 'main' into large-upload-cli
Wauplin Jul 12, 2024
64352de
make sure upload paths are correct on windows
Wauplin Jul 12, 2024
d936caf
test get_local_upload_paths
Wauplin Jul 12, 2024
8dae1ac
only 1 preupload LFS at a time if hf_transfer enabled
Wauplin Jul 12, 2024
8326ea2
upload one at a time if hf_transfer
Wauplin Jul 15, 2024
e747f9b
Add waiting workers in report
Wauplin Jul 15, 2024
3d7d5a3
better reporting
Wauplin Jul 15, 2024
07650cd
Merge branch 'main' into large-upload-cli
Wauplin Jul 15, 2024
095aa90
raise on KeyboardInterrupt + can disable bars
Wauplin Jul 15, 2024
5119e74
Merge branch 'main' into large-upload-cli
Wauplin Jul 15, 2024
76af338
Merge branch 'main' into large-upload-cli
Wauplin Jul 19, 2024
bf0f967
fix type annotation on Python3.8
Wauplin Jul 19, 2024
ba7f248
make repo_type required
Wauplin Jul 19, 2024
dd3c31c
Merge branch 'main' into large-upload-cli
Wauplin Jul 22, 2024
b134e17
dcostring
Wauplin Jul 22, 2024
b725f94
style
Wauplin Jul 22, 2024
cd8bac2
fix circular import
Wauplin Jul 22, 2024
4ccfec0
docs
Wauplin Jul 22, 2024
d8f6846
docstring
Wauplin Jul 22, 2024
5f16da2
init
Wauplin Jul 22, 2024
f03b55d
guide
Wauplin Jul 22, 2024
da0bb42
dedup
Wauplin Jul 22, 2024
1b04201
instructions
Wauplin Jul 22, 2024
84c65a5
add test
Wauplin Jul 22, 2024
f31dfee
styl
Wauplin Jul 22, 2024
d0ae3a0
Merge branch 'main' into large-upload-cli
Wauplin Jul 29, 2024
602f299
tips
Wauplin Jul 29, 2024
dabd594
Apply suggestions from code review
Wauplin Jul 29, 2024
5583f0c
typo
Wauplin Jul 29, 2024
69dd31d
remove comment
Wauplin Jul 29, 2024
86edb11
comments
Wauplin Jul 29, 2024
bc7129f
move determine_task to its own method
Wauplin Jul 29, 2024
0dc048a
rename to upload_large_folder
Wauplin Jul 29, 2024
8181193
fix md
Wauplin Jul 29, 2024
b568a54
update
Wauplin Jul 29, 2024
3bbdc2e
dont wait on exit
Wauplin Jul 29, 2024
ea765be
Fix typo in docs
Wauplin Jul 31, 2024
46ce7a8
Apply suggestions from code review
Wauplin Aug 12, 2024
ba6e4a8
add PR tips
Wauplin Aug 12, 2024
3e2db8e
Merge branch 'large-upload-cli' of github.com:huggingface/huggingface…
Wauplin Aug 12, 2024
cac2b50
Merge branch 'main' into large-upload-cli
Wauplin Aug 29, 2024
cdfb27f
add comment
Wauplin Aug 29, 2024
6d8017a
add comment about --no-bars
Wauplin Aug 29, 2024
2 changes: 2 additions & 0 deletions .gitignore
@@ -138,3 +138,5 @@ dmypy.json

# Spell checker config
cspell.json

tmp*
104 changes: 74 additions & 30 deletions docs/source/en/guides/upload.md
@@ -103,6 +103,80 @@ set, files are uploaded at the root of the repo.

For more details about the CLI upload command, please refer to the [CLI guide](./cli#huggingface-cli-upload).

## Upload a large folder

In most cases, the [`upload_folder`] method and `huggingface-cli upload` command should be the go-to solutions to upload files to the Hub. They ensure a single commit will be made, handle a lot of use cases, and fail explicitly when something goes wrong. However, when dealing with a large amount of data, you will usually prefer a resilient process, even if it leads to more commits or requires more CPU usage. The [`upload_large_folder`] method has been implemented in that spirit:
- it is resumable: the upload process is split into many small tasks (hashing files, pre-uploading them, and committing them). Each time a task is completed, the result is cached locally in a `./.cache/huggingface` folder inside the folder you are trying to upload. This way, restarting the process after an interruption skips all tasks that have already completed.
- it is multi-threaded: hashing large files and pre-uploading them benefits a lot from multithreading if your machine allows it.
- it is resilient to errors: a high-level retry mechanism has been added to retry each independent task indefinitely until it passes (no matter if it's an OSError, ConnectionError, PermissionError, etc.). This mechanism is double-edged: if transient errors happen, the process will continue and retry; if permanent errors happen (e.g. permission denied), it will retry indefinitely without solving the root cause. A conceptual sketch of this retry behavior is shown below.
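
The following is only a conceptual sketch of the "retry indefinitely" behavior described above, not the actual implementation used by [`upload_large_folder`]:

```py
import time


def run_with_retry(task, delay: float = 10.0):
    """Illustrative only: retry a task forever until it succeeds.

    This mirrors the spirit of the high-level retry mechanism described above,
    where both transient and permanent errors lead to a retry.
    """
    while True:
        try:
            return task()
        except Exception as e:  # e.g. OSError, ConnectionError, PermissionError
            print(f"Task failed ({e!r}). Retrying in {delay}s...")
            time.sleep(delay)
```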

If you want more technical details about how `upload_large_folder` is implemented under the hood, please have a look at the [`upload_large_folder`] package reference.

Here is how to use [`upload_large_folder`] in a script. The method signature is very similar to [`upload_folder`]:

```py
>>> api.upload_large_folder(
... repo_id="HuggingFaceM4/Docmatix",
... repo_type="dataset",
... folder_path="/path/to/local/docmatix",
... )
```
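
In the snippet above, `api` is assumed to be an [`HfApi`] instance, for example:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
```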

You will see the following output in your terminal:
```
Repo created: https://huggingface.co/datasets/HuggingFaceM4/Docmatix
Found 5 candidate files to upload
Recovering from metadata files: 100%|█████████████████████████████████████| 5/5 [00:00<00:00, 542.66it/s]

---------- 2024-07-22 17:23:17 (0:00:00) ----------
Files: hashed 5/5 (5.0G/5.0G) | pre-uploaded: 0/5 (0.0/5.0G) | committed: 0/5 (0.0/5.0G) | ignored: 0
Workers: hashing: 0 | get upload mode: 0 | pre-uploading: 5 | committing: 0 | waiting: 11
---------------------------------------------------
```

First, the repo is created if it doesn't exist yet. Then, the local folder is scanned for files to upload. For each file, we try to recover metadata information (from a previously interrupted upload). From there, workers are launched and a status update is printed every minute. Here, we can see that all 5 files have already been hashed but not yet pre-uploaded. 5 workers are pre-uploading files while the 11 others are waiting for a task.
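Depending on your version of `huggingface_hub`, the reporting behavior may be tunable through parameters such as `print_report` and `print_report_every` (parameter names are an assumption here; check the [`upload_large_folder`] reference for the exact signature):

```py
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
...     print_report_every=300,  # assumed parameter: print a status update every 5 minutes
... )
```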

A command-line interface is also provided. You can define the number of workers and the level of verbosity in the terminal:

```sh
huggingface-cli upload-large-folder HuggingFaceM4/Docmatix --repo-type=dataset /path/to/local/docmatix --num-workers=16
```
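
Progress bars can also be disabled; a `--no-bars` flag is mentioned in this PR (run `huggingface-cli upload-large-folder --help` to confirm the exact flags available in your version):

```sh
huggingface-cli upload-large-folder HuggingFaceM4/Docmatix --repo-type=dataset /path/to/local/docmatix --num-workers=16 --no-bars
```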

<Tip>

For large uploads, you have to set `repo_type="model"` or `--repo-type=model` explicitly. Usually, this information is implicit in all other `HfApi` methods. Requiring it here avoids accidentally uploading data to a repository with the wrong type; if that happened, you would have to re-upload everything.

</Tip>

<Tip warning={true}>

While much more robust for uploading large folders, `upload_large_folder` is more limited than [`upload_folder`] feature-wise. In practice:
- you cannot set a custom `path_in_repo`. If you want to upload to a subfolder, you need to set the proper structure locally.
- you cannot set a custom `commit_message` or `commit_description`, since multiple commits are created.

Contributor comment: For a follow-up PR: I think we could support this. The PR title could be the commit message (and then just add "Part 1" or something like that)

- you cannot delete from the repo while uploading. Please make a separate commit first.
- you cannot create a PR directly. Please create a PR first (from the UI or using [`create_pull_request`]) and then commit to it by passing `revision` (see the example after this tip).

</Tip>
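
For instance, a sketch of uploading to a pull request could look like this. It assumes [`create_pull_request`] returns a discussion object whose `git_reference` (e.g. `"refs/pr/1"`) can be passed as `revision`; adapt it to the API of your `huggingface_hub` version:

```py
>>> from huggingface_hub import HfApi
>>> api = HfApi()
>>> pr = api.create_pull_request(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     title="Add Docmatix data",  # hypothetical PR title
... )
>>> api.upload_large_folder(
...     repo_id="HuggingFaceM4/Docmatix",
...     repo_type="dataset",
...     folder_path="/path/to/local/docmatix",
...     revision=pr.git_reference,  # assumed attribute pointing to the PR ref
... )
```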

### Tips and tricks for large uploads

There are some limitations to be aware of when dealing with a large amount of data in your repo. Given the time it takes to stream the data, getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying.

Check out our [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) guide for best practices on how to structure your repositories on the Hub. Let's move on with some practical tips to make your upload process as smooth as possible.

- **Start small**: We recommend starting with a small amount of data to test your upload script. It's easier to iterate on a script when failing takes only a little time.
- **Expect failures**: Streaming large amounts of data is challenging. You don't know what can happen, but it's always best to assume that something will fail at least once, whether it's due to your machine, your connection, or our servers. For example, if you plan to upload a large number of files, it's best to keep track locally of which files you have already uploaded before uploading the next batch. You are ensured that an LFS file that is already committed will never be re-uploaded, but checking it client-side can still save some time. This is what [`upload_large_folder`] does for you.
- **Use `hf_transfer`**: this is a Rust-based [library](https://github.com/huggingface/hf_transfer) meant to speed up uploads on machines with very high bandwidth. To use `hf_transfer`:
1. Specify the `hf_transfer` extra when installing `huggingface_hub`
(i.e., `pip install huggingface_hub[hf_transfer]`).
2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable.
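
For instance, installing the extra and enabling the environment variable could look like this (illustrative shell snippet):

```sh
pip install "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli upload-large-folder HuggingFaceM4/Docmatix --repo-type=dataset /path/to/local/docmatix
```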

<Tip warning={true}>

`hf_transfer` is a power user tool! It is tested and production-ready, but it lacks user-friendly features like advanced error handling or proxies. For more details, please take a look at this [section](https://huggingface.co/docs/huggingface_hub/hf_transfer).

</Tip>

## Advanced features

In most cases, you won't need more than [`upload_file`] and [`upload_folder`] to upload your files to the Hub.
@@ -418,36 +492,6 @@ you don't store another reference to it. This is expected as we don't want to ke
already uploaded. Finally we create the commit by passing all the operations to [`create_commit`]. You can pass
additional operations (add, delete or copy) that have not been processed yet and they will be handled correctly.

## Tips and tricks for large uploads

There are some limitations to be aware of when dealing with a large amount of data in your repo. Given the time it takes to stream the data,
getting an upload/push to fail at the end of the process or encountering a degraded experience, be it on hf.co or when working locally, can be very annoying.

Check out our [Repository limitations and recommendations](https://huggingface.co/docs/hub/repositories-recommendations) guide for best practices on how to structure your repositories on the Hub. Next, let's move on with some practical tips to make your upload process as smooth as possible.

- **Start small**: We recommend starting with a small amount of data to test your upload script. It's easier to iterate
on a script when failing takes only a little time.
- **Expect failures**: Streaming large amounts of data is challenging. You don't know what can happen, but it's always
best to consider that something will fail at least once -no matter if it's due to your machine, your connection, or our
servers. For example, if you plan to upload a large number of files, it's best to keep track locally of which files you
already uploaded before uploading the next batch. You are ensured that an LFS file that is already committed will never
be re-uploaded twice but checking it client-side can still save some time.
- **Use `hf_transfer`**: this is a Rust-based [library](https://github.com/huggingface/hf_transfer) meant to speed up
uploads on machines with very high bandwidth. To use `hf_transfer`:

1. Specify the `hf_transfer` extra when installing `huggingface_hub`
(e.g. `pip install huggingface_hub[hf_transfer]`).
2. Set `HF_HUB_ENABLE_HF_TRANSFER=1` as an environment variable.

<Tip warning={true}>

`hf_transfer` is a power user tool!
It is tested and production-ready,
but it lacks user-friendly features like advanced error handling or proxies.
For more details, please take a look at this [section](https://huggingface.co/docs/huggingface_hub/hf_transfer).

</Tip>

## (legacy) Upload files with Git LFS

All the methods described above use the Hub's API to upload files. This is the recommended way to upload files to the Hub.
2 changes: 2 additions & 0 deletions src/huggingface_hub/__init__.py
@@ -252,6 +252,7 @@
"update_webhook",
"upload_file",
"upload_folder",
"upload_large_folder",
"whoami",
],
"hf_file_system": [
@@ -756,6 +757,7 @@ def __dir__():
update_webhook, # noqa: F401
upload_file, # noqa: F401
upload_folder, # noqa: F401
upload_large_folder, # noqa: F401
whoami, # noqa: F401
)
from .hf_file_system import (
195 changes: 191 additions & 4 deletions src/huggingface_hub/_local_folder.py
@@ -34,7 +34,7 @@
└── [ 16] file.parquet


Metadata file structure:
Download metadata file structure:
```
# file.txt.metadata
11c5a3d5811f50298f278a704980280950aedb10
@@ -68,7 +68,7 @@ class LocalDownloadFilePaths:
"""
Paths to the files related to a download process in a local dir.

Returned by `get_local_download_paths`.
Returned by [`get_local_download_paths`].

Attributes:
file_path (`Path`):
@@ -88,6 +88,30 @@ def incomplete_path(self, etag: str) -> Path:
return self.metadata_path.with_suffix(f".{etag}.incomplete")


@dataclass(frozen=True)
class LocalUploadFilePaths:
    """
    Paths to the files related to an upload process in a local dir.

    Returned by [`get_local_upload_paths`].

    Attributes:
        path_in_repo (`str`):
            Path of the file in the repo.
        file_path (`Path`):
            Path where the file will be saved.
        lock_path (`Path`):
            Path to the lock file used to ensure atomicity when reading/writing metadata.
        metadata_path (`Path`):
            Path to the metadata file.
    """

    path_in_repo: str
    file_path: Path
    lock_path: Path
    metadata_path: Path


@dataclass
class LocalDownloadFileMetadata:
"""
@@ -111,6 +135,50 @@ class LocalDownloadFileMetadata:
timestamp: float


@dataclass
class LocalUploadFileMetadata:
    """
    Metadata about a file in the local directory related to an upload process.
    """

    size: int

    # Default values correspond to "we don't know yet"
    timestamp: Optional[float] = None
    should_ignore: Optional[bool] = None
    sha256: Optional[str] = None
    upload_mode: Optional[str] = None
    is_uploaded: bool = False
    is_committed: bool = False

    def save(self, paths: LocalUploadFilePaths) -> None:
        """Save the metadata to disk."""
        with WeakFileLock(paths.lock_path):
            with paths.metadata_path.open("w") as f:
                new_timestamp = time.time()
                f.write(str(new_timestamp) + "\n")

                f.write(str(self.size))  # never None
                f.write("\n")

                if self.should_ignore is not None:
                    f.write(str(int(self.should_ignore)))
                    f.write("\n")

                if self.sha256 is not None:
                    f.write(self.sha256)
                    f.write("\n")

                if self.upload_mode is not None:
                    f.write(self.upload_mode)
                    f.write("\n")

                f.write(str(int(self.is_uploaded)) + "\n")
                f.write(str(int(self.is_committed)) + "\n")

            self.timestamp = new_timestamp
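
For reference, a fully populated upload metadata file written by `save` above contains one value per line in the order written (timestamp, size, should_ignore, sha256, upload_mode, is_uploaded, is_committed). An illustrative example with made-up values:

```
1721662997.123456
16
0
9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
lfs
1
0
```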


@lru_cache(maxsize=128) # ensure singleton
def get_local_download_paths(local_dir: Path, filename: str) -> LocalDownloadFilePaths:
"""Compute paths to the files related to a download process.
@@ -152,6 +220,49 @@ def get_local_download_paths(local_dir: Path, filename: str) -> LocalDownloadFil
return LocalDownloadFilePaths(file_path=file_path, lock_path=lock_path, metadata_path=metadata_path)


@lru_cache(maxsize=128)  # ensure singleton
def get_local_upload_paths(local_dir: Path, filename: str) -> LocalUploadFilePaths:
    """Compute paths to the files related to an upload process.

    Folders containing the paths are all guaranteed to exist.

    Args:
        local_dir (`Path`):
            Path to the local directory that is uploaded.
        filename (`str`):
            Path of the file in the repo.

    Return:
        [`LocalUploadFilePaths`]: the paths to the files (file_path, lock_path, metadata_path).
    """
    # filename is the path in the Hub repository (separated by '/')
    # make sure to have a cross platform transcription
    sanitized_filename = os.path.join(*filename.split("/"))
    if os.name == "nt":
        if sanitized_filename.startswith("..\\") or "\\..\\" in sanitized_filename:
            raise ValueError(
                f"Invalid filename: cannot handle filename '{sanitized_filename}' on Windows. Please ask the repository"
                " owner to rename this file."
            )
    file_path = local_dir / sanitized_filename
    metadata_path = _huggingface_dir(local_dir) / "upload" / f"{sanitized_filename}.metadata"
    lock_path = metadata_path.with_suffix(".lock")

    # Some Windows versions do not allow for paths longer than 255 characters.
    # In this case, we must specify it as an extended path by using the "\\?\" prefix
    if os.name == "nt":
        if not str(local_dir).startswith("\\\\?\\") and len(os.path.abspath(lock_path)) > 255:
            file_path = Path("\\\\?\\" + os.path.abspath(file_path))
            lock_path = Path("\\\\?\\" + os.path.abspath(lock_path))
            metadata_path = Path("\\\\?\\" + os.path.abspath(metadata_path))

    file_path.parent.mkdir(parents=True, exist_ok=True)
    metadata_path.parent.mkdir(parents=True, exist_ok=True)
    return LocalUploadFilePaths(
        path_in_repo=filename, file_path=file_path, lock_path=lock_path, metadata_path=metadata_path
    )


def read_download_metadata(local_dir: Path, filename: str) -> Optional[LocalDownloadFileMetadata]:
"""Read metadata about a file in the local directory related to a download process.

@@ -165,8 +276,6 @@ def read_download_metadata(local_dir: Path, filename: str) -> Optional[LocalDown
`[LocalDownloadFileMetadata]` or `None`: the metadata if it exists, `None` otherwise.
"""
paths = get_local_download_paths(local_dir, filename)
# file_path = local_file_path(local_dir, filename)
# lock_path, metadata_path = _download_metadata_file_path(local_dir, filename)
with WeakFileLock(paths.lock_path):
if paths.metadata_path.exists():
try:
@@ -204,6 +313,84 @@ def read_download_metadata(local_dir: Path, filename: str) -> Optional[LocalDown
return None


def read_upload_metadata(local_dir: Path, filename: str) -> LocalUploadFileMetadata:
    """Read metadata about a file in the local directory related to an upload process.

    TODO: factorize logic with `read_download_metadata`.

    Args:
        local_dir (`Path`):
            Path to the local directory from which files are uploaded.
        filename (`str`):
            Path of the file in the repo.

    Return:
        [`LocalUploadFileMetadata`]: the metadata read from disk if it exists and is up to date,
        otherwise a new metadata object containing only the file size.
    """
    paths = get_local_upload_paths(local_dir, filename)
    with WeakFileLock(paths.lock_path):
        if paths.metadata_path.exists():
            try:
                with paths.metadata_path.open() as f:
                    timestamp = float(f.readline().strip())

                    size = int(f.readline().strip())  # never None

                    _should_ignore = f.readline().strip()
                    should_ignore = None if _should_ignore == "" else bool(int(_should_ignore))

                    _sha256 = f.readline().strip()
                    sha256 = None if _sha256 == "" else _sha256

                    _upload_mode = f.readline().strip()
                    upload_mode = None if _upload_mode == "" else _upload_mode
                    if upload_mode not in (None, "regular", "lfs"):
                        raise ValueError(f"Invalid upload mode in metadata {paths.path_in_repo}: {upload_mode}")

                    is_uploaded = bool(int(f.readline().strip()))
                    is_committed = bool(int(f.readline().strip()))

                    metadata = LocalUploadFileMetadata(
                        timestamp=timestamp,
                        size=size,
                        should_ignore=should_ignore,
                        sha256=sha256,
                        upload_mode=upload_mode,
                        is_uploaded=is_uploaded,
                        is_committed=is_committed,
                    )
            except Exception as e:
                # remove the metadata file if it is corrupted / not the right format
                logger.warning(
                    f"Invalid metadata file {paths.metadata_path}: {e}. Removing it from disk and continuing."
                )
                try:
                    paths.metadata_path.unlink()
                except Exception as e:
                    logger.warning(f"Could not remove corrupted metadata file {paths.metadata_path}: {e}")

            # TODO: can we do better?
            if (
                metadata.timestamp is not None
                and metadata.is_uploaded  # file was uploaded
                and not metadata.is_committed  # but not committed
                and time.time() - metadata.timestamp > 20 * 3600  # and it's been more than 20 hours
            ):  # => we consider it as garbage-collected by S3
                metadata.is_uploaded = False

            # check if the file exists and hasn't been modified since the metadata was saved
            try:
                if metadata.timestamp is not None and paths.file_path.stat().st_mtime <= metadata.timestamp:
                    return metadata
                logger.info(f"Ignored metadata for '{filename}' (outdated). Will re-compute hash.")
            except FileNotFoundError:
                # file does not exist => metadata is outdated
                pass

    # empty metadata => we don't know anything except its size
    return LocalUploadFileMetadata(size=paths.file_path.stat().st_size)


def write_download_metadata(local_dir: Path, filename: str, commit_hash: str, etag: str) -> None:
"""Write metadata about a file in the local directory related to a download process.

Expand Down