Open-Source Pre-Processing Tools for Unstructured Data

The unstructured library provides open-source components for pre-processing text documents such as PDFs, HTML, and Word documents. These components are packaged as bricks 🧱, which provide users the building blocks they need to build pipelines targeted at the documents they care about. Bricks in the library fall into three categories:

  • 🧩 Partitioning bricks that break raw documents down into standard, structured elements.
  • 🧹 Cleaning bricks that remove unwanted text from documents, such as boilerplate and sentence fragments.
  • 🎭 Staging bricks that format data for downstream tasks, such as ML inference and data labeling.
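Chained together, the three kinds of bricks form a complete pre-processing pipeline. Below is a minimal sketch of such a pipeline; the brick functions shown (clean_extra_whitespace, convert_to_dict) exist in the library, but treat the exact module paths as assumptions about your installed version:

from unstructured.cleaners.core import clean_extra_whitespace
from unstructured.partition.auto import partition
from unstructured.staging.base import convert_to_dict

# Partitioning brick: break the raw document into structured elements
elements = partition("example-docs/layout-parser-paper.pdf")

# Cleaning brick: normalize whitespace in each element's text in place
for element in elements:
    element.apply(clean_extra_whitespace)

# Staging brick: convert the elements to dictionaries for downstream tasks
element_dicts = convert_to_dict(elements)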

✴️ Installation

To install the library, run pip install unstructured.

☕ Getting Started

  • Using pyenv to manage virtualenvs is recommended but not necessary

    • Mac install instructions (see here for more details):
      • brew install pyenv-virtualenv
      • pyenv install 3.8.15
    • Linux instructions are available here.
  • Create a virtualenv to work in and activate it, e.g. for one named unstructured:

    pyenv virtualenv 3.8.15 unstructured
    pyenv activate unstructured

  • Run make install-project-local

👏 Quick Tour

You can use this Colab notebook to run the examples below.

The following examples show how to get started with the unstructured library. You can parse TXT, HTML, PDF, EML and DOCX documents with one line of code!

See our documentation page for a full description of the features in the library.

Document Parsing

The easiest way to parse a document in unstructured is to use the partition brick. If you use the partition brick, unstructured will detect the file type and route it to the appropriate file-specific partitioning brick. If you are using the partition brick, ensure you first install libmagic using the instructions outlined here. Note that partition always applies the default arguments; if you need advanced features, use a document-specific brick. The partition brick currently works for .txt, .docx, .pptx, .jpg, .png, .eml, .html, and .pdf documents.

from unstructured.partition.auto import partition

elements = partition("example-docs/layout-parser-paper.pdf")

Run print("\n\n".join([str(el) for el in elements])) to get a string representation of the output, which looks like:


LayoutParser : A Unified Toolkit for Deep Learning Based Document Image Analysis

Zejiang Shen 1 ( (cid:0) ), Ruochen Zhang 2 , Melissa Dell 3 , Benjamin Charles Germain Lee 4 , Jacob Carlson 3 , and
Weining Li 5

Abstract. Recent advances in document image analysis (DIA) have been primarily driven by the application of neural
networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation.
However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy
reuse of im- portant innovations by a wide audience. Though there have been on-going efforts to improve reusability and
simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none
of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA
is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper
introduces LayoutParser , an open-source library for streamlining the usage of DL in DIA research and applica- tions.
The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models
for layout de- tection, character recognition, and many other document processing tasks. To promote extensibility,
LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digiti- zation
pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in
real-word use cases. The library is publicly available at https://layout-parser.github.io

Keywords: Document Image Analysis · Deep Learning · Layout Analysis · Character Recognition · Open Source library ·
Toolkit.

Introduction

Deep Learning(DL)-based approaches are the state-of-the-art for a wide range of document image analysis (DIA) tasks
including document image classification [11,
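Each element in the output is an instance of a class such as Title or NarrativeText (these classes appear in the element listings below), so you can filter the parsed document by element type. A small sketch:

from unstructured.documents.elements import NarrativeText

# Keep only the narrative (prose) elements from the parsed document
narrative = [el for el in elements if isinstance(el, NarrativeText)]
print("\n\n".join(str(el) for el in narrative))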

HTML Parsing

You can parse an HTML document using the following workflow:

from unstructured.partition.html import partition_html

elements = partition_html("example-docs/example-10k.html")
print("\n\n".join([str(el) for el in elements[:5]]))

The print statement will show the following text:

UNITED STATES

SECURITIES AND EXCHANGE COMMISSION

Washington, D.C. 20549

FORM 10-K

ANNUAL REPORT PURSUANT TO SECTION 13 OR 15(d) OF THE SECURITIES EXCHANGE ACT OF 1934

And elements will be a list of elements in the HTML document, similar to the following:

[<unstructured.documents.elements.Title at 0x169cbe820>,
 <unstructured.documents.elements.NarrativeText at 0x169cbe8e0>,
 <unstructured.documents.elements.NarrativeText at 0x169cbe3a0>]
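partition_html can also fetch a page for you instead of reading it from disk. A hedged sketch — the url keyword argument and the example URL are assumptions about your installed version:

from unstructured.partition.html import partition_html

# Fetch an HTML page over HTTP and partition it directly
elements = partition_html(url="https://www.example.com")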

PDF Parsing

You can use the following workflow to parse PDF documents.

from unstructured.partition.pdf import partition_pdf

elements = partition_pdf("example-docs/layout-parser-paper.pdf")

The output will look the same as the example from the document parsing section above.
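If your goal is data labeling rather than inference, a staging brick can format the parsed elements for an annotation tool. A minimal sketch using the library's Label Studio staging brick (treat the module path as an assumption about your version):

from unstructured.staging.label_studio import stage_for_label_studio

# Format the parsed elements as upload-ready Label Studio tasks
label_studio_data = stage_for_label_studio(elements)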

E-mail Parsing

The partition_email function within unstructured is helpful for parsing .eml files. Common e-mail clients such as Microsoft Outlook and Gmail support exporting e-mails as .eml files. partition_email accepts filenames, file-like objects, and raw text as input. The following three snippets for parsing .eml files are equivalent:

from unstructured.partition.email import partition_email

elements = partition_email(filename="example-docs/fake-email.eml")

with open("example-docs/fake-email.eml", "r") as f:
  elements = partition_email(file=f)

with open("example-docs/fake-email.eml", "r") as f:
  text = f.read()
elements = partition_email(text=text)

The elements output will look like the following:

[<unstructured.documents.html.HTMLNarrativeText at 0x13ab14370>,
<unstructured.documents.html.HTMLTitle at 0x106877970>,
<unstructured.documents.html.HTMLListItem at 0x1068776a0>,
<unstructured.documents.html.HTMLListItem at 0x13fe4b0a0>]

Run print("\n\n".join([str(el) for el in elements])) to get a string representation of the output, which looks like:

This is a test email to use for unit tests.

Important points:

Roses are red

Violets are blue
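By default, partition_email parses the HTML body of the message. If you need the plain-text body instead, some versions of the library expose a content_source argument; both the keyword and its "text/plain" value below are assumptions about your installed version:

from unstructured.partition.email import partition_email

# Parse the plain-text MIME part instead of the default HTML part
elements = partition_email(
    filename="example-docs/fake-email.eml",
    content_source="text/plain",
)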

Text Document Parsing

The partition_text function within unstructured can be used to parse simple text files into elements.

partition_text accepts filenames, file-like objects, and raw text as input. The following three snippets for parsing text files are equivalent:

from unstructured.partition.text import partition_text

elements = partition_text(filename="example-docs/fake-text.txt")

with open("example-docs/fake-text.txt", "r") as f:
  elements = partition_text(file=f)

with open("example-docs/fake-text.txt", "r") as f:
  text = f.read()
elements = partition_text(text=text)

The elements output will look like the following:

[<unstructured.documents.elements.NarrativeText at 0x13ab14370>,
<unstructured.documents.elements.Title at 0x106877970>,
<unstructured.documents.elements.ListItem at 0x1068776a0>,
<unstructured.documents.elements.ListItem at 0x13fe4b0a0>]

Run print("\n\n".join([str(el) for el in elements])) to get a string representation of the output, which looks like:

This is a test document to use for unit tests.

Important points:

Hamburgers are delicious

Dogs are the best

I love fuzzy blankets
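Once a document is partitioned, cleaning bricks can be applied element by element. A minimal sketch using the library's clean_bullets cleaner, assuming your elements carry leading bullet characters:

from unstructured.cleaners.core import clean_bullets

# Strip leading bullet characters (e.g. "●") from each element's text
for element in elements:
    element.apply(clean_bullets)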

💂‍♂️ Security Policy

See our security policy for information on how to report security vulnerabilities.

📚 Learn more

Section          Description
Company Website  Unstructured.io product and company info
Documentation    Full API documentation
