|
1 | 1 |
|
2 | | -# Database Analysis |
| 2 | +# Database Analysis Toolkit |
3 | 3 |
|
4 | | -## Overview |
5 | | - |
6 | | -Database-Analysis is a Python Jupyter notebook designed to ensure data integrity by identifying inconsistencies between two flat files. One of these files serves as the database for an corporation/organization . The tool logs any inconsistencies found, facilitating easy identification and correction of data issues. In addition, it provides some analytics that may be of use. |
| 4 | +The **Database Analysis Toolkit** is a Python-based tool designed to perform comprehensive data analysis on large datasets, currently focused on geospatial analysis and fuzzy matching. The toolkit supports several data formats and provides configurable options to tailor the analysis to specific needs. It is particularly useful for data engineers, data scientists, and analysts who work with large datasets and need to perform advanced data-processing tasks.
7 | 5 |
|
8 | 6 | ## Features |
9 | 7 |
|
10 | | -- **Data Integrity Checks**: Compares two flat files and logs inconsistencies. |
11 | | -- **Detailed Logging**: Generates a comprehensive log of all inconsistencies found. |
12 | | -- **User-Friendly Interface**: Easy-to-use Jupyter notebook interface. |
13 | | -- **Customizable**: Easily adaptable for different data formats and validation rules. |
| 8 | +- **Geospatial Analysis**: Calculate distances between geographical coordinates using the Haversine formula and identify clusters within a specified threshold. |
| 9 | +- **Fuzzy Matching**: Identify and group similar records within a dataset based on configurable matching criteria. |
| 10 | +- **Support for Multiple File Formats**: Easily load and process data from CSV, Excel, JSON, Parquet, and Feather files. |
| 11 | +- **Customizable**: Configurable through a YAML file or command-line arguments, allowing users to adjust the analysis according to their needs. |
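The geospatial feature above rests on the Haversine formula. The toolkit itself depends on the `haversine` package, but the formula is small enough to show as a dependency-free sketch (coordinates and threshold below are illustrative, not from the project):

```python
import math

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) pairs, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))  # mean Earth radius ~6371 km

office = (40.7128, -74.0060)  # illustrative points roughly 120 m apart
branch = (40.7138, -74.0065)
distance = haversine_km(office, branch)

# Points whose pairwise distance falls under the configured threshold
# would be grouped into one cluster by the analysis.
threshold_km = 0.5  # illustrative; see geospatial_threshold in the config
is_clustered = distance < threshold_km
```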
14 | 12 |
|
15 | | -## Installation |
| 13 | +## Project Structure |
16 | 14 |
|
17 | | -Follow these steps to set up your environment and run the Jupyter notebook: |
| 15 | +```plaintext |
| 16 | +. |
| 17 | +├── config/ |
| 18 | +│ └── config.yaml # Configuration file for the analysis |
| 19 | +├── data/ |
| 20 | +│ └── input_file.csv # Input data files (CSV, Excel, JSON, Parquet, Feather) |
| 21 | +├── env/ |
| 22 | +│ ├── linux/environment.yml # Conda environment file for Linux |
| 23 | +│ └── win/environment.yml # Conda environment file for Windows |
| 24 | +├── logs/ |
| 25 | +│ └── logfile.log # Log file storing all logging information |
| 26 | +├── modules/ |
| 27 | +│ ├── data_loader.py # Module for loading data from various formats |
| 28 | +│ ├── fuzzy_matching.py # Module for performing fuzzy matching |
| 29 | +│ └── geospatial_analysis.py # Module for performing geospatial analysis |
| 30 | +├── results/ |
| 31 | +│ └── output_file.csv # Output files generated by the analysis |
| 32 | +├── util/ |
| 33 | +│ └── util.py # Utility functions for saving files and other tasks |
| 34 | +├── database-analysis.py # Main script to run the analysis |
| 35 | +└── README.md # Project documentation |
| 36 | +``` |
| 37 | + |
| 38 | +## Installation |
18 | 39 |
|
19 | 40 | ### Prerequisites |
20 | 41 |
|
21 | | -- Python 3.7 or later |
22 | | -- Jupyter Notebook |
23 | | -- Virtual Environment (recommended) |
| 42 | +- **Conda**: Ensure you have Conda installed. You can install it from [here](https://docs.conda.io/en/latest/miniconda.html). |
| 43 | +- **Python 3.11 or later**: The project is compatible with Python 3.11 and above. |
24 | 44 |
|
25 | 45 | ### Setting Up the Environment |
26 | 46 |
|
27 | | -1. **Clone the Repository** |
28 | | - |
29 | | - ```bash |
30 | | - git clone https://github.com/umarhunter/database-analysis.git |
31 | | - ``` |
32 | | - |
33 | | -2. **Enter the Repo** |
34 | | - ```bash |
35 | | - cd database-analysis |
36 | | - ``` |
37 | | - |
38 | | -3. **Create a Virtual Environment** |
39 | | - |
40 | | - It's recommended to use a virtual environment to manage dependencies. |
41 | | -
|
42 | | - ```bash |
43 | | - python3 -m venv env |
44 | | - ``` |
45 | | -
|
46 | | -4. **Activate the Virtual Environment** |
47 | | -
|
48 | | - - On Windows: |
49 | | -
|
50 | | - ```bash |
51 | | - .\env\Scripts\activate |
52 | | - ``` |
53 | | -
|
54 | | - - On macOS and Linux: |
55 | | -
|
56 | | - ```bash |
57 | | - source env/bin/activate |
58 | | - ``` |
59 | | -
|
60 | | -5. **Install Dependencies** |
61 | | -
|
62 | | - ```bash |
63 | | - pip install -r requirements.txt |
64 | | - ``` |
| 47 | +To create the Conda environment with all necessary dependencies, use the environment file for your platform from the `env/` directory:
65 | 48 |
|
| 49 | +```bash |
| 50 | +conda env create -f env/linux/environment.yml  # on Windows, use env/win/environment.yml
| 51 | +``` |
66 | 52 |
|
67 | | -### Setting Up Jupyter |
68 | | -
|
69 | | -1. **Install Jupyter** |
70 | | -
|
71 | | - If you don't already have Jupyter installed, you can install it using pip: |
72 | | - |
73 | | - ```bash |
74 | | - pip install notebook |
75 | | - ``` |
| 53 | +Activate the environment: |
76 | 54 |
|
77 | | -2. **Start Jupyter Notebook** |
| 55 | +```bash |
| 56 | +conda activate database-analysis-env |
| 57 | +``` |
78 | 58 |
|
79 | | - Navigate to the project directory and start Jupyter Notebook: |
| 59 | +### Manual Installation |
80 | 60 |
|
81 | | - ```bash |
82 | | - jupyter notebook |
83 | | - ``` |
| 61 | +If you prefer to install the dependencies manually or without Conda, you can install them using `pip` (note that pandas also needs `openpyxl` to read Excel files and `pyarrow` for Parquet and Feather):
84 | 62 |
|
85 | | -3. **Open the Notebook** |
| 63 | +```bash |
| 64 | +pip install pandas rapidfuzz haversine pyyaml |
| 65 | +``` |
86 | 66 |
|
87 | | - In the Jupyter interface, open `database-analysis.ipynb`. |
| 67 | +## Configuration |
| 68 | + |
| 69 | +The toolkit uses a YAML configuration file (`config/config.yaml`) to define various parameters for the analysis, such as: |
| 70 | + |
| 71 | +- **Input and Output Files**: Specify paths for input data and output results. |
| 72 | +- **Analysis Options**: Enable or disable geospatial analysis and fuzzy matching. |
| 73 | +- **Sorting and Thresholds**: Define columns for sorting and thresholds for matching. |
| 74 | + |
| 75 | +### Example Configuration |
| 76 | + |
| 77 | +Here’s a sample `config.yaml` file: |
| 78 | + |
| 79 | +```yaml |
| 80 | +input_file: "data/input.csv" |
| 81 | +output_file: "results/output.csv" |
| 82 | +sort_by_columns: |
| 83 | + - "first_name" |
| 84 | + - "last_name" |
| 85 | +geospatial_analysis: true
| 86 | +geospatial_columns: |
| 87 | + - "latitude" |
| 88 | + - "longitude" |
| 89 | +geospatial_threshold: 0.005 |
| 90 | +fuzzy_matching: true
| 91 | +fuzzy_columns: |
| 92 | + - "address" |
| 93 | +fuzzy_threshold: 0.8 |
| 94 | +``` |
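Loading such a file is a one-liner with PyYAML, which is listed in the dependencies. A sketch, parsing an inline snippet of the configuration for brevity:

```python
import yaml  # PyYAML, listed among the project's dependencies

# Inline snippet standing in for the contents of config/config.yaml.
config_text = """
input_file: "data/input.csv"
geospatial_analysis: true
geospatial_threshold: 0.005
fuzzy_columns:
  - "address"
"""

# safe_load parses plain YAML without executing arbitrary tags.
config = yaml.safe_load(config_text)
```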
88 | 95 |
|
89 | 96 | ## Usage |
90 | 97 |
|
91 | | -1. **Prepare Your Files** |
92 | | - |
93 | | - Ensure you have the two flat files ready. One file should be the reference database, and the other should be the data you want to compare against the database. Sample files have already been provided on your behalf (credit: ```generatedata.com```). |
| 98 | +### Running the Analysis |
94 | 99 |
|
95 | | -2. **Run the Notebook** |
| 100 | +To perform the analysis using the configuration file: |
96 | 101 |
|
97 | | - Follow the instructions within the notebook to load your files and execute the data consistency checks. |
| 102 | +```bash |
| 103 | +python database-analysis.py --config config/config.yaml |
| 104 | +``` |
98 | 105 |
|
99 | | -3. **Review the Logs** |
| 106 | +You can also override specific configurations using command-line arguments: |
100 | 107 |
|
101 | | - The notebook will output a log file detailing any inconsistencies found between the two files. Review this log to identify and correct data issues. |
| 108 | +```bash |
| 109 | +python database-analysis.py --input_file data/input.csv --output_file results/output.csv --geospatial_analysis True --fuzzy_matching True |
| 110 | +``` |
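The exact override logic lives in `database-analysis.py`; one plausible sketch of how command-line values could take precedence over file values (flag names taken from the examples above, merge helper invented for illustration) is:

```python
import argparse

def merge_config(file_config: dict, cli_args: dict) -> dict:
    """CLI arguments override file values only when explicitly provided."""
    merged = dict(file_config)
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

parser = argparse.ArgumentParser(description="Database analysis (sketch)")
parser.add_argument("--input_file")
parser.add_argument("--geospatial_analysis", type=lambda s: s.lower() == "true")

# Simulate invoking the script with only one flag overridden.
args = parser.parse_args(["--input_file", "data/other.csv"])

file_config = {"input_file": "data/input.csv", "geospatial_analysis": True}
merged = merge_config(file_config, vars(args))
```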
102 | 111 |
|
103 | | -## Project Structure |
| 112 | +### Supported File Formats |
104 | 113 |
|
105 | | -``` |
106 | | -database-analysis/ |
107 | | -│ |
108 | | -├── database-analysis.ipynb # Main Jupyter notebook |
109 | | -├── requirements.txt # Project dependencies |
110 | | -├── data/ # Directory to store your flat files |
111 | | -│ ├── database.csv # Example reference file |
112 | | -│ └── target.csv # Example target file |
113 | | -└── logs/ # Directory to store log files |
114 | | -``` |
| 114 | +- CSV (`.csv`) and Excel (`.xlsx`)
| 115 | +- JSON (`.json`), Parquet (`.parquet`), and Feather (`.feather`)
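A loader for these formats is typically a small suffix-to-reader dispatch. A sketch of what `modules/data_loader.py` might look like (the actual implementation may differ):

```python
from pathlib import Path

import pandas as pd

# Map file suffixes to the matching pandas reader.
READERS = {
    ".csv": pd.read_csv,
    ".xlsx": pd.read_excel,
    ".json": pd.read_json,
    ".parquet": pd.read_parquet,
    ".feather": pd.read_feather,
}

def load_data(path: str) -> pd.DataFrame:
    """Load a supported file into a DataFrame, or raise for unknown formats."""
    suffix = Path(path).suffix.lower()
    if suffix not in READERS:
        raise ValueError(f"Unsupported file format: {suffix}")
    return READERS[suffix](path)
```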
115 | 116 |
|
116 | | -## Author |
| 117 | +### Logging |
117 | 118 |
|
118 | | -This project is created and maintained by @umarhunter. |
| 119 | +All logging information is saved in the `logs/logfile.log` file. The log file includes details about data loading, the execution of geospatial analysis, fuzzy matching, and any errors encountered during processing. |
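A minimal setup that reproduces this behaviour with Python's standard `logging` module (the format string and logger name are illustrative, not taken from the project):

```python
import logging
from pathlib import Path

# Mirror the project's logging destination described above.
log_path = Path("logs") / "logfile.log"
log_path.parent.mkdir(exist_ok=True)

logger = logging.getLogger("database-analysis")
logger.setLevel(logging.INFO)
handler = logging.FileHandler(log_path)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)
logger.addHandler(handler)

# Example of the kind of progress message the pipeline might record.
logger.info("Data loaded: %d rows", 1000)
```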
119 | 120 |
|
120 | 121 | ## Contributing |
121 | 122 |
|
122 | | -Contributions are welcome! Please fork the repository and create a pull request with your changes. I'll gladly look at errors and suggestions. |
| 123 | +We welcome contributions to the Database Analysis Toolkit! If you would like to contribute: |
| 124 | + |
| 125 | +1. Fork the repository. |
| 126 | +2. Create a new branch (`git checkout -b feature/YourFeature`). |
| 127 | +3. Make your changes and commit them (`git commit -m 'Add some feature'`). |
| 128 | +4. Push to the branch (`git push origin feature/YourFeature`). |
| 129 | +5. Open a Pull Request. |
123 | 130 |
|
124 | 131 | ## License |
125 | 132 |
|
126 | | -This project is licensed under the GNU License - see the [LICENSE](LICENSE) file for details. |
| 133 | +This project is licensed under the GNU General Public License. See the `LICENSE` file for more details.
| 134 | + |
| 135 | +## Acknowledgements |
| 136 | + |
| 137 | +This toolkit leverages Python libraries such as `pandas`, `rapidfuzz`, and `haversine` to perform data analysis. We thank the open-source community for their continuous support and contributions. |