The implementation of Apache Spark (combined with PySpark and Jupyter Notebook) on top of a Hadoop cluster using Docker
Updated May 10, 2024 - Shell
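The listing above does not include code, so the following is only a minimal sketch of how a PySpark session might talk to such a Dockerized Spark-on-Hadoop cluster. The service names (`spark-master`, `namenode`), ports, and HDFS path are assumptions for a typical setup, not details taken from the repository.

```python
# Minimal PySpark sketch: connect to a Spark master and read a file from HDFS.
# Hostnames and ports below are illustrative assumptions for a Dockerized cluster.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hadoop-docker-demo")
    .master("spark://spark-master:7077")   # assumed Spark master service name
    .getOrCreate()
)

# Read a text file stored on the cluster's HDFS (path is illustrative).
df = spark.read.text("hdfs://namenode:9000/data/sample.txt")
df.show(5, truncate=False)

spark.stop()
```

In a Jupyter Notebook attached to the same Docker network, the same builder call would typically be the only cluster-specific configuration needed.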
📈📊 Big Data Notebooks. ▫️ Massive data analysis with pyspark ▫️ Data ingestion ▫️ Machine learning algorithms on massive datasets ▫️ Real-time message processing with Kafka
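As a rough illustration of the real-time Kafka processing mentioned above, here is a hedged Spark Structured Streaming sketch. The broker address (`kafka:9092`) and topic name (`events`) are assumptions, and the `spark-sql-kafka` connector package must be available on the classpath (for example via `--packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>`).

```python
# Hedged sketch: consume Kafka messages with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-streaming-demo").getOrCreate()

# Subscribe to a Kafka topic as a streaming DataFrame.
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")  # assumed broker address
    .option("subscribe", "events")                    # assumed topic name
    .load()
)

# Kafka values arrive as bytes; cast to string for downstream processing.
messages = stream.select(col("value").cast("string").alias("message"))

# Write decoded messages to the console for demonstration purposes.
query = messages.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```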