Hadoop MapReduce in idiomatic Clojure. Parkour takes your Clojure code’s functional gymnastics and sends it free-running across the urban environment of your Hadoop cluster.
Parkour is a Clojure library for writing distributed programs in the MapReduce pattern which run on the Hadoop MapReduce platform. Parkour does its best to avoid being yet another “framework” – if you know Hadoop, and you know Clojure, then you’re most of the way to knowing Parkour. By combining functional programming, direct access to Hadoop features, and interactive iteration on live data, Parkour supports rapid development of highly efficient Hadoop MapReduce applications.
Parkour is available on Clojars. Add this `:dependency` to your Leiningen `project.clj`:

```clj
[com.damballa/parkour "0.6.3"]
```
The Parkour introduction contains an overview of the key concepts, but here is the classic “word count” example in Parkour:
```clj
;; Requires assumed by this example (conventional Parkour aliases):
(require '[clojure.string :as str] '[clojure.core.reducers :as r]
         '[parkour (graph :as pg) (toolbox :as ptb)]
         '[parkour.io (seqf :as seqf)])
(import '[org.apache.hadoop.io Text LongWritable])

;; Map task: split each input line into words, emitting a [word 1] pair per word.
(defn word-count-m
  [coll]
  (->> coll
       (r/mapcat #(str/split % #"\s+"))
       (r/map #(-> [% 1]))))

;; Build and run the job graph: map, shuffle on Text/LongWritable tuples,
;; sum the counts for each word, and write the results to a SequenceFile dsink.
(defn word-count
  [conf lines]
  (-> (pg/input lines)
      (pg/map #'word-count-m)
      (pg/partition [Text LongWritable])
      (pg/combine #'ptb/keyvalgroups-r #'+)
      (pg/output (seqf/dsink [Text LongWritable]))
      (pg/fexecute conf `word-count)))
```
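For a sense of how this gets invoked, here is a minimal REPL sketch, not part of the original example: it assumes `parkour.conf/ig` for a fresh default configuration, `parkour.io.text/dseq` for line-oriented text input, and that reducing the returned output dseq yields [word count] tuples:

```clj
;; Illustrative sketch only; the input path and aliases below are assumptions.
(require '[parkour.conf :as conf]
         '[parkour.io.text :as text])

(->> (text/dseq "input/words.txt")   ; hypothetical text input path
     (word-count (conf/ig))          ; run the word-count job graph
     (into {}))                      ; realize the results locally as a map
;; e.g. => {"apple" 3, "pear" 1, ...}
```

Because `pg/fexecute` returns a dseq over the job output, the results can be consumed like any other reducible collection; the REPL integration documentation below covers working this way against live cluster data.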
Parkour’s documentation is divided into a number of separate sections:
- Introduction – A getting-started introduction, with an overview of Parkour’s key concepts.
- Motivation – An explanation of the goals Parkour exists to achieve, with comparison to other libraries and frameworks.
- Namespaces – A tour of Parkour’s namespaces, explaining how each set of functionality fits into the whole.
- REPL integration – A quick guide to using Parkour from a cluster-connected REPL, for iterative development against live data.
- MapReduce in depth – An in-depth examination of the interfaces Parkour uses to run your code in MapReduce jobs.
- Serialization – How Parkour integrates Clojure with Hadoop serialization mechanisms.
- Unified I/O – Unified collection-like local and distributed I/O via Parkour dseqs and dsinks.
- Distributed values – Parkour’s value-oriented interface to the Hadoop distributed cache.
- Multiple I/O – Configuring multiple inputs and/or outputs for single Hadoop MapReduce jobs.
- Reducers vs seqs – Why Parkour’s default idiom uses reducers, and when to use seqs instead.
- Testing – Patterns for testing Parkour MapReduce jobs.
- Deployment – Running Parkour applications on a Hadoop cluster.
- Reference – Generated API reference, via codox.
There is a Parkour mailing list hosted by Librelist for announcements and discussion. To join the mailing list, just email parkour@librelist.org; your first message to that address will subscribe you without being posted to the list. Please report issues on the GitHub issue tracker. And of course, pull requests are welcome!
Copyright © 2013-2015 Marshall Bockrath-Vandegrift & Damballa, Inc.
Distributed under the Apache License, Version 2.0.