SQL is not a problem to be solved - it's a powerful tool to be embraced.
This is the philosophy behind Kapper...
Kapper is a lightweight, Dapper-inspired ORM (Object-Relational Mapping) library written in Kotlin, targeting the JVM ecosystem. It embraces SQL rather than abstracting it away, providing a simple, intuitive API for executing queries and mapping results.
Instead of adding another abstraction layer, Kapper embraces three core principles:
- SQL is the Best Query Language: SQL has evolved over decades to be expressive, powerful, and optimized for database operations. Instead of hiding it, we should leverage it directly.
- Minimal Abstraction: Kapper provides just enough abstraction to make database operations comfortable in Kotlin, without trying to reinvent database interactions. Kapper prefers extending existing APIs over abstracting them.
- Transparency: What you write is what gets executed. There's no magic query generation or hidden database calls.
Kapper goes against the grain of heavyweight database abstractions: it makes JDBC easy to use without taking away any flexibility. In fact, it can happily live alongside an existing vanilla JDBC integration and/or your existing DB layer.
It does not generate code, does not introduce another layer of abstraction, and is never intrusive. It does not hide SQL; instead, it embraces the fact that SQL is the best language for DB interaction.
Kapper also means hairdresser in Dutch, but I have yet to come up with a reason why this is relevant.
- Simple API: Kapper provides a familiar set of methods for executing SQL queries, mapping results to Kotlin data classes, and updating/inserting data.
- Extensibility: The Kapper API is implemented as extension functions on the `java.sql.Connection` interface, allowing seamless integration with existing JDBC code.
- Lightweight: Kapper has minimal external dependencies, focusing on providing core functionality without bloat.
- Fast: Kapper's extension design means that it is possible to match "raw" JDBC performance, while its auto-mapper equals or outperforms other ORMs.
- Supported DBs: PostgreSQL, MySQL, SQLite, Oracle, MS SQL Server.
Snapshot releases are published from the main branch to GitHub Packages; stable releases are published to Maven Central.
Here's a simple example of how to use Kapper:
Example DB table:

```sql
CREATE TABLE super_heroes (
    id UUID PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    age INT
);
```
and an example DTO:
```kotlin
data class SuperHero(val id: UUID, val name: String, val email: String? = null, val age: Int? = null)
```
You can use Kapper like so:
```kotlin
// Assuming you have a java.sql.Connection instance, using your favourite connection pooler, for example:
ds.getConnection().use { connection ->
    // insert a row:
    connection.execute(
        "INSERT INTO super_heroes(id, name, email, age) VALUES(:id, :name, :email, :age);",
        "id" to UUID.randomUUID(),
        "name" to "Batman",
        "email" to "batman@dc.com",
        "age" to 85,
    )

    // Execute a SQL query and map the results to a list of SuperHero objects
    val heroes: List<SuperHero> = connection.query<SuperHero>("SELECT * FROM super_heroes")

    // or query by passing parameters
    val olderHeroes = connection.query<SuperHero>("SELECT * FROM super_heroes WHERE age > :age", "age" to 80)

    // or find a single hero
    val batman = connection.querySingle<SuperHero>(
        "SELECT id, name FROM super_heroes WHERE name = :name",
        "name" to "Batman",
    )
}
```
Kapper does not maintain a mapping between classes and DB tables or entities. Instead, it can auto-map to a given data class or record when the constructor arguments match the DB fields, or you can provide a custom mapper or pass in a mapping lambda. This means Kapper provides strongly typed mapping, while still allowing rows-to-object mapping with lambdas or mapper functions for more flexibility or advanced use cases.
Example of registering a custom mapper class:
```kotlin
data class SuperHero(val id: UUID, val name: String, val email: String? = null, val age: Int? = null)

// Custom mapper
class SuperHeroMapper : Mapper<SuperHero> {
    override fun createInstance(
        resultSet: ResultSet,
        fields: Map<String, Field>,
    ) = SuperHero(
        id = UUID.fromString(resultSet.getString("id")),
        name = resultSet.getString("name"),
        email = resultSet.getString("email"),
        age = resultSet.getInt("age"),
    )
}

Kapper.mapperRegistry.registerIfAbsent<SuperHero>(SuperHeroMapper())
val heroes = connection.query<SuperHero>("SELECT * FROM super_heroes")
```
Or using a mapper lambda:
```kotlin
val heroes = connection.query<SuperHero>(
    "SELECT * FROM super_heroes",
    { resultSet, _ ->
        SuperHero(
            id = UUID.fromString(resultSet.getString("id")),
            name = resultSet.getString("name"),
            email = resultSet.getString("email"),
            age = resultSet.getInt("age"),
        )
    }
)
```
Further examples and documentation are explored below and in the integration tests, or check out the Kapper-Example repo for more extended examples and documentation, including a comparison with Hibernate and Ktorm.
Kapper supports automatic mapping of SQL query results to Java record classes, in addition to Kotlin data classes. If you use Java records as your DTOs, Kapper will match the column names in your result set to the record's component names and instantiate the record automatically.
For example, given the following Java record:
```java
public record SuperHeroRecord(UUID id, String name, String email, Integer age) {}
```
You can query and map results just as you would with a Kotlin data class:
```kotlin
val heroes: List<SuperHeroRecord> = connection.query<SuperHeroRecord>("SELECT * FROM super_heroes")
```
Additional Java examples can be found in the Kapper-Example repo.
Kapper's auto-mapping is flexible:
- If a database field is missing, Kapper will use the default value for that property in a Kotlin data class, or `null` for nullable fields in both Kotlin data classes and Java records.
- If the database returns extra columns that are not present in your data class or record, Kapper will simply ignore them.
This allows you to evolve your database schema and DTOs independently, and makes it easy to work with partial queries or legacy tables.
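For example, given the `SuperHero` data class above, a partial query maps cleanly; a minimal sketch of the behaviour just described:

```kotlin
// Only id and name are selected; email and age fall back to their
// declared defaults (null) because they are missing from the result set.
val partialHeroes = dataSource.connection.use {
    it.query<SuperHero>("SELECT id, name FROM super_heroes")
}
```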
Stable versions of Kapper are published to Maven Central.
To use this in your project, simply add the following dependency to your `build.gradle.kts` (or the Groovy equivalent in `build.gradle`):
```kotlin
dependencies {
    implementation("net.samyn:kapper:<version>")
}
```
For Maven, use:
```xml
<dependency>
    <groupId>net.samyn</groupId>
    <artifactId>kapper</artifactId>
    <version>[VERSION]</version>
</dependency>
```
For co-routine support (see below), use `kapper-coroutines` instead.
Snapshot releases are published from the `main` branch to GitHub Packages.
In order to use these, add GitHub Packages as a repository (and ensure `GH_USERNAME` and `GH_TOKEN` are set in your environment):
```kotlin
repositories {
    mavenCentral()
    maven {
        name = "GitHubPackages"
        url = uri("https://maven.pkg.github.com/driessamyn/kapper")
        credentials {
            username = project.findProperty("gh.user") as String? ?: System.getenv("GH_USERNAME")
            password = project.findProperty("gh.key") as String? ?: System.getenv("GH_TOKEN")
        }
    }
}
```
Kapper can auto-map the results of a SQL query to a data class:
```kotlin
data class SuperHero(val id: UUID, val name: String, val email: String? = null, val age: Int? = null)

val heroes = dataSource.connection.use {
    it.query<SuperHero>("SELECT * FROM super_heroes")
}
```
Parameters can optionally be passed to the query by prefixing the query template tokens with `:`, `?` or `$`, and providing the values as key-value pairs:
```kotlin
val olderHeroes = dataSource.connection.use {
    it.query<SuperHero>(
        "SELECT * FROM super_heroes WHERE age > :age",
        "age" to 80,
    )
}
```
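For example, a sketch of the same query using the `$` prefix instead of `:`, assuming the three prefix styles are interchangeable (note that `$` must be escaped in Kotlin string literals):

```kotlin
// The same parameterised query with the $ token prefix.
val olderHeroes = dataSource.connection.use {
    it.query<SuperHero>(
        "SELECT * FROM super_heroes WHERE age > \$age",
        "age" to 80,
    )
}
```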
Kapper also supports using a custom mapper when auto-mapping cannot be used or is not desired. This is done by simply passing a mapper function to the query, as a lambda or a function reference:
```kotlin
val heroAges =
    dataSource.connection.use {
        it.query<Pair<String, *>>(
            "SELECT * FROM super_heroes WHERE age > :age",
            { resultSet, fields ->
                Pair(
                    resultSet.getString(fields["name"]!!.columnIndex),
                    resultSet.getInt(fields["age"]!!.columnIndex),
                )
            },
            "age" to 80,
        )
    }
```
The `querySingle` function can be used to return a single result, or null if the query returns no results:
```kotlin
val batman =
    dataSource.connection.use {
        it.querySingle<SuperHero>(
            "SELECT * FROM super_heroes WHERE name = :name",
            "name" to "Batman",
        )
    }
```
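Because `querySingle` returns null for an empty result, it composes naturally with Kotlin's null-safety operators; a small usage sketch (the hero name is illustrative):

```kotlin
// querySingle returns null when no row matches.
val hero = dataSource.connection.use {
    it.querySingle<SuperHero>(
        "SELECT * FROM super_heroes WHERE name = :name",
        "name" to "Nite Owl",
    )
} ?: error("hero not found")
```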
Kapper provides an `execute` function to execute SQL statements that do not return a result set, such as `INSERT`, `UPDATE`, or `DELETE` statements.
For example:
```kotlin
datasource.connection.use {
    it.execute(
        """
        INSERT INTO super_heroes(id, name, email, age)
        VALUES (:id, :name, :email, :age)
        """.trimIndent(),
        "id" to UUID.randomUUID(),
        "name" to "Batman",
        "email" to "batman@dc.com",
        "age" to 85,
    )
}
```

```kotlin
datasource.connection.use {
    it.execute(
        """
        UPDATE super_heroes
        SET age = 86
        WHERE name = :name
        """.trimIndent(),
        "name" to "Batman",
    )
}
```

```kotlin
datasource.connection.use {
    it.execute(
        """
        DELETE FROM super_heroes
        WHERE name = :name
        """.trimIndent(),
        "name" to "Batman",
    )
}
```
Kapper now supports passing entire DTOs (data classes) to the `execute` and `executeAll` functions for insert, update, and delete operations, by providing explicit mapping instructions. This makes your code more concise and type-safe, reducing manual parameter mapping.
You can pass a DTO instance together with mapping instructions, where each SQL parameter name is mapped to a property reference of your DTO:
```kotlin
val hero = SuperHero(UUID.randomUUID(), "Wonder Woman", "wonder@dc.com", 3000)

connection.execute(
    "INSERT INTO super_heroes(id, name, email, age) VALUES(:id, :name, :email, :age)",
    hero,
    "id" to SuperHero::id,
    "name" to SuperHero::name,
    "email" to SuperHero::email,
    "age" to SuperHero::age,
)
```
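The same DTO-based mapping works for updates and deletes; a sketch reusing the style shown above (the SQL and the `copy` call are illustrative):

```kotlin
// A sketch: the DTO mapping applied to an UPDATE statement.
connection.execute(
    "UPDATE super_heroes SET age = :age WHERE id = :id",
    hero.copy(age = 3001),
    "age" to SuperHero::age,
    "id" to SuperHero::id,
)
```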
The `executeAll` function allows you to efficiently perform bulk inserts, updates, or deletes by passing a collection of DTOs and mapping instructions.
Kapper handles batching and parameter mapping for you, and `executeAll` results in a JDBC batch operation.
```kotlin
val heroes = listOf(
    SuperHero(UUID.randomUUID(), "Flash", "flash@dc.com", 28),
    SuperHero(UUID.randomUUID(), "Aquaman", "aqua@dc.com", 35),
)

connection.executeAll(
    "INSERT INTO super_heroes(id, name, email, age) VALUES(:id, :name, :email, :age)",
    heroes,
    "id" to SuperHero::id,
    "name" to SuperHero::name,
    "email" to SuperHero::email,
    "age" to SuperHero::age,
)
```
This eliminates repetitive boilerplate and makes bulk operations simple and safe.
Kapper provides extension functions to make working with transactions easier.
The `withTransaction` function on `Connection` starts a transaction, executes the provided block, and commits the transaction if the block completes successfully.
The same function on `DataSource` behaves the same, except that it also creates (and closes) the connection.
For example:
```kotlin
datasource.connection.withTransaction { connection ->
    // insert a row:
    connection.execute(
        "INSERT INTO super_heroes(id, name, email, age) VALUES(:id, :name, :email, :age);",
        "id" to UUID.randomUUID(),
        "name" to "Batman",
        "email" to "batman@dc.com",
        "age" to 85,
    )
}
```
is the equivalent of:
```kotlin
datasource.withTransaction { connection ->
    // insert a row:
    connection.execute(
        "INSERT INTO super_heroes(id, name, email, age) VALUES(:id, :name, :email, :age);",
        "id" to UUID.randomUUID(),
        "name" to "Batman",
        "email" to "batman@dc.com",
        "age" to 85,
    )
}
```
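The counterpart to "commits if the block completes successfully" is left implicit above; a sketch assuming the transaction is not committed (i.e. rolled back) when the block throws:

```kotlin
// A sketch, assuming "commit on success" implies rollback when the block throws.
try {
    datasource.withTransaction { connection ->
        connection.execute(
            "INSERT INTO super_heroes(id, name, email, age) VALUES(:id, :name, :email, :age);",
            "id" to UUID.randomUUID(),
            "name" to "Robin",
            "email" to "robin@dc.com",
            "age" to 19,
        )
        error("something went wrong") // assumed to abort the transaction
    }
} catch (ex: IllegalStateException) {
    // the INSERT above is assumed not to have been committed
}
```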
Kapper supports coroutines with the inclusion of the `kapper-coroutines` module:
```kotlin
dependencies {
    implementation("net.samyn:kapper-coroutines:<version>")
}
```
This module provides an extension function `withConnection` on the `DataSource` object, optionally allowing you to specify a `Dispatcher`.
If no `Dispatcher` is provided, the default `Dispatchers.IO` is used.
For example:
```kotlin
suspend fun listHeroes(): List<SuperHero> =
    dataSource.withConnection {
        // Kapper query runs on Dispatchers.IO
        it.query<SuperHero>("SELECT * FROM super_heroes")
    }
```
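To override the default, a dispatcher can presumably be passed to `withConnection`; a sketch assuming it is accepted directly as an argument (the exact parameter position is an assumption):

```kotlin
import kotlinx.coroutines.CoroutineDispatcher

// A sketch assuming the dispatcher is passed directly to withConnection,
// per "optionally allowing you to specify a Dispatcher" above.
suspend fun listHeroesOn(dispatcher: CoroutineDispatcher): List<SuperHero> =
    dataSource.withConnection(dispatcher) {
        it.query<SuperHero>("SELECT * FROM super_heroes")
    }
```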
The `queryAsFlow` function in the `kapper-coroutines` module returns a `Flow` of the results, allowing for easy integration with other coroutine constructs, for example allowing a query to be cancelled:
```kotlin
runBlocking {
    val job =
        async {
            getDataSource(postgresql).withConnection { connection ->
                connection.queryAsFlow<PopularMovie>(
                    """
                    SELECT
                        title,
                        release_date as releasedate,
                        gross_worldwide as grossed
                    FROM movies
                    ORDER BY gross_worldwide DESC
                    """.trimIndent(),
                )
                    .runningFold(0L to emptyList<PopularMovie>()) { (totalGross, movieList), movie ->
                        val newTotal = totalGross + movie.grossed
                        newTotal to (movieList + movie)
                    }.takeWhile { (totalGross, _) ->
                        // query will be cancelled here
                        totalGross <= 10_000_000_000
                    }.last()
            }
        }
    println("Query started")
    val popularMovies = job.await()
    println(
        "Popular movies are: ${popularMovies.second.map { it.title }}, " +
            "grossing a total of ${String.format("%,d", popularMovies.first)}",
    )
}
```
Note that running a query that returns a very large number of results and filtering them in the `Flow` may not be the most efficient approach; filtering in the DB query itself is generally more efficient.
The above serves only as an example of how to use `Flow` with Kapper and the ability to cancel queries.
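For comparison, a hedged sketch of pushing the same cumulative cut-off into the query itself with a window function (assuming a PostgreSQL-style dialect and the same `movies` table):

```kotlin
// A sketch: the running total is computed in SQL, so rows beyond the
// cut-off never leave the database. Dialect support is an assumption.
suspend fun popularMovies(): List<PopularMovie> =
    dataSource.withConnection { connection ->
        connection.query<PopularMovie>(
            """
            SELECT title, releasedate, grossed FROM (
                SELECT
                    title,
                    release_date AS releasedate,
                    gross_worldwide AS grossed,
                    SUM(gross_worldwide) OVER (ORDER BY gross_worldwide DESC) AS running_total
                FROM movies
            ) ranked
            WHERE running_total <= 10000000000
            ORDER BY grossed DESC
            """.trimIndent(),
        )
    }
```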
The `queryAsFlow` function optionally takes a `fetchSize` parameter, which can be used to set the fetch size of the underlying JDBC statement; when not set, the fetch size defaults to 1,000.
This allows Kapper to cancel the JDBC `Statement` when the `Flow` is cancelled, if supported by the JDBC driver.
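For example, a sketch passing an explicit fetch size (assuming `fetchSize` is an optional named parameter of `queryAsFlow`, as described above):

```kotlin
// Stream heroes with a smaller JDBC fetch size than the 1,000 default.
suspend fun printHeroes() =
    dataSource.withConnection { connection ->
        connection.queryAsFlow<SuperHero>(
            "SELECT * FROM super_heroes",
            fetchSize = 100,
        ).collect { println(it.name) }
    }
```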
See Kapper-Example for additional examples.
Note: operations on JDBC drivers are blocking. This means that, regardless of the coroutine support, the actual DB operations block and the connection remains open until the operation completes. See Coroutine support in Kapper 1.1 for more information.
Kapper's integration tests currently cover PostgreSQL, MySQL, SQLite, Oracle, and MS SQL Server. If you need support for another DB, and/or find an issue with a particular DB, feel free to open an issue or, even better, submit a pull request.
The integration tests execute against PostgreSQL and SQLite by default, but the `-Ddb` property can be used to specify other DB targets. For example:
```shell
./gradlew integrationTest -Ddb=oracle
```
Or to run against all supported DBs:
```shell
./gradlew integrationTest -Ddb=all
```
Kapper is designed to be fast and lightweight, with performance comparable to raw JDBC. This repository contains a benchmark suite that compares Kapper's performance against other popular ORMs such as Hibernate and Ktorm. The benchmark suite is designed to be extensible and can be used to add new benchmarks or modify existing ones. Two Kapper benchmarks are included: one with auto-mapping and one with a custom mapper. In the current version of Kapper, the "cost" of auto-mapping is in the small single-digit microsecond range. Manual mapping is always the fastest and is recommended for performance-critical applications or queries.
More details can be found here, and the benchmark results are published in kapper-benchmark-results.
- Dev.to - Kapper, a Fresh Look at ORMs for Kotlin and the JVM
- Dev.to - Coroutine support in Kapper 1.1
- Dev.to - Announcing Kapper 1.2: Cleaner Transactions with Less Boilerplate
- Dev.to - Kapper 1.3 Supports Flows
- Dev.to - Kapper benchmarks - how does Kapper 1.3 compare to the competition?
The following items were part of the v1 roadmap and are planned for future releases. Items will be ticked off as they are implemented.
- Create a benchmark suite to validate performance.
- Create transaction syntax sugar.
- Add co-routine support.
- Add flow support.
- Add MS SQL Server, Oracle and SQLite integration tests.
- Support auto-mapping for Java records.
- Tests & examples in other JVM languages.
- Bulk operations support.
- Support DTO argument for `execute`.
- Cache query parsing.
- Custom SQL type conversion.
- Improve support for date/time conversion.
- Add support for non-blocking JDBC drivers.
- Improve user documentation / docs website.
Anything else you think is missing, or you want to be prioritised, please open an issue or submit a pull request.
Special thanks to the kind people from YourKit for supporting open source projects and providing us with licenses for their profilers.
YourKit is the creator of YourKit Java Profiler, YourKit .NET Profiler, and YourKit YouMonitor.
We welcome contributions to Kapper! If you find any issues or have ideas for improvements, please feel free to open an issue or submit a pull request.
Kapper is released under the Apache 2.0 License.