This chapter points out the specifics of the repository support for R2DBC. It builds on the core repository support explained in [repositories]. Before reading this chapter, you should have a sound understanding of the basic concepts explained there.
To access domain entities stored in a relational database, you can use our sophisticated repository support that eases implementation quite significantly.
To do so, create an interface for your repository.
Consider the following Person class:
public class Person {

  @Id
  private Long id;
  private String firstname;
  private String lastname;

  // … getters and setters omitted
}
The following example shows a repository interface for the preceding Person class:
public interface PersonRepository extends ReactiveCrudRepository<Person, Long> {

  // additional custom query methods go here
}
To configure R2DBC repositories, you can use the @EnableR2dbcRepositories annotation.
If no base package is configured, the infrastructure scans the package of the annotated configuration class.
The following example shows how to use Java configuration for a repository:
@Configuration
@EnableR2dbcRepositories
class ApplicationConfig extends AbstractR2dbcConfiguration {

  @Override
  public ConnectionFactory connectionFactory() {
    return …
  }
}
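The connection factory itself is left open above (return …). As a minimal sketch, assuming the R2DBC H2 driver is on the classpath, it could be created from an R2DBC connection URL:

@Override
public ConnectionFactory connectionFactory() {
  // any R2DBC driver works here; the H2 in-memory URL is just an assumption for illustration
  return ConnectionFactories.get("r2dbc:h2:mem:///testdb");
}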
Because our domain repository extends ReactiveCrudRepository, it provides you with reactive CRUD operations to access the entities. On top of ReactiveCrudRepository, there is also ReactiveSortingRepository, which adds additional sorting functionality similar to that of PagingAndSortingRepository.
Working with the repository instance is merely a matter of dependency injecting it into a client.
Consequently, you can retrieve all Person objects with the following code:
link:../{example-root}/PersonRepositoryTests.java[role=include]
The preceding example creates an application context with Spring’s unit test support, which performs annotation-based dependency injection into test cases.
Inside the test method, we use the repository to query the database.
We use StepVerifier as a test aid to verify our expectations against the results.
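As a rough sketch of what such a test can look like (assuming JUnit 5 and Reactor Test are on the classpath; the method name and expected count are illustrative):

@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = ApplicationConfig.class)
class PersonRepositoryTests {

  @Autowired PersonRepository repository;

  @Test
  void readsAllEntities() {
    repository.findAll()
        .as(StepVerifier::create)   // wrap the Flux in a StepVerifier
        .expectNextCount(1)         // illustrative expectation
        .verifyComplete();
  }
}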
Most of the data access operations you usually trigger on a repository result in a query being run against the database. Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
interface ReactivePersonRepository extends ReactiveSortingRepository<Person, Long> {

  Flux<Person> findByFirstname(String firstname);                                   (1)

  Flux<Person> findByFirstname(Publisher<String> firstname);                        (2)

  Flux<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); (3)

  Mono<Person> findByFirstnameAndLastname(String firstname, String lastname);       (4)

  Mono<Person> findFirstByLastname(String lastname);                                (5)

  @Query("SELECT * FROM person WHERE lastname = :lastname")
  Flux<Person> findByLastname(String lastname);                                     (6)

  @Query("SELECT firstname, lastname FROM person WHERE lastname = $1")
  Mono<Person> findFirstProjectedByLastname(String lastname);                       (7)
}
1. The method shows a query for all people with the given firstname. The query is derived by parsing the method name for constraints that can be concatenated with And and Or. Thus, the method name results in a query expression of SELECT … FROM person WHERE firstname = :firstname.
2. The method shows a query for all people with the given firstname once the firstname is emitted by the given Publisher.
3. Use Pageable to pass offset and sorting parameters to the database.
4. Find a single entity for the given criteria. It completes with IncorrectResultSizeDataAccessException on non-unique results.
5. Unlike (4), the first entity is always emitted even if the query yields more result rows.
6. The findByLastname method shows a query for all people with the given last name.
7. A query for a single Person entity projecting only firstname and lastname columns. The annotated query uses native bind markers, which are Postgres bind markers in this example.
Note that the columns of a select statement used in a @Query annotation must match the names generated by the NamingStrategy for the respective property.
If a select statement does not include a matching column, that property is not set. If that property is required by the persistence constructor, either null or (for primitive types) the default value is provided.
The following table shows the keywords that are supported for query methods:
| Keyword | Sample | Logical result |
|---|---|---|
| After | findByBirthdateAfter(Date date) | birthdate > date |
| GreaterThan | findByAgeGreaterThan(int age) | age > age |
| GreaterThanEqual | findByAgeGreaterThanEqual(int age) | age >= age |
| Before | findByBirthdateBefore(Date date) | birthdate < date |
| LessThan | findByAgeLessThan(int age) | age < age |
| LessThanEqual | findByAgeLessThanEqual(int age) | age <= age |
| Between | findByAgeBetween(int from, int to) | age BETWEEN from AND to |
| NotBetween | findByAgeNotBetween(int from, int to) | age NOT BETWEEN from AND to |
| In | findByAgeIn(Collection<Integer> ages) | age IN (age1, age2, ageN) |
| NotIn | findByAgeNotIn(Collection<Integer> ages) | age NOT IN (age1, age2, ageN) |
| Like, StartingWith, EndingWith | findByFirstnameLike(String name) | firstname LIKE name |
| NotLike, IsNotLike | findByFirstnameNotLike(String name) | firstname NOT LIKE name |
| Containing on String | findByFirstnameContaining(String name) | firstname LIKE '%' + name + '%' |
| NotContaining on String | findByFirstnameNotContaining(String name) | firstname NOT LIKE '%' + name + '%' |
| (No keyword) | findByFirstname(String name) | firstname = name |
| Not | findByFirstnameNot(String name) | firstname != name |
| IsNull | findByFirstnameIsNull() | firstname IS NULL |
| IsNotNull | findByFirstnameIsNotNull() | firstname IS NOT NULL |
| True | findByActiveIsTrue() | active IS TRUE |
| False | findByActiveIsFalse() | active IS FALSE |
The previous sections describe how to declare queries to access a given entity or collection of entities. Keywords from the preceding table can be used in conjunction with delete…By or remove…By to create derived queries that delete matching rows.
Delete…By Query

interface ReactivePersonRepository extends ReactiveSortingRepository<Person, String> {

  Mono<Integer> deleteByLastname(String lastname);         (1)

  Mono<Void> deletePersonByLastname(String lastname);      (2)

  Mono<Boolean> deletePersonByLastname(String lastname);   (3)
}
1. Using a return type of Mono<Integer> returns the number of affected rows.
2. Using Void just reports whether the rows were successfully deleted, without emitting a result value.
3. Using Boolean reports whether at least one row was removed.
While this approach is feasible for comprehensive custom functionality, you can modify queries that only need parameter binding by annotating the query method with @Modifying, as shown in the following example:
link:../{example-root}/PersonRepository.java[role=include]
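As a sketch of what such a method can look like (the method name is illustrative; it assumes the person table from the earlier examples):

@Modifying
@Query("UPDATE person SET firstname = :firstname WHERE lastname = :lastname")
Mono<Integer> setFixedFirstnameFor(String firstname, String lastname);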
The result of a modifying query can be:
- Void (or Kotlin Unit) to discard the update count and await completion.
- Integer or another numeric type emitting the affected rows count.
- Boolean to emit whether at least one row was updated.
The @Modifying annotation is only relevant in combination with the @Query annotation. Derived custom methods do not require this annotation.
Modifying queries are executed directly against the database. No events or callbacks get called. Therefore, fields with auditing annotations are not updated unless the annotated query itself updates them.
Alternatively, you can add custom modifying behavior by using the facilities described in Custom Implementations for Spring Data Repositories.
Query string definitions can be used together with SpEL expressions to create dynamic queries at runtime. SpEL expressions can provide predicate values which are evaluated right before running the query.
Expressions expose method arguments through an array that contains all the arguments. The following query uses [0] to declare the predicate value for lastname (which is equivalent to the :lastname parameter binding):
link:../{example-root}/PersonRepository.java[role=include]
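A sketch of such a declaration (the method name is illustrative):

@Query("SELECT * FROM person WHERE lastname = :#{[0]}")
Flux<Person> findByQueryWithExpression(String lastname);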
SpEL in query strings can be a powerful way to enhance queries. However, SpEL expressions can also accept a broad range of unwanted arguments. Make sure to sanitize strings before passing them to the query to avoid unwanted changes to your query.
Expression support is extensible through the Query SPI: org.springframework.data.spel.spi.EvaluationContextExtension.
The Query SPI can contribute properties and functions and can customize the root object.
Extensions are retrieved from the application context at the time of SpEL evaluation when the query is built.
Tip: When using SpEL expressions in combination with plain parameters, use named parameter notation instead of native bind markers to ensure a proper binding order.
Spring Data R2DBC also lets you use Query By Example to fashion queries.
This technique allows you to use a "probe" object.
Essentially, any field that isn’t empty or null will be used to match.
Here’s an example:
link:../{example-root}/QueryByExampleTests.java[role=include]
1. Create a domain object with the criteria (null fields will be ignored).
2. Using the domain object, create an Example.
3. Through the R2dbcRepository, execute the query (use findOne for a Mono), as sketched below.
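A minimal sketch of these steps, assuming an Employee entity with the usual accessors and a repository extending R2dbcRepository<Employee, Long>:

Employee employee = new Employee();                       // (1) probe: only non-null fields take part in the match
employee.setName("Frodo");

Example<Employee> example = Example.of(employee);         // (2) create the Example from the probe

Flux<Employee> employees = repository.findAll(example);   // (3) findOne(example) returns a Mono instead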
This illustrates how to craft a simple probe using a domain object. In this case, it will query based on the Employee object’s name field being equal to Frodo. null fields are ignored.
link:../{example-root}/QueryByExampleTests.java[role=include]
1. Create a custom ExampleMatcher that matches on ALL fields (use matchingAny() to match on ANY fields).
2. For the name field, use a wildcard that matches against the end of the field.
3. Match columns against null (don’t forget that NULL doesn’t equal NULL in relational databases).
4. Ignore the role field when forming the query.
5. Plug the custom ExampleMatcher into the probe (see the sketch below).
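A sketch of such a matcher, reusing the assumed Employee probe (the probe value is illustrative):

Employee employee = new Employee();
employee.setName("Frodo");                                                       // with the endsWith matcher this matches names ending in "Frodo"

ExampleMatcher matcher = ExampleMatcher.matchingAll()                            // (1) match on ALL fields
    .withMatcher("name", ExampleMatcher.GenericPropertyMatchers.endsWith())      // (2) wildcard against the end of the field
    .withIncludeNullValues()                                                     // (3) match null columns as well
    .withIgnorePaths("role");                                                    // (4) ignore the role field

Example<Employee> example = Example.of(employee, matcher);                       // (5) plug the matcher into the probe
Flux<Employee> employees = repository.findAll(example);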
It’s also possible to apply a withTransformer() against any property, allowing you to transform a property before forming the query. For example, you can apply a toUpperCase() to a String-based property before the query is created.
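A sketch of a transformer on the assumed name property; the transformer receives and returns the optional property value:

// upper-case the probed value before the query is formed
ExampleMatcher matcher = ExampleMatcher.matching()
    .withTransformer("name", value -> value.map(name -> name.toString().toUpperCase()));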
Query By Example really shines when you don’t know all the fields needed in a query in advance. If you were building a filter on a web page where the user can pick the fields, Query By Example is a great way to flexibly capture that into an efficient query.
Spring Data R2DBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data’s @Id annotation.
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
Spring Data R2DBC does not attempt to insert values of identifier columns when the entity is new and the identifier value defaults to its initial value. That is 0 for primitive types and null if the identifier property uses a numeric wrapper type such as Long.
One important constraint is that, after saving an entity, the entity must not be new anymore. Note that whether an entity is new is part of the entity’s state. With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
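A small sketch of this, assuming an auto-increment id column and the PersonRepository shown earlier (the names are illustrative):

Person person = new Person();                              // id stays null, so the entity counts as new
person.setFirstname("Jon");
person.setLastname("Snow");

repository.save(person)
    .doOnNext(saved -> System.out.println(saved.getId()))  // id populated from the auto-increment column
    .subscribe();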
The @Version annotation provides syntax similar to that of JPA in the context of R2DBC and makes sure updates are only applied to rows with a matching version. Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the row in the meantime. In that case, an OptimisticLockingFailureException is thrown. The following example shows these features:
@Table
class Person {

  @Id Long id;
  String firstname;
  String lastname;

  @Version Long version;
}

// assumes static imports of Criteria.where and Query.query
R2dbcEntityTemplate template = …;

Person daenerys = template.insert(new Person("Daenerys")).block();    (1)

Person other = template.select(Person.class)
    .matching(query(where("id").is(daenerys.getId())))
    .first().block();                                                 (2)

daenerys.setLastname("Targaryen");
template.update(daenerys).block();                                    (3)

template.update(other).subscribe(); // emits OptimisticLockingFailureException (4)
1. Initially insert a row. version is set to 0.
2. Load the just-inserted row. version is still 0.
3. Update the row with version = 0. Set the lastname and bump version to 1.
4. Try to update the previously loaded row that still has version = 0. The operation fails with an OptimisticLockingFailureException, as the current version is 1.
A query method returning an interface or DTO projection is backed by the results produced by the actual query. Interface projections generally rely on mapping results onto the domain type first, to consider potential @Column type mappings, and the actual projection proxy uses a potentially partially materialized entity to expose projection data.
Result mapping for DTO projections depends on the actual query type. Derived queries use the domain type to map results, and Spring Data creates DTO instances solely from properties available on the domain type. Declaring properties in your DTO that are not available on the domain type is not supported.
String-based queries use a different approach, since the actual query, specifically the field projection, and the result type declaration are close together. DTO projections used with query methods annotated with @Query map query results directly into the DTO type. Field mappings on the domain type are not considered. Using the DTO type directly, your query method can benefit from a more dynamic projection that isn’t restricted to the domain model.
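A sketch of a string-based query mapping directly into a DTO, assuming Java records are available (the DTO and method names are illustrative):

// DTO holding exactly the columns selected by the query
record PersonSummary(String firstname, String lastname) {}

interface PersonSummaryRepository extends ReactiveCrudRepository<Person, Long> {

  // results are mapped straight into PersonSummary; field mappings on Person are not consulted
  @Query("SELECT firstname, lastname FROM person WHERE lastname = :lastname")
  Flux<PersonSummary> findSummariesByLastname(String lastname);
}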
When working with multiple, potentially different databases, your application will require a different approach to configuration. The provided AbstractR2dbcConfiguration support class assumes a single ConnectionFactory from which the Dialect gets derived. That being said, you need to define a few beans yourself to configure Spring Data R2DBC to work with multiple databases.
R2DBC repositories require R2dbcEntityOperations to implement repositories. A simple configuration to scan for repositories without using AbstractR2dbcConfiguration looks like this:
@Configuration
@EnableR2dbcRepositories(basePackages = "com.acme.mysql", entityOperationsRef = "mysqlR2dbcEntityOperations")
static class MySQLConfiguration {

  @Bean
  @Qualifier("mysql")
  public ConnectionFactory mysqlConnectionFactory() {
    return …
  }

  @Bean
  public R2dbcEntityOperations mysqlR2dbcEntityOperations(@Qualifier("mysql") ConnectionFactory connectionFactory) {

    DatabaseClient databaseClient = DatabaseClient.create(connectionFactory);

    return new R2dbcEntityTemplate(databaseClient, MySqlDialect.INSTANCE);
  }
}
Note that @EnableR2dbcRepositories allows configuration either through databaseClientRef or entityOperationsRef. Using various DatabaseClient beans is useful when connecting to multiple databases of the same type. When using different database systems that differ in their dialect, use @EnableR2dbcRepositories(entityOperationsRef = …) instead.
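For illustration, a second configuration for a database system with a different dialect could look like the following sketch (the package name, bean names, connection URL, and the Postgres dialect are assumptions):

@Configuration
@EnableR2dbcRepositories(basePackages = "com.acme.postgres", entityOperationsRef = "postgresR2dbcEntityOperations")
static class PostgresConfiguration {

  @Bean
  @Qualifier("postgres")
  public ConnectionFactory postgresConnectionFactory() {
    // assumes the Postgres R2DBC driver is on the classpath; credentials omitted for brevity
    return ConnectionFactories.get("r2dbc:postgresql://localhost:5432/mydb");
  }

  @Bean
  public R2dbcEntityOperations postgresR2dbcEntityOperations(@Qualifier("postgres") ConnectionFactory connectionFactory) {
    return new R2dbcEntityTemplate(DatabaseClient.create(connectionFactory), PostgresDialect.INSTANCE);
  }
}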