This chapter points out the specialties for repository support for JDBC. It builds on the core repository support explained in [repositories]. You should have a sound understanding of the basic concepts explained there.
The main persistence API for relational databases in the Java world is certainly JPA, which has its own Spring Data module. Why is there another one?
JPA does a lot of things in order to help the developer. Among other things, it tracks changes to entities. It does lazy loading for you. It lets you map a wide array of object constructs to an equally wide array of database designs.
This is great and makes a lot of things really easy. Just take a look at a basic JPA tutorial. But it often gets really confusing as to why JPA does a certain thing. Also, things that are really simple conceptually get rather difficult with JPA.
Spring Data JDBC aims to be much simpler conceptually, by embracing the following design decisions:
-
If you load an entity, SQL statements get run. Once this is done, you have a completely loaded entity. No lazy loading or caching is done.
-
If you save an entity, it gets saved. If you do not, it does not. There is no dirty tracking and no session.
-
There is a simple model of how to map entities to tables. It probably only works for rather simple cases. If you do not like that, you should code your own strategy. Spring Data JDBC offers only very limited support for customizing the strategy with annotations.
All Spring Data modules are inspired by the concepts of “repository”, “aggregate”, and “aggregate root” from Domain Driven Design. These are possibly even more important for Spring Data JDBC, because they are, to some extent, contrary to normal practice when working with relational databases.
An aggregate is a group of entities that is guaranteed to be consistent between atomic changes to it.
A classic example is an Order with OrderItems.
A property on Order (for example, numberOfItems is consistent with the actual number of OrderItems) remains consistent as changes are made.
References across aggregates are not guaranteed to be consistent at all times. They are guaranteed to become consistent eventually.
Each aggregate has exactly one aggregate root, which is one of the entities of the aggregate. The aggregate gets manipulated only through methods on that aggregate root. These are the atomic changes mentioned earlier.
A repository is an abstraction over a persistent store that looks like a collection of all the aggregates of a certain type.
For Spring Data in general, this means you want to have one Repository per aggregate root.
In addition, for Spring Data JDBC this means that all entities reachable from an aggregate root are considered to be part of that aggregate root.
Spring Data JDBC assumes that only the aggregate has a foreign key to a table storing non-root entities of the aggregate and no other entity points toward non-root entities.
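To make this concrete, the Order/OrderItems aggregate described above might be modeled as follows. This is a minimal sketch; the class and column names are illustrative assumptions, not something Spring Data JDBC prescribes:

class Order {                           // the aggregate root
    @Id Long id;
    int numberOfItems;                  // kept consistent with items.size() by the root's methods
    Set<OrderItem> items;               // non-root entities; the ORDER_ITEM table holds the back reference

    void addItem(OrderItem item) {      // the aggregate is manipulated only through the root
        items.add(item);
        numberOfItems = items.size();
    }
}

class OrderItem {                       // reachable only through Order, hence part of the aggregate
    String product;
}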
Warning: In the current implementation, entities referenced from an aggregate root are deleted and recreated by Spring Data JDBC.
You can overwrite the repository methods with implementations that match your style of working and designing your database.
An easy way to bootstrap setting up a working environment is to create a Spring-based project in STS or from Spring Initializr.
First, you need to set up a running database server. Refer to your vendor documentation on how to configure your database for JDBC access.
To create a Spring project in STS:
- Go to File → New → Spring Template Project → Simple Spring Utility Project, and press Yes when prompted. Then enter a project and a package name, such as org.spring.jdbc.example.
- Add the following to the pom.xml file's dependencies element:

<dependencies>
  <!-- other dependency elements omitted -->
  <dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-jdbc</artifactId>
    <version>{version}</version>
  </dependency>
</dependencies>
- Change the version of Spring in the pom.xml to be

<spring.framework.version>{springVersion}</spring.framework.version>

- Add the following location of the Spring Milestone repository for Maven to your pom.xml such that it is at the same level as your <dependencies/> element:

<repositories>
  <repository>
    <id>spring-milestone</id>
    <name>Spring Maven MILESTONE Repository</name>
    <url>https://repo.spring.io/libs-milestone</url>
  </repository>
</repositories>
The repository is also browsable at that URL.
There is a GitHub repository with several examples that you can download and play around with to get a feel for how the library works.
The Spring Data JDBC repositories support can be activated by an annotation through Java configuration, as the following example shows:
@Configuration
@EnableJdbcRepositories // (1)
class ApplicationConfig extends AbstractJdbcConfiguration { // (2)
@Bean
DataSource dataSource() { // (3)
EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
return builder.setType(EmbeddedDatabaseType.HSQL).build();
}
@Bean
NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) { // (4)
return new NamedParameterJdbcTemplate(dataSource);
}
@Bean
TransactionManager transactionManager(DataSource dataSource) { // (5)
return new DataSourceTransactionManager(dataSource);
}
}
1. @EnableJdbcRepositories creates implementations for interfaces derived from Repository.
2. AbstractJdbcConfiguration provides various default beans required by Spring Data JDBC.
3. Creates a DataSource connecting to a database. This is required by the following two bean methods.
4. Creates the NamedParameterJdbcOperations used by Spring Data JDBC to access the database.
5. Spring Data JDBC utilizes the transaction management provided by Spring JDBC.
The configuration class in the preceding example sets up an embedded HSQL database by using the EmbeddedDatabaseBuilder API of spring-jdbc.
The DataSource is then used to set up NamedParameterJdbcOperations and a TransactionManager.
We finally activate Spring Data JDBC repositories by using the @EnableJdbcRepositories annotation.
If no base package is configured, it uses the package in which the configuration class resides.
Extending AbstractJdbcConfiguration ensures various beans get registered.
Overwriting its methods can be used to customize the setup (see below).
This configuration can be further simplified by using Spring Boot.
With Spring Boot, a DataSource is sufficient once the starter spring-boot-starter-data-jdbc is included in the dependencies.
Everything else is done by Spring Boot.
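For illustration, a Spring Boot application then needs no explicit repository configuration at all. The following is a minimal sketch; it assumes the spring-boot-starter-data-jdbc dependency is on the classpath and a DataSource is configured through the usual spring.datasource.* properties:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}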
There are a couple of things one might want to customize in this setup.
Spring Data JDBC uses implementations of the interface Dialect to encapsulate behavior that is specific to a database or its JDBC driver.
By default, the AbstractJdbcConfiguration tries to determine the database in use and register the correct Dialect.
This behavior can be changed by overwriting jdbcDialect(NamedParameterJdbcOperations).
If you use a database for which no dialect is available, your application won't start up. In that case, you have to ask your vendor to provide a Dialect implementation. Alternatively, you can:
- Implement your own Dialect.
- Implement a JdbcDialectProvider returning the Dialect.
- Register the provider by creating a spring.factories resource under META-INF and perform the registration by adding a line such as the following (see the provider sketch after this list):

org.springframework.data.jdbc.repository.config.DialectResolver$JdbcDialectProvider=<fully qualified name of your JdbcDialectProvider>
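A provider might look like the following sketch. MyDialect is a hypothetical Dialect implementation of your own; the getDialect(Connection) method is assumed to be the signature of DialectResolver.JdbcDialectProvider in your version:

import java.sql.Connection;
import java.util.Optional;

import org.springframework.data.jdbc.repository.config.DialectResolver;
import org.springframework.data.relational.core.dialect.Dialect;

public class MyDialectProvider implements DialectResolver.JdbcDialectProvider {

    @Override
    public Optional<Dialect> getDialect(Connection connection) {
        // Return your Dialect for connections to your database, Optional.empty() otherwise.
        return Optional.of(MyDialect.INSTANCE); // MyDialect: your own Dialect implementation
    }
}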
Saving an aggregate can be performed with the CrudRepository.save(…) method.
If the aggregate is new, this results in an insert for the aggregate root, followed by insert statements for all directly or indirectly referenced entities.
If the aggregate root is not new, all referenced entities get deleted, the aggregate root gets updated, and all referenced entities get inserted again. Note that whether an instance is new is part of the instance's state.
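The following sketch illustrates this, reusing the hypothetical Order aggregate from above and assuming a CrudRepository<Order, Long> named repository:

Order order = new Order();           // id is null, so the aggregate is new
order.addItem(new OrderItem());
order = repository.save(order);      // INSERT for the order, INSERT for each item

order.addItem(new OrderItem());      // id is now set, so the aggregate is not new
repository.save(order);              // DELETE all items, UPDATE the order, re-INSERT all items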
Note: This approach has some obvious downsides. If only a few of the referenced entities have actually changed, the deletion and re-insertion is wasteful. While this process could and probably will be improved, there are certain limitations to what Spring Data JDBC can offer. It does not know the previous state of an aggregate, so any update process always has to take whatever it finds in the database and make sure it converts it to whatever is the state of the entity passed to the save method.
The properties of the following types are currently supported (see the entity sketch after this list):

- All primitive types and their boxed types (int, float, Integer, Float, and so on)
- Enums get mapped to their name.
- String
- java.util.Date, java.time.LocalDate, java.time.LocalDateTime, and java.time.LocalTime
- Arrays and Collections of the types mentioned above can be mapped to columns of array type if your database supports that.
- Anything your database driver accepts.
- References to other entities. They are considered a one-to-one relationship, or an embedded type. It is optional for one-to-one relationship entities to have an id attribute. The table of the referenced entity is expected to have an additional column with a name based on the referencing entity (see Back References). Embedded entities do not need an id. If one is present, it gets ignored.
- Set<some entity> is considered a one-to-many relationship. The table of the referenced entity is expected to have an additional column with a name based on the referencing entity (see Back References).
- Map<simple type, some entity> is considered a qualified one-to-many relationship. The table of the referenced entity is expected to have two additional columns: one named based on the referencing entity for the foreign key (see Back References) and one with the same name and an additional _key suffix for the map key. You can change this behavior by implementing NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path) and NamingStrategy.getKeyColumn(RelationalPersistentProperty property), respectively. Alternatively, you may annotate the attribute with @MappedCollection(idColumn="your_column_name", keyColumn="your_key_column_name").
- List<some entity> is mapped as a Map<Integer, some entity>.
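The following sketch pulls several of these rules together. All class and property names are illustrative assumptions; the comments state the table structure Spring Data JDBC would expect:

class Purchase {                           // maps to table PURCHASE
    @Id Long id;                           // simple column
    String note;                           // simple column
    ShippingAddress address;               // one-to-one: SHIPPING_ADDRESS needs a PURCHASE back-reference column
    Set<LineItem> items;                   // one-to-many: LINE_ITEM needs a PURCHASE back-reference column
    Map<String, Discount> discounts;       // qualified one-to-many: DISCOUNT needs PURCHASE and PURCHASE_KEY columns
}

class ShippingAddress { String street; }   // one-to-one entity; an id attribute is optional
class LineItem { String product; }
class Discount { int percent; }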
The handling of referenced entities is limited. This is based on the idea of aggregate roots as described above. If you reference another entity, that entity is, by definition, part of your aggregate. So, if you remove the reference, the previously referenced entity gets deleted. This also means references are 1-1 or 1-n, but not n-1 or n-m.
If you have n-1 or n-m references, you are, by definition, dealing with two separate aggregates.
References between those may be encoded as simple id values, which map properly with Spring Data JDBC.
A better way to encode these is to make them instances of AggregateReference.
An AggregateReference is a wrapper around an id value that marks that value as a reference to a different aggregate.
Also, the type of that aggregate is encoded in a type parameter.
All references in an aggregate result in a foreign key relationship in the opposite direction in the database. By default, the name of the foreign key column is the table name of the referencing entity.
Alternatively, you may choose to have them named by the entity name of the referencing entity, ignoring @Table annotations.
You activate this behavior by calling setForeignKeyNaming(ForeignKeyNaming.IGNORE_RENAMING) on the RelationalMappingContext.
For List and Map references, an additional column is required for holding the list index or map key. It is based on the foreign key column, with an additional _KEY suffix.
If you want a completely different way of naming these back references, you may implement NamingStrategy.getReverseColumnName(PersistentPropertyPathExtension path) in a way that fits your needs.
The following example shows declaring and setting an AggregateReference:

class Person {
    @Id long id;
    AggregateReference<Person, Long> bestFriend;
}

// ...
Person p1 = new Person();
Person p2 = new Person(); // some initialization omitted
p1.bestFriend = AggregateReference.to(p2.id);
When you use the standard implementations of CrudRepository that Spring Data JDBC provides, they expect a certain table structure.
You can tweak that by providing a NamingStrategy in your application context.
When the NamingStrategy does not match your database table names, you can customize the names with the @Table annotation.
The element value of this annotation provides the custom table name.
The following example maps the MyEntity class to the CUSTOM_TABLE_NAME table in the database:
@Table("CUSTOM_TABLE_NAME")
class MyEntity {
@Id
Integer id;
String name;
}
When the NamingStrategy does not match your database column names, you can customize the names with the @Column annotation.
The element value of this annotation provides the custom column name.
The following example maps the name property of the MyEntity class to the CUSTOM_COLUMN_NAME column in the database:
class MyEntity {
@Id
Integer id;
@Column("CUSTOM_COLUMN_NAME")
String name;
}
The @MappedCollection annotation can be used on a reference type (one-to-one relationship) or on Sets, Lists, and Maps (one-to-many relationship).
The idColumn element of the annotation provides a custom name for the foreign key column referencing the id column in the other table.
In the following example, the corresponding table for the MySubEntity class has a NAME column and a CUSTOM_MY_ENTITY_ID_COLUMN_NAME column that references the id of MyEntity:
class MyEntity {
@Id
Integer id;
@MappedCollection(idColumn = "CUSTOM_MY_ENTITY_ID_COLUMN_NAME")
Set<MySubEntity> subEntities;
}
class MySubEntity {
String name;
}
When using List and Map, you must have an additional column for the position of a dataset in the List or the key value of the entity in the Map.
This additional column name may be customized with the keyColumn element of the @MappedCollection annotation:
class MyEntity {
@Id
Integer id;
@MappedCollection(idColumn = "CUSTOM_COLUMN_NAME", keyColumn = "CUSTOM_KEY_COLUMN_NAME")
List<MySubEntity> name;
}
class MySubEntity {
String name;
}
Embedded entities are used to have value objects in your Java data model, even if there is only one table in your database.
In the following example, MyEntity is mapped with the @Embedded annotation.
As a consequence, the database is expected to contain a table my_entity with the two columns id and name (from the EmbeddedEntity class).
However, if the name column is actually null within the result set, the entire property embeddedEntity is set to null, according to the onEmpty element of @Embedded, which nulls objects when all nested properties are null.
Opposite to this behavior, USE_EMPTY tries to create a new instance using either a default constructor or one that accepts nullable parameter values from the result set.
class MyEntity {
@Id
Integer id;
@Embedded(onEmpty = USE_NULL) // (1)
EmbeddedEntity embeddedEntity;
}
class EmbeddedEntity {
String name;
}
1. Nulls embeddedEntity if name is null. Use USE_EMPTY to instantiate embeddedEntity with a potential null value for the name property.
If you need a value object multiple times in an entity, this can be achieved with the optional prefix element of the @Embedded annotation.
This element represents a prefix that is prepended to each column name of the embedded object.
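For instance, the following sketch (the class names and the FROM_/TO_ prefixes are illustrative assumptions) maps two embedded Airport values into one table, yielding columns such as FROM_CODE, FROM_CITY, TO_CODE, and TO_CITY:

class FlightRoute {
    @Id Integer id;

    @Embedded(onEmpty = USE_NULL, prefix = "FROM_")  // columns FROM_CODE and FROM_CITY
    Airport departure;

    @Embedded(onEmpty = USE_NULL, prefix = "TO_")    // columns TO_CODE and TO_CITY
    Airport arrival;
}

class Airport {
    String code;
    String city;
}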
Tip: Make use of the shortcuts @Embedded.Nullable and @Embedded.Empty for @Embedded(onEmpty = USE_NULL) and @Embedded(onEmpty = USE_EMPTY) to reduce verbosity:

class MyEntity {
    @Id
    Integer id;

    @Embedded.Nullable // (1)
    EmbeddedEntity embeddedEntity;
}

1. Shortcut for @Embedded(onEmpty = USE_NULL).
Embedded entities containing a Collection or a Map are always considered non-empty, since they at least contain the empty collection or map.
Such an entity is therefore never null, even when using @Embedded(onEmpty = USE_NULL).
Spring Data JDBC uses the ID to identify entities.
The ID of an entity must be annotated with Spring Data's @Id annotation.
When your database has an auto-increment column for the ID column, the generated value gets set in the entity after inserting it into the database.
One important constraint is that, after saving an entity, the entity must not be new anymore.
Note that whether an entity is new is part of the entity's state.
With auto-increment columns, this happens automatically, because the ID gets set by Spring Data with the value from the ID column.
If you are not using auto-increment columns, you can use a BeforeConvert listener, which sets the ID of the entity (covered later in this document).
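As a sketch of that approach, the following callback assigns a UUID-based ID to new aggregates before they are converted into SQL. MyEntity and its accessors are assumptions for illustration; BeforeConvertCallback is the callback interface covered later in this document:

@Bean
BeforeConvertCallback<MyEntity> idGenerator() {
    return entity -> {
        if (entity.getId() == null) {                    // only new aggregates lack an ID
            entity.setId(UUID.randomUUID().toString());  // set before the first insert
        }
        return entity;
    };
}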
Attributes annotated with @ReadOnlyProperty will not be written to the database by Spring Data JDBC, but they will be read when an entity gets loaded.
Spring Data JDBC will not automatically reload an entity after writing it. Therefore, you have to reload it explicitly if you want to see data that was generated in the database for such columns.
If the annotated attribute is an entity or collection of entities, it is represented by one or more separate rows in separate tables. Spring Data JDBC will not perform any insert, delete or update for these rows.
Attributes annotated with @InsertOnlyProperty will only be written to the database by Spring Data JDBC during insert operations.
For updates, these properties are ignored.
@InsertOnlyProperty is only supported for the aggregate root.
Spring Data JDBC supports optimistic locking by means of a numeric attribute on the aggregate root that is annotated with @Version.
Whenever Spring Data JDBC saves an aggregate with such a version attribute, two things happen:
The update statement for the aggregate root contains a WHERE clause checking that the version stored in the database is actually unchanged.
If this is not the case, an OptimisticLockingFailureException is thrown.
Also, the version attribute gets increased both in the entity and in the database, so a concurrent action will notice the change and throw an OptimisticLockingFailureException, if applicable, as described above.
This process also applies to inserting new aggregates, where a null or 0 version indicates a new instance, and the increased instance afterwards marks the instance as not new anymore. This works rather nicely with cases where the id is generated during object construction, for example when UUIDs are used.
During deletes the version check also applies, but no version is increased.
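A minimal sketch of a versioned aggregate root (the class and field names are illustrative):

class Account {

    @Id Long id;

    @Version Long version;  // checked in the WHERE clause of updates and incremented on every save

    String owner;
}

Saving a stale instance, that is, one whose version no longer matches the row in the database, fails with an OptimisticLockingFailureException.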
This section offers some specific information about the implementation and use of Spring Data JDBC.
Most of the data access operations you usually trigger on a repository result in a query being run against the database. Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:
interface PersonRepository extends PagingAndSortingRepository<Person, String> {

    List<Person> findByFirstname(String firstname);                                   // (1)

    List<Person> findByFirstnameOrderByLastname(String firstname, Pageable pageable); // (2)

    Slice<Person> findByLastname(String lastname, Pageable pageable);                 // (3)

    Page<Person> findByLastname(String lastname, Pageable pageable);                  // (4)

    Person findByFirstnameAndLastname(String firstname, String lastname);             // (5)

    Person findFirstByLastname(String lastname);                                      // (6)

    @Query("SELECT * FROM person WHERE lastname = :lastname")
    List<Person> findByLastname(String lastname);                                     // (7)

    @Query("SELECT * FROM person WHERE lastname = :lastname")
    Stream<Person> streamByLastname(String lastname);                                 // (8)

    @Query("SELECT * FROM person WHERE username = :#{ principal?.username }")
    Person findActiveUser();                                                          // (9)
}
1. The method shows a query for all people with the given firstname. The query is derived by parsing the method name for constraints that can be concatenated with And and Or. Thus, the method name results in a query expression of SELECT … FROM person WHERE firstname = :firstname.
2. Use Pageable to pass offset and sorting parameters to the database.
3. Return a Slice<Person>. Selects LIMIT+1 rows to determine whether there's more data to consume. ResultSetExtractor customization is not supported.
4. Run a paginated query returning Page<Person>. Selects only data within the given page bounds and potentially a count query to determine the total count. ResultSetExtractor customization is not supported.
5. Find a single entity for the given criteria. It completes with IncorrectResultSizeDataAccessException on non-unique results.
6. In contrast to (5), the first entity is always returned even if the query yields more results.
7. The findByLastname method shows a query for all people with the given lastname.
8. The streamByLastname method returns a Stream, which makes values available as soon as they are returned from the database.
9. You can use the Spring Expression Language to dynamically resolve parameters. In the sample, Spring Security is used to resolve the username of the current user.
The following table shows the keywords that are supported for query methods:
Keyword | Sample | Logical result
---|---|---
After | findByBirthdateAfter(Date date) | birthdate > date
GreaterThan | findByAgeGreaterThan(int age) | age > age
GreaterThanEqual | findByAgeGreaterThanEqual(int age) | age >= age
Before | findByBirthdateBefore(Date date) | birthdate < date
LessThan | findByAgeLessThan(int age) | age < age
LessThanEqual | findByAgeLessThanEqual(int age) | age <= age
Between | findByAgeBetween(int from, int to) | age BETWEEN from AND to
NotBetween | findByAgeNotBetween(int from, int to) | age NOT BETWEEN from AND to
In | findByAgeIn(Collection<Integer> ages) | age IN (age1, age2, ageN)
NotIn | findByAgeNotIn(Collection<Integer> ages) | age NOT IN (age1, age2, ageN)
IsNotNull, NotNull | findByFirstnameNotNull() | firstname IS NOT NULL
IsNull, Null | findByFirstnameNull() | firstname IS NULL
Like, StartingWith, EndingWith | findByFirstnameLike(String name) | firstname LIKE name
NotLike, IsNotLike | findByFirstnameNotLike(String name) | firstname NOT LIKE name
Containing on String | findByFirstnameContaining(String name) | firstname LIKE '%' + name + '%'
NotContaining on String | findByFirstnameNotContaining(String name) | firstname NOT LIKE '%' + name + '%'
(No keyword) | findByFirstname(String name) | firstname = name
Not | findByFirstnameNot(String name) | firstname != name
IsTrue, True | findByActiveIsTrue() | active IS TRUE
IsFalse, False | findByActiveIsFalse() | active IS FALSE
Note: Query derivation is limited to properties that can be used in a WHERE clause without using joins.
The JDBC module supports defining a query manually as a String in a @Query annotation or as a named query in a property file.
Deriving a query from the name of the method is currently limited to simple properties, that is, properties present in the aggregate root directly. Also, only select queries are supported by this approach.
The following example shows how to use @Query to declare a query method:
interface UserRepository extends CrudRepository<User, Long> {
@Query("select firstName, lastName from User u where u.emailAddress = :email")
User findByEmailAddress(@Param("email") String email);
}
By default, the query result is converted into entities with the same RowMapper that Spring Data JDBC uses for the queries it generates itself.
The query you provide must match the format the RowMapper expects.
Columns for all properties that are used in the constructor of an entity must be provided.
Columns for properties that get set via setter, wither, or field access are optional.
Properties that don't have a matching column in the result will not be set.
The query is used for populating the aggregate root, embedded entities, and one-to-one relationships, including arrays of primitive types that get stored and loaded as SQL array types.
Separate queries are generated for maps, lists, sets, and arrays of entities.
Note: Spring fully supports Java 8's parameter name discovery based on the -parameters compiler flag. By using this flag in your build as an alternative to debug information, you can omit the @Param annotation for named parameters.
Note: Spring Data JDBC supports only named parameters.
If no query is given in an annotation as described in the previous section, Spring Data JDBC tries to locate a named query.
There are two ways to determine the name of the query.
The default is to take the simple name of the domain class of the query, that is, the aggregate root of the repository, and append the name of the method, separated by a dot (.).
Alternatively, the @Query annotation has a name attribute, which can be used to specify the name of a query to be looked up.
Named queries are expected to be provided in the property file META-INF/jdbc-named-queries.properties on the classpath.
The location of that file may be changed by setting a value to @EnableJdbcRepositories.namedQueriesLocation.
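As a sketch, for a repository whose aggregate root is Person, a named query backing a findByFirstname method could be declared as follows (the query text and property name are illustrative):

# META-INF/jdbc-named-queries.properties
Person.findByFirstname=SELECT * FROM person WHERE firstname = :firstname

A repository method List<Person> findByFirstname(String firstname) without a @Query value would then be resolved against this named query.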
When you specify Stream as the return type of a query method, Spring Data JDBC returns elements as soon as they become available. When dealing with large amounts of data, this is suitable for reducing latency and memory requirements.
The stream holds an open connection to the database.
To avoid memory leaks, that connection needs to be closed eventually, by closing the stream.
The recommended way to do that is a try-with-resources block, as shown below.
It also means that, once the connection to the database is closed, the stream cannot obtain further elements and likely throws an exception.
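A minimal sketch, reusing the streamByLastname method declared earlier (repository and LOG are assumed to be available):

// The try-with-resources block closes the stream and with it the underlying connection.
try (Stream<Person> people = repository.streamByLastname("Matthews")) {
    people.forEach(person -> LOG.info("found {}", person));
}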
You can configure which RowMapper to use, either by using the @Query(rowMapperClass = ….) annotation element or by registering a QueryMappingConfiguration bean (the successor of RowMapperMap) with a RowMapper per method return type.
The following example shows how to register a DefaultQueryMappingConfiguration:
@Bean
QueryMappingConfiguration rowMappers() {
return new DefaultQueryMappingConfiguration()
.register(Person.class, new PersonRowMapper())
.register(Address.class, new AddressRowMapper());
}
When determining which RowMapper to use for a method, the following steps are followed, based on the return type of the method:

1. If the type is a simple type, no RowMapper is used. Instead, the query is expected to return a single row with a single column, and a conversion to the return type is applied to that value.
2. The entity classes in the QueryMappingConfiguration are iterated until one is found that is a superclass or interface of the return type in question. The RowMapper registered for that class is used. Iterating happens in the order of registration, so make sure to register more general types after specific ones.

If applicable, wrapper types such as collections or Optional are unwrapped.
Thus, a return type of Optional<Person> uses the Person type in the preceding process.
Note: Using a custom RowMapper through QueryMappingConfiguration, @Query(rowMapperClass=…), or a custom ResultSetExtractor disables Entity Callbacks and Lifecycle Events, as the result mapping can issue its own events/callbacks if needed.
You can mark a query as being a modifying query by using the @Modifying annotation on a query method, as the following example shows:
@Modifying
@Query("UPDATE DUMMYENTITY SET name = :name WHERE id = :id")
boolean updateName(@Param("id") Long id, @Param("name") String name);
You can specify the following return types:

- void
- int (updated record count)
- boolean (whether a record was updated)

Modifying queries are executed directly against the database. No events or callbacks get called. Therefore, fields with auditing annotations do not get updated unless they are updated in the annotated query itself.
The CRUD operations and query methods can be delegated to MyBatis. This section describes how to configure Spring Data JDBC to integrate with MyBatis and which conventions to follow to hand over the running of the queries, as well as the mapping, to the library.
The easiest way to properly plug MyBatis into Spring Data JDBC is by importing MyBatisJdbcConfiguration into your application configuration:
@Configuration
@EnableJdbcRepositories
@Import(MyBatisJdbcConfiguration.class)
class Application {
@Bean
SqlSessionFactoryBean sqlSessionFactoryBean() {
// Configure MyBatis here
}
}
As you can see, all you need to declare is a SqlSessionFactoryBean, as MyBatisJdbcConfiguration relies on a SqlSession bean to be available in the ApplicationContext eventually.
For each operation in CrudRepository, Spring Data JDBC runs multiple statements.
If there is a SqlSessionFactory in the application context, Spring Data checks, for each step, whether the SessionFactory offers a statement.
If one is found, that statement (including its configured mapping to an entity) is used.
The name of the statement is constructed by concatenating the fully qualified name of the entity type with Mapper. and a String determining the kind of statement.
For example, if an instance of org.example.User is to be inserted, Spring Data JDBC looks for a statement named org.example.UserMapper.insert.
When the statement is run, an instance of MyBatisContext gets passed as an argument, which makes various arguments available to the statement.
The following table describes the available MyBatis statements:
Name | Purpose | CrudRepository methods that might trigger this statement | Attributes available in the MyBatisContext
---|---|---|---
insert | Inserts a single entity. This also applies for entities referenced by the aggregate root. | save, saveAll | getInstance: the instance to be saved. getDomainType: the type of the entity to be saved. get(<key>): the ID of the referencing entity, where <key> is the name of the back-reference column provided by the NamingStrategy.
update | Updates a single entity. This also applies for entities referenced by the aggregate root. | save, saveAll | getInstance: the instance to be saved. getDomainType: the type of the entity to be saved.
delete | Deletes a single entity. | delete, deleteById | getId: the ID of the instance to be deleted. getDomainType: the type of the entity to be deleted.
deleteAll-<propertyPath> | Deletes all entities referenced by any aggregate root of the type used as prefix with the given property path. Note that the type used for prefixing the statement name is the name of the aggregate root, not the one of the entity to be deleted. | deleteAll | getDomainType: the type of the entities to be deleted.
deleteAll | Deletes all aggregate roots of the type used as the prefix. | deleteAll | getDomainType: the type of the entities to be deleted.
delete-<propertyPath> | Deletes all entities referenced by an aggregate root with the given propertyPath. | deleteById | getId: the ID of the aggregate root whose referenced entities are to be deleted. getDomainType: the type of the entities to be deleted.
findById | Selects an aggregate root by ID. | findById | getId: the ID of the entity to load. getDomainType: the type of the entity to load.
findAll | Select all aggregate roots. | findAll | getDomainType: the type of the entities to load.
findAllById | Select a set of aggregate roots by ID values. | findAllById | getId: a list of ID values of the entities to load. getDomainType: the type of the entities to load.
findAllByProperty-<propertyName> | Select a set of entities that is referenced by another entity. The type of the referencing entity is used for the prefix. The referenced entities type is used as the suffix. This method is deprecated. Use findAllByPath instead. | All find* methods | getId: the ID of the entity referencing the entities to load. getDomainType: the type of the entities to load.
findAllByPath-<propertyPath> | Select a set of entities that is referenced by another entity via a property path. | All find* methods | getIdentifier: the Identifier holding the ID of the aggregate root plus the keys and list indexes of all path elements. getDomainType: the type of the entities to load.
findAllSorted | Select all aggregate roots, sorted. | findAll(Sort) | getSort: the sorting specification.
findAllPaged | Select a page of aggregate roots, optionally sorted. | findAll(Pageable) | getPageable: the paging specification.
count | Count the number of aggregate roots of the type used as prefix. | count | getDomainType: the type of aggregate roots to count.
Spring Data JDBC triggers events that get published to any matching ApplicationListener beans in the application context.
Events and callbacks get triggered only for aggregate roots.
If you want to process non-root entities, you need to do that through a listener for the containing aggregate root.
Entity lifecycle events can be costly, and you may notice a change in the performance profile when loading large result sets. You can disable lifecycle events on the Template API.
For example, the following listener gets invoked before an aggregate gets saved:
@Bean
ApplicationListener<BeforeSaveEvent<Object>> loggingSaves() {
return event -> {
Object entity = event.getEntity();
LOG.info("{} is getting saved.", entity);
};
}
If you want to handle events only for a specific domain type, you may derive your listener from AbstractRelationalEventListener and overwrite one or more of the onXXX methods, where XXX stands for an event type.
Callback methods only get invoked for events related to the domain type and its subtypes, so you do not require any further casting.
class PersonLoadListener extends AbstractRelationalEventListener<Person> {
@Override
protected void onAfterLoad(AfterLoadEvent<Person> personLoad) {
LOG.info(personLoad.getEntity());
}
}
The following table describes the available events. For more details about the exact relation between process steps, see the description of the available callbacks, which map 1:1 to events.

Event | When It Is Published
---|---
BeforeDeleteEvent | Before an aggregate root gets deleted.
AfterDeleteEvent | After an aggregate root gets deleted.
BeforeConvertEvent | Before an aggregate root gets converted into a plan for executing SQL statements, but after the decision was made whether the aggregate is new or not, that is, whether an update or an insert is in order. This is the correct event if you want to set an id programmatically.
BeforeSaveEvent | Before an aggregate root gets saved (that is, inserted or updated, but after the decision about whether it gets inserted or updated was made). Do not use this for creating ids for new aggregates. Use BeforeConvertEvent instead.
AfterSaveEvent | After an aggregate root gets saved (that is, inserted or updated).
AfterLoadEvent | After an aggregate root gets created from a database ResultSet.
AfterConvertEvent | After an aggregate root gets created from a database ResultSet.
Warning: Lifecycle events depend on an ApplicationEventMulticaster, which in case of the SimpleApplicationEventMulticaster can be configured with a TaskExecutor and therefore gives no guarantees about when an event is processed.
Spring Data JDBC uses the EntityCallback API for its auditing support and reacts to the callbacks listed in the following table.

Process | EntityCallback / Process Step | Comment
---|---|---
Delete | BeforeDeleteCallback | Before the actual deletion.
Delete | The aggregate root and all entities of that aggregate get removed from the database. |
Delete | AfterDeleteCallback | After an aggregate gets deleted.
Save | Determine whether an insert or an update of the aggregate is to be performed, depending on whether it is new or not. |
Save | BeforeConvertCallback | This is the correct callback if you want to set an id programmatically. In the previous step, new aggregates got detected as such, and an id generated in this step would be used in the following step.
Save | Convert the aggregate to an aggregate change, that is, a sequence of SQL statements to be executed against the database. In this step the decision is made whether an id is provided by the aggregate or whether the id is still empty and is expected to be generated by the database. |
Save | BeforeSaveCallback | Changes made to the aggregate root may get considered, but the decision whether an id value will be sent to the database is already made in the previous step.
Save | The SQL statements determined above get executed against the database. |
Save | AfterSaveCallback | After an aggregate root gets saved (that is, inserted or updated).
Load | Load the aggregate using one or more SQL queries. Construct the aggregate from the result set. |
Load | AfterConvertCallback | After an aggregate root gets created from a database ResultSet.
We encourage the use of callbacks over events since they support the use of immutable classes and therefore are more powerful and versatile than events.
Spring Data JDBC does little to no logging on its own.
Instead, the mechanics of JdbcTemplate to issue SQL statements provide logging.
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's NamedParameterJdbcTemplate or MyBatis.
The methods of CrudRepository instances are transactional by default.
For reading operations, the transaction configuration readOnly flag is set to true.
All others are configured with a plain @Transactional annotation, so that the default transaction configuration applies.
For details, see the Javadoc of SimpleJdbcRepository.
If you need to tweak transaction configuration for one of the methods declared in a repository, redeclare the method in your repository interface, as follows:
interface UserRepository extends CrudRepository<User, Long> {
@Override
@Transactional(timeout = 10)
List<User> findAll();
// Further query method declarations
}
The preceding causes the findAll() method to be run with a timeout of 10 seconds and without the readOnly flag.
Another way to alter transactional behavior is by using a facade or service implementation that typically covers more than one repository. Its purpose is to define transactional boundaries for non-CRUD operations. The following example shows how to create such a facade:
@Service
public class UserManagementImpl implements UserManagement {
private final UserRepository userRepository;
private final RoleRepository roleRepository;
UserManagementImpl(UserRepository userRepository,
RoleRepository roleRepository) {
this.userRepository = userRepository;
this.roleRepository = roleRepository;
}
@Transactional
public void addRoleToAllUsers(String roleName) {
Role role = roleRepository.findByName(roleName);
for (User user : userRepository.findAll()) {
user.addRole(role);
userRepository.save(user);
}
    }
}
The preceding example causes calls to addRoleToAllUsers(…) to run inside a transaction (participating in an existing one or creating a new one if none is already running).
The transaction configuration at the repositories is then neglected, as the outer transaction configuration determines the actual transaction configuration used.
Note that you have to explicitly activate <tx:annotation-driven /> or use @EnableTransactionManagement to get annotation-based configuration for facades working.
Note that the preceding example assumes you use component scanning.
To let your query methods be transactional, use @Transactional at the repository interface you define, as the following example shows:
@Transactional(readOnly = true)
interface UserRepository extends CrudRepository<User, Long> {
List<User> findByLastname(String lastname);
@Modifying
@Transactional
@Query("delete from User u where u.active = false")
void deleteInactiveUsers();
}
Typically, you want the readOnly flag to be set to true, because most of the query methods only read data.
In contrast to that, deleteInactiveUsers() uses the @Modifying annotation and overrides the transaction configuration.
Thus, the method runs with the readOnly flag set to false.
Note: It is highly recommended to make query methods transactional. These methods might execute more than one query in order to populate an entity. Without a common transaction, Spring Data JDBC executes the queries in different connections. This may put excessive strain on the connection pool and might even lead to deadlocks when multiple methods request a fresh connection while holding on to one.
Note: It is definitely reasonable to mark read-only queries as such by setting the readOnly flag. This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject INSERT and UPDATE statements inside a read-only transaction). Instead, the readOnly flag is propagated as a hint to the underlying JDBC driver for performance optimizations.
In order to activate auditing, add @EnableJdbcAuditing to your configuration, as the following example shows:
@Configuration
@EnableJdbcAuditing
class Config {
@Bean
AuditorAware<AuditableUser> auditorProvider() {
return new AuditorAwareImpl();
}
}
If you expose a bean of type AuditorAware to the ApplicationContext, the auditing infrastructure automatically picks it up and uses it to determine the current user to be set on domain types.
If you have multiple implementations registered in the ApplicationContext, you can select the one to be used by explicitly setting the auditorAwareRef attribute of @EnableJdbcAuditing.
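For illustration, an audited aggregate could then look like the following sketch. The AuditableUser class and its fields are assumptions; the annotations come from org.springframework.data.annotation:

class AuditableUser {

    @Id Long id;

    @CreatedBy String createdBy;                 // filled from the AuditorAware bean on insert
    @CreatedDate Instant createdDate;            // set on insert

    @LastModifiedBy String lastModifiedBy;       // filled from the AuditorAware bean on every save
    @LastModifiedDate Instant lastModifiedDate;  // set on every save
}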
Spring Data JDBC supports locking on derived query methods.
To enable locking on a given derived query method inside a repository, you annotate it with @Lock.
The required value of type LockMode offers two values: PESSIMISTIC_READ, which guarantees that the data you are reading doesn't get modified, and PESSIMISTIC_WRITE, which obtains a lock to modify the data.
Some databases do not make this distinction.
In those cases, both modes are equivalent to PESSIMISTIC_WRITE.
interface UserRepository extends CrudRepository<User, Long> {
@Lock(LockMode.PESSIMISTIC_READ)
List<User> findByLastname(String lastname);
}
As you can see above, the method findByLastname(String lastname) will be executed with a pessimistic read lock. If you are using a database with the MySQL dialect, this results, for example, in the following query:

SELECT * FROM user u WHERE u.lastname = :lastname LOCK IN SHARE MODE

As an alternative to LockMode.PESSIMISTIC_READ, you can use LockMode.PESSIMISTIC_WRITE.