@@ -21,7 +21,7 @@ See also xref:repositories/core-concepts.adoc#is-new-state-detection[Entity Stat
Spring Data JDBC offers two ways in which it can load aggregates:

. The traditional way, which before version 3.2 was the only one, is really simple:
Each query loads the aggregate roots, independently if the query is based on a `CrudRepository` method, a derived query or a annotated query.
Each query loads the aggregate roots, independently if the query is based on a `CrudRepository` method, a derived query or an annotated query.
If the aggregate root references other entities those are loaded with separate statements.

. Spring Data JDBC 3.2 allows the use of _Single Query Loading_.
@@ -36,13 +36,13 @@ The plan is to remove this constraint in the future.
2. The aggregate must not use `AggregateReference` or embedded entities.
The plan is to remove this constraint in the future.

3. The database dialect must support it.Of the dialects provided by Spring Data JDBC all but H2 and HSQL support this.
3. The database dialect must support it. Of the dialects provided by Spring Data JDBC all but H2 and HSQL support this.
H2 and HSQL don't support analytic functions (aka windowing functions).

4. It only works for the find methods in `CrudRepository`, not for derived queries and not for annotated queries.
The plan is to remove this constraint in the future.

5. Single Query Loading needs to be enabled in the `JdbcMappingContext`, by calling `setSingleQueryLoadingEnabled(true)`
5. Single Query Loading needs to be enabled in the `JdbcMappingContext`, by calling `setSingleQueryLoadingEnabled(true)`.

If any condition is not fulfilled, Spring Data JDBC falls back to the default approach of loading aggregates.
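Condition 5 can be fulfilled wherever the `JdbcMappingContext` gets configured. The following is a minimal sketch, not taken from the documentation, assuming you define the mapping context bean yourself; only the `setSingleQueryLoadingEnabled(true)` call is the relevant part:

[source,java]
----
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;

@Configuration
class SingleQueryLoadingConfiguration {

	@Bean
	JdbcMappingContext jdbcMappingContext() {
		JdbcMappingContext mappingContext = new JdbcMappingContext();
		// Opt in to Single Query Loading; the other conditions listed above must still hold.
		mappingContext.setSingleQueryLoadingEnabled(true);
		return mappingContext;
	}
}
----

In a real application you would typically customize the context created by `AbstractJdbcConfiguration` instead of instantiating a bare one.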

@@ -77,7 +77,7 @@ Operating on single aggregates, named exactly as mentioned above, and with an `A

`save` does the same as the method of same name in a repository.

`insert` and `update` skip the test if the entity is new and assume a new or existing aggregate as indicated by their name.
`insert` and `update` skip the test if the entity is new and assume a new or existing aggregate as indicated by their names.
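As a hedged illustration of the difference, here is a sketch using `JdbcAggregateOperations`; the `Minion` aggregate and the surrounding service class are assumptions made for the example:

[source,java]
----
import org.springframework.data.jdbc.core.JdbcAggregateOperations;

class MinionService {

	private final JdbcAggregateOperations template;

	MinionService(JdbcAggregateOperations template) {
		this.template = template;
	}

	Minion create(Minion minion) {
		return template.insert(minion); // always issues an INSERT, no is-new check
	}

	Minion rename(Minion minion) {
		return template.update(minion); // always issues an UPDATE, no is-new check
	}

	Minion createOrUpdate(Minion minion) {
		return template.save(minion); // decides between INSERT and UPDATE, like the repository method
	}
}
----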

=== Querying

2 changes: 1 addition &amp; 1 deletion src/main/antora/modules/ROOT/pages/jdbc/events.adoc
@@ -40,7 +40,7 @@ class PersonLoadListener extends AbstractRelationalEventListener<Person> {
}
----
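To complement the (partially collapsed) load listener above, here is a sketch of a save listener; the `Person` type matches the example, while the listener name and the logging statement are illustrative:

[source,java]
----
import org.springframework.data.relational.core.mapping.event.AbstractRelationalEventListener;
import org.springframework.data.relational.core.mapping.event.BeforeSaveEvent;

class PersonSaveListener extends AbstractRelationalEventListener<Person> {

	@Override
	protected void onBeforeSave(BeforeSaveEvent<Person> event) {
		// Invoked after the aggregate got converted, right before it is written to the database.
		System.out.println("About to save " + event.getEntity());
	}
}
----

Such a listener only receives events once it is registered as a bean in the application context.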

The following table describes the available events.For more details about the exact relation between process steps see the link:#jdbc.entity-callbacks[description of available callbacks] which map 1:1 to events.
The following table describes the available events. For more details about the exact relation between process steps see the link:#jdbc.entity-callbacks[description of available callbacks] which map 1:1 to events.

.Available events
|===
16 changes: 8 additions &amp; 8 deletions src/main/antora/modules/ROOT/pages/jdbc/getting-started.adoc
@@ -21,7 +21,7 @@ Spring Data JDBC includes direct support for the following databases:
* Microsoft SQL Server
* MySQL
* Oracle
* Postgres
* PostgreSQL

If you use a different database then your application won’t start up.
The <<jdbc.dialects,dialect>> section contains further detail on how to proceed in such a case.
@@ -33,7 +33,7 @@ To create a Spring project in STS:

. Go to File -> New -> Spring Template Project -> Simple Spring Utility Project, and press Yes when prompted.
Then enter a project and a package name, such as `org.spring.jdbc.example`.
. Add the following to the `pom.xml` files `dependencies` element:
. Add the following to the `pom.xml` file's `dependencies` element:
+
[source,xml,subs="+attributes"]
----
@@ -77,7 +77,7 @@ The repository is also https://repo.spring.io/milestone/org/springframework/data

Spring Data JDBC does little to no logging on its own.
Instead, the mechanics of `JdbcTemplate` to issue SQL statements provide logging.
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access.html#jdbc-JdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis].
Thus, if you want to inspect what SQL statements are run, activate logging for Spring's {spring-framework-docs}/data-access/jdbc/core.html#jdbc-NamedParameterJdbcTemplate[`NamedParameterJdbcTemplate`] or https://www.mybatis.org/mybatis-3/logging.html[MyBatis].

You may also want to set the logging level to `DEBUG` to see some additional information.
To do so, edit the `application.properties` file to have the following content:
@@ -125,8 +125,8 @@ class ApplicationConfig extends AbstractJdbcConfiguration {
}
----

<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository`
<2> javadoc:org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration[] provides various default beans required by Spring Data JDBC
<1> `@EnableJdbcRepositories` creates implementations for interfaces derived from `Repository`.
<2> javadoc:org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration[] provides various default beans required by Spring Data JDBC.
<3> Creates a `DataSource` connecting to a database.
This is required by the following two bean methods.
<4> Creates the `NamedParameterJdbcOperations` used by Spring Data JDBC to access the database.
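Callouts <3> and <4> refer to bean methods inside the collapsed class body above. As a rough sketch of what such methods commonly look like (the embedded H2 database is purely illustrative and not part of the original listing):

[source,java]
----
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcOperations;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;
import org.springframework.transaction.TransactionManager;

@Configuration
class DataInfrastructureConfiguration {

	@Bean
	DataSource dataSource() {
		// Illustrative embedded database; see callout 3 above.
		return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2).build();
	}

	@Bean
	NamedParameterJdbcOperations namedParameterJdbcOperations(DataSource dataSource) {
		// See callout 4 above.
		return new NamedParameterJdbcTemplate(dataSource);
	}

	@Bean
	TransactionManager transactionManager(DataSource dataSource) {
		return new DataSourceTransactionManager(dataSource);
	}
}
----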
@@ -158,13 +158,13 @@ Alternatively, you can implement your own `Dialect`.

[TIP]
====
Dialects are resolved by javadoc:org.springframework.data.jdbc.core.dialect.DialectResolver[] from a `JdbcOperations` instance, typically by inspecting `Connection.getMetaData()`.
+ You can let Spring auto-discover your javadoc:org.springframework.data.jdbc.core.dialect.JdbcDialect[] by registering a class that implements `org.springframework.data.jdbc.core.dialect.DialectResolver$JdbcDialectProvider` through `META-INF/spring.factories`.
Dialects are resolved by javadoc:org.springframework.data.jdbc.core.dialect.DialectResolver[] from a `JdbcOperations` instance, typically by inspecting `Connection.getMetaData()`. +
You can let Spring auto-discover your javadoc:org.springframework.data.jdbc.core.dialect.JdbcDialect[] by registering a class that implements `org.springframework.data.jdbc.core.dialect.DialectResolver$JdbcDialectProvider` through `META-INF/spring.factories`.
`DialectResolver` discovers dialect provider implementations from the class path using Spring's `SpringFactoriesLoader`.
Author comment: I assume the intention was to insert a hard line break here. Currently, a literal "+" is displayed in the resulting doc instead.

To do so:

. Implement your own `Dialect`.
. Implement a `JdbcDialectProvider` returning the `Dialect`.
. Register the provider by creating a `spring.factories` resource under `META-INF` and perform the registration by adding a line +
`org.springframework.data.jdbc.core.dialect.DialectResolver$JdbcDialectProvider`=<fully qualified name of your JdbcDialectProvider>`
`org.springframework.data.jdbc.core.dialect.DialectResolver$JdbcDialectProvider`=<fully qualified name of your JdbcDialectProvider>`.
====
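A sketch of such a provider; the class and package names are placeholders, and the provider contract is assumed here to be a single `getDialect(JdbcOperations)` method returning an `Optional<Dialect>`:

[source,java]
----
package com.example.dialect;

import java.util.Optional;

import org.springframework.data.jdbc.core.dialect.DialectResolver;
import org.springframework.data.relational.core.dialect.Dialect;
import org.springframework.jdbc.core.JdbcOperations;

public class MyDialectProvider implements DialectResolver.JdbcDialectProvider {

	@Override
	public Optional<Dialect> getDialect(JdbcOperations operations) {
		// Inspect the connection metadata and return your Dialect when it matches, empty otherwise.
		return Optional.of(new MyDialect()); // MyDialect is your own Dialect implementation
	}
}
----

The matching `META-INF/spring.factories` entry would then read:

[source]
----
org.springframework.data.jdbc.core.dialect.DialectResolver$JdbcDialectProvider=com.example.dialect.MyDialectProvider
----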
6 changes: 3 additions & 3 deletions src/main/antora/modules/ROOT/pages/jdbc/mapping.adoc
@@ -42,7 +42,7 @@ The properties of the following types are currently supported:

* All primitive types and their boxed types (`int`, `float`, `Integer`, `Float`, and so on)

* Enums get mapped to their name.
* Enums get mapped to their names.

* `String`

@@ -135,7 +135,7 @@ p1.bestFriend = AggregateReference.to(p2.id);
----

You should not include attributes in your entities to hold the actual value of a back reference, nor of the key column of maps or lists.
If you want these value to be available in your domain model we recommend to do this in a `AfterConvertCallback` and store the values in transient values.
If you want these values to be available in your domain model we recommend to do this in an `AfterConvertCallback` and store the values in transient values.
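A sketch of such a callback, assuming a `Person` aggregate with a transient field that you want to populate after loading; the field and its setter are hypothetical:

[source,java]
----
import org.springframework.data.relational.core.mapping.event.AfterConvertCallback;

class PersonAfterConvertCallback implements AfterConvertCallback<Person> {

	@Override
	public Person onAfterConvert(Person person) {
		// Derive whatever values you need and store them in transient (non-persisted) fields,
		// e.g. person.setLoadedAt(Instant.now()); // hypothetical transient property
		// Register this callback as a bean so it gets picked up.
		return person;
	}
}
----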

:mapped-collection: true
:embedded-entities: true
@@ -214,7 +214,7 @@ If you are migrating from an older version of Spring Data JDBC and have `Abstrac
[TIP]
====
If you want to rely on https://spring.io/projects/spring-boot[Spring Boot] to bootstrap Spring Data JDBC, but still want to override certain aspects of the configuration, you may want to expose beans of that type.
For custom conversions you may e.g. choose to register a bean of type `JdbcCustomConversions` that will be picked up the by the Boot infrastructure.
For custom conversions you may e.g. choose to register a bean of type `JdbcCustomConversions` that will be picked up by the Boot infrastructure.
To learn more about this please make sure to read the Spring Boot https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#data.sql.jdbc[Reference Documentation].
====

6 changes: 3 additions & 3 deletions src/main/antora/modules/ROOT/pages/jdbc/mybatis.adoc
@@ -44,20 +44,20 @@ The following table describes the available MyBatis statements:
| Name | Purpose | CrudRepository methods that might trigger this statement | Attributes available in the `MyBatisContext`

| `insert` | Inserts a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`. |
`getInstance`: the instance to be saved
`getInstance`: the instance to be saved.

`getDomainType`: The type of the entity to be saved.

`get(<key>)`: ID of the referencing entity, where `<key>` is the name of the back reference column provided by the `NamingStrategy`.


| `update` | Updates a single entity. This also applies for entities referenced by the aggregate root. | `save`, `saveAll`.|
`getInstance`: The instance to be saved
`getInstance`: The instance to be saved.

`getDomainType`: The type of the entity to be saved.

| `delete` | Deletes a single entity. | `delete`, `deleteById`.|
`getId`: The ID of the instance to be deleted
`getId`: The ID of the instance to be deleted.

`getDomainType`: The type of the entity to be deleted.

12 changes: 6 additions & 6 deletions src/main/antora/modules/ROOT/pages/jdbc/query-methods.adoc
@@ -3,7 +3,7 @@

This section offers some specific information about the implementation and use of Spring Data JDBC.

Most of the data access operations you usually trigger on a repository result in a query being run against the databases.
Most of the data access operations you usually trigger on a repository result in a query being run against the database.
Defining such a query is a matter of declaring a method on the repository interface, as the following example shows:

.PersonRepository with query methods
@@ -36,7 +36,7 @@ interface PersonRepository extends PagingAndSortingRepository<Person, String> {
The query is derived by parsing the method name for constraints that can be concatenated with `And` and `Or`.
Thus, the method name results in a query expression of `SELECT … FROM person WHERE firstname = :firstname`.
<2> Use `Pageable` to pass offset and sorting parameters to the database.
<3> Return a `Slice<Person>`.Selects `LIMIT+1` rows to determine whether there's more data to consume. `ResultSetExtractor` customization is not supported.
<3> Return a `Slice<Person>`. Selects `LIMIT+1` rows to determine whether there's more data to consume. `ResultSetExtractor` customization is not supported.
<4> Run a paginated query returning `Page<Person>`. Selects only data within the given page bounds and potentially a count query to determine the total count. `ResultSetExtractor` customization is not supported.
<5> Find a single entity for the given criteria.
It completes with `IncorrectResultSizeDataAccessException` on non-unique results.
@@ -143,7 +143,7 @@ NOTE: Query derivation is limited to properties that can be used in a `WHERE` cl

The JDBC module supports defining a query manually as a String in a `@Query` annotation or as named query in a property file.

Deriving a query from the name of the method is is currently limited to simple properties, that means properties present in the aggregate root directly.
Deriving a query from the name of the method is currently limited to simple properties, that is, properties present directly in the aggregate root.
Also, only select queries are supported by this approach.

[[jdbc.query-methods.at-query]]
@@ -164,12 +164,12 @@ interface UserRepository extends CrudRepository<User, Long> {
For converting the query result into entities the same `RowMapper` is used by default as for the queries Spring Data JDBC generates itself.
The query you provide must match the format the `RowMapper` expects.
Columns for all properties that are used in the constructor of an entity must be provided.
Columns for properties that get set via setter, wither or field access are optional.
Columns for properties that get set via setter or field access are optional.
Author comment: I am not sure about the wither, though I am almost sure it is a typo or defect, so I simply deleted it.

Properties that don't have a matching column in the result will not be set.
The query is used for populating the aggregate root, embedded entities and one-to-one relationships including arrays of primitive types which get stored and loaded as SQL-array-types.
Separate queries are generated for maps, lists, sets and arrays of entities.

Properties one-to-one relationships must have there name prefixed by the name of the relationship plus `_`.
Properties of one-to-one relationships must have their names prefixed by the name of the relationship plus `_`.
For example if the `User` from the example above has an `address` with the property `city` the column for that `city` must be labeled `address_city`.
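A sketch of such a query; the table layout (`users`, `address`), the column names, and the `findWithAddressById` method are assumptions made purely for illustration:

[source,java]
----
import java.util.Optional;

import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

interface UserRepository extends CrudRepository<User, Long> {

	// Columns of the one-to-one `address` are aliased with the `address_` prefix.
	@Query("SELECT u.id, u.firstname, a.city AS address_city "
			+ "FROM users u LEFT JOIN address a ON a.user_id = u.id "
			+ "WHERE u.id = :id")
	Optional<User> findWithAddressById(@Param("id") Long id);
}
----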


@@ -190,7 +190,7 @@ Person findWithSpEL(PersonRef person);
----

This can be used to access members of a parameter, as demonstrated in the example above.
For more involved use cases an `EvaluationContextExtension` can be made available in the application context, which in turn can make any object available in to the SpEL.
For more involved use cases an `EvaluationContextExtension` can be made available in the application context, which in turn can make any object available in the SpEL.
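As a sketch of the latter, an extension that exposes an additional value to SpEL; the extension id and the exposed property are arbitrary examples, and the extension needs to be registered as a bean:

[source,java]
----
import java.util.Map;

import org.springframework.data.spel.spi.EvaluationContextExtension;

class TenantEvaluationContextExtension implements EvaluationContextExtension {

	@Override
	public String getExtensionId() {
		return "tenant";
	}

	@Override
	public Map<String, Object> getProperties() {
		// Everything returned here can be referenced from SpEL expressions in queries.
		return Map.of("current", "acme");
	}
}
----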

The other variant can be used anywhere in the query and the result of evaluating the query will replace the expression in the query string.

4 changes: 2 additions & 2 deletions src/main/antora/modules/ROOT/pages/jdbc/schema-support.adoc
@@ -73,12 +73,12 @@ databaseChangeLog:
----

Column types are computed from an object implementing the `SqlTypeMapping` strategy interface.
Nullability is inferred from the type and set to `false` if a property type use primitive Java types.
Nullability is inferred from the type and set to `false` if a property type uses primitive Java types.

Schema support can assist you throughout the application development lifecycle.
In differential mode, you provide an existing Liquibase `Database` to the schema writer instance and the schema writer compares existing tables to mapped entities and derives from the difference which tables and columns to create/to drop.
By default, no tables and no columns are dropped unless you configure `dropTableFilter` and `dropColumnFilter`.
Both filter predicate provide the table name respective column name so your code can computer which tables and columns can be dropped.
Both filter predicates provide the table name or column name, respectively, so your code can compute which tables and columns can be dropped.

[source,java]
----
8 changes: 4 additions & 4 deletions src/main/antora/modules/ROOT/pages/jdbc/transactions.adoc
@@ -86,7 +86,7 @@ Thus, the method is with the `readOnly` flag set to `false`.
NOTE: It is highly recommended to make query methods transactional.
These methods might execute more than one query in order to populate an entity.
Without a common transaction Spring Data JDBC executes the queries in different connections.
This may put excessive strain on the connection pool and might even lead to dead locks when multiple methods request a fresh connection while holding on to one.
This may put excessive strain on the connection pool and might even lead to deadlocks when multiple methods request a fresh connection while holding on to one.
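For example, marking a derived query method transactional (and read-only) makes all statements it issues share a single connection; a minimal sketch, with the method name modelled on the earlier examples:

[source,java]
----
import java.util.List;

import org.springframework.data.repository.CrudRepository;
import org.springframework.transaction.annotation.Transactional;

interface PersonRepository extends CrudRepository<Person, Long> {

	@Transactional(readOnly = true)
	List<Person> findByLastname(String lastname);
}
----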

NOTE: It is definitely reasonable to mark read-only queries as such by setting the `readOnly` flag.
This does not, however, act as a check that you do not trigger a manipulating query (although some databases reject `INSERT` and `UPDATE` statements inside a read-only transaction).
@@ -112,7 +112,7 @@ interface UserRepository extends CrudRepository<User, Long> {
----

As you can see above, the method `findByLastname(String lastname)` will be executed with a pessimistic read lock.
If you are using a databse with the MySQL Dialect this will result for example in the following query:
If you are using a database with the MySQL dialect, this will result, for example, in the following query:

.Resulting Sql query for MySQL dialect
[source,sql]
@@ -121,7 +121,7 @@ Select * from user u where u.lastname = lastname LOCK IN SHARE MODE
----

NOTE: `@Lock` is currently not supported on string-based queries.
Query-methods created with `@Query`, will ignore the locking information provided by the `@Lock`,
Using `@Lock` on string-based queries will result in the warning in logs.
Query methods created with `@Query` will ignore the locking information provided by `@Lock`.
Using `@Lock` on string-based queries will result in a warning in the logs.
Future versions will throw an exception.

2 changes: 1 addition & 1 deletion src/main/antora/modules/ROOT/pages/r2dbc.adoc
@@ -8,7 +8,7 @@ We provide a "`template`" as a high-level abstraction for storing and querying a
This document is the reference guide for Spring Data R2DBC support.
It explains the concepts and semantics and syntax.

This chapter points out the specialties for repository support for JDBC.
This chapter points out the specialties for repository support for R2DBC.
This builds on the core repository support explained in xref:repositories/introduction.adoc[Working with Spring Data Repositories].
You should have a sound understanding of the basic concepts explained there.

6 changes: 3 additions & 3 deletions src/main/antora/modules/ROOT/partials/mapping.adoc
@@ -185,11 +185,11 @@ CREATE TABLE PERSON_WITH_COMPOSITE_ID (
----
<1> Entities may be represented as records without any special consideration
<1> Entities may be represented as records without any special consideration.
<2> `pk` is marked as id and embedded
<3> the two columns from the embedded `Name` entity make up the primary key in the database.
<3> The two columns from the embedded `Name` entity make up the primary key in the database.
Details of table creation depends on the used database.
Details of table creation depend on the used database.
====

[[entity-persistence.read-only-properties]]
2 changes: 1 addition & 1 deletion src/main/antora/modules/ROOT/partials/sequences.adoc
@@ -23,7 +23,7 @@ class MyEntity {
----

When persisting this entity, before the SQL `INSERT`, Spring Data will issue an additional `SELECT` statement to fetch the next value from the sequence.
For instance, for PostgreSQL the query, issued by Spring Data, would look like this:
For instance, for PostgreSQL the query issued by Spring Data would look like this:

.Select for next sequence value in PostgreSQL
[source,sql]