From 264a202230c0831fceee0af00f46e98de26226ed Mon Sep 17 00:00:00 2001 From: Jay Bryant Date: Tue, 30 Jan 2018 09:10:31 -0600 Subject: [PATCH 1/2] Made the "Both" option make sense I edited the leader paragraphs above each code block that was flagged as being XML or Java content, such that the paragraphs make sense when both the XML and the Java blocks are present. I turned the Both button back on. I also caught a bunch of other editing things. --- .../asciidoc/common-patterns.adoc | 73 ++- spring-batch-docs/asciidoc/domain.adoc | 30 +- spring-batch-docs/asciidoc/job.adoc | 562 +++++++++--------- .../asciidoc/jsfiles/DocumentToggle.js | 62 ++ spring-batch-docs/asciidoc/jsr-352.adoc | 540 ++++++++--------- .../asciidoc/readersAndWriters.adoc | 262 ++++++-- spring-batch-docs/asciidoc/repeat.adoc | 2 +- spring-batch-docs/asciidoc/retry.adoc | 372 +++++------- spring-batch-docs/asciidoc/scalability.adoc | 25 +- .../asciidoc/spring-batch-integration.adoc | 220 ++++--- spring-batch-docs/asciidoc/step.adoc | 356 ++++++++--- spring-batch-docs/asciidoc/testing.adoc | 236 +++----- spring-batch-docs/asciidoc/toggle.adoc | 2 +- 13 files changed, 1522 insertions(+), 1220 deletions(-) create mode 100644 spring-batch-docs/asciidoc/jsfiles/DocumentToggle.js diff --git a/spring-batch-docs/asciidoc/common-patterns.adoc b/spring-batch-docs/asciidoc/common-patterns.adoc index bea1b10032..0c1628176e 100644 --- a/spring-batch-docs/asciidoc/common-patterns.adoc +++ b/spring-batch-docs/asciidoc/common-patterns.adoc @@ -50,8 +50,10 @@ public class ItemFailureLoggerListener extends ItemListenerSupport { } ---- -Having implemented this listener, it must be registered with a step, as shown in the -following example: +Having implemented this listener, it must be registered with a step. 
+
+[role="xmlContent"]
+The following example shows how to register a listener with a step in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -66,6 +68,9 @@ following example:

 ----

+[role="javaContent"]
+The following example shows how to register a listener with a step in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -134,8 +139,10 @@ public class EarlyCompletionItemReader implements ItemReader {

 The previous example actually relies on the fact that there is a default implementation
 of the `CompletionPolicy` strategy that signals a complete batch when the item to be
 processed is `null`. A more sophisticated completion policy could be implemented and
-injected into the `Step` through the `SimpleStepFactoryBean`, as shown in the following
-example:
+injected into the `Step` through the `SimpleStepFactoryBean`.
+
+[role="xmlContent"]
+The following example shows how to inject a completion policy into a step in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
@@ -150,6 +157,9 @@ example:

 ----

+[role="javaContent"]
+The following example shows how to inject a completion policy into a step in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -196,12 +206,15 @@ so this is always an abnormal ending to a job.

 [[addingAFooterRecord]]
 === Adding a Footer Record

-Often, when writing to flat files, a "footer" record must be appended to the end of the
+Often, when writing to flat files, a "`footer`" record must be appended to the end of the
 file, after all processing has be completed. This can be achieved using the
 `FlatFileFooterCallback` interface provided by Spring Batch. The `FlatFileFooterCallback`
 (and its counterpart, the `FlatFileHeaderCallback`) are optional properties of the
-`FlatFileItemWriter` and can be added to an item writer as shown in the following
-example:
+`FlatFileItemWriter` and can be added to an item writer.
+ +[role="xmlContent"] +The following example shows how to use the `FlatFileHeaderCallback` and the +`FlatFileFooterCallback` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -214,6 +227,10 @@ example: ---- +[role="javaContent"] +The following example shows how to use the `FlatFileHeaderCallback` and the +`FlatFileFooterCallback` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -292,7 +309,10 @@ method, once we are guaranteed that no exceptions are thrown, that we update the In order for the `writeFooter` method to be called, the `TradeItemWriter` (which implements `FlatFileFooterCallback`) must be wired into the `FlatFileItemWriter` as the -`footerCallback`. The following example shows how to do so: +`footerCallback`. + +[role="xmlContent"] +The following example shows how to wire the `TradeItemWriter` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -308,6 +328,9 @@ implements `FlatFileFooterCallback`) must be wired into the `FlatFileItemWriter` ---- +[role="javaContent"] +The following example shows how to wire the `TradeItemWriter` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -409,7 +432,10 @@ multi-line record as a group, so that it can be passed to the `ItemWriter` intac Because a single record spans multiple lines and because we may not know how many lines there are, the `ItemReader` must be careful to always read an entire record. In order to do this, a custom `ItemReader` should be implemented as a wrapper for the -`FlatFileItemReader`, as shown in the following example: +`FlatFileItemReader`. 
+ +[role="xmlContent"] +The following example shows how to implement a custom `ItemReader` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -429,6 +455,9 @@ do this, a custom `ItemReader` should be implemented as a wrapper for the ---- +[role="javaContent"] +The following example shows how to implement a custom `ItemReader` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -459,7 +488,10 @@ delegate `FlatFileItemReader`. See link:readersAndWriters.html#flatFileItemReader[`FlatFileItemReader` in the Readers and Writers chapter] for more details. The delegate reader then uses a `PassThroughFieldSetMapper` to deliver a `FieldSet` for each line back to the wrapping -`ItemReader`, as shown in the following example: +`ItemReader`. + +[role="xmlContent"] +The following example shows how to ensure that each line is properly tokenized in XML: .XML Content [source, xml, role="xmlContent"] @@ -476,6 +508,9 @@ Writers chapter] for more details. The delegate reader then uses a ---- +[role="javaContent"] +The following example shows how to ensure that each line is properly tokenized in Java: + .Java Content [source, java, role="javaContent"] ---- @@ -545,7 +580,10 @@ common metadata about the run would be lost. Furthermore, a multi-step job would need to be split up into multiple jobs as well. Because the need is so common, Spring Batch provides a `Tasklet` implementation for -calling system commands, as shown in the following example: +calling system commands. 
+
+[role="xmlContent"]
+The following example shows how to call an external command in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -557,6 +595,9 @@ calling system commands:

 ----

+[role="javaContent"]
+The following example shows how to call an external command in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -639,13 +680,16 @@ public class SavingItemWriter implements ItemWriter {
 }
 ----

-To make the data available to future `Steps`, it must be "promoted" to the `Job`
+To make the data available to future `Steps`, it must be "`promoted`" to the `Job`
 `ExecutionContext` after the step has finished. Spring Batch provides the
 `ExecutionContextPromotionListener` for this purpose. The listener must be configured
 with the keys related to the data in the `ExecutionContext` that must be promoted. It can
 also, optionally, be configured with a list of exit code patterns for which the promotion
 should occur (`COMPLETED` is the default). As with all listeners, it must be registered
-on the `Step` as shown in the following example:
+on the `Step`.
+
+[role="xmlContent"]
+The following example shows how to promote data to the `Job` `ExecutionContext` in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
@@ -674,6 +718,9 @@ on the `Step` as shown in the following example:

 ----

+[role="javaContent"]
+The following example shows how to promote data to the `Job` `ExecutionContext` in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
diff --git a/spring-batch-docs/asciidoc/domain.adoc b/spring-batch-docs/asciidoc/domain.adoc
index 90d08c801f..5f02796812 100644
--- a/spring-batch-docs/asciidoc/domain.adoc
+++ b/spring-batch-docs/asciidoc/domain.adoc
@@ -63,10 +63,11 @@ global to all steps, such as restartability.
 The job configuration contains:

 ifdef::backend-html5[]
 [role="javaContent"]
-A default simple implementation of the Job interface is provided by Spring Batch in the
-form of the `SimpleJob` class, which creates some standard functionality on top of `Job`.
-When using java based configuration, a collection of builders is made available for the
-instantiation of a `Job`, as shown in the following example:
+For those who use Java configuration, Spring Batch provides a default implementation of
+the Job interface in the form of the `SimpleJob` class, which creates some standard
+functionality on top of `Job`. When using Java-based configuration, a collection of
+builders is made available for the instantiation of a `Job`, as shown in the following
+example:

 [source, java, role="javaContent"]
 ----
@@ -82,10 +83,11 @@ public Job footballJob() {
 ----

 [role="xmlContent"]
-A default simple implementation of the `Job` interface is provided by Spring Batch in the
-form of the `SimpleJob` class, which creates some standard functionality on top of `Job`.
-However, the batch namespace abstracts away the need to instantiate it directly. Instead,
-the `<job>` tag can be used as shown in the following example:
+For those who use XML configuration, Spring Batch provides a default implementation of the
+`Job` interface in the form of the `SimpleJob` class, which creates some standard
+functionality on top of `Job`. However, the batch namespace abstracts away the need to
+instantiate it directly. Instead, the `<job>` element can be used, as shown in the
+following example:

 [source, xml, role="xmlContent"]
 ----
@@ -98,9 +100,9 @@ the `<job>` tag can be used as shown in the following example:
 endif::backend-html5[]

 ifdef::backend-pdf[]
-A default simple implementation of the Job interface is provided by Spring Batch in the
-form of the `SimpleJob` class, which creates some standard functionality on top of `Job`.
-When using java based configuration, a collection of builders are made available for the
+Spring Batch provides a default implementation of the Job interface in the form of the
+`SimpleJob` class, which creates some standard functionality on top of `Job`. When using
+Java-based configuration, a collection of builders are made available for the
 instantiation of a `Job`, as shown in the following example:

 [source, java]
 ----
@@ -565,8 +567,8 @@ the course of execution, `StepExecution` and `JobExecution` implementations are
 by passing them to the repository.

 [role="xmlContent"]
-The batch namespace provides support for configuring a `JobRepository` instance with the
-`<job-repository>` tag, as shown in the following example:
+The Spring Batch XML namespace provides support for configuring a `JobRepository` instance
+with the `<job-repository>` tag, as shown in the following example:

 [source, xml, role="xmlContent"]
 ----
 ----

 [role="javaContent"]
-When using java configuration, `@EnableBatchProcessing` annotation provides a
+When using Java configuration, the `@EnableBatchProcessing` annotation provides a
 `JobRepository` as one of the components automatically configured out of the box.

 === JobLauncher
diff --git a/spring-batch-docs/asciidoc/job.adoc b/spring-batch-docs/asciidoc/job.adoc
index 79222c54ba..9de336f6c6 100644
--- a/spring-batch-docs/asciidoc/job.adoc
+++ b/spring-batch-docs/asciidoc/job.adoc
@@ -71,8 +71,8 @@ a list of `Step` instances.
 ----

 [role="xmlContent"]
-The examples here use a parent bean definition to create the steps;
-see the section on <>
+The examples here use a parent bean definition to create the steps.
+See the section on <>
 for more options declaring specific step details inline. The XML namespace defaults to
 referencing a repository with an id of 'jobRepository', which is a sensible default.
However, this can be overridden explicitly:

@@ -88,11 +88,9 @@ is a sensible default. However, this can be overridden explicitly:
 ----

 [role="xmlContent"]
-In addition to steps a job configuration can contain other elements
- that help with parallelisation (`<split>`),
- declarative flow control (`<decision>`) and
- externalization of flow definitions
- (`<flow>`).
+In addition to steps, a job configuration can contain other elements that help with
+parallelization (`<split>`), declarative flow control (`<decision>`) and externalization
+of flow definitions (`<flow>`).

 endif::backend-html5[]
 ifdef::backend-pdf[]
@@ -153,17 +151,17 @@ endif::backend-pdf[]

 ==== Restartability

-One key issue when executing a batch job concerns the behavior of
-a `Job` when it is restarted. The launching of a
-`Job` is considered to be a 'restart' if a
-`JobExecution` already exists for the particular
-`JobInstance`. Ideally, all jobs should be able to
-start up where they left off, but there are scenarios where this is not
-possible. __It is entirely up to the developer to ensure that a new `JobInstance` is created in this scenario__. However, Spring Batch does provide some help. If a
-`Job` should never be restarted, but should always
-be run as part of a new `JobInstance`, then the
-restartable property may be set to 'false':
+One key issue when executing a batch job concerns the behavior of a `Job` when it is
+restarted. The launching of a `Job` is considered to be a 'restart' if a `JobExecution`
+already exists for the particular `JobInstance`. Ideally, all jobs should be able to start
+up where they left off, but there are scenarios where this is not possible. _It is
+entirely up to the developer to ensure that a new `JobInstance` is created in this
+scenario._ However, Spring Batch does provide some help. If a `Job` should never be
+restarted, but should always be run as part of a new `JobInstance`, then the
+restartable property may be set to 'false'.
+[role="xmlContent"] +The following example shows how to set the `restartable` field to `false` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -173,6 +171,9 @@ restartable property may be set to 'false': ---- +[role="javaContent"] +The following example shows how to set the `restartable` field to `false` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -185,10 +186,10 @@ public Job footballJob() { } ---- -To phrase it another way, setting restartable to false means "this -`Job` does not support being started again". Restarting a `Job` that is not -restartable will cause a `JobRestartException` to -be thrown: +To phrase it another way, setting restartable to false means "`this +`Job` does not support being started again`". Restarting a `Job` that is not +restartable causes a `JobRestartException` to +be thrown. [source, java] ---- @@ -235,10 +236,10 @@ public interface JobExecutionListener { } ---- -`JobListeners` can be added to a -`SimpleJob` via the listeners element on the -job: +`JobListeners` can be added to a `SimpleJob` by setting listeners on the job. +[role="xmlContent"] +The following example shows how to add a listener element to an XML job definition: .XML Configuration [source, xml, role="xmlContent"] @@ -253,6 +254,9 @@ job: ---- +[role="javaContent"] +The following example shows how to add a listener method to a Java job definition: + .Java Configuration [source, java, role="javaContent"] ---- @@ -265,11 +269,9 @@ public Job footballJob() { } ---- -It should be noted that `afterJob` will be - called regardless of the success or failure of the - Job. If success or failure needs to be determined - it can be obtained from the `JobExecution`: - +It should be noted that the `afterJob` method is called regardless of the success or +failure of the `Job`. 
If success or failure needs to be determined, it can be obtained +from the `JobExecution`, as follows: [source, java] ---- @@ -352,7 +354,7 @@ A job declared in the XML namespace or using any subclass of ifdef::backend-html5[] [role="xmlContent"] The configuration of a validator is supported through the XML namespace through a child - element of the job, e.g: +element of the job, as shown in the following example: [source, xml, role="xmlContent"] ---- @@ -363,11 +365,12 @@ The configuration of a validator is supported through the XML namespace through ---- [role="xmlContent"] -The validator can be specified as a reference (as above) or as a - nested bean definition in the beans namespace. +The validator can be specified as a reference (as shown earlier) or as a nested bean +definition in the beans namespace. [role="javaContent"] -The configuration of a validator is supported through the java builders, e.g: +The configuration of a validator is supported through the java builders, as shown in the +following example: [source, java, role="javaContent"] ---- @@ -383,7 +386,7 @@ public Job job1() { endif::backend-html5[] ifdef::backend-pdf[] -The configuration of a validator is supported through the java builders, e.g: +The configuration of a validator is supported through the java builders, as follows: [source, java] ---- @@ -406,8 +409,8 @@ XML namespace support is also available for configuration of a `JobParametersVal ---- -The validator can be specified as a reference (as above) or as a - nested bean definition in the beans namespace. +The validator can be specified as a reference (as above) or as a nested bean definition in +the beans namespace. endif::backend-pdf[] @@ -416,73 +419,36 @@ endif::backend-pdf[] === Java Config -Spring 3 brought the ability to configure applications via java in addition to XML. - As of Spring Batch 2.2.0, batch jobs can be configured using the same - java config. 
There are two components for the java based configuration:
- the `@EnableBatchProcessing` annotation and two builders.
+Spring 3 brought the ability to configure applications via Java in addition to XML. As of
+Spring Batch 2.2.0, batch jobs can be configured using the same Java config.
+There are two components for the Java-based configuration: the `@EnableBatchProcessing`
+annotation and two builders.

-The `@EnableBatchProcessing` works similarly to the other
- @Enable* annotations in the Spring family. In this case,
- `@EnableBatchProcessing` provides a base configuration for
- building batch jobs. Within this base configuration, an instance of
- `StepScope` is created in addition to a number of beans made
- available to be autowired:
+The `@EnableBatchProcessing` annotation works similarly to the other @Enable* annotations
+in the Spring family. In this case, `@EnableBatchProcessing` provides a base configuration
+for building batch jobs. Within this base configuration, an instance of `StepScope` is
+created in addition to a number of beans made available to be autowired:

+* `JobRepository`: bean name "jobRepository"
+* `JobLauncher`: bean name "jobLauncher"
+* `JobRegistry`: bean name "jobRegistry"
+* `PlatformTransactionManager`: bean name "transactionManager"
+* `JobBuilderFactory`: bean name "jobBuilders"
+* `StepBuilderFactory`: bean name "stepBuilders"

-
-* `JobRepository` - bean name "jobRepository"
-
-
-* `JobLauncher` - bean name "jobLauncher"
-
-
-* `JobRegistry` - bean name "jobRegistry"
-
-
-* `PlatformTransactionManager` - bean name "transactionManager"
-
-
-* `JobBuilderFactory` - bean name "jobBuilders"
-
-
-* `StepBuilderFactory` - bean name "stepBuilders"
-
-The core interface for this configuration is the `BatchConfigurer`.
- The default implementation provides the beans mentioned above and requires a
- `DataSource` as a bean within the context to be provided. This data
- source will be used by the JobRepository.
You can customize any of these beans - by creating a custom implementation of the `BatchConfigurer` interface. - Typically, extending the `DefaultBatchConfigurer` (which is provided if a - `BatchConfigurer` is not found) and overriding the required getter is sufficient. - However, implementing your own from scratch may be required. The following - example shows how to provide a custom transaction manager: - -[source, java] ----- -@Bean -public BatchConfigurer batchConfigurer() { - return new DefaultBatchConfigurer() { - @Override - public PlatformTransactionManager getTransactionManager() { - return new MyTransactionManager(); - } - }; -} ----- - +The core interface for this configuration is the `BatchConfigurer`. The default +implementation provides the beans mentioned above and requires a `DataSource` as a bean +within the context to be provided. This data source is used by the JobRepository. [NOTE] ==== -Only one configuration class needs to have the - `@EnableBatchProcessing` annotation. Once you have a class - annotated with it, you will have all of the above available. - +Only one configuration class needs to have the `@EnableBatchProcessing` annotation. Once +you have a class annotated with it, you will have all of the above available. ==== - -With the base configuration in place, a user can use the provided builder factories - to configure a job. Below is an example of a two step job configured via the - `JobBuilderFactory` and the `StepBuilderFactory`. +With the base configuration in place, a user can use the provided builder factories to +configure a job. The following example shows a two step job configured with the +`JobBuilderFactory` and the `StepBuilderFactory`: [source, java] @@ -541,11 +507,9 @@ As described in earlier, the <> is used f `Job`, and `Step`. [role="xmlContent"] -The batch - namespace abstracts away many of the implementation details of the - `JobRepository` implementations and their - collaborators. 
However, there are still a few configuration options - available: +The batch namespace abstracts away many of the implementation details of the +`JobRepository` implementations and their collaborators. However, there are still a few +configuration options available, as shown in the following example: .XML Configuration [source, xml, role="xmlContent"] @@ -559,16 +523,16 @@ The batch ---- [role="xmlContent"] -None of the configuration options listed above are required except - the id. If they are not set, the defaults shown above will be used. They - are shown above for awareness purposes. The - `max-varchar-length` defaults to 2500, which is the - length of the long `VARCHAR` columns in the <> +None of the configuration options listed above are required except the `id`. If they are +not set, the defaults shown above will be used. They are shown above for awareness +purposes. The `max-varchar-length` defaults to 2500, which is the length of the long +`VARCHAR` columns in the <>. [role="javaContent"] When using java configuration, a `JobRepository` is provided for you. A JDBC based one is -provided out of the box if a `DataSource` is provided, the `Map` based one if not. However -you can customize the configuration of the `JobRepository` via an implementation of the +provided out of the box if a `DataSource` is provided, the `Map` based one if not. However, +you can customize the configuration of the `JobRepository` through an implementation of the `BatchConfigurer` interface. .Java Configuration @@ -602,21 +566,21 @@ None of the configuration options listed above are required except ==== Transaction Configuration for the JobRepository -If the namespace or the provided `FactoryBean` is used, transactional advice will be - automatically created around the repository. This is to ensure that the - batch meta data, including state that is necessary for restarts after a - failure, is persisted correctly. 
The behavior of the framework is not
- well defined if the repository methods are not transactional. The
- isolation level in the `create*` method attributes is
- specified separately to ensure that when jobs are launched, if two
- processes are trying to launch the same job at the same time, only one
- will succeed. The default isolation level for that method is
- SERIALIZABLE, which is quite aggressive: READ_COMMITTED would work just
- as well; READ_UNCOMMITTED would be fine if two processes are not likely
- to collide in this way. However, since a call to the
- `create*` method is quite short, it is unlikely
- that the SERIALIZED will cause problems, as long as the database
- platform supports it. However, this can be overridden:
+If the namespace or the provided `FactoryBean` is used, transactional advice is
+automatically created around the repository. This is to ensure that the batch meta-data,
+including state that is necessary for restarts after a failure, is persisted correctly.
+The behavior of the framework is not well defined if the repository methods are not
+transactional. The isolation level in the `create*` method attributes is specified
+separately to ensure that, when jobs are launched, if two processes try to launch
+the same job at the same time, only one succeeds. The default isolation level for that
+method is `SERIALIZABLE`, which is quite aggressive. `READ_COMMITTED` would work just as
+well. `READ_UNCOMMITTED` would be fine if two processes are not likely to collide in this
+way. However, since a call to the `create*` method is quite short, it is unlikely that
+`SERIALIZABLE` causes problems, as long as the database platform supports it. However, this
+can be overridden.
+
+[role="xmlContent"]
+The following example shows how to set the isolation level in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
@@ -625,6 +589,9 @@ If the namespace or the provided `FactoryBean` is used, transactional advice wil
 		isolation-level-for-create="REPEATABLE_READ" />
 ----

+[role="javaContent"]
+The following example shows how to set the isolation level in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -639,10 +606,12 @@ protected JobRepository createJobRepository() throws Exception {
 }
 ----

-If the namespace or factory beans aren't used then it is also
- essential to configure the transactional behavior of the repository
- using AOP:
+If the namespace or factory beans are not used, then it is also essential to configure the
+transactional behavior of the repository using AOP.
+
+[role="xmlContent"]
+The following example shows how to configure the transactional behavior of the repository
+in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
@@ -661,10 +630,13 @@ If the namespace or factory beans aren't used then it is also
 ----

 [role="xmlContent"]
-This fragment can be used as is, with almost no changes. Remember
- also to include the appropriate namespace declarations and to make sure
- spring-tx and spring-aop (or the whole of spring) are on the
- classpath.
+The preceding fragment can be used nearly as is. Remember also to
+include the appropriate namespace declarations and to make sure spring-tx and spring-aop
+(or the whole of Spring) are on the classpath.
+
+[role="javaContent"]
+The following example shows how to configure the transactional behavior of the repository
+in Java:

 .Java Configuration
 [source, java, role="javaContent"]
@@ -682,19 +654,17 @@ public TransactionProxyFactoryBean baseProxy() {
 ----

 [[repositoryTablePrefix]]
-
-
 ==== Changing the Table Prefix

-Another modifiable property of the
- `JobRepository` is the table prefix of the
- meta-data tables.
By default they are all prefaced with BATCH_.
- BATCH_JOB_EXECUTION and BATCH_STEP_EXECUTION are two examples. However,
- there are potential reasons to modify this prefix. If the schema names
- needs to be prepended to the table names, or if more than one set of
- meta data tables is needed within the same schema, then the table prefix
- will need to be changed:
+Another modifiable property of the `JobRepository` is the table prefix of the meta-data
+tables. By default they are all prefaced with `BATCH_`. `BATCH_JOB_EXECUTION` and
+`BATCH_STEP_EXECUTION` are two examples. However, there are potential reasons to modify this
+prefix. If the schema name needs to be prepended to the table names, or if more than one
+set of meta data tables is needed within the same schema, then the table prefix needs to
+be changed.
+
+[role="xmlContent"]
+The following example shows how to change the table prefix in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
@@ -703,6 +673,9 @@ Another modifiable property of the
 		table-prefix="SYSTEM.TEST_" />
 ----

+[role="javaContent"]
+The following example shows how to change the table prefix in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -717,31 +690,25 @@ protected JobRepository createJobRepository() throws Exception {
 }
 ----

-Given the above changes, every query to the meta data tables will
- be prefixed with "SYSTEM.TEST_". BATCH_JOB_EXECUTION will be referred to
- as SYSTEM.TEST_JOB_EXECUTION.
-
+Given the preceding changes, every query to the meta-data tables is prefixed with
+`SYSTEM.TEST_`. `BATCH_JOB_EXECUTION` is referred to as `SYSTEM.TEST_JOB_EXECUTION`.

 [NOTE]
 ====
-Only the table prefix is configurable. The table and column
- names are not.
-
+Only the table prefix is configurable. The table and column names are not.
 ====
-
 [[inMemoryRepository]]
-
-
 ==== In-Memory Repository

-There are scenarios in which you may not want to persist your
- domain objects to the database.
One reason may be speed; storing domain
- objects at each commit point takes extra time. Another reason may be
- that you just don't need to persist status for a particular job. For
- this reason, Spring batch provides an in-memory Map version of the job
- repository:
+There are scenarios in which you may not want to persist your domain objects to the
+database. One reason may be speed; storing domain objects at each commit point takes extra
+time. Another reason may be that you just don't need to persist status for a particular
+job. For this reason, Spring Batch provides an in-memory `Map` version of the job
+repository.
+
+[role="xmlContent"]
+The following example shows the inclusion of `MapJobRepositoryFactoryBean` in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
@@ -752,6 +719,9 @@ There are scenarios in which you may not want to persist your
 ----

+[role="javaContent"]
+The following example shows the inclusion of `MapJobRepositoryFactoryBean` in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -765,30 +735,30 @@ protected JobRepository createJobRepository() throws Exception {

 ----

-Note that the in-memory repository is volatile and so does not
- allow restart between JVM instances. It also cannot guarantee that two
- job instances with the same parameters are launched simultaneously, and
- is not suitable for use in a multi-threaded Job, or a locally
- partitioned `Step`. So use the database version of the repository wherever
- you need those features.
+Note that the in-memory repository is volatile and so does not allow restart between JVM
+instances. It also cannot guarantee that two job instances with the same parameters are
+launched simultaneously, and is not suitable for use in a multi-threaded Job, or a locally
+partitioned `Step`. So use the database version of the repository wherever you need those
+features.
-However it does require a transaction manager to be defined
- because there are rollback semantics within the repository, and because
- the business logic might still be transactional (e.g. RDBMS access). For
- testing purposes many people find the
- `ResourcelessTransactionManager` useful.
+However, it does require a transaction manager to be defined because there are rollback
+semantics within the repository, and because the business logic might still be
+transactional (such as RDBMS access). For testing purposes, many people find the
+`ResourcelessTransactionManager` useful.

-[[nonStandardDatabaseTypesInRepository]]
+[[nonStandardDatabaseTypesInRepository]]
 ==== Non-standard Database Types in a Repository

-If you are using a database platform that is not in the list of
- supported platforms, you may be able to use one of the supported types,
- if the SQL variant is close enough. To do this you can use the raw
- `JobRepositoryFactoryBean` instead of the namespace
- shortcut and use it to set the database type to the closest
- match:
+If you are using a database platform that is not in the list of supported platforms, you
+may be able to use one of the supported types, if the SQL variant is close enough. To do
+this, you can use the raw `JobRepositoryFactoryBean` instead of the namespace shortcut and
+use it to set the database type to the closest match.
+ +[role="xmlContent"] +The following example shows how to use `JobRepositoryFactoryBean` to set the database type +to the closest match in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -799,6 +769,10 @@ If you are using a database platform that is not in the list of ---- +[role="javaContent"] +The following example shows how to use `JobRepositoryFactoryBean` to set the database type +to the closest match in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -811,7 +785,6 @@ protected JobRepository createJobRepository() throws Exception { factory.setTransactionManager(transactionManager); return factory.getObject(); } - ---- (The `JobRepositoryFactoryBean` tries to @@ -836,11 +809,11 @@ If even that doesn't work, or you are not using an RDBMS, then the When using `@EnableBatchProcessing`, a `JobRegistry` is provided out of the box for you. This section addresses configuring your own. -The most basic implementation of the - `JobLauncher` interface is the - `SimpleJobLauncher`. Its only required dependency is - a `JobRepository`, in order to obtain an - execution: +The most basic implementation of the `JobLauncher` interface is the `SimpleJobLauncher`. +Its only required dependency is a `JobRepository`, in order to obtain an execution. + +[role="xmlContent"] +The following example shows a `SimpleJobLauncher` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -851,6 +824,9 @@ The most basic implementation of the ---- +[role="javaContent"] +The following example shows a `SimpleJobLauncher` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -866,29 +842,29 @@ protected JobLauncher createJobLauncher() throws Exception { ... 
---- -Once a <> is - obtained, it is passed to the execute method of - Job, ultimately returning the - `JobExecution` to the caller: +Once a <> is obtained, it is passed to the +execute method of `Job`, ultimately returning the `JobExecution` to the caller, as shown +in the following image: .Job Launcher Sequence image::{batch-asciidoc}images/job-launcher-sequence-sync.png[Job Launcher Sequence, scaledwidth="60%"] -The sequence is straightforward and works well when launched from a - scheduler. However, issues arise when trying to launch from an HTTP - request. In this scenario, the launching needs to be done asynchronously - so that the `SimpleJobLauncher` returns immediately - to its caller. This is because it is not good practice to keep an HTTP - request open for the amount of time needed by long running processes such - as batch. An example sequence is below: +The sequence is straightforward and works well when launched from a scheduler. However, +issues arise when trying to launch from an HTTP request. In this scenario, the launching +needs to be done asynchronously so that the `SimpleJobLauncher` returns immediately to its +caller. This is because it is not good practice to keep an HTTP request open for the +amount of time needed by long running processes such as batch. The following image shows +an example sequence: .Asynchronous Job Launcher Sequence image::{batch-asciidoc}images/job-launcher-sequence-async.png[Async Job Launcher Sequence, scaledwidth="60%"] -The `SimpleJobLauncher` can easily be - configured to allow for this scenario by configuring a - `TaskExecutor`: +The `SimpleJobLauncher` can be configured to allow for this scenario by configuring a +`TaskExecutor`. 
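The asynchronous hand-off described above can be sketched without any Spring Batch classes. The following toy example (the `AsyncLaunchSketch` class and its `launch` method are hypothetical names, not framework API) shows why submitting work to an executor lets the caller return at once, which is the same idea as wiring an asynchronous `TaskExecutor` into the `SimpleJobLauncher`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Conceptual sketch only (not Spring Batch code): submitting the work returns a
// Future right away; the "job" runs on another thread, so an HTTP request
// thread is not held open for the duration of the batch run.
public class AsyncLaunchSketch {

    public static Future<String> launch(ExecutorService executor, String jobName) {
        return executor.submit(() -> {
            Thread.sleep(100); // stand-in for a long-running batch job
            return jobName + ": COMPLETED";
        });
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> result = launch(executor, "endOfDay"); // returns immediately
        System.out.println(result.get()); // block only when we choose to
        executor.shutdown();
    }
}
```

The real launcher does the same thing conceptually: the `TaskExecutor` decouples the caller's thread from the thread that runs the job, and the caller gets back a `JobExecution` whose status it can poll later.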
+
+[role="xmlContent"]
+The following XML example shows a `SimpleJobLauncher` configured to return immediately:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -902,6 +878,9 @@ The `SimpleJobLauncher` can easily be

----

+[role="javaContent"]
+The following Java example shows a `SimpleJobLauncher` configured to return immediately:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -994,16 +973,21 @@ All of these tasks are accomplished using only the arguments

|===============

-These arguments must be passed in with the path first and the
- name second. All arguments after these are considered to be
- `JobParameters` and must be in the format of 'name=value':
+These arguments must be passed in with the path first and the name second. All arguments
+after these are considered to be job parameters, are turned into a `JobParameters` object,
+and must be in the format of 'name=value'.

+[role="xmlContent"]
+The following example shows a date passed as a job parameter to a job defined in XML:

[source, role="xmlContent"]
----
>. The first argument is
- 'endOfDayJob.xml', which is the Spring
- `ApplicationContext` containing the
- Job. The second argument, 'endOfDay' represents
- the job name. The final argument, 'schedule.date(date)=2007/05/05'
- will be converted into `JobParameters`. An
- example of the XML configuration is below:
+In most cases, you would want to use a manifest to declare your main class in a jar, but,
+for simplicity, the class was used directly. This example is using the same 'EndOfDay'
+example from the <>. The first
+argument is 'endOfDayJob.xml', which is the Spring `ApplicationContext` containing the
+`Job`. The second argument, 'endOfDay', represents the job name. The final argument,
+'schedule.date(date)=2007/05/05', is converted into a `JobParameters` object.
+[role="xmlContent"]
+The following example shows a sample configuration for `endOfDay` in XML:

[source, xml, role="xmlContent"]
----
@@ -1034,15 +1017,16 @@ In most cases you would want to use a manifest to declare your

----

[role="javaContent"]
-In most cases you would want to use a manifest to declare your
- main class in a jar, but for simplicity, the class was used directly.
- This example is using the same 'EndOfDay' example from the <>. The first argument is
- 'io.spring.EndOfDayJobConfiguration', which is the fully qualified class name to
- the configuration class containing the
- Job. The second argument, 'endOfDay' represents
- the job name. The final argument, 'schedule.date(date)=2007/05/05'
- will be converted into JobParameters. An
- example of the java configuration is below:
+In most cases, you would want to use a manifest to declare your main class in a jar, but,
+for simplicity, the class was used directly. This example is using the same 'EndOfDay'
+example from the <>. The first
+argument is 'io.spring.EndOfDayJobConfiguration', which is the fully qualified class name
+of the configuration class containing the `Job`. The second argument, 'endOfDay',
+represents the job name. The final argument, 'schedule.date(date)=2007/05/05', is
+converted into a `JobParameters` object.
+
+[role="javaContent"]
+The following example shows a sample configuration for `endOfDay` in Java:

[source, java, role="javaContent"]
----
@@ -1074,14 +1058,18 @@ public class EndOfDayJobConfiguration {

endif::backend-html5[]

ifdef::backend-pdf[]
-In most cases you would want to use a manifest to declare your
- main class in a jar, but for simplicity, the class was used directly.
- This example is using the same 'EndOfDay' example from the <>. The first argument is
- where your job is configured (either an XML file or a fully qualified class name).
- The second argument, 'endOfDay' represents
- the job name. 
The final argument, 'schedule.date(date)=2007/05/05'
- will be converted into `JobParameters`. An
- example of the configuration is below:
+In most cases, you would want to use a manifest to declare your main class in a jar, but,
+for simplicity, the class was used directly. This example is using the same 'EndOfDay'
+example from the <>. The first
+argument is where your job is configured (either an XML file or a fully qualified class
+name). The second argument, 'endOfDay', represents the job name. The final argument,
+'schedule.date(date)=2007/05/05', is converted into a `JobParameters` object.
+
+// TODO Given that this block is for PDF output, should it have the xmlContent and
+// javaContent markers?
+
+[role="xmlContent"]
+The following example shows a sample configuration for `endOfDay` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1095,6 +1083,9 @@ In most cases you would want to use a manifest to declare your
         class="org.springframework.batch.core.launch.support.SimpleJobLauncher" />
----

+[role="javaContent"]
+The following example shows a sample configuration for `endOfDay` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1126,16 +1117,13 @@ public class EndOfDayJobConfiguration {

endif::backend-pdf[]

-This example is overly simplistic, since there are many more
- requirements to a run a batch job in Spring Batch in general, but it
- serves to show the two main requirements of the
- `CommandLineJobRunner`:
- `Job` and
- `JobLauncher`
+The preceding example is overly simplistic, since there are many more requirements to
+running a batch job in Spring Batch in general, but it serves to show the two main
+requirements of the `CommandLineJobRunner`: `Job` and `JobLauncher`.
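The argument contract described above (path first, job name second, then 'name=value' pairs) can be sketched without the framework. The following toy parser (the `JobArgsSketch` class and `parse` method are hypothetical names, not the `CommandLineJobRunner` implementation) illustrates the contract:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only (not the framework's parsing code): the job path
// comes first, the job name second, and every remaining argument must be a
// name=value job parameter.
public class JobArgsSketch {

    public static Map<String, String> parse(String[] args) {
        if (args.length < 2) {
            throw new IllegalArgumentException("expected: <jobPath> <jobName> [name=value ...]");
        }
        Map<String, String> parsed = new LinkedHashMap<>();
        parsed.put("jobPath", args[0]); // e.g. an XML file or a @Configuration class name
        parsed.put("jobName", args[1]); // e.g. endOfDay
        for (int i = 2; i < args.length; i++) {
            int eq = args[i].indexOf('=');
            if (eq < 0) {
                throw new IllegalArgumentException("job parameters must be name=value: " + args[i]);
            }
            parsed.put(args[i].substring(0, eq), args[i].substring(eq + 1));
        }
        return parsed;
    }

    public static void main(String[] args) {
        System.out.println(parse(new String[] {
                "endOfDayJob.xml", "endOfDay", "schedule.date(date)=2007/05/05" }));
    }
}
```

Running the sketch with the 'EndOfDay' arguments from the text yields the path, the job name, and one job parameter keyed by 'schedule.date(date)'.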
-[[exitCodes]] +[[exitCodes]] ===== ExitCodes When launching a batch job from the command-line, an enterprise @@ -1300,11 +1288,12 @@ public interface JobExplorer { } ---- -As is evident from the method signatures above, - `JobExplorer` is a read-only version of the - `JobRepository`, and like the - `JobRepository`, it can be easily configured via a - factory bean: +As is evident from the method signatures above, `JobExplorer` is a read-only version of +the `JobRepository`, and, like the `JobRepository`, it can be easily configured by using a +factory bean: + +[role="xmlContent"] +The following example shows how to configure a `JobExplorer` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -1313,6 +1302,9 @@ As is evident from the method signatures above, p:dataSource-ref="dataSource" /> ---- +[role="javaContent"] +The following example shows how to configure a `JobExplorer` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -1327,11 +1319,12 @@ public JobExplorer getJobExplorer() throws Exception { ... ---- -<>, it was mentioned that the table prefix of the - `JobRepository` can be modified to allow for - different versions or schemas. Because the - `JobExplorer` is working with the same tables, it - too needs the ability to set a prefix: +<>, we noted that the table prefix +of the `JobRepository` can be modified to allow for different versions or schemas. Because +the `JobExplorer` works with the same tables, it too needs the ability to set a prefix. 
+ +[role="xmlContent"] +The following example shows how to set the table prefix for a `JobExplorer` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -1340,6 +1333,9 @@ public JobExplorer getJobExplorer() throws Exception { p:tablePrefix="SYSTEM."/> ---- +[role="javaContent"] +The following example shows how to set the table prefix for a `JobExplorer` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -1358,21 +1354,25 @@ public JobExplorer getJobExplorer() throws Exception { ==== JobRegistry -A `JobRegistry` (and its parent interface `JobLocator`) is not - mandatory, but it can be useful if you want to keep track of which jobs - are available in the context. It is also useful for collecting jobs - centrally in an application context when they have been created - elsewhere (e.g. in child contexts). Custom `JobRegistry` implementations - can also be used to manipulate the names and other properties of the - jobs that are registered. There is only one implementation provided by - the framework and this is based on a simple map from job name to job - instance. +A `JobRegistry` (and its parent interface `JobLocator`) is not mandatory, but it can be +useful if you want to keep track of which jobs are available in the context. It is also +useful for collecting jobs centrally in an application context when they have been created +elsewhere (for example, in child contexts). Custom `JobRegistry` implementations can also +be used to manipulate the names and other properties of the jobs that are registered. +There is only one implementation provided by the framework and this is based on a simple +map from job name to job instance. 
+ +[role="xmlContent"] +The following example shows how to include a `JobRegistry` for a job defined in XML: [source, xml, role="xmlContent"] ---- ---- +[role="javaContent"] +The following example shows how to include a `JobRegistry` for a job defined in Java: + [role="javaContent"] When using `@EnableBatchProcessing`, a `JobRegistry` is provided out of the box for you. If you want to configure your own: @@ -1396,9 +1396,11 @@ There are two ways to populate a `JobRegistry` automatically: using ===== JobRegistryBeanPostProcessor -This is a bean post-processor that can register all jobs as they - are created: +This is a bean post-processor that can register all jobs as they are created. +[role="xmlContent"] +The following example shows how to include the `JobRegistryBeanPostProcessor` for a job +defined in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -1408,6 +1410,10 @@ This is a bean post-processor that can register all jobs as they ---- +[role="javaContent"] +The following example shows how to include the `JobRegistryBeanPostProcessor` for a job +defined in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -1426,22 +1432,21 @@ Although it is not strictly necessary, the post-processor in the -===== AutomaticJobRegistrar +===== `AutomaticJobRegistrar` -This is a lifecycle component that creates child contexts and - registers jobs from those contexts as they are created. One advantage - of doing this is that, while the job names in the child contexts still - have to be globally unique in the registry, their dependencies can - have "natural" names. So for example, you can create a set of XML - configuration files each having only one Job, - but all having different definitions of an - `ItemReader` with the same bean name, e.g. - "reader". If all those files were imported into the same context, the - reader definitions would clash and override one another, but with the - automatic registrar this is avoided. 
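The "simple map from job name to job instance" idea behind the framework's registry can be shown in a few lines. The following sketch (the `MapRegistrySketch` class and its nested `Job` interface are hypothetical stand-ins, not `org.springframework.batch.core.Job` or the framework's `MapJobRegistry`) illustrates the concept, including the requirement that names be unique:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conceptual sketch only: the map-based registry described above boils down to
// this. The nested Job interface is a stand-in for illustration.
public class MapRegistrySketch {

    public interface Job {
        String getName();
    }

    private final Map<String, Job> jobs = new ConcurrentHashMap<>();

    // Job names must be globally unique within the registry, so a second
    // registration under the same name is rejected.
    public void register(Job job) {
        if (jobs.putIfAbsent(job.getName(), job) != null) {
            throw new IllegalStateException("duplicate job name: " + job.getName());
        }
    }

    public Job getJob(String name) {
        Job job = jobs.get(name);
        if (job == null) {
            throw new IllegalStateException("no such job: " + name);
        }
        return job;
    }
}
```

A custom implementation along these lines could also rename or decorate jobs at registration time, which is the use case the text mentions for custom `JobRegistry` implementations.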
This makes it easier to
- integrate jobs contributed from separate modules of an
- application.
+This is a lifecycle component that creates child contexts and registers jobs from those
+contexts as they are created. One advantage of doing this is that, while the job names in
+the child contexts still have to be globally unique in the registry, their dependencies
+can have "natural" names. So, for example, you can create a set of XML configuration
+files, each having only one `Job` but all having different definitions of an `ItemReader`
+with the same bean name, such as "reader". If all those files were imported into the same
+context, the reader definitions would clash and override one another, but with the
+automatic registrar this is avoided. This makes it easier to integrate jobs contributed
+from separate modules of an application.

+[role="xmlContent"]
+The following example shows how to include the `AutomaticJobRegistrar` for a job defined
+in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1460,6 +1465,10 @@ This is a lifecycle component that creates child contexts and

----

+[role="javaContent"]
+The following example shows how to include the `AutomaticJobRegistrar` for a job defined
+in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1552,14 +1561,12 @@ public interface JobOperator {
}
----

-The above operations represent methods from many different
- interfaces, such as `JobLauncher`,
- `JobRepository`,
- `JobExplorer`, and
- `JobRegistry`. For this reason, the provided
- implementation of `JobOperator`,
- `SimpleJobOperator`, has many dependencies:
+The preceding operations represent methods from many different interfaces, such as
+`JobLauncher`, `JobRepository`, `JobExplorer`, and `JobRegistry`. For this reason, the
+provided implementation of `JobOperator`, `SimpleJobOperator`, has many dependencies.
+[role="xmlContent"] +The following example shows a typical bean definition for `SimpleJobOperator` in XML: [source, xml, role="xmlContent"] ---- @@ -1575,6 +1582,9 @@ The above operations represent methods from many different ---- +[role="javaContent"] +The following example shows a typical bean definition for `SimpleJobOperator` in Java: + [source, java, role="javaContent"] ---- /** @@ -1674,10 +1684,8 @@ In this example, the value with a key of 'run.id' is used to ifdef::backend-html5[] [role="xmlContent"] -An incrementer can - be associated with `Job` via the 'incrementer' - attribute in the namespace: - +For jobs defined in XML, an incrementer can be associated with `Job` through the +'incrementer' attribute in the namespace, as follows: [source, xml, role="xmlContent"] ---- @@ -1687,8 +1695,8 @@ An incrementer can ---- [role="javaContent"] -An incrementer can be associated with a 'Job' via the `incrementer` method provided in the -builders: +For jobs defined in Java, an incrementer can be associated with a 'Job' through the +`incrementer` method provided in the builders, as follows: [source, java, role="javaContent"] ---- diff --git a/spring-batch-docs/asciidoc/jsfiles/DocumentToggle.js b/spring-batch-docs/asciidoc/jsfiles/DocumentToggle.js new file mode 100644 index 0000000000..9402920d87 --- /dev/null +++ b/spring-batch-docs/asciidoc/jsfiles/DocumentToggle.js @@ -0,0 +1,62 @@ +$(document).ready(function(){ + + // Make Java the default + setJava(); + + // Initial cookie handler. This part remembers the reader's choice and sets the toggle + // accordingly. 
+ var docToggleCookieString = Cookies.get("docToggle"); + if (docToggleCookieString != null) { + if (docToggleCookieString === "xml") { + $("#xmlButton").prop("checked", true); + setXml(); + } else if (docToggleCookieString === "java") { + $("#javaButton").prop("checked", true); + setJava(); + } else if (docToggleCookieString === "both") { + $("#bothButton").prop("checked", true); + setBoth(); + } + } + + // Click handlers + $("#xmlButton").on("click", function() { + setXml(); + }); + $("#javaButton").on("click", function() { + setJava(); + }); + $("#bothButton").on("click", function() { + setBoth(); + }); + + // Functions to do the work of handling the reader's choice, whether through a click + // or through a cookie. 3652 days is 10 years, give or take a leap day. + function setXml() { + $("*.xmlContent").show(); + $("*.javaContent").hide(); + $("*.javaContent > *").addClass("js-toc-ignore"); + $("*.xmlContent > *").removeClass("js-toc-ignore"); + window.dispatchEvent(new Event("tocRefresh")); + Cookies.set('docToggle', 'xml', { expires: 3652 }); + }; + + function setJava() { + $("*.javaContent").show(); + $("*.xmlContent").hide(); + $("*.xmlContent > *").addClass("js-toc-ignore"); + $("*.javaContent > *").removeClass("js-toc-ignore"); + window.dispatchEvent(new Event("tocRefresh")); + Cookies.set('docToggle', 'java', { expires: 3652 }); + }; + + function setBoth() { + $("*.javaContent").show(); + $("*.xmlContent").show(); + $("*.javaContent > *").removeClass("js-toc-ignore"); + $("*.xmlContent > *").removeClass("js-toc-ignore"); + window.dispatchEvent(new Event("tocRefresh")); + Cookies.set('docToggle', 'both', { expires: 3652 }); + }; + +}); diff --git a/spring-batch-docs/asciidoc/jsr-352.adoc b/spring-batch-docs/asciidoc/jsr-352.adoc index 92969ab4f5..df2fd1459f 100644 --- a/spring-batch-docs/asciidoc/jsr-352.adoc +++ b/spring-batch-docs/asciidoc/jsr-352.adoc @@ -10,66 +10,60 @@ ifndef::onlyonetoggle[] include::toggle.adoc[] endif::onlyonetoggle[] -As of 
Spring Batch 3.0 support for JSR-352 has been fully implemented. This section is not a replacement for
- the spec itself and instead, intends to explain how the JSR-352 specific concepts apply to Spring Batch.
- Additional information on JSR-352 can be found via the
- JCP here: link:$$https://jcp.org/en/jsr/detail?id=352$$[https://jcp.org/en/jsr/detail?id=352]
+As of Spring Batch 3.0, support for JSR-352 has been fully implemented. This section is
+not a replacement for the spec itself and, instead, intends to explain how the
+JSR-352-specific concepts apply to Spring Batch. Additional information on JSR-352 can be
+found through the JCP here:
+link:$$https://jcp.org/en/jsr/detail?id=352$$[https://jcp.org/en/jsr/detail?id=352]

[[jsrGeneralNotes]]
=== General Notes about Spring Batch and JSR-352

-Spring Batch and JSR-352 are structurally the same. They both have jobs that are made up of steps. They
- both have readers, processors, writers, and listeners. However, their interactions are subtly different.
- For example, the `org.springframework.batch.core.SkipListener#onSkipInWrite(S item, Throwable t)`
- within Spring Batch receives two parameters: the item that was skipped and the Exception that caused the
- skip. The JSR-352 version of the same method
- (`javax.batch.api.chunk.listener.SkipWriteListener#onSkipWriteItem(List<Object> items, Exception ex)`)
- also receives two parameters. However the first one is a `List` of all the items
- within the current chunk with the second being the `Exception` that caused the skip.
- Because of these differences, it is important to note that there are two paths to execute a job within
- Spring Batch: either a traditional Spring Batch job or a JSR-352 based job. While the use of Spring Batch
- artifacts (readers, writers, etc) will work within a job configured via JSR-352's JSL and executed via the
- `JsrJobOperator`, they will behave according to the rules of JSR-352. 
It is also - important to note that batch artifacts that have been developed against the JSR-352 interfaces will not work - within a traditional Spring Batch job. +Spring Batch and JSR-352 are structurally the same. They both have jobs that are made up +of steps. They both have readers, processors, writers, and listeners. However, their +interactions are subtly different. For example, the +`org.springframework.batch.core.SkipListener#onSkipInWrite(S item, Throwable t)` within +Spring Batch receives two parameters: the item that was skipped and the Exception that +caused the skip. The JSR-352 version of the same method +(`javax.batch.api.chunk.listener.SkipWriteListener#onSkipWriteItem(List<Object> items, Exception ex)`) +also receives two parameters. However the first one is a `List` of all the items within +the current chunk with the second being the `Exception` that caused the skip. Because of +these differences, it is important to note that there are two paths to execute a job +within Spring Batch: either a traditional Spring Batch job or a JSR-352 based job. While +the use of Spring Batch artifacts (readers, writers, etc) will work within a job +configured with JSR-352's JSL and executed with the `JsrJobOperator`, they behave +according to the rules of JSR-352. It is also important to note that batch artifacts that +have been developed against the JSR-352 interfaces will not work within a traditional +Spring Batch job. [[jsrSetup]] - - === Setup [[jsrSetupContexts]] - - ==== Application Contexts -All JSR-352 based jobs within Spring Batch consist of two application contexts. A parent context, that - contains beans related to the infrastructure of Spring Batch such as the `JobRepository`, - `PlatformTransactionManager`, etc and a child context that consists of the configuration - of the job to be run. The parent context is defined via the `jsrBaseContext.xml` provided - by the framework. 
This context may be overridden via the `JSR-352-BASE-CONTEXT` system
- property.
-
+All JSR-352 based jobs within Spring Batch consist of two application contexts: a parent
+context, which contains beans related to the infrastructure of Spring Batch (such as the
+`JobRepository` and `PlatformTransactionManager`), and a child context, which consists of
+the configuration of the job to be run. The parent context is defined via the
+`baseContext.xml` provided by the framework. This context may be overridden by setting
+the `JSR-352-BASE-CONTEXT` system property.

[NOTE]
====
-The base context is not processed by the JSR-352 processors for things like property injection so
- no components requiring that additional processing should be configured there.
-
+The base context is not processed by the JSR-352 processors for things like property
+injection, so no components requiring that additional processing should be configured
+there.
====

[[jsrSetupLaunching]]
-
-
==== Launching a JSR-352 based job

-JSR-352 requires a very simple path to executing a batch job. The following code is all that is needed to
- execute your first batch job:
-
-
+JSR-352 requires a very simple path to executing a batch job. The following code is all
+that is needed to execute your first batch job:

[source, java]
----
@@ -77,114 +71,104 @@ JobOperator operator = BatchRuntime.getJobOperator();
jobOperator.start("myJob", new Properties());
----

-While that is convenient for developers, the devil is in the details. Spring Batch bootstraps a bit of
- infrastructure behind the scenes that a developer may want to override. The following is bootstrapped the
- first time `BatchRuntime.getJobOperator()` is called:
+While that is convenient for developers, the devil is in the details. Spring Batch
+bootstraps a bit of infrastructure behind the scenes that a developer may want to
+override. 
The following is bootstrapped the first time `BatchRuntime.getJobOperator()` +is called: |=============== |__Bean Name__|__Default Configuration__|__Notes__ | - dataSource - | - Apache DBCP BasicDataSource with configured values. - | - By default, HSQLDB is bootstrapped. +dataSource +| +Apache DBCP BasicDataSource with configured values. +| +By default, HSQLDB is bootstrapped. |`transactionManager`|`org.springframework.jdbc.datasource.DataSourceTransactionManager`| - References the dataSource bean defined above. +References the dataSource bean defined above. | - A Datasource initializer - || - This is configured to execute the scripts configured via the - `batch.drop.script` and `batch.schema.script` properties. By - default, the schema scripts for HSQLDB are executed. This behavior can be disabled via - `batch.data.source.init` property. +A Datasource initializer +|| +This is configured to execute the scripts configured via the `batch.drop.script` and +`batch.schema.script` properties. By default, the schema scripts for HSQLDB are executed. +This behavior can be disabled by setting the `batch.data.source.init` property. | - jobRepository - | - A JDBC based `SimpleJobRepository`. - | - This `JobRepository` uses the previously mentioned data source and transaction - manager. The schema's table prefix is configurable (defaults to BATCH_) via the - `batch.table.prefix` property. - +jobRepository | - jobLauncher - |`org.springframework.batch.core.launch.support.SimpleJobLauncher`| - Used to launch jobs. - +A JDBC based `SimpleJobRepository`. | - batchJobOperator - |`org.springframework.batch.core.launch.support.SimpleJobOperator`| - The `JsrJobOperator` wraps this to provide most of it's functionality. +This `JobRepository` uses the previously mentioned data source and transaction +manager. The schema's table prefix is configurable (defaults to BATCH_) via the +`batch.table.prefix` property. 
|
+jobLauncher
+|`org.springframework.batch.core.launch.support.SimpleJobLauncher`|
+Used to launch jobs.
|
+batchJobOperator
+|`org.springframework.batch.core.launch.support.SimpleJobOperator`|
+The `JsrJobOperator` wraps this to provide most of its functionality.
|
+jobExplorer
+|`org.springframework.batch.core.explore.support.JobExplorerFactoryBean`|
+Used to address lookup functionality provided by the `JsrJobOperator`.
|
+jobParametersConverter
+|`org.springframework.batch.core.jsr.JsrJobParametersConverter`|
+JSR-352-specific implementation of the `JobParametersConverter`.
+|
+jobRegistry
+|`org.springframework.batch.core.configuration.support.MapJobRegistry`|
+Used by the `SimpleJobOperator`.
+|
+placeholderProperties
+|`org.springframework.beans.factory.config.PropertyPlaceholderConfigurer`|
+Loads the properties file `batch-${ENVIRONMENT:hsql}.properties` to configure the
+properties mentioned above. ENVIRONMENT is a System property (defaults to `hsql`) that
+can be used to specify any of the supported databases Spring Batch currently supports.

|===============

-
-
-
-
-
[NOTE]
====
-None of the above beans are optional for executing JSR-352 based jobs. 
All may be overriden to
- provide customized functionality as needed.
+None of the above beans are optional for executing JSR-352 based jobs. All may be
+overridden to provide customized functionality as needed.
====

[[dependencyInjection]]
-
-
=== Dependency Injection

-JSR-352 is based heavily on the Spring Batch programming model. As such, while not explicitly requiring a
- formal dependency injection implementation, DI of some kind implied. Spring Batch supports all three
- methods for loading batch artifacts defined by JSR-352:
-
-
-* Implementation Specific Loader - Spring Batch is built upon Spring and so supports Spring
- dependency injection within JSR-352 batch jobs.
-
-
-* Archive Loader - JSR-352 defines the existing of a batch.xml file that provides mappings between a
- logical name and a class name. This file must be found within the /META-INF/ directory if it is
- used.
-
-
-* Thread Context Class Loader - JSR-352 allows configurations to specify batch artifact
- implementations in their JSL by providing the fully qualified class name inline. Spring Batch
- supports this as well in JSR-352 configured jobs.
-
-To use Spring dependency injection within a JSR-352 based batch job consists of configuring batch
- artifacts using a Spring application context as beans. Once the beans have been defined, a job can refer to
- them as it would any bean defined within the batch.xml.
-
+JSR-352 is based heavily on the Spring Batch programming model. As such, while not
+explicitly requiring a formal dependency injection implementation, DI of some kind is
+implied. Spring Batch supports all three methods for loading batch artifacts defined by
+JSR-352:
+
+* Implementation Specific Loader: Spring Batch is built upon Spring and so supports
+Spring dependency injection within JSR-352 batch jobs.
+* Archive Loader: JSR-352 defines the existence of a batch.xml file that provides mappings
+between a logical name and a class name. 
This file must be found within the /META-INF/
+directory if it is used.
+* Thread Context Class Loader: JSR-352 allows configurations to specify batch artifact
+implementations in their JSL by providing the fully qualified class name inline. Spring
+Batch supports this as well in JSR-352 configured jobs.
+
+Using Spring dependency injection within a JSR-352 based batch job consists of
+configuring batch artifacts using a Spring application context as beans. Once the beans
+have been defined, a job can refer to them as it would any bean defined within the
+batch.xml file.
+
+[role="xmlContent"]
+The following example shows how to use Spring dependency injection within a JSR-352 based
+batch job in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -211,6 +195,10 @@ To use Spring dependency injection within a JSR-352 based batch job consists of

----

+[role="javaContent"]
+The following example shows how to use Spring dependency injection within a JSR-352 based
+batch job in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -234,14 +222,14 @@ public class BatchConfiguration {

----

-The assembly of Spring contexts (imports, etc) works with JSR-352 jobs just as it would with any other
- Spring based application. The only difference with a JSR-352 based job is that the entry point for the
- context definition will be the job definition found in /META-INF/batch-jobs/.
-
-To use the thread context class loader approach, all you need to do is provide the fully qualified class
- name as the ref. It is important to note that when using this approach or the batch.xml approach, the class
- referenced requires a no argument constructor which will be used to create the bean.
+The assembly of Spring contexts (imports, etc.) works with JSR-352 jobs just as it would
+with any other Spring based application. 
The only difference with a JSR-352 based job is
+that the entry point for the context definition is the job definition found in
+/META-INF/batch-jobs/.
+
+To use the thread context class loader approach, all you need to do is provide the fully
+qualified class name as the ref. It is important to note that, when using this approach or
+the batch.xml approach, the class referenced requires a no-argument constructor, which is
+used to create the bean.

 [source, xml]
 ----
@@ -255,8 +243,6 @@ To use the thread context class loader approach, all you need to do is provide t
 ----

 [[jsrJobProperties]]
-
-
 === Batch Properties

 [[jsrPropertySupport]]
-
-
 ==== Property Support

-JSR-352 allows for properties to be defined at the Job, Step and batch artifact level by way of
-    configuration in the JSL. Batch properties are configured at each level in the following way:
-
+JSR-352 allows for properties to be defined at the Job, Step, and batch artifact levels
+by way of configuration in the JSL. Batch properties are configured at each level in the
+following way:

 [source, xml]
 ----
@@ -276,22 +262,18 @@ JSR-352 allows for properties to be defined at the Job, Step and batch artifact
 ----

-
 `Properties` may be configured on any batch artifact.

 [[jsrBatchPropertyAnnotation]]
-
-
 ==== @BatchProperty annotation

 `Properties` are referenced in batch artifacts by annotating class fields with the
-    `@BatchProperty` and `@Inject` annotations (both annotations
-    are required by the spec).
+`@BatchProperty` and `@Inject` annotations (both annotations are required by the spec).
As
+defined by JSR-352, fields for properties must be String typed. Any type conversion is up
+to the implementing developer to perform.
+
+A `javax.batch.api.chunk.ItemReader` artifact could be configured with a properties block
+such as the one described above and accessed as follows:

 [source, java]
 ----
@@ -304,73 +286,51 @@ public class MyItemReader extends AbstractItemReader {
 }
 ----

-
 The value of the field "propertyName1" will be "propertyValue1"

 [[jsrPropertySubstitution]]
-
-
 ==== Property Substitution

-Property substitution is provided by way of operators and simple conditional expressions. The general
-    usage is `#{operator['key']}`.
+Property substitution is provided by way of operators and simple conditional expressions.
+The general usage is `#{operator['key']}`.

 Supported operators:

-* jobParameters - access job parameter values that the job was started/restarted with.
-
-
-* jobProperties - access properties configured at the job level of the JSL.
-
-
-* systemProperties - access named system properties.
-
-
-* partitionPlan - access named property from the partition plan of a partitioned step.
+* `jobParameters`: access job parameter values that the job was started/restarted with.
+* `jobProperties`: access properties configured at the job level of the JSL.
+* `systemProperties`: access named system properties.
+* `partitionPlan`: access named property from the partition plan of a partitioned step.

 ----
 #{jobParameters['unresolving.prop']}?:#{systemProperties['file.separator']}
 ----

-The left hand side of the assignment is the expected value, the right hand side is the default value. In
-this example, the result will resolve to a value of the system property `file.separator` as
-`#{jobParameters['unresolving.prop']}` is assumed to not be resolvable. If neither expressions can be
-resolved, an empty String will be returned. Multiple conditions can be used, which are separated by a
-';'.
+The left-hand side of the assignment is the expected value, and the right-hand side is
+the default value. In the preceding example, the result resolves to the value of the
+system property `file.separator`, as `#{jobParameters['unresolving.prop']}` is assumed to
+not be resolvable. If neither expression can be resolved, an empty String is returned.
+Multiple conditions can be used and are separated by a ';'.

 [[jsrProcessingModels]]
-
-
 === Processing Models

 JSR-352 provides the same two basic processing models that Spring Batch does:

-
-
-* Item based processing - Using an `javax.batch.api.chunk.ItemReader`, an
-    optional `javax.batch.api.chunk.ItemProcessor`, and an
-    `javax.batch.api.chunk.ItemWriter`.
-
-
-* Task based processing - Using a `javax.batch.api.Batchlet`
-    implementation. This processing model is the same as the
-    `org.springframework.batch.core.step.tasklet.Tasklet` based processing
-    currently available.
-
-
-
+* Item based processing - Using a `javax.batch.api.chunk.ItemReader`, an optional
+`javax.batch.api.chunk.ItemProcessor`, and a `javax.batch.api.chunk.ItemWriter`.
+* Task based processing - Using a `javax.batch.api.Batchlet` implementation. This
+processing model is the same as the `org.springframework.batch.core.step.tasklet.Tasklet`
+based processing currently available.

 ==== Item based processing

-Item based processing in this context is a chunk size being set by the number of items read by an
-    `ItemReader`. To configure a step this way, specify the
-    `item-count` (which defaults to 10) and optionally configure the
-    `checkpoint-policy` as item (this is the default).
-
-
+Item based processing in this context means that the chunk size is set by the number of
+items read by an `ItemReader`. To configure a step this way, specify the `item-count`
+(which defaults to 10) and optionally configure the `checkpoint-policy` as item (this is
+the default).
 [source, xml]
 ----
@@ -385,27 +345,22 @@ Item based processing in this context is a chunk size being set by the number of
 ...
 ----

-
-If item based checkpointing is chosen, an additional attribute `time-limit` is
-    supported. This sets a time limit for how long the number of items specified has to be processed. If
-    the timeout is reached, the chunk will complete with however many items have been read by then
-    regardless of what the `item-count` is configured to be.
-
-
+If item-based checkpointing is chosen, an additional attribute, `time-limit`, is
+supported. This sets a time limit for how long the specified number of items has to be
+processed. If the timeout is reached, the chunk completes with however many items have
+been read by then, regardless of what the `item-count` is configured to be.

 ==== Custom checkpointing

-JSR-352 calls the process around the commit interval within a step "checkpointing". Item based
-    checkpointing is one approach as mentioned above. However, this will not be robust enough in many
-    cases. Because of this, the spec allows for the implementation of a custom checkpointing algorithm by
-    implementing the `javax.batch.api.chunk.CheckpointAlgorithm` interface. This
-    functionality is functionally the same as Spring Batch's custom completion policy. To use an
-    implementation of `CheckpointAlgorithm`, configure your step with the custom
-    `checkpoint-policy` as shown below where `fooCheckpointer` refers to an
-    implementation of `CheckpointAlgorithm`.
-
-
+JSR-352 calls the process around the commit interval within a step "`checkpointing`".
+Item-based checkpointing is one approach, as mentioned above. However, this is not robust
+enough in many cases. Because of this, the spec allows for the implementation of a custom
+checkpointing algorithm by implementing the `javax.batch.api.chunk.CheckpointAlgorithm`
+interface. This functionality is the same as Spring Batch's custom completion
+policy.
To use an implementation of `CheckpointAlgorithm`, configure your step with the
+custom `checkpoint-policy`, as shown below, where `fooCheckpointer` refers to an
+implementation of `CheckpointAlgorithm`.

 [source, xml]
 ----
@@ -422,86 +377,66 @@ JSR-352 calls the process around the commit interval within a step "checkpointin
 ----

 [[jsrRunningAJob]]
-
-
 === Running a job

 The entrance to executing a JSR-352 based job is through the
-    `javax.batch.operations.JobOperator`. Spring Batch provides its own implementation of
-    this interface (`org.springframework.batch.core.jsr.launch.JsrJobOperator`). This
-    implementation is loaded via the `javax.batch.runtime.BatchRuntime`. Launching a
-    JSR-352 based batch job is implemented as follows:
+`javax.batch.operations.JobOperator`. Spring Batch provides its own implementation of
+this interface (`org.springframework.batch.core.jsr.launch.JsrJobOperator`). This
+implementation is loaded through the `javax.batch.runtime.BatchRuntime`. Launching a
+JSR-352 based batch job is implemented as follows:

 [source, java]
 ----
-
 JobOperator jobOperator = BatchRuntime.getJobOperator();
 long jobExecutionId = jobOperator.start("fooJob", new Properties());
-
 ----

 The above code does the following:
-
-
-* Bootstraps a base `ApplicationContext` - In order to provide batch functionality, the framework
-    needs some infrastructure bootstrapped. This occurs once per JVM. The components that are
-    bootstrapped are similar to those provided by `@EnableBatchProcessing`.
-    Specific details can be found in the javadoc for the `JsrJobOperator`.
-
-
-
-* Loads an `ApplicationContext` for the job requested - In the example
-    above, the framework will look in /META-INF/batch-jobs for a file named fooJob.xml and load a
-    context that is a child of the shared context mentioned previously.
-
-
-* Launch the job - The job defined within the context will be executed asynchronously. The
-    `JobExecution's` id will be returned.
-
-
-
-
+* Bootstraps a base `ApplicationContext`: In order to provide batch functionality, the
+framework needs some infrastructure bootstrapped. This occurs once per JVM. The
+components that are bootstrapped are similar to those provided by
+`@EnableBatchProcessing`. Specific details can be found in the javadoc for the
+`JsrJobOperator`.
+* Loads an `ApplicationContext` for the job requested: In the example
+above, the framework looks in /META-INF/batch-jobs for a file named fooJob.xml and loads
+a context that is a child of the shared context mentioned previously.
+* Launches the job: The job defined within the context is executed asynchronously.
+The `JobExecution` ID is returned.

 [NOTE]
 ====
 All JSR-352 based batch jobs are executed asynchronously.
 ====

+When `JobOperator#start` is called using `SimpleJobOperator`, Spring Batch determines if
+the call is an initial run or a retry of a previously executed run. Using the JSR-352
+based `JobOperator#start(String jobXMLName, Properties jobParameters)`, the framework
+always creates a new `JobInstance` (JSR-352 job parameters are non-identifying). To
+restart a job, a call to
+`JobOperator#restart(long executionId, Properties restartParameters)` is required.
-
-When `JobOperator#start` is called using `SimpleJobOperator`,
-    Spring Batch determines if the call is an initial run or a retry of a previously executed run. Using the
-    JSR-352 based `JobOperator#start(String jobXMLName, Properties jobParameters)`, the
-    framework will always create a new JobInstance (JSR-352 job parameters are
-    non-identifying). In order to restart a job, a call to
-    `JobOperator#restart(long executionId, Properties restartParameters)` is required.

 [[jsrContexts]]
-
-
 === Contexts

-JSR-352 defines two context objects that are used to interact with the meta-data of a job or step from
-    within a batch artifact: `javax.batch.runtime.context.JobContext` and
-    `javax.batch.runtime.context.StepContext`.
Both of these are available in any step
-    level artifact (`Batchlet`, `ItemReader`, etc) with the
-    `JobContext` being available to job level artifacts as well
-    (`JobListener` for example).
-
-To obtain a reference to the `JobContext` or `StepContext`
-    within the current scope, simply use the `@Inject` annotation:
+JSR-352 defines two context objects that are used to interact with the meta-data of a job
+or step from within a batch artifact: `javax.batch.runtime.context.JobContext` and
+`javax.batch.runtime.context.StepContext`. Both of these are available in any step-level
+artifact (`Batchlet`, `ItemReader`, and others), with the `JobContext` being available to
+job-level artifacts as well (`JobListener`, for example).
+
+To obtain a reference to the `JobContext` or `StepContext` within the current scope, use
+the `@Inject` annotation, as follows:

 [source, java]
 ----
 @Inject
 JobContext jobContext;
-
 ----

-
 [NOTE]
 .@Autowire for JSR-352 contexts
 ====
 Using Spring's @Autowire is not supported for the injection of these contexts.
 ====

-In Spring Batch, the `JobContext` and `StepContext` wrap their
-    corresponding execution objects (`JobExecution` and
-    `StepExecution` respectively). Data stored via
-    `StepContext#setPersistentUserData(Serializable data)` is stored in the
-    Spring Batch `StepExecution#executionContext`.
+In Spring Batch, the `JobContext` and `StepContext` wrap their corresponding execution
+objects (`JobExecution` and `StepExecution`, respectively). Data stored through
+`StepContext#setPersistentUserData(Serializable data)` is stored in the Spring
+Batch `StepExecution#executionContext`.

 [[jsrStepFlow]]
-
-
 === Step Flow

-Within a JSR-352 based job, the flow of steps works similarly as it does within Spring Batch.
-    However, there are a few subtle differences:
-
-
-
-* Decision's are steps - In a regular Spring Batch job, a decision is a state that does not
-    have an independent `StepExecution` or any of the rights and
-    responsibilities that go along with being a full step.. However, with JSR-352, a decision
-    is a step just like any other and will behave just as any other steps (transactionality,
-    it gets a `StepExecution`, etc). This means that they are treated the
-    same as any other step on restarts as well.
-
-
-* `next` attribute and step transitions - In a regular job, these are
-    allowed to appear together in the same step. JSR-352 allows them to both be used in the
-    same step with the next attribute taking precedence in evaluation.
-
-
+Within a JSR-352 based job, the flow of steps works similarly as it does within Spring
+Batch. However, there are a few subtle differences:
+
+* Decisions are steps - In a regular Spring Batch job, a decision is a state that does
+not have an independent `StepExecution` or any of the rights and responsibilities that go
+along with being a full step. However, with JSR-352, a decision is a step just like any
+other and behaves just as any other step does (transactionality, it gets a
+`StepExecution`, and so on). This means that they are treated the same as any other step
+on restarts as well.
+* `next` attribute and step transitions - In a regular job, these are allowed to appear
+together in the same step. JSR-352 allows them to both be used in the same step, with the
+`next` attribute taking precedence in evaluation.
 * Transition element ordering - In a standard Spring Batch job, transition elements are
-    sorted from most specific to least specific and evaluated in that order. JSR-352 jobs
-    evaluate transition elements in the order they are specified in the XML.
-
-
+sorted from most specific to least specific and evaluated in that order. JSR-352 jobs
+evaluate transition elements in the order they are specified in the XML.
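The difference in transition-element ordering can be pictured with a small sketch. The following plain-Java example is illustrative only (it is not a Spring Batch or JSR-352 API; `Transition`, `springBatchNext`, and `jsrNext` are hypothetical names): it contrasts picking the most specific matching pattern with picking the first declared match.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch (not a Spring Batch or JSR-352 API): contrasts how the two
// models choose a transition for a given exit status. Spring Batch evaluates
// patterns from most specific to least specific; JSR-352 evaluates them in the
// order they appear in the XML.
public class TransitionOrdering {

    public static class Transition {
        public final String pattern;
        public final String next;

        public Transition(String pattern, String next) {
            this.pattern = pattern;
            this.next = next;
        }

        public boolean matches(String exitStatus) {
            // Translate the simple wildcard syntax ("FAILED*") into a regex.
            return exitStatus.matches(pattern.replace(".", "\\.").replace("*", ".*"));
        }

        public int wildcardCount() {
            // Fewer wildcards means a more specific pattern.
            return pattern.length() - pattern.replace("*", "").length();
        }
    }

    // Spring Batch style: the most specific matching pattern wins.
    public static String springBatchNext(List<Transition> transitions, String exitStatus) {
        List<Transition> sorted = new ArrayList<>(transitions);
        sorted.sort(Comparator.comparingInt(Transition::wildcardCount));
        for (Transition t : sorted) {
            if (t.matches(exitStatus)) {
                return t.next;
            }
        }
        return null;
    }

    // JSR-352 style: the first declared matching pattern wins.
    public static String jsrNext(List<Transition> transitions, String exitStatus) {
        for (Transition t : transitions) {
            if (t.matches(exitStatus)) {
                return t.next;
            }
        }
        return null;
    }
}
```

With transitions declared as `FAILED*` before `FAILED.TIMEOUT`, the two strategies pick different next steps for the exit status `FAILED.TIMEOUT`, which is exactly the restart-sensitive difference the list above describes.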
 [[jsrScaling]]
-
-
 === Scaling a JSR-352 batch job

+Traditional Spring Batch jobs have four ways of scaling (the last two capable of being
+executed across multiple JVMs):
+
+* Split: Running multiple steps in parallel.
+* Multiple threads: Executing a single step through multiple threads.
+* Partitioning: Dividing the data up for parallel processing (manager/worker).
+* Remote Chunking: Executing the processor piece of logic remotely.
+
+JSR-352 provides two options for scaling batch jobs. Both options support only a single
+JVM:
+
+* Split: Same as Spring Batch.
+* Partitioning: Conceptually the same as Spring Batch but implemented slightly
+differently.

 [[jsrPartitioning]]
-
-
 ==== Partitioning

+Conceptually, partitioning in JSR-352 is the same as it is in Spring Batch. Meta-data is
+provided to each worker to identify the input to be processed, with the workers reporting
+back to the manager the results upon completion. However, there are some important
+differences:
+
+* Partitioned `Batchlet`: This runs multiple instances of the configured `Batchlet` on
+multiple threads. Each instance has its own set of properties, as provided by the JSL or
+the `PartitionPlan`.
+* `PartitionPlan`: With Spring Batch's partitioning, an `ExecutionContext` is provided
+for each partition. With JSR-352, a single `javax.batch.api.partition.PartitionPlan` is
+provided with an array of `Properties` providing the meta-data for each partition.
+* `PartitionMapper`: JSR-352 provides two ways to generate partition meta-data. One is by
+setting it in the JSL (partition properties). The second is through an implementation of
+the `javax.batch.api.partition.PartitionMapper` interface. Functionally, this interface
+is similar to the `org.springframework.batch.core.partition.support.Partitioner`
+interface provided by Spring Batch, in that it provides a way to programmatically
+generate meta-data for partitioning.
+* `StepExecutions`: In Spring Batch, partitioned steps are run as manager/workers. Within
+JSR-352, the same configuration occurs. However, the worker steps do not get official
+`StepExecutions`. Because of that, calls to
+`JsrJobOperator#getStepExecutions(long jobExecutionId)` return only the `StepExecution`
+for the manager.

 [NOTE]
 ====
 The child `StepExecutions` still exist in the job repository and are available
+through the `JobExplorer` and Spring Batch Admin.
 ====

+* Compensating logic: Since Spring Batch implements the manager/worker logic of
+partitioning using steps, `StepExecutionListeners` can be used to handle compensating
+logic if something goes wrong. However, since the workers in JSR-352 are not full steps,
+JSR-352 provides a collection of other components to provide compensating logic when
+errors occur and to dynamically set the exit status. These components include the
+following:
+
+|===============
+|__Artifact Interface__|__Description__
+|`javax.batch.api.partition.PartitionCollector`|Provides a way for worker steps to send
+information back to the manager. There is one instance per worker thread.
+|`javax.batch.api.partition.PartitionAnalyzer`|End point that receives the information
+collected by the `PartitionCollector`, as well as the resulting statuses from a completed
+partition.
+|`javax.batch.api.partition.PartitionReducer`|Provides the ability to provide
+compensating logic for a partitioned step.
 |===============

 [[jsrTesting]]
 === Testing

+Since all JSR-352 based jobs are executed asynchronously, it can be difficult to
+determine when a job has completed. To help with testing, Spring Batch provides
+`org.springframework.batch.test.JsrTestUtils`. This utility class provides the ability to
+start and restart a job and wait for it to complete. Once the job completes, the
+associated `JobExecution` is returned.
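The wait-for-completion behavior that `JsrTestUtils` offers can be pictured as a polling loop. The following plain-Java sketch is illustrative only (`JobWaiter` and `isComplete` are hypothetical names; the real utility checks the `JobExecution` status through the job repository):

```java
import java.util.function.Supplier;

// Sketch of the idea behind waiting on an asynchronous JSR-352 job: poll a
// completion check until it reports true or a timeout elapses. The isComplete
// supplier stands in for a BatchStatus lookup on the JobExecution.
public class JobWaiter {

    public static void waitForCompletion(Supplier<Boolean> isComplete, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!isComplete.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Job did not complete in time");
            }
            try {
                Thread.sleep(50); // polling interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while waiting", e);
            }
        }
    }
}
```

A test would start the job asynchronously, call a loop like this with a generous timeout, and only then assert on the returned `JobExecution`.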
diff --git a/spring-batch-docs/asciidoc/readersAndWriters.adoc b/spring-batch-docs/asciidoc/readersAndWriters.adoc index 80cade7b8b..2654461a97 100644 --- a/spring-batch-docs/asciidoc/readersAndWriters.adoc +++ b/spring-batch-docs/asciidoc/readersAndWriters.adoc @@ -174,7 +174,10 @@ In the preceding example, there is a class `Foo`, a class `Bar`, and a class simple, but any type of transformation could be done here. The `BarWriter` writes `Bar` objects, throwing an exception if any other type is provided. Similarly, the `FooProcessor` throws an exception if anything but a `Foo` is provided. The -`FooProcessor` can then be injected into a `Step`, as shown in the following example: +`FooProcessor` can then be injected into a `Step`. + +[role="xmlContent"] +The following example shows how to inject the `FooProcessor` into a step in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -189,6 +192,9 @@ objects, throwing an exception if any other type is provided. Similarly, the ---- +[role="javaContent"] +The following example shows how to inject the `FooProcessor` into a step in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -267,7 +273,10 @@ compositeProcessor.setDelegates(itemProcessors); ---- Just as with the previous example, the composite processor can be configured into the -`Step`: +`Step`. + +[role="xmlContent"] +The following example shows how to configure the composite processor into the step in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -292,6 +301,10 @@ Just as with the previous example, the composite processor can be configured int ---- +[role="javaContent"] +The following example shows how to configure the composite processor into the step in +Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -405,8 +418,10 @@ Batch Core as part of a `Step` in a `Job`, then they almost certainly need to be registered manually with the `Step`. 
A reader, writer, or processor that is directly
wired into the `Step` gets registered automatically if it implements `ItemStream` or a
`StepListener` interface. However, because the delegates are not known to the `Step`,
-they need to be injected as listeners or streams (or both if appropriate), as shown in
-the following example:
+they need to be injected as listeners or streams (or both if appropriate).
+
+[role="xmlContent"]
+The following example shows how to inject a delegate as a stream in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -431,6 +446,9 @@ the following example:

 ----

+[role="javaContent"]
+The following example shows how to inject a delegate as a stream in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -808,8 +826,11 @@ public class PlayerMapper implements FieldSetMapper {
 For many, having to write a specific `FieldSetMapper` is equally as cumbersome as writing
 a specific `RowMapper` for a `JdbcTemplate`. Spring Batch makes this easier by providing
 a `FieldSetMapper` that automatically maps fields by matching a field name with a setter
-on the object using the JavaBean specification. Again using the football example, the
-`BeanWrapperFieldSetMapper` configuration looks like the following snippet:
+on the object using the JavaBean specification.
+
+[role="xmlContent"]
+Again using the football example, the `BeanWrapperFieldSetMapper` configuration looks like
+the following snippet in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -824,6 +845,10 @@ on the object using the JavaBean specification. Again using the football example
 scope="prototype" />
 ----

+[role="javaContent"]
+Again using the football example, the `BeanWrapperFieldSetMapper` configuration looks like
+the following snippet in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -872,7 +897,11 @@ While this looks like one large field, it actually represent 4 distinct fields:
.
Customer: ID of the customer ordering the item - 9 characters long.

 When configuring the `FixedLengthLineTokenizer`, each of these lengths must be provided
-in the form of ranges, as shown in the following example:
+in the form of ranges.
+
+[role="xmlContent"]
+The following example shows how to define ranges for the `FixedLengthLineTokenizer` in
+XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -886,17 +915,21 @@ in the form of ranges, as shown in the following example:

 [role="xmlContent"]
 Because the `FixedLengthLineTokenizer` uses the same `LineTokenizer` interface as
-discussed above, it returns the same `FieldSet` as if a delimiter had been used. This
+discussed earlier, it returns the same `FieldSet` as if a delimiter had been used. This
 allows the same approaches to be used in handling its output, such as using the
 `BeanWrapperFieldSetMapper`.

 [NOTE]
 ====
-Supporting the above syntax for ranges requires that a specialized property editor,
+Supporting the preceding syntax for ranges requires that a specialized property editor,
 `RangeArrayPropertyEditor`, be configured in the `ApplicationContext`. However, this bean
 is automatically declared in an `ApplicationContext` where the batch namespace is used.
 ====

+[role="javaContent"]
+The following example shows how to define ranges for the `FixedLengthLineTokenizer` in
+Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -941,8 +974,11 @@ though a "LINEA" has more information than a "LINEB".

 The `ItemReader` reads
 each line individually, but we must specify different `LineTokenizer` and
 `FieldSetMapper` objects so that the `ItemWriter` receives the correct items. The
 `PatternMatchingCompositeLineMapper` makes this easy by allowing maps
-of patterns to `LineTokenizer` instances and patterns to `FieldSetMapper` instances to be
-configured, as shown in the following example:
+of patterns to `LineTokenizer` instances and patterns to `FieldSetMapper` instances to be
+configured.
+
+[role="xmlContent"]
+The following example shows how to configure the `PatternMatchingCompositeLineMapper` in
+XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -1002,7 +1038,10 @@ prefixes to lines. The `PatternMatcher` always matches the most specific pattern
 possible, regardless of the order in the configuration. So if "LINE*" and "LINEA*" were
 both listed as patterns, "LINEA" would match pattern "LINEA*", while "LINEB" would match
 pattern "LINE*". Additionally, a single asterisk ("*") can serve as a default by matching
-any line not matched by any other pattern, as shown in the following example.
+any line not matched by any other pattern.
+
+[role="xmlContent"]
+The following example shows how to match a line not matched by any other pattern in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
 ----

+[role="javaContent"]
+The following example shows how to match a line not matched by any other pattern in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -1178,7 +1220,8 @@ public void write(T item) throws Exception {
 }
 ----

-A simple configuration might look like the following:
+[role="xmlContent"]
+In XML, a simple configuration might look like the following:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -1191,6 +1234,9 @@ A simple configuration might look like the following:

 ----

+[role="javaContent"]
+In Java, a simple configuration might look like the following:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -1302,8 +1348,10 @@ public class CustomerCredit {
 ----

 Because a domain object is being used, an implementation of the `FieldExtractor`
-interface must be provided, along with the delimiter to use, as shown in the following
-example:
+interface must be provided, along with the delimiter to use.
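The role a `FieldExtractor` plays can be sketched in plain Java. The following example is a concept illustration only (it is not Spring Batch's `BeanWrapperFieldExtractor` or `DelimitedLineAggregator`; the class and method names are hypothetical): given a domain object, it pulls out an ordered array of field values, which are then joined with the delimiter to form a line.

```java
import java.util.StringJoiner;

// Concept sketch (not the Spring Batch API): extract fields from a domain
// object in a fixed order, then join them with a delimiter to form a line.
public class DelimitedLineSketch {

    public static class CustomerCredit {
        private final String name;
        private final String credit;

        public CustomerCredit(String name, String credit) {
            this.name = name;
            this.credit = credit;
        }

        public String getName() { return name; }
        public String getCredit() { return credit; }
    }

    // Plays the role of a FieldExtractor: domain object -> ordered values.
    public static Object[] extract(CustomerCredit item) {
        return new Object[] { item.getName(), item.getCredit() };
    }

    // Plays the role of a line aggregator: values -> delimited line.
    public static String aggregate(Object[] fields, String delimiter) {
        StringJoiner joiner = new StringJoiner(delimiter);
        for (Object field : fields) {
            joiner.add(String.valueOf(field));
        }
        return joiner.toString();
    }
}
```

The configurations that follow wire exactly this division of labor into the `FlatFileItemWriter`: the extractor decides field order, the aggregator decides the delimiter.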
+
+[role="xmlContent"]
+The following example shows how to use the `FieldExtractor` with a delimiter in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -1323,6 +1371,9 @@ example:

 ----

+[role="javaContent"]
+The following example shows how to use the `FieldExtractor` with a delimiter in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -1372,8 +1423,11 @@ public FlatFileItemWriter itemWriter(Resource outputResource) th
 Delimited is not the only type of flat file format. Many prefer to use a set width for
 each column to delineate between fields, which is usually referred to as 'fixed width'.
-Spring Batch supports this in file writing with the `FormatterLineAggregator`. Using the
-same `CustomerCredit` domain object described above, it can be configured as follows:
+Spring Batch supports this in file writing with the `FormatterLineAggregator`.
+
+[role="xmlContent"]
+Using the same `CustomerCredit` domain object described above, it can be configured as
+follows in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -1393,6 +1447,10 @@ same `CustomerCredit` domain object described above, it can be configured as fol

 ----

+[role="javaContent"]
+Using the same `CustomerCredit` domain object described above, it can be configured as
+follows in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -1415,13 +1473,19 @@ public FlatFileItemWriter itemWriter(Resource outputResource) th
 ----

 Most of the preceding example should look familiar. However, the value of the format
-property is new and is shown in the following element:
+property is new.
+
+[role="xmlContent"]
+The following example shows the format property in XML:

 [source, xml, role="xmlContent"]
 ----
 ----

+[role="javaContent"]
+The following example shows the format property in Java:
+
 [source, java, role="javaContent"]
 ----
 ...
@@ -1551,9 +1615,10 @@ object to be mapped.
The example configuration demonstrates this with the value
 * `Unmarshaller`: An unmarshalling facility provided by Spring OXM for mapping the XML
 fragment to an object.

+[role="xmlContent"]
 The following example shows how to define a `StaxEventItemReader` that works with a root
-element named `trade`, a resource of `org/springframework/batch/item/xml/domain/trades.xml`, and an unmarshaller
-called `tradeMarshaller`.
+element named `trade`, a resource of `org/springframework/batch/item/xml/domain/trades.xml`,
+and an unmarshaller called `tradeMarshaller` in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -1565,6 +1630,11 @@ called `tradeMarshaller`.

 ----

+[role="javaContent"]
+The following example shows how to define a `StaxEventItemReader` that works with a root
+element named `trade`, a resource of `org/springframework/batch/item/xml/domain/trades.xml`,
+and an unmarshaller called `tradeMarshaller` in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -1585,7 +1655,10 @@ an alias passed in as a map with the first key and value being the name of the f
 (that is, a root element) and the object type to bind. Then, similar to a `FieldSet`, the
 names of the other elements that map to fields within the object type are described as
 key/value pairs in the map. In the configuration file, we can use a Spring configuration
-utility to describe the required alias, as follows:
+utility to describe the required alias.
+
+[role="xmlContent"]
+The following example shows how to describe the alias in XML:

 .XML Configuration
 [source, xml, role="xmlContent"]
 ----
@@ -1605,6 +1678,9 @@ utility to describe the required alias, as follows:

 ----

+[role="javaContent"]
+The following example shows how to describe the alias in Java:
+
 .Java Configuration
 [source, java, role="javaContent"]
 ----
@@ -1674,8 +1750,12 @@ Output works symmetrically to input. The `StaxEventItemWriter` needs a `Resource
 marshaller, and a `rootTagName`.
A Java object is passed to a marshaller
(typically a standard Spring OXM Marshaller) which writes to a `Resource` by using a
custom event writer that filters the `StartDocument` and `EndDocument` events produced for each
-fragment by the OXM tools. The following example uses the
-`StaxEventItemWriter`:
+fragment by the OXM tools.
+
+[role="xmlContent"]
+The following XML example uses the `StaxEventItemWriter`:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1688,6 +1768,9 @@ fragment by the OXM tools. The following example uses the

----

+[role="javaContent"]
+The following Java example uses the `StaxEventItemWriter`:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1705,10 +1788,12 @@ public StaxEventItemWriter itemWriter(Resource outputResource) {
----

The preceding configuration sets up the three required properties and sets the optional
-`overwriteOutput=true` attribute, mentioned earlier in this chapter for specifying whether
-an existing file can be overwritten. It should be noted the marshaller used for the
-writer in the following example is the exact same as the one used in the reading example
-from earlier in the chapter:
+`overwriteOutput=true` attribute, mentioned earlier in this chapter for specifying whether
+an existing file can be overwritten.
+
+[role="xmlContent"]
+The following XML example uses the same marshaller as the one used in the reading example
+shown earlier in the chapter:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1728,6 +1813,10 @@ from earlier in the chapter:

----

+[role="javaContent"]
+The following Java example uses the same marshaller as the one used in the reading example
+shown earlier in the chapter:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1883,9 +1972,12 @@ input for both XML and flat file processing. 
Consider the following files in a directory:

file-1.txt
file-2.txt
ignored.txt
----
-`file-1.txt` and `file-2.txt` are formatted the same and, for business reasons, should be
-processed together. The `MultiResourceItemReader` can be used to read in both files by
-using wildcards, as shown in the following example:
+`file-1.txt` and `file-2.txt` are formatted the same and, for business reasons, should be
+processed together. The `MultiResourceItemReader` can be used to read in both files by
+using wildcards.
+
+[role="xmlContent"]
+The following example shows how to read files with wildcards in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1896,6 +1988,9 @@ using wildcards, as shown in the following example:

----

+[role="javaContent"]
+The following example shows how to read files with wildcards in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2040,8 +2135,10 @@ is that it allows items to be 'streamed'. The `read` method can be called once, 
can be written out by an `ItemWriter`, and then the next item can be obtained with
`read`. This allows item reading and writing to be done in 'chunks' and committed
periodically, which is the essence of high performance batch processing. Furthermore, it
-is very easily configured for injection into a Spring Batch `Step`, as shown in the
-following example:
+is easily configured for injection into a Spring Batch `Step`. 
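The streaming contract just described (call `read` once, hand the item to a writer, commit in chunks) can be sketched in plain Java. This is an illustrative sketch only: the `ItemReader`/`ItemWriter` shapes and the `process` helper below are hypothetical stand-ins, not the actual Spring Batch chunk-processing implementation.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch only (hypothetical names, not the Spring Batch API):
// read items one at a time and write them out in periodically "committed" chunks.
public class ChunkLoopSketch {

    public interface ItemReader<T> { T read(); }              // returns null when exhausted
    public interface ItemWriter<T> { void write(List<? extends T> items); }

    // Reads until the reader is exhausted, writing every chunkSize items.
    public static <T> int process(ItemReader<T> reader, ItemWriter<T> writer, int chunkSize) {
        int chunksWritten = 0;
        List<T> chunk = new ArrayList<>();
        T item;
        while ((item = reader.read()) != null) {
            chunk.add(item);
            if (chunk.size() == chunkSize) {                  // stand-in for a commit boundary
                writer.write(new ArrayList<>(chunk));
                chunk.clear();
                chunksWritten++;
            }
        }
        if (!chunk.isEmpty()) {                               // flush the final partial chunk
            writer.write(new ArrayList<>(chunk));
            chunksWritten++;
        }
        return chunksWritten;
    }

    public static void main(String[] args) {
        Iterator<String> source = List.of("a", "b", "c", "d", "e").iterator();
        ItemReader<String> reader = () -> source.hasNext() ? source.next() : null;
        List<String> written = new ArrayList<>();
        int chunks = process(reader, written::addAll, 2);
        System.out.println(chunks + " chunks, items: " + written);  // 3 chunks, items: [a, b, c, d, e]
    }
}
```

Because only one chunk is buffered at a time, memory use stays flat no matter how large the input is, which is the point of the streaming contract.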
+ +[role="xmlContent"] +The following example shows how to inject an `ItemReader` into a `Step` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -2055,6 +2152,9 @@ following example: ---- +[role="javaContent"] +The following example shows how to inject an `ItemReader` into a `Step` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -2156,9 +2256,12 @@ This configured `ItemReader` returns `CustomerCredit` objects in the exact same as described by the `JdbcCursorItemReader`, assuming hibernate mapping files have been created correctly for the `Customer` table. The 'useStatelessSession' property defaults to true but has been added here to draw attention to the ability to switch it on or off. -It is also worth noting that the fetch size of the underlying cursor can be set via the +It is also worth noting that the fetch size of the underlying cursor can be set with the `setFetchSize` property. As with `JdbcCursorItemReader`, configuration is -straightforward, as shown in the following example: +straightforward. + +[role="xmlContent"] +The following example shows how to inject a Hibernate `ItemReader` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -2170,6 +2273,9 @@ straightforward, as shown in the following example: ---- +[role="javaContent"] +The following example shows how to inject a Hibernate `ItemReader` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -2196,7 +2302,8 @@ The stored procedure can return the cursor in three different ways: * As a ref-cursor returned as an out parameter (used by Oracle and PostgreSQL). * As the return value of a stored function call. 
-
+[role="xmlContent"]
+The following XML example configuration uses the same 'customer credit' example as earlier
+examples:

.XML Configuration
@@ -2211,6 +2318,10 @@ examples:

----

+[role="javaContent"]
+The following Java example configuration uses the same 'customer credit' example as
+earlier examples:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2231,8 +2342,11 @@ The preceding example relies on the stored procedure to provide a `ResultSet` as
returned result (option 1 from earlier).

If the stored procedure returned a `ref-cursor`
(option 2), then we would need to provide
-the position of the out parameter that is the returned `ref-cursor`. The following
-example shows how to work with the first parameter being a ref-cursor:
+the position of the out parameter that is the returned `ref-cursor`.
+
+[role="xmlContent"]
+The following example shows how to work with the first parameter being a ref-cursor in
+XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2247,6 +2361,10 @@ example shows how to work with the first parameter being a ref-cursor:

----

+[role="javaContent"]
+The following example shows how to work with the first parameter being a ref-cursor in
+Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2264,8 +2382,10 @@ public StoredProcedureItemReader reader(DataSource dataSource) {
----

If the cursor was returned from a stored function (option 3), we would need to set the
-property "[maroon]#function#" to `true`. It defaults to `false`. The following example
-shows what that would look like:
+property "[maroon]#function#" to `true`. It defaults to `false`. 
+
+[role="xmlContent"]
+The following example shows how to set the property to `true` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2280,6 +2400,9 @@ shows what that would look like:

----

+[role="javaContent"]
+The following example shows how to set the property to `true` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2300,10 +2423,13 @@ In all of these cases, we need to define a `RowMapper` as well as a `DataSource` 
actual procedure name.

If the stored procedure or function takes in parameters, then they must be declared and
-set via the `parameters` property. The following example, for Oracle, declares three
-parameters. The first one is the out parameter that returns the ref-cursor, and the
+set by using the `parameters` property. The following example, for Oracle, declares three
+parameters. The first one is the `out` parameter that returns the ref-cursor, and the
second and third are `in` parameters that take values of type `INTEGER`.

+[role="xmlContent"]
+The following example shows how to work with parameters in XML:
+
.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -2338,6 +2464,9 @@ second and third are in parameters that takes a value of type `INTEGER`.

----

+[role="javaContent"]
+The following example shows how to work with parameters in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2396,7 +2525,8 @@ After the reader has been opened, it passes back one item per call to `read` in
basic fashion as any other `ItemReader`. The paging happens behind the scenes when
additional rows are needed. 
-
+[role="xmlContent"]
+The following XML example configuration uses a similar 'customer credit' example as the
cursor-based `ItemReaders` shown previously:

.XML Configuration
@@ -2422,6 +2552,10 @@ cursor-based `ItemReaders` shown previously:

----

+[role="javaContent"]
+The following Java example configuration uses a similar 'customer credit' example as the
+cursor-based `ItemReaders` shown previously:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2475,8 +2609,11 @@ be garbage collected once the page is processed.

The `JpaPagingItemReader` lets you declare a JPQL statement and pass in an
`EntityManagerFactory`. It then passes back one item per call to read in the same basic
fashion as any other `ItemReader`. The paging happens behind the scenes when additional
-entities are needed. The following example configuration uses the same 'customer credit'
-example as the JDBC reader shown previously:
+entities are needed.
+
+[role="xmlContent"]
+The following XML example configuration uses the same 'customer credit' example as the
+JDBC reader shown previously:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2488,6 +2625,10 @@ example as the JDBC reader shown previously:

----

+[role="javaContent"]
+The following Java example configuration uses the same 'customer credit' example as the
+JDBC reader shown previously:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2563,8 +2704,10 @@ another Spring Batch class or because it truly is the main `ItemReader` for a st 
The following -example uses the `ItemReaderAdapter`: +method by invoking the delegate pattern and are fairly simple to set up. + +[role="xmlContent"] +The following XML example uses the `ItemReaderAdapter`: .XML Configuration [source, xml, role="xmlContent"] @@ -2577,6 +2720,9 @@ example uses the `ItemReaderAdapter`: ---- +[role="javaContent"] +The following Java example uses the `ItemReaderAdapter`: + .Java Configuration [source, java, role="javaContent"] ---- @@ -2600,7 +2746,10 @@ One important point to note is that the contract of the `targetMethod` must be t as the contract for `read`: When exhausted, it returns `null`. Otherwise, it returns an `Object`. Anything else prevents the framework from knowing when processing should end, either causing an infinite loop or incorrect failure, depending upon the implementation -of the `ItemWriter`. The following example uses the `ItemWriterAdapter`: +of the `ItemWriter`. + +[role="xmlContent"] +The following XML example uses the `ItemWriterAdapter`: .XML Configuration [source, xml, role="xmlContent"] @@ -2613,6 +2762,9 @@ of the `ItemWriter`. The following example uses the `ItemWriterAdapter`: ---- +[role="javaContent"] +The following Java example uses the `ItemWriterAdapter`: + .Java Configuration [source, java, role="javaContent"] ---- @@ -2659,7 +2811,10 @@ public interface Validator { The contract is that the `validate` method throws an exception if the object is invalid and returns normally if it is valid. Spring Batch provides an out of the box -`ValidatingItemProcessor`, as shown in the following bean definition: +`ValidatingItemProcessor`. + +[role="xmlContent"] +The following bean definition shows how to configure a `ValidatingItemProcessor` in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -2675,6 +2830,9 @@ and returns normally if it is valid. 
Spring Batch provides an out of the box ---- +[role="javaContent"] +The following bean definition shows how to configure a `ValidatingItemProcessor` in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -2749,7 +2907,10 @@ contain an extra statement in the `where` clause, such as `where PROCESSED_IND = thereby ensuring that only unprocessed records are returned in the case of a restart. In this scenario, it is preferable to not store any state, such as the current row number, since it is irrelevant upon restart. For this reason, all readers and writers include the -'saveState' property, as shown in the following example: +'saveState' property. + +[role="xmlContent"] +The following bean definition shows how to prevent state persistence in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -2773,6 +2934,9 @@ since it is irrelevant upon restart. For this reason, all readers and writers in ---- +[role="javaContent"] +The following bean definition shows how to prevent state persistence in Java: + .Java Configuration [source, java, role="javaContent"] ---- diff --git a/spring-batch-docs/asciidoc/repeat.adoc b/spring-batch-docs/asciidoc/repeat.adoc index 13043dd59e..0875d81ebb 100644 --- a/spring-batch-docs/asciidoc/repeat.adoc +++ b/spring-batch-docs/asciidoc/repeat.adoc @@ -236,7 +236,7 @@ configure AOP interceptors, see the Spring User Guide): ---- [role="javaContent"] -The following example demonstrates using java configuration to +The following example demonstrates using Java configuration to repeat a service call to a method called `processMessage` (for more detail on how to configure AOP interceptors, see the Spring User Guide): diff --git a/spring-batch-docs/asciidoc/retry.adoc b/spring-batch-docs/asciidoc/retry.adoc index ecdfe823cd..b7213823fb 100644 --- a/spring-batch-docs/asciidoc/retry.adoc +++ b/spring-batch-docs/asciidoc/retry.adoc @@ -10,30 +10,23 @@ ifndef::onlyonetoggle[] include::toggle.adoc[] endif::onlyonetoggle[] -To make 
processing more robust and less prone to failure, it sometimes
 helps to automatically retry a failed operation in case it might
 succeed on a subsequent attempt. Errors that are susceptible to intermittent failure
 are often transient in nature. Examples include remote calls to a web
 service that fails because of a network glitch or a
 `DeadlockLoserDataAccessException` in a database update.
+To make processing more robust and less prone to failure, it sometimes helps to
+automatically retry a failed operation in case it might succeed on a subsequent attempt.
+Errors that are susceptible to intermittent failure are often transient in nature.
+Examples include remote calls to a web service that fails because of a network glitch or a
+`DeadlockLoserDataAccessException` in a database update.

[[retryTemplate]]
-
-
=== `RetryTemplate`
-
[NOTE]
====
The retry functionality was pulled out of Spring Batch as of 2.2.0. It is now part of a
new library, https://github.com/spring-projects/spring-retry[Spring Retry].
====
-
-To automate retry
-    operations Spring Batch has the `RetryOperations`
-    strategy. The following interface definition for `RetryOperations`:
-
+To automate retry operations, Spring Batch has the `RetryOperations` strategy. The
+following listing shows the interface definition for `RetryOperations`:

[source, java]
----
@@ -53,9 +46,8 @@ public interface RetryOperations {

}
----
-The basic callback is a simple interface that lets you
-    insert some business logic to be retried, as shown in the following interface definition:
-
+The basic callback is a simple interface that lets you insert some business logic to be
+retried, as shown in the following interface definition:

[source, java]
----
@@ -66,19 +58,15 @@ public interface RetryCallback {

}
----
-The callback runs and, if it fails (by throwing an
-    `Exception`), it is retried until either it is
-    successful or the implementation aborts. 
There are a number of - overloaded `execute` methods in the - `RetryOperations` interface. Those methods deal with various use - cases for recovery when all retry attempts are exhausted and deal with - retry state, which allows clients and implementations to store information - between calls (we cover this in more detail later in the chapter). - -The simplest general purpose implementation of - `RetryOperations` is - `RetryTemplate`. It can be used as follows: +The callback runs and, if it fails (by throwing an `Exception`), it is retried until +either it is successful or the implementation aborts. There are a number of overloaded +`execute` methods in the `RetryOperations` interface. Those methods deal with various use +cases for recovery when all retry attempts are exhausted and deal with retry state, which +lets clients and implementations store information between calls (we cover this in more +detail later in the chapter). +The simplest general purpose implementation of `RetryOperations` is `RetryTemplate`. It +can be used as follows: [source, java] ---- @@ -99,36 +87,26 @@ Foo result = template.execute(new RetryCallback() { }); ---- -In the preceding example, we make a web service call and return the result - to the user. If that call fails, then it is retried until a timeout is - reached. +In the preceding example, we make a web service call and return the result to the user. If +that call fails, then it is retried until a timeout is reached. [[retryContext]] - - ==== `RetryContext` -The method parameter for the `RetryCallback` - is a `RetryContext`. Many callbacks - ignore the context, but, if necessary, it can be used as an attribute bag - to store data for the duration of the iteration. +The method parameter for the `RetryCallback` is a `RetryContext`. Many callbacks ignore +the context, but, if necessary, it can be used as an attribute bag to store data for the +duration of the iteration. 
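The attribute-bag idea can be shown with a small plain-Java sketch. The `Callback` interface, `execute` helper, and `attempt` key below are hypothetical stand-ins, not the Spring Retry API; they only illustrate state carried across attempts of one iteration.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only (hypothetical names, not the Spring Retry API):
// a context created per iteration doubles as an attribute bag, so state
// written in one attempt is visible to later attempts of the same iteration.
public class ContextSketch {

    public interface Callback<T> { T doWithRetry(Map<String, Object> context) throws Exception; }

    public static <T> T execute(Callback<T> callback, int maxAttempts) {
        Map<String, Object> context = new HashMap<>();        // lives for the whole iteration
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            context.put("attempt", attempt);                  // hypothetical attribute key
            try {
                return callback.doWithRetry(context);
            } catch (Exception e) {
                last = e;                                     // context contents survive the failure
            }
        }
        throw new RuntimeException("retries exhausted", last);
    }

    public static void main(String[] args) {
        // Fails twice, then succeeds; reads its own attempt count from the bag.
        String result = execute(ctx -> {
            int attempt = (Integer) ctx.get("attempt");
            if (attempt < 3) {
                throw new IllegalStateException("transient failure " + attempt);
            }
            return "succeeded on attempt " + attempt;
        }, 5);
        System.out.println(result);                           // succeeded on attempt 3
    }
}
```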
-A `RetryContext` has a parent context - if there is a nested retry in progress in the same thread. The parent - context is occasionally useful for storing data that need to be shared - between calls to `execute`. +A `RetryContext` has a parent context if there is a nested retry in progress in the same +thread. The parent context is occasionally useful for storing data that need to be shared +between calls to `execute`. [[recoveryCallback]] - - ==== `RecoveryCallback` -When a retry is exhausted, the - `RetryOperations` can pass control to a different - callback, called the `RecoveryCallback`. To use this - feature, clients pass in the callbacks together to the same method, - as shown in the following example: - +When a retry is exhausted, the `RetryOperations` can pass control to a different callback, +called the `RecoveryCallback`. To use this feature, clients pass in the callbacks together +to the same method, as shown in the following example: [source, java] ---- @@ -143,141 +121,100 @@ Foo foo = template.execute(new RetryCallback() { }); ---- -If the business logic does not succeed before the template - decides to abort, then the client is given the chance to do some - alternate processing through the recovery callback. +If the business logic does not succeed before the template decides to abort, then the +client is given the chance to do some alternate processing through the recovery callback. [[statelessRetry]] - - ==== Stateless Retry -In the simplest case, a retry is just a while loop. The - `RetryTemplate` can just keep trying until it - either succeeds or fails. The `RetryContext` - contains some state to determine whether to retry or abort, but this - state is on the stack and there is no need to store it anywhere - globally, so we call this stateless retry. The distinction between - stateless and stateful retry is contained in the implementation of the - `RetryPolicy` (the - `RetryTemplate` can handle both). 
In a stateless
-    retry, the retry callback is always executed in the same thread it was on
-    when it failed.
+In the simplest case, a retry is just a while loop. The `RetryTemplate` can just keep
+trying until it either succeeds or fails. The `RetryContext` contains some state to
+determine whether to retry or abort, but this state is on the stack and there is no need
+to store it anywhere globally, so we call this stateless retry. The distinction between
+stateless and stateful retry is contained in the implementation of the `RetryPolicy` (the
+`RetryTemplate` can handle both). In a stateless retry, the retry callback is always
+executed in the same thread it was on when it failed.

[[statefulRetry]]
-
-
==== Stateful Retry

-Where the failure has caused a transactional resource to become
-    invalid, there are some special considerations. This does not apply to a
-    simple remote call because there is no transactional resource (usually),
-    but it does sometimes apply to a database update, especially when using
-    Hibernate. In this case it only makes sense to re-throw the exception
-    that called the failure immediately, so that the transaction can roll
-    back and we can start a new, valid transaction.
+Where the failure has caused a transactional resource to become invalid, there are some
+special considerations. This does not apply to a simple remote call because there is no
+transactional resource (usually), but it does sometimes apply to a database update,
+especially when using Hibernate. In this case, it only makes sense to re-throw the
+exception that caused the failure immediately, so that the transaction can roll back and
+we can start a new, valid transaction.

In cases involving transactions, a stateless retry is not good enough, because the
To avoid losing it we have to introduce a - storage strategy to lift it off the stack and put it (at a minimum) in - heap storage. For this purpose, Spring Batch provides a storage strategy called - `RetryContextCache`, which can be injected into the - `RetryTemplate`. The default implementation of the - `RetryContextCache` is in memory, using a simple - `Map`. Advanced usage with multiple processes in a - clustered environment might also consider implementing the - `RetryContextCache` with a cluster cache of some - sort (however, even in a clustered environment, this might be - overkill). - -Part of the responsibility of the - `RetryOperations` is to recognize the failed - operations when they come back in a new execution (and usually wrapped - in a new transaction). To facilitate this, Spring Batch provides the - `RetryState` abstraction. This works in conjunction - with a special `execute` methods in the - `RetryOperations` interface. - -The way the failed operations are recognized is by identifying the - state across multiple invocations of the retry. To identify the state, - the user can provide a `RetryState` object that is - responsible for returning a unique key identifying the item. The - identifier is used as a key in the - `RetryContextCache` interface. - +re-throw and roll back necessarily involve leaving the `RetryOperations.execute()` method +and potentially losing the context that was on the stack. To avoid losing it we have to +introduce a storage strategy to lift it off the stack and put it (at a minimum) in heap +storage. For this purpose, Spring Batch provides a storage strategy called +`RetryContextCache`, which can be injected into the `RetryTemplate`. The default +implementation of the `RetryContextCache` is in memory, using a simple `Map`. 
Advanced
+usage with multiple processes in a clustered environment might also consider implementing
+the `RetryContextCache` with a cluster cache of some sort (however, even in a clustered
+environment, this might be overkill).
+
+Part of the responsibility of the `RetryOperations` is to recognize the failed operations
+when they come back in a new execution (and usually wrapped in a new transaction). To
+facilitate this, Spring Batch provides the `RetryState` abstraction. This works in
+conjunction with special `execute` methods in the `RetryOperations` interface.
+
+The way the failed operations are recognized is by identifying the state across multiple
+invocations of the retry. To identify the state, the user can provide a `RetryState`
+object that is responsible for returning a unique key identifying the item. The identifier
+is used as a key in the `RetryContextCache` interface.
[WARNING]
====
-Be very careful with the implementation of
-    `Object.equals()` and `Object.hashCode()` in the
-    key returned by `RetryState`. The best advice is
-    to use a business key to identify the items. In the case of a JMS
-    message, the message ID can be used.
+Be very careful with the implementation of `Object.equals()` and `Object.hashCode()` in
+the key returned by `RetryState`. The best advice is to use a business key to identify the
+items. In the case of a JMS message, the message ID can be used.
====
+When the retry is exhausted, there is also the option to handle the failed item in a
+different way, instead of calling the `RetryCallback` (which is now presumed to be likely
+to fail). Just like in the stateless case, this option is provided by the
+`RecoveryCallback`, which can be provided by passing it in to the `execute` method of
+`RetryOperations`.

-When the retry is exhausted, there is also the option to handle the
-    failed item in a different way, instead of calling the
-    `RetryCallback` (which is now presumed to be likely
-    to fail). 
Just like in the stateless case, this option is provided by - the `RecoveryCallback`, which can be provided by - passing it in to the `execute` method of - `RetryOperations`. - -The decision to retry or not is actually delegated to a regular - `RetryPolicy`, so the usual concerns about limits - and timeouts can be injected there (described later in this chapter). +The decision to retry or not is actually delegated to a regular `RetryPolicy`, so the +usual concerns about limits and timeouts can be injected there (described later in this +chapter). [[retryPolicies]] - - === Retry Policies -Inside a `RetryTemplate`, the decision to retry - or fail in the `execute` method is determined by a - `RetryPolicy`, which is also a factory for the - `RetryContext`. The - `RetryTemplate` has the responsibility to use the - current policy to create a `RetryContext` and pass - that in to the `RetryCallback` at every attempt. - After a callback fails, the `RetryTemplate` has to - make a call to the `RetryPolicy` to ask it to update - its state (which is stored in the - `RetryContext`) and then asks the policy if - another attempt can be made. If another attempt cannot be made (such as when a - limit is reached or a timeout is detected) then the policy is also - responsible for handling the exhausted state. Simple implementations - throw `RetryExhaustedException`, which causes - any enclosing transaction to be rolled back. More sophisticated - implementations might attempt to take some recovery action, in which case - the transaction can remain intact. - +Inside a `RetryTemplate`, the decision to retry or fail in the `execute` method is +determined by a `RetryPolicy`, which is also a factory for the `RetryContext`. The +`RetryTemplate` has the responsibility to use the current policy to create a +`RetryContext` and pass that in to the `RetryCallback` at every attempt. 
After a callback +fails, the `RetryTemplate` has to make a call to the `RetryPolicy` to ask it to update its +state (which is stored in the `RetryContext`) and then asks the policy if another attempt +can be made. If another attempt cannot be made (such as when a limit is reached or a +timeout is detected) then the policy is also responsible for handling the exhausted state. +Simple implementations throw `RetryExhaustedException`, which causes any enclosing +transaction to be rolled back. More sophisticated implementations might attempt to take +some recovery action, in which case the transaction can remain intact. [TIP] ==== -Failures are inherently either retryable or not. If the same - exception is always going to be thrown from the business logic, it - does no good to retry it. So do not retry on all exception types. Rather, try to - focus on only those exceptions that you expect to be retryable. It is not - usually harmful to the business logic to retry more aggressively, but - it is wasteful, because, if a failure is deterministic, you spend time - retrying something that you know in advance is fatal. +Failures are inherently either retryable or not. If the same exception is always going to +be thrown from the business logic, it does no good to retry it. So do not retry on all +exception types. Rather, try to focus on only those exceptions that you expect to be +retryable. It is not usually harmful to the business logic to retry more aggressively, but +it is wasteful, because, if a failure is deterministic, you spend time retrying something +that you know in advance is fatal. ==== +Spring Batch provides some simple general purpose implementations of stateless +`RetryPolicy`, such as `SimpleRetryPolicy` and `TimeoutRetryPolicy` (used in the preceding example). -Spring Batch provides some simple general purpose implementations of - stateless `RetryPolicy`, such as - `SimpleRetryPolicy` and - `TimeoutRetryPolicy` (used in the preceding example). 
-
-The `SimpleRetryPolicy` allows a retry on
-    any of a named list of exception types, up to a fixed number of times. It
-    also has a list of "fatal" exceptions that should never be retried, and
-    this list overrides the retryable list so that it can be used to give
-    finer control over the retry behavior, as shown in the following example:
-
+The `SimpleRetryPolicy` allows a retry on any of a named list of exception types, up to a
+fixed number of times. It also has a list of "fatal" exceptions that should never be
+retried, and this list overrides the retryable list so that it can be used to give finer
+control over the retry behavior, as shown in the following example:

[source, java]
----
@@ -299,31 +236,24 @@ template.execute(new RetryCallback() {

});
----
-There is also a more flexible implementation called
-    `ExceptionClassifierRetryPolicy`, which allows the
-    user to configure different retry behavior for an arbitrary set of
-    exception types though the `ExceptionClassifier`
-    abstraction. The policy works by calling on the classifier to convert an
-    exception into a delegate `RetryPolicy`. For
-    example, one exception type can be retried more times before failure than
-    another by mapping it to a different policy.
+There is also a more flexible implementation called `ExceptionClassifierRetryPolicy`,
+which lets the user configure different retry behavior for an arbitrary set of exception
+types through the `ExceptionClassifier` abstraction. The policy works by calling on the
+classifier to convert an exception into a delegate `RetryPolicy`. For example, one
+exception type can be retried more times before failure than another by mapping it to a
+different policy.

-Users might need to implement their own retry policies for more
-    customized decisions. For instance, a custom retry policy makes sense when there is a well-known,
-    solution-specific classification of exceptions into retryable and not
-    retryable. 
+Users might need to implement their own retry policies for more customized decisions. For +instance, a custom retry policy makes sense when there is a well-known, solution-specific +classification of exceptions into retryable and not retryable. [[backoffPolicies]] - - === Backoff Policies -When retrying after a transient failure, it often helps to wait a bit - before trying again, because usually the failure is caused by some problem - that can only be resolved by waiting. If a - `RetryCallback` fails, the - `RetryTemplate` can pause execution according to the - `BackoffPolicy`. +When retrying after a transient failure, it often helps to wait a bit before trying again, +because usually the failure is caused by some problem that can only be resolved by +waiting. If a `RetryCallback` fails, the `RetryTemplate` can pause execution according to +the `BackoffPolicy`. The following code shows the interface definition for the `BackOffPolicy` interface: @@ -339,31 +269,22 @@ public interface BackoffPolicy { } ---- -A `BackoffPolicy` is free to implement - the backOff in any way it chooses. The policies provided by Spring Batch - out of the box all use `Object.wait()`. A common use case is to - backoff with an exponentially increasing wait period, to avoid two retries - getting into lock step and both failing (this is a lesson learned from - ethernet). For this purpose, Spring Batch provides the - `ExponentialBackoffPolicy`. +A `BackoffPolicy` is free to implement the backOff in any way it chooses. The policies +provided by Spring Batch out of the box all use `Object.wait()`. A common use case is to +backoff with an exponentially increasing wait period, to avoid two retries getting into +lock step and both failing (this is a lesson learned from ethernet). For this purpose, +Spring Batch provides the `ExponentialBackoffPolicy`. 
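The exponentially increasing wait can be sketched as a simple calculation. The `intervals` helper and its parameter names below are hypothetical, not the `ExponentialBackoffPolicy` API; the sketch only shows a wait that grows by a multiplier up to a cap.

```java
import java.util.Arrays;

// Illustrative sketch only (hypothetical names, not the ExponentialBackoffPolicy API):
// each wait is the previous wait times a multiplier, capped at a maximum.
public class BackoffSketch {

    public static long[] intervals(long initialMs, double multiplier, long maxMs, int attempts) {
        long[] waits = new long[attempts];
        long next = initialMs;
        for (int i = 0; i < attempts; i++) {
            waits[i] = next;                                   // a real policy would sleep here
            next = Math.min((long) (next * multiplier), maxMs);
        }
        return waits;
    }

    public static void main(String[] args) {
        // 100ms initial wait, doubling each time, capped at 1 second
        System.out.println(Arrays.toString(intervals(100, 2.0, 1000, 5)));
        // [100, 200, 400, 800, 1000]
    }
}
```

The cap keeps the worst-case wait bounded, while the multiplier spreads concurrent retries apart so they do not fail in lock step.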
[[retryListeners]]
-
-
=== Listeners

-Often, it is useful to be able to receive additional callbacks for
- cross cutting concerns across a number of different retries. For this
- purpose, Spring Batch provides the `RetryListener`
- interface. The `RetryTemplate` lets users
- register `RetryListeners`, and they are given
- callbacks with `RetryContext` and
- `Throwable` where available during the
- iteration.
+Often, it is useful to be able to receive additional callbacks for cross-cutting concerns
+across a number of different retries. For this purpose, Spring Batch provides the
+`RetryListener` interface. The `RetryTemplate` lets users register `RetryListeners`, and
+they are given callbacks with `RetryContext` and `Throwable` where available during the
+iteration.

The following code shows the interface definition for `RetryListener`:

-
[source, java]
----
public interface RetryListener {
@@ -376,39 +297,28 @@ public interface RetryListener {
}
----

-The `open` and
- `close` callbacks come before and after the entire
- retry in the simplest case, and `onError` applies to
- the individual `RetryCallback` calls. The
- `close` method might also receive a
- `Throwable`. If there has been an error, it is the
- last one thrown by the `RetryCallback`.
+The `open` and `close` callbacks come before and after the entire retry in the simplest
+case, and `onError` applies to the individual `RetryCallback` calls. The `close` method
+might also receive a `Throwable`. If there has been an error, it is the last one thrown by
+the `RetryCallback`.

-Note that, when there is more than one listener, they are in a list,
- so there is an order. In this case, `open` is
- called in the same order while `onError` and
- `close` are called in reverse order.
+Note that, when there is more than one listener, they are in a list, so there is an order.
+In this case, `open` is called in the same order while `onError` and `close` are called in
+reverse order.
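The ordering rule in the last paragraph can be sketched with a plain list (illustrative names only, not Spring's implementation): with listeners registered as `[A, B]`, `open` runs A then B, while `close` runs B then A.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the documented listener ordering: open() in registration
// order, close() in reverse registration order.
public class ListenerOrderingSketch {

    public static List<String> callSequence(List<String> listeners) {
        List<String> calls = new ArrayList<>();
        for (String l : listeners) {                      // open: forward order
            calls.add("open:" + l);
        }
        for (int i = listeners.size() - 1; i >= 0; i--) { // close: reverse order
            calls.add("close:" + listeners.get(i));
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(callSequence(List.of("A", "B")));
        // [open:A, open:B, close:B, close:A]
    }
}
```

This mirrors how nested interceptors typically unwind: the last listener to be opened is the first to be closed.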
[[declarativeRetry]]
-
-
=== Declarative Retry

-Sometimes, there is some business processing that you know you want
- to retry every time it happens. The classic example of this is the remote
- service call. Spring Batch provides an AOP interceptor that wraps a method
- call in a `RetryOperations` implementation for just this purpose.
- The `RetryOperationsInterceptor` executes the
- intercepted method and retries on failure according to the
- `RetryPolicy` in the provided
- `RetryTemplate`.
+Sometimes, there is some business processing that you know you want to retry every time it
+happens. The classic example of this is the remote service call. Spring Batch provides an
+AOP interceptor that wraps a method call in a `RetryOperations` implementation for just
+this purpose. The `RetryOperationsInterceptor` executes the intercepted method and retries
+on failure according to the `RetryPolicy` in the provided `RetryTemplate`.

[role="xmlContent"]
-The following example shows a declarative retry that uses the Spring AOP
- namespace to retry a service call to a method called
- `remoteCall` (for more detail on how to configure
- AOP interceptors, see the Spring User Guide):
-
+The following example shows a declarative retry that uses the Spring AOP namespace to
+retry a service call to a method called `remoteCall` (for more detail on how to configure
+AOP interceptors, see the Spring User Guide):

[source, xml, role="xmlContent"]
----
@@ -424,11 +334,9 @@ The following example shows a declarative retry
----

[role="javaContent"]
-The following example shows a declarative retry that uses java configuration
- to retry a service call to a method called
- `remoteCall` (for more detail on how to configure
- AOP interceptors, see the Spring User Guide):
-
+The following example shows a declarative retry that uses Java configuration to retry a
+service call to a method called `remoteCall` (for more detail on how to configure AOP
+interceptors, see the
Spring User Guide):

[source, java, role="javaContent"]
----
@@ -450,7 +358,5 @@ public MyService myService() {
}
----

-The preceding example uses a default
- `RetryTemplate` inside the interceptor. To change the
- policies or listeners, you can inject an instance of
- `RetryTemplate` into the interceptor.
+The preceding example uses a default `RetryTemplate` inside the interceptor. To change the
+policies or listeners, you can inject an instance of `RetryTemplate` into the interceptor.
diff --git a/spring-batch-docs/asciidoc/scalability.adoc b/spring-batch-docs/asciidoc/scalability.adoc
index ff29b946c4..76997fcb9c 100644
--- a/spring-batch-docs/asciidoc/scalability.adoc
+++ b/spring-batch-docs/asciidoc/scalability.adoc
@@ -41,8 +41,7 @@ The simplest way to start parallel processing is to add a `TaskExecutor` to your
configuration.

[role="xmlContent"]
-For example, you might add an attribute of the `tasklet`, as shown in the
-following example:
+For example, you might add an attribute of the `tasklet`, as follows:

[source, xml, role="xmlContent"]
----
@@ -52,7 +51,7 @@ following example:
----

[role="javaContent"]
-When using java configuration, a `TaskExecutor` can be added to the step
+When using Java configuration, a `TaskExecutor` can be added to the step, as shown in the
following example:

.Java Configuration
@@ -101,7 +100,8 @@ For example you might increase the throttle-limit, as shown in the following exa
----

[role="javaContent"]
-When using java configuration, the builders provide access to the throttle limit:
+When using Java configuration, the builders provide access to the throttle limit, as shown
+in the following example:

.Java Configuration
[source, java, role="javaContent"]
@@ -175,7 +175,7 @@ as shown in the following example:
----

[role="javaContent"]
-When using java configuration, executing steps `(step1,step2)` in parallel with `step3`
+When using Java configuration, executing steps `(step1,step2)` in parallel with `step3`
is straightforward,
as shown in the following example: .Java Configuration @@ -295,7 +295,8 @@ many objects and or processes playing this role, and the `PartitionStep` is show the execution. [role="xmlContent"] -The following example shows the `PartitionStep` configuration: +The following example shows the `PartitionStep` configuration when using XML +configuration: [source, xml, role="xmlContent"] ---- @@ -307,7 +308,8 @@ The following example shows the `PartitionStep` configuration: ---- [role="javaContent"] -The following example shows the `PartitionStep` configuration using java configuration: +The following example shows the `PartitionStep` configuration when using Java +configuration: .Java Configuration [source, java, role="javaContent"] @@ -469,8 +471,10 @@ the `Partitioner` output might resemble the content of the following table: |filecopy:partition2|fileName=/home/data/three |=============== -Then the file name can be bound to a step using late binding to the execution context, as -shown in the following example: +Then the file name can be bound to a step using late binding to the execution context. 
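The partition table above can be modeled in plain Java (no Spring types; the names here are illustrative only) as one context map per partition, each carrying the `fileName` entry that the step later resolves through late binding:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Plain-Java sketch of a Partitioner's output: one ExecutionContext-like
// map per partition name, each holding the fileName key that late binding
// will hand to the bound step.
public class PartitionSketch {

    public static Map<String, Map<String, String>> partition(String... files) {
        Map<String, Map<String, String>> contexts = new LinkedHashMap<>();
        for (int i = 0; i < files.length; i++) {
            contexts.put("filecopy:partition" + i, Map.of("fileName", files[i]));
        }
        return contexts;
    }

    public static void main(String[] args) {
        System.out.println(partition("/home/data/one", "/home/data/two", "/home/data/three"));
    }
}
```

In real Spring Batch, a `Partitioner` returns `Map<String, ExecutionContext>`, and each worker step reads its own `fileName` from its step execution context.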
+
+[role="xmlContent"]
+The following example shows how to define late binding in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -481,6 +485,9 @@ shown in the following example:

----

+[role="javaContent"]
+The following example shows how to define late binding in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
diff --git a/spring-batch-docs/asciidoc/spring-batch-integration.adoc b/spring-batch-docs/asciidoc/spring-batch-integration.adoc
index ed64a77407..bacfd6aef9 100644
--- a/spring-batch-docs/asciidoc/spring-batch-integration.adoc
+++ b/spring-batch-docs/asciidoc/spring-batch-integration.adoc
@@ -11,7 +11,6 @@ include::toggle.adoc[]
endif::onlyonetoggle[]

[[spring-batch-integration-introduction]]
-
=== Spring Batch Integration Introduction

Many users of Spring Batch may encounter requirements that are
@@ -39,29 +38,19 @@ also be embedded in a job (for example reading or writing items for
processing via channels). Remote partitioning and remote chunking
provide methods to distribute workloads over a number of workers.

-
This section covers the following key concepts:

[role="xmlContent"]
* <>

-
[[continue-section-list]]
* <>
-
-
-
* <>
-
-
-
* <>
-
-
-
* <>
+

[[namespace-support]]
[role="xmlContent"]
==== Namespace Support
@@ -261,13 +250,13 @@ Batch reference documentation on

===== Spring Batch Integration Configuration

-The following configuration creates a file
-`inbound-channel-adapter` to listen for CSV
-files in the provided directory, hand them off to our
-transformer (`FileMessageToJobRequest`),
-launch the job via the __Job Launching Gateway__, and then log the output of the
-`JobExecution` with the
-`logging-channel-adapter`.
+Consider a case where someone needs to create a file `inbound-channel-adapter` to listen
+for CSV files in the provided directory, hand them off to a transformer
+(`FileMessageToJobRequest`), launch the job through the _Job Launching Gateway_, and then
+log the output of the `JobExecution` with the `logging-channel-adapter`.
+
+[role="xmlContent"]
+The following example shows how that common case can be configured in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -297,6 +286,9 @@ launch the job via the __Job Launching Gateway__, and then log the output of the

----

+[role="javaContent"]
+The following example shows how that common case can be configured in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -332,13 +324,14 @@ public IntegrationFlow integrationFlow(JobLaunchingGateway jobLaunchingGateway)


[[example-itemreader-configuration]]
-
===== Example ItemReader Configuration

-Now that we are polling for files and launching jobs, we need to
-configure our Spring Batch
-`ItemReader` (for example) to use the files found at the location defined
-by the job parameter called "input.file.name", as shown in the following bean configuration:
+Now that we are polling for files and launching jobs, we need to configure our Spring
+Batch `ItemReader` (for example) to use the files found at the location defined by the job
+parameter called "input.file.name".
+
+[role="xmlContent"]
+The following XML example shows the necessary bean configuration:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -350,6 +343,9 @@ by the job parameter called "input.file.name", as shown in the following bean co

----

+[role="javaContent"]
+The following Java example shows the necessary bean configuration:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -414,7 +410,10 @@ to a `SubscribableChannel`.
When this `Gateway` is receiving messages from a `PollableChannel`, you must either provide a global default `Poller` or provide a `Poller` sub-element to the -`Job Launching Gateway`, as shown in the following example: +`Job Launching Gateway`. + +[role="xmlContent"] +The following example shows how to provide a poller in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -425,6 +424,9 @@ a global default `Poller` or provide a `Poller` sub-element to the ---- +[role="javaContent"] +The following example shows how to provide a poller in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -438,60 +440,49 @@ public JobLaunchingGateway sampleJobLaunchingGateway() { ---- [[providing-feedback-with-informational-messages]] - ==== Providing Feedback with Informational Messages - As Spring Batch jobs can run for long times, providing progress information is often critical. For example, stake-holders may want to be notified if some or all parts of a batch job have failed. Spring Batch provides support for this information being gathered through: - - * Active polling - * Event-driven listeners +When starting a Spring Batch job asynchronously (for example, by using the `Job Launching +Gateway`), a `JobExecution` instance is returned. Thus, `JobExecution.getJobId()` can be +used to continuously poll for status updates by retrieving updated instances of the +`JobExecution` from the `JobRepository` by using the `JobExplorer`. However, this is +considered sub-optimal, and an event-driven approach should be preferred. -When starting a Spring Batch job asynchronously (for example, by using the -`Job Launching Gateway`), a -`JobExecution` instance is returned. Thus, -`JobExecution.getJobId()` can be used to -continuously poll for status updates by retrieving updated -instances of the `JobExecution` from the -`JobRepository` by using the -`JobExplorer`. However, this is considered -sub-optimal, and an event-driven approach should be preferred. 
- +Therefore, Spring Batch provides listeners, including the three most commonly used +listeners: -Therefore, Spring Batch provides listeners, including the three most commonly used listeners: - -* StepListener -* ChunkListener -* JobExecutionListener +* `StepListener` +* `ChunkListener` +* `JobExecutionListener` In the example shown in the following image, a Spring Batch job has been configured with a -`StepExecutionListener`. Thus, Spring -Integration receives and processes any step before or after -events. For example, the received -`StepExecution` can be inspected by using a -`Router`. Based on the results of that -inspection, various things can occur (such as routing a message -to a Mail Outbound Channel Adapter), so that an Email notification -can be sent out based on some condition. +`StepExecutionListener`. Thus, Spring Integration receives and processes any step before +or after events. For example, the received `StepExecution` can be inspected by using a +`Router`. Based on the results of that inspection, various things can occur (such as +routing a message to a Mail Outbound Channel Adapter), so that an Email notification can +be sent out based on some condition. .Handling Informational Messages image::{batch-asciidoc}images/handling-informational-messages.png[Handling Informational Messages, scaledwidth="60%"] The following two-part example shows how a listener is configured to send a -message to a `Gateway` for a -`StepExecution` events and log its output to a +message to a `Gateway` for a `StepExecution` events and log its output to a `logging-channel-adapter`. -First, create the notification integration beans: +First, create the notification integration beans. 
+
+[role="xmlContent"]
+The following example shows how to create the notification integration beans in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -505,6 +496,9 @@ First, create the notification integration beans:

----

+[role="javaContent"]
+The following example shows how to create the notification integration beans in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -522,11 +516,14 @@ public interface NotificationExecutionListener extends StepExecutionListener {}
----

[role="javaContent"]
-NOTE: You will need to add the `@IntegrationComponentScan` annotation to your configuration.
+NOTE: You need to add the `@IntegrationComponentScan` annotation to your configuration.

[[message-gateway-entry-list]]

-Second, modify your job to add a step-level listener:
+Second, modify your job to add a step-level listener.
+
+[role="xmlContent"]
+The following example shows how to add a step-level listener in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -544,6 +541,9 @@ Second, modify your job to add a step-level listener:
----

+[role="javaContent"]
+The following example shows how to add a step-level listener in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -561,25 +561,17 @@ public Job importPaymentsJob() {

==== Asynchronous Processors

-Asynchronous Processors help you to scale the processing of
-items. In the asynchronous processor use case, an
-`AsyncItemProcessor` serves as a dispatcher,
-executing the logic of the `ItemProcessor` for an
-item on a new thread. Once the item completes, the `Future` is passed to
-the `AsynchItemWriter` to be written.
-
-
+Asynchronous Processors help you to scale the processing of items. In the asynchronous
+processor use case, an `AsyncItemProcessor` serves as a dispatcher, executing the logic of
+the `ItemProcessor` for an item on a new thread. Once the item completes, the `Future` is
+passed to the `AsyncItemWriter` to be written.
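The dispatch-and-gather flow just described can be sketched with plain `CompletableFuture`s. This is illustrative only; the real `AsyncItemProcessor` and `AsyncItemWriter` wrap your processor and writer beans, and the "processing" below (upper-casing a string) is a stand-in:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fork-join sketch: each item is dispatched to a pool thread (the futures
// represent the in-flight work), and the "writer" gathers every result
// before the chunk is written.
public class AsyncChunkSketch {

    public static List<String> processChunk(List<String> items) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<CompletableFuture<String>> futures = items.stream()
                    .map(item -> CompletableFuture.supplyAsync(item::toUpperCase, pool))
                    .toList();                      // dispatch: one future per item
            return futures.stream()
                    .map(CompletableFuture::join)   // gather: wait for every item
                    .toList();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(processChunk(List.of("a", "b", "c")));
    }
}
```

Note that results are gathered in the original item order, regardless of which thread finishes first, which matches the chunk being written back only once all results are available.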

-Therefore, you can increase performance by using asynchronous item
-processing, basically allowing you to implement
-__fork-join__ scenarios. The
-`AsyncItemWriter` gathers the results and
+Therefore, you can increase performance by using asynchronous item processing, basically
+letting you implement _fork-join_ scenarios. The `AsyncItemWriter` gathers the results and
writes back the chunk as soon as all the results become available.

-
-The following example shows how to configuration the `AsyncItemProcessor`:
-
+[role="xmlContent"]
+The following example shows how to configure the `AsyncItemProcessor` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -595,6 +587,9 @@ The following example shows how to configuration the `AsyncItemProcessor`:

----

+[role="javaContent"]
+The following example shows how to configure the `AsyncItemProcessor` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -607,12 +602,11 @@ public AsyncItemProcessor processor(ItemProcessor itemProcessor, TaskExecutor ta
}
----

-The `delegate` property refers
-to your `ItemProcessor` bean, and
-the `taskExecutor` property
-refers to the `TaskExecutor` of your choice.
+The `delegate` property refers to your `ItemProcessor` bean, and the `taskExecutor`
+property refers to the `TaskExecutor` of your choice.

-The following example shows how to configure the `AsyncItemWriter`:
+[role="xmlContent"]
+The following example shows how to configure the `AsyncItemWriter` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -625,6 +619,9 @@ The following example shows how to configure the `AsyncItemWriter`:

----

+[role="javaContent"]
+The following example shows how to configure the `AsyncItemWriter` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -687,8 +684,9 @@ Spring Integration's rich collection of Channel Adapters (such as JMS
and AMQP), you can distribute chunks of a Batch job to external
systems for processing.
-A simple job with a step to be remotely chunked might have a -configuration similar to the following: +[role="xmlContent"] +A job with a step to be remotely chunked might have a configuration similar to the +following in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -703,6 +701,10 @@ configuration similar to the following: ---- +[role="javaContent"] +A job with a step to be remotely chunked might have a configuration similar to the +following in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -717,15 +719,15 @@ public Job chunkJob() { } ---- -The `ItemReader` reference points to the bean you want -to use for reading data on the manager. The `ItemWriter` reference -points to a special `ItemWriter` -(called `ChunkMessageChannelItemWriter`), -as described above. The processor (if any) is left off the -manager configuration, as it is configured on the worker. The -following configuration provides a basic manager setup. You -should check any additional component properties, such as -throttle limits and so on, when implementing your use case. +The `ItemReader` reference points to the bean you want to use for reading data on the +master. The `ItemWriter` reference points to a special `ItemWriter` (called +`ChunkMessageChannelItemWriter`), as described above. The processor (if any) is left off +the master configuration, as it is configured on the slave. You should check any +additional component properties, such as throttle limits and so on, when implementing +your use case. + +[role="xmlContent"] +The following XML configuration provides a basic master setup: .XML Configuration [source, xml, role="xmlContent"] @@ -758,6 +760,9 @@ throttle limits and so on, when implementing your use case. 
channel="replies"/> ---- +[role="javaContent"] +The following Java configuration provides a basic master setup: + .Java Configuration [source, java, role="javaContent"] ---- @@ -824,8 +829,10 @@ referenced by our job step, uses the `ChunkMessageChannelItemWriter` for writing chunks over the configured middleware. -Now we can move on to the worker configuration, as shown in the following example: +Now we can move on to the slave configuration. +[role="xmlContent"] +The following example shows the slave configuration in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -867,6 +874,9 @@ Now we can move on to the worker configuration, as shown in the following exampl ---- +[role="javaContent"] +The following example shows the slave configuration in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -1064,14 +1074,13 @@ remoting fabric or grid environment -Similar to remote chunking, JMS can be used as the "remoting -fabric". In that case, use a `MessageChannelPartitionHandler` instance as the `PartitionHandler` implementation, -as described above. -The following example -assumes an existing partitioned job and focuses on -the `MessageChannelPartitionHandler` and JMS -configuration: +Similar to remote chunking, JMS can be used as the "`remoting fabric`". In that case, use +a `MessageChannelPartitionHandler` instance as the `PartitionHandler` implementation, +as described earlier. 
+[role="xmlContent"] +The following example assumes an existing partitioned job and focuses on the +`MessageChannelPartitionHandler` and JMS configuration in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -1125,6 +1134,10 @@ configuration: class="org.springframework.batch.integration.partition.BeanFactoryStepLocator" /> ---- +[role="javaContent"] +The following example assumes an existing partitioned job and focuses on the +`MessageChannelPartitionHandler` and JMS configuration in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -1233,7 +1246,12 @@ public IntegrationFlow outboundJmsStaging() { } ---- -You must also ensure that the partition `handler` attribute maps to the `partitionHandler` bean, as shown in the following example: +You must also ensure that the partition `handler` attribute maps to the `partitionHandler` +bean. + +[role="xmlContent"] +The following example maps the partition `handler` attribute to the `partitionHandler` in +XML: .XML Configuration [source, xml, role="xmlContent"] @@ -1246,6 +1264,10 @@ You must also ensure that the partition `handler` attribute maps to the `partiti ---- +[role="javaContent"] +The following example maps the partition `handler` attribute to the `partitionHandler` in +Java: + .Java Configuration [source, java, role="javaContent"] ---- diff --git a/spring-batch-docs/asciidoc/step.adoc b/spring-batch-docs/asciidoc/step.adoc index b838a674b0..c1859d7477 100644 --- a/spring-batch-docs/asciidoc/step.adoc +++ b/spring-batch-docs/asciidoc/step.adoc @@ -56,8 +56,8 @@ Despite the relatively short list of required dependencies for a `Step`, it is a extremely complex class that can potentially contain many collaborators. 
[role="xmlContent"]
-In order to ease configuration, the Spring Batch namespace can be used, as shown in the
-following example:
+In order to ease configuration, the Spring Batch XML namespace can be used, as shown in
+the following example:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -72,7 +72,7 @@ following example:
----

[role="javaContent"]
-When using java configuration, the Spring Batch builders can be used, as shown in the
+When using Java configuration, the Spring Batch builders can be used, as shown in the
following example:

.Java Configuration
@@ -121,31 +121,32 @@ transactions during processing.
transactions during processing.

[role="xmlContent"]
-* `job-repository`: The `JobRepository` that periodically stores the `StepExecution` and
-`ExecutionContext` during processing (just before committing). For an in-line
-(one defined within a ), it is an attribute on the element. For a standalone
-step, it is defined as an attribute of the .
+* `job-repository`: The XML-specific name of the `JobRepository` that periodically stores
+the `StepExecution` and `ExecutionContext` during processing (just before committing). For
+an in-line `` (one defined within a ``), it is an attribute on the ``
+element. For a standalone ``, it is defined as an attribute of the .

[role="javaContent"]
-* `repository`: The `JobRepository` that periodically stores the `StepExecution` and
-`ExecutionContext` during processing (just before committing).
+* `repository`: The Java-specific name of the `JobRepository` that periodically stores
+the `StepExecution` and `ExecutionContext` during processing (just before committing).

[role="xmlContent"]
-* `commit-interval`: The number of items to be processed before the transaction is
-committed.
+* `commit-interval`: The XML-specific name of the number of items to be processed
+before the transaction is committed.
[role="javaContent"]
-* `chunk`: Indicates that this is an item based step and the number of items to be
-processed before the transaction is committed.
+* `chunk`: The Java-specific name of the dependency that indicates that this is an
+item-based step and the number of items to be processed before the transaction is
+committed.

[role="xmlContent"]
It should be noted that `job-repository` defaults to `jobRepository` and
-`transaction-manager` defaults to `transactionManger`. Also, the `ItemProcessor` is
+`transaction-manager` defaults to `transactionManager`. Also, the `ItemProcessor` is
optional, since the item could be directly passed from the reader to the writer.

[role="javaContent"]
It should be noted that `repository` defaults to `jobRepository` and `transactionManager`
-defaults to `transactionManger` (all provided through the infrastructure from
+defaults to `transactionManager` (all provided through the infrastructure from
`@EnableBatchProcessing`). Also, the `ItemProcessor` is optional, since the item could be
directly passed from the reader to the writer.
endif::backend-html5[]
@@ -289,8 +290,11 @@ since beginning and committing a transaction is expensive. Ideally, it is prefer
process as many items as possible in each transaction, which is completely dependent upon
the type of data being processed and the resources with which the step is interacting. For
this reason, the number of items that are processed within a commit can be
-configured. The following example shows a `step` whose `tasklet` has a `commit-interval`
-value of 10.
+configured.
+
+[role="xmlContent"]
+The following example shows a `step` whose `tasklet` has a `commit-interval`
+value of 10 as it would be defined in XML:

.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -304,6 +308,10 @@ value of 10.
---- +[role="javaContent"] +The following example shows a `step` whose `tasklet` has a `commit-interval` +value of 10 as it would be defined in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -345,8 +353,10 @@ be started. For example, a particular `Step` might need to be configured so that runs once because it invalidates some resource that must be fixed manually before it can be run again. This is configurable on the step level, since different steps may have different requirements. A `Step` that may only be executed once can exist as part of the -same `Job` as a `Step` that can be run infinitely. The following code fragment shows an -example of a start limit configuration: +same `Job` as a `Step` that can be run infinitely. + +[role="xmlContent"] +The following code fragment shows an example of a start limit configuration in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -358,6 +368,9 @@ example of a start limit configuration: ---- +[role="javaContent"] +The following code fragment shows an example of a start limit configuration in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -372,8 +385,8 @@ public Step step1() { } ---- -The step above can be run only once. Attempting to run it again causes a -`StartLimitExceededException` to be thrown. Note that the default value for the +The step shown in the preceding example can be run only once. Attempting to run it again +causes a `StartLimitExceededException` to be thrown. Note that the default value for the start-limit is `Integer.MAX_VALUE`. [[allowStartIfComplete]] @@ -384,7 +397,10 @@ run, regardless of whether or not they were successful the first time. An exampl be a validation step or a `Step` that cleans up resources before processing. During normal processing of a restarted job, any step with a status of 'COMPLETED', meaning it has already been completed successfully, is skipped. 
Setting `allow-start-if-complete` to -"true" overrides this so that the step always runs, as shown in the following example: +"true" overrides this so that the step always runs. + +[role="xmlContent"] +The following code fragment shows how to define a restartable job in XML: .XML Configuration [source, xml, role="xmlContent"] @@ -396,6 +412,9 @@ has already been completed successfully, is skipped. Setting `allow-start-if-com ---- +[role="javaContent"] +The following code fragment shows how to define a restartable job in Java: + .Java Configuration [source, java, role="javaContent"] ---- @@ -413,7 +432,9 @@ public Step step1() { [[stepRestartExample]] ===== `Step` Restart Configuration Example -The following example shows how to configure a job to have steps that can be restarted: +[role="xmlContent"] +The following XML example shows how to configure a job to have steps that can be +restarted: .XML Configuration [source, xml, role="xmlContent"] @@ -440,6 +461,10 @@ The following example shows how to configure a job to have steps that can be res ---- +[role="javaContent"] +The following Java example shows how to configure a job to have steps that can be +restarted: + .Java Configuration [source, java, role="javaContent"] ---- @@ -552,7 +577,8 @@ allow for skips. If a vendor is not loaded because it was formatted incorrectly missing necessary information, then there probably are not issues. Usually, these bad records are logged as well, which is covered later when discussing listeners. 
-The following example shows an example of using a skip limit: +[role="xmlContent"] +The following XML example shows an example of using a skip limit: .XML Configuration [source, xml, role="xmlContent"] @@ -569,6 +595,9 @@ The following example shows an example of using a skip limit: ---- +[role="javaContent"] +The following Java example shows an example of using a skip limit: + .Java Configuration [source, java, role="javaContent"] ---- @@ -598,8 +627,10 @@ skip triggers the exception, not the tenth. One problem with the preceding example is that any other exception besides a `FlatFileParseException` causes the `Job` to fail. In certain scenarios, this may be the correct behavior. However, in other scenarios, it may be easier to identify which -exceptions should cause failure and skip everything else, as shown in the following -example: +exceptions should cause failure and skip everything else. + +[role="xmlContent"] +The following XML example shows an example excluding a particular exception: .XML Configuration [source, xml, role="xmlContent"] @@ -617,6 +648,9 @@ example: ---- +[role="javaContent"] +The following Java example shows an example excluding a particular exception: + .Java Configuration [source, java, role="javaContent"] ---- @@ -648,11 +682,11 @@ ifdef::backend-html5[] The order of the `` and `` elements does not matter. [role="javaContent"] -The order of the `skip` and `noSkip` calls does not matter. +The order of the `skip` and `noSkip` method calls does not matter. endif::backend-html5[] ifdef::backend-pdf[] -The order of specifying include vs exclude (by using either the XML tags or `skip` and +The order of specifying include versus exclude (by using either the XML tags or `skip` and `noSkip` method calls) does not matter. endif::backend-pdf[] @@ -664,8 +698,10 @@ not all exceptions are deterministic. If a `FlatFileParseException` is encounter reading, it is always thrown for that record. Resetting the `ItemReader` does not help. 
However, for other exceptions, such as a `DeadlockLoserDataAccessException`, which
indicates that the current process has attempted to update a record that another process
-holds a lock on, waiting and trying again might result in success. In this case, retry
-should be configured as follows:
+holds a lock on, waiting and trying again might result in success.
+
+[role="xmlContent"]
+In XML, retry should be configured as follows:

[source, xml, role="xmlContent"]
----
@@ -681,6 +717,9 @@ should be configured as follows:
----

+[role="javaContent"]
+In Java, retry should be configured as follows:
+
[source, java, role="javaContent"]
----
@Bean
@@ -709,7 +748,10 @@ described earlier, exceptions thrown from the `ItemReader` do not cause a rollba
However, there are many scenarios in which exceptions thrown from the `ItemWriter` should
not cause a rollback, because no action has taken place to invalidate the transaction.
For this reason, the `Step` can be configured with a list of exceptions that should not
-cause rollback, as shown in the following example:
+cause rollback.
+
+[role="xmlContent"]
+In XML, you can control rollback as follows:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -724,6 +766,9 @@ cause rollback, as shown in the following example:
----

+[role="javaContent"]
+In Java, you can control rollback as follows:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -748,7 +793,10 @@ from the reader. However, there are certain scenarios in which the reader is bui
top of a transactional resource, such as a JMS queue. In this case, since the queue is
tied to the transaction that is rolled back, the messages that have been pulled from the
queue are put back on. For this reason, the step can be configured to not buffer the
-items, as shown in the following example:
+items.
+
+[role="xmlContent"]
+The following example shows how to create a reader that does not buffer items in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -761,6 +809,9 @@ items, as shown in the following example:
----

+[role="javaContent"]
+The following example shows how to create a reader that does not buffer items in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -782,8 +833,11 @@ Transaction attributes can be used to control the `isolation`, `propagation`, an
`timeout` settings. More information on setting transaction attributes can be found in
the
https://docs.spring.io/spring/docs/current/spring-framework-reference/data-access.html#transaction[Spring
-core documentation]. The following example sets the `isolation`, `propagation`, and
-`timeout` transaction attributes:
+core documentation].
+
+[role="xmlContent"]
+The following example sets the `isolation`, `propagation`, and `timeout` transaction
+attributes in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -798,6 +852,10 @@ core documentation]. The following example sets the `isolation`, `propagation`,
----

+[role="javaContent"]
+The following example sets the `isolation`, `propagation`, and `timeout` transaction
+attributes in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -830,7 +888,10 @@ If the `ItemReader`, `ItemProcessor`, or `ItemWriter` itself implements the `Ite
interface, then these are registered automatically. Any other streams need to be
registered separately. This is often the case where indirect dependencies, such as
delegates, are injected into the reader and writer. A stream can be registered on the
-`Step` through the 'streams' element, as illustrated in the following example:
+`step` through the 'stream' element.
+
+[role="xmlContent"]
+The following example shows how to register a `stream` on a `step` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -846,6 +907,9 @@ delegates, are injected into the reader and writer. A stream can be registered o


+[role="javaContent"]
+The following example shows how to register a `stream` on a `step` in Java:
+

@@ -910,8 +974,10 @@ itself since it is empty) can be applied to a step through the `listeners` eleme
The `listeners` element is valid inside a step, tasklet, or chunk declaration. It is
recommended that you declare the listeners at the level at which its function applies,
or, if it is multi-featured (such as `StepExecutionListener` and `ItemReadListener`),
-then declare it at the most granular level where it applies. The following example shows
-a listener applied at the chunk level:
+then declare it at the most granular level where it applies.
+
+[role="xmlContent"]
+The following example shows a listener applied at the chunk level in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -926,6 +992,9 @@ a listener applied at the chunk level:
----

+[role="javaContent"]
+The following example shows a listener applied at the chunk level in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1165,10 +1234,9 @@ statement.

ifdef::backend-html5[]
[role="xmlContent"]
-To create a `TaskletStep`, the 'ref' attribute of the
- element should reference a bean that defines a
- `Tasklet` object. No element should be
- used within the . The following example shows a simple tasklet:
+To create a `TaskletStep` in XML, the 'ref' attribute of the `<tasklet/>` element should
+reference a bean that defines a `Tasklet` object. No `<chunk/>` element should be used
+within the `<tasklet/>`.
The following example shows a simple tasklet:

[source, xml, role="xmlContent"]
----
@@ -1178,9 +1246,9 @@ To create a `TaskletStep`, the 'ref' attribute of the
----

[role="javaContent"]
-To create a `TaskletStep`, the bean passed to the `tasklet` method of the builder should
-implement the `Tasklet` interface. No call to `chunk` should be called when building a
-`TaskletStep`. The following example shows a simple tasklet:
+To create a `TaskletStep` in Java, the bean passed to the `tasklet` method of the builder
+should implement the `Tasklet` interface. No call to `chunk` should be made when
+building a `TaskletStep`. The following example shows a simple tasklet:

[source, java, role="javaContent"]
----
@@ -1233,8 +1301,10 @@ As with other adapters for the `ItemReader` and `ItemWriter` interfaces, the `Ta
interface contains an implementation that allows for adapting itself to any pre-existing
class: `TaskletAdapter`. An example where this may be useful is an existing DAO that is
used to update a flag on a set of records. The `TaskletAdapter` can be used to call this
-class without having to write an adapter for the `Tasklet` interface, as shown in the
-following example:
+class without having to write an adapter for the `Tasklet` interface.
+
+[role="javaContent"]
+The following example shows how to define a `TaskletAdapter` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1247,6 +1317,9 @@ following example:
----

+[role="javaContent"]
+The following example shows how to define a `TaskletAdapter` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1304,9 +1377,12 @@ public class FileDeletingTasklet implements Tasklet, InitializingBean {
}
----

-The preceding `Tasklet` implementation deletes all files within a given directory. It
+The preceding `tasklet` implementation deletes all files within a given directory. It
should be noted that the `execute` method is called only once.
All that is left is to
-reference the `Tasklet` from the `Step`:
+reference the `tasklet` from the `step`.
+
+[role="xmlContent"]
+The following example shows how to reference the `tasklet` from the `step` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1328,6 +1404,9 @@ reference the `Tasklet` from the `Step`:
----

+[role="javaContent"]
+The following example shows how to reference the `tasklet` from the `step` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1373,8 +1452,10 @@ in the following image:

.Sequential Flow
image::{batch-asciidoc}images/sequential-flow.png[Sequential Flow, scaledwidth="60%"]

-This can be achieved by using the 'next' attribute of the step element, as shown in the
-following example:
+This can be achieved by using the 'next' attribute of a `step`.
+
+[role="xmlContent"]
+The following example shows how to use the `next` attribute in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1386,6 +1467,9 @@ following example:
----

+[role="javaContent"]
+The following example shows how to use the `next()` method in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1406,11 +1490,9 @@ then the entire `Job` fails and 'step B' does not execute.

[role="xmlContent"]
[NOTE]
====
-With the Spring Batch namespace, the first step listed in the
- configuration is __always__ the first step
- run by the `Job`. The order of the other
- step elements does not matter, but the first step must always appear
- first in the xml.
+With the Spring Batch XML namespace, the first step listed in the configuration is
+_always_ the first step run by the `Job`. The order of the other step elements does not
+matter, but the first step must always appear first in the XML.
====

[[conditionalFlow]]
=== Conditional Flow

In the example above, there are only two possibilities:

-. The `Step` is successful and the next `Step` should be executed.
-. 
The `Step` failed and, thus, the `Job` should fail.
+. The `step` is successful and the next `step` should be executed.
+. The `step` failed and, thus, the `job` should fail.

In many cases, this may be sufficient. However, what about a scenario in which the
-failure of a `Step` should trigger a different `Step`, rather than causing failure? The
+failure of a `step` should trigger a different `step`, rather than causing failure? The
following image shows such a flow:

.Conditional Flow
image::{batch-asciidoc}images/conditional-flow.png[Conditional Flow, scaledwidth="60%"]

+
[[nextElement]]
-In order to handle more complex scenarios, the Spring Batch namespace allows transition
+[role="xmlContent"]
+In order to handle more complex scenarios, the Spring Batch XML namespace allows transition
elements to be defined within the step element. One such transition is the `next`
element. Like the `next` attribute, the `next` element tells the `Job` which `Step` to
execute next. However, unlike the attribute, any number of `next` elements are allowed on
@@ -1438,6 +1522,7 @@ transition elements are used, then all of the behavior for the `Step` transition
defined explicitly. Note also that a single step cannot have both a `next` attribute and
a `transition` element.

+[role="xmlContent"]
The `next` element specifies a pattern to match and the step to execute next, as shown in
the following example:

[source, xml, role="xmlContent"]
----
@@ -1454,6 +1539,12 @@ the following example:
----

+[role="javaContent"]
+The Java API offers a fluent set of methods that let you specify the flow and what to do
+when a step fails. The following example shows how to specify one step (`stepA`) and then
+proceed to either of two different steps (`stepB` and `stepC`), depending on whether
+`stepA` succeeds:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1474,7 +1565,7 @@ pattern-matching scheme to match the `ExitStatus` that results from the executio
`Step`. 
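The pattern-matching scheme mentioned above can be illustrated in plain Java. The sketch below is a hypothetical stand-in written for this document, not Spring Batch's own implementation: it applies the two-wildcard rules the surrounding text describes, where `*` matches zero or more characters and `?` matches exactly one character.

```java
// Illustrative stand-in for the exit-code pattern matching described in the
// text. '*' matches zero or more characters; '?' matches exactly one.
public class ExitStatusPattern {

    public static boolean matches(String pattern, String exitCode) {
        return match(pattern, 0, exitCode, 0);
    }

    private static boolean match(String p, int pi, String s, int si) {
        if (pi == p.length()) {
            // Pattern exhausted: match only if the input is also exhausted.
            return si == s.length();
        }
        char c = p.charAt(pi);
        if (c == '*') {
            // '*' may absorb zero or more characters of the input.
            for (int k = si; k <= s.length(); k++) {
                if (match(p, pi + 1, s, k)) {
                    return true;
                }
            }
            return false;
        }
        if (si == s.length()) {
            return false;
        }
        // '?' matches any single character; anything else must match exactly.
        return (c == '?' || c == s.charAt(si)) && match(p, pi + 1, s, si + 1);
    }

    public static void main(String[] args) {
        System.out.println(matches("c*t", "cat"));   // true
        System.out.println(matches("c*t", "count")); // true
        System.out.println(matches("c?t", "cat"));   // true
        System.out.println(matches("c?t", "count")); // false
    }
}
```

With these rules, a transition such as `on("FAILED*")` would also catch composite exit codes like `FAILED WITH SKIPS`.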
[role="javaContent"]
-When using java configuration the `on` method uses a simple pattern-matching scheme to
+When using Java configuration, the `on()` method uses a simple pattern-matching scheme to
match the `ExitStatus` that results from the execution of the `Step`. Only two special
characters are allowed in the pattern:

@@ -1532,7 +1623,7 @@ More specifically, when using XML configuration, the 'next' element shown in the
preceding XML configuration example references the exit code of `ExitStatus`.

[role="xmlContent"]
-When using Java configuration, the 'on' method shown in the preceding
+When using Java configuration, the 'on()' method shown in the preceding
Java configuration example references the exit code of `ExitStatus`.

In English, it says: "go to stepB if the exit code is `FAILED` ". By default, the exit
@@ -1540,6 +1631,9 @@ code is always the same as the `BatchStatus` for the `Step`, which is why the en
works. However, what if the exit code needs to be different? A good example comes from
the skip sample job within the samples project:

+[role="xmlContent"]
+The following example shows how to work with a different exit code in XML:
+
.XML Configuration
[source, xml, role="xmlContent"]
----
@@ -1550,6 +1644,9 @@ the skip sample job within the samples project:
----

+[role="javaContent"]
+The following example shows how to work with a different exit code in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1571,7 +1668,7 @@ public Job job() {

. The `Step` completed successfully but with an exit code of 'COMPLETED WITH SKIPS'. In
this case, a different step should be run to handle the errors.

-The above configuration works. However, something needs to change the exit code based on
+The preceding configuration works. 
However, something needs to change the exit code based on the condition of the execution having skipped records, as shown in the following example: [source, java] @@ -1604,14 +1701,19 @@ While these statuses are determined for the `Step` by the code that is executed, statuses for the `Job` are determined based on the configuration. So far, all of the job configurations discussed have had at least one final `Step` with -no transitions. For example, after the following step executes, the `Job` ends, as shown -in the following example: +no transitions. + +[role="xmlContent"] +In the following XML example, after the `step` executes, the `Job` ends: [source, xml, role="xmlContent"] ---- ---- +[role="javaContent"] +In the following Java example, after the `step` executes, the `Job` ends: + [source, java, role="javaContent"] ---- @Bean @@ -1659,10 +1761,13 @@ also allows for an optional 'exitStatus' parameter that can be used to customize `ExitStatus` of the `Job`. If no 'exitStatus' value is provided, then the `ExitStatus` is `COMPLETED` by default, to match the `BatchStatus`. -In the following scenario, if `step2` fails, then the `Job` stops with a `BatchStatus` of -`COMPLETED` and an `ExitStatus` of `COMPLETED` and `step3` does not run. Otherwise, -execution moves to `step3`. Note that if `step2` fails, the `Job` is not restartable -(because the status is `COMPLETED`). +Consider the following scenario: if `step2` fails, then the `Job` stops with a +`BatchStatus` of `COMPLETED` and an `ExitStatus` of `COMPLETED` and `step3` does not run. +Otherwise, execution moves to `step3`. Note that if `step2` fails, the `Job` is not +restartable (because the status is `COMPLETED`). + +[role="xmlContent"] +The following example shows the scenario in XML: [source, xml, role="xmlContent"] ---- @@ -1676,6 +1781,9 @@ execution moves to `step3`. 
Note that if `step2` fails, the `Job` is not restart
----

+[role="javaContent"]
+The following example shows the scenario in Java:
+
[source, java, role="javaContent"]
----
@Bean
@@ -1703,10 +1811,13 @@ attribute that can be used to customize the `ExitStatus` of the `Job`. If no 'ex
attribute is given, then the `ExitStatus` is `FAILED` by default, to match the
`BatchStatus`.

-In the following scenario, if `step2` fails, then the `Job` stops with a `BatchStatus` of
-`FAILED` and an `ExitStatus` of `EARLY TERMINATION` and `step3` does not execute.
-Otherwise, execution moves to `step3`. Additionally, if `step2` fails and the `Job` is
-restarted, then execution begins again on `step2`.
+Consider the following scenario: if `step2` fails, then the `Job` stops with a
+`BatchStatus` of `FAILED` and an `ExitStatus` of `EARLY TERMINATION` and `step3` does not
+execute. Otherwise, execution moves to `step3`. Additionally, if `step2` fails and the
+`Job` is restarted, then execution begins again on `step2`.
+
+[role="xmlContent"]
+The following example shows the scenario in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1721,6 +1832,9 @@ restarted, then execution begins again on `step2`.
----

+[role="javaContent"]
+The following example shows the scenario in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -1743,15 +1857,18 @@ Configuring a job to stop at a particular step instructs a `Job` to stop with a
so that the operator can take some action before restarting the `Job`.

[role="xmlContent"]
-When using XML configuration 'stop' element requires a 'restart' attribute that specifies
-the step where execution should pick up when the "Job is restarted".
+When using XML configuration, a 'stop' element requires a 'restart' attribute that specifies
+the step where execution should pick up when the Job is restarted. 
[role="javaContent"]
-When using java configuration, the `stopAndRestart` method requires a 'restart' attribute
-that specifies the step where execution should pick up when the "Job is restarted".
+When using Java configuration, the `stopAndRestart` method requires a 'restart' attribute
+that specifies the step where execution should pick up when the Job is restarted.

-In the following scenario, if `step1` finishes with `COMPLETE`, then the job stops.
-Once it is restarted, execution begins on `step2`.
+Consider the following scenario: if `step1` finishes with `COMPLETE`, then the job
+stops. Once it is restarted, execution begins on `step2`.
+
+[role="xmlContent"]
+The following listing shows the scenario in XML:

[source, xml, role="xmlContent"]
----
@@ -1762,6 +1879,9 @@ Once it is restarted, execution begins on `step2`.
----

+[role="javaContent"]
+The following example shows the scenario in Java:
+
[source, java, role="javaContent"]
----
@Bean
@@ -1904,7 +2024,11 @@ public Job job(Flow flow1, Flow flow2) {

Part of the flow in a job can be externalized as a separate bean definition and then
re-used. There are two ways to do so. The first is to simply declare the flow as a
-reference to one defined elsewhere, as shown in the following example:
+reference to one defined elsewhere.
+
+[role="xmlContent"]
+The following example shows how to declare a flow as a reference to a flow defined
+elsewhere in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1920,7 +2044,11 @@ reference to one defined elsewhere, as shown in the following example:
----

-.Java Configuration
+[role="javaContent"]
+The following example shows how to declare a flow as a reference to a flow defined
+elsewhere in Java:
+
+.Java Configuration
[source, java, role="javaContent"]
----
@Bean
@@ -1952,7 +2080,7 @@ The other form of an externalized flow is to use a `JobStep`. A `JobStep` is sim
the flow specified. 
[role="xmlContent"]
-The following XML snippet shows an example of a `JobStep`:
+The following example shows a `JobStep` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -1972,7 +2100,7 @@ The following XML snippet shows an example of a `JobStep`:
----

[role="javaContent"]
-The following Java snippet shows an example of a `JobStep`:
+The following example shows a `JobStep` in Java:

.Java Configuration
[source, java, role="javaContent"]
@@ -2023,7 +2151,10 @@ smaller modules and control the flow of jobs.

Both the XML and flat file examples shown earlier use the Spring `Resource` abstraction
to obtain a file. This works because `Resource` has a `getFile` method, which returns a
`java.io.File`. Both XML and flat file resources can be configured using standard Spring
-constructs, as shown in the following example:
+constructs.
+
+[role="xmlContent"]
+The following example shows late binding in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2035,6 +2166,9 @@ constructs, as shown in the following example:
----

+[role="javaContent"]
+The following example shows late binding in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2055,7 +2189,7 @@ determined at runtime as a parameter to the job. This can be solved using '-D' p
to read a system property.

[role="xmlContent"]
-The following XML snippet shows how to read a file name from a property:
+The following example shows how to read a file name from a property in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2067,7 +2201,7 @@ The following XML snippet shows how to read a file name from a property:
----

[role="javaContent"]
-The following Java snippet shows how to read a file name from a property:
+The following example shows how to read a file name from a property in Java:

.Java Configuration
[source, java, role="javaContent"]
@@ -2091,7 +2225,10 @@ already filters and does placeholder replacement on system properties. 
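As a hypothetical illustration of the '-D' approach described above, the system property can be supplied on the command line when the job is launched. The jar name and file path below are made up for this sketch; only the property name (`input.file.name`) matches the placeholder used in the configuration examples.

```shell
# Supply the input file name as a JVM system property so that a
# #{systemProperties['input.file.name']} placeholder can resolve it.
# batch-job.jar and the file path are hypothetical.
java -Dinput.file.name=/data/input/customers.csv -jar batch-job.jar
```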
Often, in a batch setting, it is preferable to parametrize the file name in the
`JobParameters` of the job, instead of through system properties, and access them that
way. To accomplish this, Spring Batch allows for the late binding of various `Job` and
-`Step` attributes, as shown in the following snippet:
+`Step` attributes.
+
+[role="xmlContent"]
+The following example shows how to parameterize a file name in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2102,6 +2239,9 @@ way. To accomplish this, Spring Batch allows for the late binding of various `Jo
----

+[role="javaContent"]
+The following example shows how to parameterize a file name in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2116,7 +2256,10 @@ public FlatFileItemReader flatFileItemReader(@Value("#{jobParameters['input.file
----

Both the `JobExecution` and `StepExecution` level `ExecutionContext` can be accessed in
-the same way, as shown in the following examples:
+the same way.
+
+[role="xmlContent"]
+The following example shows how to access the `ExecutionContext` in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2136,6 +2279,9 @@ the same way, as shown in the following examples:
----

+[role="javaContent"]
+The following example shows how to access the `ExecutionContext` in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2168,11 +2314,29 @@ Any bean that uses late-binding must be declared with scope="step". See
<<step-scope>> for more information.
====

+[NOTE]
+====
+If you are using Spring 3.0 (or above), the expressions in step-scoped beans are in the
+Spring Expression Language, a powerful general purpose language with many interesting
+features. To provide backward compatibility, if Spring Batch detects the presence of
+older versions of Spring, it uses a native expression language that is less powerful and
+that has slightly different parsing rules. 
The main difference is that the map keys in
+the example above do not need to be quoted with Spring 2.5, but the quotes are mandatory
+in Spring 3.0.
+====
+// TODO Where is that older language described? It'd be good to have a link to it here.
+// Also, given that we're up to version 5 of Spring, should we still be talking about
+// things from before version 3? (In other words, we should provide a link or drop the
+// whole thing.)
+
[[step-scope]]
==== Step Scope

-All of the late binding examples from above have a scope of "step" declared on the bean
-definition, as shown in the following example:
+All of the late binding examples shown earlier have a scope of "`step`" declared on the
+bean definition.
+
+[role="xmlContent"]
+The following example shows binding to step scope in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2183,6 +2347,9 @@ definition, as shown in the following example:
----

+[role="javaContent"]
+The following example shows binding to step scope in Java:
+
.Java Configuration
[source, java, role="javaContent"]
----
@@ -2228,8 +2395,10 @@ The following example includes the bean definition explicitly:

but is a Scope for the `Job` context, so that there is only one instance of such a bean
per running job. Additionally, support is provided for late binding of references
accessible from the `JobContext` using `#{..}` placeholders. Using this feature, bean
-properties can be pulled from the job or job execution context and the job parameters. 
+
+[role="xmlContent"]
+The following example shows binding to job scope in XML:

.XML Configuration
[source, xml, role="xmlContent"]
@@ -2247,7 +2416,10 @@ shown in the following examples:
----

-.Java Configuration
+[role="javaContent"]
+The following example shows binding to job scope in Java:
+
+.Java Configuration
[source, java, role="javaContent"]
----
@JobScope
diff --git a/spring-batch-docs/asciidoc/testing.adoc b/spring-batch-docs/asciidoc/testing.adoc
index 916ec7a5f7..65381d8944 100644
--- a/spring-batch-docs/asciidoc/testing.adoc
+++ b/spring-batch-docs/asciidoc/testing.adoc
@@ -10,36 +10,28 @@ ifndef::onlyonetoggle[]
include::toggle.adoc[]
endif::onlyonetoggle[]

-As with other application styles, it is extremely important to
- unit test any code written as part of a batch job. The Spring core
- documentation covers how to unit and integration test with Spring in great
- detail, so it is not be repeated here. It is important, however, to think
- about how to 'end to end' test a batch job, which is what this chapter
- covers. The spring-batch-test project includes classes that
- facilitate this end-to-end test approach.
+As with other application styles, it is extremely important to unit test any code written
+as part of a batch job. The Spring core documentation covers how to unit and integration
+test with Spring in great detail, so it is not repeated here. It is important, however,
+to think about how to 'end to end' test a batch job, which is what this chapter covers.
+The spring-batch-test project includes classes that facilitate this end-to-end test
+approach.

[[creatingUnitTestClass]]
=== Creating a Unit Test Class

-In order for the unit test to run a batch job, the framework must
- load the job's `ApplicationContext`. Two annotations are used to trigger
- this behavior:
+In order for the unit test to run a batch job, the framework must load the job's
+`ApplicationContext`. 
Two annotations are used to trigger this behavior:

+* `@RunWith(SpringRunner.class)`: Indicates that the class should use Spring's
+JUnit facilities.
+* `@ContextConfiguration(...)`: Indicates which resources to configure the
+`ApplicationContext` with.

-* `@RunWith(SpringRunner.class)`:
- Indicates that the class should use Spring's JUnit facilities
-
-
-* `@ContextConfiguration(...)`:
- Indicates which resources to configure the `ApplicationContext` with.
-
-Starting from v4.1, it is also possible to inject Spring Batch test utilities
-like the `JobLauncherTestUtils` and `JobRepositoryTestUtils` in the test context
-using the `@SpringBatchTest` annotation.
-
-The following example shows the annotations in use:
+[role="javaContent"]
+The following Java example shows the two annotations in use:

.Using Java Configuration
[source, java, role="javaContent"]
@@ -50,6 +42,9 @@ The following example shows the annotations in use:
public class SkipSampleFunctionalTests { ... }
----

+[role="xmlContent"]
+The following XML example shows the two annotations in use:
+
.Using XML Configuration
[source, java, role="xmlContent"]
----
@@ -65,24 +60,22 @@ public class SkipSampleFunctionalTests { ... }

=== End-To-End Testing of Batch Jobs

-'End To End' testing can be defined as testing the complete run of a
- batch job from beginning to end. This allows for a test that sets up a
- test condition, executes the job, and verifies the end result.
-
-In the following example, the batch job reads from the database and
- writes to a flat file. The test method begins by setting up the database
- with test data. It clears the CUSTOMER table and then inserts 10 new
- records. The test then launches the `Job` by using the
- `launchJob()` method. The
- `launchJob()` method is provided by the
- `JobLauncherTestUtils` class. The `JobLauncherTestUtils` class also provides the
- `launchJob(JobParameters)` method, which
- allows the test to give particular parameters. 
The - `launchJob()` method returns the - `JobExecution` object, which is useful for asserting - particular information about the `Job` run. In the - following case, the test verifies that the `Job` ended - with status "COMPLETED": +'End To End' testing can be defined as testing the complete run of a batch job from +beginning to end. This allows for a test that sets up a test condition, executes the job, +and verifies the end result. + +Consider an example of a batch job that reads from the database and writes to a flat file. +The test method begins by setting up the database with test data. It clears the CUSTOMER +table and then inserts 10 new records. The test then launches the `Job` by using the +`launchJob()` method. The `launchJob()` method is provided by the `JobLauncherTestUtils` +class. The `JobLauncherTestUtils` class also provides the `launchJob(JobParameters)` +method, which allows the test to give particular parameters. The `launchJob()` method +returns the `JobExecution` object, which is useful for asserting particular information +about the `Job` run. In the following case, the test verifies that the `Job` ended with +status "COMPLETED". + +[role="xmlContent"] +The following listing shows the example in XML: .XML Based Configuration [source, java, role="xmlContent"] @@ -119,6 +112,9 @@ public class SkipSampleFunctionalTests { } ---- +[role="javaContent"] +The following listing shows the example in Java: + .Java Based Configuration [source, java, role="javaContent"] ---- @@ -158,16 +154,13 @@ public class SkipSampleFunctionalTests { === Testing Individual Steps -For complex batch jobs, test cases in the end-to-end testing - approach may become unmanageable. In these cases, it may be more useful to - have test cases to test individual steps on their own. The - `JobLauncherTestUtils` class contains a method called - `launchStep`, which takes a step name and runs just - that particular `Step`. 
This approach allows for more
- targeted tests letting the test set up data for only that step and
- to validate its results directly. The following example shows how to use the
- `launchStep` method to load a `Step` by name:
-
+For complex batch jobs, test cases in the end-to-end testing approach may become
+unmanageable. In these cases, it may be more useful to have test cases to test individual
+steps on their own. The `JobLauncherTestUtils` class contains a method called `launchStep`,
+which takes a step name and runs just that particular `Step`. This approach allows for
+more targeted tests, letting the test set up data for only that step and to validate its
+results directly. The following example shows how to use the `launchStep` method to load a
+`Step` by name:

[source, java]
----
@@ -178,17 +171,14 @@ JobExecution jobExecution = jobLauncherTestUtils.launchStep("loadFileStep");

=== Testing Step-Scoped Components

-Often, the components that are configured for your steps at runtime
- use step scope and late binding to inject context from the step or job
- execution. These are tricky to test as standalone components, unless you
- have a way to set the context as if they were in a step execution. That is
- the goal of two components in Spring Batch:
- `StepScopeTestExecutionListener` and
- `StepScopeTestUtils`.
+Often, the components that are configured for your steps at runtime use step scope and
+late binding to inject context from the step or job execution. These are tricky to test as
+standalone components, unless you have a way to set the context as if they were in a step
+execution. That is the goal of two components in Spring Batch:
+`StepScopeTestExecutionListener` and `StepScopeTestUtils`. 
+The listener is declared at the class level, and its job is to create a step execution
+context for each test method, as shown in the following example:

[source, java]
----
@@ -218,56 +208,18 @@ public class StepScopeTestExecutionListenerIntegrationTests {
 }
----
-There are two `TestExecutionListeners`. One
- is the regular Spring Test framework, which handles dependency injection
- from the configured application context to inject the reader. The
- other is the Spring Batch
- `StepScopeTestExecutionListener`. It works by looking
- for a factory method in the test case for a
- `StepExecution`, using that as the context for
- the test method, as if that execution were active in a `Step` at runtime. The
- factory method is detected by its signature (it must return a
- `StepExecution`). If a factory method is not provided,
- then a default `StepExecution` is created.
-
-Starting from v4.1, the `StepScopeTestExecutionListener` and
- `JobScopeTestExecutionListener` are imported as test execution listeners
- if the test class is annotated with `@SpringBatchTest`. The preceding test
- example can be configured as follows:
-
-[source, java]
-----
-@SpringBatchTest
-@RunWith(SpringRunner.class)
-@ContextConfiguration
-public class StepScopeTestExecutionListenerIntegrationTests {
-
-    // This component is defined step-scoped, so it cannot be injected unless
-    // a step is active...
-    @Autowired
-    private ItemReader reader;
-
-    public StepExecution getStepExecution() {
-        StepExecution execution = MetaDataInstanceFactory.createStepExecution();
-        execution.getExecutionContext().putString("input.data", "foo,bar,spam");
-        return execution;
-    }
-
-    @Test
-    public void testReader() {
-        // The reader is initialized and bound to the input data
-        assertNotNull(reader.read());
-    }
-
-}
-----
-
-The listener approach is convenient if you want the duration of the
- step scope to be the execution of the test method. For a more flexible
- but more invasive approach, you can use the
- `StepScopeTestUtils`. The following example counts the
- number of items available in the reader shown in the previous example:
+There are two `TestExecutionListeners`. One is the regular Spring Test framework, which
+handles dependency injection from the configured application context to inject the reader.
+The other is the Spring Batch `StepScopeTestExecutionListener`. It works by looking for a
+factory method in the test case for a `StepExecution`, using that as the context for the
+test method, as if that execution were active in a `Step` at runtime. The factory method
+is detected by its signature (it must return a `StepExecution`). If a factory method is
+not provided, then a default `StepExecution` is created.
+
+The listener approach is convenient if you want the duration of the step scope to be the
+execution of the test method. For a more flexible but more invasive approach, you can use
+the `StepScopeTestUtils`. The following example counts the number of items available in
+the reader shown in the previous example:

[source, java]
----
@@ -287,21 +239,15 @@ int count = StepScopeTestUtils.doInStepScope(stepExecution,

[[validatingOutputFiles]]
-
=== Validating Output Files

-When a batch job writes to the database, it is easy to query the
- database to verify that the output is as expected. However, if the batch
- job writes to a file, it is equally important that the output be verified.
- Spring Batch provides a class called `AssertFile` to
- facilitate the verification of output files. The method called
- `assertFileEquals` takes two
- `File` objects (or two
- `Resource` objects) and asserts, line by line, that
- the two files have the same content. Therefore, it is possible to create a
- file with the expected output and to compare it to the actual
- result, as shown in the following example:
-
+When a batch job writes to the database, it is easy to query the database to verify that
+the output is as expected. However, if the batch job writes to a file, it is equally
+important that the output be verified. Spring Batch provides a class called `AssertFile`
+to facilitate the verification of output files. The method called `assertFileEquals` takes
+two `File` objects (or two `Resource` objects) and asserts, line by line, that the two
+files have the same content. Therefore, it is possible to create a file with the expected
+output and to compare it to the actual result, as shown in the following example:

[source, java]
----
@@ -317,11 +263,9 @@ AssertFile.assertFileEquals(new FileSystemResource(EXPECTED_FILE),

=== Mocking Domain Objects

-Another common issue encountered while writing unit and integration
- tests for Spring Batch components is how to mock domain objects. A good
- example is a `StepExecutionListener`, as illustrated
- in the following code snippet:
-
+Another common issue encountered while writing unit and integration tests for Spring Batch
+components is how to mock domain objects. A good example is a `StepExecutionListener`, as
+illustrated in the following code snippet:

[source, java]
----
@@ -336,13 +280,11 @@ public class NoWorkFoundStepExecutionListener extends StepExecutionListenerSuppo
 }
----
-The preceding listener example is provided by the framework and checks a
- `StepExecution` for an empty read count, thus
- signifying that no work was done. While this example is fairly simple, it
- serves to illustrate the types of problems that may be encountered when
- attempting to unit test classes that implement interfaces requiring Spring
- Batch domain objects. Consider the following unit test for the listener's in the preceding example:
-
+The preceding listener example is provided by the framework and checks a `StepExecution`
+for an empty read count, thus signifying that no work was done. While this example is
+fairly simple, it serves to illustrate the types of problems that may be encountered when
+attempting to unit test classes that implement interfaces requiring Spring Batch domain
+objects. Consider the following unit test for the listener in the preceding example:

[source, java]
----
@@ -362,18 +304,13 @@ public void noWork() {
 }
----
-Because the Spring Batch domain model follows good object-oriented
- principles, the `StepExecution` requires a
- `JobExecution`, which requires a
- `JobInstance` and
- `JobParameters`, to create a valid
- `StepExecution`. While this is good in a solid domain
- model, it does make creating stub objects for unit testing verbose. To
- address this issue, the Spring Batch test module includes a factory for
- creating domain objects: `MetaDataInstanceFactory`.
- Given this factory, the unit test can be updated to be more
- concise, as shown in the following example:
-
+Because the Spring Batch domain model follows good object-oriented principles, the
+`StepExecution` requires a `JobExecution`, which requires a `JobInstance` and
+`JobParameters`, to create a valid `StepExecution`. While this is good in a solid domain
+model, it does make creating stub objects for unit testing verbose. To address this issue,
+the Spring Batch test module includes a factory for creating domain objects:
+`MetaDataInstanceFactory`. Given this factory, the unit test can be updated to be more
+concise, as shown in the following example:

[source, java]
----
@@ -391,7 +328,6 @@ public void testAfterStep() {
 }
----
-The preceding method for creating a simple
- `StepExecution` is just one convenience method
- available within the factory. A full method listing can be found in its
- link:$$https://docs.spring.io/spring-batch/docs/current/api/org/springframework/batch/test/MetaDataInstanceFactory.html$$[Javadoc].
+The preceding method for creating a simple `StepExecution` is just one convenience method
+available within the factory. A full method listing can be found in its
+link:$$https://docs.spring.io/spring-batch/apidocs/org/springframework/batch/test/MetaDataInstanceFactory.html$$[Javadoc].
diff --git a/spring-batch-docs/asciidoc/toggle.adoc b/spring-batch-docs/asciidoc/toggle.adoc
index 92c92f5fdd..d39eb5ab08 100644
--- a/spring-batch-docs/asciidoc/toggle.adoc
+++ b/spring-batch-docs/asciidoc/toggle.adoc
@@ -8,7 +8,7 @@

ifdef::backend-html5[]
- +
+++

From 8b771b20344c6cfadfd58ade17cf5dc470a4e41c Mon Sep 17 00:00:00 2001
From: Jay Bryant
Date: Fri, 14 Dec 2018 10:01:21 -0600
Subject: [PATCH 2/2] Small fixes

Mahmoud Ben Hassine caught a couple of sentences that were problematic
when the Both option is on. I then caught a couple of other problems. I
fixed all of that.

Thanks for reading closely, Mahmoud. I always appreciate that.
---
 spring-batch-docs/asciidoc/job.adoc | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/spring-batch-docs/asciidoc/job.adoc b/spring-batch-docs/asciidoc/job.adoc
index 9de336f6c6..fe260a3680 100644
--- a/spring-batch-docs/asciidoc/job.adoc
+++ b/spring-batch-docs/asciidoc/job.adoc
@@ -30,7 +30,7 @@ options and runtime concerns of a `Job`.

ifdef::backend-html5[]
[role="javaContent"]
-There are multiple implementations of the <> interface, however
+There are multiple implementations of the <> interface. However,
builders abstract away the difference in configuration.

[source, java, role="javaContent"]
----
@@ -56,10 +56,9 @@ builders can also contain other elements that help with parallelisation (`Split`),
declarative flow control (`Decision`) and externalization of flow definitions (`Flow`).

[role="xmlContent"]
-There are multiple implementations of the <> interface, however, the namespace
-abstracts away the differences in configuration. It has only three
-required dependencies: a name, a `JobRepository` , and
-a list of `Step` instances.
+Whether you use Java or XML, there are multiple implementations of the <>
+interface. However, the namespace abstracts away the differences in configuration. It has
+only three required dependencies: a name, a `JobRepository`, and a list of `Step` instances.
[source, xml, role="xmlContent"] ---- @@ -719,8 +718,8 @@ The following example shows the inclusion of `MapJobRepositoryFactoryBean` in XM ---- -[role="xmlContent"] -The following example shows the inclusion of `MapJobRepositoryFactoryBean` in XML: +[role="javaContent"] +The following example shows the inclusion of `MapJobRepositoryFactoryBean` in Java: .Java Configuration [source, java, role="javaContent"]
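The `assertFileEquals` method touched by the testing-chapter changes above compares two files line by line. As a rough illustration of that contract, the following plain-JDK sketch shows the behavior being documented. It is a hypothetical simplification (the `FileAssert` class name and its error messages are invented for this sketch), not Spring Batch's actual `AssertFile` implementation, which also accepts Spring `Resource` arguments as shown in the diff.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical, simplified stand-in illustrating the contract of
// AssertFile.assertFileEquals: compare two files line by line and
// fail on the first difference or on a line-count mismatch.
public class FileAssert {

    public static void assertFileEquals(Path expected, Path actual) throws IOException {
        List<String> expectedLines = Files.readAllLines(expected);
        List<String> actualLines = Files.readAllLines(actual);
        int common = Math.min(expectedLines.size(), actualLines.size());
        for (int i = 0; i < common; i++) {
            if (!expectedLines.get(i).equals(actualLines.get(i))) {
                throw new AssertionError("Line " + (i + 1) + " differs: expected ["
                        + expectedLines.get(i) + "] but was [" + actualLines.get(i) + "]");
            }
        }
        if (expectedLines.size() != actualLines.size()) {
            throw new AssertionError("Expected " + expectedLines.size()
                    + " lines but found " + actualLines.size());
        }
    }
}
```

In actual Spring Batch tests, prefer the real `AssertFile` from the `spring-batch-test` module; this sketch only conveys why a line-by-line comparison gives a more useful failure message than comparing whole file contents at once.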