Improve Documentation if an ItemWriter is used with a Database connection #4378


Closed
git9999999 opened this issue May 16, 2023 · 1 comment

Comments

@git9999999

Hi

I have asked a question on Stack Overflow: https://stackoverflow.com/questions/76073523/spring-batch-with-faulttolerant-config-swallow-sql-exception-on-commit-of-a-item/76099432

Here is the same info again:

We make heavy use of Spring Batch (5.0.1) in our new project (Spring Boot 3.0.5) with MS SQL Server. We would like to use the "faultTolerant" feature, e.g.:

return new StepBuilder("abcStep", jobRepository)
     .<String, String>chunk(1, transactionManager)
     .reader(aItemReader)
     .writer(aItemWriter)
     .faultTolerant()
     .backOffPolicy(
         JobCommonConfig.milliSecondsBetweenRetriesPolicy(
            this.milliSecondsBetweenRetries))
     .retryLimit(this.retryLimit)
     .retry(Throwable.class)
     .build();
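
The backoff helper in the snippet above is our own code, not part of Spring Batch. As a rough sketch (assuming it simply applies a fixed delay between retries, which is not shown in this issue), it could look like this:

import org.springframework.retry.backoff.BackOffPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;

public final class JobCommonConfig {

    // Hypothetical implementation: wait a fixed number of milliseconds before each retry.
    public static BackOffPolicy milliSecondsBetweenRetriesPolicy(long milliSecondsBetweenRetries) {
        FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
        backOffPolicy.setBackOffPeriod(milliSecondsBetweenRetries);
        return backOffPolicy;
    }
}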

Up to now, everything has worked fine. Now we have found a case where we process data in the ItemWriter that is not OK: some data is too long for the DB field. We use JPA and do not flush.
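
For illustration, a custom JPA writer along these lines (class and entity names here are placeholders, not our real code) persists the items without flushing, so the failing INSERT only runs when the chunk transaction commits:

import jakarta.persistence.EntityManager;
import jakarta.persistence.PersistenceContext;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;

// AbcEntity stands in for the actual JPA entity being written.
public class AbcItemWriter implements ItemWriter<AbcEntity> {

    @PersistenceContext
    private EntityManager entityManager;

    @Override
    public void write(Chunk<? extends AbcEntity> chunk) {
        for (AbcEntity item : chunk) {
            // The INSERT is only queued in the persistence context; without a flush
            // it is sent to the database at commit time, outside the writer.
            entityManager.persist(item);
        }
    }
}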

WARN  o.h.engine.jdbc.spi.SqlExceptionHelper   : SQL Error: 2628, SQLState: S0001
ERROR o.h.engine.jdbc.spi.SqlExceptionHelper   : String or binary data would be truncated in table 'abc.dbo.T_.....', column 'FILE_NAME_BASE64'. Truncated value: '....'.
INFO  o.s.batch.core.step.tasklet.TaskletStep  : Commit failed while step execution data was already updated. Reverting to old version.
INFO  o.s.batch.core.step.AbstractStep         : Step: [aStep] executed in 124ms

My expectation would be that, on an error, the step stops with an exception. But this does not happen; the job just continues. It also rolls back my business transaction. 😱😱😱
For us, it looks like everything worked correctly, but we just missed some data. 😱😱😱

If I add a flush() with the EntityManager, I see the exception and Spring Batch behaves as expected and stops the job.
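
A minimal sketch of that change, reusing the placeholder writer from above:

    @Override
    public void write(Chunk<? extends AbcEntity> chunk) {
        for (AbcEntity item : chunk) {
            entityManager.persist(item);
        }
        // Flushing inside the writer makes the truncation error surface here,
        // where the step can fail (or retry/skip) instead of at commit time.
        entityManager.flush();
    }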

If I change the step to this, it also works correctly (stops the job):

return new StepBuilder("abcStep", jobRepository)
     .<String, String>chunk(1, transactionManager)
     .reader(aItemReader)
     .writer(aItemWriter)
     .build();

I found two entries that could explain the problem:

#1189

#3950

What am I doing wrong? Is the Spring Batch faultTolerant feature broken?


  1. I think the documentation should be improved to tell you that you have to flush when you use a DB connection with a custom ItemWriter.

  2. If possible, Spring Batch should stop processing when this error occurs:

WARN  o.h.engine.jdbc.spi.SqlExceptionHelper   : SQL Error: 2628, SQLState: S0001
ERROR o.h.engine.jdbc.spi.SqlExceptionHelper   : String or binary data would be truncated in table 'abc.dbo.T_.....', column 'FILE_NAME_BASE64'. Truncated value: '....'.
INFO  o.s.batch.core.step.tasklet.TaskletStep  : Commit failed while step execution data was already updated. Reverting to old version.
INFO  o.s.batch.core.step.AbstractStep         : Step: [aStep] executed in 124ms
@git9999999 added the status: waiting-for-triage label on May 16, 2023
@fmbenhassine
Contributor

We use JPA and do not flush.

What am I doing wrong? Is the Spring Batch faultTolerant feature broken?

No, the fault-tolerance feature is not broken. As mentioned on SO, the provided JpaItemWriter does the flush, and things work as expected with that built-in writer.

Now if you use a custom writer, it is up to you to flush items. On SO, I said "This is probably a documentation issue", but after verifying, this is already covered in the documentation, in the Database ItemWriters section. Here is the relevant excerpt:

Users can create their own DAOs that implement the ItemWriter interface [...]. Batching database
output does not have any inherent flaws, assuming we are careful to flush and there are no errors
in the data.

So since you are providing a custom JPA writer, it is assumed that you flush items yourself.
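
For reference, a minimal configuration of that built-in writer could look like this (AbcEntity is a placeholder entity, and the EntityManagerFactory is assumed to be the auto-configured one):

import jakarta.persistence.EntityManagerFactory;
import org.springframework.batch.item.database.JpaItemWriter;
import org.springframework.batch.item.database.builder.JpaItemWriterBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AbcWriterConfig {

    @Bean
    public JpaItemWriter<AbcEntity> abcItemWriter(EntityManagerFactory entityManagerFactory) {
        // JpaItemWriter flushes the EntityManager after writing each chunk, so a
        // constraint violation is raised inside the writer and handled by the step
        // instead of being swallowed at commit time.
        return new JpaItemWriterBuilder<AbcEntity>()
                .entityManagerFactory(entityManagerFactory)
                .build();
    }
}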

@fmbenhassine closed this as not planned on May 23, 2023
@fmbenhassine added the status: declined label and removed the status: waiting-for-triage label on May 23, 2023