## Background

For this use case we would like to use the Cobol parser and decoder in order to:
- split input files into records,
- process each record using custom code,
- have the custom code extract some fields of each record, process them, and write them back in the same EBCDIC format,
- allow the processor to be just a library used outside of the Spark framework,
- support all file types (F, V, VB, custom record extractors, etc.),
- ideally reuse the options used with `spark-cobol` in the file processor as well.

It seems most of the building blocks for this solution are already in place.

## Feature

Add a way to process EBCDIC data in-place, without converting it to Spark.

## Example

--

## Proposed Solution

<img width="466" height="590" alt="Image" src="https://github.com/user-attachments/assets/2aed44ab-345a-4141-9249-23db94b4c649" />
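
As a rough illustration of the "building blocks already in place" point, the sketch below drives the existing `cobol-parser` module directly for the simplest case of fixed-length (F) records, with no Spark involved. This is only a minimal sketch under assumptions: `copybook.cpy`, `data.dat`, and the `CURRENCY` field name are placeholders, the parser calls (`CopybookParser.parseTree`, `getRecordSize`, `getFieldByName`, `extractPrimitiveField`) are used as I understand the current API, and the EBCDIC write-back step plus V/VB/custom record extraction are exactly what this feature would need to add or reuse from the reader.

```scala
import java.nio.file.{Files, Paths}

import za.co.absa.cobrix.cobol.parser.CopybookParser
import za.co.absa.cobrix.cobol.parser.ast.Primitive

object EbcdicInPlaceSketch {
  def main(args: Array[String]): Unit = {
    // Parse the same copybook that would be passed to spark-cobol.
    val copybookContents = new String(Files.readAllBytes(Paths.get("copybook.cpy")))
    val copybook = CopybookParser.parseTree(copybookContents)

    // For fixed-length (F) records the record size comes straight from the copybook.
    val recordSize = copybook.getRecordSize

    // 'CURRENCY' is a hypothetical field name, used only for illustration.
    val field = copybook.getFieldByName("CURRENCY").asInstanceOf[Primitive]

    val data = Files.readAllBytes(Paths.get("data.dat"))

    var offset = 0
    while (offset + recordSize <= data.length) {
      val record = java.util.Arrays.copyOfRange(data, offset, offset + recordSize)

      // Decode one field of the record using the parser only.
      val value = copybook.extractPrimitiveField(field, record, 0)
      println(s"Record at offset $offset: CURRENCY = $value")

      // The write-back step (re-encoding the processed value into EBCDIC and
      // putting it back into the record) is the part this feature would add.

      offset += recordSize
    }
  }
}
```

Variable-length formats (V, VB) and custom record extractors would need the record-splitting logic that currently lives in the reader, which is why reusing the `spark-cobol` options in the file processor is part of the ask.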