Architecture
echo edited this page May 4, 2025
With the upcoming releases of versions 3.0 and 2.7, Brain4J has undergone significant changes aimed at enhancing customizability, scalability, and openness.
The project is now organized into two primary modules:
- `brain4j-core`: includes all components related to machine learning, such as model architectures, training pipelines, optimization algorithms, and learning utilities.
- `brain4j-math`: provides low-level operations, including GPU acceleration, advanced matrix and tensor manipulations, dataset management, and high-performance computational utilities.
Starting from version 2.7, all model parameters (including weights and biases) are managed through `Tensor` objects rather than being encapsulated in higher-level abstractions like `Synapse` or `Neuron`.
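To illustrate the idea, here is a minimal sketch of a tensor-backed dense layer. The class and method names are hypothetical and do not reflect Brain4J's actual API; the point is that all weights live in one contiguous array instead of one object per neuron or synapse.

```java
// Minimal sketch of tensor-backed parameters (hypothetical, NOT Brain4J's API).
// All weights are stored in a single contiguous array instead of allocating
// a Neuron/Synapse object per connection, which cuts memory overhead and
// keeps the data layout friendly to GPUs and vectorized code.
public class DenseLayer {
    private final int in, out;
    private final float[] weights; // logical shape: [out x in], row-major
    private final float[] biases;  // logical shape: [out]

    public DenseLayer(int in, int out) {
        this.in = in;
        this.out = out;
        this.weights = new float[out * in];
        this.biases = new float[out];
    }

    // Computes y = W x + b directly over the flat arrays.
    public float[] forward(float[] x) {
        float[] y = new float[out];
        for (int o = 0; o < out; o++) {
            float sum = biases[o];
            for (int i = 0; i < in; i++) {
                sum += weights[o * in + i] * x[i];
            }
            y[o] = sum;
        }
        return y;
    }

    public void setWeight(int o, int i, float v) { weights[o * in + i] = v; }
    public void setBias(int o, float v) { biases[o] = v; }
}
```

With this layout, an optimizer or a GPU kernel can treat the whole parameter set as one buffer, rather than traversing a graph of per-neuron objects.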
This transition brings notable advantages:
- Reduced RAM usage
- Improved flexibility and modularity
- Better support for GPU and parallel computing
- Enhanced scalability for large models
To accelerate convergence and reduce training time, Brain4J supports dataset batching.
- Pre-2.7 Behavior: During each epoch, every sample in a batch would initiate a lightweight virtual thread. The system would wait for all propagations to complete before proceeding to the next batch.
- Post-2.7 Behavior: With the introduction of tensorized operations, entire batches are now merged into a single tensor. The neural network processes the batch simultaneously, significantly improving computational efficiency and memory utilization.
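The post-2.7 approach can be sketched as follows. The names here are illustrative, not Brain4J's actual API: instead of launching one propagation per sample, the batch is merged into a single `[batchSize x features]` matrix and pushed through the layer in one pass.

```java
// Sketch of batched forward propagation (hypothetical, NOT Brain4J's API).
// The whole batch is processed as one matrix product Y = X * W^T,
// where X is [batch x in] and W is [out x in], so a single pass replaces
// one virtual thread per sample.
public class BatchedForward {
    public static float[][] forward(float[][] x, float[][] w) {
        int batch = x.length, out = w.length, in = w[0].length;
        float[][] y = new float[batch][out];
        for (int b = 0; b < batch; b++) {
            for (int o = 0; o < out; o++) {
                float sum = 0f;
                for (int i = 0; i < in; i++) {
                    sum += x[b][i] * w[o][i];
                }
                y[b][o] = sum;
            }
        }
        return y;
    }
}
```

Because the batch is one contiguous tensor, the same operation maps directly onto a GPU matrix-multiply kernel, which is where most of the efficiency gain comes from.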
For more details, check out Optimizing Inference.
This wiki is still under construction. If you feel that you can contribute, please do so! Thanks.