Releases: chaobrain/braintools

Version 0.1.7

05 Jan 12:00
db019e3

Major Features

New Training Framework (braintools.trainer)

  • PyTorch Lightning-like training API for JAX-based neural networks, with comprehensive features:
    • LightningModule: Base class for defining training models with training_step(), validation_step(), and configure_optimizers() hooks
    • Trainer: Orchestration class for managing training loops, epochs, and device placement
    • TrainOutput/EvalOutput: Structured output types for training and evaluation results
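
A minimal sketch of the intended workflow, assuming Lightning-style constructor arguments and a fit() entry point (the class and hook names come from this release; module paths, signatures, and the brainstate.nn.Linear stand-in model are assumptions):

```python
import numpy as np
import brainstate
import braintools

class RegressionTask(braintools.trainer.LightningModule):
    def __init__(self):
        super().__init__()
        self.model = brainstate.nn.Linear(32, 10)   # stand-in for any model

    def training_step(self, batch, batch_idx):
        x, y = batch
        return ((self.model(x) - y) ** 2).mean()    # loss returned to the Trainer

    def configure_optimizers(self):
        return braintools.optim.Adam(lr=1e-3)

# ArrayDataset and DataLoader are described under "Data Loading Utilities" below.
data = braintools.trainer.ArrayDataset(np.random.randn(256, 32),
                                       np.random.randn(256, 10))
trainer = braintools.trainer.Trainer(max_epochs=10)   # argument name assumed
trainer.fit(RegressionTask(), braintools.trainer.DataLoader(data, batch_size=32))
```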

Callbacks System

  • 10+ built-in callbacks for customizing training behavior:
    • ModelCheckpoint: Automatic model saving based on monitored metrics
    • EarlyStopping: Stop training when metrics plateau
    • LearningRateMonitor: Track and log learning rate changes
    • GradientClipCallback: Gradient clipping for training stability
    • Timer: Track training time
    • RichProgressBar / TQDMProgressBar: Visual progress indicators
    • LambdaCallback / PrintCallback: Custom callback utilities
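
In the Lightning convention, callbacks are passed to the Trainer at construction; a hedged sketch (the callback names are from this release, the constructor arguments are assumptions):

```python
from braintools.trainer import (
    Trainer, ModelCheckpoint, EarlyStopping, LearningRateMonitor,
)

# Argument names below mirror the Lightning convention and are assumptions.
callbacks = [
    ModelCheckpoint(monitor="val_loss", save_top_k=3),  # keep the 3 best checkpoints
    EarlyStopping(monitor="val_loss", patience=5),      # stop after 5 stagnant epochs
    LearningRateMonitor(),                              # log LR changes during training
]
trainer = Trainer(max_epochs=100, callbacks=callbacks)
```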

Logging Backends

  • 6 pluggable logging backends:
    • TensorBoardLogger: TensorBoard integration
    • WandBLogger: Weights & Biases integration
    • CSVLogger: Simple CSV file logging
    • NeptuneLogger: Neptune.ai integration
    • MLFlowLogger: MLFlow integration
    • CompositeLogger: Combine multiple loggers
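
A hedged sketch of combining backends (the logger names are from this release; constructor arguments and the Trainer's logger parameter are assumptions):

```python
from braintools.trainer import (
    Trainer, CSVLogger, TensorBoardLogger, CompositeLogger,
)

# CompositeLogger fans each logged metric out to every wrapped backend.
logger = CompositeLogger([
    CSVLogger("logs/metrics.csv"),
    TensorBoardLogger("logs/tensorboard"),
])
trainer = Trainer(logger=logger)
```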

Data Loading Utilities

  • JAX-compatible data loading with distributed support:
    • DataLoader / DistributedDataLoader: Efficient batch loading
    • Dataset, ArrayDataset, DictDataset, IterableDataset: Dataset abstractions
    • Sampler, RandomSampler, SequentialSampler, BatchSampler, DistributedSampler: Sampling strategies
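
A hedged sketch of batch iteration (the class names are from this release; constructor signatures and the dict-batch convention are assumptions):

```python
import numpy as np
from braintools.trainer import DictDataset, DataLoader, RandomSampler

# A dataset of 1000 examples; DictDataset is assumed to yield dict batches
# keyed by the names given here.
ds = DictDataset(x=np.random.randn(1000, 32), y=np.random.randint(0, 10, size=1000))
loader = DataLoader(ds, batch_size=64, sampler=RandomSampler(ds))

for batch in loader:
    x, y = batch["x"], batch["y"]   # arrays with a leading batch dimension of 64
    break
```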

Distributed Training

  • Multi-device and multi-host training strategies:
    • SingleDeviceStrategy: Single device execution
    • DataParallelStrategy: Data parallelism across devices
    • ShardedDataParallelStrategy / FullyShardedDataParallelStrategy: Memory-efficient sharded training
    • AutoStrategy: Automatic strategy selection
    • all_reduce, broadcast: Distributed communication primitives
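
A hedged sketch of strategy selection (the class names are from this release; the Trainer's strategy parameter is an assumption):

```python
from braintools.trainer import Trainer, DataParallelStrategy, AutoStrategy

trainer = Trainer(strategy=DataParallelStrategy())  # replicate across local devices
# or defer the choice to the framework:
trainer = Trainer(strategy=AutoStrategy())          # picks a strategy from the device topology
```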

Checkpointing

  • Comprehensive checkpoint management:
    • CheckpointManager: Manage multiple checkpoints with retention policies
    • save_checkpoint / load_checkpoint: Save and restore model states
    • find_checkpoint / list_checkpoints: Checkpoint discovery utilities
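
A hedged sketch (the names are from this release; file paths, argument names, and the state layout are assumptions):

```python
from braintools.trainer import CheckpointManager, save_checkpoint, load_checkpoint

state = {"step": 1000, "params": {"w": [0.1, 0.2]}}   # hypothetical training-state pytree

save_checkpoint("ckpts/step_1000.ckpt", state)
restored = load_checkpoint("ckpts/step_1000.ckpt")

# Retention policy: keep only the most recent checkpoints (argument name assumed).
manager = CheckpointManager("ckpts/", max_to_keep=5)
```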

Progress Bar System

  • Multiple progress bar implementations:
    • SimpleProgressBar: Basic text-based progress
    • TQDMProgressBarWrapper: TQDM-based progress
    • RichProgressBarWrapper: Rich library-based progress

Improvements

API Documentation

  • Enhanced module documentation: All public modules now include comprehensive docstrings with examples, parameter descriptions, and usage guidelines directly in __init__.py files
  • Reorganized imports: Cleaner and more consistent import structure across all modules

Breaking Changes

Removed braintools.param Module

  • The entire braintools.param module has been removed, including:
    • Data containers (Data)
    • Parameter wrappers (Param, Const)
    • State containers (ArrayHidden, ArrayParam)
    • Regularization classes (GaussianReg, L1Reg, L2Reg)
    • All transform classes (SigmoidT, SoftplusT, AffineT, etc.)
    • Utility functions (get_param(), get_size())
  • Users relying on these features should migrate to alternative implementations or pin to version 0.1.6

Version 0.1.6

25 Dec 14:10
206c5ec

New Features

Parameter Management Expansion (braintools.param)

  • Hierarchical data container: Added Data for composed state storage and cloning.
  • Parameter wrappers: Added Param and Const with built-in transforms and optional regularization.
  • State containers: Added ArrayHidden and ArrayParam with transform-aware .data access.
  • Regularization priors: Added GaussianReg, L1Reg, and L2Reg with optional trainable hyperparameters.
  • Utilities: Added get_param() and get_size() helpers for parameter/state handling.
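
A hedged sketch of how these pieces compose (the class and helper names are from this release; all signatures are assumptions, and note that the module was removed again in 0.1.7):

```python
import jax.numpy as jnp
import braintools

# A parameter stored unconstrained but exposed as positive via SoftplusT,
# with a Gaussian prior; all signatures here are assumptions.
p = braintools.param.Param(
    jnp.ones((10,)),
    transform=braintools.param.SoftplusT(),
    reg=braintools.param.GaussianReg(0.0, 1.0),
)
print(braintools.param.get_param(p))   # helper from this release; call form assumed
```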

Transforms

  • New ReluT transform for lower-bounded parameters.
  • Expanded transform suite now includes PositiveT, NegativeT, ScaledSigmoidT, PowerT,
    OrderedT, SimplexT, and UnitVectorT.

Improvements

API Consistency

  • Transform naming cleanup: Standardized transform class names with the *T suffix
    (e.g., SigmoidT, SoftplusT, AffineT, ChainT, MaskedT, ClipT).

Documentation

  • Expanded param API docs: Added sections for data containers, state containers, regularization,
    utilities, and updated transform listings in docs/apis/param.rst.
  • API index update: Added param API page to docs/index.rst.

Tests

  • New test coverage: Added tests for data containers, modules, regularization, state, transforms,
    and utilities across the param module.

Breaking Changes

  • Transform API renames: Transform classes now use the *T suffix (e.g., Sigmoid -> SigmoidT).
  • Custom transform removed: The Custom transform is no longer part of the public API.

Bug Fixes

  • Initializer RNG: TruncatedNormal now defaults to numpy.random when no RNG is provided.

What's Changed

  • ⬆️ Bump actions/download-artifact from 6 to 7 by @dependabot[bot] in #66
  • ⬆️ Bump actions/upload-artifact from 5 to 6 by @dependabot[bot] in #65

Full Changelog: v0.1.5...v0.1.6

Version 0.1.5

14 Dec 06:03
cf67678

New Features

Parameter Transformation Module (braintools.param)

  • 7 new bijective transforms for constrained optimization and probabilistic modeling:
    • Positive: Constrains parameters to (0, +∞) using exponential transformation
    • Negative: Constrains parameters to (-∞, 0) using negative softplus
    • ScaledSigmoid: Sigmoid with adjustable sharpness/temperature parameter (beta)
    • Power: Box-Cox family power transformation for variance stabilization
    • Ordered: Ensures monotonically increasing output vectors (useful for cutpoints in ordinal regression)
    • Simplex: Stick-breaking transformation for probability vectors summing to 1
    • UnitVector: Projects vectors onto the unit sphere (L2 norm = 1)
  • Jacobian computation: Added a log_abs_det_jacobian() method to the Transform base class for probabilistic modeling
    • Implemented for: Identity, Sigmoid, Softplus, Log, Exp, Affine, Chain, Positive
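
For probabilistic modeling, a change of variables needs log|det J| of the forward map; a hedged sketch (the method name is from this release; the call conventions are assumptions):

```python
import jax.numpy as jnp
import braintools

t = braintools.param.Sigmoid()       # pre-0.1.6 name, before the *T rename
x = jnp.array([-2.0, 0.0, 1.5])      # unconstrained values
y = t(x)                             # mapped into (0, 1); call convention assumed

# For the sigmoid, log|dy/dx| = log(y) + log(1 - y) elementwise.
ladj = t.log_abs_det_jacobian(x)
```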

Surrogate Gradient Enhancements (braintools.surrogate)

  • Hyperparameters of surrogate gradient functions are now differentiable, enabling gradient-based tuning.
  • Fixed a batching issue in surrogate gradient functions

Improvements

API Enhancements

  • __repr__ methods: Added string representations to all Transform classes and Param class for better debugging
  • Enhanced documentation: Updated docs/apis/param.rst with comprehensive API reference
    • Organized sections: Base Classes, Parameter Wrapper, Bounded Transforms, Positive/Negative Transforms, Advanced Transforms, Composition Transforms
    • Descriptive explanations for each transform's use case

Code Quality

  • Comprehensive test coverage: Added 28 new tests for param module (45 total tests passing)
    • Tests for all new transforms: roundtrip, constraints, repr methods
    • Tests for log_abs_det_jacobian correctness
    • Tests for edge cases and numerical stability

What's Changed

  • ⬆️ Bump actions/checkout from 4 to 6 by @dependabot[bot] in #62
  • Refactor surrogate gradient tests; enhance checkpointing, add ClippedTransform by @chaoming0625 in #61
  • Introduce param module and refactor surrogate gradients by @chaoming0625 in #63
  • Add param transforms with Jacobians; enhance surrogate gradients by @chaoming0625 in #64

Full Changelog: v0.1.4...v0.1.5

Version 0.1.4

31 Oct 13:52
8979850

New Features

Learning Rate Scheduler Enhancements (braintools.optim)

  • New apply() method: Added apply() method to all LR schedulers for more flexible learning rate application
    • Allows applying learning rate transformations without stepping the scheduler
    • Useful for custom training loops and learning rate inspection (see the sketch after this list)
  • Comprehensive test coverage: Added 118+ tests covering all 17 learning rate schedulers
    • Tests for basic functionality, optimizer integration, JIT compilation, state persistence
    • Full coverage of edge cases and special modes for each scheduler
    • Validates correctness with @brainstate.transform.jit compilation
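
A hedged sketch of the apply()/step() distinction (apply() is from this release; the PyTorch-style constructor and whether apply() takes the base rate as an argument are assumptions):

```python
import braintools

opt = braintools.optim.Adam(lr=0.1)
sched = braintools.optim.ExponentialLR(opt, gamma=0.9)   # constructor args assumed

lr_now = sched.apply(0.1)   # compute the scheduled LR without advancing state
sched.step()                # advance the schedule by one epoch, mutating state
```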

Improvements

Documentation

  • Restructured tutorial organization: Renamed and reorganized documentation files for better clarity
    • Moved module tutorials into subdirectories (conn/, init/, input/, file/, surrogate/)
    • Updated table of contents structure across all modules
    • Improved navigation with consolidated index files (index.md instead of toc_*.md)
  • Enhanced visual branding: Updated project logo from JPG to high-resolution PNG format
    • Better quality and transparency support
    • Consistent branding across documentation

Code Quality

  • Test improvements: Refactored scheduler tests with better organization and coverage
    • Each scheduler now has 5-10 dedicated tests
    • Tests verify: basic functionality, optimizer integration, JIT compilation, multiple param groups, state dict save/load
    • Discovered and documented key implementation behaviors (epoch counting, initialization patterns)

CI/CD

  • Updated GitHub Actions: Bumped actions to latest versions for improved security and performance
    • actions/download-artifact: v5 → v6
    • actions/upload-artifact: v4 → v5
    • Better artifact handling in CI pipeline

Bug Fixes

  • Fixed edge cases in learning rate scheduler state management
  • Corrected epoch counting behavior in milestone-based schedulers
  • Improved JIT compilation compatibility for all schedulers

Notes

  • All 17 learning rate schedulers now have comprehensive test coverage (100%)
  • Enhanced reliability for training workflows with thorough validation
  • Improved developer experience with better documentation structure


Full Changelog: v0.1.0...v0.1.4

Version 0.1.0

06 Oct 02:45
aae5e7a

What's Changed

  • Add Momentum optimizers, ZeroInit; refactor state management by @oujago in #54
  • Introduce braintools.surrogate module; enhance init and optim by @oujago in #55

Major Features

Surrogate Gradients Module (braintools.surrogate)

  • New comprehensive surrogate gradient system for training spiking neural networks (SNNs)
  • 18+ surrogate gradient functions with straight-through estimator support:
    • Sigmoid-based: Sigmoid, SoftSign, Arctan, ERF
    • Piecewise: PiecewiseQuadratic, PiecewiseExp, PiecewiseLeakyRelu
    • ReLU-based: ReluGrad, LeakyRelu, LogTailedRelu
    • Distribution-inspired: GaussianGrad, MultiGaussianGrad, InvSquareGrad, SlayerGrad
    • Advanced: S2NN, QPseudoSpike, SquarewaveFourierSeries, NonzeroSignLog
  • Customizable hyperparameters (alpha, sigma, width, etc.) for fine-tuning gradient behavior
  • Comprehensive tutorials: 2 detailed notebooks covering basics and customization
  • Enables gradient-based training of SNNs via backpropagation through time
  • Over 2,600 lines of implementation with extensive test coverage
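
A hedged sketch of the standard surrogate-gradient pattern, a Heaviside forward pass with a smooth backward pass (the class name is from this release; alpha and the call convention are assumptions):

```python
import jax
import jax.numpy as jnp
import braintools

# Forward: Heaviside(v); backward: sigmoid-shaped surrogate derivative.
spike_fn = braintools.surrogate.Sigmoid(alpha=4.0)   # alpha: surrogate sharpness

def total_spikes(v):
    return spike_fn(v).sum()        # spikes are 0/1 in the forward pass

v = jnp.array([-0.5, 0.2, 1.3])     # membrane potentials relative to threshold
grads = jax.grad(total_spikes)(v)   # nonzero everywhere thanks to the surrogate
```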

New Features

Learning Rate Schedulers (braintools.optim)

  • ExponentialDecayLR scheduler: Fine-grained exponential decay with step-based control
    • Support for transition steps, staircase mode, delayed start, and bounded decay
    • Better control than epoch-based ExponentialLR for step-level scheduling
    • Compatible with Optax's exponential_decay schedule
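
A hedged sketch (the scheduler name and feature list are from this release; argument names mirror optax.exponential_decay and are assumptions, as is standalone construction without an optimizer):

```python
import braintools

sched = braintools.optim.ExponentialDecayLR(
    init_value=1e-3,
    decay_rate=0.96,
    transition_steps=1000,   # step-based rather than epoch-based decay
    transition_begin=500,    # delayed start
    staircase=True,          # discrete decay at each transition boundary
    end_value=1e-5,          # bounded decay
)
```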

Improvements

API Refinements

  • Deprecation warnings added for future API changes:
    • Deprecated beta1 and beta2 parameters in Adam optimizer (use b1 and b2 instead)
    • Deprecated unit parameter in various initializers (use UNITLESS by default)
    • Deprecated the init_call function (use param instead) for improved consistency
  • Enhanced state management: Refactored UniqueStateManager to utilize pytree methods
  • Comprehensive tests: Added extensive tests for UniqueStateManager methods and edge cases

Documentation

  • Updated API documentation for new surrogate gradient module
  • Added learning rate scheduler documentation for ExponentialDecayLR
  • Enhanced optimizer tutorials with updated examples
  • Clarified docstrings for FixedProb class and variance scaling initializer

Code Quality

Internal Improvements

  • Updated copyright information from BDP Ecosystem Limited to BrainX Ecosystem Limited
  • Improved consistency across codebase with standardized function signatures
  • Better default parameter handling (UNITLESS for unit parameters)
  • Enhanced test coverage for state management and optimizers

Metric Enhancements

  • Improved correlation and firing metrics implementation
  • Enhanced LFP (Local Field Potential) analysis functions
  • Better error handling and validation in metric computations

Breaking Changes

  • Deprecation notices (not yet removed, but will be in future versions):
    • beta1/beta2 parameters in Adam optimizer (use b1/b2)
    • unit parameter in initializers (defaults to UNITLESS)
    • init_call function (use param instead)

Notes

  • This release focuses on enabling gradient-based training for spiking neural networks
  • The surrogate gradient module is a major addition for neuromorphic computing and SNN research
  • Enhanced learning rate scheduling provides more control for training workflows

Full Changelog: v0.0.13...v0.1.0

Version 0.0.13

02 Oct 03:36
6103790

What's Changed

  • Standardize notebook formatting and enhance load_matfile by @chaoming0625 in #39
  • Add comprehensive connectivity module and enhance input generation by @chaoming0625 in #40
  • Add input module tutorials and improve connectivity generation by @chaoming0625 in #41
  • Standardize docstring code examples and enhance Step duration inference by @chaoming0625 in #42
  • Introduce composable connectivity API and new spike processing modules by @chaoming0625 in #43
  • Introduce modular, model-specific connectivity with unified API by @chaoming0625 in #44
  • Enhance docstrings with examples and detailed descriptions by @chaoming0625 in #45
  • Revert "Enhance docstrings with examples and detailed descriptions" by @chaoming0625 in #46
  • Refactor conn module, add composable initialization API by @chaoming0625 in #47
  • ⬆️ Bump actions/setup-python from 5 to 6 by @dependabot[bot] in #49
  • Introduce comprehensive Optax optimizers and LR schedulers by @chaoming0625 in #50
  • Add braintools.init module, unify initializers, simplify conn API by @chaoming0625 in #51
  • Add topological network patterns; refactor connectivity API by @chaoming0625 in #52
  • Introduce unified init, advanced conn patterns, and Optax optimizers by @chaoming0625 in #53

Major Features

New Initialization Framework (braintools.init)

  • Unified initialization API consolidating all weight and parameter initialization strategies
  • Distance-based initialization: Support for distance-modulated weight patterns
  • Variance scaling strategies: Xavier, He, LeCun initialization methods
  • Orthogonal initialization for improved training stability
  • Composite distributions for complex initialization patterns
  • Simplified API with consistent parameter naming across all initializers
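
A hedged sketch of the call convention (the strategies are from this release; the concrete class names and the shape-call form are assumptions):

```python
import braintools

# Concrete class names and the shape-call form below are assumptions.
w_init = braintools.init.XavierNormal()      # variance-scaling strategy
W = w_init((128, 64))                        # sample a (128, 64) weight matrix

q_init = braintools.init.Orthogonal()
Q = q_init((64, 64))                         # orthogonal square matrix
```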

Advanced Connectivity Patterns (braintools.conn)

  • Topological network patterns:
    • Small-world and scale-free networks
    • Hierarchical and core-periphery structures
    • Modular and clustered random connectivity
  • Enhanced biological connectivity:
    • Excitatory-inhibitory balanced networks
    • Distance-dependent connectivity with multiple profiles
    • Compartment-specific connectivity (dendrite, soma, axon)
  • Spatial connectivity improvements:
    • 2D convolutional kernels for spatial networks
    • Position-based connectivity with normalization
    • Distance modulation using composable profiles

Comprehensive Optax Integration (braintools.optim)

  • Full Optax optimizer support: Adam, SGD, RMSProp, AdaGrad, AdaDelta, and more
  • Advanced learning rate schedulers:
    • Cosine annealing with warm restarts
    • Polynomial decay with warmup
    • Piecewise constant schedules
    • Sequential and chained schedulers
  • Improved optimizer state management with unique state handling
  • Parameter groups with per-group learning rates
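
A hedged sketch of optimizer/scheduler wiring (the names above are from this release; constructor arguments and the update/step call forms are assumptions):

```python
import braintools

opt = braintools.optim.Adam(lr=1e-3)
sched = braintools.optim.CosineAnnealingWarmRestarts(opt, T_0=10)  # restart every 10 epochs

# Inside the training loop (grads: a pytree matching the registered params):
#   opt.update(grads)
#   sched.step()
```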

Improvements

API Enhancements

  • Simplified conn module API with direct class access
  • Refactored initialization calls for consistency
  • Improved type annotations throughout
  • Better default parameter handling

Documentation & Tutorials

  • Updated tutorial structure for connectivity patterns
  • New examples for topological networks
  • Enhanced API documentation with detailed examples
  • Improved code readability in tutorials

Code Quality

  • Comprehensive test coverage for new features
  • Better error handling and validation
  • Consistent naming conventions
  • Removed deprecated and redundant code

Breaking Changes

  • Renamed PointNeuronConnectivity to PointConnectivity
  • Renamed ConvKernel to Conv2dKernel
  • Unified initializer names (e.g., ConstantWeight → Constant)
  • Removed PopulationRateConnectivity class
  • Changed some parameter names for clarity (e.g., unified use of rng parameter)

Full Changelog: v0.0.12...v0.0.13

Version 0.0.12

24 Sep 05:43
ed3bc07

Major Features

Comprehensive Visualization System

  • New visualization modules for neural data analysis:
    • neural.py: Spike rasters, population activity, connectivity matrices, firing rate maps
    • three_d.py: 3D visualizations for neural networks, brain surfaces, trajectories, electrode arrays
    • statistical.py: Statistical plotting tools (confusion matrices, ROC curves, correlation plots)
    • interactive.py: Interactive visualizations with Plotly support
    • colormaps.py: Neural-specific colormaps and publication-ready styling
  • 15+ new tutorial notebooks covering all visualization techniques
  • Brain-specific colormaps for membrane potential, spike activity, and connectivity

Enhanced Numerical Integration

  • New ODE integrators:
    • Runge-Kutta methods: RK23, RK45, RKF45, DOP853, DOPRI5, SSPRK33
    • Specialized methods: Midpoint, Heun, RK4(3/8), Ralston RK2/RK3, Bogacki-Shampine
  • New SDE integrators: Heun, Tamed Euler, Implicit Euler, SRK2, SRK3, SRK4
  • IMEX integrators for stiff equations: Euler, ARS(2,2,2), CNAB
  • DDE integrators for delay differential equations
  • Comprehensive test coverage and accuracy verification
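
A hedged sketch of fixed-step integration (the method names above are from this release; the module path, factory form, and f(t, y) signature are hypothetical):

```python
import braintools

def f(t, y):
    return -y                 # dy/dt = -y; exact solution y0 * exp(-t)

step = braintools.integrate.rk45(f)   # hypothetical factory returning a stepper
t, y, dt = 0.0, 1.0, 0.01
for _ in range(100):
    y = step(t, y, dt)        # one RK45 step of size dt
    t += dt                   # after 100 steps, y ≈ exp(-1)
```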

Advanced Spike Processing

  • Spike encoders: Rate, Poisson, Population, Latency, and Temporal encoders
  • Enhanced spike operations with bitwise functionality
  • Spike metrics: Victor-Purpura distance, spike train synchrony, correlation indices
  • Tutorial notebooks for spike encoding and analysis
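
A hedged sketch (the encoder types are from this release; the module path, class name, and call convention are hypothetical):

```python
import jax.numpy as jnp
import braintools

# Module path, class name, and call convention below are hypothetical.
encoder = braintools.encoding.PoissonEncoder()
rates = jnp.array([0.1, 0.5, 0.9])       # per-neuron firing probabilities
spikes = encoder(rates, n_time=100)      # (100, 3) binary spike train
```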

New Optimization Framework

  • NevergradOptimizer: Integration with Nevergrad optimization library
  • ScipyOptimizer: Enhanced scipy optimization with flexible bounds support
  • Refactored optimizer architecture for better extensibility
  • Support for dict and sequence parameter bounds
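
A hedged sketch of derivative-free fitting with dict-style bounds (the class name and bounds support are from this release; the constructor and minimize() call are assumptions):

```python
import braintools

def loss(x, y):
    return (x - 1.0) ** 2 + (y + 2.0) ** 2   # toy objective with optimum at (1, -2)

# Dict-style parameter bounds, as mentioned above; signatures assumed.
opt = braintools.optim.NevergradOptimizer(
    loss,
    bounds={"x": (-5.0, 5.0), "y": (-5.0, 5.0)},
    n_sample=100,
)
best = opt.minimize(n_iter=20)
```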

Improvements

File Management

  • Enhanced msgpack serialization with mismatch handling options
  • Improved checkpoint loading with better error recovery
  • Support for handling mismatched keys during state restoration

Metrics and Analysis

  • LFP analysis functions: Power spectral density, coherence analysis, phase-amplitude coupling
  • Functional connectivity: Dynamic connectivity computation
  • Classification metrics: Binary, multiclass, focal loss, and smoothing techniques
  • Regression losses: MSE, MAE, Huber, and quantile losses

Documentation

  • Added comprehensive API documentation for all new modules
  • Created tutorials for:
    • ODE/SDE integration methods
    • Classification and regression losses
    • Pairwise and embedding similarity
    • Spiking metrics and LFP analysis
    • Advanced neural visualization techniques
  • Updated project description from "brain modeling" to "brain simulation"
  • Changed references from BrainPy to BrainTools throughout

Code Quality

  • Added extensive unit tests for all new modules
  • Improved type hints and parameter documentation
  • Better error handling and validation
  • Consistent API design across modules

Breaking Changes

  • Refactored optimizer module structure (moved from single optimizer.py to separate modules)
  • Removed unused key parameter from spike encoder methods
  • Updated some function signatures for clarity

Bug Fixes

  • Fixed Softplus unit scaling issues
  • Corrected paths in publish workflow
  • Fixed formatting in ODE integrator documentation
  • Resolved msgpack checkpoint handling errors

What's Changed

  • ⬆️ Bump actions/download-artifact from 4 to 5 by @dependabot[bot] in #34
  • ⬆️ Bump actions/setup-python from 5 to 6 by @dependabot[bot] in #35
  • Add new metrics, integrators, encoders, and optimizers; update documentation by @chaoming0625 in #36
  • Add comprehensive visualization modules and msgpack mismatch handling by @chaoming0625 in #37
  • Add animation and dynamics tutorial by @chaoming0625 in #38

Full Changelog: v0.0.11...v0.0.12

Version 0.0.11

15 Sep 05:52
a6b6eb1

Full Changelog: v0.0.10...v0.0.11

Version 0.0.10

13 Sep 03:16

Full Changelog: https://github.com/chaobrain/braintools/commits/v0.0.10