Verified & Integrated MXINT Implementation and Evaluation on NeRF (ADLS Group 0) #277
Open · omaralkhatib03 wants to merge 22 commits into DeepWok:main from splogdes:main
Conversation
* fix mxint
* fix doc string
* add mxint relu
* update mxint quantize test
* remove mxint linear test
* run python black

Co-authored-by: Ollie Cosgrove <oc121@gpu35.doc.ic.ac.uk>

* Added mxint_matrix_cat.sv: Untested, Reviewing Interface
* Added Test Bench
* Added Test Bench
* Added Test Bench
* Run python black
* verible-v0.0-2776-gbaf0efe9
* Removed waves argument
* Removed wrong file

Co-authored-by: splogdes <95136830+splogdes@users.noreply.github.com>

* fix mxint cast
* run verible
* Working Accumulator
* Working Dot Product
* fix accumulate
* quantizer
* working dot product
* cleanup
* better testing
* wip dot_product
* fix dot product tb
* fix accumulate
* Working linear layer for mxint no bias
* format code
* Fix mxint linear tb
* remove prints
* small fix to accumulate shift value
* fix accumulator and update tb
* final changes to accum tb
* verible format
* fix dot product testbench
* fix bitwidths to match new accum
* verible fix

Co-authored-by: luigirinaldi <luigirinaldi@users.noreply.github.com>

* better testing on linear layer
* format
* even better random and asserts
* organize random probabilities
* HACK to avoid mxint_cast issues
* update tb to match working configurations

Co-authored-by: luigirinaldi <luigirinaldi@users.noreply.github.com>

* Use bias in mxint quantise to follow the spec more closely
* python black

* Added mxint relu
* fix test bench and reformat

Co-authored-by: Omar Alkhatib <oa321@ic.ac.uk>

* Added support to compute bias
* verible format
* Fixed accumulator tb + added tb to workflow
* black format
* Fix mxint cast and add support for off by one mode in cocotb monitor (#9)
* fix mxint cast mostly still bugs improve mxint linear
* add an off by one mode to the monitor
* Remove asserts from mxint_cast.sv
* Add off by one to test bench monitor
* Fix overflow problems
* better testing on linear layer (#7)
* better testing on linear layer
* format
* even better random and asserts
* organize random probabilities
* HACK to avoid mxint_cast issues
* update tb to match working configurations

Co-authored-by: luigirinaldi <luigirinaldi@users.noreply.github.com>

* Fixed bias
* Use bias in mxint quantise to follow the spec more closely (#5)
* Use bias in mxint quantise to follow the spec more closely
* python black
* Implemented cat for non-square block-size (#10)
* Feat add relu (#11)
* Added mxint relu
* fix test bench and reformat

Co-authored-by: Omar Alkhatib <oa321@ic.ac.uk>

* Fixed bias
* Removed zeros
* Fixed accum tb

Co-authored-by: splogdes <95136830+splogdes@users.noreply.github.com>
Co-authored-by: Luigi Rinaldi <82770895+luigirinaldi@users.noreply.github.com>
Co-authored-by: luigirinaldi <luigirinaldi@users.noreply.github.com>

* Better relu TB
* Better relu TB
* black format

* Increase casting range
* Defo working mxint cast
* format
* add asserts
* add asserts
* Change bounds of linear tb

* Training finally working
* TPE Search working
* TPE Search Working Correctly
* Training OK
* Testing fixed
* Configs
* remove a.json
* PSNR as search metric + Corrected Formatting

* add relu dependencies
* wiring works
* working quite stably relu
* quantize relu
* less logging
* wip
* working relu and mlp
* format
* fix rebase artifacts
* format python
* remove cat from test

Co-authored-by: splogdes <95136830+splogdes@users.noreply.github.com>
Co-authored-by: luigirinaldi <luigirinaldi@users.noreply.github.com>

* Added vivado stuff
* Verilog format
* Modified code to make it synthesisable
* Working Optuna Deep Fully Connected Analysis
* migrating to run host
* Fixing mlp features
* Re-defining parameters
* Clean up
* Fixed test_emit_verilog_mxint.py

Co-authored-by: luigirinaldi <luigirinaldi@users.noreply.github.com>

Description
This PR introduces a significant refactor of the MXINT implementation. It strengthens verification and fully integrates MXINT quantization into the Mase framework. These changes enable the emission of deep, fully connected PyTorch networks and allow software analysis of MXINT quantization with Optuna. Additionally, we enhance the NeRF vision model and use it to evaluate the quantized implementation.
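For context, MXINT stores a block of values as low-bit-width integer mantissas that share a single block exponent. The NumPy sketch below illustrates that idea only; it is not the quantizer shipped in this PR, and the function name, block size, and rounding details are assumptions.

```python
import numpy as np

def mxint_block_quantize(x: np.ndarray, mantissa_bits: int = 8, block_size: int = 16):
    """Illustrative MXINT-style quantiser: each block of `block_size` values shares
    one exponent; the values themselves are stored as signed integer mantissas."""
    assert x.size % block_size == 0, "pad the tensor to a multiple of the block size"
    blocks = x.reshape(-1, block_size)
    # Shared exponent per block, derived from the largest magnitude in the block.
    max_abs = np.max(np.abs(blocks), axis=1, keepdims=True)
    exponent = np.floor(np.log2(np.where(max_abs == 0, 1.0, max_abs)))
    # Scale so the largest value fits in a signed `mantissa_bits` integer.
    scale = 2.0 ** (exponent - (mantissa_bits - 2))
    qmax = 2 ** (mantissa_bits - 1) - 1
    mantissa = np.clip(np.round(blocks / scale), -qmax - 1, qmax).astype(np.int32)
    dequantised = mantissa * scale  # what the hardware "sees" after casting back
    return mantissa, exponent.astype(np.int32), dequantised
```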
Contributions
Detailed Implementation Notes
Changes to the analysis
- `integer` to `fixed-point` to avoid double naming
- `add_hardware_metadata` pass: `INTERNAL_COMP` dictionary and the individual IP entries for `MXint_relu`
- `quantize` pass: `data_out` variables use the `data_in` config if a `data_out` config is not explicitly defined, and `integer` is treated as `fixed-point` if it is encountered (see the sketch below)
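A minimal sketch of the fallback described above, assuming a per-node config dictionary keyed by `data_in_0` / `data_out_0`; the key names and the `resolve_data_out_config` helper are hypothetical, the real logic lives in the quantize pass:

```python
def resolve_data_out_config(node_config: dict) -> dict:
    """Sketch: reuse the data_in config for data_out when no explicit data_out
    config is given, and treat the legacy "integer" type name as "fixed-point"."""
    cfg = dict(node_config.get("data_out_0") or node_config["data_in_0"])
    if cfg.get("type") == "integer":
        cfg["type"] = "fixed-point"
    return cfg
```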
Changes to emission
- `emit_bram`: `emit_parameters_in_mem_internal_mxint` function and `mxint_bram_template`; `emit_parameters_in_dat_internal` handles the mxint datatype (see the sketch below)
- `emit_tb`
- `emit_top`
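For intuition, here is a toy example of how block mantissas plus a shared exponent could be dumped into a `$readmemh`-style memory file. The word layout and the `write_mxint_mem` helper are illustrative assumptions, not the exact format produced by `mxint_bram_template` or `emit_parameters_in_dat_internal`.

```python
def write_mxint_mem(path: str, mantissas, exponent: int, mantissa_bits: int = 8) -> None:
    """Toy writer: one hex word for the shared exponent, then one word per mantissa."""
    hex_digits = (mantissa_bits + 3) // 4
    with open(path, "w") as f:
        # Shared block exponent first (stored here as an 8-bit two's-complement word).
        f.write(f"{exponent & 0xFF:02x}\n")
        # Then each mantissa, masked to its bit width and written as hex.
        for m in mantissas:
            f.write(f"{int(m) & ((1 << mantissa_bits) - 1):0{hex_digits}x}\n")
```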
Changes to the cocotb interfaces
- `off-by-one` mode for the multi-signal monitor (see the sketch below)
- `test_emit_verilog_linear_mxint`:
  - `..._mlp` generates and tests a multi-layer perceptron with a random number of layers and random dimensions
  - `..._linear` simply tests the linear layer, to ensure the weights and biases being emitted are correct
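As an illustration of the off-by-one mode, the check below is the kind of comparison a cocotb output monitor can apply when matching DUT words against the software model; the function name and structure are hypothetical, not the actual monitor API in Mase.

```python
def beats_match(observed, expected, off_by_one: bool = True) -> bool:
    """Compare one beat of integer words; optionally tolerate a +/-1 LSB
    difference caused by rounding mismatches between hardware and the model."""
    if len(observed) != len(expected):
        return False
    tol = 1 if off_by_one else 0
    return all(abs(int(got) - int(want)) <= tol for got, want in zip(observed, expected))
```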