
Fix result of not equal compare w signaling NaNs (OpenHW)#19

Open
rgiunti wants to merge 8 commits into pulp-platform:pulp from FondazioneChipsIT:rgiunti/spatz/vmfne-sig-NaN

Conversation

@rgiunti

@rgiunti rgiunti commented Apr 3, 2026

Context

This fix is needed for the Spatz vmfne test with signaling NaNs. The stable fpnew version for Spatz is 0.1.3, so this commit has been placed on top of it.

Original OpenHW PR description

  • 🩹 Set result bit of not equal compare on signaling NaN

  • 💡 Update comment w.r.t. signaling NaNs in compares
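
As background for the fix above: under IEEE 754, any comparison with a NaN operand is unordered, so "equal" is false and "not equal" is true, for quiet and signaling NaNs alike (a signaling NaN additionally raises the invalid flag). A minimal Python sketch of the expected not-equal results; the bit patterns are illustrative binary32 encodings:

```python
import struct

def f32(bits: int) -> float:
    """Reinterpret a 32-bit pattern as a float (via binary32)."""
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# All-ones exponent with non-zero mantissa is a NaN; the mantissa MSB
# distinguishes quiet (set) from signaling (clear).
snan = f32(0x7F800001)  # signaling NaN bit pattern
qnan = f32(0x7FC00000)  # quiet NaN bit pattern

# "Not equal" with any NaN operand must be true, which is the
# result bit the commit above sets for signaling NaNs.
print(1.0 != snan)   # True
print(snan != snan)  # True
print(qnan != 2.0)   # True
```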

michael-platzer and others added 8 commits April 3, 2026 10:53
* 🩹 Set result bit of not equal compare on signaling NaN

* 💡 Update comment w.r.t. signaling NaNs in compares
…IMD) (openhwgroup#8)

* Add new multi-format DivSqrt unit from openC910 supporting FP64, FP32, FP16, and SIMD operations
…8ALT are enabled (openhwgroup#9)

* Fix synchronization of THMULTI DivSqrt lanes when FP16ALT, FP8 or FP8ALT are enabled

* Update CHANGELOG-PULP.md
* Add FP16ALT support to THMULTI DivSqrt
…enhwgroup#17)

* Add FP4, FP6, FP6ALT formats and MXDOTP operation support to fpnew_pkg

Extended fpnew_pkg.sv with new floating-point formats and MXDOTP operation
group for MX dot product operations:

- New formats: FP6(E3M2), FP6ALT(E2M3), FP4(E2M1)
- Increased NUM_FP_FORMATS from 6 to 9
- Added MXDOTP operation group (6th group)
- New operations: MXDOTPF (FP), MXDOTPI (INT)
- Updated all format masks from 6-bit to 9-bit
- Added bias_constant() helper function for MXDOTP
- Updated FPU configurations (DEFAULT_NOREGS, DEFAULT_SNITCH)
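
The mask widening mentioned above can be sketched as follows; the format names, their ordering, and the example mask are assumptions for illustration, not the actual fpnew_pkg definitions:

```python
# One enable bit per floating-point format; widening adds three new
# formats at the top of the mask, initially disabled.
OLD_FORMATS = ["FP32", "FP64", "FP16", "FP8", "FP16ALT", "FP8ALT"]  # 6 formats
NEW_FORMATS = OLD_FORMATS + ["FP8ALT2", "FP6", "FP4"]               # 9 formats (names illustrative)

def widen_mask(mask6: int) -> int:
    """Zero-extend a 6-bit format mask to 9 bits (new formats stay off)."""
    return mask6 & 0b111111  # high bits of the 9-bit mask remain 0

mask = widen_mask(0b101011)
print(f"{mask:09b}")  # '000101011'
```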

* Add MXDOTP multi-format package definitions

Introduces fpnew_mxdotp_multi_pkg.sv with parameterized configuration for
MXDOTP operations supporting mixed-precision
arithmetic with low precision formats.

Configuration:
- Source formats: FP4, FP6, FP6ALT, FP8, FP8ALT, INT8
- Destination formats: FP32, FP16ALT

* Add MXDOTP multi-format core implementation

Add core MXDOTP implementation supporting
very low-precision floating-point formats (FP4, FP6, FP8) and INT8.

New files:
- fpnew_mxdotp_multi_modules.sv: 14 modules implementing
  the MXDOTP datapath (classification, multiplication, shifting,
  accumulation, normalization, rounding)
- fpnew_mxdotp_multi.sv: Top-level MXDOTP unit integrating all modules
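
A toy model of the datapath stages listed above (multiplication, accumulation, then scaling), assuming a simplified MX-style scheme where each input block carries a shared power-of-two scale; this is a sketch for intuition, not the RTL:

```python
def mxdotp(a_vals, a_scale, b_vals, b_scale, acc=0.0):
    """Toy MX dot product: low-precision elements, shared block scales."""
    # multiplication: elementwise products of the block elements
    products = [x * y for x, y in zip(a_vals, b_vals)]
    # accumulation in the wide destination format (here: Python float)
    total = sum(products)
    # apply the power-of-two block scales and add the running accumulator
    return acc + total * 2.0 ** (a_scale + b_scale)

# (1*2 + 0.5*4 - 2*1) * 2^(1+0) = 2.0 * 2 = 4.0
print(mxdotp([1.0, 0.5, -2.0], 1, [2.0, 4.0, 1.0], 0))
```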

* Add MXDOTP wrapper

New file:
- fpnew_mxdotp_multi_wrapper.sv: Wrapper handling operand unpacking,
  FP6 extended operand processing (3-step with unroll factor), NaN-boxing,
  and scale extraction

Changes to core module:
- Add NumPipeRegs and PipeConfig as module parameters
- Compute NUM_INP_REGS, NUM_MID_REGS, NUM_OUT_REGS from parameters

* Extend classifier for MX floating-point formats

Add MX parameter and format-specific classification logic to support low-precision formats used in MXDOTP operations.

Changes:
- Add MX parameter (default 1) to enable MX-specific classification
- FP8ALT (E4M3): No infinity, NaN when exp=all1s and man=all1s
- FP6/FP6ALT/FP4 (E3M2/E2M3/E2M1): No infinity or NaN
- Other formats: Standard IEEE-754 classification
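
The classification rules above can be sketched as follows, assuming hypothetical bit layouts with the stated exponent/mantissa widths; this is an illustration, not the classifier RTL:

```python
def classify(bits: int, exp_bits: int, man_bits: int, fmt: str) -> str:
    """Classify per the format-specific rules described in the commit."""
    exp = (bits >> man_bits) & ((1 << exp_bits) - 1)
    man = bits & ((1 << man_bits) - 1)
    all1_exp = (1 << exp_bits) - 1
    all1_man = (1 << man_bits) - 1
    if fmt == "FP8ALT":  # E4M3: no infinity; NaN only at exp=man=all-ones
        return "nan" if (exp == all1_exp and man == all1_man) else "finite"
    if fmt in ("FP6", "FP6ALT", "FP4"):  # no infinity or NaN encodings
        return "finite"
    # other formats: standard IEEE-754 special-value decoding
    if exp == all1_exp:
        return "nan" if man != 0 else "inf"
    return "finite"

print(classify(0x7F, 4, 3, "FP8ALT"))        # 'nan'  (E4M3, exp and man all-ones)
print(classify(0x7E, 4, 3, "FP8ALT"))        # 'finite' (would be NaN in IEEE)
print(classify(0x7C00, 5, 10, "FP16"))       # 'inf'  (standard IEEE rule)
```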

* Add configurable format parameters to MXDOTP wrapper and pkg

* Integrate MXDOTP into opgroup multifmt slice

- Add elaboration-time checks: fatal for Width!=64, missing FP32,
  missing FP8/INT8; warnings for inactive FP6/FP6ALT/FP4
- Add NUM_MX_LANES localparam and lane generation for MXDOTP
- Instantiate fpnew_mxdotp_multi_wrapper with FpFmtConfig and IntFmtConfig

* Update SDOTP wrapper format masks for extended format support

- Widen FpSrcFmtConfig bitmasks from 6b to 9b to match the extended
  NUM_FP_FORMATS (FP6, FP6ALT, FP4 added but masked off for SDOTP)

* Add MXDOTP sources to Bender and src_files

* Update documentation for MXDOTP

* Parameterize MXDOTP format configuration and rename package constants

* Make INT8 optional and unify FP8/INT8 product width

Relax format validation in fpnew_opgroup_multifmt_slice to require only
FP8 and FP8ALT as mandatory base formats, allowing INT8 to be disabled.

* Use bias constant function instead of fixed constant

* Fix default mxdotp operation

* Fix classifier consistency for fp4

* Remove the warning message for MXDOTP about enabled formats
@rgiunti rgiunti requested a review from gamzeisl as a code owner April 3, 2026 09:04
@gamzeisl

gamzeisl commented Apr 8, 2026

The PR branch does not seem to be cleanly based on the current pulp branch. The latest commits from pulp appear to have been reapplied, which makes the diff noisy. Could you please rebase the branch onto the latest pulp and keep only the PR-specific commits?

@rgiunti

rgiunti commented Apr 8, 2026

> The PR branch does not seem to be cleanly based on the current pulp branch. The latest commits from pulp appear to have been reapplied, which makes the diff noisy. Could you please rebase the branch onto the latest pulp and keep only the PR-specific commits?

Hi @gamzeisl. I didn't do that because Spatz uses cvfpu version pulp0.1.3, so this new commit needs to sit on top of the pulp commit corresponding to the pulp0.1.3 tag. I tried your suggestion, since it is the cleaner approach, but it would require Spatz to use the latest commit, and that currently creates problems.

@gamzeisl

gamzeisl commented Apr 8, 2026

Hi @rgiunti, thanks for the explanation. In that case, merging this PR would not be very useful for Spatz, given the changes since pulp0.1.3. I would suggest keeping a local, Spatz-specific branch for now, until the Spatz FPU is upgraded to the latest pulp version.

For this PR, I would like to merge the NaN fix if you can provide it as a single clean commit on top of the current branch.

