
Conversation

@Beanow commented Sep 2, 2022

Besides refactoring to split out the ruleCheck (and TS, oops! my hands slipped there), this adds an example of testing the collected feedback against a known 100% setup, since a correct rule should not break when compared to a perfect bias.

One feedback entry currently comes up bad; you can see this with:

yarn
yarn test --watch
 FAIL  core/core.test.ts
  ✕ Smoke test rule checker (3 ms)

  ● Smoke test rule checker

    expect(received).toEqual(expected) // deep equality

    - Expected  - 1
    + Received  + 9

    - Set {}
    + Set {
    +   Object {
    +     "biasIndex": 3,
    +     "dx": 0.10089285714285717,
    +     "feedback": "good",
    +     "timestamp": 1661958970541,
    +     "value": 0.11517857142857141,
    +   },
    + }
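
For reference, a minimal sketch of the shape of that smoke test (the fixture imports and the ruleCheck signature are assumptions for illustration; the actual test lives in core/core.test.ts):

```ts
import { ruleCheck } from "./core"; // assumed export from this refactor

// Hypothetical fixtures: a setup known to be 100% correct, plus the
// feedback that was collected while dialing it in.
import { perfectSetup, collectedFeedback } from "./fixtures";

test("Smoke test rule checker", () => {
  // Check every collected feedback entry against the perfect bias.
  // A correct rule should flag nothing, hence the empty set.
  const contradictions = ruleCheck(collectedFeedback, perfectSetup);
  expect(contradictions).toEqual(new Set());
});
```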

This suggests (assuming the feedback itself is correct) that the good breakpoint might be slightly off:

export const goodBreakpoint: number = 0.1 + eps;
// versus dx: 0.10089285714285717
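
A minimal sketch of why that entry trips the check, assuming the rule treats dx <= goodBreakpoint as consistent with "good" feedback (the eps value here is a placeholder):

```ts
const eps = 1e-9; // placeholder; the repo's actual eps may differ
const goodBreakpoint: number = 0.1 + eps;

// dx from the failing feedback entry above.
const dx = 0.10089285714285717;

// 0.10089... > 0.1 + eps, so this "good" feedback gets flagged as a
// contradiction even though the feedback itself is presumably fine.
console.log(dx <= goodBreakpoint); // false
```

If that feedback is trustworthy, the breakpoint would need to sit slightly above this dx for the check to pass.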

I understand this isn't exactly a "minimal" PR, so feel free to pick and choose as you see fit 😄

@vercel (bot) commented Sep 2, 2022

The latest updates on your projects:

f1-manager-calc: ✅ Ready (preview updated Sep 2, 2022 at 5:38PM UTC)

@iebb (Owner) commented Sep 3, 2022

I'd like to test the 0.1 case, since I haven't tested it as precisely as the optimal (39/5600) one. I'll look into it, but I haven't received any feedback about wrong results caused by the good/great boundary, so I assume it's working for the most part.

This code definitely needs a lot of rework to meet quality standards, and TS could be a good entry point for that. I'll check it carefully when I have time.
