
Conversation


@dmunch dmunch commented Nov 12, 2025

Proposed changes

Adding support for EmbeddingGemma to Embedders.

Basically cherry-picked all commits from ml-explore/mlx-swift-examples#398 into the new repository, made sure everything compiles, and ran swift-format.

I'm pretty new to MLX and thought this would be a good learning opportunity and a chance to get something in. Let me know what modifications etc. you'd still like before this can be merged.

Checklist

Put an x in the boxes that apply.

  • I have read the CONTRIBUTING document
  • I have run pre-commit run --all-files to format my code / installed pre-commit prior to committing changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have updated the necessary documentation (if needed)

dmunch and others added 10 commits November 12, 2025 17:20
Cherry-Pick of 86bb1265168363cc5096b8df5f82075a5702ef2e
Co-authored-by: Tom Nickson <tnickson@apple.com>
Cherry-Pick of d44e2c3d6d5365655aa0e179432cf3548ecd17d4
Co-authored-by: Tom Nickson <tnickson@apple.com>
- Add useBidirectionalAttention config parameter
- Apply sliding window size adjustment for bidirectional mode
- Implement createBidirectionalSlidingWindowMask function
- Update mask creation logic to support both causal and bidirectional attention
- Based on patches 40694 and 40700 for EmbeddingGemma support
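The mask change described above is language-agnostic: a bidirectional sliding-window mask lets each position attend within the window in both directions, whereas a causal sliding-window mask additionally requires attending only backwards. As a minimal sketch of that logic (a NumPy illustration only; the PR's actual Swift/MLX implementation and its `createBidirectionalSlidingWindowMask` signature are not reproduced here):

```python
import numpy as np

def bidirectional_sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # True where attention is allowed: position i may attend to any
    # position j with |i - j| < window, in BOTH directions. A causal
    # sliding-window mask would also require j <= i.
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) < window

mask = bidirectional_sliding_window_mask(5, 2)
# Each row allows the token itself and its immediate neighbours,
# and the mask is symmetric (no causal direction).
```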

Cherry-Picked
Commit: 46be017e9f4b076f2d0842cf78175ac42d894b0a
Co-authored-by: Tom Nickson <tnickson@apple.com>
Cherry-Picked
Commit: 8dc179ccc21b26fb0856016ec9f2b7d5792979e0
Co-authored-by: Tom Nickson <tnickson@apple.com>
Commit: 733e142542cfaf85ca0304d37f908b176c54edfc
Co-authored-by: Tom Nickson <tnickson@apple.com>
Commit: 96ee882cd7c6fd3573b034686d3f3c5afe1ee04a
Co-authored-by: Tom Nickson <tnickson@apple.com>
// Copyright © 2024 Apple Inc.

import MLX
import MLXLMCommon
Collaborator
Is this just to pick up the quantization? I think the Embedders should not require MLXLMCommon / MLXLLM if possible. A copy of Quantization is OK; if we end up with a lot of duplication, then MLXLMCommon might make sense.

"gemma3_text": { url in
    let configuration = try JSONDecoder().decode(
        Gemma3TextConfiguration.self, from: Data(contentsOf: url))
Collaborator

I think it makes sense to copy this config type into Embedders rather than add new linkage. Even sharing config types between models in the same library is rarely done.

Comment on lines +2 to +3
import MLXLLM
import MLXLMCommon
Collaborator

See elsewhere -- I think this should be done without adding linkage to additional libraries.


  @ModuleInfo private var model: Gemma3Model
- @ModuleInfo(key: "lm_head") var lmHead: Linear
+ @ModuleInfo(key: "lm_head") var lmHead: Module  // Can be Linear or QuantizedLinear
Collaborator

QuantizedLinear is a subtype of Linear so this should have been OK as-is -- did you see a problem here?

@davidkoski
Collaborator

@dmunch this looks good overall, but see my comments about not adding a new dependency on MLXLMCommon and MLXLLM.

Thanks!
