Multiple slaves in one device #24
Merged
+550 −87
Problem:
When creating a new Modbus device with a template selected in config_flow,
the template was stored in entry.options[CONF_TEMPLATE] but never loaded
because the migration code created the slave structure without copying
the template reference.
Root cause:
1. Config flow stores: options = {CONF_TEMPLATE: "builtin:SDM230"}
2. Migration runs and creates: slaves = [{"slave_id": 1, "registers": []}]
3. Template loading checks: slave_info.get("template") → None (not found!)
4. Template never loaded
Solution:
During migration, check for pending template in entry.options[CONF_TEMPLATE]
and copy it to the slave structure so the per-slave template loading code
can find and load it:
- slave_data["template"] = pending_template
- Then remove CONF_TEMPLATE from global options (moved to slave)
This ensures templates work correctly for:
- New devices created with templates
- Migrated devices (backward compatibility)
- Multi-slave configurations (each slave can have its own template)
…iBayb Fix template loading for new Modbus devices in multi-slave architecture
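A minimal sketch of the migration step described above, assuming simplified option handling; the helper name migrate_entry_to_slaves() and the literal key strings are illustrative, not the integration's actual API:

```python
# Sketch only: the key constants mirror CONF_TEMPLATE / CONF_SLAVES / CONF_REGISTERS
# from this PR, but the helper itself is hypothetical.
CONF_TEMPLATE = "template"
CONF_SLAVES = "slaves"
CONF_REGISTERS = "registers"


def migrate_entry_to_slaves(options: dict) -> dict:
    """Build the per-slave structure and carry the pending template over."""
    new_options = dict(options)

    # Create the slave structure if the entry predates multi-slave support.
    slaves = new_options.setdefault(
        CONF_SLAVES,
        [{"slave_id": 1, CONF_REGISTERS: new_options.get(CONF_REGISTERS, [])}],
    )

    # Copy the template selected in config_flow onto the slave so the per-slave
    # template loading code can find it, then drop the global key.
    pending_template = new_options.pop(CONF_TEMPLATE, None)
    if pending_template and not slaves[0].get("template"):
        slaves[0]["template"] = pending_template

    return new_options
```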
Problem:
After template loading, the coordinator reported 8 entities loaded but the
platforms created 0 entities (sensor sync: active=0, defined=0).
Root cause:
- Coordinator reads entities from: CONF_SLAVES[slave_index]['registers'] ✓
- Entity platforms read from: entry.options.get(CONF_REGISTERS, []) ✗
After migration, entities are moved into the CONF_SLAVES structure, but the
platforms still looked in CONF_REGISTERS (empty after migration).
Solution:
Updated sync_entities() in entity_base.py to:
1. Check if the protocol is Modbus
2. Check if the CONF_SLAVES structure exists
3. Read entities from slaves[coordinator.slave_index]['registers']
4. Fall back to the old CONF_REGISTERS structure for backward compatibility
This ensures entity platforms can discover entities correctly in both:
- The new multi-slave architecture (CONF_SLAVES)
- The legacy single-slave setup (CONF_REGISTERS)
…iBayb Fix entity platform discovery for multi-slave Modbus architecture
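A sketch of the lookup order described above, assuming the coordinator exposes a slave_index attribute; the function and key names are paraphrased from the commit message rather than taken from entity_base.py verbatim:

```python
CONF_SLAVES = "slaves"
CONF_REGISTERS = "registers"


def get_register_definitions(entry_options: dict, coordinator) -> list[dict]:
    """Return the register definitions one platform should create entities for."""
    slaves = entry_options.get(CONF_SLAVES)

    # New multi-slave layout: read this coordinator's slave entry.
    if slaves:
        index = getattr(coordinator, "slave_index", 0)
        if 0 <= index < len(slaves):
            return slaves[index].get(CONF_REGISTERS, [])
        return []

    # Legacy single-slave layout: fall back to the flat register list.
    return entry_options.get(CONF_REGISTERS, [])
```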
Problem:
After creating first Modbus device, the "Manage Slaves" menu option only
appeared when len(slaves) > 1. This meant users with a single-slave device
had no way to add more slave devices to the same connection.
Root cause:
In async_step_init(), the logic was:
- len(slaves) > 1: show "select_slave" menu ✓
- len(slaves) == 1: show entity management only, NO slave menu ✗
- No slaves: backward compat mode
Solution:
Changed logic to always show "Manage Slaves" menu when slaves structure exists:
- len(slaves) >= 1: show "Manage Slaves" menu (allows adding more)
- len(slaves) == 1: also show entity shortcuts for convenience ("Add entity (quick)")
- Label shows "(1 slave, add more)" to hint that more can be added
This allows users to:
1. Add multiple slave devices to one Modbus connection
2. Configure each slave's entities independently
3. Still have quick shortcuts for single-slave setups
…iBayb Fix multi-slave menu not appearing for single slave devices
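A sketch of the menu branching described above, assuming a recent Home Assistant core where OptionsFlow exposes self.config_entry; the step ids and label text are illustrative, and the target steps (select_slave, add_entity, manage_entities) are assumed to be defined elsewhere in the flow:

```python
from homeassistant import config_entries


class ModbusOptionsFlowSketch(config_entries.OptionsFlow):
    """Illustrative options flow; only the init-step branching is shown."""

    async def async_step_init(self, user_input=None):
        slaves = self.config_entry.options.get("slaves", [])
        menu_options: dict[str, str] = {}

        if slaves:
            # Always offer slave management so a single-slave device can add more.
            if len(slaves) == 1:
                menu_options["select_slave"] = "Manage Slaves (1 slave, add more)"
                # Convenience shortcut straight to entity management.
                menu_options["add_entity"] = "Add entity (quick)"
            else:
                menu_options["select_slave"] = f"Manage Slaves ({len(slaves)} slaves)"
        else:
            # Backward-compat mode for entries created before CONF_SLAVES existed.
            menu_options["manage_entities"] = "Manage entities"

        return self.async_show_menu(step_id="init", menu_options=menu_options)
```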
Problem:
When adding a second slave device, Home Assistant showed 3 devices instead of 2:
1. "Modbus Hub" - original device with 17 entities
2. "Modbus Hub - Modbus Hub" - duplicate with 0 entities (orphaned)
3. "Modbus Hub - Slave 11" - new slave device
Root cause:
The coordinator_key format changed based on number of slaves:
- 1 slave: coordinator_key = entry.entry_id
- 2+ slaves: coordinator_key = f"{entry.entry_id}_slave_{slave_id}"
When adding the second slave, BOTH slaves got new device identifiers
(entry_id_slave_1 and entry_id_slave_11), but the old device with
identifier entry.entry_id remained orphaned in the device registry.
Solution:
1. __init__.py: Always use consistent coordinator_key format:
- coordinator_key = f"{entry.entry_id}_slave_{slave_id}" (for ALL slaves)
- Store coordinator_key in coordinator object for reference
- Maintain backward compatibility by also storing first slave at entry.entry_id
2. Platform files (sensor, number, select, switch):
- Use coordinator.coordinator_key for device identifier if available
- Fall back to entry.entry_id for backward compatibility
- Ensures entities attach to correct device
This ensures:
- No duplicate devices when adding/removing slaves
- Consistent device identification regardless of slave count
- Backward compatibility with existing single-slave setups
- Clean multi-slave architecture
…iBayb Fix duplicate device creation when adding multiple slaves
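A sketch of the consistent coordinator_key scheme and the platform-side fallback described above; DOMAIN, build_coordinator(), and the storage layout in hass.data are assumptions for illustration:

```python
DOMAIN = "modbus_custom"  # assumed domain name


def setup_coordinators(hass, entry, slaves, build_coordinator):
    """Create one coordinator per slave with a uniform key format."""
    coordinators = {}
    for slave in slaves:
        slave_id = slave["slave_id"]
        # Same key format for every slave, regardless of how many there are.
        coordinator_key = f"{entry.entry_id}_slave_{slave_id}"
        coordinator = build_coordinator(hass, entry, slave)  # assumed factory
        coordinator.coordinator_key = coordinator_key
        coordinators[coordinator_key] = coordinator

    # Backward compatibility: the first slave stays reachable at entry_id.
    if coordinators:
        coordinators[entry.entry_id] = next(iter(coordinators.values()))

    hass.data.setdefault(DOMAIN, {})[entry.entry_id] = coordinators
    return coordinators


def device_identifier(coordinator, entry) -> str:
    """Platform-side helper: prefer the coordinator's key, fall back to entry_id."""
    return getattr(coordinator, "coordinator_key", None) or entry.entry_id
```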
Problem:
When deleting a slave device through the options menu, the slave was removed
from the configuration and disappeared from the menu, but the device remained
visible as an empty device in the Home Assistant device registry/overview.
Root cause:
The delete slave logic only:
1. Removed slave from options[CONF_SLAVES]
2. Reloaded the config entry
But it didn't remove the device registry entry for that slave, leaving an
orphaned device with identifier "{entry_id}_slave_{slave_id}".
Solution:
In options_flow.py async_step_select_slave():
1. Import device_registry helper
2. Before reloading, look up the device by its identifier
3. Remove the device from device registry using async_remove_device()
This ensures:
- Clean removal of slave devices
- No orphaned devices in the overview
- Proper device lifecycle management
…iBayb Fix device registry cleanup when deleting Modbus slaves
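A sketch of the cleanup step in async_step_select_slave(), assuming the device identifier format used elsewhere in this PR and a placeholder DOMAIN; the device_registry calls are standard Home Assistant helpers:

```python
from homeassistant.helpers import device_registry as dr

DOMAIN = "modbus_custom"  # assumed domain name


async def remove_slave_device(hass, entry, slave_id) -> None:
    """Remove the registry entry for a deleted slave so no orphan remains."""
    registry = dr.async_get(hass)
    device = registry.async_get_device(
        identifiers={(DOMAIN, f"{entry.entry_id}_slave_{slave_id}")}
    )
    if device is not None:
        registry.async_remove_device(device.id)
```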