@@ -0,0 +1,105 @@
# Manual Test Scenarios: SPV Sync Error Status

## Context

When SPV sync encounters a fatal error (e.g., a masternode sync failure), the app
should transition from "Syncing" to a distinct Error state: the connectivity
icon turns **magenta** (slow pulse, white "!" glyph in the center) and the
tooltip shows the specific error message.
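The states and icon treatments described above can be summarized in a small sketch (illustrative only — simplified names, not the app's real types; the colors and glyph behavior are taken from this document):

```rust
/// Sketch of the connectivity states these scenarios exercise.
/// Names mirror the diff further down but are simplified for illustration.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ConnState {
    Disconnected,
    Syncing,
    Synced,
    Error,
}

/// Returns (color, pulses, shows "!" glyph) for each state.
fn icon_for(state: ConnState) -> (&'static str, bool, bool) {
    match state {
        ConnState::Disconnected => ("red", false, false),
        ConnState::Syncing => ("orange", true, false),
        ConnState::Synced => ("green", false, false),
        ConnState::Error => ("magenta", true, true),
    }
}

fn main() {
    // Error must be visually distinct from Disconnected (Scenario 4).
    assert_ne!(icon_for(ConnState::Error), icon_for(ConnState::Disconnected));
}
```

Scenario 4 below checks exactly this distinctness by eye: magenta/pulsing/glyph versus red/static/no glyph.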

## Prerequisites

- Dash Evo Tool built with the fix applied
- Access to Testnet (or a network where SPV sync can be triggered)
- SPV backend mode enabled (not RPC mode)

## Scenario 1: Verify error state on sync failure

**Goal:** Confirm the connectivity icon transitions to Error (magenta) when
SPV sync fails, distinct from Disconnected (red).

### Steps

1. Launch Dash Evo Tool and connect to Testnet in SPV mode.
2. Observe the top-left connectivity icon during sync — it should pulse orange
(Syncing state).
3. If sync completes successfully, the icon should turn green (Running state).
4. If sync fails (e.g., masternode QRInfo failure visible in logs), observe:
- The connectivity icon turns **magenta** with a slow pulsation and a white
**"!"** glyph in the center.
- Hovering over the icon shows tooltip: **"SPV sync error: {detail}"** with
the specific error message (e.g., "Sync manager Masternode failed: ...").
- Below that: **"SPV: Error"** detail line.
5. Open the Network Chooser screen and check the SPV status detail — it should
display the error message.

### Expected Result

- Icon transitions from orange (Syncing) to magenta (Error) on sync failure.
- Error icon is visually distinct from red (Disconnected) — magenta color,
slow pulse, "!" glyph.
- Tooltip shows "SPV sync error: ..." with the specific error message.
- Error message is visible in the status detail panel.

## Scenario 2: Verify normal sync still works

**Goal:** Confirm the fix doesn't break the happy path.

### Steps

1. Launch Dash Evo Tool and connect to Testnet in SPV mode.
2. Wait for sync to complete (may take several minutes on first sync).
3. Observe the connectivity icon transitions:
- Orange (Syncing) during sync.
- Green (Running) after sync completes.
4. Hover over the icon — tooltip should show "SPV synced" with "SPV: Running".

### Expected Result

- Sync completes normally, icon turns green.
- No false error transitions during normal sync.

## Scenario 3: Verify error message content

**Goal:** Confirm the error message stored in `last_error` contains useful
diagnostic information.

### Steps

1. Trigger an SPV sync that fails (e.g., by connecting to a network with
known chain lock propagation issues).
2. Check application logs for the error:
- Look for `SPV manager ... reported error: ...` log line.
3. Hover over the connectivity icon and verify the tooltip shows the same
error message (not a generic "Sync failed" without context).
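The log check in step 2 can be scripted; a sketch, assuming the log output is captured to a file (the path and sample log line below are made up for illustration — adjust to wherever your build writes logs):

```shell
# Simulate a captured log file (replace with your actual log path).
cat > /tmp/det-spv.log <<'EOF'
2024-01-01T00:00:00Z ERROR SPV manager "Masternode" reported error: Masternode sync failed: qrinfo fetch failed
EOF

# Search for the error line described in step 2.
grep -E 'SPV manager .* reported error' /tmp/det-spv.log
```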

### Expected Result

- Log contains `SPV manager "Masternode" reported error: Masternode sync failed: ...`.
- Tooltip shows the specific error from the sync manager, including
the block hash reference.

## Scenario 4: Verify Error state is distinct from Disconnected

**Goal:** Confirm the user can visually distinguish Error from Disconnected.

### Steps

1. With the app in SPV mode, trigger a sync error (Scenario 1).
2. Note the icon appearance: magenta, pulsating, "!" glyph.
3. Switch to a network with no connectivity (e.g., disconnect the network).
4. Note the icon appearance: red, static, no glyph.

### Expected Result

- Error state: magenta circle, slow pulse, white "!" glyph.
- Disconnected state: red circle, static (no pulse), no glyph.
- The two states are clearly visually distinguishable.

## Notes

- The actual QRInfo chain lock error is an upstream issue
(dashpay/rust-dashcore#470). This fix ensures the app **reports** the error
correctly rather than silently staying stuck in "Syncing".
- A separate upstream issue (dashpay/rust-dashcore#469) tracks the missing
`try_emit_progress()` call on error paths in dash-spv.
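As the diff below shows, the overall state is stored in an atomic `u8` and decoded back, with the new `Error` variant mapping to 3. A minimal round-trip sketch of that encoding (simplified names, illustrative only):

```rust
/// Sketch of the u8 encoding behind the atomic state field.
/// Unknown discriminants fall back to Disconnected (forward-compatible).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum State {
    Disconnected = 0,
    Syncing = 1,
    Synced = 2,
    Error = 3,
}

fn state_from_u8(v: u8) -> State {
    match v {
        1 => State::Syncing,
        2 => State::Synced,
        3 => State::Error,
        _ => State::Disconnected,
    }
}

fn main() {
    // Every variant survives the round trip through u8.
    assert_eq!(state_from_u8(State::Error as u8), State::Error);
    // Unrecognized values degrade safely instead of panicking.
    assert_eq!(state_from_u8(200), State::Disconnected);
}
```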
46 changes: 37 additions & 9 deletions src/context/connection_status.rs
@@ -14,7 +14,7 @@ use std::time::{Duration, Instant};
const REFRESH_CONNECTED: Duration = Duration::from_secs(4);
const REFRESH_DISCONNECTED: Duration = Duration::from_secs(1);

/// Three-state connection indicator matching the UI's red/orange/green circle.
/// Connection indicator matching the UI's colored circle.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum OverallConnectionState {
@@ -24,13 +24,16 @@ pub enum OverallConnectionState {
Syncing = 1,
/// Fully connected and operational — green indicator.
Synced = 2,
/// Connected but sync failed — magenta indicator with "!" glyph.
Error = 3,
}

impl From<u8> for OverallConnectionState {
fn from(v: u8) -> Self {
match v {
1 => Self::Syncing,
2 => Self::Synced,
3 => Self::Error,
_ => Self::Disconnected,
}
}
@@ -48,6 +51,9 @@ pub struct ConnectionStatus {
backend_mode: AtomicU8,
disable_zmq: AtomicBool,
overall_state: AtomicU8,
// NOTE: Mutex (not RwLock) is intentional — single reader (tooltip hover),
// single writer (poll cycle), minimal contention. RwLock overhead not justified.
spv_last_error: Mutex<Option<String>>,
last_update: Mutex<Instant>,
dapi_total_endpoints: AtomicU16,
dapi_available_endpoints: AtomicU16,
@@ -62,6 +68,7 @@ impl ConnectionStatus {
backend_mode: AtomicU8::new(CoreBackendMode::Rpc.as_u8()),
disable_zmq: AtomicBool::new(false),
overall_state: AtomicU8::new(OverallConnectionState::Disconnected as u8),
spv_last_error: Mutex::new(None),
last_update: Mutex::new(Instant::now()),
dapi_total_endpoints: AtomicU16::new(0),
dapi_available_endpoints: AtomicU16::new(0),
@@ -87,6 +94,9 @@ impl ConnectionStatus {
OverallConnectionState::Disconnected as u8,
Ordering::Relaxed,
);
if let Ok(mut err) = self.spv_last_error.lock() {
*err = None;
}
// Backdate last_update by a full refresh interval so the next trigger_refresh fires immediately
if let Ok(mut last) = self.last_update.lock() {
*last = Instant::now() - REFRESH_CONNECTED;
@@ -208,6 +218,7 @@ impl ConnectionStatus {
SpvStatus::Starting | SpvStatus::Syncing | SpvStatus::Stopping => {
OverallConnectionState::Syncing
}
SpvStatus::Error => OverallConnectionState::Error,
_ => OverallConnectionState::Disconnected,
}
}
@@ -246,6 +257,7 @@ impl ConnectionStatus {
OverallConnectionState::Synced => "Connected to Dash Core Wallet",
// RPC mode doesn't currently produce Syncing, but kept for forward-compat.
OverallConnectionState::Syncing => "Syncing to Dash Core Wallet",
OverallConnectionState::Error => "Connection error",
OverallConnectionState::Disconnected if self.rpc_online() => {
"Dash Core connection incomplete"
}
@@ -256,13 +268,24 @@ impl ConnectionStatus {
format!("{header}\n{rpc_status}\n{zmq_status}\n{dapi_status}")
}
CoreBackendMode::Spv => {
let header = match overall {
OverallConnectionState::Synced => "Ready",
OverallConnectionState::Syncing => "Syncing",
OverallConnectionState::Disconnected => "Disconnected",
let header: std::borrow::Cow<'_, str> = match overall {
OverallConnectionState::Synced => "Ready".into(),
OverallConnectionState::Syncing => "Syncing".into(),
OverallConnectionState::Error => {
let detail = self
.spv_last_error
.lock()
.ok()
.and_then(|g| g.clone())
.unwrap_or_else(|| "unknown error".to_string());
format!("SPV sync error: {detail}").into()
}
OverallConnectionState::Disconnected => "Disconnected".into(),
};
let spv_label = if spv_status == SpvStatus::Running {
"SPV: Synced".to_string()
} else if spv_status == SpvStatus::Error {
"SPV: Error".to_string()
} else {
app_context
.spv_manager()
@@ -369,10 +392,15 @@ impl ConnectionStatus {

match backend_mode {
CoreBackendMode::Spv => {
// SPV status is updated elsewhere
let spv_status = app_context.spv_manager().status().status;
tracing::trace!("ConnectionStatus: polled SPV status = {:?}", spv_status);
self.set_spv_status(spv_status);
let snapshot = app_context.spv_manager().status();
tracing::trace!(
"ConnectionStatus: polled SPV status = {:?}",
snapshot.status
);
self.set_spv_status(snapshot.status);
if let Ok(mut err) = self.spv_last_error.lock() {
*err = snapshot.last_error;
}
}
CoreBackendMode::Rpc => {
// Update ZMQ status if there's a new event
86 changes: 86 additions & 0 deletions src/spv/manager.rs
@@ -9,6 +9,7 @@ use dash_sdk::dash_spv::network::PeerNetworkManager;
use dash_sdk::dash_spv::storage::DiskStorageManager;
use dash_sdk::dash_spv::sync::SyncEvent;
use dash_sdk::dash_spv::sync::SyncProgress as SpvSyncProgress;
use dash_sdk::dash_spv::sync::SyncState;
use dash_sdk::dash_spv::types::ValidationMode;
use dash_sdk::dash_spv::{ClientConfig, DashSpvClient, Hash, LLMQType, QuorumHash};
use dash_sdk::dpp::dashcore::{Address, InstantLock, Network, Transaction, Txid};
@@ -1044,11 +1045,49 @@ impl SpvManager {
});
}

/// Identify which sync manager phase is in the Error state, if any.
/// Checks masternodes first (the most common failure point) rather than
/// following the pipeline execution order used by `spv_phase_summary()`.
fn failed_manager_name(progress: &SpvSyncProgress) -> &'static str {
if progress
.masternodes()
.is_ok_and(|p| p.state() == SyncState::Error)
{
return "Masternodes";
}
if progress
.headers()
.is_ok_and(|p| p.state() == SyncState::Error)
{
return "Headers";
}
if progress
.filter_headers()
.is_ok_and(|p| p.state() == SyncState::Error)
{
return "Filter headers";
}
if progress
.filters()
.is_ok_and(|p| p.state() == SyncState::Error)
{
return "Filters";
}
if progress
.blocks()
.is_ok_and(|p| p.state() == SyncState::Error)
{
return "Blocks";
}
"unknown phase"
}

fn spawn_progress_watcher(
&self,
mut progress_rx: tokio::sync::watch::Receiver<SpvSyncProgress>,
) {
let status = Arc::clone(&self.status);
let last_error = Arc::clone(&self.last_error);
let sync_progress_state = Arc::clone(&self.sync_progress_state);
let progress_updated_at = Arc::clone(&self.progress_updated_at);
let cancel = self.subtasks.cancellation_token.clone();
@@ -1063,6 +1102,12 @@ impl SpvManager {
}
let watch_progress = progress_rx.borrow().clone();
let is_synced = watch_progress.is_synced();
let is_error = watch_progress.state() == SyncState::Error;
let failed_phase = if is_error {
Some(Self::failed_manager_name(&watch_progress))
} else {
None
};

// Update sync progress state
if let Ok(mut stored_sync) = sync_progress_state.write() {
@@ -1076,10 +1121,27 @@ impl SpvManager {
if let Ok(mut status_guard) = status.write() {
if is_synced {
*status_guard = SpvStatus::Running;
} else if is_error {
*status_guard = SpvStatus::Error;
} else if !matches!(*status_guard, SpvStatus::Stopping | SpvStatus::Stopped | SpvStatus::Error) {
*status_guard = SpvStatus::Syncing;
}
}
// Write last_error outside status lock to maintain
// consistent lock ordering (status → release → last_error).
if is_error
&& let Ok(mut err_guard) = last_error.write()
&& err_guard.is_none()
{
// Note: this path is currently unreachable due to upstream
// bug dashpay/rust-dashcore#469 (progress channel never
// receives SyncState::Error). Once fixed, this will fire.
let phase = failed_phase.unwrap_or("unknown phase");
*err_guard = Some(format!(
"Sync failed: {} (reported by SPV progress channel)",
phase
));
}
}
}
}
@@ -1091,6 +1153,7 @@ impl SpvManager {
let reconcile_tx = self.reconcile_tx.lock().ok().and_then(|g| g.clone());
let finality_tx = self.finality_tx.lock().ok().and_then(|g| g.clone());
let status = Arc::clone(&self.status);
let last_error = Arc::clone(&self.last_error);
let cancel = self.subtasks.cancellation_token.clone();

self.subtasks.spawn_sync("spv_sync_event_handler", async move {
@@ -1135,6 +1198,29 @@ impl SpvManager {
{
*guard = SpvStatus::Running;
}

// Transition to Error when a sync manager reports a
// fatal failure. The dash-spv library emits this event
// but does NOT update the progress channel on the error
// path, so we must react to the event directly.
if let SyncEvent::ManagerError { ref manager, ref error } = event {
tracing::error!("SPV manager {} reported error: {}", manager, error);
if let Ok(mut guard) = status.write() {
*guard = SpvStatus::Error;
drop(guard); // Maintain lock ordering: status → release → last_error
}
// TODO: truncate error string to ~512 chars to prevent
// unbounded memory from adversarial peer errors (CWE-400).
let msg = format!("Sync manager {} failed: {}", manager, error);
if let Ok(mut err_guard) = last_error.write() {
if err_guard.is_none() {
*err_guard = Some(msg);
} else {
tracing::warn!("SPV last_error already set, ignoring subsequent: {}", msg);
}
}
}

if should_signal
&& let Some(ref tx) = reconcile_tx
{