clash-proxy/src-tauri/src/core/handle.rs (331 lines, 9.3 KiB, Rust)

refactor: profile switch (#5197)

* refactor: proxy refresh
* fix(proxy-store): properly hydrate and filter backend provider snapshots
* fix(proxy-store): add monotonic fetch guard and event bridge cleanup
* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses
* docs: UPDATELOG.md
* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info
* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height
* fix(proxy-groups): restrict reduced-height viewport to chain-mode column
* refactor(profiles): introduce a state machine
* refactor: replace state machine with reducer
* refactor: introduce a profile switch worker
* refactor: hook up a backend-driven profile switch flow
* refactor(profile-switch): serialize switches with async queue and enrich frontend events
* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles
* chore: translate comments and log messages to English to avoid encoding issues
* refactor: migrate backend queue to SwitchDriver actor
* fix(profile): unify error string types in validation helper
* refactor(profile): make switch driver fully async and handle panics safely
* refactor(cmd): move switch-validation helper into new profile_switch module
* refactor(profile): modularize switch logic into profile_switch.rs
* refactor(profile_switch): modularize switch handler
  - Break the monolithic switch handler into a proper module hierarchy.
  - Move shared globals, constants, and the SwitchScope guard to state.rs.
  - Isolate queue orchestration and async task spawning in driver.rs.
  - Consolidate the switch pipeline and config patching in workflow.rs.
  - Extract request pre-checks/YAML validation into validation.rs.
* refactor(profile_switch): centralize state management and add cancellation flow
  - Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
  - Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
  - Updated the driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
  - Updated the workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.
* feat(profile_switch): integrate explicit state machine for profile switching
  - workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest. Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
  - workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`, ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
  - workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.
  - workflow/state_machine.rs:1 introduces a dedicated state machine module. It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching, `CoreManager::update_config`, failure rollback, and tray/notification side effects. Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.
* refactor(profile-switch): integrate stage-aware panic handling
  - src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1 defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.
  - src-tauri/src/cmd/profile_switch/workflow.rs:25 updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage-panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.
  - src-tauri/src/cmd/profile_switch/driver.rs:1 adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.
* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards
  - Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
  - Add a watchdog in the driver to cancel stalled switches (5s heartbeat timeout).
  - Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
  - Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO
* feat(profile-switch): track cleanup and coordinate pipeline
  - Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
  - Update `handle_enqueue` to detect "cleanup in progress": same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
  - Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)
* feat(profile-switch): unify post-switch cleanup handling
  - workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`. All failure/timeout paths stash post-switch work into a single CleanupHandle. Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.
  - driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done. Direct driver-side panics now schedule failure cleanup via the shared helper.
* tmp
* Revert "tmp"
  - This reverts commit e582cf4a652231a67a7c951802cb19b385f6afd7.
* refactor: queue frontend events through async dispatcher
* refactor: queue frontend switch/proxy events and throttle notices
* chore: frontend debug log
* fix: re-enable only ProfileSwitchFinished events, keep others suppressed for crash isolation
  - Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages).
  - Allows the frontend to receive task-completion notifications for UI feedback while crash isolation continues.
  - src-tauri/src/core/handle.rs now only suppresses notify_profile_changed.
  - Serialized emitter, frontend logging bridge, and other diagnostics unchanged.
* refactor: refreshClashData
* refactor(proxy): stabilize proxy switch pipeline and rendering
  - Add a coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot.
  - Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration.
  - Hide the auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts.
  - Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending).
  - Update the proxy page to fade content during refresh and overlay a status banner instead of showing an incomplete snapshot.
* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking
* refactor: replace proxy-data event bridge with pure polling and simplify proxy store
  - Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
  - Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; it relies on the SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
  - Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).
* refactor(proxy): streamline proxies-updated handling and store event flow
  - AppDataProvider now treats `proxies-updated` as the fast path: the listener calls `applyLiveProxyPayload` immediately and schedules only a single fallback `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade). Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and the multi-stage queue on profile-update completion was removed (src/providers/app-data-provider.tsx).
  - Rebuilt proxy-store to support the event flow: restored `setLive`, provider normalization, and an animation-frame + async queue that applies payloads without blocking. Exposed `applyLiveProxyPayload` so providers can push events directly into the store (src/stores/proxy-store.ts).
* refactor: switch delay
* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished
  - AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
  - Retains the existing detailed timing logs for monitoring other stages.
  - Frontend success notifications remain instant; background refreshes continue asynchronously.
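The coalescing buffer mentioned above (notification.rs emits only the latest proxies-updated snapshot) boils down to a latest-wins slot: pushes overwrite, a flush drains at most one item. A minimal std-only sketch, where `CoalescingBuffer` is an illustrative name rather than the actual notification.rs type:

```rust
use std::sync::Mutex;

/// Keeps only the newest pending snapshot; older unsent ones are
/// overwritten instead of queued, so a flush never replays stale data.
/// (Sketch only; not the real notification.rs structure.)
struct CoalescingBuffer<T> {
    pending: Mutex<Option<T>>,
}

impl<T> CoalescingBuffer<T> {
    fn new() -> Self {
        Self { pending: Mutex::new(None) }
    }

    /// Store a snapshot, dropping any older unsent one.
    fn push(&self, snapshot: T) {
        *self.pending.lock().unwrap() = Some(snapshot);
    }

    /// Take the latest snapshot, if any, leaving the buffer empty.
    fn flush(&self) -> Option<T> {
        self.pending.lock().unwrap().take()
    }
}

fn main() {
    let buf = CoalescingBuffer::new();
    buf.push("snapshot-1");
    buf.push("snapshot-2"); // overwrites snapshot-1 before any flush
    assert_eq!(buf.flush(), Some("snapshot-2"));
    assert_eq!(buf.flush(), None); // nothing left to emit
}
```

Because the emitter only ever sees the newest state, a burst of proxy updates costs one WebView event instead of a backlog.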
* fix(profiles): prevent duplicate toast on page remount
* refactor(profile-switch): make active switches preemptible and prevent queue piling
  - Add a notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82).
  - Collapse the pending queue to a single entry in the driver; cancel the in-flight task on a newer request (driver.rs:232).
  - Update handle_update_core to watch the cancel token and a 30s timeout; release locks, discard the draft, and exit early if canceled (state_machine.rs:301).
  - Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208).
* refactor(core): make the core reload phase controllable, reduce 0xcfffffff risk
  - CoreManager::apply_config now calls `reload_config_with_retry`; each attempt waits up to 5s, retrying 3 times. On failure it returns an error with the duration logged and triggers a core restart if needed (src-tauri/src/core/manager/config.rs:175, 205).
  - `reload_config_with_retry` logs attempt info on timeout or error; if the error is a Mihomo connection issue, it falls back to the original restart logic (src-tauri/src/core/manager/config.rs:211).
  - `reload_config_once` retains the original Mihomo call for use by the retry wrapper (src-tauri/src/core/manager/config.rs:247).
* chore(frontend-logs): downgrade routine event logs from info to debug
  - Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, and `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…).
  - Genuine warnings/errors (failures/timeouts) remain at warn/error.
  - Core stage logs remain at info to keep backend tracking visible.
* refactor(frontend-emit): make emit_via_app fire-and-forget async
  - `emit_via_app` is now a regular function; it spawns with `tokio::spawn` and logs a warning if `emit_to` fails, so the caller returns immediately (src-tauri/src/core/notification.rs:269).
  - Removed `.await` at the async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329).
* refactor(ui): restructure profile switch for event-driven speed + polling stability
  - Backend
    - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs).
    - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch.
    - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs).
  - Notification system
    - `NotificationSystem::process_event` only logs at debug and disables WebView `emit_to`, fixing the 0xcfffffff crash.
    - Related emit/buffer functions are now safe no-ops; removed unused structures and warnings (notification.rs).
  - Frontend
    - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`.
    - `AppDataProvider` holds `switchEventSeqRef` and polls incremental events every 0.25s (busy) / 1s (idle); each event triggers an immediate `globalMutate("getProfiles")` to refresh the current profile, a background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking), and a forced `mutateSwitchStatus` to correct state.
    - The original switchStatus effect calls `handleSwitchResult` as a fallback; other toast/activation logic is handled in profiles.tsx.
  - Commands / API cleanup
    - Removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; the frontend uses the new command polling.
* refactor(frontend): optimize profile switch with optimistic updates
* refactor(profile-switch): switch to event-driven flow with Profile Store
  - SwitchManager pushes events; the frontend polls get_profile_switch_events.
  - A Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches.
  - UI flicker removed.
* fix(app-data): re-hook profile store updates during switch hydration
* fix(notification): restore frontend event dispatch and non-blocking emits
* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor
* fix: ensure switch completion events are received and handle proxies-updated
* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state
* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout
* docs: UPDATELOG.md
* chore: add necessary comments
* fix(core): always dispatch async proxy snapshot after RefreshClash event
* fix(proxy-store, provider): handle pending snapshots and proxy profiles
  - Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
  - Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
  - In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.
* fix(proxy): re-hook tray refresh events into proxy refresh queue
  - Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
  - Added a matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.
* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders
  - src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
  - src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can't interact with snapshot-only placeholders before live data is ready.
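The incremental polling scheme in the `refactor(ui)` entry above (SwitchManager keeps `recent_events` with monotonic sequence numbers; the frontend remembers the last sequence it saw and asks only for newer events) can be sketched roughly as follows. The struct and field names are assumptions for illustration, not the real state.rs definitions:

```rust
use std::collections::VecDeque;

const MAX_RECENT_EVENTS: usize = 32; // illustrative retention bound

#[derive(Clone, Debug, PartialEq)]
struct SwitchResultEvent {
    sequence: u64,
    profile_id: String,
    success: bool,
}

/// Bounded event log for incremental polling: events get monotonically
/// increasing sequence numbers, and a poller fetches only what it missed.
struct EventLog {
    next_sequence: u64,
    recent: VecDeque<SwitchResultEvent>,
}

impl EventLog {
    fn new() -> Self {
        Self { next_sequence: 1, recent: VecDeque::new() }
    }

    /// Record a completion event; returns the sequence it was assigned.
    fn push_event(&mut self, profile_id: &str, success: bool) -> u64 {
        let sequence = self.next_sequence;
        self.next_sequence += 1;
        self.recent.push_back(SwitchResultEvent {
            sequence,
            profile_id: profile_id.to_string(),
            success,
        });
        if self.recent.len() > MAX_RECENT_EVENTS {
            self.recent.pop_front(); // forget the oldest entries
        }
        sequence
    }

    /// Everything newer than `after_sequence`, oldest first.
    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}

fn main() {
    let mut log = EventLog::new();
    log.push_event("profile-a", true);
    let seen = log.push_event("profile-b", false);
    log.push_event("profile-c", true);
    // A poller that last saw `seen` receives only the newer event.
    let fresh = log.events_after(seen);
    assert_eq!(fresh.len(), 1);
    assert_eq!(fresh[0].profile_id, "profile-c");
}
```

Polling a log like this sidesteps the WebView `emit_to` path entirely, which is what made it a workable replacement after the 0xcfffffff crashes.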
* fix(profile-switch): preserve queued requests and avoid stale connection teardown
  - Keep earlier queued switches intact by dropping the blanket "collapse" call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves the queue-loss scenario.
  - Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; the success handler now skips close_connections_after_switch when success == false (workflow.rs:419).
* fix(profile-switch, layout): improve profile validation and restore backend refresh
  - Hardened profile validation using `tokio::fs` with a 5s timeout, offloading YAML parsing to `AsyncHandler::spawn_blocking` so slow disks or malformed files cannot freeze the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
  - Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
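The hardened validation above bounds a potentially slow read-and-parse with a timeout. The real code uses tokio's `time::timeout` plus `spawn_blocking`; a rough std-only stand-in using a worker thread and `recv_timeout` shows the shape, with `validate_with_timeout` as a hypothetical helper name:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Run a potentially slow validation step on a worker thread and give up
/// after `limit`, so a slow disk or pathological file cannot stall the
/// caller. Std-only stand-in for the tokio timeout + spawn_blocking combo.
fn validate_with_timeout<F>(limit: Duration, job: F) -> Result<(), String>
where
    F: FnOnce() -> Result<(), String> + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the receiver already timed out, this send fails harmlessly.
        let _ = tx.send(job());
    });
    match rx.recv_timeout(limit) {
        Ok(result) => result,
        Err(_) => Err("validation timed out".to_string()),
    }
}

fn main() {
    // Fast, well-formed input passes within the budget.
    let ok = validate_with_timeout(Duration::from_secs(5), || Ok(()));
    assert!(ok.is_ok());

    // A stalled job is cut off instead of freezing the caller.
    let slow = validate_with_timeout(Duration::from_millis(50), || {
        thread::sleep(Duration::from_secs(2));
        Ok(())
    });
    assert_eq!(slow, Err("validation timed out".to_string()));
}
```

One caveat the tokio version shares: the timed-out job keeps running in the background until it finishes, so the work itself should also be cancellation-aware if it holds shared locks.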
* feat(profile-switch): handle cancellations for superseded requests
  - Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482).
  - Updated the profile switch effect to log cancellations at info level, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581).
  - Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20).
* fix(profiles): wrap logging payload for Tauri frontend_log
* fix(profile-switch): add rollback and error propagation for failed persistence
  - Added rollback on apply failure so Mihomo restores the previous profile before exiting the success path early (state_machine.rs:474).
  - Reworked persist_profiles_with_timeout to surface timeout/join/save errors, convert them into CmdResult failures, and trigger rollback + error propagation when persistence fails (state_machine.rs:703).
* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks
* fix(profile-switch): preserve pending queue and surface discarded switches
* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches
* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks
* fix(profile-switch): queue concurrent updates and add bounded wait/backoff
* fix(proxy): trigger live refresh on app start for proxy snapshot
* refactor(profile-switch): split flow into layers and centralize async cleanup
  - Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
  - Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
  - Replaced the monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
  - Removed the legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
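Several entries above revolve around one idea: the pending queue holds at most one request, and a newer request cancels the superseded one explicitly rather than letting it fail. A simplified sketch of that single-slot queue, where `PendingSlot` and `CancelToken` are illustrative names, not the driver.rs API (the real code uses tokio cancellation tokens with notify):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

/// Cooperative cancellation flag shared with a running switch task.
#[derive(Clone)]
struct CancelToken(Arc<AtomicBool>);

impl CancelToken {
    fn new() -> Self { Self(Arc::new(AtomicBool::new(false))) }
    fn cancel(&self) { self.0.store(true, Ordering::SeqCst); }
    fn is_cancelled(&self) -> bool { self.0.load(Ordering::SeqCst) }
}

struct SwitchRequest {
    profile_id: String,
    token: CancelToken,
}

/// Single-slot pending queue: a newer request supersedes the older one
/// and flips its cancel token, so only the latest intent ever runs.
struct PendingSlot {
    pending: Option<SwitchRequest>,
}

impl PendingSlot {
    fn new() -> Self { Self { pending: None } }

    /// Enqueue a request; the caller keeps the token to watch for
    /// cancellation. Any previously pending request is cancelled.
    fn enqueue(&mut self, profile_id: &str) -> CancelToken {
        if let Some(old) = self.pending.take() {
            old.token.cancel(); // superseded: explicit cancellation, not failure
        }
        let token = CancelToken::new();
        self.pending = Some(SwitchRequest {
            profile_id: profile_id.to_string(),
            token: token.clone(),
        });
        token
    }

    fn take_next(&mut self) -> Option<SwitchRequest> {
        self.pending.take()
    }
}

fn main() {
    let mut slot = PendingSlot::new();
    let first = slot.enqueue("profile-a");
    let _second = slot.enqueue("profile-b"); // supersedes profile-a
    assert!(first.is_cancelled());
    let next = slot.take_next().unwrap();
    assert_eq!(next.profile_id, "profile-b");
    assert!(!next.token.is_cancelled());
}
```

Surfacing the superseded request as a cancellation rather than a failure is what lets the frontend skip error toasts for rapid toggles while still refreshing state.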
2025-10-30 17:29:15 +08:00
use crate::{
APP_HANDLE, config::Config, constants::timing, logging, singleton, utils::logging::Type,
};
use parking_lot::RwLock;
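The commit message notes that completions release guards via `SwitchScope` drop and that stage panics are caught with `catch_unwind`. The combination matters: an RAII guard releases the in-flight flag even during unwinding, so a panicking stage cannot leave the pipeline wedged. A self-contained sketch under the assumption that the flag is a plain atomic (the real code centralizes this state in SwitchManager):

```rust
use std::panic::{self, AssertUnwindSafe};
use std::sync::atomic::{AtomicBool, Ordering};

// Global "a switch is in flight" flag; a bare static keeps the sketch
// self-contained where the real code would go through SwitchManager.
static SWITCH_BUSY: AtomicBool = AtomicBool::new(false);

/// RAII guard: acquiring it marks a switch as in flight, and dropping it
/// (normal return, early error exit, or unwind after a panic) releases
/// the flag again.
struct SwitchScope;

impl SwitchScope {
    /// Succeeds only if no other switch holds the flag.
    fn try_acquire() -> Option<SwitchScope> {
        SWITCH_BUSY
            .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .ok()
            .map(|_| SwitchScope)
    }
}

impl Drop for SwitchScope {
    fn drop(&mut self) {
        SWITCH_BUSY.store(false, Ordering::SeqCst);
    }
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep the demo panic quiet
    // A panicking stage still releases the guard via Drop during unwind.
    let result = panic::catch_unwind(AssertUnwindSafe(|| {
        let _scope = SwitchScope::try_acquire().expect("flag was free");
        assert!(SwitchScope::try_acquire().is_none()); // reentrancy blocked
        panic!("stage blew up mid-switch");
    }));
    assert!(result.is_err());
    // The flag is clear again, so the next switch can start.
    assert!(SwitchScope::try_acquire().is_some());
}
```

This is the same reason the stage-aware `with_stage(...)` + `catch_unwind` wrappers can safely convert panics into `SwitchPanicInfo` results: the guard's `Drop` has already restored the shared state by the time the panic is reported.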
* feat(profile-switch): handle cancellations for superseded requests - Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482) - Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581) - Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20) * fix(profiles): wrap logging payload for Tauri frontend_log * fix(profile-switch): add rollback and error propagation for failed persistence - Added rollback on apply failure so Mihomo restores to the previous profile before exiting the success path early (state_machine.rs:474). - Reworked persist_profiles_with_timeout to surface timeout/join/save errors, convert them into CmdResult failures, and trigger rollback + error propagation when persistence fails (state_machine.rs:703). * fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks * fix(profile-switch): preserve pending queue and surface discarded switches * fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches * fix(app-data-provider): restore backend-driven refresh and reattach fallbacks * fix(profile-switch): queue concurrent updates and add bounded wait/backoff * fix(proxy): trigger live refresh on app start for proxy snapshot * refactor(profile-switch): split flow into layers and centralize async cleanup - Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API. - Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency. 
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable. - Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
use serde_json::{Value, json};
use smartstring::alias::String;
use std::{
sync::Arc,
thread,
time::{SystemTime, UNIX_EPOCH},
};
use tauri::{AppHandle, Manager, WebviewWindow};
use tauri_plugin_mihomo::{Mihomo, MihomoExt};
use tokio::sync::{RwLock, RwLockReadGuard};
use super::notification::{ErrorMessage, FrontendEvent, NotificationSystem};
#[derive(Debug, Clone)]
pub struct Handle {
    /// Whether the application is in the process of exiting.
    is_exiting: Arc<RwLock<bool>>,
    /// Errors raised during startup, buffered until the frontend can display them.
    startup_errors: Arc<RwLock<Vec<ErrorMessage>>>,
    /// Whether startup has finished and buffered errors can be delivered.
    startup_completed: Arc<RwLock<bool>>,
    /// Dispatches `FrontendEvent`s and notifications to the webview.
    pub(crate) notification_system: Arc<RwLock<Option<NotificationSystem>>>,
}
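// Every field is wrapped in `Arc<RwLock<_>>`, so the derived `Clone` yields
// cheap handles that all share the same underlying state. A hypothetical
// access pattern (assuming tokio's async `RwLock`; illustrative only, not
// part of this file):
//
//     // in an async context:
//     // *handle.startup_completed.write().await = true;
//     // let done = *handle.startup_completed.read().await;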
impl Default for Handle {
fn default() -> Self {
Self {
is_exiting: Arc::new(RwLock::new(false)),
startup_errors: Arc::new(RwLock::new(Vec::new())),
startup_completed: Arc::new(RwLock::new(false)),
notification_system: Arc::new(RwLock::new(Some(NotificationSystem::new()))),
}
}
}
singleton!(Handle, HANDLE);
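// A minimal sketch (an assumption, not the project's actual macro) of what
// `singleton!(Handle, HANDLE)` likely expands to, based on the OnceCell/OnceLock
// migration described in #4279: a lazily initialized, process-wide instance
// exposed through a `global()` accessor and built via `Default`.
//
//     static HANDLE: std::sync::OnceLock<Handle> = std::sync::OnceLock::new();
//
//     impl Handle {
//         pub fn global() -> &'static Handle {
//             HANDLE.get_or_init(Handle::default)
//         }
//     }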
impl Handle {
    /// Construct a handle with default (empty) state.
    pub fn new() -> Self {
        Self::default()
    }

    /// Start the notification system, unless the app is already shutting down.
    pub fn init(&self) {
        if self.is_exiting() {
            return;
        }
        let mut system_opt = self.notification_system.write();
        if let Some(system) = system_opt.as_mut()
            && !system.is_running
        {
            system.start();
        }
    }
    /// Global Tauri [`AppHandle`].
    ///
    /// Panics if called before the handle is stored during app setup.
    pub fn app_handle() -> &'static AppHandle {
        #[allow(clippy::expect_used)]
        APP_HANDLE.get().expect("App handle not initialized")
    }

    /// Acquire a read guard on the shared Mihomo client.
    pub async fn mihomo() -> RwLockReadGuard<'static, Mihomo> {
        Self::app_handle().mihomo().read().await
    }

    /// The main webview window, if it has been created.
    pub fn get_window() -> Option<WebviewWindow> {
        Self::app_handle().get_webview_window("main")
    }
    /// Ask the frontend to refresh Clash-related data; no-op while exiting.
    pub fn refresh_clash() {
        let handle = Self::global();
        if handle.is_exiting() {
            return;
        }
        // Scope the read lock so the guard is dropped as soon as the event
        // has been queued.
        {
            let system_opt = handle.notification_system.read();
            if let Some(system) = system_opt.as_ref() {
                system.send_event(FrontendEvent::RefreshClash);
            }
        }
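        // Downstream of `send_event` (a hedged sketch based on the #5197
        // refactor notes, not code in this file): the NotificationSystem
        // dispatches frontend events asynchronously, coalescing bursts so only
        // the latest snapshot reaches the webview, roughly:
        //
        //     // inside the dispatcher loop (illustrative names)
        //     while let Some(event) = queue.pop_latest() {
        //         let _ = app_handle.emit_to("main", event.name(), event.payload());
        //     }
        //
        // The frontend additionally polls switch status, so a dropped event
        // cannot leave the UI stale.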
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts). - Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts). * refactor(proxy): streamline proxies-updated handling and store event flow - AppDataProvider now treats `proxies-updated` as the fast path: the listener calls `applyLiveProxyPayload` immediately and schedules only a single fallback `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade). Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and the multi-stage queue on profile updates completion was removed (src/providers/app-data-provider.tsx). - Rebuilt proxy-store to support the event flow: restored `setLive`, provider normalization, and an animation-frame + async queue that applies payloads without blocking. Exposed `applyLiveProxyPayload` so providers can push events directly into the store (src/stores/proxy-store.ts). * refactor: switch delay * refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished - AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx). - Retain existing detailed timing logs for monitoring other stages. - Frontend success notifications remain instant; background refreshes continue asynchronously. 
* fix(profiles): prevent duplicate toast on page remount * refactor(profile-switch): make active switches preemptible and prevent queue piling - Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82) - Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232) - Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301) - Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208) * refactor(core): make core reload phase controllable, reduce 0xcfffffff risk - CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205) - `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211) - `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247) * chore(frontend-logs): downgrade routine event logs from info to debug - Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…) - Genuine warnings/errors (failures/timeouts) remain at warn/error - Core stage logs remain info to keep backend tracking visible * refactor(frontend-emit): make emit_via_app fire-and-forget async - `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269) - Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329) * 
refactor(ui): restructure profile switch for event-driven speed + polling stability - Backend - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs) - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs) - Notification system - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs) - Frontend - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents` - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers: - immediate `globalMutate("getProfiles")` to refresh current profile - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking) - forced `mutateSwitchStatus` to correct state - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx - Commands / API cleanup - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling * refactor(frontend): optimize profile switch with optimistic updates * refactor(profile-switch): switch to event-driven flow with Profile Store - SwitchManager pushes events; frontend polls get_profile_switch_events - Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches - UI flicker removed * fix(app-data): re-hook profile store updates during switch hydration * fix(notification): restore frontend event dispatch and non-blocking emits * fix(app-data-provider): restore proxy refresh and seed snapshot after 
refactor * fix: ensure switch completion events are received and handle proxies-updated * fix(app-data-provider): dedupe switch results by taskId and fix stale profile state * fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout * docs: UPDATELOG.md * chore: add necessary comments * fix(core): always dispatch async proxy snapshot after RefreshClash event * fix(proxy-store, provider): handle pending snapshots and proxy profiles - Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support. - Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures. - In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate. * fix(proxy): re-hook tray refresh events into proxy refresh queue - Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup. - Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path. * fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders - src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items. - src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready. 
* fix(profile-switch): preserve queued requests and avoid stale connection teardown - Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario. - Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419). * fix(profile-switch, layout): improve profile validation and restore backend refresh - Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71). - Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55). 
* feat(profile-switch): handle cancellations for superseded requests - Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482) - Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581) - Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20) * fix(profiles): wrap logging payload for Tauri frontend_log * fix(profile-switch): add rollback and error propagation for failed persistence - Added rollback on apply failure so Mihomo restores to the previous profile before exiting the success path early (state_machine.rs:474). - Reworked persist_profiles_with_timeout to surface timeout/join/save errors, convert them into CmdResult failures, and trigger rollback + error propagation when persistence fails (state_machine.rs:703). * fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks * fix(profile-switch): preserve pending queue and surface discarded switches * fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches * fix(app-data-provider): restore backend-driven refresh and reattach fallbacks * fix(profile-switch): queue concurrent updates and add bounded wait/backoff * fix(proxy): trigger live refresh on app start for proxy snapshot * refactor(profile-switch): split flow into layers and centralize async cleanup - Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API. - Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency. 
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable. - Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
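The change log above describes the incremental event queue SwitchManager keeps for the frontend poller: `event_sequence`, a bounded `recent_events` buffer, and `push_event` / `events_after` backing the `get_profile_switch_events(after_sequence)` command. A minimal sketch of that shape; the field layout, event fields, and ring capacity are assumptions for illustration, not the shipped `state.rs` code:

```rust
use std::collections::VecDeque;

#[derive(Clone, Debug, PartialEq)]
struct SwitchResultEvent {
    sequence: u64,
    profile_id: String,
    success: bool,
    cancelled: bool,
}

struct SwitchEventQueue {
    event_sequence: u64,
    recent_events: VecDeque<SwitchResultEvent>,
    capacity: usize,
}

impl SwitchEventQueue {
    fn new(capacity: usize) -> Self {
        Self {
            event_sequence: 0,
            recent_events: VecDeque::new(),
            capacity,
        }
    }

    /// Assign the next sequence number and keep only the most recent events.
    fn push_event(&mut self, profile_id: String, success: bool, cancelled: bool) -> u64 {
        self.event_sequence += 1;
        let seq = self.event_sequence;
        self.recent_events.push_back(SwitchResultEvent {
            sequence: seq,
            profile_id,
            success,
            cancelled,
        });
        while self.recent_events.len() > self.capacity {
            self.recent_events.pop_front();
        }
        seq
    }

    /// Incremental fetch for the frontend: every buffered event newer than
    /// the caller's last-seen sequence number.
    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent_events
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}
```

Because the buffer is bounded, a poller whose `after_sequence` has fallen behind the oldest retained event should fall back to a full status refresh, which matches the log's "forced `mutateSwitchStatus` to correct state" path.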
        Self::spawn_proxy_snapshot();
    }

    pub fn refresh_verge() {
        let handle = Self::global();
        if handle.is_exiting() {
            return;
        }

        let system_opt = handle.notification_system.read();
        if let Some(system) = system_opt.as_ref() {
            system.send_event(FrontendEvent::RefreshVerge);
        }
    }
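The commit log above also mentions a `SwitchHeartbeat` that the driver's watchdog polls so it can cancel switches stalled past a 5s threshold. A hedged sketch of stage tracking with elapsed timing; the stage names and the `(stage, Instant)` layout behind a `Mutex` are assumptions, only the type name and the 5s threshold come from the log:

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

#[derive(Clone, Copy, Debug, PartialEq)]
enum SwitchStage {
    Validate,
    PatchDraft,
    ApplyConfig,
    Finalize,
}

struct SwitchHeartbeat {
    // Last reported stage and when it was reported.
    inner: Mutex<(SwitchStage, Instant)>,
}

impl SwitchHeartbeat {
    fn new() -> Self {
        Self {
            inner: Mutex::new((SwitchStage::Validate, Instant::now())),
        }
    }

    /// Called on every stage transition; resets the stall timer.
    fn beat(&self, stage: SwitchStage) {
        *self.inner.lock().unwrap() = (stage, Instant::now());
    }

    /// Polled by the watchdog: reports the current stage and whether the
    /// switch has gone quiet for longer than `threshold` (5s in the log).
    fn is_stalled(&self, threshold: Duration) -> (SwitchStage, bool) {
        let (stage, last) = *self.inner.lock().unwrap();
        (stage, last.elapsed() > threshold)
    }
}
```

Logging the stage alongside the stall verdict is what lets the watchdog messages name the exact cancellation point, as the log's "clarify cancellation points" item describes.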

    pub fn notify_profile_changed(profile_id: String) {
refactor: profile switch (#5197) * refactor: proxy refresh * fix(proxy-store): properly hydrate and filter backend provider snapshots * fix(proxy-store): add monotonic fetch guard and event bridge cleanup * fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses * docs: UPDATELOG.md * fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info * fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height * fix(proxy-groups): restrict reduced-height viewport to chain-mode column * refactor(profiles): introduce a state machine * refactor:replace state machine with reducer * refactor:introduce a profile switch worker * refactor: hooked up a backend-driven profile switch flow * refactor(profile-switch): serialize switches with async queue and enrich frontend events * feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles * chore: translate comments and log messages to English to avoid encoding issues * refactor: migrate backend queue to SwitchDriver actor * fix(profile): unify error string types in validation helper * refactor(profile): make switch driver fully async and handle panics safely * refactor(cmd): move switch-validation helper into new profile_switch module * refactor(profile): modularize switch logic into profile_switch.rs * refactor(profile_switch): modularize switch handler - Break monolithic switch handler into proper module hierarchy - Move shared globals, constants, and SwitchScope guard to state.rs - Isolate queue orchestration and async task spawning in driver.rs - Consolidate switch pipeline and config patching in workflow.rs - Extract request pre-checks/YAML validation into validation.rs * refactor(profile_switch): centralize state management and add cancellation flow - Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling. 
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications. - Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order. - Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs. * feat(profile_switch): integrate explicit state machine for profile switching - workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest. Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards. - workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`, ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches. - workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine. - workflow/state_machine.rs:1 introduces a dedicated state machine module. It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching, `CoreManager::update_config`, failure rollback, and tray/notification side-effects. Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop. * refactor(profile-switch): integrate stage-aware panic handling - src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1 Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics. - src-tauri/src/cmd/profile_switch/workflow.rs:25 Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings. 
- src-tauri/src/cmd/profile_switch/driver.rs:1 Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics. * refactor(profile-switch): add watchdog, heartbeat, and async timeout guards - Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations. - Add watchdog in driver to cancel stalled switches (5s heartbeat timeout). - Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls. - Improve logs for stage transitions and watchdog timeouts to clarify cancellation points. * refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO * feat(profile-switch): track cleanup and coordinate pipeline - Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50) - Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247) - Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442) * feat(profile-switch): unify post-switch cleanup handling - workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`. All failure/timeout paths stash post-switch work into a single CleanupHandle. Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling. 
- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done. Direct driver-side panics now schedule failure cleanup via the shared helper. * tmp * Revert "tmp" This reverts commit e582cf4a652231a67a7c951802cb19b385f6afd7. * refactor: queue frontend events through async dispatcher * refactor: queue frontend switch/proxy events and throttle notices * chore: frontend debug log * fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation - Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages) - Allows frontend to receive task completion notifications for UI feedback while crash isolation continues - src-tauri/src/core/handle.rs now only suppresses notify_profile_changed - Serialized emitter, frontend logging bridge, and other diagnostics unchanged * refactor: refreshClashData * refactor(proxy): stabilize proxy switch pipeline and rendering - Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot - Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration - Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts - Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending) - Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot * refactor(profiles): move manual activating logic to reducer for deterministic queue tracking * refactor: replace proxy-data event bridge with pure polling and simplify proxy store - Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx). 
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts). - Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts). * refactor(proxy): streamline proxies-updated handling and store event flow - AppDataProvider now treats `proxies-updated` as the fast path: the listener calls `applyLiveProxyPayload` immediately and schedules only a single fallback `fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade). Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and the multi-stage queue on profile updates completion was removed (src/providers/app-data-provider.tsx). - Rebuilt proxy-store to support the event flow: restored `setLive`, provider normalization, and an animation-frame + async queue that applies payloads without blocking. Exposed `applyLiveProxyPayload` so providers can push events directly into the store (src/stores/proxy-store.ts). * refactor: switch delay * refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished - AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx). - Retain existing detailed timing logs for monitoring other stages. - Frontend success notifications remain instant; background refreshes continue asynchronously. 
* fix(profiles): prevent duplicate toast on page remount * refactor(profile-switch): make active switches preemptible and prevent queue piling - Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82) - Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232) - Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301) - Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208) * refactor(core): make core reload phase controllable, reduce 0xcfffffff risk - CoreManager::apply_config now calls `reload_config_with_retry`, each attempt waits up to 5s, retries 3 times; on failure, returns error with duration logged and triggers core restart if needed (src-tauri/src/core/manager/config.rs:175, 205) - `reload_config_with_retry` logs attempt info on timeout or error; if error is a Mihomo connection issue, fallback to original restart logic (src-tauri/src/core/manager/config.rs:211) - `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247) * chore(frontend-logs): downgrade routine event logs from info to debug - Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…) - Genuine warnings/errors (failures/timeouts) remain at warn/error - Core stage logs remain info to keep backend tracking visible * refactor(frontend-emit): make emit_via_app fire-and-forget async - `emit_via_app` now a regular function; spawns with `tokio::spawn` and logs a warn if `emit_to` fails, caller returns immediately (src-tauri/src/core/notification.rs:269) - Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329) * 
refactor(ui): restructure profile switch for event-driven speed + polling stability - Backend - SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs) - `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch - New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs) - Notification system - `NotificationSystem::process_event` only logs debug, disables WebView `emit_to`, fixes 0xcfffffff - Related emit/buffer functions now safe no-op, removed unused structures and warnings (notification.rs) - Frontend - services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents` - `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers: - immediate `globalMutate("getProfiles")` to refresh current profile - background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking) - forced `mutateSwitchStatus` to correct state - original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx - Commands / API cleanup - removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling * refactor(frontend): optimize profile switch with optimistic updates * refactor(profile-switch): switch to event-driven flow with Profile Store - SwitchManager pushes events; frontend polls get_profile_switch_events - Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches - UI flicker removed * fix(app-data): re-hook profile store updates during switch hydration * fix(notification): restore frontend event dispatch and non-blocking emits * fix(app-data-provider): restore proxy refresh and seed snapshot after 
refactor * fix: ensure switch completion events are received and handle proxies-updated * fix(app-data-provider): dedupe switch results by taskId and fix stale profile state * fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout * docs: UPDATELOG.md * chore: add necessary comments * fix(core): always dispatch async proxy snapshot after RefreshClash event * fix(proxy-store, provider): handle pending snapshots and proxy profiles - Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support. - Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures. - In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate. * fix(proxy): re-hook tray refresh events into proxy refresh queue - Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup. - Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path. * fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders - src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items. - src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready. 
* fix(profile-switch): preserve queued requests and avoid stale connection teardown
  - Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
  - Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; the success handler now skips close_connections_after_switch when success == false (workflow.rs:419).
* fix(profile-switch, layout): improve profile validation and restore backend refresh
  - Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
  - Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
* feat(profile-switch): handle cancellations for superseded requests
  - Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
  - Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
  - Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)
* fix(profiles): wrap logging payload for Tauri frontend_log
* fix(profile-switch): add rollback and error propagation for failed persistence
  - Added rollback on apply failure so Mihomo restores to the previous profile before exiting the success path early (state_machine.rs:474).
  - Reworked persist_profiles_with_timeout to surface timeout/join/save errors, convert them into CmdResult failures, and trigger rollback + error propagation when persistence fails (state_machine.rs:703).
* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks
* fix(profile-switch): preserve pending queue and surface discarded switches
* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches
* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks
* fix(profile-switch): queue concurrent updates and add bounded wait/backoff
* fix(proxy): trigger live refresh on app start for proxy snapshot
* refactor(profile-switch): split flow into layers and centralize async cleanup
  - Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
  - Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
  - Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
  - Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
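The pending-queue policy described in the notes above (evict only same-profile duplicates, then append, so switches queued for other profiles survive) can be sketched as follows. `SwitchRequest` and `enqueue` are illustrative stand-ins, not the real driver types:

```rust
// Minimal sketch of the queue policy from the commit notes: a new request
// removes only stale pending entries for the *same* profile, then is
// appended, leaving requests for other profiles pending. The names here
// are hypothetical simplifications of the driver's internal API.
#[derive(Debug, Clone)]
struct SwitchRequest {
    profile_id: String,
}

fn enqueue(pending: &mut Vec<SwitchRequest>, req: SwitchRequest) {
    // Drop duplicates for this profile instead of collapsing the whole queue.
    pending.retain(|r| r.profile_id != req.profile_id);
    pending.push(req);
}
```

This is the behavior contrasted with the earlier "collapse" call, which discarded every pending request rather than just the same-profile duplicates.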
        let handle = Self::global();
        if handle.is_exiting() {
            return;
        }
        let system_opt = handle.notification_system.read();
        if let Some(system) = system_opt.as_ref() {
            system.send_event(FrontendEvent::ProfileChanged {
                current_profile_id: profile_id,
            });
        }
    }

    /// Notify the frontend that a profile switch task has finished.
    pub fn notify_profile_switch_finished(
        profile_id: String,
        success: bool,
        notify: bool,
        task_id: u64,
    ) {
        Self::send_event(FrontendEvent::ProfileSwitchFinished {
            profile_id,
            success,
            notify,
            task_id,
        });
    }
    /// Notify the frontend that a Rust panic was captured.
    pub fn notify_rust_panic(message: String, location: String) {
        Self::send_event(FrontendEvent::RustPanic { message, location });
    }

    /// Notify the frontend that a profile timer was updated.
    pub fn notify_timer_updated(profile_index: String) {
        Self::send_event(FrontendEvent::TimerUpdated { profile_index });
    }

    /// Notify the frontend that a profile update has started.
    pub fn notify_profile_update_started(uid: String) {
        Self::send_event(FrontendEvent::ProfileUpdateStarted { uid });
    }

    /// Notify the frontend that a profile update has completed.
    pub fn notify_profile_update_completed(uid: String) {
        Self::send_event(FrontendEvent::ProfileUpdateCompleted { uid });
* feat(profile-switch): handle cancellations for superseded requests - Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482) - Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581) - Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20) * fix(profiles): wrap logging payload for Tauri frontend_log * fix(profile-switch): add rollback and error propagation for failed persistence - Added rollback on apply failure so Mihomo restores to the previous profile before exiting the success path early (state_machine.rs:474). - Reworked persist_profiles_with_timeout to surface timeout/join/save errors, convert them into CmdResult failures, and trigger rollback + error propagation when persistence fails (state_machine.rs:703). * fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks * fix(profile-switch): preserve pending queue and surface discarded switches * fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches * fix(app-data-provider): restore backend-driven refresh and reattach fallbacks * fix(profile-switch): queue concurrent updates and add bounded wait/backoff * fix(proxy): trigger live refresh on app start for proxy snapshot * refactor(profile-switch): split flow into layers and centralize async cleanup - Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API. - Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency. 
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable. - Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.
2025-10-30 17:29:15 +08:00
        Self::spawn_proxy_snapshot();
    }

    pub fn notify_proxies_updated(payload: Value) {
        Self::send_event(FrontendEvent::ProxiesUpdated { payload });
    }

    /// Builds a combined snapshot of proxies, proxy providers, and the active
    /// profile id. Returns `None` when the proxies themselves cannot be fetched
    /// or serialized; provider failures degrade to `Value::Null` instead.
    pub async fn build_proxy_snapshot() -> Option<Value> {
        let mihomo_guard = Self::mihomo().await;
        let proxies = match mihomo_guard.get_proxies().await {
            Ok(data) => match serde_json::to_value(&data) {
                Ok(value) => value,
                Err(error) => {
                    logging!(
                        warn,
                        Type::Frontend,
                        "Failed to serialize proxies snapshot: {error}"
                    );
                    return None;
                }
            },
            Err(error) => {
                logging!(
                    warn,
                    Type::Frontend,
                    "Failed to fetch proxies for snapshot: {error}"
                );
                return None;
            }
        };
        drop(mihomo_guard);

        let providers_guard = Self::mihomo().await;
        let providers_value = match providers_guard.get_proxy_providers().await {
            Ok(data) => serde_json::to_value(&data).unwrap_or_else(|error| {
                logging!(
                    warn,
                    Type::Frontend,
                    "Failed to serialize proxy providers for snapshot: {error}"
                );
                Value::Null
            }),
            Err(error) => {
                logging!(
                    warn,
                    Type::Frontend,
                    "Failed to fetch proxy providers for snapshot: {error}"
                );
                Value::Null
            }
        };
        drop(providers_guard);

        let profile_guard = Config::profiles().await;
        let profile_id = profile_guard.latest_ref().current.clone();
        drop(profile_guard);

        let emitted_at = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|duration| duration.as_millis() as i64)
            .unwrap_or(0);

        let payload = json!({
            "proxies": proxies,
            "providers": providers_value,
            "profileId": profile_id,
            "emittedAt": emitted_at,
        });
        Some(payload)
    }

    fn spawn_proxy_snapshot() {
        tauri::async_runtime::spawn(async {
            if let Some(payload) = Handle::build_proxy_snapshot().await {
                Handle::notify_proxies_updated(payload);
            }
        });
    }
    /// Sends a notice to the frontend, buffering messages that arrive before
    /// startup has completed so they can be replayed later.
    pub fn notice_message<S: Into<String>, M: Into<String>>(status: S, msg: M) {
        let handle = Self::global();
        let status_str = status.into();
        let msg_str = msg.into();

        if !*handle.startup_completed.read() {
            let mut errors = handle.startup_errors.write();
            errors.push(ErrorMessage {
                status: status_str,
                message: msg_str,
            });
            return;
        }

        if handle.is_exiting() {
            return;
        }

        Self::send_event(FrontendEvent::NoticeMessage {
            status: status_str,
            message: msg_str,
        });
    }
    fn send_event(event: FrontendEvent) {
        let handle = Self::global();
        if handle.is_exiting() {
            return;
        }
        let system_opt = handle.notification_system.read();
        if let Some(system) = system_opt.as_ref() {
            system.send_event(event);
        }
    }
    pub fn mark_startup_completed(&self) {
        *self.startup_completed.write() = true;
        self.send_startup_errors();
    }

    /// Replays buffered startup errors on a dedicated thread, pacing the
    /// events so the frontend is ready and not flooded.
    fn send_startup_errors(&self) {
        let errors = {
            let mut errors = self.startup_errors.write();
            std::mem::take(&mut *errors)
        };
        if errors.is_empty() {
            return;
        }

        let _ = thread::Builder::new()
            .name("startup-errors-sender".into())
            .spawn(move || {
                thread::sleep(timing::STARTUP_ERROR_DELAY);
                let handle = Handle::global();
                if handle.is_exiting() {
                    return;
                }
                let system_opt = handle.notification_system.read();
                if let Some(system) = system_opt.as_ref() {
                    for error in errors {
                        if handle.is_exiting() {
                            break;
                        }
                        system.send_event(FrontendEvent::NoticeMessage {
                            status: error.status,
                            message: error.message,
                        });
                        thread::sleep(timing::ERROR_BATCH_DELAY);
                    }
                }
            });
    }

    pub fn set_is_exiting(&self) {
        *self.is_exiting.write() = true;
        let mut system_opt = self.notification_system.write();
        if let Some(system) = system_opt.as_mut() {
            system.shutdown();
        }
    }

    pub fn is_exiting(&self) -> bool {
        *self.is_exiting.read()
    }
}
#[cfg(target_os = "macos")]
impl Handle {
    pub fn set_activation_policy(&self, policy: tauri::ActivationPolicy) -> Result<(), String> {
        Self::app_handle()
            .set_activation_policy(policy)
            .map_err(|e| e.to_string())
    }

    pub fn set_activation_policy_regular(&self) {
        let _ = self.set_activation_policy(tauri::ActivationPolicy::Regular);
    }

    pub fn set_activation_policy_accessory(&self) {
        let _ = self.set_activation_policy(tauri::ActivationPolicy::Accessory);
    }
}
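
// A minimal test-only sketch (module and helper names are hypothetical, not
// part of the original file): the `emittedAt` conversion in
// `build_proxy_snapshot` extracted into a pure helper so its millisecond
// truncation and pre-epoch fallback behavior can be unit-tested without a
// running Mihomo instance.
#[cfg(test)]
mod snapshot_timestamp_tests {
    use std::time::{Duration, SystemTime, UNIX_EPOCH};

    // Mirrors the conversion used for the snapshot payload: whole
    // milliseconds since the Unix epoch, with 0 as the fallback for
    // clocks set before the epoch.
    fn to_epoch_millis(now: SystemTime) -> i64 {
        now.duration_since(UNIX_EPOCH)
            .map(|duration| duration.as_millis() as i64)
            .unwrap_or(0)
    }

    #[test]
    fn truncates_sub_millisecond_precision() {
        // 1.5 ms after the epoch truncates down to 1 ms.
        let t = UNIX_EPOCH + Duration::from_micros(1_500);
        assert_eq!(to_epoch_millis(t), 1);
    }

    #[test]
    fn pre_epoch_times_fall_back_to_zero() {
        let t = UNIX_EPOCH - Duration::from_secs(1);
        assert_eq!(to_epoch_millis(t), 0);
    }
}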