* refactor: proxy refresh
* fix(proxy-store): properly hydrate and filter backend provider snapshots
* fix(proxy-store): add monotonic fetch guard and event bridge cleanup
* fix(proxy-store): tweak fetch sequencing guard to prevent snapshot invalidation from wiping fast responses
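The guard itself lives in the TypeScript proxy store, but the idea is language-agnostic: every fetch takes a monotonically increasing ticket, and a response is applied only if nothing newer has landed since it started. A minimal Rust sketch of that idea (the `FetchGuard` name and shape are illustrative, not from the codebase):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative monotonic fetch guard: stale responses are dropped
/// instead of overwriting data from a newer, faster fetch.
struct FetchGuard {
    next_ticket: AtomicU64,
    last_applied: AtomicU64,
}

impl FetchGuard {
    fn new() -> Self {
        Self {
            next_ticket: AtomicU64::new(1),
            last_applied: AtomicU64::new(0),
        }
    }

    /// Call when a fetch starts; the ticket orders it against other fetches.
    fn begin(&self) -> u64 {
        self.next_ticket.fetch_add(1, Ordering::Relaxed)
    }

    /// Call when a response arrives. Returns true if the response may be
    /// applied, i.e. no response from a newer fetch has landed already.
    fn try_apply(&self, ticket: u64) -> bool {
        self.last_applied.fetch_max(ticket, Ordering::AcqRel) < ticket
    }
}

fn main() {
    let guard = FetchGuard::new();
    let slow = guard.begin(); // ticket 1
    let fast = guard.begin(); // ticket 2
    assert!(guard.try_apply(fast)); // newer response applies
    assert!(!guard.try_apply(slow)); // stale response is discarded
}
```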
* docs: UPDATELOG.md
* fix(proxy-snapshot, proxy-groups): restore last-selected proxy and group info
* fix(proxy): merge static and provider entries in snapshot; fix Virtuoso viewport height
* fix(proxy-groups): restrict reduced-height viewport to chain-mode column
* refactor(profiles): introduce a state machine
* refactor: replace state machine with reducer
* refactor: introduce a profile switch worker
* refactor: hook up a backend-driven profile switch flow
* refactor(profile-switch): serialize switches with async queue and enrich frontend events
* feat(profiles): centralize profile switching with reducer/driver queue to fix stuck UI on rapid toggles
* chore: translate comments and log messages to English to avoid encoding issues
* refactor: migrate backend queue to SwitchDriver actor
* fix(profile): unify error string types in validation helper
* refactor(profile): make switch driver fully async and handle panics safely
* refactor(cmd): move switch-validation helper into new profile_switch module
* refactor(profile): modularize switch logic into profile_switch.rs
* refactor(profile_switch): modularize switch handler
- Break monolithic switch handler into proper module hierarchy
- Move shared globals, constants, and SwitchScope guard to state.rs
- Isolate queue orchestration and async task spawning in driver.rs
- Consolidate switch pipeline and config patching in workflow.rs
- Extract request pre-checks/YAML validation into validation.rs
* refactor(profile_switch): centralize state management and add cancellation flow
- Introduced SwitchManager in state.rs to unify mutex, sequencing, and SwitchScope handling.
- Added SwitchCancellation and SwitchRequest wrappers to encapsulate cancel tokens and notifications.
- Updated driver to allocate task IDs via SwitchManager, cancel old tokens, and queue next jobs in order.
- Updated workflow to check cancellation and sequence at each phase, replacing global flags with manager APIs.
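A minimal sketch of the cancellation wiring described above, assuming `tokio` and `tokio-util`; the real `SwitchRequest`/`SwitchCancellation` types carry more state and a different API:

```rust
use tokio_util::sync::CancellationToken;

/// Sketch in the spirit of `SwitchRequest`: a request that carries its own
/// cancellation token, so superseded switches can be cancelled and awaited
/// without busy-waiting. Field and method names are illustrative.
#[derive(Clone)]
struct SwitchRequest {
    profile_id: String,
    cancel: CancellationToken,
}

impl SwitchRequest {
    fn new(profile_id: impl Into<String>) -> Self {
        Self {
            profile_id: profile_id.into(),
            cancel: CancellationToken::new(),
        }
    }
}

#[tokio::main]
async fn main() {
    let req = SwitchRequest::new("profile-a");
    let watcher = req.cancel.clone();

    // Each workflow phase races its work against cancellation.
    let phase = tokio::spawn(async move {
        tokio::select! {
            _ = watcher.cancelled() => Err("superseded by a newer request"),
            _ = tokio::time::sleep(std::time::Duration::from_secs(10)) => Ok(()),
        }
    });

    // A newer request arrives: cancel the in-flight one.
    req.cancel.cancel();
    assert!(phase.await.unwrap().is_err());
}
```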
* feat(profile_switch): integrate explicit state machine for profile switching
- workflow.rs:24 now delegates each switch to SwitchStateMachine, passing an owned SwitchRequest.
Queue cancellation and state-sequence checks are centralized inside the machine instead of scattered guards.
- workflow.rs:176 replaces the old helper with `SwitchStateMachine::new(manager(), None, profiles).run().await`,
ensuring manual profile patches follow the same workflow (locking, validation, rollback) as queued switches.
- workflow.rs:180 & 275 expose `validate_profile_yaml` and `restore_previous_profile` for reuse inside the state machine.
- workflow/state_machine.rs:1 introduces a dedicated state machine module.
It manages global mutex acquisition, request/cancellation state, YAML validation, draft patching,
`CoreManager::update_config`, failure rollback, and tray/notification side-effects.
Transitions check for cancellations and stale sequences; completions release guards via `SwitchScope` drop.
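The shape of such a state machine, reduced to its essentials; the real `SwitchStateMachine` also owns locking, draft patching, and tray side-effects, so treat this as a sketch of the transition/guard pattern only:

```rust
/// Each phase is a state; every transition re-checks cancellation and the
/// sequence number before proceeding.
enum SwitchState {
    AcquireLock,
    Validate,
    ApplyConfig,
    Finalize,
    Done(bool),
}

struct SwitchStateMachine {
    state: SwitchState,
    sequence: u64,
    latest_sequence: u64, // in the real code this comes from the manager
    cancelled: bool,
}

impl SwitchStateMachine {
    fn stale_or_cancelled(&self) -> bool {
        self.cancelled || self.sequence < self.latest_sequence
    }

    fn run(mut self) -> bool {
        loop {
            if self.stale_or_cancelled() {
                return false; // guards released via Drop in the real code
            }
            self.state = match self.state {
                SwitchState::AcquireLock => SwitchState::Validate,
                SwitchState::Validate => SwitchState::ApplyConfig,
                SwitchState::ApplyConfig => SwitchState::Finalize,
                SwitchState::Finalize => SwitchState::Done(true),
                SwitchState::Done(ok) => return ok,
            };
        }
    }
}

fn main() {
    let machine = SwitchStateMachine {
        state: SwitchState::AcquireLock,
        sequence: 7,
        latest_sequence: 7,
        cancelled: false,
    };
    assert!(machine.run());
}
```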
* refactor(profile-switch): integrate stage-aware panic handling
- src-tauri/src/cmd/profile_switch/workflow/state_machine.rs:1
Defines SwitchStage and SwitchPanicInfo as crate-visible, wraps each transition in with_stage(...) with catch_unwind, and propagates CmdResult<bool> to distinguish validation failures from panics while keeping cancellation semantics.
- src-tauri/src/cmd/profile_switch/workflow.rs:25
Updates run_switch_job to return Result<bool, SwitchPanicInfo>, routing timeout, validation, config, and stage panic cases separately. Reuses SwitchPanicInfo for logging/UI notifications; patch_profiles_config maps state-machine panics into user-facing error strings.
- src-tauri/src/cmd/profile_switch/driver.rs:1
Adds SwitchJobOutcome to unify workflow results: normal completions carry bool, and panics propagate SwitchPanicInfo. The driver loop now logs panics explicitly and uses AssertUnwindSafe(...).catch_unwind() to guard setup-phase panics.
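A sketch of the `with_stage` + `catch_unwind` pattern, using `futures::FutureExt::catch_unwind` with `AssertUnwindSafe` as the commit describes; the `SwitchStage`/`SwitchPanicInfo` shapes here are simplified stand-ins:

```rust
use futures::FutureExt;
use std::panic::AssertUnwindSafe;

#[derive(Debug, Clone, Copy)]
enum SwitchStage {
    Validate,
    ApplyConfig,
}

/// Records which stage panicked so the driver can log it and surface a
/// user-facing error instead of crashing.
#[derive(Debug)]
struct SwitchPanicInfo {
    stage: SwitchStage,
    message: String,
}

/// Run one transition, converting a panic into structured data.
async fn with_stage<F>(stage: SwitchStage, fut: F) -> Result<bool, SwitchPanicInfo>
where
    F: std::future::Future<Output = bool>,
{
    match AssertUnwindSafe(fut).catch_unwind().await {
        Ok(ok) => Ok(ok),
        Err(panic) => {
            let message = panic
                .downcast_ref::<&str>()
                .map(|s| s.to_string())
                .or_else(|| panic.downcast_ref::<String>().cloned())
                .unwrap_or_else(|| "unknown panic".into());
            Err(SwitchPanicInfo { stage, message })
        }
    }
}

#[tokio::main]
async fn main() {
    let ok = with_stage(SwitchStage::Validate, async { true }).await;
    assert!(matches!(ok, Ok(true)));

    let boom = with_stage(SwitchStage::ApplyConfig, async { panic!("stage blew up") }).await;
    assert!(boom.is_err()); // panic became data, not a crash
}
```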
* refactor(profile-switch): add watchdog, heartbeat, and async timeout guards
- Introduce SwitchHeartbeat for stage tracking and timing; log stage transitions with elapsed durations.
- Add watchdog in driver to cancel stalled switches (5s heartbeat timeout).
- Wrap blocking ops (Config::apply, tray updates, profiles_save_file_safe, etc.) with time::timeout to prevent async stalls.
- Improve logs for stage transitions and watchdog timeouts to clarify cancellation points.
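The heartbeat/watchdog pairing, sketched with assumed types (`tokio-util`'s `CancellationToken`); the real `SwitchHeartbeat` also records stage labels and logs elapsed durations:

```rust
use std::{
    sync::{
        atomic::{AtomicU64, Ordering},
        Arc,
    },
    time::Duration,
};
use tokio::time::{interval, Instant};
use tokio_util::sync::CancellationToken;

/// The workflow bumps the heartbeat at each stage transition; a watchdog
/// cancels the switch if no beat arrives within the timeout (5s per the commit).
#[derive(Clone)]
struct SwitchHeartbeat {
    started: Instant,
    last_beat_ms: Arc<AtomicU64>,
}

impl SwitchHeartbeat {
    fn new() -> Self {
        Self {
            started: Instant::now(),
            last_beat_ms: Arc::new(AtomicU64::new(0)),
        }
    }

    fn beat(&self) {
        let elapsed = self.started.elapsed().as_millis() as u64;
        self.last_beat_ms.store(elapsed, Ordering::Relaxed);
    }

    fn since_last_beat(&self) -> Duration {
        let last = self.last_beat_ms.load(Ordering::Relaxed);
        self.started.elapsed() - Duration::from_millis(last)
    }
}

fn spawn_watchdog(hb: SwitchHeartbeat, cancel: CancellationToken, timeout: Duration) {
    tokio::spawn(async move {
        let mut tick = interval(Duration::from_millis(500));
        loop {
            tick.tick().await;
            if cancel.is_cancelled() {
                return; // switch finished or was already cancelled
            }
            if hb.since_last_beat() > timeout {
                eprintln!("watchdog: no heartbeat for {timeout:?}, cancelling switch");
                cancel.cancel();
                return;
            }
        }
    });
}

#[tokio::main]
async fn main() {
    let hb = SwitchHeartbeat::new();
    let cancel = CancellationToken::new();
    spawn_watchdog(hb.clone(), cancel.clone(), Duration::from_secs(5));
    hb.beat(); // each stage transition would call this
    cancel.cancelled().await; // a stalled switch gets cancelled here
}
```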
* refactor(profile-switch): async post-switch tasks, early lock release, and spawn_blocking for IO
* feat(profile-switch): track cleanup and coordinate pipeline
- Add explicit cleanup tracking in the driver (`cleanup_profiles` map + `CleanupDone` messages) to know when background post-switch work is still running before starting a new workflow. (driver.rs:29-50)
- Update `handle_enqueue` to detect “cleanup in progress”: same-profile retries are short-circuited; other requests collapse the pending queue, cancelling old tokens so only the latest intent survives. (driver.rs:176-247)
- Rework scheduling helpers: `start_next_job` refuses to start while cleanup is outstanding; discarded requests release cancellation tokens; cleanup completion explicitly restarts the pipeline. (driver.rs:258-442)
* feat(profile-switch): unify post-switch cleanup handling
- workflow.rs (25-427) returns `SwitchWorkflowResult` (success + CleanupHandle) or `SwitchWorkflowError`.
All failure/timeout paths stash post-switch work into a single CleanupHandle.
Cleanup helpers (`notify_profile_switch_finished` and `close_connections_after_switch`) run inside that task for proper lifetime handling.
- driver.rs (29-439) propagates CleanupHandle through `SwitchJobOutcome`, spawns a bridge to wait for completion, and blocks `start_next_job` until done.
Direct driver-side panics now schedule failure cleanup via the shared helper.
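A sketch of the `CleanupHandle` idea under these assumptions: post-switch work runs in its own task, and the driver awaits the handle before `start_next_job`:

```rust
use tokio::task::JoinHandle;

/// Post-switch work (notifications, draining connections) runs in its own
/// task; the driver refuses to start the next switch until it resolves.
struct CleanupHandle(JoinHandle<()>);

impl CleanupHandle {
    fn spawn<F>(work: F) -> Self
    where
        F: std::future::Future<Output = ()> + Send + 'static,
    {
        Self(tokio::spawn(work))
    }

    /// Wait for the background cleanup; a panicked cleanup task is logged
    /// rather than propagated, since the switch itself already finished.
    async fn finished(self) {
        if let Err(err) = self.0.await {
            eprintln!("post-switch cleanup task failed: {err}");
        }
    }
}

#[tokio::main]
async fn main() {
    let cleanup = CleanupHandle::spawn(async {
        // e.g. notify_profile_switch_finished + close_connections_after_switch
        tokio::time::sleep(std::time::Duration::from_millis(50)).await;
    });
    cleanup.finished().await; // driver blocks start_next_job on this
    println!("pipeline may start the next switch");
}
```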
* tmp
* Revert "tmp"
This reverts commit e582cf4a65.
* refactor: queue frontend events through async dispatcher
* refactor: queue frontend switch/proxy events and throttle notices
* chore: frontend debug log
* fix: re-enable only ProfileSwitchFinished events - keep others suppressed for crash isolation
- Re-enabled only ProfileSwitchFinished events; RefreshClash, RefreshProxy, and ProfileChanged remain suppressed (they log suppression messages)
- Allows frontend to receive task completion notifications for UI feedback while crash isolation continues
- src-tauri/src/core/handle.rs now only suppresses notify_profile_changed
- Serialized emitter, frontend logging bridge, and other diagnostics unchanged
* refactor: refreshClashData
* refactor(proxy): stabilize proxy switch pipeline and rendering
- Add coalescing buffer in notification.rs to emit only the latest proxies-updated snapshot
- Replace nextTick with queueMicrotask in asyncQueue.ts for same-frame hydration
- Hide auto-generated GLOBAL snapshot and preserve optional metadata in proxy-snapshot.ts
- Introduce stable proxy rendering state in AppDataProvider (proxyTargetProfileId, proxyDisplayProfileId, isProxyRefreshPending)
- Update proxy page to fade content during refresh and overlay status banner instead of showing incomplete snapshot
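The coalescing buffer in notification.rs is hand-rolled; a `tokio::sync::watch` channel shows the same only-the-latest-snapshot effect in a few lines (a sketch of the behavior, not the actual implementation):

```rust
use tokio::sync::watch;

/// Stand-in for the proxies snapshot payload.
#[derive(Clone, Debug, Default)]
struct ProxiesSnapshot {
    revision: u64,
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = watch::channel(ProxiesSnapshot::default());

    // Producer: three rapid snapshots; only the last one matters.
    for revision in 1..=3 {
        tx.send(ProxiesSnapshot { revision }).unwrap();
    }

    // Consumer: wakes once and reads the latest value, skipping stale ones.
    rx.changed().await.unwrap();
    let latest = rx.borrow_and_update().clone();
    assert_eq!(latest.revision, 3);
    println!("emitted coalesced snapshot rev {}", latest.revision);
}
```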
* refactor(profiles): move manual activating logic to reducer for deterministic queue tracking
* refactor: replace proxy-data event bridge with pure polling and simplify proxy store
- Replaced the proxy-data event bridge with pure polling: AppDataProvider now fetches the initial snapshot and drives refreshes from the polled switchStatus, removing verge://refresh-* listeners (src/providers/app-data-provider.tsx).
- Simplified proxy-store by dropping the proxies-updated listener queue and unused payload/normalizer helpers; relies on SWR/provider fetch path + calcuProxies for live updates (src/stores/proxy-store.ts).
- Trimmed layout-level event wiring to keep only notice/show/hide subscriptions, removing obsolete refresh listeners (src/pages/_layout/useLayoutEvents.ts).
* refactor(proxy): streamline proxies-updated handling and store event flow
- AppDataProvider now treats `proxies-updated` as the fast path: the listener
calls `applyLiveProxyPayload` immediately and schedules only a single fallback
`fetchLiveProxies` ~600 ms later (replacing the old 0/250/1000/2000 cascade).
Expensive provider/rule refreshes run in parallel via `Promise.allSettled`, and
the multi-stage refresh queue that ran after profile updates complete was removed
(src/providers/app-data-provider.tsx).
- Rebuilt proxy-store to support the event flow: restored `setLive`, provider
normalization, and an animation-frame + async queue that applies payloads without
blocking. Exposed `applyLiveProxyPayload` so providers can push events directly
into the store (src/stores/proxy-store.ts).
* refactor: switch delay
* refactor(app-data-provider): trigger getProfileSwitchStatus revalidation on profile-switch-finished
- AppDataProvider now listens to `profile-switch-finished` and calls `mutate("getProfileSwitchStatus")` to immediately update state and unlock buttons (src/providers/app-data-provider.tsx).
- Retain existing detailed timing logs for monitoring other stages.
- Frontend success notifications remain instant; background refreshes continue asynchronously.
* fix(profiles): prevent duplicate toast on page remount
* refactor(profile-switch): make active switches preemptible and prevent queue piling
- Add notify mechanism to SwitchCancellation to await cancellation without busy-waiting (state.rs:82)
- Collapse pending queue to a single entry in the driver; cancel in-flight task on newer request (driver.rs:232)
- Update handle_update_core to watch cancel token and 30s timeout; release locks, discard draft, and exit early if canceled (state_machine.rs:301)
- Providers revalidate status immediately on profile-switch-finished events (app-data-provider.tsx:208)
* refactor(core): make core reload phase controllable, reduce 0xcfffffff risk
- CoreManager::apply_config now calls `reload_config_with_retry`: each attempt waits up to 5s, with at most 3 attempts; on failure it returns an error with the duration logged and triggers a core restart if needed (src-tauri/src/core/manager/config.rs:175, 205)
- `reload_config_with_retry` logs attempt details on timeout or error; if the error is a Mihomo connection issue, it falls back to the original restart logic (src-tauri/src/core/manager/config.rs:211)
- `reload_config_once` retains original Mihomo call for retry wrapper usage (src-tauri/src/core/manager/config.rs:247)
* chore(frontend-logs): downgrade routine event logs from info to debug
- Logs like `emit_via_app entering spawn_blocking`, `Async emit…`, `Buffered proxies…` are now debug-level (src-tauri/src/core/notification.rs:155, :265, :309…)
- Genuine warnings/errors (failures/timeouts) remain at warn/error
- Core stage logs remain info to keep backend tracking visible
* refactor(frontend-emit): make emit_via_app fire-and-forget async
- `emit_via_app` is now a regular function: it spawns the emit with `tokio::spawn` and logs a warning if `emit_to` fails, so the caller returns immediately (src-tauri/src/core/notification.rs:269)
- Removed `.await` at Async emit and flush_proxies calls; only record dispatch duration and warn on failure (src-tauri/src/core/notification.rs:211, :329)
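A sketch of that fire-and-forget shape; `emit_to` is stubbed here, whereas the real code goes through the Tauri AppHandle:

```rust
use std::time::Instant;

/// The caller records dispatch duration and returns immediately; failures
/// are only logged from the spawned task.
fn emit_via_app(event: &'static str, payload: String) {
    let start = Instant::now();
    tokio::spawn(async move {
        if let Err(err) = emit_to(event, payload).await {
            eprintln!("warn: emit_to({event}) failed: {err}");
        }
    });
    println!("dispatched {event} in {}us", start.elapsed().as_micros());
}

// Stand-in for the Tauri emit; assume it can fail.
async fn emit_to(_event: &str, _payload: String) -> Result<(), String> {
    Ok(())
}

#[tokio::main]
async fn main() {
    emit_via_app("profile-switch-finished", "{\"ok\":true}".into());
    // Give the detached task a moment to run before the runtime shuts down.
    tokio::time::sleep(std::time::Duration::from_millis(10)).await;
}
```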
* refactor(ui): restructure profile switch for event-driven speed + polling stability
- Backend
- SwitchManager maintains a lightweight event queue: added `event_sequence`, `recent_events`, and `SwitchResultEvent`; provides `push_event` / `events_after` (state.rs)
- `handle_completion` pushes events on success/failure and keeps `last_result` (driver.rs) for frontend incremental fetch
- New Tauri command `get_profile_switch_events(after_sequence)` exposes `events_after` (profile_switch/mod.rs → profile.rs → lib.rs)
- Notification system
- `NotificationSystem::process_event` now only logs at debug level and disables WebView `emit_to`, fixing the 0xcfffffff crash
- Related emit/buffer functions are now safe no-ops; unused structures and warnings removed (notification.rs)
- Frontend
- services/cmds.ts defines `SwitchResultEvent` and `getProfileSwitchEvents`
- `AppDataProvider` holds `switchEventSeqRef`, polls incremental events every 0.25s (busy) / 1s (idle); each event triggers:
- immediate `globalMutate("getProfiles")` to refresh current profile
- background refresh of proxies/providers/rules via `Promise.allSettled` (failures logged, non-blocking)
- forced `mutateSwitchStatus` to correct state
- original switchStatus effect calls `handleSwitchResult` as fallback; other toast/activation logic handled in profiles.tsx
- Commands / API cleanup
- removed `pub use profile_switch::*;` in cmd::mod.rs to avoid conflicts; frontend uses new command polling
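A sketch of the sequence-numbered event queue behind `get_profile_switch_events(after_sequence)`; field names follow the commit text, while the capacity and exact types are assumed:

```rust
use std::collections::VecDeque;

/// Each completed switch gets a monotonically increasing sequence; the
/// frontend polls with the last sequence it has seen.
#[derive(Clone, Debug)]
struct SwitchResultEvent {
    sequence: u64,
    profile_id: String,
    success: bool,
}

struct SwitchEventQueue {
    event_sequence: u64,
    recent_events: VecDeque<SwitchResultEvent>,
    capacity: usize,
}

impl SwitchEventQueue {
    fn new(capacity: usize) -> Self {
        Self {
            event_sequence: 0,
            recent_events: VecDeque::new(),
            capacity,
        }
    }

    fn push_event(&mut self, profile_id: &str, success: bool) {
        self.event_sequence += 1;
        if self.recent_events.len() == self.capacity {
            self.recent_events.pop_front(); // drop the oldest event
        }
        self.recent_events.push_back(SwitchResultEvent {
            sequence: self.event_sequence,
            profile_id: profile_id.into(),
            success,
        });
    }

    /// What `get_profile_switch_events(after_sequence)` would return.
    fn events_after(&self, after_sequence: u64) -> Vec<SwitchResultEvent> {
        self.recent_events
            .iter()
            .filter(|e| e.sequence > after_sequence)
            .cloned()
            .collect()
    }
}

fn main() {
    let mut q = SwitchEventQueue::new(16);
    q.push_event("profile-a", true);
    q.push_event("profile-b", false);
    // Frontend last saw sequence 1, so it receives only the second event.
    assert_eq!(q.events_after(1).len(), 1);
}
```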
* refactor(frontend): optimize profile switch with optimistic updates
* refactor(profile-switch): switch to event-driven flow with Profile Store
- SwitchManager pushes events; frontend polls get_profile_switch_events
- Zustand store handles optimistic profiles; AppDataProvider applies updates and background-fetches
- UI flicker removed
* fix(app-data): re-hook profile store updates during switch hydration
* fix(notification): restore frontend event dispatch and non-blocking emits
* fix(app-data-provider): restore proxy refresh and seed snapshot after refactor
* fix: ensure switch completion events are received and handle proxies-updated
* fix(app-data-provider): dedupe switch results by taskId and fix stale profile state
* fix(profile-switch): ensure patch_profiles_config_by_profile_index waits for real completion and handle join failures in apply_config_with_timeout
* docs: UPDATELOG.md
* chore: add necessary comments
* fix(core): always dispatch async proxy snapshot after RefreshClash event
* fix(proxy-store, provider): handle pending snapshots and proxy profiles
- Added pending snapshot tracking in proxy-store so `lastAppliedFetchId` no longer jumps on seed. Profile adoption is deferred until a qualifying fetch completes. Exposed `clearPendingProfile` for rollback support.
- Cleared pending snapshot state whenever live payloads apply or the store resets, preventing stale optimistic profile IDs after failures.
- In provider integration, subscribed to the pending proxy profile and fed it into target-profile derivation. Cleared it on failed switch results so hydration can advance and UI status remains accurate.
* fix(proxy): re-hook tray refresh events into proxy refresh queue
- Reattached listen("verge://refresh-proxy-config", …) at src/providers/app-data-provider.tsx:402 and registered it for cleanup.
- Added matching window fallback handler at src/providers/app-data-provider.tsx:430 so in-app dispatches share the same refresh path.
* fix(proxy-snapshot/proxy-groups): address review findings on snapshot placeholders
- src/utils/proxy-snapshot.ts:72-95 now derives snapshot group members solely from proxy-groups.proxies, so provider ids under `use` no longer generate placeholder proxy items.
- src/components/proxy/proxy-groups.tsx:665-677 lets the hydration overlay capture pointer events (and shows a wait cursor) so users can’t interact with snapshot-only placeholders before live data is ready.
* fix(profile-switch): preserve queued requests and avoid stale connection teardown
- Keep earlier queued switches intact by dropping the blanket “collapse” call: after removing duplicates for the same profile, new requests are simply appended, leaving other profiles pending (driver.rs:376). Resolves queue-loss scenario.
- Gate connection cleanup on real successes so cancelled/stale runs no longer tear down Mihomo connections; success handler now skips close_connections_after_switch when success == false (workflow.rs:419).
* fix(profile-switch, layout): improve profile validation and restore backend refresh
- Hardened profile validation using `tokio::fs` with a 5s timeout and offloading YAML parsing to `AsyncHandler::spawn_blocking`, preventing slow disks or malformed files from freezing the runtime (src-tauri/src/cmd/profile_switch/validation.rs:9, 71).
- Restored backend-triggered refresh handling by listening for `verge://refresh-clash-config` / `verge://refresh-verge-config` and invoking shared refresh services so SWR caches stay in sync with core events (src/pages/_layout/useLayoutEvents.ts:6, 45, 55).
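A sketch of that hardened validation path, assuming `serde_yaml` for parsing and plain `tokio::task::spawn_blocking` in place of `AsyncHandler::spawn_blocking`:

```rust
use std::{path::Path, time::Duration};
use tokio::time::timeout;

const READ_TIMEOUT: Duration = Duration::from_secs(5);

/// Bound the disk read with a timeout, then parse YAML off the async runtime.
async fn validate_profile_yaml(path: &Path) -> Result<(), String> {
    // Slow disks can stall tokio::fs reads; cap them at 5s.
    let raw = timeout(READ_TIMEOUT, tokio::fs::read_to_string(path))
        .await
        .map_err(|_| format!("reading {path:?} timed out"))?
        .map_err(|e| format!("reading {path:?} failed: {e}"))?;

    // YAML parsing is CPU-bound; move it off the reactor threads.
    tokio::task::spawn_blocking(move || {
        serde_yaml::from_str::<serde_yaml::Value>(&raw)
            .map(|_| ())
            .map_err(|e| format!("invalid YAML: {e}"))
    })
    .await
    .map_err(|e| format!("validation task panicked: {e}"))?
}

#[tokio::main]
async fn main() {
    let result = validate_profile_yaml(Path::new("profile.yaml")).await;
    println!("validation result: {result:?}");
}
```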
* feat(profile-switch): handle cancellations for superseded requests
- Added a `cancelled` flag and constructor so superseded requests publish an explicit cancellation instead of a failure (src-tauri/src/cmd/profile_switch/state.rs:249, src-tauri/src/cmd/profile_switch/driver.rs:482)
- Updated the profile switch effect to log cancellations as info, retain the shared `mutate` call, and skip emitting error toasts while still refreshing follow-up work (src/pages/profiles.tsx:554, src/pages/profiles.tsx:581)
- Exposed the new flag on the TypeScript contract to keep downstream consumers type-safe (src/services/cmds.ts:20)
* fix(profiles): wrap logging payload for Tauri frontend_log
* fix(profile-switch): add rollback and error propagation for failed persistence
- Added rollback on apply failure so Mihomo restores to the previous profile
before exiting the success path early (state_machine.rs:474).
- Reworked persist_profiles_with_timeout to surface timeout/join/save errors,
convert them into CmdResult failures, and trigger rollback + error propagation
when persistence fails (state_machine.rs:703).
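A sketch of the reworked persistence step under those constraints; the save closure stands in for `profiles_save_file_safe`, and the 5s bound is an assumed value:

```rust
use std::time::Duration;
use tokio::time::timeout;

const PERSIST_TIMEOUT: Duration = Duration::from_secs(5); // assumed value

/// Timeout, join, and save errors all surface as Err so the caller can roll
/// back instead of silently continuing.
async fn persist_profiles_with_timeout<F>(save: F) -> Result<(), String>
where
    F: FnOnce() -> Result<(), String> + Send + 'static,
{
    let join = tokio::task::spawn_blocking(save);
    match timeout(PERSIST_TIMEOUT, join).await {
        Err(_) => Err("persisting profiles timed out".into()),
        Ok(Err(join_err)) => Err(format!("persist task panicked: {join_err}")),
        Ok(Ok(save_result)) => save_result.map_err(|e| format!("save failed: {e}")),
    }
}

#[tokio::main]
async fn main() {
    // On Err the caller restores the previous profile before returning.
    let outcome = persist_profiles_with_timeout(|| Err("disk full".into())).await;
    assert!(outcome.is_err()); // would trigger rollback in state_machine.rs
}
```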
* fix(profile-switch): prevent mid-finalize reentrancy and lingering tasks
* fix(profile-switch): preserve pending queue and surface discarded switches
* fix(profile-switch): avoid draining Mihomo sockets on failed/cancelled switches
* fix(app-data-provider): restore backend-driven refresh and reattach fallbacks
* fix(profile-switch): queue concurrent updates and add bounded wait/backoff
* fix(proxy): trigger live refresh on app start for proxy snapshot
* refactor(profile-switch): split flow into layers and centralize async cleanup
- Introduced `SwitchDriver` to encapsulate queue and driver logic while keeping the public Tauri command API.
- Added workflow/cleanup helpers for notification dispatch and Mihomo connection draining, re-exported for API consistency.
- Replaced monolithic state machine with `core.rs`, `context.rs`, and `stages.rs`, plus a thin `mod.rs` re-export layer; stage methods are now individually testable.
- Removed legacy `workflow/state_machine.rs` and adjusted visibility on re-exported types/constants to ensure compilation.

src-tauri/src/core/manager/config.rs (371 lines, 12 KiB, Rust):

```rust
use super::CoreManager;
use crate::{
    config::*,
    constants::timing,
    core::{handle, validate::CoreConfigValidator},
    logging,
    utils::{dirs, help, logging::Type},
};
use anyhow::{Result, anyhow};
use smartstring::alias::String;
use std::{path::PathBuf, time::Instant};
use tauri_plugin_mihomo::Error as MihomoError;
use tokio::time::{sleep, timeout};

// Per-attempt timeout and retry budget for Mihomo config reloads.
const RELOAD_TIMEOUT: std::time::Duration = std::time::Duration::from_secs(5);
const MAX_RELOAD_ATTEMPTS: usize = 3;

impl CoreManager {
    /// Fall back to the plain Clash config when profile processing fails,
    /// then surface the error to the UI as a notice.
    pub async fn use_default_config(&self, error_key: &str, error_msg: &str) -> Result<()> {
        use crate::constants::files::RUNTIME_CONFIG;

        let runtime_path = dirs::app_home_dir()?.join(RUNTIME_CONFIG);
        let clash_config = Config::clash().await.latest_ref().0.clone();

        *Config::runtime().await.draft_mut() = Box::new(IRuntime {
            config: Some(clash_config.clone()),
            exists_keys: vec![],
            chain_logs: Default::default(),
        });

        help::save_yaml(&runtime_path, &clash_config, Some("# Clash Verge Runtime")).await?;
        handle::Handle::notice_message(error_key, error_msg);
        Ok(())
    }

    /// Regenerate, validate, and apply the runtime config. Debounced and
    /// guarded by a semaphore so only one update runs at a time.
    pub async fn update_config(&self) -> Result<(bool, String)> {
        if handle::Handle::global().is_exiting() {
            return Ok((true, String::new()));
        }

        if !self.should_update_config()? {
            return Ok((true, String::new()));
        }

        let start = Instant::now();

        let _permit = self
            .update_semaphore
            .try_acquire()
            .map_err(|_| anyhow!("Config update already in progress"))?;

        let result = self.perform_config_update().await;

        match &result {
            Ok((success, msg)) => {
                logging!(
                    info,
                    Type::Core,
                    "[ConfigUpdate] Finished (success={}, elapsed={}ms, msg={})",
                    success,
                    start.elapsed().as_millis(),
                    msg
                );
            }
            Err(err) => {
                logging!(
                    error,
                    Type::Core,
                    "[ConfigUpdate] Failed after {}ms: {}",
                    start.elapsed().as_millis(),
                    err
                );
            }
        }

        result
    }

    /// Debounce guard: skip updates arriving within CONFIG_UPDATE_DEBOUNCE
    /// of the previous one.
    fn should_update_config(&self) -> Result<bool> {
        let now = Instant::now();
        let mut last = self.last_update.lock();

        if let Some(last_time) = *last
            && now.duration_since(last_time) < timing::CONFIG_UPDATE_DEBOUNCE
        {
            return Ok(false);
        }

        *last = Some(now);
        Ok(true)
    }

    /// Pipeline: generate -> validate -> write runtime file -> apply,
    /// with per-stage timing logs.
    async fn perform_config_update(&self) -> Result<(bool, String)> {
        logging!(debug, Type::Core, "[ConfigUpdate] Pipeline start");
        let total_start = Instant::now();

        let mut stage_timer = Instant::now();
        Config::generate().await?;
        logging!(
            debug,
            Type::Core,
            "[ConfigUpdate] Generation completed in {}ms",
            stage_timer.elapsed().as_millis()
        );

        stage_timer = Instant::now();
        let validation_result = CoreConfigValidator::global().validate_config().await;
        logging!(
            debug,
            Type::Core,
            "[ConfigUpdate] Validation completed in {}ms",
            stage_timer.elapsed().as_millis()
        );

        match validation_result {
            Ok((true, _)) => {
                stage_timer = Instant::now();
                let run_path = Config::generate_file(ConfigType::Run).await?;
                logging!(
                    debug,
                    Type::Core,
                    "[ConfigUpdate] Runtime file generated in {}ms",
                    stage_timer.elapsed().as_millis()
                );
                stage_timer = Instant::now();
                self.apply_config(run_path).await?;
                logging!(
                    debug,
                    Type::Core,
                    "[ConfigUpdate] Core apply completed in {}ms",
                    stage_timer.elapsed().as_millis()
                );
                logging!(
                    debug,
                    Type::Core,
                    "[ConfigUpdate] Pipeline succeeded in {}ms",
                    total_start.elapsed().as_millis()
                );
                Ok((true, String::new()))
            }
            Ok((false, error_msg)) => {
                Config::runtime().await.discard();
                logging!(
                    warn,
                    Type::Core,
                    "[ConfigUpdate] Validation reported failure after {}ms: {}",
                    total_start.elapsed().as_millis(),
                    error_msg
                );
                Ok((false, error_msg))
            }
            Err(e) => {
                Config::runtime().await.discard();
                logging!(
                    error,
                    Type::Core,
                    "[ConfigUpdate] Validation errored after {}ms: {}",
                    total_start.elapsed().as_millis(),
                    e
                );
                Err(e)
            }
        }
    }

    /// Apply a config file directly, bypassing generation and validation.
    pub async fn put_configs_force(&self, path: PathBuf) -> Result<()> {
        self.apply_config(path).await
    }

    /// Reload Mihomo with the given config; on connection-class failures,
    /// restart the core and retry before giving up.
    pub(super) async fn apply_config(&self, path: PathBuf) -> Result<()> {
        let path_str = dirs::path_to_str(&path)?;

        let reload_start = Instant::now();
        match self.reload_config_with_retry(path_str).await {
            Ok(_) => {
                Config::runtime().await.apply();
                logging!(
                    debug,
                    Type::Core,
                    "Configuration applied (reload={}ms)",
                    reload_start.elapsed().as_millis()
                );
                Ok(())
            }
            Err(err) => {
                if Self::should_restart_for_anyhow(&err) {
                    logging!(
                        warn,
                        Type::Core,
                        "Reload failed after {}ms with retryable/timeout error; attempting restart: {}",
                        reload_start.elapsed().as_millis(),
                        err
                    );
                    match self.retry_with_restart(path_str).await {
                        Ok(_) => return Ok(()),
                        Err(retry_err) => {
                            logging!(
                                error,
                                Type::Core,
                                "Reload retry with restart failed: {}",
                                retry_err
                            );
                            Config::runtime().await.discard();
                            return Err(retry_err);
                        }
                    }
                }
                Config::runtime().await.discard();
                logging!(
                    error,
                    Type::Core,
                    "Failed to apply config after {}ms: {}",
                    reload_start.elapsed().as_millis(),
                    err
                );
                Err(anyhow!("Failed to apply config: {}", err))
            }
        }
    }

    /// Restart the core, wait for it to settle, then retry the reload once.
    async fn retry_with_restart(&self, config_path: &str) -> Result<()> {
        if handle::Handle::global().is_exiting() {
            return Err(anyhow!("Application exiting"));
        }

        logging!(warn, Type::Core, "Restarting core for config reload");
        self.restart_core().await?;
        sleep(timing::CONFIG_RELOAD_DELAY).await;

        self.reload_config_with_retry(config_path).await?;
        Config::runtime().await.apply();
        logging!(info, Type::Core, "Configuration applied after restart");
        Ok(())
    }

    /// Retry the reload up to MAX_RELOAD_ATTEMPTS times, bounding each
    /// attempt by RELOAD_TIMEOUT so a stalled Mihomo cannot hang the caller.
    async fn reload_config_with_retry(&self, path: &str) -> Result<()> {
        for attempt in 1..=MAX_RELOAD_ATTEMPTS {
            let attempt_start = Instant::now();
            let reload_future = self.reload_config_once(path);
            match timeout(RELOAD_TIMEOUT, reload_future).await {
                Ok(Ok(())) => {
                    logging!(
                        debug,
                        Type::Core,
                        "reload_config attempt {}/{} succeeded in {}ms",
                        attempt,
                        MAX_RELOAD_ATTEMPTS,
                        attempt_start.elapsed().as_millis()
                    );
                    return Ok(());
                }
                Ok(Err(err)) => {
                    logging!(
                        warn,
                        Type::Core,
                        "reload_config attempt {}/{} failed after {}ms: {}",
                        attempt,
                        MAX_RELOAD_ATTEMPTS,
                        attempt_start.elapsed().as_millis(),
                        err
                    );
                    if attempt == MAX_RELOAD_ATTEMPTS {
                        return Err(anyhow!(
                            "Failed to reload config after {} attempts: {}",
                            attempt,
                            err
                        ));
                    }
                }
                Err(_) => {
                    logging!(
                        warn,
                        Type::Core,
                        "reload_config attempt {}/{} timed out after {:?}",
                        attempt,
                        MAX_RELOAD_ATTEMPTS,
                        RELOAD_TIMEOUT
                    );
                    if attempt == MAX_RELOAD_ATTEMPTS {
                        return Err(anyhow!(
                            "Config reload timed out after {:?} ({} attempts)",
                            RELOAD_TIMEOUT,
                            MAX_RELOAD_ATTEMPTS
                        ));
                    }
                }
            }
        }

        Err(anyhow!(
            "Config reload retry loop exited unexpectedly ({} attempts)",
            MAX_RELOAD_ATTEMPTS
        ))
    }

    /// A single reload call against the Mihomo API, with timing logs kept
    /// at info level so backend tracking stays visible.
    async fn reload_config_once(&self, path: &str) -> Result<(), MihomoError> {
        logging!(
            info,
            Type::Core,
            "[ConfigUpdate] reload_config_once begin path={}",
            path
        );
        let start = Instant::now();
        let result = handle::Handle::mihomo()
            .await
            .reload_config(true, path)
            .await;
        let elapsed = start.elapsed().as_millis();
        match result {
            Ok(()) => {
                logging!(
                    info,
                    Type::Core,
                    "[ConfigUpdate] reload_config_once succeeded (elapsed={}ms)",
                    elapsed
                );
                Ok(())
            }
            Err(err) => {
                logging!(
                    warn,
                    Type::Core,
                    "[ConfigUpdate] reload_config_once failed (elapsed={}ms, err={})",
                    elapsed,
                    err
                );
                Err(err)
            }
        }
    }

    /// Decide from an anyhow error whether a core restart is worth trying.
    fn should_restart_for_anyhow(err: &anyhow::Error) -> bool {
        if let Some(mihomo_err) = err.downcast_ref::<MihomoError>() {
            return Self::should_restart_on_error(mihomo_err);
        }
        let msg = err.to_string();
        msg.contains("timed out")
            || msg.contains("reload")
            || msg.contains("Failed to apply config")
    }

    /// Connection-class Mihomo errors justify a restart; anything else does not.
    fn should_restart_on_error(err: &MihomoError) -> bool {
        match err {
            MihomoError::ConnectionFailed | MihomoError::ConnectionLost => true,
            MihomoError::Io(io_err) => Self::is_connection_io_error(io_err.kind()),
            MihomoError::Reqwest(req_err) => {
                req_err.is_connect()
                    || req_err.is_timeout()
                    || Self::contains_error_pattern(&req_err.to_string())
            }
            MihomoError::FailedResponse(msg) => Self::contains_error_pattern(msg),
            _ => false,
        }
    }

    fn is_connection_io_error(kind: std::io::ErrorKind) -> bool {
        matches!(
            kind,
            std::io::ErrorKind::ConnectionAborted
                | std::io::ErrorKind::ConnectionRefused
                | std::io::ErrorKind::ConnectionReset
                | std::io::ErrorKind::NotFound
        )
    }

    fn contains_error_pattern(text: &str) -> bool {
        use crate::constants::error_patterns::CONNECTION_ERRORS;
        CONNECTION_ERRORS.iter().any(|p| text.contains(p))
    }
}
```