Fix CPU hotplug activation race in the timer migration code, by Frederic Weisbecker.
 
 Signed-off-by: Ingo Molnar <mingo@kernel.org>

Merge tag 'timers-urgent-2026-05-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull timer fix from Ingo Molnar:
 "Fix CPU hotplug activation race in the timer migration code, by
  Frederic Weisbecker"

* tag 'timers-urgent-2026-05-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  timers/migration: Fix another hotplug activation race
Linus Torvalds 2026-05-08 20:03:39 -07:00
commit 6e1e5a33e8


@@ -1860,19 +1860,37 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node,
	 * child to the new parents. So tmigr_active_up() activates the
	 * new parents while walking up from the old root to the new.
	 *
-	 * It is ensured that @start is active, as this setup path is
-	 * executed in hotplug prepare callback. This is executed by an
-	 * already connected and !idle CPU. Even if all other CPUs go idle,
-	 * the CPU executing the setup will be responsible up to current top
-	 * level group. And the next time it goes inactive, it will release
-	 * the new childmask and parent to subsequent walkers through this
-	 * @child. Therefore propagate active state unconditionally.
+	 * It is ensured that @start is active (or on the way to be activated
+	 * by another CPU that woke up before the current one) as this setup
+	 * path is executed in hotplug prepare callback. This is executed by
+	 * an already connected and !idle CPU in the hierarchy.
+	 *
+	 * The below RmW atomic operation ensures that:
+	 *
+	 * 1) If the old root has been completely activated, the latest state
+	 *    is acquired (the below implicit acquire pairs with the implicit
+	 *    release from cmpxchg() in tmigr_active_up()).
+	 *
+	 * 2) If the old root is still on the way to be activated, the lagging
+	 *    behind CPU performing the activation will acquire the links up to
+	 *    the new root (the below implicit release pairs with the implicit
+	 *    acquire from cmpxchg() in tmigr_active_up()).
+	 *
+	 * 3) Every subsequent CPU below the old root will acquire the new
+	 *    links while walking through the old root (the below implicit
+	 *    release pairs with the implicit acquire from cmpxchg() in either
+	 *    tmigr_active_up() or tmigr_inactive_up()).
	 */
-	state.state = atomic_read(&start->migr_state);
-	WARN_ON_ONCE(!state.active);
+	state.state = atomic_fetch_or(0, &start->migr_state);
	WARN_ON_ONCE(!start->parent);

-	data.childmask = start->groupmask;
-	__walk_groups_from(tmigr_active_up, &data, start, start->parent);
+	/*
+	 * If the state of the old root is inactive, another CPU is on its
+	 * way to activate it and propagate to the new root.
+	 */
+	if (state.active) {
+		data.childmask = start->groupmask;
+		__walk_groups_from(tmigr_active_up, &data, start, start->parent);
+	}
 }

 /* Root update */