Mirror of https://github.com/torvalds/linux.git (synced 2026-05-13 00:28:54 +02:00)

f087b0bad4
Miscellaneous futex fixes:
- Tighten up the sys_futex_requeue() ABI a bit, to disallow
dissimilar futex flags and potential UaF access. (Peter Zijlstra)
- Fix UaF between futex_key_to_node_opt() and vma_replace_policy()
(Hao-Yu Yang)
- Clear stale exiting pointer in futex_lock_pi() retry path, a bug
which triggered a warning (and potential misbehavior)
in stress-testing. (Davidlohr Bueso)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge tag 'locking-urgent-2026-03-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull futex fixes from Ingo Molnar:
- Tighten up the sys_futex_requeue() ABI a bit, to disallow dissimilar
futex flags and potential UaF access (Peter Zijlstra)
- Fix UaF between futex_key_to_node_opt() and vma_replace_policy()
(Hao-Yu Yang)
- Clear stale exiting pointer in futex_lock_pi() retry path, which
triggered a warning (and potential misbehavior) in stress-testing
(Davidlohr Bueso)
* tag 'locking-urgent-2026-03-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
futex: Clear stale exiting pointer in futex_lock_pi() retry path
futex: Fix UaF between futex_key_to_node_opt() and vma_replace_policy()
futex: Require sys_futex_requeue() to have identical flags

0bcb517f0a
10 hotfixes. 8 are cc:stable. 9 are for MM.
There's a 3-patch series of DAMON fixes from Josh Law and SeongJae Park.
The rest are singletons - please see the changelogs for details.
Merge tag 'mm-hotfixes-stable-2026-03-28-10-45' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"10 hotfixes. 8 are cc:stable. 9 are for MM.
There's a 3-patch series of DAMON fixes from Josh Law and SeongJae Park.
The rest are singletons - please see the changelogs for details"
* tag 'mm-hotfixes-stable-2026-03-28-10-45' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
mm/mseal: update VMA end correctly on merge
bug: avoid format attribute warning for clang as well
mm/pagewalk: fix race between concurrent split and refault
mm/memory: fix PMD/PUD checks in follow_pfnmap_start()
mm/damon/sysfs: check contexts->nr in repeat_call_fn
mm/damon/sysfs: check contexts->nr before accessing contexts_arr[0]
mm/damon/sysfs: fix param_ctx leak on damon_sysfs_new_test_ctx() failure
mm/swap: fix swap cache memcg accounting
MAINTAINERS, mailmap: update email address for Harry Yoo
mm/huge_memory: fix folio isn't locked in softleaf_to_folio()

2697dd8ae7
mm/mseal: update VMA end correctly on merge
Previously we stored the end of the current VMA in curr_end, and then,
upon iterating to the next VMA, updated curr_start to curr_end to
advance.
However, this doesn't take into account the fact that a VMA might be
updated due to a merge by vma_modify_flags(), which can result in curr_end
being stale and thus, upon setting curr_start to curr_end, ending up with
an incorrect curr_start on the next iteration.
Resolve the issue by setting curr_end to vma->vm_end unconditionally to
ensure this value remains updated should this occur.
While we're here, eliminate this entire class of bug by simply setting
const curr_[start/end] to be clamped to the input range and VMAs, which
also happens to simplify the logic.
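A minimal sketch of the clamping idea, with simplified local names (the
real mm/mseal.c loop carries more surrounding context):

/* Clamp to the intersection of the input range and the (possibly just
 * merged) VMA; recomputing from vma->vm_end each iteration means a
 * merge can never leave a stale end behind. */
const unsigned long curr_start = max(vma->vm_start, start);
const unsigned long curr_end = min(vma->vm_end, end);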
Link: https://lkml.kernel.org/r/20260327173104.322405-1-ljs@kernel.org
Fixes:

3b89863c3f
mm/pagewalk: fix race between concurrent split and refault
The splitting of a PUD entry in walk_pud_range() can race with a
concurrent thread refaulting the PUD leaf entry causing it to try walking
a PMD range that has disappeared.
An example and reproduction of this is to try reading numa_maps of a
process while VFIO-PCI is setting up DMA (specifically the
vfio_pin_pages_remote call) on a large BAR for that process.
This will trigger a kernel BUG:
vfio-pci 0000:03:00.0: enabling device (0000 -> 0002)
BUG: unable to handle page fault for address: ffffa23980000000
PGD 0 P4D 0
Oops: Oops: 0000 [#1] SMP NOPTI
...
RIP: 0010:walk_pgd_range+0x3b5/0x7a0
Code: 8d 43 ff 48 89 44 24 28 4d 89 ce 4d 8d a7 00 00 20 00 48 8b 4c 24
28 49 81 e4 00 00 e0 ff 49 8d 44 24 ff 48 39 c8 4c 0f 43 e3 <49> f7 06
9f ff ff ff 75 3b 48 8b 44 24 20 48 8b 40 28 48 85 c0 74
RSP: 0018:ffffac23e1ecf808 EFLAGS: 00010287
RAX: 00007f44c01fffff RBX: 00007f4500000000 RCX: 00007f44ffffffff
RDX: 0000000000000000 RSI: 000ffffffffff000 RDI: ffffffff93378fe0
RBP: ffffac23e1ecf918 R08: 0000000000000004 R09: ffffa23980000000
R10: 0000000000000020 R11: 0000000000000004 R12: 00007f44c0200000
R13: 00007f44c0000000 R14: ffffa23980000000 R15: 00007f44c0000000
FS: 00007fe884739580(0000) GS:ffff9b7d7a9c0000(0000)
knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffa23980000000 CR3: 000000c0650e2005 CR4: 0000000000770ef0
PKRU: 55555554
Call Trace:
<TASK>
__walk_page_range+0x195/0x1b0
walk_page_vma+0x62/0xc0
show_numa_map+0x12b/0x3b0
seq_read_iter+0x297/0x440
seq_read+0x11d/0x140
vfs_read+0xc2/0x340
ksys_read+0x5f/0xe0
do_syscall_64+0x68/0x130
? get_page_from_freelist+0x5c2/0x17e0
? mas_store_prealloc+0x17e/0x360
? vma_set_page_prot+0x4c/0xa0
? __alloc_pages_noprof+0x14e/0x2d0
? __mod_memcg_lruvec_state+0x8d/0x140
? __lruvec_stat_mod_folio+0x76/0xb0
? __folio_mod_stat+0x26/0x80
? do_anonymous_page+0x705/0x900
? __handle_mm_fault+0xa8d/0x1000
? __count_memcg_events+0x53/0xf0
? handle_mm_fault+0xa5/0x360
? do_user_addr_fault+0x342/0x640
? arch_exit_to_user_mode_prepare.constprop.0+0x16/0xa0
? irqentry_exit_to_user_mode+0x24/0x100
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fe88464f47e
Code: c0 e9 b6 fe ff ff 50 48 8d 3d be 07 0b 00 e8 69 01 02 00 66 0f 1f
84 00 00 00 00 00 64 8b 04 25 18 00 00 00 85 c0 75 14 0f 05 <48> 3d 00
f0 ff ff 77 5a c3 66 0f 1f 84 00 00 00 00 00 48 83 ec 28
RSP: 002b:00007ffe6cd9a9b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 00007fe88464f47e
RDX: 0000000000020000 RSI: 00007fe884543000 RDI: 0000000000000003
RBP: 00007fe884543000 R08: 00007fe884542010 R09: 0000000000000000
R10: fffffffffffffbc5 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000
</TASK>
Fix this by validating the PUD entry in walk_pmd_range() using a stable
snapshot (pudp_get()). If the PUD is not present or is a leaf, retry the
walk via ACTION_AGAIN instead of descending further. This mirrors the
retry logic in walk_pte_range(), which lets walk_pmd_range() retry when
pte_offset_map_lock() fails to map the PTE.
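As a sketch, the added validation could look like this near the top of
walk_pmd_range() (pudp_get() and ACTION_AGAIN are existing pagewalk-era
helpers; the exact placement in the patch may differ):

pud_t pud = pudp_get(pudp);	/* one stable snapshot of the PUD entry */

if (!pud_present(pud) || pud_leaf(pud)) {
	/* The PUD was zapped or refaulted as a leaf under us; retry the
	 * walk instead of descending into a PMD table that may be gone. */
	walk->action = ACTION_AGAIN;
	return 0;
}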
Link: https://lkml.kernel.org/r/20260325-pagewalk-check-pmd-refault-v2-1-707bff33bc60@akamai.com
Fixes:

ffef67b93a
mm/memory: fix PMD/PUD checks in follow_pfnmap_start()
follow_pfnmap_start() suffers from two problems:
(1) We are not re-fetching the pmd/pud after taking the PTL
Therefore, we are not properly stabilizing what the lock actually
protects. If there is concurrent zapping, we would indicate to the
caller that we found an entry, however, that entry might already have
been invalidated, or contain a different PFN after taking the lock.
Properly use pmdp_get() / pudp_get() after taking the lock.
(2) pmd_leaf() / pud_leaf() are not well defined on non-present entries
pmd_leaf()/pud_leaf() could wrongly trigger on non-present entries.
There is no real guarantee that pmd_leaf()/pud_leaf() returns something
reasonable on non-present entries. Most architectures indeed either
perform a present check or make it work by smart use of flags.
However, for example loongarch checks the _PAGE_HUGE flag in pmd_leaf(),
and always sets the _PAGE_HUGE flag in __swp_entry_to_pmd(). Whereby
pmd_trans_huge() explicitly checks pmd_present(), pmd_leaf() does not do
that.
Let's check pmd_present()/pud_present() before assuming "this is a
present PMD leaf" when spotting pmd_leaf()/pud_leaf(), like other page
table handling code that traverses user page tables does.
Given that non-present PMD entries are likely rare in VM_IO|VM_PFNMAP, (1)
is likely more relevant than (2). It is questionable how often (1) would
actually trigger, but let's CC stable to be sure.
This was found by code inspection.
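An illustrative fragment combining both fixes, with simplified locals
(the real follow_pfnmap_start() handles the PUD and PTE levels and more
error paths):

spinlock_t *ptl = pmd_lock(mm, pmdp);
pmd_t pmd = pmdp_get(pmdp);	/* (1) re-fetch under the PTL, not before */

if (pmd_present(pmd) && pmd_leaf(pmd)) {
	/* (2) only a present leaf may be reported as a PFN mapping;
	 * the PTL stays held until follow_pfnmap_end(). */
	args->pfn = pmd_pfn(pmd);
	return 0;
}
spin_unlock(ptl);		/* entry changed, or not a present leaf */
return -EINVAL;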
Link: https://lkml.kernel.org/r/20260323-follow_pfnmap_fix-v1-1-5b0ec10872b3@kernel.org
Fixes:

6557004a8b
mm/damon/sysfs: check contexts->nr in repeat_call_fn
damon_sysfs_repeat_call_fn() calls damon_sysfs_upd_tuned_intervals(),
damon_sysfs_upd_schemes_stats(), and
damon_sysfs_upd_schemes_effective_quotas() without checking contexts->nr.
If nr_contexts is set to 0 via sysfs while DAMON is running, these
functions dereference contexts_arr[0] and cause a NULL pointer
dereference. Add the missing check.
For example, the issue can be reproduced using DAMON sysfs interface and
DAMON user-space tool (damo) [1] like below.
$ sudo damo start --refresh_interval 1s
$ echo 0 | sudo tee \
/sys/kernel/mm/damon/admin/kdamonds/0/contexts/nr_contexts
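A minimal sketch of the missing guard, assuming the names quoted in this
changelog (contexts->nr and contexts_arr[0]):

/* damon_sysfs_repeat_call_fn(): nr_contexts may have been set to 0 via
 * sysfs while DAMON runs, so bail out before touching contexts_arr[0]. */
if (!kdamond->contexts->nr)
	return 0;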
Link: https://patch.msgid.link/20260320163559.178101-3-objecting@objecting.org
Link: https://lkml.kernel.org/r/20260321175427.86000-4-sj@kernel.org
Link: https://github.com/damonitor/damo [1]
Fixes:

1bfe9fb5ed
mm/damon/sysfs: check contexts->nr before accessing contexts_arr[0]
Multiple sysfs command paths dereference contexts_arr[0] without first
verifying that kdamond->contexts->nr == 1. A user can set nr_contexts to
0 via sysfs while DAMON is running, causing NULL pointer dereferences.
In more detail, the issue can be triggered by privileged users like
below.
First, start DAMON and make contexts directory empty
(kdamond->contexts->nr == 0).
# damo start
# cd /sys/kernel/mm/damon/admin/kdamonds/0
# echo 0 > contexts/nr_contexts
Then, each of below commands will cause the NULL pointer dereference.
# echo update_schemes_stats > state
# echo update_schemes_tried_regions > state
# echo update_schemes_tried_bytes > state
# echo update_schemes_effective_quotas > state
# echo update_tuned_intervals > state
Guard all commands (except OFF) at the entry point of
damon_sysfs_handle_cmd().
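A sketch of that entry-point guard, with names assumed from the
changelog and mm/damon/sysfs.c conventions:

/* damon_sysfs_handle_cmd(): every command except OFF ends up
 * dereferencing contexts_arr[0], so reject them when no context exists. */
if (cmd != DAMON_SYSFS_CMD_OFF && kdamond->contexts->nr != 1)
	return -EINVAL;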
Link: https://lkml.kernel.org/r/20260321175427.86000-3-sj@kernel.org
Fixes:

7fe000eb32
mm/damon/sysfs: fix param_ctx leak on damon_sysfs_new_test_ctx() failure
Patch series "mm/damon/sysfs: fix memory leak and NULL dereference
issues", v4.
DAMON_SYSFS can leak memory under allocation failure, and can do a NULL
pointer dereference when a privileged user issues a wrong sequence of
control operations. Fix those.
This patch (of 3):
When damon_sysfs_new_test_ctx() fails in damon_sysfs_commit_input(),
param_ctx is leaked because the early return skips the cleanup at the out
label. Destroy param_ctx before returning.
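A sketch of the corrected error path (damon_sysfs_new_test_ctx() is this
series' helper and its failure convention is assumed here;
damon_destroy_ctx() is the standard destructor):

test_ctx = damon_sysfs_new_test_ctx(kdamond);
if (!test_ctx) {
	damon_destroy_ctx(param_ctx);	/* previously leaked here */
	return -ENOMEM;
}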
Link: https://lkml.kernel.org/r/20260321175427.86000-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20260321175427.86000-2-sj@kernel.org
Fixes:

9e0d0ddfbc
mm/swap: fix swap cache memcg accounting
The swap readahead path was recently refactored and while doing this, the
order between the charging of the folio in the memcg and the addition of
the folio in the swap cache was inverted.
Since the accounting of the folio is done while adding the folio to the
swap cache and the folio is not charged in the memcg yet, the accounting
is then done at the node level, which is wrong.
Fix this by charging the folio in the memcg before adding it to the swap cache.
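The resulting order, as a hedged sketch (mem_cgroup_swapin_charge_folio()
is the mainline charge helper; the cache-insertion call stands in for
whatever the refactored readahead path actually uses):

/* Charge first, so the swap cache insertion accounts the folio at the
 * memcg level instead of falling back to node-level stats. */
if (mem_cgroup_swapin_charge_folio(folio, NULL, gfp_mask, entry))
	goto out_free;
if (add_to_swap_cache(folio, entry, gfp_mask, &shadow))
	goto out_uncharge;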
Link: https://lkml.kernel.org/r/20260320050601.1833108-1-alex@ghiti.fr
Fixes:

dabb83ecf4
dma-mapping fixes for Linux 7.0
A set of fixes for DMA-mapping subsystem, which resolve false-positive
warnings from KMSAN and DMA-API debug (Shigeru Yoshida and Leon
Romanovsky) as well as a simple build fix (Miguel Ojeda).
Merge tag 'dma-mapping-7.0-2026-03-25' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux
Pull dma-mapping fixes from Marek Szyprowski:
"A set of fixes for DMA-mapping subsystem, which resolve false-
positive warnings from KMSAN and DMA-API debug (Shigeru Yoshida and
Leon Romanovsky) as well as a simple build fix (Miguel Ojeda)"
* tag 'dma-mapping-7.0-2026-03-25' of git://git.kernel.org/pub/scm/linux/kernel/git/mszyprowski/linux:
dma-mapping: add missing `inline` for `dma_free_attrs`
mm/hmm: Indicate that HMM requires DMA coherency
RDMA/umem: Tell DMA mapping that UMEM requires coherency
iommu/dma: add support for DMA_ATTR_REQUIRE_COHERENT attribute
dma-direct: prevent SWIOTLB path when DMA_ATTR_REQUIRE_COHERENT is set
dma-mapping: Introduce DMA require coherency attribute
dma-mapping: Clarify valid conditions for CPU cache line overlap
dma-mapping: handle DMA_ATTR_CPU_CACHE_CLEAN in trace output
dma-debug: Allow multiple invocations of overlapping entries
dma: swiotlb: add KMSAN annotations to swiotlb_bounce()

190a8c48ff
futex: Fix UaF between futex_key_to_node_opt() and vma_replace_policy()
During futex_key_to_node_opt() execution, vma->vm_policy is read under
speculative mmap lock and RCU. Concurrently, mbind() may call
vma_replace_policy() which frees the old mempolicy immediately via
kmem_cache_free().
This creates a race where __futex_key_to_node() dereferences a freed
mempolicy pointer, causing a use-after-free read of mpol->mode.
[ 151.412631] BUG: KASAN: slab-use-after-free in __futex_key_to_node (kernel/futex/core.c:349)
[ 151.414046] Read of size 2 at addr ffff888001c49634 by task e/87
[ 151.415969] Call Trace:
[ 151.416732] __asan_load2 (mm/kasan/generic.c:271)
[ 151.416777] __futex_key_to_node (kernel/futex/core.c:349)
[ 151.416822] get_futex_key (kernel/futex/core.c:374 kernel/futex/core.c:386 kernel/futex/core.c:593)
Fix by adding rcu to __mpol_put().
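A sketch of that fix direction, assuming a struct rcu_head is added to
struct mempolicy (the changelog only says "adding rcu to __mpol_put()";
policy_cache and refcnt are existing mempolicy internals):

static void mpol_free_rcu(struct rcu_head *head)
{
	struct mempolicy *pol = container_of(head, struct mempolicy, rcu);

	kmem_cache_free(policy_cache, pol);
}

void __mpol_put(struct mempolicy *pol)
{
	if (!atomic_dec_and_test(&pol->refcnt))
		return;
	/* Defer the free past a grace period so RCU readers such as
	 * __futex_key_to_node() can never observe freed memory. */
	call_rcu(&pol->rcu, mpol_free_rcu);
}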
Fixes:

84481e705a
mm/damon/stat: monitor all System RAM resources
DAMON_STAT usage document (Documentation/admin-guide/mm/damon/stat.rst)
says it monitors the system's entire physical memory. But, it is
monitoring only the biggest System RAM resource of the system. When there
are multiple System RAM resources, this results in monitoring only an
unexpectedly small fraction of the physical memory. For example, suppose
the system has a 500 GiB System RAM, 10 MiB non-System RAM, and 500 GiB
System RAM resources in order on the physical address space. DAMON_STAT
will monitor only the first 500 GiB System RAM. This situation is
particularly common on NUMA systems.
Select a physical address range that covers all System RAM areas of the
system, to fix this issue and make it work as documented.
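One plausible shape of that selection, sketched with the existing
walk_system_ram_res() helper (the actual patch may compute the range
differently):

static int damon_stat_accum_ram(struct resource *res, void *arg)
{
	struct damon_addr_range *r = arg;

	/* Grow the range to cover every System RAM resource. */
	r->start = min(r->start, (unsigned long)res->start);
	r->end = max(r->end, (unsigned long)res->end + 1);
	return 0;
}

/* caller: */
range.start = ULONG_MAX;
range.end = 0;
walk_system_ram_res(0, ULONG_MAX, &range, damon_stat_accum_ram);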
[sj@kernel.org: return error if monitoring target region is invalid]
Link: https://lkml.kernel.org/r/20260317053631.87907-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20260316235118.873-1-sj@kernel.org
Fixes:

631c111150
mm/zswap: add missing kunmap_local()
Commit

26f775a054
mm/damon/core: avoid use of half-online-committed context
One major usage of damon_call() is online DAMON parameters update. It is
done by calling damon_commit_ctx() inside the damon_call() callback
function. damon_commit_ctx() can fail for two reasons: 1) invalid
parameters and 2) internal memory allocation failures. In case of
failures, the damon_ctx that attempted to be updated (commit destination)
can be partially updated (corrupted, from one perspective), and
therefore shouldn't be used anymore. The function only ensures that the
damon_ctx object can still be safely deallocated using
damon_destroy_ctx().
The API callers are, however, calling damon_commit_ctx() only after
asserting the parameters are valid, to avoid damon_commit_ctx() failing
due to invalid input parameters. But it can still theoretically fail if
the internal memory allocation fails. In that case, DAMON may run with the
partially updated damon_ctx. This can result in unexpected behaviors
including even NULL pointer dereference in case of damos_commit_dests()
failure [1]. Such an allocation is arguably too small to fail, so the
real-world impact would be rare. But, given the bad consequence, this
needs to be fixed.
Avoid such partially-committed (maybe-corrupted) damon_ctx use by saving
the damon_commit_ctx() failure on the damon_ctx object. For this,
introduce a damon_ctx->maybe_corrupted field. damon_commit_ctx() sets it
when it fails. kdamond_call() checks if the field is set after each
damon_call_control->fn() is executed. If it is set, ignore remaining
callback requests and return. All kdamond_call() callers including
kdamond_fn() also check the maybe_corrupted field right after
kdamond_call() invocations. If the field is set, break the kdamond_fn()
main loop so that DAMON doesn't keep using the context that might be
corrupted.
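The resulting control flow, as a condensed sketch (maybe_corrupted is
the field this patch introduces; the surrounding code is simplified):

/* in the damon_call() callback performing an online commit: */
err = damon_commit_ctx(ctx, param_ctx);
if (err)
	ctx->maybe_corrupted = true;	/* never run with this ctx again */

/* in kdamond_fn(), after each kdamond_call(): */
if (ctx->maybe_corrupted)
	break;	/* leave the main loop; the ctx is only safe to destroy */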
[sj@kernel.org: let kdamond_call() with cancel regardless of maybe_corrupted]
Link: https://lkml.kernel.org/r/20260320031553.2479-1-sj@kernel.org
Link: https://sashiko.dev/#/patchset/20260319145218.86197-1-sj%40kernel.org
Link: https://lkml.kernel.org/r/20260319145218.86197-1-sj@kernel.org
Link: https://lore.kernel.org/20260319043309.97966-1-sj@kernel.org [1]
Fixes:

3a206a8649
mm/rmap: clear vma->anon_vma on error
Commit

f5ebf241c4
mm/hmm: Indicate that HMM requires DMA coherency
HMM is fundamentally about allowing a sophisticated device to perform DMA
directly to a process’s memory while the CPU accesses that same memory at
the same time. It is similar to SVA but does not rely on IOMMU support.
Because the entire model depends on concurrent access to shared memory, it
fails as a uAPI if SWIOTLB substitutes the memory or if the CPU caches are
not coherent with DMA.
Until now, there has been no reliable way to report this, and various
approximations have been used:
int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map,
size_t nr_entries, size_t dma_entry_size)
{
<...>
/*
* The HMM API violates our normal DMA buffer ownership rules and can't
* transfer buffer ownership. The dma_addressing_limited() check is a
* best approximation to ensure no swiotlb buffering happens.
*/
dma_need_sync = !dev->dma_skip_sync;
if (dma_need_sync || dma_addressing_limited(dev))
return -EOPNOTSUPP;
So let's mark mapped buffers with the DMA_ATTR_REQUIRE_COHERENT attribute
to prevent silent data corruption if someone tries to use HMM in a system
with SWIOTLB or incoherent DMA.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20260316-dma-debug-overlap-v3-8-1dde90a7f08b@nvidia.com

8a91ebb337
6 hotfixes. 4 are cc:stable. 3 are for MM.
All are singletons - please see the changelogs for details.
Merge tag 'mm-hotfixes-stable-2026-03-16-12-15' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"6 hotfixes. 4 are cc:stable. 3 are for MM.
All are singletons - please see the changelogs for details"
* tag 'mm-hotfixes-stable-2026-03-16-12-15' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
MAINTAINERS: update email address for Ignat Korchagin
mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared THP
mm/rmap: fix incorrect pte restoration for lazyfree folios
mm/huge_memory: fix use of NULL folio in move_pages_huge_pmd()
build_bug.h: correct function parameters names in kernel-doc
crash_dump: don't log dm-crypt key bytes in read_key_from_user_keying

8174dafb2d
slab fixes for 7.0-rc3
Merge tag 'slab-for-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab fixes from Vlastimil Babka:
- Fix for a memory leak that can occur when already so low on memory
that we can't allocate a new slab anymore (Qing Wang)
- Fix for a case where slabobj_ext array for a slab might be allocated
from the same slab, making it permanently non-freeable (Harry Yoo)
* tag 'slab-for-7.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
slab: fix memory leak when refill_sheaf() fails
mm/slab: fix an incorrect check in obj_exts_alloc_size()

464b1c1158
slab: fix memory leak when refill_sheaf() fails
When refill_sheaf() partially fills one sheaf (e.g., fills 5 objects
but needs to fill 10), it will update sheaf->size and return -ENOMEM.
However, the callers (alloc_full_sheaf() and __pcs_replace_empty_main())
directly call free_empty_sheaf() on failure, which only does kfree(sheaf),
causing the partially allocated objects in sheaf->objects[] to be leaked.
Fix this by calling sheaf_flush_unused() before free_empty_sheaf() to
free objects of sheaf->objects[]. And also add a WARN_ON() in
free_empty_sheaf() to catch any future cases where a non-empty sheaf is
being freed.
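The fixed error path, sketched with the helper names from this
changelog:

if (refill_sheaf(s, sheaf, gfp)) {
	/* A partial refill leaves live objects in sheaf->objects[];
	 * flush them before freeing the sheaf itself. */
	sheaf_flush_unused(s, sheaf);
	free_empty_sheaf(s, sheaf);	/* now WARN_ON()s if non-empty */
	return NULL;
}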
Fixes:

939080834f
mm/huge_memory: fix early failure try_to_migrate() when split huge pmd for shared THP
Commit

29f40594a2
mm/rmap: fix incorrect pte restoration for lazyfree folios
We batch-unmap anonymous lazyfree folios via folio_unmap_pte_batch(). If
the batch has a mix of writable and non-writable bits, we may end up
setting the entire batch writable. Fix this by respecting the writable
bit during batching.
Although on a successful unmap of a lazyfree folio, the soft-dirty bit is
lost, preserve it on pte restoration by respecting the bit during
batching, to make the fix consistent w.r.t both writable bit and
soft-dirty bit.
I was able to write the below reproducer and crash the kernel.
Explanation of reproducer (set 64K mTHP to always):
Fault in a 64K large folio. Split the VMA at mid-point with
MADV_DONTFORK. fork() - parent points to the folio with 8 writable ptes
and 8 non-writable ptes. Merge the VMAs with MADV_DOFORK so that
folio_unmap_pte_batch() can determine all the 16 ptes as a batch. Do
MADV_FREE on the range to mark the folio as lazyfree. Write to the memory
to dirty the pte, eventually rmap will dirty the folio. Then trigger
reclaim, we will hit the pte restoration path, and the kernel will crash
with the trace given below.
The BUG happens at:
BUG_ON(atomic_inc_return(&ptc->anon_map_count) > 1 && rw);
The code path is asking for anonymous page to be mapped writable into the
pagetable. The BUG_ON() firing implies that such a writable page has been
mapped into the pagetables of more than one process, which breaks
anonymous memory/CoW semantics.
[ 21.134473] kernel BUG at mm/page_table_check.c:118!
[ 21.134497] Internal error: Oops - BUG: 00000000f2000800 [#1] SMP
[ 21.135917] Modules linked in:
[ 21.136085] CPU: 1 UID: 0 PID: 1735 Comm: dup-lazyfree Not tainted 7.0.0-rc1-00116-g018018a17770 #1028 PREEMPT
[ 21.136858] Hardware name: linux,dummy-virt (DT)
[ 21.137019] pstate: 21400005 (nzCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
[ 21.137308] pc : page_table_check_set+0x28c/0x2a8
[ 21.137607] lr : page_table_check_set+0x134/0x2a8
[ 21.137885] sp : ffff80008a3b3340
[ 21.138124] x29: ffff80008a3b3340 x28: fffffdffc3d14400 x27: ffffd1a55e03d000
[ 21.138623] x26: 0040000000000040 x25: ffffd1a55f7dd000 x24: 0000000000000001
[ 21.139045] x23: 0000000000000001 x22: 0000000000000001 x21: ffffd1a55f217f30
[ 21.139629] x20: 0000000000134521 x19: 0000000000134519 x18: 005c43e000040000
[ 21.140027] x17: 0001400000000000 x16: 0001700000000000 x15: 000000000000ffff
[ 21.140578] x14: 000000000000000c x13: 005c006000000000 x12: 0000000000000020
[ 21.140828] x11: 0000000000000000 x10: 005c000000000000 x9 : ffffd1a55c079ee0
[ 21.141077] x8 : 0000000000000001 x7 : 005c03e000040000 x6 : 000000004000ffff
[ 21.141490] x5 : ffff00017fffce00 x4 : 0000000000000001 x3 : 0000000000000002
[ 21.141741] x2 : 0000000000134510 x1 : 0000000000000000 x0 : ffff0000c08228c0
[ 21.141991] Call trace:
[ 21.142093] page_table_check_set+0x28c/0x2a8 (P)
[ 21.142265] __page_table_check_ptes_set+0x144/0x1e8
[ 21.142441] __set_ptes_anysz.constprop.0+0x160/0x1a8
[ 21.142766] contpte_set_ptes+0xe8/0x140
[ 21.142907] try_to_unmap_one+0x10c4/0x10d0
[ 21.143177] rmap_walk_anon+0x100/0x250
[ 21.143315] try_to_unmap+0xa0/0xc8
[ 21.143441] shrink_folio_list+0x59c/0x18a8
[ 21.143759] shrink_lruvec+0x664/0xbf0
[ 21.144043] shrink_node+0x218/0x878
[ 21.144285] __node_reclaim.constprop.0+0x98/0x338
[ 21.144763] user_proactive_reclaim+0x2a4/0x340
[ 21.145056] reclaim_store+0x3c/0x60
[ 21.145216] dev_attr_store+0x20/0x40
[ 21.145585] sysfs_kf_write+0x84/0xa8
[ 21.145835] kernfs_fop_write_iter+0x130/0x1c8
[ 21.145994] vfs_write+0x2b8/0x368
[ 21.146119] ksys_write+0x70/0x110
[ 21.146240] __arm64_sys_write+0x24/0x38
[ 21.146380] invoke_syscall+0x50/0x120
[ 21.146513] el0_svc_common.constprop.0+0x48/0xf8
[ 21.146679] do_el0_svc+0x28/0x40
[ 21.146798] el0_svc+0x34/0x110
[ 21.146926] el0t_64_sync_handler+0xa0/0xe8
[ 21.147074] el0t_64_sync+0x198/0x1a0
[ 21.147225] Code: f9400441 b4fff241 17ffff94 d4210000 (d4210000)
[ 21.147440] ---[ end trace 0000000000000000 ]---
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <sys/wait.h>
#include <sched.h>
#include <fcntl.h>

void write_to_reclaim() {
	const char *path = "/sys/devices/system/node/node0/reclaim";
	const char *value = "409600000000";
	int fd = open(path, O_WRONLY);

	if (fd == -1) {
		perror("open");
		exit(EXIT_FAILURE);
	}
	if (write(fd, value, sizeof("409600000000") - 1) == -1) {
		perror("write");
		close(fd);
		exit(EXIT_FAILURE);
	}
	printf("Successfully wrote %s to %s\n", value, path);
	close(fd);
}

int main()
{
	char *ptr = mmap((void *)(1UL << 30), 1UL << 16, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if ((unsigned long)ptr != (1UL << 30)) {
		perror("mmap");
		return 1;
	}
	/* a 64K folio gets faulted in */
	memset(ptr, 0, 1UL << 16);
	/* 32K half will not be shared into child */
	if (madvise(ptr, 1UL << 15, MADV_DONTFORK)) {
		perror("madvise madv dontfork");
		return 1;
	}
	pid_t pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	} else if (pid == 0) {
		sleep(15);
	} else {
		/* merge VMAs. now first half of the 16 ptes are writable,
		 * the other half not. */
		if (madvise(ptr, 1UL << 15, MADV_DOFORK)) {
			perror("madvise madv fork");
			return 1;
		}
		if (madvise(ptr, (1UL << 16), MADV_FREE)) {
			perror("madvise madv free");
			return 1;
		}
		/* dirty the large folio */
		(*ptr) += 10;
		write_to_reclaim();
		// sleep(10);
		waitpid(pid, NULL, 0);
	}
}
Link: https://lkml.kernel.org/r/20260303061528.2429162-1-dev.jain@arm.com
Fixes:

fae654083b
mm/huge_memory: fix use of NULL folio in move_pages_huge_pmd()
move_pages_huge_pmd() handles UFFDIO_MOVE for both normal THPs and huge
zero pages. For the huge zero page path, src_folio is explicitly set to
NULL, and is used as a sentinel to skip folio operations like lock and
rmap.
In the huge zero page branch, src_folio is NULL, so
folio_mk_pmd(NULL, pgprot) passes NULL through folio_pfn() and
page_to_pfn(). With SPARSEMEM_VMEMMAP this silently produces a bogus
PFN, installing a PMD pointing to non-existent physical memory. On other
memory models it is a NULL dereference.
Use page_folio(src_page) to obtain the valid huge zero folio from the
page, which was obtained from pmd_page() and remains valid throughout.
After commit

b4f0dd314b
15 hotfixes. 6 are cc:stable. 14 are for MM.
Singletons, with one doubleton - please see the changelogs for details.
Merge tag 'mm-hotfixes-stable-2026-03-09-16-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"15 hotfixes. 6 are cc:stable. 14 are for MM.
Singletons, with one doubleton - please see the changelogs for details"
* tag 'mm-hotfixes-stable-2026-03-09-16-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
MAINTAINERS, mailmap: update email address for Lorenzo Stoakes
mm/mmu_notifier: clean up mmu_notifier.h kernel-doc
uaccess: correct kernel-doc parameter format
mm/huge_memory: fix a folio_split() race condition with folio_try_get()
MAINTAINERS: add co-maintainer and reviewer for SLAB ALLOCATOR
MAINTAINERS: add RELAY entry
memcg: fix slab accounting in refill_obj_stock() trylock path
mm/hugetlb.c: use __pa() instead of virt_to_phys() in early bootmem alloc code
zram: rename writeback_compressed device attr
tools/testing: fix testing/vma and testing/radix-tree build
Revert "ptdesc: remove references to folios from __pagetable_ctor() and pagetable_dtor()"
mm/cma: move put_page_testzero() out of VM_WARN_ON in cma_release()
mm/damon/core: clear walk_control on inactive context in damos_walk()
mm: memfd_luo: always dirty all folios
mm: memfd_luo: always make all folios uptodate

8dafa9f590
mm/slab: fix an incorrect check in obj_exts_alloc_size()
obj_exts_alloc_size() prevents recursive allocation of slabobj_ext
array from the same cache, to avoid creating slabs that are never freed.
There is one mistake: the function returns the original size when memory
allocation profiling is disabled. The assumption was that
memcg-triggered slabobj_ext allocation is always served from
KMALLOC_CGROUP type. But this is wrong [1]: when the caller specifies
both __GFP_RECLAIMABLE and __GFP_ACCOUNT with SLUB_TINY enabled, the
allocation is served from normal kmalloc. This is because kmalloc_type()
prioritizes __GFP_RECLAIMABLE over __GFP_ACCOUNT, and SLUB_TINY aliases
KMALLOC_RECLAIM with KMALLOC_NORMAL.
As a result, the recursion guard is bypassed and the problematic slabs
can be created. Fix this by removing the mem_alloc_profiling_enabled()
check entirely. The remaining is_kmalloc_normal() check is still
sufficient to detect whether the cache is of KMALLOC_NORMAL type and
avoid bumping the size if it's not.
Without SLUB_TINY, no functional change intended.
With SLUB_TINY, allocations with __GFP_ACCOUNT|__GFP_RECLAIMABLE
now allocate a larger array if the sizes equal.
Reported-by: Zw Tang <shicenci@gmail.com>
Fixes:

dfb3142844
drm fixes for 7.0-rc3
mm:
- Fix a hmm_range_fault() livelock / starvation problem
pagemap:
- Revert "drm/pagemap: Disable device-to-device migration"
ttm:
- fix function return breaking reclaim
- fix build failure on PREEMPT_RT
- fix bo->resource UAF
dma-buf:
- include ioctl.h in uapi header
sched:
- fix kernel doc warning
amdgpu:
- LUT fixes
- VCN5 fix
- Dispclk fix
- SMU 13.x fix
- Fix race in VM acquire
- PSP 15.x fix
- UserQ fix
amdxdna:
- fix invalid payload for failed command
- fix NULL ptr dereference
- fix major fw version check
- avoid inconsistent fw state on error
i915/display:
- Fix for Lenovo T14 G7 display not refreshing
xe:
- Do not preempt fence signaling CS instructions
- Some leak and finalization fixes
- Workaround fix
nouveau:
- avoid runtime suspend oops when using dp aux
panthor:
- fix gem_sync argument ordering
solomon:
- fix incorrect display output
renesas:
- fix DSI divider programming
ethosu:
- fix job submit error clean-up refcount
- fix NPU_OP_ELEMENTWISE validation
- handle possible underflows in IFM size calcs
Merge tag 'drm-fixes-2026-03-07' of https://gitlab.freedesktop.org/drm/kernel
Pull drm fixes from Dave Airlie:
"Weekly fixes pull. There is one mm fix in here for a HMM livelock
triggered by the xe driver tests. Otherwise it's a pretty wide range of
fixes across the board, ttm UAF regression fix, amdgpu fixes, nouveau
doesn't crash my laptop anymore fix, and a fair bit of misc. Seems about
right for rc3."
* tag 'drm-fixes-2026-03-07' of https://gitlab.freedesktop.org/drm/kernel: (38 commits)
accel: ethosu: Handle possible underflow in IFM size calculations
accel: ethosu: Fix NPU_OP_ELEMENTWISE validation with scalar
accel: ethosu: Fix job submit error clean-up refcount underflows
accel/amdxdna: Split mailbox channel create function
drm/panthor: Correct the order of arguments passed to gem_sync
Revert "drm/syncobj: Fix handle <-> fd ioctls with dirty stack"
drm/ttm: Fix bo resource use-after-free
nouveau/dpcd: return EBUSY for aux xfer if the device is asleep
accel/amdxdna: Fix major version check on NPU1 platform
drm/amdgpu/userq: refcount userqueues to avoid any race conditions
drm/amdgpu/userq: Consolidate wait ioctl exit path
drm/amdgpu/psp: Use Indirect access address for GFX to PSP mailbox
drm/amdgpu: Fix use-after-free race in VM acquire
drm/amd/pm: remove invalid gpu_metrics.energy_accumulator on smu v13.0.x
drm/xe: Fix memory leak in xe_vm_madvise_ioctl
drm/xe/reg_sr: Fix leak on xa_store failure
drm/xe/xe2_hpg: Correct implementation of Wa_16025250150
drm/xe/gsc: Fix GSC proxy cleanup on early initialization failure
Revert "drm/pagemap: Disable device-to-device migration"
drm/i915/psr: Fix for Panel Replay X granularity DPCD register handling
...

9a881ea3da
slab fixes for 7.0-rc2
Merge tag 'slab-for-7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab fixes from Vlastimil Babka:
- Fix for slab->stride truncation on 64k page systems due to short
type. It was not due to races and lack of barriers in the end.
(Harry Yoo)
- Fix for severe performance regression due to unnecessary sheaf refill
restrictions exposed by mempool allocation strategy. (Vlastimil Babka)
- Stable fix for potential silent percpu sheaf flushing failures on
PREEMPT_RT. (Vlastimil Babka)
* tag 'slab-for-7.0-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
mm/slab: change stride type from unsigned short to unsigned int
mm/slab: allow sheaf refill if blocking is not allowed
slab: distinguish lock and trylock for sheaf_flush_main()

0b2758f48f
Require (reasonably) normal mappings for MADV_DOFORK
This came up as a result of the tracing fix pull request, and commit

577a1f495f
mm/huge_memory: fix a folio_split() race condition with folio_try_get()
During a pagecache folio split, the values in the related xarray should
not be changed from the original folio at xarray split time until all
after-split folios are well formed and stored in the xarray. Current use
of xas_try_split() in __split_unmapped_folio() lets some after-split
folios show up at wrong indices in the xarray. When these misplaced
after-split folios are unfrozen, before correct folios are stored via
__xa_store(), and grabbed by folio_try_get(), they are returned to
userspace at wrong file indices, causing data corruption. A more
detailed explanation is at the bottom.
The reproducer is at: https://github.com/dfinity/thp-madv-remove-test
It:
1. creates a memfd,
2. forks,
3. in the child process, maps the file with large folios (via the shmem
code path) and reads the mapped file continuously with 16 threads,
4. in the parent process, uses madvise(MADV_REMOVE) to punch holes in
the large folio.
Data corruption can be observed without the fix. Basically, data from a
wrong page->index is returned.
Fix it by using the original folio in xas_try_split() calls, so that
folio_try_get() can get the right after-split folios after the original
folio is unfrozen.
Uniform split, split_huge_page*(), is not affected, since it uses
xas_split_alloc() and xas_split() only once and stores the original
folio in the xarray. Change xas_split() used in the uniform split branch
to use the original folio to avoid confusion.
The Fixes tag below points to the commit that introduced the code, but
folio_split() is used in a later commit

dccd5ee262
memcg: fix slab accounting in refill_obj_stock() trylock path
In the trylock path of refill_obj_stock(), mod_objcg_mlstate() should use
the real alloc/free bytes (i.e., nr_acct) for accounting, rather than
nr_bytes.
The user-visible impact is that the NR_SLAB_RECLAIMABLE_B and
NR_SLAB_UNRECLAIMABLE_B stats can end up being incorrect.
For example, if a user allocates a 6144-byte object, then before this
fix refill_obj_stock() calls mod_objcg_mlstate(..., nr_bytes=2048), even
though it should account for 6144 bytes (i.e., nr_acct).
When the user later frees the same object with kfree(),
refill_obj_stock() calls mod_objcg_mlstate(..., nr_bytes=6144). This
ends up adding 6144 to the stats, but it should be applying -6144
(i.e., nr_acct) since the object is being freed.
Link: https://lkml.kernel.org/r/20260226115145.62903-1-hao.li@linux.dev
Fixes:

a1e59fc6ee
mm/hugetlb.c: use __pa() instead of virt_to_phys() in early bootmem alloc code
Architectures like powerpc check for pfn_valid() in their virt_to_phys()
implementation (when CONFIG_DEBUG_VIRTUAL is enabled) [1]. Commit

f4355d6bb3
mm/cma: move put_page_testzero() out of VM_WARN_ON in cma_release()
When CONFIG_DEBUG_VM is not set, VM_WARN_ON is a NOP. Putting any
statement with a side effect inside it is incorrect. Collect all
!put_page_testzero() results and check the sum using WARN instead after
the loop. It restores the same check in free_contig_range() before
commit

d210fdcac9
mm/damon/core: clear walk_control on inactive context in damos_walk()
damos_walk() sets ctx->walk_control to the caller-provided control
structure before checking whether the context is running. If the context
is inactive (damon_is_running() returns false), the function returns
-EINVAL without clearing ctx->walk_control. This leaves a dangling
pointer to a stack-allocated structure that will be freed when the
caller returns. This is structurally identical to the bug fixed in
commit

7e04bf1f33
mm: memfd_luo: always dirty all folios
A dirty folio is one which has been written to. A clean folio is its
opposite. Since a clean folio has no user data, it can be freed under
memory pressure.
memfd preservation with LUO saves the flag at preserve(). This is
problematic. The folio might get dirtied later. Saving it at freeze()
also doesn't work, since the dirty bit from PTE is normally synced at
unmap and there might still be mappings of the file at freeze().
To see why this is a problem, say a folio is clean at preserve, but gets
dirtied later. The serialized state of the folio will mark it as clean.
After retrieve, the next kernel will see the folio as clean and might try
to reclaim it under memory pressure. This will result in losing user
data.
Mark all folios of the file as dirty, and always set the
MEMFD_LUO_FOLIO_DIRTY flag. This comes with the side effect of making all
clean folios un-reclaimable. This is a cost that has to be paid for
participants of live update. It is not expected to be a common use case
to preserve a lot of clean folios anyway.
Since the value of pfolio->flags is a constant now, drop the flags
variable and set it directly.
Link: https://lkml.kernel.org/r/20260223173931.2221759-3-pratyush@kernel.org
Fixes:

50d7b4332f
mm: memfd_luo: always make all folios uptodate
Patch series "mm: memfd_luo: fixes for folio flag preservation".
This series contains a couple of fixes for flag preservation for memfd
live update.
The first patch fixes memfd preservation when fallocate() was used to
pre-allocate some pages. For these memfds, all the writes to fallocated
pages touched after preserve were lost.
The second patch fixes dirty flag tracking. If the dirty flag is not
tracked correctly, the next kernel might incorrectly reclaim some folios
under memory pressure, losing user data. This is a theoretical bug that I
observed when reading the code, and have not been able to reproduce.
This patch (of 2):
When a folio is added to a shmem file via fallocate, it is not zeroed on
allocation. This is done as a performance optimization since it is
possible the folio will never end up being used at all. When the folio is
used, shmem checks for the uptodate flag, and if absent, zeroes the folio
(and sets the flag) before returning to user.
With LUO, the flags of each folio are saved at preserve time. It is
possible to have a memfd with some folios fallocated but not uptodate.
For those, the uptodate flag doesn't get saved. The folios might later
end up being used and become uptodate. They would get passed to the next
kernel via KHO correctly since they did get preserved. But they won't
have the MEMFD_LUO_FOLIO_UPTODATE flag.
This means that when the memfd is retrieved, the folios will be added to
the shmem file without the uptodate flag. They will be zeroed before
first use, losing the data in those folios.
Since we take a big performance hit in allocating, zeroing, and pinning
all folios at prepare time anyway, take some more and zero all
non-uptodate ones too.
Later when there is a stronger need to make prepare faster, this can be
optimized.
To avoid racing with another uptodate operation, take the folio lock.
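A sketch of that prepare-time pass, using standard folio helpers (its
exact placement in the memfd_luo prepare path is assumed):

folio_lock(folio);
if (!folio_test_uptodate(folio)) {
	/* fallocate()d but never written: zero it now so the preserved
	 * flag matches the contents the next kernel will see. */
	folio_zero_range(folio, 0, folio_size(folio));
	folio_mark_uptodate(folio);
}
folio_unlock(folio);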
Link: https://lkml.kernel.org/r/20260223173931.2221759-2-pratyush@kernel.org
Fixes:

6432f15c81
mm/slab: change stride type from unsigned short to unsigned int
Commit

fb1091febd
mm/slab: allow sheaf refill if blocking is not allowed
Ming Lei reported [1] a regression in the ublk null target benchmark due
to sheaves. The profile shows that the alloc_from_pcs() fastpath fails
and allocations fall back to ___slab_alloc(). It also shows the
allocations happen through mempool_alloc().
The strategy of mempool_alloc() is to call the underlying allocator
(here slab) without __GFP_DIRECT_RECLAIM first. This does not play well
with __pcs_replace_empty_main() checking for gfpflags_allow_blocking()
to decide if it should refill an empty sheaf or fallback to the
slowpath, so we end up falling back.
We could change the mempool strategy but there might be other paths
doing the same thing. So instead allow sheaf refill when blocking is not
allowed, changing the condition to gfpflags_allow_spinning(). The
original condition was unnecessarily restrictive.
Note this doesn't fully resolve the regression [1] as another component
of that are memoryless nodes, which is to be addressed separately.
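The condition change, sketched (gfpflags_allow_spinning() is an existing
helper in linux/gfp.h; the surrounding refill logic is condensed):

/* __pcs_replace_empty_main(): refill unless we cannot even spin,
 * instead of bailing out whenever blocking is disallowed. */
if (!gfpflags_allow_spinning(gfp))
	goto slowpath;
refill_sheaf(s, sheaf, gfp);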
Reported-by: Ming Lei <ming.lei@redhat.com>
Fixes:

b570f37a2c
mm: Fix a hmm_range_fault() livelock / starvation problem
If hmm_range_fault() fails a folio_trylock() in do_swap_page, trying to
acquire the lock of a device-private folio for migration to ram, the
function will spin until it succeeds grabbing the lock. However, if the
process holding the lock is depending on a work item to be completed,
which is scheduled on the same CPU as the spinning hmm_range_fault(),
that work item might be starved and we end up in a livelock / starvation
situation which is never resolved.
This can happen, for example, if the process holding the device-private
folio lock is stuck in migrate_device_unmap()->lru_add_drain_all(),
since lru_add_drain_all() requires a short work-item to be run on all
online cpus to complete.
A prerequisite for this to happen is:
a) Both zone device and system memory folios are considered in
migrate_device_unmap(), so that there is a reason to call
lru_add_drain_all() for a system memory folio while a folio lock is held
on a zone device folio.
b) The zone device folio has an initial mapcount > 1 which causes at
least one migration PTE entry insertion to be deferred to
try_to_migrate(), which can happen after the call to
lru_add_drain_all().
c) No or voluntary only preemption.
This all seems pretty unlikely to happen, but indeed is hit by the
"xe_exec_system_allocator" igt test.
Resolve this by waiting for the folio to be unlocked if the
folio_trylock() fails in do_swap_page(). Rename
migration_entry_wait_on_locked() to softleaf_entry_wait_unlock() and
update its documentation to indicate the new use-case.
Future code improvements might consider moving the lru_add_drain_all()
call in migrate_device_unmap() to be called *after* all pages have
migration entries inserted. That would also eliminate b) above.
v2:
- Instead of a cond_resched() in hmm_range_fault(), eliminate the
problem by waiting for the folio to be unlocked in do_swap_page()
(Alistair Popple, Andrew Morton)
v3:
- Add a stub migration_entry_wait_on_locked() for the !CONFIG_MIGRATION
case. (Kernel Test Robot)
v4:
- Rename migrate_entry_wait_on_locked() to
softleaf_entry_wait_on_locked() and update docs (Alistair Popple)
v5:
- Add a WARN_ON_ONCE() for the !CONFIG_MIGRATION version of
softleaf_entry_wait_on_locked().
- Modify wording around function names in the commit message
(Andrew Morton)
Suggested-by: Alistair Popple <apopple@nvidia.com>
Fixes:

48647d3f9a
slab: distinguish lock and trylock for sheaf_flush_main()
sheaf_flush_main() can be called from __pcs_replace_full_main(), where
it's fine if the trylock fails, and from pcs_flush_all(), where it's not
expected to; for some flush callers (when destroying the cache or on
memory hotremove) it would actually be a problem if it failed and left
the main sheaf not flushed. The flush callers can, however, safely use
local_lock() instead of trylock.
The trylock failure should not happen in practice on !PREEMPT_RT, but
can happen on PREEMPT_RT. The impact is limited in practice because when
a trylock fails in the kmem_cache_destroy() path, it means someone is
using the cache while destroying it, which is a bug on its own. The memory
hotremove path is unlikely to be employed in a production RT config, but
it's possible.
To fix this, split the function into sheaf_flush_main() (using
local_lock()) and sheaf_try_flush_main() (using local_trylock()) where
both call __sheaf_flush_main_batch() to flush a single batch of objects.
This will also allow lockdep to verify our context assumptions.
The problem was raised in an off-list question by Marcelo.
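A condensed sketch of the split (field and helper names assumed from the
sheaves code; the real functions flush batches in a loop):

static void sheaf_flush_main(struct kmem_cache *s)
{
	local_lock(&s->cpu_sheaves->lock);	/* cannot fail; lockdep-checked */
	__sheaf_flush_main_batch(s);
	local_unlock(&s->cpu_sheaves->lock);
}

static bool sheaf_try_flush_main(struct kmem_cache *s)
{
	if (!local_trylock(&s->cpu_sheaves->lock))
		return false;		/* callers here tolerate failure */
	__sheaf_flush_main_batch(s);
	local_unlock(&s->cpu_sheaves->lock);
	return true;
}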
Fixes:

3feb464fb7
slab fixes for 7.0-rc1
Merge tag 'slab-for-7.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab fixes from Vlastimil Babka:
- Fix for spurious page allocation warnings on sheaf refill (Harry Yoo)
- Fix for CONFIG_MEM_ALLOC_PROFILING_DEBUG warnings (Suren
Baghdasaryan)
- Fix for kernel-doc warning on ksize() (Sanjay Chitroda)
- Fix to avoid setting slab->stride later than on slab allocation.
Doesn't yet fix the reports from powerpc; debugging is making progress
(Harry Yoo)
* tag 'slab-for-7.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
mm/slab: initialize slab->stride early to avoid memory ordering issues
mm/slub: drop duplicate kernel-doc for ksize()
mm/slab: mark alloc tags empty for sheaves allocated with __GFP_NO_OBJ_EXT
mm/slab: pass __GFP_NOWARN to refill_sheaf() if fallback is available

e9217ca77d
mm/slab: initialize slab->stride early to avoid memory ordering issues
When alloc_slab_obj_exts() is called later (instead of during slab
allocation and initialization), slab->stride and slab->obj_exts are
updated after the slab is already accessible by multiple CPUs.
The current implementation does not enforce memory ordering between
slab->stride and slab->obj_exts. For correctness, slab->stride must be
visible before slab->obj_exts. Otherwise, concurrent readers may observe
slab->obj_exts as non-zero while stride is still stale.
With stale slab->stride, slab_obj_ext() could return the wrong obj_ext.
This could cause two problems:
- obj_cgroup_put() is called on the wrong objcg, leading to
a use-after-free due to incorrect reference counting [1] by
decrementing the reference count more than it was incremented.
- refill_obj_stock() is called on the wrong objcg, leading to
a page_counter overflow [2] by uncharging more memory than charged.
Fix this by unconditionally initializing slab->stride in
alloc_slab_obj_exts_early(), before the need_slab_obj_exts() check.
In the case of SLAB_OBJ_EXT_IN_OBJ, it is overridden in the function.
This ensures updates to slab->stride become visible before the slab
can be accessed by other CPUs via the per-node partial slab list
(protected by spinlock with acquire/release semantics).
Thanks to Shakeel Butt for pointing out this issue [3].
[vbabka@kernel.org: the bug reports [1] and [2] are not yet fully fixed,
with investigation ongoing, but it is nevertheless a step in the right
direction to only set stride once after allocating the slab and not
change it later ]
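The shape of the fix, sketched with names from this changelog (the
SLAB_OBJ_EXT_IN_OBJ override is elided):

static void alloc_slab_obj_exts_early(struct kmem_cache *s, struct slab *slab)
{
	/* Set stride unconditionally, before any early return, so it is
	 * published together with the rest of the slab when the slab is
	 * added to the per-node partial list (the list spinlock provides
	 * the release ordering that readers pair with). */
	slab->stride = sizeof(struct slabobj_ext);
	if (!need_slab_obj_exts(s))
		return;
	/* ... allocate obj_exts; SLAB_OBJ_EXT_IN_OBJ overrides stride ... */
}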
Fixes:

f3ec502b67
mm/slab: mark alloc tags empty for sheaves allocated with __GFP_NO_OBJ_EXT
alloc_empty_sheaf() allocates sheaves from SLAB_KMALLOC caches using
__GFP_NO_OBJ_EXT to avoid recursion, however it does not mark their
allocation tags empty before freeing, which results in a warning when
CONFIG_MEM_ALLOC_PROFILING_DEBUG is set. Fix this by marking allocation
tags for such sheaves as empty. The problem was technically introduced
in commit

021ca6b670
mm/slab: pass __GFP_NOWARN to refill_sheaf() if fallback is available
When refill_sheaf() is called, failing to refill the sheaf doesn't
necessarily mean the allocation will fail because a fallback path
might be available and serve the allocation request.
Suppress spurious warnings by passing __GFP_NOWARN along with
__GFP_NOMEMALLOC whenever a fallback path is available.
When the caller is alloc_full_sheaf() or __pcs_replace_empty_main(),
the kernel always falls back to the slowpath (__slab_alloc_node()).
For __prefill_sheaf_pfmemalloc(), the fallback path is available
only when gfp_pfmemalloc_allowed() returns true.
Reported-and-tested-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Closes: https://lore.kernel.org/linux-mm/aZt2-oS9lkmwT7Ch@debian.local
Fixes:

a4ab97e34b
mm: fix NULL NODE_DATA dereference for memoryless nodes on boot
Commit

d155aab90f
mm/kfence: fix KASAN hardware tag faults during late enablement
When KASAN hardware tags are enabled, re-enabling KFENCE late (via
/sys/module/kfence/parameters/sample_interval) causes KASAN faults.
This happens because the KFENCE pool and metadata are allocated via the
page allocator, which tags the memory, while KFENCE continues to access it
using untagged pointers during initialization.
Use __GFP_SKIP_KASAN for late KFENCE pool and metadata allocations to
ensure the memory remains untagged, consistent with early allocations from
memblock. To support this, add __GFP_SKIP_KASAN to the allowlist in
__alloc_contig_verify_gfp_mask().
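A sketch of the late-allocation change (__GFP_SKIP_KASAN and
alloc_contig_pages() are existing interfaces; KFENCE's real late-init
path carries more setup):

/* Keep the late pool untagged, matching the early memblock allocation
 * that KASAN hardware tags never tagged. */
pages = alloc_contig_pages(nr_pages, GFP_KERNEL | __GFP_SKIP_KASAN,
			   first_online_node, NULL);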
Link: https://lkml.kernel.org/r/20260220144940.2779209-1-glider@google.com
Fixes:

c80f46ac22
mm/damon/core: disallow non-power of two min_region_sz
DAMON core uses min_region_sz parameter value as the DAMON region
alignment. The alignment is made using ALIGN() and ALIGN_DOWN(), which
support only the power of two alignments. But DAMON core API callers can
set min_region_sz to an arbitrary number. Users can also set it
indirectly, using addr_unit.
When the alignment is not properly set, DAMON behavior becomes difficult
to predict and understand, making it effectively broken. It doesn't
cause a kernel-crash-like significant issue, though.
Fix the issue by disallowing min_region_sz input that is not a power of
two. Add the check to damon_commit_ctx(), as all DAMON API callers who
set min_region_sz use the function.
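The added validation, sketched (is_power_of_2() is the standard helper
from linux/log2.h; the exact field path is assumed):

/* damon_commit_ctx(): reject alignments ALIGN()/ALIGN_DOWN() can't do. */
if (!is_power_of_2(src->attrs.min_region_sz))
	return -EINVAL;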
This can be a sort of behavioral change, but it does not break users, for
the following reasons. As the symptom is making DAMON effectively broken,
it is not reasonable to believe there are real use cases of non-power of
two min_region_sz. There is no known use case or issue reports from the
setup, either.
In the future, if we find real use cases of non-power-of-two alignments
and we can support them with low enough overhead, we can consider
relaxing the restriction. But, for now, simply disallowing the corner
case should be
good enough as a hot fix.
Link: https://lkml.kernel.org/r/20260214214124.87689-1-sj@kernel.org
Fixes:

f85b1c6af5
liveupdate: luo_file: remember retrieve() status
LUO keeps track of successful retrieve attempts on a LUO file. It does so
to avoid multiple retrievals of the same file. Multiple retrievals cause
problems because once the file is retrieved, the serialized data
structures are likely freed and the file is likely in a very different
state from what the code expects.
The retrieve boolean in struct luo_file keeps track of this, and is passed
to the finish callback so it knows what work was already done and what it
has left to do.
All this works well when retrieve succeeds. When it fails,
luo_retrieve_file() returns the error immediately, without ever storing
anywhere that a retrieve was attempted or what its error code was. This
results in an errored LIVEUPDATE_SESSION_RETRIEVE_FD ioctl to userspace,
but nothing prevents it from trying this again.
The retry is problematic for much of the same reasons listed above. The
file is likely in a very different state than what the retrieve logic
normally expects, and it might even have freed some serialization data
structures. Attempting to access them or free them again is going to
break things.
For example, if memfd managed to restore 8 of its 10 folios, but fails on
the 9th, a subsequent retrieve attempt will try to call
kho_restore_folio() on the first folio again, and that will fail with a
warning since it is an invalid operation.
Apart from the retry, finish() also breaks. Since on failure the
retrieved bool in luo_file is never touched, the finish() call on session
close will tell the file handler that retrieve was never attempted, and it
will try to access or free the data structures that might not exist, much
in the same way as the retry attempt.
There is no sane way of attempting the retrieve again. Remember the error
retrieve returned and directly return it on a retry. Also pass this
status code to finish() so it can make the right decision on the work it
needs to do.
This is done by changing the bool to an integer. A value of 0 means
retrieve was never attempted, a positive value means it succeeded, and a
negative value means it failed and the error code is the value.
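That encoding, as a condensed sketch (field and callback names assumed):

if (luo_file->retrieved < 0)
	return luo_file->retrieved;	/* failed before: repeat the error */
if (luo_file->retrieved > 0)
	return 0;			/* already retrieved successfully */

err = luo_file->fh->ops->retrieve(luo_file);
/* Remember the outcome for retries and for the finish() callback. */
luo_file->retrieved = err ? err : 1;
return err;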
Link: https://lkml.kernel.org/r/20260216132221.987987-1-pratyush@kernel.org
Fixes:

dd085fe9a8
mm: thp: deny THP for files on anonymous inodes
file_thp_enabled() incorrectly allows THP for files on anonymous inodes
(e.g. guest_memfd and secretmem). These files are created via
alloc_file_pseudo(), which does not call get_write_access() and leaves
inode->i_writecount at 0. Combined with S_ISREG(inode->i_mode) being
true, they appear as read-only regular files when
CONFIG_READ_ONLY_THP_FOR_FS is enabled, making them eligible for THP
collapse.
Anonymous inodes can never pass the inode_is_open_for_write() check
since their i_writecount is never incremented through the normal VFS
open path. The right thing to do is to exclude them from THP eligibility
altogether, since CONFIG_READ_ONLY_THP_FOR_FS was designed for real
filesystem files (e.g. shared libraries), not for pseudo-filesystem
inodes.
For guest_memfd, this allows khugepaged and MADV_COLLAPSE to create
large folios in the page cache via the collapse path, but the
guest_memfd fault handler does not support large folios. This triggers
WARN_ON_ONCE(folio_test_large(folio)) in kvm_gmem_fault_user_mapping().
For secretmem, collapse_file() tries to copy page contents through the
direct map, but secretmem pages are removed from the direct map. This
can result in a kernel crash:
BUG: unable to handle page fault for address: ffff88810284d000
RIP: 0010:memcpy_orig+0x16/0x130
Call Trace:
collapse_file
hpage_collapse_scan_file
madvise_collapse
Secretmem is not affected by the crash on upstream as the memory failure
recovery handles the failed copy gracefully, but it still triggers
confusing false memory failure reports:
Memory failure: 0x106d96f: recovery action for clean unevictable
LRU page: Recovered
Check IS_ANON_FILE(inode) in file_thp_enabled() to deny THP for all
anonymous inode files.
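The added check, sketched in place (IS_ANON_FILE() is the existing VFS
test for S_ANON_INODE; the rest of file_thp_enabled() is elided):

struct inode *inode = file_inode(vma->vm_file);

/* Anon-inode files (guest_memfd, secretmem, ...) only look like
 * read-only regular files because i_writecount is never taken via a
 * normal VFS open; exclude them from THP entirely. */
if (IS_ANON_FILE(inode))
	return false;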
Link: https://syzkaller.appspot.com/bug?extid=33a04338019ac7e43a44
Link: https://lore.kernel.org/linux-mm/CAEvNRgHegcz3ro35ixkDw39ES8=U6rs6S7iP0gkR9enr7HoGtA@mail.gmail.com
Link: https://lkml.kernel.org/r/20260214001535.435626-1-kartikey406@gmail.com
Fixes:

09833d99db
mm/kfence: disable KFENCE upon KASAN HW tags enablement
KFENCE does not currently support KASAN hardware tags. As a result, the
two features are incompatible when enabled simultaneously.
Given that MTE provides deterministic protection and KFENCE is a
sampling-based debugging tool, prioritize the stronger hardware
protections. Disable KFENCE initialization and free the pre-allocated
pool if KASAN hardware tags are detected to ensure the system maintains
the security guarantees provided by MTE.
Link: https://lkml.kernel.org/r/20260213095410.1862978-1-glider@google.com
Fixes:

189f164e57
Convert remaining multi-line kmalloc_obj/flex GFP_KERNEL uses
Conversion performed via this Coccinelle script:
// SPDX-License-Identifier: GPL-2.0-only
// Options: --include-headers-for-types --all-includes --include-headers --keep-comments
virtual patch
@gfp depends on patch && !(file in "tools") && !(file in "samples")@
identifier ALLOC = {kmalloc_obj,kmalloc_objs,kmalloc_flex,
kzalloc_obj,kzalloc_objs,kzalloc_flex,
kvmalloc_obj,kvmalloc_objs,kvmalloc_flex,
kvzalloc_obj,kvzalloc_objs,kvzalloc_flex};
@@
ALLOC(...
- , GFP_KERNEL
)
$ make coccicheck MODE=patch COCCI=gfp.cocci
Build and boot tested x86_64 with Fedora 42's GCC and Clang:
Linux version 6.19.0+ (user@host) (gcc (GCC) 15.2.1 20260123 (Red Hat 15.2.1-7), GNU ld version 2.44-12.fc42) #1 SMP PREEMPT_DYNAMIC 1970-01-01
Linux version 6.19.0+ (user@host) (clang version 20.1.8 (Fedora 20.1.8-4.fc42), LLD 20.1.8) #1 SMP PREEMPT_DYNAMIC 1970-01-01
Signed-off-by: Kees Cook <kees@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

32a92f8c89
Convert more 'alloc_obj' cases to default GFP_KERNEL arguments
This converts some of the visually simpler cases that have been split
over multiple lines. I only did the ones where it is easy to verify the
resulting diff by having just that final GFP_KERNEL argument on the next
line.
Somebody should probably do a proper coccinelle script for this, but for
me the trivial script actually resulted in an assertion failure in the
middle of the script. I probably had made it a bit _too_ trivial.
So after fighting that for a while I decided to just do some of the
syntactically simpler cases with variations of the previous 'sed'
scripts. The more syntactically complex multi-line cases would mostly
really want whitespace cleanup anyway.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>