drm/xe/uapi: Reject coh_none PAT index for CPU cached memory in madvise

Add validation in xe_vm_madvise_ioctl() to reject PAT indices with
XE_COH_NONE coherency mode when applied to CPU cached memory.

Using coh_none with CPU cached buffers is a security issue. When the
kernel clears pages before reallocation, the cleared data can sit in
the CPU cache as dirty lines that have not yet been written back to
DRAM. A GPU mapping with coh_none bypasses the CPU caches and reads
directly from DRAM, so it can observe the stale pre-clear contents,
potentially leaking data from previously freed pages of other
processes.

This aligns with the existing validation in vm_bind path
(xe_vm_bind_ioctl_validate_bo).
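
For illustration, the combination that is now rejected looks roughly
like the userspace sketch below. Only .type and .pat_index.val are
taken from the code added here; the remaining field names (vm_id,
start, range) and the DRM_IOCTL_XE_MADVISE wiring are assumptions
about the madvise uapi, not verified against this patch:

	/*
	 * Hedged sketch: madvise a range of a WB-cached (CPU cached) BO
	 * to a PAT index whose coherency mode is XE_COH_NONE. Field
	 * names other than .type and .pat_index.val are assumed.
	 */
	struct drm_xe_madvise args = {
		.vm_id = vm_id,		/* assumed: VM that maps the range */
		.start = addr,		/* assumed: start of the BO range */
		.range = size,		/* assumed: length of the range */
		.type = DRM_XE_MEM_RANGE_ATTR_PAT,
		.pat_index = { .val = coh_none_pat_index },
	};

	/* On iGPU this now fails with errno == EINVAL */
	if (ioctl(fd, DRM_IOCTL_XE_MADVISE, &args))
		perror("DRM_IOCTL_XE_MADVISE");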

v2 (Matthew Brost)
- Add Fixes: tag
- Move one debug print to a better place

v3 (Matthew Auld)
- Should be drm/xe/uapi
- More Cc

v4 (Shuicheng Lin)
- While at it, fix kmem leak issues

v5
- Drop the kmem leak fix, since it has been merged via another patch

v6
- Remove the change that is unrelated to the current fix

v7
- No change

v8
- Rebase

v9
- Limit the restrictions to iGPU

v10
- No change

Fixes: ada7486c56 ("drm/xe: Implement madvise ioctl for xe")
Cc: <stable@vger.kernel.org> # v6.18+
Cc: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: Mathew Alwin <alwin.mathew@intel.com>
Cc: Michal Mrozek <michal.mrozek@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Jia Yao <jia.yao@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Acked-by: Michal Mrozek <michal.mrozek@intel.com>
Acked-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patch.msgid.link/20260417055917.2027459-2-jia.yao@intel.com
(cherry picked from commit 016ccdb674b8c899940b3944952c96a6a490d10a)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

@@ -621,6 +621,45 @@ static int xe_madvise_purgeable_retained_to_user(const struct xe_madvise_details
 	return 0;
 }
 
+static bool check_pat_args_are_sane(struct xe_device *xe,
+				    struct xe_vmas_in_madvise_range *madvise_range,
+				    u16 pat_index)
+{
+	u16 coh_mode = xe_pat_index_get_coh_mode(xe, pat_index);
+	int i;
+
+	/*
+	 * Using coh_none with CPU cached buffers is not allowed on iGPU.
+	 * On iGPU the GPU shares the LLC with the CPU, so with coh_none
+	 * the GPU bypasses CPU caches and reads directly from DRAM,
+	 * potentially seeing stale sensitive data from previously freed
+	 * pages. On dGPU this restriction does not apply, because the
+	 * platform does not provide a non-coherent system memory access
+	 * path that would violate the DMA coherency contract.
+	 */
+	if (coh_mode != XE_COH_NONE || IS_DGFX(xe))
+		return true;
+
+	for (i = 0; i < madvise_range->num_vmas; i++) {
+		struct xe_vma *vma = madvise_range->vmas[i];
+		struct xe_bo *bo = xe_vma_bo(vma);
+
+		if (bo) {
+			/* BO with WB caching + COH_NONE is not allowed */
+			if (XE_IOCTL_DBG(xe, bo->cpu_caching == DRM_XE_GEM_CPU_CACHING_WB))
+				return false;
+			/* Imported dma-buf without caching info, assume cached */
+			if (XE_IOCTL_DBG(xe, !bo->cpu_caching))
+				return false;
+		} else if (XE_IOCTL_DBG(xe, xe_vma_is_cpu_addr_mirror(vma) ||
+					 xe_vma_is_userptr(vma)))
+			/* System memory (userptr/SVM) is always CPU cached */
+			return false;
+	}
+
+	return true;
+}
+
 static bool check_bo_args_are_sane(struct xe_vm *vm, struct xe_vma **vmas,
 				   int num_vmas, u32 atomic_val)
 {
@@ -750,6 +789,14 @@ int xe_vm_madvise_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
 		}
 	}
 
+	if (args->type == DRM_XE_MEM_RANGE_ATTR_PAT) {
+		if (!check_pat_args_are_sane(xe, &madvise_range,
+					     args->pat_index.val)) {
+			err = -EINVAL;
+			goto free_vmas;
+		}
+	}
+
 	if (madvise_range.has_bo_vmas) {
 		if (args->type == DRM_XE_MEM_RANGE_ATTR_ATOMIC) {
 			if (!check_bo_args_are_sane(vm, madvise_range.vmas,