s390 updates for 7.1 merge window

 - Add support for CONFIG_PAGE_TABLE_CHECK and enable it in
   debug_defconfig. s390 can only tell user from kernel PTEs via the mm,
   so mm_struct is now passed into pxx_user_accessible_page() callbacks
 
 - Expose the PCI function UID as an arch-specific slot attribute in
   sysfs so a function can be identified by its user-defined id while
   still in standby. Introduces a generic ARCH_PCI_SLOT_GROUPS hook in
   drivers/pci/slot.c
 
 - Refresh s390 PCI documentation to reflect current behavior and cover
   previously undocumented sysfs attributes
 
 - zcrypt device driver cleanup series: consistent field types, clearer
   variable naming, a kernel-doc warning fix, and a comment explaining
   the intentional synchronize_rcu() in pkey_handler_register()
 
 - Provide an s390 arch_raw_cpu_ptr() that avoids the detour via
   get_lowcore() using alternatives, shrinking defconfig by ~27 kB
 
 - Guard identity-base randomization with kaslr_enabled() so nokaslr
   keeps the identity mapping at 0 even with
   CONFIG_RANDOMIZE_IDENTITY_BASE=y
 
 - Build S390_MODULES_SANITY_TEST as a module only by requiring
   KUNIT && m, since built-in would not exercise module loading
 
 - Remove the permanently commented-out HMCDRV_DEV_CLASS create_class()
   code in the hmcdrv driver
 
 - Drop stale ident_map_size extern conflicting with asm/page.h
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEE3QHqV+H2a8xAv27vjYWKoQLXFBgFAmno78kACgkQjYWKoQLX
 FBiHbggAmW5hPIDf4F8HLomMREaaQb7QAyYwfeefwhcFUXSMu8td8S68aN4UkOnS
 DGSFjb+V6Nqd+ewrF7IS9pRU9YFsmBqo3MnLdcJ/ojZFz8BlwoAi+E4AD1a38hY2
 9zh2siPBMjydqBRUn6zjsK8auk4e8r44iS5MNNMXDF2ePE/PnPKTm93GhbtnnM6r
 a7mQkiPbi6j0sN/UU+pQkhS4fm2XNaGpCGGX0W0v2RdLIYZ9zQQdg4TaEsjQ5wZA
 OC3P8LG3OyJjnxsY2J8PIKK0VM0JP67KUGnQOi1y8HbN1LkFfAWF6CK7tsyUE/JM
 TYg7ENs2mUMmaa8niOGkiXzjjAxD0g==
 =NpmP
 -----END PGP SIGNATURE-----

Merge tag 's390-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Vasily Gorbik:

 - Add support for CONFIG_PAGE_TABLE_CHECK and enable it in
   debug_defconfig. s390 can only tell user from kernel PTEs via the mm,
   so mm_struct is now passed into pxx_user_accessible_page() callbacks

 - Expose the PCI function UID as an arch-specific slot attribute in
   sysfs so a function can be identified by its user-defined id while
   still in standby. Introduces a generic ARCH_PCI_SLOT_GROUPS hook in
   drivers/pci/slot.c

 - Refresh s390 PCI documentation to reflect current behavior and cover
   previously undocumented sysfs attributes

 - zcrypt device driver cleanup series: consistent field types, clearer
   variable naming, a kernel-doc warning fix, and a comment explaining
   the intentional synchronize_rcu() in pkey_handler_register()

 - Provide an s390 arch_raw_cpu_ptr() that avoids the detour via
   get_lowcore() using alternatives, shrinking defconfig by ~27 kB

 - Guard identity-base randomization with kaslr_enabled() so nokaslr
   keeps the identity mapping at 0 even with RANDOMIZE_IDENTITY_BASE=y

 - Build S390_MODULES_SANITY_TEST as a module only by requiring KUNIT &&
   m, since built-in would not exercise module loading

 - Remove the permanently commented-out HMCDRV_DEV_CLASS create_class()
   code in the hmcdrv driver

 - Drop stale ident_map_size extern conflicting with asm/page.h

* tag 's390-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/zcrypt: Fix warning about wrong kernel doc comment
  PCI: s390: Expose the UID as an arch specific PCI slot attribute
  docs: s390/pci: Improve and update PCI documentation
  s390/pkey: Add comment about synchronize_rcu() to pkey base
  s390/hmcdrv: Remove commented out code
  s390/zcrypt: Slight rework on the agent_id field
  s390/zcrypt: Explicitly use a card variable in _zcrypt_send_cprb
  s390/zcrypt: Rework MKVP fields and handling
  s390/zcrypt: Make apfs a real unsigned int field
  s390/zcrypt: Rework domain processing within zcrypt device driver
  s390/zcrypt: Move inline function rng_type6cprb_msgx from header to code
  s390/percpu: Provide arch_raw_cpu_ptr()
  s390: Enable page table check for debug_defconfig
  s390/pgtable: Add s390 support for page table check
  s390/pgtable: Use set_pmd_bit() to invalidate PMD entry
  mm/page_table_check: Pass mm_struct to pxx_user_accessible_page()
  s390/boot: Respect kaslr_enabled() for identity randomization
  s390/Kconfig: Make modules sanity test a module-only option
  s390/setup: Drop stale ident_map_size declaration
This commit is contained in:
Linus Torvalds 2026-04-22 11:13:45 -07:00
commit 2a4c0c11c0
28 changed files with 467 additions and 389 deletions


@ -6,6 +6,7 @@ S/390 PCI
Authors:
- Pierre Morel
- Niklas Schnelle
Copyright, IBM Corp. 2020
@ -27,14 +28,16 @@ Command line parameters
debugfs entries
---------------
The S/390 debug feature (s390dbf) generates views to hold various debug results in sysfs directories of the form:
The S/390 debug feature (s390dbf) generates views to hold various debug results
in sysfs directories of the form:
* /sys/kernel/debug/s390dbf/pci_*/
For example:
- /sys/kernel/debug/s390dbf/pci_msg/sprintf
Holds messages from the processing of PCI events, like machine check handling
holds messages from the processing of PCI events, like machine check handling
and setting of global functionality, like UID checking.
Change the level of logging to be more or less verbose by piping
@ -47,87 +50,141 @@ Sysfs entries
Entries specific to zPCI functions and entries that hold zPCI information.
* /sys/bus/pci/slots/XXXXXXXX
* /sys/bus/pci/slots/XXXXXXXX:
The slot entries are set up using the function identifier (FID) of the
PCI function. The format depicted as XXXXXXXX above is 8 hexadecimal digits
with 0 padding and lower case hexadecimal digits.
The slot entries are set up using the function identifier (FID) of the PCI
function as slot name. The format depicted as XXXXXXXX above is 8 hexadecimal
digits with 0 padding and lower case hexadecimal digits.
- /sys/bus/pci/slots/XXXXXXXX/power
In addition to using the FID as the name of the slot, the slot directory
also contains the following s390-specific slot attributes.
- uid:
The User-defined identifier (UID) of the function which may be configured
by this slot. See also the corresponding attribute of the device.
A physical function that currently supports a virtual function cannot be
powered off until all virtual functions are removed with:
echo 0 > /sys/bus/pci/devices/XXXX:XX:XX.X/sriov_numvfs
echo 0 > /sys/bus/pci/devices/DDDD:BB:dd.f/sriov_numvfs
* /sys/bus/pci/devices/XXXX:XX:XX.X/
* /sys/bus/pci/devices/DDDD:BB:dd.f/:
- function_id
A zPCI function identifier that uniquely identifies the function in the Z server.
- function_id:
The zPCI function identifier (FID) is a 32-bit hexadecimal value that
uniquely identifies the PCI function. Unless the hypervisor provides
a virtual FID, e.g. on KVM, this identifier is unique across the machine
even between different partitions.
- function_handle
Low-level identifier used for a configured PCI function.
It might be useful for debugging.
- function_handle:
This 32-bit hexadecimal value is a low-level identifier used for a PCI
function. Note that the function handle may be changed and become invalid
on PCI events and when enabling/disabling the PCI function.
- pchid
Model-dependent location of the I/O adapter.
- pchid:
This 16-bit hexadecimal value encodes a model-dependent location for
the PCI function.
- pfgid
PCI function group ID, functions that share identical functionality
- pfgid:
PCI function group ID; functions that share identical functionality
use a common identifier.
A PCI group defines interrupts, IOMMU, IOTLB, and DMA specifics.
- vfn
- vfn:
The virtual function number, from 1 to N for virtual functions,
0 for physical functions.
- pft
The PCI function type
- pft:
The PCI function type is an s390-specific type attribute. It indicates
a more general, usage oriented, type than PCI Specification
class/vendor/device identifiers. That is, PCI functions with the same pft
value may be backed by different hardware implementations. At the same time,
apart from unclassified functions (pft is 0x00), the same pft value
generally implies a similar usage model. Conversely, the same
PCI hardware device may appear with different pft values when in a
different usage model. For example NETD and NETH VFs may be implemented
by the same PCI hardware device but in NETD the parent Physical Function
is user managed while with NETH it is platform managed.
- port
The port corresponds to the physical port the function is attached to.
It also gives an indication of the physical function a virtual function
is attached to.
Currently the following PFT values are defined:
- uid
The user identifier (UID) may be defined as part of the machine
configuration or the z/VM or KVM guest configuration. If the accompanying
uid_is_unique attribute is 1 the platform guarantees that the UID is unique
within that instance and no devices with the same UID can be attached
during the lifetime of the system.
- 0x00 (UNC): Unclassified
- 0x02 (ROCE): RoCE Express
- 0x05 (ISM): Internal Shared Memory
- 0x0a (ROC2): RoCE Express 2
- 0x0b (NVMe): NVMe
- 0x0c (NETH): Network Express hybrid
- 0x0d (CNW): Cloud Network Adapter
- 0x0f (NETD): Network Express direct
- uid_is_unique
Indicates whether the user identifier (UID) is guaranteed to be and remain
unique within this Linux instance.
- port:
The port is a decimal value corresponding to the physical port the function
is attached to. Virtual Functions (VFs) share the port with their parent
Physical Function (PF). A value of 0 indicates that the port attribute is
not applicable for that PCI function type.
- pfip/segmentX
- uid:
The user-defined identifier (UID) for a PCI function is a 32-bit
hexadecimal value. It is defined on a per instance basis as part of the
partition, KVM guest, or z/VM guest configuration. If UID Checking is
enabled the platform ensures that the UID is unique within that instance
and no two PCI functions with the same UID will be visible to the instance.
Independent of this guarantee and unlike the function ID (FID) the UID may
be the same in different partitions within the same machine. This allows
PCI configurations in multiple partitions to be set up identically in the
UID namespace.
- uid_is_unique:
A 0 or 1 flag indicating whether the user-defined identifier (UID) is
guaranteed to be and remain unique within this Linux instance. This
platform feature is called UID Checking.
- pfip/segmentX:
The segments determine the isolation of a function.
They correspond to the physical path to the function.
The more the segments are different, the more the functions are isolated.
- fidparm:
Contains an 8-bit-per-PCI function parameter field in hexadecimal provided
by the platform. The meaning of this field is PCI function type specific.
For NETH VFs a value of 0x01 indicates that the function supports
promiscuous mode.
* /sys/firmware/clp/uid_checking:
In addition to the per-device uid_is_unique attribute this presents a
global indication of whether UID Checking is enabled. This allows users
to check for UID Checking even when no PCI functions are configured.
Enumeration and hotplug
=======================
The PCI address consists of four parts: domain, bus, device and function,
and is of this form: DDDD:BB:dd.f
and is of this form: DDDD:BB:dd.f.
* When not using multi-functions (norid is set, or the firmware does not
support multi-functions):
* For a PCI function for which the platform does not expose the RID, the
pci=norid kernel parameter is used, or a so-called isolated Virtual Function
which does have RID information but is used without its parent Physical
Function being part of the same PCI configuration:
- There is only one function per domain.
- The domain is set from the zPCI function's UID as defined during the
LPAR creation.
- The domain is set from the zPCI function's UID if UID Checking is on;
otherwise the domain ID is generated dynamically and is not stable
across reboots or hot plug.
* When using multi-functions (norid parameter is not set),
zPCI functions are addressed differently:
* For a PCI function for which the platform exposes the RID and which
is not an Isolated Virtual Function:
- There is still only one bus per domain.
- There can be up to 256 functions per bus.
- There can be up to 256 PCI functions per bus.
- The domain part of the address of all functions for
a multi-Function device is set from the zPCI function's UID as defined
in the LPAR creation for the function zero.
- The domain part of the address of all functions within the same topology is
that of the configured PCI function with the lowest devfn within that
topology.
- New functions will only be ready for use after the function zero
(the function with devfn 0) has been enumerated.
- Virtual Functions generated by an SR-IOV capable Physical Function only
become visible once SR-IOV is enabled.


@ -1276,17 +1276,17 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
#endif
#ifdef CONFIG_PAGE_TABLE_CHECK
static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
return pte_valid(pte) && (pte_user(pte) || pte_user_exec(pte));
}
static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
{
return pmd_valid(pmd) && !pmd_table(pmd) && (pmd_user(pmd) || pmd_user_exec(pmd));
}
static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
{
return pud_valid(pud) && !pud_table(pud) && (pud_user(pud) || pud_user_exec(pud));
}


@ -438,7 +438,7 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
return true;
}
static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
return pte_present(pte) && !is_kernel_addr(addr);
}


@ -549,7 +549,7 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
return arch_pte_access_permitted(pte_val(pte), write, 0);
}
static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
return pte_present(pte) && pte_user(pte);
}
@ -925,9 +925,9 @@ static inline bool pud_access_permitted(pud_t pud, bool write)
}
#define pud_user_accessible_page pud_user_accessible_page
static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
{
return pud_leaf(pud) && pte_user_accessible_page(pud_pte(pud), addr);
return pud_leaf(pud) && pte_user_accessible_page(mm, addr, pud_pte(pud));
}
#define __p4d_raw(x) ((p4d_t) { __pgd_raw(x) })
@ -1096,9 +1096,9 @@ static inline bool pmd_access_permitted(pmd_t pmd, bool write)
}
#define pmd_user_accessible_page pmd_user_accessible_page
static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
{
return pmd_leaf(pmd) && pte_user_accessible_page(pmd_pte(pmd), addr);
return pmd_leaf(pmd) && pte_user_accessible_page(mm, addr, pmd_pte(pmd));
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE


@ -249,7 +249,7 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
return true;
}
static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
return pte_present(pte) && !is_kernel_addr(addr);
}


@ -213,11 +213,11 @@ static inline bool arch_supports_memmap_on_memory(unsigned long vmemmap_size)
#endif /* CONFIG_PPC64 */
#ifndef pmd_user_accessible_page
#define pmd_user_accessible_page(pmd, addr) false
#define pmd_user_accessible_page(mm, addr, pmd) false
#endif
#ifndef pud_user_accessible_page
#define pud_user_accessible_page(pud, addr) false
#define pud_user_accessible_page(mm, addr, pud) false
#endif
#endif /* __ASSEMBLER__ */


@ -984,17 +984,17 @@ static inline void set_pud_at(struct mm_struct *mm, unsigned long addr,
}
#ifdef CONFIG_PAGE_TABLE_CHECK
static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
return pte_present(pte) && pte_user(pte);
}
static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
{
return pmd_leaf(pmd) && pmd_user(pmd);
}
static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
{
return pud_leaf(pud) && pud_user(pud);
}


@ -152,6 +152,7 @@ config S390
select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && CC_IS_CLANG
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_SUPPORTS_PAGE_TABLE_CHECK
select ARCH_SUPPORTS_PER_VMA_LOCK
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
@ -1023,7 +1024,7 @@ config S390_KPROBES_SANITY_TEST
config S390_MODULES_SANITY_TEST
def_tristate n
depends on KUNIT
depends on KUNIT && m
default KUNIT_ALL_TESTS
prompt "Enable s390 specific modules tests"
select S390_MODULES_SANITY_TEST_HELPERS


@ -440,7 +440,8 @@ static unsigned long setup_kernel_memory_layout(unsigned long kernel_size)
max_mappable = max(ident_map_size, MAX_DCSS_ADDR);
max_mappable = min(max_mappable, vmemmap_start);
#ifdef CONFIG_RANDOMIZE_IDENTITY_BASE
__identity_base = round_down(vmemmap_start - max_mappable, rte_size);
if (kaslr_enabled())
__identity_base = round_down(vmemmap_start - max_mappable, rte_size);
#endif
boot_debug("identity map: 0x%016lx-0x%016lx\n", __identity_base,
__identity_base + ident_map_size);


@ -925,3 +925,5 @@ CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y
CONFIG_TEST_BITOPS=m
CONFIG_TEST_BPF=m
CONFIG_PAGE_TABLE_CHECK=y
CONFIG_PAGE_TABLE_CHECK_ENFORCED=y


@ -208,6 +208,10 @@ extern const struct attribute_group zpci_ident_attr_group;
&pfip_attr_group, \
&zpci_ident_attr_group,
extern const struct attribute_group zpci_slot_attr_group;
#define ARCH_PCI_SLOT_GROUPS (&zpci_slot_attr_group)
extern unsigned int s390_pci_force_floating __initdata;
extern unsigned int s390_pci_no_rid;


@ -12,6 +12,24 @@
*/
#define __my_cpu_offset get_lowcore()->percpu_offset
#define arch_raw_cpu_ptr(_ptr) \
({ \
unsigned long lc_percpu, tcp_ptr__; \
\
tcp_ptr__ = (__force unsigned long)(_ptr); \
lc_percpu = offsetof(struct lowcore, percpu_offset); \
asm_inline volatile( \
ALTERNATIVE("ag %[__ptr__],%[offzero](%%r0)\n", \
"ag %[__ptr__],%[offalt](%%r0)\n", \
ALT_FEATURE(MFEATURE_LOWCORE)) \
: [__ptr__] "+d" (tcp_ptr__) \
: [offzero] "i" (lc_percpu), \
[offalt] "i" (lc_percpu + LOWCORE_ALT_ADDRESS), \
"m" (((struct lowcore *)0)->percpu_offset) \
: "cc"); \
(TYPEOF_UNQUAL(*(_ptr)) __force __kernel *)tcp_ptr__; \
})
/*
* We use a compare-and-swap loop since that uses less cpu cycles than
* disabling and enabling interrupts like the generic variant would do.


@ -16,8 +16,10 @@
#include <linux/mm_types.h>
#include <linux/cpufeature.h>
#include <linux/page-flags.h>
#include <linux/page_table_check.h>
#include <linux/radix-tree.h>
#include <linux/atomic.h>
#include <linux/mmap_lock.h>
#include <asm/ctlreg.h>
#include <asm/bug.h>
#include <asm/page.h>
@ -1190,6 +1192,7 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
/* At this point the reference through the mapping is still present */
if (mm_is_protected(mm) && pte_present(res))
WARN_ON_ONCE(uv_convert_from_secure_pte(res));
page_table_check_pte_clear(mm, addr, res);
return res;
}
@ -1208,6 +1211,7 @@ static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
/* At this point the reference through the mapping is still present */
if (mm_is_protected(vma->vm_mm) && pte_present(res))
WARN_ON_ONCE(uv_convert_from_secure_pte(res));
page_table_check_pte_clear(vma->vm_mm, addr, res);
return res;
}
@ -1231,6 +1235,9 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
} else {
res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
}
page_table_check_pte_clear(mm, addr, res);
/* Nothing to do */
if (!mm_is_protected(mm) || !pte_present(res))
return res;
@ -1327,6 +1334,7 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
{
if (pte_present(entry))
entry = clear_pte_bit(entry, __pgprot(_PAGE_UNUSED));
page_table_check_ptes_set(mm, addr, ptep, entry, nr);
for (;;) {
set_pte(ptep, entry);
if (--nr == 0)
@ -1703,6 +1711,7 @@ static inline bool pmdp_clear_flush_young(struct vm_area_struct *vma,
static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, pmd_t entry)
{
page_table_check_pmd_set(mm, addr, pmdp, entry);
set_pmd(pmdp, entry);
}
@ -1717,7 +1726,11 @@ static inline pmd_t pmd_mkhuge(pmd_t pmd)
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp)
{
return pmdp_xchg_direct(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
pmd_t pmd;
pmd = pmdp_xchg_direct(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
page_table_check_pmd_clear(mm, addr, pmd);
return pmd;
}
#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR_FULL
@ -1725,12 +1738,17 @@ static inline pmd_t pmdp_huge_get_and_clear_full(struct vm_area_struct *vma,
unsigned long addr,
pmd_t *pmdp, int full)
{
pmd_t pmd;
if (full) {
pmd_t pmd = *pmdp;
pmd = *pmdp;
set_pmd(pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
page_table_check_pmd_clear(vma->vm_mm, addr, pmd);
return pmd;
}
return pmdp_xchg_lazy(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
pmd = pmdp_xchg_lazy(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
page_table_check_pmd_clear(vma->vm_mm, addr, pmd);
return pmd;
}
#define __HAVE_ARCH_PMDP_HUGE_CLEAR_FLUSH
@ -1744,11 +1762,16 @@ static inline pmd_t pmdp_huge_clear_flush(struct vm_area_struct *vma,
static inline pmd_t pmdp_invalidate(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmdp)
{
pmd_t pmd;
pmd_t pmd = *pmdp;
VM_WARN_ON_ONCE(!pmd_present(*pmdp));
pmd = __pmd(pmd_val(*pmdp) | _SEGMENT_ENTRY_INVALID);
return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
VM_WARN_ON_ONCE(!pmd_present(pmd));
pmd = set_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_INVALID));
#ifdef CONFIG_PAGE_TABLE_CHECK
pmd = clear_pmd_bit(pmd, __pgprot(_SEGMENT_ENTRY_READ));
#endif
page_table_check_pmd_set(vma->vm_mm, addr, pmdp, pmd);
pmd = pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
return pmd;
}
#define __HAVE_ARCH_PMDP_SET_WRPROTECT
@ -1783,6 +1806,29 @@ static inline int has_transparent_hugepage(void)
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#ifdef CONFIG_PAGE_TABLE_CHECK
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
VM_BUG_ON(mm == &init_mm);
return pte_present(pte);
}
static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
{
VM_BUG_ON(mm == &init_mm);
return pmd_leaf(pmd) && (pmd_val(pmd) & _SEGMENT_ENTRY_READ);
}
static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
{
VM_BUG_ON(mm == &init_mm);
return pud_leaf(pud);
}
#endif
/*
* 64 bit swap entry format:
* A page-table entry has some bits we have to treat in a special way.


@ -52,7 +52,6 @@ extern unsigned int zlib_dfltcc_support;
#define ZLIB_DFLTCC_INFLATE_ONLY 3
#define ZLIB_DFLTCC_FULL_DEBUG 4
extern unsigned long ident_map_size;
extern unsigned long max_mappable;
/* The Write Back bit position in the physaddr is given by the SLPC PCI */


@ -187,6 +187,17 @@ static ssize_t index_show(struct device *dev,
}
static DEVICE_ATTR_RO(index);
static ssize_t zpci_uid_slot_show(struct pci_slot *slot, char *buf)
{
struct zpci_dev *zdev = container_of(slot->hotplug, struct zpci_dev,
hotplug_slot);
return sysfs_emit(buf, "0x%x\n", zdev->uid);
}
static struct pci_slot_attribute zpci_slot_attr_uid =
__ATTR(uid, 0444, zpci_uid_slot_show, NULL);
static umode_t zpci_index_is_visible(struct kobject *kobj,
struct attribute *attr, int n)
{
@ -243,6 +254,15 @@ const struct attribute_group pfip_attr_group = {
.attrs = pfip_attrs,
};
static struct attribute *zpci_slot_attrs[] = {
&zpci_slot_attr_uid.attr,
NULL,
};
const struct attribute_group zpci_slot_attr_group = {
.attrs = zpci_slot_attrs,
};
static struct attribute *clp_fw_attrs[] = {
&uid_checking_attr.attr,
NULL,


@ -1672,17 +1672,17 @@ static inline bool arch_has_hw_nonleaf_pmd_young(void)
#endif
#ifdef CONFIG_PAGE_TABLE_CHECK
static inline bool pte_user_accessible_page(pte_t pte, unsigned long addr)
static inline bool pte_user_accessible_page(struct mm_struct *mm, unsigned long addr, pte_t pte)
{
return (pte_val(pte) & _PAGE_PRESENT) && (pte_val(pte) & _PAGE_USER);
}
static inline bool pmd_user_accessible_page(pmd_t pmd, unsigned long addr)
static inline bool pmd_user_accessible_page(struct mm_struct *mm, unsigned long addr, pmd_t pmd)
{
return pmd_leaf(pmd) && (pmd_val(pmd) & _PAGE_PRESENT) && (pmd_val(pmd) & _PAGE_USER);
}
static inline bool pud_user_accessible_page(pud_t pud, unsigned long addr)
static inline bool pud_user_accessible_page(struct mm_struct *mm, unsigned long addr, pud_t pud)
{
return pud_leaf(pud) && (pud_val(pud) & _PAGE_PRESENT) && (pud_val(pud) & _PAGE_USER);
}


@ -106,7 +106,18 @@ static struct attribute *pci_slot_default_attrs[] = {
&pci_slot_attr_cur_speed.attr,
NULL,
};
ATTRIBUTE_GROUPS(pci_slot_default);
static const struct attribute_group pci_slot_default_group = {
.attrs = pci_slot_default_attrs,
};
static const struct attribute_group *pci_slot_default_groups[] = {
&pci_slot_default_group,
#ifdef ARCH_PCI_SLOT_GROUPS
ARCH_PCI_SLOT_GROUPS,
#endif
NULL,
};
static const struct kobj_type pci_slot_ktype = {
.sysfs_ops = &pci_slot_sysfs_ops,


@ -30,26 +30,12 @@
#include "hmcdrv_dev.h"
#include "hmcdrv_ftp.h"
/* If the following macro is defined, then the HMC device creates it's own
* separated device class (and dynamically assigns a major number). If not
* defined then the HMC device is assigned to the "misc" class devices.
*
#define HMCDRV_DEV_CLASS "hmcftp"
*/
#define HMCDRV_DEV_NAME "hmcdrv"
#define HMCDRV_DEV_BUSY_DELAY 500 /* delay between -EBUSY trials in ms */
#define HMCDRV_DEV_BUSY_RETRIES 3 /* number of retries on -EBUSY */
struct hmcdrv_dev_node {
#ifdef HMCDRV_DEV_CLASS
struct cdev dev; /* character device structure */
umode_t mode; /* mode of device node (unused, zero) */
#else
struct miscdevice dev; /* "misc" device structure */
#endif
};
static int hmcdrv_dev_open(struct inode *inode, struct file *fp);
@ -75,38 +61,6 @@ static const struct file_operations hmcdrv_dev_fops = {
static struct hmcdrv_dev_node hmcdrv_dev; /* HMC device struct (static) */
#ifdef HMCDRV_DEV_CLASS
static struct class *hmcdrv_dev_class; /* device class pointer */
static dev_t hmcdrv_dev_no; /* device number (major/minor) */
/**
* hmcdrv_dev_name() - provides a naming hint for a device node in /dev
* @dev: device for which the naming/mode hint is
* @mode: file mode for device node created in /dev
*
* See: devtmpfs.c, function devtmpfs_create_node()
*
* Return: recommended device file name in /dev
*/
static char *hmcdrv_dev_name(const struct device *dev, umode_t *mode)
{
char *nodename = NULL;
const char *devname = dev_name(dev); /* kernel device name */
if (devname)
nodename = kasprintf(GFP_KERNEL, "%s", devname);
/* on device destroy (rmmod) the mode pointer may be NULL
*/
if (mode)
*mode = hmcdrv_dev.mode;
return nodename;
}
#endif /* HMCDRV_DEV_CLASS */
/*
* open()
*/
@ -276,67 +230,11 @@ static ssize_t hmcdrv_dev_write(struct file *fp, const char __user *ubuf,
*/
int hmcdrv_dev_init(void)
{
int rc;
#ifdef HMCDRV_DEV_CLASS
struct device *dev;
rc = alloc_chrdev_region(&hmcdrv_dev_no, 0, 1, HMCDRV_DEV_NAME);
if (rc)
goto out_err;
cdev_init(&hmcdrv_dev.dev, &hmcdrv_dev_fops);
hmcdrv_dev.dev.owner = THIS_MODULE;
rc = cdev_add(&hmcdrv_dev.dev, hmcdrv_dev_no, 1);
if (rc)
goto out_unreg;
/* At this point the character device exists in the kernel (see
* /proc/devices), but not under /dev nor /sys/devices/virtual. So
* we have to create an associated class (see /sys/class).
*/
hmcdrv_dev_class = class_create(HMCDRV_DEV_CLASS);
if (IS_ERR(hmcdrv_dev_class)) {
rc = PTR_ERR(hmcdrv_dev_class);
goto out_devdel;
}
/* Finally a device node in /dev has to be established (as 'mkdev'
* does from the command line). Notice that assignment of a device
* node name/mode function is optional (only for mode != 0600).
*/
hmcdrv_dev.mode = 0; /* "unset" */
hmcdrv_dev_class->devnode = hmcdrv_dev_name;
dev = device_create(hmcdrv_dev_class, NULL, hmcdrv_dev_no, NULL,
"%s", HMCDRV_DEV_NAME);
if (!IS_ERR(dev))
return 0;
rc = PTR_ERR(dev);
class_destroy(hmcdrv_dev_class);
hmcdrv_dev_class = NULL;
out_devdel:
cdev_del(&hmcdrv_dev.dev);
out_unreg:
unregister_chrdev_region(hmcdrv_dev_no, 1);
out_err:
#else /* !HMCDRV_DEV_CLASS */
hmcdrv_dev.dev.minor = MISC_DYNAMIC_MINOR;
hmcdrv_dev.dev.name = HMCDRV_DEV_NAME;
hmcdrv_dev.dev.fops = &hmcdrv_dev_fops;
hmcdrv_dev.dev.mode = 0; /* finally produces 0600 */
rc = misc_register(&hmcdrv_dev.dev);
#endif /* HMCDRV_DEV_CLASS */
return rc;
return misc_register(&hmcdrv_dev.dev);
}
/**
@ -344,15 +242,5 @@ int hmcdrv_dev_init(void)
*/
void hmcdrv_dev_exit(void)
{
#ifdef HMCDRV_DEV_CLASS
if (!IS_ERR_OR_NULL(hmcdrv_dev_class)) {
device_destroy(hmcdrv_dev_class, hmcdrv_dev_no);
class_destroy(hmcdrv_dev_class);
}
cdev_del(&hmcdrv_dev.dev);
unregister_chrdev_region(hmcdrv_dev_no, 1);
#else /* !HMCDRV_DEV_CLASS */
misc_deregister(&hmcdrv_dev.dev);
#endif /* HMCDRV_DEV_CLASS */
}


@ -60,6 +60,13 @@ int pkey_handler_register(struct pkey_handler *handler)
list_add_rcu(&handler->list, &handler_list);
spin_unlock(&handler_list_write_lock);
/*
* Fast path to push the info about the updated list to the other
* CPUs. If removed, the other CPUs may get the updated list when the
* RCU context is synched. As this code is in general not performance
* critical and the list update mostly only occurs at the early time in
* system startup the focus is on concurrency versus performance.
*/
synchronize_rcu();
module_put(handler->module);


@ -87,50 +87,52 @@ static int cca_apqns4key(const u8 *key, u32 keylen, u32 flags,
zcrypt_wait_api_operational();
if (hdr->type == TOKTYPE_CCA_INTERNAL) {
u64 cur_mkvp = 0, old_mkvp = 0;
const u8 *ptr_cur_mkvp = NULL;
const u8 *ptr_old_mkvp = NULL;
int minhwtype = ZCRYPT_CEX3C;
if (hdr->version == TOKVER_CCA_AES) {
struct secaeskeytoken *t = (struct secaeskeytoken *)key;
if (flags & PKEY_FLAGS_MATCH_CUR_MKVP)
cur_mkvp = t->mkvp;
ptr_cur_mkvp = t->mkvp;
if (flags & PKEY_FLAGS_MATCH_ALT_MKVP)
old_mkvp = t->mkvp;
ptr_old_mkvp = t->mkvp;
} else if (hdr->version == TOKVER_CCA_VLSC) {
struct cipherkeytoken *t = (struct cipherkeytoken *)key;
minhwtype = ZCRYPT_CEX6;
if (flags & PKEY_FLAGS_MATCH_CUR_MKVP)
cur_mkvp = t->mkvp0;
ptr_cur_mkvp = t->mkvp0;
if (flags & PKEY_FLAGS_MATCH_ALT_MKVP)
old_mkvp = t->mkvp0;
ptr_old_mkvp = t->mkvp0;
} else {
/* unknown CCA internal token type */
return -EINVAL;
}
rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF,
minhwtype, AES_MK_SET,
cur_mkvp, old_mkvp, xflags);
ptr_cur_mkvp, ptr_old_mkvp, xflags);
if (rc)
goto out;
} else if (hdr->type == TOKTYPE_CCA_INTERNAL_PKA) {
struct eccprivkeytoken *t = (struct eccprivkeytoken *)key;
u64 cur_mkvp = 0, old_mkvp = 0;
const u8 *ptr_cur_mkvp = NULL;
const u8 *ptr_old_mkvp = NULL;
if (t->secid == 0x20) {
if (flags & PKEY_FLAGS_MATCH_CUR_MKVP)
cur_mkvp = t->mkvp;
ptr_cur_mkvp = t->mkvp;
if (flags & PKEY_FLAGS_MATCH_ALT_MKVP)
old_mkvp = t->mkvp;
ptr_old_mkvp = t->mkvp;
} else {
/* unknown CCA internal 2 token type */
return -EINVAL;
}
rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF,
ZCRYPT_CEX7, APKA_MK_SET,
cur_mkvp, old_mkvp, xflags);
ptr_cur_mkvp, ptr_old_mkvp, xflags);
if (rc)
goto out;
@@ -167,31 +169,33 @@ static int cca_apqns4type(enum pkey_key_type ktype,
zcrypt_wait_api_operational();
if (ktype == PKEY_TYPE_CCA_DATA || ktype == PKEY_TYPE_CCA_CIPHER) {
u64 cur_mkvp = 0, old_mkvp = 0;
const u8 *ptr_cur_mkvp = NULL;
const u8 *ptr_old_mkvp = NULL;
int minhwtype = ZCRYPT_CEX3C;
if (flags & PKEY_FLAGS_MATCH_CUR_MKVP)
cur_mkvp = *((u64 *)cur_mkvp);
ptr_cur_mkvp = cur_mkvp;
if (flags & PKEY_FLAGS_MATCH_ALT_MKVP)
old_mkvp = *((u64 *)alt_mkvp);
ptr_old_mkvp = alt_mkvp;
if (ktype == PKEY_TYPE_CCA_CIPHER)
minhwtype = ZCRYPT_CEX6;
rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF,
minhwtype, AES_MK_SET,
cur_mkvp, old_mkvp, xflags);
ptr_cur_mkvp, ptr_old_mkvp, xflags);
if (rc)
goto out;
} else if (ktype == PKEY_TYPE_CCA_ECC) {
u64 cur_mkvp = 0, old_mkvp = 0;
const u8 *ptr_cur_mkvp = NULL;
const u8 *ptr_old_mkvp = NULL;
if (flags & PKEY_FLAGS_MATCH_CUR_MKVP)
cur_mkvp = *((u64 *)cur_mkvp);
ptr_cur_mkvp = cur_mkvp;
if (flags & PKEY_FLAGS_MATCH_ALT_MKVP)
old_mkvp = *((u64 *)alt_mkvp);
ptr_old_mkvp = alt_mkvp;
rc = cca_findcard2(_apqns, &_nr_apqns, 0xFFFF, 0xFFFF,
ZCRYPT_CEX7, APKA_MK_SET,
cur_mkvp, old_mkvp, xflags);
ptr_cur_mkvp, ptr_old_mkvp, xflags);
if (rc)
goto out;
@@ -487,14 +491,14 @@ static int cca_verifykey(const u8 *key, u32 keylen,
*keybitsize = t->bitsize;
rc = cca_findcard2(apqns, &nr_apqns, *card, *dom,
ZCRYPT_CEX3C, AES_MK_SET,
t->mkvp, 0, xflags);
t->mkvp, NULL, xflags);
if (!rc)
*flags = PKEY_FLAGS_MATCH_CUR_MKVP;
if (rc == -ENODEV) {
nr_apqns = ARRAY_SIZE(apqns);
rc = cca_findcard2(apqns, &nr_apqns, *card, *dom,
ZCRYPT_CEX3C, AES_MK_SET,
0, t->mkvp, xflags);
NULL, t->mkvp, xflags);
if (!rc)
*flags = PKEY_FLAGS_MATCH_ALT_MKVP;
}
@@ -521,14 +525,14 @@ static int cca_verifykey(const u8 *key, u32 keylen,
*keybitsize = PKEY_SIZE_AES_256;
rc = cca_findcard2(apqns, &nr_apqns, *card, *dom,
ZCRYPT_CEX6, AES_MK_SET,
t->mkvp0, 0, xflags);
t->mkvp0, NULL, xflags);
if (!rc)
*flags = PKEY_FLAGS_MATCH_CUR_MKVP;
if (rc == -ENODEV) {
nr_apqns = ARRAY_SIZE(apqns);
rc = cca_findcard2(apqns, &nr_apqns, *card, *dom,
ZCRYPT_CEX6, AES_MK_SET,
0, t->mkvp0, xflags);
NULL, t->mkvp0, xflags);
if (!rc)
*flags = PKEY_FLAGS_MATCH_ALT_MKVP;
}
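The cca_verifykey() hunks above keep the same two-pass strategy with the new byte-pointer interface: search for an APQN whose *current* master key matches first, and only on -ENODEV retry against the *old* master key, flagging which one matched. A minimal userspace sketch of that control flow follows; `find_apqn()` is a hypothetical stand-in for cca_findcard2(), and the flag values mirror the pkey UAPI constants but are illustrative here.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PKEY_FLAGS_MATCH_CUR_MKVP 0x00000002
#define PKEY_FLAGS_MATCH_ALT_MKVP 0x00000004

typedef unsigned char u8;

/* Toy stand-in for cca_findcard2(): succeed if the wanted current
 * or old mkvp matches the card's pattern byte-wise. */
static int find_apqn(const u8 *want_cur, const u8 *want_old,
                     const u8 card_cur[8], const u8 card_old[8])
{
	if (want_cur && memcmp(want_cur, card_cur, 8) == 0)
		return 0;
	if (want_old && memcmp(want_old, card_old, 8) == 0)
		return 0;
	return -19; /* -ENODEV */
}

/* Two-pass search as in cca_verifykey(): current MK first, old MK
 * only as a fallback, with the flag reporting which one matched. */
static int verify_match_flags(const u8 mkvp[8], const u8 card_cur[8],
                              const u8 card_old[8])
{
	if (find_apqn(mkvp, NULL, card_cur, card_old) == 0)
		return PKEY_FLAGS_MATCH_CUR_MKVP;
	if (find_apqn(NULL, mkvp, card_cur, card_old) == 0)
		return PKEY_FLAGS_MATCH_ALT_MKVP;
	return 0;
}

/* Sample 8-byte verification patterns for exercising the helper. */
static const u8 MK_A[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
static const u8 MK_B[8] = { 9, 9, 9, 9, 9, 9, 9, 9 };
static const u8 MK_C[8] = { 0 };
```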


@@ -854,13 +854,12 @@ static long _zcrypt_send_cprb(u32 xflags, struct ap_perms *perms,
struct ica_xcRB *xcrb)
{
bool userspace = xflags & ZCRYPT_XFLAG_USERSPACE;
struct zcrypt_card *zc, *pref_zc;
struct zcrypt_queue *zq, *pref_zq;
struct ap_message ap_msg;
unsigned int card, domain, func_code = 0;
unsigned int wgt = 0, pref_wgt = 0;
unsigned int func_code = 0;
unsigned short *domain, tdom;
struct zcrypt_queue *zq, *pref_zq;
struct zcrypt_card *zc, *pref_zc;
int cpen, qpen, qid = 0, rc;
struct ap_message ap_msg;
struct module *mod;
trace_s390_zcrypt_req(xcrb, TB_ZSECSENDCPRB);
@@ -878,10 +877,9 @@ static long _zcrypt_send_cprb(u32 xflags, struct ap_perms *perms,
print_hex_dump_debug("ccareq: ", DUMP_PREFIX_ADDRESS, 16, 1,
ap_msg.msg, ap_msg.len, false);
tdom = *domain;
if (perms != &ap_perms && tdom < AP_DOMAINS) {
if (perms != &ap_perms && domain < AP_DOMAINS) {
if (ap_msg.flags & AP_MSG_FLAG_ADMIN) {
if (!test_bit_inv(tdom, perms->adm)) {
if (!test_bit_inv(domain, perms->adm)) {
rc = -ENODEV;
goto out;
}
@@ -894,13 +892,14 @@ static long _zcrypt_send_cprb(u32 xflags, struct ap_perms *perms,
* If a valid target domain is set and this domain is NOT a usage
* domain but a control only domain, autoselect target domain.
*/
if (tdom < AP_DOMAINS &&
!ap_test_config_usage_domain(tdom) &&
ap_test_config_ctrl_domain(tdom))
tdom = AUTOSEL_DOM;
if (domain < AP_DOMAINS &&
!ap_test_config_usage_domain(domain) &&
ap_test_config_ctrl_domain(domain))
domain = AUTOSEL_DOM;
pref_zc = NULL;
pref_zq = NULL;
card = xcrb->user_defined;
spin_lock(&zcrypt_list_lock);
for_each_zcrypt_card(zc) {
/* Check for usable CCA card */
@@ -908,8 +907,7 @@ static long _zcrypt_send_cprb(u32 xflags, struct ap_perms *perms,
!zc->card->hwinfo.cca)
continue;
/* Check for user selected CCA card */
if (xcrb->user_defined != AUTOSELECT &&
xcrb->user_defined != zc->card->id)
if (card != AUTOSELECT && card != zc->card->id)
continue;
/* check if request size exceeds card max msg size */
if (ap_msg.len > zc->card->maxmsgsize)
@@ -929,8 +927,8 @@ static long _zcrypt_send_cprb(u32 xflags, struct ap_perms *perms,
/* check for device usable and eligible */
if (!zq->online || !zq->ops->send_cprb ||
!ap_queue_usable(zq->queue) ||
(tdom != AUTOSEL_DOM &&
tdom != AP_QID_QUEUE(zq->queue->qid)))
(domain != AUTOSEL_DOM &&
domain != AP_QID_QUEUE(zq->queue->qid)))
continue;
/* check if device node has admission for this queue */
if (!zcrypt_check_queue(perms,
@@ -953,16 +951,11 @@ static long _zcrypt_send_cprb(u32 xflags, struct ap_perms *perms,
if (!pref_zq) {
pr_debug("no match for address %02x.%04x => ENODEV\n",
xcrb->user_defined, *domain);
card, domain);
rc = -ENODEV;
goto out;
}
/* in case of auto select, provide the correct domain */
qid = pref_zq->queue->qid;
if (*domain == AUTOSEL_DOM)
*domain = AP_QID_QUEUE(qid);
rc = pref_zq->ops->send_cprb(userspace, pref_zq, xcrb, &ap_msg);
if (!rc) {
print_hex_dump_debug("ccarpl: ", DUMP_PREFIX_ADDRESS, 16, 1,
@@ -1220,7 +1213,6 @@ static long zcrypt_rng(char *buffer)
unsigned int wgt = 0, pref_wgt = 0;
unsigned int func_code = 0;
struct ap_message ap_msg;
unsigned int domain;
int qid = 0, rc = -ENODEV;
struct module *mod;
@@ -1229,7 +1221,7 @@ static long zcrypt_rng(char *buffer)
rc = ap_init_apmsg(&ap_msg, 0);
if (rc)
goto out;
rc = prep_rng_ap_msg(&ap_msg, &func_code, &domain);
rc = prep_rng_ap_msg(&ap_msg, &func_code, NULL);
if (rc)
goto out;
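The _zcrypt_send_cprb() hunks above simplify the domain handling: a valid target domain that is configured control-only gets replaced by AUTOSEL_DOM so a usable queue can still be picked. A minimal sketch of that rule, with two plain flags standing in for ap_test_config_usage_domain()/ap_test_config_ctrl_domain() (constants as in the AP bus code, shown here for illustration):

```c
#include <assert.h>

#define AP_DOMAINS  256     /* number of AP domain indices */
#define AUTOSEL_DOM 0xFFFF  /* "pick any usable domain" */

/* Sketch of the target-domain fixup in _zcrypt_send_cprb(): a valid
 * domain that is a control-only domain (control but not usage) is
 * replaced by AUTOSEL_DOM; anything else is passed through. */
static unsigned int resolve_target_domain(unsigned int dom,
					  int is_usage_dom, int is_ctrl_dom)
{
	if (dom < AP_DOMAINS && !is_usage_dom && is_ctrl_dom)
		return AUTOSEL_DOM;
	return dom;
}
```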


@@ -305,7 +305,7 @@ static inline void prep_xcrb(struct ica_xcRB *pxcrb,
struct CPRBX *prepcblk)
{
memset(pxcrb, 0, sizeof(*pxcrb));
pxcrb->agent_ID = 0x4341; /* 'CA' */
memcpy(&pxcrb->agent_ID, "CA", 2);
pxcrb->user_defined = (cardnr == 0xFFFF ? AUTOSELECT : cardnr);
pxcrb->request_control_blk_length =
preqcblk->cprb_len + preqcblk->req_parml;
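The agent_ID change above replaces the magic constant 0x4341 with memcpy of the characters "CA": copying bytes is endian-neutral, whereas the integer form only lays down 'C','A' in that order on a big-endian machine like s390. A small host-endianness probe (illustrative only) makes the difference visible:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Does the host store multi-byte integers big-endian? */
static int host_is_big_endian(void)
{
	const uint16_t probe = 0x0102;
	unsigned char b[2];

	memcpy(b, &probe, 2);
	return b[0] == 0x01;
}

/* Old style: store the two-byte agent id as an integer constant. */
static int int_store_is(uint16_t v, const char *expect)
{
	unsigned char b[2];

	memcpy(b, &v, 2);
	return memcmp(b, expect, 2) == 0;
}

/* New style: copy the characters directly - endian-neutral. */
static int byte_store_is(const char *src, const char *expect)
{
	unsigned char b[2];

	memcpy(b, src, 2);
	return memcmp(b, expect, 2) == 0;
}
```

The byte copy yields "CA" on every host; the 0x4341 store only does so where the machine is big-endian.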
@@ -1710,8 +1710,8 @@ int cca_get_info(u16 cardnr, u16 domain, struct cca_info *ci, u32 xflags)
EXPORT_SYMBOL(cca_get_info);
int cca_findcard2(u32 *apqns, u32 *nr_apqns, u16 cardnr, u16 domain,
int minhwtype, int mktype, u64 cur_mkvp, u64 old_mkvp,
u32 xflags)
int minhwtype, int mktype,
const u8 *ptr_cur_mkvp, const u8 *ptr_old_mkvp, u32 xflags)
{
struct zcrypt_device_status_ext *device_status;
int i, card, dom, curmatch, oldmatch;
@@ -1755,20 +1755,28 @@ int cca_findcard2(u32 *apqns, u32 *nr_apqns, u16 cardnr, u16 domain,
/* check min hardware type */
if (minhwtype > 0 && minhwtype > ci.hwtype)
continue;
if (cur_mkvp || old_mkvp) {
if (ptr_cur_mkvp || ptr_old_mkvp) {
/* check mkvps */
curmatch = oldmatch = 0;
if (mktype == AES_MK_SET) {
if (cur_mkvp && cur_mkvp == ci.cur_aes_mkvp)
if (ptr_cur_mkvp &&
!memcmp(ptr_cur_mkvp, ci.cur_aes_mkvp,
sizeof(ci.cur_aes_mkvp)))
curmatch = 1;
if (old_mkvp && ci.old_aes_mk_state == '2' &&
old_mkvp == ci.old_aes_mkvp)
if (ptr_old_mkvp &&
ci.old_aes_mk_state == '2' &&
!memcmp(ptr_old_mkvp, ci.old_aes_mkvp,
sizeof(ci.old_aes_mkvp)))
oldmatch = 1;
} else {
if (cur_mkvp && cur_mkvp == ci.cur_apka_mkvp)
if (ptr_cur_mkvp &&
!memcmp(ptr_cur_mkvp, ci.cur_apka_mkvp,
sizeof(ci.cur_apka_mkvp)))
curmatch = 1;
if (old_mkvp && ci.old_apka_mk_state == '2' &&
old_mkvp == ci.old_apka_mkvp)
if (ptr_old_mkvp &&
ci.old_apka_mk_state == '2' &&
!memcmp(ptr_old_mkvp, ci.old_apka_mkvp,
sizeof(ci.old_apka_mkvp)))
oldmatch = 1;
}
if (curmatch + oldmatch < 1)
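The reworked filter above compares verification patterns with memcmp() on raw byte arrays instead of u64 equality. A compact userspace sketch of the cur/old match rule, including the old-MK validity gate on state '2' (names are illustrative stand-ins for the cca_info fields):

```c
#include <assert.h>
#include <string.h>

typedef unsigned char u8;

/* Sketch of the match logic in cca_findcard2(): an APQN passes if no
 * MKVP filtering was requested, if its current MK verification
 * pattern matches byte-wise, or if an old MK is valid (state '2')
 * and matches. */
static int apqn_mkvp_match(const u8 *want_cur, const u8 *want_old,
			   const u8 cur_mkvp[8], const u8 old_mkvp[8],
			   char old_mk_state)
{
	int curmatch = 0, oldmatch = 0;

	if (!want_cur && !want_old)
		return 1; /* no MKVP filtering requested */
	if (want_cur && !memcmp(want_cur, cur_mkvp, 8))
		curmatch = 1;
	if (want_old && old_mk_state == '2' &&
	    !memcmp(want_old, old_mkvp, 8))
		oldmatch = 1;
	return curmatch + oldmatch >= 1;
}

/* Sample 8-byte verification patterns for the assertions below. */
static const u8 MKVP_X[8] = { 0x11, 0x22, 0x33, 0x44,
			      0x55, 0x66, 0x77, 0x88 };
static const u8 MKVP_Y[8] = { 0xde, 0xad, 0xbe, 0xef, 0, 0, 0, 0 };
```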


@@ -47,7 +47,7 @@ struct secaeskeytoken {
u8 res1[1];
u8 flag; /* key flags */
u8 res2[1];
u64 mkvp; /* master key verification pattern */
u8 mkvp[8]; /* master key verification pattern */
u8 key[32]; /* key value (encrypted) */
u8 cv[8]; /* control vector */
u16 bitsize; /* key bit size */
@@ -64,8 +64,8 @@ struct cipherkeytoken {
u8 res1[3];
u8 kms; /* key material state, 0x03 means wrapped with MK */
u8 kvpt; /* key verification pattern type, should be 0x01 */
u64 mkvp0; /* master key verification pattern, lo part */
u64 mkvp1; /* master key verification pattern, hi part (unused) */
u8 mkvp0[8]; /* master key verification pattern, lo part */
u8 mkvp1[8]; /* master key verification pattern, hi part (unused) */
u8 eskwm; /* encrypted section key wrapping method */
u8 hashalg; /* hash algorithm used for wrapping key */
u8 plfver; /* pay load format version */
@@ -113,7 +113,7 @@ struct eccprivkeytoken {
u8 ksrc; /* key source */
u16 pbitlen; /* length of prime p in bits */
u16 ibmadlen; /* IBM associated data length in bytes */
u64 mkvp; /* master key verification pattern */
u8 mkvp[8]; /* master key verification pattern */
u8 opk[48]; /* encrypted object protection key data */
u16 adatalen; /* associated data length in bytes */
u16 fseclen; /* formatted section length in bytes */
@@ -227,8 +227,8 @@ int cca_query_crypto_facility(u16 cardnr, u16 domain,
* If no apqn meeting the criteria is found, -ENODEV is returned.
*/
int cca_findcard2(u32 *apqns, u32 *nr_apqns, u16 cardnr, u16 domain,
int minhwtype, int mktype, u64 cur_mkvp, u64 old_mkvp,
u32 xflags);
int minhwtype, int mktype,
const u8 *cur_mkvp, const u8 *old_mkvp, u32 xflags);
#define AES_MK_SET 0
#define APKA_MK_SET 1
@@ -245,12 +245,12 @@ struct cca_info {
char new_asym_mk_state; /* '1' empty, '2' partially full, '3' full */
char cur_asym_mk_state; /* '1' invalid, '2' valid */
char old_asym_mk_state; /* '1' invalid, '2' valid */
u64 new_aes_mkvp; /* truncated sha256 of new aes master key */
u64 cur_aes_mkvp; /* truncated sha256 of current aes master key */
u64 old_aes_mkvp; /* truncated sha256 of old aes master key */
u64 new_apka_mkvp; /* truncated sha256 of new apka master key */
u64 cur_apka_mkvp; /* truncated sha256 of current apka mk */
u64 old_apka_mkvp; /* truncated sha256 of old apka mk */
u8 new_aes_mkvp[8]; /* truncated sha256 of new aes master key */
u8 cur_aes_mkvp[8]; /* truncated sha256 of current aes master key */
u8 old_aes_mkvp[8]; /* truncated sha256 of old aes master key */
u8 new_apka_mkvp[8]; /* truncated sha256 of new apka master key */
u8 cur_apka_mkvp[8]; /* truncated sha256 of current apka mk */
u8 old_apka_mkvp[8]; /* truncated sha256 of old apka mk */
u8 new_asym_mkvp[16]; /* verify pattern of new asym master key */
u8 cur_asym_mkvp[16]; /* verify pattern of current asym master key */
u8 old_asym_mkvp[16]; /* verify pattern of old asym master key */


@@ -102,9 +102,19 @@ static const struct attribute_group cca_card_attr_grp = {
.attrs = cca_card_attrs,
};
/*
* CCA queue additional device attributes
*/
/*
* Simple helper macro to format raw mkvp byte array into hex
*/
#define MKVP_TO_HEXBUF(mkvp, buf) \
do { \
BUILD_BUG_ON(sizeof(buf) <= 2 * sizeof(mkvp)); \
bin2hex(buf, mkvp, sizeof(mkvp)); \
buf[2 * sizeof(mkvp)] = '\0'; \
} while (0)
/*
* CCA queue additional device attributes
*/
static ssize_t cca_mkvps_show(struct device *dev,
struct device_attribute *attr,
char *buf)
@@ -113,6 +123,7 @@ static ssize_t cca_mkvps_show(struct device *dev,
static const char * const cao_state[] = { "invalid", "valid" };
struct zcrypt_queue *zq = dev_get_drvdata(dev);
struct cca_info ci;
char hexbuf[2 * 16 + 1];
int n = 0;
memset(&ci, 0, sizeof(ci));
@@ -121,71 +132,86 @@ static ssize_t cca_mkvps_show(struct device *dev,
AP_QID_QUEUE(zq->queue->qid),
&ci, 0);
if (ci.new_aes_mk_state >= '1' && ci.new_aes_mk_state <= '3')
n += sysfs_emit_at(buf, n, "AES NEW: %s 0x%016llx\n",
if (ci.new_aes_mk_state >= '1' && ci.new_aes_mk_state <= '3') {
MKVP_TO_HEXBUF(ci.new_aes_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "AES NEW: %s 0x%s\n",
new_state[ci.new_aes_mk_state - '1'],
ci.new_aes_mkvp);
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "AES NEW: - -\n");
}
if (ci.cur_aes_mk_state >= '1' && ci.cur_aes_mk_state <= '2')
n += sysfs_emit_at(buf, n, "AES CUR: %s 0x%016llx\n",
if (ci.cur_aes_mk_state >= '1' && ci.cur_aes_mk_state <= '2') {
MKVP_TO_HEXBUF(ci.cur_aes_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "AES CUR: %s 0x%s\n",
cao_state[ci.cur_aes_mk_state - '1'],
ci.cur_aes_mkvp);
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "AES CUR: - -\n");
}
if (ci.old_aes_mk_state >= '1' && ci.old_aes_mk_state <= '2')
n += sysfs_emit_at(buf, n, "AES OLD: %s 0x%016llx\n",
if (ci.old_aes_mk_state >= '1' && ci.old_aes_mk_state <= '2') {
MKVP_TO_HEXBUF(ci.old_aes_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "AES OLD: %s 0x%s\n",
cao_state[ci.old_aes_mk_state - '1'],
ci.old_aes_mkvp);
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "AES OLD: - -\n");
}
if (ci.new_apka_mk_state >= '1' && ci.new_apka_mk_state <= '3')
n += sysfs_emit_at(buf, n, "APKA NEW: %s 0x%016llx\n",
if (ci.new_apka_mk_state >= '1' && ci.new_apka_mk_state <= '3') {
MKVP_TO_HEXBUF(ci.new_apka_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "APKA NEW: %s 0x%s\n",
new_state[ci.new_apka_mk_state - '1'],
ci.new_apka_mkvp);
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "APKA NEW: - -\n");
}
if (ci.cur_apka_mk_state >= '1' && ci.cur_apka_mk_state <= '2')
n += sysfs_emit_at(buf, n, "APKA CUR: %s 0x%016llx\n",
if (ci.cur_apka_mk_state >= '1' && ci.cur_apka_mk_state <= '2') {
MKVP_TO_HEXBUF(ci.cur_apka_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "APKA CUR: %s 0x%s\n",
cao_state[ci.cur_apka_mk_state - '1'],
ci.cur_apka_mkvp);
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "APKA CUR: - -\n");
}
if (ci.old_apka_mk_state >= '1' && ci.old_apka_mk_state <= '2')
n += sysfs_emit_at(buf, n, "APKA OLD: %s 0x%016llx\n",
if (ci.old_apka_mk_state >= '1' && ci.old_apka_mk_state <= '2') {
MKVP_TO_HEXBUF(ci.old_apka_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "APKA OLD: %s 0x%s\n",
cao_state[ci.old_apka_mk_state - '1'],
ci.old_apka_mkvp);
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "APKA OLD: - -\n");
}
if (ci.new_asym_mk_state >= '1' && ci.new_asym_mk_state <= '3')
n += sysfs_emit_at(buf, n, "ASYM NEW: %s 0x%016llx%016llx\n",
if (ci.new_asym_mk_state >= '1' && ci.new_asym_mk_state <= '3') {
MKVP_TO_HEXBUF(ci.new_asym_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "ASYM NEW: %s 0x%s\n",
new_state[ci.new_asym_mk_state - '1'],
*((u64 *)(ci.new_asym_mkvp)),
*((u64 *)(ci.new_asym_mkvp + sizeof(u64))));
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "ASYM NEW: - -\n");
}
if (ci.cur_asym_mk_state >= '1' && ci.cur_asym_mk_state <= '2')
n += sysfs_emit_at(buf, n, "ASYM CUR: %s 0x%016llx%016llx\n",
if (ci.cur_asym_mk_state >= '1' && ci.cur_asym_mk_state <= '2') {
MKVP_TO_HEXBUF(ci.cur_asym_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "ASYM CUR: %s 0x%s\n",
cao_state[ci.cur_asym_mk_state - '1'],
*((u64 *)(ci.cur_asym_mkvp)),
*((u64 *)(ci.cur_asym_mkvp + sizeof(u64))));
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "ASYM CUR: - -\n");
}
if (ci.old_asym_mk_state >= '1' && ci.old_asym_mk_state <= '2')
n += sysfs_emit_at(buf, n, "ASYM OLD: %s 0x%016llx%016llx\n",
if (ci.old_asym_mk_state >= '1' && ci.old_asym_mk_state <= '2') {
MKVP_TO_HEXBUF(ci.old_asym_mkvp, hexbuf);
n += sysfs_emit_at(buf, n, "ASYM OLD: %s 0x%s\n",
cao_state[ci.old_asym_mk_state - '1'],
*((u64 *)(ci.old_asym_mkvp)),
*((u64 *)(ci.old_asym_mkvp + sizeof(u64))));
else
hexbuf);
} else {
n += sysfs_emit_at(buf, n, "ASYM OLD: - -\n");
}
return n;
}
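The sysfs show routine above now formats each mkvp byte array through the new MKVP_TO_HEXBUF() helper instead of printing u64 values with %016llx. A userspace sketch of the same idea, with sprintf standing in for the kernel's bin2hex():

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef unsigned char u8;

/* Userspace sketch of the driver's MKVP_TO_HEXBUF(): render a raw
 * mkvp byte array as a NUL-terminated lowercase hex string. Only
 * works on real arrays, since sizeof(mkvp) must be the byte count. */
#define MKVP_TO_HEXBUF(mkvp, buf)                                   \
	do {                                                        \
		for (size_t i__ = 0; i__ < sizeof(mkvp); i__++)     \
			sprintf((buf) + 2 * i__, "%02x", (mkvp)[i__]); \
		(buf)[2 * sizeof(mkvp)] = '\0';                     \
	} while (0)

/* Format an 8-byte pattern and compare against the expected text. */
static int hexbuf_demo(void)
{
	const u8 mkvp[8] = { 0x12, 0x34, 0x56, 0x78,
			     0x9a, 0xbc, 0xde, 0xf0 };
	char buf[2 * 16 + 1]; /* sized like the show routine's buffer */

	MKVP_TO_HEXBUF(mkvp, buf);
	return strcmp(buf, "123456789abcdef0") == 0;
}
```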


@@ -78,9 +78,13 @@ struct error_hdr {
static inline int convert_error(struct zcrypt_queue *zq,
struct ap_message *reply)
{
struct error_hdr *ehdr = reply->msg;
int card = AP_QID_CARD(zq->queue->qid);
int queue = AP_QID_QUEUE(zq->queue->qid);
int card = AP_QID_CARD(zq->queue->qid);
struct error_hdr *ehdr = reply->msg;
struct {
struct type86_hdr hdr;
struct type86_fmt2_ext fmt2;
} __packed * t86hdr = reply->msg;
switch (ehdr->reply_code) {
case REP82_ERROR_INVALID_MSG_LEN: /* 0x23 */
@@ -100,19 +104,12 @@ static inline int convert_error(struct zcrypt_queue *zq,
/* RY indicates malformed request */
if (ehdr->reply_code == REP82_ERROR_FILTERED_BY_HYPERVISOR &&
ehdr->type == TYPE86_RSP_CODE) {
struct {
struct type86_hdr hdr;
struct type86_fmt2_ext fmt2;
} __packed * head = reply->msg;
unsigned int apfs = *((u32 *)head->fmt2.apfs);
ZCRYPT_DBF_WARN("%s dev=%02x.%04x RY=0x%02x apfs=0x%x => rc=EINVAL\n",
__func__, card, queue,
ehdr->reply_code, apfs);
ehdr->reply_code, t86hdr->fmt2.apfs);
} else {
ZCRYPT_DBF_WARN("%s dev=%02x.%04x RY=0x%02x => rc=EINVAL\n",
__func__, card, queue,
ehdr->reply_code);
__func__, card, queue, ehdr->reply_code);
}
return -EINVAL;
case REP82_ERROR_MACHINE_FAILURE: /* 0x10 */
@@ -125,15 +122,10 @@ static inline int convert_error(struct zcrypt_queue *zq,
/* For type 86 response show the apfs value (failure reason) */
if (ehdr->reply_code == REP82_ERROR_TRANSPORT_FAIL &&
ehdr->type == TYPE86_RSP_CODE) {
struct {
struct type86_hdr hdr;
struct type86_fmt2_ext fmt2;
} __packed * head = reply->msg;
unsigned int apfs = *((u32 *)head->fmt2.apfs);
ZCRYPT_DBF_WARN(
"%s dev=%02x.%04x RY=0x%02x apfs=0x%x => bus rescan, rc=EAGAIN\n",
__func__, card, queue, ehdr->reply_code, apfs);
__func__, card, queue, ehdr->reply_code,
t86hdr->fmt2.apfs);
} else {
ZCRYPT_DBF_WARN("%s dev=%02x.%04x RY=0x%02x => bus rescan, rc=EAGAIN\n",
__func__, card, queue,


@@ -65,7 +65,7 @@ struct function_and_rules_block {
static const struct CPRBX static_cprbx = {
.cprb_len = 0x00DC,
.cprb_ver_id = 0x02,
.func_id = {0x54, 0x32},
.func_id = {'T', '2'},
};
int speed_idx_cca(int req_type)
@@ -328,7 +328,7 @@ struct type86_fmt2_msg {
static int xcrb_msg_to_type6cprb_msgx(bool userspace, struct ap_message *ap_msg,
struct ica_xcRB *xcrb,
unsigned int *fcode,
unsigned short **dom)
unsigned int *domain)
{
static struct type6_hdr static_type6_hdrX = {
.type = 0x06,
@@ -412,7 +412,8 @@ static int xcrb_msg_to_type6cprb_msgx(bool userspace, struct ap_message *ap_msg,
sizeof(msg->hdr.function_code));
*fcode = (msg->hdr.function_code[0] << 8) | msg->hdr.function_code[1];
*dom = (unsigned short *)&msg->cprbx.domain;
if (domain)
*domain = msg->cprbx.domain;
/* check subfunction, US and AU need special flag with NQAP */
if (memcmp(function_code, "US", 2) == 0 ||
@@ -454,8 +455,7 @@ static int xcrb_msg_to_type6_ep11cprb_msgx(bool userspace, struct ap_message *ap
.type = 0x06,
.rqid = {0x00, 0x01},
.function_code = {0x00, 0x00},
.agent_id[0] = 0x58, /* {'X'} */
.agent_id[1] = 0x43, /* {'C'} */
.agent_id = {'X', 'C'},
.offset1 = 0x00000058,
};
@@ -529,7 +529,8 @@ static int xcrb_msg_to_type6_ep11cprb_msgx(bool userspace, struct ap_message *ap
else
ap_msg->flags |= AP_MSG_FLAG_USAGE;
*domain = msg->cprbx.target_id;
if (domain)
*domain = msg->cprbx.target_id;
return 0;
}
@@ -751,7 +752,7 @@ static int convert_response_xcrb(bool userspace, struct zcrypt_queue *zq,
return convert_error(zq, reply);
case TYPE86_RSP_CODE:
if (msg->hdr.reply_code) {
memcpy(&xcrb->status, msg->fmt2.apfs, sizeof(u32));
xcrb->status = msg->fmt2.apfs;
return convert_error(zq, reply);
}
if (msg->cprbx.cprb_ver_id == 0x02)
@@ -1052,7 +1053,7 @@ static long zcrypt_msgtype6_modexpo_crt(struct zcrypt_queue *zq,
*/
int prep_cca_ap_msg(bool userspace, struct ica_xcRB *xcrb,
struct ap_message *ap_msg,
unsigned int *func_code, unsigned short **dom)
unsigned int *func_code, unsigned int *domain)
{
struct ap_response_type *resp_type = &ap_msg->response;
@@ -1060,7 +1061,8 @@ int prep_cca_ap_msg(bool userspace, struct ica_xcRB *xcrb,
ap_msg->psmid = (((unsigned long)current->pid) << 32) +
atomic_inc_return(&zcrypt_step);
resp_type->type = CEXXC_RESPONSE_TYPE_XCRB;
return xcrb_msg_to_type6cprb_msgx(userspace, ap_msg, xcrb, func_code, dom);
return xcrb_msg_to_type6cprb_msgx(userspace, ap_msg,
xcrb, func_code, domain);
}
/*
@@ -1105,6 +1107,9 @@ static long zcrypt_msgtype6_send_cprb(bool userspace, struct zcrypt_queue *zq,
msg->hdr.fromcardlen1 -= delta;
}
/* update domain field within the CPRB struct */
msg->cprbx.domain = AP_QID_QUEUE(zq->queue->qid);
init_completion(&resp_type->work);
rc = ap_queue_message(zq->queue, ap_msg);
if (rc)
@@ -1210,8 +1215,7 @@ static long zcrypt_msgtype6_send_ep11_cprb(bool userspace, struct zcrypt_queue *
lfmt = 1; /* length format #1 */
}
payload_hdr = (struct pld_hdr *)((&msg->pld_lenfmt) + lfmt);
payload_hdr->dom_val = (unsigned int)
AP_QID_QUEUE(zq->queue->qid);
payload_hdr->dom_val = AP_QID_QUEUE(zq->queue->qid);
}
/*
@@ -1246,6 +1250,56 @@ static long zcrypt_msgtype6_send_ep11_cprb(bool userspace, struct zcrypt_queue *
return rc;
}
/*
* Prepare a type6 CPRB message for random number generation
*
* @ap_dev: AP device pointer
* @ap_msg: pointer to AP message
*/
static inline void rng_type6cprb_msgx(struct ap_message *ap_msg,
unsigned int random_number_length,
unsigned int *domain)
{
struct {
struct type6_hdr hdr;
struct CPRBX cprbx;
char function_code[2];
short int rule_length;
char rule[8];
short int verb_length;
short int key_length;
} __packed * msg = ap_msg->msg;
static struct type6_hdr static_type6_hdrX = {
.type = 0x06,
.offset1 = 0x00000058,
.agent_id = {'C', 'A'},
.function_code = {'R', 'L'},
.tocardlen1 = sizeof(*msg) - sizeof(msg->hdr),
.fromcardlen1 = sizeof(*msg) - sizeof(msg->hdr),
};
static struct CPRBX local_cprbx = {
.cprb_len = 0x00dc,
.cprb_ver_id = 0x02,
.func_id = {'T', '2'},
.req_parml = sizeof(*msg) - sizeof(msg->hdr) -
sizeof(msg->cprbx),
.rpl_msgbl = sizeof(*msg) - sizeof(msg->hdr),
};
msg->hdr = static_type6_hdrX;
msg->hdr.fromcardlen2 = random_number_length;
msg->cprbx = local_cprbx;
msg->cprbx.rpl_datal = random_number_length;
memcpy(msg->function_code, msg->hdr.function_code, 0x02);
msg->rule_length = 0x0a;
memcpy(msg->rule, "RANDOM ", 8);
msg->verb_length = 0x02;
msg->key_length = 0x02;
ap_msg->len = sizeof(*msg);
if (domain)
*domain = msg->cprbx.domain;
}
/*
* Prepare a CEXXC get random request ap message.
* This function assumes that ap_msg has been initialized with


@@ -34,7 +34,7 @@ struct type6_hdr {
unsigned char right[4]; /* 0x00000000 */
unsigned char reserved3[2]; /* 0x0000 */
unsigned char reserved4[2]; /* 0x0000 */
unsigned char apfs[4]; /* 0x00000000 */
unsigned int apfs; /* 0x00000000 */
unsigned int offset1; /* 0x00000058 (offset to CPRB) */
unsigned int offset2; /* 0x00000000 */
unsigned int offset3; /* 0x00000000 */
@@ -83,7 +83,7 @@ struct type86_hdr {
struct type86_fmt2_ext {
unsigned char reserved[4]; /* 0x00000000 */
unsigned char apfs[4]; /* final status */
unsigned int apfs; /* final status */
unsigned int count1; /* length of CPRB + parameters */
unsigned int offset1; /* offset to CPRB */
unsigned int count2; /* 0x00000000 */
@@ -96,7 +96,7 @@ struct type86_fmt2_ext {
int prep_cca_ap_msg(bool userspace, struct ica_xcRB *xcrb,
struct ap_message *ap_msg,
unsigned int *fc, unsigned short **dom);
unsigned int *fc, unsigned int *dom);
int prep_ep11_ap_msg(bool userspace, struct ep11_urb *xcrb,
struct ap_message *ap_msg,
unsigned int *fc, unsigned int *dom);
@@ -110,55 +110,6 @@ int prep_rng_ap_msg(struct ap_message *ap_msg,
int speed_idx_cca(int);
int speed_idx_ep11(int);
/**
* Prepare a type6 CPRB message for random number generation
*
* @ap_dev: AP device pointer
* @ap_msg: pointer to AP message
*/
static inline void rng_type6cprb_msgx(struct ap_message *ap_msg,
unsigned int random_number_length,
unsigned int *domain)
{
struct {
struct type6_hdr hdr;
struct CPRBX cprbx;
char function_code[2];
short int rule_length;
char rule[8];
short int verb_length;
short int key_length;
} __packed * msg = ap_msg->msg;
static struct type6_hdr static_type6_hdrX = {
.type = 0x06,
.offset1 = 0x00000058,
.agent_id = {'C', 'A'},
.function_code = {'R', 'L'},
.tocardlen1 = sizeof(*msg) - sizeof(msg->hdr),
.fromcardlen1 = sizeof(*msg) - sizeof(msg->hdr),
};
static struct CPRBX local_cprbx = {
.cprb_len = 0x00dc,
.cprb_ver_id = 0x02,
.func_id = {0x54, 0x32},
.req_parml = sizeof(*msg) - sizeof(msg->hdr) -
sizeof(msg->cprbx),
.rpl_msgbl = sizeof(*msg) - sizeof(msg->hdr),
};
msg->hdr = static_type6_hdrX;
msg->hdr.fromcardlen2 = random_number_length;
msg->cprbx = local_cprbx;
msg->cprbx.rpl_datal = random_number_length;
memcpy(msg->function_code, msg->hdr.function_code, 0x02);
msg->rule_length = 0x0a;
memcpy(msg->rule, "RANDOM ", 8);
msg->verb_length = 0x02;
msg->key_length = 0x02;
ap_msg->len = sizeof(*msg);
*domain = (unsigned short)msg->cprbx.domain;
}
void zcrypt_msgtype6_init(void);
void zcrypt_msgtype6_exit(void);


@@ -151,9 +151,8 @@ void __page_table_check_pte_clear(struct mm_struct *mm, unsigned long addr,
if (&init_mm == mm)
return;
if (pte_user_accessible_page(pte, addr)) {
if (pte_user_accessible_page(mm, addr, pte))
page_table_check_clear(pte_pfn(pte), PAGE_SIZE >> PAGE_SHIFT);
}
}
EXPORT_SYMBOL(__page_table_check_pte_clear);
@@ -163,9 +162,8 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
if (&init_mm == mm)
return;
if (pmd_user_accessible_page(pmd, addr)) {
if (pmd_user_accessible_page(mm, addr, pmd))
page_table_check_clear(pmd_pfn(pmd), PMD_SIZE >> PAGE_SHIFT);
}
}
EXPORT_SYMBOL(__page_table_check_pmd_clear);
@@ -175,9 +173,8 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
if (&init_mm == mm)
return;
if (pud_user_accessible_page(pud, addr)) {
if (pud_user_accessible_page(mm, addr, pud))
page_table_check_clear(pud_pfn(pud), PUD_SIZE >> PAGE_SHIFT);
}
}
EXPORT_SYMBOL(__page_table_check_pud_clear);
@@ -211,7 +208,7 @@ void __page_table_check_ptes_set(struct mm_struct *mm, unsigned long addr,
for (i = 0; i < nr; i++)
__page_table_check_pte_clear(mm, addr + PAGE_SIZE * i, ptep_get(ptep + i));
if (pte_user_accessible_page(pte, addr))
if (pte_user_accessible_page(mm, addr, pte))
page_table_check_set(pte_pfn(pte), nr, pte_write(pte));
}
EXPORT_SYMBOL(__page_table_check_ptes_set);
@@ -241,7 +238,7 @@ void __page_table_check_pmds_set(struct mm_struct *mm, unsigned long addr,
for (i = 0; i < nr; i++)
__page_table_check_pmd_clear(mm, addr + PMD_SIZE * i, *(pmdp + i));
if (pmd_user_accessible_page(pmd, addr))
if (pmd_user_accessible_page(mm, addr, pmd))
page_table_check_set(pmd_pfn(pmd), stride * nr, pmd_write(pmd));
}
EXPORT_SYMBOL(__page_table_check_pmds_set);
@@ -257,7 +254,7 @@ void __page_table_check_puds_set(struct mm_struct *mm, unsigned long addr,
for (i = 0; i < nr; i++)
__page_table_check_pud_clear(mm, addr + PUD_SIZE * i, *(pudp + i));
if (pud_user_accessible_page(pud, addr))
if (pud_user_accessible_page(mm, addr, pud))
page_table_check_set(pud_pfn(pud), stride * nr, pud_write(pud));
}
EXPORT_SYMBOL(__page_table_check_puds_set);
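The hunks above thread the mm_struct into the pxx_user_accessible_page() callbacks because, as the merge notes say, s390 cannot tell user from kernel PTEs by the PTE bits alone. A toy model of that idea — the rule "any mm other than init_mm is a user mm" is an illustrative stand-in, not the kernel's actual implementation:

```c
#include <assert.h>

/* Minimal stand-ins for the kernel types involved. */
struct mm_struct { int dummy; };

static struct mm_struct init_mm;  /* kernel address space */
static struct mm_struct user_mm;  /* some user process */

/* Toy classifier: kernel page tables (init_mm) are never
 * user-accessible; for any other mm, a present PTE is. */
static int pte_user_accessible_page(const struct mm_struct *mm,
				    unsigned long addr, int pte_present)
{
	(void)addr;
	if (mm == &init_mm)
		return 0;
	return pte_present;
}
```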