ASoC: amd: Move to GPIO descriptors

Linus Walleij <linusw@kernel.org> says:

After a quick look and test-compile I can determine that
all of these drivers include <linux/gpio.h> for no reason
whatsoever, so fixing it is low hanging fruit.

Link: https://patch.msgid.link/20260314-asoc-amd-v1-0-31afed06e022@kernel.org
Mark Brown 2026-03-16 01:11:23 +00:00
commit 3e9cda2f4a
541 changed files with 6370 additions and 4017 deletions


@ -498,7 +498,8 @@ Lior David <quic_liord@quicinc.com> <liord@codeaurora.org>
Loic Poulain <loic.poulain@oss.qualcomm.com> <loic.poulain@linaro.org>
Loic Poulain <loic.poulain@oss.qualcomm.com> <loic.poulain@intel.com>
Lorenzo Pieralisi <lpieralisi@kernel.org> <lorenzo.pieralisi@arm.com>
Lorenzo Stoakes <lorenzo.stoakes@oracle.com> <lstoakes@gmail.com>
Lorenzo Stoakes <ljs@kernel.org> <lstoakes@gmail.com>
Lorenzo Stoakes <ljs@kernel.org> <lorenzo.stoakes@oracle.com>
Luca Ceresoli <luca.ceresoli@bootlin.com> <luca@lucaceresoli.net>
Luca Weiss <luca@lucaweiss.eu> <luca@z3ntu.xyz>
Lucas De Marchi <demarchi@kernel.org> <lucas.demarchi@intel.com>


@ -151,11 +151,11 @@ Description:
The algorithm_params file is write-only and is used to setup
compression algorithm parameters.
What: /sys/block/zram<id>/writeback_compressed
What: /sys/block/zram<id>/compressed_writeback
Date: December 2025
Contact: Richard Chang <richardycc@google.com>
Description:
The writeback_compressed device attribute toggles compressed
The compressed_writeback device attribute toggles compressed
writeback feature.
What: /sys/block/zram<id>/writeback_batch_size


@ -216,7 +216,7 @@ writeback_limit WO specifies the maximum amount of write IO zram
writeback_limit_enable RW show and set writeback_limit feature
writeback_batch_size RW show and set maximum number of in-flight
writeback operations
writeback_compressed RW show and set compressed writeback feature
compressed_writeback RW show and set compressed writeback feature
comp_algorithm RW show and change the compression algorithm
algorithm_params WO setup compression algorithm parameters
compact WO trigger memory compaction
@ -439,11 +439,11 @@ budget in next setting is user's job.
By default zram stores written back pages in decompressed (raw) form, which
means that writeback operation involves decompression of the page before
writing it to the backing device. This behavior can be changed by enabling
`writeback_compressed` feature, which causes zram to write compressed pages
`compressed_writeback` feature, which causes zram to write compressed pages
to the backing device, thus avoiding decompression overhead. To enable
this feature, execute::
$ echo yes > /sys/block/zramX/writeback_compressed
$ echo yes > /sys/block/zramX/compressed_writeback
Note that this feature should be configured before the `zramX` device is
initialized.
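As an aside (not part of this patch series), the same toggle can be driven from a small C program; a minimal sketch, assuming the device is zram0 and that it has not yet been initialized::

/*
 * Hypothetical helper, not from the kernel tree: write "yes" to the
 * compressed_writeback attribute documented above. The device name
 * (zram0) and the lack of error recovery are assumptions for brevity.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *attr = "/sys/block/zram0/compressed_writeback";
        int fd = open(attr, O_WRONLY);

        if (fd < 0) {
                perror(attr);
                return 1;
        }
        if (write(fd, "yes", strlen("yes")) != (ssize_t)strlen("yes"))
                perror("write");
        close(fd);
        return 0;
}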


@ -8196,6 +8196,9 @@ Kernel parameters
p = USB_QUIRK_SHORT_SET_ADDRESS_REQ_TIMEOUT
(Reduce timeout of the SET_ADDRESS
request from 5000 ms to 500 ms);
q = USB_QUIRK_FORCE_ONE_CONFIG (Device
claims zero configurations,
forcing to 1);
Example: quirks=0781:5580:bk,0a5c:5834:gij
usbhid.mousepoll=


@ -253,7 +253,6 @@ allOf:
enum:
# these platforms support 2 streams MST on some interfaces,
# others are SST only
- qcom,glymur-dp
- qcom,sc8280xp-dp
- qcom,x1e80100-dp
then:
@ -310,6 +309,26 @@ allOf:
minItems: 6
maxItems: 8
- if:
properties:
compatible:
contains:
enum:
# these platforms support 2 streams MST on some interfaces,
# others are SST only, but all controllers have 4 ports
- qcom,glymur-dp
then:
properties:
reg:
minItems: 9
maxItems: 9
clocks:
minItems: 5
maxItems: 6
clock-names:
minItems: 5
maxItems: 6
unevaluatedProperties: false
examples:


@ -176,13 +176,17 @@ examples:
};
};
displayport-controller@ae90000 {
displayport-controller@af54000 {
compatible = "qcom,glymur-dp";
reg = <0xae90000 0x200>,
<0xae90200 0x200>,
<0xae90400 0x600>,
<0xae91000 0x400>,
<0xae91400 0x400>;
reg = <0xaf54000 0x200>,
<0xaf54200 0x200>,
<0xaf55000 0xc00>,
<0xaf56000 0x400>,
<0xaf57000 0x400>,
<0xaf58000 0x400>,
<0xaf59000 0x400>,
<0xaf5a000 0x600>,
<0xaf5b000 0x600>;
interrupt-parent = <&mdss>;
interrupts = <12>;


@ -10,7 +10,7 @@ maintainers:
- Krzysztof Kozlowski <krzk@kernel.org>
description:
SM8650 MSM Mobile Display Subsystem(MDSS), which encapsulates sub-blocks like
SM8750 MSM Mobile Display Subsystem(MDSS), which encapsulates sub-blocks like
DPU display controller, DSI and DP interfaces etc.
$ref: /schemas/display/msm/mdss-common.yaml#


@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Synopsys DesignWare APB I2C Controller
maintainers:
- Jarkko Nikula <jarkko.nikula@linux.intel.com>
- Mika Westerberg <mika.westerberg@linux.intel.com>
allOf:
- $ref: /schemas/i2c/i2c-controller.yaml#


@ -0,0 +1,93 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/powerpc/fsl/fsl,mpc83xx.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale PowerQUICC II Pro (MPC83xx) platforms
maintainers:
- J. Neuschäfer <j.ne@posteo.net>
properties:
$nodename:
const: '/'
compatible:
oneOf:
- description: MPC83xx Reference Design Boards
items:
- enum:
- fsl,mpc8308rdb
- fsl,mpc8315erdb
- fsl,mpc8360rdk
- fsl,mpc8377rdb
- fsl,mpc8377wlan
- fsl,mpc8378rdb
- fsl,mpc8379rdb
- description: MPC8313E Reference Design Board
items:
- const: MPC8313ERDB
- const: MPC831xRDB
- const: MPC83xxRDB
- description: MPC8323E Reference Design Board
items:
- const: MPC8323ERDB
- const: MPC832xRDB
- const: MPC83xxRDB
- description: MPC8349E-mITX(-GP) Reference Design Platform
items:
- enum:
- MPC8349EMITX
- MPC8349EMITXGP
- const: MPC834xMITX
- const: MPC83xxMITX
- description: Keymile KMETER1 board
const: keymile,KMETER1
- description: MPC8308 P1M board
const: denx,mpc8308_p1m
patternProperties:
"^soc@.*$":
type: object
properties:
compatible:
oneOf:
- items:
- enum:
- fsl,mpc8315-immr
- fsl,mpc8308-immr
- const: simple-bus
- items:
- const: fsl,mpc8360-immr
- const: fsl,immr
- const: fsl,soc
- const: simple-bus
- const: simple-bus
additionalProperties: true
examples:
- |
/ {
compatible = "fsl,mpc8315erdb";
model = "MPC8315E-RDB";
#address-cells = <1>;
#size-cells = <1>;
soc@e0000000 {
compatible = "fsl,mpc8315-immr", "simple-bus";
reg = <0xe0000000 0x00000200>;
#address-cells = <1>;
#size-cells = <1>;
device_type = "soc";
ranges = <0 0xe0000000 0x00100000>;
bus-frequency = <0>;
};
};
...


@ -6,9 +6,6 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Allwinner A31 SPI Controller
allOf:
- $ref: spi-controller.yaml
maintainers:
- Chen-Yu Tsai <wens@csie.org>
- Maxime Ripard <mripard@kernel.org>
@ -82,11 +79,11 @@ patternProperties:
spi-rx-bus-width:
items:
- const: 1
enum: [0, 1, 2, 4]
spi-tx-bus-width:
items:
- const: 1
enum: [0, 1, 2, 4]
required:
- compatible
@ -95,6 +92,28 @@ required:
- clocks
- clock-names
allOf:
- $ref: spi-controller.yaml
- if:
not:
properties:
compatible:
contains:
enum:
- allwinner,sun50i-r329-spi
- allwinner,sun55i-a523-spi
then:
patternProperties:
"^.*@[0-9a-f]+":
properties:
spi-rx-bus-width:
items:
enum: [0, 1]
spi-tx-bus-width:
items:
enum: [0, 1]
unevaluatedProperties: false
examples:


@ -43,7 +43,6 @@ options should be enabled to use sched_ext:
CONFIG_DEBUG_INFO_BTF=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_PAHOLE_HAS_BTF_TAG=y
sched_ext is used only when the BPF scheduler is loaded and running.
@ -58,7 +57,8 @@ in ``ops->flags``, all ``SCHED_NORMAL``, ``SCHED_BATCH``, ``SCHED_IDLE``, and
However, when the BPF scheduler is loaded and ``SCX_OPS_SWITCH_PARTIAL`` is
set in ``ops->flags``, only tasks with the ``SCHED_EXT`` policy are scheduled
by sched_ext, while tasks with ``SCHED_NORMAL``, ``SCHED_BATCH`` and
``SCHED_IDLE`` policies are scheduled by the fair-class scheduler.
``SCHED_IDLE`` policies are scheduled by the fair-class scheduler which has
higher sched_class precedence than ``SCHED_EXT``.
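As an illustration (not part of this patch), a task can opt into sched_ext under ``SCX_OPS_SWITCH_PARTIAL`` by requesting the ``SCHED_EXT`` policy. A minimal sketch, assuming the C library headers do not yet define ``SCHED_EXT`` and that its UAPI value is 7::

#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#ifndef SCHED_EXT
#define SCHED_EXT 7     /* assumed value; see include/uapi/linux/sched.h */
#endif

int main(void)
{
        /* Non-realtime policies, including SCHED_EXT, take priority 0. */
        struct sched_param param = { .sched_priority = 0 };

        if (sched_setscheduler(0, SCHED_EXT, &param)) {
                perror("sched_setscheduler(SCHED_EXT)");
                return 1;
        }
        /* From here on, this task is handled by the loaded BPF scheduler. */
        pause();
        return 0;
}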
Terminating the sched_ext scheduler program, triggering `SysRq-S`, or
detection of any internal error including stalled runnable tasks aborts the
@ -345,6 +345,8 @@ Where to Look
The functions prefixed with ``scx_bpf_`` can be called from the BPF
scheduler.
* ``kernel/sched/ext_idle.c`` contains the built-in idle CPU selection policy.
* ``tools/sched_ext/`` hosts example BPF scheduler implementations.
* ``scx_simple[.bpf].c``: Minimal global FIFO scheduler example using a
@ -353,13 +355,35 @@ Where to Look
* ``scx_qmap[.bpf].c``: A multi-level FIFO scheduler supporting five
levels of priority implemented with ``BPF_MAP_TYPE_QUEUE``.
* ``scx_central[.bpf].c``: A central FIFO scheduler where all scheduling
decisions are made on one CPU, demonstrating ``LOCAL_ON`` dispatching,
tickless operation, and kthread preemption.
* ``scx_cpu0[.bpf].c``: A scheduler that queues all tasks to a shared DSQ
and only dispatches them on CPU0 in FIFO order. Useful for testing bypass
behavior.
* ``scx_flatcg[.bpf].c``: A flattened cgroup hierarchy scheduler
implementing hierarchical weight-based cgroup CPU control by compounding
each cgroup's share at every level into a single flat scheduling layer.
* ``scx_pair[.bpf].c``: A core-scheduling example that always makes
sibling CPU pairs execute tasks from the same CPU cgroup.
* ``scx_sdt[.bpf].c``: A variation of ``scx_simple`` demonstrating BPF
arena memory management for per-task data.
* ``scx_userland[.bpf].c``: A minimal scheduler demonstrating user space
scheduling. Tasks with CPU affinity are direct-dispatched in FIFO order;
all others are scheduled in user space by a simple vruntime scheduler.
ABI Instability
===============
The APIs provided by sched_ext to BPF scheduler programs have no stability
guarantees. This includes the ops table callbacks and constants defined in
``include/linux/sched/ext.h``, as well as the ``scx_bpf_`` kfuncs defined in
``kernel/sched/ext.c``.
``kernel/sched/ext.c`` and ``kernel/sched/ext_idle.c``.
While we will attempt to provide a relatively stable API surface when
possible, they are subject to change without warning between kernel


@ -8435,115 +8435,123 @@ KVM_CHECK_EXTENSION.
The valid bits in cap.args[0] are:
=================================== ============================================
KVM_X86_QUIRK_LINT0_REENABLED By default, the reset value for the LVT
LINT0 register is 0x700 (APIC_MODE_EXTINT).
When this quirk is disabled, the reset value
is 0x10000 (APIC_LVT_MASKED).
======================================== ================================================
KVM_X86_QUIRK_LINT0_REENABLED By default, the reset value for the LVT
LINT0 register is 0x700 (APIC_MODE_EXTINT).
When this quirk is disabled, the reset value
is 0x10000 (APIC_LVT_MASKED).
KVM_X86_QUIRK_CD_NW_CLEARED By default, KVM clears CR0.CD and CR0.NW on
AMD CPUs to work around buggy guest firmware
that runs in perpetuity with CR0.CD, i.e.
with caches in "no fill" mode.
KVM_X86_QUIRK_CD_NW_CLEARED By default, KVM clears CR0.CD and CR0.NW on
AMD CPUs to work around buggy guest firmware
that runs in perpetuity with CR0.CD, i.e.
with caches in "no fill" mode.
When this quirk is disabled, KVM does not
change the value of CR0.CD and CR0.NW.
When this quirk is disabled, KVM does not
change the value of CR0.CD and CR0.NW.
KVM_X86_QUIRK_LAPIC_MMIO_HOLE By default, the MMIO LAPIC interface is
available even when configured for x2APIC
mode. When this quirk is disabled, KVM
disables the MMIO LAPIC interface if the
LAPIC is in x2APIC mode.
KVM_X86_QUIRK_LAPIC_MMIO_HOLE By default, the MMIO LAPIC interface is
available even when configured for x2APIC
mode. When this quirk is disabled, KVM
disables the MMIO LAPIC interface if the
LAPIC is in x2APIC mode.
KVM_X86_QUIRK_OUT_7E_INC_RIP By default, KVM pre-increments %rip before
exiting to userspace for an OUT instruction
to port 0x7e. When this quirk is disabled,
KVM does not pre-increment %rip before
exiting to userspace.
KVM_X86_QUIRK_OUT_7E_INC_RIP By default, KVM pre-increments %rip before
exiting to userspace for an OUT instruction
to port 0x7e. When this quirk is disabled,
KVM does not pre-increment %rip before
exiting to userspace.
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT When this quirk is disabled, KVM sets
CPUID.01H:ECX[bit 3] (MONITOR/MWAIT) if
IA32_MISC_ENABLE[bit 18] (MWAIT) is set.
Additionally, when this quirk is disabled,
KVM clears CPUID.01H:ECX[bit 3] if
IA32_MISC_ENABLE[bit 18] is cleared.
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT When this quirk is disabled, KVM sets
CPUID.01H:ECX[bit 3] (MONITOR/MWAIT) if
IA32_MISC_ENABLE[bit 18] (MWAIT) is set.
Additionally, when this quirk is disabled,
KVM clears CPUID.01H:ECX[bit 3] if
IA32_MISC_ENABLE[bit 18] is cleared.
KVM_X86_QUIRK_FIX_HYPERCALL_INSN By default, KVM rewrites guest
VMMCALL/VMCALL instructions to match the
vendor's hypercall instruction for the
system. When this quirk is disabled, KVM
will no longer rewrite invalid guest
hypercall instructions. Executing the
incorrect hypercall instruction will
generate a #UD within the guest.
KVM_X86_QUIRK_FIX_HYPERCALL_INSN By default, KVM rewrites guest
VMMCALL/VMCALL instructions to match the
vendor's hypercall instruction for the
system. When this quirk is disabled, KVM
will no longer rewrite invalid guest
hypercall instructions. Executing the
incorrect hypercall instruction will
generate a #UD within the guest.
KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS By default, KVM emulates MONITOR/MWAIT (if
they are intercepted) as NOPs regardless of
whether or not MONITOR/MWAIT are supported
according to guest CPUID. When this quirk
is disabled and KVM_X86_DISABLE_EXITS_MWAIT
is not set (MONITOR/MWAIT are intercepted),
KVM will inject a #UD on MONITOR/MWAIT if
they're unsupported per guest CPUID. Note,
KVM will modify MONITOR/MWAIT support in
guest CPUID on writes to MISC_ENABLE if
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
disabled.
KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS By default, KVM emulates MONITOR/MWAIT (if
they are intercepted) as NOPs regardless of
whether or not MONITOR/MWAIT are supported
according to guest CPUID. When this quirk
is disabled and KVM_X86_DISABLE_EXITS_MWAIT
is not set (MONITOR/MWAIT are intercepted),
KVM will inject a #UD on MONITOR/MWAIT if
they're unsupported per guest CPUID. Note,
KVM will modify MONITOR/MWAIT support in
guest CPUID on writes to MISC_ENABLE if
KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT is
disabled.
KVM_X86_QUIRK_SLOT_ZAP_ALL By default, for KVM_X86_DEFAULT_VM VMs, KVM
invalidates all SPTEs in all memslots and
address spaces when a memslot is deleted or
moved. When this quirk is disabled (or the
VM type isn't KVM_X86_DEFAULT_VM), KVM only
ensures the backing memory of the deleted
or moved memslot isn't reachable, i.e KVM
_may_ invalidate only SPTEs related to the
memslot.
KVM_X86_QUIRK_SLOT_ZAP_ALL By default, for KVM_X86_DEFAULT_VM VMs, KVM
invalidates all SPTEs in all memslots and
address spaces when a memslot is deleted or
moved. When this quirk is disabled (or the
VM type isn't KVM_X86_DEFAULT_VM), KVM only
ensures the backing memory of the deleted
or moved memslot isn't reachable, i.e KVM
_may_ invalidate only SPTEs related to the
memslot.
KVM_X86_QUIRK_STUFF_FEATURE_MSRS By default, at vCPU creation, KVM sets the
vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
MSR_IA32_ARCH_CAPABILITIES (0x10a),
MSR_PLATFORM_INFO (0xce), and all VMX MSRs
(0x480..0x492) to the maximal capabilities
supported by KVM. KVM also sets
MSR_IA32_UCODE_REV (0x8b) to an arbitrary
value (which is different for Intel vs.
AMD). Lastly, when guest CPUID is set (by
userspace), KVM modifies select VMX MSR
fields to force consistency between guest
CPUID and L2's effective ISA. When this
quirk is disabled, KVM zeroes the vCPU's MSR
values (with two exceptions, see below),
i.e. treats the feature MSRs like CPUID
leaves and gives userspace full control of
the vCPU model definition. This quirk does
not affect VMX MSRs CR0/CR4_FIXED1 (0x487
and 0x489), as KVM does not allow them to
be set by userspace (KVM sets them based on
guest CPUID, for safety purposes).
KVM_X86_QUIRK_STUFF_FEATURE_MSRS By default, at vCPU creation, KVM sets the
vCPU's MSR_IA32_PERF_CAPABILITIES (0x345),
MSR_IA32_ARCH_CAPABILITIES (0x10a),
MSR_PLATFORM_INFO (0xce), and all VMX MSRs
(0x480..0x492) to the maximal capabilities
supported by KVM. KVM also sets
MSR_IA32_UCODE_REV (0x8b) to an arbitrary
value (which is different for Intel vs.
AMD). Lastly, when guest CPUID is set (by
userspace), KVM modifies select VMX MSR
fields to force consistency between guest
CPUID and L2's effective ISA. When this
quirk is disabled, KVM zeroes the vCPU's MSR
values (with two exceptions, see below),
i.e. treats the feature MSRs like CPUID
leaves and gives userspace full control of
the vCPU model definition. This quirk does
not affect VMX MSRs CR0/CR4_FIXED1 (0x487
and 0x489), as KVM does not allow them to
be set by userspace (KVM sets them based on
guest CPUID, for safety purposes).
KVM_X86_QUIRK_IGNORE_GUEST_PAT By default, on Intel platforms, KVM ignores
guest PAT and forces the effective memory
type to WB in EPT. The quirk is not available
on Intel platforms which are incapable of
safely honoring guest PAT (i.e., without CPU
self-snoop, KVM always ignores guest PAT and
forces effective memory type to WB). It is
also ignored on AMD platforms or, on Intel,
when a VM has non-coherent DMA devices
assigned; KVM always honors guest PAT in
such case. The quirk is needed to avoid
slowdowns on certain Intel Xeon platforms
(e.g. ICX, SPR) where self-snoop feature is
supported but UC is slow enough to cause
issues with some older guests that use
UC instead of WC to map the video RAM.
Userspace can disable the quirk to honor
guest PAT if it knows that there is no such
guest software, for example if it does not
expose a bochs graphics device (which is
known to have had a buggy driver).
=================================== ============================================
KVM_X86_QUIRK_IGNORE_GUEST_PAT By default, on Intel platforms, KVM ignores
guest PAT and forces the effective memory
type to WB in EPT. The quirk is not available
on Intel platforms which are incapable of
safely honoring guest PAT (i.e., without CPU
self-snoop, KVM always ignores guest PAT and
forces effective memory type to WB). It is
also ignored on AMD platforms or, on Intel,
when a VM has non-coherent DMA devices
assigned; KVM always honors guest PAT in
such case. The quirk is needed to avoid
slowdowns on certain Intel Xeon platforms
(e.g. ICX, SPR) where self-snoop feature is
supported but UC is slow enough to cause
issues with some older guests that use
UC instead of WC to map the video RAM.
Userspace can disable the quirk to honor
guest PAT if it knows that there is no such
guest software, for example if it does not
expose a bochs graphics device (which is
known to have had a buggy driver).
KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM By default, KVM relaxes the consistency
check for GUEST_IA32_DEBUGCTL in vmcs12
to allow FREEZE_IN_SMM to be set. When
this quirk is disabled, KVM requires this
bit to be cleared. Note that the vmcs02
bit is still completely controlled by the
host, regardless of the quirk setting.
======================================== ================================================
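For reference (not part of this documentation), the quirk bits above are cleared from userspace with ``KVM_ENABLE_CAP``. A minimal sketch, assuming the capability in question is ``KVM_CAP_DISABLE_QUIRKS2`` and that ``vm_fd`` refers to an already created VM::

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Disable the LINT0 reset-value quirk for an existing VM. */
int disable_lint0_quirk(int vm_fd)
{
        struct kvm_enable_cap cap;

        memset(&cap, 0, sizeof(cap));
        cap.cap = KVM_CAP_DISABLE_QUIRKS2;
        /* args[0] is a mask of the KVM_X86_QUIRK_* bits listed above. */
        cap.args[0] = KVM_X86_QUIRK_LINT0_REENABLED;

        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}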
7.32 KVM_CAP_MAX_VCPU_ID
------------------------


@ -17,6 +17,8 @@ The acquisition orders for mutexes are as follows:
- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock
- vcpu->mutex is taken outside kvm->slots_lock and kvm->slots_arch_lock
- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
them together is quite rare.


@ -8626,9 +8626,8 @@ F: drivers/gpu/drm/lima/
F: include/uapi/drm/lima_drm.h
DRM DRIVERS FOR LOONGSON
M: Sui Jingfeng <suijingfeng@loongson.cn>
L: dri-devel@lists.freedesktop.org
S: Supported
S: Orphan
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
F: drivers/gpu/drm/loongson/
@ -16358,7 +16357,6 @@ F: net/dsa/tag_mtk.c
MEDIATEK T7XX 5G WWAN MODEM DRIVER
M: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
R: Chiranjeevi Rapolu <chiranjeevi.rapolu@linux.intel.com>
R: Liu Haijun <haijun.liu@mediatek.com>
R: Ricardo Martinez <ricardo.martinez@linux.intel.com>
L: netdev@vger.kernel.org
@ -16643,7 +16641,7 @@ F: mm/balloon.c
MEMORY MANAGEMENT - CORE
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@kernel.org>
R: Mike Rapoport <rppt@kernel.org>
@ -16773,7 +16771,7 @@ F: mm/workingset.c
MEMORY MANAGEMENT - MISC
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@kernel.org>
R: Mike Rapoport <rppt@kernel.org>
@ -16864,7 +16862,7 @@ R: David Hildenbrand <david@kernel.org>
R: Michal Hocko <mhocko@kernel.org>
R: Qi Zheng <zhengqi.arch@bytedance.com>
R: Shakeel Butt <shakeel.butt@linux.dev>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Lorenzo Stoakes <ljs@kernel.org>
L: linux-mm@kvack.org
S: Maintained
F: mm/vmscan.c
@ -16873,7 +16871,7 @@ F: mm/workingset.c
MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Rik van Riel <riel@surriel.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Vlastimil Babka <vbabka@kernel.org>
@ -16918,7 +16916,7 @@ F: mm/swapfile.c
MEMORY MANAGEMENT - THP (TRANSPARENT HUGE PAGE)
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Zi Yan <ziy@nvidia.com>
R: Baolin Wang <baolin.wang@linux.alibaba.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
@ -16958,7 +16956,7 @@ F: tools/testing/selftests/mm/uffd-*.[ch]
MEMORY MANAGEMENT - RUST
M: Alice Ryhl <aliceryhl@google.com>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
L: linux-mm@kvack.org
L: rust-for-linux@vger.kernel.org
@ -16974,7 +16972,7 @@ F: rust/kernel/page.rs
MEMORY MAPPING
M: Andrew Morton <akpm@linux-foundation.org>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Jann Horn <jannh@google.com>
R: Pedro Falcato <pfalcato@suse.de>
@ -17004,7 +17002,7 @@ MEMORY MAPPING - LOCKING
M: Andrew Morton <akpm@linux-foundation.org>
M: Suren Baghdasaryan <surenb@google.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Shakeel Butt <shakeel.butt@linux.dev>
L: linux-mm@kvack.org
@ -17019,7 +17017,7 @@ F: mm/mmap_lock.c
MEMORY MAPPING - MADVISE (MEMORY ADVICE)
M: Andrew Morton <akpm@linux-foundation.org>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
M: Lorenzo Stoakes <ljs@kernel.org>
M: David Hildenbrand <david@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Jann Horn <jannh@google.com>
@ -21938,7 +21936,7 @@ F: drivers/media/radio/radio-tea5777.c
RADOS BLOCK DEVICE (RBD)
M: Ilya Dryomov <idryomov@gmail.com>
R: Dongsheng Yang <dongsheng.yang@easystack.cn>
R: Dongsheng Yang <dongsheng.yang@linux.dev>
L: ceph-devel@vger.kernel.org
S: Supported
W: http://ceph.com/
@ -22267,6 +22265,16 @@ L: linux-wireless@vger.kernel.org
S: Orphan
F: drivers/net/wireless/rsi/
RELAY
M: Andrew Morton <akpm@linux-foundation.org>
M: Jens Axboe <axboe@kernel.dk>
M: Jason Xing <kernelxing@tencent.com>
L: linux-kernel@vger.kernel.org
S: Maintained
F: Documentation/filesystems/relay.rst
F: include/linux/relay.h
F: kernel/relay.c
REGISTER MAP ABSTRACTION
M: Mark Brown <broonie@kernel.org>
L: linux-kernel@vger.kernel.org
@ -23156,7 +23164,7 @@ K: \b(?i:rust)\b
RUST [ALLOC]
M: Danilo Krummrich <dakr@kernel.org>
R: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Uladzislau Rezki <urezki@gmail.com>
@ -24333,11 +24341,12 @@ F: drivers/nvmem/layouts/sl28vpd.c
SLAB ALLOCATOR
M: Vlastimil Babka <vbabka@kernel.org>
M: Harry Yoo <harry.yoo@oracle.com>
M: Andrew Morton <akpm@linux-foundation.org>
R: Hao Li <hao.li@linux.dev>
R: Christoph Lameter <cl@gentwo.org>
R: David Rientjes <rientjes@google.com>
R: Roman Gushchin <roman.gushchin@linux.dev>
R: Harry Yoo <harry.yoo@oracle.com>
L: linux-mm@kvack.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git
@ -25748,6 +25757,7 @@ F: include/net/pkt_cls.h
F: include/net/pkt_sched.h
F: include/net/sch_priv.h
F: include/net/tc_act/
F: include/net/tc_wrapper.h
F: include/uapi/linux/pkt_cls.h
F: include/uapi/linux/pkt_sched.h
F: include/uapi/linux/tc_act/


@ -2,7 +2,7 @@
VERSION = 7
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc3
EXTRAVERSION = -rc4
NAME = Baby Opossum Posse
# *DOCUMENTATION*
@ -476,6 +476,7 @@ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
export rust_common_flags := --edition=2021 \
-Zbinary_dep_depinfo=y \
-Astable_features \
-Aunused_features \
-Dnon_ascii_idents \
-Dunsafe_op_in_unsafe_fn \
-Wmissing_docs \
@ -1113,6 +1114,9 @@ KBUILD_CFLAGS += -fno-builtin-wcslen
# change __FILE__ to the relative path to the source directory
ifdef building_out_of_srctree
KBUILD_CPPFLAGS += -fmacro-prefix-map=$(srcroot)/=
ifeq ($(call rustc-option-yn, --remap-path-scope=macro),y)
KBUILD_RUSTFLAGS += --remap-path-prefix=$(srcroot)/= --remap-path-scope=macro
endif
endif
# include additional Makefiles when needed


@ -784,6 +784,9 @@ struct kvm_host_data {
/* Number of debug breakpoints/watchpoints for this CPU (minus 1) */
unsigned int debug_brps;
unsigned int debug_wrps;
/* Last vgic_irq part of the AP list recorded in an LR */
struct vgic_irq *last_lr_irq;
};
struct kvm_host_psci_config {


@ -2345,6 +2345,15 @@ static bool can_trap_icv_dir_el1(const struct arm64_cpu_capabilities *entry,
!is_midr_in_range_list(has_vgic_v3))
return false;
/*
* pKVM prevents late onlining of CPUs. This means that whatever
* state the capability is in after deprivilege cannot be affected
* by a new CPU booting -- this is guaranteed to be a CPU we have
* already seen, and the cap is therefore unchanged.
*/
if (system_capabilities_finalized() && is_protected_kvm_enabled())
return cpus_have_final_cap(ARM64_HAS_ICH_HCR_EL2_TDIR);
if (is_kernel_in_hyp_mode())
res.a1 = read_sysreg_s(SYS_ICH_VTR_EL2);
else

View File

@ -1504,8 +1504,6 @@ int __kvm_at_s1e2(struct kvm_vcpu *vcpu, u32 op, u64 vaddr)
fail = true;
}
isb();
if (!fail)
par = read_sysreg_par();


@ -29,7 +29,7 @@
#include "trace.h"
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS()
};
@ -42,7 +42,7 @@ const struct kvm_stats_header kvm_vm_stats_header = {
sizeof(kvm_vm_stats_desc),
};
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, hvc_exit_stat),
STATS_DESC_COUNTER(VCPU, wfe_exit_stat),

View File

@ -518,7 +518,7 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
granule = kvm_granule_size(level);
cur.start = ALIGN_DOWN(addr, granule);
cur.end = cur.start + granule;
if (!range_included(&cur, range))
if (!range_included(&cur, range) && level < KVM_PGTABLE_LAST_LEVEL)
continue;
*range = cur;
return 0;


@ -1751,6 +1751,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
force_pte = (max_map_size == PAGE_SIZE);
vma_pagesize = min_t(long, vma_pagesize, max_map_size);
vma_shift = __ffs(vma_pagesize);
}
/*
@ -1837,10 +1838,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault && s2_force_noncacheable)
ret = -ENOEXEC;
if (ret) {
kvm_release_page_unused(page);
return ret;
}
if (ret)
goto out_put_page;
/*
* Guest performs atomic/exclusive operations on memory with unsupported
@ -1850,7 +1849,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
*/
if (esr_fsc_is_excl_atomic_fault(kvm_vcpu_get_esr(vcpu))) {
kvm_inject_dabt_excl_atomic(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
ret = 1;
goto out_put_page;
}
if (nested)
@ -1936,6 +1936,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
mark_page_dirty_in_slot(kvm, memslot, gfn);
return ret != -EAGAIN ? ret : 0;
out_put_page:
kvm_release_page_unused(page);
return ret;
}
/* Resolve the access fault by making the page young again. */


@ -152,31 +152,31 @@ static int get_ia_size(struct s2_walk_info *wi)
return 64 - wi->t0sz;
}
static int check_base_s2_limits(struct s2_walk_info *wi,
static int check_base_s2_limits(struct kvm_vcpu *vcpu, struct s2_walk_info *wi,
int level, int input_size, int stride)
{
int start_size, ia_size;
int start_size, pa_max;
ia_size = get_ia_size(wi);
pa_max = kvm_get_pa_bits(vcpu->kvm);
/* Check translation limits */
switch (BIT(wi->pgshift)) {
case SZ_64K:
if (level == 0 || (level == 1 && ia_size <= 42))
if (level == 0 || (level == 1 && pa_max <= 42))
return -EFAULT;
break;
case SZ_16K:
if (level == 0 || (level == 1 && ia_size <= 40))
if (level == 0 || (level == 1 && pa_max <= 40))
return -EFAULT;
break;
case SZ_4K:
if (level < 0 || (level == 0 && ia_size <= 42))
if (level < 0 || (level == 0 && pa_max <= 42))
return -EFAULT;
break;
}
/* Check input size limits */
if (input_size > ia_size)
if (input_size > pa_max)
return -EFAULT;
/* Check number of entries in starting level table */
@ -269,16 +269,19 @@ static int walk_nested_s2_pgd(struct kvm_vcpu *vcpu, phys_addr_t ipa,
if (input_size > 48 || input_size < 25)
return -EFAULT;
ret = check_base_s2_limits(wi, level, input_size, stride);
if (WARN_ON(ret))
ret = check_base_s2_limits(vcpu, wi, level, input_size, stride);
if (WARN_ON(ret)) {
out->esr = compute_fsc(0, ESR_ELx_FSC_FAULT);
return ret;
}
base_lower_bound = 3 + input_size - ((3 - level) * stride +
wi->pgshift);
base_addr = wi->baddr & GENMASK_ULL(47, base_lower_bound);
if (check_output_size(wi, base_addr)) {
out->esr = compute_fsc(level, ESR_ELx_FSC_ADDRSZ);
/* R_BFHQH */
out->esr = compute_fsc(0, ESR_ELx_FSC_ADDRSZ);
return 1;
}
@ -293,8 +296,10 @@ static int walk_nested_s2_pgd(struct kvm_vcpu *vcpu, phys_addr_t ipa,
paddr = base_addr | index;
ret = read_guest_s2_desc(vcpu, paddr, &desc, wi);
if (ret < 0)
if (ret < 0) {
out->esr = ESR_ELx_FSC_SEA_TTW(level);
return ret;
}
new_desc = desc;


@ -143,23 +143,6 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
kvm->arch.vgic.in_kernel = true;
kvm->arch.vgic.vgic_model = type;
kvm->arch.vgic.implementation_rev = KVM_VGIC_IMP_REV_LATEST;
kvm_for_each_vcpu(i, vcpu, kvm) {
ret = vgic_allocate_private_irqs_locked(vcpu, type);
if (ret)
break;
}
if (ret) {
kvm_for_each_vcpu(i, vcpu, kvm) {
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
kfree(vgic_cpu->private_irqs);
vgic_cpu->private_irqs = NULL;
}
goto out_unlock;
}
kvm->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
aa64pfr0 = kvm_read_vm_id_reg(kvm, SYS_ID_AA64PFR0_EL1) & ~ID_AA64PFR0_EL1_GIC;
@ -176,6 +159,23 @@ int kvm_vgic_create(struct kvm *kvm, u32 type)
kvm_set_vm_id_reg(kvm, SYS_ID_AA64PFR0_EL1, aa64pfr0);
kvm_set_vm_id_reg(kvm, SYS_ID_PFR1_EL1, pfr1);
kvm_for_each_vcpu(i, vcpu, kvm) {
ret = vgic_allocate_private_irqs_locked(vcpu, type);
if (ret)
break;
}
if (ret) {
kvm_for_each_vcpu(i, vcpu, kvm) {
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
kfree(vgic_cpu->private_irqs);
vgic_cpu->private_irqs = NULL;
}
kvm->arch.vgic.vgic_model = 0;
goto out_unlock;
}
if (type == KVM_DEV_TYPE_ARM_VGIC_V3)
kvm->arch.vgic.nassgicap = system_supports_direct_sgis();


@ -115,7 +115,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2;
u32 eoicount = FIELD_GET(GICH_HCR_EOICOUNT, cpuif->vgic_hcr);
struct vgic_irq *irq;
struct vgic_irq *irq = *host_data_ptr(last_lr_irq);
DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
@ -123,7 +123,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
vgic_v2_fold_lr(vcpu, cpuif->vgic_lr[lr]);
/* See the GICv3 equivalent for the EOIcount handling rationale */
list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
list_for_each_entry_continue(irq, &vgic_cpu->ap_list_head, ap_list) {
u32 lr;
if (!eoicount) {


@ -148,7 +148,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3;
u32 eoicount = FIELD_GET(ICH_HCR_EL2_EOIcount, cpuif->vgic_hcr);
struct vgic_irq *irq;
struct vgic_irq *irq = *host_data_ptr(last_lr_irq);
DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());
@ -158,12 +158,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
/*
* EOIMode=0: use EOIcount to emulate deactivation. We are
* guaranteed to deactivate in reverse order of the activation, so
* just pick one active interrupt after the other in the ap_list,
* and replay the deactivation as if the CPU was doing it. We also
* rely on priority drop to have taken place, and the list to be
* sorted by priority.
* just pick one active interrupt after the other in the tail part
* of the ap_list, past the LRs, and replay the deactivation as if
* the CPU was doing it. We also rely on priority drop to have taken
* place, and the list to be sorted by priority.
*/
list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
list_for_each_entry_continue(irq, &vgic_cpu->ap_list_head, ap_list) {
u64 lr;
/*


@ -814,6 +814,9 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu)
static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu)
{
if (!*host_data_ptr(last_lr_irq))
return;
if (kvm_vgic_global_state.type == VGIC_V2)
vgic_v2_fold_lr_state(vcpu);
else
@ -960,10 +963,13 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
if (irqs_outside_lrs(&als))
vgic_sort_ap_list(vcpu);
*host_data_ptr(last_lr_irq) = NULL;
list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
scoped_guard(raw_spinlock, &irq->irq_lock) {
if (likely(vgic_target_oracle(irq) == vcpu)) {
vgic_populate_lr(vcpu, irq, count++);
*host_data_ptr(last_lr_irq) = irq;
}
}


@ -14,7 +14,7 @@
#define CREATE_TRACE_POINTS
#include "trace.h"
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, int_exits),
STATS_DESC_COUNTER(VCPU, idle_exits),


@ -10,7 +10,7 @@
#include <asm/kvm_eiointc.h>
#include <asm/kvm_pch_pic.h>
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS(),
STATS_DESC_ICOUNTER(VM, pages),
STATS_DESC_ICOUNTER(VM, hugepages),


@ -38,7 +38,7 @@
#define VECTORSPACING 0x100 /* for EI/VI mode */
#endif
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS()
};
@ -51,7 +51,7 @@ const struct kvm_stats_header kvm_vm_stats_header = {
sizeof(kvm_vm_stats_desc),
};
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, wait_exits),
STATS_DESC_COUNTER(VCPU, cache_exits),


@ -573,8 +573,8 @@ config ARCH_USING_PATCHABLE_FUNCTION_ENTRY
depends on FUNCTION_TRACER && (PPC32 || PPC64_ELF_ABI_V2)
depends on $(cc-option,-fpatchable-function-entry=2)
def_bool y if PPC32
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN
def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN
def_bool $(success,$(srctree)/arch/powerpc/tools/check-fpatchable-function-entry.sh $(CC) $(CLANG_FLAGS) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN
def_bool $(success,$(srctree)/arch/powerpc/tools/check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN
config PPC_FTRACE_OUT_OF_LINE
def_bool PPC64 && ARCH_USING_PATCHABLE_FUNCTION_ENTRY


@ -37,7 +37,7 @@ PowerPC,8347@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x8000000>; // 128MB at 0
};


@ -1,156 +0,0 @@
/* T4240 Interlaken LAC Portal device tree stub with 24 portals.
*
* Copyright 2012 Freescale Semiconductor Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Freescale Semiconductor nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#address-cells = <0x1>;
#size-cells = <0x1>;
compatible = "fsl,interlaken-lac-portals";
lportal0: lac-portal@0 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x0 0x1000>;
};
lportal1: lac-portal@1000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x1000 0x1000>;
};
lportal2: lac-portal@2000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x2000 0x1000>;
};
lportal3: lac-portal@3000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x3000 0x1000>;
};
lportal4: lac-portal@4000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x4000 0x1000>;
};
lportal5: lac-portal@5000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x5000 0x1000>;
};
lportal6: lac-portal@6000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x6000 0x1000>;
};
lportal7: lac-portal@7000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x7000 0x1000>;
};
lportal8: lac-portal@8000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x8000 0x1000>;
};
lportal9: lac-portal@9000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x9000 0x1000>;
};
lportal10: lac-portal@A000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0xA000 0x1000>;
};
lportal11: lac-portal@B000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0xB000 0x1000>;
};
lportal12: lac-portal@C000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0xC000 0x1000>;
};
lportal13: lac-portal@D000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0xD000 0x1000>;
};
lportal14: lac-portal@E000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0xE000 0x1000>;
};
lportal15: lac-portal@F000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0xF000 0x1000>;
};
lportal16: lac-portal@10000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x10000 0x1000>;
};
lportal17: lac-portal@11000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x11000 0x1000>;
};
lportal18: lac-portal@1200 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x12000 0x1000>;
};
lportal19: lac-portal@13000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x13000 0x1000>;
};
lportal20: lac-portal@14000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x14000 0x1000>;
};
lportal21: lac-portal@15000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x15000 0x1000>;
};
lportal22: lac-portal@16000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x16000 0x1000>;
};
lportal23: lac-portal@17000 {
compatible = "fsl,interlaken-lac-portal-v1.0";
reg = <0x17000 0x1000>;
};


@ -1,45 +0,0 @@
/*
* T4 Interlaken Look-aside Controller (LAC) device tree stub
*
* Copyright 2012 Freescale Semiconductor Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Freescale Semiconductor nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
lac: lac@229000 {
compatible = "fsl,interlaken-lac";
reg = <0x229000 0x1000>;
interrupts = <16 2 1 18>;
};
lac-hv@228000 {
compatible = "fsl,interlaken-lac-hv";
reg = <0x228000 0x1000>;
fsl,non-hv-node = <&lac>;
};


@ -1,43 +0,0 @@
/*
* PQ3 MPIC Message (Group B) device tree stub [ controller @ offset 0x42400 ]
*
* Copyright 2012 Freescale Semiconductor Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Freescale Semiconductor nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
message@42400 {
compatible = "fsl,mpic-v3.1-msgr";
reg = <0x42400 0x200>;
interrupts = <
0xb4 2 0 0
0xb5 2 0 0
0xb6 2 0 0
0xb7 2 0 0>;
};


@ -1,80 +0,0 @@
/*
* QorIQ FMan v3 1g port #1 device tree stub [ controller @ offset 0x400000 ]
*
* Copyright 2012 - 2015 Freescale Semiconductor Inc.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of Freescale Semiconductor nor the
* names of its contributors may be used to endorse or promote products
* derived from this software without specific prior written permission.
*
*
* ALTERNATIVELY, this software may be distributed under the terms of the
* GNU General Public License ("GPL") as published by the Free Software
* Foundation, either version 2 of that License or (at your option) any
* later version.
*
* THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
* SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
fman@400000 {
fman0_rx_0x09: port@89000 {
cell-index = <0x9>;
compatible = "fsl,fman-v3-port-rx";
reg = <0x89000 0x1000>;
fsl,fman-10g-port;
fsl,fman-best-effort-port;
};
fman0_tx_0x29: port@a9000 {
cell-index = <0x29>;
compatible = "fsl,fman-v3-port-tx";
reg = <0xa9000 0x1000>;
fsl,fman-10g-port;
fsl,fman-best-effort-port;
};
ethernet@e2000 {
cell-index = <1>;
compatible = "fsl,fman-memac";
reg = <0xe2000 0x1000>;
fsl,fman-ports = <&fman0_rx_0x09 &fman0_tx_0x29>;
ptp-timer = <&ptp_timer0>;
pcsphy-handle = <&pcsphy1>, <&qsgmiia_pcs1>;
pcs-handle-names = "sgmii", "qsgmii";
};
mdio@e1000 {
qsgmiia_pcs1: ethernet-pcs@1 {
compatible = "fsl,lynx-pcs";
reg = <1>;
};
};
mdio@e3000 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "fsl,fman-memac-mdio", "fsl,fman-xmdio";
reg = <0xe3000 0x1000>;
fsl,erratum-a011043; /* must ignore read errors */
pcsphy1: ethernet-phy@0 {
reg = <0x0>;
};
};
};


@ -37,7 +37,7 @@ PowerPC,8308@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x08000000>; // 128MB at 0
};


@ -38,7 +38,7 @@ PowerPC,8308@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x08000000>; // 128MB at 0
};


@ -6,6 +6,7 @@
*/
/dts-v1/;
#include <dt-bindings/interrupt-controller/irq.h>
/ {
model = "MPC8313ERDB";
@ -38,7 +39,7 @@ PowerPC,8313@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x08000000>; // 128MB at 0
};
@ -48,7 +49,7 @@ localbus@e0005000 {
#size-cells = <1>;
compatible = "fsl,mpc8313-elbc", "fsl,elbc", "simple-bus";
reg = <0xe0005000 0x1000>;
interrupts = <77 0x8>;
interrupts = <77 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
// CS0 and CS1 are swapped when
@ -118,7 +119,7 @@ i2c@3000 {
cell-index = <0>;
compatible = "fsl-i2c";
reg = <0x3000 0x100>;
interrupts = <14 0x8>;
interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
dfsrr;
rtc@68 {
@ -131,7 +132,7 @@ crypto@30000 {
compatible = "fsl,sec2.2", "fsl,sec2.1",
"fsl,sec2.0";
reg = <0x30000 0x10000>;
interrupts = <11 0x8>;
interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
fsl,num-channels = <1>;
fsl,channel-fifo-len = <24>;
@ -146,7 +147,7 @@ i2c@3100 {
cell-index = <1>;
compatible = "fsl-i2c";
reg = <0x3100 0x100>;
interrupts = <15 0x8>;
interrupts = <15 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
dfsrr;
};
@ -155,7 +156,7 @@ spi@7000 {
cell-index = <0>;
compatible = "fsl,spi";
reg = <0x7000 0x1000>;
interrupts = <16 0x8>;
interrupts = <16 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
mode = "cpu";
};
@ -167,7 +168,7 @@ usb@23000 {
#address-cells = <1>;
#size-cells = <0>;
interrupt-parent = <&ipic>;
interrupts = <38 0x8>;
interrupts = <38 IRQ_TYPE_LEVEL_LOW>;
phy_type = "utmi_wide";
sleep = <&pmc 0x00300000>;
};
@ -175,7 +176,8 @@ usb@23000 {
ptp_clock@24E00 {
compatible = "fsl,etsec-ptp";
reg = <0x24E00 0xB0>;
interrupts = <12 0x8 13 0x8>;
interrupts = <12 IRQ_TYPE_LEVEL_LOW>,
<13 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = < &ipic >;
fsl,tclk-period = <10>;
fsl,tmr-prsc = <100>;
@ -197,7 +199,9 @@ enet0: ethernet@24000 {
compatible = "gianfar";
reg = <0x24000 0x1000>;
local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <37 0x8 36 0x8 35 0x8>;
interrupts = <37 IRQ_TYPE_LEVEL_LOW>,
<36 IRQ_TYPE_LEVEL_LOW>,
<35 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
tbi-handle = < &tbi0 >;
/* Vitesse 7385 isn't on the MDIO bus */
@ -211,7 +215,7 @@ mdio@520 {
reg = <0x520 0x20>;
phy4: ethernet-phy@4 {
interrupt-parent = <&ipic>;
interrupts = <20 0x8>;
interrupts = <20 IRQ_TYPE_LEVEL_LOW>;
reg = <0x4>;
};
tbi0: tbi-phy@11 {
@ -231,7 +235,9 @@ enet1: ethernet@25000 {
reg = <0x25000 0x1000>;
ranges = <0x0 0x25000 0x1000>;
local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <34 0x8 33 0x8 32 0x8>;
interrupts = <34 IRQ_TYPE_LEVEL_LOW>,
<33 IRQ_TYPE_LEVEL_LOW>,
<32 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
tbi-handle = < &tbi1 >;
phy-handle = < &phy4 >;
@ -259,7 +265,7 @@ serial0: serial@4500 {
compatible = "fsl,ns16550", "ns16550";
reg = <0x4500 0x100>;
clock-frequency = <0>;
interrupts = <9 0x8>;
interrupts = <9 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
@ -269,15 +275,12 @@ serial1: serial@4600 {
compatible = "fsl,ns16550", "ns16550";
reg = <0x4600 0x100>;
clock-frequency = <0>;
interrupts = <10 0x8>;
interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
/* IPIC
* interrupts cell = <intr #, sense>
* sense values match linux IORESOURCE_IRQ_* defines:
* sense == 8: Level, low assertion
* sense == 2: Edge, high-to-low change
* interrupts cell = <intr #, type>
*/
ipic: pic@700 {
interrupt-controller;
@ -290,7 +293,7 @@ ipic: pic@700 {
pmc: power@b00 {
compatible = "fsl,mpc8313-pmc", "fsl,mpc8349-pmc";
reg = <0xb00 0x100 0xa00 0x100>;
interrupts = <80 8>;
interrupts = <80 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
fsl,mpc8313-wakeup-timer = <&gtm1>;
@ -306,14 +309,20 @@ pmc: power@b00 {
gtm1: timer@500 {
compatible = "fsl,mpc8313-gtm", "fsl,gtm";
reg = <0x500 0x100>;
interrupts = <90 8 78 8 84 8 72 8>;
interrupts = <90 IRQ_TYPE_LEVEL_LOW>,
<78 IRQ_TYPE_LEVEL_LOW>,
<84 IRQ_TYPE_LEVEL_LOW>,
<72 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
timer@600 {
compatible = "fsl,mpc8313-gtm", "fsl,gtm";
reg = <0x600 0x100>;
interrupts = <91 8 79 8 85 8 73 8>;
interrupts = <91 IRQ_TYPE_LEVEL_LOW>,
<79 IRQ_TYPE_LEVEL_LOW>,
<85 IRQ_TYPE_LEVEL_LOW>,
<73 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
};
@ -341,7 +350,7 @@ pci0: pci@e0008500 {
0x7800 0x0 0x0 0x3 &ipic 17 0x8
0x7800 0x0 0x0 0x4 &ipic 18 0x8>;
interrupt-parent = <&ipic>;
interrupts = <66 0x8>;
interrupts = <66 IRQ_TYPE_LEVEL_LOW>;
bus-range = <0x0 0x0>;
ranges = <0x02000000 0x0 0x90000000 0x90000000 0x0 0x10000000
0x42000000 0x0 0x80000000 0x80000000 0x0 0x10000000
@ -363,14 +372,14 @@ dma@82a8 {
reg = <0xe00082a8 4>;
ranges = <0 0xe0008100 0x1a8>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
dma-channel@0 {
compatible = "fsl,mpc8313-dma-channel",
"fsl,elo-dma-channel";
reg = <0 0x28>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
cell-index = <0>;
};
@ -379,7 +388,7 @@ dma-channel@80 {
"fsl,elo-dma-channel";
reg = <0x80 0x28>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
cell-index = <1>;
};
@ -388,7 +397,7 @@ dma-channel@100 {
"fsl,elo-dma-channel";
reg = <0x100 0x28>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
cell-index = <2>;
};
@ -397,7 +406,7 @@ dma-channel@180 {
"fsl,elo-dma-channel";
reg = <0x180 0x28>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
cell-index = <3>;
};
};


@ -40,7 +40,7 @@ PowerPC,8315@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x08000000>; // 128MB at 0
};
@ -50,7 +50,7 @@ localbus@e0005000 {
#size-cells = <1>;
compatible = "fsl,mpc8315-elbc", "fsl,elbc", "simple-bus";
reg = <0xe0005000 0x1000>;
interrupts = <77 0x8>;
interrupts = <77 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
// CS0 and CS1 are swapped when
@ -112,7 +112,7 @@ i2c@3000 {
cell-index = <0>;
compatible = "fsl-i2c";
reg = <0x3000 0x100>;
interrupts = <14 0x8>;
interrupts = <14 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
dfsrr;
rtc@68 {
@ -133,8 +133,10 @@ spi@7000 {
cell-index = <0>;
compatible = "fsl,spi";
reg = <0x7000 0x1000>;
interrupts = <16 0x8>;
interrupts = <16 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
#address-cells = <1>;
#size-cells = <0>;
mode = "cpu";
};
@ -145,35 +147,35 @@ dma@82a8 {
reg = <0x82a8 4>;
ranges = <0 0x8100 0x1a8>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
cell-index = <0>;
dma-channel@0 {
compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
reg = <0 0x80>;
cell-index = <0>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
};
dma-channel@80 {
compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
reg = <0x80 0x80>;
cell-index = <1>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
};
dma-channel@100 {
compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
reg = <0x100 0x80>;
cell-index = <2>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
};
dma-channel@180 {
compatible = "fsl,mpc8315-dma-channel", "fsl,elo-dma-channel";
reg = <0x180 0x28>;
cell-index = <3>;
interrupt-parent = <&ipic>;
interrupts = <71 8>;
interrupts = <71 IRQ_TYPE_LEVEL_LOW>;
};
};
@ -183,7 +185,7 @@ usb@23000 {
#address-cells = <1>;
#size-cells = <0>;
interrupt-parent = <&ipic>;
interrupts = <38 0x8>;
interrupts = <38 IRQ_TYPE_LEVEL_LOW>;
phy_type = "utmi";
};
@ -197,7 +199,9 @@ enet0: ethernet@24000 {
reg = <0x24000 0x1000>;
ranges = <0x0 0x24000 0x1000>;
local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <32 0x8 33 0x8 34 0x8>;
interrupts = <32 IRQ_TYPE_LEVEL_LOW>,
<33 IRQ_TYPE_LEVEL_LOW>,
<34 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
tbi-handle = <&tbi0>;
phy-handle = < &phy0 >;
@ -238,7 +242,9 @@ enet1: ethernet@25000 {
reg = <0x25000 0x1000>;
ranges = <0x0 0x25000 0x1000>;
local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <35 0x8 36 0x8 37 0x8>;
interrupts = <35 IRQ_TYPE_LEVEL_LOW>,
<36 IRQ_TYPE_LEVEL_LOW>,
<37 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
tbi-handle = <&tbi1>;
phy-handle = < &phy1 >;
@ -263,7 +269,7 @@ serial0: serial@4500 {
compatible = "fsl,ns16550", "ns16550";
reg = <0x4500 0x100>;
clock-frequency = <133333333>;
interrupts = <9 0x8>;
interrupts = <9 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
@ -273,7 +279,7 @@ serial1: serial@4600 {
compatible = "fsl,ns16550", "ns16550";
reg = <0x4600 0x100>;
clock-frequency = <133333333>;
interrupts = <10 0x8>;
interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
@ -282,7 +288,7 @@ crypto@30000 {
"fsl,sec2.4", "fsl,sec2.2", "fsl,sec2.1",
"fsl,sec2.0";
reg = <0x30000 0x10000>;
interrupts = <11 0x8>;
interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
fsl,num-channels = <4>;
fsl,channel-fifo-len = <24>;
@ -294,7 +300,7 @@ sata@18000 {
compatible = "fsl,mpc8315-sata", "fsl,pq-sata";
reg = <0x18000 0x1000>;
cell-index = <1>;
interrupts = <44 0x8>;
interrupts = <44 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
@ -302,14 +308,17 @@ sata@19000 {
compatible = "fsl,mpc8315-sata", "fsl,pq-sata";
reg = <0x19000 0x1000>;
cell-index = <2>;
interrupts = <45 0x8>;
interrupts = <45 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
};
gtm1: timer@500 {
compatible = "fsl,mpc8315-gtm", "fsl,gtm";
reg = <0x500 0x100>;
interrupts = <90 8 78 8 84 8 72 8>;
interrupts = <90 IRQ_TYPE_LEVEL_LOW>,
<78 IRQ_TYPE_LEVEL_LOW>,
<84 IRQ_TYPE_LEVEL_LOW>,
<72 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
clock-frequency = <133333333>;
};
@ -317,16 +326,16 @@ gtm1: timer@500 {
timer@600 {
compatible = "fsl,mpc8315-gtm", "fsl,gtm";
reg = <0x600 0x100>;
interrupts = <91 8 79 8 85 8 73 8>;
interrupts = <91 IRQ_TYPE_LEVEL_LOW>,
<79 IRQ_TYPE_LEVEL_LOW>,
<85 IRQ_TYPE_LEVEL_LOW>,
<73 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
clock-frequency = <133333333>;
};
/* IPIC
* interrupts cell = <intr #, sense>
* sense values match linux IORESOURCE_IRQ_* defines:
* sense == 8: Level, low assertion
* sense == 2: Edge, high-to-low change
* interrupts cell = <intr #, type>
*/
ipic: interrupt-controller@700 {
interrupt-controller;
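These conversions replace the raw IPIC sense value 8 with the named constant. For reference, the standard flag values come from include/dt-bindings/interrupt-controller/irq.h (which the converted sources need to pull in, directly or via an include, for the macro to resolve):

#define IRQ_TYPE_NONE           0
#define IRQ_TYPE_EDGE_RISING    1
#define IRQ_TYPE_EDGE_FALLING   2
#define IRQ_TYPE_EDGE_BOTH      3
#define IRQ_TYPE_LEVEL_HIGH     4
#define IRQ_TYPE_LEVEL_LOW      8

So `<71 8>` and `<71 IRQ_TYPE_LEVEL_LOW>` encode exactly the same interrupt specifier cell; only the readability changes.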
@ -340,14 +349,14 @@ ipic-msi@7c0 {
compatible = "fsl,ipic-msi";
reg = <0x7c0 0x40>;
msi-available-ranges = <0 0x100>;
interrupts = <0x43 0x8
0x4 0x8
0x51 0x8
0x52 0x8
0x56 0x8
0x57 0x8
0x58 0x8
0x59 0x8>;
interrupts = <0x43 IRQ_TYPE_LEVEL_LOW
0x4 IRQ_TYPE_LEVEL_LOW
0x51 IRQ_TYPE_LEVEL_LOW
0x52 IRQ_TYPE_LEVEL_LOW
0x56 IRQ_TYPE_LEVEL_LOW
0x57 IRQ_TYPE_LEVEL_LOW
0x58 IRQ_TYPE_LEVEL_LOW
0x59 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = < &ipic >;
};
@ -355,7 +364,7 @@ pmc: power@b00 {
compatible = "fsl,mpc8315-pmc", "fsl,mpc8313-pmc",
"fsl,mpc8349-pmc";
reg = <0xb00 0x100 0xa00 0x100>;
interrupts = <80 8>;
interrupts = <80 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
fsl,mpc8313-wakeup-timer = <&gtm1>;
};
@ -374,24 +383,24 @@ pci0: pci@e0008500 {
interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
interrupt-map = <
/* IDSEL 0x0E -mini PCI */
0x7000 0x0 0x0 0x1 &ipic 18 0x8
0x7000 0x0 0x0 0x2 &ipic 18 0x8
0x7000 0x0 0x0 0x3 &ipic 18 0x8
0x7000 0x0 0x0 0x4 &ipic 18 0x8
0x7000 0x0 0x0 0x1 &ipic 18 IRQ_TYPE_LEVEL_LOW
0x7000 0x0 0x0 0x2 &ipic 18 IRQ_TYPE_LEVEL_LOW
0x7000 0x0 0x0 0x3 &ipic 18 IRQ_TYPE_LEVEL_LOW
0x7000 0x0 0x0 0x4 &ipic 18 IRQ_TYPE_LEVEL_LOW
/* IDSEL 0x0F -mini PCI */
0x7800 0x0 0x0 0x1 &ipic 17 0x8
0x7800 0x0 0x0 0x2 &ipic 17 0x8
0x7800 0x0 0x0 0x3 &ipic 17 0x8
0x7800 0x0 0x0 0x4 &ipic 17 0x8
0x7800 0x0 0x0 0x1 &ipic 17 IRQ_TYPE_LEVEL_LOW
0x7800 0x0 0x0 0x2 &ipic 17 IRQ_TYPE_LEVEL_LOW
0x7800 0x0 0x0 0x3 &ipic 17 IRQ_TYPE_LEVEL_LOW
0x7800 0x0 0x0 0x4 &ipic 17 IRQ_TYPE_LEVEL_LOW
/* IDSEL 0x10 - PCI slot */
0x8000 0x0 0x0 0x1 &ipic 48 0x8
0x8000 0x0 0x0 0x2 &ipic 17 0x8
0x8000 0x0 0x0 0x3 &ipic 48 0x8
0x8000 0x0 0x0 0x4 &ipic 17 0x8>;
0x8000 0x0 0x0 0x1 &ipic 48 IRQ_TYPE_LEVEL_LOW
0x8000 0x0 0x0 0x2 &ipic 17 IRQ_TYPE_LEVEL_LOW
0x8000 0x0 0x0 0x3 &ipic 48 IRQ_TYPE_LEVEL_LOW
0x8000 0x0 0x0 0x4 &ipic 17 IRQ_TYPE_LEVEL_LOW>;
interrupt-parent = <&ipic>;
interrupts = <66 0x8>;
interrupts = <66 IRQ_TYPE_LEVEL_LOW>;
bus-range = <0x0 0x0>;
ranges = <0x02000000 0 0x90000000 0x90000000 0 0x10000000
0x42000000 0 0x80000000 0x80000000 0 0x10000000
@ -417,10 +426,10 @@ pci1: pcie@e0009000 {
0x01000000 0 0x00000000 0xb1000000 0 0x00800000>;
bus-range = <0 255>;
interrupt-map-mask = <0xf800 0 0 7>;
interrupt-map = <0 0 0 1 &ipic 1 8
0 0 0 2 &ipic 1 8
0 0 0 3 &ipic 1 8
0 0 0 4 &ipic 1 8>;
interrupt-map = <0 0 0 1 &ipic 1 IRQ_TYPE_LEVEL_LOW
0 0 0 2 &ipic 1 IRQ_TYPE_LEVEL_LOW
0 0 0 3 &ipic 1 IRQ_TYPE_LEVEL_LOW
0 0 0 4 &ipic 1 IRQ_TYPE_LEVEL_LOW>;
clock-frequency = <0>;
pcie@0 {
@ -448,10 +457,10 @@ pci2: pcie@e000a000 {
0x01000000 0 0x00000000 0xd1000000 0 0x00800000>;
bus-range = <0 255>;
interrupt-map-mask = <0xf800 0 0 7>;
interrupt-map = <0 0 0 1 &ipic 2 8
0 0 0 2 &ipic 2 8
0 0 0 3 &ipic 2 8
0 0 0 4 &ipic 2 8>;
interrupt-map = <0 0 0 1 &ipic 2 IRQ_TYPE_LEVEL_LOW
0 0 0 2 &ipic 2 IRQ_TYPE_LEVEL_LOW
0 0 0 3 &ipic 2 IRQ_TYPE_LEVEL_LOW
0 0 0 4 &ipic 2 IRQ_TYPE_LEVEL_LOW>;
clock-frequency = <0>;
pcie@0 {
@ -471,12 +480,12 @@ pcie@0 {
leds {
compatible = "gpio-leds";
pwr {
led-pwr {
gpios = <&mcu_pio 0 0>;
default-state = "on";
};
hdd {
led-hdd {
gpios = <&mcu_pio 1 0>;
linux,default-trigger = "disk-activity";
};

View File

@ -38,7 +38,7 @@ PowerPC,8323@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x04000000>;
};

View File

@ -39,7 +39,7 @@ PowerPC,8349@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x10000000>;
};

View File

@ -37,7 +37,7 @@ PowerPC,8349@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x10000000>;
};

View File

@ -39,7 +39,7 @@ PowerPC,8377@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x10000000>; // 256MB at 0
};

View File

@ -40,7 +40,7 @@ PowerPC,8377@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x20000000>; // 512MB at 0
};

View File

@ -39,7 +39,7 @@ PowerPC,8378@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x10000000>; // 256MB at 0
};

View File

@ -37,7 +37,7 @@ PowerPC,8379@0 {
};
};
memory {
memory@0 {
device_type = "memory";
reg = <0x00000000 0x10000000>; // 256MB at 0
};

View File

@ -120,10 +120,8 @@
#if defined(CONFIG_44x)
#include <asm/nohash/32/pte-44x.h>
#elif defined(CONFIG_PPC_85xx) && defined(CONFIG_PTE_64BIT)
#include <asm/nohash/pte-e500.h>
#elif defined(CONFIG_PPC_85xx)
#include <asm/nohash/32/pte-85xx.h>
#include <asm/nohash/pte-e500.h>
#elif defined(CONFIG_PPC_8xx)
#include <asm/nohash/32/pte-8xx.h>
#endif

View File

@ -1,59 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_NOHASH_32_PTE_85xx_H
#define _ASM_POWERPC_NOHASH_32_PTE_85xx_H
#ifdef __KERNEL__
/* PTE bit definitions for Freescale BookE SW loaded TLB MMU based
* processors
*
MMU Assist Register 3:
32 33 34 35 36 ... 50 51 52 53 54 55 56 57 58 59 60 61 62 63
RPN...................... 0 0 U0 U1 U2 U3 UX SX UW SW UR SR
- PRESENT *must* be in the bottom two bits because swap PTEs use
the top 30 bits.
*/
/* Definitions for FSL Book-E Cores */
#define _PAGE_READ 0x00001 /* H: Read permission (SR) */
#define _PAGE_PRESENT 0x00002 /* S: PTE contains a translation */
#define _PAGE_WRITE 0x00004 /* S: Write permission (SW) */
#define _PAGE_DIRTY 0x00008 /* S: Page dirty */
#define _PAGE_EXEC 0x00010 /* H: SX permission */
#define _PAGE_ACCESSED 0x00020 /* S: Page referenced */
#define _PAGE_ENDIAN 0x00040 /* H: E bit */
#define _PAGE_GUARDED 0x00080 /* H: G bit */
#define _PAGE_COHERENT 0x00100 /* H: M bit */
#define _PAGE_NO_CACHE 0x00200 /* H: I bit */
#define _PAGE_WRITETHRU 0x00400 /* H: W bit */
#define _PAGE_SPECIAL 0x00800 /* S: Special page */
#define _PMD_PRESENT 0
#define _PMD_PRESENT_MASK (PAGE_MASK)
#define _PMD_BAD (~PAGE_MASK)
#define _PMD_USER 0
#define _PTE_NONE_MASK 0
#define PTE_WIMGE_SHIFT (6)
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED)
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
#else
#define _PAGE_BASE (_PAGE_BASE_NC)
#endif
#include <asm/pgtable-masks.h>
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_NOHASH_32_PTE_FSL_85xx_H */

View File

@ -49,7 +49,7 @@ static inline unsigned long pud_val(pud_t x)
#endif /* CONFIG_PPC64 */
/* PGD level */
#if defined(CONFIG_PPC_85xx) && defined(CONFIG_PTE_64BIT)
#if defined(CONFIG_PPC_85xx)
typedef struct { unsigned long long pgd; } pgd_t;
static inline unsigned long long pgd_val(pgd_t x)

View File

@ -15,6 +15,9 @@
#define TASK_SIZE_MAX TASK_SIZE_USER64
#endif
/* Threshold above which VMX copy path is used */
#define VMX_COPY_THRESHOLD 3328
#include <asm-generic/access_ok.h>
/*
@ -255,7 +258,7 @@ __gus_failed: \
".section .fixup,\"ax\"\n" \
"4: li %0,%3\n" \
" li %1,0\n" \
" li %1+1,0\n" \
" li %L1,0\n" \
" b 3b\n" \
".previous\n" \
EX_TABLE(1b, 4b) \
@ -326,40 +329,62 @@ do { \
extern unsigned long __copy_tofrom_user(void __user *to,
const void __user *from, unsigned long size);
#ifdef __powerpc64__
unsigned long __copy_tofrom_user_base(void __user *to,
const void __user *from, unsigned long size);
unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
const void __user *from, unsigned long size);
static __always_inline bool will_use_vmx(unsigned long n)
{
return IS_ENABLED(CONFIG_ALTIVEC) && cpu_has_feature(CPU_FTR_VMX_COPY) &&
n > VMX_COPY_THRESHOLD;
}
static __always_inline unsigned long
raw_copy_tofrom_user(void __user *to, const void __user *from,
unsigned long n, unsigned long dir)
{
unsigned long ret;
if (will_use_vmx(n) && enter_vmx_usercopy()) {
allow_user_access(to, dir);
ret = __copy_tofrom_user_power7_vmx(to, from, n);
prevent_user_access(dir);
exit_vmx_usercopy();
if (unlikely(ret)) {
allow_user_access(to, dir);
ret = __copy_tofrom_user_base(to, from, n);
prevent_user_access(dir);
}
return ret;
}
allow_user_access(to, dir);
ret = __copy_tofrom_user(to, from, n);
prevent_user_access(dir);
return ret;
}
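Putting the pieces of this hunk together, the copy-path selection reduces to roughly the following (a summary for illustration, assuming CONFIG_ALTIVEC and a CPU with CPU_FTR_VMX_COPY):

/*
 *   n <= VMX_COPY_THRESHOLD (3328 bytes)        -> __copy_tofrom_user()            (integer copy)
 *   n  > threshold and enter_vmx_usercopy() ok  -> __copy_tofrom_user_power7_vmx() (VMX copy)
 *   VMX copy returned non-zero (faulted)        -> whole copy retried via __copy_tofrom_user_base()
 */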
#ifdef CONFIG_PPC64
static inline unsigned long
raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
{
unsigned long ret;
barrier_nospec();
allow_user_access(to, KUAP_READ_WRITE);
ret = __copy_tofrom_user(to, from, n);
prevent_user_access(KUAP_READ_WRITE);
return ret;
return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
}
#endif /* __powerpc64__ */
#endif /* CONFIG_PPC64 */
static inline unsigned long raw_copy_from_user(void *to,
const void __user *from, unsigned long n)
static inline unsigned long raw_copy_from_user(void *to, const void __user *from, unsigned long n)
{
unsigned long ret;
allow_user_access(NULL, KUAP_READ);
ret = __copy_tofrom_user((__force void __user *)to, from, n);
prevent_user_access(KUAP_READ);
return ret;
return raw_copy_tofrom_user((__force void __user *)to, from, n, KUAP_READ);
}
static inline unsigned long
raw_copy_to_user(void __user *to, const void *from, unsigned long n)
{
unsigned long ret;
allow_user_access(to, KUAP_WRITE);
ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
prevent_user_access(KUAP_WRITE);
return ret;
return raw_copy_tofrom_user(to, (__force const void __user *)from, n, KUAP_WRITE);
}
unsigned long __arch_clear_user(void __user *addr, unsigned long size);

View File

@ -305,7 +305,6 @@ set_ivor:
* r12 is pointer to the pte
* r10 is the pshift from the PGD, if we're a hugepage
*/
#ifdef CONFIG_PTE_64BIT
#ifdef CONFIG_HUGETLB_PAGE
#define FIND_PTE \
rlwinm r12, r13, 14, 18, 28; /* Compute pgdir/pmd offset */ \
@ -329,15 +328,6 @@ set_ivor:
rlwimi r12, r13, 23, 20, 28; /* Compute pte address */ \
lwz r11, 4(r12); /* Get pte entry */
#endif /* HUGEPAGE */
#else /* !PTE_64BIT */
#define FIND_PTE \
rlwimi r11, r13, 12, 20, 29; /* Create L1 (pgdir/pmd) address */ \
lwz r11, 0(r11); /* Get L1 entry */ \
rlwinm. r12, r11, 0, 0, 19; /* Extract L2 (pte) base address */ \
beq 2f; /* Bail if no table */ \
rlwimi r12, r13, 22, 20, 29; /* Compute PTE address */ \
lwz r11, 0(r12); /* Get Linux PTE */
#endif
/*
* Interrupt vector entry code
@ -473,21 +463,15 @@ END_BTB_FLUSH_SECTION
4:
FIND_PTE
#ifdef CONFIG_PTE_64BIT
li r13,_PAGE_PRESENT|_PAGE_BAP_SR
oris r13,r13,_PAGE_ACCESSED@h
#else
li r13,_PAGE_PRESENT|_PAGE_READ|_PAGE_ACCESSED
#endif
andc. r13,r13,r11 /* Check permission */
#ifdef CONFIG_PTE_64BIT
#ifdef CONFIG_SMP
subf r13,r11,r12 /* create false data dep */
lwzx r13,r11,r13 /* Get upper pte bits */
#else
lwz r13,0(r12) /* Get upper pte bits */
#endif
#endif
bne 2f /* Bail if permission/valid mismatch */
@ -552,12 +536,8 @@ END_BTB_FLUSH_SECTION
FIND_PTE
/* Make up the required permissions for kernel code */
#ifdef CONFIG_PTE_64BIT
li r13,_PAGE_PRESENT | _PAGE_BAP_SX
oris r13,r13,_PAGE_ACCESSED@h
#else
li r13,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
#endif
b 4f
/* Get the PGD for the current thread */
@ -573,23 +553,17 @@ END_BTB_FLUSH_SECTION
FIND_PTE
/* Make up the required permissions for user code */
#ifdef CONFIG_PTE_64BIT
li r13,_PAGE_PRESENT | _PAGE_BAP_UX
oris r13,r13,_PAGE_ACCESSED@h
#else
li r13,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
#endif
4:
andc. r13,r13,r11 /* Check permission */
#ifdef CONFIG_PTE_64BIT
#ifdef CONFIG_SMP
subf r13,r11,r12 /* create false data dep */
lwzx r13,r11,r13 /* Get upper pte bits */
#else
lwz r13,0(r12) /* Get upper pte bits */
#endif
#endif
bne 2f /* Bail if permission mismatch */
@ -683,7 +657,7 @@ interrupt_end:
* r10 - tsize encoding (if HUGETLB_PAGE) or available to use
* r11 - TLB (info from Linux PTE)
* r12 - available to use
* r13 - upper bits of PTE (if PTE_64BIT) or available to use
* r13 - upper bits of PTE
* CR5 - results of addr >= PAGE_OFFSET
* MAS0, MAS1 - loaded with proper value when we get here
* MAS2, MAS3 - will need additional info from Linux PTE
@ -751,7 +725,6 @@ finish_tlb_load:
* here we (properly should) assume have the appropriate value.
*/
finish_tlb_load_cont:
#ifdef CONFIG_PTE_64BIT
rlwinm r12, r11, 32-2, 26, 31 /* Move in perm bits */
andi. r10, r11, _PAGE_DIRTY
bne 1f
@ -764,26 +737,9 @@ BEGIN_MMU_FTR_SECTION
srwi r10, r13, 12 /* grab RPN[12:31] */
mtspr SPRN_MAS7, r10
END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
#else
li r10, (_PAGE_EXEC | _PAGE_READ)
mr r13, r11
rlwimi r10, r11, 31, 29, 29 /* extract _PAGE_DIRTY into SW */
and r12, r11, r10
mcrf cr0, cr5 /* Test for user page */
slwi r10, r12, 1
or r10, r10, r12
rlwinm r10, r10, 0, ~_PAGE_EXEC /* Clear SX on user pages */
isellt r12, r10, r12
rlwimi r13, r12, 0, 20, 31 /* Get RPN from PTE, merge w/ perms */
mtspr SPRN_MAS3, r13
#endif
mfspr r12, SPRN_MAS2
#ifdef CONFIG_PTE_64BIT
rlwimi r12, r11, 32-19, 27, 31 /* extract WIMGE from pte */
#else
rlwimi r12, r11, 26, 27, 31 /* extract WIMGE from pte */
#endif
#ifdef CONFIG_HUGETLB_PAGE
beq 6, 3f /* don't mask if page isn't huge */
li r13, 1

View File

@ -1159,7 +1159,7 @@ spapr_tce_platform_iommu_attach_dev(struct iommu_domain *platform_domain,
struct device *dev,
struct iommu_domain *old)
{
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
struct iommu_domain *domain = iommu_driver_get_domain_for_dev(dev);
struct iommu_table_group *table_group;
struct iommu_group *grp;

View File

@ -2893,7 +2893,8 @@ static void __init fixup_device_tree_pmac(void)
for (node = 0; prom_next_node(&node); ) {
type[0] = '\0';
prom_getprop(node, "device_type", type, sizeof(type));
if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))
if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s") &&
prom_strcmp(type, "media-bay"))
continue;
if (prom_getproplen(node, "#size-cells") != PROM_ERROR)

View File

@ -35,7 +35,6 @@
#include <linux/of_irq.h>
#include <linux/hugetlb.h>
#include <linux/pgtable.h>
#include <asm/kexec.h>
#include <asm/io.h>
#include <asm/paca.h>
#include <asm/processor.h>
@ -995,15 +994,6 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
/*
* Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and
* hugetlb. These must be called after initmem_init(), so that
* pageblock_order is initialised.
*/
fadump_cma_init();
kdump_cma_reserve();
kvm_cma_reserve();
early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
if (ppc_md.setup_arch)

View File

@ -37,11 +37,29 @@ unsigned long ftrace_call_adjust(unsigned long addr)
if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end)
return 0;
if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) &&
!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
addr += MCOUNT_INSN_SIZE;
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) {
if (!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
addr += MCOUNT_INSN_SIZE;
if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS))
addr += MCOUNT_INSN_SIZE;
} else if (IS_ENABLED(CONFIG_CC_IS_CLANG) && IS_ENABLED(CONFIG_PPC64)) {
/*
* addr points to global entry point though the NOP was emitted at local
* entry point due to https://github.com/llvm/llvm-project/issues/163706
* Handle that here with ppc_function_entry() for kernel symbols while
* adjusting module addresses in the else case, by looking for the below
* module global entry point sequence:
* ld r2, -8(r12)
* add r2, r2, r12
*/
if (is_kernel_text(addr) || is_kernel_inittext(addr))
addr = ppc_function_entry((void *)addr);
else if ((ppc_inst_val(ppc_inst_read((u32 *)addr)) ==
PPC_RAW_LD(_R2, _R12, -8)) &&
(ppc_inst_val(ppc_inst_read((u32 *)(addr+4))) ==
PPC_RAW_ADD(_R2, _R2, _R12)))
addr += 8;
}
}
return addr;
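The two-instruction sequence checked above is the standard ELFv2 global entry point of a module function. A rough sketch of the layout this check assumes (illustrative only, not taken from the patch):

/*
 * func:                        <- global entry point (what addr points at here)
 *         ld      r2,-8(r12)      set up the module's TOC pointer from r12
 *         add     r2,r2,r12
 * (local entry, .localentry)   <- where the patchable-function-entry NOP(s) were
 *         nop                      emitted; hence the addr += 8 adjustment above
 *         ...
 */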

View File

@ -23,6 +23,7 @@
#include <asm/firmware.h>
#define cpu_to_be_ulong __PASTE(cpu_to_be, BITS_PER_LONG)
#define __be_word __PASTE(__be, BITS_PER_LONG)
#ifdef CONFIG_CRASH_DUMP
void machine_crash_shutdown(struct pt_regs *regs)
@ -146,25 +147,25 @@ int __init overlaps_crashkernel(unsigned long start, unsigned long size)
}
/* Values we need to export to the second kernel via the device tree. */
static phys_addr_t crashk_base;
static phys_addr_t crashk_size;
static unsigned long long mem_limit;
static __be_word crashk_base;
static __be_word crashk_size;
static __be_word mem_limit;
static struct property crashk_base_prop = {
.name = "linux,crashkernel-base",
.length = sizeof(phys_addr_t),
.length = sizeof(__be_word),
.value = &crashk_base
};
static struct property crashk_size_prop = {
.name = "linux,crashkernel-size",
.length = sizeof(phys_addr_t),
.length = sizeof(__be_word),
.value = &crashk_size,
};
static struct property memory_limit_prop = {
.name = "linux,memory-limit",
.length = sizeof(unsigned long long),
.length = sizeof(__be_word),
.value = &mem_limit,
};
@ -193,11 +194,11 @@ static void __init export_crashk_values(struct device_node *node)
}
#endif /* CONFIG_CRASH_RESERVE */
static phys_addr_t kernel_end;
static __be_word kernel_end;
static struct property kernel_end_prop = {
.name = "linux,kernel-end",
.length = sizeof(phys_addr_t),
.length = sizeof(__be_word),
.value = &kernel_end,
};
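The properties above end up in the device tree handed to the next kernel, so their payloads must be big-endian and sized to the word length. A minimal sketch of how the helpers resolve and how such a property payload would be filled, assuming a 64-bit build (the names example_end/example_fill are made up for illustration, not quoted from the patch):

/* __PASTE(cpu_to_be, 64) -> cpu_to_be64,  __PASTE(__be, 64) -> __be64 */
static __be_word example_end;                /* hypothetical property payload */

static void example_fill(phys_addr_t end)
{
	example_end = cpu_to_be_ulong(end);  /* store in device-tree (big-endian) byte order */
}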

View File

@ -450,6 +450,11 @@ static int load_elfcorehdr_segment(struct kimage *image, struct kexec_buf *kbuf)
kbuf->buffer = headers;
kbuf->mem = KEXEC_BUF_MEM_UNKNOWN;
kbuf->bufsz = headers_sz;
/*
* Account for extra space required to accommodate additional memory
* ranges in elfcorehdr due to memory hotplug events.
*/
kbuf->memsz = headers_sz + kdump_extra_elfcorehdr_size(cmem);
kbuf->top_down = false;
@ -460,7 +465,14 @@ static int load_elfcorehdr_segment(struct kimage *image, struct kexec_buf *kbuf)
}
image->elf_load_addr = kbuf->mem;
image->elf_headers_sz = headers_sz;
/*
* If CONFIG_CRASH_HOTPLUG is enabled, the elfcorehdr kexec segment
* memsz can be larger than bufsz. Always initialize elf_headers_sz
* with memsz. This ensures the correct size is reserved for elfcorehdr
* memory in the FDT prepared for kdump.
*/
image->elf_headers_sz = kbuf->memsz;
image->elf_headers = headers;
out:
kfree(cmem);

View File

@ -38,7 +38,7 @@
/* #define EXIT_DEBUG */
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS(),
STATS_DESC_ICOUNTER(VM, num_2M_pages),
STATS_DESC_ICOUNTER(VM, num_1G_pages)
@ -53,7 +53,7 @@ const struct kvm_stats_header kvm_vm_stats_header = {
sizeof(kvm_vm_stats_desc),
};
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, sum_exits),
STATS_DESC_COUNTER(VCPU, mmio_exits),

View File

@ -36,7 +36,7 @@
unsigned long kvmppc_booke_handlers;
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS(),
STATS_DESC_ICOUNTER(VM, num_2M_pages),
STATS_DESC_ICOUNTER(VM, num_1G_pages)
@ -51,7 +51,7 @@ const struct kvm_stats_header kvm_vm_stats_header = {
sizeof(kvm_vm_stats_desc),
};
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, sum_exits),
STATS_DESC_COUNTER(VCPU, mmio_exits),

View File

@ -39,15 +39,11 @@ enum vcpu_ftr {
/* bits [6-5] MAS2_X1 and MAS2_X0 and [4-0] bits for WIMGE */
#define E500_TLB_MAS2_ATTR (0x7f)
struct tlbe_ref {
struct tlbe_priv {
kvm_pfn_t pfn; /* valid only for TLB0, except briefly */
unsigned int flags; /* E500_TLB_* */
};
struct tlbe_priv {
struct tlbe_ref ref;
};
#ifdef CONFIG_KVM_E500V2
struct vcpu_id_table;
#endif

View File

@ -920,12 +920,12 @@ int kvmppc_e500_tlb_init(struct kvmppc_vcpu_e500 *vcpu_e500)
vcpu_e500->gtlb_offset[0] = 0;
vcpu_e500->gtlb_offset[1] = KVM_E500_TLB0_SIZE;
vcpu_e500->gtlb_priv[0] = kzalloc_objs(struct tlbe_ref,
vcpu_e500->gtlb_priv[0] = kzalloc_objs(struct tlbe_priv,
vcpu_e500->gtlb_params[0].entries);
if (!vcpu_e500->gtlb_priv[0])
goto free_vcpu;
vcpu_e500->gtlb_priv[1] = kzalloc_objs(struct tlbe_ref,
vcpu_e500->gtlb_priv[1] = kzalloc_objs(struct tlbe_priv,
vcpu_e500->gtlb_params[1].entries);
if (!vcpu_e500->gtlb_priv[1])
goto free_vcpu;

View File

@ -189,16 +189,16 @@ void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
{
struct kvm_book3e_206_tlb_entry *gtlbe =
get_entry(vcpu_e500, tlbsel, esel);
struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[tlbsel][esel].ref;
struct tlbe_priv *tlbe = &vcpu_e500->gtlb_priv[tlbsel][esel];
/* Don't bother with unmapped entries */
if (!(ref->flags & E500_TLB_VALID)) {
WARN(ref->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),
"%s: flags %x\n", __func__, ref->flags);
if (!(tlbe->flags & E500_TLB_VALID)) {
WARN(tlbe->flags & (E500_TLB_BITMAP | E500_TLB_TLB0),
"%s: flags %x\n", __func__, tlbe->flags);
WARN_ON(tlbsel == 1 && vcpu_e500->g2h_tlb1_map[esel]);
}
if (tlbsel == 1 && ref->flags & E500_TLB_BITMAP) {
if (tlbsel == 1 && tlbe->flags & E500_TLB_BITMAP) {
u64 tmp = vcpu_e500->g2h_tlb1_map[esel];
int hw_tlb_indx;
unsigned long flags;
@ -216,28 +216,28 @@ void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
}
mb();
vcpu_e500->g2h_tlb1_map[esel] = 0;
ref->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);
tlbe->flags &= ~(E500_TLB_BITMAP | E500_TLB_VALID);
local_irq_restore(flags);
}
if (tlbsel == 1 && ref->flags & E500_TLB_TLB0) {
if (tlbsel == 1 && tlbe->flags & E500_TLB_TLB0) {
/*
* TLB1 entry is backed by 4k pages. This should happen
* rarely and is not worth optimizing. Invalidate everything.
*/
kvmppc_e500_tlbil_all(vcpu_e500);
ref->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);
tlbe->flags &= ~(E500_TLB_TLB0 | E500_TLB_VALID);
}
/*
* If TLB entry is still valid then it's a TLB0 entry, and thus
* backed by at most one host tlbe per shadow pid
*/
if (ref->flags & E500_TLB_VALID)
if (tlbe->flags & E500_TLB_VALID)
kvmppc_e500_tlbil_one(vcpu_e500, gtlbe);
/* Mark the TLB as not backed by the host anymore */
ref->flags = 0;
tlbe->flags = 0;
}
static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
@ -245,26 +245,26 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
}
static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
struct kvm_book3e_206_tlb_entry *gtlbe,
kvm_pfn_t pfn, unsigned int wimg,
bool writable)
static inline void kvmppc_e500_tlbe_setup(struct tlbe_priv *tlbe,
struct kvm_book3e_206_tlb_entry *gtlbe,
kvm_pfn_t pfn, unsigned int wimg,
bool writable)
{
ref->pfn = pfn;
ref->flags = E500_TLB_VALID;
tlbe->pfn = pfn;
tlbe->flags = E500_TLB_VALID;
if (writable)
ref->flags |= E500_TLB_WRITABLE;
tlbe->flags |= E500_TLB_WRITABLE;
/* Use guest supplied MAS2_G and MAS2_E */
ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
tlbe->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
}
static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
static inline void kvmppc_e500_tlbe_release(struct tlbe_priv *tlbe)
{
if (ref->flags & E500_TLB_VALID) {
if (tlbe->flags & E500_TLB_VALID) {
/* FIXME: don't log bogus pfn for TLB1 */
trace_kvm_booke206_ref_release(ref->pfn, ref->flags);
ref->flags = 0;
trace_kvm_booke206_ref_release(tlbe->pfn, tlbe->flags);
tlbe->flags = 0;
}
}
@ -284,11 +284,8 @@ static void clear_tlb_privs(struct kvmppc_vcpu_e500 *vcpu_e500)
int i;
for (tlbsel = 0; tlbsel <= 1; tlbsel++) {
for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++) {
struct tlbe_ref *ref =
&vcpu_e500->gtlb_priv[tlbsel][i].ref;
kvmppc_e500_ref_release(ref);
}
for (i = 0; i < vcpu_e500->gtlb_params[tlbsel].entries; i++)
kvmppc_e500_tlbe_release(&vcpu_e500->gtlb_priv[tlbsel][i]);
}
}
@ -304,18 +301,18 @@ void kvmppc_core_flush_tlb(struct kvm_vcpu *vcpu)
static void kvmppc_e500_setup_stlbe(
struct kvm_vcpu *vcpu,
struct kvm_book3e_206_tlb_entry *gtlbe,
int tsize, struct tlbe_ref *ref, u64 gvaddr,
int tsize, struct tlbe_priv *tlbe, u64 gvaddr,
struct kvm_book3e_206_tlb_entry *stlbe)
{
kvm_pfn_t pfn = ref->pfn;
kvm_pfn_t pfn = tlbe->pfn;
u32 pr = vcpu->arch.shared->msr & MSR_PR;
bool writable = !!(ref->flags & E500_TLB_WRITABLE);
bool writable = !!(tlbe->flags & E500_TLB_WRITABLE);
BUG_ON(!(ref->flags & E500_TLB_VALID));
BUG_ON(!(tlbe->flags & E500_TLB_VALID));
/* Force IPROT=0 for all guest mappings. */
stlbe->mas1 = MAS1_TSIZE(tsize) | get_tlb_sts(gtlbe) | MAS1_VALID;
stlbe->mas2 = (gvaddr & MAS2_EPN) | (ref->flags & E500_TLB_MAS2_ATTR);
stlbe->mas2 = (gvaddr & MAS2_EPN) | (tlbe->flags & E500_TLB_MAS2_ATTR);
stlbe->mas7_3 = ((u64)pfn << PAGE_SHIFT) |
e500_shadow_mas3_attrib(gtlbe->mas7_3, writable, pr);
}
@ -323,7 +320,7 @@ static void kvmppc_e500_setup_stlbe(
static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,
int tlbsel, struct kvm_book3e_206_tlb_entry *stlbe,
struct tlbe_ref *ref)
struct tlbe_priv *tlbe)
{
struct kvm_memory_slot *slot;
unsigned int psize;
@ -455,9 +452,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
}
}
kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg, writable);
kvmppc_e500_tlbe_setup(tlbe, gtlbe, pfn, wimg, writable);
kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
ref, gvaddr, stlbe);
tlbe, gvaddr, stlbe);
writable = tlbe_is_writable(stlbe);
/* Clear i-cache for new pages */
@ -474,17 +471,17 @@ static int kvmppc_e500_tlb0_map(struct kvmppc_vcpu_e500 *vcpu_e500, int esel,
struct kvm_book3e_206_tlb_entry *stlbe)
{
struct kvm_book3e_206_tlb_entry *gtlbe;
struct tlbe_ref *ref;
struct tlbe_priv *tlbe;
int stlbsel = 0;
int sesel = 0;
int r;
gtlbe = get_entry(vcpu_e500, 0, esel);
ref = &vcpu_e500->gtlb_priv[0][esel].ref;
tlbe = &vcpu_e500->gtlb_priv[0][esel];
r = kvmppc_e500_shadow_map(vcpu_e500, get_tlb_eaddr(gtlbe),
get_tlb_raddr(gtlbe) >> PAGE_SHIFT,
gtlbe, 0, stlbe, ref);
gtlbe, 0, stlbe, tlbe);
if (r)
return r;
@ -494,7 +491,7 @@ static int kvmppc_e500_tlb0_map(struct kvmppc_vcpu_e500 *vcpu_e500, int esel,
}
static int kvmppc_e500_tlb1_map_tlb1(struct kvmppc_vcpu_e500 *vcpu_e500,
struct tlbe_ref *ref,
struct tlbe_priv *tlbe,
int esel)
{
unsigned int sesel = vcpu_e500->host_tlb1_nv++;
@ -507,10 +504,10 @@ static int kvmppc_e500_tlb1_map_tlb1(struct kvmppc_vcpu_e500 *vcpu_e500,
vcpu_e500->g2h_tlb1_map[idx] &= ~(1ULL << sesel);
}
vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_BITMAP;
vcpu_e500->gtlb_priv[1][esel].flags |= E500_TLB_BITMAP;
vcpu_e500->g2h_tlb1_map[esel] |= (u64)1 << sesel;
vcpu_e500->h2g_tlb1_rmap[sesel] = esel + 1;
WARN_ON(!(ref->flags & E500_TLB_VALID));
WARN_ON(!(tlbe->flags & E500_TLB_VALID));
return sesel;
}
@ -522,24 +519,24 @@ static int kvmppc_e500_tlb1_map(struct kvmppc_vcpu_e500 *vcpu_e500,
u64 gvaddr, gfn_t gfn, struct kvm_book3e_206_tlb_entry *gtlbe,
struct kvm_book3e_206_tlb_entry *stlbe, int esel)
{
struct tlbe_ref *ref = &vcpu_e500->gtlb_priv[1][esel].ref;
struct tlbe_priv *tlbe = &vcpu_e500->gtlb_priv[1][esel];
int sesel;
int r;
r = kvmppc_e500_shadow_map(vcpu_e500, gvaddr, gfn, gtlbe, 1, stlbe,
ref);
tlbe);
if (r)
return r;
/* Use TLB0 when we can only map a page with 4k */
if (get_tlb_tsize(stlbe) == BOOK3E_PAGESZ_4K) {
vcpu_e500->gtlb_priv[1][esel].ref.flags |= E500_TLB_TLB0;
vcpu_e500->gtlb_priv[1][esel].flags |= E500_TLB_TLB0;
write_stlbe(vcpu_e500, gtlbe, stlbe, 0, 0);
return 0;
}
/* Otherwise map into TLB1 */
sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, ref, esel);
sesel = kvmppc_e500_tlb1_map_tlb1(vcpu_e500, tlbe, esel);
write_stlbe(vcpu_e500, gtlbe, stlbe, 1, sesel);
return 0;
@ -561,11 +558,11 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
priv = &vcpu_e500->gtlb_priv[tlbsel][esel];
/* Triggers after clear_tlb_privs or on initial mapping */
if (!(priv->ref.flags & E500_TLB_VALID)) {
if (!(priv->flags & E500_TLB_VALID)) {
kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);
} else {
kvmppc_e500_setup_stlbe(vcpu, gtlbe, BOOK3E_PAGESZ_4K,
&priv->ref, eaddr, &stlbe);
priv, eaddr, &stlbe);
write_stlbe(vcpu_e500, gtlbe, &stlbe, 0, 0);
}
break;

View File

@ -562,3 +562,4 @@ exc; std r10,32(3)
li r5,4096
b .Ldst_aligned
EXPORT_SYMBOL(__copy_tofrom_user)
EXPORT_SYMBOL(__copy_tofrom_user_base)

View File

@ -5,13 +5,9 @@
*
* Author: Anton Blanchard <anton@au.ibm.com>
*/
#include <linux/export.h>
#include <asm/ppc_asm.h>
#ifndef SELFTEST_CASE
/* 0 == don't use VMX, 1 == use VMX */
#define SELFTEST_CASE 0
#endif
#ifdef __BIG_ENDIAN__
#define LVS(VRT,RA,RB) lvsl VRT,RA,RB
#define VPERM(VRT,VRA,VRB,VRC) vperm VRT,VRA,VRB,VRC
@ -47,10 +43,14 @@
ld r15,STK_REG(R15)(r1)
ld r14,STK_REG(R14)(r1)
.Ldo_err3:
bl CFUNC(exit_vmx_usercopy)
ld r6,STK_REG(R31)(r1) /* original destination pointer */
ld r5,STK_REG(R29)(r1) /* original number of bytes */
subf r7,r6,r3 /* #bytes copied */
subf r3,r7,r5 /* #bytes not copied in r3 */
ld r0,STACKFRAMESIZE+16(r1)
mtlr r0
b .Lexit
addi r1,r1,STACKFRAMESIZE
blr
#endif /* CONFIG_ALTIVEC */
.Ldo_err2:
@ -74,7 +74,6 @@
_GLOBAL(__copy_tofrom_user_power7)
cmpldi r5,16
cmpldi cr1,r5,3328
std r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
std r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
@ -82,12 +81,6 @@ _GLOBAL(__copy_tofrom_user_power7)
blt .Lshort_copy
#ifdef CONFIG_ALTIVEC
test_feature = SELFTEST_CASE
BEGIN_FTR_SECTION
bgt cr1,.Lvmx_copy
END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
#endif
.Lnonvmx_copy:
/* Get the source 8B aligned */
@ -263,23 +256,14 @@ err1; stb r0,0(r3)
15: li r3,0
blr
.Lunwind_stack_nonvmx_copy:
addi r1,r1,STACKFRAMESIZE
b .Lnonvmx_copy
.Lvmx_copy:
#ifdef CONFIG_ALTIVEC
_GLOBAL(__copy_tofrom_user_power7_vmx)
mflr r0
std r0,16(r1)
stdu r1,-STACKFRAMESIZE(r1)
bl CFUNC(enter_vmx_usercopy)
cmpwi cr1,r3,0
ld r0,STACKFRAMESIZE+16(r1)
ld r3,STK_REG(R31)(r1)
ld r4,STK_REG(R30)(r1)
ld r5,STK_REG(R29)(r1)
mtlr r0
std r3,STK_REG(R31)(r1)
std r5,STK_REG(R29)(r1)
/*
* We prefetch both the source and destination using enhanced touch
* instructions. We use a stream ID of 0 for the load side and
@ -300,8 +284,6 @@ err1; stb r0,0(r3)
DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)
beq cr1,.Lunwind_stack_nonvmx_copy
/*
* If source and destination are not relatively aligned we use a
* slower permute loop.
@ -478,7 +460,8 @@ err3; lbz r0,0(r4)
err3; stb r0,0(r3)
15: addi r1,r1,STACKFRAMESIZE
b CFUNC(exit_vmx_usercopy) /* tail call optimise */
li r3,0
blr
.Lvmx_unaligned_copy:
/* Get the destination 16B aligned */
@ -681,5 +664,7 @@ err3; lbz r0,0(r4)
err3; stb r0,0(r3)
15: addi r1,r1,STACKFRAMESIZE
b CFUNC(exit_vmx_usercopy) /* tail call optimise */
li r3,0
blr
EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)
#endif /* CONFIG_ALTIVEC */

View File

@ -27,6 +27,7 @@ int enter_vmx_usercopy(void)
return 1;
}
EXPORT_SYMBOL(enter_vmx_usercopy);
/*
* This function must return 0 because we tail call optimise when calling
@ -49,6 +50,7 @@ int exit_vmx_usercopy(void)
set_dec(1);
return 0;
}
EXPORT_SYMBOL(exit_vmx_usercopy);
int enter_vmx_ops(void)
{

View File

@ -30,6 +30,10 @@
#include <asm/setup.h>
#include <asm/fixmap.h>
#include <asm/fadump.h>
#include <asm/kexec.h>
#include <asm/kvm_ppc.h>
#include <mm/mmu_decl.h>
unsigned long long memory_limit __initdata;
@ -268,6 +272,16 @@ void __init paging_init(void)
void __init arch_mm_preinit(void)
{
/*
* Reserve large chunks of memory for use by CMA for kdump, fadump, KVM
* and hugetlb. These must be called after pageblock_order is
* initialised.
*/
fadump_cma_init();
kdump_cma_reserve();
kvm_cma_reserve();
/*
* book3s is limited to 16 page sizes due to encoding this in
* a 4-bit field for slices.

View File

@ -81,9 +81,6 @@
#ifdef CONFIG_PPC64
/* for gpr non volatile registers BPG_REG_6 to 10 */
#define BPF_PPC_STACK_SAVE (6 * 8)
/* If dummy pass (!image), account for maximum possible instructions */
#define PPC_LI64(d, i) do { \
if (!image) \
@ -219,8 +216,6 @@ int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg,
int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass,
struct codegen_context *ctx, int insn_idx,
int jmp_off, int dst_reg, u32 code);
int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx);
#endif
#endif

View File

@ -450,7 +450,7 @@ bool bpf_jit_supports_subprog_tailcalls(void)
bool bpf_jit_supports_kfunc_call(void)
{
return true;
return IS_ENABLED(CONFIG_PPC64);
}
bool bpf_jit_supports_arena(void)
@ -638,19 +638,12 @@ static int invoke_bpf_mod_ret(u32 *image, u32 *ro_image, struct codegen_context
* for the traced function (BPF subprog/callee) to fetch it.
*/
static void bpf_trampoline_setup_tail_call_info(u32 *image, struct codegen_context *ctx,
int func_frame_offset,
int bpf_dummy_frame_size, int r4_off)
int bpf_frame_size, int r4_off)
{
if (IS_ENABLED(CONFIG_PPC64)) {
/* See Generated stack layout */
int tailcallinfo_offset = BPF_PPC_TAILCALL;
/*
* func_frame_offset = ...(1)
* bpf_dummy_frame_size + trampoline_frame_size
*/
EMIT(PPC_RAW_LD(_R4, _R1, func_frame_offset));
EMIT(PPC_RAW_LD(_R3, _R4, -tailcallinfo_offset));
EMIT(PPC_RAW_LD(_R4, _R1, bpf_frame_size));
/* Refer to trampoline's Generated stack layout */
EMIT(PPC_RAW_LD(_R3, _R4, -BPF_PPC_TAILCALL));
/*
* Setting the tail_call_info in trampoline's frame
@ -658,22 +651,14 @@ static void bpf_trampoline_setup_tail_call_info(u32 *image, struct codegen_conte
*/
EMIT(PPC_RAW_CMPLWI(_R3, MAX_TAIL_CALL_CNT));
PPC_BCC_CONST_SHORT(COND_GT, 8);
EMIT(PPC_RAW_ADDI(_R3, _R4, bpf_jit_stack_tailcallinfo_offset(ctx)));
EMIT(PPC_RAW_ADDI(_R3, _R4, -BPF_PPC_TAILCALL));
/*
* From ...(1) above:
* trampoline_frame_bottom = ...(2)
* func_frame_offset - bpf_dummy_frame_size
*
* Using ...(2) derived above:
* trampoline_tail_call_info_offset = ...(3)
* trampoline_frame_bottom - tailcallinfo_offset
*
* From ...(3):
* Use trampoline_tail_call_info_offset to write reference of main's
* tail_call_info in trampoline frame.
* Trampoline's tail_call_info is at the same offset as that of
* any bpf program, with reference to previous frame. Update the
* address of main's tail_call_info in trampoline frame.
*/
EMIT(PPC_RAW_STL(_R3, _R1, (func_frame_offset - bpf_dummy_frame_size)
- tailcallinfo_offset));
EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size - BPF_PPC_TAILCALL));
} else {
/* See bpf_jit_stack_offsetof() and BPF_PPC_TC */
EMIT(PPC_RAW_LL(_R4, _R1, r4_off));
@ -681,7 +666,7 @@ static void bpf_trampoline_setup_tail_call_info(u32 *image, struct codegen_conte
}
static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_context *ctx,
int func_frame_offset, int r4_off)
int bpf_frame_size, int r4_off)
{
if (IS_ENABLED(CONFIG_PPC32)) {
/*
@ -692,12 +677,12 @@ static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_cont
}
}
static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, int func_frame_offset,
int nr_regs, int regs_off)
static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx,
int bpf_frame_size, int nr_regs, int regs_off)
{
int param_save_area_offset;
param_save_area_offset = func_frame_offset; /* the two frames we alloted */
param_save_area_offset = bpf_frame_size;
param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */
for (int i = 0; i < nr_regs; i++) {
@ -720,11 +705,11 @@ static void bpf_trampoline_restore_args_regs(u32 *image, struct codegen_context
/* Used when we call into the traced function. Replicate parameter save area */
static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context *ctx,
int func_frame_offset, int nr_regs, int regs_off)
int bpf_frame_size, int nr_regs, int regs_off)
{
int param_save_area_offset;
param_save_area_offset = func_frame_offset; /* the two frames we alloted */
param_save_area_offset = bpf_frame_size;
param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */
for (int i = 8; i < nr_regs; i++) {
@ -741,10 +726,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
void *func_addr)
{
int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0;
int i, ret, nr_regs, bpf_frame_size = 0, bpf_dummy_frame_size = 0, func_frame_offset;
struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
int i, ret, nr_regs, retaddr_off, bpf_frame_size = 0;
struct codegen_context codegen_ctx, *ctx;
u32 *image = (u32 *)rw_image;
ppc_inst_t branch_insn;
@ -770,24 +755,19 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
* Generated stack layout:
*
* func prev back chain [ back chain ]
* [ ]
* bpf prog redzone/tailcallcnt [ ... ] 64 bytes (64-bit powerpc)
* [ ] --
* LR save area [ r0 save (64-bit) ] | header
* [ r0 save (32-bit) ] |
* dummy frame for unwind [ back chain 1 ] --
* [ tail_call_info ] optional - 64-bit powerpc
* [ padding ] align stack frame
* r4_off [ r4 (tailcallcnt) ] optional - 32-bit powerpc
* alt_lr_off [ real lr (ool stub)] optional - actual lr
* retaddr_off [ return address ]
* [ r26 ]
* nvr_off [ r25 ] nvr save area
* retval_off [ return value ]
* [ reg argN ]
* [ ... ]
* regs_off [ reg_arg1 ] prog ctx context
* nregs_off [ args count ]
* ip_off [ traced function ]
* regs_off [ reg_arg1 ] prog_ctx
* nregs_off [ args count ] ((u64 *)prog_ctx)[-1]
* ip_off [ traced function ] ((u64 *)prog_ctx)[-2]
* [ ... ]
* run_ctx_off [ bpf_tramp_run_ctx ]
* [ reg argN ]
@ -843,6 +823,10 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
nvr_off = bpf_frame_size;
bpf_frame_size += 2 * SZL;
/* Save area for return address */
retaddr_off = bpf_frame_size;
bpf_frame_size += SZL;
/* Optional save area for actual LR in case of ool ftrace */
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
alt_lr_off = bpf_frame_size;
@ -869,16 +853,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* Padding to align stack frame, if any */
bpf_frame_size = round_up(bpf_frame_size, SZL * 2);
/* Dummy frame size for proper unwind - includes 64-bytes red zone for 64-bit powerpc */
bpf_dummy_frame_size = STACK_FRAME_MIN_SIZE + 64;
/* Offset to the traced function's stack frame */
func_frame_offset = bpf_dummy_frame_size + bpf_frame_size;
/* Create dummy frame for unwind, store original return value */
/* Store original return value */
EMIT(PPC_RAW_STL(_R0, _R1, PPC_LR_STKOFF));
/* Protect red zone where tail call count goes */
EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_dummy_frame_size));
/* Create our stack frame */
EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_frame_size));
@ -893,34 +869,44 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
if (IS_ENABLED(CONFIG_PPC32) && nr_regs < 2)
EMIT(PPC_RAW_STL(_R4, _R1, r4_off));
bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);
bpf_trampoline_save_args(image, ctx, bpf_frame_size, nr_regs, regs_off);
/* Save our return address */
/* Save our LR/return address */
EMIT(PPC_RAW_MFLR(_R3));
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));
else
EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off));
/*
* Save ip address of the traced function.
* We could recover this from LR, but we will need to address for OOL trampoline,
* and optional GEP area.
* Derive IP address of the traced function.
* In case of CONFIG_PPC_FTRACE_OUT_OF_LINE or BPF program, LR points to the instruction
* after the 'bl' instruction in the OOL stub. Refer to ftrace_init_ool_stub() and
* bpf_arch_text_poke() for OOL stub of kernel functions and bpf programs respectively.
* Relevant stub sequence:
*
* bl <tramp>
* LR (R3) => mtlr r0
* b <func_addr+4>
*
* Recover kernel function/bpf program address from the unconditional
* branch instruction at the end of OOL stub.
*/
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {
EMIT(PPC_RAW_LWZ(_R4, _R3, 4));
EMIT(PPC_RAW_SLWI(_R4, _R4, 6));
EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));
EMIT(PPC_RAW_ADD(_R3, _R3, _R4));
EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
}
if (flags & BPF_TRAMP_F_IP_ARG)
EMIT(PPC_RAW_STL(_R3, _R1, ip_off));
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
/* Fake our LR for unwind */
EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
/* Fake our LR for BPF_TRAMP_F_CALL_ORIG case */
EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
EMIT(PPC_RAW_STL(_R3, _R1, retaddr_off));
}
/* Save function arg count -- see bpf_get_func_arg_cnt() */
EMIT(PPC_RAW_LI(_R3, nr_regs));
@ -958,20 +944,19 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* Call the traced function */
if (flags & BPF_TRAMP_F_CALL_ORIG) {
/*
* The address in LR save area points to the correct point in the original function
* retaddr on trampoline stack points to the correct point in the original function
* with both PPC_FTRACE_OUT_OF_LINE as well as with traditional ftrace instruction
* sequence
*/
EMIT(PPC_RAW_LL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
EMIT(PPC_RAW_LL(_R3, _R1, retaddr_off));
EMIT(PPC_RAW_MTCTR(_R3));
/* Replicate tail_call_cnt before calling the original BPF prog */
if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
bpf_trampoline_setup_tail_call_info(image, ctx, func_frame_offset,
bpf_dummy_frame_size, r4_off);
bpf_trampoline_setup_tail_call_info(image, ctx, bpf_frame_size, r4_off);
/* Restore args */
bpf_trampoline_restore_args_stack(image, ctx, func_frame_offset, nr_regs, regs_off);
bpf_trampoline_restore_args_stack(image, ctx, bpf_frame_size, nr_regs, regs_off);
/* Restore TOC for 64-bit */
if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL))
@ -985,7 +970,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
/* Restore updated tail_call_cnt */
if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off);
bpf_trampoline_restore_tail_call_cnt(image, ctx, bpf_frame_size, r4_off);
/* Reserve space to patch branch instruction to skip fexit progs */
if (ro_image) /* image is NULL for dummy pass */
@ -1037,7 +1022,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
EMIT(PPC_RAW_LD(_R2, _R1, 24));
if (flags & BPF_TRAMP_F_SKIP_FRAME) {
/* Skip the traced function and return to parent */
EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));
EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));
EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));
EMIT(PPC_RAW_MTLR(_R0));
EMIT(PPC_RAW_BLR());
@ -1045,13 +1030,13 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
EMIT(PPC_RAW_LL(_R0, _R1, alt_lr_off));
EMIT(PPC_RAW_MTLR(_R0));
EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));
EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));
EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));
EMIT(PPC_RAW_BLR());
} else {
EMIT(PPC_RAW_LL(_R0, _R1, bpf_frame_size + PPC_LR_STKOFF));
EMIT(PPC_RAW_LL(_R0, _R1, retaddr_off));
EMIT(PPC_RAW_MTCTR(_R0));
EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset));
EMIT(PPC_RAW_ADDI(_R1, _R1, bpf_frame_size));
EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF));
EMIT(PPC_RAW_MTLR(_R0));
EMIT(PPC_RAW_BCTR());

View File

@ -32,23 +32,27 @@
*
* [ prev sp ] <-------------
* [ tail_call_info ] 8 |
* [ nv gpr save area ] 6*8 + (12*8) |
* [ nv gpr save area ] (6 * 8) |
* [ addl. nv gpr save area] (12 * 8) | <--- exception boundary/callback program
* [ local_tmp_var ] 24 |
* fp (r31) --> [ ebpf stack space ] upto 512 |
* [ frame header ] 32/112 |
* sp (r1) ---> [ stack pointer ] --------------
*
* Additional (12*8) in 'nv gpr save area' only in case of
* exception boundary.
* Additional (12 * 8) in 'nv gpr save area' only in case of
* exception boundary/callback.
*/
/* BPF non-volatile registers save area size */
#define BPF_PPC_STACK_SAVE (6 * 8)
/* for bpf JIT code internal usage */
#define BPF_PPC_STACK_LOCALS 24
/*
* for additional non volatile registers(r14-r25) to be saved
* at exception boundary
*/
#define BPF_PPC_EXC_STACK_SAVE (12*8)
#define BPF_PPC_EXC_STACK_SAVE (12 * 8)
/* stack frame excluding BPF stack, ensure this is quadword aligned */
#define BPF_PPC_STACKFRAME (STACK_FRAME_MIN_SIZE + \
@ -125,12 +129,13 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
* [ ... ] |
* sp (r1) ---> [ stack pointer ] --------------
* [ tail_call_info ] 8
* [ nv gpr save area ] 6*8 + (12*8)
* [ nv gpr save area ] (6 * 8)
* [ addl. nv gpr save area] (12 * 8) <--- exception boundary/callback program
* [ local_tmp_var ] 24
* [ unused red zone ] 224
*
* Additional (12*8) in 'nv gpr save area' only in case of
* exception boundary.
* Additional (12 * 8) in 'nv gpr save area' only in case of
* exception boundary/callback.
*/
static int bpf_jit_stack_local(struct codegen_context *ctx)
{
@ -148,7 +153,7 @@ static int bpf_jit_stack_local(struct codegen_context *ctx)
}
}
int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx)
static int bpf_jit_stack_tailcallinfo_offset(struct codegen_context *ctx)
{
return bpf_jit_stack_local(ctx) + BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE;
}
@ -237,10 +242,6 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
if (bpf_has_stack_frame(ctx) && !ctx->exception_cb) {
/*
* exception_cb uses boundary frame after stack walk.
* It can simply use redzone, this optimization reduces
* stack walk loop by one level.
*
* We need a stack frame, but we don't necessarily need to
* save/restore LR unless we call other functions
*/
@ -284,6 +285,22 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
* program(main prog) as third arg
*/
EMIT(PPC_RAW_MR(_R1, _R5));
/*
* Exception callback reuses the stack frame of exception boundary.
* But BPF stack depth of exception callback and exception boundary
* don't have to be the same. If BPF stack depth is different, adjust the
* stack frame size considering BPF stack depth of exception callback.
* The non-volatile register save area remains unchanged. These non-
* volatile registers are restored in exception callback's epilogue.
*/
EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), _R5, 0));
EMIT(PPC_RAW_SUB(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_1), _R1));
EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),
-BPF_PPC_EXC_STACKFRAME));
EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), ctx->stack_size));
PPC_BCC_CONST_SHORT(COND_EQ, 12);
EMIT(PPC_RAW_MR(_R1, bpf_to_ppc(TMP_REG_1)));
EMIT(PPC_RAW_STDU(_R1, _R1, -(BPF_PPC_EXC_STACKFRAME + ctx->stack_size)));
}
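In plain C, the frame check emitted just above amounts to roughly the model below (illustration only; the helper name and parameters are invented, prev_sp is the back chain read from the boundary frame and boundary_sp is R1 after the MR from R5):

static unsigned long pick_callback_sp(unsigned long prev_sp, unsigned long boundary_sp,
				      unsigned long cb_stack_size)
{
	if (prev_sp - boundary_sp - BPF_PPC_EXC_STACKFRAME == cb_stack_size)
		return boundary_sp;                                    /* depths match: reuse the frame */
	return prev_sp - (BPF_PPC_EXC_STACKFRAME + cb_stack_size);    /* otherwise open a resized frame */
}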
/*
@ -482,6 +499,83 @@ int bpf_jit_emit_func_call_rel(u32 *image, u32 *fimage, struct codegen_context *
return 0;
}
static int zero_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size)
{
switch (size) {
case 1:
/* zero-extend 8 bits into 64 bits */
EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 56));
return 0;
case 2:
/* zero-extend 16 bits into 64 bits */
EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 48));
return 0;
case 4:
/* zero-extend 32 bits into 64 bits */
EMIT(PPC_RAW_RLDICL(dst_reg, src_reg, 0, 32));
fallthrough;
case 8:
/* Nothing to do */
return 0;
default:
return -1;
}
}
static int sign_extend(u32 *image, struct codegen_context *ctx, u32 src_reg, u32 dst_reg, u32 size)
{
switch (size) {
case 1:
/* sign-extend 8 bits into 64 bits */
EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));
return 0;
case 2:
/* sign-extend 16 bits into 64 bits */
EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));
return 0;
case 4:
/* sign-extend 32 bits into 64 bits */
EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));
fallthrough;
case 8:
/* Nothing to do */
return 0;
default:
return -1;
}
}
/*
* Handle powerpc ABI expectations from caller:
* - Unsigned arguments are zero-extended.
* - Signed arguments are sign-extended.
*/
static int prepare_for_kfunc_call(const struct bpf_prog *fp, u32 *image,
struct codegen_context *ctx,
const struct bpf_insn *insn)
{
const struct btf_func_model *m = bpf_jit_find_kfunc_model(fp, insn);
int i;
if (!m)
return -1;
for (i = 0; i < m->nr_args; i++) {
/* Note that BPF ABI only allows up to 5 args for kfuncs */
u32 reg = bpf_to_ppc(BPF_REG_1 + i), size = m->arg_size[i];
if (!(m->arg_flags[i] & BTF_FMODEL_SIGNED_ARG)) {
if (zero_extend(image, ctx, reg, reg, size))
return -1;
} else {
if (sign_extend(image, ctx, reg, reg, size))
return -1;
}
}
return 0;
}
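For illustration, the semantic effect of the instructions these helpers emit for a 32-bit kfunc argument is simply the C conversions below (standalone sketch, not JIT code):

/* rldicl reg,reg,0,32  <=>  keep the low 32 bits, zero the upper half */
static unsigned long long fix_u32(unsigned long long reg) { return (unsigned int)reg; }

/* extsw reg,reg        <=>  sign-extend bit 31 into the upper half */
static long long fix_s32(unsigned long long reg) { return (int)reg; }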
static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 out)
{
/*
@ -522,9 +616,30 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
/*
* tail_call_info++; <- Actual value of tcc here
* Writeback this updated value only if tailcall succeeds.
*/
EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), 1));
/* prog = array->ptrs[index]; */
EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_2), b2p_index, 8));
EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2), b2p_bpf_array));
EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),
offsetof(struct bpf_array, ptrs)));
/*
* if (prog == NULL)
* goto out;
*/
EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_2), 0));
PPC_BCC_SHORT(COND_EQ, out);
/* goto *(prog->bpf_func + prologue_size); */
EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),
offsetof(struct bpf_prog, bpf_func)));
EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_2), bpf_to_ppc(TMP_REG_2),
FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size));
EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_2)));
/*
* Before writing updated tail_call_info, distinguish if current frame
* is storing a reference to tail_call_info or actual tcc value in
@ -539,24 +654,6 @@ static int bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 o
/* Writeback updated value to tail_call_info */
EMIT(PPC_RAW_STD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_2), 0));
/* prog = array->ptrs[index]; */
EMIT(PPC_RAW_MULI(bpf_to_ppc(TMP_REG_1), b2p_index, 8));
EMIT(PPC_RAW_ADD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), b2p_bpf_array));
EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_array, ptrs)));
/*
* if (prog == NULL)
* goto out;
*/
EMIT(PPC_RAW_CMPLDI(bpf_to_ppc(TMP_REG_1), 0));
PPC_BCC_SHORT(COND_EQ, out);
/* goto *(prog->bpf_func + prologue_size); */
EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1), offsetof(struct bpf_prog, bpf_func)));
EMIT(PPC_RAW_ADDI(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_1),
FUNCTION_DESCR_SIZE + bpf_tailcall_prologue_size));
EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_1)));
/* tear down stack, restore NVRs, ... */
bpf_jit_emit_common_epilogue(image, ctx);
@ -1123,14 +1220,16 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
/* special mov32 for zext */
EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
break;
} else if (off == 8) {
EMIT(PPC_RAW_EXTSB(dst_reg, src_reg));
} else if (off == 16) {
EMIT(PPC_RAW_EXTSH(dst_reg, src_reg));
} else if (off == 32) {
EMIT(PPC_RAW_EXTSW(dst_reg, src_reg));
} else if (dst_reg != src_reg)
EMIT(PPC_RAW_MR(dst_reg, src_reg));
}
if (off == 0) {
/* MOV */
if (dst_reg != src_reg)
EMIT(PPC_RAW_MR(dst_reg, src_reg));
} else {
/* MOVSX: dst = (s8,s16,s32)src (off = 8,16,32) */
if (sign_extend(image, ctx, src_reg, dst_reg, off / 8))
return -1;
}
goto bpf_alu32_trunc;
case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */
case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = (s64) imm */
@ -1598,6 +1697,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
if (ret < 0)
return ret;
/* Take care of powerpc ABI requirements before kfunc call */
if (insn[i].src_reg == BPF_PSEUDO_KFUNC_CALL) {
if (prepare_for_kfunc_call(fp, image, ctx, &insn[i]))
return -1;
}
ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);
if (ret)
return ret;

View File

@ -103,6 +103,11 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
void
perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
perf_callchain_store(entry, perf_arch_instruction_pointer(regs));
if (!current->mm)
return;
if (!is_32bit_task())
perf_callchain_user_64(entry, regs);
else

View File

@ -142,7 +142,6 @@ void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
next_ip = perf_arch_instruction_pointer(regs);
lr = regs->link;
sp = regs->gpr[1];
perf_callchain_store(entry, next_ip);
while (entry->nr < entry->max_stack) {
fp = (unsigned int __user *) (unsigned long) sp;

View File

@ -77,7 +77,6 @@ void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
next_ip = perf_arch_instruction_pointer(regs);
lr = regs->link;
sp = regs->gpr[1];
perf_callchain_store(entry, next_ip);
while (entry->nr < entry->max_stack) {
fp = (unsigned long __user *) sp;

View File

@ -155,8 +155,8 @@ machine_device_initcall(mpc83xx_km, mpc83xx_declare_of_platform_devices);
/* list of the supported boards */
static char *board[] __initdata = {
"Keymile,KMETER1",
"Keymile,kmpbec8321",
"keymile,KMETER1",
"keymile,kmpbec8321",
NULL
};

View File

@ -276,7 +276,7 @@ config PPC_BOOK3S
config PPC_E500
select FSL_EMB_PERFMON
bool
select ARCH_SUPPORTS_HUGETLBFS if PHYS_64BIT || PPC64
select ARCH_SUPPORTS_HUGETLBFS
select PPC_SMP_MUXED_IPI
select PPC_DOORBELL
select PPC_KUEP
@ -337,7 +337,7 @@ config BOOKE
config PTE_64BIT
bool
depends on 44x || PPC_E500 || PPC_86xx
default y if PHYS_64BIT
default y if PPC_E500 || PHYS_64BIT
config PHYS_64BIT
bool 'Large physical address support' if PPC_E500 || PPC_86xx

View File

@ -605,7 +605,7 @@ static int pseries_irq_domain_alloc(struct irq_domain *domain, unsigned int virq
&pseries_msi_irq_chip, pseries_dev);
}
pseries_dev->msi_used++;
pseries_dev->msi_used += nr_irqs;
return 0;
out:

View File

@ -15,9 +15,9 @@ if [ -z "$is_64bit" ]; then
RELOCATION=R_PPC_ADDR32
fi
num_ool_stubs_total=$($objdump -r -j __patchable_function_entries "$vmlinux_o" |
num_ool_stubs_total=$($objdump -r -j __patchable_function_entries -d "$vmlinux_o" |
grep -c "$RELOCATION")
num_ool_stubs_inittext=$($objdump -r -j __patchable_function_entries "$vmlinux_o" |
num_ool_stubs_inittext=$($objdump -r -j __patchable_function_entries -d "$vmlinux_o" |
grep -e ".init.text" -e ".text.startup" | grep -c "$RELOCATION")
num_ool_stubs_text=$((num_ool_stubs_total - num_ool_stubs_inittext))

View File

@ -13,6 +13,7 @@
#include <linux/irqchip/riscv-imsic.h>
#include <linux/irqdomain.h>
#include <linux/kvm_host.h>
#include <linux/nospec.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <asm/cpufeature.h>
@ -182,9 +183,14 @@ int kvm_riscv_vcpu_aia_get_csr(struct kvm_vcpu *vcpu,
unsigned long *out_val)
{
struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
unsigned long regs_max = sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long);
if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
if (!riscv_isa_extension_available(vcpu->arch.isa, SSAIA))
return -ENOENT;
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
*out_val = 0;
if (kvm_riscv_aia_available())
@ -198,9 +204,14 @@ int kvm_riscv_vcpu_aia_set_csr(struct kvm_vcpu *vcpu,
unsigned long val)
{
struct kvm_vcpu_aia_csr *csr = &vcpu->arch.aia_context.guest_csr;
unsigned long regs_max = sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long);
if (reg_num >= sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long))
if (!riscv_isa_extension_available(vcpu->arch.isa, SSAIA))
return -ENOENT;
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
if (kvm_riscv_aia_available()) {
((unsigned long *)csr)[reg_num] = val;
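Both hunks above, and the APLIC changes that follow, apply the usual Spectre-v1 hardening pattern. A minimal self-contained sketch of that pattern (generic example, not KVM code):

#include <linux/nospec.h>
#include <linux/types.h>
#include <linux/errno.h>

static int read_entry(const u32 *table, unsigned long nr_entries,
		      unsigned long idx, u32 *out)
{
	if (idx >= nr_entries)
		return -EINVAL;
	/* re-clamp after the bounds check so a mispredicted branch cannot
	 * speculatively index past the end of the table */
	idx = array_index_nospec(idx, nr_entries);
	*out = table[idx];
	return 0;
}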

View File

@ -10,6 +10,7 @@
#include <linux/irqchip/riscv-aplic.h>
#include <linux/kvm_host.h>
#include <linux/math.h>
#include <linux/nospec.h>
#include <linux/spinlock.h>
#include <linux/swab.h>
#include <kvm/iodev.h>
@ -45,7 +46,7 @@ static u32 aplic_read_sourcecfg(struct aplic *aplic, u32 irq)
if (!irq || aplic->nr_irqs <= irq)
return 0;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
ret = irqd->sourcecfg;
@ -61,7 +62,7 @@ static void aplic_write_sourcecfg(struct aplic *aplic, u32 irq, u32 val)
if (!irq || aplic->nr_irqs <= irq)
return;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
if (val & APLIC_SOURCECFG_D)
val = 0;
@ -81,7 +82,7 @@ static u32 aplic_read_target(struct aplic *aplic, u32 irq)
if (!irq || aplic->nr_irqs <= irq)
return 0;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
ret = irqd->target;
@ -97,7 +98,7 @@ static void aplic_write_target(struct aplic *aplic, u32 irq, u32 val)
if (!irq || aplic->nr_irqs <= irq)
return;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
val &= APLIC_TARGET_EIID_MASK |
(APLIC_TARGET_HART_IDX_MASK << APLIC_TARGET_HART_IDX_SHIFT) |
@ -116,7 +117,7 @@ static bool aplic_read_pending(struct aplic *aplic, u32 irq)
if (!irq || aplic->nr_irqs <= irq)
return false;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
ret = (irqd->state & APLIC_IRQ_STATE_PENDING) ? true : false;
@ -132,7 +133,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
if (!irq || aplic->nr_irqs <= irq)
return;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
@ -170,7 +171,7 @@ static bool aplic_read_enabled(struct aplic *aplic, u32 irq)
if (!irq || aplic->nr_irqs <= irq)
return false;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
ret = (irqd->state & APLIC_IRQ_STATE_ENABLED) ? true : false;
@ -186,7 +187,7 @@ static void aplic_write_enabled(struct aplic *aplic, u32 irq, bool enabled)
if (!irq || aplic->nr_irqs <= irq)
return;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
if (enabled)
@ -205,7 +206,7 @@ static bool aplic_read_input(struct aplic *aplic, u32 irq)
if (!irq || aplic->nr_irqs <= irq)
return false;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
@ -254,7 +255,7 @@ static void aplic_update_irq_range(struct kvm *kvm, u32 first, u32 last)
for (irq = first; irq <= last; irq++) {
if (!irq || aplic->nr_irqs <= irq)
continue;
irqd = &aplic->irqs[irq];
irqd = &aplic->irqs[array_index_nospec(irq, aplic->nr_irqs)];
raw_spin_lock_irqsave(&irqd->lock, flags);
@ -283,7 +284,7 @@ int kvm_riscv_aia_aplic_inject(struct kvm *kvm, u32 source, bool level)
if (!aplic || !source || (aplic->nr_irqs <= source))
return -ENODEV;
irqd = &aplic->irqs[source];
irqd = &aplic->irqs[array_index_nospec(source, aplic->nr_irqs)];
ie = (aplic->domaincfg & APLIC_DOMAINCFG_IE) ? true : false;
raw_spin_lock_irqsave(&irqd->lock, flags);

View File

@ -11,6 +11,7 @@
#include <linux/irqchip/riscv-imsic.h>
#include <linux/kvm_host.h>
#include <linux/uaccess.h>
#include <linux/cpufeature.h>
static int aia_create(struct kvm_device *dev, u32 type)
{
@ -22,6 +23,9 @@ static int aia_create(struct kvm_device *dev, u32 type)
if (irqchip_in_kernel(kvm))
return -EEXIST;
if (!riscv_isa_extension_available(NULL, SSAIA))
return -ENODEV;
ret = -EBUSY;
if (kvm_trylock_all_vcpus(kvm))
return ret;
@ -437,7 +441,7 @@ static int aia_get_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
{
int nr_vcpus;
int nr_vcpus, r = -ENXIO;
switch (attr->group) {
case KVM_DEV_RISCV_AIA_GRP_CONFIG:
@ -466,12 +470,18 @@ static int aia_has_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
}
break;
case KVM_DEV_RISCV_AIA_GRP_APLIC:
return kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);
mutex_lock(&dev->kvm->lock);
r = kvm_riscv_aia_aplic_has_attr(dev->kvm, attr->attr);
mutex_unlock(&dev->kvm->lock);
break;
case KVM_DEV_RISCV_AIA_GRP_IMSIC:
return kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);
mutex_lock(&dev->kvm->lock);
r = kvm_riscv_aia_imsic_has_attr(dev->kvm, attr->attr);
mutex_unlock(&dev->kvm->lock);
break;
}
return -ENXIO;
return r;
}
struct kvm_device_ops kvm_riscv_aia_device_ops = {

View File

@ -908,6 +908,10 @@ int kvm_riscv_vcpu_aia_imsic_rmw(struct kvm_vcpu *vcpu, unsigned long isel,
int r, rc = KVM_INSN_CONTINUE_NEXT_SEPC;
struct imsic *imsic = vcpu->arch.aia_context.imsic_state;
/* If IMSIC vCPU state not initialized then forward to user space */
if (!imsic)
return KVM_INSN_EXIT_TO_USER_SPACE;
if (isel == KVM_RISCV_AIA_IMSIC_TOPEI) {
/* Read pending and enabled interrupt with highest priority */
topei = imsic_mrif_topei(imsic->swfile, imsic->nr_eix,

View File

@ -245,6 +245,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
{
struct kvm_gstage gstage;
bool mmu_locked;
if (!kvm->arch.pgd)
return false;
@ -253,9 +254,12 @@ bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
gstage.flags = 0;
gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
gstage.pgd = kvm->arch.pgd;
mmu_locked = spin_trylock(&kvm->mmu_lock);
kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,
(range->end - range->start) << PAGE_SHIFT,
range->may_block);
if (mmu_locked)
spin_unlock(&kvm->mmu_lock);
return false;
}
@ -535,7 +539,7 @@ int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot,
goto out_unlock;
/* Check if we are backed by a THP and thus use block mapping if possible */
if (vma_pagesize == PAGE_SIZE)
if (!logging && (vma_pagesize == PAGE_SIZE))
vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva, &hfn, &gpa);
if (writable) {

View File

@ -24,7 +24,7 @@
#define CREATE_TRACE_POINTS
#include "trace.h"
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, ecall_exit_stat),
STATS_DESC_COUNTER(VCPU, wfi_exit_stat),

View File

@ -10,6 +10,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kvm_host.h>
#include <linux/nospec.h>
#include <linux/uaccess.h>
#include <asm/cpufeature.h>
@ -93,9 +94,11 @@ int kvm_riscv_vcpu_get_reg_fp(struct kvm_vcpu *vcpu,
if (reg_num == KVM_REG_RISCV_FP_F_REG(fcsr))
reg_val = &cntx->fp.f.fcsr;
else if ((KVM_REG_RISCV_FP_F_REG(f[0]) <= reg_num) &&
reg_num <= KVM_REG_RISCV_FP_F_REG(f[31]))
reg_num <= KVM_REG_RISCV_FP_F_REG(f[31])) {
reg_num = array_index_nospec(reg_num,
ARRAY_SIZE(cntx->fp.f.f));
reg_val = &cntx->fp.f.f[reg_num];
else
} else
return -ENOENT;
} else if ((rtype == KVM_REG_RISCV_FP_D) &&
riscv_isa_extension_available(vcpu->arch.isa, d)) {
@ -107,6 +110,8 @@ int kvm_riscv_vcpu_get_reg_fp(struct kvm_vcpu *vcpu,
reg_num <= KVM_REG_RISCV_FP_D_REG(f[31])) {
if (KVM_REG_SIZE(reg->id) != sizeof(u64))
return -EINVAL;
reg_num = array_index_nospec(reg_num,
ARRAY_SIZE(cntx->fp.d.f));
reg_val = &cntx->fp.d.f[reg_num];
} else
return -ENOENT;
@ -138,9 +143,11 @@ int kvm_riscv_vcpu_set_reg_fp(struct kvm_vcpu *vcpu,
if (reg_num == KVM_REG_RISCV_FP_F_REG(fcsr))
reg_val = &cntx->fp.f.fcsr;
else if ((KVM_REG_RISCV_FP_F_REG(f[0]) <= reg_num) &&
reg_num <= KVM_REG_RISCV_FP_F_REG(f[31]))
reg_num <= KVM_REG_RISCV_FP_F_REG(f[31])) {
reg_num = array_index_nospec(reg_num,
ARRAY_SIZE(cntx->fp.f.f));
reg_val = &cntx->fp.f.f[reg_num];
else
} else
return -ENOENT;
} else if ((rtype == KVM_REG_RISCV_FP_D) &&
riscv_isa_extension_available(vcpu->arch.isa, d)) {
@ -152,6 +159,8 @@ int kvm_riscv_vcpu_set_reg_fp(struct kvm_vcpu *vcpu,
reg_num <= KVM_REG_RISCV_FP_D_REG(f[31])) {
if (KVM_REG_SIZE(reg->id) != sizeof(u64))
return -EINVAL;
reg_num = array_index_nospec(reg_num,
ARRAY_SIZE(cntx->fp.d.f));
reg_val = &cntx->fp.d.f[reg_num];
} else
return -ENOENT;

View File

@ -10,6 +10,7 @@
#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/nospec.h>
#include <linux/uaccess.h>
#include <linux/kvm_host.h>
#include <asm/cacheflush.h>
@ -127,6 +128,7 @@ static int kvm_riscv_vcpu_isa_check_host(unsigned long kvm_ext, unsigned long *g
kvm_ext >= ARRAY_SIZE(kvm_isa_ext_arr))
return -ENOENT;
kvm_ext = array_index_nospec(kvm_ext, ARRAY_SIZE(kvm_isa_ext_arr));
*guest_ext = kvm_isa_ext_arr[kvm_ext];
switch (*guest_ext) {
case RISCV_ISA_EXT_SMNPM:
@ -443,13 +445,16 @@ static int kvm_riscv_vcpu_get_reg_core(struct kvm_vcpu *vcpu,
unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
KVM_REG_SIZE_MASK |
KVM_REG_RISCV_CORE);
unsigned long regs_max = sizeof(struct kvm_riscv_core) / sizeof(unsigned long);
unsigned long reg_val;
if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
return -EINVAL;
if (reg_num >= sizeof(struct kvm_riscv_core) / sizeof(unsigned long))
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
if (reg_num == KVM_REG_RISCV_CORE_REG(regs.pc))
reg_val = cntx->sepc;
else if (KVM_REG_RISCV_CORE_REG(regs.pc) < reg_num &&
@ -476,13 +481,16 @@ static int kvm_riscv_vcpu_set_reg_core(struct kvm_vcpu *vcpu,
unsigned long reg_num = reg->id & ~(KVM_REG_ARCH_MASK |
KVM_REG_SIZE_MASK |
KVM_REG_RISCV_CORE);
unsigned long regs_max = sizeof(struct kvm_riscv_core) / sizeof(unsigned long);
unsigned long reg_val;
if (KVM_REG_SIZE(reg->id) != sizeof(unsigned long))
return -EINVAL;
if (reg_num >= sizeof(struct kvm_riscv_core) / sizeof(unsigned long))
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
if (copy_from_user(&reg_val, uaddr, KVM_REG_SIZE(reg->id)))
return -EFAULT;
@ -507,10 +515,13 @@ static int kvm_riscv_vcpu_general_get_csr(struct kvm_vcpu *vcpu,
unsigned long *out_val)
{
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
unsigned long regs_max = sizeof(struct kvm_riscv_csr) / sizeof(unsigned long);
if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
kvm_riscv_vcpu_flush_interrupts(vcpu);
*out_val = (csr->hvip >> VSIP_TO_HVIP_SHIFT) & VSIP_VALID_MASK;
@ -526,10 +537,13 @@ static int kvm_riscv_vcpu_general_set_csr(struct kvm_vcpu *vcpu,
unsigned long reg_val)
{
struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr;
unsigned long regs_max = sizeof(struct kvm_riscv_csr) / sizeof(unsigned long);
if (reg_num >= sizeof(struct kvm_riscv_csr) / sizeof(unsigned long))
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
if (reg_num == KVM_REG_RISCV_CSR_REG(sip)) {
reg_val &= VSIP_VALID_MASK;
reg_val <<= VSIP_TO_HVIP_SHIFT;
@ -548,10 +562,15 @@ static inline int kvm_riscv_vcpu_smstateen_set_csr(struct kvm_vcpu *vcpu,
unsigned long reg_val)
{
struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr;
unsigned long regs_max = sizeof(struct kvm_riscv_smstateen_csr) /
sizeof(unsigned long);
if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) /
sizeof(unsigned long))
return -EINVAL;
if (!riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
return -ENOENT;
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
((unsigned long *)csr)[reg_num] = reg_val;
return 0;
@ -562,10 +581,15 @@ static int kvm_riscv_vcpu_smstateen_get_csr(struct kvm_vcpu *vcpu,
unsigned long *out_val)
{
struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr;
unsigned long regs_max = sizeof(struct kvm_riscv_smstateen_csr) /
sizeof(unsigned long);
if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) /
sizeof(unsigned long))
return -EINVAL;
if (!riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN))
return -ENOENT;
if (reg_num >= regs_max)
return -ENOENT;
reg_num = array_index_nospec(reg_num, regs_max);
*out_val = ((unsigned long *)csr)[reg_num];
return 0;
@ -595,10 +619,7 @@ static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu,
rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, &reg_val);
break;
case KVM_REG_RISCV_CSR_SMSTATEEN:
rc = -EINVAL;
if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num,
&reg_val);
rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num, &reg_val);
break;
default:
rc = -ENOENT;
@ -640,10 +661,7 @@ static int kvm_riscv_vcpu_set_reg_csr(struct kvm_vcpu *vcpu,
rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val);
break;
case KVM_REG_RISCV_CSR_SMSTATEEN:
rc = -EINVAL;
if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN))
rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num,
reg_val);
rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num, reg_val);
break;
default:
rc = -ENOENT;

View File

@ -10,6 +10,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/kvm_host.h>
#include <linux/nospec.h>
#include <linux/perf/riscv_pmu.h>
#include <asm/csr.h>
#include <asm/kvm_vcpu_sbi.h>
@ -87,7 +88,8 @@ static void kvm_pmu_release_perf_event(struct kvm_pmc *pmc)
static u64 kvm_pmu_get_perf_event_hw_config(u32 sbi_event_code)
{
return hw_event_perf_map[sbi_event_code];
return hw_event_perf_map[array_index_nospec(sbi_event_code,
SBI_PMU_HW_GENERAL_MAX)];
}
static u64 kvm_pmu_get_perf_event_cache_config(u32 sbi_event_code)
@ -218,6 +220,7 @@ static int pmu_fw_ctr_read_hi(struct kvm_vcpu *vcpu, unsigned long cidx,
return -EINVAL;
}
cidx = array_index_nospec(cidx, RISCV_KVM_MAX_COUNTERS);
pmc = &kvpmu->pmc[cidx];
if (pmc->cinfo.type != SBI_PMU_CTR_TYPE_FW)
@ -244,6 +247,7 @@ static int pmu_ctr_read(struct kvm_vcpu *vcpu, unsigned long cidx,
return -EINVAL;
}
cidx = array_index_nospec(cidx, RISCV_KVM_MAX_COUNTERS);
pmc = &kvpmu->pmc[cidx];
if (pmc->cinfo.type == SBI_PMU_CTR_TYPE_FW) {
@ -520,11 +524,12 @@ int kvm_riscv_vcpu_pmu_ctr_info(struct kvm_vcpu *vcpu, unsigned long cidx,
{
struct kvm_pmu *kvpmu = vcpu_to_pmu(vcpu);
if (cidx > RISCV_KVM_MAX_COUNTERS || cidx == 1) {
if (cidx >= RISCV_KVM_MAX_COUNTERS || cidx == 1) {
retdata->err_val = SBI_ERR_INVALID_PARAM;
return 0;
}
cidx = array_index_nospec(cidx, RISCV_KVM_MAX_COUNTERS);
retdata->out_val = kvpmu->pmc[cidx].cinfo.value;
return 0;
@ -559,7 +564,8 @@ int kvm_riscv_vcpu_pmu_ctr_start(struct kvm_vcpu *vcpu, unsigned long ctr_base,
}
/* Start the counters that have been configured and requested by the guest */
for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
pmc_index = i + ctr_base;
pmc_index = array_index_nospec(i + ctr_base,
RISCV_KVM_MAX_COUNTERS);
if (!test_bit(pmc_index, kvpmu->pmc_in_use))
continue;
/* The guest started the counter again. Reset the overflow status */
@ -630,7 +636,8 @@ int kvm_riscv_vcpu_pmu_ctr_stop(struct kvm_vcpu *vcpu, unsigned long ctr_base,
/* Stop the counters that have been configured and requested by the guest */
for_each_set_bit(i, &ctr_mask, RISCV_MAX_COUNTERS) {
pmc_index = i + ctr_base;
pmc_index = array_index_nospec(i + ctr_base,
RISCV_KVM_MAX_COUNTERS);
if (!test_bit(pmc_index, kvpmu->pmc_in_use))
continue;
pmc = &kvpmu->pmc[pmc_index];
@ -761,6 +768,7 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
}
}
ctr_idx = array_index_nospec(ctr_idx, RISCV_KVM_MAX_COUNTERS);
pmc = &kvpmu->pmc[ctr_idx];
pmc->idx = ctr_idx;

View File

@ -13,7 +13,7 @@
#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS()
};
static_assert(ARRAY_SIZE(kvm_vm_stats_desc) ==

View File

@ -147,10 +147,8 @@ void noinstr do_io_irq(struct pt_regs *regs)
bool from_idle;
from_idle = test_and_clear_cpu_flag(CIF_ENABLED_WAIT);
if (from_idle) {
if (from_idle)
update_timer_idle();
regs->psw.mask &= ~(PSW_MASK_EXT | PSW_MASK_IO | PSW_MASK_WAIT);
}
irq_enter_rcu();
@ -176,6 +174,9 @@ void noinstr do_io_irq(struct pt_regs *regs)
set_irq_regs(old_regs);
irqentry_exit(regs, state);
if (from_idle)
regs->psw.mask &= ~(PSW_MASK_EXT | PSW_MASK_IO | PSW_MASK_WAIT);
}
void noinstr do_ext_irq(struct pt_regs *regs)
@ -185,10 +186,8 @@ void noinstr do_ext_irq(struct pt_regs *regs)
bool from_idle;
from_idle = test_and_clear_cpu_flag(CIF_ENABLED_WAIT);
if (from_idle) {
if (from_idle)
update_timer_idle();
regs->psw.mask &= ~(PSW_MASK_EXT | PSW_MASK_IO | PSW_MASK_WAIT);
}
irq_enter_rcu();
@ -210,6 +209,9 @@ void noinstr do_ext_irq(struct pt_regs *regs)
irq_exit_rcu();
set_irq_regs(old_regs);
irqentry_exit(regs, state);
if (from_idle)
regs->psw.mask &= ~(PSW_MASK_EXT | PSW_MASK_IO | PSW_MASK_WAIT);
}
static void show_msi_interrupt(struct seq_file *p, int irq)

View File

@ -65,7 +65,7 @@
#define VCPU_IRQS_MAX_BUF (sizeof(struct kvm_s390_irq) * \
(KVM_MAX_VCPUS + LOCAL_IRQS))
const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
const struct kvm_stats_desc kvm_vm_stats_desc[] = {
KVM_GENERIC_VM_STATS(),
STATS_DESC_COUNTER(VM, inject_io),
STATS_DESC_COUNTER(VM, inject_float_mchk),
@ -91,7 +91,7 @@ const struct kvm_stats_header kvm_vm_stats_header = {
sizeof(kvm_vm_stats_desc),
};
const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
const struct kvm_stats_desc kvm_vcpu_stats_desc[] = {
KVM_GENERIC_VCPU_STATS(),
STATS_DESC_COUNTER(VCPU, exit_userspace),
STATS_DESC_COUNTER(VCPU, exit_null),

View File

@ -2485,7 +2485,8 @@ int memslot_rmap_alloc(struct kvm_memory_slot *slot, unsigned long npages);
KVM_X86_QUIRK_MWAIT_NEVER_UD_FAULTS | \
KVM_X86_QUIRK_SLOT_ZAP_ALL | \
KVM_X86_QUIRK_STUFF_FEATURE_MSRS | \
KVM_X86_QUIRK_IGNORE_GUEST_PAT)
KVM_X86_QUIRK_IGNORE_GUEST_PAT | \
KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM)
#define KVM_X86_CONDITIONAL_QUIRKS \
(KVM_X86_QUIRK_CD_NW_CLEARED | \

View File

@ -476,6 +476,7 @@ struct kvm_sync_regs {
#define KVM_X86_QUIRK_SLOT_ZAP_ALL (1 << 7)
#define KVM_X86_QUIRK_STUFF_FEATURE_MSRS (1 << 8)
#define KVM_X86_QUIRK_IGNORE_GUEST_PAT (1 << 9)
#define KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM (1 << 10)
#define KVM_STATE_NESTED_FORMAT_VMX 0
#define KVM_STATE_NESTED_FORMAT_SVM 1
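
Like the other KVM_X86_QUIRK_* bits added to KVM_X86_VALID_QUIRKS above, the new VMCS12_ALLOW_FREEZE_IN_SMM quirk can be disabled per-VM through KVM_CAP_DISABLE_QUIRKS2. A minimal VMM-side sketch, assuming a vm_fd obtained with KVM_CREATE_VM and uapi headers that already carry this define; error handling is trimmed:

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int disable_freeze_in_smm_quirk(int vm_fd)
    {
            struct kvm_enable_cap cap = {
                    .cap = KVM_CAP_DISABLE_QUIRKS2,
                    /* args[0] is a bitmask of KVM_X86_QUIRK_* bits to disable */
                    .args = { KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM },
            };

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }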

View File

@ -1894,6 +1894,7 @@ void __init check_x2apic(void)
static inline void try_to_enable_x2apic(int remap_mode) { }
static inline void __x2apic_enable(void) { }
static inline void __x2apic_disable(void) { }
#endif /* !CONFIG_X86_X2APIC */
void __init enable_IR_x2apic(void)
@ -2456,6 +2457,11 @@ static void lapic_resume(void *data)
if (x2apic_mode) {
__x2apic_enable();
} else {
if (x2apic_enabled()) {
pr_warn_once("x2apic: re-enabled by firmware during resume. Disabling\n");
__x2apic_disable();
}
/*
* Make sure the APICBASE points to the right address
*

View File

@ -776,7 +776,10 @@ do { \
#define SYNTHESIZED_F(name) \
({ \
kvm_cpu_cap_synthesized |= feature_bit(name); \
F(name); \
\
BUILD_BUG_ON(X86_FEATURE_##name >= MAX_CPU_FEATURES); \
if (boot_cpu_has(X86_FEATURE_##name)) \
F(name); \
})
/*

View File

@ -1981,16 +1981,17 @@ int kvm_hv_vcpu_flush_tlb(struct kvm_vcpu *vcpu)
if (entries[i] == KVM_HV_TLB_FLUSHALL_ENTRY)
goto out_flush_all;
if (is_noncanonical_invlpg_address(entries[i], vcpu))
continue;
/*
* Lower 12 bits of 'address' encode the number of additional
* pages to flush.
*/
gva = entries[i] & PAGE_MASK;
for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++)
for (j = 0; j < (entries[i] & ~PAGE_MASK) + 1; j++) {
if (is_noncanonical_invlpg_address(gva + j * PAGE_SIZE, vcpu))
continue;
kvm_x86_call(flush_tlb_gva)(vcpu, gva + j * PAGE_SIZE);
}
++vcpu->stat.tlb_flush;
}
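
The hunk above only moves the non-canonical address check into the per-page loop; the flush-list entry format is unchanged: bits 11:0 encode the number of additional pages, the remaining bits the starting GVA. A small stand-alone decode sketch with a made-up entry value:

    #include <inttypes.h>
    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (UINT64_C(1) << PAGE_SHIFT)
    #define PAGE_MASK       (~(PAGE_SIZE - 1))

    int main(void)
    {
            /* example entry, not taken from the patch */
            uint64_t entry = UINT64_C(0x00007f1234567003);
            uint64_t gva = entry & PAGE_MASK;
            uint64_t npages = (entry & ~PAGE_MASK) + 1;     /* 3 + 1 = 4 pages */

            for (uint64_t j = 0; j < npages; j++)
                    printf("flush gva 0x%" PRIx64 "\n", gva + j * PAGE_SIZE);
            return 0;
    }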

View File

@ -321,7 +321,8 @@ void kvm_fire_mask_notifiers(struct kvm *kvm, unsigned irqchip, unsigned pin,
idx = srcu_read_lock(&kvm->irq_srcu);
gsi = kvm_irq_map_chip_pin(kvm, irqchip, pin);
if (gsi != -1)
hlist_for_each_entry_rcu(kimn, &ioapic->mask_notifier_list, link)
hlist_for_each_entry_srcu(kimn, &ioapic->mask_notifier_list, link,
srcu_read_lock_held(&kvm->irq_srcu))
if (kimn->irq == gsi)
kimn->func(kimn, mask);
srcu_read_unlock(&kvm->irq_srcu, idx);

View File

@ -189,12 +189,12 @@ static void avic_activate_vmcb(struct vcpu_svm *svm)
struct kvm_vcpu *vcpu = &svm->vcpu;
vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
vmcb->control.avic_physical_id |= avic_get_max_physical_id(vcpu);
vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
svm_clr_intercept(svm, INTERCEPT_CR8_WRITE);
/*
* Note: KVM supports hybrid-AVIC mode, where KVM emulates x2APIC MSR
* accesses, while interrupt injection to a running vCPU can be
@ -226,6 +226,9 @@ static void avic_deactivate_vmcb(struct vcpu_svm *svm)
vmcb->control.int_ctl &= ~(AVIC_ENABLE_MASK | X2APIC_MODE_MASK);
vmcb->control.avic_physical_id &= ~AVIC_PHYSICAL_MAX_INDEX_MASK;
if (!sev_es_guest(svm->vcpu.kvm))
svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
/*
* If running nested and the guest uses its own MSR bitmap, there
* is no need to update L0's msr bitmap
@ -368,7 +371,7 @@ void avic_init_vmcb(struct vcpu_svm *svm, struct vmcb *vmcb)
vmcb->control.avic_physical_id = __sme_set(__pa(kvm_svm->avic_physical_id_table));
vmcb->control.avic_vapic_bar = APIC_DEFAULT_PHYS_BASE;
if (kvm_apicv_activated(svm->vcpu.kvm))
if (kvm_vcpu_apicv_active(&svm->vcpu))
avic_activate_vmcb(svm);
else
avic_deactivate_vmcb(svm);

View File

@ -418,6 +418,15 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu)
return __nested_vmcb_check_controls(vcpu, ctl);
}
int nested_svm_check_cached_vmcb12(struct kvm_vcpu *vcpu)
{
if (!nested_vmcb_check_save(vcpu) ||
!nested_vmcb_check_controls(vcpu))
return -EINVAL;
return 0;
}
/*
* If a feature is not advertised to L1, clear the corresponding vmcb12
* intercept.
@ -1028,8 +1037,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
if (!nested_vmcb_check_save(vcpu) ||
!nested_vmcb_check_controls(vcpu)) {
if (nested_svm_check_cached_vmcb12(vcpu) < 0) {
vmcb12->control.exit_code = SVM_EXIT_ERR;
vmcb12->control.exit_info_1 = 0;
vmcb12->control.exit_info_2 = 0;

View File

@ -1077,8 +1077,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
svm_set_intercept(svm, INTERCEPT_CR0_WRITE);
svm_set_intercept(svm, INTERCEPT_CR3_WRITE);
svm_set_intercept(svm, INTERCEPT_CR4_WRITE);
if (!kvm_vcpu_apicv_active(vcpu))
svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
set_dr_intercepts(svm);
@ -1189,7 +1188,7 @@ static void init_vmcb(struct kvm_vcpu *vcpu, bool init_event)
if (guest_cpu_cap_has(vcpu, X86_FEATURE_ERAPS))
svm->vmcb->control.erap_ctl |= ERAP_CONTROL_ALLOW_LARGER_RAP;
if (kvm_vcpu_apicv_active(vcpu))
if (enable_apicv && irqchip_in_kernel(vcpu->kvm))
avic_init_vmcb(svm, vmcb);
if (vnmi)
@ -2674,9 +2673,11 @@ static int dr_interception(struct kvm_vcpu *vcpu)
static int cr8_write_interception(struct kvm_vcpu *vcpu)
{
u8 cr8_prev = kvm_get_cr8(vcpu);
int r;
u8 cr8_prev = kvm_get_cr8(vcpu);
WARN_ON_ONCE(kvm_vcpu_apicv_active(vcpu));
/* instruction emulation calls kvm_set_cr8() */
r = cr_interception(vcpu);
if (lapic_in_kernel(vcpu))
@ -4879,11 +4880,15 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
vmcb12 = map.hva;
nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
ret = enter_svm_guest_mode(vcpu, smram64->svm_guest_vmcb_gpa, vmcb12, false);
if (ret)
if (nested_svm_check_cached_vmcb12(vcpu) < 0)
goto unmap_save;
if (enter_svm_guest_mode(vcpu, smram64->svm_guest_vmcb_gpa,
vmcb12, false) != 0)
goto unmap_save;
ret = 0;
svm->nested.nested_run_pending = 1;
unmap_save:

View File

@ -797,6 +797,7 @@ static inline int nested_svm_simple_vmexit(struct vcpu_svm *svm, u32 exit_code)
int nested_svm_exit_handled(struct vcpu_svm *svm);
int nested_svm_check_permissions(struct kvm_vcpu *vcpu);
int nested_svm_check_cached_vmcb12(struct kvm_vcpu *vcpu);
int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
bool has_error_code, u32 error_code);
int nested_svm_exit_special(struct vcpu_svm *svm);

View File

@ -3300,10 +3300,24 @@ static int nested_vmx_check_guest_state(struct kvm_vcpu *vcpu,
if (CC(vmcs12->guest_cr4 & X86_CR4_CET && !(vmcs12->guest_cr0 & X86_CR0_WP)))
return -EINVAL;
if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) &&
(CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
CC(!vmx_is_valid_debugctl(vcpu, vmcs12->guest_ia32_debugctl, false))))
return -EINVAL;
if (vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS) {
u64 debugctl = vmcs12->guest_ia32_debugctl;
/*
* FREEZE_IN_SMM is not virtualized, but allow L1 to set it in
* vmcs12's DEBUGCTL under a quirk for backwards compatibility.
* Note that the quirk only relaxes the consistency check. The
* vmcs02 bit is still under the control of the host. In
* particular, if a host administrator decides to clear the bit,
* then L1 has no say in the matter.
*/
if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_VMCS12_ALLOW_FREEZE_IN_SMM))
debugctl &= ~DEBUGCTLMSR_FREEZE_IN_SMM;
if (CC(!kvm_dr7_valid(vmcs12->guest_dr7)) ||
CC(!vmx_is_valid_debugctl(vcpu, debugctl, false)))
return -EINVAL;
}
if ((vmcs12->vm_entry_controls & VM_ENTRY_LOAD_IA32_PAT) &&
CC(!kvm_pat_valid(vmcs12->guest_ia32_pat)))
@ -6842,13 +6856,34 @@ void vmx_leave_nested(struct kvm_vcpu *vcpu)
free_nested(vcpu);
}
int nested_vmx_check_restored_vmcs12(struct kvm_vcpu *vcpu)
{
enum vm_entry_failure_code ignored;
struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
if (nested_cpu_has_shadow_vmcs(vmcs12) &&
vmcs12->vmcs_link_pointer != INVALID_GPA) {
struct vmcs12 *shadow_vmcs12 = get_shadow_vmcs12(vcpu);
if (shadow_vmcs12->hdr.revision_id != VMCS12_REVISION ||
!shadow_vmcs12->hdr.shadow_vmcs)
return -EINVAL;
}
if (nested_vmx_check_controls(vcpu, vmcs12) ||
nested_vmx_check_host_state(vcpu, vmcs12) ||
nested_vmx_check_guest_state(vcpu, vmcs12, &ignored))
return -EINVAL;
return 0;
}
static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
struct kvm_nested_state __user *user_kvm_nested_state,
struct kvm_nested_state *kvm_state)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
struct vmcs12 *vmcs12;
enum vm_entry_failure_code ignored;
struct kvm_vmx_nested_state_data __user *user_vmx_nested_state =
&user_kvm_nested_state->data.vmx[0];
int ret;
@ -6979,25 +7014,20 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
vmx->nested.mtf_pending =
!!(kvm_state->flags & KVM_STATE_NESTED_MTF_PENDING);
ret = -EINVAL;
if (nested_cpu_has_shadow_vmcs(vmcs12) &&
vmcs12->vmcs_link_pointer != INVALID_GPA) {
struct vmcs12 *shadow_vmcs12 = get_shadow_vmcs12(vcpu);
ret = -EINVAL;
if (kvm_state->size <
sizeof(*kvm_state) +
sizeof(user_vmx_nested_state->vmcs12) + sizeof(*shadow_vmcs12))
goto error_guest_mode;
ret = -EFAULT;
if (copy_from_user(shadow_vmcs12,
user_vmx_nested_state->shadow_vmcs12,
sizeof(*shadow_vmcs12))) {
ret = -EFAULT;
goto error_guest_mode;
}
if (shadow_vmcs12->hdr.revision_id != VMCS12_REVISION ||
!shadow_vmcs12->hdr.shadow_vmcs)
sizeof(*shadow_vmcs12)))
goto error_guest_mode;
}
@ -7008,9 +7038,8 @@ static int vmx_set_nested_state(struct kvm_vcpu *vcpu,
kvm_state->hdr.vmx.preemption_timer_deadline;
}
if (nested_vmx_check_controls(vcpu, vmcs12) ||
nested_vmx_check_host_state(vcpu, vmcs12) ||
nested_vmx_check_guest_state(vcpu, vmcs12, &ignored))
ret = nested_vmx_check_restored_vmcs12(vcpu);
if (ret < 0)
goto error_guest_mode;
vmx->nested.dirty_vmcs12 = true;

Some files were not shown because too many files have changed in this diff.