Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR (net-6.16-rc4).

Conflicts:

Documentation/netlink/specs/mptcp_pm.yaml
  9e6dd4c256 ("netlink: specs: mptcp: replace underscores with dashes in names")
  ec362192aa ("netlink: specs: fix up indentation errors")
https://lore.kernel.org/20250626122205.389c2cd4@canb.auug.org.au

Adjacent changes:

Documentation/netlink/specs/fou.yaml
  791a9ed0a4 ("netlink: specs: fou: replace underscores with dashes in names")
  880d43ca9a ("netlink: specs: clean up spaces in brackets")

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
commit 28aa52b618
Author: Jakub Kicinski <kuba@kernel.org>
Date:   2025-06-12 10:08:24 -07:00

370 changed files with 3538 additions and 1795 deletions

@@ -2981,6 +2981,11 @@ S: 521 Pleasant Valley Road
S: Potsdam, New York 13676
S: USA
N: Shannon Nelson
E: sln@onemain.com
D: Worked on several network drivers including
D: ixgbe, i40e, ionic, pds_core, pds_vdpa, pds_fwctl
N: Dave Neuer
E: dave.neuer@pobox.com
D: Helped implement support for Compaq's H31xx series iPAQs

@@ -234,7 +234,7 @@ Before jumping into the kernel, the following conditions must be met:
- If the kernel is entered at EL1:
- ICC.SRE_EL2.Enable (bit 3) must be initialised to 0b1
- ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1
- ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1.
- The DT or ACPI tables must describe a GICv3 interrupt controller.

@@ -233,10 +233,16 @@ attempts in order to enforce the LRU property which have increasing impacts on
other CPUs involved in the following operation attempts:
- Attempt to use CPU-local state to batch operations
- Attempt to fetch free nodes from global lists
- Attempt to fetch ``target_free`` free nodes from global lists
- Attempt to pull any node from a global list and remove it from the hashmap
- Attempt to pull any node from any CPU's list and remove it from the hashmap
The number of nodes to borrow from the global list in a batch, ``target_free``,
depends on the size of the map. A larger batch size reduces lock contention,
but may also exhaust the global structure. The value is computed at map init
to avoid exhaustion, by limiting the aggregate reservation by all CPUs to half
the map size, with a minimum of a single element and a maximum budget of 128
nodes at a time.
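As a rough illustration (a sketch, not the kernel's exact code), the clamping
described above could be written like this, with ``max_entries`` and
``nr_cpus`` standing in for the map size and the CPU count::

    /*
     * Hypothetical helper mirroring the target_free rule: cap the
     * aggregate reservation by all CPUs at half the map, with a
     * per-batch minimum of 1 and a maximum of 128 nodes.
     */
    static unsigned int lru_target_free(unsigned int max_entries,
                                        unsigned int nr_cpus)
    {
            unsigned int n = max_entries / 2 / nr_cpus;

            if (n < 1)
                    n = 1;
            if (n > 128)
                    n = 128;
            return n;
    }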
This algorithm is described visually in the following diagram. See the
description in commit 3a08c2fd7634 ("bpf: LRU List") for a full explanation of
the corresponding operations:

@@ -35,18 +35,18 @@ digraph {
fn_bpf_lru_list_pop_free_to_local [shape=rectangle,fillcolor=2,
label="Flush local pending,
Rotate Global list, move
LOCAL_FREE_TARGET
target_free
from global -> local"]
// Also corresponds to:
// fn__local_list_flush()
// fn_bpf_lru_list_rotate()
fn___bpf_lru_node_move_to_free[shape=diamond,fillcolor=2,
label="Able to free\nLOCAL_FREE_TARGET\nnodes?"]
label="Able to free\ntarget_free\nnodes?"]
fn___bpf_lru_list_shrink_inactive [shape=rectangle,fillcolor=3,
label="Shrink inactive list
up to remaining
LOCAL_FREE_TARGET
target_free
(global LRU -> local)"]
fn___bpf_lru_list_shrink [shape=diamond,fillcolor=2,
label="> 0 entries in\nlocal free list?"]

@@ -97,7 +97,10 @@ properties:
resets:
items:
- description: module reset
- description:
Module reset. This property is optional for controllers in Tegra194,
Tegra234 etc where an internal software reset is available as an
alternative.
reset-names:
items:
@@ -116,6 +119,13 @@ properties:
- const: rx
- const: tx
required:
- compatible
- reg
- interrupts
- clocks
- clock-names
allOf:
- $ref: /schemas/i2c/i2c-controller.yaml
- if:
@@ -169,6 +179,18 @@ allOf:
properties:
power-domains: false
- if:
not:
properties:
compatible:
contains:
enum:
- nvidia,tegra194-i2c
then:
required:
- resets
- reset-names
unevaluatedProperties: false
examples:

@@ -1249,3 +1249,12 @@ Using try_lookup_noperm() will require linux/namei.h to be included.
Calling conventions for ->d_automount() have changed; we should *not* grab
an extra reference to new mount - it should be returned with refcount 1.
---
collect_mounts()/drop_collected_mounts()/iterate_mounts() are gone now.
Replacement is collect_paths()/drop_collected_paths(), with no special
iterator needed. Instead of a cloned mount tree, the new interface returns
an array of struct path, one for each mount collect_mounts() would've
created. These struct path point to locations in the caller's namespace
that would be roots of the cloned mounts.

@@ -25,7 +25,7 @@ providing a consistent API to upper layers of the driver stack.
GSP Support
------------------------
.. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
.. kernel-doc:: drivers/gpu/drm/nouveau/nvkm/subdev/gsp/rm/r535/rpc.c
:doc: GSP message queue element
.. kernel-doc:: drivers/gpu/drm/nouveau/include/nvkm/subdev/gsp.h

@@ -6,6 +6,9 @@ $schema: https://json-schema.org/draft-07/schema
# Common defines
$defs:
name:
type: string
pattern: ^[0-9a-z-]+$
uint:
type: integer
minimum: 0
@@ -76,7 +79,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
header:
description: For C-compatible languages, header which already defines this value.
type: string
@@ -103,7 +106,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
value:
type: integer
doc:
@@ -132,7 +135,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
type:
description: The netlink attribute type
enum: [ u8, u16, u32, u64, s8, s16, s32, s64, string, binary ]
@@ -169,7 +172,7 @@ properties:
name:
description: |
Name used when referring to this space in other definitions, not used outside of the spec.
type: string
$ref: '#/$defs/name'
name-prefix:
description: |
Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
@@ -206,7 +209,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
type: &attr-type
description: The netlink attribute type
enum: [ unused, pad, flag, binary, bitfield32,
@@ -348,7 +351,7 @@ properties:
properties:
name:
description: Name of the operation, also defining its C enum value in uAPI.
type: string
$ref: '#/$defs/name'
doc:
description: Documentation for the command.
type: string

@@ -6,6 +6,9 @@ $schema: https://json-schema.org/draft-07/schema
# Common defines
$defs:
name:
type: string
pattern: ^[0-9a-z-]+$
uint:
type: integer
minimum: 0
@@ -29,7 +32,7 @@ additionalProperties: False
properties:
name:
description: Name of the genetlink family.
type: string
$ref: '#/$defs/name'
doc:
type: string
protocol:
@@ -48,7 +51,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
header:
description: For C-compatible languages, header which already defines this value.
type: string
@@ -75,7 +78,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
value:
type: integer
doc:
@@ -96,7 +99,7 @@ properties:
name:
description: |
Name used when referring to this space in other definitions, not used outside of the spec.
type: string
$ref: '#/$defs/name'
name-prefix:
description: |
Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
@@ -121,7 +124,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
type: &attr-type
enum: [ unused, pad, flag, binary,
uint, sint, u8, u16, u32, u64, s8, s16, s32, s64,
@@ -243,7 +246,7 @@ properties:
properties:
name:
description: Name of the operation, also defining its C enum value in uAPI.
type: string
$ref: '#/$defs/name'
doc:
description: Documentation for the command.
type: string
@@ -327,7 +330,7 @@ properties:
name:
description: |
The name for the group, used to form the define and the value of the define.
type: string
$ref: '#/$defs/name'
flags: *cmd_flags
kernel-family:

@@ -6,6 +6,12 @@ $schema: https://json-schema.org/draft-07/schema
# Common defines
$defs:
name:
type: string
pattern: ^[0-9a-z-]+$
name-cap:
type: string
pattern: ^[0-9a-zA-Z-]+$
uint:
type: integer
minimum: 0
@@ -71,7 +77,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
header:
description: For C-compatible languages, header which already defines this value.
type: string
@@ -98,7 +104,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
value:
type: integer
doc:
@@ -124,7 +130,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name-cap'
type:
description: |
The netlink attribute type. Members of type 'binary' or 'pad'
@@ -183,7 +189,7 @@ properties:
name:
description: |
Name used when referring to this space in other definitions, not used outside of the spec.
type: string
$ref: '#/$defs/name'
name-prefix:
description: |
Prefix for the C enum name of the attributes. Default family[name]-set[name]-a-
@@ -220,7 +226,7 @@ properties:
additionalProperties: False
properties:
name:
type: string
$ref: '#/$defs/name'
type: &attr-type
description: The netlink attribute type
enum: [ unused, pad, flag, binary, bitfield32,
@@ -408,7 +414,7 @@ properties:
properties:
name:
description: Name of the operation, also defining its C enum value in uAPI.
type: string
$ref: '#/$defs/name'
doc:
description: Documentation for the command.
type: string

@@ -38,15 +38,15 @@ definitions:
-
name: dsa
-
name: pci_pf
name: pci-pf
-
name: pci_vf
name: pci-vf
-
name: virtual
-
name: unused
-
name: pci_sf
name: pci-sf
-
type: enum
name: port-fn-state
@@ -220,7 +220,7 @@ definitions:
-
name: flag
-
name: nul_string
name: nul-string
value: 10
-
name: binary

@@ -188,7 +188,7 @@ definitions:
value: 10000
-
type: const
name: pin-frequency-77_5-khz
name: pin-frequency-77-5-khz
value: 77500
-
type: const

@@ -48,7 +48,7 @@ definitions:
name: started
doc: The firmware flashing process has started.
-
name: in_progress
name: in-progress
doc: The firmware flashing process is in progress.
-
name: completed
@@ -1473,7 +1473,7 @@ attribute-sets:
name: hkey
type: binary
-
name: input_xfrm
name: input-xfrm
type: u32
-
name: start-context
@@ -2306,7 +2306,7 @@ operations:
- hfunc
- indir
- hkey
- input_xfrm
- input-xfrm
dump:
request:
attributes:

@@ -15,7 +15,7 @@ kernel-policy: global
definitions:
-
type: enum
name: encap_type
name: encap-type
name-prefix: fou-encap-
enum-name:
entries: [unspec, direct, gue]
@@ -43,26 +43,26 @@ attribute-sets:
name: type
type: u8
-
name: remcsum_nopartial
name: remcsum-nopartial
type: flag
-
name: local_v4
name: local-v4
type: u32
-
name: local_v6
name: local-v6
type: binary
checks:
min-len: 16
-
name: peer_v4
name: peer-v4
type: u32
-
name: peer_v6
name: peer-v6
type: binary
checks:
min-len: 16
-
name: peer_port
name: peer-port
type: u16
byte-order: big-endian
-
@@ -90,12 +90,12 @@ operations:
- port
- ipproto
- type
- remcsum_nopartial
- local_v4
- peer_v4
- local_v6
- peer_v6
- peer_port
- remcsum-nopartial
- local-v4
- peer-v4
- local-v6
- peer-v6
- peer-port
- ifindex
-
@@ -112,11 +112,11 @@ operations:
- af
- ifindex
- port
- peer_port
- local_v4
- peer_v4
- local_v6
- peer_v6
- peer-port
- local-v4
- peer-v4
- local-v6
- peer-v6
-
name: get

@@ -57,21 +57,21 @@ definitions:
doc: >-
A new subflow has been established. 'error' should not be set.
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
daddr6, sport, dport, backup, if_idx [, error].
daddr6, sport, dport, backup, if-idx [, error].
-
name: sub-closed
doc: >-
A subflow has been closed. An error (copy of sk_err) could be set if
an error has been detected for this subflow.
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
daddr6, sport, dport, backup, if_idx [, error].
daddr6, sport, dport, backup, if-idx [, error].
-
name: sub-priority
value: 13
doc: >-
The priority of a subflow has changed. 'error' should not be set.
Attributes: token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 |
daddr6, sport, dport, backup, if_idx [, error].
daddr6, sport, dport, backup, if-idx [, error].
-
name: listener-created
value: 15
@@ -255,7 +255,7 @@ attribute-sets:
name: timeout
type: u32
-
name: if_idx
name: if-idx
type: u32
-
name: reset-reason

@@ -27,7 +27,7 @@ attribute-sets:
name: proc
type: u32
-
name: service_time
name: service-time
type: s64
-
name: pad
@@ -139,7 +139,7 @@ operations:
- prog
- version
- proc
- service_time
- service-time
- saddr4
- daddr4
- saddr6

@@ -216,7 +216,7 @@ definitions:
type: struct
members:
-
name: nd_target
name: nd-target
type: binary
len: 16
byte-order: big-endian
@@ -258,12 +258,12 @@ definitions:
type: struct
members:
-
name: vlan_tpid
name: vlan-tpid
type: u16
byte-order: big-endian
doc: Tag protocol identifier (TPID) to push.
-
name: vlan_tci
name: vlan-tci
type: u16
byte-order: big-endian
doc: Tag control identifier (TCI) to push.

@@ -603,7 +603,7 @@ definitions:
name: optmask
type: u32
-
name: if_stats_msg
name: if-stats-msg
type: struct
members:
-
@@ -2486,7 +2486,7 @@ operations:
name: getstats
doc: Get / dump link stats.
attribute-set: stats-attrs
fixed-header: if_stats_msg
fixed-header: if-stats-msg
do:
request:
value: 94

@@ -233,7 +233,7 @@ definitions:
type: u8
doc: log(P_max / (qth-max - qth-min))
-
name: Scell_log
name: Scell-log
type: u8
doc: cell size for idle damping
-
@@ -254,7 +254,7 @@ definitions:
name: DPs
type: u32
-
name: def_DP
name: def-DP
type: u32
-
name: grio

@@ -66,7 +66,7 @@ Admin Function driver
As mentioned above RVU PF0 is called the admin function (AF), this driver
supports resource provisioning and configuration of functional blocks.
Doesn't handle any I/O. It sets up few basic stuff but most of the
funcionality is achieved via configuration requests from PFs and VFs.
functionality is achieved via configuration requests from PFs and VFs.
PF/VFs communicates with AF via a shared memory region (mailbox). Upon
receiving requests AF does resource provisioning and other HW configuration.

@@ -1,8 +1,8 @@
.. SPDX-License-Identifier: GPL-2.0-only
=====================================================================
Audio drivers for Cirrus Logic CS35L54/56/57 Boosted Smart Amplifiers
=====================================================================
========================================================================
Audio drivers for Cirrus Logic CS35L54/56/57/63 Boosted Smart Amplifiers
========================================================================
:Copyright: 2025 Cirrus Logic, Inc. and
Cirrus Logic International Semiconductor Ltd.
@@ -13,11 +13,11 @@ Summary
The high-level summary of this document is:
**If you have a laptop that uses CS35L54/56/57 amplifiers but audio is not
**If you have a laptop that uses CS35L54/56/57/63 amplifiers but audio is not
working, DO NOT ATTEMPT TO USE FIRMWARE AND SETTINGS FROM ANOTHER LAPTOP,
EVEN IF THAT LAPTOP SEEMS SIMILAR.**
The CS35L54/56/57 amplifiers must be correctly configured for the power
The CS35L54/56/57/63 amplifiers must be correctly configured for the power
supply voltage, speaker impedance, maximum speaker voltage/current, and
other external hardware connections.
@ -34,6 +34,7 @@ The cs35l56 drivers support:
* CS35L54
* CS35L56
* CS35L57
* CS35L63
There are two drivers in the kernel
@@ -104,6 +105,13 @@ In this example the SSID is 10280c63.
The format of the firmware file names is:
SoundWire (except CS35L56 Rev B0):
cs35lxx-b0-dsp1-misc-SSID[-spkidX]-l?u?
SoundWire CS35L56 Rev B0:
cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN
Non-SoundWire (HDA and I2S):
cs35lxx-b0-dsp1-misc-SSID[-spkidX]-ampN
Where:
@@ -111,12 +119,18 @@ Where:
* cs35lxx-b0 is the amplifier model and silicon revision. This information
is logged by the driver during initialization.
* SSID is the 8-digit hexadecimal SSID value.
* l?u? is the physical address on the SoundWire bus of the amp this
file applies to.
* ampN is the amplifier number (for example amp1). This is the same as
the prefix on the ALSA control names except that it is always lower-case
in the file name.
* spkidX is an optional part, used for laptops that have firmware
configurations for different makes and models of internal speakers.
The CS35L56 Rev B0 continues to use the old filename scheme because a
large number of firmware files have already been published with these
names.
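For example (hypothetical values, assuming a B0 silicon revision), a
non-SoundWire system using CS35L63 amplifiers with SSID 10280c63 and two
amps would request firmware named cs35l63-b0-dsp1-misc-10280c63-amp1 and
cs35l63-b0-dsp1-misc-10280c63-amp2, with the optional -spkidX part
inserted before -ampN on models that use speaker IDs.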
Sound Open Firmware and ALSA topology files
-------------------------------------------

@@ -6645,7 +6645,8 @@ to the byte array.
.. note::
For KVM_EXIT_IO, KVM_EXIT_MMIO, KVM_EXIT_OSI, KVM_EXIT_PAPR, KVM_EXIT_XEN,
KVM_EXIT_EPR, KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding
KVM_EXIT_EPR, KVM_EXIT_HYPERCALL, KVM_EXIT_TDX,
KVM_EXIT_X86_RDMSR and KVM_EXIT_X86_WRMSR the corresponding
operations are complete (and guest state is consistent) only after userspace
has re-entered the kernel with KVM_RUN. The kernel side will first finish
incomplete operations and then check for pending signals.
@@ -7174,6 +7175,62 @@ The valid value for 'flags' is:
- KVM_NOTIFY_CONTEXT_INVALID -- the VM context is corrupted and not valid
in VMCS. It would run into unknown result if resume the target VM.
::
/* KVM_EXIT_TDX */
struct {
__u64 flags;
__u64 nr;
union {
struct {
u64 ret;
u64 data[5];
} unknown;
struct {
u64 ret;
u64 gpa;
u64 size;
} get_quote;
struct {
u64 ret;
u64 leaf;
u64 r11, r12, r13, r14;
} get_tdvmcall_info;
};
} tdx;
Process a TDVMCALL from the guest. KVM forwards select TDVMCALL based
on the Guest-Hypervisor Communication Interface (GHCI) specification;
KVM bridges these requests to the userspace VMM with minimal changes,
placing the inputs in the union and copying them back to the guest
on re-entry.
Flags are currently always zero, whereas ``nr`` contains the TDVMCALL
number from register R11. The remaining fields of the union provide the
inputs and outputs of the TDVMCALL. Currently the following values of
``nr`` are defined:
* ``TDVMCALL_GET_QUOTE``: the guest has requested to generate a TD-Quote
signed by a service hosting TD-Quoting Enclave operating on the host.
Parameters and return value are in the ``get_quote`` field of the union.
The ``gpa`` field and ``size`` specify the guest physical address
(without the shared bit set) and the size of a shared-memory buffer, in
which the TDX guest passes a TD Report. The ``ret`` field represents
the return value of the GetQuote request. When the request has been
queued successfully, the TDX guest can poll the status field in the
shared-memory area to check whether the Quote generation is completed or
not. When completed, the generated Quote is returned via the same buffer.
* ``TDVMCALL_GET_TD_VM_CALL_INFO``: the guest has requested the support
status of TDVMCALLs. The output values for the given leaf should be
placed in fields from ``r11`` to ``r14`` of the ``get_tdvmcall_info``
field of the union.
KVM may add support for more values in the future that may cause a userspace
exit, even without calls to ``KVM_ENABLE_CAP`` or similar. In this case,
it will enter with output fields already valid; in the common case, the
``unknown.ret`` field of the union will be ``TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED``.
Userspace need not do anything if it does not wish to support a TDVMCALL.
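As a rough userspace sketch (assuming ``run`` points to the mmap'ed kvm_run
structure and ``generate_quote()`` is a hypothetical VMM helper; not an
authoritative implementation)::

    /* After ioctl(vcpu_fd, KVM_RUN, 0) returns: */
    if (run->exit_reason == KVM_EXIT_TDX) {
            switch (run->tdx.nr) {
            case TDVMCALL_GET_QUOTE:
                    /*
                     * The TD Report sits in a shared buffer of
                     * get_quote.size bytes at guest physical address
                     * get_quote.gpa; queue quote generation and report
                     * the GHCI status code back in get_quote.ret.
                     */
                    run->tdx.get_quote.ret =
                            generate_quote(run->tdx.get_quote.gpa,
                                           run->tdx.get_quote.size);
                    break;
            case TDVMCALL_GET_TD_VM_CALL_INFO:
                    /* Advertise no optional TDVMCALLs for this leaf. */
                    run->tdx.get_tdvmcall_info.r11 = 0;
                    run->tdx.get_tdvmcall_info.r12 = 0;
                    run->tdx.get_tdvmcall_info.r13 = 0;
                    run->tdx.get_tdvmcall_info.r14 = 0;
                    break;
            default:
                    /* Leave unknown.ret as filled in by KVM. */
                    break;
            }
    }
    /* Outputs are copied back to the guest on the next KVM_RUN. */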
::
/* Fix the size of the union. */

@@ -10839,7 +10839,7 @@ S: Maintained
F: drivers/dma/hisi_dma.c
HISILICON GPIO DRIVER
M: Jay Fang <f.fangjian@huawei.com>
M: Yang Shen <shenyang39@huawei.com>
L: linux-gpio@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/gpio/hisilicon,ascend910-gpio.yaml
@@ -11155,7 +11155,8 @@ F: include/linux/platform_data/huawei-gaokun-ec.h
HUGETLB SUBSYSTEM
M: Muchun Song <muchun.song@linux.dev>
R: Oscar Salvador <osalvador@suse.de>
M: Oscar Salvador <osalvador@suse.de>
R: David Hildenbrand <david@redhat.com>
L: linux-mm@kvack.org
S: Maintained
F: Documentation/ABI/testing/sysfs-kernel-mm-hugepages
@@ -11166,6 +11167,7 @@ F: fs/hugetlbfs/
F: include/linux/hugetlb.h
F: include/trace/events/hugetlbfs.h
F: mm/hugetlb.c
F: mm/hugetlb_cgroup.c
F: mm/hugetlb_cma.c
F: mm/hugetlb_cma.h
F: mm/hugetlb_vmemmap.c
@@ -13345,6 +13347,7 @@ M: Alexander Graf <graf@amazon.com>
M: Mike Rapoport <rppt@kernel.org>
M: Changyuan Lyu <changyuanl@google.com>
L: kexec@lists.infradead.org
L: linux-mm@kvack.org
S: Maintained
F: Documentation/admin-guide/mm/kho.rst
F: Documentation/core-api/kho/*
@@ -15676,8 +15679,11 @@ S: Maintained
F: Documentation/core-api/boot-time-mm.rst
F: Documentation/core-api/kho/bindings/memblock/*
F: include/linux/memblock.h
F: mm/bootmem_info.c
F: mm/memblock.c
F: mm/memtest.c
F: mm/mm_init.c
F: mm/rodata_test.c
F: tools/testing/memblock/
MEMORY ALLOCATION PROFILING
@@ -15732,7 +15738,6 @@ F: Documentation/admin-guide/mm/
F: Documentation/mm/
F: include/linux/gfp.h
F: include/linux/gfp_types.h
F: include/linux/memfd.h
F: include/linux/memory_hotplug.h
F: include/linux/memory-tiers.h
F: include/linux/mempolicy.h
@@ -15792,6 +15797,10 @@ S: Maintained
W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
F: mm/gup.c
F: mm/gup_test.c
F: mm/gup_test.h
F: tools/testing/selftests/mm/gup_longterm.c
F: tools/testing/selftests/mm/gup_test.c
MEMORY MANAGEMENT - KSM (Kernel Samepage Merging)
M: Andrew Morton <akpm@linux-foundation.org>
@@ -15868,6 +15877,7 @@ L: linux-mm@kvack.org
S: Maintained
F: mm/pt_reclaim.c
F: mm/vmscan.c
F: mm/workingset.c
MEMORY MANAGEMENT - RMAP (REVERSE MAPPING)
M: Andrew Morton <akpm@linux-foundation.org>
@@ -15880,6 +15890,7 @@ R: Harry Yoo <harry.yoo@oracle.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/rmap.h
F: mm/page_vma_mapped.c
F: mm/rmap.c
MEMORY MANAGEMENT - SECRETMEM
@@ -15972,11 +15983,14 @@ S: Maintained
W: http://www.linux-mm.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
F: include/trace/events/mmap.h
F: mm/mincore.c
F: mm/mlock.c
F: mm/mmap.c
F: mm/mprotect.c
F: mm/mremap.c
F: mm/mseal.c
F: mm/msync.c
F: mm/nommu.c
F: mm/vma.c
F: mm/vma.h
F: mm/vma_exec.c
@@ -25027,8 +25041,11 @@ M: Hugh Dickins <hughd@google.com>
R: Baolin Wang <baolin.wang@linux.alibaba.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/memfd.h
F: include/linux/shmem_fs.h
F: mm/memfd.c
F: mm/shmem.c
F: mm/shmem_quota.c
TOMOYO SECURITY MODULE
M: Kentaro Takeda <takedakn@nttdata.co.jp>

@@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 16
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Baby Opossum Posse
# *DOCUMENTATION*

@@ -561,68 +561,6 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
vcpu_set_flag((v), e); \
} while (0)
#define __build_check_all_or_none(r, bits) \
BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits))
#define __cpacr_to_cptr_clr(clr, set) \
({ \
u64 cptr = 0; \
\
if ((set) & CPACR_EL1_FPEN) \
cptr |= CPTR_EL2_TFP; \
if ((set) & CPACR_EL1_ZEN) \
cptr |= CPTR_EL2_TZ; \
if ((set) & CPACR_EL1_SMEN) \
cptr |= CPTR_EL2_TSM; \
if ((clr) & CPACR_EL1_TTA) \
cptr |= CPTR_EL2_TTA; \
if ((clr) & CPTR_EL2_TAM) \
cptr |= CPTR_EL2_TAM; \
if ((clr) & CPTR_EL2_TCPAC) \
cptr |= CPTR_EL2_TCPAC; \
\
cptr; \
})
#define __cpacr_to_cptr_set(clr, set) \
({ \
u64 cptr = 0; \
\
if ((clr) & CPACR_EL1_FPEN) \
cptr |= CPTR_EL2_TFP; \
if ((clr) & CPACR_EL1_ZEN) \
cptr |= CPTR_EL2_TZ; \
if ((clr) & CPACR_EL1_SMEN) \
cptr |= CPTR_EL2_TSM; \
if ((set) & CPACR_EL1_TTA) \
cptr |= CPTR_EL2_TTA; \
if ((set) & CPTR_EL2_TAM) \
cptr |= CPTR_EL2_TAM; \
if ((set) & CPTR_EL2_TCPAC) \
cptr |= CPTR_EL2_TCPAC; \
\
cptr; \
})
#define cpacr_clear_set(clr, set) \
do { \
BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \
BUILD_BUG_ON((clr) & CPACR_EL1_E0POE); \
__build_check_all_or_none((clr), CPACR_EL1_FPEN); \
__build_check_all_or_none((set), CPACR_EL1_FPEN); \
__build_check_all_or_none((clr), CPACR_EL1_ZEN); \
__build_check_all_or_none((set), CPACR_EL1_ZEN); \
__build_check_all_or_none((clr), CPACR_EL1_SMEN); \
__build_check_all_or_none((set), CPACR_EL1_SMEN); \
\
if (has_vhe() || has_hvhe()) \
sysreg_clear_set(cpacr_el1, clr, set); \
else \
sysreg_clear_set(cptr_el2, \
__cpacr_to_cptr_clr(clr, set), \
__cpacr_to_cptr_set(clr, set));\
} while (0)
/*
* Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
* format if E2H isn't set.

@@ -1289,9 +1289,8 @@ void kvm_arm_resume_guest(struct kvm *kvm);
})
/*
* The couple of isb() below are there to guarantee the same behaviour
* on VHE as on !VHE, where the eret to EL1 acts as a context
* synchronization event.
* The isb() below is there to guarantee the same behaviour on VHE as on !VHE,
* where the eret to EL1 acts as a context synchronization event.
*/
#define kvm_call_hyp(f, ...) \
do { \
@@ -1309,7 +1308,6 @@ void kvm_arm_resume_guest(struct kvm *kvm);
\
if (has_vhe()) { \
ret = f(__VA_ARGS__); \
isb(); \
} else { \
ret = kvm_call_hyp_nvhe(f, ##__VA_ARGS__); \
} \

@@ -288,7 +288,9 @@ static void flush_gcs(void)
if (!system_supports_gcs())
return;
gcs_free(current);
current->thread.gcspr_el0 = 0;
current->thread.gcs_base = 0;
current->thread.gcs_size = 0;
current->thread.gcs_el0_mode = 0;
write_sysreg_s(GCSCRE0_EL1_nTR, SYS_GCSCRE0_EL1);
write_sysreg_s(0, SYS_GCSPR_EL0);

@@ -141,7 +141,7 @@ unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
addr += n;
if (regs_within_kernel_stack(regs, (unsigned long)addr))
return *addr;
return READ_ONCE_NOCHECK(*addr);
else
return 0;
}

@@ -2764,7 +2764,8 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
bool kvm_arch_irqfd_route_changed(struct kvm_kernel_irq_routing_entry *old,
struct kvm_kernel_irq_routing_entry *new)
{
if (new->type != KVM_IRQ_ROUTING_MSI)
if (old->type != KVM_IRQ_ROUTING_MSI ||
new->type != KVM_IRQ_ROUTING_MSI)
return true;
return memcmp(&old->msi, &new->msi, sizeof(new->msi));

@@ -65,6 +65,136 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
}
}
static inline void __activate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
{
u64 val = CPTR_NVHE_EL2_RES1 | CPTR_EL2_TAM | CPTR_EL2_TTA;
/*
* Always trap SME since it's not supported in KVM.
* TSM is RES1 if SME isn't implemented.
*/
val |= CPTR_EL2_TSM;
if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
val |= CPTR_EL2_TZ;
if (!guest_owns_fp_regs())
val |= CPTR_EL2_TFP;
write_sysreg(val, cptr_el2);
}
static inline void __activate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
{
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
* CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
* except for some missing controls, such as TAM.
* In this case, CPTR_EL2.TAM has the same position with or without
* VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
* shift value for trapping the AMU accesses.
*/
u64 val = CPTR_EL2_TAM | CPACR_EL1_TTA;
u64 cptr;
if (guest_owns_fp_regs()) {
val |= CPACR_EL1_FPEN;
if (vcpu_has_sve(vcpu))
val |= CPACR_EL1_ZEN;
}
if (!vcpu_has_nv(vcpu))
goto write;
/*
* The architecture is a bit crap (what a surprise): an EL2 guest
* writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA,
* as they are RES0 in the guest's view. To work around it, trap the
* sucker using the very same bit it can't set...
*/
if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu))
val |= CPTR_EL2_TCPAC;
/*
* Layer the guest hypervisor's trap configuration on top of our own if
* we're in a nested context.
*/
if (is_hyp_ctxt(vcpu))
goto write;
cptr = vcpu_sanitised_cptr_el2(vcpu);
/*
* Pay attention, there's some interesting detail here.
*
* The CPTR_EL2.xEN fields are 2 bits wide, although there are only two
* meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest):
*
* - CPTR_EL2.xEN = x0, traps are enabled
* - CPTR_EL2.xEN = x1, traps are disabled
*
* In other words, bit[0] determines if guest accesses trap or not. In
* the interest of simplicity, clear the entire field if the guest
* hypervisor has traps enabled to dispel any illusion of something more
* complicated taking place.
*/
if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0)))
val &= ~CPACR_EL1_FPEN;
if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
val &= ~CPACR_EL1_ZEN;
if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
val |= cptr & CPACR_EL1_E0POE;
val |= cptr & CPTR_EL2_TCPAC;
write:
write_sysreg(val, cpacr_el1);
}
static inline void __activate_cptr_traps(struct kvm_vcpu *vcpu)
{
if (!guest_owns_fp_regs())
__activate_traps_fpsimd32(vcpu);
if (has_vhe() || has_hvhe())
__activate_cptr_traps_vhe(vcpu);
else
__activate_cptr_traps_nvhe(vcpu);
}
static inline void __deactivate_cptr_traps_nvhe(struct kvm_vcpu *vcpu)
{
u64 val = CPTR_NVHE_EL2_RES1;
if (!cpus_have_final_cap(ARM64_SVE))
val |= CPTR_EL2_TZ;
if (!cpus_have_final_cap(ARM64_SME))
val |= CPTR_EL2_TSM;
write_sysreg(val, cptr_el2);
}
static inline void __deactivate_cptr_traps_vhe(struct kvm_vcpu *vcpu)
{
u64 val = CPACR_EL1_FPEN;
if (cpus_have_final_cap(ARM64_SVE))
val |= CPACR_EL1_ZEN;
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN;
write_sysreg(val, cpacr_el1);
}
static inline void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
{
if (has_vhe() || has_hvhe())
__deactivate_cptr_traps_vhe(vcpu);
else
__deactivate_cptr_traps_nvhe(vcpu);
}
#define reg_to_fgt_masks(reg) \
({ \
struct fgt_masks *m; \
@@ -486,11 +616,6 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
*/
if (system_supports_sve()) {
__hyp_sve_save_host();
/* Re-enable SVE traps if not supported for the guest vcpu. */
if (!vcpu_has_sve(vcpu))
cpacr_clear_set(CPACR_EL1_ZEN, 0);
} else {
__fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
}
@@ -541,10 +666,7 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Valid trap. Switch the context: */
/* First disable enough traps to allow us to update the registers */
if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve()))
cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN);
else
cpacr_clear_set(0, CPACR_EL1_FPEN);
__deactivate_cptr_traps(vcpu);
isb();
/* Write out the host state if it's in the registers */
@@ -566,6 +688,13 @@ static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
*host_data_ptr(fp_owner) = FP_STATE_GUEST_OWNED;
/*
* Re-enable traps necessary for the current state of the guest, e.g.
* those enabled by a guest hypervisor. The ERET to the guest will
* provide the necessary context synchronization.
*/
__activate_cptr_traps(vcpu);
return true;
}

@@ -69,7 +69,10 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
if (!guest_owns_fp_regs())
return;
cpacr_clear_set(0, CPACR_EL1_FPEN | CPACR_EL1_ZEN);
/*
* Traps have been disabled by __deactivate_cptr_traps(), but there
* hasn't necessarily been a context synchronization event yet.
*/
isb();
if (vcpu_has_sve(vcpu))

@@ -47,65 +47,6 @@ struct fgt_masks hdfgwtr2_masks;
extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
{
u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */
if (!guest_owns_fp_regs())
__activate_traps_fpsimd32(vcpu);
if (has_hvhe()) {
val |= CPACR_EL1_TTA;
if (guest_owns_fp_regs()) {
val |= CPACR_EL1_FPEN;
if (vcpu_has_sve(vcpu))
val |= CPACR_EL1_ZEN;
}
write_sysreg(val, cpacr_el1);
} else {
val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
/*
* Always trap SME since it's not supported in KVM.
* TSM is RES1 if SME isn't implemented.
*/
val |= CPTR_EL2_TSM;
if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
val |= CPTR_EL2_TZ;
if (!guest_owns_fp_regs())
val |= CPTR_EL2_TFP;
write_sysreg(val, cptr_el2);
}
}
static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
{
if (has_hvhe()) {
u64 val = CPACR_EL1_FPEN;
if (cpus_have_final_cap(ARM64_SVE))
val |= CPACR_EL1_ZEN;
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN;
write_sysreg(val, cpacr_el1);
} else {
u64 val = CPTR_NVHE_EL2_RES1;
if (!cpus_have_final_cap(ARM64_SVE))
val |= CPTR_EL2_TZ;
if (!cpus_have_final_cap(ARM64_SME))
val |= CPTR_EL2_TSM;
write_sysreg(val, cptr_el2);
}
}
static void __activate_traps(struct kvm_vcpu *vcpu)
{
___activate_traps(vcpu, vcpu->arch.hcr_el2);

@@ -90,87 +90,6 @@ static u64 __compute_hcr(struct kvm_vcpu *vcpu)
return hcr | (guest_hcr & ~NV_HCR_GUEST_EXCLUDE);
}
static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
{
u64 cptr;
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
* CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
* except for some missing controls, such as TAM.
* In this case, CPTR_EL2.TAM has the same position with or without
* VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
* shift value for trapping the AMU accesses.
*/
u64 val = CPACR_EL1_TTA | CPTR_EL2_TAM;
if (guest_owns_fp_regs()) {
val |= CPACR_EL1_FPEN;
if (vcpu_has_sve(vcpu))
val |= CPACR_EL1_ZEN;
} else {
__activate_traps_fpsimd32(vcpu);
}
if (!vcpu_has_nv(vcpu))
goto write;
/*
* The architecture is a bit crap (what a surprise): an EL2 guest
* writing to CPTR_EL2 via CPACR_EL1 can't set any of TCPAC or TTA,
* as they are RES0 in the guest's view. To work around it, trap the
* sucker using the very same bit it can't set...
*/
if (vcpu_el2_e2h_is_set(vcpu) && is_hyp_ctxt(vcpu))
val |= CPTR_EL2_TCPAC;
/*
* Layer the guest hypervisor's trap configuration on top of our own if
* we're in a nested context.
*/
if (is_hyp_ctxt(vcpu))
goto write;
cptr = vcpu_sanitised_cptr_el2(vcpu);
/*
* Pay attention, there's some interesting detail here.
*
* The CPTR_EL2.xEN fields are 2 bits wide, although there are only two
* meaningful trap states when HCR_EL2.TGE = 0 (running a nested guest):
*
* - CPTR_EL2.xEN = x0, traps are enabled
* - CPTR_EL2.xEN = x1, traps are disabled
*
* In other words, bit[0] determines if guest accesses trap or not. In
* the interest of simplicity, clear the entire field if the guest
* hypervisor has traps enabled to dispel any illusion of something more
* complicated taking place.
*/
if (!(SYS_FIELD_GET(CPACR_EL1, FPEN, cptr) & BIT(0)))
val &= ~CPACR_EL1_FPEN;
if (!(SYS_FIELD_GET(CPACR_EL1, ZEN, cptr) & BIT(0)))
val &= ~CPACR_EL1_ZEN;
if (kvm_has_feat(vcpu->kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
val |= cptr & CPACR_EL1_E0POE;
val |= cptr & CPTR_EL2_TCPAC;
write:
write_sysreg(val, cpacr_el1);
}
static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
{
u64 val = CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN;
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN_EL1EN;
write_sysreg(val, cpacr_el1);
}
static void __activate_traps(struct kvm_vcpu *vcpu)
{
u64 val;
@@ -639,10 +558,10 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
host_ctxt = host_data_ptr(host_ctxt);
guest_ctxt = &vcpu->arch.ctxt;
sysreg_save_host_state_vhe(host_ctxt);
fpsimd_lazy_switch_to_guest(vcpu);
sysreg_save_host_state_vhe(host_ctxt);
/*
* Note that ARM erratum 1165522 requires us to configure both stage 1
* and stage 2 translation for the guest context before we clear
@@ -667,15 +586,23 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
__deactivate_traps(vcpu);
fpsimd_lazy_switch_to_host(vcpu);
sysreg_restore_host_state_vhe(host_ctxt);
__debug_switch_to_host(vcpu);
/*
* Ensure that all system register writes above have taken effect
* before returning to the host. In VHE mode, CPTR traps for
* FPSIMD/SVE/SME also apply to EL2, so FPSIMD/SVE/SME state must be
* manipulated after the ISB.
*/
isb();
fpsimd_lazy_switch_to_host(vcpu);
if (guest_owns_fp_regs())
__fpsimd_save_fpexc32(vcpu);
__debug_switch_to_host(vcpu);
return exit_code;
}
NOKPROBE_SYMBOL(__kvm_vcpu_run_vhe);
@@ -705,12 +632,6 @@ int __kvm_vcpu_run(struct kvm_vcpu *vcpu)
*/
local_daif_restore(DAIF_PROCCTX_NOIRQ);
/*
* When we exit from the guest we change a number of CPU configuration
* parameters, such as traps. We rely on the isb() in kvm_call_hyp*()
* to make sure these changes take effect before running the host or
* additional guests.
*/
return ret;
}

@@ -36,6 +36,11 @@ struct shadow_if {
static DEFINE_PER_CPU(struct shadow_if, shadow_if);
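/*
 * Example (illustrative): with lr_map == 0b1011, host LRs 0, 1 and 3
 * are in use and land in shadow slots 0, 1 and 2; counting the set
 * bits below idx yields the compacted shadow index.
 */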
static int lr_map_idx_to_shadow_idx(struct shadow_if *shadow_if, int idx)
{
return hweight16(shadow_if->lr_map & (BIT(idx) - 1));
}
/*
* Nesting GICv3 support
*
@@ -209,6 +214,29 @@ u64 vgic_v3_get_misr(struct kvm_vcpu *vcpu)
return reg;
}
static u64 translate_lr_pintid(struct kvm_vcpu *vcpu, u64 lr)
{
struct vgic_irq *irq;
if (!(lr & ICH_LR_HW))
return lr;
/* We have the HW bit set, check for validity of pINTID */
irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
/* If there was no real mapping, nuke the HW bit */
if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI)
lr &= ~ICH_LR_HW;
/* Translate the virtual mapping to the real one, even if invalid */
if (irq) {
lr &= ~ICH_LR_PHYS_ID_MASK;
lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid);
vgic_put_irq(vcpu->kvm, irq);
}
return lr;
}
/*
* For LRs which have HW bit set such as timer interrupts, we modify them to
* have the host hardware interrupt number instead of the virtual one programmed
@@ -217,58 +245,37 @@ u64 vgic_v3_get_misr(struct kvm_vcpu *vcpu)
static void vgic_v3_create_shadow_lr(struct kvm_vcpu *vcpu,
struct vgic_v3_cpu_if *s_cpu_if)
{
unsigned long lr_map = 0;
int index = 0;
struct shadow_if *shadow_if;
shadow_if = container_of(s_cpu_if, struct shadow_if, cpuif);
shadow_if->lr_map = 0;
for (int i = 0; i < kvm_vgic_global_state.nr_lr; i++) {
u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
struct vgic_irq *irq;
if (!(lr & ICH_LR_STATE))
lr = 0;
continue;
if (!(lr & ICH_LR_HW))
goto next;
lr = translate_lr_pintid(vcpu, lr);
/* We have the HW bit set, check for validity of pINTID */
irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
if (!irq || !irq->hw || irq->intid > VGIC_MAX_SPI ) {
/* There was no real mapping, so nuke the HW bit */
lr &= ~ICH_LR_HW;
if (irq)
vgic_put_irq(vcpu->kvm, irq);
goto next;
}
/* Translate the virtual mapping to the real one */
lr &= ~ICH_LR_PHYS_ID_MASK;
lr |= FIELD_PREP(ICH_LR_PHYS_ID_MASK, (u64)irq->hwintid);
vgic_put_irq(vcpu->kvm, irq);
next:
s_cpu_if->vgic_lr[index] = lr;
if (lr) {
lr_map |= BIT(i);
index++;
}
s_cpu_if->vgic_lr[hweight16(shadow_if->lr_map)] = lr;
shadow_if->lr_map |= BIT(i);
}
container_of(s_cpu_if, struct shadow_if, cpuif)->lr_map = lr_map;
s_cpu_if->used_lrs = index;
s_cpu_if->used_lrs = hweight16(shadow_if->lr_map);
}
void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
{
struct shadow_if *shadow_if = get_shadow_if();
int i, index = 0;
int i;
for_each_set_bit(i, &shadow_if->lr_map, kvm_vgic_global_state.nr_lr) {
u64 lr = __vcpu_sys_reg(vcpu, ICH_LRN(i));
struct vgic_irq *irq;
if (!(lr & ICH_LR_HW) || !(lr & ICH_LR_STATE))
goto next;
continue;
/*
* If we had a HW lr programmed by the guest hypervisor, we
@@ -277,15 +284,13 @@ void vgic_v3_sync_nested(struct kvm_vcpu *vcpu)
*/
irq = vgic_get_vcpu_irq(vcpu, FIELD_GET(ICH_LR_PHYS_ID_MASK, lr));
if (WARN_ON(!irq)) /* Shouldn't happen as we check on load */
goto next;
continue;
lr = __gic_v3_get_lr(index);
lr = __gic_v3_get_lr(lr_map_idx_to_shadow_idx(shadow_if, i));
if (!(lr & ICH_LR_STATE))
irq->active = false;
vgic_put_irq(vcpu->kvm, irq);
next:
index++;
}
}
@@ -368,13 +373,11 @@ void vgic_v3_put_nested(struct kvm_vcpu *vcpu)
val = __vcpu_sys_reg(vcpu, ICH_LRN(i));
val &= ~ICH_LR_STATE;
val |= s_cpu_if->vgic_lr[i] & ICH_LR_STATE;
val |= s_cpu_if->vgic_lr[lr_map_idx_to_shadow_idx(shadow_if, i)] & ICH_LR_STATE;
__vcpu_assign_sys_reg(vcpu, ICH_LRN(i), val);
s_cpu_if->vgic_lr[i] = 0;
}
shadow_if->lr_map = 0;
vcpu->arch.vgic_cpu.vgic_v3.used_lrs = 0;
}

@@ -1305,7 +1305,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
next = addr;
end = addr + PUD_SIZE;
do {
pmd_free_pte_page(pmdp, next);
if (pmd_present(pmdp_get(pmdp)))
pmd_free_pte_page(pmdp, next);
} while (pmdp++, next += PMD_SIZE, next != end);
pud_clear(pudp);

@@ -103,7 +103,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
break;
case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
if (cp->a2 == 0 && cp->a3 == 0)
if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
else
kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
@@ -111,7 +111,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
break;
case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
if (cp->a2 == 0 && cp->a3 == 0)
if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
hbase, hmask, cp->a4);
else
@@ -127,9 +127,9 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
/*
* Until nested virtualization is implemented, the
* SBI HFENCE calls should be treated as NOPs
* SBI HFENCE calls should return not supported
* hence fallthrough.
*/
break;
default:
retdata->err_val = SBI_ERR_NOT_SUPPORTED;
}

@@ -41,7 +41,7 @@ int start_io_thread(struct os_helper_thread **td_out, int *fd_out)
*fd_out = fds[1];
err = os_set_fd_block(*fd_out, 0);
err = os_set_fd_block(kernel_fd, 0);
err |= os_set_fd_block(kernel_fd, 0);
if (err) {
printk("start_io_thread - failed to set nonblocking I/O.\n");
goto out_close;

@@ -1625,35 +1625,19 @@ static void vector_eth_configure(
device->dev = dev;
*vp = ((struct vector_private)
{
.list = LIST_HEAD_INIT(vp->list),
.dev = dev,
.unit = n,
.options = get_transport_options(def),
.rx_irq = 0,
.tx_irq = 0,
.parsed = def,
.max_packet = get_mtu(def) + ETH_HEADER_OTHER,
/* TODO - we need to calculate headroom so that ip header
* is 16 byte aligned all the time
*/
.headroom = get_headroom(def),
.form_header = NULL,
.verify_header = NULL,
.header_rxbuffer = NULL,
.header_txbuffer = NULL,
.header_size = 0,
.rx_header_size = 0,
.rexmit_scheduled = false,
.opened = false,
.transport_data = NULL,
.in_write_poll = false,
.coalesce = 2,
.req_size = get_req_size(def),
.in_error = false,
.bpf = NULL
});
INIT_LIST_HEAD(&vp->list);
vp->dev = dev;
vp->unit = n;
vp->options = get_transport_options(def);
vp->parsed = def;
vp->max_packet = get_mtu(def) + ETH_HEADER_OTHER;
/*
* TODO - we need to calculate headroom so that ip header
* is 16 byte aligned all the time
*/
vp->headroom = get_headroom(def);
vp->coalesce = 2;
vp->req_size = get_req_size(def);
dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST);
INIT_WORK(&vp->reset_tx, vector_reset_tx);

@@ -570,6 +570,17 @@ static void uml_vfio_release_device(struct uml_vfio_device *dev)
kfree(dev);
}
static struct uml_vfio_device *uml_vfio_find_device(const char *device)
{
struct uml_vfio_device *dev;
list_for_each_entry(dev, &uml_vfio_devices, list) {
if (!strcmp(dev->name, device))
return dev;
}
return NULL;
}
static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *kp)
{
struct uml_vfio_device *dev;
@@ -582,6 +593,9 @@ static int uml_vfio_cmdline_set(const char *device, const struct kernel_param *k
uml_vfio_container.fd = fd;
}
if (uml_vfio_find_device(device))
return -EEXIST;
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev)
return -ENOMEM;

@@ -2826,7 +2826,7 @@ static void intel_pmu_read_event(struct perf_event *event)
* If the PEBS counters snapshotting is enabled,
* the topdown event is available in PEBS records.
*/
if (is_topdown_event(event) && !is_pebs_counter_event_group(event))
if (is_topdown_count(event) && !is_pebs_counter_event_group(event))
static_call(intel_pmu_update_topdown_event)(event, NULL);
else
intel_pmu_drain_pebs_buffer();

@@ -80,6 +80,7 @@
#define TDVMCALL_STATUS_RETRY 0x0000000000000001ULL
#define TDVMCALL_STATUS_INVALID_OPERAND 0x8000000000000000ULL
#define TDVMCALL_STATUS_ALIGN_ERROR 0x8000000000000002ULL
#define TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED 0x8000000000000003ULL
/*
* Bitmasks of exposed registers (with VMM).

@@ -228,7 +228,7 @@ static void *its_alloc(void)
struct its_array *pages = &its_pages;
void *page;
#ifdef CONFIG_MODULE
#ifdef CONFIG_MODULES
if (its_mod)
pages = &its_mod->arch.its_pages;
#endif
@@ -3138,6 +3138,6 @@ void __ref smp_text_poke_batch_add(void *addr, const void *opcode, size_t len, c
*/
void __ref smp_text_poke_single(void *addr, const void *opcode, size_t len, const void *emulate)
{
__smp_text_poke_batch_add(addr, opcode, len, emulate);
smp_text_poke_batch_add(addr, opcode, len, emulate);
smp_text_poke_batch_finish();
}

@@ -31,7 +31,7 @@
#include "cpu.h"
u16 invlpgb_count_max __ro_after_init;
u16 invlpgb_count_max __ro_after_init = 1;
static inline int rdmsrq_amd_safe(unsigned msr, u64 *p)
{

@@ -498,6 +498,7 @@ static void domain_add_cpu_mon(int cpu, struct rdt_resource *r)
struct rdt_hw_mon_domain *hw_dom;
struct rdt_domain_hdr *hdr;
struct rdt_mon_domain *d;
struct cacheinfo *ci;
int err;
lockdep_assert_held(&domain_list_lock);
@@ -525,12 +526,13 @@
d = &hw_dom->d_resctrl;
d->hdr.id = id;
d->hdr.type = RESCTRL_MON_DOMAIN;
d->ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
if (!d->ci) {
ci = get_cpu_cacheinfo_level(cpu, RESCTRL_L3_CACHE);
if (!ci) {
pr_warn_once("Can't find L3 cache for CPU:%d resource %s\n", cpu, r->name);
mon_domain_free(hw_dom);
return;
}
d->ci_id = ci->id;
cpumask_set_cpu(cpu, &d->hdr.cpu_mask);
arch_mon_domain_online(r, d);

@@ -1212,11 +1212,13 @@ static int tdx_map_gpa(struct kvm_vcpu *vcpu)
/*
* Converting TDVMCALL_MAP_GPA to KVM_HC_MAP_GPA_RANGE requires
* userspace to enable KVM_CAP_EXIT_HYPERCALL with KVM_HC_MAP_GPA_RANGE
* bit set. If not, the error code is not defined in GHCI for TDX, use
* TDVMCALL_STATUS_INVALID_OPERAND for this case.
* bit set. This is a base call so it should always be supported, but
* KVM has no way to ensure that userspace implements the GHCI correctly.
* So if KVM_HC_MAP_GPA_RANGE does not cause a VMEXIT, return an error
* to the guest.
*/
if (!user_exit_on_hypercall(vcpu->kvm, KVM_HC_MAP_GPA_RANGE)) {
ret = TDVMCALL_STATUS_INVALID_OPERAND;
ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED;
goto error;
}
@@ -1449,20 +1451,85 @@ static int tdx_emulate_mmio(struct kvm_vcpu *vcpu)
return 1;
}
static int tdx_complete_get_td_vm_call_info(struct kvm_vcpu *vcpu)
{
struct vcpu_tdx *tdx = to_tdx(vcpu);
tdvmcall_set_return_code(vcpu, vcpu->run->tdx.get_tdvmcall_info.ret);
/*
* For now, there is no TDVMCALL beyond GHCI base API supported by KVM
* directly without the support from userspace, just set the value
* returned from userspace.
*/
tdx->vp_enter_args.r11 = vcpu->run->tdx.get_tdvmcall_info.r11;
tdx->vp_enter_args.r12 = vcpu->run->tdx.get_tdvmcall_info.r12;
tdx->vp_enter_args.r13 = vcpu->run->tdx.get_tdvmcall_info.r13;
tdx->vp_enter_args.r14 = vcpu->run->tdx.get_tdvmcall_info.r14;
return 1;
}
static int tdx_get_td_vm_call_info(struct kvm_vcpu *vcpu)
{
struct vcpu_tdx *tdx = to_tdx(vcpu);
if (tdx->vp_enter_args.r12)
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
else {
switch (tdx->vp_enter_args.r12) {
case 0:
tdx->vp_enter_args.r11 = 0;
tdx->vp_enter_args.r12 = 0;
tdx->vp_enter_args.r13 = 0;
tdx->vp_enter_args.r14 = 0;
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUCCESS);
return 1;
case 1:
vcpu->run->tdx.get_tdvmcall_info.leaf = tdx->vp_enter_args.r12;
vcpu->run->exit_reason = KVM_EXIT_TDX;
vcpu->run->tdx.flags = 0;
vcpu->run->tdx.nr = TDVMCALL_GET_TD_VM_CALL_INFO;
vcpu->run->tdx.get_tdvmcall_info.ret = TDVMCALL_STATUS_SUCCESS;
vcpu->run->tdx.get_tdvmcall_info.r11 = 0;
vcpu->run->tdx.get_tdvmcall_info.r12 = 0;
vcpu->run->tdx.get_tdvmcall_info.r13 = 0;
vcpu->run->tdx.get_tdvmcall_info.r14 = 0;
vcpu->arch.complete_userspace_io = tdx_complete_get_td_vm_call_info;
return 0;
default:
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
return 1;
}
}
static int tdx_complete_simple(struct kvm_vcpu *vcpu)
{
tdvmcall_set_return_code(vcpu, vcpu->run->tdx.unknown.ret);
return 1;
}
static int tdx_get_quote(struct kvm_vcpu *vcpu)
{
struct vcpu_tdx *tdx = to_tdx(vcpu);
u64 gpa = tdx->vp_enter_args.r12;
u64 size = tdx->vp_enter_args.r13;
/* The gpa of buffer must have shared bit set. */
if (vt_is_tdx_private_gpa(vcpu->kvm, gpa)) {
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
return 1;
}
vcpu->run->exit_reason = KVM_EXIT_TDX;
vcpu->run->tdx.flags = 0;
vcpu->run->tdx.nr = TDVMCALL_GET_QUOTE;
vcpu->run->tdx.get_quote.ret = TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED;
vcpu->run->tdx.get_quote.gpa = gpa & ~gfn_to_gpa(kvm_gfn_direct_bits(tdx->vcpu.kvm));
vcpu->run->tdx.get_quote.size = size;
vcpu->arch.complete_userspace_io = tdx_complete_simple;
return 0;
}
static int handle_tdvmcall(struct kvm_vcpu *vcpu)
{
switch (tdvmcall_leaf(vcpu)) {
@@ -1472,11 +1539,13 @@ static int handle_tdvmcall(struct kvm_vcpu *vcpu)
return tdx_report_fatal_error(vcpu);
case TDVMCALL_GET_TD_VM_CALL_INFO:
return tdx_get_td_vm_call_info(vcpu);
case TDVMCALL_GET_QUOTE:
return tdx_get_quote(vcpu);
default:
break;
}
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_INVALID_OPERAND);
tdvmcall_set_return_code(vcpu, TDVMCALL_STATUS_SUBFUNC_UNSUPPORTED);
return 1;
}

@@ -98,6 +98,11 @@ void __init pti_check_boottime_disable(void)
return;
setup_force_cpu_cap(X86_FEATURE_PTI);
if (cpu_feature_enabled(X86_FEATURE_INVLPGB)) {
pr_debug("PTI enabled, disabling INVLPGB\n");
setup_clear_cpu_cap(X86_FEATURE_INVLPGB);
}
}
static int __init pti_parse_cmdline(char *arg)

@@ -161,7 +161,7 @@ static int fpregs_legacy_set(struct task_struct *target,
from = kbuf;
}
return um_fxsr_from_i387(fxsave, &buf);
return um_fxsr_from_i387(fxsave, from);
}
#endif

@@ -176,16 +176,33 @@ config CRYPTO_USER
config CRYPTO_SELFTESTS
bool "Enable cryptographic self-tests"
depends on DEBUG_KERNEL
depends on EXPERT
help
Enable the cryptographic self-tests.
The cryptographic self-tests run at boot time, or at algorithm
registration time if algorithms are dynamically loaded later.
This is primarily intended for developer use. It should not be
enabled in production kernels, unless you are trying to use these
tests to fulfill a FIPS testing requirement.
There are two main use cases for these tests:
- Development and pre-release testing. In this case, also enable
CRYPTO_SELFTESTS_FULL to get the full set of tests. All crypto code
in the kernel is expected to pass the full set of tests.
- Production kernels, to help prevent buggy drivers from being used
and/or meet FIPS 140-3 pre-operational testing requirements. In
this case, enable CRYPTO_SELFTESTS but not CRYPTO_SELFTESTS_FULL.
config CRYPTO_SELFTESTS_FULL
bool "Enable the full set of cryptographic self-tests"
depends on CRYPTO_SELFTESTS
help
Enable the full set of cryptographic self-tests for each algorithm.
The full set of tests should be enabled for development and
pre-release testing, but not in production kernels.
All crypto code in the kernel is expected to pass the full tests.
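For a production kernel aiming at FIPS 140-3 pre-operational testing, the
help text above suggests a .config fragment like this (illustrative):
CONFIG_CRYPTO_SELFTESTS=y
# CONFIG_CRYPTO_SELFTESTS_FULL is not set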
config CRYPTO_NULL
tristate "Null algorithms"

@@ -600,12 +600,14 @@ static void ahash_def_finup_done2(void *data, int err)
static int ahash_def_finup_finish1(struct ahash_request *req, int err)
{
struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
if (err)
goto out;
req->base.complete = ahash_def_finup_done2;
err = crypto_ahash_final(req);
err = crypto_ahash_alg(tfm)->final(req);
if (err == -EINPROGRESS || err == -EBUSY)
return err;

@@ -45,6 +45,7 @@ static bool notests;
module_param(notests, bool, 0644);
MODULE_PARM_DESC(notests, "disable all crypto self-tests");
#ifdef CONFIG_CRYPTO_SELFTESTS_FULL
static bool noslowtests;
module_param(noslowtests, bool, 0644);
MODULE_PARM_DESC(noslowtests, "disable slow crypto self-tests");
@@ -52,6 +53,10 @@ MODULE_PARM_DESC(noslowtests, "disable slow crypto self-tests");
static unsigned int fuzz_iterations = 100;
module_param(fuzz_iterations, uint, 0644);
MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");
#else
#define noslowtests 1
#define fuzz_iterations 0
#endif
#ifndef CONFIG_CRYPTO_SELFTESTS
@@ -319,9 +324,9 @@ struct testvec_config {
/*
* The following are the lists of testvec_configs to test for each algorithm
* type when the fast crypto self-tests are enabled. They aim to provide good
* test coverage, while keeping the test time much shorter than the full tests
* so that the fast tests can be used to fulfill FIPS 140 testing requirements.
* type when the "fast" crypto self-tests are enabled. They aim to provide good
* test coverage, while keeping the test time much shorter than the "full" tests
* so that the "fast" tests can be enabled in a wider range of circumstances.
*/
/* Configs for skciphers and aeads */
@@ -1183,14 +1188,18 @@ static void generate_random_testvec_config(struct rnd_state *rng,
static void crypto_disable_simd_for_test(void)
{
#ifdef CONFIG_CRYPTO_SELFTESTS_FULL
migrate_disable();
__this_cpu_write(crypto_simd_disabled_for_test, true);
#endif
}
static void crypto_reenable_simd_for_test(void)
{
#ifdef CONFIG_CRYPTO_SELFTESTS_FULL
__this_cpu_write(crypto_simd_disabled_for_test, false);
migrate_enable();
#endif
}
/*

@@ -483,6 +483,13 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
return_ACPI_STATUS(AE_NULL_OBJECT);
}
if (this_walk_state->num_operands < obj_desc->method.param_count) {
ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]",
acpi_ut_get_node_name(method_node)));
return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG);
}
/* Init for new method, possibly wait on method mutex */
status =

@@ -852,6 +852,8 @@ queue_skb(struct idt77252_dev *card, struct vc_map *vc,
IDT77252_PRV_PADDR(skb) = dma_map_single(&card->pcidev->dev, skb->data,
skb->len, DMA_TO_DEVICE);
if (dma_mapping_error(&card->pcidev->dev, IDT77252_PRV_PADDR(skb)))
return -ENOMEM;
error = -EINVAL;
@@ -1857,6 +1859,8 @@ add_rx_skb(struct idt77252_dev *card, int queue,
paddr = dma_map_single(&card->pcidev->dev, skb->data,
skb_end_pointer(skb) - skb->data,
DMA_FROM_DEVICE);
if (dma_mapping_error(&card->pcidev->dev, paddr))
goto outpoolrm;
IDT77252_PRV_PADDR(skb) = paddr;
if (push_rx_skb(card, skb, queue)) {
@@ -1871,6 +1875,7 @@ add_rx_skb(struct idt77252_dev *card, int queue,
dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
skb_end_pointer(skb) - skb->data, DMA_FROM_DEVICE);
outpoolrm:
handle = IDT77252_PRV_POOL(skb);
card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL;

@@ -80,6 +80,7 @@ enum {
DEVFL_NEWSIZE = (1<<6), /* need to update dev size in block layer */
DEVFL_FREEING = (1<<7), /* set when device is being cleaned up */
DEVFL_FREED = (1<<8), /* device has been cleaned up */
DEVFL_DEAD = (1<<9), /* device has timed out of aoe_deadsecs */
};
enum {

@@ -754,7 +754,7 @@ rexmit_timer(struct timer_list *timer)
utgts = count_targets(d, NULL);
if (d->flags & DEVFL_TKILL) {
if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) {
spin_unlock_irqrestore(&d->lock, flags);
return;
}
@@ -786,7 +786,8 @@ rexmit_timer(struct timer_list *timer)
* to clean up.
*/
list_splice(&flist, &d->factive[0]);
aoedev_downdev(d);
d->flags |= DEVFL_DEAD;
queue_work(aoe_wq, &d->work);
goto out;
}
@@ -898,6 +899,9 @@ aoecmd_sleepwork(struct work_struct *work)
{
struct aoedev *d = container_of(work, struct aoedev, work);
if (d->flags & DEVFL_DEAD)
aoedev_downdev(d);
if (d->flags & DEVFL_GDALLOC)
aoeblk_gdalloc(d);

@@ -198,9 +198,13 @@ aoedev_downdev(d)
{
struct aoetgt *t, **tt, **te;
struct list_head *head, *pos, *nx;
struct request *rq, *rqnext;
int i;
unsigned long flags;
d->flags &= ~DEVFL_UP;
spin_lock_irqsave(&d->lock, flags);
d->flags &= ~(DEVFL_UP | DEVFL_DEAD);
spin_unlock_irqrestore(&d->lock, flags);
/* clean out active and to-be-retransmitted buffers */
for (i = 0; i < NFACTIVE; i++) {
@@ -223,6 +227,13 @@ aoedev_downdev(d)
/* clean out the in-process request (if any) */
aoe_failip(d);
/* clean out any queued block requests */
list_for_each_entry_safe(rq, rqnext, &d->rq_list, queuelist) {
list_del_init(&rq->queuelist);
blk_mq_start_request(rq);
blk_mq_end_request(rq, BLK_STS_IOERR);
}
/* fast fail all pending I/O */
if (d->blkq) {
/* UP is cleared, freeze+quiesce to insure all are errored */

@@ -2825,6 +2825,9 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header)
if (copy_from_user(&info, argp, sizeof(info)))
return -EFAULT;
if (info.queue_depth > UBLK_MAX_QUEUE_DEPTH || info.nr_hw_queues > UBLK_MAX_NR_QUEUES)
return -EINVAL;
if (capable(CAP_SYS_ADMIN))
info.flags &= ~UBLK_F_UNPRIVILEGED_DEV;
else if (!(info.flags & UBLK_F_UNPRIVILEGED_DEV))

@@ -2033,6 +2033,28 @@ static void btintel_pcie_release_hdev(struct btintel_pcie_data *data)
data->hdev = NULL;
}
static void btintel_pcie_disable_interrupts(struct btintel_pcie_data *data)
{
spin_lock(&data->irq_lock);
btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, data->fh_init_mask);
btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, data->hw_init_mask);
spin_unlock(&data->irq_lock);
}
static void btintel_pcie_enable_interrupts(struct btintel_pcie_data *data)
{
spin_lock(&data->irq_lock);
btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_FH_INT_MASK, ~data->fh_init_mask);
btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_MSIX_HW_INT_MASK, ~data->hw_init_mask);
spin_unlock(&data->irq_lock);
}
static void btintel_pcie_synchronize_irqs(struct btintel_pcie_data *data)
{
for (int i = 0; i < data->alloc_vecs; i++)
synchronize_irq(data->msix_entries[i].vector);
}
static int btintel_pcie_setup_internal(struct hci_dev *hdev)
{
struct btintel_pcie_data *data = hci_get_drvdata(hdev);
@ -2152,6 +2174,8 @@ static int btintel_pcie_setup(struct hci_dev *hdev)
bt_dev_err(hdev, "Firmware download retry count: %d",
fw_dl_retry);
btintel_pcie_dump_debug_registers(hdev);
btintel_pcie_disable_interrupts(data);
btintel_pcie_synchronize_irqs(data);
err = btintel_pcie_reset_bt(data);
if (err) {
bt_dev_err(hdev, "Failed to do shr reset: %d", err);
@ -2159,6 +2183,7 @@ static int btintel_pcie_setup(struct hci_dev *hdev)
}
usleep_range(10000, 12000);
btintel_pcie_reset_ia(data);
btintel_pcie_enable_interrupts(data);
btintel_pcie_config_msix(data);
err = btintel_pcie_enable_bt(data);
if (err) {
@ -2291,6 +2316,12 @@ static void btintel_pcie_remove(struct pci_dev *pdev)
data = pci_get_drvdata(pdev);
btintel_pcie_disable_interrupts(data);
btintel_pcie_synchronize_irqs(data);
flush_work(&data->rx_work);
btintel_pcie_reset_bt(data);
for (int i = 0; i < data->alloc_vecs; i++) {
struct msix_entry *msix_entry;
@ -2303,8 +2334,6 @@ static void btintel_pcie_remove(struct pci_dev *pdev)
btintel_pcie_release_hdev(data);
flush_work(&data->rx_work);
destroy_workqueue(data->workqueue);
btintel_pcie_free(data);
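
The new helpers codify a quiesce sequence that the setup retry path and the remove path now share: mask every MSI-X cause under irq_lock, call synchronize_irq() on each allocated vector so in-flight handlers drain, then flush rx_work before touching the hardware. Condensed, the ordering (same helper names as above) is:

    /* Quiesce before reset/teardown: mask -> drain handlers -> drain work */
    btintel_pcie_disable_interrupts(data); /* write the init masks */
    btintel_pcie_synchronize_irqs(data);   /* synchronize_irq() per vector */
    flush_work(&data->rx_work);            /* no rx_work beyond this point */
    btintel_pcie_reset_bt(data);           /* hardware can now be reset safely */

Re-enabling interrupts and reconfiguring MSI-X happens only after the reset completes, which is why the retry path adds btintel_pcie_enable_interrupts() before btintel_pcie_config_msix().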

View File

@ -2392,10 +2392,17 @@ static int qca_serdev_probe(struct serdev_device *serdev)
*/
qcadev->bt_power->pwrseq = devm_pwrseq_get(&serdev->dev,
"bluetooth");
if (IS_ERR(qcadev->bt_power->pwrseq))
return PTR_ERR(qcadev->bt_power->pwrseq);
break;
/*
* Some modules have BT_EN enabled via a hardware pull-up,
* meaning it is not defined in the DTS and is not controlled
* through the power sequence. In such cases, fall through
* to follow the legacy flow.
*/
if (IS_ERR(qcadev->bt_power->pwrseq))
qcadev->bt_power->pwrseq = NULL;
else
break;
}
fallthrough;
case QCA_WCN3950:
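
This turns the power sequence into an optional resource: on boards where BT_EN is strapped high in hardware, devm_pwrseq_get() fails, and instead of aborting probe the driver clears the pointer and falls through to the legacy power path. The shape of the pattern, in miniature (generic variable name):

    /* Optional resource: absence is a supported configuration, not an error. */
    pwrseq = devm_pwrseq_get(&serdev->dev, "bluetooth");
    if (IS_ERR(pwrseq))
        pwrseq = NULL;  /* BT_EN may be a hardware pull-up; use legacy flow */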

View File

@ -3879,6 +3879,7 @@ static int per_family_init(struct amd64_pvt *pvt)
break;
case 0x70 ... 0x7f:
pvt->ctl_name = "F19h_M70h";
pvt->max_mcs = 4;
pvt->flags.zn_regs_v2 = 1;
break;
case 0x90 ... 0x9f:

View File

@ -125,7 +125,7 @@
#define MEM_SLICE_HASH_MASK(v) (GET_BITFIELD(v, 6, 19) << 6)
#define MEM_SLICE_HASH_LSB_MASK_BIT(v) GET_BITFIELD(v, 24, 26)
static const struct res_config {
static struct res_config {
bool machine_check;
/* The number of present memory controllers. */
int num_imc;
@ -479,7 +479,7 @@ static u64 rpl_p_err_addr(u64 ecclog)
return ECC_ERROR_LOG_ADDR45(ecclog);
}
static const struct res_config ehl_cfg = {
static struct res_config ehl_cfg = {
.num_imc = 1,
.imc_base = 0x5000,
.ibecc_base = 0xdc00,
@ -489,7 +489,7 @@ static const struct res_config ehl_cfg = {
.err_addr_to_imc_addr = ehl_err_addr_to_imc_addr,
};
static const struct res_config icl_cfg = {
static struct res_config icl_cfg = {
.num_imc = 1,
.imc_base = 0x5000,
.ibecc_base = 0xd800,
@ -499,7 +499,7 @@ static const struct res_config icl_cfg = {
.err_addr_to_imc_addr = ehl_err_addr_to_imc_addr,
};
static const struct res_config tgl_cfg = {
static struct res_config tgl_cfg = {
.machine_check = true,
.num_imc = 2,
.imc_base = 0x5000,
@ -513,7 +513,7 @@ static const struct res_config tgl_cfg = {
.err_addr_to_imc_addr = tgl_err_addr_to_imc_addr,
};
static const struct res_config adl_cfg = {
static struct res_config adl_cfg = {
.machine_check = true,
.num_imc = 2,
.imc_base = 0xd800,
@ -524,7 +524,7 @@ static const struct res_config adl_cfg = {
.err_addr_to_imc_addr = adl_err_addr_to_imc_addr,
};
static const struct res_config adl_n_cfg = {
static struct res_config adl_n_cfg = {
.machine_check = true,
.num_imc = 1,
.imc_base = 0xd800,
@ -535,7 +535,7 @@ static const struct res_config adl_n_cfg = {
.err_addr_to_imc_addr = adl_err_addr_to_imc_addr,
};
static const struct res_config rpl_p_cfg = {
static struct res_config rpl_p_cfg = {
.machine_check = true,
.num_imc = 2,
.imc_base = 0xd800,
@ -547,7 +547,7 @@ static const struct res_config rpl_p_cfg = {
.err_addr_to_imc_addr = adl_err_addr_to_imc_addr,
};
static const struct res_config mtl_ps_cfg = {
static struct res_config mtl_ps_cfg = {
.machine_check = true,
.num_imc = 2,
.imc_base = 0xd800,
@ -558,7 +558,7 @@ static const struct res_config mtl_ps_cfg = {
.err_addr_to_imc_addr = adl_err_addr_to_imc_addr,
};
static const struct res_config mtl_p_cfg = {
static struct res_config mtl_p_cfg = {
.machine_check = true,
.num_imc = 2,
.imc_base = 0xd800,
@ -569,7 +569,7 @@ static const struct res_config mtl_p_cfg = {
.err_addr_to_imc_addr = adl_err_addr_to_imc_addr,
};
static const struct pci_device_id igen6_pci_tbl[] = {
static struct pci_device_id igen6_pci_tbl[] = {
{ PCI_VDEVICE(INTEL, DID_EHL_SKU5), (kernel_ulong_t)&ehl_cfg },
{ PCI_VDEVICE(INTEL, DID_EHL_SKU6), (kernel_ulong_t)&ehl_cfg },
{ PCI_VDEVICE(INTEL, DID_EHL_SKU7), (kernel_ulong_t)&ehl_cfg },
@ -1350,9 +1350,11 @@ static int igen6_register_mcis(struct pci_dev *pdev, u64 mchbar)
return -ENODEV;
}
if (lmc < res_cfg->num_imc)
if (lmc < res_cfg->num_imc) {
igen6_printk(KERN_WARNING, "Expected %d mcs, but only %d detected.",
res_cfg->num_imc, lmc);
res_cfg->num_imc = lmc;
}
return 0;
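
Dropping const across these tables is what makes the clamp legal: when fewer memory controllers enumerate than the platform table predicts, the driver now shrinks res_cfg->num_imc to the detected count rather than registering phantom controllers. A self-contained illustration of why the qualifier had to go:

    struct cfg { int num_imc; };

    static const struct cfg fixed   = { .num_imc = 2 };
    static struct cfg       tunable = { .num_imc = 2 };

    static void clamp(struct cfg *c, int detected)
    {
        if (detected < c->num_imc)
            c->num_imc = detected;  /* only valid on mutable storage */
    }
    /* clamp(&tunable, 1) compiles and works; clamp(&fixed, 1) is rejected,
     * and casting the const away would be undefined behavior. */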

View File

@ -268,7 +268,7 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data0 = {
/* LS7A2000 ACPI GPIO */
static const struct loongson_gpio_chip_data loongson_gpio_ls7a2000_data1 = {
.label = "ls7a2000_gpio",
.mode = BYTE_CTRL_MODE,
.mode = BIT_CTRL_MODE,
.conf_offset = 0x4,
.in_offset = 0x8,
.out_offset = 0x0,

View File

@ -190,7 +190,9 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
struct mlxbf3_gpio_context *gs;
struct gpio_irq_chip *girq;
struct gpio_chip *gc;
char *colon_ptr;
int ret, irq;
long num;
gs = devm_kzalloc(dev, sizeof(*gs), GFP_KERNEL);
if (!gs)
@ -227,25 +229,39 @@ static int mlxbf3_gpio_probe(struct platform_device *pdev)
gc->owner = THIS_MODULE;
gc->add_pin_ranges = mlxbf3_gpio_add_pin_ranges;
irq = platform_get_irq(pdev, 0);
if (irq >= 0) {
girq = &gs->gc.irq;
gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
girq->default_type = IRQ_TYPE_NONE;
/* This will let us handle the parent IRQ in the driver */
girq->num_parents = 0;
girq->parents = NULL;
girq->parent_handler = NULL;
girq->handler = handle_bad_irq;
colon_ptr = strchr(dev_name(dev), ':');
if (!colon_ptr) {
dev_err(dev, "invalid device name format\n");
return -EINVAL;
}
/*
* Directly request the irq here instead of passing
* a flow-handler because the irq is shared.
*/
ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
IRQF_SHARED, dev_name(dev), gs);
if (ret)
return dev_err_probe(dev, ret, "failed to request IRQ");
ret = kstrtol(++colon_ptr, 16, &num);
if (ret) {
dev_err(dev, "invalid device instance\n");
return ret;
}
if (!num) {
irq = platform_get_irq(pdev, 0);
if (irq >= 0) {
girq = &gs->gc.irq;
gpio_irq_chip_set_chip(girq, &gpio_mlxbf3_irqchip);
girq->default_type = IRQ_TYPE_NONE;
/* This will let us handle the parent IRQ in the driver */
girq->num_parents = 0;
girq->parents = NULL;
girq->parent_handler = NULL;
girq->handler = handle_bad_irq;
/*
* Directly request the irq here instead of passing
* a flow-handler because the irq is shared.
*/
ret = devm_request_irq(dev, irq, mlxbf3_gpio_irq_handler,
IRQF_SHARED, dev_name(dev), gs);
if (ret)
return dev_err_probe(dev, ret, "failed to request IRQ");
}
}
platform_set_drvdata(pdev, gs);
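
The probe now keys the IRQ setup off the device instance encoded in the ACPI device name (the hex digits after the colon, e.g. an assumed name of the shape "MLNXBF33:00"), wiring the shared interrupt only for instance 0. A runnable userspace sketch of the same parse, using strtol() where the kernel uses kstrtol():

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return the instance parsed from "PREFIX:NN" (hex), or -EINVAL. */
    static long parse_instance(const char *dev_name)
    {
        const char *colon = strchr(dev_name, ':');
        char *end;
        long num;

        if (!colon)
            return -EINVAL;         /* unexpected name format */
        num = strtol(colon + 1, &end, 16);
        if (end == colon + 1)
            return -EINVAL;         /* no hex digits after the colon */
        return num;
    }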

View File

@ -974,7 +974,7 @@ static int pca953x_irq_setup(struct pca953x_chip *chip, int irq_base)
IRQF_ONESHOT | IRQF_SHARED, dev_name(dev),
chip);
if (ret)
return dev_err_probe(dev, client->irq, "failed to request irq\n");
return dev_err_probe(dev, ret, "failed to request irq\n");
return 0;
}
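
The one-word fix matters because dev_err_probe() returns whatever error code it is handed (and logs it, staying quiet for -EPROBE_DEFER). Passing client->irq, a positive number, would have turned a failed IRQ request into an apparent probe success. The corrected idiom, with an illustrative handler name:

    ret = devm_request_irq(dev, client->irq, handler,
                           IRQF_ONESHOT | IRQF_SHARED, dev_name(dev), chip);
    if (ret)
        return dev_err_probe(dev, ret, "failed to request irq\n"); /* ret, not the IRQ number */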

View File

@ -278,6 +278,7 @@ static const struct of_device_id spacemit_gpio_dt_ids[] = {
{ .compatible = "spacemit,k1-gpio" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, spacemit_gpio_dt_ids);
static struct platform_driver spacemit_gpio_driver = {
.probe = spacemit_gpio_probe,

View File

@ -1902,7 +1902,7 @@ static void amdgpu_ib_preempt_mark_partial_job(struct amdgpu_ring *ring)
continue;
}
job = to_amdgpu_job(s_job);
if (preempted && (&job->hw_fence) == fence)
if (preempted && (&job->hw_fence.base) == fence)
/* mark the job as preempted */
job->preemption_status |= AMDGPU_IB_PREEMPTED;
}

View File

@ -6019,16 +6019,12 @@ static int amdgpu_device_health_check(struct list_head *device_list_handle)
return ret;
}
static int amdgpu_device_halt_activities(struct amdgpu_device *adev,
struct amdgpu_job *job,
struct amdgpu_reset_context *reset_context,
struct list_head *device_list,
struct amdgpu_hive_info *hive,
bool need_emergency_restart)
static int amdgpu_device_recovery_prepare(struct amdgpu_device *adev,
struct list_head *device_list,
struct amdgpu_hive_info *hive)
{
struct list_head *device_list_handle = NULL;
struct amdgpu_device *tmp_adev = NULL;
int i, r = 0;
int r;
/*
* Build list of devices to reset.
@ -6045,26 +6041,54 @@ static int amdgpu_device_halt_activities(struct amdgpu_device *adev,
}
if (!list_is_first(&adev->reset_list, device_list))
list_rotate_to_front(&adev->reset_list, device_list);
device_list_handle = device_list;
} else {
list_add_tail(&adev->reset_list, device_list);
device_list_handle = device_list;
}
if (!amdgpu_sriov_vf(adev) && (!adev->pcie_reset_ctx.occurs_dpc)) {
r = amdgpu_device_health_check(device_list_handle);
r = amdgpu_device_health_check(device_list);
if (r)
return r;
}
/* We need to lock reset domain only once both for XGMI and single device */
tmp_adev = list_first_entry(device_list_handle, struct amdgpu_device,
reset_list);
return 0;
}
static void amdgpu_device_recovery_get_reset_lock(struct amdgpu_device *adev,
struct list_head *device_list)
{
struct amdgpu_device *tmp_adev = NULL;
if (list_empty(device_list))
return;
tmp_adev =
list_first_entry(device_list, struct amdgpu_device, reset_list);
amdgpu_device_lock_reset_domain(tmp_adev->reset_domain);
}
static void amdgpu_device_recovery_put_reset_lock(struct amdgpu_device *adev,
struct list_head *device_list)
{
struct amdgpu_device *tmp_adev = NULL;
if (list_empty(device_list))
return;
tmp_adev =
list_first_entry(device_list, struct amdgpu_device, reset_list);
amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain);
}
static int amdgpu_device_halt_activities(
struct amdgpu_device *adev, struct amdgpu_job *job,
struct amdgpu_reset_context *reset_context,
struct list_head *device_list, struct amdgpu_hive_info *hive,
bool need_emergency_restart)
{
struct amdgpu_device *tmp_adev = NULL;
int i, r = 0;
/* block all schedulers and reset given job's ring */
list_for_each_entry(tmp_adev, device_list_handle, reset_list) {
list_for_each_entry(tmp_adev, device_list, reset_list) {
amdgpu_device_set_mp1_state(tmp_adev);
/*
@ -6252,11 +6276,6 @@ static void amdgpu_device_gpu_resume(struct amdgpu_device *adev,
amdgpu_ras_set_error_query_ready(tmp_adev, true);
}
tmp_adev = list_first_entry(device_list, struct amdgpu_device,
reset_list);
amdgpu_device_unlock_reset_domain(tmp_adev->reset_domain);
}
@ -6324,10 +6343,16 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
reset_context->hive = hive;
INIT_LIST_HEAD(&device_list);
if (amdgpu_device_recovery_prepare(adev, &device_list, hive))
goto end_reset;
/* We need to lock reset domain only once both for XGMI and single device */
amdgpu_device_recovery_get_reset_lock(adev, &device_list);
r = amdgpu_device_halt_activities(adev, job, reset_context, &device_list,
hive, need_emergency_restart);
if (r)
goto end_reset;
goto reset_unlock;
if (need_emergency_restart)
goto skip_sched_resume;
@ -6337,7 +6362,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
*
* job->base holds a reference to parent fence
*/
if (job && dma_fence_is_signaled(&job->hw_fence)) {
if (job && dma_fence_is_signaled(&job->hw_fence.base)) {
job_signaled = true;
dev_info(adev->dev, "Guilty job already signaled, skipping HW reset");
goto skip_hw_reset;
@ -6345,13 +6370,15 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
r = amdgpu_device_asic_reset(adev, &device_list, reset_context);
if (r)
goto end_reset;
goto reset_unlock;
skip_hw_reset:
r = amdgpu_device_sched_resume(&device_list, reset_context, job_signaled);
if (r)
goto end_reset;
goto reset_unlock;
skip_sched_resume:
amdgpu_device_gpu_resume(adev, &device_list, need_emergency_restart);
reset_unlock:
amdgpu_device_recovery_put_reset_lock(adev, &device_list);
end_reset:
if (hive) {
mutex_unlock(&hive->hive_lock);
@ -6763,6 +6790,8 @@ pci_ers_result_t amdgpu_pci_error_detected(struct pci_dev *pdev, pci_channel_sta
memset(&reset_context, 0, sizeof(reset_context));
INIT_LIST_HEAD(&device_list);
amdgpu_device_recovery_prepare(adev, &device_list, hive);
amdgpu_device_recovery_get_reset_lock(adev, &device_list);
r = amdgpu_device_halt_activities(adev, NULL, &reset_context, &device_list,
hive, false);
if (hive) {
@ -6880,8 +6909,8 @@ pci_ers_result_t amdgpu_pci_slot_reset(struct pci_dev *pdev)
if (hive) {
list_for_each_entry(tmp_adev, &device_list, reset_list)
amdgpu_device_unset_mp1_state(tmp_adev);
amdgpu_device_unlock_reset_domain(adev->reset_domain);
}
amdgpu_device_recovery_put_reset_lock(adev, &device_list);
}
if (hive) {
@ -6927,6 +6956,7 @@ void amdgpu_pci_resume(struct pci_dev *pdev)
amdgpu_device_sched_resume(&device_list, NULL, NULL);
amdgpu_device_gpu_resume(adev, &device_list, false);
amdgpu_device_recovery_put_reset_lock(adev, &device_list);
adev->pcie_reset_ctx.occurs_dpc = false;
if (hive) {

View File

@ -41,22 +41,6 @@
#include "amdgpu_trace.h"
#include "amdgpu_reset.h"
/*
* Fences mark an event in the GPUs pipeline and are used
* for GPU/CPU synchronization. When the fence is written,
* it is expected that all buffers associated with that fence
* are no longer in use by the associated ring on the GPU and
* that the relevant GPU caches have been flushed.
*/
struct amdgpu_fence {
struct dma_fence base;
/* RB, DMA, etc. */
struct amdgpu_ring *ring;
ktime_t start_timestamp;
};
static struct kmem_cache *amdgpu_fence_slab;
int amdgpu_fence_slab_init(void)
@ -151,12 +135,12 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f, struct amd
am_fence = kmem_cache_alloc(amdgpu_fence_slab, GFP_ATOMIC);
if (am_fence == NULL)
return -ENOMEM;
fence = &am_fence->base;
am_fence->ring = ring;
} else {
/* take use of job-embedded fence */
fence = &job->hw_fence;
am_fence = &job->hw_fence;
}
fence = &am_fence->base;
am_fence->ring = ring;
seq = ++ring->fence_drv.sync_seq;
if (job && job->job_run_counter) {
@ -718,7 +702,7 @@ void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)
* it right here or we won't be able to track them in fence_drv
* and they will remain unsignaled during sa_bo free.
*/
job = container_of(old, struct amdgpu_job, hw_fence);
job = container_of(old, struct amdgpu_job, hw_fence.base);
if (!job->base.s_fence && !dma_fence_is_signaled(old))
dma_fence_signal(old);
RCU_INIT_POINTER(*ptr, NULL);
@ -780,7 +764,7 @@ static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)
static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)
{
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
return (const char *)to_amdgpu_ring(job->base.sched)->name;
}
@ -810,7 +794,7 @@ static bool amdgpu_fence_enable_signaling(struct dma_fence *f)
*/
static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)
{
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);
struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence.base);
if (!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))
amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));
@ -845,7 +829,7 @@ static void amdgpu_job_fence_free(struct rcu_head *rcu)
struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
/* free job if fence has a parent job */
kfree(container_of(f, struct amdgpu_job, hw_fence));
kfree(container_of(f, struct amdgpu_job, hw_fence.base));
}
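
Every container_of() change in this file follows from the header change further down: job->hw_fence becomes a struct amdgpu_fence, which embeds the struct dma_fence as .base, so recovering the job from a fence pointer now names the nested member. A self-contained toy version of that chain:

    #include <stddef.h>

    struct dma_fence_like    { int seqno; };
    struct amdgpu_fence_like { struct dma_fence_like base; };
    struct job_like          { int id; struct amdgpu_fence_like hw_fence; };

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* The member path walks through the embedded struct, exactly as the
     * hunks switch from 'hw_fence' to 'hw_fence.base'. */
    static struct job_like *job_from_fence(struct dma_fence_like *f)
    {
        return container_of(f, struct job_like, hw_fence.base);
    }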
/**

View File

@ -272,8 +272,8 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
/* Check if any fences where initialized */
if (job->base.s_fence && job->base.s_fence->finished.ops)
f = &job->base.s_fence->finished;
else if (job->hw_fence.ops)
f = &job->hw_fence;
else if (job->hw_fence.base.ops)
f = &job->hw_fence.base;
else
f = NULL;
@ -290,10 +290,10 @@ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
amdgpu_sync_free(&job->explicit_sync);
/* only put the hw fence if has embedded fence */
if (!job->hw_fence.ops)
if (!job->hw_fence.base.ops)
kfree(job);
else
dma_fence_put(&job->hw_fence);
dma_fence_put(&job->hw_fence.base);
}
void amdgpu_job_set_gang_leader(struct amdgpu_job *job,
@ -322,10 +322,10 @@ void amdgpu_job_free(struct amdgpu_job *job)
if (job->gang_submit != &job->base.s_fence->scheduled)
dma_fence_put(job->gang_submit);
if (!job->hw_fence.ops)
if (!job->hw_fence.base.ops)
kfree(job);
else
dma_fence_put(&job->hw_fence);
dma_fence_put(&job->hw_fence.base);
}
struct dma_fence *amdgpu_job_submit(struct amdgpu_job *job)

View File

@ -48,7 +48,7 @@ struct amdgpu_job {
struct drm_sched_job base;
struct amdgpu_vm *vm;
struct amdgpu_sync explicit_sync;
struct dma_fence hw_fence;
struct amdgpu_fence hw_fence;
struct dma_fence *gang_submit;
uint32_t preamble_status;
uint32_t preemption_status;

View File

@ -3522,8 +3522,12 @@ int psp_init_sos_microcode(struct psp_context *psp, const char *chip_name)
uint8_t *ucode_array_start_addr;
int err = 0;
err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_sos.bin", chip_name);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_sos_kicker.bin", chip_name);
else
err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_sos.bin", chip_name);
if (err)
goto out;
@ -3799,8 +3803,12 @@ int psp_init_ta_microcode(struct psp_context *psp, const char *chip_name)
struct amdgpu_device *adev = psp->adev;
int err;
err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_ta.bin", chip_name);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_ta_kicker.bin", chip_name);
else
err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_ta.bin", chip_name);
if (err)
return err;

View File

@ -127,6 +127,22 @@ struct amdgpu_fence_driver {
struct dma_fence **fences;
};
/*
* Fences mark an event in the GPUs pipeline and are used
* for GPU/CPU synchronization. When the fence is written,
* it is expected that all buffers associated with that fence
* are no longer in use by the associated ring on the GPU and
* that the relevant GPU caches have been flushed.
*/
struct amdgpu_fence {
struct dma_fence base;
/* RB, DMA, etc. */
struct amdgpu_ring *ring;
ktime_t start_timestamp;
};
extern const struct drm_sched_backend_ops amdgpu_sched_ops;
void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);

View File

@ -540,8 +540,10 @@ static int amdgpu_sdma_soft_reset(struct amdgpu_device *adev, u32 instance_id)
case IP_VERSION(4, 4, 2):
case IP_VERSION(4, 4, 4):
case IP_VERSION(4, 4, 5):
/* For SDMA 4.x, use the existing DPM interface for backward compatibility */
r = amdgpu_dpm_reset_sdma(adev, 1 << instance_id);
/* For SDMA 4.x, use the existing DPM interface for backward compatibility;
* we need to convert the logical instance ID to the physical instance ID before reset.
*/
r = amdgpu_dpm_reset_sdma(adev, 1 << GET_INST(SDMA0, instance_id));
break;
case IP_VERSION(5, 0, 0):
case IP_VERSION(5, 0, 1):
@ -568,7 +570,7 @@ static int amdgpu_sdma_soft_reset(struct amdgpu_device *adev, u32 instance_id)
/**
* amdgpu_sdma_reset_engine - Reset a specific SDMA engine
* @adev: Pointer to the AMDGPU device
* @instance_id: ID of the SDMA engine instance to reset
* @instance_id: Logical ID of the SDMA engine instance to reset
*
* Returns: 0 on success, or a negative error code on failure.
*/
@ -601,7 +603,7 @@ int amdgpu_sdma_reset_engine(struct amdgpu_device *adev, uint32_t instance_id)
/* Perform the SDMA reset for the specified instance */
ret = amdgpu_sdma_soft_reset(adev, instance_id);
if (ret) {
dev_err(adev->dev, "Failed to reset SDMA instance %u\n", instance_id);
dev_err(adev->dev, "Failed to reset SDMA logical instance %u\n", instance_id);
goto exit;
}

View File

@ -30,6 +30,10 @@
#define AMDGPU_UCODE_NAME_MAX (128)
static const struct kicker_device kicker_device_list[] = {
{0x744B, 0x00},
};
static void amdgpu_ucode_print_common_hdr(const struct common_firmware_header *hdr)
{
DRM_DEBUG("size_bytes: %u\n", le32_to_cpu(hdr->size_bytes));
@ -1387,6 +1391,19 @@ static const char *amdgpu_ucode_legacy_naming(struct amdgpu_device *adev, int bl
return NULL;
}
bool amdgpu_is_kicker_fw(struct amdgpu_device *adev)
{
int i;
for (i = 0; i < ARRAY_SIZE(kicker_device_list); i++) {
if (adev->pdev->device == kicker_device_list[i].device &&
adev->pdev->revision == kicker_device_list[i].revision)
return true;
}
return false;
}
void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len)
{
int maj, min, rev;

View File

@ -605,6 +605,11 @@ struct amdgpu_firmware {
uint32_t pldm_version;
};
struct kicker_device{
unsigned short device;
u8 revision;
};
void amdgpu_ucode_print_mc_hdr(const struct common_firmware_header *hdr);
void amdgpu_ucode_print_smc_hdr(const struct common_firmware_header *hdr);
void amdgpu_ucode_print_imu_hdr(const struct common_firmware_header *hdr);
@ -632,5 +637,6 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type);
const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id);
void amdgpu_ucode_ip_version_decode(struct amdgpu_device *adev, int block_type, char *ucode_prefix, int len);
bool amdgpu_is_kicker_fw(struct amdgpu_device *adev);
#endif
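
Firmware selection is a plain device/revision table scan: amdgpu_is_kicker_fw() walks kicker_device_list, and on a match each request above substitutes the "_kicker" firmware name. A minimal standalone sketch of the same lookup (toy table mirroring the one entry added above):

    #include <stddef.h>

    struct kicker_id { unsigned short device; unsigned char revision; };

    static const struct kicker_id kicker_list[] = {
        { 0x744B, 0x00 },
    };

    static int is_kicker(unsigned short device, unsigned char revision)
    {
        for (size_t i = 0; i < sizeof(kicker_list) / sizeof(kicker_list[0]); i++)
            if (kicker_list[i].device == device &&
                kicker_list[i].revision == revision)
                return 1;
        return 0;
    }
    /* Callers then choose "..._sos_kicker.bin" vs "..._sos.bin", and so on. */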

View File

@ -85,6 +85,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_kicker.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_1.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_1_pfp.bin");
@ -759,6 +760,10 @@ static int gfx_v11_0_init_microcode(struct amdgpu_device *adev)
err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
AMDGPU_UCODE_REQUIRED,
"amdgpu/gc_11_0_0_rlc_1.bin");
else if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_rlc_kicker.bin", ucode_prefix);
else
err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
AMDGPU_UCODE_REQUIRED,

View File

@ -32,6 +32,7 @@
#include "gc/gc_11_0_0_sh_mask.h"
MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu_kicker.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin");
@ -51,8 +52,12 @@ static int imu_v11_0_init_microcode(struct amdgpu_device *adev)
DRM_DEBUG("\n");
amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_imu.bin", ucode_prefix);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_imu_kicker.bin", ucode_prefix);
else
err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_imu.bin", ucode_prefix);
if (err)
goto out;

View File

@ -42,7 +42,9 @@ MODULE_FIRMWARE("amdgpu/psp_13_0_5_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_8_toc.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_8_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos_kicker.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta_kicker.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin");

View File

@ -490,7 +490,7 @@ static void sdma_v4_4_2_inst_gfx_stop(struct amdgpu_device *adev,
{
struct amdgpu_ring *sdma[AMDGPU_MAX_SDMA_INSTANCES];
u32 doorbell_offset, doorbell;
u32 rb_cntl, ib_cntl;
u32 rb_cntl, ib_cntl, sdma_cntl;
int i;
for_each_inst(i, inst_mask) {
@ -502,6 +502,9 @@ static void sdma_v4_4_2_inst_gfx_stop(struct amdgpu_device *adev,
ib_cntl = RREG32_SDMA(i, regSDMA_GFX_IB_CNTL);
ib_cntl = REG_SET_FIELD(ib_cntl, SDMA_GFX_IB_CNTL, IB_ENABLE, 0);
WREG32_SDMA(i, regSDMA_GFX_IB_CNTL, ib_cntl);
sdma_cntl = RREG32_SDMA(i, regSDMA_CNTL);
sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA_CNTL, UTC_L1_ENABLE, 0);
WREG32_SDMA(i, regSDMA_CNTL, sdma_cntl);
if (sdma[i]->use_doorbell) {
doorbell = RREG32_SDMA(i, regSDMA_GFX_DOORBELL);
@ -995,6 +998,7 @@ static int sdma_v4_4_2_inst_start(struct amdgpu_device *adev,
/* set utc l1 enable flag always to 1 */
temp = RREG32_SDMA(i, regSDMA_CNTL);
temp = REG_SET_FIELD(temp, SDMA_CNTL, UTC_L1_ENABLE, 1);
WREG32_SDMA(i, regSDMA_CNTL, temp);
if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) < IP_VERSION(4, 4, 5)) {
/* enable context empty interrupt during initialization */
@ -1670,7 +1674,7 @@ static bool sdma_v4_4_2_page_ring_is_guilty(struct amdgpu_ring *ring)
static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
{
struct amdgpu_device *adev = ring->adev;
u32 id = GET_INST(SDMA0, ring->me);
u32 id = ring->me;
int r;
if (!(adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
@ -1686,7 +1690,7 @@ static int sdma_v4_4_2_reset_queue(struct amdgpu_ring *ring, unsigned int vmid)
static int sdma_v4_4_2_stop_queue(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
u32 instance_id = GET_INST(SDMA0, ring->me);
u32 instance_id = ring->me;
u32 inst_mask;
uint64_t rptr;

View File

@ -1399,6 +1399,7 @@ static int sdma_v5_0_sw_init(struct amdgpu_ip_block *ip_block)
return r;
for (i = 0; i < adev->sdma.num_instances; i++) {
mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
adev->sdma.instance[i].funcs = &sdma_v5_0_sdma_funcs;
ring = &adev->sdma.instance[i].ring;
ring->ring_obj = NULL;

View File

@ -1318,6 +1318,7 @@ static int sdma_v5_2_sw_init(struct amdgpu_ip_block *ip_block)
}
for (i = 0; i < adev->sdma.num_instances; i++) {
mutex_init(&adev->sdma.instance[i].engine_reset_mutex);
adev->sdma.instance[i].funcs = &sdma_v5_2_sdma_funcs;
ring = &adev->sdma.instance[i].ring;
ring->ring_obj = NULL;

View File

@ -669,6 +669,9 @@ static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst,
if (indirect)
amdgpu_vcn_psp_update_sram(adev, inst_idx, AMDGPU_UCODE_ID_VCN0_RAM);
/* resetting ring, fw should not check RB ring */
fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
/* Pause dpg */
vcn_v5_0_1_pause_dpg_mode(vinst, &state);
@ -681,7 +684,7 @@ static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst,
tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE);
tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK);
WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp);
fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
WREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR, 0);
WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, 0);
@ -692,6 +695,7 @@ static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst,
tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE);
tmp |= VCN_RB_ENABLE__RB1_EN_MASK;
WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp);
/* resetting done, fw can check RB ring */
fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF);
WREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL,

View File

@ -240,7 +240,7 @@ static int pm_map_queues_v9(struct packet_manager *pm, uint32_t *buffer,
packet->bitfields2.engine_sel =
engine_sel__mes_map_queues__compute_vi;
packet->bitfields2.gws_control_queue = q->gws ? 1 : 0;
packet->bitfields2.gws_control_queue = q->properties.is_gws ? 1 : 0;
packet->bitfields2.extended_engine_sel =
extended_engine_sel__mes_map_queues__legacy_engine_sel;
packet->bitfields2.queue_type =

View File

@ -510,6 +510,10 @@ static ssize_t node_show(struct kobject *kobj, struct attribute *attr,
dev->node_props.capability |=
HSA_CAP_AQL_QUEUE_DOUBLE_MAP;
if (KFD_GC_VERSION(dev->gpu) < IP_VERSION(10, 0, 0) &&
(dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE))
dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED;
sysfs_show_32bit_prop(buffer, offs, "max_engine_clk_fcompute",
dev->node_props.max_engine_clk_fcompute);
@ -2008,8 +2012,6 @@ static void kfd_topology_set_capabilities(struct kfd_topology_device *dev)
if (!amdgpu_sriov_vf(dev->gpu->adev))
dev->node_props.capability |= HSA_CAP_PER_QUEUE_RESET_SUPPORTED;
if (dev->gpu->adev->sdma.supported_reset & AMDGPU_RESET_TYPE_PER_QUEUE)
dev->node_props.capability2 |= HSA_CAP2_PER_SDMA_QUEUE_RESET_SUPPORTED;
} else {
dev->node_props.debug_prop |= HSA_DBG_WATCH_ADDR_MASK_LO_BIT_GFX10 |
HSA_DBG_WATCH_ADDR_MASK_HI_BIT;

View File

@ -4718,9 +4718,23 @@ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
return 1;
}
static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
uint32_t *brightness)
/* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */
static inline u32 scale_input_to_fw(int min, int max, u64 input)
{
return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min);
}
/* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */
static inline u32 scale_fw_to_input(int min, int max, u64 input)
{
return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL);
}
static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
unsigned int min, unsigned int max,
uint32_t *user_brightness)
{
u32 brightness = scale_input_to_fw(min, max, *user_brightness);
u8 prev_signal = 0, prev_lum = 0;
int i = 0;
@ -4731,7 +4745,7 @@ static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *cap
return;
/* choose start to run less interpolation steps */
if (caps->luminance_data[caps->data_points/2].input_signal > *brightness)
if (caps->luminance_data[caps->data_points/2].input_signal > brightness)
i = caps->data_points/2;
do {
u8 signal = caps->luminance_data[i].input_signal;
@ -4742,17 +4756,18 @@ static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *cap
* brightness < signal: interpolate between previous and current luminance numerator
* brightness > signal: find next data point
*/
if (*brightness > signal) {
if (brightness > signal) {
prev_signal = signal;
prev_lum = lum;
i++;
continue;
}
if (*brightness < signal)
if (brightness < signal)
lum = prev_lum + DIV_ROUND_CLOSEST((lum - prev_lum) *
(*brightness - prev_signal),
(brightness - prev_signal),
signal - prev_signal);
*brightness = DIV_ROUND_CLOSEST(lum * *brightness, 101);
*user_brightness = scale_fw_to_input(min, max,
DIV_ROUND_CLOSEST(lum * brightness, 101));
return;
} while (i < caps->data_points);
}
@ -4765,11 +4780,10 @@ static u32 convert_brightness_from_user(const struct amdgpu_dm_backlight_caps *c
if (!get_brightness_range(caps, &min, &max))
return brightness;
convert_custom_brightness(caps, &brightness);
convert_custom_brightness(caps, min, max, &brightness);
// Rescale 0..255 to min..max
return min + DIV_ROUND_CLOSEST((max - min) * brightness,
AMDGPU_MAX_BL_LEVEL);
// Rescale 0..max to min..max
return min + DIV_ROUND_CLOSEST_ULL((u64)(max - min) * brightness, max);
}
static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *caps,
@ -4782,8 +4796,8 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
if (brightness < min)
return 0;
// Rescale min..max to 0..255
return DIV_ROUND_CLOSEST(AMDGPU_MAX_BL_LEVEL * (brightness - min),
// Rescale min..max to 0..max
return DIV_ROUND_CLOSEST_ULL((u64)max * (brightness - min),
max - min);
}
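
scale_input_to_fw() and scale_fw_to_input() are the same rounded linear rescale in opposite directions, moving between the user-visible range and the firmware's fixed 0..AMDGPU_MAX_BL_LEVEL range; the 64-bit DIV_ROUND_CLOSEST_ULL keeps the intermediate product from overflowing. The core operation, as a hedged standalone helper:

    #include <stdint.h>

    /* Map v from [0..from_max] onto [0..to_max], rounding to nearest. */
    static uint32_t rescale(uint64_t v, uint32_t from_max, uint32_t to_max)
    {
        return (uint32_t)((v * to_max + from_max / 2) / from_max);
    }
    /* rescale(128, 255, 100) == 50: 128 * 100 / 255 = 50.2, rounds to 50 */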
@ -4908,7 +4922,7 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector)
struct drm_device *drm = aconnector->base.dev;
struct amdgpu_display_manager *dm = &drm_to_adev(drm)->dm;
struct backlight_properties props = { 0 };
struct amdgpu_dm_backlight_caps caps = { 0 };
struct amdgpu_dm_backlight_caps *caps;
char bl_name[16];
int min, max;
@ -4922,22 +4936,21 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector)
return;
}
amdgpu_acpi_get_backlight_caps(&caps);
if (caps.caps_valid && get_brightness_range(&caps, &min, &max)) {
caps = &dm->backlight_caps[aconnector->bl_idx];
if (get_brightness_range(caps, &min, &max)) {
if (power_supply_is_system_supplied() > 0)
props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps.ac_level, 100);
props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->ac_level, 100);
else
props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps.dc_level, 100);
props.brightness = (max - min) * DIV_ROUND_CLOSEST(caps->dc_level, 100);
/* min is zero, so max needs to be adjusted */
props.max_brightness = max - min;
drm_dbg(drm, "Backlight caps: min: %d, max: %d, ac %d, dc %d\n", min, max,
caps.ac_level, caps.dc_level);
caps->ac_level, caps->dc_level);
} else
props.brightness = AMDGPU_MAX_BL_LEVEL;
props.brightness = props.max_brightness = AMDGPU_MAX_BL_LEVEL;
if (caps.data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
if (caps->data_points && !(amdgpu_dc_debug_mask & DC_DISABLE_CUSTOM_BRIGHTNESS_CURVE))
drm_info(drm, "Using custom brightness curve\n");
props.max_brightness = AMDGPU_MAX_BL_LEVEL;
props.type = BACKLIGHT_RAW;
snprintf(bl_name, sizeof(bl_name), "amdgpu_bl%d",

View File

@ -241,6 +241,7 @@ static bool create_links(
DC_LOG_DC("BIOS object table - end");
/* Create a link for each usb4 dpia port */
dc->lowest_dpia_link_index = MAX_LINKS;
for (i = 0; i < dc->res_pool->usb4_dpia_count; i++) {
struct link_init_data link_init_params = {0};
struct dc_link *link;
@ -253,6 +254,9 @@ static bool create_links(
link = dc->link_srv->create_link(&link_init_params);
if (link) {
if (dc->lowest_dpia_link_index > dc->link_count)
dc->lowest_dpia_link_index = dc->link_count;
dc->links[dc->link_count] = link;
link->dc = dc;
++dc->link_count;
@ -6376,6 +6380,35 @@ unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context)
else
return 0;
}
/**
***********************************************************************************************
* dc_get_host_router_index: Get index of host router from a dpia link
*
* This function returns the host router index of the target link if the target link is a dpia link.
*
* @param [in] link: target link
* @param [out] host_router_index: host router index of the target link
*
* @return: true if the host router index is found and valid.
*
***********************************************************************************************
*/
bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index)
{
struct dc *dc = link->ctx->dc;
if (link->ep_type != DISPLAY_ENDPOINT_USB4_DPIA)
return false;
if (link->link_index < dc->lowest_dpia_link_index)
return false;
*host_router_index = (link->link_index - dc->lowest_dpia_link_index) / dc->caps.num_of_dpias_per_host_router;
if (*host_router_index < dc->caps.num_of_host_routers)
return true;
else
return false;
}
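
The arithmetic relies on DPIA links being created contiguously from lowest_dpia_link_index, packed num_of_dpias_per_host_router per router: with lowest_dpia_link_index = 2 and two DPIAs per router, links 2-3 map to host router 0 and links 4-5 to host router 1. A worked standalone version of the same check:

    /* Hedged sketch; parameter names mirror the dc_caps fields above. */
    static int host_router_index(unsigned int link_index,
                                 unsigned int lowest_dpia_link_index,
                                 unsigned int dpias_per_router,
                                 unsigned int num_routers)
    {
        unsigned int idx;

        if (link_index < lowest_dpia_link_index)
            return -1;      /* not a DPIA link */
        idx = (link_index - lowest_dpia_link_index) / dpias_per_router;
        return idx < num_routers ? (int)idx : -1;
    }
    /* host_router_index(5, 2, 2, 2) == 1 */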
bool dc_is_cursor_limit_pending(struct dc *dc)
{

View File

@ -66,7 +66,8 @@ struct dmub_notification;
#define MAX_STREAMS 6
#define MIN_VIEWPORT_SIZE 12
#define MAX_NUM_EDP 2
#define MAX_HOST_ROUTERS_NUM 2
#define MAX_HOST_ROUTERS_NUM 3
#define MAX_DPIA_PER_HOST_ROUTER 2
/* Display Core Interfaces */
struct dc_versions {
@ -305,6 +306,8 @@ struct dc_caps {
/* Conservative limit for DCC cases which require ODM4:1 to support*/
uint32_t dcc_plane_width_limit;
struct dc_scl_caps scl_caps;
uint8_t num_of_host_routers;
uint8_t num_of_dpias_per_host_router;
};
struct dc_bug_wa {
@ -1603,6 +1606,7 @@ struct dc {
uint8_t link_count;
struct dc_link *links[MAX_LINKS];
uint8_t lowest_dpia_link_index;
struct link_service *link_srv;
struct dc_state *current_state;
@ -2595,6 +2599,8 @@ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state
unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context);
bool dc_get_host_router_index(const struct dc_link *link, unsigned int *host_router_index);
/* DSC Interfaces */
#include "dc_dsc.h"

View File

@ -1172,8 +1172,8 @@ struct dc_lttpr_caps {
union dp_128b_132b_supported_lttpr_link_rates supported_128b_132b_rates;
union dp_alpm_lttpr_cap alpm;
uint8_t aux_rd_interval[MAX_REPEATER_CNT - 1];
uint8_t lttpr_ieee_oui[3];
uint8_t lttpr_device_id[6];
uint8_t lttpr_ieee_oui[3]; // Always read from closest LTTPR to host
uint8_t lttpr_device_id[6]; // Always read from closest LTTPR to host
};
struct dc_dongle_dfp_cap_ext {

View File

@ -788,6 +788,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
plane->pixel_format = dml2_420_10;
break;
case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:
case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616:
case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F:
case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:
plane->pixel_format = dml2_444_64;

View File

@ -4685,7 +4685,10 @@ static void calculate_tdlut_setting(
//the tdlut is fetched during the 2 row times of prefetch.
if (p->setup_for_tdlut) {
*p->tdlut_groups_per_2row_ub = (unsigned int)math_ceil2((double) *p->tdlut_bytes_per_frame / *p->tdlut_bytes_per_group, 1);
*p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate;
if (*p->tdlut_bytes_per_frame > p->cursor_buffer_size * 1024)
*p->tdlut_opt_time = (*p->tdlut_bytes_per_frame - p->cursor_buffer_size * 1024) / tdlut_drain_rate;
else
*p->tdlut_opt_time = 0;
*p->tdlut_drain_time = p->cursor_buffer_size * 1024 / tdlut_drain_rate;
*p->tdlut_bytes_to_deliver = (unsigned int) (p->cursor_buffer_size * 1024.0);
}
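
The new branch guards an unsigned subtraction: when the cursor reservation exceeds tdlut_bytes_per_frame, the old expression wrapped around to an enormous positive value rather than going negative. The defensive shape, in miniature:

    /* Unsigned subtraction wraps, so clamp at zero explicitly. */
    static double opt_time(unsigned int frame_bytes, unsigned int reserved,
                           double drain_rate)
    {
        if (frame_bytes > reserved)
            return (frame_bytes - reserved) / drain_rate;
        return 0.0; /* (frame_bytes - reserved) would wrap, not go negative */
    }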

View File

@ -953,6 +953,7 @@ static void populate_dml_surface_cfg_from_plane_state(enum dml_project_id dml2_p
out->SourcePixelFormat[location] = dml_420_10;
break;
case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616:
case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616:
case SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616F:
case SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616F:
out->SourcePixelFormat[location] = dml_444_64;

View File

@ -1225,7 +1225,7 @@ void dce110_blank_stream(struct pipe_ctx *pipe_ctx)
return;
if (link->local_sink && link->local_sink->sink_signal == SIGNAL_TYPE_EDP) {
if (!link->skip_implict_edp_power_control)
if (!link->skip_implict_edp_power_control && hws)
hws->funcs.edp_backlight_control(link, false);
link->dc->hwss.set_abm_immediate_disable(pipe_ctx);
}

View File

@ -1047,6 +1047,15 @@ void dcn35_calc_blocks_to_gate(struct dc *dc, struct dc_state *context,
if (dc->caps.sequential_ono) {
update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->stream_res.dsc->inst] = false;
update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->stream_res.dsc->inst] = false;
/* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */
if (!pipe_ctx->top_pipe && pipe_ctx->plane_res.hubp &&
pipe_ctx->plane_res.hubp->inst != pipe_ctx->stream_res.dsc->inst) {
for (j = 0; j < dc->res_pool->pipe_count; ++j) {
update_state->pg_pipe_res_update[PG_HUBP][j] = false;
update_state->pg_pipe_res_update[PG_DPP][j] = false;
}
}
}
}
@ -1193,6 +1202,25 @@ void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context,
update_state->pg_pipe_res_update[PG_HDMISTREAM][0] = true;
if (dc->caps.sequential_ono) {
for (i = 0; i < dc->res_pool->pipe_count; i++) {
struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i];
if (new_pipe->stream_res.dsc && !new_pipe->top_pipe &&
update_state->pg_pipe_res_update[PG_DSC][new_pipe->stream_res.dsc->inst]) {
update_state->pg_pipe_res_update[PG_HUBP][new_pipe->stream_res.dsc->inst] = true;
update_state->pg_pipe_res_update[PG_DPP][new_pipe->stream_res.dsc->inst] = true;
/* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */
if (new_pipe->plane_res.hubp &&
new_pipe->plane_res.hubp->inst != new_pipe->stream_res.dsc->inst) {
for (j = 0; j < dc->res_pool->pipe_count; ++j) {
update_state->pg_pipe_res_update[PG_HUBP][j] = true;
update_state->pg_pipe_res_update[PG_DPP][j] = true;
}
}
}
}
for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
update_state->pg_pipe_res_update[PG_DPP][i]) {

View File

@ -385,9 +385,15 @@ bool dp_is_128b_132b_signal(struct pipe_ctx *pipe_ctx)
bool dp_is_lttpr_present(struct dc_link *link)
{
/* Some sink devices report invalid LTTPR revision, so don't validate against that cap */
return (dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) != 0 &&
uint32_t lttpr_count = dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
bool is_lttpr_present = (lttpr_count > 0 &&
link->dpcd_caps.lttpr_caps.max_lane_count > 0 &&
link->dpcd_caps.lttpr_caps.max_lane_count <= 4);
if (lttpr_count > 0 && !is_lttpr_present)
DC_LOG_ERROR("LTTPR count is nonzero but invalid lane count reported. Assuming no LTTPR present.\n");
return is_lttpr_present;
}
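
The refactor makes the sanity rule explicit: a nonzero repeater count only counts as "LTTPR present" when the advertised max lane count is in the valid 1..4 range, and the inconsistent case is now logged instead of silently dropped. Condensed into a standalone check:

    #include <stdio.h>

    static int lttpr_present(unsigned int lttpr_count, unsigned int max_lanes)
    {
        int ok = lttpr_count > 0 && max_lanes > 0 && max_lanes <= 4;

        if (lttpr_count > 0 && !ok)
            fprintf(stderr, "nonzero LTTPR count but invalid lane count; assuming none\n");
        return ok;
    }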
/* in DP compliance test, DPR-120 may have
@ -1551,6 +1557,8 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
uint8_t lttpr_dpcd_data[10] = {0};
enum dc_status status;
bool is_lttpr_present;
uint32_t lttpr_count;
uint32_t closest_lttpr_offset;
/* Logic to determine LTTPR support*/
bool vbios_lttpr_interop = link->dc->caps.vbios_lttpr_aware;
@ -1602,20 +1610,22 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
lttpr_dpcd_data[DP_LTTPR_ALPM_CAPABILITIES -
DP_LT_TUNABLE_PHY_REPEATER_FIELD_DATA_STRUCTURE_REV];
lttpr_count = dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
/* If this chip cap is set, at least one retimer must exist in the chain
* Override count to 1 if we receive a known bad count (0 or an invalid value) */
if (((link->chip_caps & AMD_EXT_DISPLAY_PATH_CAPS__EXT_CHIP_MASK) == AMD_EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN) &&
(dp_parse_lttpr_repeater_count(link->dpcd_caps.lttpr_caps.phy_repeater_cnt) == 0)) {
lttpr_count == 0) {
/* If you see this message consistently, either the host platform has FIXED_VS flag
* incorrectly configured or the sink device is returning an invalid count.
*/
DC_LOG_ERROR("lttpr_caps phy_repeater_cnt is 0x%x, forcing it to 0x80.",
link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
link->dpcd_caps.lttpr_caps.phy_repeater_cnt = 0x80;
lttpr_count = 1;
DC_LOG_DC("lttpr_caps forced phy_repeater_cnt = %d\n", link->dpcd_caps.lttpr_caps.phy_repeater_cnt);
}
/* Attempt to train in LTTPR transparent mode if repeater count exceeds 8. */
is_lttpr_present = dp_is_lttpr_present(link);
DC_LOG_DC("is_lttpr_present = %d\n", is_lttpr_present);
@ -1623,11 +1633,25 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
if (is_lttpr_present) {
CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
core_link_read_dpcd(link, DP_LTTPR_IEEE_OUI, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui));
CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui), "LTTPR IEEE OUI: ");
// Identify closest LTTPR to determine if workarounds required for known embedded LTTPR
closest_lttpr_offset = dp_get_closest_lttpr_offset(lttpr_count);
core_link_read_dpcd(link, DP_LTTPR_DEVICE_ID, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id));
CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id), "LTTPR Device ID: ");
core_link_read_dpcd(link, (DP_LTTPR_IEEE_OUI + closest_lttpr_offset),
link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui));
core_link_read_dpcd(link, (DP_LTTPR_DEVICE_ID + closest_lttpr_offset),
link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id));
if (lttpr_count > 1) {
CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui),
"Closest LTTPR To Host's IEEE OUI: ");
CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id),
"Closest LTTPR To Host's LTTPR Device ID: ");
} else {
CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui),
"LTTPR IEEE OUI: ");
CONN_DATA_DETECT(link, link->dpcd_caps.lttpr_caps.lttpr_device_id, sizeof(link->dpcd_caps.lttpr_caps.lttpr_device_id),
"LTTPR Device ID: ");
}
}
return status;

View File

@ -1954,6 +1954,9 @@ static bool dcn31_resource_construct(
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.num_of_host_routers = 2;
dc->caps.num_of_dpias_per_host_router = 2;
/* Use pipe context based otg sync logic */
dc->config.use_pipe_ctx_sync_logic = true;
dc->config.disable_hbr_audio_dp2 = true;

View File

@ -1885,6 +1885,9 @@ static bool dcn314_resource_construct(
dc->caps.max_disp_clock_khz_at_vmin = 650000;
dc->caps.num_of_host_routers = 2;
dc->caps.num_of_dpias_per_host_router = 2;
/* Use pipe context based otg sync logic */
dc->config.use_pipe_ctx_sync_logic = true;

View File

@ -1894,6 +1894,9 @@ static bool dcn35_resource_construct(
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.num_of_host_routers = 2;
dc->caps.num_of_dpias_per_host_router = 2;
/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
* to provide some margin.
* It's expected for future ASIC to have equal or higher value, in order to

View File

@ -1866,6 +1866,9 @@ static bool dcn351_resource_construct(
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.num_of_host_routers = 2;
dc->caps.num_of_dpias_per_host_router = 2;
/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
* to provide some margin.
* It's expected for future ASIC to have equal or higher value, in order to

View File

@ -1867,6 +1867,9 @@ static bool dcn36_resource_construct(
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.num_of_host_routers = 2;
dc->caps.num_of_dpias_per_host_router = 2;
/* max_disp_clock_khz_at_vmin is slightly lower than the STA value in order
* to provide some margin.
* It's expected for future ASIC to have equal or higher value, in order to

View File

@ -58,6 +58,7 @@
MODULE_FIRMWARE("amdgpu/aldebaran_smc.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_0.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_0_kicker.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin");
@ -92,7 +93,7 @@ const int pmfw_decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16};
int smu_v13_0_init_microcode(struct smu_context *smu)
{
struct amdgpu_device *adev = smu->adev;
char ucode_prefix[15];
char ucode_prefix[30];
int err = 0;
const struct smc_firmware_header_v1_0 *hdr;
const struct common_firmware_header *header;
@ -103,8 +104,13 @@ int smu_v13_0_init_microcode(struct smu_context *smu)
return 0;
amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix));
err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s.bin", ucode_prefix);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s_kicker.bin", ucode_prefix);
else
err = amdgpu_ucode_request(adev, &adev->pm.fw, AMDGPU_UCODE_REQUIRED,
"amdgpu/%s.bin", ucode_prefix);
if (err)
goto out;

View File

@ -159,7 +159,7 @@ bool malidp_format_mod_supported(struct drm_device *drm,
}
if (!fourcc_mod_is_vendor(modifier, ARM)) {
DRM_ERROR("Unknown modifier (not Arm)\n");
DRM_DEBUG_KMS("Unknown modifier (not Arm)\n");
return false;
}

Some files were not shown because too many files have changed in this diff.