Merge tag 'asoc-fix-v7.1-rc2' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus

ASoC: Fixes for v7.1

Another batch of fixes, plus a couple of quirks (mostly AMD ones, as has
been the case recently).  All driver changes, including fixes for the
KUnit tests for the Cirrus drivers that could cause memory corruption.
Takashi Iwai 2026-05-06 16:10:00 +02:00
commit 06bc7ff0a1
535 changed files with 7509 additions and 5131 deletions


@@ -19,6 +19,7 @@ Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org>
Ahmad Masri <quic_amasri@quicinc.com> <amasri@codeaurora.org>
Adam Oldham <oldhamca@gmail.com>
Adam Radford <aradford@gmail.com>
Aditya Garg <gargaditya08@proton.me> <gargaditya08@live.com>
Adriana Reus <adi.reus@gmail.com> <adriana.reus@intel.com>
Adrian Bunk <bunk@stusta.de>
Ajay Kaher <ajay.kaher@broadcom.com> <akaher@vmware.com>
@@ -207,6 +208,7 @@ Claudiu Beznea <claudiu.beznea@tuxon.dev> <claudiu.beznea@microchip.com>
Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
Corey Minyard <minyard@acm.org>
Damian Hobson-Garcia <dhobsong@igel.co.jp>
Dan Carpenter <error27@gmail.com> <dan.carpenter@linaro.org>
Dan Carpenter <error27@gmail.com> <dan.carpenter@oracle.com>
Dan Williams <djbw@kernel.org> <dan.j.williams@intel.com>
Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
@@ -495,6 +497,7 @@ Leon Romanovsky <leon@kernel.org> <leon@leon.nu>
Leon Romanovsky <leon@kernel.org> <leonro@mellanox.com>
Leon Romanovsky <leon@kernel.org> <leonro@nvidia.com>
Leo Yan <leo.yan@linux.dev> <leo.yan@linaro.org>
Liam R. Howlett <liam@infradead.org> <Liam.Howlett@oracle.com>
Liam Mark <quic_lmark@quicinc.com> <lmark@codeaurora.org>
Linas Vepstas <linas@austin.ibm.com>
Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@ascom.ch>
@@ -505,6 +508,8 @@ Linus Walleij <linusw@kernel.org> <linus.walleij@stericsson.com>
Linus Walleij <linusw@kernel.org> <linus.walleij@linaro.org>
Linus Walleij <linusw@kernel.org> <triad@df.lth.se>
<linux-hardening@vger.kernel.org> <kernel-hardening@lists.openwall.com>
Li Wang <li.wang@linux.dev> <liwang@redhat.com>
Li Wang <li.wang@linux.dev> <wangli.ahau@gmail.com>
Li Yang <leoyang.li@nxp.com> <leoli@freescale.com>
Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
Lior David <quic_liord@quicinc.com> <liord@codeaurora.org>
@@ -687,6 +692,7 @@ Punit Agrawal <punitagrawal@gmail.com> <punit.agrawal@arm.com>
Puranjay Mohan <puranjay@kernel.org> <puranjay12@gmail.com>
Qais Yousef <qyousef@layalina.io> <qais.yousef@imgtec.com>
Qais Yousef <qyousef@layalina.io> <qais.yousef@arm.com>
Qi Zheng <qi.zheng@linux.dev> <zhengqi.arch@bytedance.com>
Quentin Monnet <qmo@kernel.org> <quentin.monnet@netronome.com>
Quentin Monnet <qmo@kernel.org> <quentin@isovalent.com>
Quentin Perret <qperret@qperret.net> <quentin.perret@arm.com>


@@ -220,7 +220,7 @@ cgroup v2 currently supports the following mount options.
memory_hugetlb_accounting
Count HugeTLB memory usage towards the cgroup's overall
memory usage for the memory controller (for the purpose of
statistics reporting and memory protetion). This is a new
statistics reporting and memory protection). This is a new
behavior that could regress existing setups, so it must be
explicitly opted in with this mount option.
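
As a concrete illustration (not part of this patch), opting in amounts to
passing the option in the mount data string. Below is a minimal sketch using
the mount(2) syscall; the mount point path is only an example:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* Same effect as:
	 * mount -t cgroup2 -o memory_hugetlb_accounting none /sys/fs/cgroup
	 */
	if (mount("none", "/sys/fs/cgroup", "cgroup2", 0,
		  "memory_hugetlb_accounting")) {
		perror("mount cgroup2");
		return 1;
	}
	return 0;
}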


@@ -24,6 +24,7 @@ properties:
compatible:
items:
- enum:
- qcom,eliza-ipcc
- qcom,glymur-ipcc
- qcom,kaanapali-ipcc
- qcom,milos-ipcc


@@ -57,7 +57,7 @@ Mount options unique to the isofs filesystem.
Recommended documents about ISO 9660 standard are located at:
- http://www.y-adagio.com/
- ftp://ftp.ecma.ch/ecma-st/Ecma-119.pdf
- https://ecma-international.org/wp-content/uploads/ECMA-119_2nd_edition_december_1987.pdf
Quoting from the PDF "This 2nd Edition of Standard ECMA-119 is technically
identical with ISO 9660.", so it is a valid and gratis substitute of the


@@ -188,6 +188,7 @@ operations:
name: dev-set
doc: Set the configuration of a PSP device.
attribute-set: dev
flags: [admin-perm]
do:
request:
attributes:
@@ -207,6 +208,7 @@
name: key-rotate
doc: Rotate the device key.
attribute-set: dev
flags: [admin-perm]
do:
request:
attributes:


@@ -7873,7 +7873,7 @@ F: drivers/gpu/drm/sun4i/sun8i*
DRM DRIVER FOR APPLE TOUCH BARS
M: Aun-Ali Zaidi <admin@kodeit.net>
M: Aditya Garg <gargaditya08@live.com>
M: Aditya Garg <gargaditya08@proton.me>
L: dri-devel@lists.freedesktop.org
S: Maintained
T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
@@ -13860,7 +13860,6 @@ M: Pratyush Yadav <pratyush@kernel.org>
R: Dave Young <ruirui.yang@linux.dev>
L: kexec@lists.infradead.org
S: Maintained
W: http://lse.sourceforge.net/kdump/
F: Documentation/admin-guide/kdump/
F: fs/proc/vmcore.c
F: include/linux/crash_core.h
@@ -15252,7 +15251,7 @@ M: Andrea Cervesato <andrea.cervesato@suse.com>
M: Cyril Hrubis <chrubis@suse.cz>
M: Jan Stancek <jstancek@redhat.com>
M: Petr Vorel <pvorel@suse.cz>
M: Li Wang <liwang@redhat.com>
M: Li Wang <li.wang@linux.dev>
M: Yang Xu <xuyang2018.jy@fujitsu.com>
M: Xiao Yang <yangx.jy@fujitsu.com>
L: ltp@lists.linux.it (subscribers-only)
@@ -15399,7 +15398,7 @@ F: include/net/netns/mctp.h
F: net/mctp/
MAPLE TREE
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <liam@infradead.org>
R: Alice Ryhl <aliceryhl@google.com>
R: Andrew Ballance <andrewjballance@gmail.com>
L: maple-tree@lists.infradead.org
@@ -16759,7 +16758,7 @@ MEMORY MANAGEMENT - CORE
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Liam R. Howlett <liam@infradead.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Mike Rapoport <rppt@kernel.org>
R: Suren Baghdasaryan <surenb@google.com>
@@ -16805,7 +16804,7 @@ F: mm/sparse.c
F: mm/util.c
F: mm/vmpressure.c
F: mm/vmstat.c
N: include/linux/page[-_]*
N: include\/linux\/page[-_][a-zA-Z]*
MEMORY MANAGEMENT - EXECMEM
M: Andrew Morton <akpm@linux-foundation.org>
@@ -16895,7 +16894,7 @@ MEMORY MANAGEMENT - MISC
M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Liam R. Howlett <liam@infradead.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Mike Rapoport <rppt@kernel.org>
R: Suren Baghdasaryan <surenb@google.com>
@@ -16962,6 +16961,7 @@ S: Maintained
F: include/linux/compaction.h
F: include/linux/gfp.h
F: include/linux/page-isolation.h
F: include/linux/pageblock-flags.h
F: mm/compaction.c
F: mm/debug_page_alloc.c
F: mm/debug_page_ref.c
@@ -16983,7 +16983,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: Johannes Weiner <hannes@cmpxchg.org>
R: David Hildenbrand <david@kernel.org>
R: Michal Hocko <mhocko@kernel.org>
R: Qi Zheng <zhengqi.arch@bytedance.com>
R: Qi Zheng <qi.zheng@linux.dev>
R: Shakeel Butt <shakeel.butt@linux.dev>
R: Lorenzo Stoakes <ljs@kernel.org>
L: linux-mm@kvack.org
@@ -16996,7 +16996,7 @@ M: Andrew Morton <akpm@linux-foundation.org>
M: David Hildenbrand <david@kernel.org>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Rik van Riel <riel@surriel.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Liam R. Howlett <liam@infradead.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Harry Yoo <harry@kernel.org>
R: Jann Horn <jannh@google.com>
@@ -17043,7 +17043,7 @@ M: David Hildenbrand <david@kernel.org>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Zi Yan <ziy@nvidia.com>
R: Baolin Wang <baolin.wang@linux.alibaba.com>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Liam R. Howlett <liam@infradead.org>
R: Nico Pache <npache@redhat.com>
R: Ryan Roberts <ryan.roberts@arm.com>
R: Dev Jain <dev.jain@arm.com>
@@ -17081,7 +17081,7 @@ F: tools/testing/selftests/mm/uffd-*.[ch]
MEMORY MANAGEMENT - RUST
M: Alice Ryhl <aliceryhl@google.com>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Liam R. Howlett <liam@infradead.org>
L: linux-mm@kvack.org
L: rust-for-linux@vger.kernel.org
S: Maintained
@@ -17095,7 +17095,7 @@ F: rust/kernel/page.rs
MEMORY MAPPING
M: Andrew Morton <akpm@linux-foundation.org>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <liam@infradead.org>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Jann Horn <jannh@google.com>
@@ -17127,7 +17127,7 @@ F: tools/testing/vma/
MEMORY MAPPING - LOCKING
M: Andrew Morton <akpm@linux-foundation.org>
M: Suren Baghdasaryan <surenb@google.com>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <liam@infradead.org>
M: Lorenzo Stoakes <ljs@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Shakeel Butt <shakeel.butt@linux.dev>
@@ -17142,7 +17142,7 @@ F: mm/mmap_lock.c
MEMORY MAPPING - MADVISE (MEMORY ADVICE)
M: Andrew Morton <akpm@linux-foundation.org>
M: Liam R. Howlett <Liam.Howlett@oracle.com>
M: Liam R. Howlett <liam@infradead.org>
M: Lorenzo Stoakes <ljs@kernel.org>
M: David Hildenbrand <david@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
@@ -18672,19 +18672,59 @@ F: net/xfrm/
F: tools/testing/selftests/net/ipsec.c
NETWORKING [IPv4/IPv6]
M: "David S. Miller" <davem@davemloft.net>
M: David Ahern <dsahern@kernel.org>
M: Ido Schimmel <idosch@nvidia.com>
L: netdev@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
F: arch/x86/net/*
F: include/linux/ip.h
F: include/linux/ipv6*
F: Documentation/netlink/specs/rt-addr.yaml
F: Documentation/netlink/specs/rt-neigh.yaml
F: Documentation/netlink/specs/rt-route.yaml
F: Documentation/netlink/specs/rt-rule.yaml
F: include/linux/inetdevice.h
F: include/linux/mroute*
F: include/net/addrconf.h
F: include/net/arp.h
F: include/net/fib*
F: include/net/if_inet6.h
F: include/net/inetpeer.h
F: include/net/ip*
F: include/net/lwtunnel.h
F: include/net/ndisc.h
F: include/net/netns/nexthop.h
F: include/net/nexthop.h
F: include/net/route.h
F: net/ipv4/
F: net/ipv6/
F: include/uapi/linux/fib_rules.h
F: include/uapi/linux/in_route.h
F: include/uapi/linux/mroute*
F: include/uapi/linux/nexthop.h
F: net/core/fib*
F: net/core/lwtunnel.c
F: net/ipv4/arp.c
F: net/ipv4/devinet.c
F: net/ipv4/fib*
F: net/ipv4/icmp.c
F: net/ipv4/igmp.c
F: net/ipv4/inet_fragment.c
F: net/ipv4/inetpeer.c
F: net/ipv4/ip*
F: net/ipv4/metrics.c
F: net/ipv4/netlink.c
F: net/ipv4/nexthop.c
F: net/ipv4/route.c
F: net/ipv6/addr*
F: net/ipv6/anycast.c
F: net/ipv6/exthdrs.c
F: net/ipv6/exthdrs_core.c
F: net/ipv6/fib*
F: net/ipv6/icmp.c
F: net/ipv6/ip*
F: net/ipv6/mcast*
F: net/ipv6/ndisc.c
F: net/ipv6/output_core.c
F: net/ipv6/reassembly.c
F: net/ipv6/route.c
F: tools/testing/selftests/net/fib*
F: tools/testing/selftests/net/forwarding/
NETWORKING [LABELED] (NetLabel, Labeled IPsec, SECMARK)
M: Paul Moore <paul@paul-moore.com>
@@ -18819,18 +18859,11 @@ F: Documentation/networking/net_failover.rst
F: drivers/net/net_failover.c
F: include/net/net_failover.h
NEXTHOP
M: David Ahern <dsahern@kernel.org>
L: netdev@vger.kernel.org
S: Maintained
F: include/net/netns/nexthop.h
F: include/net/nexthop.h
F: include/uapi/linux/nexthop.h
F: net/ipv4/nexthop.c
NFC SUBSYSTEM
L: netdev@vger.kernel.org
S: Orphan
M: David Heidelberg <david+nfc@ixit.cz>
L: oe-linux-nfc@lists.linux.dev
S: Maintained
T: git https://codeberg.org/linux-nfc/linux.git
F: Documentation/devicetree/bindings/net/nfc/
F: drivers/nfc/
F: include/net/nfc/
@@ -20774,6 +20807,7 @@ M: Dominik Brodowski <linux@dominikbrodowski.net>
S: Odd Fixes
T: git git://git.kernel.org/pub/scm/linux/kernel/git/brodo/linux.git
F: Documentation/pcmcia/
F: drivers/net/ethernet/8390/pcnet_cs.c
F: drivers/pcmcia/
F: include/pcmcia/
F: tools/pcmcia/
@@ -23369,7 +23403,7 @@ RUST [ALLOC]
M: Danilo Krummrich <dakr@kernel.org>
R: Lorenzo Stoakes <ljs@kernel.org>
R: Vlastimil Babka <vbabka@kernel.org>
R: Liam R. Howlett <Liam.Howlett@oracle.com>
R: Liam R. Howlett <liam@infradead.org>
R: Uladzislau Rezki <urezki@gmail.com>
L: rust-for-linux@vger.kernel.org
S: Maintained
@@ -23521,7 +23555,7 @@ F: drivers/s390/net/
S390 PCI SUBSYSTEM
M: Niklas Schnelle <schnelle@linux.ibm.com>
M: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
M: Gerd Bayer <gbayer@linux.ibm.com>
L: linux-s390@vger.kernel.org
S: Supported
F: Documentation/arch/s390/pci.rst
@@ -24314,7 +24348,7 @@ F: include/media/i2c/rj54n1cb0c.h
SHRINKER
M: Andrew Morton <akpm@linux-foundation.org>
M: Dave Chinner <david@fromorbit.com>
R: Qi Zheng <zhengqi.arch@bytedance.com>
R: Qi Zheng <qi.zheng@linux.dev>
R: Roman Gushchin <roman.gushchin@linux.dev>
R: Muchun Song <muchun.song@linux.dev>
L: linux-mm@kvack.org
@@ -24764,6 +24798,7 @@ SOFTWARE RAID (Multiple Disks) SUPPORT
M: Song Liu <song@kernel.org>
M: Yu Kuai <yukuai@fnnas.com>
R: Li Nan <linan122@huawei.com>
R: Xiao Ni <xiao@kernel.org>
L: linux-raid@vger.kernel.org
S: Supported
Q: https://patchwork.kernel.org/project/linux-raid/list/


@@ -2,7 +2,7 @@
VERSION = 7
PATCHLEVEL = 1
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc2
NAME = Baby Opossum Posse
# *DOCUMENTATION*


@@ -40,7 +40,7 @@ static __always_inline void __pmr_local_irq_enable(void)
barrier();
}
static inline void arch_local_irq_enable(void)
static __always_inline void arch_local_irq_enable(void)
{
if (system_uses_irq_prio_masking()) {
__pmr_local_irq_enable();
@@ -68,7 +68,7 @@ static __always_inline void __pmr_local_irq_disable(void)
barrier();
}
static inline void arch_local_irq_disable(void)
static __always_inline void arch_local_irq_disable(void)
{
if (system_uses_irq_prio_masking()) {
__pmr_local_irq_disable();
@@ -90,7 +90,7 @@ static __always_inline unsigned long __pmr_local_save_flags(void)
/*
* Save the current interrupt enable state.
*/
static inline unsigned long arch_local_save_flags(void)
static __always_inline unsigned long arch_local_save_flags(void)
{
if (system_uses_irq_prio_masking()) {
return __pmr_local_save_flags();
@@ -109,7 +109,7 @@ static __always_inline bool __pmr_irqs_disabled_flags(unsigned long flags)
return flags != GIC_PRIO_IRQON;
}
static inline bool arch_irqs_disabled_flags(unsigned long flags)
static __always_inline bool arch_irqs_disabled_flags(unsigned long flags)
{
if (system_uses_irq_prio_masking()) {
return __pmr_irqs_disabled_flags(flags);
@@ -128,7 +128,7 @@ static __always_inline bool __pmr_irqs_disabled(void)
return __pmr_irqs_disabled_flags(__pmr_local_save_flags());
}
static inline bool arch_irqs_disabled(void)
static __always_inline bool arch_irqs_disabled(void)
{
if (system_uses_irq_prio_masking()) {
return __pmr_irqs_disabled();
@@ -160,7 +160,7 @@ static __always_inline unsigned long __pmr_local_irq_save(void)
return flags;
}
static inline unsigned long arch_local_irq_save(void)
static __always_inline unsigned long arch_local_irq_save(void)
{
if (system_uses_irq_prio_masking()) {
return __pmr_local_irq_save();
@@ -187,7 +187,7 @@ static __always_inline void __pmr_local_irq_restore(unsigned long flags)
/*
* restore saved IRQ state
*/
static inline void arch_local_irq_restore(unsigned long flags)
static __always_inline void arch_local_irq_restore(unsigned long flags)
{
if (system_uses_irq_prio_masking()) {
__pmr_local_irq_restore(flags);


@@ -68,7 +68,12 @@
#define KERNEL_SEGMENT_COUNT 5
#if SWAPPER_BLOCK_SIZE > SEGMENT_ALIGN
#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 1)
/*
* KERNEL_SEGMENT_COUNT counts the permanent kernel VMAs. The early mapping
* has one additional split, [_text, _stext). Reserve one more page for the
* SWAPPER_BLOCK_SIZE-unaligned boundaries.
*/
#define EARLY_SEGMENT_EXTRA_PAGES (KERNEL_SEGMENT_COUNT + 2)
/*
* The initial ID map consists of the kernel image, mapped as two separate
* segments, and may appear misaligned wrt the swapper block size. This means


@@ -50,6 +50,9 @@
#include <linux/mm.h>
#define MARKER(m) \
m, __after_##m = m - 1
enum __kvm_host_smccc_func {
/* Hypercalls that are unavailable once pKVM has finalised. */
/* __KVM_HOST_SMCCC_FUNC___kvm_hyp_init */
@@ -59,8 +62,10 @@ enum __kvm_host_smccc_func {
__KVM_HOST_SMCCC_FUNC___kvm_enable_ssbs,
__KVM_HOST_SMCCC_FUNC___vgic_v3_init_lrs,
__KVM_HOST_SMCCC_FUNC___vgic_v3_get_gic_config,
MARKER(__KVM_HOST_SMCCC_FUNC_MIN_PKVM),
__KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize,
__KVM_HOST_SMCCC_FUNC_MIN_PKVM = __KVM_HOST_SMCCC_FUNC___pkvm_prot_finalize,
/* Hypercalls that are always available and common to [nh]VHE/pKVM. */
__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
@@ -72,11 +77,20 @@ enum __kvm_host_smccc_func {
__KVM_HOST_SMCCC_FUNC___kvm_tlb_flush_vmid_range,
__KVM_HOST_SMCCC_FUNC___kvm_flush_cpu_context,
__KVM_HOST_SMCCC_FUNC___kvm_timer_set_cntvoff,
__KVM_HOST_SMCCC_FUNC___tracing_load,
__KVM_HOST_SMCCC_FUNC___tracing_unload,
__KVM_HOST_SMCCC_FUNC___tracing_enable,
__KVM_HOST_SMCCC_FUNC___tracing_swap_reader,
__KVM_HOST_SMCCC_FUNC___tracing_update_clock,
__KVM_HOST_SMCCC_FUNC___tracing_reset,
__KVM_HOST_SMCCC_FUNC___tracing_enable_event,
__KVM_HOST_SMCCC_FUNC___tracing_write_event,
__KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs,
__KVM_HOST_SMCCC_FUNC___vgic_v3_restore_vmcr_aprs,
__KVM_HOST_SMCCC_FUNC___vgic_v5_save_apr,
__KVM_HOST_SMCCC_FUNC___vgic_v5_restore_vmcr_apr,
__KVM_HOST_SMCCC_FUNC_MAX_NO_PKVM = __KVM_HOST_SMCCC_FUNC___vgic_v5_restore_vmcr_apr,
MARKER(__KVM_HOST_SMCCC_FUNC_PKVM_ONLY),
/* Hypercalls that are available only when pKVM has finalised. */
__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
@@ -100,14 +114,8 @@ enum __kvm_host_smccc_func {
__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
__KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
__KVM_HOST_SMCCC_FUNC___tracing_load,
__KVM_HOST_SMCCC_FUNC___tracing_unload,
__KVM_HOST_SMCCC_FUNC___tracing_enable,
__KVM_HOST_SMCCC_FUNC___tracing_swap_reader,
__KVM_HOST_SMCCC_FUNC___tracing_update_clock,
__KVM_HOST_SMCCC_FUNC___tracing_reset,
__KVM_HOST_SMCCC_FUNC___tracing_enable_event,
__KVM_HOST_SMCCC_FUNC___tracing_write_event,
MARKER(__KVM_HOST_SMCCC_FUNC_MAX)
};
#define DECLARE_KVM_VHE_SYM(sym) extern char sym[]


@@ -450,9 +450,6 @@ struct kvm_vcpu_fault_info {
r = __VNCR_START__ + ((VNCR_ ## r) / 8), \
__after_##r = __MAX__(__before_##r - 1, r)
#define MARKER(m) \
m, __after_##m = m - 1
enum vcpu_sysreg {
__INVALID_SYSREG__, /* 0 is reserved as an invalid value */
MPIDR_EL1, /* MultiProcessor Affinity Register */
@@ -1548,7 +1545,7 @@ static inline bool __vcpu_has_feature(const struct kvm_arch *ka, int feature)
#define kvm_vcpu_has_feature(k, f) __vcpu_has_feature(&(k)->arch, (f))
#define vcpu_has_feature(v, f) __vcpu_has_feature(&(v)->kvm->arch, (f))
#define kvm_vcpu_initialized(v) vcpu_get_flag(vcpu, VCPU_INITIALIZED)
#define kvm_vcpu_initialized(v) vcpu_get_flag(v, VCPU_INITIALIZED)
int kvm_trng_call(struct kvm_vcpu *vcpu);
#ifdef CONFIG_KVM


@@ -196,7 +196,7 @@ static int scs_handle_fde_frame(const struct eh_frame *frame,
loc += *opcode++ * code_alignment_factor;
loc += (*opcode++ << 8) * code_alignment_factor;
loc += (*opcode++ << 16) * code_alignment_factor;
loc += (*opcode++ << 24) * code_alignment_factor;
loc += ((u64)*opcode++ << 24) * code_alignment_factor;
size -= 4;
break;


@@ -67,6 +67,9 @@ struct rt_sigframe_user_layout {
unsigned long end_offset;
};
#define TERMINATOR_SIZE round_up(sizeof(struct _aarch64_ctx), 16)
#define EXTRA_CONTEXT_SIZE round_up(sizeof(struct extra_context), 16)
/*
* Holds any EL0-controlled state that influences unprivileged memory accesses.
* This includes both accesses done in userspace and uaccess done in the kernel.
@@ -74,13 +77,35 @@ struct rt_sigframe_user_layout {
* This state needs to be carefully managed to ensure that it doesn't cause
* uaccess to fail when setting up the signal frame, and the signal handler
* itself also expects a well-defined state when entered.
*
* The struct should be zero-initialised. Its members should only be accessed
* via the accessors below. __valid_fields tracks which of the fields are valid
* (have been set to some value).
*/
struct user_access_state {
u64 por_el0;
unsigned int __valid_fields;
u64 __por_el0;
};
#define TERMINATOR_SIZE round_up(sizeof(struct _aarch64_ctx), 16)
#define EXTRA_CONTEXT_SIZE round_up(sizeof(struct extra_context), 16)
#define UA_STATE_HAS_POR_EL0 BIT(0)
static void set_ua_state_por_el0(struct user_access_state *ua_state,
u64 por_el0)
{
ua_state->__por_el0 = por_el0;
ua_state->__valid_fields |= UA_STATE_HAS_POR_EL0;
}
static int get_ua_state_por_el0(const struct user_access_state *ua_state,
u64 *por_el0)
{
if (ua_state->__valid_fields & UA_STATE_HAS_POR_EL0) {
*por_el0 = ua_state->__por_el0;
return 0;
}
return -ENOENT;
}
/*
* Save the user access state into ua_state and reset it to disable any
@@ -94,7 +119,7 @@ static void save_reset_user_access_state(struct user_access_state *ua_state)
for (int pkey = 0; pkey < arch_max_pkey(); pkey++)
por_enable_all |= POR_ELx_PERM_PREP(pkey, POE_RWX);
ua_state->por_el0 = read_sysreg_s(SYS_POR_EL0);
set_ua_state_por_el0(ua_state, read_sysreg_s(SYS_POR_EL0));
write_sysreg_s(por_enable_all, SYS_POR_EL0);
/*
* No ISB required as we can tolerate spurious Overlay faults -
@@ -122,8 +147,10 @@ static void set_handler_user_access_state(void)
*/
static void restore_user_access_state(const struct user_access_state *ua_state)
{
if (system_supports_poe())
write_sysreg_s(ua_state->por_el0, SYS_POR_EL0);
u64 por_el0;
if (get_ua_state_por_el0(ua_state, &por_el0) == 0)
write_sysreg_s(por_el0, SYS_POR_EL0);
}
static void init_user_layout(struct rt_sigframe_user_layout *user)
@@ -333,11 +360,16 @@ static int restore_fpmr_context(struct user_ctxs *user)
static int preserve_poe_context(struct poe_context __user *ctx,
const struct user_access_state *ua_state)
{
int err = 0;
int err;
u64 por_el0;
err = get_ua_state_por_el0(ua_state, &por_el0);
if (WARN_ON_ONCE(err))
return err;
__put_user_error(POE_MAGIC, &ctx->head.magic, err);
__put_user_error(sizeof(*ctx), &ctx->head.size, err);
__put_user_error(ua_state->por_el0, &ctx->por_el0, err);
__put_user_error(por_el0, &ctx->por_el0, err);
return err;
}
@@ -353,7 +385,7 @@ static int restore_poe_context(struct user_ctxs *user,
__get_user_error(por_el0, &(user->poe->por_el0), err);
if (!err)
ua_state->por_el0 = por_el0;
set_ua_state_por_el0(ua_state, por_el0);
return err;
}
@@ -1095,7 +1127,7 @@ SYSCALL_DEFINE0(rt_sigreturn)
{
struct pt_regs *regs = current_pt_regs();
struct rt_sigframe __user *frame;
struct user_access_state ua_state;
struct user_access_state ua_state = {};
/* Always make any pending restarted system calls return -EINTR */
current->restart_block.fn = do_no_restart_syscall;
@@ -1507,7 +1539,7 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
{
struct rt_sigframe_user_layout user;
struct rt_sigframe __user *frame;
struct user_access_state ua_state;
struct user_access_state ua_state = {};
int err = 0;
fpsimd_save_and_flush_current_state();


@@ -824,6 +824,10 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *v)
{
bool irq_lines = *vcpu_hcr(v) & (HCR_VI | HCR_VF | HCR_VSE);
irq_lines |= (!irqchip_in_kernel(v->kvm) &&
(kvm_timer_should_notify_user(v) ||
kvm_pmu_should_notify_user(v)));
return ((irq_lines || kvm_vgic_vcpu_pending_irq(v))
&& !kvm_arm_vcpu_stopped(v) && !v->arch.pause);
}


@@ -131,7 +131,6 @@ struct reg_feat_map_desc {
}
#define FEAT_SPE ID_AA64DFR0_EL1, PMSVer, IMP
#define FEAT_SPE_FnE ID_AA64DFR0_EL1, PMSVer, V1P2
#define FEAT_BRBE ID_AA64DFR0_EL1, BRBE, IMP
#define FEAT_TRC_SR ID_AA64DFR0_EL1, TraceVer, IMP
#define FEAT_PMUv3 ID_AA64DFR0_EL1, PMUVer, IMP
@@ -192,7 +191,7 @@ struct reg_feat_map_desc {
#define FEAT_SRMASK ID_AA64MMFR4_EL1, SRMASK, IMP
#define FEAT_PoPS ID_AA64MMFR4_EL1, PoPS, IMP
#define FEAT_PFAR ID_AA64PFR1_EL1, PFAR, IMP
#define FEAT_Debugv8p9 ID_AA64DFR0_EL1, PMUVer, V3P9
#define FEAT_Debugv8p9 ID_AA64DFR0_EL1, DebugVer, V8P9
#define FEAT_PMUv3_SS ID_AA64DFR0_EL1, PMSS, IMP
#define FEAT_SEBEP ID_AA64DFR0_EL1, SEBEP, IMP
#define FEAT_EBEP ID_AA64DFR1_EL1, EBEP, IMP
@@ -283,7 +282,7 @@ static bool feat_anerr(struct kvm *kvm)
static bool feat_sme_smps(struct kvm *kvm)
{
/*
* Revists this if KVM ever supports SME -- this really should
* Revisit this if KVM ever supports SME -- this really should
* look at the guest's view of SMIDR_EL1. Funnily enough, this
* is not captured in the JSON file, but only as a note in the
* ARM ARM.
@@ -295,17 +294,27 @@ static bool feat_sme_smps(struct kvm *kvm)
static bool feat_spe_fds(struct kvm *kvm)
{
/*
* Revists this if KVM ever supports SPE -- this really should
* Revisit this if KVM ever supports SPE -- this really should
* look at the guest's view of PMSIDR_EL1.
*/
return (kvm_has_feat(kvm, FEAT_SPEv1p4) &&
(read_sysreg_s(SYS_PMSIDR_EL1) & PMSIDR_EL1_FDS));
}
static bool feat_spe_fne(struct kvm *kvm)
{
/*
* Revisit this if KVM ever supports SPE -- this really should
* look at the guest's view of PMSIDR_EL1.
*/
return (kvm_has_feat(kvm, FEAT_SPEv1p2) &&
(read_sysreg_s(SYS_PMSIDR_EL1) & PMSIDR_EL1_FnE));
}
static bool feat_trbe_mpam(struct kvm *kvm)
{
/*
* Revists this if KVM ever supports both MPAM and TRBE --
* Revisit this if KVM ever supports both MPAM and TRBE --
* this really should look at the guest's view of TRBIDR_EL1.
*/
return (kvm_has_feat(kvm, FEAT_TRBE) &&
@@ -537,7 +546,7 @@ static const struct reg_bits_to_feat_map hdfgrtr_feat_map[] = {
HDFGRTR_EL2_PMBPTR_EL1 |
HDFGRTR_EL2_PMBLIMITR_EL1,
FEAT_SPE),
NEEDS_FEAT(HDFGRTR_EL2_nPMSNEVFR_EL1, FEAT_SPE_FnE),
NEEDS_FEAT(HDFGRTR_EL2_nPMSNEVFR_EL1, feat_spe_fne),
NEEDS_FEAT(HDFGRTR_EL2_nBRBDATA |
HDFGRTR_EL2_nBRBCTL |
HDFGRTR_EL2_nBRBIDR,
@@ -605,7 +614,7 @@ static const struct reg_bits_to_feat_map hdfgwtr_feat_map[] = {
HDFGWTR_EL2_PMBPTR_EL1 |
HDFGWTR_EL2_PMBLIMITR_EL1,
FEAT_SPE),
NEEDS_FEAT(HDFGWTR_EL2_nPMSNEVFR_EL1, FEAT_SPE_FnE),
NEEDS_FEAT(HDFGWTR_EL2_nPMSNEVFR_EL1, feat_spe_fne),
NEEDS_FEAT(HDFGWTR_EL2_nBRBDATA |
HDFGWTR_EL2_nBRBCTL,
FEAT_BRBE),


@@ -709,6 +709,14 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__kvm_tlb_flush_vmid_range),
HANDLE_FUNC(__kvm_flush_cpu_context),
HANDLE_FUNC(__kvm_timer_set_cntvoff),
HANDLE_FUNC(__tracing_load),
HANDLE_FUNC(__tracing_unload),
HANDLE_FUNC(__tracing_enable),
HANDLE_FUNC(__tracing_swap_reader),
HANDLE_FUNC(__tracing_update_clock),
HANDLE_FUNC(__tracing_reset),
HANDLE_FUNC(__tracing_enable_event),
HANDLE_FUNC(__tracing_write_event),
HANDLE_FUNC(__vgic_v3_save_aprs),
HANDLE_FUNC(__vgic_v3_restore_vmcr_aprs),
HANDLE_FUNC(__vgic_v5_save_apr),
@@ -735,22 +743,16 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__pkvm_vcpu_load),
HANDLE_FUNC(__pkvm_vcpu_put),
HANDLE_FUNC(__pkvm_tlb_flush_vmid),
HANDLE_FUNC(__tracing_load),
HANDLE_FUNC(__tracing_unload),
HANDLE_FUNC(__tracing_enable),
HANDLE_FUNC(__tracing_swap_reader),
HANDLE_FUNC(__tracing_update_clock),
HANDLE_FUNC(__tracing_reset),
HANDLE_FUNC(__tracing_enable_event),
HANDLE_FUNC(__tracing_write_event),
};
static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(unsigned long, id, host_ctxt, 0);
unsigned long hcall_min = 0, hcall_max = -1;
unsigned long hcall_min = 0, hcall_max = __KVM_HOST_SMCCC_FUNC_MAX;
hcall_t hfn;
BUILD_BUG_ON(ARRAY_SIZE(host_hcall) != __KVM_HOST_SMCCC_FUNC_MAX);
/*
* If pKVM has been initialised then reject any calls to the
* early "privileged" hypercalls. Note that we cannot reject
@@ -763,16 +765,14 @@ static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
if (static_branch_unlikely(&kvm_protected_mode_initialized)) {
hcall_min = __KVM_HOST_SMCCC_FUNC_MIN_PKVM;
} else {
hcall_max = __KVM_HOST_SMCCC_FUNC_MAX_NO_PKVM;
hcall_max = __KVM_HOST_SMCCC_FUNC_PKVM_ONLY;
}
id &= ~ARM_SMCCC_CALL_HINTS;
id -= KVM_HOST_SMCCC_ID(0);
if (unlikely(id < hcall_min || id > hcall_max ||
id >= ARRAY_SIZE(host_hcall))) {
if (unlikely(id < hcall_min || id >= hcall_max))
goto inval;
}
hfn = host_hcall[id];
if (unlikely(!hfn))
@@ -805,6 +805,10 @@ static void handle_host_smc(struct kvm_cpu_context *host_ctxt)
}
func_id &= ~ARM_SMCCC_CALL_HINTS;
if (upper_32_bits(func_id)) {
cpu_reg(host_ctxt, 0) = SMCCC_RET_NOT_SUPPORTED;
goto exit_skip_instr;
}
handled = kvm_host_psci_handler(host_ctxt, func_id);
if (!handled)


@@ -266,7 +266,8 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
if (hyp_vm->kvm.created_vcpus <= vcpu_idx)
goto unlock;
hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
/* Pairs with smp_store_release() in register_hyp_vcpu(). */
hyp_vcpu = smp_load_acquire(&hyp_vm->vcpus[vcpu_idx]);
if (!hyp_vcpu)
goto unlock;
@@ -860,12 +861,30 @@ int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
* the page-aligned size of 'struct pkvm_hyp_vcpu'.
* Return 0 on success, negative error code on failure.
*/
static int register_hyp_vcpu(struct pkvm_hyp_vm *hyp_vm,
struct pkvm_hyp_vcpu *hyp_vcpu)
{
unsigned int idx = hyp_vcpu->vcpu.vcpu_idx;
if (idx >= hyp_vm->kvm.created_vcpus)
return -EINVAL;
if (hyp_vm->vcpus[idx])
return -EINVAL;
/*
* Ensure the hyp_vcpu is initialised before publishing it to
* the vCPU-load path via 'hyp_vm->vcpus[]'.
*/
smp_store_release(&hyp_vm->vcpus[idx], hyp_vcpu);
return 0;
}
int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
unsigned long vcpu_hva)
{
struct pkvm_hyp_vcpu *hyp_vcpu;
struct pkvm_hyp_vm *hyp_vm;
unsigned int idx;
int ret;
hyp_vcpu = map_donated_memory(vcpu_hva, sizeof(*hyp_vcpu));
@@ -884,18 +903,11 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
if (ret)
goto unlock;
idx = hyp_vcpu->vcpu.vcpu_idx;
if (idx >= hyp_vm->kvm.created_vcpus) {
ret = -EINVAL;
goto unlock;
ret = register_hyp_vcpu(hyp_vm, hyp_vcpu);
if (ret) {
unpin_host_vcpu(host_vcpu);
unpin_host_sve_state(hyp_vcpu);
}
if (hyp_vm->vcpus[idx]) {
ret = -EINVAL;
goto unlock;
}
hyp_vm->vcpus[idx] = hyp_vcpu;
unlock:
hyp_spin_unlock(&vm_table_lock);


@@ -312,10 +312,6 @@ void __noreturn __pkvm_init_finalise(void)
};
pkvm_pgtable.mm_ops = &pkvm_pgtable_mm_ops;
ret = fix_host_ownership();
if (ret)
goto out;
ret = fix_hyp_pgtable_refcnt();
if (ret)
goto out;
@@ -324,6 +320,10 @@
if (ret)
goto out;
ret = fix_host_ownership();
if (ret)
goto out;
ret = hyp_ffa_init(ffa_proxy_pages);
if (ret)
goto out;


@@ -91,7 +91,7 @@ static int vgic_mmio_uaccess_write_v2_misc(struct kvm_vcpu *vcpu,
* migration from old kernels to new kernels with legacy
* userspace.
*/
reg = FIELD_GET(GICD_IIDR_REVISION_MASK, reg);
reg = FIELD_GET(GICD_IIDR_REVISION_MASK, val);
switch (reg) {
case KVM_VGIC_IMP_REV_2:
case KVM_VGIC_IMP_REV_3:


@@ -194,7 +194,7 @@ static int vgic_mmio_uaccess_write_v3_misc(struct kvm_vcpu *vcpu,
if ((reg ^ val) & ~GICD_IIDR_REVISION_MASK)
return -EINVAL;
reg = FIELD_GET(GICD_IIDR_REVISION_MASK, reg);
reg = FIELD_GET(GICD_IIDR_REVISION_MASK, val);
switch (reg) {
case KVM_VGIC_IMP_REV_2:
case KVM_VGIC_IMP_REV_3:


@@ -1414,6 +1414,9 @@ static inline char *debug_get_user_string(const char __user *user_buf,
{
char *buffer;
if (!user_len)
return ERR_PTR(-EINVAL);
buffer = memdup_user_nul(user_buf, user_len);
if (IS_ERR(buffer))
return buffer;
@@ -1584,6 +1587,11 @@ static int debug_input_flush_fn(debug_info_t *id, struct debug_view *view,
char input_buf[1];
int rc = user_len;
if (!user_len) {
rc = -EINVAL;
goto out;
}
if (user_len > 0x10000)
user_len = 0x10000;
if (*offset != 0) {


@@ -438,7 +438,7 @@ void do_secure_storage_access(struct pt_regs *regs)
panic("Unexpected PGM 0x3d with TEID bit 61=0");
}
if (is_kernel_fault(regs)) {
folio = phys_to_folio(addr);
folio = virt_to_folio((void *)addr);
if (unlikely(!folio_try_get(folio)))
return;
rc = uv_convert_from_secure(folio_to_phys(folio));


@@ -7,7 +7,7 @@
/*
* This is set up by the setup-routine at boot-time
*/
extern unsigned char *boot_params_page;
extern unsigned char boot_params_page[];
#define PARAM boot_params_page
#define MOUNT_ROOT_RDONLY (*(unsigned long *) (PARAM+0x000))


@@ -390,6 +390,11 @@ static int crypto_authenc_esn_create(struct crypto_template *tmpl,
auth = crypto_spawn_ahash_alg(&ctx->auth);
auth_base = &auth->base;
if (auth->digestsize > 0 && auth->digestsize < 4) {
err = -EINVAL;
goto err_free_inst;
}
err = crypto_grab_skcipher(&ctx->enc, aead_crypto_instance(inst),
crypto_attr_alg_name(tb[2]), 0, mask);
if (err)


@@ -605,15 +605,12 @@ static umode_t acpi_tad_attr_is_visible(struct kobject *kobj,
return 0;
}
static const struct attribute_group acpi_tad_attr_group = {
static const struct attribute_group acpi_tad_group = {
.attrs = acpi_tad_attrs,
.is_visible = acpi_tad_attr_is_visible,
};
static const struct attribute_group *acpi_tad_attr_groups[] = {
&acpi_tad_attr_group,
NULL,
};
__ATTRIBUTE_GROUPS(acpi_tad);
#ifdef CONFIG_RTC_CLASS
/* RTC class device interface */
@@ -683,9 +680,8 @@ static int acpi_tad_rtc_set_alarm(struct device *dev, struct rtc_wkalrm *t)
acpi_tad_rt_to_tm(&rt, &tm_now);
value = ktime_divns(ktime_sub(rtc_tm_to_ktime(t->time),
rtc_tm_to_ktime(tm_now)), NSEC_PER_SEC);
if (value <= 0 || value > U32_MAX)
value = rtc_tm_to_time64(&t->time) - rtc_tm_to_time64(&tm_now);
if (value <= 0 || value >= U32_MAX)
return -EINVAL;
}
@@ -748,8 +744,7 @@ static int acpi_tad_rtc_read_alarm(struct device *dev, struct rtc_wkalrm *t)
if (retval != ACPI_TAD_WAKE_DISABLED) {
t->enabled = 1;
t->time = rtc_ktime_to_tm(ktime_add_ns(rtc_tm_to_ktime(tm_now),
(u64)retval * NSEC_PER_SEC));
rtc_time64_to_tm(rtc_tm_to_time64(&tm_now) + retval, &t->time);
} else {
t->enabled = 0;
t->time = tm_now;
@@ -795,9 +790,9 @@ static int acpi_tad_disable_timer(struct device *dev, u32 timer_id)
return acpi_tad_wake_set(dev, "_STV", timer_id, ACPI_TAD_WAKE_DISABLED);
}
static void acpi_tad_remove(struct platform_device *pdev)
static void acpi_tad_remove(void *data)
{
struct device *dev = &pdev->dev;
struct device *dev = data;
struct acpi_tad_driver_data *dd = dev_get_drvdata(dev);
device_init_wakeup(dev, false);
@@ -824,6 +819,7 @@ static int acpi_tad_probe(struct platform_device *pdev)
struct acpi_tad_driver_data *dd;
acpi_status status;
unsigned long long caps;
int ret;
/*
* Initialization failure messages are mostly about firmware issues, so
@@ -863,13 +859,21 @@ static int acpi_tad_probe(struct platform_device *pdev)
}
/*
* The platform bus type layer tells the ACPI PM domain powers up the
* device, so set the runtime PM status of it to "active".
* The platform bus type probe callback tells the ACPI PM domain to
* power up the device, so set the runtime PM status of it to "active".
*/
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
pm_runtime_suspend(dev);
/*
* acpi_tad_remove() needs to run after unregistering the RTC class
* device to avoid racing with the latter's callbacks.
*/
ret = devm_add_action_or_reset(&pdev->dev, acpi_tad_remove, &pdev->dev);
if (ret)
return ret;
if (caps & ACPI_TAD_RT)
acpi_tad_register_rtc(dev, caps);
@@ -885,10 +889,9 @@ static struct platform_driver acpi_tad_driver = {
.driver = {
.name = "acpi-tad",
.acpi_match_table = acpi_tad_ids,
.dev_groups = acpi_tad_attr_groups,
.dev_groups = acpi_tad_groups,
},
.probe = acpi_tad_probe,
.remove = acpi_tad_remove,
};
MODULE_DEVICE_TABLE(acpi, acpi_tad_ids);


@@ -401,8 +401,18 @@ static struct acpi_generic_address *einj_get_trigger_parameter_region(
return NULL;
}
static bool is_memory_injection(u32 type, u32 flags)
{
if (flags & SETWA_FLAGS_EINJV2)
return !!(type & ACPI_EINJV2_MEMORY);
if (type & ACPI5_VENDOR_BIT)
return !!(vendor_flags & SETWA_FLAGS_MEM);
return !!(type & MEM_ERROR_MASK) || !!(flags & SETWA_FLAGS_MEM);
}
/* Execute instructions in trigger error action table */
static int __einj_error_trigger(u64 trigger_paddr, u32 type,
static int __einj_error_trigger(u64 trigger_paddr, u32 type, u32 flags,
u64 param1, u64 param2)
{
struct acpi_einj_trigger trigger_tab;
@@ -480,7 +490,7 @@ static int __einj_error_trigger(u64 trigger_paddr, u32 type,
* This will cause resource conflict with regular memory. So
* remove it from trigger table resources.
*/
if ((param_extension || acpi5) && (type & MEM_ERROR_MASK) && param2) {
if ((param_extension || acpi5) && is_memory_injection(type, flags)) {
struct apei_resources addr_resources;
apei_resources_init(&addr_resources);
@@ -660,7 +670,7 @@ static int __einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2,
return rc;
trigger_paddr = apei_exec_ctx_get_output(&ctx);
if (notrigger == 0) {
rc = __einj_error_trigger(trigger_paddr, type, param1, param2);
rc = __einj_error_trigger(trigger_paddr, type, flags, param1, param2);
if (rc)
return rc;
}
@@ -718,28 +728,6 @@ int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2, u64 param3,
SETWA_FLAGS_PCIE_SBDF | SETWA_FLAGS_EINJV2)))
return -EINVAL;
/* check if type is a valid EINJv2 error type */
if (is_v2) {
if (!(type & available_error_type_v2))
return -EINVAL;
}
/*
* We need extra sanity checks for memory errors.
* Other types leap directly to injection.
*/
/* ensure param1/param2 existed */
if (!(param_extension || acpi5))
goto inject;
/* ensure injection is memory related */
if (type & ACPI5_VENDOR_BIT) {
if (vendor_flags != SETWA_FLAGS_MEM)
goto inject;
} else if (!(type & MEM_ERROR_MASK) && !(flags & SETWA_FLAGS_MEM)) {
goto inject;
}
/*
* Injections targeting a CXL 1.0/1.1 port have to be injected
* via the einj_cxl_rch_error_inject() path as that does the proper
@@ -748,6 +736,23 @@ int einj_error_inject(u32 type, u32 flags, u64 param1, u64 param2, u64 param3,
if (einj_is_cxl_error_type(type) && (flags & SETWA_FLAGS_MEM))
return -EINVAL;
/* check if type is a valid EINJv2 error type */
if (is_v2) {
if (!(type & available_error_type_v2))
return -EINVAL;
}
/* ensure param1/param2 existed */
if (!(param_extension || acpi5))
goto inject;
/*
* We need extra sanity checks for memory errors.
* Other types leap directly to injection.
*/
if (!is_memory_injection(type, flags))
goto inject;
/*
* Disallow crazy address masks that give BIOS leeway to pick
* injection address almost anywhere. Insist on page or


@@ -16,7 +16,7 @@
static int psci_acpi_cpu_init_idle(unsigned int cpu)
{
int i, count;
int i;
struct acpi_lpi_state *lpi;
struct acpi_processor *pr = per_cpu(processors, cpu);
@@ -30,14 +30,10 @@ static int psci_acpi_cpu_init_idle(unsigned int cpu)
if (!psci_ops.cpu_suspend)
return -EOPNOTSUPP;
count = pr->power.count - 1;
if (count <= 0)
return -ENODEV;
for (i = 0; i < count; i++) {
for (i = 1; i < pr->power.count; i++) {
u32 state;
lpi = &pr->power.lpi_states[i + 1];
lpi = &pr->power.lpi_states[i];
/*
* Only bits[31:0] represent a PSCI power_state while
* bits[63:32] must be 0x0 as per ARM ACPI FFH Specification


@@ -362,7 +362,7 @@ static int send_pcc_cmd(int pcc_ss_id, u16 cmd)
end:
if (cmd == CMD_WRITE) {
if (unlikely(ret)) {
for_each_online_cpu(i) {
for_each_possible_cpu(i) {
struct cpc_desc *desc = per_cpu(cpc_desc_ptr, i);
if (!desc)
@@ -524,13 +524,13 @@ int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data)
else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
cpu_data->shared_type = CPUFREQ_SHARED_TYPE_ANY;
for_each_online_cpu(i) {
for_each_possible_cpu(i) {
if (i == cpu)
continue;
match_cpc_ptr = per_cpu(cpc_desc_ptr, i);
if (!match_cpc_ptr)
goto err_fault;
continue;
match_pdomain = &(match_cpc_ptr->domain_info);
if (match_pdomain->domain != pdomain->domain)


@@ -916,6 +916,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "82K8"),
},
},
{
.callback = video_detect_force_native,
/* HP OMEN Gaming Laptop 16-n0xxx */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "HP"),
DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16-n0xxx"),
},
},
/*
* x86 android tablets which directly control the backlight through


@@ -459,19 +459,11 @@ static void pata_parport_dev_release(struct device *dev)
kfree(pi);
}
static void pata_parport_bus_release(struct device *dev)
{
/* nothing to do here but required to avoid warning on device removal */
}
static const struct bus_type pata_parport_bus_type = {
.name = DRV_NAME,
};
static struct device pata_parport_bus = {
.init_name = DRV_NAME,
.release = pata_parport_bus_release,
};
static struct device *pata_parport_bus;
static const struct scsi_host_template pata_parport_sht = {
PATA_PARPORT_SHT("pata_parport")
@@ -518,7 +510,7 @@ static struct pi_adapter *pi_init_one(struct parport *parport,
}
/* set up pi->dev before pi_probe_unit() so it can use dev_printk() */
pi->dev.parent = &pata_parport_bus;
pi->dev.parent = pata_parport_bus;
pi->dev.bus = &pata_parport_bus_type;
pi->dev.driver = &pr->driver;
pi->dev.release = pata_parport_dev_release;
@@ -780,8 +772,9 @@ static __init int pata_parport_init(void)
return error;
}
error = device_register(&pata_parport_bus);
if (error) {
pata_parport_bus = root_device_register(DRV_NAME);
if (IS_ERR(pata_parport_bus)) {
error = PTR_ERR(pata_parport_bus);
pr_err("failed to register pata_parport bus, error: %d\n", error);
goto out_unregister_bus;
}
@@ -811,7 +804,7 @@ static __init int pata_parport_init(void)
out_remove_new:
bus_remove_file(&pata_parport_bus_type, &bus_attr_new_device);
out_unregister_dev:
device_unregister(&pata_parport_bus);
root_device_unregister(pata_parport_bus);
out_unregister_bus:
bus_unregister(&pata_parport_bus_type);
return error;
@@ -822,7 +815,7 @@ static __exit void pata_parport_exit(void)
parport_unregister_driver(&pata_parport_driver);
bus_remove_file(&pata_parport_bus_type, &bus_attr_new_device);
bus_remove_file(&pata_parport_bus_type, &bus_attr_delete_device);
device_unregister(&pata_parport_bus);
root_device_unregister(pata_parport_bus);
bus_unregister(&pata_parport_bus_type);
}


@@ -172,7 +172,7 @@ static int regmap_sdw_mbq_read(void *context, unsigned int reg, unsigned int *va
ret = regmap_sdw_mbq_read_impl(slave, reg, val, mbq_size);
if (ret == -ENODATA) {
if (!deferrable)
dev_warn(dev, "Defer on undeferable control: %x\n", reg);
dev_warn(dev, "Defer on undeferrable control: %x\n", reg);
ret = regmap_sdw_mbq_poll_busy(slave, reg, ctx);
if (ret)


@@ -631,6 +631,16 @@ int register_cdrom(struct gendisk *disk, struct cdrom_device_info *cdi)
WARN_ON(!cdo->generic_packet);
/*
* Propagate the drive's write support to the block layer so BLKROGET
* reflects actual write capability. Drivers that use GET CONFIGURATION
* features (CDC_MRW_W, CDC_RAM) must have called
* cdrom_probe_write_features() before register_cdrom() so the mask is
* complete here.
*/
set_disk_ro(disk, !CDROM_CAN(CDC_DVD_RAM | CDC_MRW_W | CDC_RAM |
CDC_CD_RW));
cd_dbg(CD_REG_UNREG, "drive \"/dev/%s\" registered\n", cdi->name);
mutex_lock(&cdrom_mutex);
list_add(&cdi->list, &cdrom_list);
@@ -742,6 +752,44 @@ static int cdrom_is_random_writable(struct cdrom_device_info *cdi, int *write)
return 0;
}
/*
* Probe write-related MMC features via GET CONFIGURATION and update
* cdi->mask accordingly. Drivers that populate cdi->mask from the MODE SENSE
* capabilities page (e.g. sr) should call this after those MODE SENSE bits
* have been set but before register_cdrom(), so that the full set of
* write-capability bits is known by the time register_cdrom() decides on the
* initial read-only state of the disk.
*/
void cdrom_probe_write_features(struct cdrom_device_info *cdi)
{
int mrw, mrw_write, ram_write;
mrw = 0;
if (!cdrom_is_mrw(cdi, &mrw_write))
mrw = 1;
if (CDROM_CAN(CDC_MO_DRIVE))
ram_write = 1;
else
(void) cdrom_is_random_writable(cdi, &ram_write);
if (mrw)
cdi->mask &= ~CDC_MRW;
else
cdi->mask |= CDC_MRW;
if (mrw_write)
cdi->mask &= ~CDC_MRW_W;
else
cdi->mask |= CDC_MRW_W;
if (ram_write)
cdi->mask &= ~CDC_RAM;
else
cdi->mask |= CDC_RAM;
}
EXPORT_SYMBOL(cdrom_probe_write_features);
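
To make the intended ordering concrete, here is a minimal sketch of a host
driver probe path. Only cdrom_probe_write_features() and register_cdrom() are
real interfaces from this change; the example_* names and the
struct example_drive type are hypothetical:

static int example_cd_probe(struct example_drive *drive)
{
	struct cdrom_device_info *cdi = &drive->cdi;

	/* 1) MODE SENSE derived capability bits go into cdi->mask first. */
	example_fill_mask_from_mode_sense(cdi);

	/* 2) Then the GET CONFIGURATION bits (CDC_MRW_W, CDC_RAM). */
	cdrom_probe_write_features(cdi);

	/*
	 * 3) register_cdrom() now sees the complete mask and picks the
	 * initial read-only state via set_disk_ro().
	 */
	return register_cdrom(drive->disk, cdi);
}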
static int cdrom_media_erasable(struct cdrom_device_info *cdi)
{
disc_information di;
@@ -894,33 +942,8 @@ static int cdrom_is_dvd_rw(struct cdrom_device_info *cdi)
*/
static int cdrom_open_write(struct cdrom_device_info *cdi)
{
int mrw, mrw_write, ram_write;
int ret = 1;
mrw = 0;
if (!cdrom_is_mrw(cdi, &mrw_write))
mrw = 1;
if (CDROM_CAN(CDC_MO_DRIVE))
ram_write = 1;
else
(void) cdrom_is_random_writable(cdi, &ram_write);
if (mrw)
cdi->mask &= ~CDC_MRW;
else
cdi->mask |= CDC_MRW;
if (mrw_write)
cdi->mask &= ~CDC_MRW_W;
else
cdi->mask |= CDC_MRW_W;
if (ram_write)
cdi->mask &= ~CDC_RAM;
else
cdi->mask |= CDC_RAM;
if (CDROM_CAN(CDC_MRW_W))
ret = cdrom_mrw_open_write(cdi);
else if (CDROM_CAN(CDC_DVD_RAM))


@@ -900,11 +900,21 @@ int dpll_pin_delete_ntf(struct dpll_pin *pin)
return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
}
/**
* __dpll_pin_change_ntf - notify that the pin has been changed
* @pin: registered pin pointer
*
* Context: caller must hold dpll_lock. Suitable for use inside pin
* callbacks which are already invoked under dpll_lock.
* Return: 0 if succeeds, error code otherwise.
*/
int __dpll_pin_change_ntf(struct dpll_pin *pin)
{
lockdep_assert_held(&dpll_lock);
dpll_pin_notify(pin, DPLL_PIN_CHANGED);
return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
}
EXPORT_SYMBOL_GPL(__dpll_pin_change_ntf);
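
A short usage sketch of the two variants (the my_* callers are hypothetical;
only the notifier functions come from the tree). dpll_pin_change_ntf() takes
dpll_lock itself, so code that the dpll core already runs under dpll_lock,
such as a pin callback, must use the new lock-held variant:

/* Plain process context, no dpll locks held. */
static void my_async_pin_update(struct dpll_pin *pin)
{
	dpll_pin_change_ntf(pin);		/* acquires dpll_lock internally */
}

/* Invoked by the dpll core with dpll_lock already held. */
static int my_pin_callback_notify(struct dpll_pin *pin)
{
	return __dpll_pin_change_ntf(pin);	/* asserts dpll_lock is held */
}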
/**
* dpll_pin_change_ntf - notify that the pin has been changed


@@ -11,5 +11,3 @@ int dpll_device_delete_ntf(struct dpll_device *dpll);
int dpll_pin_create_ntf(struct dpll_pin *pin);
int dpll_pin_delete_ntf(struct dpll_pin *pin);
int __dpll_pin_change_ntf(struct dpll_pin *pin);


@@ -2839,8 +2839,12 @@ static int amdgpu_device_ip_fini_early(struct amdgpu_device *adev)
* that checks whether the PSP is running. A solution for those issues
* in the APU is to trigger a GPU reset, but this should be done during
* the unload phase to avoid adding boot latency and screen flicker.
* GFX V11 has GC block as default off IP. Every time AMDGPU driver sends
* a request to PMFW to unload MP1, PMFW will put GC in reset and power down
* the voltage. Hence, skipping reset for APUs with GFX V11 or later.
*/
if ((adev->flags & AMD_IS_APU) && !adev->gmc.is_app_apu) {
if ((adev->flags & AMD_IS_APU) && !adev->gmc.is_app_apu &&
amdgpu_ip_version(adev, GC_HWIP, 0) < IP_VERSION(11, 0, 0)) {
r = amdgpu_asic_reset(adev);
if (r)
dev_err(adev->dev, "asic reset on %s failed\n", __func__);


@@ -3090,10 +3090,8 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(11, 5, 1):
case IP_VERSION(11, 5, 2):
case IP_VERSION(11, 5, 3):
adev->family = AMDGPU_FAMILY_GC_11_5_0;
break;
case IP_VERSION(11, 5, 4):
adev->family = AMDGPU_FAMILY_GC_11_5_4;
adev->family = AMDGPU_FAMILY_GC_11_5_0;
break;
case IP_VERSION(12, 0, 0):
case IP_VERSION(12, 0, 1):


@@ -3158,8 +3158,10 @@ static int __init amdgpu_init(void)
amdgpu_register_atpx_handler();
amdgpu_acpi_detect();
/* Ignore KFD init failures. Normal when CONFIG_HSA_AMD is not set. */
amdgpu_amdkfd_init();
/* Ignore KFD init failures when CONFIG_HSA_AMD is not set. */
r = amdgpu_amdkfd_init();
if (r && r != -ENOENT)
goto error_fence;
if (amdgpu_pp_feature_mask & PP_OVERDRIVE_MASK) {
add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);


@@ -314,7 +314,10 @@ void amdgpu_gmc_gart_location(struct amdgpu_device *adev, struct amdgpu_gmc *mc,
mc->gart_start = max_mc_address - mc->gart_size + 1;
break;
case AMDGPU_GART_PLACEMENT_LOW:
if (size_bf >= mc->gart_size)
mc->gart_start = 0;
else
mc->gart_start = ALIGN(mc->fb_end, four_gb);
break;
case AMDGPU_GART_PLACEMENT_BEST_FIT:
default:


@@ -873,68 +873,59 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
? -EFAULT : 0;
}
case AMDGPU_INFO_READ_MMR_REG: {
int ret = 0;
unsigned int n, alloc_size;
uint32_t *regs;
unsigned int se_num = (info->read_mmr_reg.instance >>
AMDGPU_INFO_MMR_SE_INDEX_SHIFT) &
AMDGPU_INFO_MMR_SE_INDEX_MASK;
unsigned int sh_num = (info->read_mmr_reg.instance >>
AMDGPU_INFO_MMR_SH_INDEX_SHIFT) &
AMDGPU_INFO_MMR_SH_INDEX_MASK;
if (!down_read_trylock(&adev->reset_domain->sem))
return -ENOENT;
unsigned int alloc_size;
uint32_t *regs;
int ret;
/* set full masks if the userspace set all bits
* in the bitfields
*/
if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK) {
if (se_num == AMDGPU_INFO_MMR_SE_INDEX_MASK)
se_num = 0xffffffff;
} else if (se_num >= AMDGPU_GFX_MAX_SE) {
ret = -EINVAL;
goto out;
}
else if (se_num >= AMDGPU_GFX_MAX_SE)
return -EINVAL;
if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK) {
if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK)
sh_num = 0xffffffff;
} else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE) {
ret = -EINVAL;
goto out;
}
else if (sh_num >= AMDGPU_GFX_MAX_SH_PER_SE)
return -EINVAL;
if (info->read_mmr_reg.count > 128) {
ret = -EINVAL;
goto out;
}
if (info->read_mmr_reg.count > 128)
return -EINVAL;
regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs), GFP_KERNEL);
if (!regs) {
ret = -ENOMEM;
goto out;
}
regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs),
GFP_KERNEL);
if (!regs)
return -ENOMEM;
down_read(&adev->reset_domain->sem);
alloc_size = info->read_mmr_reg.count * sizeof(*regs);
amdgpu_gfx_off_ctrl(adev, false);
ret = 0;
for (i = 0; i < info->read_mmr_reg.count; i++) {
if (amdgpu_asic_read_register(adev, se_num, sh_num,
info->read_mmr_reg.dword_offset + i,
&regs[i])) {
DRM_DEBUG_KMS("unallowed offset %#x\n",
info->read_mmr_reg.dword_offset + i);
kfree(regs);
amdgpu_gfx_off_ctrl(adev, true);
ret = -EFAULT;
goto out;
break;
}
}
amdgpu_gfx_off_ctrl(adev, true);
n = copy_to_user(out, regs, min(size, alloc_size));
kfree(regs);
ret = (n ? -EFAULT : 0);
out:
up_read(&adev->reset_domain->sem);
if (!ret) {
ret = copy_to_user(out, regs, min(size, alloc_size))
? -EFAULT : 0;
}
kfree(regs);
return ret;
}
case AMDGPU_INFO_DEV_INFO: {


@@ -1950,7 +1950,7 @@ void amdgpu_ras_check_bad_page_status(struct amdgpu_device *adev)
if (!control || amdgpu_bad_page_threshold == 0)
return;
if (control->ras_num_bad_pages >= ras->bad_page_cnt_threshold) {
if (control->ras_num_bad_pages > ras->bad_page_cnt_threshold) {
if (amdgpu_dpm_send_rma_reason(adev))
dev_warn(adev->dev, "Unable to send out-of-band RMA CPER");
else


@@ -75,6 +75,9 @@ static int amdgpu_ttm_init_on_chip(struct amdgpu_device *adev,
unsigned int type,
uint64_t size_in_page)
{
if (!size_in_page)
return 0;
return ttm_range_man_init(&adev->mman.bdev, type,
false, size_in_page);
}


@@ -205,6 +205,19 @@ void amdgpu_userq_start_hang_detect_work(struct amdgpu_usermode_queue *queue)
msecs_to_jiffies(timeout_ms));
}
void amdgpu_userq_process_fence_irq(struct amdgpu_device *adev, u32 doorbell)
{
struct xarray *xa = &adev->userq_doorbell_xa;
struct amdgpu_usermode_queue *queue;
unsigned long flags;
xa_lock_irqsave(xa, flags);
queue = xa_load(xa, doorbell);
if (queue)
amdgpu_userq_fence_driver_process(queue->fence_drv);
xa_unlock_irqrestore(xa, flags);
}
static void amdgpu_userq_init_hang_detect_work(struct amdgpu_usermode_queue *queue)
{
INIT_DELAYED_WORK(&queue->hang_detect_work, amdgpu_userq_hang_detect_work);
@@ -643,12 +656,6 @@ amdgpu_userq_destroy(struct amdgpu_userq_mgr *uq_mgr, struct amdgpu_usermode_que
#endif
amdgpu_userq_detect_and_reset_queues(uq_mgr);
r = amdgpu_userq_unmap_helper(queue);
/*TODO: It requires a reset for userq hw unmap error*/
if (r) {
drm_warn(adev_to_drm(uq_mgr->adev), "trying to destroy a HW mapping userq\n");
queue->state = AMDGPU_USERQ_STATE_HUNG;
}
atomic_dec(&uq_mgr->userq_count[queue->queue_type]);
amdgpu_userq_cleanup(queue);
mutex_unlock(&uq_mgr->userq_mutex);
@@ -1187,7 +1194,7 @@ amdgpu_userq_vm_validate(struct amdgpu_userq_mgr *uq_mgr)
bo = range->bo;
ret = amdgpu_ttm_tt_get_user_pages(bo, range);
if (ret)
goto unlock_all;
goto free_ranges;
}
invalidated = true;
@@ -1214,6 +1221,7 @@ amdgpu_userq_vm_validate(struct amdgpu_userq_mgr *uq_mgr)
unlock_all:
drm_exec_fini(&exec);
free_ranges:
xa_for_each(&xa, tmp_key, range) {
if (!range)
continue;


@@ -156,6 +156,7 @@ void amdgpu_userq_reset_work(struct work_struct *work);
void amdgpu_userq_pre_reset(struct amdgpu_device *adev);
int amdgpu_userq_post_reset(struct amdgpu_device *adev, bool vram_lost);
void amdgpu_userq_start_hang_detect_work(struct amdgpu_usermode_queue *queue);
void amdgpu_userq_process_fence_irq(struct amdgpu_device *adev, u32 doorbell);
int amdgpu_userq_input_va_validate(struct amdgpu_device *adev,
struct amdgpu_usermode_queue *queue,


@@ -3023,12 +3023,23 @@ bool amdgpu_vm_handle_fault(struct amdgpu_device *adev, u32 pasid,
is_compute_context = vm->is_compute_context;
if (is_compute_context && !svm_range_restore_pages(adev, pasid, vmid,
node_id, addr >> PAGE_SHIFT, ts, write_fault)) {
if (is_compute_context) {
/* Unreserve root since svm_range_restore_pages might try to reserve it. */
/* TODO: rework svm_range_restore_pages so that this isn't necessary. */
amdgpu_bo_unreserve(root);
if (!svm_range_restore_pages(adev, pasid, vmid,
node_id, addr >> PAGE_SHIFT, ts, write_fault)) {
amdgpu_bo_unref(&root);
return true;
}
amdgpu_bo_unref(&root);
/* Re-acquire the VM lock, could be that the VM was freed in between. */
vm = amdgpu_vm_lock_by_pasid(adev, &root, pasid);
if (!vm)
return false;
}
addr /= AMDGPU_GPU_PAGE_SIZE;
flags = AMDGPU_PTE_VALID | AMDGPU_PTE_SNOOPED |


@@ -6523,15 +6523,7 @@ static int gfx_v11_0_eop_irq(struct amdgpu_device *adev,
DRM_DEBUG("IH: CP EOP\n");
if (adev->enable_mes && doorbell_offset) {
struct amdgpu_usermode_queue *queue;
struct xarray *xa = &adev->userq_doorbell_xa;
unsigned long flags;
xa_lock_irqsave(xa, flags);
queue = xa_load(xa, doorbell_offset);
if (queue)
amdgpu_userq_fence_driver_process(queue->fence_drv);
xa_unlock_irqrestore(xa, flags);
amdgpu_userq_process_fence_irq(adev, doorbell_offset);
} else {
me_id = (entry->ring_id & 0x0c) >> 2;
pipe_id = (entry->ring_id & 0x03) >> 0;


@@ -4854,15 +4854,7 @@ static int gfx_v12_0_eop_irq(struct amdgpu_device *adev,
DRM_DEBUG("IH: CP EOP\n");
if (adev->enable_mes && doorbell_offset) {
struct xarray *xa = &adev->userq_doorbell_xa;
struct amdgpu_usermode_queue *queue;
unsigned long flags;
xa_lock_irqsave(xa, flags);
queue = xa_load(xa, doorbell_offset);
if (queue)
amdgpu_userq_fence_driver_process(queue->fence_drv);
xa_unlock_irqrestore(xa, flags);
amdgpu_userq_process_fence_irq(adev, doorbell_offset);
} else {
me_id = (entry->ring_id & 0x0c) >> 2;
pipe_id = (entry->ring_id & 0x03) >> 0;


@@ -3643,16 +3643,7 @@ static int gfx_v12_1_eop_irq(struct amdgpu_device *adev,
DRM_DEBUG("IH: CP EOP\n");
if (adev->enable_mes && doorbell_offset) {
struct xarray *xa = &adev->userq_doorbell_xa;
struct amdgpu_usermode_queue *queue;
unsigned long flags;
xa_lock_irqsave(xa, flags);
queue = xa_load(xa, doorbell_offset);
if (queue)
amdgpu_userq_fence_driver_process(queue->fence_drv);
xa_unlock_irqrestore(xa, flags);
amdgpu_userq_process_fence_irq(adev, doorbell_offset);
} else {
me_id = (entry->ring_id & 0x0c) >> 2;
pipe_id = (entry->ring_id & 0x03) >> 0;


@@ -1571,6 +1571,71 @@ static void gfx_v6_0_setup_spi(struct amdgpu_device *adev)
mutex_unlock(&adev->grbm_idx_mutex);
}
/**
* gfx_v6_0_setup_tcc() - setup which TCCs are used
*
* @adev: amdgpu_device pointer
*
* Verify whether the current GPU has any TCCs disabled,
* which can happen when the GPU is harvested and some
* memory channels are disabled, reducing the memory bus width.
* For example, on the Radeon HD 7870 XT (Tahiti LE).
*
* If some TCCs are disabled, we need to make sure that
* the disabled TCCs are not used, and the remaining TCCs
* are used optimally.
*
* TCP_CHAN_STEER_LO/HI control which TCC is used by TCP channels.
* TCP_ADDR_CONFIG.NUM_TCC_BANKS controls how many channels are used.
*
* For optimal performance:
* - Rely on the CHAN_STEER from the golden registers table,
* only skip disabled TCCs but keep the mapping order.
* - Limit NUM_TCC_BANKS to number of active TCCs to avoid thrashing,
* which performs better than using the same TCC twice.
*/
static void gfx_v6_0_setup_tcc(struct amdgpu_device *adev)
{
u32 i, tcc, tcp_addr_config, num_active_tcc = 0;
u64 chan_steer, patched_chan_steer = 0;
const u32 num_max_tcc = adev->gfx.config.max_texture_channel_caches;
const u32 dis_tcc_mask =
amdgpu_gfx_create_bitmask(num_max_tcc) &
(REG_GET_FIELD(RREG32(mmCGTS_TCC_DISABLE),
CGTS_TCC_DISABLE, TCC_DISABLE) |
REG_GET_FIELD(RREG32(mmCGTS_USER_TCC_DISABLE),
CGTS_USER_TCC_DISABLE, TCC_DISABLE));
/* When no TCC is disabled, the golden registers table already has optimal TCC setup */
if (!dis_tcc_mask)
return;
/* Each 4-bit nibble contains the index of a TCC used by all TCPs */
chan_steer = RREG32(mmTCP_CHAN_STEER_LO) | ((u64)RREG32(mmTCP_CHAN_STEER_HI) << 32ull);
/* Patch the TCP to TCC mapping to skip disabled TCCs */
for (i = 0; i < num_max_tcc; ++i) {
tcc = (chan_steer >> (u64)(4 * i)) & 0xf;
if (!((1 << tcc) & dis_tcc_mask)) {
/* Copy enabled TCC indices to the patched register value. */
patched_chan_steer |= (u64)tcc << (u64)(4 * num_active_tcc);
++num_active_tcc;
}
}
WARN_ON(num_active_tcc != num_max_tcc - hweight32(dis_tcc_mask));
/* Patch number of TCCs used by TCPs */
tcp_addr_config = REG_SET_FIELD(RREG32(mmTCP_ADDR_CONFIG),
TCP_ADDR_CONFIG, NUM_TCC_BANKS,
num_active_tcc - 1);
WREG32(mmTCP_ADDR_CONFIG, tcp_addr_config);
WREG32(mmTCP_CHAN_STEER_HI, upper_32_bits(patched_chan_steer));
WREG32(mmTCP_CHAN_STEER_LO, lower_32_bits(patched_chan_steer));
}
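To make the steering patch concrete, here is a standalone rendition of the loop above with illustrative values: eight TCCs, identity steering 0x76543210, and TCCs 2 and 5 disabled. The numbers are examples only, not taken from any real board.
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	const uint32_t num_max_tcc = 8;
	const uint32_t dis_tcc_mask = (1u << 2) | (1u << 5); /* TCCs 2 and 5 harvested */
	const uint64_t chan_steer = 0x76543210ull;           /* identity mapping */
	uint64_t patched = 0;
	uint32_t active = 0;
	for (uint32_t i = 0; i < num_max_tcc; ++i) {
		uint32_t tcc = (chan_steer >> (4 * i)) & 0xf;
		if (!((1u << tcc) & dis_tcc_mask)) {
			/* Pack each still-enabled TCC index into the next nibble. */
			patched |= (uint64_t)tcc << (4 * active);
			++active;
		}
	}
	/* Prints patched = 0x763410, active = 6: indices 2 and 5 are gone,
	 * order is preserved, and NUM_TCC_BANKS would be set to active - 1 = 5. */
	printf("patched = 0x%llx, active = %u\n",
	       (unsigned long long)patched, active);
	return 0;
}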
static void gfx_v6_0_config_init(struct amdgpu_device *adev)
{
adev->gfx.config.double_offchip_lds_buf = 0;
@ -1729,6 +1794,7 @@ static void gfx_v6_0_constants_init(struct amdgpu_device *adev)
gfx_v6_0_tiling_mode_table_init(adev);
gfx_v6_0_setup_rb(adev);
gfx_v6_0_setup_tcc(adev);
gfx_v6_0_setup_spi(adev);

View File

@ -802,6 +802,7 @@ static const struct amd_ip_funcs jpeg_v2_0_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v2_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v2_0_dec_ring_get_rptr,
.get_wptr = jpeg_v2_0_dec_ring_get_wptr,
.set_wptr = jpeg_v2_0_dec_ring_set_wptr,

View File

@ -693,6 +693,7 @@ static const struct amd_ip_funcs jpeg_v2_6_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v2_5_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v2_5_dec_ring_get_rptr,
.get_wptr = jpeg_v2_5_dec_ring_get_wptr,
.set_wptr = jpeg_v2_5_dec_ring_set_wptr,
@ -724,6 +725,7 @@ static const struct amdgpu_ring_funcs jpeg_v2_5_dec_ring_vm_funcs = {
static const struct amdgpu_ring_funcs jpeg_v2_6_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v2_5_dec_ring_get_rptr,
.get_wptr = jpeg_v2_5_dec_ring_get_wptr,
.set_wptr = jpeg_v2_5_dec_ring_set_wptr,

View File

@ -594,6 +594,7 @@ static const struct amd_ip_funcs jpeg_v3_0_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v3_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v3_0_dec_ring_get_rptr,
.get_wptr = jpeg_v3_0_dec_ring_get_wptr,
.set_wptr = jpeg_v3_0_dec_ring_set_wptr,

View File

@ -759,6 +759,7 @@ static const struct amd_ip_funcs jpeg_v4_0_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v4_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v4_0_dec_ring_get_rptr,
.get_wptr = jpeg_v4_0_dec_ring_get_wptr,
.set_wptr = jpeg_v4_0_dec_ring_set_wptr,

View File

@ -1219,6 +1219,7 @@ static const struct amd_ip_funcs jpeg_v4_0_3_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v4_0_3_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v4_0_3_dec_ring_get_rptr,
.get_wptr = jpeg_v4_0_3_dec_ring_get_wptr,
.set_wptr = jpeg_v4_0_3_dec_ring_set_wptr,

View File

@ -804,6 +804,7 @@ static const struct amd_ip_funcs jpeg_v4_0_5_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v4_0_5_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v4_0_5_dec_ring_get_rptr,
.get_wptr = jpeg_v4_0_5_dec_ring_get_wptr,
.set_wptr = jpeg_v4_0_5_dec_ring_set_wptr,

View File

@ -680,6 +680,7 @@ static const struct amd_ip_funcs jpeg_v5_0_0_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v5_0_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v5_0_0_dec_ring_get_rptr,
.get_wptr = jpeg_v5_0_0_dec_ring_get_wptr,
.set_wptr = jpeg_v5_0_0_dec_ring_set_wptr,

View File

@ -884,6 +884,7 @@ static const struct amd_ip_funcs jpeg_v5_0_1_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v5_0_1_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v5_0_1_dec_ring_get_rptr,
.get_wptr = jpeg_v5_0_1_dec_ring_get_wptr,
.set_wptr = jpeg_v5_0_1_dec_ring_set_wptr,

View File

@ -703,6 +703,7 @@ static const struct amd_ip_funcs jpeg_v5_0_2_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v5_0_2_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v5_0_2_dec_ring_get_rptr,
.get_wptr = jpeg_v5_0_2_dec_ring_get_wptr,
.set_wptr = jpeg_v5_0_2_dec_ring_set_wptr,

View File

@ -661,6 +661,7 @@ static const struct amd_ip_funcs jpeg_v5_3_0_ip_funcs = {
static const struct amdgpu_ring_funcs jpeg_v5_3_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_JPEG,
.align_mask = 0xf,
.no_user_fence = true,
.get_rptr = jpeg_v5_3_0_dec_ring_get_rptr,
.get_wptr = jpeg_v5_3_0_dec_ring_get_wptr,
.set_wptr = jpeg_v5_3_0_dec_ring_set_wptr,

View File

@ -1662,17 +1662,8 @@ static int sdma_v6_0_process_fence_irq(struct amdgpu_device *adev,
u32 doorbell_offset = entry->src_data[0];
if (adev->enable_mes && doorbell_offset) {
struct amdgpu_usermode_queue *queue;
struct xarray *xa = &adev->userq_doorbell_xa;
unsigned long flags;
doorbell_offset >>= SDMA0_QUEUE0_DOORBELL_OFFSET__OFFSET__SHIFT;
xa_lock_irqsave(xa, flags);
queue = xa_load(xa, doorbell_offset);
if (queue)
amdgpu_userq_fence_driver_process(queue->fence_drv);
xa_unlock_irqrestore(xa, flags);
amdgpu_userq_process_fence_irq(adev, doorbell_offset);
}
return 0;

View File

@ -1594,17 +1594,8 @@ static int sdma_v7_0_process_fence_irq(struct amdgpu_device *adev,
u32 doorbell_offset = entry->src_data[0];
if (adev->enable_mes && doorbell_offset) {
struct xarray *xa = &adev->userq_doorbell_xa;
struct amdgpu_usermode_queue *queue;
unsigned long flags;
doorbell_offset >>= SDMA0_QUEUE0_DOORBELL_OFFSET__OFFSET__SHIFT;
xa_lock_irqsave(xa, flags);
queue = xa_load(xa, doorbell_offset);
if (queue)
amdgpu_userq_fence_driver_process(queue->fence_drv);
xa_unlock_irqrestore(xa, flags);
amdgpu_userq_process_fence_irq(adev, doorbell_offset);
}
return 0;

View File

@ -242,6 +242,10 @@ static void uvd_v3_1_mc_resume(struct amdgpu_device *adev)
uint64_t addr;
uint32_t size;
/* When the keyselect is already set, don't perturb it. */
if (RREG32(mmUVD_FW_START))
return;
/* program the VCPU memory controller bits 0-27 */
addr = (adev->uvd.inst->gpu_addr + AMDGPU_UVD_FIRMWARE_OFFSET) >> 3;
size = AMDGPU_UVD_FIRMWARE_SIZE(adev) >> 3;
@ -284,6 +288,12 @@ static int uvd_v3_1_fw_validate(struct amdgpu_device *adev)
int i;
uint32_t keysel = adev->uvd.keyselect;
if (RREG32(mmUVD_FW_START) & UVD_FW_STATUS__PASS_MASK) {
dev_dbg(adev->dev, "UVD keyselect already set: 0x%x (on CPU: 0x%x)\n",
RREG32(mmUVD_FW_START), adev->uvd.keyselect);
return 0;
}
WREG32(mmUVD_FW_START, keysel);
for (i = 0; i < 10; ++i) {

View File

@ -2113,6 +2113,7 @@ static const struct amd_ip_funcs vcn_v2_0_ip_funcs = {
static const struct amdgpu_ring_funcs vcn_v2_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_DEC,
.align_mask = 0xf,
.no_user_fence = true,
.secure_submission_supported = true,
.get_rptr = vcn_v2_0_dec_ring_get_rptr,
.get_wptr = vcn_v2_0_dec_ring_get_wptr,
@ -2145,6 +2146,7 @@ static const struct amdgpu_ring_funcs vcn_v2_0_enc_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v2_0_enc_ring_get_rptr,
.get_wptr = vcn_v2_0_enc_ring_get_wptr,
.set_wptr = vcn_v2_0_enc_ring_set_wptr,

View File

@ -1778,6 +1778,7 @@ static void vcn_v2_5_dec_ring_set_wptr(struct amdgpu_ring *ring)
static const struct amdgpu_ring_funcs vcn_v2_5_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_DEC,
.align_mask = 0xf,
.no_user_fence = true,
.secure_submission_supported = true,
.get_rptr = vcn_v2_5_dec_ring_get_rptr,
.get_wptr = vcn_v2_5_dec_ring_get_wptr,
@ -1879,6 +1880,7 @@ static const struct amdgpu_ring_funcs vcn_v2_5_enc_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v2_5_enc_ring_get_rptr,
.get_wptr = vcn_v2_5_enc_ring_get_wptr,
.set_wptr = vcn_v2_5_enc_ring_set_wptr,

View File

@ -1856,6 +1856,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_dec_sw_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_DEC,
.align_mask = 0x3f,
.nop = VCN_DEC_SW_CMD_NO_OP,
.no_user_fence = true,
.secure_submission_supported = true,
.get_rptr = vcn_v3_0_dec_ring_get_rptr,
.get_wptr = vcn_v3_0_dec_ring_get_wptr,
@ -1972,6 +1973,7 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
for (i = 0, msg = &msg[6]; i < num_buffers; ++i, msg += 4) {
uint32_t offset, size, *create;
uint64_t buf_end;
if (msg[0] != RDECODE_MESSAGE_CREATE)
continue;
@ -1979,7 +1981,8 @@ static int vcn_v3_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
offset = msg[1];
size = msg[2];
if (size < 4 || offset + size > end - addr) {
if (size < 4 || check_add_overflow(offset, size, &buf_end) ||
buf_end > end - addr) {
DRM_ERROR("VCN message buffer exceeds BO bounds!\n");
r = -EINVAL;
goto out;
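The rewritten bounds check addresses a 32-bit wrap: offset and size are u32, so offset + size could overflow and slip past the end - addr comparison. A standalone illustration of both forms, using the GCC/Clang builtin that the kernel's check_add_overflow() wraps:
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint32_t offset = 0xfffffff0u, size = 0x20u;
	uint64_t span = 0x1000;   /* models "end - addr" */
	uint64_t buf_end;
	/* Old form: the u32 sum wraps to 0x10, which passes "> span". */
	if ((uint32_t)(offset + size) > span)
		puts("old check: rejected");
	else
		puts("old check: accepted (overflow slipped through)");
	/* New form: add into a 64-bit result so the sum cannot wrap,
	 * then compare; matches the check_add_overflow() pattern above. */
	if (__builtin_add_overflow(offset, size, &buf_end) || buf_end > span)
		puts("new check: rejected");
	return 0;
}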
@ -2036,6 +2039,7 @@ static int vcn_v3_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
static const struct amdgpu_ring_funcs vcn_v3_0_dec_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_DEC,
.align_mask = 0xf,
.no_user_fence = true,
.secure_submission_supported = true,
.get_rptr = vcn_v3_0_dec_ring_get_rptr,
.get_wptr = vcn_v3_0_dec_ring_get_wptr,
@ -2138,6 +2142,7 @@ static const struct amdgpu_ring_funcs vcn_v3_0_enc_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v3_0_enc_ring_get_rptr,
.get_wptr = vcn_v3_0_enc_ring_get_wptr,
.set_wptr = vcn_v3_0_enc_ring_set_wptr,

View File

@ -1889,6 +1889,7 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
for (i = 0, msg = &msg[6]; i < num_buffers; ++i, msg += 4) {
uint32_t offset, size, *create;
uint64_t buf_end;
if (msg[0] != RDECODE_MESSAGE_CREATE)
continue;
@ -1896,7 +1897,8 @@ static int vcn_v4_0_dec_msg(struct amdgpu_cs_parser *p, struct amdgpu_job *job,
offset = msg[1];
size = msg[2];
if (size < 4 || offset + size > end - addr) {
if (size < 4 || check_add_overflow(offset, size, &buf_end) ||
buf_end > end - addr) {
DRM_ERROR("VCN message buffer exceeds BO bounds!\n");
r = -EINVAL;
goto out;
@ -1994,6 +1996,7 @@ static struct amdgpu_ring_funcs vcn_v4_0_unified_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.extra_bytes = sizeof(struct amdgpu_vcn_rb_metadata),
.get_rptr = vcn_v4_0_unified_ring_get_rptr,
.get_wptr = vcn_v4_0_unified_ring_get_wptr,

View File

@ -1775,6 +1775,7 @@ static const struct amdgpu_ring_funcs vcn_v4_0_3_unified_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v4_0_3_unified_ring_get_rptr,
.get_wptr = vcn_v4_0_3_unified_ring_get_wptr,
.set_wptr = vcn_v4_0_3_unified_ring_set_wptr,

View File

@ -1483,6 +1483,7 @@ static struct amdgpu_ring_funcs vcn_v4_0_5_unified_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v4_0_5_unified_ring_get_rptr,
.get_wptr = vcn_v4_0_5_unified_ring_get_wptr,
.set_wptr = vcn_v4_0_5_unified_ring_set_wptr,

View File

@ -1207,6 +1207,7 @@ static const struct amdgpu_ring_funcs vcn_v5_0_0_unified_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v5_0_0_unified_ring_get_rptr,
.get_wptr = vcn_v5_0_0_unified_ring_get_wptr,
.set_wptr = vcn_v5_0_0_unified_ring_set_wptr,

View File

@ -1419,6 +1419,7 @@ static const struct amdgpu_ring_funcs vcn_v5_0_1_unified_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v5_0_1_unified_ring_get_rptr,
.get_wptr = vcn_v5_0_1_unified_ring_get_wptr,
.set_wptr = vcn_v5_0_1_unified_ring_set_wptr,

View File

@ -994,6 +994,7 @@ static const struct amdgpu_ring_funcs vcn_v5_0_2_unified_ring_vm_funcs = {
.type = AMDGPU_RING_TYPE_VCN_ENC,
.align_mask = 0x3f,
.nop = VCN_ENC_CMD_NO_OP,
.no_user_fence = true,
.get_rptr = vcn_v5_0_2_unified_ring_get_rptr,
.get_wptr = vcn_v5_0_2_unified_ring_get_wptr,
.set_wptr = vcn_v5_0_2_unified_ring_set_wptr,

View File

@ -25,6 +25,7 @@
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/overflow.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
@ -1695,6 +1696,16 @@ static int kfd_ioctl_smi_events(struct file *filep,
return kfd_smi_event_open(pdd->dev, &args->anon_fd);
}
static int kfd_ioctl_svm_validate(void *kdata, unsigned int usize)
{
struct kfd_ioctl_svm_args *args = kdata;
size_t expected = struct_size(args, attrs, args->nattr);
if (expected == SIZE_MAX || usize < expected)
return -EINVAL;
return 0;
}
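The new validate hook leans on struct_size() computing sizeof(*args) plus nattr trailing attrs entries, saturating to SIZE_MAX on overflow; that is why the single == SIZE_MAX test doubles as an overflow check. A userspace sketch of the same computation with a stand-in struct (hypothetical layout, not the real kfd_ioctl_svm_args):
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
struct svm_args_example {     /* stand-in for struct kfd_ioctl_svm_args */
	uint64_t start_addr;
	uint64_t size;
	uint32_t op;
	uint32_t nattr;
	uint32_t attrs[];     /* flexible array, sized by nattr */
};
/* Saturating equivalent of the kernel's struct_size(args, attrs, nattr). */
static size_t struct_size_example(size_t nattr)
{
	size_t bytes;
	if (__builtin_mul_overflow(nattr, sizeof(uint32_t), &bytes) ||
	    __builtin_add_overflow(bytes, sizeof(struct svm_args_example), &bytes))
		return SIZE_MAX;
	return bytes;
}
int main(void)
{
	printf("%zu\n", struct_size_example(4));        /* header + 4 entries */
	printf("%zu\n", struct_size_example(SIZE_MAX)); /* saturates to SIZE_MAX */
	return 0;
}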
#if IS_ENABLED(CONFIG_HSA_AMD_SVM)
static int kfd_ioctl_set_xnack_mode(struct file *filep,
@ -3209,7 +3220,11 @@ static int kfd_ioctl_create_process(struct file *filep, struct kfd_process *p, v
#define AMDKFD_IOCTL_DEF(ioctl, _func, _flags) \
[_IOC_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, \
.cmd_drv = 0, .name = #ioctl}
.validate = NULL, .cmd_drv = 0, .name = #ioctl}
#define AMDKFD_IOCTL_DEF_V(ioctl, _func, _validate, _flags) \
[_IOC_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, \
.validate = _validate, .cmd_drv = 0, .name = #ioctl}
/** Ioctl table */
static const struct amdkfd_ioctl_desc amdkfd_ioctls[] = {
@ -3306,7 +3321,8 @@ static const struct amdkfd_ioctl_desc amdkfd_ioctls[] = {
AMDKFD_IOCTL_DEF(AMDKFD_IOC_SMI_EVENTS,
kfd_ioctl_smi_events, 0),
AMDKFD_IOCTL_DEF(AMDKFD_IOC_SVM, kfd_ioctl_svm, 0),
AMDKFD_IOCTL_DEF_V(AMDKFD_IOC_SVM, kfd_ioctl_svm,
kfd_ioctl_svm_validate, 0),
AMDKFD_IOCTL_DEF(AMDKFD_IOC_SET_XNACK_MODE,
kfd_ioctl_set_xnack_mode, 0),
@ -3431,6 +3447,12 @@ static long kfd_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
memset(kdata, 0, usize);
}
if (ioctl->validate) {
retcode = ioctl->validate(kdata, usize);
if (retcode)
goto err_i1;
}
retcode = func(filep, process, kdata);
if (cmd & IOC_OUT)

View File

@ -1047,10 +1047,13 @@ extern struct srcu_struct kfd_processes_srcu;
typedef int amdkfd_ioctl_t(struct file *filep, struct kfd_process *p,
void *data);
typedef int amdkfd_ioctl_validate_t(void *kdata, unsigned int usize);
struct amdkfd_ioctl_desc {
unsigned int cmd;
int flags;
amdkfd_ioctl_t *func;
amdkfd_ioctl_validate_t *validate;
unsigned int cmd_drv;
const char *name;
};

View File

@ -1366,6 +1366,12 @@ svm_range_unmap_from_gpu(struct amdgpu_device *adev, struct amdgpu_vm *vm,
pr_debug("CPU[0x%llx 0x%llx] -> GPU[0x%llx 0x%llx]\n", start, last,
gpu_start, gpu_end);
if (!amdgpu_vm_ready(vm)) {
pr_debug("VM not ready, canceling unmap\n");
return -EINVAL;
}
return amdgpu_vm_update_range(adev, vm, false, true, true, false, NULL, gpu_start,
gpu_end, init_pte_value, 0, 0, NULL, NULL,
fence);
@ -1443,6 +1449,11 @@ svm_range_map_to_gpu(struct kfd_process_device *pdd, struct svm_range *prange,
pr_debug("svms 0x%p [0x%lx 0x%lx] readonly %d\n", prange->svms,
last_start, last_start + npages - 1, readonly);
if (!amdgpu_vm_ready(vm)) {
pr_debug("VM not ready, canceling map\n");
return -EINVAL;
}
for (i = offset; i < offset + npages; i++) {
uint64_t gpu_start;
uint64_t gpu_end;

View File

@ -1903,6 +1903,10 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error;
}
/* special handling for early revisions of GC 11.5.4 */
if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(11, 5, 4))
init_data.asic_id.chip_family = AMDGPU_FAMILY_GC_11_5_4;
else
init_data.asic_id.chip_family = adev->family;
init_data.asic_id.pci_revision_id = adev->pdev->revision;
@ -9404,9 +9408,21 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
if (acrtc_state) {
timing = &acrtc_state->stream->timing;
if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
IP_VERSION(3, 5, 0) ||
if (amdgpu_ip_version(adev, DCE_HWIP, 0) >=
IP_VERSION(3, 2, 0) &&
!(adev->flags & AMD_IS_APU)) {
/*
* DGPUs NV3x and newer that support idle optimizations
* experience intermittent flip-done timeouts on cursor
* updates. Restore 5s offdelay behavior for now.
*
* Discussion on the issue:
* https://lore.kernel.org/amd-gfx/20260217191632.1243826-1-sysdadmin@m1k.cloud/
*/
config.offdelay_ms = 5000;
config.disable_immediate = false;
} else if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
IP_VERSION(3, 5, 0)) {
/*
* Older HW and DGPU have issues with instant off;
* use a 2-frame offdelay.

View File

@ -1032,6 +1032,45 @@ dm_helpers_read_acpi_edid(struct amdgpu_dm_connector *aconnector)
return drm_edid_read_custom(connector, dm_helpers_probe_acpi_edid, connector);
}
static const struct drm_edid *
dm_helpers_read_vbios_hardcoded_edid(struct dc_link *link, struct amdgpu_dm_connector *aconnector)
{
struct dc_bios *bios = link->ctx->dc_bios;
struct embedded_panel_info info;
const struct drm_edid *edid;
enum bp_result r;
if (!dc_is_embedded_signal(link->connector_signal) ||
!bios->funcs->get_embedded_panel_info)
return NULL;
memset(&info, 0, sizeof(info));
r = bios->funcs->get_embedded_panel_info(bios, &info);
if (r != BP_RESULT_OK) {
dm_error("Error when reading embedded panel info: %u\n", r);
return NULL;
}
if (!info.fake_edid || !info.fake_edid_size) {
dm_error("Embedded panel info doesn't contain an EDID\n");
return NULL;
}
edid = drm_edid_alloc(info.fake_edid, info.fake_edid_size);
if (!drm_edid_valid(edid)) {
dm_error("EDID from embedded panel info is invalid\n");
drm_edid_free(edid);
return NULL;
}
aconnector->base.display_info.width_mm = info.panel_width_mm;
aconnector->base.display_info.height_mm = info.panel_height_mm;
return edid;
}
void populate_hdmi_info_from_connector(struct drm_hdmi_info *hdmi, struct dc_edid_caps *edid_caps)
{
edid_caps->scdc_present = hdmi->scdc.supported;
@ -1052,6 +1091,9 @@ enum dc_edid_status dm_helpers_read_local_edid(
if (link->aux_mode)
ddc = &aconnector->dm_dp_aux.aux.ddc;
else if (link->ddc_hw_inst == GPIO_DDC_LINE_UNKNOWN &&
dc_is_embedded_signal(link->connector_signal))
ddc = NULL;
else
ddc = &aconnector->i2c->base;
@ -1065,6 +1107,8 @@ enum dc_edid_status dm_helpers_read_local_edid(
drm_edid = dm_helpers_read_acpi_edid(aconnector);
if (drm_edid)
drm_info(connector->dev, "Using ACPI provided EDID for %s\n", connector->name);
else if (!ddc)
drm_edid = dm_helpers_read_vbios_hardcoded_edid(link, aconnector);
else
drm_edid = drm_edid_read_ddc(connector, ddc);
drm_edid_connector_update(connector, drm_edid);

View File

@ -794,11 +794,13 @@ static enum bp_result bios_parser_external_encoder_control(
static enum bp_result bios_parser_dac_load_detection(
struct dc_bios *dcb,
enum engine_id engine_id)
enum engine_id engine_id,
struct graphics_object_id ext_enc_id)
{
struct bios_parser *bp = BP_FROM_DCB(dcb);
struct dc_context *ctx = dcb->ctx;
struct bp_load_detection_parameters bp_params = {0};
struct bp_external_encoder_control ext_cntl = {0};
enum bp_result bp_result = BP_RESULT_UNSUPPORTED;
uint32_t bios_0_scratch;
uint32_t device_id_mask = 0;
@ -824,6 +826,13 @@ static enum bp_result bios_parser_dac_load_detection(
bp_params.engine_id = engine_id;
bp_result = bp->cmd_tbl.dac_load_detection(bp, &bp_params);
} else if (ext_enc_id.id) {
if (!bp->cmd_tbl.external_encoder_control)
return BP_RESULT_UNSUPPORTED;
ext_cntl.action = EXTERNAL_ENCODER_CONTROL_DAC_LOAD_DETECT;
ext_cntl.encoder_id = ext_enc_id;
bp_result = bp->cmd_tbl.external_encoder_control(bp, &ext_cntl);
}
if (bp_result != BP_RESULT_OK)
@ -1304,6 +1313,60 @@ static enum bp_result bios_parser_get_embedded_panel_info(
return BP_RESULT_FAILURE;
}
static enum bp_result get_embedded_panel_extra_info(
struct bios_parser *bp,
struct embedded_panel_info *info,
const uint32_t table_offset)
{
uint8_t *record = bios_get_image(&bp->base, table_offset, 1);
ATOM_PANEL_RESOLUTION_PATCH_RECORD *panel_res_record;
ATOM_FAKE_EDID_PATCH_RECORD *fake_edid_record;
while (*record != ATOM_RECORD_END_TYPE) {
switch (*record) {
case LCD_MODE_PATCH_RECORD_MODE_TYPE:
record += sizeof(ATOM_PATCH_RECORD_MODE);
break;
case LCD_RTS_RECORD_TYPE:
record += sizeof(ATOM_LCD_RTS_RECORD);
break;
case LCD_CAP_RECORD_TYPE:
record += sizeof(ATOM_LCD_MODE_CONTROL_CAP);
break;
case LCD_FAKE_EDID_PATCH_RECORD_TYPE:
fake_edid_record = (ATOM_FAKE_EDID_PATCH_RECORD *)record;
if (fake_edid_record->ucFakeEDIDLength) {
if (fake_edid_record->ucFakeEDIDLength == 128)
info->fake_edid_size =
fake_edid_record->ucFakeEDIDLength;
else
info->fake_edid_size =
fake_edid_record->ucFakeEDIDLength * 128;
info->fake_edid = fake_edid_record->ucFakeEDIDString;
record += struct_size(fake_edid_record,
ucFakeEDIDString,
info->fake_edid_size);
} else {
/* empty fake edid record must be 3 bytes long */
record += sizeof(ATOM_FAKE_EDID_PATCH_RECORD) + 1;
}
break;
case LCD_PANEL_RESOLUTION_RECORD_TYPE:
panel_res_record = (ATOM_PANEL_RESOLUTION_PATCH_RECORD *)record;
info->panel_width_mm = panel_res_record->usHSize;
info->panel_height_mm = panel_res_record->usVSize;
record += sizeof(ATOM_PANEL_RESOLUTION_PATCH_RECORD);
break;
default:
return BP_RESULT_BADBIOSTABLE;
}
}
return BP_RESULT_OK;
}
static enum bp_result get_embedded_panel_info_v1_2(
struct bios_parser *bp,
struct embedded_panel_info *info)
@ -1420,6 +1483,10 @@ static enum bp_result get_embedded_panel_info_v1_2(
if (ATOM_PANEL_MISC_API_ENABLED & lvds->ucLVDS_Misc)
info->lcd_timing.misc_info.API_ENABLED = true;
if (lvds->usExtInfoTableOffset)
return get_embedded_panel_extra_info(bp, info,
le16_to_cpu(lvds->usExtInfoTableOffset) + DATA_TABLES(LCD_Info));
return BP_RESULT_OK;
}
@ -1545,6 +1612,10 @@ static enum bp_result get_embedded_panel_info_v1_3(
(uint32_t) (ATOM_PANEL_MISC_V13_GREY_LEVEL &
lvds->ucLCD_Misc) >> ATOM_PANEL_MISC_V13_GREY_LEVEL_SHIFT;
if (lvds->usExtInfoTableOffset)
return get_embedded_panel_extra_info(bp, info,
le16_to_cpu(lvds->usExtInfoTableOffset) + DATA_TABLES(LCD_Info));
return BP_RESULT_OK;
}

View File

@ -1682,7 +1682,7 @@ struct dc_scratch_space {
struct dc_link_training_overrides preferred_training_settings;
struct dp_audio_test_data audio_test_data;
uint8_t ddc_hw_inst;
enum gpio_ddc_line ddc_hw_inst;
uint8_t hpd_src;

View File

@ -102,7 +102,8 @@ struct dc_vbios_funcs {
struct bp_external_encoder_control *cntl);
enum bp_result (*dac_load_detection)(
struct dc_bios *bios,
enum engine_id engine_id);
enum engine_id engine_id,
struct graphics_object_id ext_enc_id);
enum bp_result (*transmitter_control)(
struct dc_bios *bios,
struct bp_transmitter_control *cntl);

View File

@ -1102,6 +1102,8 @@ void dce110_link_encoder_hw_init(
ASSERT(result == BP_RESULT_OK);
}
if (enc110->aux_regs)
aux_initialize(enc110);
/* reinitialize HPD.

View File

@ -40,8 +40,8 @@
#define FN(reg_name, field_name) \
mcif_wb30->mcif_wb_shift->field_name, mcif_wb30->mcif_wb_mask->field_name
#define MCIF_ADDR(addr) (((unsigned long long)addr & 0xffffffffff) + 0xFE) >> 8
#define MCIF_ADDR_HIGH(addr) (unsigned long long)addr >> 40
#define MCIF_ADDR(addr) ((uint32_t)((((unsigned long long)(addr) & 0xffffffffffULL) + 0xFEULL) >> 8))
#define MCIF_ADDR_HIGH(addr) ((uint32_t)(((unsigned long long)(addr)) >> 40))
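The old macro bodies were not fully parenthesized: MCIF_ADDR ends in a bare >> 8, so any neighbouring operator that binds tighter than >> (such as +) silently folds into the shift count, and both macros produced 64-bit values that were truncated at each use site. The fixed forms parenthesize everything and truncate explicitly. A compilable illustration with copies of both variants (demonstration only, not the driver header):
#include <stdio.h>
#define MCIF_ADDR_OLD(addr) (((unsigned long long)addr & 0xffffffffff) + 0xFE) >> 8
#define MCIF_ADDR_NEW(addr) ((unsigned int)((((unsigned long long)(addr) & 0xffffffffffULL) + 0xFEULL) >> 8))
int main(void)
{
	unsigned long long base = 0x100000ull;
	/* "+ 1" binds tighter than ">>", so the old macro shifts by 9 here. */
	printf("old: 0x%llx\n", MCIF_ADDR_OLD(base) + 1); /* 0x800, wrong */
	printf("new: 0x%x\n", MCIF_ADDR_NEW(base) + 1);   /* 0x1001, expected */
	return 0;
}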
/* wbif programming guide:
* 1. set up wbif parameter:

View File

@ -646,6 +646,9 @@ enum gpio_result dal_ddc_change_mode(
enum gpio_ddc_line dal_ddc_get_line(
const struct ddc *ddc)
{
if (!ddc)
return GPIO_DDC_LINE_UNKNOWN;
return (enum gpio_ddc_line)dal_gpio_get_enum(ddc->pin_data);
}

View File

@ -665,16 +665,45 @@ void dce110_update_info_frame(struct pipe_ctx *pipe_ctx)
}
static void
dce110_dac_encoder_control(struct pipe_ctx *pipe_ctx, bool enable)
dce110_external_encoder_control(enum bp_external_encoder_control_action action,
struct dc_link *link,
struct dc_crtc_timing *timing)
{
struct dc_link *link = pipe_ctx->stream->link;
struct dc *dc = link->ctx->dc;
struct dc_bios *bios = link->ctx->dc_bios;
struct bp_encoder_control encoder_control = {0};
const struct dc_link_settings *link_settings = &link->cur_link_settings;
enum bp_result bp_result = BP_RESULT_OK;
struct bp_external_encoder_control ext_cntl = {
.action = action,
.connector_obj_id = link->link_enc->connector,
.encoder_id = link->ext_enc_id,
.lanes_number = link_settings->lane_count,
.link_rate = link_settings->link_rate,
encoder_control.action = enable ? ENCODER_CONTROL_ENABLE : ENCODER_CONTROL_DISABLE;
encoder_control.engine_id = link->link_enc->analog_engine;
encoder_control.pixel_clock = pipe_ctx->stream->timing.pix_clk_100hz / 10;
bios->funcs->encoder_control(bios, &encoder_control);
/* Use signal type of the real link encoder, ie. DP */
.signal = link->connector_signal,
/* We don't know the timing yet when executing the SETUP action,
* so use a reasonably high default value. It seems that ENABLE
* can change the actual pixel clock but doesn't work with higher
* pixel clocks than what SETUP was called with.
*/
.pixel_clock = timing ? timing->pix_clk_100hz / 10 : 300000,
.color_depth = timing ? timing->display_color_depth : COLOR_DEPTH_888,
};
DC_LOGGER_INIT(dc->ctx);
bp_result = bios->funcs->external_encoder_control(bios, &ext_cntl);
if (bp_result != BP_RESULT_OK)
DC_LOG_ERROR("Failed to execute external encoder action: 0x%x\n", action);
}
static void
dce110_prepare_ddc(struct dc_link *link)
{
if (link->ext_enc_id.id)
dce110_external_encoder_control(EXTERNAL_ENCODER_CONTROL_DDC_SETUP, link, NULL);
}
static bool
@ -684,7 +713,8 @@ dce110_dac_load_detect(struct dc_link *link)
struct link_encoder *link_enc = link->link_enc;
enum bp_result bp_result;
bp_result = bios->funcs->dac_load_detection(bios, link_enc->analog_engine);
bp_result = bios->funcs->dac_load_detection(
bios, link_enc->analog_engine, link->ext_enc_id);
return bp_result == BP_RESULT_OK;
}
@ -700,7 +730,6 @@ void dce110_enable_stream(struct pipe_ctx *pipe_ctx)
uint32_t early_control = 0;
struct timing_generator *tg = pipe_ctx->stream_res.tg;
link_hwss->setup_stream_attribute(pipe_ctx);
link_hwss->setup_stream_encoder(pipe_ctx);
dc->hwss.update_info_frame(pipe_ctx);
@ -719,8 +748,8 @@ void dce110_enable_stream(struct pipe_ctx *pipe_ctx)
tg->funcs->set_early_control(tg, early_control);
if (dc_is_rgb_signal(pipe_ctx->stream->signal))
dce110_dac_encoder_control(pipe_ctx, true);
if (link->ext_enc_id.id)
dce110_external_encoder_control(EXTERNAL_ENCODER_CONTROL_ENABLE, link, timing);
}
static enum bp_result link_transmitter_control(
@ -1219,8 +1248,8 @@ void dce110_disable_stream(struct pipe_ctx *pipe_ctx)
link_enc->transmitter - TRANSMITTER_UNIPHY_A);
}
if (dc_is_rgb_signal(pipe_ctx->stream->signal))
dce110_dac_encoder_control(pipe_ctx, false);
if (link->ext_enc_id.id)
dce110_external_encoder_control(EXTERNAL_ENCODER_CONTROL_DISABLE, link, NULL);
}
void dce110_unblank_stream(struct pipe_ctx *pipe_ctx,
@ -1603,22 +1632,6 @@ static enum dc_status dce110_enable_stream_timing(
return DC_OK;
}
static void
dce110_select_crtc_source(struct pipe_ctx *pipe_ctx)
{
struct dc_link *link = pipe_ctx->stream->link;
struct dc_bios *bios = link->ctx->dc_bios;
struct bp_crtc_source_select crtc_source_select = {0};
enum engine_id engine_id = link->link_enc->preferred_engine;
if (dc_is_rgb_signal(pipe_ctx->stream->signal))
engine_id = link->link_enc->analog_engine;
crtc_source_select.controller_id = CONTROLLER_ID_D0 + pipe_ctx->stream_res.tg->inst;
crtc_source_select.color_depth = pipe_ctx->stream->timing.display_color_depth;
crtc_source_select.engine_id = engine_id;
crtc_source_select.sink_signal = pipe_ctx->stream->signal;
bios->funcs->select_crtc_source(bios, &crtc_source_select);
}
enum dc_status dce110_apply_single_controller_ctx_to_hw(
struct pipe_ctx *pipe_ctx,
@ -1639,10 +1652,6 @@ enum dc_status dce110_apply_single_controller_ctx_to_hw(
hws->funcs.disable_stream_gating(dc, pipe_ctx);
}
if (pipe_ctx->stream->signal == SIGNAL_TYPE_RGB) {
dce110_select_crtc_source(pipe_ctx);
}
if (pipe_ctx->stream_res.audio != NULL) {
struct audio_output audio_output = {0};
@ -1722,8 +1731,7 @@ enum dc_status dce110_apply_single_controller_ctx_to_hw(
pipe_ctx->stream_res.tg->funcs->set_static_screen_control(
pipe_ctx->stream_res.tg, event_triggers, 2);
if (!dc_is_virtual_signal(pipe_ctx->stream->signal) &&
!dc_is_rgb_signal(pipe_ctx->stream->signal))
if (!dc_is_virtual_signal(pipe_ctx->stream->signal))
pipe_ctx->stream_res.stream_enc->funcs->dig_connect_to_otg(
pipe_ctx->stream_res.stream_enc,
pipe_ctx->stream_res.tg->inst);
@ -3376,6 +3384,15 @@ void dce110_enable_tmds_link_output(struct dc_link *link,
link->phy_state.symclk_state = SYMCLK_ON_TX_ON;
}
static void dce110_enable_analog_link_output(
struct dc_link *link,
uint32_t pix_clk_100hz)
{
link->link_enc->funcs->enable_analog_output(
link->link_enc,
pix_clk_100hz);
}
void dce110_enable_dp_link_output(
struct dc_link *link,
const struct link_resource *link_res,
@ -3423,6 +3440,11 @@ void dce110_enable_dp_link_output(
}
}
if (link->ext_enc_id.id) {
dce110_external_encoder_control(EXTERNAL_ENCODER_CONTROL_INIT, link, NULL);
dce110_external_encoder_control(EXTERNAL_ENCODER_CONTROL_SETUP, link, NULL);
}
if (dc->link_srv->dp_get_encoding_format(link_settings) == DP_8b_10b_ENCODING) {
if (dc->clk_mgr->funcs->notify_link_rate_change)
dc->clk_mgr->funcs->notify_link_rate_change(dc->clk_mgr, link);
@ -3513,8 +3535,10 @@ static const struct hw_sequencer_funcs dce110_funcs = {
.enable_lvds_link_output = dce110_enable_lvds_link_output,
.enable_tmds_link_output = dce110_enable_tmds_link_output,
.enable_dp_link_output = dce110_enable_dp_link_output,
.enable_analog_link_output = dce110_enable_analog_link_output,
.disable_link_output = dce110_disable_link_output,
.dac_load_detect = dce110_dac_load_detect,
.prepare_ddc = dce110_prepare_ddc,
};
static const struct hwseq_private_funcs dce110_private_funcs = {

View File

@ -568,7 +568,9 @@ static bool construct_phy(struct dc_link *link,
goto ddc_create_fail;
}
if (!link->ddc->ddc_pin) {
/* Embedded display connectors such as LVDS may not have DDC. */
if (!link->ddc->ddc_pin &&
!dc_is_embedded_signal(link->connector_signal)) {
DC_ERROR("Failed to get I2C info for connector!\n");
goto ddc_create_fail;
}

View File

@ -753,7 +753,8 @@ static struct link_encoder *dce60_link_encoder_create(
enc_init_data,
&link_enc_feature,
&link_enc_regs[link_regs_id],
&link_enc_aux_regs[enc_init_data->channel - 1],
enc_init_data->channel == CHANNEL_ID_UNKNOWN ?
NULL : &link_enc_aux_regs[enc_init_data->channel - 1],
enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
return &enc110->base;

View File

@ -760,7 +760,8 @@ static struct link_encoder *dce80_link_encoder_create(
enc_init_data,
&link_enc_feature,
&link_enc_regs[link_regs_id],
&link_enc_aux_regs[enc_init_data->channel - 1],
enc_init_data->channel == CHANNEL_ID_UNKNOWN ?
NULL : &link_enc_aux_regs[enc_init_data->channel - 1],
enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs) ?
NULL : &link_enc_hpd_regs[enc_init_data->hpd_source]);
return &enc110->base;

View File

@ -153,6 +153,10 @@ struct embedded_panel_info {
uint32_t drr_enabled;
uint32_t min_drr_refresh_rate;
bool realtek_eDPToLVDS;
uint16_t panel_width_mm;
uint16_t panel_height_mm;
uint16_t fake_edid_size;
const uint8_t *fake_edid;
};
struct dc_firmware_info {

View File

@ -425,6 +425,7 @@ static int aldebaran_set_default_dpm_table(struct smu_context *smu)
dpm_table->dpm_levels[0].enabled = true;
dpm_table->dpm_levels[1].value = pptable->GfxclkFmax;
dpm_table->dpm_levels[1].enabled = true;
dpm_table->flags |= SMU_DPM_TABLE_FINE_GRAINED;
} else {
dpm_table->count = 1;
dpm_table->dpm_levels[0].value = smu->smu_table.boot_values.gfxclk / 100;

View File

@ -1129,6 +1129,7 @@ static int smu_v13_0_6_set_default_dpm_table(struct smu_context *smu)
/* gfxclk dpm table setup */
dpm_table = &dpm_context->dpm_tables.gfx_table;
dpm_table->clk_type = SMU_GFXCLK;
dpm_table->flags = SMU_DPM_TABLE_FINE_GRAINED;
if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_DPM_GFXCLK_BIT)) {
/* In the case of gfxclk, only fine-grained dpm is honored.
* Get min/max values from FW.

View File

@ -1370,7 +1370,7 @@ int smu_cmn_print_dpm_clk_levels(struct smu_context *smu,
level_index = 1;
}
if (!is_fine_grained) {
if (!is_fine_grained || count == 1) {
for (i = 0; i < count; i++) {
freq_match = !is_deep_sleep &&
smu_cmn_freqs_match(

View File

@ -831,7 +831,7 @@ static void fill_palette_332(struct drm_crtc *crtc, u16 r, u16 g, u16 b,
}
/**
* drm_crtc_fill_palette_332 - Programs a default palette for R332-like formats
* drm_crtc_fill_palette_332 - Programs a default palette for RGB332-like formats
* @crtc: The displaying CRTC
* @set_palette: Callback for programming the hardware gamma LUT
*

View File

@ -172,8 +172,8 @@ int drm_gem_fb_init_with_funcs(struct drm_device *dev,
}
for (i = 0; i < info->num_planes; i++) {
unsigned int width = mode_cmd->width / (i ? info->hsub : 1);
unsigned int height = mode_cmd->height / (i ? info->vsub : 1);
unsigned int width = drm_format_info_plane_width(info, mode_cmd->width, i);
unsigned int height = drm_format_info_plane_height(info, mode_cmd->height, i);
unsigned int min_size;
objs[i] = drm_gem_object_lookup(file, mode_cmd->handles[i]);
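The open-coded `i ? info->hsub : 1` ternaries special-cased plane 0; drm_format_info_plane_width()/_height() centralize exactly that logic plus a bounds check on the plane index. A standalone sketch of what the width helper does, with a hypothetical NV12-like format description (the real helpers and struct drm_format_info live in <drm/drm_fourcc.h>):
#include <stdio.h>
struct format_info_example {  /* stand-in for struct drm_format_info */
	int num_planes;
	int hsub, vsub;       /* chroma subsampling factors */
};
static int plane_width(const struct format_info_example *info, int width, int plane)
{
	if (!info || plane >= info->num_planes)
		return 0;
	/* Plane 0 (luma) is never subsampled; chroma planes divide by hsub. */
	return plane ? width / info->hsub : width;
}
int main(void)
{
	const struct format_info_example nv12 = { .num_planes = 2, .hsub = 2, .vsub = 2 };
	printf("luma:   %d\n", plane_width(&nv12, 1920, 0)); /* 1920 */
	printf("chroma: %d\n", plane_width(&nv12, 1920, 1)); /* 960 */
	return 0;
}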

View File

@ -558,6 +558,6 @@ pvr_fw_trace_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir)
&pvr_fw_trace_fops);
}
debugfs_create_file("trace_mask", 0600, dir, fw_trace,
debugfs_create_file("trace_mask", 0600, dir, pvr_dev,
&pvr_fw_trace_mask_fops);
}

View File

@ -350,6 +350,7 @@ static void ofdrm_pci_release(void *data)
struct pci_dev *pcidev = data;
pci_disable_device(pcidev);
pci_dev_put(pcidev);
}
static int ofdrm_device_init_pci(struct ofdrm_device *odev)
@ -375,6 +376,7 @@ static int ofdrm_device_init_pci(struct ofdrm_device *odev)
if (ret) {
drm_err(dev, "pci_enable_device(%s) failed: %d\n",
dev_name(&pcidev->dev), ret);
pci_dev_put(pcidev);
return ret;
}
ret = devm_add_action_or_reset(&pdev->dev, ofdrm_pci_release, pcidev);

View File

@ -353,7 +353,7 @@ static int appletbdrm_primary_plane_helper_atomic_check(struct drm_plane *plane,
frames_size +
sizeof(struct appletbdrm_fb_request_footer), 16);
appletbdrm_state->request = kzalloc(request_size, GFP_KERNEL);
appletbdrm_state->request = kvzalloc(request_size, GFP_KERNEL);
if (!appletbdrm_state->request)
return -ENOMEM;
@ -543,7 +543,7 @@ static void appletbdrm_primary_plane_destroy_state(struct drm_plane *plane,
{
struct appletbdrm_plane_state *appletbdrm_state = to_appletbdrm_plane_state(state);
kfree(appletbdrm_state->request);
kvfree(appletbdrm_state->request);
kfree(appletbdrm_state->response);
__drm_gem_destroy_shadow_plane_state(&appletbdrm_state->base);

View File

@ -285,13 +285,12 @@ static struct urb *udl_get_urb_locked(struct udl_device *udl, long timeout)
return unode->urb;
}
#define GET_URB_TIMEOUT HZ
struct urb *udl_get_urb(struct udl_device *udl)
{
struct urb *urb;
spin_lock_irq(&udl->urbs.lock);
urb = udl_get_urb_locked(udl, GET_URB_TIMEOUT);
urb = udl_get_urb_locked(udl, HZ * 2);
spin_unlock_irq(&udl->urbs.lock);
return urb;
}

View File

@ -21,6 +21,7 @@
#include <drm/drm_gem_framebuffer_helper.h>
#include <drm/drm_gem_shmem_helper.h>
#include <drm/drm_modeset_helper_vtables.h>
#include <drm/drm_print.h>
#include <drm/drm_probe_helper.h>
#include <drm/drm_vblank.h>
@ -342,8 +343,10 @@ static void udl_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atom
return;
urb = udl_get_urb(udl);
if (!urb)
if (!urb) {
drm_err_ratelimited(dev, "get urb failed when enabling crtc\n");
goto out;
}
buf = (char *)urb->transfer_buffer;
buf = udl_vidreg_lock(buf);

View File

@ -88,6 +88,7 @@ xe-y += xe_bb.o \
xe_irq.o \
xe_late_bind_fw.o \
xe_lrc.o \
xe_mem_pool.o \
xe_migrate.o \
xe_mmio.o \
xe_mmio_gem.o \

View File

@ -583,7 +583,7 @@
#define DISABLE_128B_EVICTION_COMMAND_UDW REG_BIT(36 - 32)
#define LSCFE_SAME_ADDRESS_ATOMICS_COALESCING_DISABLE REG_BIT(35 - 32)
#define ROW_CHICKEN5 XE_REG_MCR(0xe7f0)
#define ROW_CHICKEN5 XE_REG_MCR(0xe7f0, XE_REG_OPTION_MASKED)
#define CPSS_AWARE_DIS REG_BIT(3)
#define SARB_CHICKEN1 XE_REG_MCR(0xe90c)
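Adding XE_REG_OPTION_MASKED declares ROW_CHICKEN5 a masked register: the upper 16 bits act as a per-bit write enable, so writing CPSS_AWARE_DIS only takes effect when the matching mask bit is set in the same write. A small sketch of that encoding convention (function names are illustrative, not the xe API):
#include <stdint.h>
#include <stdio.h>
/* Masked-register convention: bit n in the low half is only written by
 * the hardware when bit n of the high half is set in the same access. */
static uint32_t masked_bit_enable(uint32_t bit)
{
	return (bit << 16) | bit; /* enable the write and set the bit */
}
static uint32_t masked_bit_disable(uint32_t bit)
{
	return bit << 16;         /* enable the write, bit value 0 */
}
int main(void)
{
	const uint32_t CPSS_AWARE_DIS = 1u << 3;
	printf("enable:  0x%08x\n", masked_bit_enable(CPSS_AWARE_DIS));  /* 0x00080008 */
	printf("disable: 0x%08x\n", masked_bit_disable(CPSS_AWARE_DIS)); /* 0x00080000 */
	return 0;
}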

View File

@ -2322,8 +2322,10 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
}
/* XE_BO_FLAG_GGTTx requires XE_BO_FLAG_GGTT also be set */
if ((flags & XE_BO_FLAG_GGTT_ALL) && !(flags & XE_BO_FLAG_GGTT))
if ((flags & XE_BO_FLAG_GGTT_ALL) && !(flags & XE_BO_FLAG_GGTT)) {
xe_bo_free(bo);
return ERR_PTR(-EINVAL);
}
if (flags & (XE_BO_FLAG_VRAM_MASK | XE_BO_FLAG_STOLEN) &&
!(flags & XE_BO_FLAG_IGNORE_MIN_PAGE_SIZE) &&
@ -2342,8 +2344,10 @@ struct xe_bo *xe_bo_init_locked(struct xe_device *xe, struct xe_bo *bo,
alignment = SZ_4K >> PAGE_SHIFT;
}
if (type == ttm_bo_type_device && aligned_size != size)
if (type == ttm_bo_type_device && aligned_size != size) {
xe_bo_free(bo);
return ERR_PTR(-EINVAL);
}
if (!bo) {
bo = xe_bo_alloc();

View File

@ -18,6 +18,7 @@
#include "xe_ggtt_types.h"
struct xe_device;
struct xe_mem_pool_node;
struct xe_vm;
#define XE_BO_MAX_PLACEMENTS 3
@ -88,7 +89,7 @@ struct xe_bo {
bool ccs_cleared;
/** @bb_ccs: BB instructions of CCS read/write. Valid only for VF */
struct xe_bb *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
struct xe_mem_pool_node *bb_ccs[XE_SRIOV_VF_CCS_CTX_COUNT];
/**
* @cpu_caching: CPU caching mode. Currently only used for userspace

Some files were not shown because too many files have changed in this diff.