| In the Linux kernel, the following vulnerability has been resolved:
net: Fix rcu_tasks stall in threaded busypoll
I was debugging a NIC driver when I noticed that when I enable
threaded busypoll, bpftrace hangs when starting up. dmesg showed:
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 10658 jiffies old.
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 40793 jiffies old.
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 131273 jiffies old.
rcu_tasks_wait_gp: rcu_tasks grace period number 85 (since boot) is 402058 jiffies old.
INFO: rcu_tasks detected stalls on tasks:
00000000769f52cd: .N nvcsw: 2/2 holdout: 1 idle_cpu: -1/64
task:napi/eth2-8265 state:R running task stack:0 pid:48300 tgid:48300 ppid:2 task_flags:0x208040 flags:0x00004000
Call Trace:
<TASK>
? napi_threaded_poll_loop+0x27c/0x2c0
? __pfx_napi_threaded_poll+0x10/0x10
? napi_threaded_poll+0x26/0x80
? kthread+0xfa/0x240
? __pfx_kthread+0x10/0x10
? ret_from_fork+0x31/0x50
? __pfx_kthread+0x10/0x10
? ret_from_fork_asm+0x1a/0x30
</TASK>
The cause is that in threaded busypoll the main loop runs in
napi_threaded_poll rather than in napi_threaded_poll_loop, and the
latter rarely iterates more than once per call. For
rcu_softirq_qs_periodic inside napi_threaded_poll_loop to report a
quiescent state, last_qs must be at least 100ms in the past, which
cannot happen because napi_threaded_poll_loop rarely iterates in
threaded busypoll and last_qs is reset to the current jiffies every
time napi_threaded_poll_loop is called.
This patch changes things so that in threaded busypoll, last_qs is kept
in the outer napi_threaded_poll, and whether busy_poll_last_qs is NULL
indicates whether napi_threaded_poll_loop was called for busypoll. This
way last_qs is no longer reset to the current jiffies on each
invocation of napi_threaded_poll_loop. |
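A minimal sketch of the idea described in the entry above: the outer thread
function owns the quiescent-state timestamp and hands a pointer to it into the
inner loop, so it survives across invocations. The helpers
napi_is_threaded_busy_poll() and napi_should_keep_polling() are illustrative,
not the exact upstream interface.

  /* Sketch only: simplified signatures, not the exact upstream patch. */
  static void napi_threaded_poll_loop(struct napi_struct *napi,
                                      unsigned long *busy_poll_last_qs)
  {
      unsigned long local_last_qs = jiffies;
      unsigned long *last_qs;

      /* Busypoll callers pass persistent storage; others use a local. */
      last_qs = busy_poll_last_qs ? busy_poll_last_qs : &local_last_qs;

      for (;;) {
          /* ... run one NAPI poll round ... */

          /* Reports a qs only once *last_qs is ~100ms in the past. */
          rcu_softirq_qs_periodic(*last_qs);

          if (!napi_should_keep_polling(napi))   /* illustrative helper */
              break;
      }
  }

  static int napi_threaded_poll(void *data)
  {
      struct napi_struct *napi = data;
      unsigned long busy_poll_last_qs = jiffies;

      while (!kthread_should_stop()) {
          if (napi_is_threaded_busy_poll(napi))  /* illustrative helper */
              napi_threaded_poll_loop(napi, &busy_poll_last_qs);
          else
              napi_threaded_poll_loop(napi, NULL);
      }
      return 0;
  }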
| In the Linux kernel, the following vulnerability has been resolved:
sched/mmcid: Prevent CID stalls due to concurrent forks
A newly forked task is accounted as an MMCID user before the task is visible
in the process' thread list and the global task list. This creates the
following problem:
CPU1                                    CPU2
fork()
  sched_mm_cid_fork(tnew1)
    tnew1->mm.mm_cid_users++;
    tnew1->mm_cid.cid = getcid()
  -> preemption
                                        fork()
                                          sched_mm_cid_fork(tnew2)
                                            tnew2->mm.mm_cid_users++;
                                            // Reaches the per CPU threshold
                                            mm_cid_fixup_tasks_to_cpus()
                                              for_each_other(current, p)
                                                ....
As tnew1 is not visible yet, this fails to fix up the already allocated CID
of tnew1. As a consequence, a subsequent schedule-in might fail to acquire a
(transitional) CID and the machine stalls.
Move the invocation of sched_mm_cid_fork() after the new task becomes
visible in the thread and the task list to prevent this.
This also makes it symmetrical vs. exit() where the task is removed as CID
user before the task is removed from the thread and task lists. |
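A minimal sketch of the reordering in the fork path, assuming a simplified
helper (the real copy_process() holds additional locks and does much more; the
point is only the ordering of visibility vs. accounting):

  /* Sketch only: illustrates the ordering, not the exact fork() code. */
  static void attach_child_and_account_cid(struct task_struct *p)
  {
      write_lock_irq(&tasklist_lock);

      /* Make the child visible first ... */
      list_add_tail_rcu(&p->tasks, &init_task.tasks);
      list_add_tail_rcu(&p->thread_node, &p->signal->thread_head);

      write_unlock_irq(&tasklist_lock);

      /*
       * ... and only then account it as an MMCID user, so a concurrent
       * mm_cid_fixup_tasks_to_cpus() walking the thread/task lists can
       * see (and fix up) the CID the child is about to use.
       */
      sched_mm_cid_fork(p);
  }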
| In the Linux kernel, the following vulnerability has been resolved:
mm: memfd_luo: always dirty all folios
A dirty folio is one which has been written to. A clean folio is its
opposite. Since a clean folio has no user data, it can be freed under
memory pressure.
memfd preservation with LUO saves the dirty flag at preserve() time. This is
problematic: the folio might get dirtied later. Saving it at freeze()
also doesn't work, since the dirty bit from PTE is normally synced at
unmap and there might still be mappings of the file at freeze().
To see why this is a problem, say a folio is clean at preserve, but gets
dirtied later. The serialized state of the folio will mark it as clean.
After retrieve, the next kernel will see the folio as clean and might try
to reclaim it under memory pressure. This will result in losing user
data.
Mark all folios of the file as dirty, and always set the
MEMFD_LUO_FOLIO_DIRTY flag. This comes with the side effect of making all
clean folios un-reclaimable. This is a cost that has to be paid for
participants of live update. It is not expected to be a common use case
to preserve a lot of clean folios anyway.
Since the value of pfolio->flags is a constant now, drop the flags
variable and set it directly. |
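A minimal sketch of the preserve-side change described above, assuming a
reduced serialized-folio record (the real memfd_luo structures and helpers
differ; only the pfolio->flags name and the MEMFD_LUO_FOLIO_DIRTY flag come
from the entry above):

  /* Sketch only: field layout and helper name are illustrative. */
  struct memfd_luo_folio {
      u64 pfn;
      u32 flags;
  };

  static void memfd_luo_preserve_folio(struct memfd_luo_folio *pfolio,
                                       struct folio *folio)
  {
      /*
       * The folio may still be written to after preserve(), so treat
       * every preserved folio as dirty: mark the folio itself dirty so
       * it cannot be reclaimed as "clean", and record a constant flag.
       */
      folio_mark_dirty(folio);

      pfolio->pfn = folio_pfn(folio);
      pfolio->flags = MEMFD_LUO_FOLIO_DIRTY;
  }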
| In the Linux kernel, the following vulnerability has been resolved:
sched_ext: Fix starvation of scx_enable() under fair-class saturation
During scx_enable(), the READY -> ENABLED task switching loop changes the
calling thread's sched_class from fair to ext. Since fair has higher
priority than ext, saturating fair-class workloads can indefinitely starve
the enable thread, hanging the system. This was introduced when the enable
path switched from preempt_disable() to scx_bypass() which doesn't protect
against fair-class starvation. Note that the original preempt_disable()
protection wasn't complete either - in partial switch modes, the calling
thread could still be starved after preempt_enable() as it may have been
switched to ext class.
Fix it by offloading the enable body to a dedicated system-wide RT
(SCHED_FIFO) kthread which cannot be starved by either fair or ext class
tasks. scx_enable() lazily creates the kthread on first use and passes the
ops pointer through a struct scx_enable_cmd containing the kthread_work,
then synchronously waits for completion.
The workfn runs on a different kthread from sch->helper (which runs
disable_work), so it can safely flush disable_work on the error path
without deadlock. |
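A minimal sketch of the offload pattern described above; scx_enable_cmd is
named in the entry, while the worker variable, scx_enable_body() and the error
handling around first use are illustrative assumptions:

  /* Sketch only: locking around lazy creation and error paths elided. */
  struct scx_enable_cmd {
      struct kthread_work     work;
      struct sched_ext_ops    *ops;
      int                     ret;
  };

  static struct kthread_worker *scx_enable_worker;

  static void scx_enable_workfn(struct kthread_work *work)
  {
      struct scx_enable_cmd *cmd =
          container_of(work, struct scx_enable_cmd, work);

      /* Runs the READY -> ENABLED switching loop from an RT kthread. */
      cmd->ret = scx_enable_body(cmd->ops);    /* illustrative helper */
  }

  static int scx_enable(struct sched_ext_ops *ops)
  {
      struct scx_enable_cmd cmd = { .ops = ops };

      /* Lazily create the worker on first use and make it SCHED_FIFO. */
      if (!scx_enable_worker) {
          struct kthread_worker *worker;

          worker = kthread_create_worker(0, "scx_enable");
          if (IS_ERR(worker))
              return PTR_ERR(worker);
          sched_set_fifo(worker->task);
          scx_enable_worker = worker;
      }

      kthread_init_work(&cmd.work, scx_enable_workfn);
      kthread_queue_work(scx_enable_worker, &cmd.work);
      kthread_flush_work(&cmd.work);           /* synchronous wait */

      return cmd.ret;
  }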
| In the Linux kernel, the following vulnerability has been resolved:
drm/panthor: fix for dma-fence safe access rules
Commit 506aa8b02a8d6 ("dma-fence: Add safe access helpers and document
the rules") details the dma-fence safe access rules. The most common
culprit is that drm_sched_fence_get_timeline_name may race with
group_free_queue. |
| In the Linux kernel, the following vulnerability has been resolved:
clocksource/drivers/sh_tmu: Always leave device running after probe
The TMU device can be used as both a clocksource and a clockevent
provider. The driver tries to be smart and power itself on and off, as
well as enabling and disabling its clock when it's not in operation.
This behavior is slightly altered if the TMU is used as an early
platform device in which case the device is left powered on after probe,
but the clock is still enabled and disabled at runtime.
This has worked for a long time, but recent improvements in PREEMPT_RT
and PROVE_LOCKING have highlighted an issue. As the TMU registers itself
as a clockevent provider via clockevents_register_device(), it needs to use
raw spinlocks internally, as that is the context in which the clockevent
framework interacts with the TMU driver. However, in the context of
holding a raw spinlock the TMU driver can't really manage its power
state or clock with calls to pm_runtime_*() and clk_*() as these calls
end up in other platform drivers using regular spinlocks to control
power and clocks.
This mix of spinlock contexts trips a lockdep warning.
=============================
[ BUG: Invalid wait context ]
6.18.0-arm64-renesas-09926-gee959e7c5e34 #1 Not tainted
-----------------------------
swapper/0/0 is trying to lock:
ffff000008c9e180 (&dev->power.lock){-...}-{3:3}, at: __pm_runtime_resume+0x38/0x88
other info that might help us debug this:
context-{5:5}
1 lock held by swapper/0/0:
#0: ffff8000817ec298 (tick_broadcast_lock){-...}-{2:2}, at: __tick_broadcast_oneshot_control+0xa4/0x3a8
stack backtrace:
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.18.0-arm64-renesas-09926-gee959e7c5e34 #1 PREEMPT
Hardware name: Renesas Salvator-X 2nd version board based on r8a77965 (DT)
Call trace:
show_stack+0x14/0x1c (C)
dump_stack_lvl+0x6c/0x90
dump_stack+0x14/0x1c
__lock_acquire+0x904/0x1584
lock_acquire+0x220/0x34c
_raw_spin_lock_irqsave+0x58/0x80
__pm_runtime_resume+0x38/0x88
sh_tmu_clock_event_set_oneshot+0x84/0xd4
clockevents_switch_state+0xfc/0x13c
tick_broadcast_set_event+0x30/0xa4
__tick_broadcast_oneshot_control+0x1e0/0x3a8
tick_broadcast_oneshot_control+0x30/0x40
cpuidle_enter_state+0x40c/0x680
cpuidle_enter+0x30/0x40
do_idle+0x1f4/0x280
cpu_startup_entry+0x34/0x40
kernel_init+0x0/0x130
do_one_initcall+0x0/0x230
__primary_switched+0x88/0x90
For non-PREEMPT_RT builds this is not really a problem, but on
PREEMPT_RT builds, where normal spinlocks can sleep, it is. Be cautious
and always leave the power and clock running after probe. |
| In the Linux kernel, the following vulnerability has been resolved:
soc/tegra: pmc: Fix unsafe generic_handle_irq() call
Currently, when resuming from system suspend on Tegra platforms,
the following warning is observed:
WARNING: CPU: 0 PID: 14459 at kernel/irq/irqdesc.c:666
Call trace:
handle_irq_desc+0x20/0x58 (P)
tegra186_pmc_wake_syscore_resume+0xe4/0x15c
syscore_resume+0x3c/0xb8
suspend_devices_and_enter+0x510/0x540
pm_suspend+0x16c/0x1d8
The warning occurs because generic_handle_irq() is being called from
a non-interrupt context, which is considered unsafe.
Fix this warning by deferring generic_handle_irq() call to an IRQ work
which gets executed in hard IRQ context where generic_handle_irq()
can be called safely.
When PREEMPT_RT kernels are used, regular IRQ work (initialized with
init_irq_work()) is deferred to per-CPU kthreads running in preemptible
context rather than hard IRQ context. Hence, use the IRQ_WORK_INIT_HARD
variant so that on PREEMPT_RT kernels the IRQ work is processed in
hardirq context, which is required for calling generic_handle_irq(),
instead of being deferred to a thread.
On non-PREEMPT_RT kernels, both init_irq_work() and IRQ_WORK_INIT_HARD()
execute in IRQ context, so this change has no functional impact for
standard kernel configurations.
[treding@nvidia.com: miscellaneous cleanups] |
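A minimal sketch of the deferral pattern described above; the driver structure
and the pending_wake_irq bookkeeping are illustrative, the key pieces are
IRQ_WORK_INIT_HARD() and irq_work_queue():

  /* Sketch only: a simplified wake-IRQ deferral, not the real driver code. */
  static unsigned int pending_wake_irq;   /* illustrative: one pending IRQ */

  static void pmc_wake_irq_work(struct irq_work *work)
  {
      /* Runs in hard IRQ context, where generic_handle_irq() is safe. */
      generic_handle_irq(pending_wake_irq);
  }

  /*
   * IRQ_WORK_INIT_HARD keeps the work in hardirq context even on
   * PREEMPT_RT, where plain init_irq_work() items run from a kthread.
   */
  static struct irq_work pmc_wake_work = IRQ_WORK_INIT_HARD(pmc_wake_irq_work);

  static void pmc_defer_wake_irq(unsigned int irq)
  {
      pending_wake_irq = irq;
      irq_work_queue(&pmc_wake_work);
  }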
| In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: SCO: fix race conditions in sco_sock_connect()
sco_sock_connect() checks sk_state and sk_type without holding
the socket lock. Two concurrent connect() syscalls on the same
socket can both pass the check and enter sco_connect(), leading
to use-after-free.
The buggy scenario involves three participants and was confirmed
with additional logging instrumentation:
Thread A (connect):
  sco_sock_connect(sk)
    sk_state==BT_OPEN (pass, no lock)
    sco_connect(sk):
      hci_dev_lock
      hci_connect_sco -> hcon1
      sco_conn_add -> conn1
      lock_sock(sk)
      sco_chan_add:
        conn1->sk = sk
        sk->conn = conn1
      sk_state=BT_CONNECT
      release_sock
      hci_dev_unlock

Thread B (connect), concurrently:
  sco_sock_connect(sk)
    sk_state==BT_OPEN (pass, no lock)
    sco_connect(sk):
      hci_dev_lock <- blocked behind Thread A

HCI disconnect:
  hci_dev_lock
  sco_conn_del:
    lock_sock(sk)
    sco_chan_del:
      sk->conn=NULL
      conn1->sk=NULL
      sk_state=BT_CLOSED
      SOCK_ZAPPED
    release_sock
  hci_dev_unlock

Thread B (unblocked):
      hci_connect_sco -> hcon2
      sco_conn_add -> conn2
      lock_sock(sk)
      sco_chan_add:
        sk->conn=conn2
      sk_state=BT_CONNECT   // zombie sk!
      release_sock
      hci_dev_unlock
Thread B revives a BT_CLOSED + SOCK_ZAPPED socket back to
BT_CONNECT. Subsequent cleanup triggers double sock_put() and
use-after-free. Meanwhile conn1 is leaked as it was orphaned
when sco_conn_del() cleared the association.
Fix this by:
- Moving lock_sock() before the sk_state/sk_type checks in
sco_sock_connect() to serialize concurrent connect attempts
- Fixing the sk_type != SOCK_SEQPACKET check to actually
return the error instead of just assigning it
- Adding a state re-check in sco_connect() after lock_sock()
to catch state changes during the window between the locks
- Adding sco_pi(sk)->conn check in sco_chan_add() to prevent
double-attach of a socket to multiple connections
- Adding hci_conn_drop() on sco_chan_add failure to prevent
HCI connection leaks |
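A minimal sketch of the first two points of the fix list above (locking before
the checks and actually returning on the type check); the remaining points
touch sco_connect() and sco_chan_add() and are not shown, and the elided body
is only indicated by comments:

  /* Sketch only: simplified from the real sco_sock_connect(). */
  static int sco_sock_connect(struct socket *sock, struct sockaddr *addr,
                              int alen, int flags)
  {
      struct sock *sk = sock->sk;
      int err;

      if (alen < sizeof(struct sockaddr_sco) ||
          addr->sa_family != AF_BLUETOOTH)
          return -EINVAL;

      lock_sock(sk);

      /* State and type are now checked under the socket lock ... */
      if (sk->sk_state != BT_OPEN && sk->sk_state != BT_BOUND) {
          err = -EBADFD;
          goto done;
      }

      /* ... and a wrong socket type actually returns the error. */
      if (sk->sk_type != SOCK_SEQPACKET) {
          err = -EINVAL;
          goto done;
      }

      /* ... set the destination address, call sco_connect(sk), etc. ... */
      err = 0;
  done:
      release_sock(sk);
      return err;
  }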
| In the Linux kernel, the following vulnerability has been resolved:
Bluetooth: hci_conn: fix potential UAF in set_cig_params_sync
hci_conn lookup and field access must be covered by the hdev lock in
set_cig_params_sync(), otherwise the connection may be freed concurrently.
Take the hdev lock to prevent the hci_conn from being deleted or modified
concurrently. RCU alone is not suitable here, as we also want to
avoid "tearing" in the configuration. |
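A minimal sketch of the locking pattern described above; how the connection is
identified (the handle passed via data) and the elided parameter handling are
illustrative assumptions, not the exact upstream function body:

  /* Sketch only: illustrates the locking, not the real function body. */
  static int set_cig_params_sync(struct hci_dev *hdev, void *data)
  {
      u16 handle = (unsigned long)data;    /* illustrative: conn handle */
      struct hci_conn *conn;
      int err = 0;

      /*
       * Both the lookup and the subsequent field accesses happen under
       * hdev->lock, otherwise the connection can be freed (or its
       * configuration modified) concurrently. RCU alone would keep the
       * memory around but not prevent the configuration from tearing.
       */
      hci_dev_lock(hdev);

      conn = hci_conn_hash_lookup_handle(hdev, handle);
      if (!conn) {
          err = -ENODEV;
          goto unlock;
      }

      /* ... read/update the connection's CIG/CIS parameters ... */

  unlock:
      hci_dev_unlock(hdev);
      return err;
  }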
| In the Linux kernel, the following vulnerability has been resolved:
alpha: fix user-space corruption during memory compaction
Alpha systems can suffer sporadic user-space crashes and heap
corruption when memory compaction is enabled.
Symptoms include SIGSEGV, glibc allocator failures (e.g. "unaligned
tcache chunk"), and compiler internal errors. The failures disappear
when compaction is disabled or when using global TLB invalidation.
The root cause is insufficient TLB shootdown during page migration.
Alpha relies on ASN-based MM context rollover for instruction cache
coherency, but this alone is not sufficient to prevent stale data or
instruction translations from surviving migration.
Fix this by introducing a migration-specific helper that combines:
- MM context invalidation (ASN rollover),
- immediate per-CPU TLB invalidation (TBI),
- synchronous cross-CPU shootdown when required.
The helper is used only by migration/compaction paths to avoid changing
global TLB semantics.
Additionally, update flush_tlb_other() and pte_clear() to use
READ_ONCE()/WRITE_ONCE() for correct SMP memory ordering.
This fixes observed crashes on both UP and SMP Alpha systems. |
| In the Linux kernel, the following vulnerability has been resolved:
smb: client: prevent races in ->query_interfaces()
It was possible for two query interface works to be concurrently trying
to update the interfaces.
Prevent this by checking and updating iface_last_update under
iface_lock. |
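A minimal sketch of checking and updating iface_last_update under iface_lock;
the helper itself is hypothetical, only the two field names come from the
entry above and the cifs session structure:

  /* Sketch only: illustrative helper, not the actual cifs function. */
  static bool iface_update_due(struct cifs_ses *ses, unsigned long interval)
  {
      bool due = false;

      spin_lock(&ses->iface_lock);
      /*
       * Check and update iface_last_update atomically so two query
       * interface works cannot both decide to refresh the list.
       */
      if (time_after(jiffies, ses->iface_last_update + interval)) {
          ses->iface_last_update = jiffies;
          due = true;
      }
      spin_unlock(&ses->iface_lock);

      return due;
  }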
| In the Linux kernel, the following vulnerability has been resolved:
tcp: fix potential race in tcp_v6_syn_recv_sock()
The code in tcp_v6_syn_recv_sock() that runs after the call to
tcp_v4_syn_recv_sock() runs too late.
After tcp_v4_syn_recv_sock(), the child socket is already visible
from TCP ehash table and other cpus might use it.
Since newinet->pinet6 is still pointing to the listener ipv6_pinfo
bad things can happen as syzbot found.
Move the problematic code into tcp_v6_mapped_child_init()
and call this new helper from tcp_v4_syn_recv_sock() before
the ehash insertion.
This allows the removal of one tcp_sync_mss(), since
tcp_v4_syn_recv_sock() will call it with the correct
context. |
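A minimal sketch of the resulting call order; the wrapper function is
hypothetical and the bodies are elided, the point is only that the
IPv6-mapped fixups run before the child is hashed:

  /* Sketch only: simplified control flow, not the exact upstream code. */
  static struct sock *syn_recv_child(const struct sock *listener,
                                     struct request_sock *req,
                                     struct sk_buff *skb)
  {
      struct sock *newsk = tcp_create_openreq_child(listener, req, skb);

      if (!newsk)
          return NULL;

      /* ... IPv4 initialization of the child socket ... */

      /*
       * Fix up the IPv6-mapped state (e.g. newinet->pinet6) *before* the
       * child is inserted into the ehash table; once hashed, other CPUs
       * may already be using the socket.
       */
      if (listener->sk_family == AF_INET6)
          tcp_v6_mapped_child_init(newsk);    /* new helper */

      /* ... inet_ehash_nolisten(newsk, ...) makes the child visible ... */

      return newsk;
  }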
| In the Linux kernel, the following vulnerability has been resolved:
iomap: fix invalid folio access when i_blkbits differs from I/O granularity
Commit aa35dd5cbc06 ("iomap: fix invalid folio access after
folio_end_read()") partially addressed invalid folio access for folios
without an ifs attached, but it did not handle the case where
1 << inode->i_blkbits matches the folio size but is different from the
granularity used for the IO, which means IO can be submitted for less
than the full folio for the !ifs case.
In this case, the condition:
if (*bytes_submitted == folio_len)
ctx->cur_folio = NULL;
in iomap_read_folio_iter() will not invalidate ctx->cur_folio, and
iomap_read_end() will still be called on the folio even though the IO
helper owns it and will finish the read on it.
Fix this by unconditionally invalidating ctx->cur_folio for the !ifs
case. |
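A minimal sketch of the ownership hand-off described above, wrapped in a
hypothetical helper (the real change lives inline in iomap_read_folio_iter();
the variable names follow the entry above):

  /* Sketch only: decides whether the iterator keeps ownership of the folio. */
  static void iomap_read_release_folio(struct iomap_readpage_ctx *ctx,
                                       struct iomap_folio_state *ifs,
                                       size_t bytes_submitted,
                                       size_t folio_len)
  {
      if (!bytes_submitted)
          return;     /* nothing in flight; the iterator still owns it */

      /*
       * Without an ifs there is no per-block tracking, so once any I/O
       * has been submitted the completion path owns the folio, even if
       * the submission covered less than the full folio.
       */
      if (!ifs || bytes_submitted == folio_len)
          ctx->cur_folio = NULL;
  }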
| In the Linux kernel, the following vulnerability has been resolved:
team: avoid NETDEV_CHANGEMTU event when unregistering slave
syzbot is reporting
unregister_netdevice: waiting for netdevsim0 to become free. Usage count = 3
ref_tracker: netdev@ffff88807dcf8618 has 1/2 users at
__netdev_tracker_alloc include/linux/netdevice.h:4400 [inline]
netdev_hold include/linux/netdevice.h:4429 [inline]
inetdev_init+0x201/0x4e0 net/ipv4/devinet.c:286
inetdev_event+0x251/0x1610 net/ipv4/devinet.c:1600
notifier_call_chain+0x19d/0x3a0 kernel/notifier.c:85
call_netdevice_notifiers_mtu net/core/dev.c:2318 [inline]
netif_set_mtu_ext+0x5aa/0x800 net/core/dev.c:9886
netif_set_mtu+0xd7/0x1b0 net/core/dev.c:9907
dev_set_mtu+0x126/0x260 net/core/dev_api.c:248
team_port_del+0xb07/0xcb0 drivers/net/team/team_core.c:1333
team_del_slave drivers/net/team/team_core.c:1936 [inline]
team_device_event+0x207/0x5b0 drivers/net/team/team_core.c:2929
notifier_call_chain+0x19d/0x3a0 kernel/notifier.c:85
call_netdevice_notifiers_extack net/core/dev.c:2281 [inline]
call_netdevice_notifiers net/core/dev.c:2295 [inline]
__dev_change_net_namespace+0xcb7/0x2050 net/core/dev.c:12592
do_setlink+0x2ce/0x4590 net/core/rtnetlink.c:3060
rtnl_changelink net/core/rtnetlink.c:3776 [inline]
__rtnl_newlink net/core/rtnetlink.c:3935 [inline]
rtnl_newlink+0x15a9/0x1be0 net/core/rtnetlink.c:4072
rtnetlink_rcv_msg+0x7d5/0xbe0 net/core/rtnetlink.c:6958
netlink_rcv_skb+0x232/0x4b0 net/netlink/af_netlink.c:2550
netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
netlink_unicast+0x80f/0x9b0 net/netlink/af_netlink.c:1344
netlink_sendmsg+0x813/0xb40 net/netlink/af_netlink.c:1894
problem. Ido Schimmel found steps to reproduce
ip link add name team1 type team
ip link add name dummy1 mtu 1499 master team1 type dummy
ip netns add ns1
ip link set dev dummy1 netns ns1
ip -n ns1 link del dev dummy1
and also found that the same issue was fixed in the bond driver in
commit f51048c3e07b ("bonding: avoid NETDEV_CHANGEMTU event when
unregistering slave").
Let's do a similar thing for the team driver, with commit ad7c7b2172c3 ("net:
hold netdev instance lock during sysfs operations") and commit 303a8487a657
("net: s/__dev_set_mtu/__netif_set_mtu/") also applied. |
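A minimal sketch of the approach taken in the referenced bonding fix, applied
to the team port teardown; the helper, the reg_state check used to detect the
"unregistering" case and the port->orig.mtu field should all be treated as
illustrative:

  /* Sketch only: mirrors the bonding fix, exact helper names may differ. */
  static void team_port_restore_mtu(struct team_port *port)
  {
      struct net_device *port_dev = port->dev;

      if (port_dev->reg_state == NETREG_REGISTERED) {
          /* Normal removal: restore the MTU and notify as usual. */
          dev_set_mtu(port_dev, port->orig.mtu);
      } else {
          /*
           * The slave is being unregistered: restore the MTU without
           * firing NETDEV_CHANGEMTU, so inetdev_init() is not run again
           * on a dying device (which would leak a netdev reference).
           */
          __netif_set_mtu(port_dev, port->orig.mtu);
      }
  }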
| In the Linux kernel, the following vulnerability has been resolved:
drm/amd/display: Adjust PHY FSM transition to TX_EN-to-PLL_ON for TMDS on DCN35
[Why]
A backport of the change made for DCN401 that addresses an issue where
we turn off the PHY PLL when disabling TMDS output, which causes the
OTG to remain stuck.
The OTG being stuck can lead to a hang in the DCHVM's ability to ACK
invalidations when it thinks the HUBP is still on but it's not receiving
global sync.
The transition to PLL_ON needs to be atomic, as there's no guarantee
that the thread won't be preempted or that it will complete before the
IOMMU watchdog times out.
[How]
Backport the implementation from dcn401 back to dcn35.
There's a functional difference in when the eDP output is disabled in
dcn401 code so we don't want to utilize it directly. |
| In the Linux kernel, the following vulnerability has been resolved:
usb: gadget: f_hid: move list and spinlock inits from bind to alloc
There was an issue when you did the following:
- setup and bind an hid gadget
- open /dev/hidg0
- use the resulting fd in EPOLL_CTL_ADD
- unbind the UDC
- bind the UDC
- use the fd in EPOLL_CTL_DEL
When CONFIG_DEBUG_LIST was enabled, a list_del corruption was reported
within remove_wait_queue (via ep_remove_wait_queue). After some
debugging I found out that the queues, which f_hid registers via
poll_wait were the problem. These were initialized using
init_waitqueue_head inside hidg_bind. So effectively, the bind function
re-initialized the queues while there were still items in them.
The solution is to move the initialization from hidg_bind to hidg_alloc
to extend their lifetimes to the lifetime of the function instance.
Additionally, I found many other possibly problematic init calls in the
bind function, which I moved as well. |
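A minimal sketch of the move described above, using a reduced version of the
f_hid function structure (the real hidg_alloc() has more fields, attribute
setup and error handling):

  /* Sketch only: reduced structures, the real f_hidg has more members. */
  static struct usb_function *hidg_alloc(struct usb_function_instance *fi)
  {
      struct f_hidg *hidg;

      hidg = kzalloc(sizeof(*hidg), GFP_KERNEL);
      if (!hidg)
          return ERR_PTR(-ENOMEM);

      /*
       * Initialize waitqueues, lists and locks here, once per function
       * instance, so an unbind/bind cycle does not re-initialize wait
       * queues that still have waiters (e.g. an epoll entry holding the
       * old /dev/hidgN fd).
       */
      init_waitqueue_head(&hidg->read_queue);
      init_waitqueue_head(&hidg->write_queue);
      INIT_LIST_HEAD(&hidg->completed_out_req);
      spin_lock_init(&hidg->read_spinlock);
      spin_lock_init(&hidg->write_spinlock);

      /* ... fill in hidg->func.bind/unbind/free_func etc. ... */

      return &hidg->func;
  }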
| Concurrent execution using shared resource with improper synchronization ('race condition') in .NET Framework allows an unauthorized attacker to deny service over a network. |
| In the Linux kernel, the following vulnerability has been resolved:
KVM: SEV: Reject attempts to sync VMSA of an already-launched/encrypted vCPU
Reject synchronizing vCPU state to its associated VMSA if the vCPU has
already been launched, i.e. if the VMSA has already been encrypted. On a
host with SNP enabled, accessing guest-private memory generates an RMP #PF
and panics the host.
BUG: unable to handle page fault for address: ff1276cbfdf36000
#PF: supervisor write access in kernel mode
#PF: error_code(0x80000003) - RMP violation
PGD 5a31801067 P4D 5a31802067 PUD 40ccfb5063 PMD 40e5954063 PTE 80000040fdf36163
SEV-SNP: PFN 0x40fdf36, RMP entry: [0x6010fffffffff001 - 0x000000000000001f]
Oops: Oops: 0003 [#1] SMP NOPTI
CPU: 33 UID: 0 PID: 996180 Comm: qemu-system-x86 Tainted: G OE
Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
Hardware name: Dell Inc. PowerEdge R7625/0H1TJT, BIOS 1.5.8 07/21/2023
RIP: 0010:sev_es_sync_vmsa+0x54/0x4c0 [kvm_amd]
Call Trace:
<TASK>
snp_launch_update_vmsa+0x19d/0x290 [kvm_amd]
snp_launch_finish+0xb6/0x380 [kvm_amd]
sev_mem_enc_ioctl+0x14e/0x720 [kvm_amd]
kvm_arch_vm_ioctl+0x837/0xcf0 [kvm]
kvm_vm_ioctl+0x3fd/0xcc0 [kvm]
__x64_sys_ioctl+0xa3/0x100
x64_sys_call+0xfe0/0x2350
do_syscall_64+0x81/0x10f0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7ffff673287d
</TASK>
Note, the KVM flaw has been present since commit ad73109ae7ec ("KVM: SVM:
Provide support to launch and run an SEV-ES guest"), but has only been
actively dangerous for the host since SNP support was added. With SEV-ES,
KVM would "just" clobber guest state, which is totally fine from a host
kernel perspective since userspace can clobber guest state any time before
sev_launch_update_vmsa(). |
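A minimal sketch of the rejection, assuming the launched/encrypted state is
visible via vcpu->arch.guest_state_protected; the function name, the exact
flag checked upstream and the error code are assumptions:

  /* Sketch only: the precise flag and error code are assumptions. */
  static int sev_launch_update_vmsa_one(struct kvm_vcpu *vcpu)
  {
      /*
       * Once the VMSA has been encrypted (the vCPU was "launched"),
       * guest state is no longer accessible to the host. Writing to it
       * would clobber guest state on SEV-ES and, with SNP enabled,
       * trigger an RMP #PF on the now guest-private VMSA page.
       */
      if (vcpu->arch.guest_state_protected)
          return -EINVAL;

      /* ... sync the vCPU registers into the VMSA and encrypt it ... */
      return 0;
  }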
| In the Linux kernel, the following vulnerability has been resolved:
smb: server: make use of smbdirect_socket.send_io.bcredits
It turns out that our code will corrupt the stream of
reassembled data transfer messages when we trigger an
immediate (empty) send.
In order to fix this we'll have a single 'batch' credit per
connection. Code holding that credit is free to use as many
messages as needed until remaining_length reaches 0; then the
batch credit is given back and the next logical send can
happen. |
| In the Linux kernel, the following vulnerability has been resolved:
smb: server: make use of smbdirect_socket.recv_io.credits.available
The logic of managing recv credits by counting posted recv_io and
granted credits is racy.
That's because the peer might have already consumed a credit,
but between the hardware receiving the incoming message
and the 'recv_done' function processing the completion
we likely have a window where we grant credits that
don't really exist.
So we better have a dedicated counter for the
available credits, which is incremented
when we post new recv buffers and drained when
we grant credits to the peer.
This fixes a regression Namjae reported with
the 6.18 release. |
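A minimal sketch of such a dedicated counter; the field path follows the
smbdirect_socket.recv_io.credits.available name in the subject above, while
its atomic_t type and the surrounding helpers are illustrative assumptions:

  /* Sketch only: simplified accounting, not the actual ksmbd code. */

  /* Incremented only once a posted receive buffer is actually available. */
  static void smbdirect_recv_posted(struct smbdirect_socket *sc)
  {
      atomic_inc(&sc->recv_io.credits.available);
  }

  /* Grant to the peer only credits that are backed by posted buffers. */
  static u16 smbdirect_grant_credits(struct smbdirect_socket *sc, u16 wanted)
  {
      u16 granted = 0;

      while (granted < wanted &&
             atomic_dec_if_positive(&sc->recv_io.credits.available) >= 0)
          granted++;

      return granted;
  }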