Pull drm updates from Dave Airlie:
 "cross-subsystem:
   - i2c-hid: Make elan touch controllers power on after panel is
     enabled
   - dt bindings for STM32MP25 SoC
   - pci vgaarb: use screen_info helpers
   - rust pin-init updates
   - add MEI driver for late binding firmware update/load

  uapi:
   - add ioctl for reassigning GEM handles
   - provide boot_display attribute on boot-up devices

  core:
   - document DRM_MODE_PAGE_FLIP_EVENT
   - add vendor specific recovery method to drm device wedged uevent

  gem:
   - Simplify gpuvm locking

  ttm:
   - add interface to populate buffers

  sched:
   - Fix race condition in trace code

  atomic:
   - Reallow no-op async page flips

  display:
   - dp: Fix command length

  video:
   - Improve pixel-format handling for struct screen_info

  rust:
   - drop Opaque<> from ioctl args
   - Alloc:
       - BorrowedPage type and AsPageIter traits
       - Implement Vmalloc::to_page() and VmallocPageIter
   - DMA/Scatterlist:
       - Add dma::DataDirection and type alias for dma_addr_t
       - Abstraction for struct scatterlist and sg_table
   - DRM:
       - simplify use of generics
       - add DriverFile type alias
       - drop Object::SIZE
   - Rust:
       - pin-init tree merge
       - Various methods for AsBytes and FromBytes traits

  gpuvm:
   - Support madvise in Xe driver

  gpusvm:
   - fix hmm_pfn_to_map_order usage in gpusvm

  bridge:
   - Improve and fix ref counting on bridge management
   - cdns-dsi: Various improvements to mode setting
   - Support Solomon SSD2825 plus DT bindings
   - Support Waveshare DSI2DPI plus DT bindings
   - Support Content Protection property
   - display-connector: Improve DP display detection
   - Add support for Radxa Ra620 plus DT bindings
   - adv7511: Provide SPD and HDMI infoframes
   - it6505: Replace crypto_shash with sha()
   - synopsys: Add support for DW DPTX Controller plus DT bindings
   - adv7511: Write full Audio infoframe
   - ite6263: Support vendor-specific infoframes
   - simple: Add support for Realtek RTD2171 DP-to-HDMI plus DT bindings

  panel:
   - panel-edp: Support mt8189 Chromebooks; Support BOE NV140WUM-N64;
     Support SHP LQ134Z1; Fixes
   - panel-simple: Support Olimex LCD-OLinuXino-5CTS plus DT bindings
   - Support Samsung AMS561RA01
   - Support Hydis HV101HD1 plus DT bindings
   - ilitek-ili9881c: Refactor mode setting; Add support for Bestar
     BSD1218-A101KL68 LCD plus DT bindings
   - lvds: Add support for Ampire AMP19201200B5TZQW-T03 to DT bindings
   - edp: Add support for additional mt8189 Chromebook panels
   - lvds: Add DT bindings for EDT ETML0700Z8DHA

  amdgpu:
   - add CRIU support for gem objects
   - RAS updates
   - VCN SRAM load fixes
   - EDID read fixes
   - eDP ALPM support
   - Documentation updates
   - Rework PTE flag generation
   - DCE6 fixes
   - VCN devcoredump cleanup
   - MMHUB client id fixes
   - VCN 5.0.1 RAS support
   - SMU 13.0.x updates
   - Expanded PCIe DPC support
   - Expanded VCN reset support
   - VPE per queue reset support
   - give kernel jobs unique id for tracing
   - pre-populate exported buffers
   - cyan skillfish updates
   - make vbios build number available in sysfs
   - userq updates
   - HDCP updates
   - support MMIO remap page as ttm pool
   - JPEG parser updates
   - DCE6 DC updates
   - use devm for i2c buses
   - GPUVM locking updates
   - Drop non-DC DCE11 code
   - improve fallback handling for pixel encoding

  amdkfd:
   - SVM/page migration fixes
   - debugfs fixes
   - add CRIU support for gem objects
   - SVM updates

  radeon:
   - use dev_warn_once in CS parsers

  xe:
   - add madvise interface
   - add DRM_IOCTL_XE_VM_QUERY_MEMORY_RANGE_ATTRS to query VMA count
     and memory attributes
   - drop L# bank mask reporting from media GT3 on Xe3+.
   - add SLPC power_profile sysfs interface
   - add config attributes for post/mid context-switch commands
   - handle firmware reported hardware errors notifying userspace with
     device wedged uevent
   - use same dir structure across sysfs/debugfs
   - cleanup and future proof vram region init
   - add G-states and PCI link states to debugfs
   - Add SRIOV support for CCS surfaces on Xe2+
   - Enable SRIOV PF mode by default on supported platforms
   - move flush to common code
   - extended core workarounds for Xe2/3
   - use DRM scheduler for delayed GT TLB invalidations
   - configs improvements and allow VF device enablement
   - prep work to expose mmio regions to userspace
   - VF migration support added
   - prepare GPU SVM for THP migration
   - start fixing XE_PAGE_SIZE vs PAGE_SIZE
   - add PSMI support for hw validation
   - resize VF bars to max possible size according to number of VFs
   - Ensure GT is in C0 during resume
   - pre-populate exported buffers
   - replace xe_hmm with gpusvm
   - add more SVM GT stats to debugfs
   - improve fake pci and WA kunit handling for new platform testing
   - Test GuC to GuC comms to add debugging
   - use attribute groups to simplify sysfs registration
   - add Late Binding firmware code to interact with MEI

  i915:
   - apply multiple JSL/EHL/Gen7/Gen6 workarounds properly
   - protect against overflow in active_engine()
   - Use try_cmpxchg64() in __active_lookup()
   - include GuC registers in error state
   - get rid of dev->struct_mutex
   - iopoll: generalize read_poll_timeout
   - lots more display refactoring
   - Reject HBR3 in any eDP Panel
   - Prune modes for YUV420
   - Display Wa fix, additions, and updates
   - DP: Fix 2.7 Gbps link training on g4x
   - DP: Adjust the idle pattern handling
   - DP: Shuffle the link training code a bit
   - Don't set/read the DSI C clock divider on GLK
   - enable_psr kernel parameter changes
   - Type-C enabled/disconnected dp-alt sink
   - Wildcat Lake enabling
   - DP HDR updates
   - DRAM detection
   - wait PSR idle on dsb commit
   - Remove FBC modulo 4 restriction for ADL-P+
   - panic: refactor framebuffer allocation

  habanalabs:
   - debug/visibility improvements
   - vmalloc-backed coherent mmap support
   - HLDIO infrastructure

  nova-core:
   - various register!() macro improvements
   - minor vbios/firmware fixes/refactoring
   - advance firmware boot stages; process Booter and patch signatures
   - process GSP and GSP bootloader
   - Add r570.144 firmware bindings and update to it
   - Move GSP boot code to own module
   - Use new pin-init features to store driver's private data in a
     single allocation
   - Update ARef import from sync::aref

  nova-drm:
   - Update ARef import from sync::aref

  tyr:
   - initial driver skeleton for a rust driver for ARM Mali GPUs
   - capable of powering up the GPU, querying metadata and providing it
     to userspace

  msm:
   - GPU and Core:
      - in DT bindings describe clocks per GPU type
      - GMU bandwidth voting for x1-85
      - a623/a663 speedbins
      - cleanup some remaining no-iommu leftovers after VM_BIND conversion
      - fix GEM obj 32b size truncation
      - add missing VM_BIND param validation
      - IFPC for x1-85 and a750
      - register xml and gen_header.py sync from mesa
   - Display:
      - add missing bindings for display on SC8180X
      - added DisplayPort MST bindings
      - conversion from round_rate() to determine_rate()

  amdxdna:
   - add IOCTL_AMDXDNA_GET_ARRAY
   - support user space allocated buffers
   - streamline PM interfaces
   - Refactoring wrt. hardware contexts
   - improve error reporting

  nouveau:
   - use GSP firmware by default
   - improve error reporting
   - Pre-populate exported buffers

  ast:
   - Clean up detection of DRAM config

  exynos:
   - add DSIM bridge driver support for Exynos7870
   - Document Exynos7870 DSIM compatible in dt-binding

  panthor:
   - Print task/pid on errors
   - Add support for Mali G710, G510, G310, Gx15, Gx20, Gx25
   - Improve cache flushing
   - Fail VM bind if BO has offset

  renesas:
   - convert to RUNTIME_PM_OPS

  rcar-du:
   - Make number of lanes configurable
   - Use RUNTIME_PM_OPS
   - Add support for DSI commands

  rocket:
   - Add driver for Rockchip NPU plus DT bindings
   - Use kfree() and sizeof() correctly
   - Test DMA status

  rockchip:
   - dsi2: Add support for RK3576 plus DT bindings
   - Add support for RK3588 DPTX output

  tidss:
   - Use crtc_ fields for programming display mode
   - Remove other drivers from aperture

  pixpaper:
   - Add support for Mayqueen Pixpaper plus DT bindings

  v3d:
   - Support querying number of GPU resets for KHR_robustness

  stm:
   - Clean up logging
   - ltdc: Add support for STM32MP257F-EV1 plus DT bindings

  sitronix:
   - st7571-i2c: Add support for inverted displays and 2-bit grayscale

  tidss:
   - Convert to kernel's FIELD_ macros

  vesadrm:
   - Support 8-bit palette mode

  imagination:
   - Improve power management
   - Add support for TH1520 GPU
   - Support Risc-V architectures

  v3d:
   - Improve job management and locking

  vkms:
   - Support variants of ARGB8888, ARGB16161616, RGB565, RGB888 and P01x
   - Support YUV with 16-bit components"

* tag 'drm-next-2025-10-01' of https://gitlab.freedesktop.org/drm/kernel: (1455 commits)
  drm/amd: Add name to modes from amdgpu_connector_add_common_modes()
  drm/amd: Drop some common modes from amdgpu_connector_add_common_modes()
  drm/amdgpu: update MODULE_PARM_DESC for freesync_video
  drm/amd: Use dynamic array size declaration for amdgpu_connector_add_common_modes()
  drm/amd/display: Share dce100_validate_global with DCE6-8
  drm/amd/display: Share dce100_validate_bandwidth with DCE6-8
  drm/amdgpu: Fix fence signaling race condition in userqueue
  amd/amdkfd: enhance kfd process check in switch partition
  amd/amdkfd: resolve a race in amdgpu_amdkfd_device_fini_sw
  drm/amd/display: Reject modes with too high pixel clock on DCE6-10
  drm/amd: Drop unnecessary check in amdgpu_connector_add_common_modes()
  drm/amd/display: Only enable common modes for eDP and LVDS
  drm/amdgpu: remove the redeclaration of variable i
  drm/amdgpu/userq: assign an error code for invalid userq va
  drm/amdgpu: revert "rework reserved VMID handling" v2
  drm/amdgpu: remove leftover from enforcing isolation by VMID
  drm/amdgpu: Add fallback to pipe reset if KCQ ring reset fails
  accel/habanalabs: add Infineon version check
  accel/habanalabs/gaudi2: read preboot status after recovering from dirty state
  accel/habanalabs: add HL_GET_P_STATE passthrough type
  ...
This commit is contained in:
Linus Torvalds
2025-10-02 12:47:25 -07:00
1254 changed files with 52110 additions and 19221 deletions


@@ -15,10 +15,14 @@ use core::ptr::NonNull;
use crate::alloc::{AllocError, Allocator};
use crate::bindings;
use crate::page;
use crate::pr_warn;
const ARCH_KMALLOC_MINALIGN: usize = bindings::ARCH_KMALLOC_MINALIGN;
mod iter;
pub use self::iter::VmallocPageIter;
/// The contiguous kernel allocator.
///
/// `Kmalloc` is typically used for physically contiguous allocations up to page size, but also
@@ -146,6 +150,54 @@ unsafe impl Allocator for Kmalloc {
}
}
impl Vmalloc {
/// Convert a pointer to a [`Vmalloc`] allocation to a [`page::BorrowedPage`].
///
/// # Examples
///
/// ```
/// # use core::ptr::{NonNull, from_mut};
/// # use kernel::{page, prelude::*};
/// use kernel::alloc::allocator::Vmalloc;
///
/// let mut vbox = VBox::<[u8; page::PAGE_SIZE]>::new_uninit(GFP_KERNEL)?;
///
/// {
/// // SAFETY: By the type invariant of `Box` the inner pointer of `vbox` is non-null.
/// let ptr = unsafe { NonNull::new_unchecked(from_mut(&mut *vbox)) };
///
/// // SAFETY:
/// // `ptr` is a valid pointer to a `Vmalloc` allocation.
/// // `ptr` is valid for the entire lifetime of `page`.
/// let page = unsafe { Vmalloc::to_page(ptr.cast()) };
///
/// // SAFETY: There is no concurrent read or write to the same page.
/// unsafe { page.fill_zero_raw(0, page::PAGE_SIZE)? };
/// }
/// # Ok::<(), Error>(())
/// ```
///
/// # Safety
///
/// - `ptr` must be a valid pointer to a [`Vmalloc`] allocation.
/// - `ptr` must remain valid for the entire duration of `'a`.
pub unsafe fn to_page<'a>(ptr: NonNull<u8>) -> page::BorrowedPage<'a> {
// SAFETY: `ptr` is a valid pointer to `Vmalloc` memory.
let page = unsafe { bindings::vmalloc_to_page(ptr.as_ptr().cast()) };
// SAFETY: `vmalloc_to_page` returns a valid pointer to a `struct page` for a valid pointer
// to `Vmalloc` memory.
let page = unsafe { NonNull::new_unchecked(page) };
// SAFETY:
// - `page` is a valid pointer to a `struct page`, given that by the safety requirements of
// this function `ptr` is a valid pointer to a `Vmalloc` allocation.
// - By the safety requirements of this function `ptr` is valid for the entire lifetime of
// `'a`.
unsafe { page::BorrowedPage::from_raw(page) }
}
}
// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
// - memory remains valid until it is explicitly freed,
// - passing a pointer to a valid memory allocation is OK,


@@ -0,0 +1,102 @@
// SPDX-License-Identifier: GPL-2.0
use super::Vmalloc;
use crate::page;
use core::marker::PhantomData;
use core::ptr::NonNull;
/// An [`Iterator`] of [`page::BorrowedPage`] items owned by a [`Vmalloc`] allocation.
///
/// # Guarantees
///
/// The pages iterated by the [`Iterator`] appear in the order as they are mapped in the CPU's
/// virtual address space ascendingly.
///
/// # Invariants
///
/// - `buf` is a valid and [`page::PAGE_SIZE`] aligned pointer into a [`Vmalloc`] allocation.
/// - `size` is the number of bytes from `buf` until the end of the [`Vmalloc`] allocation `buf`
/// points to.
pub struct VmallocPageIter<'a> {
/// The base address of the [`Vmalloc`] buffer.
buf: NonNull<u8>,
/// The size of the buffer pointed to by `buf` in bytes.
size: usize,
/// The current page index of the [`Iterator`].
index: usize,
_p: PhantomData<page::BorrowedPage<'a>>,
}
impl<'a> Iterator for VmallocPageIter<'a> {
type Item = page::BorrowedPage<'a>;
fn next(&mut self) -> Option<Self::Item> {
let offset = self.index.checked_mul(page::PAGE_SIZE)?;
// Even though `self.size()` may be smaller than `Self::page_count() * page::PAGE_SIZE`, it
// is always a number between `(Self::page_count() - 1) * page::PAGE_SIZE` and
// `Self::page_count() * page::PAGE_SIZE`, hence the check below is sufficient.
if offset < self.size() {
self.index += 1;
} else {
return None;
}
// TODO: Use `NonNull::add()` instead, once the minimum supported compiler version is
// bumped to 1.80 or later.
//
// SAFETY: `offset` is in the interval `[0, (self.page_count() - 1) * page::PAGE_SIZE]`,
// hence the resulting pointer is guaranteed to be within the same allocation.
let ptr = unsafe { self.buf.as_ptr().add(offset) };
// SAFETY: `ptr` is guaranteed to be non-null given that it is derived from `self.buf`.
let ptr = unsafe { NonNull::new_unchecked(ptr) };
// SAFETY:
// - `ptr` is a valid pointer to a `Vmalloc` allocation.
// - `ptr` is valid for the duration of `'a`.
Some(unsafe { Vmalloc::to_page(ptr) })
}
fn size_hint(&self) -> (usize, Option<usize>) {
let remaining = self.page_count().saturating_sub(self.index);
(remaining, Some(remaining))
}
}
impl<'a> VmallocPageIter<'a> {
/// Creates a new [`VmallocPageIter`] instance.
///
/// # Safety
///
/// - `buf` must be a [`page::PAGE_SIZE`] aligned pointer into a [`Vmalloc`] allocation.
/// - `buf` must be valid for at least the lifetime of `'a`.
/// - `size` must be the number of bytes from `buf` until the end of the [`Vmalloc`] allocation
/// `buf` points to.
pub unsafe fn new(buf: NonNull<u8>, size: usize) -> Self {
// INVARIANT: By the safety requirements, `buf` is a valid and `page::PAGE_SIZE` aligned
// pointer into a [`Vmalloc`] allocation.
Self {
buf,
size,
index: 0,
_p: PhantomData,
}
}
/// Returns the size of the backing [`Vmalloc`] allocation in bytes.
///
/// Note that this is the size the [`Vmalloc`] allocation has been allocated with. Hence, this
/// number may be smaller than `[`Self::page_count`] * [`page::PAGE_SIZE`]`.
#[inline]
pub fn size(&self) -> usize {
self.size
}
/// Returns the number of pages owned by the backing [`Vmalloc`] allocation.
#[inline]
pub fn page_count(&self) -> usize {
self.size().div_ceil(page::PAGE_SIZE)
}
}
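The relationship between `size()`, `page_count()` and `size_hint()` described above can be sketched as free functions in plain Rust. This is a standalone analogue for illustration, not the kernel code; `PAGE_SIZE` is assumed to be 4096 here, while the real value is architecture-dependent.

```rust
const PAGE_SIZE: usize = 4096; // assumption for this sketch; arch-dependent in the kernel

/// Number of pages spanned by an allocation of `size` bytes, mirroring
/// `VmallocPageIter::page_count()`: a partial trailing page still counts.
fn page_count(size: usize) -> usize {
    size.div_ceil(PAGE_SIZE)
}

/// Exact remaining-items hint for an iterator positioned at `index`,
/// mirroring `VmallocPageIter::size_hint()`. `saturating_sub` keeps the
/// hint at zero once the iterator has run past the last page.
fn size_hint(size: usize, index: usize) -> (usize, Option<usize>) {
    let remaining = page_count(size).saturating_sub(index);
    (remaining, Some(remaining))
}
```

Because `size` may not be page-aligned, `size` can be smaller than `page_count() * PAGE_SIZE`, which is exactly why the iterator's bounds check compares the byte offset against `size` rather than against a page-aligned length.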


@@ -3,7 +3,7 @@
//! Implementation of [`Box`].
#[allow(unused_imports)] // Used in doc comments.
-use super::allocator::{KVmalloc, Kmalloc, Vmalloc};
+use super::allocator::{KVmalloc, Kmalloc, Vmalloc, VmallocPageIter};
use super::{AllocError, Allocator, Flags};
use core::alloc::Layout;
use core::borrow::{Borrow, BorrowMut};
@@ -18,6 +18,7 @@ use core::result::Result;
use crate::ffi::c_void;
use crate::fmt;
use crate::init::InPlaceInit;
use crate::page::AsPageIter;
use crate::types::ForeignOwnable;
use pin_init::{InPlaceWrite, Init, PinInit, ZeroableOption};
@@ -680,3 +681,40 @@ where
unsafe { A::free(self.0.cast(), layout) };
}
}
/// # Examples
///
/// ```
/// # use kernel::prelude::*;
/// use kernel::alloc::allocator::VmallocPageIter;
/// use kernel::page::{AsPageIter, PAGE_SIZE};
///
/// let mut vbox = VBox::new((), GFP_KERNEL)?;
///
/// assert!(vbox.page_iter().next().is_none());
///
/// let mut vbox = VBox::<[u8; PAGE_SIZE]>::new_uninit(GFP_KERNEL)?;
///
/// let page = vbox.page_iter().next().expect("At least one page should be available.\n");
///
/// // SAFETY: There is no concurrent read or write to the same page.
/// unsafe { page.fill_zero_raw(0, PAGE_SIZE)? };
/// # Ok::<(), Error>(())
/// ```
impl<T> AsPageIter for VBox<T> {
type Iter<'a>
= VmallocPageIter<'a>
where
T: 'a;
fn page_iter(&mut self) -> Self::Iter<'_> {
let ptr = self.0.cast();
let size = core::mem::size_of::<T>();
// SAFETY:
// - `ptr` is a valid pointer to the beginning of a `Vmalloc` allocation.
// - `ptr` is guaranteed to be valid for the lifetime of `'a`.
// - `size` is the size of the `Vmalloc` allocation `ptr` points to.
unsafe { VmallocPageIter::new(ptr, size) }
}
}


@@ -3,11 +3,14 @@
//! Implementation of [`Vec`].
use super::{
-    allocator::{KVmalloc, Kmalloc, Vmalloc},
+    allocator::{KVmalloc, Kmalloc, Vmalloc, VmallocPageIter},
layout::ArrayLayout,
AllocError, Allocator, Box, Flags,
};
-use crate::fmt;
+use crate::{
+    fmt,
+    page::AsPageIter,
+};
use core::{
borrow::{Borrow, BorrowMut},
marker::PhantomData,
@@ -1027,6 +1030,43 @@ where
}
}
/// # Examples
///
/// ```
/// # use kernel::prelude::*;
/// use kernel::alloc::allocator::VmallocPageIter;
/// use kernel::page::{AsPageIter, PAGE_SIZE};
///
/// let mut vec = VVec::<u8>::new();
///
/// assert!(vec.page_iter().next().is_none());
///
/// vec.reserve(PAGE_SIZE, GFP_KERNEL)?;
///
/// let page = vec.page_iter().next().expect("At least one page should be available.\n");
///
/// // SAFETY: There is no concurrent read or write to the same page.
/// unsafe { page.fill_zero_raw(0, PAGE_SIZE)? };
/// # Ok::<(), Error>(())
/// ```
impl<T> AsPageIter for VVec<T> {
type Iter<'a>
= VmallocPageIter<'a>
where
T: 'a;
fn page_iter(&mut self) -> Self::Iter<'_> {
let ptr = self.ptr.cast();
let size = self.layout.size();
// SAFETY:
// - `ptr` is a valid pointer to the beginning of a `Vmalloc` allocation.
// - `ptr` is guaranteed to be valid for the lifetime of `'a`.
// - `size` is the size of the `Vmalloc` allocation `ptr` points to.
unsafe { VmallocPageIter::new(ptr, size) }
}
}
/// An [`Iterator`] implementation for [`Vec`] that moves elements out of a vector.
///
/// This structure is created by the [`Vec::into_iter`] method on [`Vec`] (provided by the


@@ -98,6 +98,11 @@ impl<T> ArrayLayout<T> {
pub const fn is_empty(&self) -> bool {
self.len == 0
}
/// Returns the size of the [`ArrayLayout`] in bytes.
pub const fn size(&self) -> usize {
self.len() * core::mem::size_of::<T>()
}
}
impl<T> From<ArrayLayout<T>> for Layout {
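The new `ArrayLayout::size()` above multiplies unchecked because the type's invariant guarantees `len * size_of::<T>()` fits in a `usize`. A standalone version without that invariant would use a checked multiply, as in this hypothetical sketch:

```rust
use core::mem::size_of;

/// Standalone analogue of `ArrayLayout::size()` for a length that carries no
/// overflow invariant: returns `None` instead of wrapping on overflow.
fn array_size_checked<T>(len: usize) -> Option<usize> {
    len.checked_mul(size_of::<T>())
}
```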


@@ -135,11 +135,9 @@ impl<T: Send> Devres<T> {
T: 'a,
Error: From<E>,
{
-let callback = Self::devres_callback;
try_pin_init!(&this in Self {
dev: dev.into(),
-callback,
+callback: Self::devres_callback,
// INVARIANT: `inner` is properly initialized.
inner <- Opaque::pin_init(try_pin_init!(Inner {
devm <- Completion::new(),
@@ -160,7 +158,7 @@ impl<T: Send> Devres<T> {
// properly initialized, because we require `dev` (i.e. the *bound* device) to
// live at least as long as the returned `impl PinInit<Self, Error>`.
to_result(unsafe {
-bindings::devm_add_action(dev.as_raw(), Some(callback), inner.cast())
+bindings::devm_add_action(dev.as_raw(), Some(*callback), inner.cast())
}).inspect_err(|_| {
let inner = Opaque::cast_into(inner);


@@ -13,6 +13,16 @@ use crate::{
transmute::{AsBytes, FromBytes},
};
/// DMA address type.
///
/// Represents a bus address used for Direct Memory Access (DMA) operations.
///
/// This is an alias of the kernel's `dma_addr_t`, which may be `u32` or `u64` depending on
/// `CONFIG_ARCH_DMA_ADDR_T_64BIT`.
///
/// Note that this may be `u64` even on 32-bit architectures.
pub type DmaAddress = bindings::dma_addr_t;
/// Trait to be implemented by DMA capable bus devices.
///
/// The [`dma::Device`](Device) trait should be implemented by bus specific device representations,
@@ -244,6 +254,74 @@ pub mod attrs {
pub const DMA_ATTR_PRIVILEGED: Attrs = Attrs(bindings::DMA_ATTR_PRIVILEGED);
}
/// DMA data direction.
///
/// Corresponds to the C [`enum dma_data_direction`].
///
/// [`enum dma_data_direction`]: srctree/include/linux/dma-direction.h
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
#[repr(u32)]
pub enum DataDirection {
/// The DMA mapping is for bidirectional data transfer.
///
/// This is used when the buffer can be both read from and written to by the device.
/// The cache for the corresponding memory region is both flushed and invalidated.
Bidirectional = Self::const_cast(bindings::dma_data_direction_DMA_BIDIRECTIONAL),
/// The DMA mapping is for data transfer from memory to the device (write).
///
/// The CPU has prepared data in the buffer, and the device will read it.
/// The cache for the corresponding memory region is flushed before device access.
ToDevice = Self::const_cast(bindings::dma_data_direction_DMA_TO_DEVICE),
/// The DMA mapping is for data transfer from the device to memory (read).
///
/// The device will write data into the buffer for the CPU to read.
/// The cache for the corresponding memory region is invalidated before CPU access.
FromDevice = Self::const_cast(bindings::dma_data_direction_DMA_FROM_DEVICE),
/// The DMA mapping is not for data transfer.
///
/// This is primarily for debugging purposes. With this direction, the DMA mapping API
/// will not perform any cache coherency operations.
None = Self::const_cast(bindings::dma_data_direction_DMA_NONE),
}
impl DataDirection {
/// Casts the bindgen-generated enum type to a `u32` at compile time.
///
/// This function will cause a compile-time error if the underlying value of the
/// C enum is out of bounds for `u32`.
const fn const_cast(val: bindings::dma_data_direction) -> u32 {
// CAST: The C standard allows compilers to choose different integer types for enums.
// To safely check the value, we cast it to a wide signed integer type (`i128`)
// which can hold any standard C integer enum type without truncation.
let wide_val = val as i128;
// Check if the value is outside the valid range for the target type `u32`.
// CAST: `u32::MAX` is cast to `i128` to match the type of `wide_val` for the comparison.
if wide_val < 0 || wide_val > u32::MAX as i128 {
// Trigger a compile-time error in a const context.
build_error!("C enum value is out of bounds for the target type `u32`.");
}
// CAST: This cast is valid because the check above guarantees that `wide_val`
// is within the representable range of `u32`.
wide_val as u32
}
}
impl From<DataDirection> for bindings::dma_data_direction {
/// Returns the raw representation of [`enum dma_data_direction`].
fn from(direction: DataDirection) -> Self {
// CAST: `direction as u32` gets the underlying representation of our `#[repr(u32)]` enum.
// The subsequent cast to `Self` (the bindgen type) assumes the C enum is compatible
// with the enum variants of `DataDirection`, which is a valid assumption given our
// compile-time checks.
direction as u32 as Self
}
}
/// An abstraction of the `dma_alloc_coherent` API.
///
/// This is an abstraction around the `dma_alloc_coherent` API which is used to allocate and map
@@ -275,7 +353,7 @@ pub mod attrs {
// entire `CoherentAllocation` including the allocated memory itself.
pub struct CoherentAllocation<T: AsBytes + FromBytes> {
dev: ARef<device::Device>,
-dma_handle: bindings::dma_addr_t,
+dma_handle: DmaAddress,
count: usize,
cpu_addr: *mut T,
dma_attrs: Attrs,
@@ -376,7 +454,7 @@ impl<T: AsBytes + FromBytes> CoherentAllocation<T> {
/// Returns a DMA handle which may be given to the device as the DMA address base of
/// the region.
-pub fn dma_handle(&self) -> bindings::dma_addr_t {
+pub fn dma_handle(&self) -> DmaAddress {
self.dma_handle
}
@@ -384,13 +462,13 @@ impl<T: AsBytes + FromBytes> CoherentAllocation<T> {
/// device as the DMA address base of the region.
///
/// Returns `EINVAL` if `offset` is not within the bounds of the allocation.
-pub fn dma_handle_with_offset(&self, offset: usize) -> Result<bindings::dma_addr_t> {
+pub fn dma_handle_with_offset(&self, offset: usize) -> Result<DmaAddress> {
if offset >= self.count {
Err(EINVAL)
} else {
// INVARIANT: The type invariant of `Self` guarantees that `size_of::<T> * count` fits
// into a `usize`, and `offset` is inferior to `count`.
-Ok(self.dma_handle + (offset * core::mem::size_of::<T>()) as bindings::dma_addr_t)
+Ok(self.dma_handle + (offset * core::mem::size_of::<T>()) as DmaAddress)
}
}
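The compile-time range check in `DataDirection::const_cast` above can be sketched in standalone Rust. This is a hypothetical analogue, not the kernel code: `assert!` in a `const` context stands in for the kernel's `build_error!`, so an out-of-range value aborts compilation rather than panicking at runtime.

```rust
/// Standalone sketch of the `const_cast` pattern: widen the incoming value to
/// `i128` (wide enough for any C integer enum type), validate the range, then
/// narrow to `u32`. In a `const` context a failed `assert!` is a build error.
const fn const_cast(val: i64) -> u32 {
    let wide = val as i128;
    assert!(wide >= 0 && wide <= u32::MAX as i128, "out of bounds for u32");
    wide as u32
}

// Evaluated at compile time; an out-of-range argument would fail the build
// here instead of producing a runtime panic.
const BIDIRECTIONAL: u32 = const_cast(0);
```

The same widen-check-narrow shape works for any target integer type; only the bounds in the `assert!` change.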


@@ -11,7 +11,8 @@ use crate::{
error::from_err_ptr,
error::Result,
prelude::*,
-types::{ARef, AlwaysRefCounted, Opaque},
+sync::aref::{ARef, AlwaysRefCounted},
+types::Opaque,
};
use core::{alloc::Layout, mem, ops::Deref, ptr, ptr::NonNull};


@@ -8,7 +8,7 @@ use crate::{
bindings, device, devres, drm,
error::{to_result, Result},
prelude::*,
-types::ARef,
+sync::aref::ARef,
};
use macros::vtable;
@@ -86,6 +86,9 @@ pub struct AllocOps {
/// Trait for memory manager implementations. Implemented internally.
pub trait AllocImpl: super::private::Sealed + drm::gem::IntoGEMObject {
/// The [`Driver`] implementation for this [`AllocImpl`].
type Driver: drm::Driver;
/// The C callback operations for this memory manager.
const ALLOC_OPS: AllocOps;
}


@@ -10,36 +10,37 @@ use crate::{
drm::driver::{AllocImpl, AllocOps},
error::{to_result, Result},
prelude::*,
-types::{ARef, AlwaysRefCounted, Opaque},
+sync::aref::{ARef, AlwaysRefCounted},
+types::Opaque,
};
-use core::{mem, ops::Deref, ptr::NonNull};
+use core::{ops::Deref, ptr::NonNull};
/// A type alias for retrieving a [`Driver`]s [`DriverFile`] implementation from its
/// [`DriverObject`] implementation.
///
/// [`Driver`]: drm::Driver
/// [`DriverFile`]: drm::file::DriverFile
pub type DriverFile<T> = drm::File<<<T as DriverObject>::Driver as drm::Driver>::File>;
/// GEM object functions, which must be implemented by drivers.
-pub trait BaseDriverObject<T: BaseObject>: Sync + Send + Sized {
+pub trait DriverObject: Sync + Send + Sized {
/// Parent `Driver` for this object.
type Driver: drm::Driver;
/// Create a new driver data object for a GEM object of a given size.
-fn new(dev: &drm::Device<T::Driver>, size: usize) -> impl PinInit<Self, Error>;
+fn new(dev: &drm::Device<Self::Driver>, size: usize) -> impl PinInit<Self, Error>;
/// Open a new handle to an existing object, associated with a File.
-fn open(
-_obj: &<<T as IntoGEMObject>::Driver as drm::Driver>::Object,
-_file: &drm::File<<<T as IntoGEMObject>::Driver as drm::Driver>::File>,
-) -> Result {
+fn open(_obj: &<Self::Driver as drm::Driver>::Object, _file: &DriverFile<Self>) -> Result {
Ok(())
}
/// Close a handle to an existing object, associated with a File.
-fn close(
-_obj: &<<T as IntoGEMObject>::Driver as drm::Driver>::Object,
-_file: &drm::File<<<T as IntoGEMObject>::Driver as drm::Driver>::File>,
-) {
-}
+fn close(_obj: &<Self::Driver as drm::Driver>::Object, _file: &DriverFile<Self>) {}
}
/// Trait that represents a GEM object subtype
pub trait IntoGEMObject: Sized + super::private::Sealed + AlwaysRefCounted {
/// Owning driver for this type
type Driver: drm::Driver;
/// Returns a reference to the raw `drm_gem_object` structure, which must be valid as long as
/// this owning object is valid.
fn as_raw(&self) -> *mut bindings::drm_gem_object;
@@ -74,25 +75,16 @@ unsafe impl<T: IntoGEMObject> AlwaysRefCounted for T {
}
}
-/// Trait which must be implemented by drivers using base GEM objects.
-pub trait DriverObject: BaseDriverObject<Object<Self>> {
-/// Parent `Driver` for this object.
-type Driver: drm::Driver;
-}
-extern "C" fn open_callback<T: BaseDriverObject<U>, U: BaseObject>(
+extern "C" fn open_callback<T: DriverObject>(
raw_obj: *mut bindings::drm_gem_object,
raw_file: *mut bindings::drm_file,
) -> core::ffi::c_int {
// SAFETY: `open_callback` is only ever called with a valid pointer to a `struct drm_file`.
-let file = unsafe {
-drm::File::<<<U as IntoGEMObject>::Driver as drm::Driver>::File>::from_raw(raw_file)
-};
-// SAFETY: `open_callback` is specified in the AllocOps structure for `Object<T>`, ensuring that
-// `raw_obj` is indeed contained within a `Object<T>`.
-let obj = unsafe {
-<<<U as IntoGEMObject>::Driver as drm::Driver>::Object as IntoGEMObject>::from_raw(raw_obj)
-};
+let file = unsafe { DriverFile::<T>::from_raw(raw_file) };
+// SAFETY: `open_callback` is specified in the AllocOps structure for `DriverObject<T>`,
+// ensuring that `raw_obj` is contained within a `DriverObject<T>`
+let obj = unsafe { <<T::Driver as drm::Driver>::Object as IntoGEMObject>::from_raw(raw_obj) };
match T::open(obj, file) {
Err(e) => e.to_errno(),
@@ -100,26 +92,21 @@ extern "C" fn open_callback<T: BaseDriverObject<U>, U: BaseObject>(
}
}
-extern "C" fn close_callback<T: BaseDriverObject<U>, U: BaseObject>(
+extern "C" fn close_callback<T: DriverObject>(
raw_obj: *mut bindings::drm_gem_object,
raw_file: *mut bindings::drm_file,
) {
// SAFETY: `open_callback` is only ever called with a valid pointer to a `struct drm_file`.
-let file = unsafe {
-drm::File::<<<U as IntoGEMObject>::Driver as drm::Driver>::File>::from_raw(raw_file)
-};
+let file = unsafe { DriverFile::<T>::from_raw(raw_file) };
// SAFETY: `close_callback` is specified in the AllocOps structure for `Object<T>`, ensuring
// that `raw_obj` is indeed contained within a `Object<T>`.
-let obj = unsafe {
-<<<U as IntoGEMObject>::Driver as drm::Driver>::Object as IntoGEMObject>::from_raw(raw_obj)
-};
+let obj = unsafe { <<T::Driver as drm::Driver>::Object as IntoGEMObject>::from_raw(raw_obj) };
T::close(obj, file);
}
impl<T: DriverObject> IntoGEMObject for Object<T> {
type Driver = T::Driver;
fn as_raw(&self) -> *mut bindings::drm_gem_object {
self.obj.get()
}
@@ -141,10 +128,12 @@ pub trait BaseObject: IntoGEMObject {
/// Creates a new handle for the object associated with a given `File`
/// (or returns an existing one).
-fn create_handle(
-&self,
-file: &drm::File<<<Self as IntoGEMObject>::Driver as drm::Driver>::File>,
-) -> Result<u32> {
+fn create_handle<D, F>(&self, file: &drm::File<F>) -> Result<u32>
+where
+Self: AllocImpl<Driver = D>,
+D: drm::Driver<Object = Self, File = F>,
+F: drm::file::DriverFile<Driver = D>,
+{
let mut handle: u32 = 0;
// SAFETY: The arguments are all valid per the type invariants.
to_result(unsafe {
@@ -154,10 +143,12 @@ pub trait BaseObject: IntoGEMObject {
}
/// Looks up an object by its handle for a given `File`.
-fn lookup_handle(
-file: &drm::File<<<Self as IntoGEMObject>::Driver as drm::Driver>::File>,
-handle: u32,
-) -> Result<ARef<Self>> {
+fn lookup_handle<D, F>(file: &drm::File<F>, handle: u32) -> Result<ARef<Self>>
+where
+Self: AllocImpl<Driver = D>,
+D: drm::Driver<Object = Self, File = F>,
+F: drm::file::DriverFile<Driver = D>,
+{
// SAFETY: The arguments are all valid per the type invariants.
let ptr = unsafe { bindings::drm_gem_object_lookup(file.as_raw().cast(), handle) };
if ptr.is_null() {
@@ -207,13 +198,10 @@ pub struct Object<T: DriverObject + Send + Sync> {
}
impl<T: DriverObject> Object<T> {
-    /// The size of this object's structure.
-    pub const SIZE: usize = mem::size_of::<Self>();
const OBJECT_FUNCS: bindings::drm_gem_object_funcs = bindings::drm_gem_object_funcs {
free: Some(Self::free_callback),
-        open: Some(open_callback::<T, Object<T>>),
-        close: Some(close_callback::<T, Object<T>>),
+        open: Some(open_callback::<T>),
+        close: Some(close_callback::<T>),
print_info: None,
export: None,
pin: None,
@@ -296,6 +284,8 @@ impl<T: DriverObject> Deref for Object<T> {
}
impl<T: DriverObject> AllocImpl for Object<T> {
type Driver = T::Driver;
const ALLOC_OPS: AllocOps = AllocOps {
gem_create_object: None,
prime_handle_to_fd: None,


@@ -83,7 +83,7 @@ pub mod internal {
///
/// ```ignore
/// fn foo(device: &kernel::drm::Device<Self>,
-///            data: &Opaque<uapi::argument_type>,
+///            data: &mut uapi::argument_type,
/// file: &kernel::drm::File<Self::File>,
/// ) -> Result<u32>
/// ```
@@ -138,9 +138,12 @@ macro_rules! declare_drm_ioctls {
// SAFETY: The ioctl argument has size `_IOC_SIZE(cmd)`, which we
// asserted above matches the size of this type, and all bit patterns of
// UAPI structs must be valid.
-                        let data = unsafe {
-                            &*(raw_data as *const $crate::types::Opaque<$crate::uapi::$struct>)
-                        };
+                        // The `ioctl` argument is exclusively owned by the handler
+                        // and guaranteed by the C implementation (`drm_ioctl()`) to remain
+                        // valid for the entire lifetime of the reference taken here.
+                        // There is no concurrent access or aliasing; no other references
+                        // to this object exist during this call.
+                        let data = unsafe { &mut *(raw_data.cast::<$crate::uapi::$struct>()) };
// SAFETY: This is just the DRM file structure
let file = unsafe { $crate::drm::File::from_raw(raw_file) };


@@ -19,6 +19,7 @@
// Stable since Rust 1.79.0.
#![feature(generic_nonzero)]
#![feature(inline_const)]
#![feature(pointer_is_aligned)]
//
// Stable since Rust 1.81.0.
#![feature(lint_reasons)]
@@ -121,6 +122,7 @@ pub mod ptr;
pub mod rbtree;
pub mod regulator;
pub mod revocable;
pub mod scatterlist;
pub mod security;
pub mod seq_file;
pub mod sizes;


@@ -9,7 +9,12 @@ use crate::{
error::Result,
uaccess::UserSliceReader,
};
-use core::ptr::{self, NonNull};
+use core::{
+    marker::PhantomData,
+    mem::ManuallyDrop,
+    ops::Deref,
+    ptr::{self, NonNull},
+};
/// A bitwise shift for the page size.
pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
@@ -30,6 +35,86 @@ pub const fn page_align(addr: usize) -> usize {
(addr + (PAGE_SIZE - 1)) & PAGE_MASK
}
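The `page_align()` round-up shown above can be checked in isolation. This is a minimal standalone sketch that assumes 4 KiB pages (the kernel's actual `PAGE_SHIFT` is architecture-dependent):

```rust
// Assumed 4 KiB page constants mirroring the kernel's definitions.
const PAGE_SHIFT: usize = 12;
const PAGE_SIZE: usize = 1 << PAGE_SHIFT;
const PAGE_MASK: usize = !(PAGE_SIZE - 1);

/// Rounds `addr` up to the next page boundary, exactly as in `page_align()`.
const fn page_align(addr: usize) -> usize {
    (addr + (PAGE_SIZE - 1)) & PAGE_MASK
}

fn main() {
    assert_eq!(page_align(0), 0);
    assert_eq!(page_align(1), PAGE_SIZE);
    assert_eq!(page_align(PAGE_SIZE), PAGE_SIZE);
    assert_eq!(page_align(PAGE_SIZE + 1), 2 * PAGE_SIZE);
    println!("ok");
}
```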
/// Representation of a non-owning reference to a [`Page`].
///
/// This type provides a borrowed version of a [`Page`] that is owned by some other entity, e.g. a
/// [`Vmalloc`] allocation such as [`VBox`].
///
/// # Example
///
/// ```
/// # use kernel::{bindings, prelude::*};
/// use kernel::page::{BorrowedPage, Page, PAGE_SIZE};
/// # use core::{mem::MaybeUninit, ptr, ptr::NonNull };
///
/// fn borrow_page<'a>(vbox: &'a mut VBox<MaybeUninit<[u8; PAGE_SIZE]>>) -> BorrowedPage<'a> {
/// let ptr = ptr::from_ref(&**vbox);
///
/// // SAFETY: `ptr` is a valid pointer to `Vmalloc` memory.
/// let page = unsafe { bindings::vmalloc_to_page(ptr.cast()) };
///
/// // SAFETY: `vmalloc_to_page` returns a valid pointer to a `struct page` for a valid
/// // pointer to `Vmalloc` memory.
/// let page = unsafe { NonNull::new_unchecked(page) };
///
/// // SAFETY:
/// // - `self.0` is a valid pointer to a `struct page`.
/// // - `self.0` is valid for the entire lifetime of `self`.
/// unsafe { BorrowedPage::from_raw(page) }
/// }
///
/// let mut vbox = VBox::<[u8; PAGE_SIZE]>::new_uninit(GFP_KERNEL)?;
/// let page = borrow_page(&mut vbox);
///
/// // SAFETY: There is no concurrent read or write to this page.
/// unsafe { page.fill_zero_raw(0, PAGE_SIZE)? };
/// # Ok::<(), Error>(())
/// ```
///
/// # Invariants
///
/// The borrowed underlying pointer to a `struct page` is valid for the entire lifetime `'a`.
///
/// [`VBox`]: kernel::alloc::VBox
/// [`Vmalloc`]: kernel::alloc::allocator::Vmalloc
pub struct BorrowedPage<'a>(ManuallyDrop<Page>, PhantomData<&'a Page>);
impl<'a> BorrowedPage<'a> {
/// Constructs a [`BorrowedPage`] from a raw pointer to a `struct page`.
///
/// # Safety
///
/// - `ptr` must point to a valid `bindings::page`.
/// - `ptr` must remain valid for the entire lifetime `'a`.
pub unsafe fn from_raw(ptr: NonNull<bindings::page>) -> Self {
let page = Page { page: ptr };
// INVARIANT: The safety requirements guarantee that `ptr` is valid for the entire lifetime
// `'a`.
Self(ManuallyDrop::new(page), PhantomData)
}
}
impl<'a> Deref for BorrowedPage<'a> {
type Target = Page;
fn deref(&self) -> &Self::Target {
&self.0
}
}
/// Trait to be implemented by types which provide an [`Iterator`] implementation of
/// [`BorrowedPage`] items, such as [`VmallocPageIter`](kernel::alloc::allocator::VmallocPageIter).
pub trait AsPageIter {
/// The [`Iterator`] type, e.g. [`VmallocPageIter`](kernel::alloc::allocator::VmallocPageIter).
type Iter<'a>: Iterator<Item = BorrowedPage<'a>>
where
Self: 'a;
/// Returns an [`Iterator`] of [`BorrowedPage`] items over all pages owned by `self`.
fn page_iter(&mut self) -> Self::Iter<'_>;
}
/// A pointer to a page that owns the page allocation.
///
/// # Invariants

rust/kernel/scatterlist.rs (new file, 491 lines)

@@ -0,0 +1,491 @@
// SPDX-License-Identifier: GPL-2.0
//! Abstractions for scatter-gather lists.
//!
//! C header: [`include/linux/scatterlist.h`](srctree/include/linux/scatterlist.h)
//!
//! Scatter-gather (SG) I/O is a memory access technique that allows devices to perform DMA
//! operations on data buffers that are not physically contiguous in memory. It works by creating a
//! "scatter-gather list", an array where each entry specifies the address and length of a
//! physically contiguous memory segment.
//!
//! The device's DMA controller can then read this list and process the segments sequentially as
//! part of one logical I/O request. This avoids the need for a single, large, physically contiguous
//! memory buffer, which can be difficult or impossible to allocate.
//!
//! This module provides safe Rust abstractions over the kernel's `struct scatterlist` and
//! `struct sg_table` types.
//!
//! The main entry point is the [`SGTable`] type, which represents a complete scatter-gather table.
//! It can be either:
//!
//! - An owned table ([`SGTable<Owned<P>>`]), created from a Rust memory buffer (e.g., [`VVec`]).
//! This type manages the allocation of the `struct sg_table`, the DMA mapping of the buffer, and
//! the automatic cleanup of all resources.
//! - A borrowed reference (&[`SGTable`]), which provides safe, read-only access to a table that was
//! allocated by other (e.g., C) code.
//!
//! Individual entries in the table are represented by [`SGEntry`], which can be accessed by
//! iterating over an [`SGTable`].
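The data structure described above can be sketched in plain Rust; this is a conceptual model only (all names are illustrative, not the kernel API):

```rust
// Conceptual sketch: a scatter-gather list is an array of physically
// contiguous (address, length) segments that a DMA engine walks sequentially
// as one logical transfer.
#[derive(Debug)]
struct Segment {
    dma_address: u64,
    len: u32,
}

// The total transfer length is the sum over all segments, even though the
// segments themselves are scattered in physical memory.
fn total_len(sg_list: &[Segment]) -> u64 {
    sg_list.iter().map(|s| u64::from(s.len)).sum()
}

fn main() {
    let sgl = [
        Segment { dma_address: 0x1000, len: 4096 },
        Segment { dma_address: 0x8000, len: 4096 },
    ];
    assert_eq!(total_len(&sgl), 8192);
    println!("ok");
}
```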
use crate::{
alloc,
alloc::allocator::VmallocPageIter,
bindings,
device::{Bound, Device},
devres::Devres,
dma, error,
io::resource::ResourceSize,
page,
prelude::*,
types::{ARef, Opaque},
};
use core::{ops::Deref, ptr::NonNull};
/// A single entry in a scatter-gather list.
///
/// An `SGEntry` represents a single, physically contiguous segment of memory that has been mapped
/// for DMA.
///
/// Instances of this struct are obtained by iterating over an [`SGTable`]. Drivers do not create
/// or own [`SGEntry`] objects directly.
#[repr(transparent)]
pub struct SGEntry(Opaque<bindings::scatterlist>);
// SAFETY: `SGEntry` can be sent to any task.
unsafe impl Send for SGEntry {}
// SAFETY: `SGEntry` has no interior mutability and can be accessed concurrently.
unsafe impl Sync for SGEntry {}
impl SGEntry {
/// Convert a raw `struct scatterlist *` to a `&'a SGEntry`.
///
/// # Safety
///
/// Callers must ensure that the `struct scatterlist` pointed to by `ptr` is valid for the
/// lifetime `'a`.
#[inline]
unsafe fn from_raw<'a>(ptr: *mut bindings::scatterlist) -> &'a Self {
// SAFETY: The safety requirements of this function guarantee that `ptr` is a valid pointer
// to a `struct scatterlist` for the duration of `'a`.
unsafe { &*ptr.cast() }
}
/// Obtain the raw `struct scatterlist *`.
#[inline]
fn as_raw(&self) -> *mut bindings::scatterlist {
self.0.get()
}
/// Returns the DMA address of this SG entry.
///
/// This is the address that the device should use to access the memory segment.
#[inline]
pub fn dma_address(&self) -> dma::DmaAddress {
// SAFETY: `self.as_raw()` is a valid pointer to a `struct scatterlist`.
unsafe { bindings::sg_dma_address(self.as_raw()) }
}
/// Returns the length of this SG entry in bytes.
#[inline]
pub fn dma_len(&self) -> ResourceSize {
#[allow(clippy::useless_conversion)]
// SAFETY: `self.as_raw()` is a valid pointer to a `struct scatterlist`.
unsafe { bindings::sg_dma_len(self.as_raw()) }.into()
}
}
/// The borrowed generic type of an [`SGTable`], representing a borrowed or externally managed
/// table.
#[repr(transparent)]
pub struct Borrowed(Opaque<bindings::sg_table>);
// SAFETY: `Borrowed` can be sent to any task.
unsafe impl Send for Borrowed {}
// SAFETY: `Borrowed` has no interior mutability and can be accessed concurrently.
unsafe impl Sync for Borrowed {}
/// A scatter-gather table.
///
/// This struct is a wrapper around the kernel's `struct sg_table`. It manages a list of DMA-mapped
/// memory segments that can be passed to a device for I/O operations.
///
/// The generic parameter `T` distinguishes between owned and borrowed tables.
///
/// - [`SGTable<Owned>`]: An owned table created and managed entirely by Rust code. It handles
/// allocation, DMA mapping, and cleanup of all associated resources. See [`SGTable::new`].
/// - [`SGTable<Borrowed>`] (or simply [`SGTable`]): Represents a table whose lifetime is managed
/// externally. It can be used safely via a borrowed reference `&'a SGTable`, where `'a` is the
/// external lifetime.
///
/// All [`SGTable`] variants can be iterated over the individual [`SGEntry`]s.
#[repr(transparent)]
#[pin_data]
pub struct SGTable<T: private::Sealed = Borrowed> {
#[pin]
inner: T,
}
impl SGTable {
/// Creates a borrowed `&'a SGTable` from a raw `struct sg_table` pointer.
///
/// This allows safe access to an `sg_table` that is managed elsewhere (for example, in C code).
///
/// # Safety
///
/// Callers must ensure that:
///
/// - the `struct sg_table` pointed to by `ptr` is valid for the entire lifetime of `'a`,
/// - the data behind `ptr` is not modified concurrently for the duration of `'a`.
#[inline]
pub unsafe fn from_raw<'a>(ptr: *mut bindings::sg_table) -> &'a Self {
// SAFETY: The safety requirements of this function guarantee that `ptr` is a valid pointer
// to a `struct sg_table` for the duration of `'a`.
unsafe { &*ptr.cast() }
}
#[inline]
fn as_raw(&self) -> *mut bindings::sg_table {
self.inner.0.get()
}
/// Returns an [`SGTableIter`] bound to the lifetime of `self`.
pub fn iter(&self) -> SGTableIter<'_> {
// SAFETY: `self.as_raw()` is a valid pointer to a `struct sg_table`.
let nents = unsafe { (*self.as_raw()).nents };
let pos = if nents > 0 {
// SAFETY: `self.as_raw()` is a valid pointer to a `struct sg_table`.
let ptr = unsafe { (*self.as_raw()).sgl };
// SAFETY: `ptr` is guaranteed to be a valid pointer to a `struct scatterlist`.
Some(unsafe { SGEntry::from_raw(ptr) })
} else {
None
};
SGTableIter { pos, nents }
}
}
/// Represents the DMA mapping state of a `struct sg_table`.
///
/// This is used as an inner type of [`Owned`] to manage the DMA mapping lifecycle.
///
/// # Invariants
///
/// - `sgt` is a valid pointer to a `struct sg_table` for the entire lifetime of the
/// [`DmaMappedSgt`].
/// - `sgt` is always DMA mapped.
struct DmaMappedSgt {
sgt: NonNull<bindings::sg_table>,
dev: ARef<Device>,
dir: dma::DataDirection,
}
// SAFETY: `DmaMappedSgt` can be sent to any task.
unsafe impl Send for DmaMappedSgt {}
// SAFETY: `DmaMappedSgt` has no interior mutability and can be accessed concurrently.
unsafe impl Sync for DmaMappedSgt {}
impl DmaMappedSgt {
/// # Safety
///
/// - `sgt` must be a valid pointer to a `struct sg_table` for the entire lifetime of the
/// returned [`DmaMappedSgt`].
/// - The caller must guarantee that `sgt` remains DMA mapped for the entire lifetime of
/// [`DmaMappedSgt`].
unsafe fn new(
sgt: NonNull<bindings::sg_table>,
dev: &Device<Bound>,
dir: dma::DataDirection,
) -> Result<Self> {
// SAFETY:
// - `dev.as_raw()` is a valid pointer to a `struct device`, which is guaranteed to be
// bound to a driver for the duration of this call.
// - `sgt` is a valid pointer to a `struct sg_table`.
error::to_result(unsafe {
bindings::dma_map_sgtable(dev.as_raw(), sgt.as_ptr(), dir.into(), 0)
})?;
// INVARIANT: By the safety requirements of this function it is guaranteed that `sgt` is
// valid for the entire lifetime of this object instance.
Ok(Self {
sgt,
dev: dev.into(),
dir,
})
}
}
impl Drop for DmaMappedSgt {
#[inline]
fn drop(&mut self) {
// SAFETY:
// - `self.dev.as_raw()` is a pointer to a valid `struct device`.
// - `self.dev` is the same device the mapping has been created for in `Self::new()`.
// - `self.sgt.as_ptr()` is a valid pointer to a `struct sg_table` by the type invariants
// of `Self`.
// - `self.dir` is the same `dma::DataDirection` the mapping has been created with in
// `Self::new()`.
unsafe {
bindings::dma_unmap_sgtable(self.dev.as_raw(), self.sgt.as_ptr(), self.dir.into(), 0)
};
}
}
/// A transparent wrapper around a `struct sg_table`.
///
/// While we could also create the `struct sg_table` in the constructor of [`Owned`], we can't tear
/// down the `struct sg_table` in [`Owned::drop`]; the drop order in [`Owned`] matters.
#[repr(transparent)]
struct RawSGTable(Opaque<bindings::sg_table>);
// SAFETY: `RawSGTable` can be sent to any task.
unsafe impl Send for RawSGTable {}
// SAFETY: `RawSGTable` has no interior mutability and can be accessed concurrently.
unsafe impl Sync for RawSGTable {}
impl RawSGTable {
/// # Safety
///
/// - `pages` must be a slice of valid `struct page *`.
/// - The pages pointed to by `pages` must remain valid for the entire lifetime of the returned
/// [`RawSGTable`].
unsafe fn new(
pages: &mut [*mut bindings::page],
size: usize,
max_segment: u32,
flags: alloc::Flags,
) -> Result<Self> {
// `sg_alloc_table_from_pages_segment()` expects at least one page, otherwise it
// produces a NULL pointer dereference.
if pages.is_empty() {
return Err(EINVAL);
}
let sgt = Opaque::zeroed();
// SAFETY:
// - `sgt.get()` is a valid pointer to uninitialized memory.
// - As by the check above, `pages` is not empty.
error::to_result(unsafe {
bindings::sg_alloc_table_from_pages_segment(
sgt.get(),
pages.as_mut_ptr(),
pages.len().try_into()?,
0,
size,
max_segment,
flags.as_raw(),
)
})?;
Ok(Self(sgt))
}
#[inline]
fn as_raw(&self) -> *mut bindings::sg_table {
self.0.get()
}
}
impl Drop for RawSGTable {
#[inline]
fn drop(&mut self) {
// SAFETY: `sgt` is a valid and initialized `struct sg_table`.
unsafe { bindings::sg_free_table(self.0.get()) };
}
}
/// The [`Owned`] generic type of an [`SGTable`].
///
/// A [`SGTable<Owned>`] signifies that the [`SGTable`] owns all associated resources:
///
/// - The backing memory pages.
/// - The `struct sg_table` allocation (`sgt`).
/// - The DMA mapping, managed through a [`Devres`]-managed `DmaMappedSgt`.
///
/// Users interact with this type through the [`SGTable`] handle and do not need to manage
/// [`Owned`] directly.
#[pin_data]
pub struct Owned<P> {
// Note: The drop order is relevant; we first have to unmap the `struct sg_table`, then free the
// `struct sg_table` and finally free the backing pages.
#[pin]
dma: Devres<DmaMappedSgt>,
sgt: RawSGTable,
_pages: P,
}
// SAFETY: `Owned` can be sent to any task if `P` can be sent to any task.
unsafe impl<P: Send> Send for Owned<P> {}
// SAFETY: `Owned` has no interior mutability and can be accessed concurrently if `P` can be
// accessed concurrently.
unsafe impl<P: Sync> Sync for Owned<P> {}
impl<P> Owned<P>
where
for<'a> P: page::AsPageIter<Iter<'a> = VmallocPageIter<'a>> + 'static,
{
fn new(
dev: &Device<Bound>,
mut pages: P,
dir: dma::DataDirection,
flags: alloc::Flags,
) -> Result<impl PinInit<Self, Error> + '_> {
let page_iter = pages.page_iter();
let size = page_iter.size();
let mut page_vec: KVec<*mut bindings::page> =
KVec::with_capacity(page_iter.page_count(), flags)?;
for page in page_iter {
page_vec.push(page.as_ptr(), flags)?;
}
// `dma_max_mapping_size` returns `size_t`, but `sg_alloc_table_from_pages_segment()` takes
// an `unsigned int`.
//
// SAFETY: `dev.as_raw()` is a valid pointer to a `struct device`.
let max_segment = match unsafe { bindings::dma_max_mapping_size(dev.as_raw()) } {
0 => u32::MAX,
max_segment => u32::try_from(max_segment).unwrap_or(u32::MAX),
};
Ok(try_pin_init!(&this in Self {
// SAFETY:
// - `page_vec` is a `KVec` of valid `struct page *` obtained from `pages`.
// - The pages contained in `pages` remain valid for the entire lifetime of the
// `RawSGTable`.
sgt: unsafe { RawSGTable::new(&mut page_vec, size, max_segment, flags) }?,
dma <- {
// SAFETY: `this` is a valid pointer to uninitialized memory.
let sgt = unsafe { &raw mut (*this.as_ptr()).sgt }.cast();
// SAFETY: `sgt` is guaranteed to be non-null.
let sgt = unsafe { NonNull::new_unchecked(sgt) };
// SAFETY:
// - It is guaranteed that the object returned by `DmaMappedSgt::new` won't out-live
// `sgt`.
// - `sgt` is never DMA unmapped manually.
Devres::new(dev, unsafe { DmaMappedSgt::new(sgt, dev, dir) })
},
_pages: pages,
}))
}
}
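The segment-size clamping in `Owned::new()` above can be isolated into a small standalone sketch (plain Rust, function name illustrative): `dma_max_mapping_size()` returns a `size_t` while `sg_alloc_table_from_pages_segment()` takes an `unsigned int`, and the code treats a returned 0 as "no limit":

```rust
// Clamp a usize maximum mapping size into the u32 expected by the sg table
// allocation, treating 0 as unlimited.
fn clamp_max_segment(max_mapping_size: usize) -> u32 {
    match max_mapping_size {
        0 => u32::MAX,
        m => u32::try_from(m).unwrap_or(u32::MAX),
    }
}

fn main() {
    assert_eq!(clamp_max_segment(0), u32::MAX);
    assert_eq!(clamp_max_segment(65536), 65536);
    // Values that do not fit in u32 saturate instead of failing.
    assert_eq!(clamp_max_segment(usize::MAX), u32::MAX);
    println!("ok");
}
```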
impl<P> SGTable<Owned<P>>
where
for<'a> P: page::AsPageIter<Iter<'a> = VmallocPageIter<'a>> + 'static,
{
/// Allocates a new scatter-gather table from the given pages and maps it for DMA.
///
/// This constructor creates a new [`SGTable<Owned>`] that takes ownership of `P`.
/// It allocates a `struct sg_table`, populates it with entries corresponding to the physical
/// pages of `P`, and maps the table for DMA with the specified [`Device`] and
/// [`dma::DataDirection`].
///
/// The DMA mapping is managed through [`Devres`], ensuring that the DMA mapping is unmapped
/// once the associated [`Device`] is unbound, or when the [`SGTable<Owned>`] is dropped.
///
/// # Parameters
///
/// * `dev`: The [`Device`] that will be performing the DMA.
/// * `pages`: The entity providing the backing pages. It must implement [`page::AsPageIter`].
/// The ownership of this entity is moved into the new [`SGTable<Owned>`].
/// * `dir`: The [`dma::DataDirection`] of the DMA transfer.
/// * `flags`: Allocation flags for internal allocations (e.g., [`GFP_KERNEL`]).
///
/// # Examples
///
/// ```
/// use kernel::{
/// device::{Bound, Device},
/// dma, page,
/// prelude::*,
/// scatterlist::{SGTable, Owned},
/// };
///
/// fn test(dev: &Device<Bound>) -> Result {
/// let size = 4 * page::PAGE_SIZE;
/// let pages = VVec::<u8>::with_capacity(size, GFP_KERNEL)?;
///
/// let sgt = KBox::pin_init(SGTable::new(
/// dev,
/// pages,
/// dma::DataDirection::ToDevice,
/// GFP_KERNEL,
/// ), GFP_KERNEL)?;
///
/// Ok(())
/// }
/// ```
pub fn new(
dev: &Device<Bound>,
pages: P,
dir: dma::DataDirection,
flags: alloc::Flags,
) -> impl PinInit<Self, Error> + '_ {
try_pin_init!(Self {
inner <- Owned::new(dev, pages, dir, flags)?
})
}
}
impl<P> Deref for SGTable<Owned<P>> {
type Target = SGTable;
#[inline]
fn deref(&self) -> &Self::Target {
// SAFETY:
// - `self.inner.sgt.as_raw()` is a valid pointer to a `struct sg_table` for the entire
// lifetime of `self`.
// - The backing `struct sg_table` is not modified for the entire lifetime of `self`.
unsafe { SGTable::from_raw(self.inner.sgt.as_raw()) }
}
}
mod private {
pub trait Sealed {}
impl Sealed for super::Borrowed {}
impl<P> Sealed for super::Owned<P> {}
}
/// An [`Iterator`] over the DMA mapped [`SGEntry`] items of an [`SGTable`].
///
/// Note that the existence of an [`SGTableIter`] does not guarantee that the [`SGEntry`] items
/// actually remain DMA mapped; they are prone to be unmapped on device unbind.
pub struct SGTableIter<'a> {
pos: Option<&'a SGEntry>,
/// The number of DMA mapped entries in a `struct sg_table`.
nents: c_uint,
}
impl<'a> Iterator for SGTableIter<'a> {
type Item = &'a SGEntry;
fn next(&mut self) -> Option<Self::Item> {
let entry = self.pos?;
self.nents = self.nents.saturating_sub(1);
// SAFETY: `entry.as_raw()` is a valid pointer to a `struct scatterlist`.
let next = unsafe { bindings::sg_next(entry.as_raw()) };
self.pos = (!next.is_null() && self.nents > 0).then(|| {
// SAFETY: If `next` is not NULL, `sg_next()` guarantees to return a valid pointer to
// the next `struct scatterlist`.
unsafe { SGEntry::from_raw(next) }
});
Some(entry)
}
}
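The iteration strategy of `SGTableIter` (follow a linked chain, but yield at most `nents` items, since only the first `nents` entries of a scatterlist are DMA mapped) can be modeled with a plain-Rust linked list; all types here are illustrative:

```rust
// A toy singly linked chain standing in for chained scatterlist entries.
struct Entry {
    val: u32,
    next: Option<Box<Entry>>,
}

// Iterator that walks the chain but stops after `nents` items, mirroring
// SGTableIter::next() above.
struct Iter<'a> {
    pos: Option<&'a Entry>,
    nents: u32,
}

impl<'a> Iterator for Iter<'a> {
    type Item = &'a Entry;

    fn next(&mut self) -> Option<Self::Item> {
        let entry = self.pos?;
        self.nents = self.nents.saturating_sub(1);
        // Advance only while mapped entries remain.
        self.pos = if self.nents > 0 { entry.next.as_deref() } else { None };
        Some(entry)
    }
}

fn main() {
    let chain = Entry {
        val: 1,
        next: Some(Box::new(Entry {
            val: 2,
            next: Some(Box::new(Entry { val: 3, next: None })),
        })),
    };
    // Only the first two entries are "mapped"; the third must not be yielded.
    let it = Iter { pos: Some(&chain), nents: 2 };
    let vals: Vec<u32> = it.map(|e| e.val).collect();
    assert_eq!(vals, vec![1, 2]);
    println!("ok");
}
```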


@@ -2,6 +2,8 @@
//! Traits for transmuting types.
use core::mem::size_of;
/// Types for which any bit pattern is valid.
///
/// Not all values are valid for all types. For example, a `bool` must be either zero or one, so
@@ -9,10 +11,93 @@
///
/// It's okay for the type to have padding, as initializing those bytes has no effect.
///
/// # Examples
///
/// ```
/// use kernel::transmute::FromBytes;
///
/// # fn test() -> Option<()> {
/// let raw = [1, 2, 3, 4];
///
/// let result = u32::from_bytes(&raw)?;
///
/// #[cfg(target_endian = "little")]
/// assert_eq!(*result, 0x4030201);
///
/// #[cfg(target_endian = "big")]
/// assert_eq!(*result, 0x1020304);
///
/// # Some(()) }
/// # test().ok_or(EINVAL)?;
/// # Ok::<(), Error>(())
/// ```
///
/// # Safety
///
/// All bit-patterns must be valid for this type. This type must not have interior mutability.
-pub unsafe trait FromBytes {}
+pub unsafe trait FromBytes {
/// Converts a slice of bytes to a reference to `Self`.
///
/// Succeeds if the reference is properly aligned, and the size of `bytes` is equal to that of
/// `Self` and different from zero.
///
/// Otherwise, returns [`None`].
fn from_bytes(bytes: &[u8]) -> Option<&Self>
where
Self: Sized,
{
let slice_ptr = bytes.as_ptr().cast::<Self>();
let size = size_of::<Self>();
#[allow(clippy::incompatible_msrv)]
if bytes.len() == size && slice_ptr.is_aligned() {
// SAFETY: Size and alignment were just checked.
unsafe { Some(&*slice_ptr) }
} else {
None
}
}
/// Converts a mutable slice of bytes to a reference to `Self`.
///
/// Succeeds if the reference is properly aligned, and the size of `bytes` is equal to that of
/// `Self` and different from zero.
///
/// Otherwise, returns [`None`].
fn from_bytes_mut(bytes: &mut [u8]) -> Option<&mut Self>
where
Self: AsBytes + Sized,
{
let slice_ptr = bytes.as_mut_ptr().cast::<Self>();
let size = size_of::<Self>();
#[allow(clippy::incompatible_msrv)]
if bytes.len() == size && slice_ptr.is_aligned() {
// SAFETY: Size and alignment were just checked.
unsafe { Some(&mut *slice_ptr) }
} else {
None
}
}
/// Creates an owned instance of `Self` by copying `bytes`.
///
/// Unlike [`FromBytes::from_bytes`], which requires aligned input, this method can be used on
/// non-aligned data at the cost of a copy.
fn from_bytes_copy(bytes: &[u8]) -> Option<Self>
where
Self: Sized,
{
if bytes.len() == size_of::<Self>() {
// SAFETY: we just verified that `bytes` has the same size as `Self`, and per the
// invariants of `FromBytes`, any byte sequence of the correct length is a valid value
// for `Self`.
Some(unsafe { core::ptr::read_unaligned(bytes.as_ptr().cast::<Self>()) })
} else {
None
}
}
}
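The contrast between the reference-based and copy-based conversions above can be shown for a concrete type. This is a standalone plain-Rust sketch (not the kernel crate), using `u32` since any bit pattern is valid for it: a reference cast demands correct size *and* alignment, while a copy via `read_unaligned` only demands correct size.

```rust
use core::mem::{align_of, size_of};

// Reference-based conversion: fails on size or alignment mismatch.
fn u32_ref_from_bytes(bytes: &[u8]) -> Option<&u32> {
    let ptr = bytes.as_ptr().cast::<u32>();
    if bytes.len() == size_of::<u32>() && (ptr as usize) % align_of::<u32>() == 0 {
        // SAFETY: size and alignment were just checked; all bit patterns are
        // valid for `u32`.
        Some(unsafe { &*ptr })
    } else {
        None
    }
}

// Copy-based conversion: tolerates any alignment at the cost of a copy.
fn u32_copy_from_bytes(bytes: &[u8]) -> Option<u32> {
    if bytes.len() == size_of::<u32>() {
        // SAFETY: the length matches, and `read_unaligned` permits unaligned
        // source pointers.
        Some(unsafe { core::ptr::read_unaligned(bytes.as_ptr().cast::<u32>()) })
    } else {
        None
    }
}

fn main() {
    let buf = [1u8, 2, 3, 4, 5];
    // Size mismatch fails for both strategies.
    assert!(u32_ref_from_bytes(&buf).is_none());
    assert!(u32_copy_from_bytes(&buf).is_none());
    // The copy-based variant succeeds on any correctly sized slice,
    // aligned or not.
    assert!(u32_copy_from_bytes(&buf[1..5]).is_some());
    println!("ok");
}
```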
macro_rules! impl_frombytes {
($($({$($generics:tt)*})? $t:ty, )*) => {
@@ -47,7 +132,32 @@ impl_frombytes! {
///
/// Values of this type may not contain any uninitialized bytes. This type must not have interior
/// mutability.
-pub unsafe trait AsBytes {}
+pub unsafe trait AsBytes {
/// Returns `self` as a slice of bytes.
fn as_bytes(&self) -> &[u8] {
// CAST: `Self` implements `AsBytes` thus all bytes of `self` are initialized.
let data = core::ptr::from_ref(self).cast::<u8>();
let len = core::mem::size_of_val(self);
// SAFETY: `data` is non-null and valid for reads of `len * size_of::<u8>()` bytes.
unsafe { core::slice::from_raw_parts(data, len) }
}
/// Returns `self` as a mutable slice of bytes.
fn as_bytes_mut(&mut self) -> &mut [u8]
where
Self: FromBytes,
{
// CAST: `Self` implements both `AsBytes` and `FromBytes` thus making `Self`
// bi-directionally transmutable to `[u8; size_of_val(self)]`.
let data = core::ptr::from_mut(self).cast::<u8>();
let len = core::mem::size_of_val(self);
// SAFETY: `data` is non-null and valid for reads and writes of `len * size_of::<u8>()`
// bytes.
unsafe { core::slice::from_raw_parts_mut(data, len) }
}
}
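The byte-view cast behind `as_bytes` can be demonstrated standalone for a padding-free `#[repr(C)]` struct; such a type could soundly implement `AsBytes`, since every one of its bytes is initialized. All names below are illustrative:

```rust
use core::{mem::size_of_val, slice};

#[repr(C)]
struct Pair {
    a: u32,
    b: u32,
}

fn as_bytes(p: &Pair) -> &[u8] {
    let data = core::ptr::from_ref(p).cast::<u8>();
    // SAFETY: `Pair` has no padding, so all `size_of_val(p)` bytes are
    // initialized and readable.
    unsafe { slice::from_raw_parts(data, size_of_val(p)) }
}

fn main() {
    let p = Pair { a: 1, b: 2 };
    let bytes = as_bytes(&p);
    assert_eq!(bytes.len(), 8);
    // Each field occupies 4 native-endian bytes.
    assert_eq!(u32::from_ne_bytes(bytes[0..4].try_into().unwrap()), 1);
    assert_eq!(u32::from_ne_bytes(bytes[4..8].try_into().unwrap()), 2);
    println!("ok");
}
```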
macro_rules! impl_asbytes {
($($({$($generics:tt)*})? $t:ty, )*) => {


@@ -356,18 +356,11 @@ struct ClosureWork<T> {
func: Option<T>,
}
-impl<T> ClosureWork<T> {
-    fn project(self: Pin<&mut Self>) -> &mut Option<T> {
-        // SAFETY: The `func` field is not structurally pinned.
-        unsafe { &mut self.get_unchecked_mut().func }
-    }
-}
impl<T: FnOnce()> WorkItem for ClosureWork<T> {
type Pointer = Pin<KBox<Self>>;
fn run(mut this: Pin<KBox<Self>>) {
-        if let Some(func) = this.as_mut().project().take() {
+        if let Some(func) = this.as_mut().project().func.take() {
(func)()
}
}
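The manual pin projection that this patch removes from `ClosureWork` can be demonstrated standalone in plain Rust: because `func` is not structurally pinned, handing out `&mut Option<T>` from a `Pin<&mut Self>` is sound.

```rust
use std::pin::Pin;

struct ClosureWork<T> {
    func: Option<T>,
}

impl<T> ClosureWork<T> {
    // Manual pin projection to the non-structurally-pinned `func` field.
    fn project(self: Pin<&mut Self>) -> &mut Option<T> {
        // SAFETY: the `func` field is not structurally pinned.
        unsafe { &mut self.get_unchecked_mut().func }
    }
}

fn main() {
    let mut work = Box::pin(ClosureWork { func: Some(|| 41 + 1) });
    // Take the closure out through the projection and run it exactly once.
    if let Some(f) = work.as_mut().project().take() {
        assert_eq!(f(), 42);
    }
    assert!(work.as_mut().project().is_none());
    println!("ok");
}
```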