patina_paging

version: 11.0.0
created_at: 2025-10-11 20:22:56.677883+00
updated_at: 2026-01-13 20:09:42.233738+00
description: Paging library for AArch64 & X64 architectures
repository: https://github.com/OpenDevicePartnership/patina-paging
id: 1878520
size: 3,707,324

README

CPU Paging Support


Introduction

This repo includes the X64/AArch64 paging logic. You can generate API documentation by running `cargo make doc`; to have the documentation open in your browser, run `cargo make doc-open`.

Public API

The main traits and structs for public consumption are `PageTable`, `PageAllocator`, `X64PageTable`, and `Aarch64PageTable`.

```rust
bitflags! {
    #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
    pub struct MemoryAttributes: u64 {
        // Memory Caching Attributes
        const Uncached          = 0x00000000_00000001u64;
        const WriteCombining    = 0x00000000_00000002u64;
        const WriteThrough      = 0x00000000_00000004u64;
        const Writeback         = 0x00000000_00000008u64;
        const UncachedExport    = 0x00000000_00000010u64;
        const WriteProtect      = 0x00000000_00001000u64;

        // Memory Access Attributes
        const ReadProtect       = 0x00000000_00002000u64;   // Maps to Present bit on X64
        const ExecuteProtect    = 0x00000000_00004000u64;   // Maps to NX bit on X64
        const ReadOnly          = 0x00000000_00020000u64;   // Maps to Read/Write bit on X64


        const CacheAttributesMask = Self::Uncached.bits() |
                                    Self::WriteCombining.bits() |
                                    Self::WriteThrough.bits() |
                                    Self::Writeback.bits() |
                                    Self::UncachedExport.bits() |
                                    Self::WriteProtect.bits();

        const AccessAttributesMask = Self::ReadProtect.bits() |
                                     Self::ExecuteProtect.bits() |
                                     Self::ReadOnly.bits();
    }
}
```
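
Compatible cache and access attributes are combined with bitwise OR, and the two mask constants split a combined value back into its components. Here is a small sketch using the standard `bitflags` operations; the crate-root import path is an assumption:

```rust
use patina_paging::MemoryAttributes;

fn main() {
    // A write-back, read-only mapping: one cache attribute OR'd with one access attribute.
    let attributes = MemoryAttributes::Writeback | MemoryAttributes::ReadOnly;

    // The mask constants separate the caching bits from the access bits.
    assert_eq!(attributes & MemoryAttributes::CacheAttributesMask, MemoryAttributes::Writeback);
    assert_eq!(attributes & MemoryAttributes::AccessAttributesMask, MemoryAttributes::ReadOnly);
    assert!(attributes.contains(MemoryAttributes::ReadOnly));
}
```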

```rust
pub trait PageTable {
    /// Function to map the designated memory region with the provided
    /// attributes. The requested memory region will be mapped with the specified
    /// attributes, regardless of the current mapping state of the region.
    ///
    /// ## Arguments
    /// * `address` - The memory address to map.
    /// * `size` - The memory size to map.
    /// * `attributes` - The memory attributes to map. The acceptable
    ///   input will be ExecuteProtect, ReadOnly, as well as Uncached,
    ///   WriteCombining, WriteThrough, Writeback, UncachedExport.
    ///   Compatible attributes can be "Ored"
    ///
    /// ## Errors
    /// * Returns `Ok(())` if successful else `Err(PtError)` if failed
    fn map_memory_region(&mut self, address: u64, size: u64, attributes: MemoryAttributes) -> Result<(), PtError>;

    /// Function to unmap the memory region provided by the caller. The
    /// requested memory region must be fully mapped prior to this call. The
    /// entire region does not need to have the same mapping state in order
    /// to unmap it.
    ///
    /// ## Arguments
    /// * `address` - The memory address to unmap.
    /// * `size` - The memory size to unmap.
    ///
    /// ## Errors
    /// * Returns `Ok(())` if successful else `Err(PtError)` if failed
    fn unmap_memory_region(&mut self, address: u64, size: u64) -> Result<(), PtError>;

    /// Function to install the page table from this page table instance.
    ///
    /// ## Errors
    /// * Returns `Ok(())` if successful else `Err(PtError)` if failed
    fn install_page_table(&self) -> Result<(), PtError>;

    /// Function to query the mapping status and return attribute of supplied
    /// memory region if it is properly and consistently mapped.
    ///
    /// ## Arguments
    /// * `address` - The memory address to query.
    /// * `size` - The memory size to query.
    ///
    /// ## Returns
    /// Returns memory attributes
    ///
    /// ## Errors
    /// * Returns `Ok(MemoryAttributes)` if successful else `Err(PtError)` if failed
    fn query_memory_region(&self, address: u64, size: u64) -> Result<MemoryAttributes, PtError>;

    /// Function to dump memory ranges with their attributes. It uses current
    /// cr3 as the base.
    ///
    /// ## Arguments
    /// * `address` - The memory address to dump.
    /// * `size` - The memory size to dump.
    /// ```
    /// ---------------------------------------------[0x0000000000000000 0x0000000000007FFF]------------------------------------------------
    ///                                                       6362        52 51                                   12 11 9 8 7 6 5 4 3 2 1 0
    ///                                                       |N|           |                                        |   |M|M|I| |P|P|U|R| |
    ///                                                       |X| Available |     Page-Map Level-4 Base Address      |AVL|B|B|G|A|C|W|/|/|P|
    ///                                                       | |           |                                        |   |Z|Z|N| |D|T|S|W| |
    /// ------------------------------------------------------------------------------------------------------------------------------------
    /// PML4 |  [0x0000000000000000 0x0000000000007FFF]       |0|00000000000|0000000000011001001110001110011101001101|000|0|0|0|0|0|0|1|1|1|
    /// PDP  |    [0x0000000000000000 0x0000000000007FFF]     |0|00000000000|0000000000011001001110001110011101001110|000|0|0|0|0|0|0|1|1|1|
    /// PD   |      [0x0000000000000000 0x0000000000007FFF]   |0|00000000000|0000000000011001001110001110011101001111|000|0|0|0|0|0|0|1|1|1|
    /// PT   |        [0x0000000000000000 0x0000000000000FFF] |0|00000000000|0000000000000000000000000000000000000000|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000001000 0x0000000000001FFF] |0|00000000000|0000000000000000000000000000000000000001|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000002000 0x0000000000002FFF] |0|00000000000|0000000000000000000000000000000000000010|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000003000 0x0000000000003FFF] |0|00000000000|0000000000000000000000000000000000000011|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000004000 0x0000000000004FFF] |0|00000000000|0000000000000000000000000000000000000100|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000005000 0x0000000000005FFF] |0|00000000000|0000000000000000000000000000000000000101|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000006000 0x0000000000006FFF] |0|00000000000|0000000000000000000000000000000000000110|000|0|0|0|0|0|0|1|0|1|
    /// PT   |        [0x0000000000007000 0x0000000000007FFF] |0|00000000000|0000000000000000000000000000000000000111|000|0|0|0|0|0|0|1|0|1|
    /// ------------------------------------------------------------------------------------------------------------------------------------
    /// ```
    fn dump_page_tables(&self, address: u64, size: u64);
}

/// The `PageAllocator` trait provides the `allocate_page()` method for allocating new pages.
pub trait PageAllocator {
    /// Allocate aligned pages from physical memory.
    ///
    /// ## Arguments
    /// * `align` - on x64 this will be 4KB page alignment.
    /// * `size` - on x64 this will be 4KB page size.
    ///
    /// ## Returns
    /// * `Result<u64, PtError>` - Physical address of the allocated page.
    fn allocate_page(&mut self, align: u64, size: u64) -> Result<u64, PtError>;
}
```
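
Page-table memory is supplied by the caller through `PageAllocator` (an allocator is passed to `X64PageTable::new` in the usage example below). As one possible shape for an implementation, here is a minimal sketch of a bump allocator over a pre-reserved physical pool. The struct, its fields, and the panic-on-exhaustion behaviour are illustrative assumptions (a real implementation would return a suitable `PtError` instead), and the imports assume the trait and error type are re-exported at the crate root:

```rust
use patina_paging::{PageAllocator, PtError};

/// Hypothetical bump allocator that hands out page-table pages from a fixed,
/// pre-reserved physical range [next, end).
struct BumpPageAllocator {
    next: u64, // next free physical address in the pool
    end: u64,  // one past the last usable physical address
}

impl PageAllocator for BumpPageAllocator {
    fn allocate_page(&mut self, align: u64, size: u64) -> Result<u64, PtError> {
        // Round the cursor up to the requested alignment (4 KB pages on X64).
        // `align` is assumed to be a power of two.
        let start = (self.next + align - 1) & !(align - 1);

        // A real allocator would map pool exhaustion to an appropriate
        // `PtError` variant; this sketch simply panics.
        assert!(start + size <= self.end, "page pool exhausted");

        self.next = start + size;
        Ok(start)
    }
}
```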

API usage

```rust
use PageTable;

let page_allocator = ...;

let pt = X64PageTable::new(page_allocator, PagingType::PagingLevel)?;

let attributes = MemoryAttributes::ReadOnly;
let res = pt.map_memory_region(address, size, attributes);
...
let res = pt.unmap_memory_region(address, size);
...
```
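
For a fuller picture, here is a hedged, self-contained sketch built only from the trait methods shown above. The helper name, the crate-root import paths, and the choice of attributes are illustrative assumptions, not part of the crate's documented API:

```rust
use patina_paging::{MemoryAttributes, PageTable, PtError};

/// Hypothetical helper: remap a region as write-back and read-only, then
/// verify the new attributes and print the resulting page-table entries.
fn protect_region(pt: &mut impl PageTable, base: u64, size: u64) -> Result<(), PtError> {
    // Compatible attributes are OR'd together, as described for map_memory_region.
    pt.map_memory_region(base, size, MemoryAttributes::Writeback | MemoryAttributes::ReadOnly)?;

    // query_memory_region only succeeds if the whole range is mapped with
    // consistent attributes, so this doubles as a sanity check.
    let attributes = pt.query_memory_region(base, size)?;
    assert!(attributes.contains(MemoryAttributes::ReadOnly));

    // Dump the translation for the range (output format shown in the trait docs above).
    pt.dump_page_tables(base, size);
    Ok(())
}
```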

Reference

More reference test cases are in `src/tests/x64_paging_tests.rs`. General paging-related documentation is in `docs/paging.md`.
