[DRAFT] I/O virtual memory (IOMMU) support #327

Draft (wants to merge 10 commits into base: main)

1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -5,6 +5,7 @@
### Added

- \[[#311](https://github.com/rust-vmm/vm-memory/pull/311)\] Allow compiling without the ReadVolatile and WriteVolatile implementations
- \[[#327](https://github.com/rust-vmm/vm-memory/pull/327)\] I/O virtual memory support via `IoMemory`, `IommuMemory`, and `Iommu`/`Iotlb`

### Changed

2 changes: 2 additions & 0 deletions Cargo.toml
@@ -16,13 +16,15 @@ default = ["rawfd"]
backend-bitmap = []
backend-mmap = ["dep:libc"]
backend-atomic = ["arc-swap"]
iommu = ["dep:rangemap"]
rawfd = ["dep:libc"]
xen = ["backend-mmap", "bitflags", "vmm-sys-util"]

[dependencies]
libc = { version = "0.2.39", optional = true }
arc-swap = { version = "1.0.0", optional = true }
bitflags = { version = "2.4.0", optional = true }
rangemap = { version = "1.5.1", optional = true }
thiserror = "1.0.40"
vmm-sys-util = { version = "0.12.1", optional = true }

30 changes: 27 additions & 3 deletions DESIGN.md
@@ -2,8 +2,8 @@

## Objectives

- Provide a set of traits for accessing and configuring the physical memory of
a virtual machine.
- Provide a set of traits for accessing and configuring the physical and/or
I/O virtual memory of a virtual machine.
- Provide a clean abstraction of the VM memory such that rust-vmm components
can use it without depending on the implementation details specific to
different VMMs.
@@ -122,6 +122,29 @@ let buf = &mut [0u8; 5];
let result = guest_memory_mmap.write(buf, addr);
```

### I/O Virtual Address Space

When using an IOMMU, there is no longer direct access to the guest (physical)
address space; instead, accesses go through an I/O virtual address space. In
this case (see the sketch after this list):

- `IoMemory` replaces `GuestMemory`: it requires callers to specify the access
permissions they need (which matter for virtual memory), and it drops
interfaces that imply a mostly linear memory layout, because virtual memory is
fragmented into many small pages rather than a few large memory regions.
- Any `IoMemory` still has a `GuestMemory` inside as the underlying address
space, but when an IOMMU is used, that will generally not be the guest physical
address space. With vhost-user, for example, it will be the VMM’s user
address space instead.
- `IommuMemory`, our only `IoMemory` implementation that actually supports an
IOMMU, uses an `Iommu` object to translate I/O virtual addresses (IOVAs) into
VMM user addresses (VUAs), which are then passed to the inner `GuestMemory`
implementation (such as `GuestMemoryMmap`).
- `GuestAddress` (for compatibility) refers to an address in any of these
address spaces:
  - Guest physical addresses (GPAs) when no IOMMU is used,
  - I/O virtual addresses (IOVAs),
  - VMM user addresses (VUAs).
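
As a rough sketch of this layering (the constructor names, import paths, and
the way IOVA mappings are established below are assumptions made for
illustration, not a finalized API):

```rust
use std::sync::Arc;
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap, Iommu, IommuMemory};

// The inner `GuestMemory` holds the underlying (e.g. VMM user) address space.
let backing = GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x10_0000)]).unwrap();

// Hypothetical wiring: wrap the backing in an IOMMU-aware view; devices then
// only ever see I/O virtual addresses (IOVAs).
let iommu = Arc::new(Iommu::new());
let iova_mem = IommuMemory::new(backing, Arc::clone(&iommu));

// Once IOVA -> VUA mappings have been established in the IOMMU, ordinary
// accesses keep working through the `Bytes` blanket implementation; the
// required permissions are checked during translation.
let mut buf = [0u8; 64];
iova_mem.read_slice(&mut buf, GuestAddress(0x1000)).unwrap();
```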

### Utilities and Helpers

The following utilities and helper traits/macros are imported from the
@@ -143,7 +166,8 @@ with minor changes:
- `Address` inherits `AddressValue`
- `GuestMemoryRegion` inherits `Bytes<MemoryRegionAddress, E = Error>`. The
`Bytes` trait must be implemented.
- `GuestMemory` has a generic implementation of `Bytes<GuestAddress>`.
- `GuestMemory` has a generic implementation of `IoMemory`.
- `IoMemory` has a generic implementation of `Bytes<GuestAddress>` (see the
  sketch below).
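
For illustration only (this helper is not part of the crate; it just shows how
the blanket implementations compose), generic code written against
`Bytes<GuestAddress>` keeps working regardless of whether the backend is plain
physical memory or an IOMMU-backed `IoMemory`:

```rust
use vm_memory::{Bytes, GuestAddress, GuestMemoryMmap};

// Generic over any backend providing `Bytes<GuestAddress>`, which -- via the
// blanket implementations above -- includes both `GuestMemory` and `IoMemory`
// implementations.
fn read_u32<M: Bytes<GuestAddress>>(mem: &M, addr: GuestAddress) -> Result<u32, M::E> {
    mem.read_obj(addr)
}

let mem = GuestMemoryMmap::<()>::from_ranges(&[(GuestAddress(0), 0x1000)]).unwrap();
let value: u32 = read_u32(&mem, GuestAddress(0)).unwrap();
```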

**Types**:

39 changes: 20 additions & 19 deletions src/atomic.rs
@@ -2,7 +2,7 @@
// Copyright (C) 2020 Red Hat, Inc. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

//! A wrapper over an `ArcSwap<GuestMemory>` struct to support RCU-style mutability.
//! A wrapper over an `ArcSwap<IoMemory>` struct to support RCU-style mutability.
//!
//! With the `backend-atomic` feature enabled, simply replacing `GuestMemoryMmap`
//! with `GuestMemoryAtomic<GuestMemoryMmap>` will enable support for mutable memory maps.
@@ -15,17 +15,17 @@ use arc_swap::{ArcSwap, Guard};
use std::ops::Deref;
use std::sync::{Arc, LockResult, Mutex, MutexGuard, PoisonError};

use crate::{GuestAddressSpace, GuestMemory};
use crate::{GuestAddressSpace, IoMemory};

/// A fast implementation of a mutable collection of memory regions.
///
/// This implementation uses `ArcSwap` to provide RCU-like snapshotting of the memory map:
/// every update of the memory map creates a completely new `GuestMemory` object, and
/// every update of the memory map creates a completely new `IoMemory` object, and
/// readers will not be blocked because the copies they retrieved will be collected once
/// no one can access them anymore. Under the assumption that updates to the memory map
/// are rare, this allows a very efficient implementation of the `memory()` method.
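///
/// # Example
///
/// A rough sketch of the intended flow (not compiled as a doctest; imports and
/// error handling are elided):
///
/// ```ignore
/// // Build an initial memory map and wrap it for RCU-style sharing.
/// let gmm = GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x1000)]).unwrap();
/// let gm = GuestMemoryAtomic::new(gmm);
///
/// // Readers take cheap snapshots that remain valid across later updates.
/// let snapshot = gm.memory();
///
/// // Writers serialize on the internal mutex and atomically publish a new map.
/// let new_gmm = GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x2000)]).unwrap();
/// gm.lock().unwrap().replace(new_gmm);
/// ```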
#[derive(Clone, Debug)]
pub struct GuestMemoryAtomic<M: GuestMemory> {
pub struct GuestMemoryAtomic<M: IoMemory> {
// GuestAddressSpace<M>, which we want to implement, is basically a drop-in
// replacement for &M. Therefore, we need to pass to devices the `GuestMemoryAtomic`
// rather than a reference to it. To obtain this effect we wrap the actual fields
@@ -34,9 +34,9 @@ pub struct GuestMemoryAtomic<M: GuestMemory> {
inner: Arc<(ArcSwap<M>, Mutex<()>)>,
}

impl<M: GuestMemory> From<Arc<M>> for GuestMemoryAtomic<M> {
impl<M: IoMemory> From<Arc<M>> for GuestMemoryAtomic<M> {
/// create a new `GuestMemoryAtomic` object whose initial contents come from
/// the `map` reference counted `GuestMemory`.
/// the `map` reference counted `IoMemory`.
fn from(map: Arc<M>) -> Self {
let inner = (ArcSwap::new(map), Mutex::new(()));
GuestMemoryAtomic {
@@ -45,9 +45,9 @@ impl<M: GuestMemory> From<Arc<M>> for GuestMemoryAtomic<M> {
}
}

impl<M: GuestMemory> GuestMemoryAtomic<M> {
impl<M: IoMemory> GuestMemoryAtomic<M> {
/// create a new `GuestMemoryAtomic` object whose initial contents come from
/// the `map` `GuestMemory`.
/// the `map` `IoMemory`.
pub fn new(map: M) -> Self {
Arc::new(map).into()
}
@@ -75,7 +75,7 @@ impl<M: GuestMemory> GuestMemoryAtomic<M> {
}
}

impl<M: GuestMemory> GuestAddressSpace for GuestMemoryAtomic<M> {
impl<M: IoMemory> GuestAddressSpace for GuestMemoryAtomic<M> {
type T = GuestMemoryLoadGuard<M>;
type M = M;

@@ -86,14 +86,14 @@ impl<M: GuestMemory> GuestAddressSpace for GuestMemoryAtomic<M> {

/// A guard that provides temporary access to a `GuestMemoryAtomic`. This
/// object is returned from the `memory()` method. It dereferences to
/// a snapshot of the `GuestMemory`, so it can be used transparently to
/// a snapshot of the `IoMemory`, so it can be used transparently to
/// access memory.
#[derive(Debug)]
pub struct GuestMemoryLoadGuard<M: GuestMemory> {
pub struct GuestMemoryLoadGuard<M: IoMemory> {
guard: Guard<Arc<M>>,
}

impl<M: GuestMemory> GuestMemoryLoadGuard<M> {
impl<M: IoMemory> GuestMemoryLoadGuard<M> {
/// Makes a clone of the held pointer and returns it. This is more
/// expensive than just using the snapshot, but it allows holding on
/// to the snapshot outside the scope of the guard. It also allows
@@ -104,15 +104,15 @@ impl<M: GuestMemory> GuestMemoryLoadGuard<M> {
}
}

impl<M: GuestMemory> Clone for GuestMemoryLoadGuard<M> {
impl<M: IoMemory> Clone for GuestMemoryLoadGuard<M> {
fn clone(&self) -> Self {
GuestMemoryLoadGuard {
guard: Guard::from_inner(Arc::clone(&*self.guard)),
}
}
}

impl<M: GuestMemory> Deref for GuestMemoryLoadGuard<M> {
impl<M: IoMemory> Deref for GuestMemoryLoadGuard<M> {
type Target = M;

fn deref(&self) -> &Self::Target {
@@ -125,12 +125,12 @@ impl<M: GuestMemory> Deref for GuestMemoryLoadGuard<M> {
/// possibly after updating the memory map represented by the
/// `GuestMemoryAtomic` that created the guard.
#[derive(Debug)]
pub struct GuestMemoryExclusiveGuard<'a, M: GuestMemory> {
pub struct GuestMemoryExclusiveGuard<'a, M: IoMemory> {
parent: &'a GuestMemoryAtomic<M>,
_guard: MutexGuard<'a, ()>,
}

impl<M: GuestMemory> GuestMemoryExclusiveGuard<'_, M> {
impl<M: IoMemory> GuestMemoryExclusiveGuard<'_, M> {
/// Replace the memory map in the `GuestMemoryAtomic` that created the guard
/// with the new memory map, `map`. The lock is then dropped since this
/// method consumes the guard.
@@ -143,7 +143,7 @@ impl<M: GuestMemory> GuestMemoryExclusiveGuard<'_, M> {
#[cfg(feature = "backend-mmap")]
mod tests {
use super::*;
use crate::{GuestAddress, GuestMemory, GuestMemoryRegion, GuestUsize, MmapRegion};
use crate::{GuestAddress, GuestMemory, GuestMemoryRegion, GuestUsize, IoMemory, MmapRegion};

type GuestMemoryMmap = crate::GuestMemoryMmap<()>;
type GuestRegionMmap = crate::GuestRegionMmap<()>;
@@ -159,7 +159,8 @@ mod tests {
let mut iterated_regions = Vec::new();
let gmm = GuestMemoryMmap::from_ranges(&regions).unwrap();
let gm = GuestMemoryMmapAtomic::new(gmm);
let mem = gm.memory();
let vmem = gm.memory();
let mem = vmem.physical_memory().unwrap();

for region in mem.iter() {
assert_eq!(region.len(), region_size as GuestUsize);
@@ -178,7 +179,7 @@ mod tests {
.map(|x| (x.0, x.1))
.eq(iterated_regions.iter().copied()));

let mem2 = mem.into_inner();
let mem2 = vmem.into_inner();
for region in mem2.iter() {
assert_eq!(region.len(), region_size as GuestUsize);
}