30 changes: 16 additions & 14 deletions Cargo.lock


3 changes: 2 additions & 1 deletion Cargo.toml
@@ -257,7 +257,8 @@ thiserror = "1.0"
 toml = "0.5"
 trycmd = "0.13.2"
 winapi = "0.3.9"
-zerocopy = "0.6.1"
+zerocopy = "0.6.6"
+zerocopy-derive = "0.6.6"
 zip = "0.6.4"

 [profile.release]
1 change: 1 addition & 0 deletions cmd/gpio/Cargo.toml
@@ -9,6 +9,7 @@ hif.workspace = true
 clap.workspace = true
 anyhow.workspace = true
 parse_int.workspace = true
+zerocopy.workspace = true

 humility-cli.workspace = true
 humility-cmd.workspace = true
63 changes: 63 additions & 0 deletions cmd/gpio/src/config_cache.rs
@@ -0,0 +1,63 @@
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at https://mozilla.org/MPL/2.0/.

//! Generic cache for GPIO configuration register blocks.
//!
//! This module provides a chip-independent way to cache raw GPIO register
//! block data from device memory. The cache stores raw bytes indexed by
//! GPIO group identifiers, avoiding repeated memory reads for the same
//! GPIO port.

use anyhow::Result;
use humility_cli::ExecutionContext;
use std::collections::BTreeMap;

/// A generic cache for GPIO configuration register blocks.
///
/// This cache stores raw byte data for GPIO register blocks, indexed by
/// a group identifier (typically a character like 'A', 'B', etc.). The
/// cache is chip-independent and works with any architecture that organizes
/// GPIO pins into groups/ports with contiguous register blocks.
pub struct ConfigCache {
    cache: BTreeMap<char, Vec<u8>>,
}

Contributor: We never store multiple register block types, so this should be made generic across a register block type T. This would let us avoid needing to parse from bytes elsewhere.

Contributor (author): I'm ok with this separation right now. The code that knows about STM32H7 registers doesn't do IO and doesn't hold state; the code that does the IO is just told that if you want to talk about a pin group, you need to get me this data. There needs to be a from_bytes somewhere to translate the raw data from the device. I think this approach would work well for non-STM32H7 devices as well, but I'd like to have a motivating case before making a change here.

impl ConfigCache {
    /// Creates a new empty configuration cache.
    pub fn new() -> ConfigCache {
        ConfigCache { cache: BTreeMap::new() }
    }

    /// Gets or fetches the raw register block data for a GPIO group.
    ///
    /// If the data for the specified group is already cached, returns it
    /// directly. Otherwise, calls the provided `fetch_fn` to read the data
    /// from device memory, caches it, and returns it.
    ///
    /// # Arguments
    ///
    /// * `context` - Execution context with access to the device core
    /// * `group` - GPIO group identifier (e.g., 'A', 'B', 'C')
    /// * `fetch_fn` - Function that fetches the register block from device memory
    ///
    /// # Returns
    ///
    /// A reference to the cached raw register block data
    pub fn get_or_fetch<F>(
        &mut self,
        context: &mut ExecutionContext,
        group: char,
        fetch_fn: F,
    ) -> Result<&[u8]>
    where
        F: FnOnce(&mut ExecutionContext, char) -> Result<Vec<u8>>,
    {
        use std::collections::btree_map::Entry;
        if let Entry::Vacant(e) = self.cache.entry(group) {
            let data = fetch_fn(context, group)?;
            e.insert(data);
        }
        Ok(self.cache.get(&group).unwrap().as_slice())
    }
}