Conversation

@jenswi-linaro
Contributor

No description provided.

@jenswi-linaro jenswi-linaro force-pushed the dyn-pager branch 5 times, most recently from 14def50 to 55d614b Compare June 9, 2025 12:11
@jenswi-linaro
Contributor Author

CI is passing. There are a couple of false positives from checkpatch.

Contributor

@etienne-lms etienne-lms left a comment

Some feedback. I haven't covered the whole series yet and I'll need to re-read some commits. In the meantime:

Acked-by: Etienne Carriere <[email protected]> for commits:
"core: boot_mem: remove multiple of align restriction" and
"core: simplify core_init_mmu_map()".

Reviewed-by: Etienne Carriere <[email protected]> for commits
"core: mm: update is_nexus()",
"core: mm: add mobj_page_alloc()",
"core: mm: pgt_init(): use virt_page_alloc() if available" and
"core: set uref_base for uref calculations".

Commits that LGTM with questions or minor comments:
"core: mm: add mobj_phys_alloc_flags()", and
"core: arm: init_user_kcode(): use mobj_page_alloc() if available".

m->mobj.ops = &mobj_phys_ops;
refcount_set(&m->mobj.refc, 1);
m->pa = pa;
m->va = va;
Contributor

Should one be allowed to provide a NULL va value?
We have such a check in mobj_phys_alloc(), where only SDP buffers may lack a VA.

Contributor Author

Good point, I'll add a check.
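A minimal sketch of such a guard, with stand-in type and function names (not OP-TEE's actual definitions), could look like:

```c
#include <stddef.h>

/* Hypothetical stand-in for OP-TEE's buffer attribute enum */
enum buf_is_attr { CORE_MEM_SDP_MEM, CORE_MEM_TEE_RAM };

/* Sketch of the check discussed above: only SDP buffers may come
 * without a VA; everything else must provide one. */
static int phys_init_check_va(void *va, enum buf_is_attr battr)
{
	if (!va && battr != CORE_MEM_SDP_MEM)
		return -1;	/* reject: a VA is required for this type */
	return 0;
}
```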

enum teecore_memtypes memtype,
enum buf_is_attr battr, uint32_t flags)
{
uint32_t f = (flags & MAF_NEX) | MAF_ZERO_INIT;
Contributor

Should this return an error if flag other than MAF_NEX and MAF_ZERO_INIT is requested?

Contributor Author

No, the idea is to ignore unknown or "don't care" MAF flags so that flags can propagate between layers without every function in the chain having to know all the flags.
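The masking pattern from the excerpt can be illustrated as follows; the flag values here are hypothetical placeholders, the real MAF_* definitions live in OP-TEE's headers:

```c
#include <stdint.h>

/* Hypothetical flag values for illustration only */
#define MAF_NEX		UINT32_C(0x1)
#define MAF_ZERO_INIT	UINT32_C(0x2)
#define MAF_OTHER	UINT32_C(0x80)	/* a flag this layer doesn't act on */

/* Keep only the flags this layer acts on, force the ones it requires,
 * and silently drop the rest so callers can pass flags through without
 * every layer knowing every flag. */
static uint32_t filter_maf_flags(uint32_t flags)
{
	return (flags & MAF_NEX) | MAF_ZERO_INIT;
}
```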


static void init_user_kcode(void)
{
__maybe_unused struct mobj *m;
Contributor

= NULL

Contributor Author

Yes, I'll fix.

*offset = (vaddr_t)thread_user_kdata_page -
(vaddr_t)mobj_get_va(*mobj, 0, *sz);
#endif
*sz = thread_user_kdata_page_size;
Contributor

The size is always ROUNDUP(sizeof(struct thread_core_local) * CFG_TEE_CORE_NB_CORE, SMALL_PAGE_SIZE). Is it worth storing it in a memory cell?

Contributor Author

In the not too distant future it will become ROUNDUP(sizeof(struct thread_core_local) * thread_core_count, SMALL_PAGE_SIZE). I can wrap it in a helper function instead if you prefer that.
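Such a helper could look like this minimal sketch; the sizes and the names `thread_core_count` and `kdata_page_size()` are stand-ins for illustration, not the actual OP-TEE definitions:

```c
#include <stddef.h>

#define SMALL_PAGE_SIZE	4096
#define ROUNDUP(v, size)	((((v) + (size) - 1) / (size)) * (size))

/* Stand-in values; in OP-TEE these would be sizeof(struct
 * thread_core_local) and CFG_TEE_CORE_NB_CORE (or, as discussed,
 * a runtime thread_core_count). */
static size_t thread_core_count = 4;
#define CORE_LOCAL_SIZE	200

/* The helper suggested above: one place that computes the kdata size */
static size_t kdata_page_size(void)
{
	return ROUNDUP(CORE_LOCAL_SIZE * thread_core_count, SMALL_PAGE_SIZE);
}
```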

asan_tag_access(__pageable_start, __pageable_end);
#endif /*CFG_WITH_PAGER*/
#ifndef CFG_DYN_CONFIG
asan_tag_access(__nozi_start, __nozi_end);
Contributor

MMU tables may still be allocated from the nozi section. Shouldn't it still be tagged for ASAN access in these cases?

Contributor Author

__nozi_start and __nozi_end don't exist with CFG_DYN_CONFIG. This is instead covered by the boot_mem_init_asan() call above.

@jenswi-linaro
Contributor Author

Fixed up "core: mm: add mobj_phys_alloc_flags()" and "core: arm: init_user_kcode(): use mobj_page_alloc() if available" as
requested.

Tags applied.

@github-actions

This pull request has been marked as a stale pull request because it has been open (more than) 30 days with no activity. Remove the stale label or add a comment, otherwise this pull request will automatically be closed in 5 days. Note, that you can always re-open a closed issue at any time.

@jenswi-linaro
Contributor Author

Rebased on master

@etienne-lms
Contributor

Hi @jenswi-linaro, sorry I've left this PR untouched for quite a while now. The last time I tested it on my HW, it seemed the pagestore hashes were corrupted during OP-TEE initialization. I'll check back on the rebased update and tell you.

The two internal boot mem functions mem_alloc_tmp() and mem_alloc() have
until now required the length of the request to be a multiple of align.
While that may minimize the amount of wasted padding, it serves no real
purpose, and with commit bd8bea6 ("core: boot_mem: enable asan support")
the alignment is increased internally with CFG_CORE_SANITIZE_KADDRESS=y.
This makes it virtually impossible for the caller to always supply
correct lengths. So fix this by removing that restriction.

Fixes: bd8bea6 ("core: boot_mem: enable asan support")
Signed-off-by: Jens Wiklander <[email protected]>
Acked-by: Etienne Carriere <[email protected]>
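The relaxed behavior can be illustrated with a minimal cursor-bump allocator sketch; the function name, cursor variable, and addresses are hypothetical, not OP-TEE's actual implementation:

```c
#include <stdint.h>
#include <stddef.h>

#define ROUNDUP(v, size)	((((v) + (size) - 1) / (size)) * (size))

static uintptr_t cursor = 0x1004;	/* hypothetical allocation cursor */

/* Align the cursor, then advance by the raw length: len no longer has
 * to be a multiple of align, only the start address is aligned. */
static void *mem_alloc_sketch(size_t len, size_t align)
{
	uintptr_t va = ROUNDUP(cursor, align);

	cursor = va + len;
	return (void *)va;
}
```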
boot_mem_release_tmp_alloc() releases memory between the recorded
mem_end and orig_mem_end. However, if mem_start has reached the same
page as mem_end before boot_mem_release_unused() is called, we're using
part of the first page in the temporary area that
boot_mem_release_tmp_alloc() tries to release. So fix this by keeping
track of the last used page at the end of boot_mem_release_unused() and
making sure not to free that page in boot_mem_release_tmp_alloc().

Fixes: fe85eae ("core: add CFG_BOOT_MEM and boot_mem_*() functions")
Signed-off-by: Jens Wiklander <[email protected]>
Update the function is_nexus() to take additional NEX_RAM_RW and
NEX_DYN_VASPACE sections into account if the virtual address isn't in
the first nexus range.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Add mobj_phys_alloc_flags() to cover the use case where physical memory
is mapped at a dynamically allocated virtual address, for instance in
the memory areas MEM_AREA_NEX_DYN_VASPACE and MEM_AREA_TEE_DYN_VASPACE.

Signed-off-by: Jens Wiklander <[email protected]>
Add mobj_page_alloc() as an alternative to virt_page_alloc() when a
struct mobj is needed to represent the allocated pages.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
Simplify start address calculation for the dummy static_memory_map used
while initializing the translation tables.

Signed-off-by: Jens Wiklander <[email protected]>
Acked-by: Etienne Carriere <[email protected]>
If configured with CFG_DYN_CONFIG=y and CFG_CORE_PREALLOC_EL0_TBLS=n and
CFG_WITH_PAGER=n, allocate the translation tables using
virt_page_alloc() instead of static allocation in the .nozi.pgt_cache
section.

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
If configured with CFG_DYN_CONFIG=y and init_user_kcode() needs
thread_user_kdata_page, allocate it using mobj_page_alloc() instead of
a static allocation in the .nozi.kdata_page or .nex_nozi.kdata_page
section.

Signed-off-by: Jens Wiklander <[email protected]>
kaddr_to_uref() and uref_to_vaddr() were until this patch using
VCORE_START_VA as the lowest possible core data address. However, with
dynamic mapping that address might become even lower depending on the
virtual address assignment. So find the lowest address that can be
used and use that instead of VCORE_START_VA to avoid possible integer
overflow or underflow in uref_to_vaddr() and kaddr_to_uref().

Signed-off-by: Jens Wiklander <[email protected]>
Reviewed-by: Etienne Carriere <[email protected]>
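The uref conversion pattern can be sketched as below; the runtime base value and the `_sketch` names are stand-ins for illustration, not the exact OP-TEE implementation:

```c
#include <stdint.h>

typedef uintptr_t vaddr_t;

/* Hypothetical runtime base: in the patch this is the lowest usable
 * core data address determined at init, used instead of the
 * compile-time VCORE_START_VA so the subtraction below can't
 * underflow. */
static vaddr_t uref_base = 0x40000000;

static uint32_t kaddr_to_uref_sketch(void *ptr)
{
	return (vaddr_t)ptr - uref_base;
}

static vaddr_t uref_to_vaddr_sketch(uint32_t uref)
{
	return uref_base + uref;
}
```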
For configurations without paging enabled, allocate heaps from the
physical memory pool if CFG_DYN_CONFIG=y. This avoids the static
allocations in the linker script and enables more flexible memory
management.

Signed-off-by: Jens Wiklander <[email protected]>
Add support for ASLR by compensating for the map-offset first in the
linear lookup and then when comparing with the non-relocated constant
TEE_RAM_START.

Add support to lookup addresses that aren't page aligned, but with the
memory range within the page.

Signed-off-by: Jens Wiklander <[email protected]>
Add support for allocating translation tables from the pager once the
MMU has been enabled. This is needed in later patches with
CFG_DYN_CONFIG=y when the pager is enabled.

Signed-off-by: Jens Wiklander <[email protected]>
Add support for allocating the kdata page(s) from the pager, as needed
by CFG_CORE_UNMAP_CORE_AT_EL0=y. This is needed in later patches with
CFG_DYN_CONFIG=y when the pager is enabled.

Signed-off-by: Jens Wiklander <[email protected]>
Add core_mmu_linear_map_end, used during virt_to_phys() when
CFG_WITH_PAGER=y to keep track of the limit of linear mapped memory.
This is needed in later patches with CFG_WITH_PAGER=y and
CFG_DYN_CONFIG=y after boot memory has been released.

Signed-off-by: Jens Wiklander <[email protected]>
When configuring paging, a thread context is needed to keep the VFP
state, but a thread context might not have been allocated yet with
CFG_DYN_CONFIG=y, so a dummy thread context might be needed until then.
When thread_init_threads() assigns and initializes the real thread
context, update the pointer and count variables using an unpaged
function to make sure that pager code doesn't access the thread context
while it's being replaced. This is needed in later patches with
CFG_DYN_CONFIG=y and CFG_WITH_PAGER=y.

Signed-off-by: Jens Wiklander <[email protected]>
With CFG_WITH_PAGER=y, a special section of memory is carved out by the
linker script to be used by the boot_mem*() functions during boot. The
amount of memory must cover the worst case, but the remaining unused
memory will be returned to be managed by the pager at end of boot in a
later patch. This enables boot configuration in the same way regardless
of CFG_DYN_CONFIG.

Add support in boot_mem_release_unused() and
boot_mem_release_tmp_alloc() to pass the unused memory to the pager.
This is needed in later patches with CFG_WITH_PAGER=y and
CFG_DYN_CONFIG=y.

Signed-off-by: Jens Wiklander <[email protected]>
Fixing the cache-flush problem.

Signed-off-by: Jens Wiklander <[email protected]>
Add support for CFG_DYN_CONFIG=y and CFG_WITH_PAGER=y. The special
.pager_boot_mem section is extended to cover the memory needed by
CFG_DYN_CONFIG=y; this includes translation tables, stacks, and heap.

Later patches can make CFG_DYN_CONFIG=y mandatory for ARCH=arm and
remove redundant code.

Signed-off-by: Jens Wiklander <[email protected]>
Fix the size of the base tables.

Signed-off-by: Jens Wiklander <[email protected]>
@jenswi-linaro
Contributor Author

Rebased, with fixup commits for the problems I found when testing on Hikey with pager enabled.
