Commit aabd126

Petr Tesarik authored and Christoph Hellwig committed
swiotlb: always set the number of areas before allocating the pool
The number of areas defaults to the number of possible CPUs. However, the
total number of slots may have to be increased after adjusting the number
of areas. Consequently, the number of areas must be determined before
allocating the memory pool. This is even explained with a comment in
swiotlb_init_remap(), but swiotlb_init_late() adjusts the number of areas
after slots are already allocated. The areas may end up being smaller than
IO_TLB_SEGSIZE, which breaks per-area locking.

While fixing swiotlb_init_late(), move all relevant comments before the
definition of swiotlb_adjust_nareas() and convert them to kernel-doc.

Fixes: 20347fc ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Petr Tesarik <[email protected]>
Reviewed-by: Roberto Sassu <[email protected]>
Signed-off-by: Christoph Hellwig <[email protected]>
1 parent 0a2f637 commit aabd126

File tree: 1 file changed, 11 additions (+), 8 deletions (-)


kernel/dma/swiotlb.c

Lines changed: 11 additions & 8 deletions

@@ -115,9 +115,16 @@ static bool round_up_default_nslabs(void)
 	return true;
 }
 
+/**
+ * swiotlb_adjust_nareas() - adjust the number of areas and slots
+ * @nareas: Desired number of areas. Zero is treated as 1.
+ *
+ * Adjust the default number of areas in a memory pool.
+ * The default size of the memory pool may also change to meet minimum area
+ * size requirements.
+ */
 static void swiotlb_adjust_nareas(unsigned int nareas)
 {
-	/* use a single area when non is specified */
 	if (!nareas)
 		nareas = 1;
 	else if (!is_power_of_2(nareas))
@@ -298,10 +305,6 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
 	if (swiotlb_force_disable)
 		return;
 
-	/*
-	 * default_nslabs maybe changed when adjust area number.
-	 * So allocate bounce buffer after adjusting area number.
-	 */
 	if (!default_nareas)
 		swiotlb_adjust_nareas(num_possible_cpus());
 
@@ -363,6 +366,9 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	if (swiotlb_force_disable)
 		return 0;
 
+	if (!default_nareas)
+		swiotlb_adjust_nareas(num_possible_cpus());
+
 retry:
 	order = get_order(nslabs << IO_TLB_SHIFT);
 	nslabs = SLABS_PER_PAGE << order;
@@ -397,9 +403,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 			(PAGE_SIZE << order) >> 20);
 	}
 
-	if (!default_nareas)
-		swiotlb_adjust_nareas(num_possible_cpus());
-
 	area_order = get_order(array_size(sizeof(*mem->areas),
 			default_nareas));
 	mem->areas = (struct io_tlb_area *)
