CSD lockup during kexec due to unbounded busy-wait in pl011_console_write_atomic #351
Draft
nirmoy wants to merge 527 commits into NVIDIA:24.04_linux-nvidia-6.17-next from
BugLink: https://bugs.launchpad.net/bugs/2122432 Resctrl has an overflow handler that runs on each domain every second to ensure that any overflow of the hardware counter is accounted for. MPAM can have counters as large as 63 bits, in which case there is no need to check for overflow. Call the new arch helpers to determine this. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 7120f602ee83c2fb75517b137efd41b83777ba76 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…/cache_id BugLink: https://bugs.launchpad.net/bugs/2122432 Unify the data type of component_id/domid/id/cache_id to u32 to avoid type confusion. According to ACPI for MPAM, the cache id is used as the locator for a cache MSC. Referring to the RD_PPTT_CACHE_ID definition from edk2-platforms, u32 is enough for cache_id. ( \ (((PackageId) & 0xF) << 20) | (((ClusterId) & 0xFF) << 12) | \ (((CoreId) & 0xFF) << 4) | ((CacheType) & 0xF) \ ) refs: 1. ACPI for mpam: https://developer.arm.com/documentation/den0065/latest/ 2. RD_PPTT_CACHE_ID from edk2-platforms: https://github.com/tianocore/edk2-platforms/blob/master/Platform/ARM/SgiPkg/Include/SgiAcpiHeader.h#L202 Signed-off-by: Rex Nie <rex.nie@jaguarmicro.com> Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit fb0bcda86e18dded432e3dbe684ea494f9ea71ab https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 mpam_reprogram_ris_partid() always resets the CMAX/CMIN controls to their 'unrestricted' value. This prevents the controls from being configured. Add fields in struct mpam_config, and program these values when they are set in the features bitmask. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 72fd49e3e0392832f9840f83769c38e4ab61e23f https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…re-use BugLink: https://bugs.launchpad.net/bugs/2122432 Functions like mbw_max_to_percent() convert a value into MPAM's 16-bit fixed-point fraction format. These are not only used for memory bandwidth, but for cache capacity controls too. Rename these functions to convert to/from a 'fract16', and add helpers for the specific mbw_max/cmax controls. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit a852dfa8ee9ed70c8397a9d89206eebcdd2f368e https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
… separate struct BugLink: https://bugs.launchpad.net/bugs/2122432 struct resctrl_membw combines parameters that are related to the control value, and parameters that are specific to the MBA resource. To allow the control value parsing and management code to be re-used for other resources, it needs to be separated from the MBA resource. Add struct resctrl_mba that holds all the parameters that are specific to the MBA resource. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 2187592a6e6508e2a342cfd18a472304c70e4538 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 parse_cbm() and parse_bw() both test the staged config for an existing entry. These would indicate user-space has provided a schema with a duplicate domain entry. e.g: | L3:0=ffff;1=f00f;0=f00f If new parsers are added this duplicate domain test has to be duplicated. Move it to the caller. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit ef00c0fb556566630ade5166b6f7cff772e5e977 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…nstead of parse_bw() BugLink: https://bugs.launchpad.net/bugs/2122432 MBA is only supported on platforms where the delay inserted by the control is linear. Resctrl checks the two properties provided by the arch code match each time it parses part of a new control value. This doesn't need to be done so frequently, and obscures changes to parse_bw() to abstract it for use with other control types. Move this check to the parse_line() caller so it only happens once. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 24f77499324e859031b686ae592465eb459e41b1 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…de resource BugLink: https://bugs.launchpad.net/bugs/2122432 resctrl_get_default_ctrl() is called by both the architecture code and filesystem code to return the default value for a control. This depends on the schema format. parse_bw() doesn't bother checking the bounds it is given if the resource is in use by mba_sc. This is because the values parsed from user-space are not the same as those the control should take. To make this disparity easier to work with, a second different copy of the schema format is needed, which would need a version of resctrl_get_default_ctrl(). This would let the resctrl change the schema format presented to user-space, provided it converts it to match what the architecture code expects. Rename resctrl_get_default_ctrl() to make it clear it returns the resource default. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 55030f526cf5a4d204bf017d26339bb59a9cab7c https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…g it to be different BugLink: https://bugs.launchpad.net/bugs/2122432 parse_bw() doesn't bother checking the bounds it is given if the resource is in use by mba_sc. This is because the values parsed from user-space are not the same as those the control should take. To make this disparity easier to work with, a second different copy of the schema format is needed, which would need a version of resctrl_get_default_ctrl(). This would let the resctrl change the schema format presented to user-space, provided it converts it to match what the architecture code expects. Add a second schema format for use with mba_sc. The membw properties are copied and the schema version is used. When mba_sc is enabled the schema copy of these properties is modified. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 3e066eaa16feb66603fef0add0fdbc2af5bea63e https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
… a bitmap BugLink: https://bugs.launchpad.net/bugs/2122432 rdtgroup_cbm_to_size() uses a WARN_ON_ONCE() to assert that the resource it has been passed is one of the L2 or L3 cache. This is to avoid using uninitialised bitmap properties. Updating this list for every resource that is configured by a bitmap doesn't scale. Instead change the WARN_ON_ONCE() to use the schema format the arch code requested for the resource. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit b17ce95b490d79604112587139d854fba384d34f https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 Resctrl allows the architecture code to specify the schema format for a control. Controls can either take a bitmap, or some kind of number. If user-space doesn't know what a control is by its name, it could be told the schema format. 'Some kind of number' isn't useful as the difference between a percentage and a value in MB/s affects how these would be programmed, even if resctrl's parsing code doesn't need to care. Add the types resctrl already has in addition to 'range'. This allows architectures to move over before 'range' is removed. These new schema formats are parsed the same, but will additionally affect which files are visible. Schema formats with a double underscore should not be considered portable between architectures, and are likely to be described to user-space as 'platform defined'. AMDs MBA resource is configured with an absolute bandwidth measured in multiples of one eighth of a GB per second. resctrl needs to be aware of this platform defined format to ensure the existing 'MB' files continue to be shown. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 50eed7f84b62a3a4f539c579035ab72ec150cf19 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 Resctrl specifies the schema format for MB and SMBA in rdt_resources_all[]. Intel platforms take a percentage for MB, AMD platforms take an absolute value which isn't MB/s. Currently these are both treated as a 'range'. Adding support for additional types of control shows that user-space needs to be told what the control formats are. Today users of resctrl must already know if their platform is Intel or AMD to know how the MB resource will behave. The MPAM support exposes new control types that take a 'percentage'. The Intel MB resource is also configured by a percentage, so should be able to expose this to user-space. Remove the static configuration for schema_fmt in rdt_resources_all[] and specify it with the other control properties in __get_mem_config_intel() or __get_mem_config_amd(). Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 20f0c13f4ffd01cb6fc239248afa05d602f9e8d4 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 MPAM's bandwidth controls are both exposed to resctrl as if they take a percentage. Update the schema format so that user-space can be told this is a percentage, and files that describe this control format are exposed. (e.g. min_percent) Existing variation in this area is covered by requiring user-space to know if it is running on an Intel or AMD platform. Exposing the schema format directly will avoid modifying user-space to know it is running on an MPAM or RISC-V platform. MPAM can also expose bitmap controls for memory bandwidth, which may become important for use-cases in the future. These are currently converted to a percentage to fit the existing definition of the MB resource. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit ea03ef359eb04c8c0f557f589578bb4777b8e2b5 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 Resctrl previously had a 'range' schema format that took some kind of number. This has since been split into percentage, MB/s and an AMD platform specific scheme. As range is no longer used, remove it. The last user is mba_sc which should be described as taking MB/s. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 93fda1d6632174fefddfe5e712110dd1e2947c95 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…tmap controls BugLink: https://bugs.launchpad.net/bugs/2122432 MPAM has cache capacity controls that effectively take a percentage. Resctrl supports percentages, but the collection of files that are exposed to describe this control belong to the MB resource. To find the minimum granularity of the percentage cache capacity controls, user-space is expected to read the bandwidth_gran file, and know this has nothing to do with bandwidth. The only problem here is the name of the file. Add duplicates of these properties with percentage and bitmap in the name. These will be exposed based on the schema format. The existing files must remain tied to the specific resources so that they remain visible to user-space. Using the same helpers ensures the values will always be the same regardless of the file used. These files are not exposed until the new RFTYPE schema flags are set in a resource's 'fflags'. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 673bcb00d2371a2876e164da55d642fdf7657b8d https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…n schema format BugLink: https://bugs.launchpad.net/bugs/2122432 MPAM has cache capacity controls that effectively take a percentage. Resctrl supports percentages, but the collection of files that are exposed to describe this control belong to the MB resource. New files have been added that are selected based on the schema format. Apply the flags to enable these files based on the schema format. Add a new fflags_from_schema() that is used for controls. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit a837ccc258380d6aeef86df709cc0484b60a4acf https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 If more schemas are added to resctrl, user-space needs to know how to configure them. To allow user-space to configure schema it doesn't know about, it would be helpful to tell user-space the format, e.g. percentage. Add a file under info that describes the schema format. Percentages and 'mbps' are implicitly decimal, bitmaps are expected to be in hex. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit b457019d995b2849e683aef0fd89066e64c679a4 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 MPAM can have both cache portion and cache capacity controls on any cache that supports MPAM. Cache portion bitmaps can be exposed via resctrl if they are implemented on L2 or L3. The cache capacity controls cannot be used to isolate portions, which is implicit in the L2 or L3 bitmap provided by user-space. These controls need to be configured with something more like a percentage. Add the resource enum entries for these two resources. No additional resctrl code is needed because the architecture code will specify that this resource takes a 'percentage', re-using the support previously used only for the MB resource. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit b601bbf375b016c417db4ec0e8bd6ae58b9057aa https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…m cmax BugLink: https://bugs.launchpad.net/bugs/2122432 MPAM's maximum cache-capacity controls take a fixed point fraction format. Instead of dumping this on user-space, convert it to a percentage. User-space using resctrl already knows how to handle percentages. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 183d4c43260089e6b51518e50427d0f04a6af875 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 The cpu hotplug lock has a helper lockdep_assert_cpus_held() that makes it easy to annotate functions that must be called with the cpu hotplug lock held. Do the same for memory. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit f40d4b8451b3d9e197166ff33104bd63f93709d0 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…PU hotplug lock BugLink: https://bugs.launchpad.net/bugs/2122432 resctrl takes the read side of the CPU hotplug lock whenever it is working with the list of domains. This prevents a CPU being brought online and the list being modified while resctrl is walking the list, or picking CPUs from the CPU masks. If resctrl domains for CPU-less NUMA nodes are to be supported, this would not be enough to prevent the domain list from being modified, as a NUMA node can come online with only memory. Take the memory hotplug lock whenever the CPU hotplug lock is taken. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit f5a082989a5f40b9b95515d68b230f8125648fdb https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…arch stubs BugLink: https://bugs.launchpad.net/bugs/2122432 Resctrl expects the domain IDs for the 'MB' resource to be the corresponding L3 cache-ids. This is a problem for platforms where the memory bandwidth controls are implemented somewhere other than the L3 cache, and exist on a platform with CPU-less NUMA nodes. Such platforms can't currently be exposed via resctrl as not all the memory bandwidth can be controlled. Add a mount option to allow user-space to opt-in to the domain IDs for the MB resource to be the NUMA nid instead. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit ae8929caac02dccdc932666c1d8c906dda541bf1 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 idx is not used. Remove it to avoid a build warning. The author is James, but he didn't add his Signed-off-by. (backported from commit c9b4fabe0b1b4805186d4326d47547993a02d191 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) [fenghuay: Change subject to a meaningful one. Add commit message.] Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…stead of cache-id BugLink: https://bugs.launchpad.net/bugs/2122432 The MB domain ids are the L3 cache-id. This is unfortunate if the memory bandwidth controls are implemented for CPU-less NUMA nodes as there is no L3 whose cache-id can be used to expose these controls to resctrl. When picking the class to use as MB, note whether it is possible for the NUMA nid to be used as the domain-id. By default the MB resource will use the cache-id. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit c2506e7fdb9e9de624af635f5060a1fe56a6bb80 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
… work with a set of CPUs BugLink: https://bugs.launchpad.net/bugs/2122432 mpam_resctrl_offline_domain_hdr() expects to take a single CPU that is going offline. Once all CPUs are offline, the domain header is removed from its parent list, and the structure can be freed. This doesn't work for NUMA nodes. Change the CPU passed to mpam_resctrl_offline_domain_hdr() and mpam_resctrl_domain_hdr_init to be a cpumask. This allows a single CPU to be passed for CPUs going offline, and cpu_possible_mask to be passed for a NUMA node going offline. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 093483e5bca0aef546208b32eedf59f3aac665ff https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…domain() to have CPU and node BugLink: https://bugs.launchpad.net/bugs/2122432 mpam_resctrl_alloc_domain() brings a domain with CPUs online. To allow for domains that don't have any CPUs, split it into a CPU and NUMA node version. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit 817d04bd296871b61dd70f68d160b85837dfe9a8 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…nline/offline BugLink: https://bugs.launchpad.net/bugs/2122432 To expose resctrl resources that contain CPU-less NUMA domains, resctrl needs to be told when a CPU-less NUMA domain comes online. This can't be done with the cpuhp callbacks. Add a memory hotplug notifier, and use this to create and destroy resctrl domains. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit caf4034229d8df2c306658c2ddbe3c1ab73df109 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
…UMA nid as MB domain-id BugLink: https://bugs.launchpad.net/bugs/2122432 Enable resctrl's use of NUMA nid as the domain-id for the MB resource. Changing this state involves changing the IDs of all the domains visible to resctrl. Writing to this list means preventing CPU and memory hotplug. Signed-off-by: James Morse <james.morse@arm.com> (cherry picked from commit a795ac909c6c050daaf095abc9043217ddf5e746 https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git) Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 Modified for latest MPAM. Signed-off-by: Brad Figg <bfigg@nvidia.com> Signed-off-by: Koba Ko <kobak@nvidia.com> Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> (forward ported from commit 77bd02c https://github.com/NVIDIA/NV-Kernels/tree/24.04_linux-nvidia-6.14-next) [fenghuay: change 6.14 path to 6.17] Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Matt Ochs <mochs@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Acked-by: Koba Ko <kobak@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2122432 Define the missing SHIFT definitions to fix build errors. Fixes: a76ea20 ("NVIDIA: SAUCE: arm_mpam: Add quirk framework") Signed-off-by: Fenghua Yu <fenghuay@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com>
The match_char() macro evaluates its character parameter multiple times when traversing differential encoding chains. When invoked with *str++, the string pointer advances on each evaluation within the inner do-while loop, causing the DFA to check a different character at each iteration and therefore skip input characters. This results in out-of-bounds reads when the pointer advances past the input buffer boundary.

[ 94.984676] ==================================================================
[ 94.985301] BUG: KASAN: slab-out-of-bounds in aa_dfa_match+0x5ae/0x760
[ 94.985655] Read of size 1 at addr ffff888100342000 by task file/976
[ 94.986319] CPU: 7 UID: 1000 PID: 976 Comm: file Not tainted 6.19.0-rc7-next-20260127 #1 PREEMPT(lazy)
[ 94.986322] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 94.986329] Call Trace:
[ 94.986341] <TASK>
[ 94.986347] dump_stack_lvl+0x5e/0x80
[ 94.986374] print_report+0xc8/0x270
[ 94.986384] ? aa_dfa_match+0x5ae/0x760
[ 94.986388] kasan_report+0x118/0x150
[ 94.986401] ? aa_dfa_match+0x5ae/0x760
[ 94.986405] aa_dfa_match+0x5ae/0x760
[ 94.986408] __aa_path_perm+0x131/0x400
[ 94.986418] aa_path_perm+0x219/0x2f0
[ 94.986424] apparmor_file_open+0x345/0x570
[ 94.986431] security_file_open+0x5c/0x140
[ 94.986442] do_dentry_open+0x2f6/0x1120
[ 94.986450] vfs_open+0x38/0x2b0
[ 94.986453] ? may_open+0x1e2/0x2b0
[ 94.986466] path_openat+0x231b/0x2b30
[ 94.986469] ? __x64_sys_openat+0xf8/0x130
[ 94.986477] do_file_open+0x19d/0x360
[ 94.986487] do_sys_openat2+0x98/0x100
[ 94.986491] __x64_sys_openat+0xf8/0x130
[ 94.986499] do_syscall_64+0x8e/0x660
[ 94.986515] ? count_memcg_events+0x15f/0x3c0
[ 94.986526] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986540] ? handle_mm_fault+0x1639/0x1ef0
[ 94.986551] ? vma_start_read+0xf0/0x320
[ 94.986558] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986561] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986563] ? fpregs_assert_state_consistent+0x50/0xe0
[ 94.986572] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986574] ? arch_exit_to_user_mode_prepare+0x9/0xb0
[ 94.986587] ? srso_alias_return_thunk+0x5/0xfbef5
[ 94.986588] ? irqentry_exit+0x3c/0x590
[ 94.986595] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 94.986597] RIP: 0033:0x7fda4a79c3ea

Fix by extracting the character value before invoking match_char(), ensuring a single evaluation per outer-loop iteration.

Fixes: 074c1cd ("apparmor: dfa move character match into a macro")
Reported-by: Qualys Security Advisory <qsa@qualys.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com>
Reviewed-by: Cengiz Can <cengiz.can@canonical.com>
Signed-off-by: Massimiliano Pellizzer <massimiliano.pellizzer@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Cengiz Can <cengiz.can@canonical.com>
Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com>
(cherry picked from commit 2b61194 questing:linux)
Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
The verify_dfa() function only checks DEFAULT_TABLE bounds when the state is not differentially encoded. When the verification loop traverses the differential encoding chain, it reads k = DEFAULT_TABLE[j] and uses k as an array index without validation. A malformed DFA with DEFAULT_TABLE[j] >= state_count therefore causes both out-of-bounds reads and writes.

[ 57.179855] ==================================================================
[ 57.180549] BUG: KASAN: slab-out-of-bounds in verify_dfa+0x59a/0x660
[ 57.180904] Read of size 4 at addr ffff888100eadec4 by task su/993
[ 57.181554] CPU: 1 UID: 0 PID: 993 Comm: su Not tainted 6.19.0-rc7-next-20260127 #1 PREEMPT(lazy)
[ 57.181558] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[ 57.181563] Call Trace:
[ 57.181572] <TASK>
[ 57.181577] dump_stack_lvl+0x5e/0x80
[ 57.181596] print_report+0xc8/0x270
[ 57.181605] ? verify_dfa+0x59a/0x660
[ 57.181608] kasan_report+0x118/0x150
[ 57.181620] ? verify_dfa+0x59a/0x660
[ 57.181623] verify_dfa+0x59a/0x660
[ 57.181627] aa_dfa_unpack+0x1610/0x1740
[ 57.181629] ? __kmalloc_cache_noprof+0x1d0/0x470
[ 57.181640] unpack_pdb+0x86d/0x46b0
[ 57.181647] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181653] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181656] ? aa_unpack_nameX+0x1a8/0x300
[ 57.181659] aa_unpack+0x20b0/0x4c30
[ 57.181662] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181664] ? stack_depot_save_flags+0x33/0x700
[ 57.181681] ? kasan_save_track+0x4f/0x80
[ 57.181683] ? kasan_save_track+0x3e/0x80
[ 57.181686] ? __kasan_kmalloc+0x93/0xb0
[ 57.181688] ? __kvmalloc_node_noprof+0x44a/0x780
[ 57.181693] ? aa_simple_write_to_buffer+0x54/0x130
[ 57.181697] ? policy_update+0x154/0x330
[ 57.181704] aa_replace_profiles+0x15a/0x1dd0
[ 57.181707] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181710] ? __kvmalloc_node_noprof+0x44a/0x780
[ 57.181712] ? aa_loaddata_alloc+0x77/0x140
[ 57.181715] ? srso_alias_return_thunk+0x5/0xfbef5
[ 57.181717] ? _copy_from_user+0x2a/0x70
[ 57.181730] policy_update+0x17a/0x330
[ 57.181733] profile_replace+0x153/0x1a0
[ 57.181735] ? rw_verify_area+0x93/0x2d0
[ 57.181740] vfs_write+0x235/0xab0
[ 57.181745] ksys_write+0xb0/0x170
[ 57.181748] do_syscall_64+0x8e/0x660
[ 57.181762] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 57.181765] RIP: 0033:0x7f6192792eb2

Remove the MATCH_FLAG_DIFF_ENCODE condition to validate all DEFAULT_TABLE entries unconditionally.

Fixes: 031dcc8 ("apparmor: dfa add support for state differential encoding")
Reported-by: Qualys Security Advisory <qsa@qualys.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com>
Reviewed-by: Cengiz Can <cengiz.can@canonical.com>
Signed-off-by: Massimiliano Pellizzer <massimiliano.pellizzer@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Cengiz Can <cengiz.can@canonical.com>
Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com>
(cherry picked from commit f59e976 questing:linux)
Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
If ns_name is NULL after
1071 error = aa_unpack(udata, &lh, &ns_name);
and ent->ns_name contains an ns_name at
1089 } else if (ent->ns_name) {
then ns_name is assigned ent->ns_name at
1095 ns_name = ent->ns_name;
However, ent->ns_name is freed at
1262 aa_load_ent_free(ent);
and then freed again when ns_name is freed at
1270 kfree(ns_name);
Fix this by NULLing out ent->ns_name after it is transferred to ns_name.
Fixes: 04dc715 ("apparmor: audit policy ns specified in policy load")
Reported-by: Qualys Security Advisory <qsa@qualys.com>
Tested-by: Salvatore Bonaccorso <carnil@debian.org>
Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com>
Reviewed-by: Cengiz Can <cengiz.can@canonical.com>
Signed-off-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: Cengiz Can <cengiz.can@canonical.com>
Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com>
(cherry picked from commit 3a7dc9b questing:linux)
Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
…ment An unprivileged local user can load, replace, and remove profiles via a confused deputy attack: by opening the apparmorfs interfaces, passing the opened fd to a privileged process, and getting the privileged process to write to the interface. This does require a privileged target that can be manipulated into doing the write for the unprivileged process, but once such access is achieved, full policy management is possible, with all the implications that implies: removing confinement, DoS of the system or target applications by denying all execution, bypassing the unprivileged user namespace restriction, and exploiting kernel bugs for a local privilege escalation. The policy management interface cannot simply have its permissions changed from 0666 to 0600, because non-root processes need to be able to load policy into different policy namespaces. Instead, ensure the task writing to the interface has privileges that are a subset of the task that opened the interface. This is already done via policy for confined processes, but unconfined processes can delegate access to the opened fd, bypassing the usual policy check. Fixes: c88d4c7 ("AppArmor: core policy routines") Reported-by: Qualys Security Advisory <qsa@qualys.com> Tested-by: Salvatore Bonaccorso <carnil@debian.org> Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com> Reviewed-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: John Johansen <john.johansen@canonical.com> Signed-off-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com> (cherry picked from commit 9c6712e questing:linux) Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
Differential encoding allows loops to be created if it is abused. To prevent this, the unpack should verify that a diff-encode chain terminates. Unfortunately, the differential encode verification had two bugs. 1. It conflated states that had gone through the check and already been marked with states that were currently being checked and marked. This means loops in the chain currently being verified were treated as a chain that had already been verified. 2. The ordering bailout on already-checked states compared the current chain's check iterators j,k instead of using the outer loop iterator i, meaning a step backwards through states in the current chain's verification was mistaken for moving to an already-verified state. Move to a double mark scheme where already-verified states get a different mark than the chain currently being checked. This enables us to also drop the backwards verification check that was the cause of the second error, as any already-verified state is already marked. Fixes: 031dcc8 ("apparmor: dfa add support for state differential encoding") Reported-by: Qualys Security Advisory <qsa@qualys.com> Tested-by: Salvatore Bonaccorso <carnil@debian.org> Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com> Reviewed-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: John Johansen <john.johansen@canonical.com> Signed-off-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com> (cherry picked from commit 36b4187 questing:linux) Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
There is a race condition that leads to a use-after-free situation: because the rawdata inodes are not refcounted, an attacker can start open()ing one of the rawdata files and at the same time remove the last reference to this rawdata (by removing the corresponding profile, for example), which frees its struct aa_loaddata; as a result, when seq_rawdata_open() is reached, i_private is a dangling pointer and freed memory is accessed. The rawdata inodes weren't refcounted to avoid a circular refcount and were supposed to be held by the profile's rawdata reference. However, during profile removal there is a window where the vfs and profile destruction race, resulting in the use after free. Fix this by moving to a double refcount scheme, where the profile's refcount on the rawdata is used to break the circular dependency, allowing the rawdata to be freed once all inode references to it are put. Fixes: 5d5182c ("apparmor: move to per loaddata files, instead of replicating in profiles") Reported-by: Qualys Security Advisory <qsa@qualys.com> Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com> Reviewed-by: Maxime Bélair <maxime.belair@canonical.com> Reviewed-by: Cengiz Can <cengiz.can@canonical.com> Tested-by: Salvatore Bonaccorso <carnil@debian.org> Signed-off-by: John Johansen <john.johansen@canonical.com> [cengizcan: adjusted context in policy.c aa_free_profile because aa_audit_cache_destroy exists after aa_label_destroy, shifting the hunk context] Signed-off-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com> (cherry picked from commit 1d18980 questing:linux) Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
AppArmor was putting its reference to the i_private data after removing the original entry from the file system. However, the inode can and does live beyond that point, and it is possible that some of the fs callback functions will be invoked after the reference has been put, which results in a race between freeing the data and accessing it through the fs. While the rawdata/loaddata is the most likely candidate to lose the race, as it has the fewest references, a properly crafted attack might be able to trigger a race for the other types stored in i_private. Fix this by moving the put of i_private-referenced data to the correct place, which is during inode eviction. Fixes: c961ee5 ("apparmor: convert from securityfs to apparmorfs for policy ns files") Reported-by: Qualys Security Advisory <qsa@qualys.com> Reviewed-by: Georgia Garcia <georgia.garcia@canonical.com> Reviewed-by: Maxime Bélair <maxime.belair@canonical.com> Reviewed-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: John Johansen <john.johansen@canonical.com> [cengizcan: adjusted context in label.c aa_alloc_proxy because kzalloc_obj is not available, used kzalloc(sizeof(...)) instead] Signed-off-by: Cengiz Can <cengiz.can@canonical.com> Signed-off-by: Mehmet Basaran <mehmet.basaran@canonical.com> (cherry picked from commit 7467180 questing:linux) Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
Ignore: yes Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2144672 Properties: no-test-build Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
Ignore: yes Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
Signed-off-by: Jacob Martin <jacob.martin@canonical.com>
BugLink: https://bugs.launchpad.net/bugs/2143761 For NVFS I/Os, blk_rq_nr_phys_segments() can undercount GPU physical segments since CPU-side segment contiguity may differ from GPU physical contiguity. This can undersize the SGL pool causing a buffer overrun. Use payload size to select the pool: small (16 entries) for < 15*64K, large (256 entries) otherwise, bounded by actual pool capacity. Also guard nvfs_blk_rq_dma_map_iter_next() with mapped < entries so we never create a DMA mapping that cannot be stored or unmapped, which would leave nvfs_n_ops elevated and block nvfs_exit() on rmmod. Signed-off-by: Sourab Gupta <sougupta@nvidia.com> Reviewed-by: Kiran Modukuri <kmodukuri@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143602 contpte_ptep_set_access_flags() compared the gathered ptep_get() value against the requested entry to detect no-ops. ptep_get() ORs AF/dirty from all sub-PTEs in the CONT block, so a dirty sibling can make the target appear already-dirty. When the gathered value matches entry, the function returns 0 even though the target sub-PTE still has PTE_RDONLY set in hardware. For a CPU with FEAT_HAFDBS this gathered view is fine, since hardware may set AF/dirty on any sub-PTE and CPU TLB behavior is effectively gathered across the CONT range. But page-table walkers that evaluate each descriptor individually (e.g. a CPU without DBM support, or an SMMU without HTTU, or with HA/HD disabled in CD.TCR) can keep faulting on the unchanged target sub-PTE, causing an infinite fault loop. Gathering can therefore cause false no-ops when only a sibling has been updated: - write faults: target still has PTE_RDONLY (needs PTE_RDONLY cleared) - read faults: target still lacks PTE_AF Fix by checking each sub-PTE against the requested AF/dirty/write state (the same bits consumed by __ptep_set_access_flags()), using raw per-PTE values rather than the gathered ptep_get() view, before returning no-op. Keep using the raw target PTE for the write-bit unfold decision. Per Arm ARM (DDI 0487) D8.7.1 ("The Contiguous bit"), any sub-PTE in a CONT range may become the effective cached translation and software must maintain consistent attributes across the range. 
Link: https://lkml.kernel.org/r/20260305-contpte-fault-loop-v2-1-0216f0026d7f@nvidia.com Fixes: 4602e57 ("arm64/mm: wire up PTE_CONT for user mappings") Signed-off-by: Piotr Jaroszynski <pjaroszynski@nvidia.com> Reviewed-by: Alistair Popple <apopple@nvidia.com> Reviewed-by: James Houghton <jthoughton@google.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Breno Leitao <leitao@debian.org> Cc: Will Deacon <will@kernel.org> Cc: Jason Gunthorpe <jgg@nvidia.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Zi Yan <ziy@nvidia.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> (cherry picked from commit 1cabb564ae7bdf96bfebc02c6ad735cdb3dbaeae linux-next) Signed-off-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143862 C_BAD_STE was observed when updating nested STE from an S1-bypass mode to an S1DSS-bypass mode. As both modes enabled S2, the used bit is slightly different than the normal S1-bypass and S1DSS-bypass modes. As a result, fields like MEV and EATS in S2's used list marked the word1 as a critical word that requested a STE.V=0. This breaks a hitless update. However, both MEV and EATS aren't critical in terms of STE update. One controls the merge of the events and the other controls the ATS that is managed by the driver at the same time via pci_enable_ats(). Add an arm_smmu_get_ste_update_safe() to allow STE update algorithm to relax those fields, avoiding the STE update breakages. After this change, entry_set has no caller checking its return value, so change it to void. Note that this change is required by both MEV and EATS fields, which were introduced in different kernel versions. So add get_update_safe() first. MEV and EATS will be added to arm_smmu_get_ste_update_safe() separately. Fixes: 1e8be08 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED") Cc: stable@vger.kernel.org Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Pranjal Shrivastava <praan@google.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Will Deacon <will@kernel.org> (cherry picked from commit 2781f2a) Signed-off-by: Nathan Chen <nathanc@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143862 Nested CD tables set the MEV bit to try to reduce multi-fault spamming on the hypervisor. Since MEV is in STE word 1 this causes a breaking update sequence that is not required and impacts real workloads. For the purposes of STE updates the value of MEV doesn't matter, if it is set/cleared early or late it just results in a change to the fault reports that must be supported by the kernel anyhow. The spec says: Note: Software must expect, and be able to deal with, coalesced fault records even when MEV == 0. So mark STE MEV safe when computing the update sequence, to avoid creating a breaking update. Fixes: da0c565 ("iommu/arm-smmu-v3: Set MEV bit in nested STE for DoS mitigations") Cc: stable@vger.kernel.org Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Pranjal Shrivastava <praan@google.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Will Deacon <will@kernel.org> (cherry picked from commit f3c1d37) Signed-off-by: Nathan Chen <nathanc@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
…uence BugLink: https://bugs.launchpad.net/bugs/2143862 If VM wants to toggle EATS_TRANS off at the same time as changing the CFG, hypervisor will see EATS change to 0 and insert a V=0 breaking update into the STE even though the VM did not ask for that. In bare metal, EATS_TRANS is ignored by CFG=ABORT/BYPASS, which is why this does not cause a problem until we have the nested case where CFG is always a variation of S2 trans that does use EATS_TRANS. Relax the rules for EATS_TRANS sequencing, we don't need it to be exact as the enclosing code will always disable ATS at the PCI device when changing EATS_TRANS. This ensures there are no ATS transactions that can race with an EATS_TRANS change so we don't need to carefully sequence these bits. Fixes: 1e8be08 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED") Cc: stable@vger.kernel.org Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Will Deacon <will@kernel.org> (cherry picked from commit 7cad800) Signed-off-by: Nathan Chen <nathanc@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143862 STE in a nested case requires both S1 and S2 fields. And this makes the use case different from the existing one. Add coverage for previously failed cases shifting between S2-only and S1+S2 STEs. Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com> Reviewed-by: Mostafa Saleh <smostafa@google.com> Reviewed-by: Pranjal Shrivastava <praan@google.com> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Will Deacon <will@kernel.org> (cherry picked from commit a4f976e) Signed-off-by: Nathan Chen <nathanc@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Noah Wager <noah.wager@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
…diness check BugLink: https://bugs.launchpad.net/bugs/2144760 Blackwell-Next GPUs report device readiness via the CXL DVSEC Range 1 Low register (offset 0x1C) instead of the BAR0 HBM training register used by GB200. The GPU memory readiness is checked by polling the Memory_Active bit (bit 1) for up to the Memory_Active_Timeout (bits 15:13). Add runtime detection by checking the presence of the DVSEC register. Route to the new method if present, otherwise continue using the legacy approach. Signed-off-by: Ankit Agrawal <ankita@nvidia.com> Acked-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143954 Introduce TEGRA_GPIO_PREFIX() to define the Tegra SoC GPIO name prefix in one place. Use it for the Tegra410 COMPUTE and SYSTEM controllers so the prefix is "COMPUTE-" and "SYSTEM-" respectively. Signed-off-by: Prathamesh Shete <pshete@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Jon Hunter <jonathanh@nvidia.com> Link: https://patch.msgid.link/20260217081431.1208351-1-pshete@nvidia.com Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@oss.qualcomm.com> (cherry picked from commit 2423e336d94868f0d2fcd81a87b90c5ea59736e0 linux-next) Signed-off-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143954 On Tegra platforms, multiple SoC instances may be present with each defining the same GPIO name. For such devices, this results in duplicate GPIO names. When the device has a valid NUMA node, prepend the NUMA node ID to the GPIO name prefix. The node ID identifies each socket, ensuring GPIO line names remain distinct across multiple sockets. Signed-off-by: Prathamesh Shete <pshete@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Reviewed-by: Jon Hunter <jonathanh@nvidia.com> Link: https://patch.msgid.link/20260217081431.1208351-2-pshete@nvidia.com Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@oss.qualcomm.com> (cherry picked from commit 2c299030c6813eaa9ef95773c64d65c50fa706ac linux-next) Signed-off-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
…ion race BugLink: https://bugs.launchpad.net/bugs/2144345 Move INIT_WORK() for fw_images_update_work from update_fw_images_tree() to lfa_init() so the work item is initialized once at module load rather than re-initialized on every firmware image tree update. Re-initializing a work item that may already be queued is unsafe and can corrupt the workqueue. Add flush_workqueue() in lfa_notify_handler() before rescanning the image list to ensure any pending remove_invalid_fw_images work completes first, preventing use-after-free on the image list. Fixes: 1dd9a8f ("NVIDIA: VR: SAUCE: firmware: smccc: add support for Live Firmware Activation (LFA)") Signed-off-by: Nirmoy Das <nirmoyd@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: KobaKoNvidia Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143859 The TLBI instruction accepts XZR as a register argument, and for TLBI operations with a register argument, there is no functional difference between using XZR or another GPR which contains zeroes. Operations without a register argument are encoded as if XZR were used. Allow the __TLBI_1() macro to use XZR when a register argument is all zeroes. Today this only results in a trivial code saving in __do_compat_cache_op()'s workaround for Neoverse-N1 erratum #1542419. In subsequent patches this pattern will be used more generally. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oupton@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org> (cherry picked from commit bfd9c93) Signed-off-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
BugLink: https://bugs.launchpad.net/bugs/2143859 The ARM64_WORKAROUND_REPEAT_TLBI workaround is used to mitigate several errata where broadcast TLBI;DSB sequences don't provide all the architecturally required synchronization. The workaround performs more work than necessary, and can have significant overhead. This patch optimizes the workaround, as explained below. The workaround was originally added for Qualcomm Falkor erratum 1009 in commit: d9ff80f ("arm64: Work around Falkor erratum 1009") As noted in the message for that commit, the workaround is applied even in cases where it is not strictly necessary. The workaround was later reused without changes for:

* Arm Cortex-A76 erratum #1286807 SDEN v33: https://developer.arm.com/documentation/SDEN-885749/33-0/
* Arm Cortex-A55 erratum #2441007 SDEN v16: https://developer.arm.com/documentation/SDEN-859338/1600/
* Arm Cortex-A510 erratum #2441009 SDEN v19: https://developer.arm.com/documentation/SDEN-1873351/1900/

The important details to note are as follows:

1. All relevant errata only affect the ordering and/or completion of memory accesses which have been translated by an invalidated TLB entry. The actual invalidation of TLB entries is unaffected.

2. The existing workaround is applied to both broadcast and local TLB invalidation, whereas for all relevant errata it is only necessary to apply a workaround for broadcast invalidation.

3. The existing workaround replaces every TLBI with a TLBI;DSB;TLBI sequence, whereas for all relevant errata it is only necessary to execute a single additional TLBI;DSB sequence after any number of TLBIs are completed by a DSB. For example, for a sequence of batched TLBIs:

    TLBI <op1>[, <arg1>]
    TLBI <op2>[, <arg2>]
    TLBI <op3>[, <arg3>]
    DSB ISH
    ...

the existing workaround will expand this to:

    TLBI <op1>[, <arg1>]
    DSB ISH              // additional
    TLBI <op1>[, <arg1>] // additional
    TLBI <op2>[, <arg2>]
    DSB ISH              // additional
    TLBI <op2>[, <arg2>] // additional
    TLBI <op3>[, <arg3>]
    DSB ISH              // additional
    TLBI <op3>[, <arg3>] // additional
    DSB ISH
    ...

whereas it is sufficient to have:

    TLBI <op1>[, <arg1>]
    TLBI <op2>[, <arg2>]
    TLBI <op3>[, <arg3>]
    DSB ISH
    TLBI <opX>[, <argX>] // additional
    DSB ISH              // additional

Using a single additional TLBI and DSB at the end of the sequence can have significantly lower overhead, as each DSB which completes a TLBI must synchronize with other PEs in the system, with potential performance effects both locally and system-wide.

4. The existing workaround repeats each specific TLBI operation, whereas for all relevant errata it is sufficient for the additional TLBI to use *any* operation which will be broadcast, regardless of which translation regime or stage of translation the operation applies to. For example, for a single TLBI:

    TLBI ALLE2IS
    DSB ISH
    ...

the existing workaround will expand this to:

    TLBI ALLE2IS
    DSB ISH
    TLBI ALLE2IS // additional
    DSB ISH      // additional
    ...

whereas it is sufficient to have:

    TLBI ALLE2IS
    DSB ISH
    TLBI VALE1IS, XZR // additional
    DSB ISH           // additional

As the additional TLBI doesn't have to match a specific earlier TLBI, the additional TLBI can be implemented in separate code, with no memory of the earlier TLBIs. The additional TLBI can also use a cheaper TLBI operation.

5. The existing workaround is applied to both Stage-1 and Stage-2 TLB invalidation, whereas for all relevant errata it is only necessary to apply a workaround for Stage-1 invalidation. Architecturally, TLBI operations which invalidate only Stage-2 information (e.g. IPAS2E1IS) are not required to invalidate TLB entries which combine information from Stage-1 and Stage-2 translation table entries, and consequently may not complete memory accesses translated by those combined entries. In these cases, completion of memory accesses is only guaranteed after subsequent invalidation of Stage-1 information (e.g. VMALLE1IS).

Taking the above points into account, this patch reworks the workaround logic to reduce overhead:

* New __tlbi_sync_s1ish() and __tlbi_sync_s1ish_hyp() functions are added and used in place of any dsb(ish) which is used to complete broadcast Stage-1 TLB maintenance. When the ARM64_WORKAROUND_REPEAT_TLBI workaround is enabled, these helpers will execute an additional TLBI;DSB sequence. For consistency, it might make sense to add __tlbi_sync_*() helpers for local and stage 2 maintenance. For now I've left those with open-coded dsb() to keep the diff small.

* The duplication of TLBIs in __TLBI_0() and __TLBI_1() is removed. This is no longer needed as the necessary synchronization will happen in __tlbi_sync_s1ish() or __tlbi_sync_s1ish_hyp().

* The additional TLBI operation is chosen to have minimal impact:
  - __tlbi_sync_s1ish() uses "TLBI VALE1IS, XZR". This is only used at EL1 or at EL2 with {E2H,TGE}=={1,1}, where it will target an unused entry for the reserved ASID in the kernel's own translation regime, and have no adverse effect.
  - __tlbi_sync_s1ish_hyp() uses "TLBI VALE2IS, XZR". This is only used in hyp code, where it will target an unused entry in the hyp code's TTBR0 mapping, and should have no adverse effect.

* As __TLBI_0() and __TLBI_1() no longer replace each TLBI with a TLBI;DSB;TLBI sequence, batching TLBIs is worthwhile, and there's no need for arch_tlbbatch_should_defer() to consider ARM64_WORKAROUND_REPEAT_TLBI.

When building defconfig with GCC 15.1.0, compared to v6.19-rc1, this patch saves ~1KiB of text, makes the vmlinux ~42KiB smaller, and makes the resulting Image 64KiB smaller:

| [mark@lakrids:~/src/linux]% size vmlinux-*
|     text      data    bss      dec     hex filename
| 21179831  19660919 708216  4154896 279fca6 vmlinux-after
| 21181075  19660903 708216 41550194 27a0172 vmlinux-before
| [mark@lakrids:~/src/linux]% ls -l vmlinux-*
| -rwxr-xr-x 1 mark mark 157771472 Feb 4 12:05 vmlinux-after
| -rwxr-xr-x 1 mark mark 157815432 Feb 4 12:05 vmlinux-before
| [mark@lakrids:~/src/linux]% ls -l Image-*
| -rw-r--r-- 1 mark mark 41007616 Feb 4 12:05 Image-after
| -rw-r--r-- 1 mark mark 41073152 Feb 4 12:05 Image-before

Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Marc Zyngier <maz@kernel.org> Cc: Oliver Upton <oupton@kernel.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Will Deacon <will@kernel.org> Signed-off-by: Will Deacon <will@kernel.org> (cherry picked from commit a8f7868) Signed-off-by: Matthew R. Ochs <mochs@nvidia.com> Acked-by: Jamie Nguyen <jamien@nvidia.com> Acked-by: Carol L Soto <csoto@nvidia.com> Acked-by: Jacob Martin <jacob.martin@canonical.com> Acked-by: Abdur Rahman <abdur.rahman@canonical.com> Signed-off-by: Brad Figg <bfigg@nvidia.com>
The commit d5d399e ("printk/nbcon: Release nbcon consoles ownership in atomic flush after each emitted record") prevented a stall of a CPU which lost nbcon console ownership because another CPU entered an emergency flush. But there is still the problem that the CPU doing the emergency flush might cause a stall on its own. Let's go even further and restore IRQs in the atomic flush after each emitted record. It is not a complete solution. Interrupts and/or scheduling might still be blocked when the emergency atomic flush was called with IRQs and/or scheduling disabled. But it should remove the following lockup:

mlx5_core 0000:03:00.0: Shutdown was called
kvm: exiting hardware virtualization
arm-smmu-v3 arm-smmu-v3.10.auto: CMD_SYNC timeout at 0x00000103 [hwprod 0x00000104, hwcons 0x00000102]
smp: csd: Detected non-responsive CSD lock (#1) on CPU#4, waiting 5000000032 ns for CPU#00 do_nothing (kernel/smp.c:1057)
smp: csd: CSD lock (#1) unresponsive.
[...]
Call trace:
 pl011_console_write_atomic (./arch/arm64/include/asm/vdso/processor.h:12 drivers/tty/serial/amba-pl011.c:2540) (P)
 nbcon_emit_next_record (kernel/printk/nbcon.c:1049)
 __nbcon_atomic_flush_pending_con (kernel/printk/nbcon.c:1517)
 __nbcon_atomic_flush_pending.llvm.15488114865160659019 (./arch/arm64/include/asm/alternative-macros.h:254 ./arch/arm64/include/asm/cpufeature.h:808 ./arch/arm64/include/asm/irqflags.h:192 kernel/printk/nbcon.c:1562 kernel/printk/nbcon.c:1612)
 nbcon_atomic_flush_pending (kernel/printk/nbcon.c:1629)
 printk_kthreads_shutdown (kernel/printk/printk.c:?)
 syscore_shutdown (drivers/base/syscore.c:120)
 kernel_kexec (kernel/kexec_core.c:1045)
 __arm64_sys_reboot (kernel/reboot.c:794 kernel/reboot.c:722 kernel/reboot.c:722)
 invoke_syscall (arch/arm64/kernel/syscall.c:50)
 el0_svc_common.llvm.14158405452757855239 (arch/arm64/kernel/syscall.c:?)
 do_el0_svc (arch/arm64/kernel/syscall.c:152)
 el0_svc (./arch/arm64/include/asm/alternative-macros.h:254 ./arch/arm64/include/asm/cpufeature.h:808 ./arch/arm64/include/asm/irqflags.h:73 arch/arm64/kernel/entry-common.c:169 arch/arm64/kernel/entry-common.c:182 arch/arm64/kernel/entry-common.c:749)
 el0t_64_sync_handler (arch/arm64/kernel/entry-common.c:820)
 el0t_64_sync (arch/arm64/kernel/entry.S:600)

In this case, nbcon_atomic_flush_pending() is called from printk_kthreads_shutdown() with IRQs and scheduling enabled. Note that __nbcon_atomic_flush_pending_con() is directly called also from nbcon_device_release(), where the disabled IRQs might break PREEMPT_RT guarantees. But the atomic flush is called only in emergency or panic situations where the latencies are irrelevant anyway. An ultimate solution would be to touch watchdogs, but it would hide all problems. Let's do that later if anyone reports a stall which does not have a better solution. Closes: https://lore.kernel.org/r/sqwajvt7utnt463tzxgwu2yctyn5m6bjwrslsnupfexeml6hkd@v6sqmpbu3vvu Tested-by: Breno Leitao <leitao@debian.org> Reviewed-by: John Ogness <john.ogness@linutronix.de> Link: https://patch.msgid.link/20251212124520.244483-1-pmladek@suse.com Signed-off-by: Petr Mladek <pmladek@suse.com> (cherry picked from commit 9bd18e1) [nirmoy: In our tree __nbcon_atomic_flush_pending_con() had an extra nbcon_context_try_acquire() call before the while loop from a prior adaptation. Removed it since the acquire is now done inside scoped_guard(irqsave) on each loop iteration, matching upstream.] Signed-off-by: Nirmoy Das <nirmoyd@nvidia.com>
I should also add d5d399e
Backport 9bd18e1262c0 ("printk/nbcon: Restore IRQ in atomic flush after each emitted record") to fix: https://lore.kernel.org/all/sqwajvt7utnt463tzxgwu2yctyn5m6bjwrslsnupfexeml6hkd@v6sqmpbu3vvu/
There are two more patches which are of interest, but we disable CONFIG_KFENCE_STATIC_KEYS, so those fixes can be ignored.
LP: https://bugs.launchpad.net/ubuntu/+source/linux-nvidia-6.17/+bug/2146955
https://nvbugspro.nvidia.com/bug/5983492