loader,smp: Make sure MMU is enabled on all cores for cache coherency.#465
Conversation
Ensure all cores are in the same coherency domain before using cached shared data. To that end, arch_mmu_enable() is moved out of start_kernel(). Signed-off-by: Christian Bruel <christian.bruel@foss.st.com>
midnightveil
left a comment
This seems like it would be correct to me, but I'd like to confirm it against the ARM ARM first.
What is the shareability of these mappings, do you know? Hopefully we set it up as ISH.
Maybe I misunderstood @KurtWu10, but I was under the impression that this would be fine, as we only disabled the MMU & L1 cache (or, actually, thinking about it, I guess we also disable the cache itself; I was asking about how it works if you just disable the MMU, not the cache too).
This probably would affect spin-table booting too, though I'm inclined to leave it as-is, since nothing but Raspberry Pi uses it (looking at Linux, it does a cache clean & invalidate to PoC there).
(Not something to fix here) this does make me realise I could have switched to the kernel page tables instead of disabling the U-Boot ones, as that would stop the loader from loading things over the current page tables, since it knows not to load over itself. Although we would probably want to clean caches then.
Initially, I thought this was correct, but now I am not sure whether enabling the MMU is required for cache coherency. According to the Arm ARM, D8.2.12 "The effects of disabling an address translation stage", assuming that the effective value of
then, according to B2.3 "Ordering requirements defined by the formal concurrency model",
My interpretation is that, with the MMU disabled, data memory accesses use the Device attribute, and coherency for them is provided by load-acquire and store-release instructions. The reasoning above assumes that the effective value of
I think you are correct here: cacheability and coherency can indeed apply regardless of the MMU, as long as the cores are in the same shareability domain and the cache attributes are consistent. My mistake in the description was that I confused "enabling the caches" with "enabling the MMU", because both are currently done inside enable_mmu(). A possible refinement would be to separate cache activation from enable_mmu(), with a new function called before accessing shared data. Which now leaves two possible fixes:
I am fine with either option. Thank you for raising this inconsistency!
In #464 I set inner shareability in the memory translation attributes.
That's the point: we could disable the MMU and only enable the cache. But in this situation, I don't know what the default shareability settings are. I tested on STM32MP and this works, but I'm afraid we could fall into SoC-specific or undefined behavior. Making sure the attributes in the translation tables are set (thus with the MMU enabled) is IMHO the best way to avoid ambiguity.
Yes, I agree. I think it makes sense to turn MMU and caches on.
Hello,
This fixes an SMP boot issue observed on the STM32MP2 platform: core 0 does not start because print_lock is never seen as released by the secondary cores. The issue is related to cacheability, which is not yet enabled at this stage.
Ensure all cores are in the same coherency domain by enabling the MMU on all cores before they use cached shared data.
Thus, arch_mmu_enable() is moved out of start_kernel().