x86/mm: Page size aware flush_tlb_mm_range()
Use the new tlb_get_unmap_shift() to determine the stride of the
INVLPG loop.
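
Illustrative sketch only (not the patch itself): the snippet below shows the idea of
stepping a flush loop by 1UL << stride_shift instead of by the 4K base page size.
demo_unmap_shift() and demo_flush_range() are hypothetical stand-ins for
tlb_get_unmap_shift() and the INVLPG loop inside flush_tlb_mm_range().

    #include <stdio.h>

    /* Hypothetical stand-in for the shift recorded during unmap:
     * 12 for 4K pages, 21 for 2M huge pages. */
    static unsigned int demo_unmap_shift(int huge)
    {
            return huge ? 21 : 12;
    }

    /* Illustrative flush loop: step by 1UL << stride_shift rather than
     * always by 4K, so a 2M-mapped range needs one invalidation per
     * huge page instead of 512. */
    static void demo_flush_range(unsigned long start, unsigned long end,
                                 unsigned int stride_shift)
    {
            unsigned long stride = 1UL << stride_shift;
            unsigned long addr;

            for (addr = start; addr < end; addr += stride)
                    printf("invlpg %#lx\n", addr);  /* stand-in for INVLPG */
    }

    int main(void)
    {
            /* Flush a 4M range mapped with 2M pages: two iterations
             * instead of 1024. */
            demo_flush_range(0x40000000UL, 0x40400000UL,
                             demo_unmap_shift(1));
            return 0;
    }
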
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Showing 6 changed files with 33 additions and 22 deletions
- arch/x86/include/asm/tlb.h: 14 additions, 7 deletions
- arch/x86/include/asm/tlbflush.h: 8 additions, 4 deletions
- arch/x86/kernel/ldt.c: 1 addition, 1 deletion
- arch/x86/kernel/vm86_32.c: 1 addition, 1 deletion
- arch/x86/mm/tlb.c: 8 additions, 9 deletions
- mm/pgtable-generic.c: 1 addition, 0 deletions