// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2000 - 2007 Jeff Dike (jdike@{addtoit,linux.intel}.com)
 */

#include <linux/stddef.h>
#include <linux/module.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/swap.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <asm/sections.h>
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <as-layout.h>
#include <init.h>
#include <kern.h>
#include <kern_util.h>
#include <mem_user.h>
#include <os.h>
#include <um_malloc.h>
#include <linux/sched/task.h>
#include <linux/kasan.h>

#ifdef CONFIG_KASAN
void __init kasan_init(void)
{
        /*
         * kasan_map_memory will map all of the required address space and
         * the host machine will allocate physical memory as necessary.
         */
        kasan_map_memory((void *)KASAN_SHADOW_START, KASAN_SHADOW_SIZE);
        init_task.kasan_depth = 0;
        /*
         * Since kasan_init() is called before main(),
         * KASAN is initialized but the enablement is deferred after
         * jump_label_init(). See arch_mm_preinit().
         */
}
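
/*
 * kasan_init() must run before main(); a pointer to it is placed in the
 * .kasan_init section so that it is invoked constructor-style at startup.
 */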
static void (*kasan_init_ptr)(void)
__section(".kasan_init") __used
= kasan_init;
#endif

/* allocated in paging_init, zeroed in mem_init, and unchanged thereafter */
unsigned long *empty_zero_page = NULL;
EXPORT_SYMBOL(empty_zero_page);

/*
 * Initialized during boot, and readonly for initializing page tables
 * afterwards
 */
pgd_t swapper_pg_dir[PTRS_PER_PGD];

/* Initialized at boot time, and readonly after that */
int kmalloc_ok = 0;

/* Used during early boot */
static unsigned long brk_end;

void __init arch_mm_preinit(void)
{
        /* Safe to call after jump_label_init(). Enables KASAN. */
        kasan_init_generic();

        /* clear the zero-page */
        memset(empty_zero_page, 0, PAGE_SIZE);

        /* Map in the area just after the brk now that kmalloc is about
         * to be turned on.
         */
        brk_end = (unsigned long) UML_ROUND_UP(sbrk(0));
        map_memory(brk_end, __pa(brk_end), uml_reserved - brk_end, 1, 1, 0);
        memblock_free((void *)brk_end, uml_reserved - brk_end);
        uml_reserved = brk_end;
        min_low_pfn = PFN_UP(__pa(uml_reserved));
        max_pfn = max_low_pfn;
}
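
/*
 * Called once the core mm is up; setting kmalloc_ok signals that the kernel
 * allocator may now be used.
 */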
void __init mem_init(void)
{
        kmalloc_ok = 1;
}

#if IS_ENABLED(CONFIG_ARCH_REUSE_HOST_VSYSCALL_AREA)
/*
 * Create a page table and place a pointer to it in a middle page
 * directory entry.
 */
static void __init one_page_table_init(pmd_t *pmd)
{
        if (pmd_none(*pmd)) {
                pte_t *pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
                                                          PAGE_SIZE);
                if (!pte)
                        panic("%s: Failed to allocate %lu bytes align=%lx\n",
                              __func__, PAGE_SIZE, PAGE_SIZE);

                set_pmd(pmd, __pmd(_KERNPG_TABLE +
                                   (unsigned long) __pa(pte)));
                BUG_ON(pte != pte_offset_kernel(pmd, 0));
        }
}
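
/*
 * Create a pmd-level table and install it in the given pud entry (only when
 * more than two page-table levels are configured).
 */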
static void __init one_md_table_init(pud_t *pud)
{
#if CONFIG_PGTABLE_LEVELS > 2
        pmd_t *pmd_table = (pmd_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
        if (!pmd_table)
                panic("%s: Failed to allocate %lu bytes align=%lx\n",
                      __func__, PAGE_SIZE, PAGE_SIZE);

        set_pud(pud, __pud(_KERNPG_TABLE + (unsigned long) __pa(pmd_table)));
        BUG_ON(pmd_table != pmd_offset(pud, 0));
#endif
}
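
/*
 * Likewise, create a pud-level table for the given p4d entry when four
 * page-table levels are configured.
 */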
static void __init one_ud_table_init(p4d_t *p4d)
{
#if CONFIG_PGTABLE_LEVELS > 3
        pud_t *pud_table = (pud_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
        if (!pud_table)
                panic("%s: Failed to allocate %lu bytes align=%lx\n",
                      __func__, PAGE_SIZE, PAGE_SIZE);

        set_p4d(p4d, __p4d(_KERNPG_TABLE + (unsigned long) __pa(pud_table)));
        BUG_ON(pud_table != pud_offset(p4d, 0));
#endif
}
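
/*
 * Walk the page-table hierarchy for [start, end) under pgd_base, allocating
 * any missing intermediate tables along the way.
 */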
static void __init fixrange_init(unsigned long start, unsigned long end,
                                 pgd_t *pgd_base)
{
        pgd_t *pgd;
        p4d_t *p4d;
        pud_t *pud;
        pmd_t *pmd;
        int i, j;
        unsigned long vaddr;

        vaddr = start;
        i = pgd_index(vaddr);
        j = pmd_index(vaddr);
        pgd = pgd_base + i;

        for ( ; (i < PTRS_PER_PGD) && (vaddr < end); pgd++, i++) {
                p4d = p4d_offset(pgd, vaddr);
                if (p4d_none(*p4d))
                        one_ud_table_init(p4d);
                pud = pud_offset(p4d, vaddr);
                if (pud_none(*pud))
                        one_md_table_init(pud);
                pmd = pmd_offset(pud, vaddr);
                for (; (j < PTRS_PER_PMD) && (vaddr < end); pmd++, j++) {
                        one_page_table_init(pmd);
                        vaddr += PMD_SIZE;
                }
                j = 0;
        }
}
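
/*
 * Copy the host's FIXADDR_USER area (the vsyscall page reused from the host)
 * into kernel-allocated memory and map it read-only at the same addresses.
 */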
static void __init fixaddr_user_init(void)
{
        long size = FIXADDR_USER_END - FIXADDR_USER_START;
        pte_t *pte;
        phys_t p;
        unsigned long v, vaddr = FIXADDR_USER_START;

        if (!size)
                return;

        fixrange_init(FIXADDR_USER_START, FIXADDR_USER_END, swapper_pg_dir);
        v = (unsigned long) memblock_alloc_low(size, PAGE_SIZE);
        if (!v)
                panic("%s: Failed to allocate %lu bytes align=%lx\n",
                      __func__, size, PAGE_SIZE);

        memcpy((void *) v, (void *) FIXADDR_USER_START, size);
        p = __pa(v);
        for ( ; size > 0; size -= PAGE_SIZE, vaddr += PAGE_SIZE,
                      p += PAGE_SIZE) {
                pte = virt_to_kpte(vaddr);
                pte_set_val(*pte, p, PAGE_READONLY);
        }
}
#endif
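
/*
 * All of UML's "physical" memory lives in ZONE_NORMAL; set up the zone sizes
 * and allocate the shared zero page.
 */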
void __init paging_init(void)
{
        unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

        empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
                                                               PAGE_SIZE);
        if (!empty_zero_page)
                panic("%s: Failed to allocate %lu bytes align=%lx\n",
                      __func__, PAGE_SIZE, PAGE_SIZE);

        max_zone_pfn[ZONE_NORMAL] = end_iomem >> PAGE_SHIFT;
        free_area_init(max_zone_pfn);

#if IS_ENABLED(CONFIG_ARCH_REUSE_HOST_VSYSCALL_AREA)
        fixaddr_user_init();
#endif
}

/*
 * This can't do anything because nothing in the kernel image can be freed
 * since it's not in kernel physical memory.
 */

void free_initmem(void)
{
}

/* Allocate and free page tables. */

pgd_t *pgd_alloc(struct mm_struct *mm)
{
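        /* A new pgd starts with the kernel mappings copied from swapper_pg_dir. */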
        pgd_t *pgd = __pgd_alloc(mm, 0);

        if (pgd)
                memcpy(pgd + USER_PTRS_PER_PGD,
                       swapper_pg_dir + USER_PTRS_PER_PGD,
                       (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));

        return pgd;
}
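
/*
 * kmalloc() wrapper for the userspace (os-Linux) side of UML, which cannot
 * pull in kernel headers such as slab.h directly.
 */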
void *uml_kmalloc(int size, int flags)
{
        return kmalloc(size, flags);
}
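
/* Translation of vm_flags (read/write/exec/shared) to page protections. */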
static const pgprot_t protection_map[16] = {
        [VM_NONE] = PAGE_NONE,
        [VM_READ] = PAGE_READONLY,
        [VM_WRITE] = PAGE_COPY,
        [VM_WRITE | VM_READ] = PAGE_COPY,
        [VM_EXEC] = PAGE_READONLY,
        [VM_EXEC | VM_READ] = PAGE_READONLY,
        [VM_EXEC | VM_WRITE] = PAGE_COPY,
        [VM_EXEC | VM_WRITE | VM_READ] = PAGE_COPY,
        [VM_SHARED] = PAGE_NONE,
        [VM_SHARED | VM_READ] = PAGE_READONLY,
        [VM_SHARED | VM_WRITE] = PAGE_SHARED,
        [VM_SHARED | VM_WRITE | VM_READ] = PAGE_SHARED,
        [VM_SHARED | VM_EXEC] = PAGE_READONLY,
        [VM_SHARED | VM_EXEC | VM_READ] = PAGE_READONLY,
        [VM_SHARED | VM_EXEC | VM_WRITE] = PAGE_SHARED,
        [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED
};
DECLARE_VM_GET_PAGE_PROT
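
/* Make the kernel's rodata section read-only via the host. */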
void mark_rodata_ro(void)
{
        unsigned long rodata_start = PFN_ALIGN(__start_rodata);
        unsigned long rodata_end = PFN_ALIGN(__end_rodata);

        os_protect_memory((void *)rodata_start, rodata_end - rodata_start, 1, 0, 0);
}