
Commit 0483e1f

thgarnie authored and Ingo Molnar committed
x86/mm: Implement ASLR for kernel memory regions
Randomizes the virtual address space of kernel memory regions for x86_64.
This first patch adds the infrastructure and does not randomize any region.
The following patches will randomize the physical memory mapping, vmalloc
and vmemmap regions.

This security feature mitigates exploits relying on predictable kernel
addresses. These addresses can be used to disclose the kernel modules base
addresses or corrupt specific structures to elevate privileges, bypassing
the current implementation of KASLR. This feature can be enabled with the
CONFIG_RANDOMIZE_MEMORY option.

The order of each memory region is not changed. The feature looks at the
available space for the regions based on different configuration options
and randomizes the base and space between each. The size of the physical
memory mapping is the available physical memory. No performance impact was
detected while testing the feature.

Entropy is generated using the KASLR early boot functions now shared in
the lib directory (originally written by Kees Cook). Randomization is done
on the PGD & PUD page table levels to increase the number of possible
addresses. The physical memory mapping code was adapted to support PUD
level virtual addresses. This implementation in the best configuration
provides 30,000 possible virtual addresses on average for each memory
region. An additional low memory page is used to ensure each CPU can start
with a PGD-aligned virtual address (for realmode).

x86/dump_pagetables was updated to correctly display each region.

Updated documentation on the x86_64 memory layout accordingly.

Performance data, after all patches in the series:

Kernbench shows almost no difference (+/- less than 1%):

Before:

Average Optimal load -j 12 Run (std deviation): Elapsed Time 102.63 (1.2695)
User Time 1034.89 (1.18115) System Time 87.056 (0.456416) Percent CPU 1092.9
(13.892) Context Switches 199805 (3455.33) Sleeps 97907.8 (900.636)

After:

Average Optimal load -j 12 Run (std deviation): Elapsed Time 102.489 (1.10636)
User Time 1034.86 (1.36053) System Time 87.764 (0.49345) Percent CPU 1095
(12.7715) Context Switches 199036 (4298.1) Sleeps 97681.6 (1031.11)

Hackbench shows 0% difference on average (hackbench 90 repeated 10 times):

attempt,before,after
1,0.076,0.069
2,0.072,0.069
3,0.066,0.066
4,0.066,0.068
5,0.066,0.067
6,0.066,0.069
7,0.067,0.066
8,0.063,0.067
9,0.067,0.065
10,0.068,0.071
average,0.0677,0.0677

Signed-off-by: Thomas Garnier <[email protected]>
Signed-off-by: Kees Cook <[email protected]>
Cc: Alexander Kuleshov <[email protected]>
Cc: Alexander Popov <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Dan Williams <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Denys Vlasenko <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Jan Beulich <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Lv Zheng <[email protected]>
Cc: Mark Salter <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephen Smalley <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Toshi Kani <[email protected]>
Cc: Xiao Guangrong <[email protected]>
Cc: Yinghai Lu <[email protected]>
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
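Editor's note: as a rough illustration of the entropy-splitting scheme described
above, here is a small stand-alone user-space sketch (not kernel code). It divides
a leftover virtual-address budget evenly among the remaining regions and keeps
PUD (1 GB) alignment, the way kernel_randomize_memory() in the diff below does.
The region sizes, the 120 TB span and the use of rand() are assumptions made for
the demo only; the kernel uses prandom_*() seeded from kaslr_get_random_long().

/* Demo assumes a 64-bit unsigned long. */
#include <stdio.h>
#include <stdlib.h>

#define GB              (1UL << 30)
#define TB              (1UL << 40)
#define PUD_SIZE        GB                      /* randomization granularity (1 GB) */
#define PUD_MASK        (~(PUD_SIZE - 1))

/* Crude 64-bit pseudo-random value; demo only, not the kernel's prandom. */
static unsigned long rnd64(void)
{
        return ((unsigned long)rand() << 33) ^ ((unsigned long)rand() << 11) ^ rand();
}

int main(void)
{
        /* Hypothetical region sizes: direct mapping, vmalloc, vmemmap. */
        unsigned long sizes[] = { 64 * TB, 32 * TB, 1 * TB };
        unsigned long span = 120 * TB;          /* assumed usable virtual span */
        unsigned long n = sizeof(sizes) / sizeof(sizes[0]);
        unsigned long vaddr = 0, remain = span;
        unsigned long i;

        for (i = 0; i < n; i++)
                remain -= sizes[i];             /* slack left between regions */

        srand(42);
        for (i = 0; i < n; i++) {
                /* Even share of the remaining slack for this region. */
                unsigned long entropy = remain / (n - i);

                entropy = (rnd64() % (entropy + 1)) & PUD_MASK;
                vaddr += entropy;
                printf("region %lu starts %lu GB into the span\n", i, vaddr / GB);

                /* Skip the region itself and round up to the next PUD boundary. */
                vaddr += sizes[i];
                vaddr = (vaddr + PUD_SIZE) & PUD_MASK;
                remain -= entropy;
        }
        return 0;
}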
1 parent b234e8a commit 0483e1f

File tree

9 files changed: +202 −5 lines changed


Documentation/x86/x86_64/mm.txt

Lines changed: 4 additions & 0 deletions
@@ -39,4 +39,8 @@ memory window (this size is arbitrary, it can be raised later if needed).
 The mappings are not part of any other kernel PGD and are only available
 during EFI runtime calls.
 
+Note that if CONFIG_RANDOMIZE_MEMORY is enabled, the direct mapping of all
+physical memory, vmalloc/ioremap space and virtual memory map are randomized.
+Their order is preserved but their base will be offset early at boot time.
+
 -Andi Kleen, Jul 2004

arch/x86/Kconfig

Lines changed: 17 additions & 0 deletions
@@ -1993,6 +1993,23 @@ config PHYSICAL_ALIGN
 
           Don't change this unless you know what you are doing.
 
+config RANDOMIZE_MEMORY
+        bool "Randomize the kernel memory sections"
+        depends on X86_64
+        depends on RANDOMIZE_BASE
+        default RANDOMIZE_BASE
+        ---help---
+           Randomizes the base virtual address of kernel memory sections
+           (physical memory mapping, vmalloc & vmemmap). This security feature
+           makes exploits relying on predictable memory locations less reliable.
+
+           The order of allocations remains unchanged. Entropy is generated in
+           the same way as RANDOMIZE_BASE. Current implementation in the optimal
+           configuration have in average 30,000 different possible virtual
+           addresses for each memory section.
+
+           If unsure, say N.
+
 config HOTPLUG_CPU
         bool "Support for hot-pluggable CPUs"
         depends on SMP

arch/x86/include/asm/kaslr.h

Lines changed: 6 additions & 0 deletions
@@ -3,4 +3,10 @@
 
 unsigned long kaslr_get_random_long(const char *purpose);
 
+#ifdef CONFIG_RANDOMIZE_MEMORY
+void kernel_randomize_memory(void);
+#else
+static inline void kernel_randomize_memory(void) { }
+#endif /* CONFIG_RANDOMIZE_MEMORY */
+
 #endif

arch/x86/include/asm/pgtable.h

Lines changed: 6 additions & 1 deletion
@@ -732,11 +732,16 @@ void early_alloc_pgt_buf(void);
 #ifdef CONFIG_X86_64
 /* Realmode trampoline initialization. */
 extern pgd_t trampoline_pgd_entry;
-static inline void __meminit init_trampoline(void)
+static inline void __meminit init_trampoline_default(void)
 {
         /* Default trampoline pgd value */
         trampoline_pgd_entry = init_level4_pgt[pgd_index(__PAGE_OFFSET)];
 }
+# ifdef CONFIG_RANDOMIZE_MEMORY
+void __meminit init_trampoline(void);
+# else
+# define init_trampoline init_trampoline_default
+# endif
 #else
 static inline void init_trampoline(void) { }
 #endif

arch/x86/kernel/setup.c

Lines changed: 3 additions & 0 deletions
@@ -113,6 +113,7 @@
 #include <asm/prom.h>
 #include <asm/microcode.h>
 #include <asm/mmu_context.h>
+#include <asm/kaslr.h>
 
 /*
  * max_low_pfn_mapped: highest direct mapped pfn under 4GB
@@ -942,6 +943,8 @@ void __init setup_arch(char **cmdline_p)
 
         x86_init.oem.arch_setup();
 
+        kernel_randomize_memory();
+
         iomem_resource.end = (1ULL << boot_cpu_data.x86_phys_bits) - 1;
         setup_memory_map();
         parse_setup_data();

arch/x86/mm/Makefile

Lines changed: 1 addition & 0 deletions
@@ -37,4 +37,5 @@ obj-$(CONFIG_NUMA_EMU) += numa_emulation.o
 
 obj-$(CONFIG_X86_INTEL_MPX)                    += mpx.o
 obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
+obj-$(CONFIG_RANDOMIZE_MEMORY)                 += kaslr.o
 

arch/x86/mm/dump_pagetables.c

Lines changed: 12 additions & 4 deletions
@@ -72,9 +72,9 @@ static struct addr_marker address_markers[] = {
         { 0, "User Space" },
 #ifdef CONFIG_X86_64
         { 0x8000000000000000UL, "Kernel Space" },
-        { PAGE_OFFSET,          "Low Kernel Mapping" },
-        { VMALLOC_START,        "vmalloc() Area" },
-        { VMEMMAP_START,        "Vmemmap" },
+        { 0/* PAGE_OFFSET */,   "Low Kernel Mapping" },
+        { 0/* VMALLOC_START */, "vmalloc() Area" },
+        { 0/* VMEMMAP_START */, "Vmemmap" },
 # ifdef CONFIG_X86_ESPFIX64
         { ESPFIX_BASE_ADDR,     "ESPfix Area", 16 },
 # endif
@@ -434,8 +434,16 @@ void ptdump_walk_pgd_level_checkwx(void)
 
 static int __init pt_dump_init(void)
 {
+        /*
+         * Various markers are not compile-time constants, so assign them
+         * here.
+         */
+#ifdef CONFIG_X86_64
+        address_markers[LOW_KERNEL_NR].start_address = PAGE_OFFSET;
+        address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
+        address_markers[VMEMMAP_START_NR].start_address = VMEMMAP_START;
+#endif
 #ifdef CONFIG_X86_32
-        /* Not a compile-time constant on x86-32 */
         address_markers[VMALLOC_START_NR].start_address = VMALLOC_START;
         address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
 # ifdef CONFIG_HIGHMEM

arch/x86/mm/init.c

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@
 #include <asm/proto.h>
 #include <asm/dma.h>            /* for MAX_DMA_PFN */
 #include <asm/microcode.h>
+#include <asm/kaslr.h>
 
 /*
  * We need to define the tracepoints somewhere, and tlb.c

arch/x86/mm/kaslr.c

Lines changed: 152 additions & 0 deletions
@@ -0,0 +1,152 @@
+/*
+ * This file implements KASLR memory randomization for x86_64. It randomizes
+ * the virtual address space of kernel memory regions (physical memory
+ * mapping, vmalloc & vmemmap) for x86_64. This security feature mitigates
+ * exploits relying on predictable kernel addresses.
+ *
+ * Entropy is generated using the KASLR early boot functions now shared in
+ * the lib directory (originally written by Kees Cook). Randomization is
+ * done on PGD & PUD page table levels to increase possible addresses. The
+ * physical memory mapping code was adapted to support PUD level virtual
+ * addresses. This implementation on the best configuration provides 30,000
+ * possible virtual addresses in average for each memory region. An additional
+ * low memory page is used to ensure each CPU can start with a PGD aligned
+ * virtual address (for realmode).
+ *
+ * The order of each memory region is not changed. The feature looks at
+ * the available space for the regions based on different configuration
+ * options and randomizes the base and space between each. The size of the
+ * physical memory mapping is the available physical memory.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/random.h>
+
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/setup.h>
+#include <asm/kaslr.h>
+
+#include "mm_internal.h"
+
+#define TB_SHIFT 40
+
+/*
+ * Virtual address start and end range for randomization. The end changes base
+ * on configuration to have the highest amount of space for randomization.
+ * It increases the possible random position for each randomized region.
+ *
+ * You need to add an if/def entry if you introduce a new memory region
+ * compatible with KASLR. Your entry must be in logical order with memory
+ * layout. For example, ESPFIX is before EFI because its virtual address is
+ * before. You also need to add a BUILD_BUG_ON in kernel_randomize_memory to
+ * ensure that this order is correct and won't be changed.
+ */
+static const unsigned long vaddr_start;
+static const unsigned long vaddr_end;
+
+/*
+ * Memory regions randomized by KASLR (except modules that use a separate logic
+ * earlier during boot). The list is ordered based on virtual addresses. This
+ * order is kept after randomization.
+ */
+static __initdata struct kaslr_memory_region {
+        unsigned long *base;
+        unsigned long size_tb;
+} kaslr_regions[] = {
+};
+
+/* Get size in bytes used by the memory region */
+static inline unsigned long get_padding(struct kaslr_memory_region *region)
+{
+        return (region->size_tb << TB_SHIFT);
+}
+
+/*
+ * Apply no randomization if KASLR was disabled at boot or if KASAN
+ * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
+ */
+static inline bool kaslr_memory_enabled(void)
+{
+        return kaslr_enabled() && !config_enabled(CONFIG_KASAN);
+}
+
+/* Initialize base and padding for each memory region randomized with KASLR */
+void __init kernel_randomize_memory(void)
+{
+        size_t i;
+        unsigned long vaddr = vaddr_start;
+        unsigned long rand;
+        struct rnd_state rand_state;
+        unsigned long remain_entropy;
+
+        if (!kaslr_memory_enabled())
+                return;
+
+        /* Calculate entropy available between regions */
+        remain_entropy = vaddr_end - vaddr_start;
+        for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++)
+                remain_entropy -= get_padding(&kaslr_regions[i]);
+
+        prandom_seed_state(&rand_state, kaslr_get_random_long("Memory"));
+
+        for (i = 0; i < ARRAY_SIZE(kaslr_regions); i++) {
+                unsigned long entropy;
+
+                /*
+                 * Select a random virtual address using the extra entropy
+                 * available.
+                 */
+                entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
+                prandom_bytes_state(&rand_state, &rand, sizeof(rand));
+                entropy = (rand % (entropy + 1)) & PUD_MASK;
+                vaddr += entropy;
+                *kaslr_regions[i].base = vaddr;
+
+                /*
+                 * Jump the region and add a minimum padding based on
+                 * randomization alignment.
+                 */
+                vaddr += get_padding(&kaslr_regions[i]);
+                vaddr = round_up(vaddr + 1, PUD_SIZE);
+                remain_entropy -= entropy;
+        }
+}
+
+/*
+ * Create PGD aligned trampoline table to allow real mode initialization
+ * of additional CPUs. Consume only 1 low memory page.
+ */
+void __meminit init_trampoline(void)
+{
+        unsigned long paddr, paddr_next;
+        pgd_t *pgd;
+        pud_t *pud_page, *pud_page_tramp;
+        int i;
+
+        if (!kaslr_memory_enabled()) {
+                init_trampoline_default();
+                return;
+        }
+
+        pud_page_tramp = alloc_low_page();
+
+        paddr = 0;
+        pgd = pgd_offset_k((unsigned long)__va(paddr));
+        pud_page = (pud_t *) pgd_page_vaddr(*pgd);
+
+        for (i = pud_index(paddr); i < PTRS_PER_PUD; i++, paddr = paddr_next) {
+                pud_t *pud, *pud_tramp;
+                unsigned long vaddr = (unsigned long)__va(paddr);
+
+                pud_tramp = pud_page_tramp + pud_index(paddr);
+                pud = pud_page + pud_index(vaddr);
+                paddr_next = (paddr & PUD_MASK) + PUD_SIZE;
+
+                *pud_tramp = *pud;
+        }
+
+        set_pgd(&trampoline_pgd_entry,
+                __pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
+}
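Editor's note: kaslr_regions[] is intentionally left empty by this patch; the
follow-up patches in the series register the physical memory mapping, vmalloc
and vmemmap regions there. Purely as a hedged sketch of what such an entry
could look like (the page_offset_base variable name and the 64 TB maximum are
assumptions for illustration, not part of this diff):

/* Hypothetical sketch only -- not part of this commit. */
extern unsigned long page_offset_base;  /* assumed randomized base of the direct mapping */

static __initdata struct kaslr_memory_region {
        unsigned long *base;            /* variable holding the region's base address */
        unsigned long size_tb;          /* space reserved for the region, in TB */
} kaslr_regions[] = {
        { &page_offset_base, 64 /* TB, assumed maximum for the physical mapping */ },
};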
