From: Ravikiran G Thirumalai

While reserving KVA for the lmem_map of a node, we have to make sure
that node_remap_start_pfn[] is aligned to a proper pmd boundary
(node_remap_start_pfn[] gets its value from node_end_pfn[]).

Signed-off-by: Ravikiran Thirumalai
Signed-off-by: Shai Fultheim
Signed-off-by: Andrew Morton
---

 arch/i386/mm/discontig.c |    8 ++++++++
 1 files changed, 8 insertions(+)

diff -puN arch/i386/mm/discontig.c~mm-ensure-proper-alignment-for-node_remap_start_pfn arch/i386/mm/discontig.c
--- devel/arch/i386/mm/discontig.c~mm-ensure-proper-alignment-for-node_remap_start_pfn	2005-07-27 18:18:02.000000000 -0700
+++ devel-akpm/arch/i386/mm/discontig.c	2005-07-27 18:18:02.000000000 -0700
@@ -243,6 +243,14 @@ static unsigned long calculate_numa_rema
 	/* now the roundup is correct, convert to PAGE_SIZE pages */
 	size = size * PTRS_PER_PTE;
+	if (node_end_pfn[nid] & (PTRS_PER_PTE-1)) {
+		/*
+		 * Adjust size if node_end_pfn is not on a proper
+		 * pmd boundary. remap_numa_kva will barf otherwise.
+		 */
+		size += node_end_pfn[nid] & (PTRS_PER_PTE-1);
+	}
+
 	/*
 	 * Validate the region we are allocating only contains valid
 	 * pages.
_