commit 53adb9d43e
The commit 7197fb4058 ("util/mmap-alloc: fix hugetlb support on ppc64")
fixed Huge TLB mappings on ppc64.

However, we still need to consider the underlying huge page size during
munmap() because it requires that both address and length be a multiple
of the underlying huge page size for Huge TLB mappings. Quoting the
"Huge page (Huge TLB) mappings" paragraph under the NOTES section of the
munmap(2) manual:

    "For munmap(), addr and length must both be a multiple of the
    underlying huge page size."

On ppc64, the munmap() in qemu_ram_munmap() does not work for Huge TLB
mappings because the mapped segment can be aligned with the underlying
huge page size, not with the native system page size as returned by
getpagesize(). This has the side effect of not releasing huge pages back
to the pool after a hugetlbfs file-backed memory device is hot-unplugged.

This patch fixes the situation in qemu_ram_mmap() and qemu_ram_munmap()
by considering the underlying page size on ppc64. After this patch,
memory hot-unplug releases huge pages back to the pool.

Fixes: 7197fb4058
Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
15 lines
304 B
C
#ifndef QEMU_MMAP_ALLOC_H
#define QEMU_MMAP_ALLOC_H

#include "qemu-common.h"

size_t qemu_fd_getpagesize(int fd);

size_t qemu_mempath_getpagesize(const char *mem_path);

void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared);

void qemu_ram_munmap(int fd, void *ptr, size_t size);

#endif