Hugetlbfs file mappings are handled differently than regular files:

- pager_req_create will tell us the file is in a hugetlbfs
- memory is allocated upfront; we must fail if there is not enough memory
- the same memory must be handed out again if another process maps the same file

This implementation still has some hacks; in particular, the memory needs
to be freed once all mappings are gone and the file has been deleted/closed
by every process. Since we cannot easily know when the file is
closed/unlinked, the memory is instead cleaned up when all processes have
exited.

To test, install libhugetlbfs and link a program with the additional

    LDFLAGS += -B /usr/share/libhugetlbfs -Wl,--hugetlbfs-align

Then run with HUGETLB_ELFMAP=RW set; you can check that this works with

    HUGETLB_DEBUG=1 HUGETLB_VERBOSE=2

Change-Id: I327920ff06efd82e91b319b27319f41912169af1
```c
#include <unistd.h>

#define __unused __attribute__((unused))

/* Large initialized and zero-initialized arrays, both writable and
 * read-only, so the binary gets multi-MB data, BSS, and rodata segments
 * that libhugetlbfs can remap onto huge pages when the program is linked
 * with --hugetlbfs-align and run with HUGETLB_ELFMAP=RW. */
static __unused int data[1024*1024] = { 1, 0 };

static __unused int data_zero[1024*1024] = { 0 };

static __unused int const data_ro[1024*1024] = { 1, 0 };

static __unused int const data_ro_zero[1024*1024] = { 0 };

int main(int argc, char *argv[])
{
	return 0;
}
```