* Mapping the attached part of a segment is done at attach time instead
of make time, to work with runtimes (e.g. OpenMPI) that xpmem_make the
entire user address space
* Mapping the attached part of a segment at attach time can be turned
off by specifying xpmem_remote_on_demand as a kernel argument
* Mapping an attachment chooses the appropriate page size, i.e., the
largest one allowed by the memory range and the segment page boundary
(see the sketch below)
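The rule can be summed up by the following sketch. It only illustrates
the alignment/length check; the page-size table and the function name
choose_attach_page_size are hypothetical and not the actual xpmem code.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical page-size table, largest first. */
    static const size_t page_sizes[] = {
        1UL << 30,  /* 1 GiB */
        1UL << 21,  /* 2 MiB */
        1UL << 12,  /* 4 KiB */
    };

    /* Return the largest page size whose alignment is satisfied by both
     * the attach address and the segment address and that fits in the
     * remaining length. */
    static size_t choose_attach_page_size(uintptr_t attach_vaddr,
                                          uintptr_t segment_vaddr,
                                          size_t len)
    {
        size_t i;

        for (i = 0; i < sizeof(page_sizes) / sizeof(page_sizes[0]); i++) {
            size_t psz = page_sizes[i];

            if ((attach_vaddr & (psz - 1)) == 0 &&
                (segment_vaddr & (psz - 1)) == 0 &&
                len >= psz)
                return psz;
        }
        return page_sizes[sizeof(page_sizes) / sizeof(page_sizes[0]) - 1];
    }

    int main(void)
    {
        /* 4 MiB attachment on 1 GiB-aligned addresses: 2 MiB pages win. */
        printf("%zu\n", choose_attach_page_size(0x40000000UL, 0x80000000UL,
                                                4UL << 20));
        return 0;
    }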
Fixes: a8696d8 "xpmem: Support large page attachment"
Change-Id: I44663865204036520e5f62fe22b9134ee4629f9b
Add similar protection to clear_host_pte as to set_host_vma (see #986)
Also make the page fault handler skip taking the lock only if the munmap
happened on the same CPU id
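As a rough sketch of the intended rule (the names unmap_cpu,
vm_range_locked and handle_fault are hypothetical stand-ins, not the
actual McKernel fault path): the lock may be skipped only when the
faulting CPU is the one that performed the munmap.

    #include <stdio.h>

    static int unmap_cpu = -1;   /* CPU id that performed the munmap */
    static int vm_range_locked;  /* stand-in flag for the real lock */

    static void handle_fault(int fault_cpu)
    {
        int need_lock = (unmap_cpu != fault_cpu);

        if (need_lock)
            vm_range_locked = 1;   /* take the lock */

        /* ... walk memory ranges and resolve the fault ... */

        if (need_lock)
            vm_range_locked = 0;   /* release the lock */

        printf("fault on cpu %d: lock %staken\n",
               fault_cpu, need_lock ? "" : "not ");
    }

    int main(void)
    {
        unmap_cpu = 2;       /* the munmap happened on CPU 2 */
        handle_fault(2);     /* same CPU: safe to skip the lock */
        handle_fault(5);     /* other CPU: the lock must be taken */
        return 0;
    }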
Change-Id: I6d9e68e8f8905b20bb2ccfa72848e04fe6404ab6
This reverts commit 0d3ef65092.
Reason for revert: this fix causes a circular dependency between memory_range manipulation and TLB flushing. See #1394.
Change-Id: I4774e81ff300c199629e283e538c0a30ad0eeaae
Separate copyright bumps into a different commit.
A lot of files only had the copyright change at this point; these were
probably changes I added separately in other patches, but I split them
into a different commit instead to simplify git stats.
Change-Id: I93cf3fc1c0fa04ee743a79c3fe9768933e6bd0d2
Add "-T 0" to mcreboot.sh if you want to turn off time sharing. When
it's turned off, McKernel doesn't activate interval timer when the
length of per-CPU run-queue is larger than one.
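A minimal sketch of that decision, assuming a hypothetical helper
should_activate_interval_timer and a time_sharing flag driven by the
"-T" option (this is not the actual McKernel scheduler code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Decide whether to arm the per-CPU interval timer. */
    static bool should_activate_interval_timer(bool time_sharing, int runq_len)
    {
        if (!time_sharing)
            return false;     /* "-T 0": never arm the timer */
        return runq_len > 1;  /* arm only when threads must share the CPU */
    }

    int main(void)
    {
        printf("%d\n", should_activate_interval_timer(true, 2));   /* 1 */
        printf("%d\n", should_activate_interval_timer(false, 2));  /* 0 */
        return 0;
    }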
Change-Id: I2cedc1b30a9cd9a0f4608a32ecec0a0d58c6225e
This reverts commit b70d470e20.
That commit was landed too fast because of a mistake during the
migration from the old to the new gerrit that did not preserve the -1
vote; it needs some fixes
Change-Id: Ifc8a23e42449dfe471049270b4706e9b137e096e
The same CPU could be chosen by concurrent forks because CPU selection
and run-queue addition are not done atomically, so this fix makes the
two steps atomic (sketched below).
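The idea, as a sketch with hypothetical names (cpu_alloc_lock, runq_len,
fork_assign_cpu) rather than the actual McKernel code: the CPU choice
and the run-queue insertion happen under one lock, so two concurrent
forks cannot both pick the same idle CPU.

    #include <pthread.h>
    #include <stdio.h>

    #define NCPUS 4

    static pthread_mutex_t cpu_alloc_lock = PTHREAD_MUTEX_INITIALIZER;
    static int runq_len[NCPUS];

    static int fork_assign_cpu(void)
    {
        int cpu, best = 0;

        pthread_mutex_lock(&cpu_alloc_lock);

        /* Pick the least loaded CPU... */
        for (cpu = 1; cpu < NCPUS; cpu++)
            if (runq_len[cpu] < runq_len[best])
                best = cpu;

        /* ...and enqueue before dropping the lock, so the selection and
         * the run-queue update form one atomic step. */
        runq_len[best]++;

        pthread_mutex_unlock(&cpu_alloc_lock);
        return best;
    }

    int main(void)
    {
        printf("first child placed on cpu %d\n", fork_assign_cpu());
        printf("second child placed on cpu %d\n", fork_assign_cpu());
        return 0;
    }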
Change-Id: Ib6b75ad655789385d13207e0a47fa4717dec854a
This replaces the chained list used to keep track of all memory ranges
of a process with a standard rbtree (there is no need for an interval
tree here because ranges do not overlap).
Accesses that were previously done directly through vm_range_list have
been replaced by lookup_process_memory_range, even full list scans
(e.g. coredump).
Full scans will thus be less efficient because calls to rb_next() are
not inlined, but they are rare enough that the cost is probably worth
the code simplicity.
The only reference to the actual backing structure left outside of
process.c is a call to rb_erase in xpmem_free_process_memory_range.
v2: fix lookup_process_memory_range with small start address
v3: make vm_range_insert error out properly
Panicking does not make debugging easy, so all error paths are handled
to just return an error
v4: fix lookup_process_memory_range (again)
Optimistically going left was a more serious bug than just the last
iteration: we could pass right by a match and continue down the tree
if the match was not a leaf
v5: some users actually needed the leftmost match, so restore that
behavior without the breakage (hopefully); a sketch of the intended
lookup follows
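To show the leftmost-match rule, here is a sketch on a plain binary
search tree ordered by range start; struct range and
find_leftmost_range are hypothetical and the real code uses the kernel
rbtree, but the point is the same: when a node overlaps the query,
remember it and keep descending left so the leftmost overlapping range
is returned instead of the first one encountered.

    #include <stddef.h>
    #include <stdio.h>

    struct range {
        unsigned long start, end;     /* [start, end) */
        struct range *left, *right;   /* children, ordered by start */
    };

    static struct range *find_leftmost_range(struct range *root,
                                             unsigned long start,
                                             unsigned long end)
    {
        struct range *match = NULL;

        while (root) {
            if (root->end <= start) {
                root = root->right;    /* entirely below the query */
            } else if (root->start >= end) {
                root = root->left;     /* entirely above the query */
            } else {
                /* Overlap: remember it, but keep looking left for an
                 * earlier overlapping range. */
                match = root;
                root = root->left;
            }
        }
        return match;
    }

    int main(void)
    {
        struct range b = { 0x3000, 0x4000, NULL, NULL };
        struct range a = { 0x1000, 0x2000, NULL, &b };
        struct range *r = find_leftmost_range(&a, 0x1800, 0x3800);

        printf("leftmost match starts at %#lx\n", r ? r->start : 0UL);
        return 0;
    }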