On arm64, glibc's open() of /dev/xpmem goes through sys_openat, so
that is where the hook has to live. This commit adds xpmem_openat,
which is called from sys_openat. It also silently applies the
copy_from_user fix to sys_open.
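A minimal sketch of the dispatch, with simplified signatures; the
XPMEM_DEV_PATH constant, the is_xpmem_path() helper and the
do_openat() fallthrough are illustrative, not the literal code:

    #define XPMEM_DEV_PATH "/dev/xpmem"

    /* Hypothetical helper: copy just enough of the user path to compare
     * it, using copy_from_user as noted above. A real implementation
     * must cope with user strings shorter than the buffer. */
    static int is_xpmem_path(const char *upath)
    {
        char buf[sizeof(XPMEM_DEV_PATH)];

        if (copy_from_user(buf, upath, sizeof(buf)))
            return 0;
        buf[sizeof(buf) - 1] = '\0';
        return !strcmp(buf, XPMEM_DEV_PATH);
    }

    long sys_openat(int dirfd, const char *pathname, int flags, int mode)
    {
        /* divert opens of /dev/xpmem to the new handler */
        if (is_xpmem_path(pathname))
            return xpmem_openat(dirfd, pathname, flags, mode);

        /* otherwise fall through to the normal openat path */
        return do_openat(dirfd, pathname, flags, mode);
    }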
Change-Id: I3b4f7bf0e152c359250bb2b56910db9192390cb1
Fujitsu: POSTK_DEBUG_ARCH_DEP_46, POSTK_DEBUG_ARCH_DEP_62
This includes the following fix:
send_syscall, do_syscall: remove argument pid
Fujitsu: POSTK_TEMP_FIX_26
Refs: #1165
Change-Id: I702362c07a28f507a5e43dd751949aefa24bc8c0
We had a deadlock between:
- free_process_memory_range (take lock) -> ihk_mc_pt_free_range ->
... -> remote_flush_tlb_array_cpumask -> "/* Wait for all cores */"
and
- obj_list_lookup() under fileobj_list_lock, which disabled irqs
  and thus never ack'd the remote flush
The rework is quite big, but it removes the need for the big lock.
devobj and shmobj each needed a new, smaller lock, but those locks
are used much more locally and should not cause problems.
On the bright side, moving refcounting to the memobj level means the
refcounting implemented separately in each object type could be
removed, which simplifies the code a bit.
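As a rough illustration of what refcounting at the memobj level
means (the field and helper names are mine, and plain C11 atomics
stand in for the kernel's own atomic primitives):

    #include <stdatomic.h>

    struct memobj {
        atomic_int refcnt;                 /* shared by fileobj/devobj/shmobj */
        void (*free)(struct memobj *obj);  /* type-specific destructor */
    };

    static void memobj_ref(struct memobj *obj)
    {
        atomic_fetch_add(&obj->refcnt, 1);
    }

    static void memobj_unref(struct memobj *obj)
    {
        /* the last put triggers the type-specific teardown, so the
         * object types no longer need counters of their own */
        if (atomic_fetch_sub(&obj->refcnt, 1) == 1)
            obj->free(obj);
    }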
Change-Id: I6bc8438a98b1d8edddc91c4ac33c11b88e097ebb
Heavily inspired by the Linux kernel's dynamic debug:
* add a /sys/kernel/debug/dynamic_debug/control file
  (accessible from the Linux side at /sys/class/mcos/mcos0/sys/kernel/debug/dynamic_debug/control)
* read from the file to list debug statements (currently limited to 4k in size)
* write to the file with '[file foo ][func bar ][line [x][-[y]]] [+-]p'
  to change values (see the usage sketch below)
Side effects:
* reindented all linker scripts; there is a new __verbose section
* added the string function strpbrk
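A hedged usage sketch from the Linux side (one write() per command
is an assumption; whether several commands can be batched in one
write is not specified here):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *ctl =
            "/sys/class/mcos/mcos0/sys/kernel/debug/dynamic_debug/control";
        const char *enable = "file foo.c +p";  /* enable every statement in foo.c */
        const char *disable = "func bar -p";   /* disable statements in function bar */
        int fd = open(ctl, O_WRONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        write(fd, enable, strlen(enable));
        write(fd, disable, strlen(disable));
        close(fd);
        return 0;
    }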
Change-Id: I36d7707274dcc3ecaf200075a31a2f0f76021059
This replaces the chained list used to keep track of all memory
ranges of a process with a standard rbtree (no need for an interval
tree here because the ranges do not overlap).
Accesses that previously went directly through vm_range_list now go
through lookup_process_memory_range, even the full scans (e.g.
coredump).
The full scans will thus be a bit less efficient because the calls
to rb_next() are not inlined, but they are rare enough that the cost
is probably worth the code simplicity.
The only reference to the actual backing structure left outside of
process.c is a call to rb_erase in xpmem_free_process_memory_range.
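For illustration, a full scan over the new tree (the coredump case)
now looks roughly like this; the process_vm/vm_range member names
and the visit callback are assumptions:

    static void scan_all_ranges(struct process_vm *vm,
                                void (*visit)(struct vm_range *range))
    {
        struct rb_node *node;

        for (node = rb_first(&vm->vm_range_tree); node; node = rb_next(node)) {
            /* ranges come out in ascending start-address order, just
             * like the old vm_range_list walk produced them */
            visit(rb_entry(node, struct vm_range, vm_rb_node));
        }
    }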
v2: fix lookup_process_memory_range with small start address
v3: make vm_range_insert error out properly
Panicking does not make for easy debugging, so all error paths
are handled to just return something on error
v4: fix lookup_process_memory_range (again)
Optimistically going left was a more serious bug than just the
last iteration: we could pass right by a match and continue down
the tree if the match was not a leaf.
v5: some users actually needed leftmost match, so restore behavior
without the breakage (hopefully)
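For reference, the leftmost-match descent that v5 restores can be
sketched as below; the real lookup_process_memory_range takes a
start/end pair, and the struct/member names here are assumptions:

    static struct vm_range *lookup_leftmost(struct rb_root *root,
                                            unsigned long addr)
    {
        struct rb_node *node = root->rb_node;
        struct vm_range *match = NULL;

        while (node) {
            struct vm_range *range =
                rb_entry(node, struct vm_range, vm_rb_node);

            if (addr < range->end) {
                /* remember the candidate but keep descending left:
                 * a range further left may also end after addr, and
                 * we must not stop at a non-leaf match (the v4 bug) */
                match = range;
                node = node->rb_left;
            } else {
                node = node->rb_right;
            }
        }
        return match;
    }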