153a59a6f4
gettimeofday: avoid per-cpu data in calculation
...
Avoid per-cpu data because it is difficult to safely update other
CPUs' per-cpu data in settimeofday().
2015-10-27 19:21:50 +09:00
343bfbd30a
rename back status field
2015-10-22 20:26:50 +09:00
4e4f1208f7
delete unused member
2015-10-19 20:12:26 +09:00
a325a78866
refactoring to send signal
2015-10-15 17:10:02 +09:00
04e193de13
refactoring process structures
2015-10-13 23:04:08 +09:00
cb4f3a4d65
take into account args/envs' offset in page
...
- prepare_process_ranges_args_envs()
2015-10-01 21:08:42 +09:00
51789fcd38
initialize idle_vm for page faults
2015-10-01 21:08:35 +09:00
8dd9175411
schedule: fix null pointer dereference
2015-09-29 19:10:00 +09:00
a14768c49a
kmalloc: fix missing unlock on out-of-memory path
2015-09-18 21:26:15 +09:00
56e57775e7
clone: fix error message
2015-09-18 21:26:15 +09:00
b3b752ba41
nanosleep: use copy_from_user instead of direct access
2015-09-17 21:46:32 +09:00
7b32f2f73b
nanosleep: fix tscs_rem underflow issue
2015-09-17 21:46:26 +09:00
ea5a1a8693
nanosleep: update *rem whenever signaled
2015-09-17 21:44:49 +09:00
92f8fb2b2b
nanosleep: use copy_to_user instead of direct access
2015-09-17 21:44:49 +09:00
a3e440414d
nanosleep: cosmetic change
2015-09-17 21:44:49 +09:00
ccb7c30a05
page_fault_handler(): reenable preempt after failed PF when process is exiting
2015-09-17 10:05:32 +09:00
7dfeb8e7ce
create demand-paging mapping in case of MAP_SHARED
...
In the current McKernel, only demand-paging mappings can be shared.
Therefore, if MAP_SHARED and MAP_ANONYMOUS are specified while
anon_on_demand is disabled, mmap(2) should create a demand-paging
mapping that is entirely populated with physical pages.
2015-09-16 21:38:00 +09:00
bd5708286d
make sys_gettimeofday() use copy_to_user()
2015-09-16 21:26:32 +09:00
c8a13cf213
make gettimeofday ignore NULL parameter
2015-09-16 21:26:24 +09:00
5ad0a03d18
make gettimeofday handle second parameter (timezone)
2015-09-16 21:25:29 +09:00
3819eec03f
cosmetic changes
...
- sys_gettimeofday()
2015-09-16 21:13:12 +09:00
40b8587a8a
schedule(): sync CPU_FLAG_NEED_RESCHED flag with clone and migrate
2015-09-16 19:22:40 +09:00
e1a01803d0
disable demand paging on ANONYMOUS mappings unless anon_on_demand kernel argument is passed
2015-09-14 17:26:37 +09:00
69f4b0e1ad
gettimeofday()/nanosleep(): check arguments, return on pending signal
2015-09-14 17:05:30 +09:00
0909a5bed5
fix broken tracee context when tracee calls execve
2015-09-03 10:05:25 +09:00
9dd224385e
fix tracee freeze when SIGSEGV occurs on a tracee process
2015-09-01 17:37:56 +09:00
9ae5bcf46e
gettimeofday(): an implementation based on CPU invariant TSC support
2015-08-24 23:53:56 +02:00
c85a9b99e1
a couple of cosmetic changes to debug messages
2015-08-22 18:53:14 +09:00
5a0cd3f53f
ptrace_detach when exiting
...
refs #590
2015-08-18 18:03:09 +09:00
9fa62adfe7
execve(): stay compliant with locked context switching
2015-08-10 14:18:11 +09:00
f0ab8ec89a
sched_request_migrate(): change CPU flags atomically
2015-08-10 12:45:59 +09:00
f4cc82578d
check_need_resched(): no thread migration in IRQ context
2015-08-10 12:43:35 +09:00
9ba40dc0ff
schedule(): hold runq lock for the entire duration of context switching
...
releasing the runq lock after loading page tables but before the actual
context switch can leave execution in an inconsistent state if the current
process is descheduled from an IRQ between these two steps.
this patch holds the runq lock with IRQs disabled and makes the context
switch a single atomic operation.
2015-08-10 12:37:12 +09:00
8d6c97ea5c
schedule(): disable auto thread migration
2015-08-07 16:07:31 +09:00
215cd370a1
ap_init(): clean up AP boot kernel messages
2015-08-07 10:57:59 +09:00
0a0e2c04a0
support for dynamically toggling time sharing when CPU is oversubscribed
2015-08-07 08:51:50 +09:00
aa191b87d3
schedule(): use XSAVE/XRSTOR and swap floating point registers in context switch
2015-08-07 08:41:00 +09:00
d5c243571f
cpu_clear_and_set(): atomic CPU mask update in migration code
2015-08-06 10:49:55 +09:00
328e69a335
schedule(): do not preempt while holding spinlocks or while in offloaded syscall
2015-08-06 10:36:13 +09:00
b77755d0f7
obtain_clone_cpuid(): always start from CPU 0 and fill in cores linearly
2015-07-28 20:20:47 +09:00
d7bae14707
TEMPORARY: schedule(): move threads when core is explicitly oversubscribed
2015-07-28 20:12:58 +09:00
4e58d08f5c
schedule_timeout(): give other processes a chance during spin sleep if the CPU core is oversubscribed
2015-07-28 20:06:56 +09:00
9b1e691588
fix thread migration code (i.e., sched_setaffinity())
...
- moved migration code into the idle() process and updated schedule() to detect
when a thread has moved to another CPU, to avoid doing housekeeping
on behalf of the original one
- start CPU head from core 0
- keep track of nested interrupts
2015-07-24 20:09:17 +09:00
3988b0fc61
keep track of IRQ context and don't do thread migration there
2015-07-23 16:56:58 +09:00
54eb345847
settid(): prevent modifying tid after thread migration
2015-07-23 16:51:24 +09:00
bbe7aef95b
fix do_signal call (missing argument)
2015-07-17 10:18:43 +09:00
1ff4cf68c2
support SA_RESTART flag and restart syscall
2015-07-16 16:33:14 +09:00
1bc84d3feb
modify to copy credentials
2015-07-13 15:29:26 +09:00
f7d78c8b7d
sched_getaffinity(): return EINVAL for 0 length request (fixes LTP sched_getaffinity01)
2015-07-10 11:00:43 +09:00
7647c99cc2
do_migrate(): disable IRQ while holding migq_lock to avoid deadlocking with reschedule interrupts
2015-07-09 15:23:28 +09:00